From mikael.vidstedt at oracle.com Tue Jul 1 00:11:30 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Mon, 30 Jun 2014 17:11:30 -0700 Subject: RFR (XXS) [URGENT]: 8048232: Fix for 8046471 breaks PPC64 build In-Reply-To: References: <53AC6DAA.2010807@oracle.com> Message-ID: <53B1FCB2.4050606@oracle.com> Looks good. Cheers, Mikael On 2014-06-30 07:28, Volker Simonis wrote: > Can somebody please review and push this small build change to fix our > ppc64 build errors. > > Thanks, > Volker > > On Fri, Jun 27, 2014 at 5:48 PM, Volker Simonis > wrote: >> On Thu, Jun 26, 2014 at 10:59 PM, Volker Simonis >> wrote: >>> >>> On Thursday, June 26, 2014, Mikael Vidstedt >>> wrote: >>>> >>>> This will work for top level builds. For Hotspot-only builds ARCH will >>>> (still) be the value of uname -m, so if you want to support Hotspot-only >>>> builds you'll probably want to do the "ifneq (,$(findstring $(ARCH), ppc))" >>>> trick to catch both "ppc" (which is what a top level build will use) and >>>> "ppc64" (for Hotspot-only). >>>> >>> Hi Mikael, >>> >>> yes you're right. >> I have to correct myself - you're nearly right:) >> >> In the term "$(findstring $(ARCH), ppc)" '$ARCH' is the needle and >> 'ppc' is the haystack, so it won't catch 'ppc64' either. I could write >> "$(findstring ppc, $(ARCH))" which would catch both 'ppc' and 'ppc64', >> but I decided to use the slightly more verbose "$(findstring $(ARCH), >> ppc ppc64)" because it seemed clearer to me. I also added a comment to >> explain the problem with the different ARCH values for top-level and >> HotSpot-only builds. Once we have the new HS build, this can hopefully >> all go away. >> >> By the way, I also had to apply this change to your ppc-modifications >> in make/linux/makefiles/defs.make. And I think that the same reasoning >> may also apply to "$(findstring $(ARCH), sparc)" which won't catch >> 'sparc64' any more after your change, but I have no Linux/SPARC box to >> test this.
You may change it accordingly at your discretion. >> >> So here's the new webrev: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ >> >> Please review and sponsor:) >> >> Thank you and best regards, >> Volker >> >>> I only tested a complete make but I indeed want to support >>> HotSpot-only builds as well. I'll change it as requested, although I won't >>> have a chance to do that before tomorrow morning (European time). >>> >>> Thank you and best regards, >>> Volker >>> >>>> Sorry for breaking it. >>>> >>>> Cheers, >>>> Mikael >>>> >>>> PS. We so need to clean up these makefiles... >>>> >>>> On 2014-06-26 07:25, Volker Simonis wrote: >>>>> Hi, >>>>> >>>>> could somebody please review and push the following tiny change: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8048232 >>>>> >>>>> It fixes the build on Linux/PPC64 after "8046471 Use >>>>> OPENJDK_TARGET_CPU_ARCH instead of legacy value for hotspot ARCH". >>>>> >>>>> Before 8046471, the top-level make passed ARCH=ppc64 to the HotSpot >>>>> make. After 8046471, it now passes ARCH=ppc. But there was one place >>>>> in make/linux/Makefile which checked for ARCH=ppc64 in order to >>>>> disable the TIERED build. This place has to be adapted to handle the >>>>> new ARCH value. >>>>> >>>>> Please push this right to http://hg.openjdk.java.net/jdk9/hs/hotspot >>>>> in order to get it into http://hg.openjdk.java.net/jdk9/dev/hotspot >>>>> together with 8046471. >>>>> >>>>> Note: this change depends on 8046471 in the hotspot AND in the >>>>> top-level directory!
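The argument-order pitfall discussed in this thread can be checked without running make at all. Below is a small shell analogue of GNU make's $(findstring FIND,IN) — the function and values are illustrative, not make itself — showing how the three variants behave:

```shell
# Analogue of GNU make's $(findstring FIND,IN): FIND is the needle,
# IN is the haystack; it expands to FIND on a substring match, else to "".
findstring() { case "$2" in *"$1"*) printf '%s' "$1";; esac; }

ARCH=ppc64   # value a HotSpot-only build would get from 'uname -m'

broken=$(findstring "$ARCH" "ppc")        # needle ppc64 not in "ppc" -> ""
reversed=$(findstring ppc "$ARCH")        # "ppc" found in "ppc64"     -> "ppc"
verbose=$(findstring "$ARCH" "ppc ppc64") # the variant in the fix     -> "ppc64"

echo "broken='$broken' reversed='$reversed' verbose='$verbose'"
```

With ARCH=ppc (the value a top-level build passes down) the verbose form still matches, which is why "ifneq (,$(findstring $(ARCH), ppc ppc64))" covers both build styles.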
>>>>> >>>>> Thank you and best regards, >>>>> Volker >>>> From coleen.phillimore at oracle.com Tue Jul 1 00:50:15 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 30 Jun 2014 20:50:15 -0400 Subject: RFR 8047737 Move array component mirror to instance of java/lang/Class In-Reply-To: References: <53ADC4D4.4030403@oracle.com> <53B0FBEF.5030607@oracle.com> <53B15B50.6070405@oracle.com> Message-ID: <53B205C7.8070804@oracle.com> On 6/30/14, 3:50 PM, Christian Thalinger wrote: > private Class(ClassLoader loader) { > // Initialize final field for classLoader. The initialization value of non-null > // prevents future JIT optimizations from assuming this final field is null. > classLoader = loader; > + componentType = null; > } > > Are we worried about the same optimization? Hi, I've decided to make them consistent and add another parameter to the Class constructor. http://cr.openjdk.java.net/~coleenp/8047737_jdk_2/ Thanks, Coleen > > + compute_optional_offset(_component_mirror_offset, > + klass_oop, vmSymbols::componentType_name(), > + vmSymbols::class_signature()); > > Is there a followup cleanup to make it non-optional? Or, are you > waiting for JPRT to be able to push hotspot and jdk changes together? > > On Jun 30, 2014, at 5:42 AM, Coleen Phillimore > > > wrote: > >> >> On 6/30/14, 1:55 AM, David Holmes wrote: >>> Hi Coleen, >>> >>> Your webrev links are to internal locations. >> >> Sorry, I cut/pasted the wrong links. They are: >> >> http://cr.openjdk.java.net/~coleenp/8047737_jdk/ >> >> http://cr.openjdk.java.net/~coleenp/8047737_hotspot/ >> >> and the full version >> >> http://cr.openjdk.java.net/~coleenp/8047737_hotspot/ >> >> Thank you for pointing this out David. 
>> >> Coleen >> >>> >>> David >>> >>> On 28/06/2014 5:24 AM, Coleen Phillimore wrote: >>>> Summary: Add field in java.lang.Class for componentType to simplify oop >>>> processing and intrinsics in JVM >>>> >>>> This is part of ongoing work to clean up oop pointers in the metadata >>>> and simplify the interface between the JDK j.l.C and the JVM. There's a >>>> performance benefit at the end of all of this if we can remove all oop >>>> pointers from metadata. mirror in Klass is the only one left after >>>> this full change. >>>> >>>> See bug https://bugs.openjdk.java.net/browse/JDK-8047737 >>>> >>>> There are a couple steps to this change because Hotspot testing is done >>>> with promoted JDKs. The first step is this webrev: >>>> >>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8047737_jdk/ >>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8047737_hotspot/ >>>> >>>> When the JDK is promoted, the code to remove >>>> ArrayKlass::_component_mirror will be changed under a new bug id. >>>> >>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8047737_hotspot_full >>>> >>>> Finally, a compatibility request and licensee notification will >>>> occur to >>>> remove the function JVM_GetComponentType. >>>> >>>> Performance testing was done that shows no difference in performance. >>>> The isArray() call is a compiler intrinsic which is now called instead >>>> of getComponentType, which was recognized as a compiler intrinsic. >>>> >>>> JDK jtreg testing, hotspot jtreg testing, hotspot NSK testing and jck8 >>>> tests were performed on both the change requested (1st one) and the >>>> full >>>> change. >>>> >>>> hotspot NSK tests were run on the hotspot-only change with a >>>> promoted JDK. >>>> >>>> Please send your comments. 
>>>> >>>> Thanks, >>>> Coleen >> > From david.holmes at oracle.com Tue Jul 1 02:17:32 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 01 Jul 2014 12:17:32 +1000 Subject: [8u40] RFR(S): Set T family feature bit on Niagara systems In-Reply-To: <53B1F5D0.1040608@oracle.com> References: <53B1F5D0.1040608@oracle.com> Message-ID: <53B21A3C.7040502@oracle.com> I can confirm that is an accurate backport of the changeset. Thanks, David On 1/07/2014 9:42 AM, Mikael Vidstedt wrote: > > Please review this 8u40 backport request. The fix was pushed to jdk9 a > couple of weeks ago and has not shown any problems. > > The change from jdk9 applies to jdk8u/hs-dev without conflicts. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8046769 > Webrev: > http://cr.openjdk.java.net/~mikael/webrevs/8046769/webrev.00/webrev/ > jdk9 change: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/2399ebcea84d > > Thanks, > Mikael > From igor.veresov at oracle.com Tue Jul 1 04:08:12 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 30 Jun 2014 21:08:12 -0700 Subject: Re: Compressed-OOP's on JVM In-Reply-To: <53B12451.6090701@redhat.com> References: <53B12451.6090701@redhat.com> Message-ID: On Jun 30, 2014, at 1:48 AM, Andrew Haley wrote: > > Like this: > > $ java -Xmx1G -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode -XX:HeapBaseMinAddress=1G -version > > heap address: 0x0000000040000000, size: 1024 MB, zero based Compressed Oops, 32-bits Oops > > Narrow klass base: 0x0000000000000000, Narrow klass shift: 0 > Compressed class space size: 1073741824 Address: 0x0000000080000000 Req Addr: 0x0000000080000000 > openjdk version "1.8.0-internal" > >> If they are encoded/decoded, what is the value of bit shifting ? > > I'm not sure what this refers to. "32-bits Oops" means zero shift. 
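To make the encode/decode question concrete, here is a hedged sketch of the general compressed-oop arithmetic. The object address is made up, and the base/shift values mirror the zero-based, zero-shift PrintCompressedOopsMode output quoted above; this is the textbook formula, not HotSpot source:

```shell
base=$((0x0000000000000000))  # zero-based mode: no heap base to add back
shift=0                       # "32-bits Oops": heap ends below 4 GB, no shift needed
oop=$((0x0000000040001238))   # hypothetical object address inside the 1 GB heap

narrow=$(( (oop - base) >> shift ))      # encode: the value stored in 32-bit oop fields
decoded=$(( base + (narrow << shift) ))  # decode: recover the raw 64-bit address

printf 'narrow=0x%x decoded=0x%x\n' "$narrow" "$decoded"

# The shift only buys reach: with shift=3 (8-byte object alignment)
# 32 bits can span 32 GB of heap instead of 4 GB.
printf 'reach with shift 3: %d GB\n' $(( (1 << 32 << 3) / (1 << 30) ))
```

With zero base and zero shift the narrow oop is numerically identical to the raw address, so encoding and decoding are free; the shift only becomes useful once the heap extends past the 4 GB boundary.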
igor From christian.thalinger at oracle.com Tue Jul 1 04:51:23 2014 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Mon, 30 Jun 2014 21:51:23 -0700 Subject: RFR 8047737 Move array component mirror to instance of java/lang/Class In-Reply-To: <53B205C7.8070804@oracle.com> References: <53ADC4D4.4030403@oracle.com> <53B0FBEF.5030607@oracle.com> <53B15B50.6070405@oracle.com> <53B205C7.8070804@oracle.com> Message-ID: On Jun 30, 2014, at 5:50 PM, Coleen Phillimore wrote: > > On 6/30/14, 3:50 PM, Christian Thalinger wrote: >> private Class(ClassLoader loader) { >> // Initialize final field for classLoader. The initialization value of non-null >> // prevents future JIT optimizations from assuming this final field is null. >> classLoader = loader; >> + componentType = null; >> } >> >> Are we worried about the same optimization? > > Hi, I've decided to make them consistent and add another parameter to the Class constructor. > > http://cr.openjdk.java.net/~coleenp/8047737_jdk_2/ Thanks. > > Thanks, > Coleen >> >> + compute_optional_offset(_component_mirror_offset, >> + klass_oop, vmSymbols::componentType_name(), >> + vmSymbols::class_signature()); >> >> Is there a followup cleanup to make it non-optional? Or, are you waiting for JPRT to be able to push hotspot and jdk changes together? >> >> On Jun 30, 2014, at 5:42 AM, Coleen Phillimore > wrote: >> >>> >>> On 6/30/14, 1:55 AM, David Holmes wrote: >>>> Hi Coleen, >>>> >>>> Your webrev links are to internal locations. >>> >>> Sorry, I cut/pasted the wrong links. They are: >>> >>> http://cr.openjdk.java.net/~coleenp/8047737_jdk/ >>> http://cr.openjdk.java.net/~coleenp/8047737_hotspot/ >>> >>> and the full version >>> >>> http://cr.openjdk.java.net/~coleenp/8047737_hotspot/ >>> >>> Thank you for pointing this out David. 
>>> >>> Coleen >>> >>>> >>>> David >>>> >>>> On 28/06/2014 5:24 AM, Coleen Phillimore wrote: >>>>> Summary: Add field in java.lang.Class for componentType to simplify oop >>>>> processing and intrinsics in JVM >>>>> >>>>> This is part of ongoing work to clean up oop pointers in the metadata >>>>> and simplify the interface between the JDK j.l.C and the JVM. There's a >>>>> performance benefit at the end of all of this if we can remove all oop >>>>> pointers from metadata. mirror in Klass is the only one left after >>>>> this full change. >>>>> >>>>> See bug https://bugs.openjdk.java.net/browse/JDK-8047737 >>>>> >>>>> There are a couple steps to this change because Hotspot testing is done >>>>> with promoted JDKs. The first step is this webrev: >>>>> >>>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8047737_jdk/ >>>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8047737_hotspot/ >>>>> >>>>> When the JDK is promoted, the code to remove >>>>> ArrayKlass::_component_mirror will be changed under a new bug id. >>>>> >>>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8047737_hotspot_full >>>>> >>>>> Finally, a compatibility request and licensee notification will occur to >>>>> remove the function JVM_GetComponentType. >>>>> >>>>> Performance testing was done that shows no difference in performance. >>>>> The isArray() call is a compiler intrinsic which is now called instead >>>>> of getComponentType, which was recognized as a compiler intrinsic. >>>>> >>>>> JDK jtreg testing, hotspot jtreg testing, hotspot NSK testing and jck8 >>>>> tests were performed on both the change requested (1st one) and the full >>>>> change. >>>>> >>>>> hotspot NSK tests were run on the hotspot-only change with a promoted JDK. >>>>> >>>>> Please send your comments. 
>>>>> >>>>> Thanks, >>>>> Coleen >>> >> > From goetz.lindenmaier at sap.com Tue Jul 1 07:29:57 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 1 Jul 2014 07:29:57 +0000 Subject: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes In-Reply-To: <53B18E18.80707@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED7FF4@DEWDFEMB12A.global.corp.sap> <53B18E18.80707@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CED8471@DEWDFEMB12A.global.corp.sap> Hi Coleen, thanks for the review! I based it on gc, as Stefan pushed my atomic.inline.hpp change into that repo. Now that change propagated to the other repos, and this one applies nicely (I just checked hs-rt). So I'd appreciate if you sponsor it! But I still need a second review I guess. Best regards, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore Sent: Montag, 30. Juni 2014 18:20 To: hotspot-dev at openjdk.java.net Subject: Re: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes Goetz, I reviewed this change and it looks great. Thank you for cleaning this up. Since it's based on hs-gc repository, I think someone from the GC group should sponsor. Otherwise, I'd be happy to. Thanks! Coleen (this was my reply to another RFR, sorry) On 6/29/14, 5:00 PM, Lindenmaier, Goetz wrote: > Hi, > > This change adds a new header os.inline.hpp including the os_.include.hpp > headers. This allows to remove around 30 os dependent include cascades, some of > them even without adding the os.inline.hpp header in that file. > Also, os.inline.hpp is added in several files that call functions from these > headers where it was missing so far. > > Some further cleanups: > OrderAccess include in adaptiveFreeList.cpp is needed because of freeChunk.hpp. 
> > The include of os.inline.hpp in thread.inline.hpp is needed because > Thread::current() uses thread() from ThreadLocalStorage, which again uses > os::thread_local_storage_at, which is implemented in a platform-dependent way. > > I moved some methods without dependencies out of the os_<os>.inline.hpp files > to os_windows.hpp/os_posix.hpp. This reduces the need for os.inline.hpp > includes a lot. > > Please review and test this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.00/ > > I compiled and tested this without precompiled headers on linuxx86_64, > linuxppc64, windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, > aixppc64 in opt, dbg and fastdbg versions. > > Thanks and best regards, > Goetz.
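For readers following along, the umbrella-header idea under review can be sketched as follows. The generated header is a toy stand-in rather than the actual webrev content; the guard and TARGET_OS_FAMILY macro names merely follow hotspot's conventions:

```shell
# Write a toy umbrella header that pulls in exactly one platform-specific
# inline header, so call sites include os.inline.hpp instead of repeating
# the per-OS #ifdef cascade at every use site.
mkdir -p umbrella_demo
cat > umbrella_demo/os.inline.hpp <<'EOF'
#ifndef SHARE_VM_RUNTIME_OS_INLINE_HPP
#define SHARE_VM_RUNTIME_OS_INLINE_HPP

#include "runtime/os.hpp"

#ifdef TARGET_OS_FAMILY_linux
# include "os_linux.inline.hpp"
#endif
#ifdef TARGET_OS_FAMILY_windows
# include "os_windows.inline.hpp"
#endif
#ifdef TARGET_OS_FAMILY_solaris
# include "os_solaris.inline.hpp"
#endif

#endif // SHARE_VM_RUNTIME_OS_INLINE_HPP
EOF
# Each platform include sits behind its own family guard; count them:
grep -c '^# include "os_' umbrella_demo/os.inline.hpp   # -> 3
```

Centralizing the cascade in one header is what lets the roughly 30 per-file include cascades mentioned above be deleted: a file that needs the inline definitions includes os.inline.hpp and nothing platform-specific.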
> >> Thanks, >> Coleen >>> + compute_optional_offset(_component_mirror_offset, >>> + klass_oop, vmSymbols::componentType_name(), >>> + vmSymbols::class_signature()); >>> >>> Is there a followup cleanup to make it non-optional? Or, are you waiting for JPRT to be able to push hotspot and jdk changes together? >>> >>> On Jun 30, 2014, at 5:42 AM, Coleen Phillimore > wrote: >>> >>>> On 6/30/14, 1:55 AM, David Holmes wrote: >>>>> Hi Coleen, >>>>> >>>>> Your webrev links are to internal locations. >>>> Sorry, I cut/pasted the wrong links. They are: >>>> >>>> http://cr.openjdk.java.net/~coleenp/8047737_jdk/ >>>> http://cr.openjdk.java.net/~coleenp/8047737_hotspot/ >>>> >>>> and the full version >>>> >>>> http://cr.openjdk.java.net/~coleenp/8047737_hotspot/ >>>> >>>> Thank you for pointing this out David. >>>> >>>> Coleen >>>> >>>>> David >>>>> >>>>> On 28/06/2014 5:24 AM, Coleen Phillimore wrote: >>>>>> Summary: Add field in java.lang.Class for componentType to simplify oop >>>>>> processing and intrinsics in JVM >>>>>> >>>>>> This is part of ongoing work to clean up oop pointers in the metadata >>>>>> and simplify the interface between the JDK j.l.C and the JVM. There's a >>>>>> performance benefit at the end of all of this if we can remove all oop >>>>>> pointers from metadata. mirror in Klass is the only one left after >>>>>> this full change. >>>>>> >>>>>> See bug https://bugs.openjdk.java.net/browse/JDK-8047737 >>>>>> >>>>>> There are a couple steps to this change because Hotspot testing is done >>>>>> with promoted JDKs. The first step is this webrev: >>>>>> >>>>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8047737_jdk/ >>>>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8047737_hotspot/ >>>>>> >>>>>> When the JDK is promoted, the code to remove >>>>>> ArrayKlass::_component_mirror will be changed under a new bug id. 
>>>>>> >>>>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8047737_hotspot_full >>>>>> >>>>>> Finally, a compatibility request and licensee notification will occur to >>>>>> remove the function JVM_GetComponentType. >>>>>> >>>>>> Performance testing was done that shows no difference in performance. >>>>>> The isArray() call is a compiler intrinsic which is now called instead >>>>>> of getComponentType, which was recognized as a compiler intrinsic. >>>>>> >>>>>> JDK jtreg testing, hotspot jtreg testing, hotspot NSK testing and jck8 >>>>>> tests were performed on both the change requested (1st one) and the full >>>>>> change. >>>>>> >>>>>> hotspot NSK tests were run on the hotspot-only change with a promoted JDK. >>>>>> >>>>>> Please send your comments. >>>>>> >>>>>> Thanks, >>>>>> Coleen From coleen.phillimore at oracle.com Tue Jul 1 11:27:10 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 01 Jul 2014 07:27:10 -0400 Subject: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CED8471@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CED7FF4@DEWDFEMB12A.global.corp.sap> <53B18E18.80707@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED8471@DEWDFEMB12A.global.corp.sap> Message-ID: <53B29B0E.4060200@oracle.com> Okay, I'll do it. Since you have a Reviewer, all you need is another reviewer (note capitalization). Thanks! Coleen On 7/1/14, 3:29 AM, Lindenmaier, Goetz wrote: > Hi Coleen, > > thanks for the review! > I based it on gc, as Stefan pushed my atomic.inline.hpp change > into that repo. Now that change propagated to the other repos, > and this one applies nicely (I just checked hs-rt). > > So I'd appreciate if you sponsor it! But I still need a second review I guess. > > Best regards, > Goetz. 
> > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore > Sent: Montag, 30. Juni 2014 18:20 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes > > > Goetz, > I reviewed this change and it looks great. Thank you for cleaning this > up. Since it's based on hs-gc repository, I think someone from the GC > group should sponsor. Otherwise, I'd be happy to. > > Thanks! > Coleen > > (this was my reply to another RFR, sorry) > > On 6/29/14, 5:00 PM, Lindenmaier, Goetz wrote: >> Hi, >> >> This change adds a new header os.inline.hpp including the os_.include.hpp >> headers. This allows to remove around 30 os dependent include cascades, some of >> them even without adding the os.inline.hpp header in that file. >> Also, os.inline.hpp is added in several files that call functions from these >> headers where it was missing so far. >> >> Some further cleanups: >> OrderAccess include in adaptiveFreeList.cpp is needed because of freeChunk.hpp. >> >> The include of os.inline.hpp in thread.inline.hpp is needed because >> Thread::current() uses thread() from ThreadLocalStorage, which again uses >> os::thread_local_storage_at which is implemented platform dependent. >> >> I moved some methods without dependencies to other .include.hpp files >> to os_windows.hpp/os_posix.hpp. This reduces the need for os.inline.hpp >> includes a lot. >> >> Please review and test this change. I please need a sponsor. >> http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.00/ >> >> I compiled and tested this without precompiled headers on linuxx86_64, >> linuxppc64, windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >> aixppc64 in opt, dbg and fastdbg versions. >> >> Thanks and best regards, >> Goetz. 
From david.holmes at oracle.com Tue Jul 1 11:41:43 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 01 Jul 2014 21:41:43 +1000 Subject: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes In-Reply-To: <53B29B0E.4060200@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED7FF4@DEWDFEMB12A.global.corp.sap> <53B18E18.80707@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED8471@DEWDFEMB12A.global.corp.sap> <53B29B0E.4060200@oracle.com> Message-ID: <53B29E77.8000500@oracle.com> Coleen, Goetz, This looks good to me too. But it needs to be checked against our closed code (I expect changes will be needed there) and we also need to check things work okay with and without precompiled header support (the solaris build will verify that IIRC). Thanks, David On 1/07/2014 9:27 PM, Coleen Phillimore wrote: > > Okay, I'll do it. Since you have a Reviewer, all you need is another > reviewer (note capitalization). > Thanks! > Coleen > > On 7/1/14, 3:29 AM, Lindenmaier, Goetz wrote: >> Hi Coleen, >> >> thanks for the review! >> I based it on gc, as Stefan pushed my atomic.inline.hpp change >> into that repo. Now that change propagated to the other repos, >> and this one applies nicely (I just checked hs-rt). >> >> So I'd appreciate if you sponsor it! But I still need a second review >> I guess. >> >> Best regards, >> Goetz. >> >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of Coleen Phillimore >> Sent: Montag, 30. Juni 2014 18:20 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR (M): 8048241: Introduce umbrella header os.inline.hpp >> and clean up includes >> >> >> Goetz, >> I reviewed this change and it looks great. Thank you for cleaning this >> up. Since it's based on hs-gc repository, I think someone from the GC >> group should sponsor. Otherwise, I'd be happy to. >> >> Thanks! 
>> Coleen >> >> (this was my reply to another RFR, sorry) >> >> On 6/29/14, 5:00 PM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> This change adds a new header os.inline.hpp including the >>> os_.include.hpp >>> headers. This allows to remove around 30 os dependent include >>> cascades, some of >>> them even without adding the os.inline.hpp header in that file. >>> Also, os.inline.hpp is added in several files that call functions >>> from these >>> headers where it was missing so far. >>> >>> Some further cleanups: >>> OrderAccess include in adaptiveFreeList.cpp is needed because of >>> freeChunk.hpp. >>> >>> The include of os.inline.hpp in thread.inline.hpp is needed because >>> Thread::current() uses thread() from ThreadLocalStorage, which again >>> uses >>> os::thread_local_storage_at which is implemented platform dependent. >>> >>> I moved some methods without dependencies to other .include.hpp files >>> to os_windows.hpp/os_posix.hpp. This reduces the need for os.inline.hpp >>> includes a lot. >>> >>> Please review and test this change. I please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.00/ >>> >>> I compiled and tested this without precompiled headers on linuxx86_64, >>> linuxppc64, windowsx86_64, solaris_sparc64, solaris_sparc32, >>> darwinx86_64, >>> aixppc64 in opt, dbg and fastdbg versions. >>> >>> Thanks and best regards, >>> Goetz. 
> From lois.foltan at oracle.com Tue Jul 1 11:42:18 2014 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 01 Jul 2014 07:42:18 -0400 Subject: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes In-Reply-To: <53B29B0E.4060200@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED7FF4@DEWDFEMB12A.global.corp.sap> <53B18E18.80707@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED8471@DEWDFEMB12A.global.corp.sap> <53B29B0E.4060200@oracle.com> Message-ID: <53B29E9A.3060404@oracle.com> Looks good, minor comment - a large majority of these files need copyright updates. Lois On 7/1/2014 7:27 AM, Coleen Phillimore wrote: > > Okay, I'll do it. Since you have a Reviewer, all you need is another > reviewer (note capitalization). > Thanks! > Coleen > > On 7/1/14, 3:29 AM, Lindenmaier, Goetz wrote: >> Hi Coleen, >> >> thanks for the review! >> I based it on gc, as Stefan pushed my atomic.inline.hpp change >> into that repo. Now that change propagated to the other repos, >> and this one applies nicely (I just checked hs-rt). >> >> So I'd appreciate if you sponsor it! But I still need a second >> review I guess. >> >> Best regards, >> Goetz. >> >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of Coleen Phillimore >> Sent: Montag, 30. Juni 2014 18:20 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR (M): 8048241: Introduce umbrella header >> os.inline.hpp and clean up includes >> >> >> Goetz, >> I reviewed this change and it looks great. Thank you for cleaning this >> up. Since it's based on hs-gc repository, I think someone from the GC >> group should sponsor. Otherwise, I'd be happy to. >> >> Thanks! >> Coleen >> >> (this was my reply to another RFR, sorry) >> >> On 6/29/14, 5:00 PM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> This change adds a new header os.inline.hpp including the >>> os_.include.hpp >>> headers. 
This allows to remove around 30 os dependent include >>> cascades, some of >>> them even without adding the os.inline.hpp header in that file. >>> Also, os.inline.hpp is added in several files that call functions >>> from these >>> headers where it was missing so far. >>> >>> Some further cleanups: >>> OrderAccess include in adaptiveFreeList.cpp is needed because of >>> freeChunk.hpp. >>> >>> The include of os.inline.hpp in thread.inline.hpp is needed because >>> Thread::current() uses thread() from ThreadLocalStorage, which again >>> uses >>> os::thread_local_storage_at which is implemented platform dependent. >>> >>> I moved some methods without dependencies to other .include.hpp files >>> to os_windows.hpp/os_posix.hpp. This reduces the need for >>> os.inline.hpp >>> includes a lot. >>> >>> Please review and test this change. I please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.00/ >>> >>> I compiled and tested this without precompiled headers on linuxx86_64, >>> linuxppc64, windowsx86_64, solaris_sparc64, solaris_sparc32, >>> darwinx86_64, >>> aixppc64 in opt, dbg and fastdbg versions. >>> >>> Thanks and best regards, >>> Goetz. > From volker.simonis at gmail.com Tue Jul 1 12:33:39 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 1 Jul 2014 14:33:39 +0200 Subject: RFR (XXS) [URGENT]: 8048232: Fix for 8046471 breaks PPC64 build In-Reply-To: <53B1FCB2.4050606@oracle.com> References: <53AC6DAA.2010807@oracle.com> <53B1FCB2.4050606@oracle.com> Message-ID: Hi Mikael, thanks for reviewing at the change. Can I please have one more reviewer/sponsor for this tiny change? Thanks, Volker On Tue, Jul 1, 2014 at 2:11 AM, Mikael Vidstedt wrote: > > Looks good. > > Cheers, > Mikael > > > On 2014-06-30 07:28, Volker Simonis wrote: >> >> Can somebody please review and push this small build change to fix our >> ppc64 build errors. 
>> >> Thanks, >> Volker >> >> On Fri, Jun 27, 2014 at 5:48 PM, Volker Simonis >> wrote: >>> >>> On Thu, Jun 26, 2014 at 10:59 PM, Volker Simonis >>> wrote: >>>> >>>> >>>> On Thursday, June 26, 2014, Mikael Vidstedt >>>> wrote: >>>>> >>>>> >>>>> This will work for top level builds. For Hotspot-only builds ARCH will >>>>> (still) be the value of uname -m, so if you want to support >>>>> Hotspot-only >>>>> builds you'll probably want to do the "ifneq (,$(findstring $(ARCH), >>>>> ppc))" >>>>> trick to catch both "ppc" (which is what a top level build will use) >>>>> and >>>>> "ppc64" (for Hotspot-only). >>>>> >>>> Hi Mikael, >>>> >>>> yes you're right. >>> >>> I have to correct myself - you're nearly right:) >>> >>> In the term "$(findstring $(ARCH), ppc)" '$ARCH' is the needle and >>> 'ppc is the stack, so it won't catch 'ppc64' either. I could write >>> "$(findstring ppc, $(ARCH))" which would catch both, 'ppc' and 'ppc64' >>> but I decided to use the slightly more verbose "$(findstring $(ARCH), >>> ppc ppc64)" because it seemed clearer to me. I also added a comment to >>> explain the problematic of the different ARCH values for top-level and >>> HotSpot-only builds. Once we have the new HS build, this can hopefully >>> all go away. >>> >>> By, the way, I also had to apply this change to your ppc-modifications >>> in make/linux/makefiles/defs.make. And I think that the same reasoning >>> may also apply to "$(findstring $(ARCH), sparc)" which won't catch >>> 'sparc64' any more after your change but I have no Linux/SPARC box to >>> test this. You may change it accordingly at your discretion. >>> >>> So here's the new webrev: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ >>> >>> Please review and sponsor:) >>> >>> Thank you and best regards, >>> Volker >>> >>>> I only tested a complete make but I indeed want to support >>>> HotSpot only makes as well. 
I'll change it as requested, although I won't >>>> have a chance to do that before tomorrow morning (European time). >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>>> Sorry for breaking it. >>>>> >>>>> Cheers, >>>>> Mikael >>>>> >>>>> PS. We so need to clean up these makefiles... >>>>> >>>>> On 2014-06-26 07:25, Volker Simonis wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> could somebody please review and push the following tiny change: >>>>>> >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232/ >>>>>> https://bugs.openjdk.java.net/browse/JDK-8048232 >>>>>> >>>>>> It fixes the build on Linux/PPC64 after "8046471 Use >>>>>> OPENJDK_TARGET_CPU_ARCH instead of legacy value for hotspot ARCH". >>>>>> >>>>>> Before 8046471, the top-level make passed ARCH=ppc64 to the HotSpot >>>>>> make. After 8046471, it now passes ARCH=ppc. But there was one place >>>>>> in make/linux/Makefile which checked for ARCH=ppc64 in order to >>>>>> disable the TIERED build. This place has to be adapted to handle the >>>>>> new ARCH value. >>>>>> >>>>>> Please push this right to http://hg.openjdk.java.net/jdk9/hs/hotspot >>>>>> in order to get it into http://hg.openjdk.java.net/jdk9/dev/hotspot >>>>>> together with 8046471. >>>>>> >>>>>> Note: this change depends on 8046471 in the hotspot AND in the >>>>>> top-level directory! 
>>>>>> >>>>>> Thank you and best regards, >>>>>> Volker >>>>> >>>>> > From goetz.lindenmaier at sap.com Tue Jul 1 12:43:09 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 1 Jul 2014 12:43:09 +0000 Subject: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes In-Reply-To: <53B29E9A.3060404@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED7FF4@DEWDFEMB12A.global.corp.sap> <53B18E18.80707@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED8471@DEWDFEMB12A.global.corp.sap> <53B29B0E.4060200@oracle.com> <53B29E9A.3060404@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CED8582@DEWDFEMB12A.global.corp.sap> Hi Lois, you are right, I'll fix the copyrights. Thanks for the review! Goetz. -----Original Message----- From: Lois Foltan [mailto:lois.foltan at oracle.com] Sent: Dienstag, 1. Juli 2014 13:42 To: Coleen Phillimore Cc: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes Looks good, minor comment - a large majority of these files need copyright updates. Lois On 7/1/2014 7:27 AM, Coleen Phillimore wrote: > > Okay, I'll do it. Since you have a Reviewer, all you need is another > reviewer (note capitalization). > Thanks! > Coleen > > On 7/1/14, 3:29 AM, Lindenmaier, Goetz wrote: >> Hi Coleen, >> >> thanks for the review! >> I based it on gc, as Stefan pushed my atomic.inline.hpp change >> into that repo. Now that change propagated to the other repos, >> and this one applies nicely (I just checked hs-rt). >> >> So I'd appreciate if you sponsor it! But I still need a second >> review I guess. >> >> Best regards, >> Goetz. >> >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of Coleen Phillimore >> Sent: Montag, 30. 
Juni 2014 18:20 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR (M): 8048241: Introduce umbrella header >> os.inline.hpp and clean up includes >> >> >> Goetz, >> I reviewed this change and it looks great. Thank you for cleaning this >> up. Since it's based on hs-gc repository, I think someone from the GC >> group should sponsor. Otherwise, I'd be happy to. >> >> Thanks! >> Coleen >> >> (this was my reply to another RFR, sorry) >> >> On 6/29/14, 5:00 PM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> This change adds a new header os.inline.hpp including the >>> os_.include.hpp >>> headers. This allows to remove around 30 os dependent include >>> cascades, some of >>> them even without adding the os.inline.hpp header in that file. >>> Also, os.inline.hpp is added in several files that call functions >>> from these >>> headers where it was missing so far. >>> >>> Some further cleanups: >>> OrderAccess include in adaptiveFreeList.cpp is needed because of >>> freeChunk.hpp. >>> >>> The include of os.inline.hpp in thread.inline.hpp is needed because >>> Thread::current() uses thread() from ThreadLocalStorage, which again >>> uses >>> os::thread_local_storage_at which is implemented platform dependent. >>> >>> I moved some methods without dependencies to other .include.hpp files >>> to os_windows.hpp/os_posix.hpp. This reduces the need for >>> os.inline.hpp >>> includes a lot. >>> >>> Please review and test this change. I please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.00/ >>> >>> I compiled and tested this without precompiled headers on linuxx86_64, >>> linuxppc64, windowsx86_64, solaris_sparc64, solaris_sparc32, >>> darwinx86_64, >>> aixppc64 in opt, dbg and fastdbg versions. >>> >>> Thanks and best regards, >>> Goetz. 
> From goetz.lindenmaier at sap.com Tue Jul 1 12:46:01 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 1 Jul 2014 12:46:01 +0000 Subject: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes In-Reply-To: <53B29E77.8000500@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED7FF4@DEWDFEMB12A.global.corp.sap> <53B18E18.80707@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED8471@DEWDFEMB12A.global.corp.sap> <53B29B0E.4060200@oracle.com> <53B29E77.8000500@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CED8597@DEWDFEMB12A.global.corp.sap> Hi David, I tested without precompiled headers, also on solaris. There might be differences due to other compiler versions or such, but I consider this very unlikely. The closed code should be checked, I guess. Thanks for the review! Goetz. -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Dienstag, 1. Juli 2014 13:42 To: Coleen Phillimore; Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes Coleen, Goetz, This looks good to me too. But it needs to be checked against our closed code (I expect changes will be needed there) and we also need to check things work okay with and without precompiled header support (the solaris build will verify that IIRC). Thanks, David On 1/07/2014 9:27 PM, Coleen Phillimore wrote: > > Okay, I'll do it. Since you have a Reviewer, all you need is another > reviewer (note capitalization). > Thanks! > Coleen > > On 7/1/14, 3:29 AM, Lindenmaier, Goetz wrote: >> Hi Coleen, >> >> thanks for the review! >> I based it on gc, as Stefan pushed my atomic.inline.hpp change >> into that repo. Now that change propagated to the other repos, >> and this one applies nicely (I just checked hs-rt). >> >> So I'd appreciate if you sponsor it! But I still need a second review >> I guess. >> >> Best regards, >> Goetz. 
>> >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of Coleen Phillimore >> Sent: Montag, 30. Juni 2014 18:20 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR (M): 8048241: Introduce umbrella header os.inline.hpp >> and clean up includes >> >> >> Goetz, >> I reviewed this change and it looks great. Thank you for cleaning this >> up. Since it's based on hs-gc repository, I think someone from the GC >> group should sponsor. Otherwise, I'd be happy to. >> >> Thanks! >> Coleen >> >> (this was my reply to another RFR, sorry) >> >> On 6/29/14, 5:00 PM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> This change adds a new header os.inline.hpp including the >>> os_.include.hpp >>> headers. This allows to remove around 30 os dependent include >>> cascades, some of >>> them even without adding the os.inline.hpp header in that file. >>> Also, os.inline.hpp is added in several files that call functions >>> from these >>> headers where it was missing so far. >>> >>> Some further cleanups: >>> OrderAccess include in adaptiveFreeList.cpp is needed because of >>> freeChunk.hpp. >>> >>> The include of os.inline.hpp in thread.inline.hpp is needed because >>> Thread::current() uses thread() from ThreadLocalStorage, which again >>> uses >>> os::thread_local_storage_at which is implemented platform dependent. >>> >>> I moved some methods without dependencies to other .include.hpp files >>> to os_windows.hpp/os_posix.hpp. This reduces the need for os.inline.hpp >>> includes a lot. >>> >>> Please review and test this change. I please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.00/ >>> >>> I compiled and tested this without precompiled headers on linuxx86_64, >>> linuxppc64, windowsx86_64, solaris_sparc64, solaris_sparc32, >>> darwinx86_64, >>> aixppc64 in opt, dbg and fastdbg versions. >>> >>> Thanks and best regards, >>> Goetz. 
> From goetz.lindenmaier at sap.com Tue Jul 1 12:54:11 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 1 Jul 2014 12:54:11 +0000 Subject: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes In-Reply-To: <53B29B0E.4060200@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED7FF4@DEWDFEMB12A.global.corp.sap> <53B18E18.80707@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED8471@DEWDFEMB12A.global.corp.sap> <53B29B0E.4060200@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CED85A7@DEWDFEMB12A.global.corp.sap> Thanks for sponsoring, Coleen! I updated the webrev with the copyright fixes. http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.00/ I also added three [rR]eviewers. Thanks for the hint that only one 'R' is needed! Best regards, Goetz. -----Original Message----- From: Coleen Phillimore [mailto:coleen.phillimore at oracle.com] Sent: Dienstag, 1. Juli 2014 13:27 To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes Okay, I'll do it. Since you have a Reviewer, all you need is another reviewer (note capitalization). Thanks! Coleen On 7/1/14, 3:29 AM, Lindenmaier, Goetz wrote: > Hi Coleen, > > thanks for the review! > I based it on gc, as Stefan pushed my atomic.inline.hpp change > into that repo. Now that change propagated to the other repos, > and this one applies nicely (I just checked hs-rt). > > So I'd appreciate if you sponsor it! But I still need a second review I guess. > > Best regards, > Goetz. > > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore > Sent: Montag, 30. Juni 2014 18:20 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes > > > Goetz, > I reviewed this change and it looks great. Thank you for cleaning this > up.
Since it's based on hs-gc repository, I think someone from the GC > group should sponsor. Otherwise, I'd be happy to. > > Thanks! > Coleen > > (this was my reply to another RFR, sorry) > > On 6/29/14, 5:00 PM, Lindenmaier, Goetz wrote: >> Hi, >> >> This change adds a new header os.inline.hpp including the os_.include.hpp >> headers. This allows to remove around 30 os dependent include cascades, some of >> them even without adding the os.inline.hpp header in that file. >> Also, os.inline.hpp is added in several files that call functions from these >> headers where it was missing so far. >> >> Some further cleanups: >> OrderAccess include in adaptiveFreeList.cpp is needed because of freeChunk.hpp. >> >> The include of os.inline.hpp in thread.inline.hpp is needed because >> Thread::current() uses thread() from ThreadLocalStorage, which again uses >> os::thread_local_storage_at which is implemented platform dependent. >> >> I moved some methods without dependencies to other .include.hpp files >> to os_windows.hpp/os_posix.hpp. This reduces the need for os.inline.hpp >> includes a lot. >> >> Please review and test this change. I please need a sponsor. >> http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.00/ >> >> I compiled and tested this without precompiled headers on linuxx86_64, >> linuxppc64, windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >> aixppc64 in opt, dbg and fastdbg versions. >> >> Thanks and best regards, >> Goetz. From stefan.karlsson at oracle.com Tue Jul 1 13:44:48 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 01 Jul 2014 15:44:48 +0200 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle Message-ID: <53B2BB50.1080606@oracle.com> Hi all, Please, review this patch to enable unloading of classes and other metadata after a G1 concurrent cycle. 
http://cr.openjdk.java.net/~stefank/8048248/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8048248 The patch includes the following changes: 1) Tracing through alive Klasses and CLDs during concurrent mark, instead of marking all of them during the initial mark pause. 2) Making HeapRegions walkable in the presence of unparseable objects due to their classes being unloaded. 3) The process roots code has been changed to allow G1's combined initial mark and scavenge. 4) The CodeBlobClosures have been refactored to distinguish the marking variant from the oop updating variants. 5) Calls to the G1 pre-barrier have been added to some places, such as the StringTable, to guard against object resurrection, similar to how j.l.ref.Reference#get is treated with a read barrier. 6) Parallelizing the cleaning of metadata and compiled methods during the remark pause. A number of patches to prepare for this RFE have already been pushed to JDK 9: 8047362: Add a version of CompiledIC_at that doesn't create a new RelocIterator 8047326: Consolidate all CompiledIC::CompiledIC implementations and move it to compiledIC.cpp 8047323: Remove unused _copy_metadata_obj_cl in G1CopyingKeepAliveClosure 8047373: Clean the ExceptionCache in one pass 8046670: Make CMS metadata aware closures applicable for other collectors 8035746: Add missing Klass::oop_is_instanceClassLoader() function 8035648: Don't use Handle in java_lang_String::print 8035412: Cleanup ClassLoaderData::is_alive 8035393: Use CLDClosure instead of CLDToOopClosure in frame::oops_interpreted_do 8034764: Use process_strong_roots to adjust the StringTable 8034761: Remove the do_code_roots parameter from process_strong_roots 8033923: Use BufferingOopClosure for G1 code root scanning 8033764: Remove the usage of StarTask from BufferingOopClosure 8012687: Remove unused is_root checks and closures 8047818: G1 HeapRegions can no longer be ContiguousSpaces 8048214: Linker error when compiling G1SATBCardTableModRefBS after include order
changes 8047821: G1 Does not use the save_marks functionality as intended 8047820: G1 Block offset table does not need to support generic Space classes 8047819: G1 HeapRegionDCTOC does not need to inherit ContiguousSpaceDCTOC 8038405: Clean up some virtual fucntions in Space class hierarchy 8038412: Move object_iterate_careful down from Space to ContigousSpace and CFLSpace 8038404: Move object_iterate_mem from Space to CMS since it is only ever used by CMS 8038399: Remove dead oop_iterate MemRegion variants from SharedHeap, Generation and Space classe 8037958: ConcurrentMark::cleanup leaks BitMaps if VerifyDuringGC is enabled 8032379: Remove the is_scavenging flag to process_strong_roots Testing: We've been running Kitchensink, gc-test-suite, internal nightly testing and test lists, and CRM FA benchmarks. thanks, StefanK & Mikael Gerdin From erik.helin at oracle.com Tue Jul 1 14:58:30 2014 From: erik.helin at oracle.com (Erik Helin) Date: Tue, 01 Jul 2014 16:58:30 +0200 Subject: RFR: 8048895: Back out JDK-8027915 Message-ID: <3697438.WVDdWcA3Vi@ehelin-laptop> Hi all, the change JDK-8027915, "TestParallelHeapSizeFlags fails with unexpected heap size", did not work as anticipated because of the interaction with os::commit_memory on Solaris. os::commit_memory takes a size_t `alignment_hint` as parameter. This parameter is used differently on different operating systems: it is ignored on all operating systems except for Solaris. For Solaris, the hint is used for selecting the large page size. If the alignment_hint is smaller than the largest page size available, the hint is assumed to be the *exact* page size. This problem had previously been hidden because various alignments of heap sizes and generation sizes always made sure that we ended up with an alignment_hint of 4 MB by default, which also happens to be a valid page size.
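[Editor's note] The Solaris hint handling described above can be sketched as a simplified shell model. The page sizes and the selection logic here are assumptions for illustration only; the real code is HotSpot's Solaris-specific os::commit_memory path:

```shell
# Hypothetical set of supported large page sizes, in bytes (256 MB and 4 MB).
page_sizes="268435456 4194304"
largest=268435456

# Model of the described behavior: a hint that is at least the largest page
# size lets a real page size be picked; a smaller hint is taken as the
# *exact* page size and therefore only works if it matches a supported size.
hint_ok() {
  hint=$1
  if [ "$hint" -ge "$largest" ]; then
    return 0
  fi
  for ps in $page_sizes; do
    [ "$hint" -eq "$ps" ] && return 0
  done
  return 1
}

hint_ok 4194304 && four_mb=ok || four_mb=bad   # 4 MB: a supported page size
hint_ok 2097152 && two_mb=ok  || two_mb=bad    # 2 MB: treated as exact, fails
echo "4MB=$four_mb 2MB=$two_mb"
# prints: 4MB=ok 2MB=bad
```

In this model the bug stays hidden exactly as described: as long as the heap and generation alignments always produce a 4 MB hint, the hint happens to match a supported page size.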
The bug can be shown to exist prior to JDK-8027915, for example by running: java -Xms32m -Xmx128m -XX:LargePageSizeInBytes=256m -version This patch is the anti-delta of JDK-8027915 and it applied cleanly. Webrev: http://cr.openjdk.java.net/~ehelin/8048895/webrev.00/ Thanks, Erik From mikael.gerdin at oracle.com Tue Jul 1 15:26:49 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Tue, 01 Jul 2014 17:26:49 +0200 Subject: RFR: 8048895: Back out JDK-8027915 In-Reply-To: <3697438.WVDdWcA3Vi@ehelin-laptop> References: <3697438.WVDdWcA3Vi@ehelin-laptop> Message-ID: <4709002.Xpd0TJjmVj@mgerdin03> Hi, On Tuesday 01 July 2014 16.58.30 Erik Helin wrote: > Hi all, > > the change JDK-8027915, "TestParallelHeapSizeFlags fails with unexpected > heap size", did not work as anticipated because of the interaction with > os::commit_memory on Solaris. > > os::commit_memory takes a size_t `alignment_hint` as parameter. This > parameter is used differently on different operating systems: it is > ignored on all operating system except for Solaris. For Solaris, the > hint is used for selecting the large page size. If the alignment_hint is > smaller than the largest page size available, the hint is assumed to be > the *exact* page size. > > This problem had previously been hidden because due to various > alignments of heap sizes and generation sizes always made sure that we > ended up with an alignment_hint of 4 MB by default, which also happens to > be a valid page size. > > The bug can be shown to exist prior to JDK-8027915, for example by > running: > java -Xms32m -Xmx128m -XX:LargePageSizeInBytes=256m -version > > This patch is the anti-delta of JDK-8027915 and it applied cleanly. > > Webrev: > http://cr.openjdk.java.net/~ehelin/8048895/webrev.00/ The backout looks good. 
/Mikael > > Thanks, > Erik From joe.darcy at oracle.com Tue Jul 1 16:42:59 2014 From: joe.darcy at oracle.com (Joe Darcy) Date: Tue, 01 Jul 2014 09:42:59 -0700 Subject: JDK 9 RFR of JDK-8048620: Remove unneeded/obsolete -source/-target options in hotspot tests In-Reply-To: <53AE04E1.4000806@oracle.com> References: <53AE04E1.4000806@oracle.com> Message-ID: <53B2E513.5020608@oracle.com> *ping* -Joe On 06/27/2014 04:57 PM, Joe Darcy wrote: > Hello, > > As a consequence of a policy for retiring old javac -source and > -target options (JEP 182 [1]), in JDK 9, only -source/-target of 6/1.6 > and higher will be supported [2]. This work is being tracked under bug > > JDK-8011044: Remove support for 1.5 and earlier source and target > options > https://bugs.openjdk.java.net/browse/JDK-8011044 > > Many subtasks related to this are already complete, including updating > regression tests in the jdk and langtools repos. It has come to my > attention that the hotspot repo also has a few tests that use -source > and -target that should be updated. Please review the changes: > > http://cr.openjdk.java.net/~darcy/8048620.0/ > > Full patch below. From what I could tell looking at the bug and tests, > these tests are not sensitive to the class file version so they > shouldn't need to use an explicit -source or -target option and should > just accept the JDK-default. > > There is one additional test which uses -source/-target, > test/compiler/6932496/Test6932496.java. This test *does* appear > sensitive to class file version (no jsr / ret instruction in target 6 > or higher) so I have not modified this test. If the test is not > actually sensitive to class file version, it can be updated like the > others. If it is sensitive and if testing this is still relevant, the > class file in question will need to be generated in some other way, > such as by using ASM.
> > Regardless of the outcome of the technical discussion around > Test6932496.java, I'd appreciate if a "hotspot buddy" could shepherd > this fix through the HotSpot processes. > > Thanks, > > -Joe > > [1] http://openjdk.java.net/jeps/182 > > [2] > http://mail.openjdk.java.net/pipermail/jdk9-dev/2014-January/000328.html > > --- old/test/compiler/6775880/Test.java 2014-06-27 > 16:24:25.000000000 -0700 > +++ new/test/compiler/6775880/Test.java 2014-06-27 > 16:24:25.000000000 -0700 > @@ -26,7 +26,6 @@ > * @test > * @bug 6775880 > * @summary EA +DeoptimizeALot: > assert(mon_info->owner()->is_locked(),"object must be locked now") > - * @compile -source 1.4 -target 1.4 Test.java > * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -Xbatch > -XX:+DoEscapeAnalysis -XX:+DeoptimizeALot > -XX:CompileCommand=exclude,java.lang.AbstractStringBuilder::append Test > */ > > --- old/test/runtime/6626217/Test6626217.sh 2014-06-27 > 16:24:26.000000000 -0700 > +++ new/test/runtime/6626217/Test6626217.sh 2014-06-27 > 16:24:26.000000000 -0700 > @@ -54,7 +54,7 @@ > > # Compile all the usual suspects, including the default 'many_loader' > ${CP} many_loader1.java.foo many_loader.java > -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint *.java > +${JAVAC} ${TESTJAVACOPTS} -Xlint *.java > > # Rename the class files, so the custom loader (and not the system > loader) will find it > ${MV} from_loader2.class from_loader2.impl2 > @@ -62,7 +62,7 @@ > # Compile the next version of 'many_loader' > ${MV} many_loader.class many_loader.impl1 > ${CP} many_loader2.java.foo many_loader.java > -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint > many_loader.java > +${JAVAC} ${TESTJAVACOPTS} -Xlint many_loader.java > > # Rename the class file, so the custom loader (and not the system > loader) will find it > ${MV} many_loader.class many_loader.impl2 > --- old/test/runtime/8003720/Test8003720.java 2014-06-27 > 16:24:26.000000000 -0700 > +++ new/test/runtime/8003720/Test8003720.java 
2014-06-27 > 16:24:26.000000000 -0700 > @@ -26,7 +26,7 @@ > * @test > * @bug 8003720 > * @summary Method in interpreter stack frame can be deallocated > - * @compile -XDignore.symbol.file -source 1.7 -target 1.7 Victim.java > + * @compile -XDignore.symbol.file Victim.java > * @run main/othervm -Xverify:all -Xint Test8003720 > */ > > From mikael.vidstedt at oracle.com Tue Jul 1 17:24:11 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Tue, 01 Jul 2014 10:24:11 -0700 Subject: [8u40] RFR(S): Set T family feature bit on Niagara systems In-Reply-To: <53B21A3C.7040502@oracle.com> References: <53B1F5D0.1040608@oracle.com> <53B21A3C.7040502@oracle.com> Message-ID: <53B2EEBB.8070600@oracle.com> Thanks David! Cheers, Mikael On 2014-06-30 19:17, David Holmes wrote: > I can confirm that is an accurate backport of the changeset. > > Thanks, > David > > On 1/07/2014 9:42 AM, Mikael Vidstedt wrote: >> >> Please review this 8u40 backport request. The fix was pushed to jdk9 a >> couple of weeks ago and has not shown any problems. >> >> The change from jdk9 applies to jdk8u/hs-dev without conflicts. 
>> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8046769 >> Webrev: >> http://cr.openjdk.java.net/~mikael/webrevs/8046769/webrev.00/webrev/ >> jdk9 change: >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/2399ebcea84d >> >> Thanks, >> Mikael >> From mandy.chung at oracle.com Wed Jul 2 06:26:43 2014 From: mandy.chung at oracle.com (Mandy Chung) Date: Tue, 01 Jul 2014 23:26:43 -0700 Subject: RFR 8047737 Move array component mirror to instance of java/lang/Class In-Reply-To: References: <53ADC4D4.4030403@oracle.com> <53B0FBEF.5030607@oracle.com> <53B15B50.6070405@oracle.com> <53B205C7.8070804@oracle.com> Message-ID: <53B3A623.1050804@oracle.com> On 6/30/2014 9:51 PM, Christian Thalinger wrote: > On Jun 30, 2014, at 5:50 PM, Coleen Phillimore wrote: > > > On 6/30/14, 3:50 PM, Christian Thalinger wrote: >>> private Class(ClassLoader loader) { >>> // Initialize final field for classLoader. The initialization value of non-null >>> // prevents future JIT optimizations from assuming this final field is null. >>> classLoader = loader; >>> + componentType = null; >>> } >>> >>> Are we worried about the same optimization? >> Hi, I've decided to make them consistent and add another parameter to the Class constructor. >> >> http://cr.openjdk.java.net/~coleenp/8047737_jdk_2/ The jdk change looks okay while I am beginning to think whether we really want to keep expanding this constructor to deal with this future JIT optimization (you will be moving more fields out from the VM to java.lang.Class). There are places in JDK initializing the final fields to null while the final field value is overridden via native/VM - e.g. System.in, System.out, etc. I would prefer reverting the classLoader constructor change to expanding the constructor for any new field being added. Handle it (and other places in JDK) when such JIT optimization comes. 
Mandy From aph at redhat.com Wed Jul 2 08:11:42 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 02 Jul 2014 09:11:42 +0100 Subject: Please look at my JEP In-Reply-To: <53A2F354.40606@redhat.com> References: <53A2F354.40606@redhat.com> Message-ID: <53B3BEBE.6060103@redhat.com> Hi everybody, Please can someone review my JEP? It's very simple, and until we can get things moving this is blocking a significant contribution to OpenJDK. https://bugs.openjdk.java.net/browse/JDK-8044552 Thanks, Andrew. On 19/06/14 15:27, Andrew Haley wrote: > The JEP is here: > > https://bugs.openjdk.java.net/browse/JDK-8044552 > > As you may know, we've been working on this port for some time. > It is now at the stage where it may be considered for inclusion > in OpenJDK. It passes all its tests, and although there is still > some tidying up to do, I think we should move to the next stage. > > Thanks, > Andrew. > From stefan.karlsson at oracle.com Wed Jul 2 08:25:39 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 02 Jul 2014 10:25:39 +0200 Subject: RFR: 8048895: Back out JDK-8027915 In-Reply-To: <3697438.WVDdWcA3Vi@ehelin-laptop> References: <3697438.WVDdWcA3Vi@ehelin-laptop> Message-ID: <53B3C203.9090808@oracle.com> On 2014-07-01 16:58, Erik Helin wrote: > Hi all, > > the change JDK-8027915, "TestParallelHeapSizeFlags fails with unexpected > heap size", did not work as anticipated because of the interaction with > os::commit_memory on Solaris. > > os::commit_memory takes a size_t `alignment_hint` as parameter. This > parameter is used differently on different operating systems: it is > ignored on all operating system except for Solaris. For Solaris, the > hint is used for selecting the large page size. If the alignment_hint is > smaller than the largest page size available, the hint is assumed to be > the *exact* page size. 
> > This problem had previously been hidden because due to various > alignments of heap sizes and generation sizes always made sure that we > ended up with an alignment_hint of 4 MB by default, which also happens to > be a valid page size. > > The bug can be shown to exist prior to JDK-8027915, for example by > running: > java -Xms32m -Xmx128m -XX:LargePageSizeInBytes=256m -version > > This patch is the anti-delta of JDK-8027915 and it applied cleanly. > > Webrev: > http://cr.openjdk.java.net/~ehelin/8048895/webrev.00/ combinediff 8027915.patch 8048895.patch returns nothing, so this backout looks good. StefanK > > Thanks, > Erik > From erik.helin at oracle.com Wed Jul 2 08:43:15 2014 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 02 Jul 2014 10:43:15 +0200 Subject: RFR: 8048895: Back out JDK-8027915 In-Reply-To: <4709002.Xpd0TJjmVj@mgerdin03> References: <3697438.WVDdWcA3Vi@ehelin-laptop> <4709002.Xpd0TJjmVj@mgerdin03> Message-ID: <3935874.qjNEN4tAef@ehelin-laptop> On Tuesday 01 July 2014 17:26:49 PM Mikael Gerdin wrote: > Hi, > > On Tuesday 01 July 2014 16.58.30 Erik Helin wrote: > > Hi all, > > > > the change JDK-8027915, "TestParallelHeapSizeFlags fails with unexpected > > heap size", did not work as anticipated because of the interaction with > > os::commit_memory on Solaris. > > > > os::commit_memory takes a size_t `alignment_hint` as parameter. This > > parameter is used differently on different operating systems: it is > > ignored on all operating system except for Solaris. For Solaris, the > > hint is used for selecting the large page size. If the alignment_hint is > > smaller than the largest page size available, the hint is assumed to be > > the *exact* page size. > > > > This problem had previously been hidden because due to various > > alignments of heap sizes and generation sizes always made sure that we > > ended up with an alignment_hint of 4 MB by default, which also happens to > > be a valid page size. 
> > > > The bug can be shown to exist prior to JDK-8027915, for example by > > running: > > java -Xms32m -Xmx128m -XX:LargePageSizeInBytes=256m -version > > > > This patch is the anti-delta of JDK-8027915 and it applied cleanly. > > > > Webrev: > > http://cr.openjdk.java.net/~ehelin/8048895/webrev.00/ > > The backout looks good. Thanks! Erik > /Mikael > > > Thanks, > > Erik From erik.helin at oracle.com Wed Jul 2 08:43:29 2014 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 02 Jul 2014 10:43:29 +0200 Subject: RFR: 8048895: Back out JDK-8027915 In-Reply-To: <53B3C203.9090808@oracle.com> References: <3697438.WVDdWcA3Vi@ehelin-laptop> <53B3C203.9090808@oracle.com> Message-ID: <3009333.uGgz9QOp4B@ehelin-laptop> On Wednesday 02 July 2014 10:25:39 AM Stefan Karlsson wrote: > On 2014-07-01 16:58, Erik Helin wrote: > > Hi all, > > > > the change JDK-8027915, "TestParallelHeapSizeFlags fails with unexpected > > heap size", did not work as anticipated because of the interaction with > > os::commit_memory on Solaris. > > > > os::commit_memory takes a size_t `alignment_hint` as parameter. This > > parameter is used differently on different operating systems: it is > > ignored on all operating system except for Solaris. For Solaris, the > > hint is used for selecting the large page size. If the alignment_hint is > > smaller than the largest page size available, the hint is assumed to be > > the *exact* page size. > > > > This problem had previously been hidden because due to various > > alignments of heap sizes and generation sizes always made sure that we > > ended up with an alignment_hint of 4 MB by default, which also happens to > > be a valid page size. > > > > The bug can be shown to exist prior to JDK-8027915, for example by > > running: > > java -Xms32m -Xmx128m -XX:LargePageSizeInBytes=256m -version > > > > This patch is the anti-delta of JDK-8027915 and it applied cleanly. 
> > > > Webrev: > > http://cr.openjdk.java.net/~ehelin/8048895/webrev.00/ > > combinediff 8027915.patch 8048895.patch returns nothing, so this backout > looks good. Thanks! Erik > StefanK > > > Thanks, > > Erik From roland.westrelin at oracle.com Wed Jul 2 09:33:46 2014 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Wed, 2 Jul 2014 11:33:46 +0200 Subject: RFR: 8048554: C2: Pass AES lookup tables for platforms with no HW crypto acceleration In-Reply-To: <6F653FA9-7E55-49F0-AC19-EBFDA9E1A6DE@oracle.com> References: <6F653FA9-7E55-49F0-AC19-EBFDA9E1A6DE@oracle.com> Message-ID: > http://cr.openjdk.java.net/~vladidan/8048554/ Can you move: 5879 BasicType bt = field->layout_type(); in the if: 5881 if (!is_static) { ? typo (require): 5961 // Some platforms with no AES HW acceleration might requre the lookup tables Otherwise, looks good to me. Roland. From david.holmes at oracle.com Wed Jul 2 10:12:29 2014 From: david.holmes at oracle.com (David Holmes) Date: Wed, 02 Jul 2014 20:12:29 +1000 Subject: JDK 9 RFR of JDK-8048620: Remove unneeded/obsolete -source/-target options in hotspot tests In-Reply-To: <53B2E513.5020608@oracle.com> References: <53AE04E1.4000806@oracle.com> <53B2E513.5020608@oracle.com> Message-ID: <53B3DB0D.8070700@oracle.com> Hi Joe, I can provide you one Review. It seems to me the -source/-target were being set to ensure a minimum version (probably only -target was needed but -source had to come along for the ride), so removing them seems fine. Note hotspot protocol requires copyright updates at the time of checkin - thanks. Also you will need to create the changeset against the group repo for whomever your sponsor is (though your existing patch from the webrev will probably apply cleanly). A second reviewer (small R) is needed. If they don't sponsor it I will.
Cheers, David On 2/07/2014 2:42 AM, Joe Darcy wrote: > *ping* > > -Joe > > On 06/27/2014 04:57 PM, Joe Darcy wrote: >> Hello, >> >> As a consequence of a policy for retiring old javac -source and >> -target options (JEP 182 [1]), in JDK 9, only -source/-target of 6/1.6 >> and higher will be supported [2]. This work is being tracked under bug >> >> JDK-8011044: Remove support for 1.5 and earlier source and target >> options >> https://bugs.openjdk.java.net/browse/JDK-8011044 >> >> Many subtasks related to this are already complete, including updating >> regression tests in the jdk and langtools repos. It has come to my >> attention that the hotspot repo also has a few tests that use -source >> and -target that should be updated. Please review the changes: >> >> http://cr.openjdk.java.net/~darcy/8048620.0/ >> >> Full patch below. From what I could tell looking at the bug and tests, >> these tests are not sensitive to the class file version so they >> shouldn't need to use an explicit -source or -target option and should >> just accept the JDK-default. >> >> There is one additional test which uses -source/-target, >> test/compiler/6932496/Test6932496.java. This test *does* appear >> sensitive to class file version (no jsr / jret instruction in target 6 >> or higher) so I have not modified this test. If the test is not >> actually sensitive to class file version, it can be updated like the >> others. If it is sensitive and if testing this is still relevant, the >> class file in question will need to be generated in some other way, >> such as as by using ASM. >> >> Regardless of the outcome of the technical discussion around >> Test6932496.java, I'd appreciate if a "hotspot buddy" could shepherd >> this fix through the HotSpot processes. 
>> >> Thanks, >> >> -Joe >> >> [1] http://openjdk.java.net/jeps/182 >> >> [2] >> http://mail.openjdk.java.net/pipermail/jdk9-dev/2014-January/000328.html >> >> --- old/test/compiler/6775880/Test.java 2014-06-27 >> 16:24:25.000000000 -0700 >> +++ new/test/compiler/6775880/Test.java 2014-06-27 >> 16:24:25.000000000 -0700 >> @@ -26,7 +26,6 @@ >> * @test >> * @bug 6775880 >> * @summary EA +DeoptimizeALot: >> assert(mon_info->owner()->is_locked(),"object must be locked now") >> - * @compile -source 1.4 -target 1.4 Test.java >> * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -Xbatch >> -XX:+DoEscapeAnalysis -XX:+DeoptimizeALot >> -XX:CompileCommand=exclude,java.lang.AbstractStringBuilder::append Test >> */ >> >> --- old/test/runtime/6626217/Test6626217.sh 2014-06-27 >> 16:24:26.000000000 -0700 >> +++ new/test/runtime/6626217/Test6626217.sh 2014-06-27 >> 16:24:26.000000000 -0700 >> @@ -54,7 +54,7 @@ >> >> # Compile all the usual suspects, including the default 'many_loader' >> ${CP} many_loader1.java.foo many_loader.java >> -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint *.java >> +${JAVAC} ${TESTJAVACOPTS} -Xlint *.java >> >> # Rename the class files, so the custom loader (and not the system >> loader) will find it >> ${MV} from_loader2.class from_loader2.impl2 >> @@ -62,7 +62,7 @@ >> # Compile the next version of 'many_loader' >> ${MV} many_loader.class many_loader.impl1 >> ${CP} many_loader2.java.foo many_loader.java >> -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint >> many_loader.java >> +${JAVAC} ${TESTJAVACOPTS} -Xlint many_loader.java >> >> # Rename the class file, so the custom loader (and not the system >> loader) will find it >> ${MV} many_loader.class many_loader.impl2 >> --- old/test/runtime/8003720/Test8003720.java 2014-06-27 >> 16:24:26.000000000 -0700 >> +++ new/test/runtime/8003720/Test8003720.java 2014-06-27 >> 16:24:26.000000000 -0700 >> @@ -26,7 +26,7 @@ >> * @test >> * @bug 8003720 >> * @summary Method in interpreter stack 
frame can be deallocated >> - * @compile -XDignore.symbol.file -source 1.7 -target 1.7 Victim.java >> + * @compile -XDignore.symbol.file Victim.java >> * @run main/othervm -Xverify:all -Xint Test8003720 >> */ >> >> > From doug.simon at oracle.com Wed Jul 2 10:39:23 2014 From: doug.simon at oracle.com (Doug Simon) Date: Wed, 2 Jul 2014 12:39:23 +0200 Subject: RFR 8047737 Move array component mirror to instance of java/lang/Class In-Reply-To: References: Message-ID: <01C43464-D53E-4660-8BDE-544709871AD8@oracle.com> > Date: Tue, 01 Jul 2014 23:26:43 -0700 > From: Mandy Chung > To: Coleen Phillimore > Cc: hotspot-dev developers , > core-libs-dev > Subject: Re: RFR 8047737 Move array component mirror to instance of > java/lang/Class > Message-ID: <53B3A623.1050804 at oracle.com> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > > On 6/30/2014 9:51 PM, Christian Thalinger wrote: >> On Jun 30, 2014, at 5:50 PM, Coleen Phillimore wrote: >> >> >> On 6/30/14, 3:50 PM, Christian Thalinger wrote: >>>> private Class(ClassLoader loader) { >>>> // Initialize final field for classLoader. The initialization value of non-null >>>> // prevents future JIT optimizations from assuming this final field is null. >>>> classLoader = loader; >>>> + componentType = null; >>>> } >>>> >>>> Are we worried about the same optimization? >>> Hi, I've decided to make them consistent and add another parameter to the Class constructor. >>> >>> http://cr.openjdk.java.net/~coleenp/8047737_jdk_2/ > > The jdk change looks okay while I am beginning to think whether we really want to keep expanding this constructor to deal with this future JIT optimization (you will be moving more fields out from the VM to java.lang.Class). > > There are places in JDK initializing the final fields to null while the final field value is overridden via native/VM - e.g. System.in, System.out, etc. 
I would prefer reverting the classLoader constructor change to expanding the constructor for any new field being added. Handle it (and other places in JDK) when such JIT optimization comes.

I think doing the assignment to a non-null value in the constructor is the right thing. Do we really want to keep expanding code like this in ciField.cpp:

  KlassHandle k = _holder->get_Klass();
  assert( SystemDictionary::System_klass() != NULL, "Check once per vm");
  if( k() == SystemDictionary::System_klass() ) {
    // Check offsets for case 2: System.in, System.out, or System.err
    if( _offset == java_lang_System::in_offset_in_bytes()  ||
        _offset == java_lang_System::out_offset_in_bytes() ||
        _offset == java_lang_System::err_offset_in_bytes() ) {
      _is_constant = false;
      return;
    }
  }

This code may also need to be duplicated and maintained for other JIT compilers that don't use the C++ compiler interface. In practice at the moment, I don't think it makes a difference either way. Apart from @Stable fields, I think non-static fields with a null value are not treated as constant (by all JITs I know of). -Doug From david.holmes at oracle.com Wed Jul 2 10:39:21 2014 From: david.holmes at oracle.com (David Holmes) Date: Wed, 02 Jul 2014 20:39:21 +1000 Subject: Please look at my JEP In-Reply-To: <53B3BEBE.6060103@redhat.com> References: <53A2F354.40606@redhat.com> <53B3BEBE.6060103@redhat.com> Message-ID: <53B3E159.8090708@oracle.com> Hi Andrew, On 2/07/2014 6:11 PM, Andrew Haley wrote: > Hi everybody, > > Please can someone review my JEP? > > It's very simple, and until we can get things moving this is > blocking a significant contribution to OpenJDK. > > https://bugs.openjdk.java.net/browse/JDK-8044552 The JEP 2.0 process [1] is still being formulated so it is unclear to me how to advance this JEP at this time. I'm not even sure how to officially "review" it as such. The basic proposal is sound (similar to ppc64/aix).
The engineering plan would need a lot more detail I think - perhaps discuss with Volker & Goetz to get details of what was needed for PPC64 with regard to the staging etc. Perhaps prepare webrevs for the shared code changes and have them looked at by the different groups: hotspot, build, core-libs. Also it needs to be done in the context of the JDK 9 project, so this email probably needs to go to the jdk9-dev alias, and solicit support from the JDK 9 project lead. Cheers, David [1] http://cr.openjdk.java.net/~mr/jep/jep-2.0-02.html > Thanks, > Andrew. > > > > On 19/06/14 15:27, Andrew Haley wrote: >> The JEP is here: >> >> https://bugs.openjdk.java.net/browse/JDK-8044552 >> >> As you may know, we've been working on this port for some time. >> It is now at the stage where it may be considered for inclusion >> in OpenJDK. It passes all its tests, and although there is still >> some tidying up to do, I think we should move to the next stage. >> >> Thanks, >> Andrew. >> > From harold.seigel at oracle.com Wed Jul 2 12:11:06 2014 From: harold.seigel at oracle.com (harold seigel) Date: Wed, 02 Jul 2014 08:11:06 -0400 Subject: JDK 9 RFR of JDK-8048620: Remove unneeded/obsolete -source/-target options in hotspot tests In-Reply-To: <53B3DB0D.8070700@oracle.com> References: <53AE04E1.4000806@oracle.com> <53B2E513.5020608@oracle.com> <53B3DB0D.8070700@oracle.com> Message-ID: <53B3F6DA.1050209@oracle.com> Hi Joe, Your changes look good to me, also. Would you like me to sponsor your change? Thanks, Harold On 7/2/2014 6:12 AM, David Holmes wrote: > Hi Joe, > > I can provide you one Review. It seems to me the -source/-target were > being set to ensure a minimum version (probably only -target was needed > but -source had to come along for the ride), so removing them seems fine. > > Note hotspot protocol requires copyright updates at the time of > checkin - thanks.
> > Also you will need to create the changeset against the group repo for > whomever your sponsor is (though your existing patch from the webrev > will probably apply cleanly). > > A second reviewer (small R) is needed. If they don't sponsor it I will. > > Cheers, > David > > > > On 2/07/2014 2:42 AM, Joe Darcy wrote: >> *ping* >> >> -Joe >> >> On 06/27/2014 04:57 PM, Joe Darcy wrote: >>> Hello, >>> >>> As a consequence of a policy for retiring old javac -source and >>> -target options (JEP 182 [1]), in JDK 9, only -source/-target of 6/1.6 >>> and higher will be supported [2]. This work is being tracked under bug >>> >>> JDK-8011044: Remove support for 1.5 and earlier source and target >>> options >>> https://bugs.openjdk.java.net/browse/JDK-8011044 >>> >>> Many subtasks related to this are already complete, including updating >>> regression tests in the jdk and langtools repos. It has come to my >>> attention that the hotspot repo also has a few tests that use -source >>> and -target that should be updated. Please review the changes: >>> >>> http://cr.openjdk.java.net/~darcy/8048620.0/ >>> >>> Full patch below. From what I could tell looking at the bug and tests, >>> these tests are not sensitive to the class file version so they >>> shouldn't need to use an explicit -source or -target option and should >>> just accept the JDK-default. >>> >>> There is one additional test which uses -source/-target, >>> test/compiler/6932496/Test6932496.java. This test *does* appear >>> sensitive to class file version (no jsr / jret instruction in target 6 >>> or higher) so I have not modified this test. If the test is not >>> actually sensitive to class file version, it can be updated like the >>> others. If it is sensitive and if testing this is still relevant, the >>> class file in question will need to be generated in some other way, >>> such as by using ASM.
>>> >>> Regardless of the outcome of the technical discussion around >>> Test6932496.java, I'd appreciate if a "hotspot buddy" could shepherd >>> this fix through the HotSpot processes. >>> >>> Thanks, >>> >>> -Joe >>> >>> [1] http://openjdk.java.net/jeps/182 >>> >>> [2] >>> http://mail.openjdk.java.net/pipermail/jdk9-dev/2014-January/000328.html >>> >>> >>> --- old/test/compiler/6775880/Test.java 2014-06-27 >>> 16:24:25.000000000 -0700 >>> +++ new/test/compiler/6775880/Test.java 2014-06-27 >>> 16:24:25.000000000 -0700 >>> @@ -26,7 +26,6 @@ >>> * @test >>> * @bug 6775880 >>> * @summary EA +DeoptimizeALot: >>> assert(mon_info->owner()->is_locked(),"object must be locked now") >>> - * @compile -source 1.4 -target 1.4 Test.java >>> * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -Xbatch >>> -XX:+DoEscapeAnalysis -XX:+DeoptimizeALot >>> -XX:CompileCommand=exclude,java.lang.AbstractStringBuilder::append Test >>> */ >>> >>> --- old/test/runtime/6626217/Test6626217.sh 2014-06-27 >>> 16:24:26.000000000 -0700 >>> +++ new/test/runtime/6626217/Test6626217.sh 2014-06-27 >>> 16:24:26.000000000 -0700 >>> @@ -54,7 +54,7 @@ >>> >>> # Compile all the usual suspects, including the default 'many_loader' >>> ${CP} many_loader1.java.foo many_loader.java >>> -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint *.java >>> +${JAVAC} ${TESTJAVACOPTS} -Xlint *.java >>> >>> # Rename the class files, so the custom loader (and not the system >>> loader) will find it >>> ${MV} from_loader2.class from_loader2.impl2 >>> @@ -62,7 +62,7 @@ >>> # Compile the next version of 'many_loader' >>> ${MV} many_loader.class many_loader.impl1 >>> ${CP} many_loader2.java.foo many_loader.java >>> -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint >>> many_loader.java >>> +${JAVAC} ${TESTJAVACOPTS} -Xlint many_loader.java >>> >>> # Rename the class file, so the custom loader (and not the system >>> loader) will find it >>> ${MV} many_loader.class many_loader.impl2 >>> --- 
old/test/runtime/8003720/Test8003720.java 2014-06-27 >>> 16:24:26.000000000 -0700 >>> +++ new/test/runtime/8003720/Test8003720.java 2014-06-27 >>> 16:24:26.000000000 -0700 >>> @@ -26,7 +26,7 @@ >>> * @test >>> * @bug 8003720 >>> * @summary Method in interpreter stack frame can be deallocated >>> - * @compile -XDignore.symbol.file -source 1.7 -target 1.7 Victim.java >>> + * @compile -XDignore.symbol.file Victim.java >>> * @run main/othervm -Xverify:all -Xint Test8003720 >>> */ >>> >>> >> From vladimir.danushevsky at oracle.com Wed Jul 2 12:49:13 2014 From: vladimir.danushevsky at oracle.com (Vladimir Danushevsky) Date: Wed, 2 Jul 2014 08:49:13 -0400 Subject: RFR: 8048554: C2: Pass AES lookup tables for platforms with no HW crypto acceleration In-Reply-To: References: <6F653FA9-7E55-49F0-AC19-EBFDA9E1A6DE@oracle.com> Message-ID: <594866A6-79AF-478D-9234-FF59F324FDEF@oracle.com> Hi Roland, On Jul 2, 2014, at 5:33 AM, Roland Westrelin wrote: >> http://cr.openjdk.java.net/~vladidan/8048554/ > > Can you move: > > 5879 BasicType bt = field->layout_type(); > > in the if: > > 5881 if (!is_static) { > > ? The BasicType value is being referenced in both variants of is_static: line 5888 line 5892 line 5894 > > typo (require): > > 5961 // Some platforms with no AES HW acceleration might requre the lookup tables Will do. Thanks a lot, Vlad > > Otherwise, looks good to me. > > Roland. 
From roland.westrelin at oracle.com Wed Jul 2 12:50:51 2014 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Wed, 2 Jul 2014 14:50:51 +0200 Subject: RFR: 8048554: C2: Pass AES lookup tables for platforms with no HW crypto acceleration In-Reply-To: <594866A6-79AF-478D-9234-FF59F324FDEF@oracle.com> References: <6F653FA9-7E55-49F0-AC19-EBFDA9E1A6DE@oracle.com> <594866A6-79AF-478D-9234-FF59F324FDEF@oracle.com> Message-ID: <82C0CF29-0E46-424F-A5F2-D26E3E1A86E8@oracle.com> >>> http://cr.openjdk.java.net/~vladidan/8048554/ >> >> Can you move: >> >> 5879 BasicType bt = field->layout_type(); >> >> in the if: >> >> 5881 if (!is_static) { >> >> ? > > The BasicType value is being referenced in both variants of is_static: > line 5888 > line 5892 > line 5894 You're right. Sorry I misread the code. That looks good. Roland. > >> >> typo (require): >> >> 5961 // Some platforms with no AES HW acceleration might requre the lookup tables > > Will do. > > Thanks a lot, > Vlad > >> >> Otherwise, looks good to me. >> >> Roland. > From aph at redhat.com Wed Jul 2 12:57:28 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 02 Jul 2014 13:57:28 +0100 Subject: Please look at my JEP In-Reply-To: <53B3E159.8090708@oracle.com> References: <53A2F354.40606@redhat.com> <53B3BEBE.6060103@redhat.com> <53B3E159.8090708@oracle.com> Message-ID: <53B401B8.80600@redhat.com> Hi, On 07/02/2014 11:39 AM, David Holmes wrote: > Hi Andrew, > > On 2/07/2014 6:11 PM, Andrew Haley wrote: >> Hi everybody, >> >> Please can someone review my JEP? >> >> It's very simple, and until we can get things moving this is >> blocking a significant contribution to OpenJDK. >> >> https://bugs.openjdk.java.net/browse/JDK-8044552 > > The JEP 2.0 process [1] is still being formulated so it is unclear to me > how to advance this JEP at this time. I'm doing this as part of JEP 2.0 because Mark Reinhold asked me to. He suggested I post the request here. Perhaps he can advise.
> I'm not even sure how to officially "review" it as such. The basic > proposal is sound (similar to ppc64/aix). The engineering plan would > need a lot more detail I think - perhaps discuss with Volker & Goetz > to get details of what was needed for PPC64 with regard to the > staging etc. Perhaps prepare webrevs for the shared code changes and > have them looked at by the different groups: hotspot, build, > core-libs. Also it needs to be done in the context of the JDK 9 > project, so this email probably needs to go to the jdk9-dev alias, > and solicit support from the JDK 9 project lead. Right, but as I understand it all that is required at this stage is a basic sanity check of the proposal, which is very similar to that for PPC64. I'm of course happy to produce webrevs for the shared code or anything else. Andrew. From coleen.phillimore at oracle.com Wed Jul 2 13:16:29 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 02 Jul 2014 09:16:29 -0400 Subject: RFR 8047737 Move array component mirror to instance of java/lang/Class In-Reply-To: <53B3FDF2.9060708@gmail.com> References: <53ADC4D4.4030403@oracle.com> <53B0FBEF.5030607@oracle.com> <53B15B50.6070405@oracle.com> <53B205C7.8070804@oracle.com> <53B3A623.1050804@oracle.com> <53B3F985.9030109@gmail.com> <53B3FD3F.30901@gmail.com> <53B3FDF2.9060708@gmail.com> Message-ID: <53B4062D.5080203@oracle.com> On 7/2/14, 8:41 AM, Peter Levart wrote: > On 07/02/2014 02:38 PM, Peter Levart wrote: >> On 07/02/2014 02:22 PM, Peter Levart wrote: >>> On 07/02/2014 08:26 AM, Mandy Chung wrote: >>>> >>>> On 6/30/2014 9:51 PM, Christian Thalinger wrote: >>>>> On Jun 30, 2014, at 5:50 PM, Coleen Phillimore >>>>> wrote: >>>>> >>>>> >>>>> On 6/30/14, 3:50 PM, Christian Thalinger wrote: >>>>>>> private Class(ClassLoader loader) { >>>>>>> // Initialize final field for classLoader. The >>>>>>> initialization value of non-null >>>>>>> // prevents future JIT optimizations from assuming >>>>>>> this final field is null.
>>>>>>> classLoader = loader; >>>>>>> + componentType = null; >>>>>>> } >>>>>>> >>>>>>> Are we worried about the same optimization? >>>>>> Hi, I've decided to make them consistent and add another >>>>>> parameter to the Class constructor. >>>>>> >>>>>> http://cr.openjdk.java.net/~coleenp/8047737_jdk_2/ >>>> >>>> The jdk change looks okay while I am beginning to think whether we >>>> really want to keep expanding this constructor to deal with this >>>> future JIT optimization (you will be moving more fields out from >>>> the VM to java.lang.Class). >>>> >>>> There are places in JDK initializing the final fields to null while >>>> the final field value is overridden via native/VM - e.g. System.in, >>>> System.out, etc. I would prefer reverting the classLoader >>>> constructor change to expanding the constructor for any new field >>>> being added. Handle it (and other places in JDK) when such JIT >>>> optimization comes. >>>> >>>> Mandy >>>> >>> >>> What about: >>> >>> >>> private Class() { >>> classLoader = none(); >>> componentType = none(); >>> ... >>> } >>> >>> private T none() { throw new Error(); } >>> >>> >>> I think this should be resistant to future optimizations. >> >> And you could even remove the special-casing in >> AccessibleObject.setAccessible0() then. >> >> Regards, Peter > > I take it back. Such java.lang.Class instance would still be > constructed and GC will see it. The setAccessible0 check is still needed because we do other things to the mirror inside the jvm. Coleen > >> >>> >>> Regards, Peter >>> >> > From maynardj at us.ibm.com Wed Jul 2 16:15:08 2014 From: maynardj at us.ibm.com (Maynard Johnson) Date: Wed, 02 Jul 2014 11:15:08 -0500 Subject: Fwd: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero In-Reply-To: <53AAE839.8050105@us.ibm.com> References: <53AAE839.8050105@us.ibm.com> Message-ID: <53B4300C.7040401@us.ibm.com> Cross-posting to see if Hotspot developers can help. 
-Maynard -------- Original Message -------- Subject: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero Date: Wed, 25 Jun 2014 10:18:17 -0500 From: Maynard Johnson To: ppc-aix-port-dev at openjdk.java.net Hello, PowerPC OpenJDK folks, I am just now starting to get involved in the OpenJDK project. My goal is to ensure that the standard serviceability tools and tooling (jdb, JVMTI, jmap, etc.) work correctly on the PowerLinux platform. I selected JVMTI to start with since I have some experience from a client perspective with the JVMTI API. An OSS profiling tool for which I am the maintainer (oprofile) provides an agent library that implements the JVMTI API. Using this agent library to profile Java apps on my Intel-based laptop with OpenJDK (using various versions, up to current jdk9-dev) works fine. But the same profiling scenario attempted on my PowerLinux box (POWER7/Fedora 20) fails miserably. The oprofile agent library registers for callbacks for CompiledMethodLoad, CompiledMethodUnload, and DynamicCodeGenerated. In the callback functions, it writes information about the JVMTI event to a file. After profiling completes, oprofile's post-processing phase involves interpreting the information from the agent library's output file and generating an ELF file to represent the JITed code. When I profile an OpenJDK app on my Power system, the post-processing phase fails while trying to resolve overlapping symbols. The failure is due to the fact that it is unexpectedly finding symbols with code size of zero overlapping at the starting address of some other symbol with non-zero code size. The symbols in question here are from DynamicCodeGenerated events. Are these "code size=0" events valid? If so, I can fix the oprofile code to handle them. If they're not valid, then below is some debug information I've collected so far. 
----------------------------

I instrumented JvmtiExport::post_dynamic_code_generated_internal (in hotspot/src/share/vm/prims/jvmtiExport.cpp) to print a debug line when a symbol with code size of zero was detected and then ran the following command:

  java -agentpath:/jvm/openjdk-1.9.0-internal/demo/jvmti/CodeLoadInfo/lib/libCodeLoadInfo.so -version

The debug output from my instrumentation was as follows:

  Code size is ZERO!! Dynamic code generated event sent for flush_icache_stub; code begin: 0x3fff68000080; code end: 0x3fff68000080
  Code size is ZERO!! Dynamic code generated event sent for throw_exception; code begin: 0x3fff68000a90; code end: 0x3fff68000a90
  Code size is ZERO!! Dynamic code generated event sent for throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600
  Code size is ZERO!! Dynamic code generated event sent for throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600
  Code size is ZERO!! Dynamic code generated event sent for throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600
  Code size is ZERO!! Dynamic code generated event sent for verify_oop; code begin: 0x3fff6801665c; code end: 0x3fff6801665c
  openjdk version "1.9.0-internal"
  OpenJDK Runtime Environment (build 1.9.0-internal-mpj_2014_06_18_09_55-b00)
  OpenJDK 64-Bit Server VM (build 1.9.0-internal-mpj_2014_06_18_09_55-b00, mixed mode)

I don't have access to an AIX system to know if the same issue would be seen there. Let me know if there's any other information I can provide.

Thanks for the help.

-Maynard

From daniel.daugherty at oracle.com Wed Jul 2 16:28:48 2014 From: daniel.daugherty at oracle.com (Daniel D.
Daugherty) Date: Wed, 02 Jul 2014 10:28:48 -0600 Subject: Fwd: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero In-Reply-To: <53B4300C.7040401@us.ibm.com> References: <53AAE839.8050105@us.ibm.com> <53B4300C.7040401@us.ibm.com> Message-ID: <53B43340.6020508@oracle.com> Adding the Serviceability team to the thread since JVM/TI is owned by them... Dan On 7/2/14 10:15 AM, Maynard Johnson wrote: > Cross-posting to see if Hotspot developers can help. > > -Maynard > > > -------- Original Message -------- > Subject: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero > Date: Wed, 25 Jun 2014 10:18:17 -0500 > From: Maynard Johnson > To: ppc-aix-port-dev at openjdk.java.net > > Hello, PowerPC OpenJDK folks, > I am just now starting to get involved in the OpenJDK project. My goal is to ensure that the standard serviceability tools and tooling (jdb, JVMTI, jmap, etc.) work correctly on the PowerLinux platform. I selected JVMTI to start with since I have some experience from a client perspective with the JVMTI API. An OSS profiling tool for which I am the maintainer (oprofile) provides an agent library that implements the JVMTI API. Using this agent library to profile Java apps on my Intel-based laptop with OpenJDK (using various versions, up to current jdk9-dev) works fine. But the same profiling scenario attempted on my PowerLinux box (POWER7/Fedora 20) fails miserably. > > The oprofile agent library registers for callbacks for CompiledMethodLoad, CompiledMethodUnload, and DynamicCodeGenerated. In the callback functions, it writes information about the JVMTI event to a file. After profiling completes, oprofile's post-processing phase involves interpreting the information from the agent library's output file and generating an ELF file to represent the JITed code. When I profile an OpenJDK app on my Power system, the post-processing phase fails while trying to resolve overlapping symbols. 
The failure is due to the fact that it is unexpectedly finding symbols with code size of zero overlapping at the starting address of some other symbol with non-zero code size. The symbols in question here are from DynamicCodeGenerated events. > > Are these "code size=0" events valid? If so, I can fix the oprofile code to handle them. If they're not valid, then below is some debug information I've collected so far. > > ---------------------------- > > I instrumented JvmtiExport::post_dynamic_code_generated_internal (in hotspot/src/share/vm/prims/jvmtiExport.cpp) to print a debug line when a symbol with code size of zero was detected and then ran the following command: > > java -agentpath:/jvm/openjdk-1.9.0-internal/demo/jvmti/CodeLoadInfo/lib/libCodeLoadInfo.so -version > > The debug output from my instrumentation was as follows: > > Code size is ZERO!! Dynamic code generated event sent for flush_icache_stub; code begin: 0x3fff68000080; code end: 0x3fff68000080 > Code size is ZERO!! Dynamic code generated event sent for throw_exception; code begin: 0x3fff68000a90; code end: 0x3fff68000a90 > Code size is ZERO!! Dynamic code generated event sent for throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 > Code size is ZERO!! Dynamic code generated event sent for throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 > Code size is ZERO!! Dynamic code generated event sent for throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 > Code size is ZERO!! Dynamic code generated event sent for verify_oop; code begin: 0x3fff6801665c; code end: 0x3fff6801665c > openjdk version "1.9.0-internal" > OpenJDK Runtime Environment (build 1.9.0-internal-mpj_2014_06_18_09_55-b00) > OpenJDK 64-Bit Server VM (build 1.9.0-internal-mpj_2014_06_18_09_55-b00, mixed mode) > > > I don't have access to an AIX system to know if the same issue would be seen there. Let me know if there's any other information I can provide. > > Thanks for the help. 
> > -Maynard > > > From coleen.phillimore at oracle.com Wed Jul 2 17:05:00 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 02 Jul 2014 13:05:00 -0400 Subject: RFR 8047737 Move array component mirror to instance of java/lang/Class In-Reply-To: <53B3A623.1050804@oracle.com> References: <53ADC4D4.4030403@oracle.com> <53B0FBEF.5030607@oracle.com> <53B15B50.6070405@oracle.com> <53B205C7.8070804@oracle.com> <53B3A623.1050804@oracle.com> Message-ID: <53B43BBC.3030501@oracle.com> Hi Mandy, The componentType field is the last one that I'm planning on moving out for now, so I'd like to keep the code as is. If more are added because of more performance opportunities, I think we can revisit this. I agree with Doug that we don't want any more special code like this in the JVM to disable these optimizations if they are ever implemented. Thank you for reviewing the code. Coleen On 7/2/14, 2:26 AM, Mandy Chung wrote: > > On 6/30/2014 9:51 PM, Christian Thalinger wrote: >> On Jun 30, 2014, at 5:50 PM, Coleen Phillimore >> wrote: >> >> >> On 6/30/14, 3:50 PM, Christian Thalinger wrote: >>>> private Class(ClassLoader loader) { >>>> // Initialize final field for classLoader. The >>>> initialization value of non-null >>>> // prevents future JIT optimizations from assuming this >>>> final field is null. >>>> classLoader = loader; >>>> + componentType = null; >>>> } >>>> >>>> Are we worried about the same optimization? >>> Hi, I've decided to make them consistent and add another parameter >>> to the Class constructor. >>> >>> http://cr.openjdk.java.net/~coleenp/8047737_jdk_2/ > > The jdk change looks okay while I am beginning to think whether we > really want to keep expanding this constructor to deal with this > future JIT optimization (you will be moving more fields out from the > VM to java.lang.Class). > > There are places in JDK initializing the final fields to null while > the final field value is overridden via native/VM - e.g. 
System.in, > System.out, etc. I would prefer reverting the classLoader constructor > change to expanding the constructor for any new field being added. > Handle it (and other places in JDK) when such JIT optimization comes. > > Mandy > From mandy.chung at oracle.com Wed Jul 2 17:21:33 2014 From: mandy.chung at oracle.com (Mandy Chung) Date: Wed, 02 Jul 2014 10:21:33 -0700 Subject: RFR 8047737 Move array component mirror to instance of java/lang/Class In-Reply-To: <53B43BBC.3030501@oracle.com> References: <53ADC4D4.4030403@oracle.com> <53B0FBEF.5030607@oracle.com> <53B15B50.6070405@oracle.com> <53B205C7.8070804@oracle.com> <53B3A623.1050804@oracle.com> <53B43BBC.3030501@oracle.com> Message-ID: <53B43F9D.806@oracle.com> I wasn't aware of the VM special case of System.{in,out,err} fields that Doug pointed out (thanks. I should have checked). That's okay with me. Mandy On 7/2/2014 10:05 AM, Coleen Phillimore wrote: > > Hi Mandy, > > The componentType field is the last one that I'm planning on moving > out for now, so I'd like to keep the code as is. If more are added > because of more performance opportunities, I think we can revisit this. > > I agree with Doug that we don't want any more special code like this > in the JVM to disable these optimizations if they are ever implemented. > > Thank you for reviewing the code. > Coleen > > On 7/2/14, 2:26 AM, Mandy Chung wrote: >> >> On 6/30/2014 9:51 PM, Christian Thalinger wrote: >>> On Jun 30, 2014, at 5:50 PM, Coleen Phillimore >>> wrote: >>> >>> >>> On 6/30/14, 3:50 PM, Christian Thalinger wrote: >>>>> private Class(ClassLoader loader) { >>>>> // Initialize final field for classLoader. The >>>>> initialization value of non-null >>>>> // prevents future JIT optimizations from assuming this >>>>> final field is null. >>>>> classLoader = loader; >>>>> + componentType = null; >>>>> } >>>>> >>>>> Are we worried about the same optimization? 
>>>> Hi, I've decided to make them consistent and add another parameter >>>> to the Class constructor. >>>> >>>> http://cr.openjdk.java.net/~coleenp/8047737_jdk_2/ >> >> The jdk change looks okay while I am beginning to think whether we >> really want to keep expanding this constructor to deal with this >> future JIT optimization (you will be moving more fields out from the >> VM to java.lang.Class). >> >> There are places in JDK initializing the final fields to null while >> the final field value is overridden via native/VM - e.g. System.in, >> System.out, etc. I would prefer reverting the classLoader >> constructor change to expanding the constructor for any new field >> being added. Handle it (and other places in JDK) when such JIT >> optimization comes. >> >> Mandy >> > From volker.simonis at gmail.com Wed Jul 2 17:26:08 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 2 Jul 2014 19:26:08 +0200 Subject: Please look at my JEP In-Reply-To: <53B401B8.80600@redhat.com> References: <53A2F354.40606@redhat.com> <53B3BEBE.6060103@redhat.com> <53B3E159.8090708@oracle.com> <53B401B8.80600@redhat.com> Message-ID: Hi Andrew, first of all I want to say that although I've already done a complete (and successful) round through the JEP process with our PowerPC/AIX port, the whole process still remained mysterious to me. Generally, coming from outside Oracle, it is hard to inspire people to review a JEP for a project which doesn't help them and which in the end will cause a lot of work for them. If nobody else wants to comment on the Draft, I'd push the JEP to the "Submitted" state right away. Nevertheless, providing some more details about the approximate number and size of changes in the Wiki together with links to the webrevs may be helpful for reviewers. One thing we did for example was to create a patch queue (i.e. a Mercurial Queue - see [2]) which could be imported into the latest development branch (in your case that would probably be jdk9/hs/hotspot).
Each of the patches in the queue was:
- more or less self-contained
- usually a collection/merge of several changes from our porting repository
- easy to review (not too big, not too small)
- verbosely named to summarize its content
We tried hard to always keep the queue up to date with respect to the upstream repository such that reviewers could easily build and test the changes (that means nightly builds and daily fixes:). Just see our porting repository [3] to get an impression of our patch queue, and read Goetz's mail [4] which explains how this patch queue can be applied to an upstream repository. You could then easily link these patches (or lists thereof) into your JEP/Wiki/Integration Plan. Once you have all this in place, I think there's no excuse why your JEP shouldn't be endorsed and funded. At first glance, David is right that this review should be done in the context of the JDK 9 project. On the other hand, I'm pretty sure that the vast majority of your changes are in the HotSpot repository, and it's the HotSpot changes which cause most of the trouble, because you always need an Oracle sponsor for each of them even if you're a committer. In any case, your JEP needs to be endorsed and sponsored by a Group Lead (see "Making decisions and building consensus" in [7]), so I think the hotspot group would be the right sponsoring group and John Coomes, as the hotspot Group Lead, the right person to endorse it. If I remember right, at just about this stage of the process we created our "PowerPC/AIX Port Integration Plan" [5] together with some people from Oracle; it already contained a quite detailed estimation of the efforts, durations, dependencies and confidence levels for each of the planned integration steps. One of the main points was the creation of a so-called "staging" forest inside our project repository which was a clone of the latest development forest (in your case that would probably be jdk9/hs/hotspot).
Notice that this step HAS TO BE DONE by Oracle, because the staging forest also contains Oracle's closed part of the OpenJDK (a nice oxymoron:), and it has to, because all your HotSpot changes will have to pass Oracle's internal testing, which also exercises the closed platforms. Sometimes it is also necessary that Oracle adapts its closed sources such that they still work together with the open part after your changes, and this can obviously only be done by Oracle. Because of its special nature, the "staging" repository can only be synced with the upstream development repository by an Oracle employee (and this person should be a good friend of yours, because you'll need his help:). With this staging repository in place, we finally created bug IDs and webrevs for each of the patches from our patch queue [3], submitted them for review to the corresponding mailing lists, and pushed them to our staging repository once they were reviewed (or asked our sponsors inside Oracle to push them, for HotSpot changes). So what could be the next steps to push your JEP forward:
- publish a more detailed description of your proposed changes (patch queue, webrevs)
- publish build results (see for example [6], which contained nightly builds of our staging repository and of the upstream repository with our patch queue applied, before our port was merged into the main code line)
- publish more information about your port on your Wiki page (e.g. your FOSDEM talks, which I liked a lot:), ARM manuals, etc.), which makes it easier for reviewers to understand your code.
- contact the corresponding Group Leads directly and ask them what additional information they need to endorse/fund your JEP (do this more often if you get no answer:)
- campaign for your project among all your Oracle connections (you probably know the names better than I do; if not, contact me :)
You have to understand that a port integration is a considerable effort from Oracle's side and they won't do it unless you push them really hard. Regards, Volker PS: still not a reviewer but maybe an OpenJDK Porting "Area Lead" :) [1] http://cr.openjdk.java.net/~mr/jep/jep-2.0-02.html [2] http://mercurial.selenic.com/wiki/MqExtension [3] http://hg.openjdk.java.net/ppc-aix-port/jdk8/hotspot/file/tip/ppc_patches [4] http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-June/009856.html [5] https://wiki.openjdk.java.net/pages/viewpage.action?pageId=13729959 [6] http://cr.openjdk.java.net/~simonis/ppc-aix-port/index.html [7] http://openjdk.java.net/jeps/1 On Wed, Jul 2, 2014 at 2:57 PM, Andrew Haley wrote: > Hi, > > On 07/02/2014 11:39 AM, David Holmes wrote: >> Hi Andrew, >> >> On 2/07/2014 6:11 PM, Andrew Haley wrote: >>> Hi everybody, >>> >>> Please can someone review my JEP? >>> >>> It's very simple, and until we can get things moving this is >>> blocking a significant contribution to OpenJDK. >>> >>> https://bugs.openjdk.java.net/browse/JDK-8044552 >> >> The JEP 2.0 process [1] is still being formulated so it is unclear to me >> how to advance this JEP at this time. > > I'm doing this as part of JEP 2.0 because Mark Reinhold asked me to. > He suggested I post the request here. Perhaps he can advise. > >> I'm not even sure how to officially "review" it as such. The basic >> proposal is sound (similar to ppc64/aix). The engineering plan would >> need a lot more detail I think - perhaps discuss with Volker & Goetz >> to get details of what was needed for PPC64 with regard to the >> staging etc.
Perhaps prepare webrevs for the shared code changes and >> have them looked at by the different groups: hotspot, build, >> core-libs. Also it needs to be done in the context of the JDK 9 >> project, so this email probably needs to go to the jdk9-dev alias, >> and solicit support from the JDK 9 project lead. > > Right, but as I understand it all that is required at this stage is a > basic sanity check of the proposal, which is very similar to that for > PPC64. > > I'm of course happy to produce webrevs for the shared code or anything > else. > > Andrew. > From aph at redhat.com Wed Jul 2 17:36:04 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 02 Jul 2014 18:36:04 +0100 Subject: Please look at my JEP In-Reply-To: References: <53A2F354.40606@redhat.com> <53B3BEBE.6060103@redhat.com> <53B3E159.8090708@oracle.com> <53B401B8.80600@redhat.com> Message-ID: <53B44304.6070209@redhat.com> On 07/02/2014 06:26 PM, Volker Simonis wrote: > You have to understand that a port integration is a considerable > effort from Oracle's side and they won't do it unless you push > them really hard. Okay. I am certain that the integration of this port into HotSpot mainline will be a lot easier than your PowerPC/AIX port's was: the number of non-trivial changes is nearly zero. [By a trivial change I mean an "#ifdef AARCH64 #include ..." guard or adding "|| defined(AARCH64)" to some existing code. Barring typos, these cannot affect existing code.] If it's necessary to produce webrevs to get in-principle approval of the JEP, I will do so. Thanks, Andrew.
From volker.simonis at gmail.com Wed Jul 2 17:38:42 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 2 Jul 2014 19:38:42 +0200 Subject: Fwd: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero In-Reply-To: <53B43340.6020508@oracle.com> References: <53AAE839.8050105@us.ibm.com> <53B4300C.7040401@us.ibm.com> <53B43340.6020508@oracle.com> Message-ID: Hi Maynard, I really apologize that I've somehow missed your first message. ppc-aix-port-dev was the right list to post to. I'll analyze this problem right away and let you know why we post these zero-code-size events. Regards, Volker PS: really great to see that somebody is working on oprofile/OpenJDK integration! On Wed, Jul 2, 2014 at 6:28 PM, Daniel D. Daugherty wrote: > Adding the Serviceability team to the thread since JVM/TI is owned > by them... > > Dan > > > > On 7/2/14 10:15 AM, Maynard Johnson wrote: >> >> Cross-posting to see if Hotspot developers can help. >> >> -Maynard >> >> >> -------- Original Message -------- >> Subject: PowerPC issue: Some JVMTI dynamic code generated events have code >> size of zero >> Date: Wed, 25 Jun 2014 10:18:17 -0500 >> From: Maynard Johnson >> To: ppc-aix-port-dev at openjdk.java.net >> >> Hello, PowerPC OpenJDK folks, >> I am just now starting to get involved in the OpenJDK project. My goal is >> to ensure that the standard serviceability tools and tooling (jdb, JVMTI, >> jmap, etc.) work correctly on the PowerLinux platform. I selected JVMTI to >> start with since I have some experience from a client perspective with the >> JVMTI API. An OSS profiling tool for which I am the maintainer (oprofile) >> provides an agent library that implements the JVMTI API. Using this agent >> library to profile Java apps on my Intel-based laptop with OpenJDK (using >> various versions, up to current jdk9-dev) works fine. But the same >> profiling scenario attempted on my PowerLinux box (POWER7/Fedora 20) fails >> miserably.
>> >> The oprofile agent library registers for callbacks for CompiledMethodLoad, >> CompiledMethodUnload, and DynamicCodeGenerated. In the callback functions, >> it writes information about the JVMTI event to a file. After profiling >> completes, oprofile's post-processing phase involves interpreting the >> information from the agent library's output file and generating an ELF file >> to represent the JITed code. When I profile an OpenJDK app on my Power >> system, the post-processing phase fails while trying to resolve overlapping >> symbols. The failure is due to the fact that it is unexpectedly finding >> symbols with code size of zero overlapping at the starting address of some >> other symbol with non-zero code size. The symbols in question here are from >> DynamicCodeGenerated events. >> >> Are these "code size=0" events valid? If so, I can fix the oprofile code >> to handle them. If they're not valid, then below is some debug information >> I've collected so far. >> >> ---------------------------- >> >> I instrumented JvmtiExport::post_dynamic_code_generated_internal (in >> hotspot/src/share/vm/prims/jvmtiExport.cpp) to print a debug line when a >> symbol with code size of zero was detected and then ran the following >> command: >> >> java >> -agentpath:/jvm/openjdk-1.9.0-internal/demo/jvmti/CodeLoadInfo/lib/libCodeLoadInfo.so >> -version >> >> The debug output from my instrumentation was as follows: >> >> Code size is ZERO!! Dynamic code generated event sent for >> flush_icache_stub; code begin: 0x3fff68000080; code end: 0x3fff68000080 >> Code size is ZERO!! Dynamic code generated event sent for >> throw_exception; code begin: 0x3fff68000a90; code end: 0x3fff68000a90 >> Code size is ZERO!! Dynamic code generated event sent for >> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >> Code size is ZERO!! Dynamic code generated event sent for >> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >> Code size is ZERO!! 
>> Dynamic code generated event sent for >> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >> Code size is ZERO!! Dynamic code generated event sent for verify_oop; >> code begin: 0x3fff6801665c; code end: 0x3fff6801665c >> openjdk version "1.9.0-internal" >> OpenJDK Runtime Environment (build >> 1.9.0-internal-mpj_2014_06_18_09_55-b00) >> OpenJDK 64-Bit Server VM (build >> 1.9.0-internal-mpj_2014_06_18_09_55-b00, mixed mode) >> >> >> I don't have access to an AIX system to know if the same issue would be >> seen there. Let me know if there's any other information I can provide. >> >> Thanks for the help. >> >> -Maynard >> >> >> > From volker.simonis at gmail.com Wed Jul 2 18:21:44 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 2 Jul 2014 20:21:44 +0200 Subject: Fwd: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero In-Reply-To: References: <53AAE839.8050105@us.ibm.com> <53B4300C.7040401@us.ibm.com> <53B43340.6020508@oracle.com> Message-ID: After a quick look I can say that at least for the "flush_icache_stub" and "verify_oop" cases we indeed generate no code. Other platforms, like x86 for example, generate code for instruction cache flushing. The starting address of this code is saved in a function pointer and called if necessary. On PPC64 we just save the address of a normal C function in this function pointer and implement the cache flush with the help of inline assembler in that C function. However, this saving of the C function address in the corresponding function pointer is still done in a helper method which triggers the creation of the JvmtiExport::post_dynamic_code_generated_internal event - but with zero size in that case. I agree that it is questionable if we really need to post these events, although they didn't hurt until now.
Maybe we can remove them - please let me think one more night about it:) Regards, Volker On Wed, Jul 2, 2014 at 7:38 PM, Volker Simonis wrote: > Hi Maynard, > > I really apologize that I've somehow missed your first message. > ppc-aix-port-dev was the right list to post to. > > I'll analyze this problem instantly and let you know why we post this > zero-code size events. > > Regards, > Volker > > PS: really great to see that somebody is working on oprofile/OpenJDK > integration! > > > On Wed, Jul 2, 2014 at 6:28 PM, Daniel D. Daugherty > wrote: >> Adding the Serviceability team to the thread since JVM/TI is owned >> by them... >> >> Dan >> >> >> >> On 7/2/14 10:15 AM, Maynard Johnson wrote: >>> >>> Cross-posting to see if Hotspot developers can help. >>> >>> -Maynard >>> >>> >>> -------- Original Message -------- >>> Subject: PowerPC issue: Some JVMTI dynamic code generated events have code >>> size of zero >>> Date: Wed, 25 Jun 2014 10:18:17 -0500 >>> From: Maynard Johnson >>> To: ppc-aix-port-dev at openjdk.java.net >>> >>> Hello, PowerPC OpenJDK folks, >>> I am just now starting to get involved in the OpenJDK project. My goal is >>> to ensure that the standard serviceability tools and tooling (jdb, JVMTI, >>> jmap, etc.) work correctly on the PowerLinux platform. I selected JVMTI to >>> start with since I have some experience from a client perspective with the >>> JVMTI API. An OSS profiling tool for which I am the maintainer (oprofile) >>> provides an agent library that implements the JVMTI API. Using this agent >>> library to profile Java apps on my Intel-based laptop with OpenJDK (using >>> various versions, up to current jdk9-dev) works fine. But the same >>> profiling scenario attempted on my PowerLinux box (POWER7/Fedora 20) fails >>> miserably. >>> >>> The oprofile agent library registers for callbacks for CompiledMethodLoad, >>> CompiledMethodUnload, and DynamicCodeGenerated. 
In the callback functions, >>> it writes information about the JVMTI event to a file. After profiling >>> completes, oprofile's post-processing phase involves interpreting the >>> information from the agent library's output file and generating an ELF file >>> to represent the JITed code. When I profile an OpenJDK app on my Power >>> system, the post-processing phase fails while trying to resolve overlapping >>> symbols. The failure is due to the fact that it is unexpectedly finding >>> symbols with code size of zero overlapping at the starting address of some >>> other symbol with non-zero code size. The symbols in question here are from >>> DynamicCodeGenerated events. >>> >>> Are these "code size=0" events valid? If so, I can fix the oprofile code >>> to handle them. If they're not valid, then below is some debug information >>> I've collected so far. >>> >>> ---------------------------- >>> >>> I instrumented JvmtiExport::post_dynamic_code_generated_internal (in >>> hotspot/src/share/vm/prims/jvmtiExport.cpp) to print a debug line when a >>> symbol with code size of zero was detected and then ran the following >>> command: >>> >>> java >>> -agentpath:/jvm/openjdk-1.9.0-internal/demo/jvmti/CodeLoadInfo/lib/libCodeLoadInfo.so >>> -version >>> >>> The debug output from my instrumentation was as follows: >>> >>> Code size is ZERO!! Dynamic code generated event sent for >>> flush_icache_stub; code begin: 0x3fff68000080; code end: 0x3fff68000080 >>> Code size is ZERO!! Dynamic code generated event sent for >>> throw_exception; code begin: 0x3fff68000a90; code end: 0x3fff68000a90 >>> Code size is ZERO!! Dynamic code generated event sent for >>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>> Code size is ZERO!! Dynamic code generated event sent for >>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>> Code size is ZERO!! 
Dynamic code generated event sent for >>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>> Code size is ZERO!! Dynamic code generated event sent for verify_oop; >>> code begin: 0x3fff6801665c; code end: 0x3fff6801665c >>> openjdk version "1.9.0-internal" >>> OpenJDK Runtime Environment (build >>> 1.9.0-internal-mpj_2014_06_18_09_55-b00) >>> OpenJDK 64-Bit Server VM (build >>> 1.9.0-internal-mpj_2014_06_18_09_55-b00, mixed mode) >>> >>> >>> I don't have access to an AIX system to know if the same issue would be >>> seen there. Let me know if there's any other information I can provide. >>> >>> Thanks for the help. >>> >>> -Maynard >>> >>> >>> >> From volker.simonis at gmail.com Wed Jul 2 18:27:51 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 2 Jul 2014 20:27:51 +0200 Subject: RFR (XXS) [URGENT]: 8048232: Fix for 8046471 breaks PPC64 build In-Reply-To: References: <53AC6DAA.2010807@oracle.com> <53B1FCB2.4050606@oracle.com> Message-ID: Hi Daniel, I saw that you've sponsored 8046471 which unfortunately broke our PPC64 build. Could you please be so kind to also review and sponsor this tiny little change which fixes the problems on PPC64. Thank you and best regards, Volker On Tue, Jul 1, 2014 at 2:33 PM, Volker Simonis wrote: > Hi Mikael, > > thanks for reviewing at the change. > > Can I please have one more reviewer/sponsor for this tiny change? > > Thanks, > Volker > > > On Tue, Jul 1, 2014 at 2:11 AM, Mikael Vidstedt > wrote: >> >> Looks good. >> >> Cheers, >> Mikael >> >> >> On 2014-06-30 07:28, Volker Simonis wrote: >>> >>> Can somebody please review and push this small build change to fix our >>> ppc64 build errors. >>> >>> Thanks, >>> Volker >>> >>> On Fri, Jun 27, 2014 at 5:48 PM, Volker Simonis >>> wrote: >>>> >>>> On Thu, Jun 26, 2014 at 10:59 PM, Volker Simonis >>>> wrote: >>>>> >>>>> >>>>> On Thursday, June 26, 2014, Mikael Vidstedt >>>>> wrote: >>>>>> >>>>>> >>>>>> This will work for top level builds. 
For Hotspot-only builds ARCH will >>>>>> (still) be the value of uname -m, so if you want to support >>>>>> Hotspot-only >>>>>> builds you'll probably want to do the "ifneq (,$(findstring $(ARCH), >>>>>> ppc))" >>>>>> trick to catch both "ppc" (which is what a top level build will use) >>>>>> and >>>>>> "ppc64" (for Hotspot-only). >>>>>> >>>>> Hi Mikael, >>>>> >>>>> yes you're right. >>>> >>>> I have to correct myself - you're nearly right:) >>>> >>>> In the term "$(findstring $(ARCH), ppc)" '$ARCH' is the needle and >>>> 'ppc is the stack, so it won't catch 'ppc64' either. I could write >>>> "$(findstring ppc, $(ARCH))" which would catch both, 'ppc' and 'ppc64' >>>> but I decided to use the slightly more verbose "$(findstring $(ARCH), >>>> ppc ppc64)" because it seemed clearer to me. I also added a comment to >>>> explain the problematic of the different ARCH values for top-level and >>>> HotSpot-only builds. Once we have the new HS build, this can hopefully >>>> all go away. >>>> >>>> By, the way, I also had to apply this change to your ppc-modifications >>>> in make/linux/makefiles/defs.make. And I think that the same reasoning >>>> may also apply to "$(findstring $(ARCH), sparc)" which won't catch >>>> 'sparc64' any more after your change but I have no Linux/SPARC box to >>>> test this. You may change it accordingly at your discretion. >>>> >>>> So here's the new webrev: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ >>>> >>>> Please review and sponsor:) >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>>> I only tested a complete make but I indeed want to support >>>>> HotSpot only makes as well. I'll change it as requested although I won't >>>>> have chance to do that before tomorrow morning (European time). >>>>> >>>>> Thanks you and best regards, >>>>> Volker >>>>> >>>>>> Sorry for breaking it. >>>>>> >>>>>> Cheers, >>>>>> Mikael >>>>>> >>>>>> PS. We so need to clean up these makefiles... 
>>>>>> >>>>>> On 2014-06-26 07:25, Volker Simonis wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> could somebody please review and push the following tiny change: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232/ >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8048232 >>>>>>> >>>>>>> It fixes the build on Linux/PPC64 after "8046471 Use >>>>>>> OPENJDK_TARGET_CPU_ARCH instead of legacy value for hotspot ARCH". >>>>>>> >>>>>>> Before 8046471, the top-level make passed ARCH=ppc64 to the HotSpot >>>>>>> make. After 8046471, it now passes ARCH=ppc. But there was one place >>>>>>> in make/linux/Makefile which checked for ARCH=ppc64 in order to >>>>>>> disable the TIERED build. This place has to be adapted to handle the >>>>>>> new ARCH value. >>>>>>> >>>>>>> Please push this right to http://hg.openjdk.java.net/jdk9/hs/hotspot >>>>>>> in order to get it into http://hg.openjdk.java.net/jdk9/dev/hotspot >>>>>>> together with 8046471. >>>>>>> >>>>>>> Note: this change depends on 8046471 in the hotspot AND in the >>>>>>> top-level directory! >>>>>>> >>>>>>> Thank you and best regards, >>>>>>> Volker >>>>>> >>>>>> >> From daniel.daugherty at oracle.com Wed Jul 2 20:46:53 2014 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 02 Jul 2014 14:46:53 -0600 Subject: RFR (XXS) [URGENT]: 8048232: Fix for 8046471 breaks PPC64 build In-Reply-To: References: <53AC6DAA.2010807@oracle.com> <53B1FCB2.4050606@oracle.com> Message-ID: <53B46FBD.6020905@oracle.com> Hi Volker, Yes, I can sponsor this change also. > http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ make/linux/Makefile No comments. make/linux/makefiles/defs.make No comments. Thumbs up! I also see this below: > Please push this right to http://hg.openjdk.java.net/jdk9/hs/hotspot > in order to get it into http://hg.openjdk.java.net/jdk9/dev/hotspot > together with 8046471. 
However, I don't see an approval from Alejandro on this e-mail thread nor is it possible to catch up to the fix for 8046471 since it was included in the 2014-06-27 Main_Baseline snapshot that should get pushed to JDK9-dev soon. My current plan is to push the fix to RT_Baseline and follow the normal process. Dan On 7/2/14 12:27 PM, Volker Simonis wrote: > Hi Daniel, > > I saw that you've sponsored 8046471 which unfortunately broke our PPC64 build. > > Could you please be so kind to also review and sponsor this tiny > little change which fixes the problems on PPC64. > > Thank you and best regards, > Volker > > > On Tue, Jul 1, 2014 at 2:33 PM, Volker Simonis wrote: >> Hi Mikael, >> >> thanks for reviewing at the change. >> >> Can I please have one more reviewer/sponsor for this tiny change? >> >> Thanks, >> Volker >> >> >> On Tue, Jul 1, 2014 at 2:11 AM, Mikael Vidstedt >> wrote: >>> Looks good. >>> >>> Cheers, >>> Mikael >>> >>> >>> On 2014-06-30 07:28, Volker Simonis wrote: >>>> Can somebody please review and push this small build change to fix our >>>> ppc64 build errors. >>>> >>>> Thanks, >>>> Volker >>>> >>>> On Fri, Jun 27, 2014 at 5:48 PM, Volker Simonis >>>> wrote: >>>>> On Thu, Jun 26, 2014 at 10:59 PM, Volker Simonis >>>>> wrote: >>>>>> >>>>>> On Thursday, June 26, 2014, Mikael Vidstedt >>>>>> wrote: >>>>>>> >>>>>>> This will work for top level builds. For Hotspot-only builds ARCH will >>>>>>> (still) be the value of uname -m, so if you want to support >>>>>>> Hotspot-only >>>>>>> builds you'll probably want to do the "ifneq (,$(findstring $(ARCH), >>>>>>> ppc))" >>>>>>> trick to catch both "ppc" (which is what a top level build will use) >>>>>>> and >>>>>>> "ppc64" (for Hotspot-only). >>>>>>> >>>>>> Hi Mikael, >>>>>> >>>>>> yes you're right. >>>>> I have to correct myself - you're nearly right:) >>>>> >>>>> In the term "$(findstring $(ARCH), ppc)" '$ARCH' is the needle and >>>>> 'ppc is the stack, so it won't catch 'ppc64' either. 
I could write >>>>> "$(findstring ppc, $(ARCH))" which would catch both, 'ppc' and 'ppc64' >>>>> but I decided to use the slightly more verbose "$(findstring $(ARCH), >>>>> ppc ppc64)" because it seemed clearer to me. I also added a comment to >>>>> explain the problematic of the different ARCH values for top-level and >>>>> HotSpot-only builds. Once we have the new HS build, this can hopefully >>>>> all go away. >>>>> >>>>> By, the way, I also had to apply this change to your ppc-modifications >>>>> in make/linux/makefiles/defs.make. And I think that the same reasoning >>>>> may also apply to "$(findstring $(ARCH), sparc)" which won't catch >>>>> 'sparc64' any more after your change but I have no Linux/SPARC box to >>>>> test this. You may change it accordingly at your discretion. >>>>> >>>>> So here's the new webrev: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ >>>>> >>>>> Please review and sponsor:) >>>>> >>>>> Thank you and best regards, >>>>> Volker >>>>> >>>>>> I only tested a complete make but I indeed want to support >>>>>> HotSpot only makes as well. I'll change it as requested although I won't >>>>>> have chance to do that before tomorrow morning (European time). >>>>>> >>>>>> Thanks you and best regards, >>>>>> Volker >>>>>> >>>>>>> Sorry for breaking it. >>>>>>> >>>>>>> Cheers, >>>>>>> Mikael >>>>>>> >>>>>>> PS. We so need to clean up these makefiles... >>>>>>> >>>>>>> On 2014-06-26 07:25, Volker Simonis wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> could somebody please review and push the following tiny change: >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232/ >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8048232 >>>>>>>> >>>>>>>> It fixes the build on Linux/PPC64 after "8046471 Use >>>>>>>> OPENJDK_TARGET_CPU_ARCH instead of legacy value for hotspot ARCH". >>>>>>>> >>>>>>>> Before 8046471, the top-level make passed ARCH=ppc64 to the HotSpot >>>>>>>> make. After 8046471, it now passes ARCH=ppc. 
But there was one place >>>>>>>> in make/linux/Makefile which checked for ARCH=ppc64 in order to >>>>>>>> disable the TIERED build. This place has to be adapted to handle the >>>>>>>> new ARCH value. >>>>>>>> >>>>>>>> Please push this right to http://hg.openjdk.java.net/jdk9/hs/hotspot >>>>>>>> in order to get it into http://hg.openjdk.java.net/jdk9/dev/hotspot >>>>>>>> together with 8046471. >>>>>>>> >>>>>>>> Note: this change depends on 8046471 in the hotspot AND in the >>>>>>>> top-level directory! >>>>>>>> >>>>>>>> Thank you and best regards, >>>>>>>> Volker >>>>>>> From daniel.daugherty at oracle.com Wed Jul 2 20:56:29 2014 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 02 Jul 2014 14:56:29 -0600 Subject: RFR (XXS) [URGENT]: 8048232: Fix for 8046471 breaks PPC64 build In-Reply-To: <53B46FBD.6020905@oracle.com> References: <53AC6DAA.2010807@oracle.com> <53B1FCB2.4050606@oracle.com> <53B46FBD.6020905@oracle.com> Message-ID: <53B471FD.3000906@oracle.com> Volker, I also updated the copyright years so here's the changeset info: $ hg log -v -r tip changeset: 6665:9035762a846c tag: tip user: simonis date: Wed Jul 02 13:50:16 2014 -0700 files: make/linux/Makefile make/linux/makefiles/defs.make description: 8048232: Fix for 8046471 breaks PPC64 build Reviewed-by: mikael, dcubed Dan On 7/2/14 2:46 PM, Daniel D. Daugherty wrote: > Hi Volker, > > Yes, I can sponsor this change also. > > > http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ > > make/linux/Makefile > No comments. > > make/linux/makefiles/defs.make > No comments. > > Thumbs up! > > I also see this below: > > > Please push this right to http://hg.openjdk.java.net/jdk9/hs/hotspot > > in order to get it into http://hg.openjdk.java.net/jdk9/dev/hotspot > > together with 8046471. 
> > However, I don't see an approval from Alejandro on this e-mail thread > nor is it possible to catch up to the fix for 8046471 since it was > included in the 2014-06-27 Main_Baseline snapshot that should get > pushed to JDK9-dev soon. > > My current plan is to push the fix to RT_Baseline and follow the > normal process. > > Dan > > > On 7/2/14 12:27 PM, Volker Simonis wrote: >> Hi Daniel, >> >> I saw that you've sponsored 8046471 which unfortunately broke our >> PPC64 build. >> >> Could you please be so kind to also review and sponsor this tiny >> little change which fixes the problems on PPC64. >> >> Thank you and best regards, >> Volker >> >> >> On Tue, Jul 1, 2014 at 2:33 PM, Volker Simonis >> wrote: >>> Hi Mikael, >>> >>> thanks for reviewing at the change. >>> >>> Can I please have one more reviewer/sponsor for this tiny change? >>> >>> Thanks, >>> Volker >>> >>> >>> On Tue, Jul 1, 2014 at 2:11 AM, Mikael Vidstedt >>> wrote: >>>> Looks good. >>>> >>>> Cheers, >>>> Mikael >>>> >>>> >>>> On 2014-06-30 07:28, Volker Simonis wrote: >>>>> Can somebody please review and push this small build change to fix >>>>> our >>>>> ppc64 build errors. >>>>> >>>>> Thanks, >>>>> Volker >>>>> >>>>> On Fri, Jun 27, 2014 at 5:48 PM, Volker Simonis >>>>> wrote: >>>>>> On Thu, Jun 26, 2014 at 10:59 PM, Volker Simonis >>>>>> wrote: >>>>>>> >>>>>>> On Thursday, June 26, 2014, Mikael Vidstedt >>>>>>> >>>>>>> wrote: >>>>>>>> >>>>>>>> This will work for top level builds. For Hotspot-only builds >>>>>>>> ARCH will >>>>>>>> (still) be the value of uname -m, so if you want to support >>>>>>>> Hotspot-only >>>>>>>> builds you'll probably want to do the "ifneq (,$(findstring >>>>>>>> $(ARCH), >>>>>>>> ppc))" >>>>>>>> trick to catch both "ppc" (which is what a top level build will >>>>>>>> use) >>>>>>>> and >>>>>>>> "ppc64" (for Hotspot-only). >>>>>>>> >>>>>>> Hi Mikael, >>>>>>> >>>>>>> yes you're right. 
>>>>>> I have to correct myself - you're nearly right:) >>>>>> >>>>>> In the term "$(findstring $(ARCH), ppc)" '$ARCH' is the needle and >>>>>> 'ppc is the stack, so it won't catch 'ppc64' either. I could write >>>>>> "$(findstring ppc, $(ARCH))" which would catch both, 'ppc' and >>>>>> 'ppc64' >>>>>> but I decided to use the slightly more verbose "$(findstring >>>>>> $(ARCH), >>>>>> ppc ppc64)" because it seemed clearer to me. I also added a >>>>>> comment to >>>>>> explain the problematic of the different ARCH values for >>>>>> top-level and >>>>>> HotSpot-only builds. Once we have the new HS build, this can >>>>>> hopefully >>>>>> all go away. >>>>>> >>>>>> By, the way, I also had to apply this change to your >>>>>> ppc-modifications >>>>>> in make/linux/makefiles/defs.make. And I think that the same >>>>>> reasoning >>>>>> may also apply to "$(findstring $(ARCH), sparc)" which won't catch >>>>>> 'sparc64' any more after your change but I have no Linux/SPARC >>>>>> box to >>>>>> test this. You may change it accordingly at your discretion. >>>>>> >>>>>> So here's the new webrev: >>>>>> >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ >>>>>> >>>>>> Please review and sponsor:) >>>>>> >>>>>> Thank you and best regards, >>>>>> Volker >>>>>> >>>>>>> I only tested a complete make but I indeed want to support >>>>>>> HotSpot only makes as well. I'll change it as requested although >>>>>>> I won't >>>>>>> have chance to do that before tomorrow morning (European time). >>>>>>> >>>>>>> Thanks you and best regards, >>>>>>> Volker >>>>>>> >>>>>>>> Sorry for breaking it. >>>>>>>> >>>>>>>> Cheers, >>>>>>>> Mikael >>>>>>>> >>>>>>>> PS. We so need to clean up these makefiles... 
>>>>>>>> >>>>>>>> On 2014-06-26 07:25, Volker Simonis wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> could somebody please review and push the following tiny change: >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232/ >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8048232 >>>>>>>>> >>>>>>>>> It fixes the build on Linux/PPC64 after "8046471 Use >>>>>>>>> OPENJDK_TARGET_CPU_ARCH instead of legacy value for hotspot >>>>>>>>> ARCH". >>>>>>>>> >>>>>>>>> Before 8046471, the top-level make passed ARCH=ppc64 to the >>>>>>>>> HotSpot >>>>>>>>> make. After 8046471, it now passes ARCH=ppc. But there was one >>>>>>>>> place >>>>>>>>> in make/linux/Makefile which checked for ARCH=ppc64 in order to >>>>>>>>> disable the TIERED build. This place has to be adapted to >>>>>>>>> handle the >>>>>>>>> new ARCH value. >>>>>>>>> >>>>>>>>> Please push this right to >>>>>>>>> http://hg.openjdk.java.net/jdk9/hs/hotspot >>>>>>>>> in order to get it into >>>>>>>>> http://hg.openjdk.java.net/jdk9/dev/hotspot >>>>>>>>> together with 8046471. >>>>>>>>> >>>>>>>>> Note: this change depends on 8046471 in the hotspot AND in the >>>>>>>>> top-level directory! >>>>>>>>> >>>>>>>>> Thank you and best regards, >>>>>>>>> Volker >>>>>>>> > > > From alejandro.murillo at oracle.com Wed Jul 2 22:50:13 2014 From: alejandro.murillo at oracle.com (Alejandro E Murillo) Date: Wed, 02 Jul 2014 16:50:13 -0600 Subject: RFR (XXS) [URGENT]: 8048232: Fix for 8046471 breaks PPC64 build In-Reply-To: <53B46FBD.6020905@oracle.com> References: <53AC6DAA.2010807@oracle.com> <53B1FCB2.4050606@oracle.com> <53B46FBD.6020905@oracle.com> Message-ID: <53B48CA5.30002@oracle.com> On 7/2/2014 2:46 PM, Daniel D. Daugherty wrote: > Hi Volker, > > Yes, I can sponsor this change also. > > > http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ > > make/linux/Makefile > No comments. > > make/linux/makefiles/defs.make > No comments. > > Thumbs up! 
>
> I also see this below:
>
> > Please push this right to http://hg.openjdk.java.net/jdk9/hs/hotspot
> > in order to get it into http://hg.openjdk.java.net/jdk9/dev/hotspot
> > together with 8046471.
>
> However, I don't see an approval from Alejandro on this e-mail thread

I missed that. Please do not push changes straight to jdk9/hs/hotspot or jdk9/dev/hotspot. We recently had problems from doing so. Even small changes need to go through nightly testing, unless it is an emergency fix; otherwise we risk introducing further problems.

> nor is it possible to catch up to the fix for 8046471 since it was
> included in the 2014-06-27 Main_Baseline snapshot that should get
> pushed to JDK9-dev soon.

There was a problem preventing the integration of last week's snapshot into jdk9/dev, so 8046471 should be in jdk9/dev next week.

Alejandro

> My current plan is to push the fix to RT_Baseline and follow the
> normal process.
>>>>> >>>>> Thanks, >>>>> Volker >>>>> >>>>> On Fri, Jun 27, 2014 at 5:48 PM, Volker Simonis >>>>> wrote: >>>>>> On Thu, Jun 26, 2014 at 10:59 PM, Volker Simonis >>>>>> wrote: >>>>>>> >>>>>>> On Thursday, June 26, 2014, Mikael Vidstedt >>>>>>> >>>>>>> wrote: >>>>>>>> >>>>>>>> This will work for top level builds. For Hotspot-only builds >>>>>>>> ARCH will >>>>>>>> (still) be the value of uname -m, so if you want to support >>>>>>>> Hotspot-only >>>>>>>> builds you'll probably want to do the "ifneq (,$(findstring >>>>>>>> $(ARCH), >>>>>>>> ppc))" >>>>>>>> trick to catch both "ppc" (which is what a top level build will >>>>>>>> use) >>>>>>>> and >>>>>>>> "ppc64" (for Hotspot-only). >>>>>>>> >>>>>>> Hi Mikael, >>>>>>> >>>>>>> yes you're right. >>>>>> I have to correct myself - you're nearly right:) >>>>>> >>>>>> In the term "$(findstring $(ARCH), ppc)" '$ARCH' is the needle and >>>>>> 'ppc is the stack, so it won't catch 'ppc64' either. I could write >>>>>> "$(findstring ppc, $(ARCH))" which would catch both, 'ppc' and >>>>>> 'ppc64' >>>>>> but I decided to use the slightly more verbose "$(findstring >>>>>> $(ARCH), >>>>>> ppc ppc64)" because it seemed clearer to me. I also added a >>>>>> comment to >>>>>> explain the problematic of the different ARCH values for >>>>>> top-level and >>>>>> HotSpot-only builds. Once we have the new HS build, this can >>>>>> hopefully >>>>>> all go away. >>>>>> >>>>>> By, the way, I also had to apply this change to your >>>>>> ppc-modifications >>>>>> in make/linux/makefiles/defs.make. And I think that the same >>>>>> reasoning >>>>>> may also apply to "$(findstring $(ARCH), sparc)" which won't catch >>>>>> 'sparc64' any more after your change but I have no Linux/SPARC >>>>>> box to >>>>>> test this. You may change it accordingly at your discretion. 
>>>>>> >>>>>> So here's the new webrev: >>>>>> >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ >>>>>> >>>>>> Please review and sponsor:) >>>>>> >>>>>> Thank you and best regards, >>>>>> Volker >>>>>> >>>>>>> I only tested a complete make but I indeed want to support >>>>>>> HotSpot only makes as well. I'll change it as requested although >>>>>>> I won't >>>>>>> have chance to do that before tomorrow morning (European time). >>>>>>> >>>>>>> Thanks you and best regards, >>>>>>> Volker >>>>>>> >>>>>>>> Sorry for breaking it. >>>>>>>> >>>>>>>> Cheers, >>>>>>>> Mikael >>>>>>>> >>>>>>>> PS. We so need to clean up these makefiles... >>>>>>>> >>>>>>>> On 2014-06-26 07:25, Volker Simonis wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> could somebody please review and push the following tiny change: >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232/ >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8048232 >>>>>>>>> >>>>>>>>> It fixes the build on Linux/PPC64 after "8046471 Use >>>>>>>>> OPENJDK_TARGET_CPU_ARCH instead of legacy value for hotspot >>>>>>>>> ARCH". >>>>>>>>> >>>>>>>>> Before 8046471, the top-level make passed ARCH=ppc64 to the >>>>>>>>> HotSpot >>>>>>>>> make. After 8046471, it now passes ARCH=ppc. But there was one >>>>>>>>> place >>>>>>>>> in make/linux/Makefile which checked for ARCH=ppc64 in order to >>>>>>>>> disable the TIERED build. This place has to be adapted to >>>>>>>>> handle the >>>>>>>>> new ARCH value. >>>>>>>>> >>>>>>>>> Please push this right to >>>>>>>>> http://hg.openjdk.java.net/jdk9/hs/hotspot >>>>>>>>> in order to get it into >>>>>>>>> http://hg.openjdk.java.net/jdk9/dev/hotspot >>>>>>>>> together with 8046471. >>>>>>>>> >>>>>>>>> Note: this change depends on 8046471 in the hotspot AND in the >>>>>>>>> top-level directory! 
>>>>>>>>> >>>>>>>>> Thank you and best regards, >>>>>>>>> Volker >>>>>>>> > -- Alejandro From daniel.daugherty at oracle.com Wed Jul 2 23:17:33 2014 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 02 Jul 2014 17:17:33 -0600 Subject: RFR (XXS) [URGENT]: 8048232: Fix for 8046471 breaks PPC64 build In-Reply-To: <53B48CA5.30002@oracle.com> References: <53AC6DAA.2010807@oracle.com> <53B1FCB2.4050606@oracle.com> <53B46FBD.6020905@oracle.com> <53B48CA5.30002@oracle.com> Message-ID: <53B4930D.2020008@oracle.com> Alejandro, I pushed to RT_Baseline. Didn't want to by-pass the usual process without a darn good reason... Dan On 7/2/14 4:50 PM, Alejandro E Murillo wrote: > > On 7/2/2014 2:46 PM, Daniel D. Daugherty wrote: >> Hi Volker, >> >> Yes, I can sponsor this change also. >> >> > http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ >> >> make/linux/Makefile >> No comments. >> >> make/linux/makefiles/defs.make >> No comments. >> >> Thumbs up! >> >> I also see this below: >> >> > Please push this right to http://hg.openjdk.java.net/jdk9/hs/hotspot >> > in order to get it into http://hg.openjdk.java.net/jdk9/dev/hotspot >> > together with 8046471. >> >> However, I don't see an approval from Alejandro on this e-mail thread > I missed that. Please do not push changes straight to jdk9/hs/hotspot > or jdk9/dev/hotspot. We recently had problems by doing so. > Even small changes need to go through nightly, unless it is an > emergency fix. > We risk introducing further problems >> nor is it possible to catch up to the fix for 8046471 since it was >> included in the 2014-06-27 Main_Baseline snapshot that should get >> pushed to JDK9-dev soon. > There was a problem preventing the integration of last week snapshot > into jdk9/dev, > so 8046471 should be in jdk9/dev next week > > Alejandro >> >> My current plan is to push the fix to RT_Baseline and follow the >> normal process. 
>> >> Dan >> >> >> On 7/2/14 12:27 PM, Volker Simonis wrote: >>> Hi Daniel, >>> >>> I saw that you've sponsored 8046471 which unfortunately broke our >>> PPC64 build. >>> >>> Could you please be so kind to also review and sponsor this tiny >>> little change which fixes the problems on PPC64. >>> >>> Thank you and best regards, >>> Volker >>> >>> >>> On Tue, Jul 1, 2014 at 2:33 PM, Volker Simonis >>> wrote: >>>> Hi Mikael, >>>> >>>> thanks for reviewing at the change. >>>> >>>> Can I please have one more reviewer/sponsor for this tiny change? >>>> >>>> Thanks, >>>> Volker >>>> >>>> >>>> On Tue, Jul 1, 2014 at 2:11 AM, Mikael Vidstedt >>>> wrote: >>>>> Looks good. >>>>> >>>>> Cheers, >>>>> Mikael >>>>> >>>>> >>>>> On 2014-06-30 07:28, Volker Simonis wrote: >>>>>> Can somebody please review and push this small build change to >>>>>> fix our >>>>>> ppc64 build errors. >>>>>> >>>>>> Thanks, >>>>>> Volker >>>>>> >>>>>> On Fri, Jun 27, 2014 at 5:48 PM, Volker Simonis >>>>>> wrote: >>>>>>> On Thu, Jun 26, 2014 at 10:59 PM, Volker Simonis >>>>>>> wrote: >>>>>>>> >>>>>>>> On Thursday, June 26, 2014, Mikael Vidstedt >>>>>>>> >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> This will work for top level builds. For Hotspot-only builds >>>>>>>>> ARCH will >>>>>>>>> (still) be the value of uname -m, so if you want to support >>>>>>>>> Hotspot-only >>>>>>>>> builds you'll probably want to do the "ifneq (,$(findstring >>>>>>>>> $(ARCH), >>>>>>>>> ppc))" >>>>>>>>> trick to catch both "ppc" (which is what a top level build >>>>>>>>> will use) >>>>>>>>> and >>>>>>>>> "ppc64" (for Hotspot-only). >>>>>>>>> >>>>>>>> Hi Mikael, >>>>>>>> >>>>>>>> yes you're right. >>>>>>> I have to correct myself - you're nearly right:) >>>>>>> >>>>>>> In the term "$(findstring $(ARCH), ppc)" '$ARCH' is the needle and >>>>>>> 'ppc is the stack, so it won't catch 'ppc64' either. 
I could write >>>>>>> "$(findstring ppc, $(ARCH))" which would catch both, 'ppc' and >>>>>>> 'ppc64' >>>>>>> but I decided to use the slightly more verbose "$(findstring >>>>>>> $(ARCH), >>>>>>> ppc ppc64)" because it seemed clearer to me. I also added a >>>>>>> comment to >>>>>>> explain the problematic of the different ARCH values for >>>>>>> top-level and >>>>>>> HotSpot-only builds. Once we have the new HS build, this can >>>>>>> hopefully >>>>>>> all go away. >>>>>>> >>>>>>> By, the way, I also had to apply this change to your >>>>>>> ppc-modifications >>>>>>> in make/linux/makefiles/defs.make. And I think that the same >>>>>>> reasoning >>>>>>> may also apply to "$(findstring $(ARCH), sparc)" which won't catch >>>>>>> 'sparc64' any more after your change but I have no Linux/SPARC >>>>>>> box to >>>>>>> test this. You may change it accordingly at your discretion. >>>>>>> >>>>>>> So here's the new webrev: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ >>>>>>> >>>>>>> Please review and sponsor:) >>>>>>> >>>>>>> Thank you and best regards, >>>>>>> Volker >>>>>>> >>>>>>>> I only tested a complete make but I indeed want to support >>>>>>>> HotSpot only makes as well. I'll change it as requested >>>>>>>> although I won't >>>>>>>> have chance to do that before tomorrow morning (European time). >>>>>>>> >>>>>>>> Thanks you and best regards, >>>>>>>> Volker >>>>>>>> >>>>>>>>> Sorry for breaking it. >>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> Mikael >>>>>>>>> >>>>>>>>> PS. We so need to clean up these makefiles... 
>>>>>>>>> >>>>>>>>> On 2014-06-26 07:25, Volker Simonis wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> could somebody please review and push the following tiny change: >>>>>>>>>> >>>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232/ >>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8048232 >>>>>>>>>> >>>>>>>>>> It fixes the build on Linux/PPC64 after "8046471 Use >>>>>>>>>> OPENJDK_TARGET_CPU_ARCH instead of legacy value for hotspot >>>>>>>>>> ARCH". >>>>>>>>>> >>>>>>>>>> Before 8046471, the top-level make passed ARCH=ppc64 to the >>>>>>>>>> HotSpot >>>>>>>>>> make. After 8046471, it now passes ARCH=ppc. But there was >>>>>>>>>> one place >>>>>>>>>> in make/linux/Makefile which checked for ARCH=ppc64 in order to >>>>>>>>>> disable the TIERED build. This place has to be adapted to >>>>>>>>>> handle the >>>>>>>>>> new ARCH value. >>>>>>>>>> >>>>>>>>>> Please push this right to >>>>>>>>>> http://hg.openjdk.java.net/jdk9/hs/hotspot >>>>>>>>>> in order to get it into >>>>>>>>>> http://hg.openjdk.java.net/jdk9/dev/hotspot >>>>>>>>>> together with 8046471. >>>>>>>>>> >>>>>>>>>> Note: this change depends on 8046471 in the hotspot AND in the >>>>>>>>>> top-level directory! >>>>>>>>>> >>>>>>>>>> Thank you and best regards, >>>>>>>>>> Volker >>>>>>>>> >> > From mikael.vidstedt at oracle.com Thu Jul 3 00:32:46 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Wed, 02 Jul 2014 17:32:46 -0700 Subject: RFR(S): 8046818: Hotspot build system looking for sdt.h in the wrong place Message-ID: <53B4A4AE.5000709@oracle.com> Please review the below fix. When using a compiler which does not use the default "/" as system root the system headers are not picked up from /usr/include. This means that looking for sdt.h in /usr/include is not correct - even if the machine in question has that header file available it may still fail at compile time if the compiler doesn't have sdt.h in its system root. 
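For illustration, a sysroot-aware check could look roughly like this (a hypothetical GNU make sketch with illustrative variable names, not the contents of the actual webrev):

```make
# Hypothetical sketch, not the actual webrev: resolve sdt.h relative to
# the compiler's system root instead of hard-coding /usr/include.
SYSROOT    := $(shell $(CXX) -print-sysroot)
SDT_H_FILE := $(SYSROOT)/usr/include/sys/sdt.h

# If -print-sysroot prints nothing (i.e. the default "/" sysroot), this
# degrades to the plain /usr/include/sys/sdt.h check.
ifneq ($(wildcard $(SDT_H_FILE)),)
  # The header is visible to the compiler, so the dtrace probes can be enabled.
  ENABLE_DTRACE_PROBES := true
endif
```

The point of such a check is that the same compiler that builds the VM decides where sdt.h is looked up, so a cross or sysrooted toolchain no longer gets a false positive from the build machine's own /usr/include.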
gcc has a useful option (-print-sysroot) which prints the system root being used. I have not found any equivalent option for clang, so this fix will unfortunately not solve this problem when using clang. When we rewrite the Hotspot build system to use the new top-level build system, this can be solved in a better way.

Bug: https://bugs.openjdk.java.net/browse/JDK-8046818
Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8046818/webrev.00/webrev/

Cheers,
Mikael

From mikael.vidstedt at oracle.com Thu Jul 3 01:08:21 2014
From: mikael.vidstedt at oracle.com (Mikael Vidstedt)
Date: Wed, 02 Jul 2014 18:08:21 -0700
Subject: RFR(S): 8049071: Add jtreg jobs to JPRT for Hotspot
Message-ID: <53B4AD05.3070702@oracle.com>

Please review this enhancement which adds the scaffolding needed to run the hotspot jtreg tests in JPRT.

Bug: https://bugs.openjdk.java.net/browse/JDK-8049071
Webrev (/): http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/top/webrev/
Webrev (hotspot/): http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/hotspot/webrev/

Summary: We want to run the hotspot regression tests on every hotspot push. This change enables this and adds four new test groups to the set of tests being run on hotspot pushes. The new test sets still need to be populated.

Narrative: The majority of the changes are in the hotspot/test/Makefile. The changes are almost entirely stolen from jdk/test/Makefile but have been massaged to support (at least) three different use cases, two of which were supported earlier:

1. Running the non-jtreg tests (servertest, clienttest and internalvmtests), also supporting the use of the "hotspot_" prefix for when the tests are invoked from the JDK top level
2. Running jtreg tests by selecting the tests to run using the TESTDIRS variable
3. Running jtreg tests by selecting the test group to run (NEW)

The third/new use case is implemented by making any target named hotspot_% *except* the ones listed in 1.
lead to the corresponding jtreg test group in TEST.groups being run. For example, running "make hotspot_gc" causes all the tests in the hotspot_gc test group in TEST.groups to be run, and so on.

I also removed the packtest targets because, as far as I can tell, they're not used anyway.

Note that the new component test groups in TEST.groups - hotspot_compiler, hotspot_gc, hotspot_runtime and hotspot_serviceability - are currently empty, or more precisely they only run a single test each. The intention is that these should be populated by the respective teams to include stable and relatively fast tests. Tests added to the groups will be run on hotspot push jobs, and will therefore block pushes if they fail.

Cheers,
Mikael

From staffan.larsen at oracle.com Thu Jul 3 06:45:54 2014
From: staffan.larsen at oracle.com (Staffan Larsen)
Date: Thu, 3 Jul 2014 08:45:54 +0200
Subject: RFR(S): 8046818: Hotspot build system looking for sdt.h in the wrong place
In-Reply-To: <53B4A4AE.5000709@oracle.com>
References: <53B4A4AE.5000709@oracle.com>
Message-ID: <92C2C6C3-1911-4AFD-AD36-B3FA46A89F10@oracle.com>

Looks good!

Thanks,
/Staffan

On 3 jul 2014, at 02:32, Mikael Vidstedt wrote:

>
> Please review the below fix.
>
> When using a compiler which does not use the default "/" as system root
> the system headers are not picked up from /usr/include. This means that
> looking for sdt.h in /usr/include is not correct - even if the machine
> in question has that header file available it may still fail at compile
> time if the compiler doesn't have sdt.h in its system root.
>
> gcc has a useful option (-print-sysroot) which prints the system root
> being used. I have not found any equivalent option for clang, so this
> fix will unfortunately not solve this problem when using clang. When we
> rewrite the Hotspot build system to use the new top level build system
> this can be solved in a better way.
> > Bug: https://bugs.openjdk.java.net/browse/JDK-8046818 > Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8046818/webrev.00/webrev/ > > Cheers, > Mikael > From goetz.lindenmaier at sap.com Thu Jul 3 07:02:17 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 3 Jul 2014 07:02:17 +0000 Subject: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes In-Reply-To: <53B29B0E.4060200@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED7FF4@DEWDFEMB12A.global.corp.sap> <53B18E18.80707@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED8471@DEWDFEMB12A.global.corp.sap> <53B29B0E.4060200@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CED8996@DEWDFEMB12A.global.corp.sap> Coleen, thanks for doing the push! Best regards, Goetz -----Original Message----- From: Coleen Phillimore [mailto:coleen.phillimore at oracle.com] Sent: Dienstag, 1. Juli 2014 13:27 To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes Okay, I'll do it. Since you have a Reviewer, all you need is another reviewer (note capitalization). Thanks! Coleen On 7/1/14, 3:29 AM, Lindenmaier, Goetz wrote: > Hi Coleen, > > thanks for the review! > I based it on gc, as Stefan pushed my atomic.inline.hpp change > into that repo. Now that change propagated to the other repos, > and this one applies nicely (I just checked hs-rt). > > So I'd appreciate if you sponsor it! But I still need a second review I guess. > > Best regards, > Goetz. > > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore > Sent: Montag, 30. Juni 2014 18:20 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR (M): 8048241: Introduce umbrella header os.inline.hpp and clean up includes > > > Goetz, > I reviewed this change and it looks great. Thank you for cleaning this > up. 
Since it's based on hs-gc repository, I think someone from the GC > group should sponsor. Otherwise, I'd be happy to. > > Thanks! > Coleen > > (this was my reply to another RFR, sorry) > > On 6/29/14, 5:00 PM, Lindenmaier, Goetz wrote: >> Hi, >> >> This change adds a new header os.inline.hpp including the os_.include.hpp >> headers. This allows to remove around 30 os dependent include cascades, some of >> them even without adding the os.inline.hpp header in that file. >> Also, os.inline.hpp is added in several files that call functions from these >> headers where it was missing so far. >> >> Some further cleanups: >> OrderAccess include in adaptiveFreeList.cpp is needed because of freeChunk.hpp. >> >> The include of os.inline.hpp in thread.inline.hpp is needed because >> Thread::current() uses thread() from ThreadLocalStorage, which again uses >> os::thread_local_storage_at which is implemented platform dependent. >> >> I moved some methods without dependencies to other .include.hpp files >> to os_windows.hpp/os_posix.hpp. This reduces the need for os.inline.hpp >> includes a lot. >> >> Please review and test this change. I please need a sponsor. >> http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.00/ >> >> I compiled and tested this without precompiled headers on linuxx86_64, >> linuxppc64, windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >> aixppc64 in opt, dbg and fastdbg versions. >> >> Thanks and best regards, >> Goetz. From volker.simonis at gmail.com Thu Jul 3 07:04:16 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 3 Jul 2014 09:04:16 +0200 Subject: RFR (XXS) [URGENT]: 8048232: Fix for 8046471 breaks PPC64 build In-Reply-To: <53B4930D.2020008@oracle.com> References: <53AC6DAA.2010807@oracle.com> <53B1FCB2.4050606@oracle.com> <53B46FBD.6020905@oracle.com> <53B48CA5.30002@oracle.com> <53B4930D.2020008@oracle.com> Message-ID: Hi Daniel, thanks a lot for pushing the change. 
Regards, Volker On Thu, Jul 3, 2014 at 1:17 AM, Daniel D. Daugherty wrote: > Alejandro, > > I pushed to RT_Baseline. Didn't want to by-pass the usual process > without a darn good reason... > > Dan > > > > On 7/2/14 4:50 PM, Alejandro E Murillo wrote: >> >> >> On 7/2/2014 2:46 PM, Daniel D. Daugherty wrote: >>> >>> Hi Volker, >>> >>> Yes, I can sponsor this change also. >>> >>> > http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ >>> >>> make/linux/Makefile >>> No comments. >>> >>> make/linux/makefiles/defs.make >>> No comments. >>> >>> Thumbs up! >>> >>> I also see this below: >>> >>> > Please push this right to http://hg.openjdk.java.net/jdk9/hs/hotspot >>> > in order to get it into http://hg.openjdk.java.net/jdk9/dev/hotspot >>> > together with 8046471. >>> >>> However, I don't see an approval from Alejandro on this e-mail thread >> >> I missed that. Please do not push changes straight to jdk9/hs/hotspot >> or jdk9/dev/hotspot. We recently had problems by doing so. >> Even small changes need to go through nightly, unless it is an emergency >> fix. >> We risk introducing further problems >>> >>> nor is it possible to catch up to the fix for 8046471 since it was >>> included in the 2014-06-27 Main_Baseline snapshot that should get >>> pushed to JDK9-dev soon. >> >> There was a problem preventing the integration of last week snapshot into >> jdk9/dev, >> so 8046471 should be in jdk9/dev next week >> >> Alejandro >>> >>> >>> My current plan is to push the fix to RT_Baseline and follow the >>> normal process. >>> >>> Dan >>> >>> >>> On 7/2/14 12:27 PM, Volker Simonis wrote: >>>> >>>> Hi Daniel, >>>> >>>> I saw that you've sponsored 8046471 which unfortunately broke our PPC64 >>>> build. >>>> >>>> Could you please be so kind to also review and sponsor this tiny >>>> little change which fixes the problems on PPC64. 
>>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> >>>> On Tue, Jul 1, 2014 at 2:33 PM, Volker Simonis >>>> wrote: >>>>> >>>>> Hi Mikael, >>>>> >>>>> thanks for reviewing at the change. >>>>> >>>>> Can I please have one more reviewer/sponsor for this tiny change? >>>>> >>>>> Thanks, >>>>> Volker >>>>> >>>>> >>>>> On Tue, Jul 1, 2014 at 2:11 AM, Mikael Vidstedt >>>>> wrote: >>>>>> >>>>>> Looks good. >>>>>> >>>>>> Cheers, >>>>>> Mikael >>>>>> >>>>>> >>>>>> On 2014-06-30 07:28, Volker Simonis wrote: >>>>>>> >>>>>>> Can somebody please review and push this small build change to fix >>>>>>> our >>>>>>> ppc64 build errors. >>>>>>> >>>>>>> Thanks, >>>>>>> Volker >>>>>>> >>>>>>> On Fri, Jun 27, 2014 at 5:48 PM, Volker Simonis >>>>>>> wrote: >>>>>>>> >>>>>>>> On Thu, Jun 26, 2014 at 10:59 PM, Volker Simonis >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> On Thursday, June 26, 2014, Mikael Vidstedt >>>>>>>>> >>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> This will work for top level builds. For Hotspot-only builds ARCH >>>>>>>>>> will >>>>>>>>>> (still) be the value of uname -m, so if you want to support >>>>>>>>>> Hotspot-only >>>>>>>>>> builds you'll probably want to do the "ifneq (,$(findstring >>>>>>>>>> $(ARCH), >>>>>>>>>> ppc))" >>>>>>>>>> trick to catch both "ppc" (which is what a top level build will >>>>>>>>>> use) >>>>>>>>>> and >>>>>>>>>> "ppc64" (for Hotspot-only). >>>>>>>>>> >>>>>>>>> Hi Mikael, >>>>>>>>> >>>>>>>>> yes you're right. >>>>>>>> >>>>>>>> I have to correct myself - you're nearly right:) >>>>>>>> >>>>>>>> In the term "$(findstring $(ARCH), ppc)" '$ARCH' is the needle and >>>>>>>> 'ppc is the stack, so it won't catch 'ppc64' either. I could write >>>>>>>> "$(findstring ppc, $(ARCH))" which would catch both, 'ppc' and >>>>>>>> 'ppc64' >>>>>>>> but I decided to use the slightly more verbose "$(findstring >>>>>>>> $(ARCH), >>>>>>>> ppc ppc64)" because it seemed clearer to me. 
I also added a comment >>>>>>>> to >>>>>>>> explain the problematic of the different ARCH values for top-level >>>>>>>> and >>>>>>>> HotSpot-only builds. Once we have the new HS build, this can >>>>>>>> hopefully >>>>>>>> all go away. >>>>>>>> >>>>>>>> By, the way, I also had to apply this change to your >>>>>>>> ppc-modifications >>>>>>>> in make/linux/makefiles/defs.make. And I think that the same >>>>>>>> reasoning >>>>>>>> may also apply to "$(findstring $(ARCH), sparc)" which won't catch >>>>>>>> 'sparc64' any more after your change but I have no Linux/SPARC box >>>>>>>> to >>>>>>>> test this. You may change it accordingly at your discretion. >>>>>>>> >>>>>>>> So here's the new webrev: >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232.v2/ >>>>>>>> >>>>>>>> Please review and sponsor:) >>>>>>>> >>>>>>>> Thank you and best regards, >>>>>>>> Volker >>>>>>>> >>>>>>>>> I only tested a complete make but I indeed want to support >>>>>>>>> HotSpot only makes as well. I'll change it as requested although I >>>>>>>>> won't >>>>>>>>> have chance to do that before tomorrow morning (European time). >>>>>>>>> >>>>>>>>> Thanks you and best regards, >>>>>>>>> Volker >>>>>>>>> >>>>>>>>>> Sorry for breaking it. >>>>>>>>>> >>>>>>>>>> Cheers, >>>>>>>>>> Mikael >>>>>>>>>> >>>>>>>>>> PS. We so need to clean up these makefiles... >>>>>>>>>> >>>>>>>>>> On 2014-06-26 07:25, Volker Simonis wrote: >>>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> could somebody please review and push the following tiny change: >>>>>>>>>>> >>>>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8048232/ >>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8048232 >>>>>>>>>>> >>>>>>>>>>> It fixes the build on Linux/PPC64 after "8046471 Use >>>>>>>>>>> OPENJDK_TARGET_CPU_ARCH instead of legacy value for hotspot >>>>>>>>>>> ARCH". >>>>>>>>>>> >>>>>>>>>>> Before 8046471, the top-level make passed ARCH=ppc64 to the >>>>>>>>>>> HotSpot >>>>>>>>>>> make. 
After 8046471, it now passes ARCH=ppc. But there was one >>>>>>>>>>> place >>>>>>>>>>> in make/linux/Makefile which checked for ARCH=ppc64 in order to >>>>>>>>>>> disable the TIERED build. This place has to be adapted to handle >>>>>>>>>>> the >>>>>>>>>>> new ARCH value. >>>>>>>>>>> >>>>>>>>>>> Please push this right to >>>>>>>>>>> http://hg.openjdk.java.net/jdk9/hs/hotspot >>>>>>>>>>> in order to get it into >>>>>>>>>>> http://hg.openjdk.java.net/jdk9/dev/hotspot >>>>>>>>>>> together with 8046471. >>>>>>>>>>> >>>>>>>>>>> Note: this change depends on 8046471 in the hotspot AND in the >>>>>>>>>>> top-level directory! >>>>>>>>>>> >>>>>>>>>>> Thank you and best regards, >>>>>>>>>>> Volker >>>>>>>>>> >>>>>>>>>> >>> >> > From stefan.karlsson at oracle.com Thu Jul 3 11:14:43 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 03 Jul 2014 13:14:43 +0200 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle In-Reply-To: <53B2BB50.1080606@oracle.com> References: <53B2BB50.1080606@oracle.com> Message-ID: <53B53B23.9040305@oracle.com> Hi again, Here's a new patch. Changes: 1) The first webrev didn't include the change to this file: http://cr.openjdk.java.net/~stefank/8048248/webrev.01/test/testlibrary/whitebox/sun/hotspot/WhiteBox.java.udiff.html 2) Fixes a bug that happens with Class Redefinition. http://cr.openjdk.java.net/~stefank/8048248/webrev.01/src/share/vm/classfile/metadataOnStackMark.cpp.udiff.html We don't want to call CodeCache::alive_nmethods_do(nmethod::mark_on_stack) unnecessarily, since it's one of the more expensive operations done during the remark pause. If Class Redefinition isn't used, we don't need to mark through the code cache, since no deallocated metadata should have made its way into an nmethod. Unfortunately, this is not true for Class Redefinition.
Class Redefinition will create new versions of Methods and ConstantPools and needs to keep the old versions alive until all references to the old version have been cleaned out from the JVM. The current patch only calls CodeCache::alive_nmethods_do(nmethod::mark_on_stack) if Class Redefinition is used. This has the effect that code using Class Redefinition will have higher remark pauses. In an earlier version of the G1 class unloading patch I parallelized and combined the nmethod mark_on_stack code with the CodeCache cleaning code, but it was removed in favor of removing the call to CodeCache::alive_nmethods_do. We might want to revive that patch, or optimize this some other way, but I would prefer to not do that in this patch. thanks, StefanK On 2014-07-01 15:44, Stefan Karlsson wrote: > Hi all, > > Please, review this patch to enable unloading of classes and other > metadata after a G1 concurrent cycle. > > http://cr.openjdk.java.net/~stefank/8048248/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8048248 > > The patch includes the following changes: > > 1) Tracing through alive Klasses and CLDs during concurrent mark, > instead of marking all of them during the initial mark pause. > 2) Making HeapRegions walkable in the presence of unparseable objects > due to their classes being unloaded. > 3) The process roots code has been changed to allow G1's combined > initial mark and scavenge. > 4) The CodeBlobClosures have been refactored to distinguish the > marking variant from the oop updating variants. > 5) Calls to the G1 pre-barrier have been added to some places, such as > the StringTable, to guard against object resurrection, similar to how > j.l.ref.Reference#get is treated with a read barrier. > 6) Parallelizing the cleaning of metadata and compiled methods during > the remark pause.
> > A number of patches to prepare for this RFE has already been pushed to > JDK 9: > > 8047362: Add a version of CompiledIC_at that doesn't create a new > RelocIterator > 8047326: Consolidate all CompiledIC::CompiledIC implementations and > move it to compiledIC.cpp > 8047323: Remove unused _copy_metadata_obj_cl in G1CopyingKeepAliveClosure > 8047373: Clean the ExceptionCache in one pass > 8046670: Make CMS metadata aware closures applicable for other collectors > 8035746: Add missing Klass::oop_is_instanceClassLoader() function > 8035648: Don't use Handle in java_lang_String::print > 8035412: Cleanup ClassLoaderData::is_alive > 8035393: Use CLDClosure instead of CLDToOopClosure in > frame::oops_interpreted_do > 8034764: Use process_strong_roots to adjust the StringTable > 8034761: Remove the do_code_roots parameter from process_strong_roots > 8033923: Use BufferingOopClosure for G1 code root scanning > 8033764: Remove the usage of StarTask from BufferingOopClosure > 8012687: Remove unused is_root checks and closures > 8047818: G1 HeapRegions can no longer be ContiguousSpaces > 8048214: Linker error when compiling G1SATBCardTableModRefBS after > include order changes > 8047821: G1 Does not use the save_marks functionality as intended > 8047820: G1 Block offset table does not need to support generic Space > classes > 8047819: G1 HeapRegionDCTOC does not need to inherit ContiguousSpaceDCTOC > 8038405: Clean up some virtual fucntions in Space class hierarchy > 8038412: Move object_iterate_careful down from Space to ContigousSpace > and CFLSpace > 8038404: Move object_iterate_mem from Space to CMS since it is only > ever used by CMS > 8038399: Remove dead oop_iterate MemRegion variants from SharedHeap, > Generation and Space classe > 8037958: ConcurrentMark::cleanup leaks BitMaps if VerifyDuringGC is > enabled > 8032379: Remove the is_scavenging flag to process_strong_roots > > Testing: > > We've been running Kitchensink, gc-test-suite, internal nightly > testing and 
test lists, and CRM FA benchmarks. > > thanks, > StefanK & Mikael Gerdin From david.holmes at oracle.com Thu Jul 3 12:03:13 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 03 Jul 2014 22:03:13 +1000 Subject: RFR(S): 8046818: Hotspot build system looking for sdt.h in the wrong place In-Reply-To: <53B4A4AE.5000709@oracle.com> References: <53B4A4AE.5000709@oracle.com> Message-ID: <53B54681.1090608@oracle.com> Looks ok. David On 3/07/2014 10:32 AM, Mikael Vidstedt wrote: > > Please review the below fix. > > When using a compiler which does not use the default "/" as system root > the system headers are not picked up from /usr/include. This means that > looking for sdt.h in /usr/include is not correct - even if the machine > in question has that header file available it may still fail at compile > time if the compiler doesn't have sdt.h in its system root. > > gcc has a useful option (-print-sysroot) which prints the system root > being used. I have not found any equivalent option for clang, so this > fix will unfortunately not solve this problem when using clang. When we > rewrite the Hotspot build system to use the new top level build system > this can be solved in a better way. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8046818 > Webrev: > http://cr.openjdk.java.net/~mikael/webrevs/8046818/webrev.00/webrev/ > > Cheers, > Mikael > From thomas.schatzl at oracle.com Thu Jul 3 15:02:10 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 03 Jul 2014 17:02:10 +0200 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle In-Reply-To: <53B53B23.9040305@oracle.com> References: <53B2BB50.1080606@oracle.com> <53B53B23.9040305@oracle.com> Message-ID: <1404399730.2851.33.camel@cirrus> Hi Mikael+Stefan, On Thu, 2014-07-03 at 13:14 +0200, Stefan Karlsson wrote: > Hi again, > > Here's a new patch. 
> > Changes: > 1) The first webrev didn't include the change to this file: > http://cr.openjdk.java.net/~stefank/8048248/webrev.01/test/testlibrary/whitebox/sun/hotspot/WhiteBox.java.udiff.html > > 2) Fixes a bug that happens with Class Redefinition. > http://cr.openjdk.java.net/~stefank/8048248/webrev.01/src/share/vm/classfile/metadataOnStackMark.cpp.udiff.html > > We don't want to call > CodeCache::alive_nmethods_do(nmethod::mark_on_stack) unnecessarily, > since it's one of the more expensive operations done during the remark > pause. If Class Redefinition isn't used we don't need to mark through > the code cache, since no deallocated metadata should have made its way > into a nmethod. Unfortunately, this is not true for Class Redefinition. > Class Redefinition will create new versions of Methods and ConstantPools > and needs to keep the old versions alive until all references to the old > version have been cleaned out from the JVM. > > The current patch only calls > CodeCache::alive_nmethods_do(nmethod::mark_on_stack) if Class > Redefinition is used. This has the effect that code using Class > Redefinition will have higher remark pauses. > > In an earlier version of the G1 class unloading patch I parallelized and > combined the nmethod mark_on_stack code with the CodeCache cleaning > code, but it was removed in favor of removing the call to > CodeCache::alive_nmethods_do. We might want to revive that patch, or > optimize this some other way, but I would prefer to not do that in this > patch. Fine with me. I went through the change (again after the recent internal review) and could not find any big issue. Thanks for the latest modifications. Here is a list of minor comments for this most current change. 
Note that I am no expert in runtime/compiler code, but I think it looks reasonable :) classLoaderData.cpp: - indentation of AllAliveClosure::found_dead() body wrong - AllAliveClosure should be put under #ifdef ASSERT - line 898 should be removed, the "next" declared in this line is not used (Klass* next = null; and another one within the while-loop) - additional newline in line 917/918 stringTable.cpp: - maybe instead of "notify_gc" call the method "ensure_string_alive(...)". I would like that better. And potentially add a method "ensure_oop_alive()" or so in CollectedHeap which default implementation does nothing, and G1 overrides. That seems cleaner to me than the string table and the ciObjectFactory knowing about the G1SATB*randomstring*BS. codeCache.cpp: - extra added line 341 - first, thanks for making a fast-exit for the scavengable_nmethods() mechanism. Maybe instead of doing the early exit on UseG1GC, what do you think about adding a predicate in CollectedHeap about that? Not sure about a name, and it's up to you if you want to do that. - either line 535 or 536 could be removed :) compiledIC.cpp: - the indentation in line 102 to 107 seems to be messed up. - newlines at nmethod::verify_icholder_relocations() nmethod.cpp: - extra newline at 1303 sharedHeap/g1CollectedHeap: the barriers implementation for the strongrootsscope and the G1CodeCacheUnloadingTask are different, maybe it would be good to make them similar. Both implementations seem to be okay. g1CollectedHeap.cpp: - line 5226: "post-poned" -> "postponed" (I think) - line 5235: additional newline In general I really like that G1 can do class unloading after remark now :) That helps a lot in longer-running applications. Thanks for your great work, Thomas > > thanks, > StefanK > > On 2014-07-01 15:44, Stefan Karlsson wrote: > > Hi all, > > > > Please, review this patch to enable unloading of classes and other > > metadata after a G1 concurrent cycle. 
> > > > http://cr.openjdk.java.net/~stefank/8048248/webrev.00/ > > https://bugs.openjdk.java.net/browse/JDK-8048248 > > > > The patch includes the following changes: > > > > 1) Tracing through alive Klasses and CLDs during concurrent mark, > > instead of marking all of them during the initial mark pause. > > 2) Making HeapRegions walkable in the presence of unparseable objects > > due to their classes being unloaded. > > 3) The process roots code has been changed to allow G1's combined > > initial mark and scavenge. > > 4) The CodeBlobClosures have been refactored to distinguish the > > marking variant from the oop updating variants. > > 5) Calls to the G1 pre-barrier have been added to some places, such as > > the StringTable, to guard against object resurrection, similar to how > > j.l.ref.Reference#get is treated with a read barrier. > > 6) Parallelizing the cleaning of metadata and compiled methods during > > the remark pause. > > > > A number of patches to prepare for this RFE has already been pushed to > > JDK 9: > > > > 8047362: Add a version of CompiledIC_at that doesn't create a new > > RelocIterator > > 8047326: Consolidate all CompiledIC::CompiledIC implementations and > > move it to compiledIC.cpp > > 8047323: Remove unused _copy_metadata_obj_cl in G1CopyingKeepAliveClosure > > 8047373: Clean the ExceptionCache in one pass > > 8046670: Make CMS metadata aware closures applicable for other collectors > > 8035746: Add missing Klass::oop_is_instanceClassLoader() function > > 8035648: Don't use Handle in java_lang_String::print > > 8035412: Cleanup ClassLoaderData::is_alive > > 8035393: Use CLDClosure instead of CLDToOopClosure in > > frame::oops_interpreted_do > > 8034764: Use process_strong_roots to adjust the StringTable > > 8034761: Remove the do_code_roots parameter from process_strong_roots > > 8033923: Use BufferingOopClosure for G1 code root scanning > > 8033764: Remove the usage of StarTask from BufferingOopClosure > > 8012687: Remove unused 
is_root checks and closures > > 8047818: G1 HeapRegions can no longer be ContiguousSpaces > > 8048214: Linker error when compiling G1SATBCardTableModRefBS after > > include order changes > > 8047821: G1 Does not use the save_marks functionality as intended > > 8047820: G1 Block offset table does not need to support generic Space > > classes > > 8047819: G1 HeapRegionDCTOC does not need to inherit ContiguousSpaceDCTOC > > 8038405: Clean up some virtual fucntions in Space class hierarchy > > 8038412: Move object_iterate_careful down from Space to ContigousSpace > > and CFLSpace > > 8038404: Move object_iterate_mem from Space to CMS since it is only > > ever used by CMS > > 8038399: Remove dead oop_iterate MemRegion variants from SharedHeap, > > Generation and Space classe > > 8037958: ConcurrentMark::cleanup leaks BitMaps if VerifyDuringGC is > > enabled > > 8032379: Remove the is_scavenging flag to process_strong_roots > > > > Testing: > > > > We've been running Kitchensink, gc-test-suite, internal nightly > > testing and test lists, and CRM FA benchmarks. > > > > thanks, > > StefanK & Mikael Gerdin > From lois.foltan at oracle.com Thu Jul 3 19:26:52 2014 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 03 Jul 2014 15:26:52 -0400 Subject: RFR(S): 8046818: Hotspot build system looking for sdt.h in the wrong place In-Reply-To: <53B54681.1090608@oracle.com> References: <53B4A4AE.5000709@oracle.com> <53B54681.1090608@oracle.com> Message-ID: <53B5AE7C.4030106@oracle.com> Looks ok to me as well. Lois On 7/3/2014 8:03 AM, David Holmes wrote: > Looks ok. > > David > > On 3/07/2014 10:32 AM, Mikael Vidstedt wrote: >> >> Please review the below fix. >> >> When using a compiler which does not use the default "/" as system root >> the system headers are not picked up from /usr/include. 
This means that >> looking for sdt.h in /usr/include is not correct - even if the machine >> in question has that header file available it may still fail at compile >> time if the compiler doesn't have sdt.h in its system root. >> >> gcc has a useful option (-print-sysroot) which prints the system root >> being used. I have not found any equivalent option for clang, so this >> fix will unfortunately not solve this problem when using clang. When we >> rewrite the Hotspot build system to use the new top level build system >> this can be solved in a better way. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8046818 >> Webrev: >> http://cr.openjdk.java.net/~mikael/webrevs/8046818/webrev.00/webrev/ >> >> Cheers, >> Mikael >> From stefan.karlsson at oracle.com Thu Jul 3 19:57:29 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 03 Jul 2014 21:57:29 +0200 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle In-Reply-To: <53B53B23.9040305@oracle.com> References: <53B2BB50.1080606@oracle.com> <53B53B23.9040305@oracle.com> Message-ID: <53B5B5A9.4020406@oracle.com> Hi, A new patch can be found at: http://cr.openjdk.java.net/~stefank/8048248/webrev.02/ http://cr.openjdk.java.net/~stefank/8048248/webrev.02.delta/ The new patch: 1) Fixes a bug when the user specifies -XX:ParallelGCThreads=0 2) Fixes most of Thomas Schatzl's review comments 3) Fixes a bug in how G1RemarkGCTraceTime is used, which caused incorrect measurements of the phases "System Dictionary Unloading" and "Parallel Unloading". thanks, StefanK On 2014-07-03 13:14, Stefan Karlsson wrote: > Hi again, > > Here's a new patch. > > Changes: > 1) The first webrev didn't include the change to this file: > http://cr.openjdk.java.net/~stefank/8048248/webrev.01/test/testlibrary/whitebox/sun/hotspot/WhiteBox.java.udiff.html > > > 2) Fixes a bug that happens with Class Redefinition. 
> http://cr.openjdk.java.net/~stefank/8048248/webrev.01/src/share/vm/classfile/metadataOnStackMark.cpp.udiff.html > > > We don't want to call > CodeCache::alive_nmethods_do(nmethod::mark_on_stack) unnecessarily, > since its one of the more expensive operations done during the remark > pause. If Class Redefinition isn't used we don't need to mark through > the code cache, since no deallocated metadata should have made its way > into a nmethod. Unfortunately, this is not true for Class > Redefinition. Class Redefinition will create new versions of Methods > and ConstantPools and needs to keep the old versions alive until all > references to the old version have been cleaned out from the JVM. > > The current patch only calls > CodeCache::alive_nmethods_do(nmethod::mark_on_stack) if Class > Redefintion is used. This has the effect that code using Class > Redefinition will have higher remark pauses. > > In an earlier version of the G1 class unloading patch I parallelized > and combined the nmethod mark_on_stack code with the CodeCache > cleaning code, but it was removed in favor of removing the call to > CodeCache::alive_nmethods_do. We might want to revive that patch, or > optimize this some other way, but I would prefer to not do that in > this patch. > > thanks, > StefanK > > On 2014-07-01 15:44, Stefan Karlsson wrote: >> Hi all, >> >> Please, review this patch to enable unloading of classes and other >> metadata after a G1 concurrent cycle. >> >> http://cr.openjdk.java.net/~stefank/8048248/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8048248 >> >> The patch includes the following changes: >> >> 1) Tracing through alive Klasses and CLDs during concurrent mark, >> instead of marking all of them during the initial mark pause. >> 2) Making HeapRegions walkable in the presence of unparseable objects >> due to their classes being unloaded. >> 3) The process roots code has been changed to allow G1's combined >> initial mark and scavenge. 
>> 4) The CodeBlobClosures have been refactored to distinguish the >> marking variant from the oop updating variants. >> 5) Calls to the G1 pre-barrier have been added to some places, such >> as the StringTable, to guard against object resurrection, similar to >> how j.l.ref.Reference#get is treated with a read barrier. >> 6) Parallelizing the cleaning of metadata and compiled methods during >> the remark pause. >> >> A number of patches to prepare for this RFE has already been pushed >> to JDK 9: >> >> 8047362: Add a version of CompiledIC_at that doesn't create a new >> RelocIterator >> 8047326: Consolidate all CompiledIC::CompiledIC implementations and >> move it to compiledIC.cpp >> 8047323: Remove unused _copy_metadata_obj_cl in >> G1CopyingKeepAliveClosure >> 8047373: Clean the ExceptionCache in one pass >> 8046670: Make CMS metadata aware closures applicable for other >> collectors >> 8035746: Add missing Klass::oop_is_instanceClassLoader() function >> 8035648: Don't use Handle in java_lang_String::print >> 8035412: Cleanup ClassLoaderData::is_alive >> 8035393: Use CLDClosure instead of CLDToOopClosure in >> frame::oops_interpreted_do >> 8034764: Use process_strong_roots to adjust the StringTable >> 8034761: Remove the do_code_roots parameter from process_strong_roots >> 8033923: Use BufferingOopClosure for G1 code root scanning >> 8033764: Remove the usage of StarTask from BufferingOopClosure >> 8012687: Remove unused is_root checks and closures >> 8047818: G1 HeapRegions can no longer be ContiguousSpaces >> 8048214: Linker error when compiling G1SATBCardTableModRefBS after >> include order changes >> 8047821: G1 Does not use the save_marks functionality as intended >> 8047820: G1 Block offset table does not need to support generic Space >> classes >> 8047819: G1 HeapRegionDCTOC does not need to inherit >> ContiguousSpaceDCTOC >> 8038405: Clean up some virtual fucntions in Space class hierarchy >> 8038412: Move object_iterate_careful down from Space to >> 
ContigousSpace and CFLSpace >> 8038404: Move object_iterate_mem from Space to CMS since it is only >> ever used by CMS >> 8038399: Remove dead oop_iterate MemRegion variants from SharedHeap, >> Generation and Space classe >> 8037958: ConcurrentMark::cleanup leaks BitMaps if VerifyDuringGC is >> enabled >> 8032379: Remove the is_scavenging flag to process_strong_roots >> >> Testing: >> >> We've been running Kitchensink, gc-test-suite, internal nightly >> testing and test lists, and CRM FA benchmarks. >> >> thanks, >> StefanK & Mikael Gerdin > From stefan.karlsson at oracle.com Thu Jul 3 20:02:03 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 03 Jul 2014 22:02:03 +0200 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle In-Reply-To: <1404399730.2851.33.camel@cirrus> References: <53B2BB50.1080606@oracle.com> <53B53B23.9040305@oracle.com> <1404399730.2851.33.camel@cirrus> Message-ID: <53B5B6BB.1060301@oracle.com> Hi Thomas, Thanks for reviewing these changes! I've fixed most of your comments, but I've indicated below were I've deferred your cleanup suggestions. On 2014-07-03 17:02, Thomas Schatzl wrote: > Hi Mikael+Stefan, > > On Thu, 2014-07-03 at 13:14 +0200, Stefan Karlsson wrote: >> Hi again, >> >> Here's a new patch. >> >> Changes: >> 1) The first webrev didn't include the change to this file: >> http://cr.openjdk.java.net/~stefank/8048248/webrev.01/test/testlibrary/whitebox/sun/hotspot/WhiteBox.java.udiff.html >> >> 2) Fixes a bug that happens with Class Redefinition. >> http://cr.openjdk.java.net/~stefank/8048248/webrev.01/src/share/vm/classfile/metadataOnStackMark.cpp.udiff.html >> >> We don't want to call >> CodeCache::alive_nmethods_do(nmethod::mark_on_stack) unnecessarily, >> since its one of the more expensive operations done during the remark >> pause. 
If Class Redefinition isn't used we don't need to mark through >> the code cache, since no deallocated metadata should have made its way >> into a nmethod. Unfortunately, this is not true for Class Redefinition. >> Class Redefinition will create new versions of Methods and ConstantPools >> and needs to keep the old versions alive until all references to the old >> version have been cleaned out from the JVM. >> >> The current patch only calls >> CodeCache::alive_nmethods_do(nmethod::mark_on_stack) if Class >> Redefintion is used. This has the effect that code using Class >> Redefinition will have higher remark pauses. >> >> In an earlier version of the G1 class unloading patch I parallelized and >> combined the nmethod mark_on_stack code with the CodeCache cleaning >> code, but it was removed in favor of removing the call to >> CodeCache::alive_nmethods_do. We might want to revive that patch, or >> optimize this some other way, but I would prefer to not do that in this >> patch. > Fine with me. > > I went through the change (again after the recent internal review) and > could not find any big issue. Thanks for latest modifications. > > Here is a list of minor comments for this, most current change. > > Note that I am no expert in runtime/compiler code, but I think it looks > reasonable :) > > classLoaderData.cpp: > - indentation of AllAliveClosure::found_dead() body wrong > - AllAliveClosure should be put under #ifdef ASSERT > - line 898 should be removed, the "next" declared in this line is not > used (Klass* next = null; and another one within the while-loop) > - additional newline in line 917/918 > > stringTable.cpp: > - maybe instead of "notify_gc" call the method > "ensure_string_alive(...)". I would like that better. And potentially > add a method "ensure_oop_alive()" or so in CollectedHeap which default > implementation does nothing, and G1 overrides. 
That seems cleaner to me > than the string table and the ciObjectFactory knowing about the > G1SATB*randomstring*BS. I'd like to handle this with a separate RFE. > > codeCache.cpp: > - extra added line 341 > - first, thanks for making a fast-exit for the scavengable_nmethods() > mechanism. Maybe instead of doing the early exit on UseG1GC, what do you > think about adding a predicate in CollectedHeap about that? Not sure > about a name, and it's up to you if you want to do that. I agree that this isn't a nice solution. I think we should have one entry point for adding/removing the code roots remset and then dispatch to different implementations for the different GCs. I'd prefer to handle this as a separate cleanup. > > - either line 535 or 536 could be removed :) > > compiledIC.cpp: > - the indentation in line 102 to 107 seems to be messed up. > - newlines at nmethod::verify_icholder_relocations() > > nmethod.cpp: > - extra newline at 1303 > > sharedHeap/g1CollectedHeap: the barriers implementation for the > strongrootsscope and the G1CodeCacheUnloadingTask are different, maybe > it would be good to make them similar. Both implementations seem to be > okay. > > g1CollectedHeap.cpp: > - line 5226: "post-poned" -> "postponed" (I think) > - line 5235: additional newline > > In general I really like that G1 can do class unloading after remark > now :) > That helps a lot in longer-running applications. > > Thanks for your great work, Thanks a lot! StefanK > Thomas > >> thanks, >> StefanK >> >> On 2014-07-01 15:44, Stefan Karlsson wrote: >>> Hi all, >>> >>> Please, review this patch to enable unloading of classes and other >>> metadata after a G1 concurrent cycle. >>> >>> http://cr.openjdk.java.net/~stefank/8048248/webrev.00/ >>> https://bugs.openjdk.java.net/browse/JDK-8048248 >>> >>> The patch includes the following changes: >>> >>> 1) Tracing through alive Klasses and CLDs during concurrent mark, >>> instead of marking all of them during the initial mark pause. 
>>> 2) Making HeapRegions walkable in the presence of unparseable objects >>> due to their classes being unloaded. >>> 3) The process roots code has been changed to allow G1's combined >>> initial mark and scavenge. >>> 4) The CodeBlobClosures have been refactored to distinguish the >>> marking variant from the oop updating variants. >>> 5) Calls to the G1 pre-barrier have been added to some places, such as >>> the StringTable, to guard against object resurrection, similar to how >>> j.l.ref.Reference#get is treated with a read barrier. >>> 6) Parallelizing the cleaning of metadata and compiled methods during >>> the remark pause. >>> >>> A number of patches to prepare for this RFE has already been pushed to >>> JDK 9: >>> >>> 8047362: Add a version of CompiledIC_at that doesn't create a new >>> RelocIterator >>> 8047326: Consolidate all CompiledIC::CompiledIC implementations and >>> move it to compiledIC.cpp >>> 8047323: Remove unused _copy_metadata_obj_cl in G1CopyingKeepAliveClosure >>> 8047373: Clean the ExceptionCache in one pass >>> 8046670: Make CMS metadata aware closures applicable for other collectors >>> 8035746: Add missing Klass::oop_is_instanceClassLoader() function >>> 8035648: Don't use Handle in java_lang_String::print >>> 8035412: Cleanup ClassLoaderData::is_alive >>> 8035393: Use CLDClosure instead of CLDToOopClosure in >>> frame::oops_interpreted_do >>> 8034764: Use process_strong_roots to adjust the StringTable >>> 8034761: Remove the do_code_roots parameter from process_strong_roots >>> 8033923: Use BufferingOopClosure for G1 code root scanning >>> 8033764: Remove the usage of StarTask from BufferingOopClosure >>> 8012687: Remove unused is_root checks and closures >>> 8047818: G1 HeapRegions can no longer be ContiguousSpaces >>> 8048214: Linker error when compiling G1SATBCardTableModRefBS after >>> include order changes >>> 8047821: G1 Does not use the save_marks functionality as intended >>> 8047820: G1 Block offset table does not need to 
support generic Space >>> classes >>> 8047819: G1 HeapRegionDCTOC does not need to inherit ContiguousSpaceDCTOC >>> 8038405: Clean up some virtual fucntions in Space class hierarchy >>> 8038412: Move object_iterate_careful down from Space to ContigousSpace >>> and CFLSpace >>> 8038404: Move object_iterate_mem from Space to CMS since it is only >>> ever used by CMS >>> 8038399: Remove dead oop_iterate MemRegion variants from SharedHeap, >>> Generation and Space classe >>> 8037958: ConcurrentMark::cleanup leaks BitMaps if VerifyDuringGC is >>> enabled >>> 8032379: Remove the is_scavenging flag to process_strong_roots >>> >>> Testing: >>> >>> We've been running Kitchensink, gc-test-suite, internal nightly >>> testing and test lists, and CRM FA benchmarks. >>> >>> thanks, >>> StefanK & Mikael Gerdin > From joe.darcy at oracle.com Thu Jul 3 22:36:07 2014 From: joe.darcy at oracle.com (Joe Darcy) Date: Thu, 03 Jul 2014 15:36:07 -0700 Subject: JDK 9 RFR of JDK-8048620: Remove unneeded/obsolete -source/-target options in hotspot tests In-Reply-To: <53B3F6DA.1050209@oracle.com> References: <53AE04E1.4000806@oracle.com> <53B2E513.5020608@oracle.com> <53B3DB0D.8070700@oracle.com> <53B3F6DA.1050209@oracle.com> Message-ID: <53B5DAD7.6030205@oracle.com> Hi Harold, Yes; please sponsor this change; thanks, -Joe On 07/02/2014 05:11 AM, harold seigel wrote: > Hi Joe, > > Your changes look good to me, also. > > Would you like me to sponsor your change? > > Thanks, Harold > > On 7/2/2014 6:12 AM, David Holmes wrote: >> Hi Joe, >> >> I can provide you one Review. It seems to me the -source/-target were >> being set to ensure a minimum version (probably on -target was needed >> but -source had to come along for the ride), so removing them seems >> fine. >> >> Note hotspot protocol requires copyright updates at the time of >> checkin - thanks. 
>> >> Also you will need to create the changeset against the group repo for >> whomever your sponsor is (though your existing patch from the webrev >> will probably apply cleanly). >> >> A second reviewer (small R) is needed. If they don't sponsor it I will. >> >> Cheers, >> David >> >> >> >> On 2/07/2014 2:42 AM, Joe Darcy wrote: >>> *ping* >>> >>> -Joe >>> >>> On 06/27/2014 04:57 PM, Joe Darcy wrote: >>>> Hello, >>>> >>>> As a consequence of a policy for retiring old javac -source and >>>> -target options (JEP 182 [1]), in JDK 9, only -source/-target of 6/1.6 >>>> and higher will be supported [2]. This work is being tracked under bug >>>> >>>> JDK-8011044: Remove support for 1.5 and earlier source and target >>>> options >>>> https://bugs.openjdk.java.net/browse/JDK-8011044 >>>> >>>> Many subtasks related to this are already complete, including updating >>>> regression tests in the jdk and langtools repos. It has come to my >>>> attention that the hotspot repo also has a few tests that use -source >>>> and -target that should be updated. Please review the changes: >>>> >>>> http://cr.openjdk.java.net/~darcy/8048620.0/ >>>> >>>> Full patch below. From what I could tell looking at the bug and tests, >>>> these tests are not sensitive to the class file version so they >>>> shouldn't need to use an explicit -source or -target option and should >>>> just accept the JDK-default. >>>> >>>> There is one additional test which uses -source/-target, >>>> test/compiler/6932496/Test6932496.java. This test *does* appear >>>> sensitive to class file version (no jsr / jret instruction in target 6 >>>> or higher) so I have not modified this test. If the test is not >>>> actually sensitive to class file version, it can be updated like the >>>> others. If it is sensitive and if testing this is still relevant, the >>>> class file in question will need to be generated in some other way, >>>> such as by using ASM. 
>>>> >>>> Regardless of the outcome of the technical discussion around >>>> Test6932496.java, I'd appreciate if a "hotspot buddy" could shepherd >>>> this fix through the HotSpot processes. >>>> >>>> Thanks, >>>> >>>> -Joe >>>> >>>> [1] http://openjdk.java.net/jeps/182 >>>> >>>> [2] >>>> http://mail.openjdk.java.net/pipermail/jdk9-dev/2014-January/000328.html >>>> >>>> >>>> --- old/test/compiler/6775880/Test.java 2014-06-27 >>>> 16:24:25.000000000 -0700 >>>> +++ new/test/compiler/6775880/Test.java 2014-06-27 >>>> 16:24:25.000000000 -0700 >>>> @@ -26,7 +26,6 @@ >>>> * @test >>>> * @bug 6775880 >>>> * @summary EA +DeoptimizeALot: >>>> assert(mon_info->owner()->is_locked(),"object must be locked now") >>>> - * @compile -source 1.4 -target 1.4 Test.java >>>> * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -Xbatch >>>> -XX:+DoEscapeAnalysis -XX:+DeoptimizeALot >>>> -XX:CompileCommand=exclude,java.lang.AbstractStringBuilder::append >>>> Test >>>> */ >>>> >>>> --- old/test/runtime/6626217/Test6626217.sh 2014-06-27 >>>> 16:24:26.000000000 -0700 >>>> +++ new/test/runtime/6626217/Test6626217.sh 2014-06-27 >>>> 16:24:26.000000000 -0700 >>>> @@ -54,7 +54,7 @@ >>>> >>>> # Compile all the usual suspects, including the default 'many_loader' >>>> ${CP} many_loader1.java.foo many_loader.java >>>> -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint *.java >>>> +${JAVAC} ${TESTJAVACOPTS} -Xlint *.java >>>> >>>> # Rename the class files, so the custom loader (and not the system >>>> loader) will find it >>>> ${MV} from_loader2.class from_loader2.impl2 >>>> @@ -62,7 +62,7 @@ >>>> # Compile the next version of 'many_loader' >>>> ${MV} many_loader.class many_loader.impl1 >>>> ${CP} many_loader2.java.foo many_loader.java >>>> -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint >>>> many_loader.java >>>> +${JAVAC} ${TESTJAVACOPTS} -Xlint many_loader.java >>>> >>>> # Rename the class file, so the custom loader (and not the system >>>> loader) will find it >>>> ${MV} 
many_loader.class many_loader.impl2 >>>> --- old/test/runtime/8003720/Test8003720.java 2014-06-27 >>>> 16:24:26.000000000 -0700 >>>> +++ new/test/runtime/8003720/Test8003720.java 2014-06-27 >>>> 16:24:26.000000000 -0700 >>>> @@ -26,7 +26,7 @@ >>>> * @test >>>> * @bug 8003720 >>>> * @summary Method in interpreter stack frame can be deallocated >>>> - * @compile -XDignore.symbol.file -source 1.7 -target 1.7 Victim.java >>>> + * @compile -XDignore.symbol.file Victim.java >>>> * @run main/othervm -Xverify:all -Xint Test8003720 >>>> */ >>>> >>>> >>> > From david.holmes at oracle.com Fri Jul 4 04:46:43 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 04 Jul 2014 14:46:43 +1000 Subject: RFR(S): 8049071: Add jtreg jobs to JPRT for Hotspot In-Reply-To: <53B4AD05.3070702@oracle.com> References: <53B4AD05.3070702@oracle.com> Message-ID: <53B631B3.6090505@oracle.com> Hi Mikael, Generally looks okay - took me a minute to remember that jtreg groups combine as set unions :) A couple of things: 226 # Unless explicitly defined below, hotspot_ is interpreted as the jtreg test group The jtreg group is actually called hotspot_ 227 hotspot_%: 228 $(ECHO) "Running tests: $@" 229 for each in $@; do \ 230 $(MAKE) -j 1 TEST_SELECTION=":$$each" UNIQUE_DIR=$$each jtreg_tests; \ 231 done While hotspot_% can match multiple targets each target will be distinct - ie $@ will only ever have a single value and the for loop will only execute once - and hence is unnecessary. This seems borne out with a simple test: > cat Makefile hotspot_%: @echo "Running tests: $@" @for each in $@; do \ echo $$each ;\ done > make hotspot_a hotspot_b Running tests: hotspot_a hotspot_a Running tests: hotspot_b hotspot_b Cheers, David On 3/07/2014 11:08 AM, Mikael Vidstedt wrote: > > Please review this enhancement which adds the scaffolding needed to run > the hotspot jtreg tests in JPRT. 
> > Bug: https://bugs.openjdk.java.net/browse/JDK-8049071 > Webrev (/): > http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/top/webrev/ > Webrev (hotspot/): > http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/hotspot/webrev/ > > > Summary: > > We want to run the hotspot regression tests on every hotspot push. This > change enables this and adds four new test groups to the set of tests > being run on hotspot pushes. The new test sets still need to be populated. > > Narrative: > > The majority of the changes are in the hotspot/test/Makefile. The > changes are almost entirely stolen from jdk/test/Makefile but have been > massaged to support (at least) three different use cases, two of which > were supported earlier: > > 1. Running the non-jtreg tests (servertest, clienttest and > internalvmtests), also supporting the use of the "hotspot_" for when the > tests are invoked from the JDK top level > 2. Running jtreg tests by selecting test to run using the TESTDIRS variable > 3. Running jtreg tests by selecting the test group to run (NEW) > > The third/new use case is implemented by making any target named > hotspot_% *except* the ones listed in 1. lead to the corresponding jtreg > test group in TEST.groups being run. For example, running "make > hotspot_gc" leads to all the tests in the hotspot_gc test group in > TEST.groups to be run and so on. > > I also removed the packtest targets, because as far as I can tell > they're not used anyway. > > Note that the new component test groups in TEST.group - > hotspot_compiler, hotspot_gc, hotspot_runtime and hotspot_serviceability > - are currently empty, or more precisely they only run a single test > each. The intention is that these should be populated by the respective > teams to include stable and relatively fast tests. Tests added to the > groups will be run on hotspot push jobs, and therefore will be blocking > pushes in case they fail. 
> > Cheers, > Mikael > From david.holmes at oracle.com Fri Jul 4 05:29:25 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 04 Jul 2014 15:29:25 +1000 Subject: RFR(S): 8049071: Add jtreg jobs to JPRT for Hotspot In-Reply-To: <53B631B3.6090505@oracle.com> References: <53B4AD05.3070702@oracle.com> <53B631B3.6090505@oracle.com> Message-ID: <53B63BB5.8090602@oracle.com> On 4/07/2014 2:46 PM, David Holmes wrote: > Hi Mikael, > > Generally looks okay - took me a minute to remember that jtreg groups > combine as set unions :) > > A couple of things: > > 226 # Unless explicitly defined below, hotspot_ is interpreted as the > jtreg test group > > The jtreg group is actually called hotspot_ > > 227 hotspot_%: > 228 $(ECHO) "Running tests: $@" > 229 for each in $@; do \ > 230 $(MAKE) -j 1 TEST_SELECTION=":$$each" > UNIQUE_DIR=$$each jtreg_tests; \ > 231 done > > While hotspot_% can match multiple targets each target will be distinct > - ie $@ will only every have a single value and the for loop will only > execute once - and hence is unnecessary. This seems borne out with a > simple test: > > > cat Makefile > hotspot_%: > @echo "Running tests: $@" > @for each in $@; do \ > echo $$each ;\ > done > > > make hotspot_a hotspot_b > Running tests: hotspot_a > hotspot_a > Running tests: hotspot_b > hotspot_b Though if you have a quoting issue with the invocation: > make "hotspot_a hotspot_b" Running tests: hotspot_a hotspot_b hotspot_a hotspot_b things turn out different. David > Cheers, > David > > On 3/07/2014 11:08 AM, Mikael Vidstedt wrote: >> >> Please review this enhancement which adds the scaffolding needed to run >> the hotspot jtreg tests in JPRT. 
>> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8049071 >> Webrev (/): >> http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/top/webrev/ >> Webrev (hotspot/): >> http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/hotspot/webrev/ >> >> >> >> Summary: >> >> We want to run the hotspot regression tests on every hotspot push. This >> change enables this and adds four new test groups to the set of tests >> being run on hotspot pushes. The new test sets still need to be >> populated. >> >> Narrative: >> >> The majority of the changes are in the hotspot/test/Makefile. The >> changes are almost entirely stolen from jdk/test/Makefile but have been >> massaged to support (at least) three different use cases, two of which >> were supported earlier: >> >> 1. Running the non-jtreg tests (servertest, clienttest and >> internalvmtests), also supporting the use of the "hotspot_" for when the >> tests are invoked from the JDK top level >> 2. Running jtreg tests by selecting test to run using the TESTDIRS >> variable >> 3. Running jtreg tests by selecting the test group to run (NEW) >> >> The third/new use case is implemented by making any target named >> hotspot_% *except* the ones listed in 1. lead to the corresponding jtreg >> test group in TEST.groups being run. For example, running "make >> hotspot_gc" leads to all the tests in the hotspot_gc test group in >> TEST.groups to be run and so on. >> >> I also removed the packtest targets, because as far as I can tell >> they're not used anyway. >> >> Note that the new component test groups in TEST.group - >> hotspot_compiler, hotspot_gc, hotspot_runtime and hotspot_serviceability >> - are currently empty, or more precisely they only run a single test >> each. The intention is that these should be populated by the respective >> teams to include stable and relatively fast tests. Tests added to the >> groups will be run on hotspot push jobs, and therefore will be blocking >> pushes in case they fail. 
>> >> Cheers, >> Mikael >> From roland.westrelin at oracle.com Fri Jul 4 08:06:03 2014 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Fri, 4 Jul 2014 10:06:03 +0200 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle In-Reply-To: <53B5B5A9.4020406@oracle.com> References: <53B2BB50.1080606@oracle.com> <53B53B23.9040305@oracle.com> <53B5B5A9.4020406@oracle.com> Message-ID: Hi Stefan, > http://cr.openjdk.java.net/~stefank/8048248/webrev.02/ So why can you change: 475 nm->fix_oop_relocations(); to 526 DEBUG_ONLY(nm->verify_oop_relocations()); in CodeCache::gc_epilogue() and 1914 cur->fix_oop_relocations(); to 2188 cur->verify_oop_relocations(); in nmethod::oops_do_marking_epilogue() ? This comment: 1196 // Find all calls in an nmethod, and clear the ones that points to zombie methods in nmethod.cpp doesn't seem good. Roland. From stefan.karlsson at oracle.com Fri Jul 4 08:22:36 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 04 Jul 2014 10:22:36 +0200 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle In-Reply-To: References: <53B2BB50.1080606@oracle.com> <53B53B23.9040305@oracle.com> <53B5B5A9.4020406@oracle.com> Message-ID: <53B6644C.7000807@oracle.com> On 2014-07-04 10:06, Roland Westrelin wrote: > Hi Stefan, > >> http://cr.openjdk.java.net/~stefank/8048248/webrev.02/ > So why can you change: > > 475 nm->fix_oop_relocations(); > > to > > 526 DEBUG_ONLY(nm->verify_oop_relocations()); > > in CodeCache::gc_epilogue() > > and > > 1914 cur->fix_oop_relocations(); > > to > > 2188 cur->verify_oop_relocations(); > > in nmethod::oops_do_marking_epilogue() fix_oop_relocations() is now called while iterating over the oops in an nmethod, instead of in a separate pass at the end.
See: +void CodeBlobToOopClosure::do_nmethod(nmethod* nm) { + nm->oops_do(_cl); + if (_fix_relocations) { + nm->fix_oop_relocations(); + } +} Every time we set up a CodeBlobToOopClosure or a MarkingCodeBlobClosure we specify whether we should call fix_oop_relocations() or not. For example: --- old/src/share/vm/gc_implementation/parallelScavenge/psTasks.cpp 2014-07-03 21:13:02.077583821 +0200 +++ new/src/share/vm/gc_implementation/parallelScavenge/psTasks.cpp 2014-07-03 21:13:01.965583825 +0200 @@ -100,7 +100,7 @@ case code_cache: { - CodeBlobToOopClosure each_scavengable_code_blob(&roots_to_old_closure, /*do_marking=*/ true); + MarkingCodeBlobClosure each_scavengable_code_blob(&roots_to_old_closure, CodeBlobToOopClosure::FixRelocations); CodeCache::scavenge_root_nmethods_do(&each_scavengable_code_blob); } break; > > ? > > This comment: > 1196 // Find all calls in an nmethod, and clear the ones that points to zombie methods > in nmethod.cpp doesn't seem good. I'll remove it. Thanks! StefanK > > Roland. From roland.westrelin at oracle.com Fri Jul 4 08:40:58 2014 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Fri, 4 Jul 2014 10:40:58 +0200 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle In-Reply-To: <53B6644C.7000807@oracle.com> References: <53B2BB50.1080606@oracle.com> <53B53B23.9040305@oracle.com> <53B5B5A9.4020406@oracle.com> <53B6644C.7000807@oracle.com> Message-ID: <2DAA6868-0B7D-4F71-B69C-C23D6CDB60EF@oracle.com> > fix_oop_relocations() is now called while iterating over the oops in an nmethod, instead of in a separate pass at the end. Ok. I took a look at the compiler related files and saw nothing wrong. Roland.
From stefan.karlsson at oracle.com Fri Jul 4 08:42:32 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 04 Jul 2014 10:42:32 +0200 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle In-Reply-To: <2DAA6868-0B7D-4F71-B69C-C23D6CDB60EF@oracle.com> References: <53B2BB50.1080606@oracle.com> <53B53B23.9040305@oracle.com> <53B5B5A9.4020406@oracle.com> <53B6644C.7000807@oracle.com> <2DAA6868-0B7D-4F71-B69C-C23D6CDB60EF@oracle.com> Message-ID: <53B668F8.7050804@oracle.com> On 2014-07-04 10:40, Roland Westrelin wrote: >> The call to fix_oop_relocations() are called when iterate over the oops in an nmethod, instead of doing it as a separate pass at the end. > Ok. I took a look at the compiler related files and saw nothing wrong. Thanks a lot! StefanK > > Roland. From erik.helin at oracle.com Fri Jul 4 15:41:32 2014 From: erik.helin at oracle.com (Erik Helin) Date: Fri, 04 Jul 2014 17:41:32 +0200 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle In-Reply-To: <53B5B5A9.4020406@oracle.com> References: <53B2BB50.1080606@oracle.com> <53B53B23.9040305@oracle.com> <53B5B5A9.4020406@oracle.com> Message-ID: <1506130.jXhI6NMK9q@ehelin-laptop> Hi Stefan and Mikael, thanks for all your hard work with this patch! On Thursday 03 July 2014 21:57:29 PM Stefan Karlsson wrote: > Hi, > > A new patch can be found at: > http://cr.openjdk.java.net/~stefank/8048248/webrev.02/ > http://cr.openjdk.java.net/~stefank/8048248/webrev.02.delta/ This looks good to me, Reviewed! Thanks, Erik > The new patch: > 1) Fixes a bug when the user specifies -XX:ParallelGCThreads=0 > > 2) Fixes most of Thomas Schatzl's review comments > > 3) Fixes a bug in how G1RemarkGCTraceTime is used, which caused > incorrect measurements of the phases "System Dictionary Unloading" and > "Parallel Unloading". > > thanks, > StefanK > > On 2014-07-03 13:14, Stefan Karlsson wrote: > > Hi again, > > > > Here's a new patch. 
> > > > Changes: > > 1) The first webrev didn't include the change to this file: > > http://cr.openjdk.java.net/~stefank/8048248/webrev.01/test/testlibrary/whi > > tebox/sun/hotspot/WhiteBox.java.udiff.html > > > > > > 2) Fixes a bug that happens with Class Redefinition. > > http://cr.openjdk.java.net/~stefank/8048248/webrev.01/src/share/vm/classfi > > le/metadataOnStackMark.cpp.udiff.html > > > > > > We don't want to call > > CodeCache::alive_nmethods_do(nmethod::mark_on_stack) unnecessarily, > > since it's one of the more expensive operations done during the remark > > pause. If Class Redefinition isn't used we don't need to mark through > > the code cache, since no deallocated metadata should have made its way > > into an nmethod. Unfortunately, this is not true for Class > > Redefinition. Class Redefinition will create new versions of Methods > > and ConstantPools and needs to keep the old versions alive until all > > references to the old version have been cleaned out from the JVM. > > > > The current patch only calls > > CodeCache::alive_nmethods_do(nmethod::mark_on_stack) if Class > > Redefinition is used. This has the effect that code using Class > > Redefinition will have higher remark pauses. > > > > In an earlier version of the G1 class unloading patch I parallelized > > and combined the nmethod mark_on_stack code with the CodeCache > > cleaning code, but it was removed in favor of removing the call to > > CodeCache::alive_nmethods_do. We might want to revive that patch, or > > optimize this some other way, but I would prefer to not do that in > > this patch. > > > > thanks, > > StefanK > > > > On 2014-07-01 15:44, Stefan Karlsson wrote: > >> Hi all, > >> > >> Please, review this patch to enable unloading of classes and other > >> metadata after a G1 concurrent cycle.
> >> > >> http://cr.openjdk.java.net/~stefank/8048248/webrev.00/ > >> https://bugs.openjdk.java.net/browse/JDK-8048248 > >> > >> The patch includes the following changes: > >> > >> 1) Tracing through alive Klasses and CLDs during concurrent mark, > >> instead of marking all of them during the initial mark pause. > >> 2) Making HeapRegions walkable in the presence of unparseable objects > >> due to their classes being unloaded. > >> 3) The process roots code has been changed to allow G1's combined > >> initial mark and scavenge. > >> 4) The CodeBlobClosures have been refactored to distinguish the > >> marking variant from the oop updating variants. > >> 5) Calls to the G1 pre-barrier have been added to some places, such > >> as the StringTable, to guard against object resurrection, similar to > >> how j.l.ref.Reference#get is treated with a read barrier. > >> 6) Parallelizing the cleaning of metadata and compiled methods during > >> the remark pause. > >> > >> A number of patches to prepare for this RFE has already been pushed > >> to JDK 9: > >> > >> 8047362: Add a version of CompiledIC_at that doesn't create a new > >> RelocIterator > >> 8047326: Consolidate all CompiledIC::CompiledIC implementations and > >> move it to compiledIC.cpp > >> 8047323: Remove unused _copy_metadata_obj_cl in > >> G1CopyingKeepAliveClosure > >> 8047373: Clean the ExceptionCache in one pass > >> 8046670: Make CMS metadata aware closures applicable for other > >> collectors > >> 8035746: Add missing Klass::oop_is_instanceClassLoader() function > >> 8035648: Don't use Handle in java_lang_String::print > >> 8035412: Cleanup ClassLoaderData::is_alive > >> 8035393: Use CLDClosure instead of CLDToOopClosure in > >> frame::oops_interpreted_do > >> 8034764: Use process_strong_roots to adjust the StringTable > >> 8034761: Remove the do_code_roots parameter from process_strong_roots > >> 8033923: Use BufferingOopClosure for G1 code root scanning > >> 8033764: Remove the usage of StarTask from 
BufferingOopClosure > >> 8012687: Remove unused is_root checks and closures > >> 8047818: G1 HeapRegions can no longer be ContiguousSpaces > >> 8048214: Linker error when compiling G1SATBCardTableModRefBS after > >> include order changes > >> 8047821: G1 Does not use the save_marks functionality as intended > >> 8047820: G1 Block offset table does not need to support generic Space > >> classes > >> 8047819: G1 HeapRegionDCTOC does not need to inherit > >> ContiguousSpaceDCTOC > >> 8038405: Clean up some virtual functions in Space class hierarchy > >> 8038412: Move object_iterate_careful down from Space to > >> ContiguousSpace and CFLSpace > >> 8038404: Move object_iterate_mem from Space to CMS since it is only > >> ever used by CMS > >> 8038399: Remove dead oop_iterate MemRegion variants from SharedHeap, > >> Generation and Space classes > >> 8037958: ConcurrentMark::cleanup leaks BitMaps if VerifyDuringGC is > >> enabled > >> 8032379: Remove the is_scavenging flag to process_strong_roots > >> > >> Testing: > >> > >> We've been running Kitchensink, gc-test-suite, internal nightly > >> testing and test lists, and CRM FA benchmarks. > >> > >> thanks, > >> StefanK & Mikael Gerdin From coleen.phillimore at oracle.com Sat Jul 5 17:05:41 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Sat, 05 Jul 2014 13:05:41 -0400 Subject: RFR: 8048248: G1 Class Unloading after completing a concurrent mark cycle In-Reply-To: <1506130.jXhI6NMK9q@ehelin-laptop> References: <53B2BB50.1080606@oracle.com> <53B53B23.9040305@oracle.com> <53B5B5A9.4020406@oracle.com> <1506130.jXhI6NMK9q@ehelin-laptop> Message-ID: <53B83065.4070908@oracle.com> Hi, I have looked at the runtime and metadata changes and glanced at one file of g1 changes. I have a couple of comments but this looks good. http://cr.openjdk.java.net/~stefank/8048248/webrev.02/src/share/vm/utilities/array.hpp.udiff.html Why does this include classLoaderData.hpp?
http://cr.openjdk.java.net/~stefank/8048248/webrev.02/src/share/vm/gc_implementation/g1/g1CollectedHeap.cpp.frames.html Does this have to include metadataOnStackMark.hpp ? Or is that leftover from the parallel metadataOnStackMark code? 5068 class G1CodeCacheUnloadingTask { 5235 class G1KlassCleaningTask { These need to inherit from allocation types (CHeapObj, ResourceObj or whatever for NMT). Can you check your other classes to see if they have inherited from memory allocation classes too? For the G1ClassCleaningTask and parallel cleaning weak links, you might want a comment to explain why this is too slow serially and must be done with all this code. Also please add a comment in front of ClassLoaderDataGraphKlassIteratorAtomic in classLoaderData.hpp what this is used for (and maybe why). I think related to above. That's all I saw that didn't look okay to me. Reviewed. This is a ton of work! Coleen On 7/4/14, 11:41 AM, Erik Helin wrote: > Hi Stefan and Mikael, > > thanks for all your hard work with this patch! > > On Thursday 03 July 2014 21:57:29 PM Stefan Karlsson wrote: >> Hi, >> >> A new patch can be found at: >> http://cr.openjdk.java.net/~stefank/8048248/webrev.02/ >> http://cr.openjdk.java.net/~stefank/8048248/webrev.02.delta/ > This looks good to me, Reviewed! > > Thanks, > Erik > >> The new patch: >> 1) Fixes a bug when the user specifies -XX:ParallelGCThreads=0 >> >> 2) Fixes most of Thomas Schatzl's review comments >> >> 3) Fixes a bug in how G1RemarkGCTraceTime is used, which caused >> incorrect measurements of the phases "System Dictionary Unloading" and >> "Parallel Unloading". >> >> thanks, >> StefanK >> >> On 2014-07-03 13:14, Stefan Karlsson wrote: >>> Hi again, >>> >>> Here's a new patch. 
>>> >>> Changes: >>> 1) The first webrev didn't include the change to this file: >>> http://cr.openjdk.java.net/~stefank/8048248/webrev.01/test/testlibrary/whi >>> tebox/sun/hotspot/WhiteBox.java.udiff.html >>> >>> >>> 2) Fixes a bug that happens with Class Redefinition. >>> http://cr.openjdk.java.net/~stefank/8048248/webrev.01/src/share/vm/classfi >>> le/metadataOnStackMark.cpp.udiff.html >>> >>> >>> We don't want to call >>> CodeCache::alive_nmethods_do(nmethod::mark_on_stack) unnecessarily, >>> since its one of the more expensive operations done during the remark >>> pause. If Class Redefinition isn't used we don't need to mark through >>> the code cache, since no deallocated metadata should have made its way >>> into a nmethod. Unfortunately, this is not true for Class >>> Redefinition. Class Redefinition will create new versions of Methods >>> and ConstantPools and needs to keep the old versions alive until all >>> references to the old version have been cleaned out from the JVM. >>> >>> The current patch only calls >>> CodeCache::alive_nmethods_do(nmethod::mark_on_stack) if Class >>> Redefintion is used. This has the effect that code using Class >>> Redefinition will have higher remark pauses. >>> >>> In an earlier version of the G1 class unloading patch I parallelized >>> and combined the nmethod mark_on_stack code with the CodeCache >>> cleaning code, but it was removed in favor of removing the call to >>> CodeCache::alive_nmethods_do. We might want to revive that patch, or >>> optimize this some other way, but I would prefer to not do that in >>> this patch. >>> >>> thanks, >>> StefanK >>> >>> On 2014-07-01 15:44, Stefan Karlsson wrote: >>>> Hi all, >>>> >>>> Please, review this patch to enable unloading of classes and other >>>> metadata after a G1 concurrent cycle. 
>>>> >>>> http://cr.openjdk.java.net/~stefank/8048248/webrev.00/ >>>> https://bugs.openjdk.java.net/browse/JDK-8048248 >>>> >>>> The patch includes the following changes: >>>> >>>> 1) Tracing through alive Klasses and CLDs during concurrent mark, >>>> instead of marking all of them during the initial mark pause. >>>> 2) Making HeapRegions walkable in the presence of unparseable objects >>>> due to their classes being unloaded. >>>> 3) The process roots code has been changed to allow G1's combined >>>> initial mark and scavenge. >>>> 4) The CodeBlobClosures have been refactored to distinguish the >>>> marking variant from the oop updating variants. >>>> 5) Calls to the G1 pre-barrier have been added to some places, such >>>> as the StringTable, to guard against object resurrection, similar to >>>> how j.l.ref.Reference#get is treated with a read barrier. >>>> 6) Parallelizing the cleaning of metadata and compiled methods during >>>> the remark pause. >>>> >>>> A number of patches to prepare for this RFE has already been pushed >>>> to JDK 9: >>>> >>>> 8047362: Add a version of CompiledIC_at that doesn't create a new >>>> RelocIterator >>>> 8047326: Consolidate all CompiledIC::CompiledIC implementations and >>>> move it to compiledIC.cpp >>>> 8047323: Remove unused _copy_metadata_obj_cl in >>>> G1CopyingKeepAliveClosure >>>> 8047373: Clean the ExceptionCache in one pass >>>> 8046670: Make CMS metadata aware closures applicable for other >>>> collectors >>>> 8035746: Add missing Klass::oop_is_instanceClassLoader() function >>>> 8035648: Don't use Handle in java_lang_String::print >>>> 8035412: Cleanup ClassLoaderData::is_alive >>>> 8035393: Use CLDClosure instead of CLDToOopClosure in >>>> frame::oops_interpreted_do >>>> 8034764: Use process_strong_roots to adjust the StringTable >>>> 8034761: Remove the do_code_roots parameter from process_strong_roots >>>> 8033923: Use BufferingOopClosure for G1 code root scanning >>>> 8033764: Remove the usage of StarTask from 
BufferingOopClosure >>>> 8012687: Remove unused is_root checks and closures >>>> 8047818: G1 HeapRegions can no longer be ContiguousSpaces >>>> 8048214: Linker error when compiling G1SATBCardTableModRefBS after >>>> include order changes >>>> 8047821: G1 Does not use the save_marks functionality as intended >>>> 8047820: G1 Block offset table does not need to support generic Space >>>> classes >>>> 8047819: G1 HeapRegionDCTOC does not need to inherit >>>> ContiguousSpaceDCTOC >>>> 8038405: Clean up some virtual fucntions in Space class hierarchy >>>> 8038412: Move object_iterate_careful down from Space to >>>> ContigousSpace and CFLSpace >>>> 8038404: Move object_iterate_mem from Space to CMS since it is only >>>> ever used by CMS >>>> 8038399: Remove dead oop_iterate MemRegion variants from SharedHeap, >>>> Generation and Space classe >>>> 8037958: ConcurrentMark::cleanup leaks BitMaps if VerifyDuringGC is >>>> enabled >>>> 8032379: Remove the is_scavenging flag to process_strong_roots >>>> >>>> Testing: >>>> >>>> We've been running Kitchensink, gc-test-suite, internal nightly >>>> testing and test lists, and CRM FA benchmarks. >>>> >>>> thanks, >>>> StefanK & Mikael Gerdin From stefan.karlsson at oracle.com Mon Jul 7 08:29:32 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 07 Jul 2014 10:29:32 +0200 Subject: RFR: 8049411: Minimal VM build broken after gcId.cpp was added Message-ID: <53BA5A6C.3040308@oracle.com> Hi all, Please, review this change to fix a build problem with the minimal VM. http://cr.openjdk.java.net/~stefank/8049411/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8049411 When gcId.cpp was introduced, it wasn't added to the list of files to keep in the gc_implementation/shared directory. 
thanks, StefanK From bengt.rutisson at oracle.com Mon Jul 7 08:37:42 2014 From: bengt.rutisson at oracle.com (Bengt Rutisson) Date: Mon, 07 Jul 2014 10:37:42 +0200 Subject: RFR: 8049411: Minimal VM build broken after gcId.cpp was added In-Reply-To: <53BA5A6C.3040308@oracle.com> References: <53BA5A6C.3040308@oracle.com> Message-ID: <53BA5C56.5040202@oracle.com> Hi Stefan, Looks good! Thanks for fixing this! Bengt On 2014-07-07 10:29, Stefan Karlsson wrote: > Hi all, > > Please, review this change to fix a build problem with the minimal VM. > > http://cr.openjdk.java.net/~stefank/8049411/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8049411 > > When gcId.cpp was introduced, it wasn't added to the list of files to > keep in the gc_implementation/shared directory. > > thanks, > StefanK From erik.helin at oracle.com Mon Jul 7 08:42:03 2014 From: erik.helin at oracle.com (Erik Helin) Date: Mon, 07 Jul 2014 10:42:03 +0200 Subject: RFR: 8049411: Minimal VM build broken after gcId.cpp was added In-Reply-To: <53BA5A6C.3040308@oracle.com> References: <53BA5A6C.3040308@oracle.com> Message-ID: <2347526.EWHVxYWl6c@ehelin-laptop> On Monday 07 July 2014 10:29:32 AM Stefan Karlsson wrote: > Hi all, > > Please, review this change to fix a build problem with the minimal VM. > > http://cr.openjdk.java.net/~stefank/8049411/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8049411 Looks good, Reviewed. Thanks, Erik > When gcId.cpp was introduced, it wasn't added to the list of files to > keep in the gc_implementation/shared directory. 
> > thanks, > StefanK From thomas.schatzl at oracle.com Mon Jul 7 08:42:44 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 07 Jul 2014 10:42:44 +0200 Subject: RFR: 8049411: Minimal VM build broken after gcId.cpp was added In-Reply-To: <53BA5A6C.3040308@oracle.com> References: <53BA5A6C.3040308@oracle.com> Message-ID: <1404722564.2735.1.camel@cirrus> Hi Stefan, On Mon, 2014-07-07 at 10:29 +0200, Stefan Karlsson wrote: > Hi all, > > Please, review this change to fix a build problem with the minimal VM. > > http://cr.openjdk.java.net/~stefank/8049411/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8049411 > > When gcId.cpp was introduced, it wasn't added to the list of files to > keep in the gc_implementation/shared directory. can you fix the extra tab character in the new line at the end? Otherwise it looks good. I do not need to see that review again. Thomas From david.holmes at oracle.com Mon Jul 7 08:43:47 2014 From: david.holmes at oracle.com (David Holmes) Date: Mon, 07 Jul 2014 18:43:47 +1000 Subject: RFR: 8049411: Minimal VM build broken after gcId.cpp was added In-Reply-To: <53BA5A6C.3040308@oracle.com> References: <53BA5A6C.3040308@oracle.com> Message-ID: <53BA5DC3.3070608@oracle.com> Looks okay, but how did the original change get through JPRT ??? David On 7/07/2014 6:29 PM, Stefan Karlsson wrote: > Hi all, > > Please, review this change to fix a build problem with the minimal VM. > > http://cr.openjdk.java.net/~stefank/8049411/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8049411 > > When gcId.cpp was introduced, it wasn't added to the list of files to > keep in the gc_implementation/shared directory. 
> > thanks, > StefanK From goetz.lindenmaier at sap.com Mon Jul 7 08:52:31 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Mon, 7 Jul 2014 08:52:31 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories Message-ID: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> Hi, I decided to clean up the remaining include cascades, too. This change introduces umbrella headers for the files in the cpu subdirectories: src/share/vm/utilities/bytes.hpp src/share/vm/opto/ad.hpp src/share/vm/code/nativeInst.hpp src/share/vm/code/vmreg.inline.hpp src/share/vm/interpreter/interp_masm.hpp It also cleans up the include cascades for adGlobals*.hpp, jniTypes*.hpp, vm_version*.hpp and register*.hpp. Where possible, this change avoids includes in headers; where needed, it adds a forward declaration instead. vmreg_.inline.hpp contains functions declared in register_cpu.hpp and vmreg.hpp, so there is no obvious mapping to the shared files. Still, I did not split the files in the cpu directories, as they are rather small. I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly contains machine-dependent, C2-specific register information. So I think optoreg.hpp is a good header to place the adGlobals_.hpp includes in, and then use optoreg.hpp where symbols from adGlobals are needed. I moved the constructor and destructor of CodeletMark to the .cpp file; I don't think this is performance-relevant. But having them in the header requires pulling interp_masm.hpp into interpreter.hpp, and thus all the assembler include headers into a lot of files. Please review and test this change. I need a sponsor, please. http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ I compiled and tested this without precompiled headers on linuxx86_64, linuxppc64, windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, aixppc64, ntamd64 in opt, dbg and fastdbg versions.
Currently, the change applies to hs-rt, but once my other change arrives in other repos, it will work there, too. (I tested it together with the other change against jdk9/dev, too.) Best regards, Goetz. PS: I also did all the Copyright adaptions ;) From stefan.karlsson at oracle.com Mon Jul 7 08:44:15 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 07 Jul 2014 10:44:15 +0200 Subject: RFR: 8049411: Minimal VM build broken after gcId.cpp was added In-Reply-To: <1404722564.2735.1.camel@cirrus> References: <53BA5A6C.3040308@oracle.com> <1404722564.2735.1.camel@cirrus> Message-ID: <53BA5DDF.1000803@oracle.com> On 2014-07-07 10:42, Thomas Schatzl wrote: > Hi Stefan, > > On Mon, 2014-07-07 at 10:29 +0200, Stefan Karlsson wrote: >> Hi all, >> >> Please, review this change to fix a build problem with the minimal VM. >> >> http://cr.openjdk.java.net/~stefank/8049411/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8049411 >> >> When gcId.cpp was introduced, it wasn't added to the list of files to >> keep in the gc_implementation/shared directory. > can you fix the extra tab character in the new line at the end? > > Otherwise it looks good. I do not need to see that review again. I fixed the tab and fixed the sort order. thanks, StefanK > > Thomas > > From stefan.karlsson at oracle.com Mon Jul 7 08:52:28 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 07 Jul 2014 10:52:28 +0200 Subject: RFR: 8049411: Minimal VM build broken after gcId.cpp was added In-Reply-To: <53BA5DC3.3070608@oracle.com> References: <53BA5A6C.3040308@oracle.com> <53BA5DC3.3070608@oracle.com> Message-ID: <53BA5FCC.7070307@oracle.com> On 2014-07-07 10:43, David Holmes wrote: > Looks okay, but how did the original change get through JPRT ??? Thanks. I guess we don't build and run minimal in JPRT? We used to build the kernel version on Windows, but I can't find any reference to minimal in the jprt.properties. 
thanks, StefanK > > David > > On 7/07/2014 6:29 PM, Stefan Karlsson wrote: >> Hi all, >> >> Please, review this change to fix a build problem with the minimal VM. >> >> http://cr.openjdk.java.net/~stefank/8049411/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8049411 >> >> When gcId.cpp was introduced, it wasn't added to the list of files to >> keep in the gc_implementation/shared directory. >> >> thanks, >> StefanK From david.holmes at oracle.com Mon Jul 7 11:01:45 2014 From: david.holmes at oracle.com (David Holmes) Date: Mon, 07 Jul 2014 21:01:45 +1000 Subject: RFR: 8049411: Minimal VM build broken after gcId.cpp was added In-Reply-To: <53BA5FCC.7070307@oracle.com> References: <53BA5A6C.3040308@oracle.com> <53BA5DC3.3070608@oracle.com> <53BA5FCC.7070307@oracle.com> Message-ID: <53BA7E19.1000804@oracle.com> On 7/07/2014 6:52 PM, Stefan Karlsson wrote: > On 2014-07-07 10:43, David Holmes wrote: >> Looks okay, but how did the original change get through JPRT ??? > > Thanks. > > I guess we don't build and run minimal in JPRT? We used to build the > kernel version on Windows, but I can't find any reference to minimal in > the jprt.properties. It is built as part of the embedded builds in a standard hotspot integration job. How was the problem detected? Nightly builds or test failures? It's conceivable the omission got through the build but caused a runtime link failure. David > thanks, > StefanK > >> >> David >> >> On 7/07/2014 6:29 PM, Stefan Karlsson wrote: >>> Hi all, >>> >>> Please, review this change to fix a build problem with the minimal VM. >>> >>> http://cr.openjdk.java.net/~stefank/8049411/webrev.00/ >>> https://bugs.openjdk.java.net/browse/JDK-8049411 >>> >>> When gcId.cpp was introduced, it wasn't added to the list of files to >>> keep in the gc_implementation/shared directory. 
>>> >>> thanks, >>> StefanK From stefan.karlsson at oracle.com Mon Jul 7 10:58:31 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 07 Jul 2014 12:58:31 +0200 Subject: RFR: 8049411: Minimal VM build broken after gcId.cpp was added In-Reply-To: <53BA7E19.1000804@oracle.com> References: <53BA5A6C.3040308@oracle.com> <53BA5DC3.3070608@oracle.com> <53BA5FCC.7070307@oracle.com> <53BA7E19.1000804@oracle.com> Message-ID: <53BA7D57.1080303@oracle.com> On 2014-07-07 13:01, David Holmes wrote: > On 7/07/2014 6:52 PM, Stefan Karlsson wrote: >> On 2014-07-07 10:43, David Holmes wrote: >>> Looks okay, but how did the original change get through JPRT ??? >> >> Thanks. >> >> I guess we don't build and run minimal in JPRT? We used to build the >> kernel version on Windows, but I can't find any reference to minimal in >> the jprt.properties. > > It is built as part of the embedded builds in a standard hotspot > integration job. How was the problem detected? Nightly builds or test > failures? It's conceivable the omission got through the build but > caused a runtime link failure. I was doing a minimal build before pushing the G1 Class Unloading changes. The build went fine, but the JVM couldn't be started. I've heard from Mikael Gerdin that there is supposed to be a linker flag to prevent this kind of failure from happening. StefanK > > David > >> thanks, >> StefanK >> >>> >>> David >>> >>> On 7/07/2014 6:29 PM, Stefan Karlsson wrote: >>>> Hi all, >>>> >>>> Please, review this change to fix a build problem with the minimal VM. >>>> >>>> http://cr.openjdk.java.net/~stefank/8049411/webrev.00/ >>>> https://bugs.openjdk.java.net/browse/JDK-8049411 >>>> >>>> When gcId.cpp was introduced, it wasn't added to the list of files to >>>> keep in the gc_implementation/shared directory.
>>>> >>>> thanks, >>>> StefanK From stefan.karlsson at oracle.com Mon Jul 7 11:06:51 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 07 Jul 2014 13:06:51 +0200 Subject: 8049420: Backout 8048248 to correct attribution Message-ID: <53BA7F4B.3060603@oracle.com> Hi all, I missed adding Mikael Gerdin to the Contributed-by line of the G1 Class Unloading change. Since it's a rather large contribution we're going to do a little dance to get it right: The following change has already been pushed: 8048248: G1 Class Unloading after completing a concurrent mark cycle We'll do a backout that Bengt and Erik have already reviewed: Backout: 8049420: Backout 8048248 to correct attribution Then exactly the same changeset with the correct attribution will be resubmitted as: 8049421: G1 Class Unloading after completing a concurrent mark cycle thanks, StefanK From maynardj at us.ibm.com Mon Jul 7 14:18:30 2014 From: maynardj at us.ibm.com (Maynard Johnson) Date: Mon, 07 Jul 2014 09:18:30 -0500 Subject: Fwd: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero In-Reply-To: References: <53AAE839.8050105@us.ibm.com> <53B4300C.7040401@us.ibm.com> <53B43340.6020508@oracle.com> Message-ID: <53BAAC36.8030507@us.ibm.com> On 07/02/2014 01:21 PM, Volker Simonis wrote: > After a quick look I can say that at least for the "flush_icache_stub" > and "verify_oop" cases we indeed generate no code. Other platforms > like x86 for example generate code for instruction cache flushing. The > starting address of this code is saved in a function pointer and > called if necessary. On PPC64 we just save the address of a normal > C-function in this function pointer and implement the cache flush with > the help of inline assembler in the C-function.
However this saving of > the C-function address in the corresponding function pointer is still > done in a helper method which triggers the creation of the > JvmtiExport::post_dynamic_code_generated_internal event - but with > zero size in that case. > > I agree that it is questionable if we really need to post these events > although they didn't hurt until now. Maybe we can remove them - > please let me think one more night about it:) Any further thoughts on this, Volker? Thanks. -Maynard > > Regards, > Volker > > > > On Wed, Jul 2, 2014 at 7:38 PM, Volker Simonis wrote: >> Hi Maynard, >> >> I really apologize that I've somehow missed your first message. >> ppc-aix-port-dev was the right list to post to. >> >> I'll analyze this problem instantly and let you know why we post these >> zero-code-size events. >> >> Regards, >> Volker >> >> PS: really great to see that somebody is working on oprofile/OpenJDK >> integration! >> >> >> On Wed, Jul 2, 2014 at 6:28 PM, Daniel D. Daugherty >> wrote: >>> Adding the Serviceability team to the thread since JVM/TI is owned >>> by them... >>> >>> Dan >>> >>> >>> >>> On 7/2/14 10:15 AM, Maynard Johnson wrote: >>>> >>>> Cross-posting to see if Hotspot developers can help. >>>> >>>> -Maynard >>>> >>>> >>>> -------- Original Message -------- >>>> Subject: PowerPC issue: Some JVMTI dynamic code generated events have code >>>> size of zero >>>> Date: Wed, 25 Jun 2014 10:18:17 -0500 >>>> From: Maynard Johnson >>>> To: ppc-aix-port-dev at openjdk.java.net >>>> >>>> Hello, PowerPC OpenJDK folks, >>>> I am just now starting to get involved in the OpenJDK project. My goal is >>>> to ensure that the standard serviceability tools and tooling (jdb, JVMTI, >>>> jmap, etc.) work correctly on the PowerLinux platform. I selected JVMTI to >>>> start with since I have some experience from a client perspective with the >>>> JVMTI API. 
An OSS profiling tool for which I am the maintainer (oprofile) >>>> provides an agent library that implements the JVMTI API. Using this agent >>>> library to profile Java apps on my Intel-based laptop with OpenJDK (using >>>> various versions, up to current jdk9-dev) works fine. But the same >>>> profiling scenario attempted on my PowerLinux box (POWER7/Fedora 20) fails >>>> miserably. >>>> >>>> The oprofile agent library registers for callbacks for CompiledMethodLoad, >>>> CompiledMethodUnload, and DynamicCodeGenerated. In the callback functions, >>>> it writes information about the JVMTI event to a file. After profiling >>>> completes, oprofile's post-processing phase involves interpreting the >>>> information from the agent library's output file and generating an ELF file >>>> to represent the JITed code. When I profile an OpenJDK app on my Power >>>> system, the post-processing phase fails while trying to resolve overlapping >>>> symbols. The failure is due to the fact that it is unexpectedly finding >>>> symbols with code size of zero overlapping at the starting address of some >>>> other symbol with non-zero code size. The symbols in question here are from >>>> DynamicCodeGenerated events. >>>> >>>> Are these "code size=0" events valid? If so, I can fix the oprofile code >>>> to handle them. If they're not valid, then below is some debug information >>>> I've collected so far. >>>> >>>> ---------------------------- >>>> >>>> I instrumented JvmtiExport::post_dynamic_code_generated_internal (in >>>> hotspot/src/share/vm/prims/jvmtiExport.cpp) to print a debug line when a >>>> symbol with code size of zero was detected and then ran the following >>>> command: >>>> >>>> java >>>> -agentpath:/jvm/openjdk-1.9.0-internal/demo/jvmti/CodeLoadInfo/lib/libCodeLoadInfo.so >>>> -version >>>> >>>> The debug output from my instrumentation was as follows: >>>> >>>> Code size is ZERO!! 
Dynamic code generated event sent for >>>> flush_icache_stub; code begin: 0x3fff68000080; code end: 0x3fff68000080 >>>> Code size is ZERO!! Dynamic code generated event sent for >>>> throw_exception; code begin: 0x3fff68000a90; code end: 0x3fff68000a90 >>>> Code size is ZERO!! Dynamic code generated event sent for >>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>> Code size is ZERO!! Dynamic code generated event sent for >>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>> Code size is ZERO!! Dynamic code generated event sent for >>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>> Code size is ZERO!! Dynamic code generated event sent for verify_oop; >>>> code begin: 0x3fff6801665c; code end: 0x3fff6801665c >>>> openjdk version "1.9.0-internal" >>>> OpenJDK Runtime Environment (build >>>> 1.9.0-internal-mpj_2014_06_18_09_55-b00) >>>> OpenJDK 64-Bit Server VM (build >>>> 1.9.0-internal-mpj_2014_06_18_09_55-b00, mixed mode) >>>> >>>> >>>> I don't have access to an AIX system to know if the same issue would be >>>> seen there. Let me know if there's any other information I can provide. >>>> >>>> Thanks for the help. >>>> >>>> -Maynard >>>> >>>> >>>> >>> > From roland.westrelin at oracle.com Mon Jul 7 15:29:28 2014 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Mon, 7 Jul 2014 17:29:28 +0200 Subject: [8u] 8046542: [I.finalize() calls from methods compiled by C1 do not cause IllegalAccessError on Sparc Message-ID: <465AA71F-6A2A-4FCB-A68F-019EA254CA3F@oracle.com> 8u backport request. The change was pushed to jdk9 last week and went through a few nights of testing that didn't show any problem. The change applies cleanly to 8u. https://bugs.openjdk.java.net/browse/JDK-8046542 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/6edfcaac0639 Roland. 
From volker.simonis at gmail.com Mon Jul 7 15:51:20 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 7 Jul 2014 17:51:20 +0200 Subject: Fwd: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero In-Reply-To: <53BAAC36.8030507@us.ibm.com> References: <53AAE839.8050105@us.ibm.com> <53B4300C.7040401@us.ibm.com> <53B43340.6020508@oracle.com> <53BAAC36.8030507@us.ibm.com> Message-ID: Hi Maynard, I've opened bug "PPC64: Don't use StubCodeMarks for zero-length stubs" (https://bugs.openjdk.java.net/browse/JDK-8049441) for this issue. Until it is resolved in the main code line, you can use the attached patch to work around the problem. Regards, Volker On Mon, Jul 7, 2014 at 4:18 PM, Maynard Johnson wrote: > On 07/02/2014 01:21 PM, Volker Simonis wrote: >> After a quick look I can say that at least for the "flush_icache_stub" >> and "verify_oop" cases we indeed generate no code. Other platforms >> like x86 for example generate code for instruction cache flushing. The >> starting address of this code is saved in a function pointer and >> called if necessary. On PPC64 we just save the address of a normal >> C-funtion in this function pointer and implement the cache flush with >> the help of inline assembler in the C-function. However this saving of >> the C-function address in the corresponding function pointer is still >> done in a helper method which triggers the creation of the >> JvmtiExport::post_dynamic_code_generated_internal event - but with >> zero size in that case. >> >> I agree that it is questionable if we really need to post these events >> although they didn't hurt until know. Maybe we can remove them - >> please let me think one more night about it:) > Any further thoughts on this, Volker? Thanks. > > -Maynard >> >> Regards, >> Volker >> >> >> >> On Wed, Jul 2, 2014 at 7:38 PM, Volker Simonis wrote: >>> Hi Maynard, >>> >>> I really apologize that I've somehow missed your first message. 
>>> ppc-aix-port-dev was the right list to post to. >>> >>> I'll analyze this problem instantly and let you know why we post this >>> zero-code size events. >>> >>> Regards, >>> Volker >>> >>> PS: really great to see that somebody is working on oprofile/OpenJDK >>> integration! >>> >>> >>> On Wed, Jul 2, 2014 at 6:28 PM, Daniel D. Daugherty >>> wrote: >>>> Adding the Serviceability team to the thread since JVM/TI is owned >>>> by them... >>>> >>>> Dan >>>> >>>> >>>> >>>> On 7/2/14 10:15 AM, Maynard Johnson wrote: >>>>> >>>>> Cross-posting to see if Hotspot developers can help. >>>>> >>>>> -Maynard >>>>> >>>>> >>>>> -------- Original Message -------- >>>>> Subject: PowerPC issue: Some JVMTI dynamic code generated events have code >>>>> size of zero >>>>> Date: Wed, 25 Jun 2014 10:18:17 -0500 >>>>> From: Maynard Johnson >>>>> To: ppc-aix-port-dev at openjdk.java.net >>>>> >>>>> Hello, PowerPC OpenJDK folks, >>>>> I am just now starting to get involved in the OpenJDK project. My goal is >>>>> to ensure that the standard serviceability tools and tooling (jdb, JVMTI, >>>>> jmap, etc.) work correctly on the PowerLinux platform. I selected JVMTI to >>>>> start with since I have some experience from a client perspective with the >>>>> JVMTI API. An OSS profiling tool for which I am the maintainer (oprofile) >>>>> provides an agent library that implements the JVMTI API. Using this agent >>>>> library to profile Java apps on my Intel-based laptop with OpenJDK (using >>>>> various versions, up to current jdk9-dev) works fine. But the same >>>>> profiling scenario attempted on my PowerLinux box (POWER7/Fedora 20) fails >>>>> miserably. >>>>> >>>>> The oprofile agent library registers for callbacks for CompiledMethodLoad, >>>>> CompiledMethodUnload, and DynamicCodeGenerated. In the callback functions, >>>>> it writes information about the JVMTI event to a file. 
After profiling >>>>> completes, oprofile's post-processing phase involves interpreting the >>>>> information from the agent library's output file and generating an ELF file >>>>> to represent the JITed code. When I profile an OpenJDK app on my Power >>>>> system, the post-processing phase fails while trying to resolve overlapping >>>>> symbols. The failure is due to the fact that it is unexpectedly finding >>>>> symbols with code size of zero overlapping at the starting address of some >>>>> other symbol with non-zero code size. The symbols in question here are from >>>>> DynamicCodeGenerated events. >>>>> >>>>> Are these "code size=0" events valid? If so, I can fix the oprofile code >>>>> to handle them. If they're not valid, then below is some debug information >>>>> I've collected so far. >>>>> >>>>> ---------------------------- >>>>> >>>>> I instrumented JvmtiExport::post_dynamic_code_generated_internal (in >>>>> hotspot/src/share/vm/prims/jvmtiExport.cpp) to print a debug line when a >>>>> symbol with code size of zero was detected and then ran the following >>>>> command: >>>>> >>>>> java >>>>> -agentpath:/jvm/openjdk-1.9.0-internal/demo/jvmti/CodeLoadInfo/lib/libCodeLoadInfo.so >>>>> -version >>>>> >>>>> The debug output from my instrumentation was as follows: >>>>> >>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>> flush_icache_stub; code begin: 0x3fff68000080; code end: 0x3fff68000080 >>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>> throw_exception; code begin: 0x3fff68000a90; code end: 0x3fff68000a90 >>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>> Code size is ZERO!! 
Dynamic code generated event sent for >>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>> Code size is ZERO!! Dynamic code generated event sent for verify_oop; >>>>> code begin: 0x3fff6801665c; code end: 0x3fff6801665c >>>>> openjdk version "1.9.0-internal" >>>>> OpenJDK Runtime Environment (build >>>>> 1.9.0-internal-mpj_2014_06_18_09_55-b00) >>>>> OpenJDK 64-Bit Server VM (build >>>>> 1.9.0-internal-mpj_2014_06_18_09_55-b00, mixed mode) >>>>> >>>>> >>>>> I don't have access to an AIX system to know if the same issue would be >>>>> seen there. Let me know if there's any other information I can provide. >>>>> >>>>> Thanks for the help. >>>>> >>>>> -Maynard >>>>> >>>>> >>>>> >>>> >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: fix_zero_length_stubs.patch Type: text/x-patch Size: 1180 bytes Desc: not available URL: From joe.darcy at oracle.com Mon Jul 7 16:10:51 2014 From: joe.darcy at oracle.com (Joe Darcy) Date: Mon, 07 Jul 2014 09:10:51 -0700 Subject: JDK 9 RFR of JDK-8048620: Remove unneeded/obsolete -source/-target options in hotspot tests In-Reply-To: <53B5DAD7.6030205@oracle.com> References: <53AE04E1.4000806@oracle.com> <53B2E513.5020608@oracle.com> <53B3DB0D.8070700@oracle.com> <53B3F6DA.1050209@oracle.com> <53B5DAD7.6030205@oracle.com> Message-ID: <53BAC68B.2070209@oracle.com> Hello, I've sent a patch with the updated copyrights to Harold. As far as that goes, getting those changes back should proceed. However, there is one more test which needs further examination; from the email initiating this thread: > There is one additional test which uses -source/-target, > test/compiler/6932496/Test6932496.java. This test *does* appear > sensitive to class file version (no jsr / jret instruction in target 6 > or higher) so I have not modified this test. If the test is not > actually sensitive to class file version, it can be updated like the > others. 
If it is sensitive and if testing this is still relevant, the > class file in question will need to be generated in some other way, > such as by using ASM. Thanks, -Joe On 07/03/2014 03:36 PM, Joe Darcy wrote: > Hi Harold, > > Yes; please sponsor this change; thanks, > > -Joe > > On 07/02/2014 05:11 AM, harold seigel wrote: >> Hi Joe, >> >> Your changes look good to me, also. >> >> Would you like me to sponsor your change? >> >> Thanks, Harold >> >> On 7/2/2014 6:12 AM, David Holmes wrote: >>> Hi Joe, >>> >>> I can provide you one Review. It seems to me the -source/-target >>> were being set to ensure a minimum version (probably only -target was >>> needed but -source had to come along for the ride), so removing them >>> seems fine. >>> >>> Note hotspot protocol requires copyright updates at the time of >>> checkin - thanks. >>> >>> Also you will need to create the changeset against the group repo >>> for whomever your sponsor is (though your existing patch from the >>> webrev will probably apply cleanly). >>> >>> A second reviewer (small R) is needed. If they don't sponsor it I will. >>> >>> Cheers, >>> David >>> >>> >>> >>> On 2/07/2014 2:42 AM, Joe Darcy wrote: >>>> *ping* >>>> >>>> -Joe >>>> >>>> On 06/27/2014 04:57 PM, Joe Darcy wrote: >>>>> Hello, >>>>> >>>>> As a consequence of a policy for retiring old javac -source and >>>>> -target options (JEP 182 [1]), in JDK 9, only -source/-target of >>>>> 6/1.6 >>>>> and higher will be supported [2]. This work is being tracked under >>>>> bug >>>>> >>>>> JDK-8011044: Remove support for 1.5 and earlier source and target >>>>> options >>>>> https://bugs.openjdk.java.net/browse/JDK-8011044 >>>>> >>>>> Many subtasks related to this are already complete, including >>>>> updating >>>>> regression tests in the jdk and langtools repos. It has come to my >>>>> attention that the hotspot repo also has a few tests that use -source >>>>> and -target that should be updated. 
Please review the changes: >>>>> >>>>> http://cr.openjdk.java.net/~darcy/8048620.0/ >>>>> >>>>> Full patch below. From what I could tell looking at the bug and >>>>> tests, >>>>> these tests are not sensitive to the class file version so they >>>>> shouldn't need to use an explicit -source or -target option and >>>>> should >>>>> just accept the JDK-default. >>>>> >>>>> There is one additional test which uses -source/-target, >>>>> test/compiler/6932496/Test6932496.java. This test *does* appear >>>>> sensitive to class file version (no jsr / jret instruction in >>>>> target 6 >>>>> or higher) so I have not modified this test. If the test is not >>>>> actually sensitive to class file version, it can be updated like the >>>>> others. If it is sensitive and if testing this is still relevant, the >>>>> class file in question will need to be generated in some other way, >>>>> such as by using ASM. >>>>> >>>>> Regardless of the outcome of the technical discussion around >>>>> Test6932496.java, I'd appreciate it if a "hotspot buddy" could shepherd >>>>> this fix through the HotSpot processes. 
>>>>> >>>>> Thanks, >>>>> >>>>> -Joe >>>>> >>>>> [1] http://openjdk.java.net/jeps/182 >>>>> >>>>> [2] >>>>> http://mail.openjdk.java.net/pipermail/jdk9-dev/2014-January/000328.html >>>>> >>>>> >>>>> --- old/test/compiler/6775880/Test.java 2014-06-27 >>>>> 16:24:25.000000000 -0700 >>>>> +++ new/test/compiler/6775880/Test.java 2014-06-27 >>>>> 16:24:25.000000000 -0700 >>>>> @@ -26,7 +26,6 @@ >>>>> * @test >>>>> * @bug 6775880 >>>>> * @summary EA +DeoptimizeALot: >>>>> assert(mon_info->owner()->is_locked(),"object must be locked now") >>>>> - * @compile -source 1.4 -target 1.4 Test.java >>>>> * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -Xbatch >>>>> -XX:+DoEscapeAnalysis -XX:+DeoptimizeALot >>>>> -XX:CompileCommand=exclude,java.lang.AbstractStringBuilder::append >>>>> Test >>>>> */ >>>>> >>>>> --- old/test/runtime/6626217/Test6626217.sh 2014-06-27 >>>>> 16:24:26.000000000 -0700 >>>>> +++ new/test/runtime/6626217/Test6626217.sh 2014-06-27 >>>>> 16:24:26.000000000 -0700 >>>>> @@ -54,7 +54,7 @@ >>>>> >>>>> # Compile all the usual suspects, including the default >>>>> 'many_loader' >>>>> ${CP} many_loader1.java.foo many_loader.java >>>>> -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint *.java >>>>> +${JAVAC} ${TESTJAVACOPTS} -Xlint *.java >>>>> >>>>> # Rename the class files, so the custom loader (and not the system >>>>> loader) will find it >>>>> ${MV} from_loader2.class from_loader2.impl2 >>>>> @@ -62,7 +62,7 @@ >>>>> # Compile the next version of 'many_loader' >>>>> ${MV} many_loader.class many_loader.impl1 >>>>> ${CP} many_loader2.java.foo many_loader.java >>>>> -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint >>>>> many_loader.java >>>>> +${JAVAC} ${TESTJAVACOPTS} -Xlint many_loader.java >>>>> >>>>> # Rename the class file, so the custom loader (and not the system >>>>> loader) will find it >>>>> ${MV} many_loader.class many_loader.impl2 >>>>> --- old/test/runtime/8003720/Test8003720.java 2014-06-27 >>>>> 16:24:26.000000000 -0700 >>>>> +++ 
new/test/runtime/8003720/Test8003720.java 2014-06-27 >>>>> 16:24:26.000000000 -0700 @@ -26,7 +26,7 @@ >>>>> * @test >>>>> * @bug 8003720 >>>>> * @summary Method in interpreter stack frame can be deallocated >>>>> - * @compile -XDignore.symbol.file -source 1.7 -target 1.7 >>>>> Victim.java >>>>> + * @compile -XDignore.symbol.file Victim.java >>>>> * @run main/othervm -Xverify:all -Xint Test8003720 >>>>> */ >>>>> >>>>> >>>> >> > From andrey.x.zakharov at oracle.com Mon Jul 7 16:48:21 2014 From: andrey.x.zakharov at oracle.com (Andrey Zakharov) Date: Mon, 07 Jul 2014 20:48:21 +0400 Subject: RFR: 8011397: JTREG needs to copy additional WhiteBox class file to JTwork/scratch/sun/hotspot In-Reply-To: <53AAE5DA.2030700@oracle.com> References: <536B7CF0.6010508@oracle.com> <536B9E36.6090802@oracle.com> <536BA28D.7030808@oracle.com> <3503372.bSN5QEX8PY@mgerdin03> <5371F734.6090809@oracle.com> <537CC81C.6010905@oracle.com> <537DB9B0.8010200@oracle.com> <537F33CB.5050505@oracle.com> <5395CBE3.5030502@oracle.com> <539727A6.5030307@oracle.com> <539F1404.6070307@oracle.com> <53AAE5DA.2030700@oracle.com> Message-ID: <53BACF55.2020301@oracle.com> Hi all, Mikael, can you please review it. webrev: http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ Thanks. On 25.06.2014 19:08, Andrey Zakharov wrote: > Hi, all > So in progress of previous email - > webrev: > http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ > > Thanks. > > On 16.06.2014 19:57, Andrey Zakharov wrote: >> Hi, all >> So the issue is that when tests with the WhiteBox API are invoked with >> -Xverify:all they fail with Exception java.lang.NoClassDefFoundError: >> sun/hotspot/WhiteBox$WhiteBoxPermission >> Solutions that are observed: >> 1. Copy WhiteBoxPermission with WhiteBox. But >> >> Perhaps this is a good time to get rid of ClassFileInstaller >> altogether? >> >> 2. Using bootclasspath to hook pre-built whitebox (due to @library >> /testlibrary/whitebox) . 
Some tests have @run main/othervm, some use >> ProcessBuilder. >> - main/othervm/bootclasspath adds ${test.src} and >> ${test.classes} to options. >> - With ProcessBuilder we can just add ${test.classes} >> Question here is, can it break some tests? While testing this, I >> found only https://bugs.openjdk.java.net/browse/JDK-8046231, others >> look fine. >> >> 3. Make ClassFileInstaller deal with inner classes like that: >> diff -r 6ed24aedeef0 -r c01651363ba8 >> test/testlibrary/ClassFileInstaller.java >> --- a/test/testlibrary/ClassFileInstaller.java Thu Jun 05 19:02:56 >> 2014 +0400 >> +++ b/test/testlibrary/ClassFileInstaller.java Fri Jun 06 18:18:11 >> 2014 +0400 >> @@ -50,6 +50,16 @@ >> } >> // Create the class file >> Files.copy(is, p, StandardCopyOption.REPLACE_EXISTING); >> + >> + for (Class cls : >> Class.forName(arg).getDeclaredClasses()) { >> + //if (!Modifier.isStatic(cls.getModifiers())) { >> + String pathNameSub = >> cls.getCanonicalName().replace('.', '/').concat(".class"); >> + Path pathSub = Paths.get(pathNameSub); >> + InputStream streamSub = >> cl.getResourceAsStream(pathNameSub); >> + Files.copy(streamSub, pathSub, >> StandardCopyOption.REPLACE_EXISTING); >> + //} >> + } >> + >> } >> } >> } >> >> Works fine for ordinary classes, but fails for WhiteBox because >> Class.forName initiates class initialization. WhiteBox has a "static" section, and >> initialization fails as it cannot bind to native methods >> "registerNatives" and so on. >> >> >> So, let's return to the first option? Just add everywhere >> * @run main ClassFileInstaller sun.hotspot.WhiteBox >> + * @run main ClassFileInstaller sun.hotspot.WhiteBox$WhiteBoxPermission >> >> Thanks. >> >> >> On 10.06.2014 19:43, Igor Ignatyev wrote: >>> Andrey, >>> >>> I don't like this idea, since it completely changes the tests. >>> 'run/othervm/bootclasspath' adds all paths from CP to BCP, so the >>> tests whose main idea was testing WB methods themselves (sanity, >>> compiler/whitebox, ...) 
don't check that it's possible to use WB >>> when the application isn't in BCP. >>> >>> Igor >>> >>> On 06/09/2014 06:59 PM, Andrey Zakharov wrote: >>>> Hi, everybody >>>> I have tested my changes on major platforms and found one bug, filed: >>>> https://bugs.openjdk.java.net/browse/JDK-8046231 >>>> Also, i did another try to make ClassFileInstaller to copy all inner >>>> classes within parent, but this fails for WhiteBox due its static >>>> "registerNatives" dependency. >>>> >>>> Please, review suggested changes: >>>> - replace ClassFileInstaller and run/othervm with >>>> "run/othervm/bootclasspath". >>>> bootclasspath parameter for othervm adds-Xbootclasspath/a: >>>> option with ${test.src} and ${test.classes}according to >>>> http://hg.openjdk.java.net/code-tools/jtreg/file/31003a1c46d9/src/share/classes/com/sun/javatest/regtest/MainAction.java. >>>> >>>> Is this suitable for our needs - give to test compiled WhiteBox? >>>> - replace explicit -Xbootclasspath option values (".") in >>>> ProcessBuilder invocations to ${test.classes} where WhiteBox has been >>>> compiled. >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.00/ >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8011397 >>>> Thanks. >>>> >>>> >>>> On 23.05.2014 15:40, Andrey Zakharov wrote: >>>>> >>>>> On 22.05.2014 12:47, Igor Ignatyev wrote: >>>>>> Andrey, >>>>>> >>>>>> 1. You changed dozen of tests, have you tested your changes? >>>>> Locally, aurora on the way. >>>>>> >>>>>> 2. Your changes of year in copyright is wrong. it has to be >>>>>> $first_year, [$last_year, ], see Mark's email[1] for details. >>>>>> >>>>>> [1] >>>>>> http://mail.openjdk.java.net/pipermail/jdk7-dev/2010-May/001321.html >>>>> Thanks, fixed. will be uploaded soon. 
>>>>> >>>>> >>>>>> >>>>>> Igor >>>>>> >>>>>> On 05/21/2014 07:37 PM, Andrey Zakharov wrote: >>>>>>> >>>>>>> On 13.05.2014 14:43, Andrey Zakharov wrote: >>>>>>>> Hi >>>>>>>> So here is trivial patch - >>>>>>>> removing ClassFileInstaller sun.hotspot.WhiteBox and adding >>>>>>>> main/othervm/bootclasspath >>>>>>>> where this needed >>>>>>>> >>>>>>>> Also, some tests are modified as >>>>>>>> - "-Xbootclasspath/a:.", >>>>>>>> + "-Xbootclasspath/a:" + >>>>>>>> System.getProperty("test.classes"), >>>>>>>> >>>>>>>> Thanks. >>>>>>> webrev: http://cr.openjdk.java.net/~jwilhelm/8011397/webrev.02/ >>>>>>> bug: https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>>> Thanks. >>>>>>> >>>>>>>> >>>>>>>> On 09.05.2014 12:13, Mikael Gerdin wrote: >>>>>>>>> On Thursday 08 May 2014 19.28.13 Igor Ignatyev wrote: >>>>>>>>>> // cc'ing hotspot-dev instaed of compiler, runtime and gc lists. >>>>>>>>>> >>>>>>>>>> On 05/08/2014 07:09 PM, Filipp Zhinkin wrote: >>>>>>>>>>> Andrey, >>>>>>>>>>> >>>>>>>>>>> I've CC'ed compiler and runtime mailing list, because you're >>>>>>>>>>> changes >>>>>>>>>>> affect test for other components as too. >>>>>>>>>>> >>>>>>>>>>> I don't like your solution (but I'm not a reviewer, so treat my >>>>>>>>>>> words >>>>>>>>>>> just as suggestion), >>>>>>>>>>> because we'll have to write more meta information for each test >>>>>>>>>>> and it >>>>>>>>>>> is very easy to >>>>>>>>>>> forget to install WhiteBoxPermission if you don't test your >>>>>>>>>>> test >>>>>>>>>>> with >>>>>>>>>>> some security manager. >>>>>>>>>>> >>>>>>>>>>> From my point of view, it will be better to extend >>>>>>>>>>> ClassFileInstaller >>>>>>>>>>> >>>>>>>>>>> so it will copy not only >>>>>>>>>>> a class whose name was passed as an arguments, but also all >>>>>>>>>>> inner >>>>>>>>>>> classes of that class. 
>>>>>>>>>>> And if someone want copy only specified class without inner >>>>>>>>>>> classes, >>>>>>>>>>> then some option >>>>>>>>>>> could be added to ClassFileInstaller to force such behaviour. >>>>>>>>> Perhaps this is a good time to get rid of ClassFileInstaller >>>>>>>>> altogether? >>>>>>>>> >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8009117 >>>>>>>>> >>>>>>>>> The reason for its existence is that the WhiteBox class needs >>>>>>>>> to be >>>>>>>>> on the >>>>>>>>> boot class path. >>>>>>>>> If we can live with having all the test's classes on the boot >>>>>>>>> class >>>>>>>>> path then >>>>>>>>> we could use the /bootclasspath option in jtreg as stated in >>>>>>>>> the RFE. >>>>>>>>> >>>>>>>>> /Mikael >>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Filipp. >>>>>>>>>>> >>>>>>>>>>> On 05/08/2014 04:47 PM, Andrey Zakharov wrote: >>>>>>>>>>>> Hi! >>>>>>>>>>>> Suggesting patch with fixes for >>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>>>>>>>> >>>>>>>>>>>> webrev: >>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20275/8011397.tgz >>>>>>>>>>>> >>>>>>>>>>>> patch: >>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20274/8011397.WhiteBoxPer >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> mission >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Thanks. >>>>>>>> >>>>>>> >>>>> >>>> >> > From igor.veresov at oracle.com Mon Jul 7 17:32:02 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 7 Jul 2014 10:32:02 -0700 Subject: [8u] 8046542: [I.finalize() calls from methods compiled by C1 do not cause IllegalAccessError on Sparc In-Reply-To: <465AA71F-6A2A-4FCB-A68F-019EA254CA3F@oracle.com> References: <465AA71F-6A2A-4FCB-A68F-019EA254CA3F@oracle.com> Message-ID: <7FFE39AD-36DF-4435-BFF2-364DE61F8267@oracle.com> Good. igor On Jul 7, 2014, at 8:29 AM, Roland Westrelin wrote: > 8u backport request. The change was pushed to jdk9 last week and went through a few nights of testing that didn't show any problem. 
> > The change applies cleanly to 8u. > https://bugs.openjdk.java.net/browse/JDK-8046542 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/6edfcaac0639 > > Roland. From maynardj at us.ibm.com Mon Jul 7 17:40:01 2014 From: maynardj at us.ibm.com (Maynard Johnson) Date: Mon, 07 Jul 2014 12:40:01 -0500 Subject: Fwd: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero In-Reply-To: References: <53AAE839.8050105@us.ibm.com> <53B4300C.7040401@us.ibm.com> <53B43340.6020508@oracle.com> <53BAAC36.8030507@us.ibm.com> Message-ID: <53BADB71.2080200@us.ibm.com> On 07/07/2014 10:51 AM, Volker Simonis wrote: > Hi Maynard, > > I've opened bug "PPC64: Don't use StubCodeMarks for zero-length stubs" > (https://bugs.openjdk.java.net/browse/JDK-8049441) for this issue. > Until it is resolved in the main code line, you can use the attached > patch to work around the problem. Thanks. The patch does indeed resolve the problem. Now oprofile can properly handle the JVMTI events and can also resolve samples in JITed code to the associated Java methods. :-) -Maynard > > Regards, > Volker > > > On Mon, Jul 7, 2014 at 4:18 PM, Maynard Johnson wrote: >> On 07/02/2014 01:21 PM, Volker Simonis wrote: >>> After a quick look I can say that at least for the "flush_icache_stub" >>> and "verify_oop" cases we indeed generate no code. Other platforms >>> like x86 for example generate code for instruction cache flushing. The >>> starting address of this code is saved in a function pointer and >>> called if necessary. On PPC64 we just save the address of a normal >>> C-funtion in this function pointer and implement the cache flush with >>> the help of inline assembler in the C-function. However this saving of >>> the C-function address in the corresponding function pointer is still >>> done in a helper method which triggers the creation of the >>> JvmtiExport::post_dynamic_code_generated_internal event - but with >>> zero size in that case. 
>>> >>> I agree that it is questionable if we really need to post these events >>> although they didn't hurt until know. Maybe we can remove them - >>> please let me think one more night about it:) >> Any further thoughts on this, Volker? Thanks. >> >> -Maynard >>> >>> Regards, >>> Volker >>> >>> >>> >>> On Wed, Jul 2, 2014 at 7:38 PM, Volker Simonis wrote: >>>> Hi Maynard, >>>> >>>> I really apologize that I've somehow missed your first message. >>>> ppc-aix-port-dev was the right list to post to. >>>> >>>> I'll analyze this problem instantly and let you know why we post this >>>> zero-code size events. >>>> >>>> Regards, >>>> Volker >>>> >>>> PS: really great to see that somebody is working on oprofile/OpenJDK >>>> integration! >>>> >>>> >>>> On Wed, Jul 2, 2014 at 6:28 PM, Daniel D. Daugherty >>>> wrote: >>>>> Adding the Serviceability team to the thread since JVM/TI is owned >>>>> by them... >>>>> >>>>> Dan >>>>> >>>>> >>>>> >>>>> On 7/2/14 10:15 AM, Maynard Johnson wrote: >>>>>> >>>>>> Cross-posting to see if Hotspot developers can help. >>>>>> >>>>>> -Maynard >>>>>> >>>>>> >>>>>> -------- Original Message -------- >>>>>> Subject: PowerPC issue: Some JVMTI dynamic code generated events have code >>>>>> size of zero >>>>>> Date: Wed, 25 Jun 2014 10:18:17 -0500 >>>>>> From: Maynard Johnson >>>>>> To: ppc-aix-port-dev at openjdk.java.net >>>>>> >>>>>> Hello, PowerPC OpenJDK folks, >>>>>> I am just now starting to get involved in the OpenJDK project. My goal is >>>>>> to ensure that the standard serviceability tools and tooling (jdb, JVMTI, >>>>>> jmap, etc.) work correctly on the PowerLinux platform. I selected JVMTI to >>>>>> start with since I have some experience from a client perspective with the >>>>>> JVMTI API. An OSS profiling tool for which I am the maintainer (oprofile) >>>>>> provides an agent library that implements the JVMTI API. 
Using this agent >>>>>> library to profile Java apps on my Intel-based laptop with OpenJDK (using >>>>>> various versions, up to current jdk9-dev) works fine. But the same >>>>>> profiling scenario attempted on my PowerLinux box (POWER7/Fedora 20) fails >>>>>> miserably. >>>>>> >>>>>> The oprofile agent library registers for callbacks for CompiledMethodLoad, >>>>>> CompiledMethodUnload, and DynamicCodeGenerated. In the callback functions, >>>>>> it writes information about the JVMTI event to a file. After profiling >>>>>> completes, oprofile's post-processing phase involves interpreting the >>>>>> information from the agent library's output file and generating an ELF file >>>>>> to represent the JITed code. When I profile an OpenJDK app on my Power >>>>>> system, the post-processing phase fails while trying to resolve overlapping >>>>>> symbols. The failure is due to the fact that it is unexpectedly finding >>>>>> symbols with code size of zero overlapping at the starting address of some >>>>>> other symbol with non-zero code size. The symbols in question here are from >>>>>> DynamicCodeGenerated events. >>>>>> >>>>>> Are these "code size=0" events valid? If so, I can fix the oprofile code >>>>>> to handle them. If they're not valid, then below is some debug information >>>>>> I've collected so far. >>>>>> >>>>>> ---------------------------- >>>>>> >>>>>> I instrumented JvmtiExport::post_dynamic_code_generated_internal (in >>>>>> hotspot/src/share/vm/prims/jvmtiExport.cpp) to print a debug line when a >>>>>> symbol with code size of zero was detected and then ran the following >>>>>> command: >>>>>> >>>>>> java >>>>>> -agentpath:/jvm/openjdk-1.9.0-internal/demo/jvmti/CodeLoadInfo/lib/libCodeLoadInfo.so >>>>>> -version >>>>>> >>>>>> The debug output from my instrumentation was as follows: >>>>>> >>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>> flush_icache_stub; code begin: 0x3fff68000080; code end: 0x3fff68000080 >>>>>> Code size is ZERO!! 
Dynamic code generated event sent for >>>>>> throw_exception; code begin: 0x3fff68000a90; code end: 0x3fff68000a90 >>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>>> Code size is ZERO!! Dynamic code generated event sent for verify_oop; >>>>>> code begin: 0x3fff6801665c; code end: 0x3fff6801665c >>>>>> openjdk version "1.9.0-internal" >>>>>> OpenJDK Runtime Environment (build >>>>>> 1.9.0-internal-mpj_2014_06_18_09_55-b00) >>>>>> OpenJDK 64-Bit Server VM (build >>>>>> 1.9.0-internal-mpj_2014_06_18_09_55-b00, mixed mode) >>>>>> >>>>>> >>>>>> I don't have access to an AIX system to know if the same issue would be >>>>>> seen there. Let me know if there's any other information I can provide. >>>>>> >>>>>> Thanks for the help. >>>>>> >>>>>> -Maynard >>>>>> >>>>>> >>>>>> >>>>> >>> >> From roland.westrelin at oracle.com Mon Jul 7 19:55:42 2014 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Mon, 7 Jul 2014 21:55:42 +0200 Subject: [8u] 8046542: [I.finalize() calls from methods compiled by C1 do not cause IllegalAccessError on Sparc In-Reply-To: <7FFE39AD-36DF-4435-BFF2-364DE61F8267@oracle.com> References: <465AA71F-6A2A-4FCB-A68F-019EA254CA3F@oracle.com> <7FFE39AD-36DF-4435-BFF2-364DE61F8267@oracle.com> Message-ID: Thanks, Igor. Roland. 
From mikael.vidstedt at oracle.com Mon Jul 7 23:12:10 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Mon, 07 Jul 2014 16:12:10 -0700 Subject: RFR(S): 8049071: Add jtreg jobs to JPRT for Hotspot In-Reply-To: <53B63BB5.8090602@oracle.com> References: <53B4AD05.3070702@oracle.com> <53B631B3.6090505@oracle.com> <53B63BB5.8090602@oracle.com> Message-ID: <53BB294A.8040801@oracle.com> Fixed the comment, removed the loop (the loop logic is btw taken directly from jdk/test/Makefile, but I'll follow up on a fix for that separately). Anybody else want to have a look? top: http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.01/top/webrev/ hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.01/hotspot/webrev/ Thanks, Mikael On 2014-07-03 22:29, David Holmes wrote: > On 4/07/2014 2:46 PM, David Holmes wrote: >> Hi Mikael, >> >> Generally looks okay - took me a minute to remember that jtreg groups >> combine as set unions :) >> >> A couple of things: >> >> 226 # Unless explicitly defined below, hotspot_ is interpreted as the >> jtreg test group >> >> The jtreg group is actually called hotspot_ >> >> 227 hotspot_%: >> 228 $(ECHO) "Running tests: $@" >> 229 for each in $@; do \ >> 230 $(MAKE) -j 1 TEST_SELECTION=":$$each" >> UNIQUE_DIR=$$each jtreg_tests; \ >> 231 done >> >> While hotspot_% can match multiple targets, each target will be distinct >> - ie $@ will only ever have a single value and the for loop will only >> execute once - and hence is unnecessary. This seems borne out with a >> simple test: >> >> > cat Makefile >> hotspot_%: >> @echo "Running tests: $@" >> @for each in $@; do \ >> echo $$each ;\ >> done >> >> > make hotspot_a hotspot_b >> Running tests: hotspot_a >> hotspot_a >> Running tests: hotspot_b >> hotspot_b > > Though if you have a quoting issue with the invocation: > > > make "hotspot_a hotspot_b" > Running tests: hotspot_a hotspot_b > hotspot_a > hotspot_b > > things turn out different. 
> > David > > >> Cheers, >> David >> >> On 3/07/2014 11:08 AM, Mikael Vidstedt wrote: >>> >>> Please review this enhancement which adds the scaffolding needed to run >>> the hotspot jtreg tests in JPRT. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8049071 >>> Webrev (/): >>> http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/top/webrev/ >>> >>> Webrev (hotspot/): >>> http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/hotspot/webrev/ >>> >>> >>> >>> >>> Summary: >>> >>> We want to run the hotspot regression tests on every hotspot push. This >>> change enables this and adds four new test groups to the set of tests >>> being run on hotspot pushes. The new test sets still need to be >>> populated. >>> >>> Narrative: >>> >>> The majority of the changes are in the hotspot/test/Makefile. The >>> changes are almost entirely stolen from jdk/test/Makefile but have been >>> massaged to support (at least) three different use cases, two of which >>> were supported earlier: >>> >>> 1. Running the non-jtreg tests (servertest, clienttest and >>> internalvmtests), also supporting the use of the "hotspot_" for when >>> the >>> tests are invoked from the JDK top level >>> 2. Running jtreg tests by selecting test to run using the TESTDIRS >>> variable >>> 3. Running jtreg tests by selecting the test group to run (NEW) >>> >>> The third/new use case is implemented by making any target named >>> hotspot_% *except* the ones listed in 1. lead to the corresponding >>> jtreg >>> test group in TEST.groups being run. For example, running "make >>> hotspot_gc" leads to all the tests in the hotspot_gc test group in >>> TEST.groups to be run and so on. >>> >>> I also removed the packtest targets, because as far as I can tell >>> they're not used anyway. 
>>> >>> Note that the new component test groups in TEST.groups - >>> hotspot_compiler, hotspot_gc, hotspot_runtime and >>> hotspot_serviceability >>> - are currently empty, or more precisely they only run a single test >>> each. The intention is that these should be populated by the respective >>> teams to include stable and relatively fast tests. Tests added to the >>> groups will be run on hotspot push jobs, and therefore will be blocking >>> pushes in case they fail. >>> >>> Cheers, >>> Mikael >>> From volker.simonis at gmail.com Tue Jul 8 15:41:35 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 8 Jul 2014 17:41:35 +0200 Subject: RFR(XS): 8049441: PPC64: Don't use StubCodeMarks for zero-length stubs Message-ID: Hi, could somebody please review and push the following small, PPC64-only change to any of the hs team repositories: http://cr.openjdk.java.net/~simonis/webrevs/8049441/ https://bugs.openjdk.java.net/browse/JDK-8049441 Background: For some stubs we actually do not really generate code on PPC64 but instead we use a native C-function with inline-assembly. If the generators of these stubs contain a StubCodeMark, they will trigger JvmtiExport::post_dynamic_code_generated_internal events with a zero length code size. These events may fool clients like Oprofile which register for these events (thanks to Maynard Johnson who reported this - see http://mail.openjdk.java.net/pipermail/ppc-aix-port-dev/2014-June/002032.html). This change simply removes the StubCodeMark from ICacheStubGenerator::generate_icache_flush() and generate_verify_oop() because they don't generate assembly code. It also removes the StubCodeMark from generate_throw_exception() because it doesn't really generate a plain stub but a runtime stub for which the JVMTI dynamic code event is already generated by RuntimeStub::new_runtime_stub() -> CodeBlob::trace_new_stub() -> JvmtiExport::post_dynamic_code_generated(). 
Thank you and best regards, Volker From daniel.daugherty at oracle.com Tue Jul 8 15:45:59 2014 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Tue, 08 Jul 2014 09:45:59 -0600 Subject: RFR(XS): 8049441: PPC64: Don't use StubCodeMarks for zero-length stubs In-Reply-To: References: Message-ID: <53BC1237.2060006@oracle.com> Adding the Serviceability Team since JVM/TI belongs to them. Dan On 7/8/14 9:41 AM, Volker Simonis wrote: > Hi, > > could somebody please review and push the following small, PPC64-only > change to any of the hs team repositories: > > http://cr.openjdk.java.net/~simonis/webrevs/8049441/ > https://bugs.openjdk.java.net/browse/JDK-8049441 > > Background: > > For some stubs we actually do not really generate code on PPC64 but > instead we use a native C-function with inline-assembly. If the > generators of these stubs contain a StubCodeMark, they will trigger > JvmtiExport::post_dynamic_code_generated_internal events with a zero > length code size. These events may fool clients like Oprofile which > register for these events (thanks to Maynard Johnson who reported this > - see http://mail.openjdk.java.net/pipermail/ppc-aix-port-dev/2014-June/002032.html). > > This change simply removes the StubCodeMark from > ICacheStubGenerator::generate_icache_flush() and generate_verify_oop() > because they don't generate assembly code. It also removes the > StubCodeMark from generate_throw_exception() because it doesn't really > generate a plain stub but a runtime stub for which the JVMT dynamic > code event is already generated by RuntimeStub::new_runtime_stub() -> > CodeBlob::trace_new_stub() -> > JvmtiExport::post_dynamic_code_generated(). 
> > Thank you and best regards, > Volker From lois.foltan at oracle.com Tue Jul 8 17:42:12 2014 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 08 Jul 2014 13:42:12 -0400 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> Message-ID: <53BC2D74.4070708@oracle.com> Hi Goetz, Overall this cleanup looks good. Here are specific comments per file: src/cpu/ppc/vm/runtime_ppc.cpp - include nativeInst.hpp instead of nativeInst_ppc.hpp src/cpu/sparc/vm/c1_Runtime1_sparc.cpp - include nativeInst.hpp instead of nativeInst_sparc.hpp - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp (however this could pull in more code than needed since vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) src/cpu/ppc/vm/stubGenerator_ppc.cpp - change not related to clean up of umbrella headers, please explain/justify. src/share/vm/code/vmreg.hpp - Can lines #143-#15 be replaced by an inclusion of vmreg.inline.hpp or will this introduce a cyclical inclusion situation, since vmreg.inline.hpp includes vmreg.hpp? src/share/vm/classfile/classFileStream.cpp - only has a copyright change in the file, no other changes present? src/share/vm/prims/jvmtiClassFileReconstituter.cpp - incorrect copyright, no current year? src/share/vm/opto/ad.hpp - incorrect copyright date for a new file src/share/vm/code/vmreg.inline.hpp - technically this new file does not need to include "asm/register.hpp" since vmreg.hpp already includes it My only lingering concern is the cyclical nature of vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new file "vmreg.inline.hpp" in favor of having files include vmreg.hpp instead? Again since vmreg.inline.hpp includes vmreg.hpp there really is not much difference between the two? 
Thanks, Lois On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: > Hi, > > I decided to clean up the remaining include cascades, too. > > This change introduces umbrella headers for the files in the cpu subdirectories: > > src/share/vm/utilities/bytes.hpp > src/share/vm/opto/ad.hpp > src/share/vm/code/nativeInst.hpp > src/share/vm/code/vmreg.inline.hpp > src/share/vm/interpreter/interp_masm.hpp > > It also cleans up the include cascades for adGlobals*.hpp, > jniTypes*.hpp, vm_version*.hpp and register*.hpp. > > Where possible, this change avoids includes in headers. > Where necessary, it adds a forward declaration. > > vmreg_.inline.hpp contains functions declared in register_cpu.hpp > and vmreg.hpp, so there is no obvious mapping to the shared files. > Still, I did not split the files in the cpu directories, as they are > rather small. > > I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly > contains machine dependent, c2 specific register information. So I > think optoreg.hpp is a good header to place the adGlobals_.hpp includes in, > and then use optoreg.hpp where symbols from adGlobals are needed. > > I moved the constructor and destructor of CodeletMark to the .cpp > file, I don't think this is performance relevant. But having them in > the header requires pulling interp_masm.hpp into interpreter.hpp, and > thus all the assembler include headers into a lot of files. > > Please review and test this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ > > I compiled and tested this without precompiled headers on linuxx86_64, linuxppc64, > windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, aixppc64, ntamd64 > in opt, dbg and fastdbg versions. > > Currently, the change applies to hs-rt, but once my other change arrives in other > repos, it will work there, too. (I tested it together with the other change > against jdk9/dev, too.) > > Best regards, > Goetz. 
> > PS: I also did all the Copyright adaptions ;) From mike.duigou at oracle.com Tue Jul 8 22:33:00 2014 From: mike.duigou at oracle.com (Mike Duigou) Date: Tue, 8 Jul 2014 15:33:00 -0700 Subject: RFR: 8047734 : Back out use of -Og Message-ID: Hello all; This change backs out use of the optimized for debugging "-Og" in favour of the traditional "-O0". Initial evaluation seemed to indicate that "-Og" provided all the necessary debugging information but this has turned out to be incorrect. It seems that information is missing with the "-Og" optimization option combined with the default "-g" symbols option. More investigation is needed but that will be done in a future changeset. jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047734 webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047734/0/webrev/ (diffs in the generated configure script are not part of this review) The changes will be pushed through the hotspot-compiler repo. Mike From igor.veresov at oracle.com Wed Jul 9 00:15:26 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Tue, 8 Jul 2014 17:15:26 -0700 Subject: RFR: 8047734 : Back out use of -Og In-Reply-To: References: Message-ID: Hi Mike, Thanks for fixing this. Your diff against hs-comp is going to be a bit different though. We already have changes that remove Og for clang (which doesn?t support -Og at all). See http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/8cfc6ff87733 igor On Jul 8, 2014, at 3:33 PM, Mike Duigou wrote: > Hello all; > > This change backs out use of the optimized for debugging "-Og" in favour of the traditional "-O0". Initial evaluation seemed to indicate that "-Og" provided all the necessary debugging information but this has turned out to be incorrect. It seems that information is missing with the "-Og" optimization option combined with the default "-g" symbols option. More investigation is needed but that will be done in a future changeset. 
> > jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047734 > webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047734/0/webrev/ > > (diffs in the generated configure script are not part of this review) > > The changes will be pushed through the hotspot-compiler repo. > > Mike From mike.duigou at oracle.com Wed Jul 9 00:42:36 2014 From: mike.duigou at oracle.com (Mike Duigou) Date: Tue, 8 Jul 2014 17:42:36 -0700 Subject: RFR: 8047734 : Back out use of -Og In-Reply-To: References: Message-ID: I've rebased the webrev on the current hs-comp forest: http://cr.openjdk.java.net/~mduigou/JDK-8047734/1/webrev/ Mike On Jul 8 2014, at 17:15 , Igor Veresov wrote: > Hi Mike, > > Thanks for fixing this. Your diff against hs-comp is going to be a bit different though. We already have changes that remove Og for clang (which doesn?t support -Og at all). See http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/8cfc6ff87733 > > igor > > On Jul 8, 2014, at 3:33 PM, Mike Duigou wrote: > >> Hello all; >> >> This change backs out use of the optimized for debugging "-Og" in favour of the traditional "-O0". Initial evaluation seemed to indicate that "-Og" provided all the necessary debugging information but this has turned out to be incorrect. It seems that information is missing with the "-Og" optimization option combined with the default "-g" symbols option. More investigation is needed but that will be done in a future changeset. >> >> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047734 >> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047734/0/webrev/ >> >> (diffs in the generated configure script are not part of this review) >> >> The changes will be pushed through the hotspot-compiler repo. 
>> >> Mike > From david.holmes at oracle.com Wed Jul 9 02:23:48 2014 From: david.holmes at oracle.com (David Holmes) Date: Wed, 09 Jul 2014 12:23:48 +1000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53BC2D74.4070708@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> Message-ID: <53BCA7B4.7020201@oracle.com> Hi Lois, On 9/07/2014 3:42 AM, Lois Foltan wrote: > Hi Goetz, > > Overall this cleanup looks good. Here are specific comments per file: > > src/cpu/ppc/vm/runtime_ppc.cpp > - include nativeInst.hpp instead of nativeInst_ppc.hpp Hmmm - doesn't this go against the argument Coleen was making with regard to the other umbrella header situation? She said a platform specific file should include the platform specific header rather than the generic top-level header. I must admit I'm not completely convinced as it depends on whether the platform specific implementation calls generic functions that may or may not have a platform specific implementation. David ----- > src/cpu/sparc/vm/c1_Runtime1_sparc.cpp > - include nativeInst.hpp instead of nativeInst_sparc.hpp > - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp > (however this could pull in more code than needed since > vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) > > src/cpu/ppc/vm/stubGenerator_ppc.cpp > - change not related to clean up of umbrella headers, please > explain/justify. > > src/share/vm/code/vmreg.hpp > - Can lines #143-#15 be replaced by an inclusion of > vmreg.inline.hpp or will > this introduce a cyclical inclusion situation, since > vmreg.inline.hpp includes vmreg.hpp? > > src/share/vm/classfile/classFileStream.cpp > - only has a copyright change in the file, no other changes present? > > src/share/vm/prims/jvmtiClassFileReconstituter.cpp > - incorrect copyright, no current year? 
> > src/share/vm/opto/ad.hpp > - incorrect copyright date for a new file > > src/share/vm/code/vmreg.inline.hpp > - technically this new file does not need to include > "asm/register.hpp" since > vmreg.hpp already includes it > > My only lingering concern is the cyclical nature of > vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new > file "vmreg.inline.hpp" in favor of having files include vmreg.hpp > instead? Again since vmreg.inline.hpp includes vmreg.hpp there really > is not much difference between the two? > > Thanks, > Lois > > > On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> I decided to clean up the remaining include cascades, too. >> >> This change introduces umbrella headers for the files in the cpu >> subdirectories: >> >> src/share/vm/utilities/bytes.hpp >> src/share/vm/opto/ad.hpp >> src/share/vm/code/nativeInst.hpp >> src/share/vm/code/vmreg.inline.hpp >> src/share/vm/interpreter/interp_masm.hpp >> >> It also cleans up the include cascades for adGlobals*.hpp, >> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >> >> Where possible, this change avoids includes in headers. >> Eventually it adds a forward declaration. >> >> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >> and vmreg.hpp, so there is no obvious mapping to the shared files. >> Still, I did not split the files in the cpu directories, as they are >> rather small. >> >> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >> contains machine dependent, c2 specific register information. So I >> think optoreg.hpp is a good header to place the adGlobals_.hpp >> includes in, >> and then use optoreg.hpp where symbols from adGlobals are needed. >> >> I moved the constructor and destructor of CodeletMark to the .cpp >> file, I don't think this is performance relevant. But having them in >> the header requirs to pull interp_masm.hpp into interpreter.hpp, and >> thus all the assembler include headers into a lot of files. 
>> >> Please review and test this change. I please need a sponsor. >> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >> >> I compiled and tested this without precompiled headers on linuxx86_64, >> linuxppc64, >> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >> aixppc64, ntamd64 >> in opt, dbg and fastdbg versions. >> >> Currently, the change applies to hs-rt, but once my other change >> arrives in other >> repos, it will work there, too. (I tested it together with the other >> change >> against jdk9/dev, too.) >> >> Best regards, >> Goetz. >> >> PS: I also did all the Copyright adaptions ;) > From igor.veresov at oracle.com Wed Jul 9 06:09:10 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Tue, 8 Jul 2014 23:09:10 -0700 Subject: RFR: 8047734 : Back out use of -Og In-Reply-To: References: Message-ID: <3039CA62-2682-4901-9C3D-8B9B522FE3F1@oracle.com> Looks good. Thanks! igor On Jul 8, 2014, at 5:42 PM, Mike Duigou wrote: > I've rebased the webrev on the current hs-comp forest: > > http://cr.openjdk.java.net/~mduigou/JDK-8047734/1/webrev/ > > Mike > > On Jul 8 2014, at 17:15 , Igor Veresov wrote: > >> Hi Mike, >> >> Thanks for fixing this. Your diff against hs-comp is going to be a bit different though. We already have changes that remove Og for clang (which doesn?t support -Og at all). See http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/8cfc6ff87733 >> >> igor >> >> On Jul 8, 2014, at 3:33 PM, Mike Duigou wrote: >> >>> Hello all; >>> >>> This change backs out use of the optimized for debugging "-Og" in favour of the traditional "-O0". Initial evaluation seemed to indicate that "-Og" provided all the necessary debugging information but this has turned out to be incorrect. It seems that information is missing with the "-Og" optimization option combined with the default "-g" symbols option. More investigation is needed but that will be done in a future changeset. 
>>> >>> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047734 >>> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047734/0/webrev/ >>> >>> (diffs in the generated configure script are not part of this review) >>> >>> The changes will be pushed through the hotspot-compiler repo. >>> >>> Mike >> > From lois.foltan at oracle.com Wed Jul 9 12:37:22 2014 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 09 Jul 2014 08:37:22 -0400 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53BCA7B4.7020201@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> Message-ID: <53BD3782.3080002@oracle.com> On 7/8/2014 10:23 PM, David Holmes wrote: > Hi Lois, > > On 9/07/2014 3:42 AM, Lois Foltan wrote: >> Hi Goetz, >> >> Overall this cleanup looks good. Here are specific comments per file: >> >> src/cpu/ppc/vm/runtime_ppc.cpp >> - include nativeInst.hpp instead of nativeInst_ppc.hpp > > Hmmm - doesn't this go against the argument Coleen was making with > regard to the other umbrella header situation? She said a platform > specific file should include the platform specific header rather than > the generic top-level header. > > I must admit I'm not completely convinced as it depends on whether the > platform specific implementation calls generic functions that may or > may not have a platform specific implementation. Hi David, Yes, you are correct, I looked back to some of the email discussion from JDK-8048241. I share your thoughts on this topic but will defer since this was a recent discussion. Goetz, please disregard my comments about including .hpp instead of _platform.hpp below. I think this just affected src/cpu/ppc/vm/runtime_ppc.cpp and src/cpu/sparc/vm/c1_Runtime1_sparc.cpp. 
Thanks, Lois > > David > ----- > >> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >> - include nativeInst.hpp instead of nativeInst_sparc.hpp >> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >> (however this could pull in more code than needed since >> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >> >> src/cpu/ppc/vm/stubGenerator_ppc.cpp >> - change not related to clean up of umbrella headers, please >> explain/justify. >> >> src/share/vm/code/vmreg.hpp >> - Can lines #143-#15 be replaced by an inclusion of >> vmreg.inline.hpp or will >> this introduce a cyclical inclusion situation, since >> vmreg.inline.hpp includes vmreg.hpp? >> >> src/share/vm/classfile/classFileStream.cpp >> - only has a copyright change in the file, no other changes >> present? >> >> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >> - incorrect copyright, no current year? >> >> src/share/vm/opto/ad.hpp >> - incorrect copyright date for a new file >> >> src/share/vm/code/vmreg.inline.hpp >> - technically this new file does not need to include >> "asm/register.hpp" since >> vmreg.hpp already includes it >> >> My only lingering concern is the cyclical nature of >> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >> is not much difference between the two? >> >> Thanks, >> Lois >> >> >> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> I decided to clean up the remaining include cascades, too. 
>>> >>> This change introduces umbrella headers for the files in the cpu >>> subdirectories: >>> >>> src/share/vm/utilities/bytes.hpp >>> src/share/vm/opto/ad.hpp >>> src/share/vm/code/nativeInst.hpp >>> src/share/vm/code/vmreg.inline.hpp >>> src/share/vm/interpreter/interp_masm.hpp >>> >>> It also cleans up the include cascades for adGlobals*.hpp, >>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>> >>> Where possible, this change avoids includes in headers. >>> Eventually it adds a forward declaration. >>> >>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>> Still, I did not split the files in the cpu directories, as they are >>> rather small. >>> >>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>> contains machine dependent, c2 specific register information. So I >>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>> includes in, >>> and then use optoreg.hpp where symbols from adGlobals are needed. >>> >>> I moved the constructor and destructor of CodeletMark to the .cpp >>> file, I don't think this is performance relevant. But having them in >>> the header requirs to pull interp_masm.hpp into interpreter.hpp, and >>> thus all the assembler include headers into a lot of files. >>> >>> Please review and test this change. I please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>> >>> I compiled and tested this without precompiled headers on linuxx86_64, >>> linuxppc64, >>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>> aixppc64, ntamd64 >>> in opt, dbg and fastdbg versions. >>> >>> Currently, the change applies to hs-rt, but once my other change >>> arrives in other >>> repos, it will work there, too. (I tested it together with the other >>> change >>> against jdk9/dev, too.) >>> >>> Best regards, >>> Goetz. 
>>> >>> PS: I also did all the Copyright adaptions ;) >> From goetz.lindenmaier at sap.com Wed Jul 9 12:53:30 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Wed, 9 Jul 2014 12:53:30 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53BC2D74.4070708@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CED9BBE@DEWDFEMB12A.global.corp.sap> Hi Lois, thanks for looking at this change! In general, I did not replace xxx_.hpp by the generic header, because it's a clear 1:1 relationship. And as David says, if there is only the include cascade in the generic header, there is no point to including it. Except that maybe including the generic files would be a more consistent coding style. If it's agreed on, I'll fix this for all the headers I addressed. ... OK, in the meantime your other mail arrived ... so I'll leave it as is. The basic idea of .inline.hpp files is to avoid cycles when using inline functions. A .inline.hpp file should never be included in a .hpp file, so there will never be a cycle. The VMRegImp:: functions in vmreg_.inline.hpp actually don't depend on other inline functions, so moving them to vmreg_.hpp would be feasible. The XxxRegisterImpl:: functions must remain in a .inline.hpp file, though. Else we get a cycle between register.hpp and vmreg.hpp. If you want to, I move the code and rename the vmreg files to register... I think this would be a good cleanup, as currently the file contains implementations from two different headers, which is unusual. This is also why I placed the register.hpp include in vmreg.inline.hpp. It contains what actually should go to register.inline.hpp, so it should also contain the natural include for register.inline.hpp. 
stubGenerator_ppc.cpp: The macro removed is declared in interp_masm_ppc.hpp, and I didn't want to include it there, seems not right. So I rather removed the macro. I fixed the Copyright errors: http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ Best regards, Goetz. -----Original Message----- From: Lois Foltan [mailto:lois.foltan at oracle.com] Sent: Dienstag, 8. Juli 2014 19:42 To: Lindenmaier, Goetz Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories Hi Goetz, Overall this cleanup looks good. Here are specific comments per file: src/cpu/ppc/vm/runtime_ppc.cpp - include nativeInst.hpp instead of nativeInst_ppc.hpp src/cpu/sparc/vm/c1_Runtime1_sparc.cpp - include nativeInst.hpp instead of nativeInst_sparc.hpp - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp (however this could pull in more code than needed since vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) src/cpu/ppc/vm/stubGenerator_ppc.cpp - change not related to clean up of umbrella headers, please explain/justify. src/share/vm/code/vmreg.hpp - Can lines #143-#15 be replaced by an inclusion of vmreg.inline.hpp or will this introduce a cyclical inclusion situation, since vmreg.inline.hpp includes vmreg.hpp? src/share/vm/classfile/classFileStream.cpp - only has a copyright change in the file, no other changes present? src/share/vm/prims/jvmtiClassFileReconstituter.cpp - incorrect copyright, no current year? src/share/vm/opto/ad.hpp - incorrect copyright date for a new file src/share/vm/code/vmreg.inline.hpp - technically this new file does not need to include "asm/register.hpp" since vmreg.hpp already includes it My only lingering concern is the cyclical nature of vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new file "vmreg.inline.hpp" in favor of having files include vmreg.hpp instead? 
Again since vmreg.inline.hpp includes vmreg.hpp there really is not much difference between the two? Thanks, Lois On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: > Hi, > > I decided to clean up the remaining include cascades, too. > > This change introduces umbrella headers for the files in the cpu subdirectories: > > src/share/vm/utilities/bytes.hpp > src/share/vm/opto/ad.hpp > src/share/vm/code/nativeInst.hpp > src/share/vm/code/vmreg.inline.hpp > src/share/vm/interpreter/interp_masm.hpp > > It also cleans up the include cascades for adGlobals*.hpp, > jniTypes*.hpp, vm_version*.hpp and register*.hpp. > > Where possible, this change avoids includes in headers. > Eventually it adds a forward declaration. > > vmreg_.inline.hpp contains functions declared in register_cpu.hpp > and vmreg.hpp, so there is no obvious mapping to the shared files. > Still, I did not split the files in the cpu directories, as they are > rather small. > > I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly > contains machine dependent, c2 specific register information. So I > think optoreg.hpp is a good header to place the adGlobals_.hpp includes in, > and then use optoreg.hpp where symbols from adGlobals are needed. > > I moved the constructor and destructor of CodeletMark to the .cpp > file, I don't think this is performance relevant. But having them in > the header requirs to pull interp_masm.hpp into interpreter.hpp, and > thus all the assembler include headers into a lot of files. > > Please review and test this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ > > I compiled and tested this without precompiled headers on linuxx86_64, linuxppc64, > windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, aixppc64, ntamd64 > in opt, dbg and fastdbg versions. > > Currently, the change applies to hs-rt, but once my other change arrives in other > repos, it will work there, too. 
(I tested it together with the other change > against jdk9/dev, too.) > > Best regards, > Goetz. > > PS: I also did all the Copyright adaptions ;) From david.holmes at oracle.com Wed Jul 9 12:58:54 2014 From: david.holmes at oracle.com (David Holmes) Date: Wed, 09 Jul 2014 22:58:54 +1000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53BD3782.3080002@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> Message-ID: <53BD3C8E.5070804@oracle.com> On 9/07/2014 10:37 PM, Lois Foltan wrote: > > On 7/8/2014 10:23 PM, David Holmes wrote: >> Hi Lois, >> >> On 9/07/2014 3:42 AM, Lois Foltan wrote: >>> Hi Goetz, >>> >>> Overall this cleanup looks good. Here are specific comments per file: >>> >>> src/cpu/ppc/vm/runtime_ppc.cpp >>> - include nativeInst.hpp instead of nativeInst_ppc.hpp >> >> Hmmm - doesn't this go against the argument Coleen was making with >> regard to the other umbrella header situation? She said a platform >> specific file should include the platform specific header rather than >> the generic top-level header. >> >> I must admit I'm not completely convinced as it depends on whether the >> platform specific implementation calls generic functions that may or >> may not have a platform specific implementation. > > Hi David, > > Yes, you are correct, I looked back to some of the email discussion from > JDK-8048241. I share your thoughts on this topic but will defer since > this was a recent discussion. Goetz, please disregard my comments about > including .hpp instead of _platform.hpp below. I think this > just affected src/cpu/ppc/vm/runtime_ppc.cpp and > src/cpu/sparc/vm/c1_Runtime1_sparc.cpp. I'd like to get some clarification on this just to know how to structure things. 
Given: foo.hpp foo.inline.hpp foo.platform.inline.hpp which of the above should #include which others, and which should be #include'd by "client" code? Thanks, David > Thanks, > Lois > >> >> David >> ----- >> >>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>> (however this could pull in more code than needed since >>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>> >>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>> - change not related to clean up of umbrella headers, please >>> explain/justify. >>> >>> src/share/vm/code/vmreg.hpp >>> - Can lines #143-#15 be replaced by an inclusion of >>> vmreg.inline.hpp or will >>> this introduce a cyclical inclusion situation, since >>> vmreg.inline.hpp includes vmreg.hpp? >>> >>> src/share/vm/classfile/classFileStream.cpp >>> - only has a copyright change in the file, no other changes >>> present? >>> >>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>> - incorrect copyright, no current year? >>> >>> src/share/vm/opto/ad.hpp >>> - incorrect copyright date for a new file >>> >>> src/share/vm/code/vmreg.inline.hpp >>> - technically this new file does not need to include >>> "asm/register.hpp" since >>> vmreg.hpp already includes it >>> >>> My only lingering concern is the cyclical nature of >>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>> is not much difference between the two? >>> >>> Thanks, >>> Lois >>> >>> >>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>> Hi, >>>> >>>> I decided to clean up the remaining include cascades, too. 
>>>> >>>> This change introduces umbrella headers for the files in the cpu >>>> subdirectories: >>>> >>>> src/share/vm/utilities/bytes.hpp >>>> src/share/vm/opto/ad.hpp >>>> src/share/vm/code/nativeInst.hpp >>>> src/share/vm/code/vmreg.inline.hpp >>>> src/share/vm/interpreter/interp_masm.hpp >>>> >>>> It also cleans up the include cascades for adGlobals*.hpp, >>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>> >>>> Where possible, this change avoids includes in headers. >>>> Eventually it adds a forward declaration. >>>> >>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>> Still, I did not split the files in the cpu directories, as they are >>>> rather small. >>>> >>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>> contains machine dependent, c2 specific register information. So I >>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>> includes in, >>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>> >>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>> file, I don't think this is performance relevant. But having them in >>>> the header requirs to pull interp_masm.hpp into interpreter.hpp, and >>>> thus all the assembler include headers into a lot of files. >>>> >>>> Please review and test this change. I please need a sponsor. >>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>> >>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>> linuxppc64, >>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>> aixppc64, ntamd64 >>>> in opt, dbg and fastdbg versions. >>>> >>>> Currently, the change applies to hs-rt, but once my other change >>>> arrives in other >>>> repos, it will work there, too. (I tested it together with the other >>>> change >>>> against jdk9/dev, too.) >>>> >>>> Best regards, >>>> Goetz. 
>>>> >>>> PS: I also did all the Copyright adaptions ;) >>> > From goetz.lindenmaier at sap.com Wed Jul 9 14:03:58 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Wed, 9 Jul 2014 14:03:58 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53BD3C8E.5070804@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> Hi, foo.hpp as few includes as possible, to avoid cycles. foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp (either directly or via the platform files.) * should include foo.platform.inline.hpp, so that shared files that call functions from foo.platform.inline.hpp need not contain the cascade of all the platform files. If code in foo.platform.inline.hpp is only used in the platform files, it is not necessary to have an umbrella header. foo.platform.inline.hpp Should include what is needed in its code. For client code: With this change I now removed all include cascades of platform files except for those in the 'natural' headers. Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp headers, but include bar.[inline.]hpp.) If it's 1:1, I don't care, as discussed before. Does this make sense? Best regards, Goetz. which of the above should #include which others, and which should be #include'd by "client" code? 
From volker.simonis at gmail.com Wed Jul 9 17:36:21 2014
From: volker.simonis at gmail.com (Volker Simonis)
Date: Wed, 9 Jul 2014 19:36:21 +0200
Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64
Message-ID:

Hi,

could someone please review and sponsor the following change, which does some preliminary work for enabling the SA agent on Linux/PPC64:

http://cr.openjdk.java.net/~simonis/webrevs/8049715/
https://bugs.openjdk.java.net/browse/JDK-8049715

Details:

Currently, we don't support the SA agent on Linux/PPC64. This change fixes the build system such that the SA libraries (i.e. libsaproc.so and sa-jdi.jar) will be correctly built and copied into the resulting jdk images.

This change also contains some small fixes in sa-jdi.jar to correctly detect Linux/PPC64 as a supported SA platform. (The actual implementation of the Linux/PPC64-specific code will be handled by "8049716 PPC64: Implement SA on Linux/PPC64" - https://bugs.openjdk.java.net/browse/JDK-8049716).

One thing which requires special attention is the changes in make/linux/makefiles/defs.make, which may touch the closed ppc port. In my change I've simply added 'ppc' to the list of supported architectures, but this may break the 32-bit ppc build. I think the current code is too verbose and error-prone anyway. It would be better to have something like:

ADD_SA_BINARIES = $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) $(EXPORT_LIB_DIR)/sa-jdi.jar

ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1)
  ifeq ($(ZIP_DEBUGINFO_FILES),1)
    ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz
  else
    ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo
  endif
endif

ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 ppc64))
  EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH))
endif

With this solution we only define ADD_SA_BINARIES once (because the various definitions for the different platforms are equal anyway).
But again this may affect other closed ports so please advise which solution you'd prefer. Notice that this change also requires a tiny fix in the top-level repository which must be pushed AFTER this change. Thank you and best regards, Volker From lois.foltan at oracle.com Wed Jul 9 21:31:20 2014 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 09 Jul 2014 17:31:20 -0400 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CED9BBE@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BBE@DEWDFEMB12A.global.corp.sap> Message-ID: <53BDB4A8.4090001@oracle.com> On 7/9/2014 8:53 AM, Lindenmaier, Goetz wrote: > Hi Lois, > > thanks for looking at this change! > > In general, I did not replace xxx_.hpp by the generic header, > because it's a clear 1:1 relationship. And as David says, if there is > only the include cascade in the generic header, there is no point to > including it. Except that maybe including the generic files would > be a more consistent coding style. > If it's agreed on, I'll fix this for all the headers I addressed. > ... OK, in the meantime your other mail arrived ... so I'll leave it > as is. > > The basic idea of .inline.hpp files is to avoid cycles when using inline > functions. A .inline.hpp file should never be included in a .hpp file, so there > will never be a cycle. > > The VMRegImp:: functions in vmreg_.inline.hpp actually don't depend on > other inline functions, so moving them to vmreg_.hpp would be > feasible. The XxxRegisterImpl:: functions must remain in a .inline.hpp > file, though. Else we get a cycle between register.hpp and vmreg.hpp. > If you want to, I move the code and rename the vmreg files to register... 
> I think this would be a good cleanup, as currently the file contains > implementations from two different headers, which is unusual. > > This is also why I placed the register.hpp include in vmreg.inline.hpp. It > contains what actually should go to register.inline.hpp, so it should also contain > the natural include for register.inline.hpp. Hi Goetz, Thank you for the further explanation with regards to vmreg.inline.hpp. I now understand why your included register.hpp in vmreg.inline.hpp but I still think it is unneccessary since vmreg.inline.hpp includes vmreg.hpp which includes register.hpp as well. > > stubGenerator_ppc.cpp: The macro removed is declared in > interp_masm_ppc.hpp, and I didn't want to include it there, seems > not right. So I rather removed the macro. Ok, got it! > > I fixed the Copyright errors: > http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ Thank you. Just a very minor nit, the copyright on the new file src/share/vm/opto/ad.hpp still seems incorrect. It is "2000, 2014,". Shouldn't it be just "2014," since this is a new file? Overall I am good with your changes, reviewed! Lois > > Best regards, > Goetz. > > -----Original Message----- > From: Lois Foltan [mailto:lois.foltan at oracle.com] > Sent: Dienstag, 8. Juli 2014 19:42 > To: Lindenmaier, Goetz > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > Hi Goetz, > > Overall this cleanup looks good. 
>> >> PS: I also did all the Copyright adaptions ;) From vladimir.danushevsky at oracle.com Wed Jul 9 21:31:50 2014 From: vladimir.danushevsky at oracle.com (Vladimir Danushevsky) Date: Wed, 9 Jul 2014 17:31:50 -0400 Subject: RFR: 8049776: Implement C1 AES/CBC crypto acceleration framework Message-ID: This patch adds a platform independent framework to support AES/CBC encryption/decryption intrinsics in C1. Platform specific changes that implement the support are not covered by current patch. The platform support is enabled to implementing LIRGenerator::do_CryptoIntrinsics() and LIR_Assembler::emit_crypto_cbc_aes() along with setting Abstract_VM_Version::_supports_crypto_acceleration_client true in the corresponding architecture port. Webrev: http://cr.openjdk.java.net/~vladidan/8049776/webrev.00/ GraphBuilder::append_crypto_cbc_aes generates an InstanceOf check to confirm the cipher being utilized is indeed AES. LIRAssembler code should perform the InstanceOf result check and jump ether into an AES/CBC intrinsic stub or a virtual call stub. RFE: https://bugs.openjdk.java.net/browse/JDK-8049776 Thanks, Vlad From vladimir.kozlov at oracle.com Wed Jul 9 23:56:53 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 09 Jul 2014 16:56:53 -0700 Subject: [8u40] 8042195, 8042737 : Introduce umbrella header Message-ID: <53BDD6C5.8080002@oracle.com> 8u40 backport request. Fixes were pushed into jdk9 2 months ago and nightly testing shows no problems. 8042195: Introduce umbrella header orderAccess.inline.hpp. 8042737: Introduce umbrella header prefetch.inline.hpp Changes (including closed) from jdk9 applied to 8u without conflicts. I also ran JPRT test job to verify backport. 
http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/2377269bd73d http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/4cc7fe54e0e1 https://bugs.openjdk.java.net/browse/JDK-8042195 https://bugs.openjdk.java.net/browse/JDK-8042737 Thanks, Vladimir From david.holmes at oracle.com Thu Jul 10 02:41:33 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 10 Jul 2014 12:41:33 +1000 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: References: Message-ID: <53BDFD5D.4050908@oracle.com> Hi Volker, Comments below where you might expect them :) On 10/07/2014 3:36 AM, Volker Simonis wrote: > Hi, > > could someone please review and sponsor the following change which > does some preliminary work for enabling the SA agent on Linux/PPC64: > > http://cr.openjdk.java.net/~simonis/webrevs/8049715/ > https://bugs.openjdk.java.net/browse/JDK-8049715 > > Details: > > Currently, we don't support the SA agent on Linux/PPC64. This change > fixes the buildsystem such that the SA libraries (i.e. libsaproc.so > and sa-jdi.jar) will be correctly build and copied into the resulting > jdk images. > > This change also contains some small fixes in sa-jdi.jar to correctly > detect Linux/PPC64 as supported SA platform. (The actual > implementation of the Linux/PPC64 specific code will be handled by > "8049716 PPC64: Implement SA on Linux/PPC64" - > https://bugs.openjdk.java.net/browse/JDK-8049716). > > One thing which require special attention are the changes in > make/linux/makefiles/defs.make which may touch the closed ppc port. In > my change I've simply added 'ppc' to the list of supported > architectures, but this may break the 32-bit ppc build. I think the It wouldn't break it but I was expecting to see ppc64 here. > current code is to verbose and error prone anyway. 
It would be better > to have something like: > > ADD_SA_BINARIES = > $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) > $(EXPORT_LIB_DIR)/sa-jdi.jar > > ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) > ifeq ($(ZIP_DEBUGINFO_FILES),1) > ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz > else > ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo > endif > endif > > ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 ppc64)) > EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) You wouldn't need/want the $(HS_ARCH) there. > endif > > With this solution we only define ADD_SA_BINARIES once (because the > various definitions for the different platforms are equal anyway). But > again this may affect other closed ports so please advise which > solution you'd prefer. The above is problematic for customizations. An alternative would be to set ADD_SA_BINARIES/default once with all the file names. Then: ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default) # No SA Support for IA64 or zero ifneq (, $(findstring $(ARCH), ia64, zero)) ADD_SA_BINARIES/$(ARCH) = Each ARCH handled elsewhere would then still set ADD_SA_BINARIES/$(ARCH) if needed. Does that seem reasonable? > Notice that this change also requires a tiny fix in the top-level > repository which must be pushed AFTER this change. Can you elaborate please? Thanks, David > Thank you and best regards, > Volker > From goetz.lindenmaier at sap.com Thu Jul 10 07:00:04 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 10 Jul 2014 07:00:04 +0000 Subject: [8u40] 8042195, 8042737 : Introduce umbrella header In-Reply-To: <53BDD6C5.8080002@oracle.com> References: <53BDD6C5.8080002@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CED9D0D@DEWDFEMB12A.global.corp.sap> Thanks for doing this! Best regards, Goetz. -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Donnerstag, 10. 
Juli 2014 01:57 To: hotspot-dev at openjdk.java.net Cc: Lindenmaier, Goetz Subject: [8u40] 8042195, 8042737 : Introduce umbrella header 8u40 backport request. Fixes were pushed into jdk9 2 months ago and nightly testing shows no problems. 8042195: Introduce umbrella header orderAccess.inline.hpp. 8042737: Introduce umbrella header prefetch.inline.hpp Changes (including closed) from jdk9 applied to 8u without conflicts. I also ran JPRT test job to verify backport. http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/2377269bd73d http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/4cc7fe54e0e1 https://bugs.openjdk.java.net/browse/JDK-8042195 https://bugs.openjdk.java.net/browse/JDK-8042737 Thanks, Vladimir From volker.simonis at gmail.com Thu Jul 10 10:12:55 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 10 Jul 2014 12:12:55 +0200 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: <53BDFD5D.4050908@oracle.com> References: <53BDFD5D.4050908@oracle.com> Message-ID: Hi David, thanks for looking at this. Here's my new version of the change with some of your suggestions applied: http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 Please find more information inline: On Thu, Jul 10, 2014 at 4:41 AM, David Holmes wrote: > Hi Volker, > > Comments below where you might expect them :) > > > On 10/07/2014 3:36 AM, Volker Simonis wrote: >> >> Hi, >> >> could someone please review and sponsor the following change which >> does some preliminary work for enabling the SA agent on Linux/PPC64: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8049715/ >> https://bugs.openjdk.java.net/browse/JDK-8049715 >> >> Details: >> >> Currently, we don't support the SA agent on Linux/PPC64. This change >> fixes the buildsystem such that the SA libraries (i.e. libsaproc.so >> and sa-jdi.jar) will be correctly build and copied into the resulting >> jdk images. 
>> >> This change also contains some small fixes in sa-jdi.jar to correctly >> detect Linux/PPC64 as supported SA platform. (The actual >> implementation of the Linux/PPC64 specific code will be handled by >> "8049716 PPC64: Implement SA on Linux/PPC64" - >> https://bugs.openjdk.java.net/browse/JDK-8049716). >> >> One thing which require special attention are the changes in >> make/linux/makefiles/defs.make which may touch the closed ppc port. In >> my change I've simply added 'ppc' to the list of supported >> architectures, but this may break the 32-bit ppc build. I think the > > > It wouldn't break it but I was expecting to see ppc64 here. > The problem is that currently the decision if the SA agent will be build is based on the value of HS_ARCH. But HS_ARCH is the 'basic architecture' (i.e. x86 or sparc) so there's no easy way to choose the SA agent for only a 64-bit platform (like ppc64 or amd64) and not for its 32-bit counterpart (i.e. i386 or ppc). The only possibility with the current solution would be to only conditionally set ADD_SA_BINARIES/ppc if ARCH_DATA_MODEL is 64. But that wouldn't make the code nicer either:) > >> current code is to verbose and error prone anyway. It would be better >> to have something like: >> >> ADD_SA_BINARIES = >> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) >> $(EXPORT_LIB_DIR)/sa-jdi.jar >> >> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >> ifeq ($(ZIP_DEBUGINFO_FILES),1) >> ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >> else >> ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >> endif >> endif >> >> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 ppc64)) >> EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) > > > You wouldn't need/want the $(HS_ARCH) there. > Sorry, that was a type of course. It should read: ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 ppc64)) EXPORT_LIST += $(ADD_SA_BINARIES) But that's not necessary now anymore (see new version below). 
> >> endif
> >>
> >> With this solution we only define ADD_SA_BINARIES once (because the
> >> various definitions for the different platforms are equal anyway). But
> >> again this may affect other closed ports so please advise which
> >> solution you'd prefer.
>
> The above is problematic for customizations. An alternative would be to set
> ADD_SA_BINARIES/default once with all the file names. Then:
>
> ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default)
> # No SA Support for IA64 or zero
> ifneq (, $(findstring $(ARCH), ia64, zero))
> ADD_SA_BINARIES/$(ARCH) =
>
> Each ARCH handled elsewhere would then still set ADD_SA_BINARIES/$(ARCH) if
> needed.
>
> Does that seem reasonable?
>

The problem with using ARCH is that it is not "reliable" in the sense that its value differs between top-level and hotspot-only builds. See "8046471: Use OPENJDK_TARGET_CPU_ARCH instead of legacy value for hotspot ARCH" and my fix "8048232: Fix for 8046471 breaks PPC64 build". But using ADD_SA_BINARIES/default to save redundant lines is a good idea. I've updated the patch accordingly and think that the new solution is a good compromise between readability and not touching the existing/closed parts.

Are you fine with the new version at http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 ?

> >> Notice that this change also requires a tiny fix in the top-level
> >> repository which must be pushed AFTER this change.
>
> Can you elaborate please?
>

I've also submitted the corresponding top-level repository change for review, which expects to find the SA agent libraries on Linux/ppc64 in order to copy them into the image directory:

http://cr.openjdk.java.net/~simonis/webrevs/8049715_top_level/

But once that is pushed, the build will fail if these HotSpot changes are not in place to actually build the libraries.
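The ADD_SA_BINARIES/default scheme discussed here can be tried out with a scratch makefile. The library names and the /tmp path below are placeholders, not the real defs.make contents; $(info ...) is used instead of an echo recipe so the fragment needs no literal tab characters:

```shell
cat > /tmp/sa_defs.mk <<'EOF'
# Shared default list, defined once.
ADD_SA_BINARIES/default := libsaproc.so sa-jdi.jar
# Every arch starts from the default...
ADD_SA_BINARIES/$(ARCH) := $(ADD_SA_BINARIES/default)
# ...and arches without SA support clear it again.
# Note findstring's argument order: the string to find comes first.
ifneq (,$(findstring $(ARCH), ia64 zero))
  ADD_SA_BINARIES/$(ARCH) :=
endif
$(info $(ARCH): [$(ADD_SA_BINARIES/$(ARCH))])
all: ;
EOF
make -s -f /tmp/sa_defs.mk ARCH=ppc64   # ppc64: [libsaproc.so sa-jdi.jar]
make -s -f /tmp/sa_defs.mk ARCH=ia64    # ia64: []
```

Each arch-specific block elsewhere in the makefiles could then still override its own ADD_SA_BINARIES/$(ARCH), which is what keeps the scheme friendly to closed customizations.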
> Thanks, > David > > >> Thank you and best regards, >> Volker >> > From roland.westrelin at oracle.com Thu Jul 10 14:37:38 2014 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Thu, 10 Jul 2014 16:37:38 +0200 Subject: RFR: 8049776: Implement C1 AES/CBC crypto acceleration framework In-Reply-To: References: Message-ID: <75FC5F32-36B0-4423-BF23-6C583E74AEFB@oracle.com> > http://cr.openjdk.java.net/~vladidan/8049776/webrev.00/ GraphBuilder::try_inline_intrinsics() does profiling (lines 3654 - 3669 and line 3678). append_crypto_cbc_aes() doesn?t. You should probably move the profiling code lines 3654 - 3669 and line 3678 in their own methods and call them from append_crypto_cbc_aes(). In GraphBuilder::append_crypto_cbc_aes(), InstanceOf should be passed NULL instead of copy_state_exhandling(). Otherwise, it looks good to me. Roland. From mikael.vidstedt at oracle.com Thu Jul 10 20:06:35 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Thu, 10 Jul 2014 13:06:35 -0700 Subject: RFR(S): 8049071: Add jtreg jobs to JPRT for Hotspot In-Reply-To: <53BB294A.8040801@oracle.com> References: <53B4AD05.3070702@oracle.com> <53B631B3.6090505@oracle.com> <53B63BB5.8090602@oracle.com> <53BB294A.8040801@oracle.com> Message-ID: <53BEF24B.2090600@oracle.com> Anybody? Pleeeease? Cheers, Mikael On 2014-07-07 16:12, Mikael Vidstedt wrote: > > Fixed the comment, removed the loop (the loop logic is btw taken > directly from jdk/test/Makefile, but I'll follow up on a fix for that > separately). > > Anybody else want to have a look? 
> > top: > http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.01/top/webrev/ > hotspot: > http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.01/hotspot/webrev/ > > Thanks, > Mikael > > On 2014-07-03 22:29, David Holmes wrote: >> On 4/07/2014 2:46 PM, David Holmes wrote: >>> Hi Mikael, >>> >>> Generally looks okay - took me a minute to remember that jtreg groups >>> combine as set unions :) >>> >>> A couple of things: >>> >>> 226 # Unless explicitly defined below, hotspot_ is interpreted as >>> the >>> jtreg test group >>> >>> The jtreg group is actually called hotspot_ >>> >>> 227 hotspot_%: >>> 228 $(ECHO) "Running tests: $@" >>> 229 for each in $@; do \ >>> 230 $(MAKE) -j 1 TEST_SELECTION=":$$each" >>> UNIQUE_DIR=$$each jtreg_tests; \ >>> 231 done >>> >>> While hotspot_% can match multiple targets, each target will be distinct >>> - ie $@ will only ever have a single value and the for loop will only >>> execute once - and hence is unnecessary. This seems borne out with a >>> simple test: >>> >>> > cat Makefile >>> hotspot_%: >>> @echo "Running tests: $@" >>> @for each in $@; do \ >>> echo $$each ;\ >>> done >>> >>> > make hotspot_a hotspot_b >>> Running tests: hotspot_a >>> hotspot_a >>> Running tests: hotspot_b >>> hotspot_b >> >> Though if you have a quoting issue with the invocation: >> >> > make "hotspot_a hotspot_b" >> Running tests: hotspot_a hotspot_b >> hotspot_a >> hotspot_b >> >> things turn out different. >> >> David >> >> >>> Cheers, >>> David >>> >>> On 3/07/2014 11:08 AM, Mikael Vidstedt wrote: >>>> >>>> Please review this enhancement which adds the scaffolding needed to >>>> run >>>> the hotspot jtreg tests in JPRT.
>>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8049071 >>>> Webrev (/): >>>> http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/top/webrev/ >>>> >>>> Webrev (hotspot/): >>>> http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/hotspot/webrev/ >>>> >>>> >>>> >>>> >>>> Summary: >>>> >>>> We want to run the hotspot regression tests on every hotspot push. >>>> This >>>> change enables this and adds four new test groups to the set of tests >>>> being run on hotspot pushes. The new test sets still need to be >>>> populated. >>>> >>>> Narrative: >>>> >>>> The majority of the changes are in the hotspot/test/Makefile. The >>>> changes are almost entirely stolen from jdk/test/Makefile but have >>>> been >>>> massaged to support (at least) three different use cases, two of which >>>> were supported earlier: >>>> >>>> 1. Running the non-jtreg tests (servertest, clienttest and >>>> internalvmtests), also supporting the use of the "hotspot_" for >>>> when the >>>> tests are invoked from the JDK top level >>>> 2. Running jtreg tests by selecting tests to run using the TESTDIRS >>>> variable >>>> 3. Running jtreg tests by selecting the test group to run (NEW) >>>> >>>> The third/new use case is implemented by making any target named >>>> hotspot_% *except* the ones listed in 1. lead to the corresponding >>>> jtreg >>>> test group in TEST.groups being run. For example, running "make >>>> hotspot_gc" leads to all the tests in the hotspot_gc test group in >>>> TEST.groups being run and so on. >>>> >>>> I also removed the packtest targets, because as far as I can tell >>>> they're not used anyway. >>>> >>>> Note that the new component test groups in TEST.groups - >>>> hotspot_compiler, hotspot_gc, hotspot_runtime and >>>> hotspot_serviceability >>>> - are currently empty, or more precisely they only run a single test >>>> each. The intention is that these should be populated by the >>>> respective >>>> teams to include stable and relatively fast tests.
Tests added to the >>>> groups will be run on hotspot push jobs, and therefore will be >>>> blocking >>>> pushes in case they fail. >>>> >>>> Cheers, >>>> Mikael >>>> > From vladimir.kozlov at oracle.com Thu Jul 10 21:28:57 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 10 Jul 2014 14:28:57 -0700 Subject: Please look at my JEP In-Reply-To: <53B3BEBE.6060103@redhat.com> References: <53A2F354.40606@redhat.com> <53B3BEBE.6060103@redhat.com> Message-ID: <53BF0599.9010205@oracle.com> The proposed JEP is fine from my view. I added myself as reviewer to the JEP and sent it to Mikael for Hotspot architect review. It would be helpful if you can estimate how many patches you will have to understand what Oracle efforts/resources would be needed (reviews, preparing closed changes, pushes). As David pointed out, we in Oracle are also learning the JEP 2.0 process. Small note. There is no more HSX or Hotspot project for jdk9. We have only one jdk9 project with Mark Reinhold as lead. He is final judge of all jdk9 JEPs. Thanks, Vladimir On 7/2/14 1:11 AM, Andrew Haley wrote: > Hi everybody, > > Please can someone review my JEP? > > It's very simple, and until we can get things moving this is > blocking a significant contribution to OpenJDK. > > https://bugs.openjdk.java.net/browse/JDK-8044552 > > Thanks, > Andrew. > > > > On 19/06/14 15:27, Andrew Haley wrote: >> The JEP is here: >> >> https://bugs.openjdk.java.net/browse/JDK-8044552 >> >> As you may know, we've been working on this port for some time. >> It is now at the stage where it may be considered for inclusion >> in OpenJDK. It passes all its tests, and although there is still >> some tidying up to do, I think we should move to the next stage. >> >> Thanks, >> Andrew.
>> > From mikael.vidstedt at oracle.com Fri Jul 11 00:21:37 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Thu, 10 Jul 2014 17:21:37 -0700 Subject: Please look at my JEP In-Reply-To: <53B3BEBE.6060103@redhat.com> References: <53A2F354.40606@redhat.com> <53B3BEBE.6060103@redhat.com> Message-ID: <53BF2E11.8010700@oracle.com> Andrew, I looked through the JEP and have a few minor comments. In the "Risks and Assumptions" section: "Fort the PPC port Oracle created a staging repository to contain the changes that had been reviewed and approved." I suggest a rephrase of this sentence to the following: Similar to the PPC/AIX port, a staging forest owned by the AArch64 Port Project will be created (e.g. aarch64-port/stage) to contain changesets that have been Reviewed and approved. I think that there may be a misunderstanding here: "Oracle also created a private hudson instance for the staging repository, to build and test the changes. We would like something similar to happen for this project, but this has not yet been agreed and requires hardware to be provided." For the PPC/AIX port, the Oracle instance only built and tested Oracle-supported configurations to understand impact of the PPC/AIX changes. SAP and IBM were (and continue to be) responsible for building and testing the PPC/AIX port. Oracle will use the same approach for AArch64 so providing hardware to Oracle will not be necessary. In the "Dependencies" section: "AArch64 hardware and operating system software. Red Hat will provide the latter." Given the above, I don't believe that this is applicable. After you make those changes, I'll list myself as a Reviewer and Endorser. The next step will be for you to move the JEP to "Submitted". Cheers, Mikael On 2014-07-02 01:11, Andrew Haley wrote: > Hi everybody, > > Please can someone review my JEP? > > It's very simple, and until we can get things moving this is > blocking a significant contribution to OpenJDK. 
> > https://bugs.openjdk.java.net/browse/JDK-8044552 > > Thanks, > Andrew. > > > > On 19/06/14 15:27, Andrew Haley wrote: >> The JEP is here: >> >> https://bugs.openjdk.java.net/browse/JDK-8044552 >> >> As you may know, we've been working on this port for some time. >> It is now at the stage where it may be considered for inclusion >> in OpenJDK. It passes all its tests, and although there is still >> some tidying up to do, I think we should move to the next stage. >> >> Thanks, >> Andrew. >> From david.holmes at oracle.com Fri Jul 11 04:36:44 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 11 Jul 2014 14:36:44 +1000 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: References: <53BDFD5D.4050908@oracle.com> Message-ID: <53BF69DC.9010305@oracle.com> Hi Volker, On 10/07/2014 8:12 PM, Volker Simonis wrote: > Hi David, > > thanks for looking at this. Here's my new version of the change with > some of your suggestions applied: > > http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 I have a simpler counter proposal (also default -> DEFAULT as that seems to be the style): # Serviceability Binaries ADD_SA_BINARIES/DEFAULT = $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) \ $(EXPORT_LIB_DIR)/sa-jdi.jar ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) ifeq ($(ZIP_DEBUGINFO_FILES),1) ADD_SA_BINARIES/DEFAULT += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz else ADD_SA_BINARIES/DEFAULT += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo endif endif ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT) # No SA Support for IA64 or zero ADD_SA_BINARIES/ia64 = ADD_SA_BINARIES/zero = --- The open logic only has to worry about open platforms. The custom makefile can accept the default or override as it desires. I thought about conditionally setting ADD_SA_BINARIES/$(HS_ARCH) but the above is simple and clear. Ok? I'll sponsor this one of course (so it's safe for other reviewers to jump in now :) ).
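For readers less familiar with the make idiom in the counter proposal: ADD_SA_BINARIES/DEFAULT and ADD_SA_BINARIES/$(HS_ARCH) are ordinary make variables whose names happen to contain a slash, so the scheme amounts to a per-architecture lookup table with a shared default entry. A rough Python model of that lookup (illustrative file names only; the real lists live in defs.make):

```python
# Rough model of the ADD_SA_BINARIES/<arch> scheme: one shared DEFAULT
# entry, copied to each architecture, with the SA-less architectures
# (ia64, zero) overriding their entry with nothing.
default = ["libsaproc.so", "sa-jdi.jar"]  # illustrative names only

add_sa_binaries = {"DEFAULT": default}
for hs_arch in ("x86", "sparc", "ppc", "ia64", "zero"):
    add_sa_binaries[hs_arch] = add_sa_binaries["DEFAULT"]

# No SA Support for IA64 or zero
add_sa_binaries["ia64"] = []
add_sa_binaries["zero"] = []

# EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) then simply picks the
# entry for the architecture being built:
assert add_sa_binaries["ppc"] == ["libsaproc.so", "sa-jdi.jar"]
assert add_sa_binaries["ia64"] == []
```

A custom (closed) makefile can override its architecture's entry after this point, which is exactly the flexibility the proposal is after.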
Thanks, David > Please find more information inline: > > On Thu, Jul 10, 2014 at 4:41 AM, David Holmes wrote: >> Hi Volker, >> >> Comments below where you might expect them :) >> >> >> On 10/07/2014 3:36 AM, Volker Simonis wrote: >>> >>> Hi, >>> >>> could someone please review and sponsor the following change which >>> does some preliminary work for enabling the SA agent on Linux/PPC64: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715/ >>> https://bugs.openjdk.java.net/browse/JDK-8049715 >>> >>> Details: >>> >>> Currently, we don't support the SA agent on Linux/PPC64. This change >>> fixes the buildsystem such that the SA libraries (i.e. libsaproc.so >>> and sa-jdi.jar) will be correctly built and copied into the resulting >>> jdk images. >>> >>> This change also contains some small fixes in sa-jdi.jar to correctly >>> detect Linux/PPC64 as supported SA platform. (The actual >>> implementation of the Linux/PPC64 specific code will be handled by >>> "8049716 PPC64: Implement SA on Linux/PPC64" - >>> https://bugs.openjdk.java.net/browse/JDK-8049716). >>> >>> One thing which requires special attention is the changes in >>> make/linux/makefiles/defs.make which may touch the closed ppc port. In >>> my change I've simply added 'ppc' to the list of supported >>> architectures, but this may break the 32-bit ppc build. I think the >> >> >> It wouldn't break it but I was expecting to see ppc64 here. >> > > The problem is that currently the decision if the SA agent will be > built is based on the value of HS_ARCH. But HS_ARCH is the 'basic > architecture' (i.e. x86 or sparc) so there's no easy way to choose the > SA agent for only a 64-bit platform (like ppc64 or amd64) and not for > its 32-bit counterpart (i.e. i386 or ppc). > > The only possibility with the current solution would be to only > conditionally set ADD_SA_BINARIES/ppc if ARCH_DATA_MODEL is 64. But > that wouldn't make the code nicer either:) > >> >>> current code is too verbose and error prone anyway.
It would be better >>> to have something like: >>> >>> ADD_SA_BINARIES = >>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) >>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>> >>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>> ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>> else >>> ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>> endif >>> endif >>> >>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 ppc64)) >>> EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) >> >> >> You wouldn't need/want the $(HS_ARCH) there. >> > > Sorry, that was a typo of course. It should read: > > ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 ppc64)) > EXPORT_LIST += $(ADD_SA_BINARIES) > > But that's not necessary now anymore (see new version below). > >> >>> endif >>> >>> With this solution we only define ADD_SA_BINARIES once (because the >>> various definitions for the different platforms are equal anyway). But >>> again this may affect other closed ports so please advise which >>> solution you'd prefer. >> >> >> The above is problematic for customizations. An alternative would be to set >> ADD_SA_BINARIES/default once with all the file names. Then: >> >> ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default) >> # No SA Support for IA64 or zero >> ifneq (, $(findstring $(ARCH), ia64, zero)) >> ADD_SA_BINARIES/$(ARCH) = >> >> Each ARCH handled elsewhere would then still set ADD_SA_BINARIES/$(ARCH) if >> needed. >> >> Does that seem reasonable? >> > > The problem with using ARCH is that it is not "reliable" in the sense > that its value differs for top-level and hotspot-only makes. See > "8046471: Use OPENJDK_TARGET_CPU_ARCH instead of legacy value for > hotspot ARCH" and my fix "8048232: Fix for 8046471 breaks PPC64 > build". > > But using ADD_SA_BINARIES/default to save redundant lines is a good > idea.
I've updated the patch accordingly and think that the new > solution is a good compromise between readability and not touching > existing/closed parts. > > Are you fine with the new version at > http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 ? > >> >>> Notice that this change also requires a tiny fix in the top-level >>> repository which must be pushed AFTER this change. >> >> >> Can you elaborate please? >> > > I've also submitted the corresponding top-level repository change for > review which expects to find the SA agent libraries on Linux/ppc64 in > order to copy them into the image directory: > http://cr.openjdk.java.net/~simonis/webrevs/8049715_top_level/ > > But once that is pushed, the build will fail if these HS changes > are not in place to actually build the libraries. > >> Thanks, >> David >> >> >>> Thank you and best regards, >>> Volker >>> >> From david.holmes at oracle.com Fri Jul 11 05:18:33 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 11 Jul 2014 15:18:33 +1000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> Message-ID: <53BF73A9.3070105@oracle.com> On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: > Hi, > > foo.hpp as few includes as possible, to avoid cycles. > foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp > (either directly or via the platform files.) > * should include foo.platform.inline.hpp, so that shared files that > call functions from foo.platform.inline.hpp need not contain the > cascade of all the platform files.
> If code in foo.platform.inline.hpp is only used in the platform files, > it is not necessary to have an umbrella header. > foo.platform.inline.hpp Should include what is needed in its code. > > For client code: > With this change I now removed all include cascades of platform files except for > those in the 'natural' headers. > Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. > (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp > headers, but include bar.[inline.]hpp.) > If it's 1:1, I don't care, as discussed before. > > Does this make sense? I find the overall structure somewhat counter-intuitive from an implementation versus interface perspective. But ... Thanks for the explanation. David > > Best regards, > Goetz. > > > which of the above should #include which others, and which should be > #include'd by "client" code? > > Thanks, > David > >> Thanks, >> Lois >> >>> >>> David >>> ----- >>> >>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>> (however this could pull in more code than needed since >>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>> >>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>> - change not related to clean up of umbrella headers, please >>>> explain/justify. >>>> >>>> src/share/vm/code/vmreg.hpp >>>> - Can lines #143-#15 be replaced by an inclusion of >>>> vmreg.inline.hpp or will >>>> this introduce a cyclical inclusion situation, since >>>> vmreg.inline.hpp includes vmreg.hpp? >>>> >>>> src/share/vm/classfile/classFileStream.cpp >>>> - only has a copyright change in the file, no other changes >>>> present? >>>> >>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>> - incorrect copyright, no current year? 
>>>> >>>> src/share/vm/opto/ad.hpp >>>> - incorrect copyright date for a new file >>>> >>>> src/share/vm/code/vmreg.inline.hpp >>>> - technically this new file does not need to include >>>> "asm/register.hpp" since >>>> vmreg.hpp already includes it >>>> >>>> My only lingering concern is the cyclical nature of >>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>> is not much difference between the two? >>>> >>>> Thanks, >>>> Lois >>>> >>>> >>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>> Hi, >>>>> >>>>> I decided to clean up the remaining include cascades, too. >>>>> >>>>> This change introduces umbrella headers for the files in the cpu >>>>> subdirectories: >>>>> >>>>> src/share/vm/utilities/bytes.hpp >>>>> src/share/vm/opto/ad.hpp >>>>> src/share/vm/code/nativeInst.hpp >>>>> src/share/vm/code/vmreg.inline.hpp >>>>> src/share/vm/interpreter/interp_masm.hpp >>>>> >>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>> >>>>> Where possible, this change avoids includes in headers. >>>>> Eventually it adds a forward declaration. >>>>> >>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>> Still, I did not split the files in the cpu directories, as they are >>>>> rather small. >>>>> >>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>> contains machine dependent, c2 specific register information. So I >>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>> includes in, >>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>> >>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>> file, I don't think this is performance relevant. 
But having them in >>>>> the header requires pulling interp_masm.hpp into interpreter.hpp, and >>>>> thus all the assembler include headers into a lot of files. >>>>> >>>>> Please review and test this change. I please need a sponsor. >>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>> >>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>> linuxppc64, >>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>> aixppc64, ntamd64 >>>>> in opt, dbg and fastdbg versions. >>>>> >>>>> Currently, the change applies to hs-rt, but once my other change >>>>> arrives in other >>>>> repos, it will work there, too. (I tested it together with the other >>>>> change >>>>> against jdk9/dev, too.) >>>>> >>>>> Best regards, >>>>> Goetz. >>>>> >>>>> PS: I also did all the Copyright adaptions ;) >>>> >> From joe.darcy at oracle.com Fri Jul 11 06:18:57 2014 From: joe.darcy at oracle.com (Joe Darcy) Date: Thu, 10 Jul 2014 23:18:57 -0700 Subject: JDK 9 RFR of JDK-8048620: Remove unneeded/obsolete -source/-target options in hotspot tests In-Reply-To: <53BAC68B.2070209@oracle.com> References: <53AE04E1.4000806@oracle.com> <53B2E513.5020608@oracle.com> <53B3DB0D.8070700@oracle.com> <53B3F6DA.1050209@oracle.com> <53B5DAD7.6030205@oracle.com> <53BAC68B.2070209@oracle.com> Message-ID: <53BF81D1.2060901@oracle.com> An update, FYI the changeset removing support from javac for source/target of 1.5 and earlier has been pushed to jdk9/dev: http://mail.openjdk.java.net/pipermail/jdk9-dev/2014-July/000972.html http://hg.openjdk.java.net/jdk9/dev/langtools/rev/fbfbefa43016 Therefore, if test/compiler/6932496/Test6932496.java is not updated, it will fail to compile once the change above propagates from jdk9/dev to the HotSpot repos. Cheers, -Joe On 07/07/2014 09:10 AM, Joe Darcy wrote: > Hello, > > I've sent a patch with the updated copyrights to Harold. As far as > that goes, getting those changes back should proceed.
> > However, there is one more test which needs further examination; from > the email initiating this thread: > >> There is one additional test which uses -source/-target, >> test/compiler/6932496/Test6932496.java. This test *does* appear >> sensitive to class file version (no jsr / jret instruction in target 6 >> or higher) so I have not modified this test. If the test is not >> actually sensitive to class file version, it can be updated like the >> others. If it is sensitive and if testing this is still relevant, the >> class file in question will need to be generated in some other way, >> such as by using ASM. > > Thanks, > > -Joe > > On 07/03/2014 03:36 PM, Joe Darcy wrote: >> Hi Harold, >> >> Yes; please sponsor this change; thanks, >> >> -Joe >> >> On 07/02/2014 05:11 AM, harold seigel wrote: >>> Hi Joe, >>> >>> Your changes look good to me, also. >>> >>> Would you like me to sponsor your change? >>> >>> Thanks, Harold >>> >>> On 7/2/2014 6:12 AM, David Holmes wrote: >>>> Hi Joe, >>>> >>>> I can provide you one Review. It seems to me the -source/-target >>>> were being set to ensure a minimum version (probably only -target was >>>> needed but -source had to come along for the ride), so removing >>>> them seems fine. >>>> >>>> Note hotspot protocol requires copyright updates at the time of >>>> checkin - thanks. >>>> >>>> Also you will need to create the changeset against the group repo >>>> for whomever your sponsor is (though your existing patch from the >>>> webrev will probably apply cleanly). >>>> >>>> A second reviewer (small R) is needed. If they don't sponsor it I >>>> will.
>>>> >>>> Cheers, >>>> David >>>> >>>> >>>> >>>> On 2/07/2014 2:42 AM, Joe Darcy wrote: >>>>> *ping* >>>>> >>>>> -Joe >>>>> >>>>> On 06/27/2014 04:57 PM, Joe Darcy wrote: >>>>>> Hello, >>>>>> >>>>>> As a consequence of a policy for retiring old javac -source and >>>>>> -target options (JEP 182 [1]), in JDK 9, only -source/-target of >>>>>> 6/1.6 >>>>>> and higher will be supported [2]. This work is being tracked >>>>>> under bug >>>>>> >>>>>> JDK-8011044: Remove support for 1.5 and earlier source and >>>>>> target >>>>>> options >>>>>> https://bugs.openjdk.java.net/browse/JDK-8011044 >>>>>> >>>>>> Many subtasks related to this are already complete, including >>>>>> updating >>>>>> regression tests in the jdk and langtools repos. It has come to my >>>>>> attention that the hotspot repo also has a few tests that use >>>>>> -source >>>>>> and -target that should be updated. Please review the changes: >>>>>> >>>>>> http://cr.openjdk.java.net/~darcy/8048620.0/ >>>>>> >>>>>> Full patch below. From what I could tell looking at the bug and >>>>>> tests, >>>>>> these tests are not sensitive to the class file version so they >>>>>> shouldn't need to use an explicit -source or -target option and >>>>>> should >>>>>> just accept the JDK-default. >>>>>> >>>>>> There is one additional test which uses -source/-target, >>>>>> test/compiler/6932496/Test6932496.java. This test *does* appear >>>>>> sensitive to class file version (no jsr / jret instruction in >>>>>> target 6 >>>>>> or higher) so I have not modified this test. If the test is not >>>>>> actually sensitive to class file version, it can be updated like the >>>>>> others. If it is sensitive and if testing this is still relevant, >>>>>> the >>>>>> class file in question will need to be generated in some other way, >>>>>> such as by using ASM.
>>>>>> >>>>>> Regardless of the outcome of the technical discussion around >>>>>> Test6932496.java, I'd appreciate if a "hotspot buddy" could shepherd >>>>>> this fix through the HotSpot processes. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> -Joe >>>>>> >>>>>> [1] http://openjdk.java.net/jeps/182 >>>>>> >>>>>> [2] >>>>>> http://mail.openjdk.java.net/pipermail/jdk9-dev/2014-January/000328.html >>>>>> >>>>>> >>>>>> --- old/test/compiler/6775880/Test.java 2014-06-27 >>>>>> 16:24:25.000000000 -0700 >>>>>> +++ new/test/compiler/6775880/Test.java 2014-06-27 >>>>>> 16:24:25.000000000 -0700 >>>>>> @@ -26,7 +26,6 @@ >>>>>> * @test >>>>>> * @bug 6775880 >>>>>> * @summary EA +DeoptimizeALot: >>>>>> assert(mon_info->owner()->is_locked(),"object must be locked now") >>>>>> - * @compile -source 1.4 -target 1.4 Test.java >>>>>> * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -Xbatch >>>>>> -XX:+DoEscapeAnalysis -XX:+DeoptimizeALot >>>>>> -XX:CompileCommand=exclude,java.lang.AbstractStringBuilder::append Test >>>>>> >>>>>> */ >>>>>> >>>>>> --- old/test/runtime/6626217/Test6626217.sh 2014-06-27 >>>>>> 16:24:26.000000000 -0700 >>>>>> +++ new/test/runtime/6626217/Test6626217.sh 2014-06-27 >>>>>> 16:24:26.000000000 -0700 >>>>>> @@ -54,7 +54,7 @@ >>>>>> >>>>>> # Compile all the usual suspects, including the default >>>>>> 'many_loader' >>>>>> ${CP} many_loader1.java.foo many_loader.java >>>>>> -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint *.java >>>>>> +${JAVAC} ${TESTJAVACOPTS} -Xlint *.java >>>>>> >>>>>> # Rename the class files, so the custom loader (and not the system >>>>>> loader) will find it >>>>>> ${MV} from_loader2.class from_loader2.impl2 >>>>>> @@ -62,7 +62,7 @@ >>>>>> # Compile the next version of 'many_loader' >>>>>> ${MV} many_loader.class many_loader.impl1 >>>>>> ${CP} many_loader2.java.foo many_loader.java >>>>>> -${JAVAC} ${TESTJAVACOPTS} -source 1.4 -target 1.4 -Xlint >>>>>> many_loader.java >>>>>> +${JAVAC} ${TESTJAVACOPTS} -Xlint many_loader.java >>>>>> 
>>>>>> # Rename the class file, so the custom loader (and not the system >>>>>> loader) will find it >>>>>> ${MV} many_loader.class many_loader.impl2 --- old/test/runtime/8003720/Test8003720.java 2014-06-27 >>>>>> 16:24:26.000000000 -0700 +++ new/test/runtime/8003720/Test8003720.java 2014-06-27 >>>>>> 16:24:26.000000000 -0700 @@ -26,7 +26,7 @@ >>>>>> * @test >>>>>> * @bug 8003720 >>>>>> * @summary Method in interpreter stack frame can be deallocated >>>>>> - * @compile -XDignore.symbol.file -source 1.7 -target 1.7 >>>>>> Victim.java >>>>>> + * @compile -XDignore.symbol.file Victim.java >>>>>> * @run main/othervm -Xverify:all -Xint Test8003720 >>>>>> */ >>>>>> >>>>>> >>>>> >>> >> > From aph at redhat.com Fri Jul 11 08:19:16 2014 From: aph at redhat.com (Andrew Haley) Date: Fri, 11 Jul 2014 09:19:16 +0100 Subject: Please look at my JEP In-Reply-To: <53BF0599.9010205@oracle.com> References: <53A2F354.40606@redhat.com> <53B3BEBE.6060103@redhat.com> <53BF0599.9010205@oracle.com> Message-ID: <53BF9E04.9070402@redhat.com> On 10/07/14 22:28, Vladimir Kozlov wrote: > The proposed JEP is fine from my view. I added myself as reviewer to the > JEP and sent it to Mikael for Hotspot architect review. Excellent, thank you. > It would be helpful if you can estimate how many patches you will have > to understand what Oracle efforts/resources would be needed (reviews, > preparing closed changes, pushes). Sure, I get that. It all depends on what you mean by "a patch". I can do the whole thing as a single patch for everything outside the aarch64-specific directories. I'll prepare a webrev. > As David pointed out, we in Oracle are also learning the JEP 2.0 process. > > Small note. There is no more HSX or Hotspot project for jdk9. We have > only one jdk9 project with Mark Reinhold as lead. He is final judge of > all jdk9 JEPs. Okay. Mark suggested to me that, to begin with, I should be engaging with the Hotspot team. Andrew.
From volker.simonis at gmail.com Fri Jul 11 09:12:04 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 11 Jul 2014 11:12:04 +0200 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53BDB4A8.4090001@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BBE@DEWDFEMB12A.global.corp.sap> <53BDB4A8.4090001@oracle.com> Message-ID: On Wed, Jul 9, 2014 at 11:31 PM, Lois Foltan wrote: > > On 7/9/2014 8:53 AM, Lindenmaier, Goetz wrote: >> >> Hi Lois, >> >> thanks for looking at this change! >> >> In general, I did not replace xxx_.hpp by the generic header, >> because it's a clear 1:1 relationship. And as David says, if there is >> only the include cascade in the generic header, there is no point to >> including it. Except that maybe including the generic files would >> be a more consistent coding style. >> If it's agreed on, I'll fix this for all the headers I addressed. >> ... OK, in the meantime your other mail arrived ... so I'll leave it >> as is. >> >> The basic idea of .inline.hpp files is to avoid cycles when using inline >> functions. A .inline.hpp file should never be included in a .hpp file, so >> there >> will never be a cycle. >> >> The VMRegImp:: functions in vmreg_.inline.hpp actually don't depend >> on >> other inline functions, so moving them to vmreg_.hpp would be >> feasible. The XxxRegisterImpl:: functions must remain in a >> .inline.hpp >> file, though. Else we get a cycle between register.hpp and vmreg.hpp. >> If you want to, I move the code and rename the vmreg files to register... >> I think this would be a good cleanup, as currently the file contains >> implementations from two different headers, which is unusual. >> >> This is also why I placed the register.hpp include in vmreg.inline.hpp. 
It >> contains what actually should go to register.inline.hpp, so it should also >> contain >> the natural include for register.inline.hpp. > > > Hi Goetz, > > Thank you for the further explanation with regards to vmreg.inline.hpp. I > now understand why you included register.hpp in vmreg.inline.hpp but I > still think it is unnecessary since vmreg.inline.hpp includes vmreg.hpp > which includes register.hpp as well. > I think it is good practice that every file includes all the other files it depends on and does not rely on the fact that some dependencies are included indirectly. In the given example, removing register.hpp from vmreg.hpp (for whatever reason) would break vmreg.inline.hpp as a side effect. > >> >> stubGenerator_ppc.cpp: The macro removed is declared in >> interp_masm_ppc.hpp, and I didn't want to include it there, seems >> not right. So I rather removed the macro. > > > Ok, got it! > > >> >> I fixed the Copyright errors: >> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ > > > Thank you. Just a very minor nit, the copyright on the new file > src/share/vm/opto/ad.hpp still seems incorrect. It is "2000, 2014,". > Shouldn't it be just "2014," since this is a new file? > > Overall I am good with your changes, reviewed! > Lois > > >> >> Best regards, >> Goetz. >> >> -----Original Message----- >> From: Lois Foltan [mailto:lois.foltan at oracle.com] >> Sent: Dienstag, 8. Juli 2014 19:42 >> To: Lindenmaier, Goetz >> Cc: hotspot-dev at openjdk.java.net >> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for >> the files in the cpu subdirectories >> >> Hi Goetz, >> >> Overall this cleanup looks good.
Here are specific comments per file: >> >> src/cpu/ppc/vm/runtime_ppc.cpp >> - include nativeInst.hpp instead of nativeInst_ppc.hpp >> >> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >> - include nativeInst.hpp instead of nativeInst_sparc.hpp >> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >> (however this could pull in more code than needed since >> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >> >> src/cpu/ppc/vm/stubGenerator_ppc.cpp >> - change not related to clean up of umbrella headers, please >> explain/justify. >> >> src/share/vm/code/vmreg.hpp >> - Can lines #143-#15 be replaced by an inclusion of >> vmreg.inline.hpp or will >> this introduce a cyclical inclusion situation, since >> vmreg.inline.hpp includes vmreg.hpp? >> >> src/share/vm/classfile/classFileStream.cpp >> - only has a copyright change in the file, no other changes present? >> >> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >> - incorrect copyright, no current year? >> >> src/share/vm/opto/ad.hpp >> - incorrect copyright date for a new file >> >> src/share/vm/code/vmreg.inline.hpp >> - technically this new file does not need to include >> "asm/register.hpp" since >> vmreg.hpp already includes it >> >> My only lingering concern is the cyclical nature of >> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >> is not much difference between the two? >> >> Thanks, >> Lois >> >> >> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>> >>> Hi, >>> >>> I decided to clean up the remaining include cascades, too. 
>>> >>> This change introduces umbrella headers for the files in the cpu >>> subdirectories: >>> >>> src/share/vm/utilities/bytes.hpp >>> src/share/vm/opto/ad.hpp >>> src/share/vm/code/nativeInst.hpp >>> src/share/vm/code/vmreg.inline.hpp >>> src/share/vm/interpreter/interp_masm.hpp >>> >>> It also cleans up the include cascades for adGlobals*.hpp, >>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>> >>> Where possible, this change avoids includes in headers. >>> Where necessary, it adds a forward declaration instead. >>> >>> vmreg_<cpu>.inline.hpp contains functions declared in register_<cpu>.hpp >>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>> Still, I did not split the files in the cpu directories, as they are >>> rather small. >>> >>> I didn't introduce a file for adGlobals_<cpu>.hpp, as adGlobals mainly >>> contains machine-dependent, C2-specific register information. So I >>> think optoreg.hpp is a good header to place the adGlobals_<cpu>.hpp >>> includes in, >>> and then use optoreg.hpp where symbols from adGlobals are needed. >>> >>> I moved the constructor and destructor of CodeletMark to the .cpp >>> file; I don't think this is performance-relevant. But having them in >>> the header requires pulling interp_masm.hpp into interpreter.hpp, and >>> thus all the assembler include headers into a lot of files. >>> >>> Please review and test this change. I need a sponsor, please. >>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>> >>> I compiled and tested this without precompiled headers on linuxx86_64, >>> linuxppc64, >>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, aixppc64, >>> ntamd64 >>> in opt, dbg and fastdbg versions. >>> >>> Currently, the change applies to hs-rt, but once my other change arrives >>> in other >>> repos, it will work there, too. (I tested it together with the other >>> change >>> against jdk9/dev, too.) >>> >>> Best regards, >>> Goetz.
>>> >>> PS: I also did all the Copyright adaptations ;) > > From volker.simonis at gmail.com Fri Jul 11 11:54:24 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 11 Jul 2014 13:54:24 +0200 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: <53BF69DC.9010305@oracle.com> References: <53BDFD5D.4050908@oracle.com> <53BF69DC.9010305@oracle.com> Message-ID: On Fri, Jul 11, 2014 at 6:36 AM, David Holmes wrote: > Hi Volker, > > > On 10/07/2014 8:12 PM, Volker Simonis wrote: >> >> Hi David, >> >> thanks for looking at this. Here's my new version of the change with >> some of your suggestions applied: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 > > > I have a simpler counter proposal (also default -> DEFAULT as that seems to > be the style): > > # Serviceability Binaries > > ADD_SA_BINARIES/DEFAULT = > $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) \ > > $(EXPORT_LIB_DIR)/sa-jdi.jar > > ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) > ifeq ($(ZIP_DEBUGINFO_FILES),1) > ADD_SA_BINARIES/DEFAULT += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz > else > ADD_SA_BINARIES/DEFAULT += > $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo > endif > endif > > ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT) > > > # No SA Support for IA64 or zero > ADD_SA_BINARIES/ia64 = > ADD_SA_BINARIES/zero = > > --- > > The open logic only has to worry about open platforms. The custom makefile > can accept the default or override as it desires. > > I thought about conditionally setting ADD_SA_BINARIES/$(HS_ARCH) but the > above is simple and clear. > > Ok? > Perfect! Here's the new webrev with your proposed changes (tested on Linux/x86_64 and ppc64): http://cr.openjdk.java.net/~simonis/webrevs/8049715.v3 Thanks for sponsoring, Volker > I'll sponsor this one of course (so it's safe for other reviewers to jump in now :) ).
> > Thanks, > David > > > >> Please find more information inline: >> >> On Thu, Jul 10, 2014 at 4:41 AM, David Holmes >> wrote: >>> >>> Hi Volker, >>> >>> Comments below where you might expect them :) >>> >>> >>> On 10/07/2014 3:36 AM, Volker Simonis wrote: >>>> >>>> >>>> Hi, >>>> >>>> could someone please review and sponsor the following change which >>>> does some preliminary work for enabling the SA agent on Linux/PPC64: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715/ >>>> https://bugs.openjdk.java.net/browse/JDK-8049715 >>>> >>>> Details: >>>> >>>> Currently, we don't support the SA agent on Linux/PPC64. This change >>>> fixes the build system such that the SA libraries (i.e. libsaproc.so >>>> and sa-jdi.jar) will be correctly built and copied into the resulting >>>> jdk images. >>>> >>>> This change also contains some small fixes in sa-jdi.jar to correctly >>>> detect Linux/PPC64 as a supported SA platform. (The actual >>>> implementation of the Linux/PPC64 specific code will be handled by >>>> "8049716 PPC64: Implement SA on Linux/PPC64" - >>>> https://bugs.openjdk.java.net/browse/JDK-8049716). >>>> >>>> One thing which requires special attention is the set of changes in >>>> make/linux/makefiles/defs.make which may touch the closed ppc port. In >>>> my change I've simply added 'ppc' to the list of supported >>>> architectures, but this may break the 32-bit ppc build. I think the >>> >>> >>> >>> It wouldn't break it but I was expecting to see ppc64 here. >>> >> >> The problem is that currently the decision whether the SA agent will be >> built is based on the value of HS_ARCH. But HS_ARCH is the 'basic >> architecture' (i.e. x86 or sparc) so there's no easy way to choose the >> SA agent for only a 64-bit platform (like ppc64 or amd64) and not for >> its 32-bit counterpart (i.e. i386 or ppc). >> >> The only possibility with the current solution would be to only >> conditionally set ADD_SA_BINARIES/ppc if ARCH_DATA_MODEL is 64.
But >> that wouldn't make the code nicer either:) >> >>> >>>> current code is too verbose and error-prone anyway. It would be better >>>> to have something like: >>>> >>>> ADD_SA_BINARIES = >>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) >>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>> >>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>> ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>> else >>>> ADD_SA_BINARIES += >>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>> endif >>>> endif >>>> >>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>>> ppc64)) >>>> EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) >>> >>> >>> >>> You wouldn't need/want the $(HS_ARCH) there. >>> >> >> Sorry, that was a typo of course. It should read: >> >> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >> ppc64)) >> EXPORT_LIST += $(ADD_SA_BINARIES) >> >> But that's not necessary anymore (see new version below). >> >>> >>>> endif >>>> >>>> With this solution we only define ADD_SA_BINARIES once (because the >>>> various definitions for the different platforms are equal anyway). But >>>> again this may affect other closed ports so please advise which >>>> solution you'd prefer. >>> >>> >>> >>> The above is problematic for customizations. An alternative would be to >>> set >>> ADD_SA_BINARIES/default once with all the file names. Then: >>> >>> ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default) >>> # No SA Support for IA64 or zero >>> ifneq (, $(findstring $(ARCH), ia64, zero)) >>> ADD_SA_BINARIES/$(ARCH) = >>> >>> Each ARCH handled elsewhere would then still set ADD_SA_BINARIES/$(ARCH) >>> if >>> needed. >>> >>> Does that seem reasonable? >>> >> >> The problem with using ARCH is that it is not "reliable" in the sense >> that its value differs for top-level and hotspot-only makes.
See >> "8046471: Use OPENJDK_TARGET_CPU_ARCH instead of legacy value for >> hotspot ARCH" and my fix "8048232: Fix for 8046471 breaks PPC64 >> build". >> >> But using ADD_SA_BINARIES/default to save redundant lines is a good >> idea. I've updated the patch accordingly and think that the new >> solution is a good compromise between readability and not touching >> existing/closed parts. >> >> Are you fine with the new version at >> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 ? >> >>> >>>> Notice that this change also requires a tiny fix in the top-level >>>> repository which must be pushed AFTER this change. >>> >>> >>> >>> Can you elaborate please? >>> >> >> I've also submitted the corresponding top-level repository change for >> review which expects to find the SA agent libraries on Linux/ppc64 in >> order to copy them into the image directory: >> http://cr.openjdk.java.net/~simonis/webrevs/8049715_top_level/ >> >> But once that is pushed, the build will fail if these HS changes >> are not in place to actually build the libraries. >> >>> Thanks, >>> David >>> >>> >>>> Thank you and best regards, >>>> Volker >>>> >>> > From david.holmes at oracle.com Fri Jul 11 12:02:49 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 11 Jul 2014 22:02:49 +1000 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: References: <53BDFD5D.4050908@oracle.com> <53BF69DC.9010305@oracle.com> Message-ID: <53BFD269.2050500@oracle.com> I'll test it out with my local custom changes while we wait for a second reviewer. I plan to push to the hs-rt repo. Thanks, David On 11/07/2014 9:54 PM, Volker Simonis wrote: > On Fri, Jul 11, 2014 at 6:36 AM, David Holmes wrote: >> Hi Volker, >> >> >> On 10/07/2014 8:12 PM, Volker Simonis wrote: >>> >>> Hi David, >>> >>> thanks for looking at this.
Here's my new version of the change with >>> some of your suggestions applied: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 >> >> >> I have a simpler counter proposal (also default -> DEFAULT as that seems to >> be the style): >> >> # Serviceability Binaries >> >> ADD_SA_BINARIES/DEFAULT = >> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) \ >> >> $(EXPORT_LIB_DIR)/sa-jdi.jar >> >> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >> ifeq ($(ZIP_DEBUGINFO_FILES),1) >> ADD_SA_BINARIES/DEFAULT += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >> else >> ADD_SA_BINARIES/DEFAULT += >> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >> endif >> endif >> >> ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT) >> >> >> # No SA Support for IA64 or zero >> ADD_SA_BINARIES/ia64 = >> ADD_SA_BINARIES/zero = >> >> --- >> >> The open logic only has to worry about open platforms. The custom makefile >> can accept the default or override as it desires. >> >> I thought about conditionally setting ADD_SA_BINARIES/$(HS_ARCH) but the >> above is simple and clear. >> >> Ok? >> > > Perfect! > > Here's the new webrev with your proposed changes (tested on > Linux/x86_64 and ppc64): > > http://cr.openjdk.java.net/~simonis/webrevs/8049715.v3 > > Thanks for sponsoring, > Volker > >> I'll sponsor this one of course (so its safe for other reviewers to jump in >> now :) ). 
>> >> Thanks, >> David >> >> >> >>> Please find more information inline: >>> >>> On Thu, Jul 10, 2014 at 4:41 AM, David Holmes >>> wrote: >>>> >>>> Hi Volker, >>>> >>>> Comments below where you might expect them :) >>>> >>>> >>>> On 10/07/2014 3:36 AM, Volker Simonis wrote: >>>>> >>>>> >>>>> Hi, >>>>> >>>>> could someone please review and sponsor the following change which >>>>> does some preliminary work for enabling the SA agent on Linux/PPC64: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8049715 >>>>> >>>>> Details: >>>>> >>>>> Currently, we don't support the SA agent on Linux/PPC64. This change >>>>> fixes the buildsystem such that the SA libraries (i.e. libsaproc.so >>>>> and sa-jdi.jar) will be correctly build and copied into the resulting >>>>> jdk images. >>>>> >>>>> This change also contains some small fixes in sa-jdi.jar to correctly >>>>> detect Linux/PPC64 as supported SA platform. (The actual >>>>> implementation of the Linux/PPC64 specific code will be handled by >>>>> "8049716 PPC64: Implement SA on Linux/PPC64" - >>>>> https://bugs.openjdk.java.net/browse/JDK-8049716). >>>>> >>>>> One thing which require special attention are the changes in >>>>> make/linux/makefiles/defs.make which may touch the closed ppc port. In >>>>> my change I've simply added 'ppc' to the list of supported >>>>> architectures, but this may break the 32-bit ppc build. I think the >>>> >>>> >>>> >>>> It wouldn't break it but I was expecting to see ppc64 here. >>>> >>> >>> The problem is that currently the decision if the SA agent will be >>> build is based on the value of HS_ARCH. But HS_ARCH is the 'basic >>> architecture' (i.e. x86 or sparc) so there's no easy way to choose the >>> SA agent for only a 64-bit platform (like ppc64 or amd64) and not for >>> its 32-bit counterpart (i.e. i386 or ppc). 
>>> >>> The only possibility with the current solution would be to only >>> conditionally set ADD_SA_BINARIES/ppc if ARCH_DATA_MODEL is 64. But >>> that wouldn't make the code nicer either:) >>> >>>> >>>>> current code is to verbose and error prone anyway. It would be better >>>>> to have something like: >>>>> >>>>> ADD_SA_BINARIES = >>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) >>>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>>> >>>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>>> ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>>> else >>>>> ADD_SA_BINARIES += >>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>>> endif >>>>> endif >>>>> >>>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>>>> ppc64)) >>>>> EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) >>>> >>>> >>>> >>>> You wouldn't need/want the $(HS_ARCH) there. >>>> >>> >>> Sorry, that was a type of course. It should read: >>> >>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>> ppc64)) >>> EXPORT_LIST += $(ADD_SA_BINARIES) >>> >>> But that's not necessary now anymore (see new version below). >>> >>>> >>>>> endif >>>>> >>>>> With this solution we only define ADD_SA_BINARIES once (because the >>>>> various definitions for the different platforms are equal anyway). But >>>>> again this may affect other closed ports so please advise which >>>>> solution you'd prefer. >>>> >>>> >>>> >>>> The above is problematic for customizations. An alternative would be to >>>> set >>>> ADD_SA_BINARIES/default once with all the file names. Then: >>>> >>>> ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default) >>>> # No SA Support for IA64 or zero >>>> ifneq (, $(findstring $(ARCH), ia64, zero)) >>>> ADD_SA_BINARIES/$(ARCH) = >>>> >>>> Each ARCH handled elsewhere would then still set ADD_SA_BINARIES/$(ARCH) >>>> if >>>> needed. >>>> >>>> Does that seem reasonable? 
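David's per-architecture variable scheme quoted above can be exercised stand-alone. The following is a minimal sketch, not the actual defs.make: the file names are shortened, the /tmp path is only for illustration, and the lookup key stands in for $(HS_ARCH).

```shell
# Minimal model of the ADD_SA_BINARIES/<arch> scheme: one DEFAULT list,
# copied into a per-arch variable, with unsupported arches cleared.
# GNU make allows '/' in variable names, which makes the computed
# $(ADD_SA_BINARIES/$(HS_ARCH)) lookup work.
printf 'ADD_SA_BINARIES/DEFAULT = libsaproc.so sa-jdi.jar\nADD_SA_BINARIES/ppc = $(ADD_SA_BINARIES/DEFAULT)\nADD_SA_BINARIES/zero =\nall:\n\t@echo "[$(ADD_SA_BINARIES/$(HS_ARCH))]"\n' > /tmp/sa.mk
make -sf /tmp/sa.mk HS_ARCH=ppc    # -> [libsaproc.so sa-jdi.jar]
make -sf /tmp/sa.mk HS_ARCH=zero   # -> []
```

An unknown HS_ARCH simply expands an undefined variable and so also yields an empty list, which is why only the arches that must differ from the default need explicit lines.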
>>>> >>> >>> The problem with using ARCH is that it is not "reliable" in the sens >>> that its value differs for top-level and hotspot-only makes. See >>> "8046471: Use OPENJDK_TARGET_CPU_ARCH instead of legacy value for >>> hotspot ARCH" and my fix "8048232: Fix for 8046471 breaks PPC64 >>> build". >>> >>> But using ADD_SA_BINARIES/default to save redundant lines is a good >>> idea. I've updated the patch accordingly and think that the new >>> solution is a good compromise between readability and not touching >>> existing/closed part. >>> >>> Are you fine with the new version at >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 ? >>> >>>> >>>>> Notice that this change also requires a tiny fix in the top-level >>>>> repository which must be pushed AFTER this change. >>>> >>>> >>>> >>>> Can you elaborate please? >>>> >>> >>> I've also submitted the corresponding top-level repository change for >>> review which expects to find the SA agent libraries on Linux/ppc64 in >>> order to copy them into the image directory: >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715_top_level/ >>> >>> But once that will be pushed, the build will fail if these HS changes >>> will not be in place to actually build the libraries. 
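The $(findstring) subtlety behind JDK-8048232, referenced above, is easy to reproduce: the first argument is the needle and the second the haystack, so "$(findstring $(ARCH), ppc)" never matches ARCH=ppc64, while the two-word haystack "$(findstring $(ARCH), ppc ppc64)" catches both spellings. A throwaway makefile (illustrative /tmp path) shows it:

```shell
# $(findstring needle,haystack) searches for the FIRST argument inside
# the second; the one-word haystack misses ppc64, the two-word one
# catches both ARCH values a top-level vs. hotspot-only build can pass.
printf 'all:\n\t@echo "$(ARCH) -> [$(findstring $(ARCH), ppc)] [$(findstring $(ARCH), ppc ppc64)]"\n' > /tmp/findstring.mk
make -sf /tmp/findstring.mk ARCH=ppc     # -> ppc -> [ppc] [ppc]
make -sf /tmp/findstring.mk ARCH=ppc64   # -> ppc64 -> [] [ppc64]
```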
>>> >>>> Thanks, >>>> David >>>> >>>> >>>>> Thank you and best regards, >>>>> Volker >>>>> >>>> >> From mark.reinhold at oracle.com Fri Jul 11 15:19:33 2014 From: mark.reinhold at oracle.com (mark.reinhold at oracle.com) Date: Fri, 11 Jul 2014 08:19:33 -0700 (PDT) Subject: JEP 195: Scalable Native Memory Tracking Message-ID: <20140711152213.988B227AB5@eggemoggin.niobe.net> New JEP Candidate: http://openjdk.java.net/jeps/195 - Mark From vladimir.kozlov at oracle.com Fri Jul 11 21:13:23 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 11 Jul 2014 14:13:23 -0700 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: References: <53BDFD5D.4050908@oracle.com> <53BF69DC.9010305@oracle.com> Message-ID: <53C05373.7000104@oracle.com> This looks good. Vladimir On 7/11/14 4:54 AM, Volker Simonis wrote: > On Fri, Jul 11, 2014 at 6:36 AM, David Holmes wrote: >> Hi Volker, >> >> >> On 10/07/2014 8:12 PM, Volker Simonis wrote: >>> >>> Hi David, >>> >>> thanks for looking at this. Here's my new version of the change with >>> some of your suggestions applied: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 >> >> >> I have a simpler counter proposal (also default -> DEFAULT as that seems to >> be the style): >> >> # Serviceability Binaries >> >> ADD_SA_BINARIES/DEFAULT = >> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) \ >> >> $(EXPORT_LIB_DIR)/sa-jdi.jar >> >> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >> ifeq ($(ZIP_DEBUGINFO_FILES),1) >> ADD_SA_BINARIES/DEFAULT += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >> else >> ADD_SA_BINARIES/DEFAULT += >> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >> endif >> endif >> >> ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT) >> >> >> # No SA Support for IA64 or zero >> ADD_SA_BINARIES/ia64 = >> ADD_SA_BINARIES/zero = >> >> --- >> >> The open logic only has to worry about open platforms. The custom makefile >> can accept the default or override as it desires. 
>> >> I thought about conditionally setting ADD_SA_BINARIES/$(HS_ARCH) but the >> above is simple and clear. >> >> Ok? >> > > Perfect! > > Here's the new webrev with your proposed changes (tested on > Linux/x86_64 and ppc64): > > http://cr.openjdk.java.net/~simonis/webrevs/8049715.v3 > > Thanks for sponsoring, > Volker > >> I'll sponsor this one of course (so its safe for other reviewers to jump in >> now :) ). >> >> Thanks, >> David >> >> >> >>> Please find more information inline: >>> >>> On Thu, Jul 10, 2014 at 4:41 AM, David Holmes >>> wrote: >>>> >>>> Hi Volker, >>>> >>>> Comments below where you might expect them :) >>>> >>>> >>>> On 10/07/2014 3:36 AM, Volker Simonis wrote: >>>>> >>>>> >>>>> Hi, >>>>> >>>>> could someone please review and sponsor the following change which >>>>> does some preliminary work for enabling the SA agent on Linux/PPC64: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8049715 >>>>> >>>>> Details: >>>>> >>>>> Currently, we don't support the SA agent on Linux/PPC64. This change >>>>> fixes the buildsystem such that the SA libraries (i.e. libsaproc.so >>>>> and sa-jdi.jar) will be correctly build and copied into the resulting >>>>> jdk images. >>>>> >>>>> This change also contains some small fixes in sa-jdi.jar to correctly >>>>> detect Linux/PPC64 as supported SA platform. (The actual >>>>> implementation of the Linux/PPC64 specific code will be handled by >>>>> "8049716 PPC64: Implement SA on Linux/PPC64" - >>>>> https://bugs.openjdk.java.net/browse/JDK-8049716). >>>>> >>>>> One thing which require special attention are the changes in >>>>> make/linux/makefiles/defs.make which may touch the closed ppc port. In >>>>> my change I've simply added 'ppc' to the list of supported >>>>> architectures, but this may break the 32-bit ppc build. I think the >>>> >>>> >>>> >>>> It wouldn't break it but I was expecting to see ppc64 here. 
>>>> >>> >>> The problem is that currently the decision if the SA agent will be >>> build is based on the value of HS_ARCH. But HS_ARCH is the 'basic >>> architecture' (i.e. x86 or sparc) so there's no easy way to choose the >>> SA agent for only a 64-bit platform (like ppc64 or amd64) and not for >>> its 32-bit counterpart (i.e. i386 or ppc). >>> >>> The only possibility with the current solution would be to only >>> conditionally set ADD_SA_BINARIES/ppc if ARCH_DATA_MODEL is 64. But >>> that wouldn't make the code nicer either:) >>> >>>> >>>>> current code is to verbose and error prone anyway. It would be better >>>>> to have something like: >>>>> >>>>> ADD_SA_BINARIES = >>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) >>>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>>> >>>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>>> ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>>> else >>>>> ADD_SA_BINARIES += >>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>>> endif >>>>> endif >>>>> >>>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>>>> ppc64)) >>>>> EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) >>>> >>>> >>>> >>>> You wouldn't need/want the $(HS_ARCH) there. >>>> >>> >>> Sorry, that was a type of course. It should read: >>> >>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>> ppc64)) >>> EXPORT_LIST += $(ADD_SA_BINARIES) >>> >>> But that's not necessary now anymore (see new version below). >>> >>>> >>>>> endif >>>>> >>>>> With this solution we only define ADD_SA_BINARIES once (because the >>>>> various definitions for the different platforms are equal anyway). But >>>>> again this may affect other closed ports so please advise which >>>>> solution you'd prefer. >>>> >>>> >>>> >>>> The above is problematic for customizations. An alternative would be to >>>> set >>>> ADD_SA_BINARIES/default once with all the file names. 
Then: >>>> >>>> ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default) >>>> # No SA Support for IA64 or zero >>>> ifneq (, $(findstring $(ARCH), ia64, zero)) >>>> ADD_SA_BINARIES/$(ARCH) = >>>> >>>> Each ARCH handled elsewhere would then still set ADD_SA_BINARIES/$(ARCH) >>>> if >>>> needed. >>>> >>>> Does that seem reasonable? >>>> >>> >>> The problem with using ARCH is that it is not "reliable" in the sens >>> that its value differs for top-level and hotspot-only makes. See >>> "8046471: Use OPENJDK_TARGET_CPU_ARCH instead of legacy value for >>> hotspot ARCH" and my fix "8048232: Fix for 8046471 breaks PPC64 >>> build". >>> >>> But using ADD_SA_BINARIES/default to save redundant lines is a good >>> idea. I've updated the patch accordingly and think that the new >>> solution is a good compromise between readability and not touching >>> existing/closed part. >>> >>> Are you fine with the new version at >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 ? >>> >>>> >>>>> Notice that this change also requires a tiny fix in the top-level >>>>> repository which must be pushed AFTER this change. >>>> >>>> >>>> >>>> Can you elaborate please? >>>> >>> >>> I've also submitted the corresponding top-level repository change for >>> review which expects to find the SA agent libraries on Linux/ppc64 in >>> order to copy them into the image directory: >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715_top_level/ >>> >>> But once that will be pushed, the build will fail if these HS changes >>> will not be in place to actually build the libraries. 
>>> >>>> Thanks, >>>> David >>>> >>>> >>>>> Thank you and best regards, >>>>> Volker >>>>> >>>> >> From winniejayclay at gmail.com Sat Jul 12 02:27:50 2014 From: winniejayclay at gmail.com (Winnie JayClay) Date: Sat, 12 Jul 2014 10:27:50 +0800 Subject: hb from the end of a constructor of an object to the start of the finalizer Message-ID: Hi, on page 575 of the printed edition it says "There is a happens-before edge from the end of a constructor of an object to the start of the finalizer (12.6) for that object." I can't tell from this definition whether this applies to the case when I manually invoke finalize() on the object, or when the GC collects it and invokes finalize(), or both? To put it clearly: say I have a class with shared non-volatile and non-final state. The object is fully initialized in the first thread; in the second thread I invoke finalize() on this object. Will I have a guarantee of shared-state visibility? And in the case of JVM GC invocation? Thanks, Winnie From mike.duigou at oracle.com Sat Jul 12 17:19:54 2014 From: mike.duigou at oracle.com (Mike Duigou) Date: Sat, 12 Jul 2014 10:19:54 -0700 Subject: RFR: 8046765 : (s) makefiles should use parameterized $(CP) and $(MV) rather than explicit commands In-Reply-To: <839864EB-04E0-45B0-8D31-25714E84E1A7@oracle.com> References: <839864EB-04E0-45B0-8D31-25714E84E1A7@oracle.com> Message-ID: <21F2D2AC-6261-4E6C-BF7A-A3978BFAC9D8@oracle.com> Hello all; After further testing some additional changes were found to be needed to support building hotspot without configure support. There are a small number of additional changes in various buildtree.make and the windows build.make files to ensure that locations are defined for the CP and MV commands. If there's a more appropriate location for these definitions please suggest it. The patch is otherwise unchanged from the version of a month ago except for line number offset changes owing to other intervening changesets.
jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046765 webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046765/2/webrev/ Thanks! Mike On Jun 13 2014, at 12:43 , Mike Duigou wrote: > Hello all; > > This is a small changeset to the hotspot makefiles to have them use expansions of the $(CP) and $(MV) variables rather than explicit commands for all operations involving files in the deliverables. This changes is needed by static code analysis software which provides replacement cp and mv commands that track error reports in executables back to the source from which they are generated. > > jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046765 > webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046765/0/webrev/ > > I've checked to make sure that patch doesn't change the build output on linux x64 and am currently checking on other platforms. > > It is probably easier to review this change by looking at the patch than by looking at the file diffs. > > Mike > From igor.veresov at oracle.com Sat Jul 12 18:41:09 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Sat, 12 Jul 2014 11:41:09 -0700 Subject: RFR(S): 8049071: Add jtreg jobs to JPRT for Hotspot In-Reply-To: <53BEF24B.2090600@oracle.com> References: <53B4AD05.3070702@oracle.com> <53B631B3.6090505@oracle.com> <53B63BB5.8090602@oracle.com> <53BB294A.8040801@oracle.com> <53BEF24B.2090600@oracle.com> Message-ID: <14E7FD55-A4D7-4997-9A1B-02E5C598054C@oracle.com> Looks fine.. igor On Jul 10, 2014, at 1:06 PM, Mikael Vidstedt wrote: > > Anybody? Pleeeease? > > Cheers, > Mikael > > On 2014-07-07 16:12, Mikael Vidstedt wrote: >> >> Fixed the comment, removed the loop (the loop logic is btw taken directly from jdk/test/Makefile, but I'll follow up on a fix for that separately). >> >> Anybody else want to have a look? 
>> >> top: http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.01/top/webrev/ >> hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.01/hotspot/webrev/ >> >> Thanks, >> Mikael >> >> On 2014-07-03 22:29, David Holmes wrote: >>> On 4/07/2014 2:46 PM, David Holmes wrote: >>>> Hi Mikael, >>>> >>>> Generally looks okay - took me a minute to remember that jtreg groups >>>> combine as set unions :) >>>> >>>> A couple of things: >>>> >>>> 226 # Unless explicitly defined below, hotspot_ is interpreted as the >>>> jtreg test group >>>> >>>> The jtreg group is actually called hotspot_ >>>> >>>> 227 hotspot_%: >>>> 228 $(ECHO) "Running tests: $@" >>>> 229 for each in $@; do \ >>>> 230 $(MAKE) -j 1 TEST_SELECTION=":$$each" >>>> UNIQUE_DIR=$$each jtreg_tests; \ >>>> 231 done >>>> >>>> While hotspot_% can match multiple targets each target will be distinct >>>> - i.e. $@ will only ever have a single value and the for loop will only >>>> execute once - and hence is unnecessary. This seems borne out with a >>>> simple test: >>>> >>>> > cat Makefile >>>> hotspot_%: >>>> @echo "Running tests: $@" >>>> @for each in $@; do \ >>>> echo $$each ;\ >>>> done >>>> >>>> > make hotspot_a hotspot_b >>>> Running tests: hotspot_a >>>> hotspot_a >>>> Running tests: hotspot_b >>>> hotspot_b >>> >>> Though if you have a quoting issue with the invocation: >>> >>> > make "hotspot_a hotspot_b" >>> Running tests: hotspot_a hotspot_b >>> hotspot_a >>> hotspot_b >>> >>> things turn out different. >>> >>> David >>> >>> >>>> Cheers, >>>> David >>>> >>>> On 3/07/2014 11:08 AM, Mikael Vidstedt wrote: >>>>> >>>>> Please review this enhancement which adds the scaffolding needed to run >>>>> the hotspot jtreg tests in JPRT.
>>>>> >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8049071 >>>>> Webrev (/): >>>>> http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/top/webrev/ >>>>> Webrev (hotspot/): >>>>> http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/hotspot/webrev/ >>>>> >>>>> >>>>> >>>>> Summary: >>>>> >>>>> We want to run the hotspot regression tests on every hotspot push. This >>>>> change enables this and adds four new test groups to the set of tests >>>>> being run on hotspot pushes. The new test sets still need to be >>>>> populated. >>>>> >>>>> Narrative: >>>>> >>>>> The majority of the changes are in the hotspot/test/Makefile. The >>>>> changes are almost entirely stolen from jdk/test/Makefile but have been >>>>> massaged to support (at least) three different use cases, two of which >>>>> were supported earlier: >>>>> >>>>> 1. Running the non-jtreg tests (servertest, clienttest and >>>>> internalvmtests), also supporting the use of the "hotspot_" prefix for when the >>>>> tests are invoked from the JDK top level >>>>> 2. Running jtreg tests by selecting the tests to run using the TESTDIRS >>>>> variable >>>>> 3. Running jtreg tests by selecting the test group to run (NEW) >>>>> >>>>> The third/new use case is implemented by making any target named >>>>> hotspot_% *except* the ones listed in 1. lead to the corresponding jtreg >>>>> test group in TEST.groups being run. For example, running "make >>>>> hotspot_gc" leads to all the tests in the hotspot_gc test group in >>>>> TEST.groups being run and so on. >>>>> >>>>> I also removed the packtest targets, because as far as I can tell >>>>> they're not used anyway. >>>>> >>>>> Note that the new component test groups in TEST.groups - >>>>> hotspot_compiler, hotspot_gc, hotspot_runtime and hotspot_serviceability >>>>> - are currently empty, or more precisely they only run a single test >>>>> each. The intention is that these should be populated by the respective >>>>> teams to include stable and relatively fast tests.
Tests added to the >>>>> groups will be run on hotspot push jobs, and therefore will be blocking >>>>> pushes in case they fail. >>>>> >>>>> Cheers, >>>>> Mikael >>>>> >> > From mikael.vidstedt at oracle.com Sat Jul 12 18:52:17 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Sat, 12 Jul 2014 11:52:17 -0700 Subject: RFR(S): 8049071: Add jtreg jobs to JPRT for Hotspot In-Reply-To: <14E7FD55-A4D7-4997-9A1B-02E5C598054C@oracle.com> References: <53B4AD05.3070702@oracle.com> <53B631B3.6090505@oracle.com> <53B63BB5.8090602@oracle.com> <53BB294A.8040801@oracle.com> <53BEF24B.2090600@oracle.com> <14E7FD55-A4D7-4997-9A1B-02E5C598054C@oracle.com> Message-ID: Thanks Igor, appreciate it! Cheers, Mikael > On Jul 12, 2014, at 11:41, Igor Veresov wrote: > > Looks fine.. > > igor > >> On Jul 10, 2014, at 1:06 PM, Mikael Vidstedt wrote: >> >> >> Anybody? Pleeeease? >> >> Cheers, >> Mikael >> >>> On 2014-07-07 16:12, Mikael Vidstedt wrote: >>> >>> Fixed the comment, removed the loop (the loop logic is btw taken directly from jdk/test/Makefile, but I'll follow up on a fix for that separately). >>> >>> Anybody else want to have a look? 
>>> >>> top: http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.01/top/webrev/ >>> hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.01/hotspot/webrev/ >>> >>> Thanks, >>> Mikael >>> >>>> On 2014-07-03 22:29, David Holmes wrote: >>>>> On 4/07/2014 2:46 PM, David Holmes wrote: >>>>> Hi Mikael, >>>>> >>>>> Generally looks okay - took me a minute to remember that jtreg groups >>>>> combine as set unions :) >>>>> >>>>> A couple of things: >>>>> >>>>> 226 # Unless explicitly defined below, hotspot_ is interpreted as the >>>>> jtreg test group >>>>> >>>>> The jtreg group is actually called hotspot_ >>>>> >>>>> 227 hotspot_%: >>>>> 228 $(ECHO) "Running tests: $@" >>>>> 229 for each in $@; do \ >>>>> 230 $(MAKE) -j 1 TEST_SELECTION=":$$each" >>>>> UNIQUE_DIR=$$each jtreg_tests; \ >>>>> 231 done >>>>> >>>>> While hotspot_% can match multiple targets each target will be distinct >>>>> - ie $@ will only every have a single value and the for loop will only >>>>> execute once - and hence is unnecessary. This seems borne out with a >>>>> simple test: >>>>> >>>>>> cat Makefile >>>>> hotspot_%: >>>>> @echo "Running tests: $@" >>>>> @for each in $@; do \ >>>>> echo $$each ;\ >>>>> done >>>>> >>>>>> make hotspot_a hotspot_b >>>>> Running tests: hotspot_a >>>>> hotspot_a >>>>> Running tests: hotspot_b >>>>> hotspot_b >>>> >>>> Though if you have a quoting issue with the invocation: >>>> >>>>> make "hotspot_a hotspot_b" >>>> Running tests: hotspot_a hotspot_b >>>> hotspot_a >>>> hotspot_b >>>> >>>> things turn out different. >>>> >>>> David >>>> >>>> >>>>> Cheers, >>>>> David >>>>> >>>>>> On 3/07/2014 11:08 AM, Mikael Vidstedt wrote: >>>>>> >>>>>> Please review this enhancement which adds the scaffolding needed to run >>>>>> the hotspot jtreg tests in JPRT. 
>>>>>> >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8049071 >>>>>> Webrev (/): >>>>>> http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/top/webrev/ >>>>>> Webrev (hotspot/): >>>>>> http://cr.openjdk.java.net/~mikael/webrevs/8049071/webrev.00/hotspot/webrev/ >>>>>> >>>>>> >>>>>> >>>>>> Summary: >>>>>> >>>>>> We want to run the hotspot regression tests on every hotspot push. This >>>>>> change enables this and adds four new test groups to the set of tests >>>>>> being run on hotspot pushes. The new test sets still need to be >>>>>> populated. >>>>>> >>>>>> Narrative: >>>>>> >>>>>> The majority of the changes are in the hotspot/test/Makefile. The >>>>>> changes are almost entirely stolen from jdk/test/Makefile but have been >>>>>> massaged to support (at least) three different use cases, two of which >>>>>> were supported earlier: >>>>>> >>>>>> 1. Running the non-jtreg tests (servertest, clienttest and >>>>>> internalvmtests), also supporting the use of the "hotspot_" for when the >>>>>> tests are invoked from the JDK top level >>>>>> 2. Running jtreg tests by selecting test to run using the TESTDIRS >>>>>> variable >>>>>> 3. Running jtreg tests by selecting the test group to run (NEW) >>>>>> >>>>>> The third/new use case is implemented by making any target named >>>>>> hotspot_% *except* the ones listed in 1. lead to the corresponding jtreg >>>>>> test group in TEST.groups being run. For example, running "make >>>>>> hotspot_gc" leads to all the tests in the hotspot_gc test group in >>>>>> TEST.groups to be run and so on. >>>>>> >>>>>> I also removed the packtest targets, because as far as I can tell >>>>>> they're not used anyway. >>>>>> >>>>>> Note that the new component test groups in TEST.group - >>>>>> hotspot_compiler, hotspot_gc, hotspot_runtime and hotspot_serviceability >>>>>> - are currently empty, or more precisely they only run a single test >>>>>> each. 
The intention is that these should be populated by the respective >>>>>> teams to include stable and relatively fast tests. Tests added to the >>>>>> groups will be run on hotspot push jobs, and therefore will be blocking >>>>>> pushes in case they fail. >>>>>> >>>>>> Cheers, >>>>>> Mikael > From david.holmes at oracle.com Sun Jul 13 02:52:35 2014 From: david.holmes at oracle.com (David Holmes) Date: Sun, 13 Jul 2014 12:52:35 +1000 Subject: hb from the end of a constructor of an object ot the start of finilizer In-Reply-To: References: Message-ID: <53C1F473.60303@oracle.com> Hi Winnie, Not really a hotspot question but a Java Memory Model question. On 12/07/2014 12:27 PM, Winnie JayClay wrote: > Hi, on the page 575 of the printed edition it says > > There is a happens-before edge from the end of a constructor of an > object ot the start of finilizer (12.6) for that object. > > I can't get from this definition if this applicable to the case when > I manually invoke finilize() on the object or when GC collects it > and invokes finilize() or for both? > > to put it clearly, say I have class with shared non-volatile and > non-finile state. object fully intialized in the first thread, in the > second thread I invoke finilize() on this object, will I have gurantee > of shared-state visibility? and in case if JVM GC invocation? The intent of the hb edge is only for the GC case (ie for the thread(s) responsible for finalization), otherwise finalize() is like any other method and if you invoke it directly from another thread then the object must either ensure consistency itself or else was safely-published. 
But also note that finalization is problematic in that an object that is still being used can be finalized - see 12.6.1 David Holmes > Thanks, > Winnie > From david.holmes at oracle.com Sun Jul 13 21:22:45 2014 From: david.holmes at oracle.com (David Holmes) Date: Mon, 14 Jul 2014 07:22:45 +1000 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: References: <53BDFD5D.4050908@oracle.com> <53BF69DC.9010305@oracle.com> Message-ID: <53C2F8A5.7050006@oracle.com> Hi Volker, Just discovered you didn't quite pick up on all of my change - the ARM entry is to be deleted. Only the open platforms need to be listed: >> # No SA Support for IA64 or zero >> ADD_SA_BINARIES/ia64 = >> ADD_SA_BINARIES/zero = Thanks, David On 11/07/2014 9:54 PM, Volker Simonis wrote: > On Fri, Jul 11, 2014 at 6:36 AM, David Holmes wrote: >> Hi Volker, >> >> >> On 10/07/2014 8:12 PM, Volker Simonis wrote: >>> >>> Hi David, >>> >>> thanks for looking at this. Here's my new version of the change with >>> some of your suggestions applied: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 >> >> >> I have a simpler counter proposal (also default -> DEFAULT as that seems to >> be the style): >> >> # Serviceability Binaries >> >> ADD_SA_BINARIES/DEFAULT = >> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) \ >> >> $(EXPORT_LIB_DIR)/sa-jdi.jar >> >> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >> ifeq ($(ZIP_DEBUGINFO_FILES),1) >> ADD_SA_BINARIES/DEFAULT += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >> else >> ADD_SA_BINARIES/DEFAULT += >> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >> endif >> endif >> >> ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT) >> >> >> # No SA Support for IA64 or zero >> ADD_SA_BINARIES/ia64 = >> ADD_SA_BINARIES/zero = >> >> --- >> >> The open logic only has to worry about open platforms. The custom makefile >> can accept the default or override as it desires. 
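David's per-architecture default/override idiom can be sketched with a toy makefile. The variable names follow his proposal, but the file list and the HS_ARCH values used here are illustrative, not the real EXPORT_* paths:

```shell
# Sketch of the ADD_SA_BINARIES/DEFAULT idiom: define the list once, let
# every architecture inherit it through a computed variable name, and
# blank out the entry for ports without SA support (zero here).  File
# names are shortened stand-ins for the real EXPORT_* paths.
cat > /tmp/sa_binaries.mk <<'EOF'
ADD_SA_BINARIES/DEFAULT = libsaproc.so sa-jdi.jar
ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT)
# No SA support for zero: the later, empty assignment wins
ADD_SA_BINARIES/zero =
all: ; @echo "SA files for $(HS_ARCH): [$(ADD_SA_BINARIES/$(HS_ARCH))]"
EOF
make -f /tmp/sa_binaries.mk HS_ARCH=ppc64   # inherits the default list
make -f /tmp/sa_binaries.mk HS_ARCH=zero    # empty override wins
```

A custom makefile can accept the inherited default or assign its own ADD_SA_BINARIES/$(HS_ARCH) later, which is exactly the extension point David describes.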
>> >> I thought about conditionally setting ADD_SA_BINARIES/$(HS_ARCH) but the >> above is simple and clear. >> >> Ok? >> > > Perfect! > > Here's the new webrev with your proposed changes (tested on > Linux/x86_64 and ppc64): > > http://cr.openjdk.java.net/~simonis/webrevs/8049715.v3 > > Thanks for sponsoring, > Volker > >> I'll sponsor this one of course (so it's safe for other reviewers to jump in >> now :) ). >> >> Thanks, >> David >> >> >> >>> Please find more information inline: >>> >>> On Thu, Jul 10, 2014 at 4:41 AM, David Holmes >>> wrote: >>>> >>>> Hi Volker, >>>> >>>> Comments below where you might expect them :) >>>> >>>> >>>> On 10/07/2014 3:36 AM, Volker Simonis wrote: >>>>> >>>>> >>>>> Hi, >>>>> >>>>> could someone please review and sponsor the following change which >>>>> does some preliminary work for enabling the SA agent on Linux/PPC64: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8049715 >>>>> >>>>> Details: >>>>> >>>>> Currently, we don't support the SA agent on Linux/PPC64. This change >>>>> fixes the buildsystem such that the SA libraries (i.e. libsaproc.so >>>>> and sa-jdi.jar) will be correctly built and copied into the resulting >>>>> jdk images. >>>>> >>>>> This change also contains some small fixes in sa-jdi.jar to correctly >>>>> detect Linux/PPC64 as a supported SA platform. (The actual >>>>> implementation of the Linux/PPC64 specific code will be handled by >>>>> "8049716 PPC64: Implement SA on Linux/PPC64" - >>>>> https://bugs.openjdk.java.net/browse/JDK-8049716). >>>>> >>>>> One thing which requires special attention is the set of changes in >>>>> make/linux/makefiles/defs.make which may touch the closed ppc port. In >>>>> my change I've simply added 'ppc' to the list of supported >>>>> architectures, but this may break the 32-bit ppc build. I think the >>>> >>>> >>>> >>>> It wouldn't break it but I was expecting to see ppc64 here.
>>>> >>> The problem is that currently the decision if the SA agent will be >>> built is based on the value of HS_ARCH. But HS_ARCH is the 'basic >>> architecture' (i.e. x86 or sparc) so there's no easy way to choose the >>> SA agent for only a 64-bit platform (like ppc64 or amd64) and not for >>> its 32-bit counterpart (i.e. i386 or ppc). >>> >>> The only possibility with the current solution would be to only >>> conditionally set ADD_SA_BINARIES/ppc if ARCH_DATA_MODEL is 64. But >>> that wouldn't make the code nicer either:) >>> >>>> >>>>> current code is too verbose and error prone anyway. It would be better >>>>> to have something like: >>>>> >>>>> ADD_SA_BINARIES = >>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) >>>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>>> >>>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>>> ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>>> else >>>>> ADD_SA_BINARIES += >>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>>> endif >>>>> endif >>>>> >>>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>>>> ppc64)) >>>>> EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) >>>> >>>> >>>> >>>> You wouldn't need/want the $(HS_ARCH) there. >>>> >>> >>> Sorry, that was a typo of course. It should read: >>> >>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>> ppc64)) >>> EXPORT_LIST += $(ADD_SA_BINARIES) >>> >>> But that's not necessary now anymore (see new version below). >>> >>>> >>>>> endif >>>>> >>>>> With this solution we only define ADD_SA_BINARIES once (because the >>>>> various definitions for the different platforms are equal anyway). But >>>>> again this may affect other closed ports so please advise which >>>>> solution you'd prefer. >>>> >>>> >>>> >>>> The above is problematic for customizations. An alternative would be to >>>> set >>>> ADD_SA_BINARIES/default once with all the file names.
Then: >>>> >>>> ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default) >>>> # No SA Support for IA64 or zero >>>> ifneq (, $(findstring $(ARCH), ia64, zero)) >>>> ADD_SA_BINARIES/$(ARCH) = >>>> >>>> Each ARCH handled elsewhere would then still set ADD_SA_BINARIES/$(ARCH) >>>> if >>>> needed. >>>> >>>> Does that seem reasonable? >>>> >>> >>> The problem with using ARCH is that it is not "reliable" in the sens >>> that its value differs for top-level and hotspot-only makes. See >>> "8046471: Use OPENJDK_TARGET_CPU_ARCH instead of legacy value for >>> hotspot ARCH" and my fix "8048232: Fix for 8046471 breaks PPC64 >>> build". >>> >>> But using ADD_SA_BINARIES/default to save redundant lines is a good >>> idea. I've updated the patch accordingly and think that the new >>> solution is a good compromise between readability and not touching >>> existing/closed part. >>> >>> Are you fine with the new version at >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 ? >>> >>>> >>>>> Notice that this change also requires a tiny fix in the top-level >>>>> repository which must be pushed AFTER this change. >>>> >>>> >>>> >>>> Can you elaborate please? >>>> >>> >>> I've also submitted the corresponding top-level repository change for >>> review which expects to find the SA agent libraries on Linux/ppc64 in >>> order to copy them into the image directory: >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715_top_level/ >>> >>> But once that will be pushed, the build will fail if these HS changes >>> will not be in place to actually build the libraries. 
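The ARCH pitfall Volker references (8048232) comes from $(findstring)'s argument order: GNU make's $(findstring find,in) searches for its first argument inside the second, so $(findstring $(ARCH), ppc) can never match when ARCH is ppc64. A minimal illustration with a toy makefile under /tmp:

```shell
# $(findstring find,in) looks for its FIRST argument inside the SECOND.
# So $(findstring $(ARCH), ppc) asks "does ' ppc' contain ppc64?" and
# fails, while listing both values in the haystack makes the match work.
cat > /tmp/findstring_demo.mk <<'EOF'
IN_PPC  := $(findstring $(ARCH), ppc)
IN_LIST := $(findstring $(ARCH), ppc ppc64)
all: ; @echo "in-ppc='$(IN_PPC)' in-list='$(IN_LIST)'"
EOF
make -f /tmp/findstring_demo.mk ARCH=ppc64
make -f /tmp/findstring_demo.mk ARCH=ppc
```

With ARCH=ppc64 the first expansion is empty and only the two-value haystack matches; with ARCH=ppc both match, which is why the fix spelled the check as "$(findstring $(ARCH), ppc ppc64)".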
>>> >>>> Thanks, >>>> David >>>> >>>> >>>>> Thank you and best regards, >>>>> Volker >>>>> >>>> >> From jeremymanson at google.com Mon Jul 14 05:29:44 2014 From: jeremymanson at google.com (Jeremy Manson) Date: Sun, 13 Jul 2014 22:29:44 -0700 Subject: hb from the end of a constructor of an object ot the start of finilizer In-Reply-To: <53C1F473.60303@oracle.com> References: <53C1F473.60303@oracle.com> Message-ID: It's probably worth noting that the guarantee mostly implies that non-trivial finalizers probably won't be able to run until the object's constructor ends. (I imagine that VMs *could* do analysis to figure out when a finalizer can run even without the constructor on the object finishing, but it would seriously not be worth it.) Jeremy On Sat, Jul 12, 2014 at 7:52 PM, David Holmes wrote: > Hi Winnie, > > Not really a hotspot question but a Java Memory Model question. > > > On 12/07/2014 12:27 PM, Winnie JayClay wrote: > >> Hi, on the page 575 of the printed edition it says >> >> There is a happens-before edge from the end of a constructor of an >> object ot the start of finilizer (12.6) for that object. >> >> I can't get from this definition if this applicable to the case when >> I manually invoke finilize() on the object or when GC collects it >> and invokes finilize() or for both? >> >> to put it clearly, say I have class with shared non-volatile and >> non-finile state. object fully intialized in the first thread, in the >> second thread I invoke finilize() on this object, will I have gurantee >> of shared-state visibility? and in case if JVM GC invocation? >> > > The intent of the hb edge is only for the GC case (ie for the thread(s) > responsible for finalization), otherwise finalize() is like any other > method and if you invoke it directly from another thread then the object > must either ensure consistency itself or else was safely-published. 
> > But also note that finalization is problematic in that an object that is > still being used can be finalized - see 12.6.1 > > David Holmes > > Thanks, >> Winnie >> >> From jeremymanson at google.com Mon Jul 14 05:31:23 2014 From: jeremymanson at google.com (Jeremy Manson) Date: Sun, 13 Jul 2014 22:31:23 -0700 Subject: JEP 195: Scalable Native Memory Tracking In-Reply-To: <20140711152213.988B227AB5@eggemoggin.niobe.net> References: <20140711152213.988B227AB5@eggemoggin.niobe.net> Message-ID: +1. We find it unusably expensive. Jeremy On Fri, Jul 11, 2014 at 8:19 AM, wrote: > New JEP Candidate: http://openjdk.java.net/jeps/195 > > - Mark > From david.holmes at oracle.com Mon Jul 14 06:43:14 2014 From: david.holmes at oracle.com (David Holmes) Date: Mon, 14 Jul 2014 16:43:14 +1000 Subject: RFR: 8046765 : (s) makefiles should use parameterized $(CP) and $(MV) rather than explicit commands In-Reply-To: <21F2D2AC-6261-4E6C-BF7A-A3978BFAC9D8@oracle.com> References: <839864EB-04E0-45B0-8D31-25714E84E1A7@oracle.com> <21F2D2AC-6261-4E6C-BF7A-A3978BFAC9D8@oracle.com> Message-ID: <53C37C02.4060801@oracle.com> Hi Mike, The changes from cp to $(CP) look fine, as do the couple of mv changes. As I think I said first time round I'm not sure why cp and mv are being singled out here (and I note that windows also does $(RM) but the others don't). There seem to be a lot of spurious changes in the patch file that don't show up in the cdiffs. I assume whitespace has been added, which jcheck will reject (whitespace can't have been removed as jcheck would not have permitted it in the first place). Also please update all the Oracle copyright lines as needed (hotspot policy). Thanks, David On 13/07/2014 3:19 AM, Mike Duigou wrote: > Hello all; > > After further testing some additional changes were found to be needed to support building hotspot without configure support. 
There are a small number of additional changes in various buildtree.make and the windows build.make files to ensure that locations are defined for CP and MV commands. If there's a more appropriate location for these definitions please suggest it. > > The patch is otherwise unchanged from the month ago version except for line number offset changes owing to other intervening changesets. > > jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046765 > webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046765/2/webrev/ > > Thanks! > > Mike > > On Jun 13 2014, at 12:43 , Mike Duigou wrote: > >> Hello all; >> >> This is a small changeset to the hotspot makefiles to have them use expansions of the $(CP) and $(MV) variables rather than explicit commands for all operations involving files in the deliverables. This changes is needed by static code analysis software which provides replacement cp and mv commands that track error reports in executables back to the source from which they are generated. >> >> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046765 >> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046765/0/webrev/ >> >> I've checked to make sure that patch doesn't change the build output on linux x64 and am currently checking on other platforms. >> >> It is probably easier to review this change by looking at the patch than by looking at the file diffs. 
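The point of parameterizing the copy and move commands can be seen with a toy makefile: because rules only ever say $(CP), a single command-line override substitutes a wrapper everywhere. The paths and the 'cp -p' stand-in below are illustrative; the real use case replaces $(CP) with the analysis tool's tracking cp:

```shell
# Toy version of the $(CP)/$(MV) parameterization: rules reference only
# $(CP), so one override on the make command line swaps in a wrapper
# (here just 'cp -p'; a static-analysis tracking cp in the real setup).
cat > /tmp/cp_demo.mk <<'EOF'
CP ?= cp
MV ?= mv
all: ; @$(CP) /tmp/cp_demo_src.txt /tmp/cp_demo_dst.txt && echo "used CP='$(CP)'"
EOF
echo hello > /tmp/cp_demo_src.txt
make -f /tmp/cp_demo.mk                # default: plain cp
make -f /tmp/cp_demo.mk CP='cp -p'     # wrapper substituted everywhere
```

Command-line assignments beat makefile assignments in make, so the `?=` default is only a fallback; no rule needs editing to adopt the wrapper.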
>> >> Mike >> > From goetz.lindenmaier at sap.com Mon Jul 14 07:56:41 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Mon, 14 Jul 2014 07:56:41 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53BF73A9.3070105@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> Hi, David, can I consider this a review? And I please need a sponsor for this change. Could somebody please help here? Probably some closed adaptions are needed. It applies to any repo as my other change traveled around by now. Thanks and best regards, Goetz. -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Freitag, 11. Juli 2014 07:19 To: Lindenmaier, Goetz; Lois Foltan Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: > Hi, > > foo.hpp as few includes as possible, to avoid cycles. > foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp > (either directly or via the platform files.) > * should include foo.platform.inline.hpp, so that shared files that > call functions from foo.platform.inline.hpp need not contain the > cascade of all the platform files. > If code in foo.platform.inline.hpp is only used in the platform files, > it is not necessary to have an umbrella header. > foo.platform.inline.hpp Should include what is needed in its code. 
> > For client code: > With this change I now removed all include cascades of platform files except for > those in the 'natural' headers. > Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. > (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp > headers, but include bar.[inline.]hpp.) > If it's 1:1, I don't care, as discussed before. > > Does this make sense? I find the overall structure somewhat counter-intuitive from an implementation versus interface perspective. But ... Thanks for the explanation. David > > Best regards, > Goetz. > > > which of the above should #include which others, and which should be > #include'd by "client" code? > > Thanks, > David > >> Thanks, >> Lois >> >>> >>> David >>> ----- >>> >>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>> (however this could pull in more code than needed since >>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>> >>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>> - change not related to clean up of umbrella headers, please >>>> explain/justify. >>>> >>>> src/share/vm/code/vmreg.hpp >>>> - Can lines #143-#15 be replaced by an inclusion of >>>> vmreg.inline.hpp or will >>>> this introduce a cyclical inclusion situation, since >>>> vmreg.inline.hpp includes vmreg.hpp? >>>> >>>> src/share/vm/classfile/classFileStream.cpp >>>> - only has a copyright change in the file, no other changes >>>> present? >>>> >>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>> - incorrect copyright, no current year? 
>>>> >>>> src/share/vm/opto/ad.hpp >>>> - incorrect copyright date for a new file >>>> >>>> src/share/vm/code/vmreg.inline.hpp >>>> - technically this new file does not need to include >>>> "asm/register.hpp" since >>>> vmreg.hpp already includes it >>>> >>>> My only lingering concern is the cyclical nature of >>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>> is not much difference between the two? >>>> >>>> Thanks, >>>> Lois >>>> >>>> >>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>> Hi, >>>>> >>>>> I decided to clean up the remaining include cascades, too. >>>>> >>>>> This change introduces umbrella headers for the files in the cpu >>>>> subdirectories: >>>>> >>>>> src/share/vm/utilities/bytes.hpp >>>>> src/share/vm/opto/ad.hpp >>>>> src/share/vm/code/nativeInst.hpp >>>>> src/share/vm/code/vmreg.inline.hpp >>>>> src/share/vm/interpreter/interp_masm.hpp >>>>> >>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>> >>>>> Where possible, this change avoids includes in headers. >>>>> Eventually it adds a forward declaration. >>>>> >>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>> Still, I did not split the files in the cpu directories, as they are >>>>> rather small. >>>>> >>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>> contains machine dependent, c2 specific register information. So I >>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>> includes in, >>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>> >>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>> file, I don't think this is performance relevant. 
But having them in >>>>> the header requires pulling interp_masm.hpp into interpreter.hpp, and >>>>> thus all the assembler include headers into a lot of files. >>>>> >>>>> Please review and test this change. I please need a sponsor. >>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>> >>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>> linuxppc64, >>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>> aixppc64, ntamd64 >>>>> in opt, dbg and fastdbg versions. >>>>> >>>>> Currently, the change applies to hs-rt, but once my other change >>>>> arrives in other >>>>> repos, it will work there, too. (I tested it together with the other >>>>> change >>>>> against jdk9/dev, too.) >>>>> >>>>> Best regards, >>>>> Goetz. >>>>> >>>>> PS: I also did all the Copyright adaptations ;) >>>> >> From volker.simonis at gmail.com Mon Jul 14 09:44:51 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 14 Jul 2014 11:44:51 +0200 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: <53C2F8A5.7050006@oracle.com> References: <53BDFD5D.4050908@oracle.com> <53BF69DC.9010305@oracle.com> <53C2F8A5.7050006@oracle.com> Message-ID: On Sun, Jul 13, 2014 at 11:22 PM, David Holmes wrote: > Hi Volker, > > Just discovered you didn't quite pick up on all of my change - the ARM entry > is to be deleted. Only the open platforms need to be listed: > > >>> # No SA Support for IA64 or zero >>> ADD_SA_BINARIES/ia64 = >>> ADD_SA_BINARIES/zero = > OK, but then I also remove IA64 as it isn't an open platform either: http://cr.openjdk.java.net/~simonis/webrevs/8049715.v4/ I've also added Vladimir as reviewer.
Thank you and best regards, Volker > Thanks, > David > > On 11/07/2014 9:54 PM, Volker Simonis wrote: >> >> On Fri, Jul 11, 2014 at 6:36 AM, David Holmes >> wrote: >>> >>> Hi Volker, >>> >>> >>> On 10/07/2014 8:12 PM, Volker Simonis wrote: >>>> >>>> >>>> Hi David, >>>> >>>> thanks for looking at this. Here's my new version of the change with >>>> some of your suggestions applied: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 >>> >>> >>> >>> I have a simpler counter proposal (also default -> DEFAULT as that seems >>> to >>> be the style): >>> >>> # Serviceability Binaries >>> >>> ADD_SA_BINARIES/DEFAULT = >>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) \ >>> >>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>> >>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>> ADD_SA_BINARIES/DEFAULT += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>> else >>> ADD_SA_BINARIES/DEFAULT += >>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>> endif >>> endif >>> >>> ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT) >>> >>> >>> # No SA Support for IA64 or zero >>> ADD_SA_BINARIES/ia64 = >>> ADD_SA_BINARIES/zero = >>> >>> --- >>> >>> The open logic only has to worry about open platforms. The custom >>> makefile >>> can accept the default or override as it desires. >>> >>> I thought about conditionally setting ADD_SA_BINARIES/$(HS_ARCH) but the >>> above is simple and clear. >>> >>> Ok? >>> >> >> Perfect! >> >> Here's the new webrev with your proposed changes (tested on >> Linux/x86_64 and ppc64): >> >> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v3 >> >> Thanks for sponsoring, >> Volker >> >>> I'll sponsor this one of course (so its safe for other reviewers to jump >>> in >>> now :) ). 
>>> >>> Thanks, >>> David >>> >>> >>> >>>> Please find more information inline: >>>> >>>> On Thu, Jul 10, 2014 at 4:41 AM, David Holmes >>>> wrote: >>>>> >>>>> >>>>> Hi Volker, >>>>> >>>>> Comments below where you might expect them :) >>>>> >>>>> >>>>> On 10/07/2014 3:36 AM, Volker Simonis wrote: >>>>>> >>>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> could someone please review and sponsor the following change which >>>>>> does some preliminary work for enabling the SA agent on Linux/PPC64: >>>>>> >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715/ >>>>>> https://bugs.openjdk.java.net/browse/JDK-8049715 >>>>>> >>>>>> Details: >>>>>> >>>>>> Currently, we don't support the SA agent on Linux/PPC64. This change >>>>>> fixes the buildsystem such that the SA libraries (i.e. libsaproc.so >>>>>> and sa-jdi.jar) will be correctly build and copied into the resulting >>>>>> jdk images. >>>>>> >>>>>> This change also contains some small fixes in sa-jdi.jar to correctly >>>>>> detect Linux/PPC64 as supported SA platform. (The actual >>>>>> implementation of the Linux/PPC64 specific code will be handled by >>>>>> "8049716 PPC64: Implement SA on Linux/PPC64" - >>>>>> https://bugs.openjdk.java.net/browse/JDK-8049716). >>>>>> >>>>>> One thing which require special attention are the changes in >>>>>> make/linux/makefiles/defs.make which may touch the closed ppc port. In >>>>>> my change I've simply added 'ppc' to the list of supported >>>>>> architectures, but this may break the 32-bit ppc build. I think the >>>>> >>>>> >>>>> >>>>> >>>>> It wouldn't break it but I was expecting to see ppc64 here. >>>>> >>>> >>>> The problem is that currently the decision if the SA agent will be >>>> build is based on the value of HS_ARCH. But HS_ARCH is the 'basic >>>> architecture' (i.e. x86 or sparc) so there's no easy way to choose the >>>> SA agent for only a 64-bit platform (like ppc64 or amd64) and not for >>>> its 32-bit counterpart (i.e. i386 or ppc). 
>>>> >>>> The only possibility with the current solution would be to only >>>> conditionally set ADD_SA_BINARIES/ppc if ARCH_DATA_MODEL is 64. But >>>> that wouldn't make the code nicer either:) >>>> >>>>> >>>>>> current code is to verbose and error prone anyway. It would be better >>>>>> to have something like: >>>>>> >>>>>> ADD_SA_BINARIES = >>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) >>>>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>>>> >>>>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>>>> ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>>>> else >>>>>> ADD_SA_BINARIES += >>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>>>> endif >>>>>> endif >>>>>> >>>>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>>>>> ppc64)) >>>>>> EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) >>>>> >>>>> >>>>> >>>>> >>>>> You wouldn't need/want the $(HS_ARCH) there. >>>>> >>>> >>>> Sorry, that was a type of course. It should read: >>>> >>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>>> ppc64)) >>>> EXPORT_LIST += $(ADD_SA_BINARIES) >>>> >>>> But that's not necessary now anymore (see new version below). >>>> >>>>> >>>>>> endif >>>>>> >>>>>> With this solution we only define ADD_SA_BINARIES once (because the >>>>>> various definitions for the different platforms are equal anyway). But >>>>>> again this may affect other closed ports so please advise which >>>>>> solution you'd prefer. >>>>> >>>>> >>>>> >>>>> >>>>> The above is problematic for customizations. An alternative would be to >>>>> set >>>>> ADD_SA_BINARIES/default once with all the file names. Then: >>>>> >>>>> ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default) >>>>> # No SA Support for IA64 or zero >>>>> ifneq (, $(findstring $(ARCH), ia64, zero)) >>>>> ADD_SA_BINARIES/$(ARCH) = >>>>> >>>>> Each ARCH handled elsewhere would then still set >>>>> ADD_SA_BINARIES/$(ARCH) >>>>> if >>>>> needed. 
>>>>> >>>>> Does that seem reasonable? >>>>> >>>> >>>> The problem with using ARCH is that it is not "reliable" in the sens >>>> that its value differs for top-level and hotspot-only makes. See >>>> "8046471: Use OPENJDK_TARGET_CPU_ARCH instead of legacy value for >>>> hotspot ARCH" and my fix "8048232: Fix for 8046471 breaks PPC64 >>>> build". >>>> >>>> But using ADD_SA_BINARIES/default to save redundant lines is a good >>>> idea. I've updated the patch accordingly and think that the new >>>> solution is a good compromise between readability and not touching >>>> existing/closed part. >>>> >>>> Are you fine with the new version at >>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 ? >>>> >>>>> >>>>>> Notice that this change also requires a tiny fix in the top-level >>>>>> repository which must be pushed AFTER this change. >>>>> >>>>> >>>>> >>>>> >>>>> Can you elaborate please? >>>>> >>>> >>>> I've also submitted the corresponding top-level repository change for >>>> review which expects to find the SA agent libraries on Linux/ppc64 in >>>> order to copy them into the image directory: >>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715_top_level/ >>>> >>>> But once that will be pushed, the build will fail if these HS changes >>>> will not be in place to actually build the libraries. 
>>>> >>>>> Thanks, >>>>> David >>>>> >>>>> >>>>>> Thank you and best regards, >>>>>> Volker >>>>>> >>>>> >>> > From david.holmes at oracle.com Mon Jul 14 11:09:31 2014 From: david.holmes at oracle.com (David Holmes) Date: Mon, 14 Jul 2014 21:09:31 +1000 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: References: <53BDFD5D.4050908@oracle.com> <53BF69DC.9010305@oracle.com> <53C2F8A5.7050006@oracle.com> Message-ID: <53C3BA6B.70600@oracle.com> On 14/07/2014 7:44 PM, Volker Simonis wrote: > On Sun, Jul 13, 2014 at 11:22 PM, David Holmes wrote: >> Hi Volker, >> >> Just discovered you didn't quite pick up on all of my change - the ARM entry >> is to be deleted. Only the open platforms need to be listed: >> >> >>>> # No SA Support for IA64 or zero >>>> ADD_SA_BINARIES/ia64 = >>>> ADD_SA_BINARIES/zero = >> > > OK, but then I also remove IA64 as it isn't an open platform either: > > http://cr.openjdk.java.net/~simonis/webrevs/8049715.v4/ Yes good point. ia64 should be eradicated from the build system :) I will put this altogether in the AM. > I've also added Vladimir as reviewer. Great Thanks, David > Thank you and best regards, > Volker > > >> Thanks, >> David >> >> On 11/07/2014 9:54 PM, Volker Simonis wrote: >>> >>> On Fri, Jul 11, 2014 at 6:36 AM, David Holmes >>> wrote: >>>> >>>> Hi Volker, >>>> >>>> >>>> On 10/07/2014 8:12 PM, Volker Simonis wrote: >>>>> >>>>> >>>>> Hi David, >>>>> >>>>> thanks for looking at this. 
Here's my new version of the change with >>>>> some of your suggestions applied: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 >>>> >>>> >>>> >>>> I have a simpler counter proposal (also default -> DEFAULT as that seems >>>> to >>>> be the style): >>>> >>>> # Serviceability Binaries >>>> >>>> ADD_SA_BINARIES/DEFAULT = >>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) \ >>>> >>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>> >>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>> ADD_SA_BINARIES/DEFAULT += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>> else >>>> ADD_SA_BINARIES/DEFAULT += >>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>> endif >>>> endif >>>> >>>> ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT) >>>> >>>> >>>> # No SA Support for IA64 or zero >>>> ADD_SA_BINARIES/ia64 = >>>> ADD_SA_BINARIES/zero = >>>> >>>> --- >>>> >>>> The open logic only has to worry about open platforms. The custom >>>> makefile >>>> can accept the default or override as it desires. >>>> >>>> I thought about conditionally setting ADD_SA_BINARIES/$(HS_ARCH) but the >>>> above is simple and clear. >>>> >>>> Ok? >>>> >>> >>> Perfect! >>> >>> Here's the new webrev with your proposed changes (tested on >>> Linux/x86_64 and ppc64): >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v3 >>> >>> Thanks for sponsoring, >>> Volker >>> >>>> I'll sponsor this one of course (so its safe for other reviewers to jump >>>> in >>>> now :) ). 
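David's counter-proposal relies on GNU make's computed variable names: in $(ADD_SA_BINARIES/$(HS_ARCH)) the inner $(HS_ARCH) is expanded first, so a later per-arch assignment overrides the DEFAULT copy for just that architecture. A stripped-down sketch (file names hypothetical):

```make
ADD_SA_BINARIES/DEFAULT = libsaproc.so sa-jdi.jar

# Every architecture starts out with the default list...
ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT)

# ...and open platforms without SA support simply empty theirs out again.
ADD_SA_BINARIES/zero =

# The export list picks up whatever the current arch's variable resolved to.
EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH))
```

A closed or custom makefile can then override ADD_SA_BINARIES/&lt;arch&gt; on its own, without the open logic having to enumerate those platforms.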
>>>> >>>> Thanks, >>>> David >>>> >>>> >>>> >>>>> Please find more information inline: >>>>> >>>>> On Thu, Jul 10, 2014 at 4:41 AM, David Holmes >>>>> wrote: >>>>>> >>>>>> >>>>>> Hi Volker, >>>>>> >>>>>> Comments below where you might expect them :) >>>>>> >>>>>> >>>>>> On 10/07/2014 3:36 AM, Volker Simonis wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> could someone please review and sponsor the following change which >>>>>>> does some preliminary work for enabling the SA agent on Linux/PPC64: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715/ >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8049715 >>>>>>> >>>>>>> Details: >>>>>>> >>>>>>> Currently, we don't support the SA agent on Linux/PPC64. This change >>>>>>> fixes the build system such that the SA libraries (i.e. libsaproc.so >>>>>>> and sa-jdi.jar) will be correctly built and copied into the resulting >>>>>>> jdk images. >>>>>>> >>>>>>> This change also contains some small fixes in sa-jdi.jar to correctly >>>>>>> detect Linux/PPC64 as a supported SA platform. (The actual >>>>>>> implementation of the Linux/PPC64 specific code will be handled by >>>>>>> "8049716 PPC64: Implement SA on Linux/PPC64" - >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8049716). >>>>>>> >>>>>>> One thing which requires special attention is the changes in >>>>>>> make/linux/makefiles/defs.make which may touch the closed ppc port. In >>>>>>> my change I've simply added 'ppc' to the list of supported >>>>>>> architectures, but this may break the 32-bit ppc build. I think the >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> It wouldn't break it but I was expecting to see ppc64 here. >>>>>> >>>>>
x86 or sparc) so there's no easy way to choose the >>>>> SA agent for only a 64-bit platform (like ppc64 or amd64) and not for >>>>> its 32-bit counterpart (i.e. i386 or ppc). >>>>> >>>>> The only possibility with the current solution would be to only >>>>> conditionally set ADD_SA_BINARIES/ppc if ARCH_DATA_MODEL is 64. But >>>>> that wouldn't make the code nicer either:) >>>>> >>>>>> >>>>>>> current code is too verbose and error-prone anyway. It would be better >>>>>>> to have something like: >>>>>>> >>>>>>> ADD_SA_BINARIES = >>>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) >>>>>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>>>>> >>>>>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>>>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>>>>> ADD_SA_BINARIES += $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>>>>> else >>>>>>> ADD_SA_BINARIES += >>>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>>>>> endif >>>>>>> endif >>>>>>> >>>>>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>>>>>> ppc64)) >>>>>>> EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> You wouldn't need/want the $(HS_ARCH) there. >>>>>> >>>>> >>>>> Sorry, that was a typo of course. It should read: >>>>> >>>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>>>> ppc64)) >>>>> EXPORT_LIST += $(ADD_SA_BINARIES) >>>>> >>>>> But that's not necessary anymore (see new version below). >>>>> >>>>>> >>>>>>> endif >>>>>>> >>>>>>> With this solution we only define ADD_SA_BINARIES once (because the >>>>>>> various definitions for the different platforms are equal anyway). But >>>>>>> again this may affect other closed ports so please advise which >>>>>>> solution you'd prefer. >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> The above is problematic for customizations. An alternative would be to >>>>>> set >>>>>> ADD_SA_BINARIES/default once with all the file names.
Then: >>>>>> >>>>>> ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default) >>>>>> # No SA Support for IA64 or zero >>>>>> ifneq (, $(findstring $(ARCH), ia64, zero)) >>>>>> ADD_SA_BINARIES/$(ARCH) = >>>>>> >>>>>> Each ARCH handled elsewhere would then still set >>>>>> ADD_SA_BINARIES/$(ARCH) >>>>>> if >>>>>> needed. >>>>>> >>>>>> Does that seem reasonable? >>>>>> >>>>> >>>>> The problem with using ARCH is that it is not "reliable" in the sense >>>>> that its value differs for top-level and hotspot-only builds. See >>>>> "8046471: Use OPENJDK_TARGET_CPU_ARCH instead of legacy value for >>>>> hotspot ARCH" and my fix "8048232: Fix for 8046471 breaks PPC64 >>>>> build". >>>>> >>>>> But using ADD_SA_BINARIES/default to save redundant lines is a good >>>>> idea. I've updated the patch accordingly and think that the new >>>>> solution is a good compromise between readability and not touching >>>>> existing/closed parts. >>>>> >>>>> Are you fine with the new version at >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 ? >>>>> >>>>>> >>>>>>> Notice that this change also requires a tiny fix in the top-level >>>>>>> repository which must be pushed AFTER this change. >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> Can you elaborate please? >>>>>> >>>>> >>>>> I've also submitted the corresponding top-level repository change for >>>>> review which expects to find the SA agent libraries on Linux/ppc64 in >>>>> order to copy them into the image directory: >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715_top_level/ >>>>> >>>>> But once that is pushed, the build will fail if these HS changes >>>>> are not in place to actually build the libraries.
>>>>> >>>>>> Thanks, >>>>>> David >>>>>> >>>>>> >>>>>>> Thank you and best regards, >>>>>>> Volker >>>>>>> >>>>>> >>>> >> From tobias.hartmann at oracle.com Mon Jul 14 11:56:52 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Mon, 14 Jul 2014 13:56:52 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation Message-ID: <53C3C584.7070008@oracle.com> Hi, please review the following patch for JDK-8029443. Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ *Problem* After the tracing/marking phase of GC, nmethod::do_unloading(..) checks whether an nmethod can be unloaded because it contains dead oops. If class unloading occurred, we additionally clear all ICs where the cached metadata refers to an unloaded klass or method. If the nmethod is not unloaded, nmethod::verify_metadata_loaders(..) finally checks if all metadata is alive. The assert in CheckClass::check_class fails because the nmethod contains Method* metadata corresponding to a dead Klass. The Method* belongs to a to-interpreter stub [1] of an optimized compiled IC. Normally we clear those stubs prior to verification to avoid dangling references to Method* [2], but only if the stub is not in use, i.e. if the IC is not in to-interpreted mode. In this case the to-interpreter stub may be executed and hand a stale Method* to the interpreter. *Solution* The implementation of nmethod::do_unloading(..) is changed to clean compiled ICs and compiled static calls if they call into a to-interpreter stub that references dead Method* metadata. The patch was affected by the G1 class unloading changes (JDK-8048248) because the method nmethod::do_unloading_parallel(..) was added. I adapted the implementation as well. *Testing* Failing test (runThese), JPRT Thanks, Tobias [1] see CompiledStaticCall::emit_to_interp_stub(..)
[2] see nmethod::verify_metadata_loaders(..), static_stub_reloc()->clear_inline_cache() clears the stub From coleen.phillimore at oracle.com Mon Jul 14 12:09:04 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 14 Jul 2014 08:09:04 -0400 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> Message-ID: <53C3C860.6050402@oracle.com> I think this looks like a good cleanup. I can sponsor it and make the closed changes also again. I initially proposed the #include cascades because the alternative at the time was to blindly create a dispatching header file for each target dependent file. I wanted to see the #includes cleaned up instead and target dependent files included directly. This adds 5 dispatching header files, which is fine. I think the case of interp_masm.hpp is interesting though, because the dispatching file is included in cpu dependent files, which could directly include the cpu version. But there are 3 platform independent files that include it. I'm not going to object though because I'm grateful for this cleanup and I guess it's a matter of opinion which is best to include in the cpu dependent directories. Thanks, Coleen On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: > Hi, > > David, can I consider this a review? > > And I please need a sponsor for this change. Could somebody > please help here? Probably some closed adaptions are needed. > It applies to any repo as my other change traveled around > by now. 
> > Thanks and best regards, > Goetz. > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Freitag, 11. Juli 2014 07:19 > To: Lindenmaier, Goetz; Lois Foltan > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> foo.hpp as few includes as possible, to avoid cycles. >> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >> (either directly or via the platform files.) >> * should include foo.platform.inline.hpp, so that shared files that >> call functions from foo.platform.inline.hpp need not contain the >> cascade of all the platform files. >> If code in foo.platform.inline.hpp is only used in the platform files, >> it is not necessary to have an umbrella header. >> foo.platform.inline.hpp Should include what is needed in its code. >> >> For client code: >> With this change I now removed all include cascades of platform files except for >> those in the 'natural' headers. >> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >> headers, but include bar.[inline.]hpp.) >> If it's 1:1, I don't care, as discussed before. >> >> Does this make sense? > I find the overall structure somewhat counter-intuitive from an > implementation versus interface perspective. But ... > > Thanks for the explanation. > > David > >> Best regards, >> Goetz. >> >> >> which of the above should #include which others, and which should be >> #include'd by "client" code? 
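Goetz's layering rules translate into umbrella headers of roughly the following shape. The file and macro names below are made up for illustration and are not taken from the actual webrev; the TARGET_ARCH_* dispatch mirrors the style HotSpot makefiles of that era defined:

```cpp
// foo.hpp: platform-independent interface; as few includes as possible.
// It dispatches once to the per-CPU declarations, so shared files never
// repeat this cascade themselves.
#ifndef SHARE_VM_CODE_FOO_HPP
#define SHARE_VM_CODE_FOO_HPP

#ifdef TARGET_ARCH_x86
# include "code/foo_x86.hpp"
#endif
#ifdef TARGET_ARCH_sparc
# include "code/foo_sparc.hpp"
#endif
#ifdef TARGET_ARCH_ppc
# include "code/foo_ppc.hpp"
#endif

#endif // SHARE_VM_CODE_FOO_HPP
```

Shared .cpp files then include foo.hpp (or foo.inline.hpp for the inline definitions); only the umbrella header knows the set of platforms.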
>> >> Thanks, >> David >> >>> Thanks, >>> Lois >>> >>>> David >>>> ----- >>>> >>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>> (however this could pull in more code than needed since >>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>> >>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>> - change not related to clean up of umbrella headers, please >>>>> explain/justify. >>>>> >>>>> src/share/vm/code/vmreg.hpp >>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>> vmreg.inline.hpp or will >>>>> this introduce a cyclical inclusion situation, since >>>>> vmreg.inline.hpp includes vmreg.hpp? >>>>> >>>>> src/share/vm/classfile/classFileStream.cpp >>>>> - only has a copyright change in the file, no other changes >>>>> present? >>>>> >>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>> - incorrect copyright, no current year? >>>>> >>>>> src/share/vm/opto/ad.hpp >>>>> - incorrect copyright date for a new file >>>>> >>>>> src/share/vm/code/vmreg.inline.hpp >>>>> - technically this new file does not need to include >>>>> "asm/register.hpp" since >>>>> vmreg.hpp already includes it >>>>> >>>>> My only lingering concern is the cyclical nature of >>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>> is not much difference between the two? >>>>> >>>>> Thanks, >>>>> Lois >>>>> >>>>> >>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>> Hi, >>>>>> >>>>>> I decided to clean up the remaining include cascades, too. 
>>>>>> >>>>>> This change introduces umbrella headers for the files in the cpu >>>>>> subdirectories: >>>>>> >>>>>> src/share/vm/utilities/bytes.hpp >>>>>> src/share/vm/opto/ad.hpp >>>>>> src/share/vm/code/nativeInst.hpp >>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>> >>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>> >>>>>> Where possible, this change avoids includes in headers. >>>>>> Where necessary, it adds a forward declaration. >>>>>> >>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>> rather small. >>>>>> >>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>> contains machine-dependent, C2-specific register information. So I >>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>> includes in, >>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>> >>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>> file; I don't think this is performance relevant. But having them in >>>>>> the header requires pulling interp_masm.hpp into interpreter.hpp, and >>>>>> thus all the assembler include headers into a lot of files. >>>>>> >>>>>> Please review and test this change. I need a sponsor, please. >>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>> >>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>> linuxppc64, >>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>> aixppc64, ntamd64 >>>>>> in opt, dbg and fastdbg versions. >>>>>> >>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>> arrives in other >>>>>> repos, it will work there, too.
(I tested it together with the other >>>>>> change >>>>>> against jdk9/dev, too.) >>>>>> >>>>>> Best regards, >>>>>> Goetz. >>>>>> >>>>>> PS: I also did all the Copyright adaptions ;) From goetz.lindenmaier at sap.com Mon Jul 14 12:37:18 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Mon, 14 Jul 2014 12:37:18 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53C3C860.6050402@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> Hi Coleen, Thanks for sponsoring this! bytes, ad, nativeInst and vmreg.inline were used quite often in shared files, so it definitely makes sense for these to have a shared header. vm_version and register had an umbrella header, but that was not used everywhere, so I cleaned it up. That left adGlobals, jniTypes and interp_masm which are only used a few time. I did these so that all files are treated similarly. In the end, I didn't need a header for all, as they were not really needed in the shared files, or I found another good place, as for adGlobals. I added you and David H. as reviewer to the webrev: http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ I hope this is ok with you, David. Thanks, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore Sent: Montag, 14. 
Juli 2014 14:09 To: hotspot-dev at openjdk.java.net Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories I think this looks like a good cleanup. I can sponsor it and make the closed changes also again. I initially proposed the #include cascades because the alternative at the time was to blindly create a dispatching header file for each target dependent file. I wanted to see the #includes cleaned up instead and target dependent files included directly. This adds 5 dispatching header files, which is fine. I think the case of interp_masm.hpp is interesting though, because the dispatching file is included in cpu dependent files, which could directly include the cpu version. But there are 3 platform independent files that include it. I'm not going to object though because I'm grateful for this cleanup and I guess it's a matter of opinion which is best to include in the cpu dependent directories. Thanks, Coleen On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: > Hi, > > David, can I consider this a review? > > And I please need a sponsor for this change. Could somebody > please help here? Probably some closed adaptions are needed. > It applies to any repo as my other change traveled around > by now. > > Thanks and best regards, > Goetz. > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Freitag, 11. Juli 2014 07:19 > To: Lindenmaier, Goetz; Lois Foltan > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> foo.hpp as few includes as possible, to avoid cycles. >> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >> (either directly or via the platform files.) 
>> * should include foo.platform.inline.hpp, so that shared files that >> call functions from foo.platform.inline.hpp need not contain the >> cascade of all the platform files. >> If code in foo.platform.inline.hpp is only used in the platform files, >> it is not necessary to have an umbrella header. >> foo.platform.inline.hpp Should include what is needed in its code. >> >> For client code: >> With this change I now removed all include cascades of platform files except for >> those in the 'natural' headers. >> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >> headers, but include bar.[inline.]hpp.) >> If it's 1:1, I don't care, as discussed before. >> >> Does this make sense? > I find the overall structure somewhat counter-intuitive from an > implementation versus interface perspective. But ... > > Thanks for the explanation. > > David > >> Best regards, >> Goetz. >> >> >> which of the above should #include which others, and which should be >> #include'd by "client" code? >> >> Thanks, >> David >> >>> Thanks, >>> Lois >>> >>>> David >>>> ----- >>>> >>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>> (however this could pull in more code than needed since >>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>> >>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>> - change not related to clean up of umbrella headers, please >>>>> explain/justify. >>>>> >>>>> src/share/vm/code/vmreg.hpp >>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>> vmreg.inline.hpp or will >>>>> this introduce a cyclical inclusion situation, since >>>>> vmreg.inline.hpp includes vmreg.hpp? 
>>>>> >>>>> src/share/vm/classfile/classFileStream.cpp >>>>> - only has a copyright change in the file, no other changes >>>>> present? >>>>> >>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>> - incorrect copyright, no current year? >>>>> >>>>> src/share/vm/opto/ad.hpp >>>>> - incorrect copyright date for a new file >>>>> >>>>> src/share/vm/code/vmreg.inline.hpp >>>>> - technically this new file does not need to include >>>>> "asm/register.hpp" since >>>>> vmreg.hpp already includes it >>>>> >>>>> My only lingering concern is the cyclical nature of >>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>> is not much difference between the two? >>>>> >>>>> Thanks, >>>>> Lois >>>>> >>>>> >>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>> Hi, >>>>>> >>>>>> I decided to clean up the remaining include cascades, too. >>>>>> >>>>>> This change introduces umbrella headers for the files in the cpu >>>>>> subdirectories: >>>>>> >>>>>> src/share/vm/utilities/bytes.hpp >>>>>> src/share/vm/opto/ad.hpp >>>>>> src/share/vm/code/nativeInst.hpp >>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>> >>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>> >>>>>> Where possible, this change avoids includes in headers. >>>>>> Eventually it adds a forward declaration. >>>>>> >>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>> rather small. >>>>>> >>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>> contains machine dependent, c2 specific register information. 
So I >>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>> includes in, >>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>> >>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>> file, I don't think this is performance relevant. But having them in >>>>>> the header requirs to pull interp_masm.hpp into interpreter.hpp, and >>>>>> thus all the assembler include headers into a lot of files. >>>>>> >>>>>> Please review and test this change. I please need a sponsor. >>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>> >>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>> linuxppc64, >>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>> aixppc64, ntamd64 >>>>>> in opt, dbg and fastdbg versions. >>>>>> >>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>> arrives in other >>>>>> repos, it will work there, too. (I tested it together with the other >>>>>> change >>>>>> against jdk9/dev, too.) >>>>>> >>>>>> Best regards, >>>>>> Goetz. >>>>>> >>>>>> PS: I also did all the Copyright adaptions ;) From mike.duigou at oracle.com Mon Jul 14 17:11:19 2014 From: mike.duigou at oracle.com (Mike Duigou) Date: Mon, 14 Jul 2014 10:11:19 -0700 Subject: RFR: 8046765 : (s) makefiles should use parameterized $(CP) and $(MV) rather than explicit commands In-Reply-To: <53C37C02.4060801@oracle.com> References: <839864EB-04E0-45B0-8D31-25714E84E1A7@oracle.com> <21F2D2AC-6261-4E6C-BF7A-A3978BFAC9D8@oracle.com> <53C37C02.4060801@oracle.com> Message-ID: <278CDB58-3873-4187-A82F-29E69A8F3F49@oracle.com> On Jul 13 2014, at 23:43 , David Holmes wrote: > Hi Mike, > > The changes from cp to $(CP) look fine, as do the couple of mv changes. 
As I think I said first time round, I'm not sure why cp and mv are being singled out here. The static analysis tool we are using substitutes instrumented versions of mv and cp so that it can track files from their final location back to the source. It would seem that hashing would be a more reliable way to do this tracking, but this is what the tool requires. > (and I note that windows also does $(RM) but the others don't). $RM was already defined in build.make. I don't see any changes in my patch involving RM. I do note that the RM expansion isn't used in most makefiles (sa.make was the one I noticed). > > There seem to be a lot of spurious changes in the patch file that don't show up in the cdiffs. I assume whitespace has been added, which jcheck will reject (whitespace can't have been removed as jcheck would not have permitted it in the first place). It is actually whitespace being removed as a consequence of my text editor trimming trailing whitespace. jcheck whitespace checks only specific file types (java|c|h|cpp|hpp). > > Also please update all the Oracle copyright lines as needed (hotspot policy). I shall do so. With the copyright changes, are we good to go? > > Thanks, > David > > On 13/07/2014 3:19 AM, Mike Duigou wrote: >> Hello all; >> >> After further testing, some additional changes were found to be needed to support building hotspot without configure support. There are a small number of additional changes in various buildtree.make and the windows build.make files to ensure that locations are defined for CP and MV commands. If there's a more appropriate location for these definitions, please suggest it. >> >> The patch is otherwise unchanged from the month-ago version except for line number offset changes owing to other intervening changesets. >> >> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046765 >> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046765/2/webrev/ >> >> Thanks!
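The change under review is the usual pattern of routing file operations through overridable make variables, so a wrapper toolchain can be substituted without editing any rule. A hypothetical sketch (paths and targets invented for illustration):

```make
# Defaults; an instrumented build can override them on the command line,
# e.g.  make CP=/opt/analysis/bin/cp MV=/opt/analysis/bin/mv
CP ?= cp
MV ?= mv

# Rules then invoke $(CP)/$(MV) instead of the bare commands:
$(EXPORT_LIB_DIR)/sa-jdi.jar: sa-jdi.jar
	$(CP) $< $@
```

With bare `cp`/`mv` hard-coded in the rules, the tracking tool's replacements would never be invoked; the parameterization is the whole point of the patch.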
>> >> Mike >> >> On Jun 13 2014, at 12:43 , Mike Duigou wrote: >> >>> Hello all; >>> >>> This is a small changeset to the hotspot makefiles to have them use expansions of the $(CP) and $(MV) variables rather than explicit commands for all operations involving files in the deliverables. This changes is needed by static code analysis software which provides replacement cp and mv commands that track error reports in executables back to the source from which they are generated. >>> >>> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046765 >>> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046765/0/webrev/ >>> >>> I've checked to make sure that patch doesn't change the build output on linux x64 and am currently checking on other platforms. >>> >>> It is probably easier to review this change by looking at the patch than by looking at the file diffs. >>> >>> Mike >>> >> From volker.simonis at gmail.com Mon Jul 14 18:24:42 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 14 Jul 2014 20:24:42 +0200 Subject: RFR(XS): 8049441: PPC64: Don't use StubCodeMarks for zero-length stubs In-Reply-To: <53BC1237.2060006@oracle.com> References: <53BC1237.2060006@oracle.com> Message-ID: Hi everybody, can somebody PLEASE review and sponsor this tiny, ppc64-only change. Thanks, Volker On Tue, Jul 8, 2014 at 5:45 PM, Daniel D. Daugherty wrote: > Adding the Serviceability Team since JVM/TI belongs to them. > > Dan > > > > On 7/8/14 9:41 AM, Volker Simonis wrote: >> >> Hi, >> >> could somebody please review and push the following small, PPC64-only >> change to any of the hs team repositories: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8049441/ >> https://bugs.openjdk.java.net/browse/JDK-8049441 >> >> Background: >> >> For some stubs we actually do not really generate code on PPC64 but >> instead we use a native C-function with inline-assembly. 
If the >> generators of these stubs contain a StubCodeMark, they will trigger >> JvmtiExport::post_dynamic_code_generated_internal events with a >> zero-length code size. These events may fool clients like Oprofile which >> register for these events (thanks to Maynard Johnson who reported this >> - see >> http://mail.openjdk.java.net/pipermail/ppc-aix-port-dev/2014-June/002032.html). >> >> This change simply removes the StubCodeMark from >> ICacheStubGenerator::generate_icache_flush() and generate_verify_oop() >> because they don't generate assembly code. It also removes the >> StubCodeMark from generate_throw_exception() because it doesn't really >> generate a plain stub but a runtime stub for which the JVMTI dynamic >> code event is already generated by RuntimeStub::new_runtime_stub() -> >> CodeBlob::trace_new_stub() -> >> JvmtiExport::post_dynamic_code_generated(). >> >> Thank you and best regards, >> Volker > > From serguei.spitsyn at oracle.com Mon Jul 14 20:35:42 2014 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Mon, 14 Jul 2014 13:35:42 -0700 Subject: RFR(XS): 8049441: PPC64: Don't use StubCodeMarks for zero-length stubs In-Reply-To: References: <53BC1237.2060006@oracle.com> Message-ID: <53C43F1E.2030805@oracle.com> Hi Volker, It looks good in general. But I don't understand all the details. For instance, your email description of the fix says that the event is posted by: RuntimeStub::new_runtime_stub() -> CodeBlob::trace_new_stub() -> JvmtiExport::post_dynamic_code_generated() I see the new_runtime_stub() call in the generate_throw_exception() but there is no such call in the generate_icache_flush() and generate_handler_for_unsafe_access(). Probably, the StubCodeMark just needs to be removed there. Could you, please, explain this a little bit? We also need someone from the compiler team to look at this. I also included Oleg, who recently touched this area, in the cc-list.
Thanks, Serguei On 7/14/14 11:24 AM, Volker Simonis wrote: > Hi everybody, > > can somebody PLEASE review and sponsor this tiny, ppc64-only change. > > Thanks, > Volker > > > On Tue, Jul 8, 2014 at 5:45 PM, Daniel D. Daugherty > wrote: >> Adding the Serviceability Team since JVM/TI belongs to them. >> >> Dan >> >> >> >> On 7/8/14 9:41 AM, Volker Simonis wrote: >>> Hi, >>> >>> could somebody please review and push the following small, PPC64-only >>> change to any of the hs team repositories: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8049441/ >>> https://bugs.openjdk.java.net/browse/JDK-8049441 >>> >>> Background: >>> >>> For some stubs we actually do not really generate code on PPC64 but >>> instead we use a native C-function with inline-assembly. If the >>> generators of these stubs contain a StubCodeMark, they will trigger >>> JvmtiExport::post_dynamic_code_generated_internal events with a zero >>> length code size. These events may fool clients like Oprofile which >>> register for these events (thanks to Maynard Johnson who reported this >>> - see >>> http://mail.openjdk.java.net/pipermail/ppc-aix-port-dev/2014-June/002032.html). >>> >>> This change simply removes the StubCodeMark from >>> ICacheStubGenerator::generate_icache_flush() and generate_verify_oop() >>> because they don't generate assembly code. It also removes the >>> StubCodeMark from generate_throw_exception() because it doesn't really >>> generate a plain stub but a runtime stub for which the JVMT dynamic >>> code event is already generated by RuntimeStub::new_runtime_stub() -> >>> CodeBlob::trace_new_stub() -> >>> JvmtiExport::post_dynamic_code_generated(). 
>>> >>> Thank you and best regards, >>> Volker >> From david.holmes at oracle.com Mon Jul 14 22:26:26 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 15 Jul 2014 08:26:26 +1000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> Message-ID: <53C45912.4050905@oracle.com> On 14/07/2014 10:37 PM, Lindenmaier, Goetz wrote: > Hi Coleen, > > Thanks for sponsoring this! > > bytes, ad, nativeInst and vmreg.inline were used quite often > in shared files, so it definitely makes sense for these to have > a shared header. > vm_version and register had an umbrella header, but that > was not used everywhere, so I cleaned it up. > That left adGlobals, jniTypes and interp_masm which > are only used a few times. I did these so that all files > are treated similarly. > In the end, I didn't need a header for all, as they were > not really needed in the shared files, or I found > another good place, as for adGlobals. > > I added you and David H. as reviewers to the webrev: > http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ > I hope this is ok with you, David. It might be somewhat premature :) I'm somewhat confused by the rules for headers and includes and inlines. I now see with this change a bunch of inline function definitions being moved out of the .inline.hpp file and into the .hpp file. Why?
What criteria determines if an inline function goes into the .hpp versus the .inline.hpp file ??? Thanks, David > Thanks, > Goetz. > > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore > Sent: Montag, 14. Juli 2014 14:09 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > > I think this looks like a good cleanup. I can sponsor it and make the > closed changes also again. I initially proposed the #include cascades > because the alternative at the time was to blindly create a dispatching > header file for each target dependent file. I wanted to see the > #includes cleaned up instead and target dependent files included > directly. This adds 5 dispatching header files, which is fine. I > think the case of interp_masm.hpp is interesting though, because the > dispatching file is included in cpu dependent files, which could > directly include the cpu version. But there are 3 platform independent > files that include it. I'm not going to object though because I'm > grateful for this cleanup and I guess it's a matter of opinion which is > best to include in the cpu dependent directories. > > Thanks, > Coleen > > > On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> David, can I consider this a review? >> >> And I please need a sponsor for this change. Could somebody >> please help here? Probably some closed adaptions are needed. >> It applies to any repo as my other change traveled around >> by now. >> >> Thanks and best regards, >> Goetz. >> >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Freitag, 11. 
Juli 2014 07:19 >> To: Lindenmaier, Goetz; Lois Foltan >> Cc: hotspot-dev at openjdk.java.net >> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >> >> On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> foo.hpp as few includes as possible, to avoid cycles. >>> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >>> (either directly or via the platform files.) >>> * should include foo.platform.inline.hpp, so that shared files that >>> call functions from foo.platform.inline.hpp need not contain the >>> cascade of all the platform files. >>> If code in foo.platform.inline.hpp is only used in the platform files, >>> it is not necessary to have an umbrella header. >>> foo.platform.inline.hpp Should include what is needed in its code. >>> >>> For client code: >>> With this change I now removed all include cascades of platform files except for >>> those in the 'natural' headers. >>> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >>> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >>> headers, but include bar.[inline.]hpp.) >>> If it's 1:1, I don't care, as discussed before. >>> >>> Does this make sense? >> I find the overall structure somewhat counter-intuitive from an >> implementation versus interface perspective. But ... >> >> Thanks for the explanation. >> >> David >> >>> Best regards, >>> Goetz. >>> >>> >>> which of the above should #include which others, and which should be >>> #include'd by "client" code? 
>>> >>> Thanks, >>> David >>> >>>> Thanks, >>>> Lois >>>> >>>>> David >>>>> ----- >>>>> >>>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>>> (however this could pull in more code than needed since >>>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>>> >>>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>>> - change not related to clean up of umbrella headers, please >>>>>> explain/justify. >>>>>> >>>>>> src/share/vm/code/vmreg.hpp >>>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>>> vmreg.inline.hpp or will >>>>>> this introduce a cyclical inclusion situation, since >>>>>> vmreg.inline.hpp includes vmreg.hpp? >>>>>> >>>>>> src/share/vm/classfile/classFileStream.cpp >>>>>> - only has a copyright change in the file, no other changes >>>>>> present? >>>>>> >>>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>>> - incorrect copyright, no current year? >>>>>> >>>>>> src/share/vm/opto/ad.hpp >>>>>> - incorrect copyright date for a new file >>>>>> >>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>> - technically this new file does not need to include >>>>>> "asm/register.hpp" since >>>>>> vmreg.hpp already includes it >>>>>> >>>>>> My only lingering concern is the cyclical nature of >>>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>>> is not much difference between the two? >>>>>> >>>>>> Thanks, >>>>>> Lois >>>>>> >>>>>> >>>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I decided to clean up the remaining include cascades, too. 
>>>>>>> >>>>>>> This change introduces umbrella headers for the files in the cpu >>>>>>> subdirectories: >>>>>>> >>>>>>> src/share/vm/utilities/bytes.hpp >>>>>>> src/share/vm/opto/ad.hpp >>>>>>> src/share/vm/code/nativeInst.hpp >>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>>> >>>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>>> >>>>>>> Where possible, this change avoids includes in headers. >>>>>>> Eventually it adds a forward declaration. >>>>>>> >>>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>>> rather small. >>>>>>> >>>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>>> contains machine dependent, c2 specific register information. So I >>>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>>> includes in, >>>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>>> >>>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>>> file, I don't think this is performance relevant. But having them in >>>>>>> the header requirs to pull interp_masm.hpp into interpreter.hpp, and >>>>>>> thus all the assembler include headers into a lot of files. >>>>>>> >>>>>>> Please review and test this change. I please need a sponsor. >>>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>>> >>>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>>> linuxppc64, >>>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>>> aixppc64, ntamd64 >>>>>>> in opt, dbg and fastdbg versions. 
>>>>>>> >>>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>>> arrives in other >>>>>>> repos, it will work there, too. (I tested it together with the other >>>>>>> change >>>>>>> against jdk9/dev, too.) >>>>>>> >>>>>>> Best regards, >>>>>>> Goetz. >>>>>>> >>>>>>> PS: I also did all the Copyright adaptions ;) > From david.holmes at oracle.com Mon Jul 14 22:56:42 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 15 Jul 2014 08:56:42 +1000 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: <53C3BA6B.70600@oracle.com> References: <53BDFD5D.4050908@oracle.com> <53BF69DC.9010305@oracle.com> <53C2F8A5.7050006@oracle.com> <53C3BA6B.70600@oracle.com> Message-ID: <53C4602A.90505@oracle.com> All changes (hotspot and top-level) are now in the jdk9/hs-rt forest. David On 14/07/2014 9:09 PM, David Holmes wrote: > On 14/07/2014 7:44 PM, Volker Simonis wrote: >> On Sun, Jul 13, 2014 at 11:22 PM, David Holmes >> wrote: >>> Hi Volker, >>> >>> Just discovered you didn't quite pick up on all of my change - the >>> ARM entry >>> is to be deleted. Only the open platforms need to be listed: >>> >>> >>>>> # No SA Support for IA64 or zero >>>>> ADD_SA_BINARIES/ia64 = >>>>> ADD_SA_BINARIES/zero = >>> >> >> OK, but then I also remove IA64 as it isn't an open platform either: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v4/ > > Yes good point. ia64 should be eradicated from the build system :) > > I will put this altogether in the AM. > >> I've also added Vladimir as reviewer. > > Great > > Thanks, > David > > >> Thank you and best regards, >> Volker >> >> >>> Thanks, >>> David >>> >>> On 11/07/2014 9:54 PM, Volker Simonis wrote: >>>> >>>> On Fri, Jul 11, 2014 at 6:36 AM, David Holmes >>>> wrote: >>>>> >>>>> Hi Volker, >>>>> >>>>> >>>>> On 10/07/2014 8:12 PM, Volker Simonis wrote: >>>>>> >>>>>> >>>>>> Hi David, >>>>>> >>>>>> thanks for looking at this. 
Here's my new version of the change with >>>>>> some of your suggestions applied: >>>>>> >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 >>>>> >>>>> >>>>> >>>>> I have a simpler counter proposal (also default -> DEFAULT as that >>>>> seems >>>>> to >>>>> be the style): >>>>> >>>>> # Serviceability Binaries >>>>> >>>>> ADD_SA_BINARIES/DEFAULT = >>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) \ >>>>> >>>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>>> >>>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>>> ADD_SA_BINARIES/DEFAULT += >>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>>> else >>>>> ADD_SA_BINARIES/DEFAULT += >>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>>> endif >>>>> endif >>>>> >>>>> ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT) >>>>> >>>>> >>>>> # No SA Support for IA64 or zero >>>>> ADD_SA_BINARIES/ia64 = >>>>> ADD_SA_BINARIES/zero = >>>>> >>>>> --- >>>>> >>>>> The open logic only has to worry about open platforms. The custom >>>>> makefile >>>>> can accept the default or override as it desires. >>>>> >>>>> I thought about conditionally setting ADD_SA_BINARIES/$(HS_ARCH) >>>>> but the >>>>> above is simple and clear. >>>>> >>>>> Ok? >>>>> >>>> >>>> Perfect! >>>> >>>> Here's the new webrev with your proposed changes (tested on >>>> Linux/x86_64 and ppc64): >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v3 >>>> >>>> Thanks for sponsoring, >>>> Volker >>>> >>>>> I'll sponsor this one of course (so its safe for other reviewers to >>>>> jump >>>>> in >>>>> now :) ). 
>>>>> >>>>> Thanks, >>>>> David >>>>> >>>>> >>>>> >>>>>> Please find more information inline: >>>>>> >>>>>> On Thu, Jul 10, 2014 at 4:41 AM, David Holmes >>>>>> >>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> Hi Volker, >>>>>>> >>>>>>> Comments below where you might expect them :) >>>>>>> >>>>>>> >>>>>>> On 10/07/2014 3:36 AM, Volker Simonis wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> could someone please review and sponsor the following change which >>>>>>>> does some preliminary work for enabling the SA agent on >>>>>>>> Linux/PPC64: >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715/ >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8049715 >>>>>>>> >>>>>>>> Details: >>>>>>>> >>>>>>>> Currently, we don't support the SA agent on Linux/PPC64. This >>>>>>>> change >>>>>>>> fixes the buildsystem such that the SA libraries (i.e. libsaproc.so >>>>>>>> and sa-jdi.jar) will be correctly build and copied into the >>>>>>>> resulting >>>>>>>> jdk images. >>>>>>>> >>>>>>>> This change also contains some small fixes in sa-jdi.jar to >>>>>>>> correctly >>>>>>>> detect Linux/PPC64 as supported SA platform. (The actual >>>>>>>> implementation of the Linux/PPC64 specific code will be handled by >>>>>>>> "8049716 PPC64: Implement SA on Linux/PPC64" - >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8049716). >>>>>>>> >>>>>>>> One thing which require special attention are the changes in >>>>>>>> make/linux/makefiles/defs.make which may touch the closed ppc >>>>>>>> port. In >>>>>>>> my change I've simply added 'ppc' to the list of supported >>>>>>>> architectures, but this may break the 32-bit ppc build. I think the >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> It wouldn't break it but I was expecting to see ppc64 here. >>>>>>> >>>>>> >>>>>> The problem is that currently the decision if the SA agent will be >>>>>> build is based on the value of HS_ARCH. But HS_ARCH is the 'basic >>>>>> architecture' (i.e. 
x86 or sparc) so there's no easy way to choose >>>>>> the >>>>>> SA agent for only a 64-bit platform (like ppc64 or amd64) and not for >>>>>> its 32-bit counterpart (i.e. i386 or ppc). >>>>>> >>>>>> The only possibility with the current solution would be to only >>>>>> conditionally set ADD_SA_BINARIES/ppc if ARCH_DATA_MODEL is 64. But >>>>>> that wouldn't make the code nicer either:) >>>>>> >>>>>>> >>>>>>>> current code is to verbose and error prone anyway. It would be >>>>>>>> better >>>>>>>> to have something like: >>>>>>>> >>>>>>>> ADD_SA_BINARIES = >>>>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) >>>>>>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>>>>>> >>>>>>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>>>>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>>>>>> ADD_SA_BINARIES += >>>>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>>>>>> else >>>>>>>> ADD_SA_BINARIES += >>>>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>>>>>> endif >>>>>>>> endif >>>>>>>> >>>>>>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>>>>>>> ppc64)) >>>>>>>> EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> You wouldn't need/want the $(HS_ARCH) there. >>>>>>> >>>>>> >>>>>> Sorry, that was a type of course. It should read: >>>>>> >>>>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc >>>>>> sparcv9 >>>>>> ppc64)) >>>>>> EXPORT_LIST += $(ADD_SA_BINARIES) >>>>>> >>>>>> But that's not necessary now anymore (see new version below). >>>>>> >>>>>>> >>>>>>>> endif >>>>>>>> >>>>>>>> With this solution we only define ADD_SA_BINARIES once (because the >>>>>>>> various definitions for the different platforms are equal >>>>>>>> anyway). But >>>>>>>> again this may affect other closed ports so please advise which >>>>>>>> solution you'd prefer. >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> The above is problematic for customizations. 
An alternative would >>>>>>> be to >>>>>>> set >>>>>>> ADD_SA_BINARIES/default once with all the file names. Then: >>>>>>> >>>>>>> ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default) >>>>>>> # No SA Support for IA64 or zero >>>>>>> ifneq (, $(findstring $(ARCH), ia64, zero)) >>>>>>> ADD_SA_BINARIES/$(ARCH) = >>>>>>> >>>>>>> Each ARCH handled elsewhere would then still set >>>>>>> ADD_SA_BINARIES/$(ARCH) >>>>>>> if >>>>>>> needed. >>>>>>> >>>>>>> Does that seem reasonable? >>>>>>> >>>>>> >>>>>> The problem with using ARCH is that it is not "reliable" in the sens >>>>>> that its value differs for top-level and hotspot-only makes. See >>>>>> "8046471: Use OPENJDK_TARGET_CPU_ARCH instead of legacy value for >>>>>> hotspot ARCH" and my fix "8048232: Fix for 8046471 breaks PPC64 >>>>>> build". >>>>>> >>>>>> But using ADD_SA_BINARIES/default to save redundant lines is a good >>>>>> idea. I've updated the patch accordingly and think that the new >>>>>> solution is a good compromise between readability and not touching >>>>>> existing/closed part. >>>>>> >>>>>> Are you fine with the new version at >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 ? >>>>>> >>>>>>> >>>>>>>> Notice that this change also requires a tiny fix in the top-level >>>>>>>> repository which must be pushed AFTER this change. >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Can you elaborate please? >>>>>>> >>>>>> >>>>>> I've also submitted the corresponding top-level repository change for >>>>>> review which expects to find the SA agent libraries on Linux/ppc64 in >>>>>> order to copy them into the image directory: >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715_top_level/ >>>>>> >>>>>> But once that will be pushed, the build will fail if these HS changes >>>>>> will not be in place to actually build the libraries. 
>>>>>> >>>>>>> Thanks, >>>>>>> David >>>>>>> >>>>>>> >>>>>>>> Thank you and best regards, >>>>>>>> Volker >>>>>>>> >>>>>>> >>>>> >>> From david.holmes at oracle.com Mon Jul 14 23:00:19 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 15 Jul 2014 09:00:19 +1000 Subject: RFR: 8046765 : (s) makefiles should use parameterized $(CP) and $(MV) rather than explicit commands In-Reply-To: <278CDB58-3873-4187-A82F-29E69A8F3F49@oracle.com> References: <839864EB-04E0-45B0-8D31-25714E84E1A7@oracle.com> <21F2D2AC-6261-4E6C-BF7A-A3978BFAC9D8@oracle.com> <53C37C02.4060801@oracle.com> <278CDB58-3873-4187-A82F-29E69A8F3F49@oracle.com> Message-ID: <53C46103.4040805@oracle.com> Hi Mike, On 15/07/2014 3:11 AM, Mike Duigou wrote: > > On Jul 13 2014, at 23:43 , David Holmes wrote: > >> Hi Mike, >> >> The changes from cp to $(CP) look fine, as do the couple of mv changes. As I think I said first time round I'm not sure why cp and mv are being singled out here > > The static analysis tool we are using substitutes instrumented versions of mv and cp so that it can track files from their final location back to the source. It would seem that hashing would be a more reliable way to do this tracking but this is what the tool requires. Ah I see. >> (and I note that windows also does $(RM) but the others don't). > > $RM was already defined in build.make. I don't see any changes in my patch involving RM. I do note that the RM expansion isn't used in most makefiles (sa.make was the one I noticed). > >> >> There seem to be a lot of spurious changes in the patch file that don't show up in the cdiffs. I assume whitespace has been added, which jcheck will reject (whitespace can't have been removed as jcheck would not have permitted it in the first place). > > It is actually whitespace being removed as a consequence of my text editor trimming trailing whitespace. jcheck whitespace checks only specific file types (java|c|h|cpp|hpp) I didn't realize that. 
I suppose the tab checking in makefiles would be a bit tricky :) >> >> Also please update all the Oracle copyright lines as needed (hotspot policy). > > I shall do so. > > With the copyright changes are we good to go? Absolutely from my perspective. But you need a second reviewer if not already present. Thanks, David >> >> Thanks, >> David >> >> On 13/07/2014 3:19 AM, Mike Duigou wrote: >>> Hello all; >>> >>> After further testing some additional changes were found to be needed to support building hotspot without configure support. There are a small number of additional changes in various buildtree.make and the windows build.make files to ensure that locations are defined for CP and MV commands. If there's a more appropriate location for these definitions please suggest it. >>> >>> The patch is otherwise unchanged from the month ago version except for line number offset changes owing to other intervening changesets. >>> >>> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046765 >>> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046765/2/webrev/ >>> >>> Thanks! >>> >>> Mike >>> >>> On Jun 13 2014, at 12:43 , Mike Duigou wrote: >>> >>>> Hello all; >>>> >>>> This is a small changeset to the hotspot makefiles to have them use expansions of the $(CP) and $(MV) variables rather than explicit commands for all operations involving files in the deliverables. This changes is needed by static code analysis software which provides replacement cp and mv commands that track error reports in executables back to the source from which they are generated. >>>> >>>> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046765 >>>> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046765/0/webrev/ >>>> >>>> I've checked to make sure that patch doesn't change the build output on linux x64 and am currently checking on other platforms. >>>> >>>> It is probably easier to review this change by looking at the patch than by looking at the file diffs. 
>>>> >>>> Mike >>>> >>> > From coleen.phillimore at oracle.com Tue Jul 15 00:05:50 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 14 Jul 2014 20:05:50 -0400 Subject: RFR 8004128: NPG: remove stackwalking in Threads::gc_prologue and gc_epilogue code Message-ID: <53C4705E.7060407@oracle.com> Summary: remove bcx and mdx handling. We no longer have to convert bytecode pointers or method data pointers to indices for GC since Metadata aren't moved. Tested with nsk.quick.testlist, jck tests, JPRT. Most of this is renaming bcx to bcp and mdx to mdp. The content changes are in frame.cpp. StefanK implemented 90% of these changes. open webrev at http://cr.openjdk.java.net/~coleenp/8004128/ bug link https://bugs.openjdk.java.net/browse/JDK-8004128 Thanks, Coleen From vladimir.kozlov at oracle.com Tue Jul 15 00:11:54 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 14 Jul 2014 17:11:54 -0700 Subject: RFR: 8046765 : (s) makefiles should use parameterized $(CP) and $(MV) rather than explicit commands In-Reply-To: <53C46103.4040805@oracle.com> References: <839864EB-04E0-45B0-8D31-25714E84E1A7@oracle.com> <21F2D2AC-6261-4E6C-BF7A-A3978BFAC9D8@oracle.com> <53C37C02.4060801@oracle.com> <278CDB58-3873-4187-A82F-29E69A8F3F49@oracle.com> <53C46103.4040805@oracle.com> Message-ID: <53C471CA.7080104@oracle.com> Hi, Mike Changes looks good to me too. Thank you for cleaning up trailing spaces. I verified that my local Hotspot build on Solaris works with your patch. Thanks, Vladimir On 7/14/14 4:00 PM, David Holmes wrote: > Hi Mike, > > On 15/07/2014 3:11 AM, Mike Duigou wrote: >> >> On Jul 13 2014, at 23:43 , David Holmes wrote: >> >>> Hi Mike, >>> >>> The changes from cp to $(CP) look fine, as do the couple of mv >>> changes. 
As I think I said first time round I'm not sure why cp and >>> mv are being singled out here >> >> The static analysis tool we are using substitutes instrumented >> versions of mv and cp so that it can track files from their final >> location back to the source. It would seem that hashing would be a >> more reliable way to do this tracking but this is what the tool requires. > > Ah I see. > >>> (and I note that windows also does $(RM) but the others don't). >> >> $RM was already defined in build.make. I don't see any changes in my >> patch involving RM. I do note that the RM expansion isn't used in most >> makefiles (sa.make was the one I noticed). >> >>> >>> There seem to be a lot of spurious changes in the patch file that >>> don't show up in the cdiffs. I assume whitespace has been added, >>> which jcheck will reject (whitespace can't have been removed as >>> jcheck would not have permitted it in the first place). >> >> It is actually whitespace being removed as a consequence of my text >> editor trimming trailing whitespace. jcheck whitespace checks only >> specific file types (java|c|h|cpp|hpp) > > I didn't realize that. I suppose the tab checking in makefiles would be > a bit tricky :) > >>> >>> Also please update all the Oracle copyright lines as needed (hotspot >>> policy). >> >> I shall do so. >> >> With the copyright changes are we good to go? > > Absolutely from my perspective. But you need a second reviewer if not > already present. > > Thanks, > David > > > > > >>> >>> Thanks, >>> David >>> >>> On 13/07/2014 3:19 AM, Mike Duigou wrote: >>>> Hello all; >>>> >>>> After further testing some additional changes were found to be >>>> needed to support building hotspot without configure support. There >>>> are a small number of additional changes in various buildtree.make >>>> and the windows build.make files to ensure that locations are >>>> defined for CP and MV commands. If there's a more appropriate >>>> location for these definitions please suggest it. 
>>>> >>>> The patch is otherwise unchanged from the month ago version except >>>> for line number offset changes owing to other intervening changesets. >>>> >>>> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046765 >>>> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046765/2/webrev/ >>>> >>>> Thanks! >>>> >>>> Mike >>>> >>>> On Jun 13 2014, at 12:43 , Mike Duigou wrote: >>>> >>>>> Hello all; >>>>> >>>>> This is a small changeset to the hotspot makefiles to have them use >>>>> expansions of the $(CP) and $(MV) variables rather than explicit >>>>> commands for all operations involving files in the deliverables. >>>>> This changes is needed by static code analysis software which >>>>> provides replacement cp and mv commands that track error reports in >>>>> executables back to the source from which they are generated. >>>>> >>>>> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046765 >>>>> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046765/0/webrev/ >>>>> >>>>> I've checked to make sure that patch doesn't change the build >>>>> output on linux x64 and am currently checking on other platforms. >>>>> >>>>> It is probably easier to review this change by looking at the patch >>>>> than by looking at the file diffs. >>>>> >>>>> Mike >>>>> >>>> >> From vladimir.kozlov at oracle.com Tue Jul 15 01:21:16 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 14 Jul 2014 18:21:16 -0700 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C3C584.7070008@oracle.com> References: <53C3C584.7070008@oracle.com> Message-ID: <53C4820C.5000300@oracle.com> Impressive work, Tobias! So before the permgen removal embedded method* were oops and they were processed in relocInfo::oop_type loop. May be instead of specializing opt_virtual_call_type and static_call_type call site you can simple add a loop for relocInfo::metadata_type (similar to oop_type loop)? 
Thanks, Vladimir On 7/14/14 4:56 AM, Tobias Hartmann wrote: > Hi, > > please review the following patch for JDK-8029443. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 > Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ > > *Problem* > After the tracing/marking phase of GC, nmethod::do_unloading(..) checks > if a nmethod can be unloaded because it contains dead oops. If class > unloading occurred we additionally clear all ICs where the cached > metadata refers to an unloaded klass or method. If the nmethod is not > unloaded, nmethod::verify_metadata_loaders(..) finally checks if all > metadata is alive. The assert in CheckClass::check_class fails because > the nmethod contains Method* metadata corresponding to a dead Klass. > The Method* belongs to a to-interpreter stub [1] of an optimized > compiled IC. Normally we clear those stubs prior to verification to > avoid dangling references to Method* [2], but only if the stub is not in > use, i.e. if the IC is not in to-interpreted mode. In this case the > to-interpreter stub may be executed and hand a stale Method* to the > interpreter. > > *Solution* > The implementation of nmethod::do_unloading(..) is changed to clean > compiled ICs and compiled static calls if they call into a > to-interpreter stub that references dead Method* metadata. > > The patch was affected by the G1 class unloading changes (JDK-8048248) > because the method nmethod::do_unloading_parallel(..) was added. I > adapted the implementation as well. > > *Testing* > Failing test (runThese) > JPRT > > Thanks, > Tobias > > [1] see CompiledStaticCall::emit_to_interp_stub(..)
> [2] see nmethod::verify_metadata_loaders(..), > static_stub_reloc()->clear_inline_cache() clears the stub From goetz.lindenmaier at sap.com Tue Jul 15 06:34:25 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 15 Jul 2014 06:34:25 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53C45912.4050905@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> <53C45912.4050905@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDA517@DEWDFEMB12A.global.corp.sap> Hi David, functions that are completely self-contained can go into the .hpp. Functions that call another inline function defined in another header must go to .inline.hpp, because otherwise there could be include cycles that the C++ compilers can't deal with. Best regards, Goetz. -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Dienstag, 15. Juli 2014 00:26 To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories On 14/07/2014 10:37 PM, Lindenmaier, Goetz wrote: > Hi Coleen, > > Thanks for sponsoring this! > > bytes, ad, nativeInst and vmreg.inline were used quite often > in shared files, so it definitely makes sense for these to have > a shared header. > vm_version and register had an umbrella header, but that > was not used everywhere, so I cleaned it up. > That left adGlobals, jniTypes and interp_masm which > are only used a few times.
I did these so that all files > are treated similarly. > In the end, I didn't need a header for all, as they were > not really needed in the shared files, or I found > another good place, as for adGlobals. > > I added you and David H. as reviewers to the webrev: > http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ > I hope this is ok with you, David. It might be somewhat premature :) I'm somewhat confused by the rules for headers and includes and inlines. I now see with this change a bunch of inline function definitions being moved out of the .inline.hpp file and into the .hpp file. Why? What criterion determines whether an inline function goes into the .hpp versus the .inline.hpp file ??? Thanks, David > Thanks, > Goetz. > > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore > Sent: Montag, 14. Juli 2014 14:09 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > > I think this looks like a good cleanup. I can sponsor it and make the > closed changes also again. I initially proposed the #include cascades > because the alternative at the time was to blindly create a dispatching > header file for each target dependent file. I wanted to see the > #includes cleaned up instead and target dependent files included > directly. This adds 5 dispatching header files, which is fine. I > think the case of interp_masm.hpp is interesting though, because the > dispatching file is included in cpu dependent files, which could > directly include the cpu version. But there are 3 platform independent > files that include it. I'm not going to object though because I'm > grateful for this cleanup and I guess it's a matter of opinion which is > best to include in the cpu dependent directories. 
> > Thanks, > Coleen > > > On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> David, can I consider this a review? >> >> And I please need a sponsor for this change. Could somebody >> please help here? Probably some closed adaptions are needed. >> It applies to any repo as my other change traveled around >> by now. >> >> Thanks and best regards, >> Goetz. >> >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Freitag, 11. Juli 2014 07:19 >> To: Lindenmaier, Goetz; Lois Foltan >> Cc: hotspot-dev at openjdk.java.net >> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >> >> On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> foo.hpp as few includes as possible, to avoid cycles. >>> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >>> (either directly or via the platform files.) >>> * should include foo.platform.inline.hpp, so that shared files that >>> call functions from foo.platform.inline.hpp need not contain the >>> cascade of all the platform files. >>> If code in foo.platform.inline.hpp is only used in the platform files, >>> it is not necessary to have an umbrella header. >>> foo.platform.inline.hpp Should include what is needed in its code. >>> >>> For client code: >>> With this change I now removed all include cascades of platform files except for >>> those in the 'natural' headers. >>> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >>> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >>> headers, but include bar.[inline.]hpp.) >>> If it's 1:1, I don't care, as discussed before. >>> >>> Does this make sense? >> I find the overall structure somewhat counter-intuitive from an >> implementation versus interface perspective. But ... >> >> Thanks for the explanation. >> >> David >> >>> Best regards, >>> Goetz. 
>>> >>> >>> which of the above should #include which others, and which should be >>> #include'd by "client" code? >>> >>> Thanks, >>> David >>> >>>> Thanks, >>>> Lois >>>> >>>>> David >>>>> ----- >>>>> >>>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>>> (however this could pull in more code than needed since >>>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>>> >>>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>>> - change not related to clean up of umbrella headers, please >>>>>> explain/justify. >>>>>> >>>>>> src/share/vm/code/vmreg.hpp >>>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>>> vmreg.inline.hpp or will >>>>>> this introduce a cyclical inclusion situation, since >>>>>> vmreg.inline.hpp includes vmreg.hpp? >>>>>> >>>>>> src/share/vm/classfile/classFileStream.cpp >>>>>> - only has a copyright change in the file, no other changes >>>>>> present? >>>>>> >>>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>>> - incorrect copyright, no current year? >>>>>> >>>>>> src/share/vm/opto/ad.hpp >>>>>> - incorrect copyright date for a new file >>>>>> >>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>> - technically this new file does not need to include >>>>>> "asm/register.hpp" since >>>>>> vmreg.hpp already includes it >>>>>> >>>>>> My only lingering concern is the cyclical nature of >>>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>>> is not much difference between the two? >>>>>> >>>>>> Thanks, >>>>>> Lois >>>>>> >>>>>> >>>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I decided to clean up the remaining include cascades, too. 
>>>>>>> >>>>>>> This change introduces umbrella headers for the files in the cpu >>>>>>> subdirectories: >>>>>>> >>>>>>> src/share/vm/utilities/bytes.hpp >>>>>>> src/share/vm/opto/ad.hpp >>>>>>> src/share/vm/code/nativeInst.hpp >>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>>> >>>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>>> >>>>>>> Where possible, this change avoids includes in headers. >>>>>>> In some cases it adds a forward declaration. >>>>>>> >>>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>>> rather small. >>>>>>> >>>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>>> contains machine dependent, c2 specific register information. So I >>>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>>> includes in, >>>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>>> >>>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>>> file, I don't think this is performance relevant. But having them in >>>>>>> the header requires to pull interp_masm.hpp into interpreter.hpp, and >>>>>>> thus all the assembler include headers into a lot of files. >>>>>>> >>>>>>> Please review and test this change. I please need a sponsor. >>>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>>> >>>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>>> linuxppc64, >>>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>>> aixppc64, ntamd64 >>>>>>> in opt, dbg and fastdbg versions. 
>>>>>>> >>>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>>> arrives in other >>>>>>> repos, it will work there, too. (I tested it together with the other >>>>>>> change >>>>>>> against jdk9/dev, too.) >>>>>>> >>>>>>> Best regards, >>>>>>> Goetz. >>>>>>> >>>>>>> PS: I also did all the Copyright adaptions ;) > From david.holmes at oracle.com Tue Jul 15 07:20:26 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 15 Jul 2014 17:20:26 +1000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDA517@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> <53C45912.4050905@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA517@DEWDFEMB12A.global.corp.sap> Message-ID: <53C4D63A.5060802@oracle.com> On 15/07/2014 4:34 PM, Lindenmaier, Goetz wrote: > Hi David, > > functions that are completely self contained can go into the .hpp. > Functions that call another inline function defined in an other header > must go to .inline.hpp as else there could be cycles the c++ compilers can't > deal with. A quick survey of the shared *.inline.hpp files shows many don't seem to fit this definition. Are templates also something that needs special handling? I'm not saying anything is wrong with your changes, just trying to understand what the rules are. Thanks, David > Best regards, > Goetz. > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 15. 
Juli 2014 00:26 > To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > On 14/07/2014 10:37 PM, Lindenmaier, Goetz wrote: >> Hi Coleen, >> >> Thanks for sponsoring this! >> >> bytes, ad, nativeInst and vmreg.inline were used quite often >> in shared files, so it definitely makes sense for these to have >> a shared header. >> vm_version and register had an umbrella header, but that >> was not used everywhere, so I cleaned it up. >> That left adGlobals, jniTypes and interp_masm which >> are only used a few time. I did these so that all files >> are treated similarly. >> In the end, I didn't need a header for all, as they were >> not really needed in the shared files, or I found >> another good place, as for adGlobals. >> >> I added you and David H. as reviewer to the webrev: >> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >> I hope this is ok with you, David. > > It might be somewhat premature :) I somewhat confused by the rules for > headers and includes and inlines. I now see with this change a bunch of > inline function definitions being moved out of the .inline.hpp file and > into the .hpp file. Why? What criteria determines if an inline function > goes into the .hpp versus the .inline.hpp file ??? > > Thanks, > David > >> Thanks, >> Goetz. >> >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore >> Sent: Montag, 14. Juli 2014 14:09 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >> >> >> I think this looks like a good cleanup. I can sponsor it and make the >> closed changes also again. 
I initially proposed the #include cascades >> because the alternative at the time was to blindly create a dispatching >> header file for each target dependent file. I wanted to see the >> #includes cleaned up instead and target dependent files included >> directly. This adds 5 dispatching header files, which is fine. I >> think the case of interp_masm.hpp is interesting though, because the >> dispatching file is included in cpu dependent files, which could >> directly include the cpu version. But there are 3 platform independent >> files that include it. I'm not going to object though because I'm >> grateful for this cleanup and I guess it's a matter of opinion which is >> best to include in the cpu dependent directories. >> >> Thanks, >> Coleen >> >> >> On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> David, can I consider this a review? >>> >>> And I please need a sponsor for this change. Could somebody >>> please help here? Probably some closed adaptions are needed. >>> It applies to any repo as my other change traveled around >>> by now. >>> >>> Thanks and best regards, >>> Goetz. >>> >>> >>> -----Original Message----- >>> From: David Holmes [mailto:david.holmes at oracle.com] >>> Sent: Freitag, 11. Juli 2014 07:19 >>> To: Lindenmaier, Goetz; Lois Foltan >>> Cc: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>> >>> On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >>>> Hi, >>>> >>>> foo.hpp as few includes as possible, to avoid cycles. >>>> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >>>> (either directly or via the platform files.) >>>> * should include foo.platform.inline.hpp, so that shared files that >>>> call functions from foo.platform.inline.hpp need not contain the >>>> cascade of all the platform files. 
>>>> If code in foo.platform.inline.hpp is only used in the platform files, >>>> it is not necessary to have an umbrella header. >>>> foo.platform.inline.hpp Should include what is needed in its code. >>>> >>>> For client code: >>>> With this change I now removed all include cascades of platform files except for >>>> those in the 'natural' headers. >>>> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >>>> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >>>> headers, but include bar.[inline.]hpp.) >>>> If it's 1:1, I don't care, as discussed before. >>>> >>>> Does this make sense? >>> I find the overall structure somewhat counter-intuitive from an >>> implementation versus interface perspective. But ... >>> >>> Thanks for the explanation. >>> >>> David >>> >>>> Best regards, >>>> Goetz. >>>> >>>> >>>> which of the above should #include which others, and which should be >>>> #include'd by "client" code? >>>> >>>> Thanks, >>>> David >>>> >>>>> Thanks, >>>>> Lois >>>>> >>>>>> David >>>>>> ----- >>>>>> >>>>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>>>> (however this could pull in more code than needed since >>>>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>>>> >>>>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>>>> - change not related to clean up of umbrella headers, please >>>>>>> explain/justify. >>>>>>> >>>>>>> src/share/vm/code/vmreg.hpp >>>>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>>>> vmreg.inline.hpp or will >>>>>>> this introduce a cyclical inclusion situation, since >>>>>>> vmreg.inline.hpp includes vmreg.hpp? >>>>>>> >>>>>>> src/share/vm/classfile/classFileStream.cpp >>>>>>> - only has a copyright change in the file, no other changes >>>>>>> present? 
>>>>>>> >>>>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>>>> - incorrect copyright, no current year? >>>>>>> >>>>>>> src/share/vm/opto/ad.hpp >>>>>>> - incorrect copyright date for a new file >>>>>>> >>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>> - technically this new file does not need to include >>>>>>> "asm/register.hpp" since >>>>>>> vmreg.hpp already includes it >>>>>>> >>>>>>> My only lingering concern is the cyclical nature of >>>>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>>>> is not much difference between the two? >>>>>>> >>>>>>> Thanks, >>>>>>> Lois >>>>>>> >>>>>>> >>>>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> I decided to clean up the remaining include cascades, too. >>>>>>>> >>>>>>>> This change introduces umbrella headers for the files in the cpu >>>>>>>> subdirectories: >>>>>>>> >>>>>>>> src/share/vm/utilities/bytes.hpp >>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>> src/share/vm/code/nativeInst.hpp >>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>>>> >>>>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>>>> >>>>>>>> Where possible, this change avoids includes in headers. >>>>>>>> Eventually it adds a forward declaration. >>>>>>>> >>>>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>>>> rather small. >>>>>>>> >>>>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>>>> contains machine dependent, c2 specific register information. 
So I >>>>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>>>> includes in, >>>>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>>>> >>>>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>>>> file, I don't think this is performance relevant. But having them in >>>>>>>> the header requirs to pull interp_masm.hpp into interpreter.hpp, and >>>>>>>> thus all the assembler include headers into a lot of files. >>>>>>>> >>>>>>>> Please review and test this change. I please need a sponsor. >>>>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>>>> >>>>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>>>> linuxppc64, >>>>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>>>> aixppc64, ntamd64 >>>>>>>> in opt, dbg and fastdbg versions. >>>>>>>> >>>>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>>>> arrives in other >>>>>>>> repos, it will work there, too. (I tested it together with the other >>>>>>>> change >>>>>>>> against jdk9/dev, too.) >>>>>>>> >>>>>>>> Best regards, >>>>>>>> Goetz. >>>>>>>> >>>>>>>> PS: I also did all the Copyright adaptions ;) >> From volker.simonis at gmail.com Tue Jul 15 07:37:12 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 15 Jul 2014 09:37:12 +0200 Subject: RFR(S): 8049715: PPC64: First steps to enable SA on Linux/PPC64 In-Reply-To: <53C4602A.90505@oracle.com> References: <53BDFD5D.4050908@oracle.com> <53BF69DC.9010305@oracle.com> <53C2F8A5.7050006@oracle.com> <53C3BA6B.70600@oracle.com> <53C4602A.90505@oracle.com> Message-ID: Great! Thanks a lot, Volker On Tue, Jul 15, 2014 at 12:56 AM, David Holmes wrote: > All changes (hotspot and top-level) are now in the jdk9/hs-rt forest. 
> > David > > > On 14/07/2014 9:09 PM, David Holmes wrote: >> >> On 14/07/2014 7:44 PM, Volker Simonis wrote: >>> >>> On Sun, Jul 13, 2014 at 11:22 PM, David Holmes >>> wrote: >>>> >>>> Hi Volker, >>>> >>>> Just discovered you didn't quite pick up on all of my change - the >>>> ARM entry >>>> is to be deleted. Only the open platforms need to be listed: >>>> >>>> >>>>>> # No SA Support for IA64 or zero >>>>>> ADD_SA_BINARIES/ia64 = >>>>>> ADD_SA_BINARIES/zero = >>>> >>>> >>> >>> OK, but then I also remove IA64 as it isn't an open platform either: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v4/ >> >> >> Yes good point. ia64 should be eradicated from the build system :) >> >> I will put this altogether in the AM. >> >>> I've also added Vladimir as reviewer. >> >> >> Great >> >> Thanks, >> David >> >> >>> Thank you and best regards, >>> Volker >>> >>> >>>> Thanks, >>>> David >>>> >>>> On 11/07/2014 9:54 PM, Volker Simonis wrote: >>>>> >>>>> >>>>> On Fri, Jul 11, 2014 at 6:36 AM, David Holmes >>>>> wrote: >>>>>> >>>>>> >>>>>> Hi Volker, >>>>>> >>>>>> >>>>>> On 10/07/2014 8:12 PM, Volker Simonis wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi David, >>>>>>> >>>>>>> thanks for looking at this. 
Here's my new version of the change with >>>>>>> some of your suggestions applied: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> I have a simpler counter proposal (also default -> DEFAULT as that >>>>>> seems >>>>>> to >>>>>> be the style): >>>>>> >>>>>> # Serviceability Binaries >>>>>> >>>>>> ADD_SA_BINARIES/DEFAULT = >>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) \ >>>>>> >>>>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>>>> >>>>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>>>> ADD_SA_BINARIES/DEFAULT += >>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>>>> else >>>>>> ADD_SA_BINARIES/DEFAULT += >>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>>>> endif >>>>>> endif >>>>>> >>>>>> ADD_SA_BINARIES/$(HS_ARCH) = $(ADD_SA_BINARIES/DEFAULT) >>>>>> >>>>>> >>>>>> # No SA Support for IA64 or zero >>>>>> ADD_SA_BINARIES/ia64 = >>>>>> ADD_SA_BINARIES/zero = >>>>>> >>>>>> --- >>>>>> >>>>>> The open logic only has to worry about open platforms. The custom >>>>>> makefile >>>>>> can accept the default or override as it desires. >>>>>> >>>>>> I thought about conditionally setting ADD_SA_BINARIES/$(HS_ARCH) >>>>>> but the >>>>>> above is simple and clear. >>>>>> >>>>>> Ok? >>>>>> >>>>> >>>>> Perfect! >>>>> >>>>> Here's the new webrev with your proposed changes (tested on >>>>> Linux/x86_64 and ppc64): >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v3 >>>>> >>>>> Thanks for sponsoring, >>>>> Volker >>>>> >>>>>> I'll sponsor this one of course (so its safe for other reviewers to >>>>>> jump >>>>>> in >>>>>> now :) ). 
>>>>>> >>>>>> Thanks, >>>>>> David >>>>>> >>>>>> >>>>>> >>>>>>> Please find more information inline: >>>>>>> >>>>>>> On Thu, Jul 10, 2014 at 4:41 AM, David Holmes >>>>>>> >>>>>>> wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Hi Volker, >>>>>>>> >>>>>>>> Comments below where you might expect them :) >>>>>>>> >>>>>>>> >>>>>>>> On 10/07/2014 3:36 AM, Volker Simonis wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> could someone please review and sponsor the following change which >>>>>>>>> does some preliminary work for enabling the SA agent on >>>>>>>>> Linux/PPC64: >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715/ >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8049715 >>>>>>>>> >>>>>>>>> Details: >>>>>>>>> >>>>>>>>> Currently, we don't support the SA agent on Linux/PPC64. This >>>>>>>>> change >>>>>>>>> fixes the buildsystem such that the SA libraries (i.e. libsaproc.so >>>>>>>>> and sa-jdi.jar) will be correctly build and copied into the >>>>>>>>> resulting >>>>>>>>> jdk images. >>>>>>>>> >>>>>>>>> This change also contains some small fixes in sa-jdi.jar to >>>>>>>>> correctly >>>>>>>>> detect Linux/PPC64 as supported SA platform. (The actual >>>>>>>>> implementation of the Linux/PPC64 specific code will be handled by >>>>>>>>> "8049716 PPC64: Implement SA on Linux/PPC64" - >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8049716). >>>>>>>>> >>>>>>>>> One thing which require special attention are the changes in >>>>>>>>> make/linux/makefiles/defs.make which may touch the closed ppc >>>>>>>>> port. In >>>>>>>>> my change I've simply added 'ppc' to the list of supported >>>>>>>>> architectures, but this may break the 32-bit ppc build. I think the >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> It wouldn't break it but I was expecting to see ppc64 here. >>>>>>>> >>>>>>> >>>>>>> The problem is that currently the decision if the SA agent will be >>>>>>> build is based on the value of HS_ARCH. 
But HS_ARCH is the 'basic >>>>>>> architecture' (i.e. x86 or sparc) so there's no easy way to choose >>>>>>> the >>>>>>> SA agent for only a 64-bit platform (like ppc64 or amd64) and not for >>>>>>> its 32-bit counterpart (i.e. i386 or ppc). >>>>>>> >>>>>>> The only possibility with the current solution would be to only >>>>>>> conditionally set ADD_SA_BINARIES/ppc if ARCH_DATA_MODEL is 64. But >>>>>>> that wouldn't make the code nicer either:) >>>>>>> >>>>>>>> >>>>>>>>> current code is too verbose and error-prone anyway. It would be >>>>>>>>> better >>>>>>>>> to have something like: >>>>>>>>> >>>>>>>>> ADD_SA_BINARIES = >>>>>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.$(LIBRARY_SUFFIX) >>>>>>>>> $(EXPORT_LIB_DIR)/sa-jdi.jar >>>>>>>>> >>>>>>>>> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) >>>>>>>>> ifeq ($(ZIP_DEBUGINFO_FILES),1) >>>>>>>>> ADD_SA_BINARIES += >>>>>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.diz >>>>>>>>> else >>>>>>>>> ADD_SA_BINARIES += >>>>>>>>> $(EXPORT_JRE_LIB_ARCH_DIR)/libsaproc.debuginfo >>>>>>>>> endif >>>>>>>>> endif >>>>>>>>> >>>>>>>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc sparcv9 >>>>>>>>> ppc64)) >>>>>>>>> EXPORT_LIST += $(ADD_SA_BINARIES/$(HS_ARCH)) >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> You wouldn't need/want the $(HS_ARCH) there. >>>>>>>> >>>>>>> >>>>>>> Sorry, that was a typo of course. It should read: >>>>>>> >>>>>>> ifneq (,$(findstring $(ARCH), amd64 x86_64 i686 i586 sparc >>>>>>> sparcv9 >>>>>>> ppc64)) >>>>>>> EXPORT_LIST += $(ADD_SA_BINARIES) >>>>>>> >>>>>>> But that's not necessary now anymore (see new version below). >>>>>>> >>>>>>>> >>>>>>>>> endif >>>>>>>>> >>>>>>>>> With this solution we only define ADD_SA_BINARIES once (because the >>>>>>>>> various definitions for the different platforms are equal >>>>>>>>> anyway). But >>>>>>>>> again this may affect other closed ports so please advise which >>>>>>>>> solution you'd prefer. 
>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> The above is problematic for customizations. An alternative would >>>>>>>> be to >>>>>>>> set >>>>>>>> ADD_SA_BINARIES/default once with all the file names. Then: >>>>>>>> >>>>>>>> ADD_SA_BINARIES/$(ARCH) = $(ADD_SA_BINARIES/default) >>>>>>>> # No SA Support for IA64 or zero >>>>>>>> ifneq (, $(findstring $(ARCH), ia64, zero)) >>>>>>>> ADD_SA_BINARIES/$(ARCH) = >>>>>>>> >>>>>>>> Each ARCH handled elsewhere would then still set >>>>>>>> ADD_SA_BINARIES/$(ARCH) >>>>>>>> if >>>>>>>> needed. >>>>>>>> >>>>>>>> Does that seem reasonable? >>>>>>>> >>>>>>> >>>>>>> The problem with using ARCH is that it is not "reliable" in the sense >>>>>>> that its value differs for top-level and hotspot-only makes. See >>>>>>> "8046471: Use OPENJDK_TARGET_CPU_ARCH instead of legacy value for >>>>>>> hotspot ARCH" and my fix "8048232: Fix for 8046471 breaks PPC64 >>>>>>> build". >>>>>>> >>>>>>> But using ADD_SA_BINARIES/default to save redundant lines is a good >>>>>>> idea. I've updated the patch accordingly and think that the new >>>>>>> solution is a good compromise between readability and not touching >>>>>>> existing/closed parts. >>>>>>> >>>>>>> Are you fine with the new version at >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715.v2 ? >>>>>>> >>>>>>>> >>>>>>>>> Notice that this change also requires a tiny fix in the top-level >>>>>>>>> repository which must be pushed AFTER this change. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Can you elaborate please? >>>>>>>> >>>>>>> >>>>>>> I've also submitted the corresponding top-level repository change for >>>>>>> review which expects to find the SA agent libraries on Linux/ppc64 in >>>>>>> order to copy them into the image directory: >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8049715_top_level/ >>>>>>> >>>>>>> But once that is pushed, the build will fail if these HS changes >>>>>>> are not in place to actually build the libraries. 
>>>>>>> >>>>>>>> Thanks, >>>>>>>> David >>>>>>>> >>>>>>>> >>>>>>>>> Thank you and best regards, >>>>>>>>> Volker >>>>>>>>> >>>>>>>> >>>>>> >>>> > From tobias.hartmann at oracle.com Tue Jul 15 07:48:08 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 15 Jul 2014 09:48:08 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C4820C.5000300@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> Message-ID: <53C4DCB8.5020705@oracle.com> Hi Vladimir, > Impressive work, Tobias! Thanks! Took me a while to figure out what's happening. > So before the permgen removal embedded method* were oops and they were > processed in relocInfo::oop_type loop. Okay, good to know. That explains why the terms oops and metadata are used interchangeably at some points in the code. > May be instead of specializing opt_virtual_call_type and > static_call_type call site you can simple add a loop for > relocInfo::metadata_type (similar to oop_type loop)? The problem with iterating over relocInfo::metadata_type is that we don't know to which stub, i.e., to which IC the Method* pointer belongs. Since we don't want to unload the entire method but only clear the corresponding IC, we need this information. Thanks, Tobias > > Thanks, > Vladimir > > On 7/14/14 4:56 AM, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch for JDK-8029443. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >> >> *Problem* >> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks >> if a nmethod can be unloaded because it contains dead oops. If class >> unloading occurred we additionally clear all ICs where the cached >> metadata refers to an unloaded klass or method. If the nmethod is not >> unloaded, nmethod::verify_metadata_loaders(..) 
finally checks if all >> metadata is alive. The assert in CheckClass::check_class fails because >> the nmethod contains Method* metadata corresponding to a dead Klass. >> The Method* belongs to a to-interpreter stub [1] of an optimized >> compiled IC. Normally we clear those stubs prior to verification to >> avoid dangling references to Method* [2], but only if the stub is not in >> use, i.e. if the IC is not in to-interpreted mode. In this case the >> to-interpreter stub may be executed and hand a stale Method* to the >> interpreter. >> >> *Solution >> *The implementation of nmethod::do_unloading(..) is changed to clean >> compiled ICs and compiled static calls if they call into a >> to-interpreter stub that references dead Method* metadata. >> >> The patch was affected by the G1 class unloading changes (JDK-8048248) >> because the method nmethod::do_unloading_parallel(..) was added. I >> adapted the implementation as well. >> * >> Testing >> *Failing test (runThese) >> JPRT >> >> Thanks, >> Tobias >> >> [1] see CompiledStaticCall::emit_to_interp_stub(..) >> [2] see nmethod::verify_metadata_loaders(..), >> static_stub_reloc()->clear_inline_cache() clears the stub From volker.simonis at gmail.com Tue Jul 15 08:48:16 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 15 Jul 2014 10:48:16 +0200 Subject: RFR(XS): 8049441: PPC64: Don't use StubCodeMarks for zero-length stubs In-Reply-To: <53C43F1E.2030805@oracle.com> References: <53BC1237.2060006@oracle.com> <53C43F1E.2030805@oracle.com> Message-ID: On Mon, Jul 14, 2014 at 10:35 PM, serguei.spitsyn at oracle.com wrote: > Hi Volker, > > It looks good in general. > > But I don't understand all the details. 
> For instance, your email description of the fix tells that the event is > posted by: > > RuntimeStub::new_runtime_stub() -> CodeBlob::trace_new_stub() -> > JvmtiExport::post_dynamic_code_generated() > > I see the new_runtime_stub() call in the generate_throw_exception() but > there is no such call > in the generate_icache_flush() and generate_handler_for_unsafe_access() . > > Probably, the StubCodeMark just needs to be removed there. > Could you, please, explain this a little bit? > Hi Serguei, Thank you for looking at my change. I tried to address your questions in my initial mail but maybe I wasn't clear enough: - in generate_icache_flush() and generate_verify_oop() we DO NOT GENERATE any stub code. We don't use dynamically generated stubs on ppc64 for flushing the icache or verifying oops but call C-functions instead. So there's no need to generate post_dynamic_code_generated() events for them and also no need for a StubCodeMark. - for generate_throw_exception() we dynamically generate a runtime stub instead of a simple stub, and for runtime stubs the JVMTI dynamic code event is already generated by RuntimeStub::new_runtime_stub() -> CodeBlob::trace_new_stub() -> JvmtiExport::post_dynamic_code_generated(). This is exactly how it works on other CPU architectures. The usage of a StubCodeMark in generate_throw_exception() was simply a "day one" bug in the ppc64 port. - I haven't changed generate_handler_for_unsafe_access() so I don't actually understand your concerns. generate_handler_for_unsafe_access() correctly contains a StubCodeMark because it dynamically generates stub code - even if it is just the output of a "not yet implemented" message. Regards, Volker > We also need someone from the compiler team to look at this. > I also included into the cc-list Oleg, who recently touched this area. 
> > Thanks, > Serguei > > > > On 7/14/14 11:24 AM, Volker Simonis wrote: > > Hi everybody, > > can somebody PLEASE review and sponsor this tiny, ppc64-only change. > > Thanks, > Volker > > > On Tue, Jul 8, 2014 at 5:45 PM, Daniel D. Daugherty > wrote: > > Adding the Serviceability Team since JVM/TI belongs to them. > > Dan > > > > On 7/8/14 9:41 AM, Volker Simonis wrote: > > Hi, > > could somebody please review and push the following small, PPC64-only > change to any of the hs team repositories: > > http://cr.openjdk.java.net/~simonis/webrevs/8049441/ > https://bugs.openjdk.java.net/browse/JDK-8049441 > > Background: > > For some stubs we do not actually generate code on PPC64 but > instead use a native C-function with inline assembly. If the > generators of these stubs contain a StubCodeMark, they will trigger > JvmtiExport::post_dynamic_code_generated_internal events with a > zero-length code size. These events may fool clients like Oprofile which > register for these events (thanks to Maynard Johnson who reported this > - see > http://mail.openjdk.java.net/pipermail/ppc-aix-port-dev/2014-June/002032.html). > > This change simply removes the StubCodeMark from > ICacheStubGenerator::generate_icache_flush() and generate_verify_oop() > because they don't generate assembly code. It also removes the > StubCodeMark from generate_throw_exception() because it doesn't really > generate a plain stub but a runtime stub, for which the JVMTI dynamic > code event is already generated by RuntimeStub::new_runtime_stub() -> > CodeBlob::trace_new_stub() -> > JvmtiExport::post_dynamic_code_generated(). 
> > Thank you and best regards, > Volker > > From serguei.spitsyn at oracle.com Tue Jul 15 09:09:39 2014 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Tue, 15 Jul 2014 02:09:39 -0700 Subject: RFR(XS): 8049441: PPC64: Don't use StubCodeMarks for zero-length stubs In-Reply-To: References: <53BC1237.2060006@oracle.com> <53C43F1E.2030805@oracle.com> Message-ID: <53C4EFD3.7000609@oracle.com> On 7/15/14 1:48 AM, Volker Simonis wrote: > On Mon, Jul 14, 2014 at 10:35 PM, serguei.spitsyn at oracle.com > wrote: >> Hi Volker, >> >> It looks good in general. >> >> But I don't understand all the details. >> For instance, your email description of the fix tells that the the event is >> posted by: >> >> RuntimeStub::new_runtime_stub() -> CodeBlob::trace_new_stub() -> >> JvmtiExport::post_dynamic_code_generated() >> >> I see the new_runtime_stub() call in the generate_throw_exception() but >> there is no such call >> in the generate_icache_flush() and generate_handler_for_unsafe_access() . >> >> Probably, the StubCodeMark just needs to be removed there. >> Could you, please, explain this a little bit? >> > Hi Serguei, > > Thank you for looking at my change. I tried to explain your questions > in my initial mail but maybe I wasn't clear enough: > > - in generate_icache_flush() and generate_verify_oop() we DO NOT > GENERATE any stub code. We don't use dynamically generated stubs on > ppc64 for flushing the icache or verifying oops but call C-functions > instead. So there's no need to generate post_dynamic_code_generated() > events for them and also no need for a StubCodeMark. > > - for generate_throw_exception() we dynamically generate a runtime > stub instead of an simple stub and for runtime stubs the JVMT dynamic > code event is already generated by RuntimeStub::new_runtime_stub() -> > CodeBlob::trace_new_stub() -> > JvmtiExport::post_dynamic_code_generated(). This is exactly the way > how it works on other CPU architectures. 
The usage of a StubCodeMark > in generate_throw_exception() was simply a "day one" bug in the ppc64 > port. Thank you for the extra details! I asked for that as my knowledge in this area is limited. The fix looks good to me. I can be a sponsor for integration if needed. But a Review is still required. > > - I haven't changed generate_handler_for_unsafe_access() so I don't > actually understand your concerns. I accidentally copied a wrong name. Sorry. I meant to copy: generate_verify_oop(). Thanks, Serguei > generate_handler_for_unsafe_access() correctly contains a StubCodeMark > because it dynamically generates stub code - even if it is just the > output of a "not yet implemented" message. > > Regards, > Volker > >> We also need someone from the compiler team to look at this. >> I also included into the cc-list Oleg, who recently touched this area. >> >> Thanks, >> Serguei >> >> >> >> On 7/14/14 11:24 AM, Volker Simonis wrote: >> >> Hi everybody, >> >> can somebody PLEASE review and sponsor this tiny, ppc64-only change. >> >> Thanks, >> Volker >> >> >> On Tue, Jul 8, 2014 at 5:45 PM, Daniel D. Daugherty >> wrote: >> >> Adding the Serviceability Team since JVM/TI belongs to them. >> >> Dan >> >> >> >> On 7/8/14 9:41 AM, Volker Simonis wrote: >> >> Hi, >> >> could somebody please review and push the following small, PPC64-only >> change to any of the hs team repositories: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8049441/ >> https://bugs.openjdk.java.net/browse/JDK-8049441 >> >> Background: >> >> For some stubs we actually do not really generate code on PPC64 but >> instead we use a native C-function with inline-assembly. If the >> generators of these stubs contain a StubCodeMark, they will trigger >> JvmtiExport::post_dynamic_code_generated_internal events with a zero >> length code size. 
These events may fool clients like Oprofile which >> register for these events (thanks to Maynard Johnson who reported this >> - see >> http://mail.openjdk.java.net/pipermail/ppc-aix-port-dev/2014-June/002032.html). >> >> This change simply removes the StubCodeMark from >> ICacheStubGenerator::generate_icache_flush() and generate_verify_oop() >> because they don't generate assembly code. It also removes the >> StubCodeMark from generate_throw_exception() because it doesn't really >> generate a plain stub but a runtime stub for which the JVMT dynamic >> code event is already generated by RuntimeStub::new_runtime_stub() -> >> CodeBlob::trace_new_stub() -> >> JvmtiExport::post_dynamic_code_generated(). >> >> Thank you and best regards, >> Volker >> >> From goetz.lindenmaier at sap.com Tue Jul 15 09:18:28 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 15 Jul 2014 09:18:28 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53C4D63A.5060802@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> <53C45912.4050905@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA517@DEWDFEMB12A.global.corp.sap> <53C4D63A.5060802@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDA584@DEWDFEMB12A.global.corp.sap> Hi David, There are no clean rules followed, which happens to cause compile problems here and there. I try to clean this up a bit. 
If inline function foo() calls another inline function bar(), the C++ compiler must see both implementations to compile foo (else it obviously can't inline). It must see the declaration of the function to be inlined before the function where it is inlined. If there are cyclic inlines, you need inline.hpp headers to get a safe state. Also, to be on the safe side, .hpp files may never include .inline.hpp files, else an implementation can end up above the declaration it needs. See also the two examples attached. If there is no cycle, it doesn't matter. That's why a lot of functions are not placed according to this scheme. For the functions I moved to the header (path_separator etc): They are used in a lot of .hpp files. By moving them to os.hpp I could easily avoid including the os.inline.hpp in .hpp files, which would be bad. Best regards, Goetz. -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Dienstag, 15. Juli 2014 09:20 To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories On 15/07/2014 4:34 PM, Lindenmaier, Goetz wrote: > Hi David, > > functions that are completely self contained can go into the .hpp. > Functions that call another inline function defined in another header > must go into .inline.hpp, as otherwise there could be cycles the C++ compilers can't > deal with. A quick survey of the shared *.inline.hpp files shows many don't seem to fit this definition. Are templates also something that needs special handling? I'm not saying anything is wrong with your changes, just trying to understand what the rules are. Thanks, David > Best regards, > Goetz. > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 15. 
Juli 2014 00:26 > To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > On 14/07/2014 10:37 PM, Lindenmaier, Goetz wrote: >> Hi Coleen, >> >> Thanks for sponsoring this! >> >> bytes, ad, nativeInst and vmreg.inline were used quite often >> in shared files, so it definitely makes sense for these to have >> a shared header. >> vm_version and register had an umbrella header, but that >> was not used everywhere, so I cleaned it up. >> That left adGlobals, jniTypes and interp_masm which >> are only used a few time. I did these so that all files >> are treated similarly. >> In the end, I didn't need a header for all, as they were >> not really needed in the shared files, or I found >> another good place, as for adGlobals. >> >> I added you and David H. as reviewer to the webrev: >> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >> I hope this is ok with you, David. > > It might be somewhat premature :) I somewhat confused by the rules for > headers and includes and inlines. I now see with this change a bunch of > inline function definitions being moved out of the .inline.hpp file and > into the .hpp file. Why? What criteria determines if an inline function > goes into the .hpp versus the .inline.hpp file ??? > > Thanks, > David > >> Thanks, >> Goetz. >> >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore >> Sent: Montag, 14. Juli 2014 14:09 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >> >> >> I think this looks like a good cleanup. I can sponsor it and make the >> closed changes also again. 
I initially proposed the #include cascades >> because the alternative at the time was to blindly create a dispatching >> header file for each target dependent file. I wanted to see the >> #includes cleaned up instead and target dependent files included >> directly. This adds 5 dispatching header files, which is fine. I >> think the case of interp_masm.hpp is interesting though, because the >> dispatching file is included in cpu dependent files, which could >> directly include the cpu version. But there are 3 platform independent >> files that include it. I'm not going to object though because I'm >> grateful for this cleanup and I guess it's a matter of opinion which is >> best to include in the cpu dependent directories. >> >> Thanks, >> Coleen >> >> >> On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> David, can I consider this a review? >>> >>> And I please need a sponsor for this change. Could somebody >>> please help here? Probably some closed adaptions are needed. >>> It applies to any repo as my other change traveled around >>> by now. >>> >>> Thanks and best regards, >>> Goetz. >>> >>> >>> -----Original Message----- >>> From: David Holmes [mailto:david.holmes at oracle.com] >>> Sent: Freitag, 11. Juli 2014 07:19 >>> To: Lindenmaier, Goetz; Lois Foltan >>> Cc: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>> >>> On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >>>> Hi, >>>> >>>> foo.hpp as few includes as possible, to avoid cycles. >>>> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >>>> (either directly or via the platform files.) >>>> * should include foo.platform.inline.hpp, so that shared files that >>>> call functions from foo.platform.inline.hpp need not contain the >>>> cascade of all the platform files. 
>>>> If code in foo.platform.inline.hpp is only used in the platform files, >>>> it is not necessary to have an umbrella header. >>>> foo.platform.inline.hpp Should include what is needed in its code. >>>> >>>> For client code: >>>> With this change I now removed all include cascades of platform files except for >>>> those in the 'natural' headers. >>>> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >>>> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >>>> headers, but include bar.[inline.]hpp.) >>>> If it's 1:1, I don't care, as discussed before. >>>> >>>> Does this make sense? >>> I find the overall structure somewhat counter-intuitive from an >>> implementation versus interface perspective. But ... >>> >>> Thanks for the explanation. >>> >>> David >>> >>>> Best regards, >>>> Goetz. >>>> >>>> >>>> which of the above should #include which others, and which should be >>>> #include'd by "client" code? >>>> >>>> Thanks, >>>> David >>>> >>>>> Thanks, >>>>> Lois >>>>> >>>>>> David >>>>>> ----- >>>>>> >>>>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>>>> (however this could pull in more code than needed since >>>>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>>>> >>>>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>>>> - change not related to clean up of umbrella headers, please >>>>>>> explain/justify. >>>>>>> >>>>>>> src/share/vm/code/vmreg.hpp >>>>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>>>> vmreg.inline.hpp or will >>>>>>> this introduce a cyclical inclusion situation, since >>>>>>> vmreg.inline.hpp includes vmreg.hpp? >>>>>>> >>>>>>> src/share/vm/classfile/classFileStream.cpp >>>>>>> - only has a copyright change in the file, no other changes >>>>>>> present? 
>>>>>>> >>>>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>>>> - incorrect copyright, no current year? >>>>>>> >>>>>>> src/share/vm/opto/ad.hpp >>>>>>> - incorrect copyright date for a new file >>>>>>> >>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>> - technically this new file does not need to include >>>>>>> "asm/register.hpp" since >>>>>>> vmreg.hpp already includes it >>>>>>> >>>>>>> My only lingering concern is the cyclical nature of >>>>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>>>> is not much difference between the two? >>>>>>> >>>>>>> Thanks, >>>>>>> Lois >>>>>>> >>>>>>> >>>>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> I decided to clean up the remaining include cascades, too. >>>>>>>> >>>>>>>> This change introduces umbrella headers for the files in the cpu >>>>>>>> subdirectories: >>>>>>>> >>>>>>>> src/share/vm/utilities/bytes.hpp >>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>> src/share/vm/code/nativeInst.hpp >>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>>>> >>>>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>>>> >>>>>>>> Where possible, this change avoids includes in headers. >>>>>>>> Eventually it adds a forward declaration. >>>>>>>> >>>>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>>>> rather small. >>>>>>>> >>>>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>>>> contains machine dependent, c2 specific register information. 
So I >>>>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>>>> includes in, >>>>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>>>> >>>>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>>>> file, I don't think this is performance relevant. But having them in >>>>>>>> the header requirs to pull interp_masm.hpp into interpreter.hpp, and >>>>>>>> thus all the assembler include headers into a lot of files. >>>>>>>> >>>>>>>> Please review and test this change. I please need a sponsor. >>>>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>>>> >>>>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>>>> linuxppc64, >>>>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>>>> aixppc64, ntamd64 >>>>>>>> in opt, dbg and fastdbg versions. >>>>>>>> >>>>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>>>> arrives in other >>>>>>>> repos, it will work there, too. (I tested it together with the other >>>>>>>> change >>>>>>>> against jdk9/dev, too.) >>>>>>>> >>>>>>>> Best regards, >>>>>>>> Goetz. >>>>>>>> >>>>>>>> PS: I also did all the Copyright adaptions ;) >> -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: test.cpp URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: test2.cpp URL: From volker.simonis at gmail.com Tue Jul 15 09:45:20 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 15 Jul 2014 11:45:20 +0200 Subject: RFR(XS): 8049441: PPC64: Don't use StubCodeMarks for zero-length stubs In-Reply-To: <53C4EFD3.7000609@oracle.com> References: <53BC1237.2060006@oracle.com> <53C43F1E.2030805@oracle.com> <53C4EFD3.7000609@oracle.com> Message-ID: Hi Serguei, thanks for sponsoring! So just waiting for another reviewer. 
Anybody volunteers:) Regards, Volker On Tue, Jul 15, 2014 at 11:09 AM, serguei.spitsyn at oracle.com wrote: > On 7/15/14 1:48 AM, Volker Simonis wrote: > > On Mon, Jul 14, 2014 at 10:35 PM, serguei.spitsyn at oracle.com > wrote: > > Hi Volker, > > It looks good in general. > > But I don't understand all the details. > For instance, your email description of the fix tells that the the event is > posted by: > > RuntimeStub::new_runtime_stub() -> CodeBlob::trace_new_stub() -> > JvmtiExport::post_dynamic_code_generated() > > I see the new_runtime_stub() call in the generate_throw_exception() but > there is no such call > in the generate_icache_flush() and generate_handler_for_unsafe_access() . > > Probably, the StubCodeMark just needs to be removed there. > Could you, please, explain this a little bit? > > Hi Serguei, > > Thank you for looking at my change. I tried to explain your questions > in my initial mail but maybe I wasn't clear enough: > > - in generate_icache_flush() and generate_verify_oop() we DO NOT > GENERATE any stub code. We don't use dynamically generated stubs on > ppc64 for flushing the icache or verifying oops but call C-functions > instead. So there's no need to generate post_dynamic_code_generated() > events for them and also no need for a StubCodeMark. > > - for generate_throw_exception() we dynamically generate a runtime > stub instead of an simple stub and for runtime stubs the JVMT dynamic > code event is already generated by RuntimeStub::new_runtime_stub() -> > CodeBlob::trace_new_stub() -> > JvmtiExport::post_dynamic_code_generated(). This is exactly the way > how it works on other CPU architectures. The usage of a StubCodeMark > in generate_throw_exception() was simply a "day one" bug in the ppc64 > port. > > > Thank you for the extra details! > I asked for that as my knowledge in this area is limited. > > The fix looks good to me. > I can be a sponsor for integration if needed. > But a Review is still required. 
> > > > - I haven't changed generate_handler_for_unsafe_access() so I don't > actually understand your concerns. > > > I accidentally copied a wrong name. Sorry. > I had to copy: generate_verify_oop(). > > Thanks, > Serguei > > generate_handler_for_unsafe_access() correctly contains a StubCodeMark > because it dynamically generates stub code - even if it is just the > output of a "not yet implemented" message. > > Regards, > Volker > > We also need someone from the compiler team to look at this. > I also included into the cc-list Oleg, who recently touched this area. > > Thanks, > Serguei > > > > On 7/14/14 11:24 AM, Volker Simonis wrote: > > Hi everybody, > > can somebody PLEASE review and sponsor this tiny, ppc64-only change. > > Thanks, > Volker > > > On Tue, Jul 8, 2014 at 5:45 PM, Daniel D. Daugherty > wrote: > > Adding the Serviceability Team since JVM/TI belongs to them. > > Dan > > > > On 7/8/14 9:41 AM, Volker Simonis wrote: > > Hi, > > could somebody please review and push the following small, PPC64-only > change to any of the hs team repositories: > > http://cr.openjdk.java.net/~simonis/webrevs/8049441/ > https://bugs.openjdk.java.net/browse/JDK-8049441 > > Background: > > For some stubs we actually do not really generate code on PPC64 but > instead we use a native C-function with inline-assembly. If the > generators of these stubs contain a StubCodeMark, they will trigger > JvmtiExport::post_dynamic_code_generated_internal events with a zero > length code size. These events may fool clients like Oprofile which > register for these events (thanks to Maynard Johnson who reported this > - see > http://mail.openjdk.java.net/pipermail/ppc-aix-port-dev/2014-June/002032.html). > > This change simply removes the StubCodeMark from > ICacheStubGenerator::generate_icache_flush() and generate_verify_oop() > because they don't generate assembly code. 
It also removes the > StubCodeMark from generate_throw_exception() because it doesn't really > generate a plain stub but a runtime stub for which the JVMT dynamic > code event is already generated by RuntimeStub::new_runtime_stub() -> > CodeBlob::trace_new_stub() -> > JvmtiExport::post_dynamic_code_generated(). > > Thank you and best regards, > Volker > > > From mikael.gerdin at oracle.com Tue Jul 15 11:36:45 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Tue, 15 Jul 2014 13:36:45 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C4DCB8.5020705@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C4DCB8.5020705@oracle.com> Message-ID: <9129730.8quV1l9zAl@mgerdin03> Tobias, On Tuesday 15 July 2014 09.48.08 Tobias Hartmann wrote: > Hi Vladimir, > > > Impressive work, Tobias! > > Thanks! Took me a while to figure out what's happening. > > > So before the permgen removal embedded method* were oops and they were > > processed in relocInfo::oop_type loop. > > Okay, good to know. That explains why the terms oops and metadata are > used interchangeably at some points in the code. Yep, there are a lot of leftover references to metadata as oops, especially in some compiler/runtime parts such as MDOs and CompiledICs. > > > May be instead of specializing opt_virtual_call_type and > > static_call_type call site you can simple add a loop for > > relocInfo::metadata_type (similar to oop_type loop)? > > The problem with iterating over relocInfo::metadata_type is that we > don't know to which stub, i.e., to which IC the Method* pointer belongs. > Since we don't want to unload the entire method but only clear the > corresponding IC, we need this information. I'm wondering, is there some way to figure out the IC for the Method*? 
In CompiledStaticCall::emit_to_interp_stub a static_stub_Relocation is created and, from the looks of it, it points to the call site through some setting of a "mark". The metadata relocation is emitted just after the static_stub_Relocation, so one approach (untested) could be to have a case for static_stub_Relocations, create a CompiledIC.at(reloc->static_call()) and check if it's a call to interpreted. If it is, then advance the relocIterator to the next position and check that metadata for liveness. /Mikael > > Thanks, > Tobias > > > Thanks, > > Vladimir > > > > On 7/14/14 4:56 AM, Tobias Hartmann wrote: > >> Hi, > >> > >> please review the following patch for JDK-8029443. > >> > >> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 > >> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ > >> > >> *Problem* > >> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks > >> if a nmethod can be unloaded because it contains dead oops. If class > >> unloading occurred we additionally clear all ICs where the cached > >> metadata refers to an unloaded klass or method. If the nmethod is not > >> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all > >> metadata is alive. The assert in CheckClass::check_class fails because > >> the nmethod contains Method* metadata corresponding to a dead Klass. > >> The Method* belongs to a to-interpreter stub [1] of an optimized > >> compiled IC. Normally we clear those stubs prior to verification to > >> avoid dangling references to Method* [2], but only if the stub is not in > >> use, i.e. if the IC is not in to-interpreted mode. In this case the > >> to-interpreter stub may be executed and hand a stale Method* to the > >> interpreter. > >> > >> *Solution > >> *The implementation of nmethod::do_unloading(..) is changed to clean > >> compiled ICs and compiled static calls if they call into a > >> to-interpreter stub that references dead Method* metadata. 
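Spelled out, Mikael's (untested) suggestion would look roughly like the following pseudocode. The names follow his description and the quoted mails only; this sketch has not been checked against the actual HotSpot sources, so the exact API calls may differ:

```
// Pseudocode sketch of Mikael's idea -- not verified against HotSpot.
RelocIterator iter(nm);
while (iter.next()) {
  if (iter.type() == relocInfo::static_stub_type) {
    // The static_stub_Relocation points back at its call site ("mark"),
    // so from it we can find the IC the to-interpreter stub belongs to.
    CompiledIC* ic = CompiledIC_at(iter.static_stub_reloc()->static_call());
    if (ic->is_call_to_interpreted()) {
      // The Method* metadata relocation is emitted right after the
      // static stub relocation (see emit_to_interp_stub), so step once.
      iter.next();
      Method* m = (Method*) iter.metadata_reloc()->metadata_value();
      if (!m->method_holder()->is_loader_alive(is_alive)) {
        ic->set_to_clean();  // clear just this IC, keep the nmethod
      }
    }
  }
}
```

This matches the intent of Tobias's solution quoted above: clean the individual IC rather than unload the whole nmethod.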
> >> > >> The patch was affected by the G1 class unloading changes (JDK-8048248) > >> because the method nmethod::do_unloading_parallel(..) was added. I > >> adapted the implementation as well. > >> * > >> Testing > >> *Failing test (runThese) > >> JPRT > >> > >> Thanks, > >> Tobias > >> > >> [1] see CompiledStaticCall::emit_to_interp_stub(..) > >> [2] see nmethod::verify_metadata_loaders(..), > >> static_stub_reloc()->clear_inline_cache() clears the stub From coleen.phillimore at oracle.com Tue Jul 15 13:57:01 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 15 Jul 2014 09:57:01 -0400 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDA584@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> <53C45912.4050905@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA517@DEWDFEMB12A.global.corp.sap> <53C4D63A.5060802@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA584@DEWDFEMB12A.global.corp.sap> Message-ID: <53C5332D.3010604@oracle.com> It also seems to me that these vmreg_ppc.hpp inline functions are special in that they are included directly in the class declaration, rather than the preferred separate class declaration. So I think this doesn't follow the "rules" as such because this case is different. It would be nice to clean out these includes in another cleanup pass. I hit the same cycles on the closed part but didn't realize it was because of cycles. 
Thanks, Coleen On 7/15/14, 5:18 AM, Lindenmaier, Goetz wrote: > Hi David, > > There are no clean rules followed, which happens to cause > compile problems here and there. I try to clean this up a bit. > > If inline function foo() calls another inline function bar(), the c++ compiler > must see both implementations to compile foo (else it obviously can't > inline). It must see the declaration of the function to be inlined before > the function where it is inlined. If there are cyclic inlines you need inline.hpp > headers to get a safe state. Also, to be on the safe side, .hpp files never may include > .inline.hpp files, else an implementation can end up above the declaration > it needs. See also the two examples attached. > > If there is no cycle, it doesn't matter. That's why a lot of functions > are not placed according to this scheme. > > For the functions I moved to the header (path_separator etc): > They are used in a lot of .hpp files. Moving them to os.hpp I easily could avoid > including the os.inline.hpp in .hpp files, which would be bad. > > Best regards, > Goetz. > > > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 15. Juli 2014 09:20 > To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > On 15/07/2014 4:34 PM, Lindenmaier, Goetz wrote: >> Hi David, >> >> functions that are completely self contained can go into the .hpp. >> Functions that call another inline function defined in an other header >> must go to .inline.hpp as else there could be cycles the c++ compilers can't >> deal with. > A quick survey of the shared *.inline.hpp files shows many don't seem to > fit this definition. Are templates also something that needs special > handling? > > I'm not saying anything is wrong with your changes, just trying to > understand what the rules are. 
> > Thanks, > David > >> Best regards, >> Goetz. >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Dienstag, 15. Juli 2014 00:26 >> To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >> >> On 14/07/2014 10:37 PM, Lindenmaier, Goetz wrote: >>> Hi Coleen, >>> >>> Thanks for sponsoring this! >>> >>> bytes, ad, nativeInst and vmreg.inline were used quite often >>> in shared files, so it definitely makes sense for these to have >>> a shared header. >>> vm_version and register had an umbrella header, but that >>> was not used everywhere, so I cleaned it up. >>> That left adGlobals, jniTypes and interp_masm which >>> are only used a few time. I did these so that all files >>> are treated similarly. >>> In the end, I didn't need a header for all, as they were >>> not really needed in the shared files, or I found >>> another good place, as for adGlobals. >>> >>> I added you and David H. as reviewer to the webrev: >>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>> I hope this is ok with you, David. >> It might be somewhat premature :) I somewhat confused by the rules for >> headers and includes and inlines. I now see with this change a bunch of >> inline function definitions being moved out of the .inline.hpp file and >> into the .hpp file. Why? What criteria determines if an inline function >> goes into the .hpp versus the .inline.hpp file ??? >> >> Thanks, >> David >> >>> Thanks, >>> Goetz. >>> >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore >>> Sent: Montag, 14. 
Juli 2014 14:09 >>> To: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>> >>> >>> I think this looks like a good cleanup. I can sponsor it and make the >>> closed changes also again. I initially proposed the #include cascades >>> because the alternative at the time was to blindly create a dispatching >>> header file for each target dependent file. I wanted to see the >>> #includes cleaned up instead and target dependent files included >>> directly. This adds 5 dispatching header files, which is fine. I >>> think the case of interp_masm.hpp is interesting though, because the >>> dispatching file is included in cpu dependent files, which could >>> directly include the cpu version. But there are 3 platform independent >>> files that include it. I'm not going to object though because I'm >>> grateful for this cleanup and I guess it's a matter of opinion which is >>> best to include in the cpu dependent directories. >>> >>> Thanks, >>> Coleen >>> >>> >>> On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: >>>> Hi, >>>> >>>> David, can I consider this a review? >>>> >>>> And I please need a sponsor for this change. Could somebody >>>> please help here? Probably some closed adaptions are needed. >>>> It applies to any repo as my other change traveled around >>>> by now. >>>> >>>> Thanks and best regards, >>>> Goetz. >>>> >>>> >>>> -----Original Message----- >>>> From: David Holmes [mailto:david.holmes at oracle.com] >>>> Sent: Freitag, 11. Juli 2014 07:19 >>>> To: Lindenmaier, Goetz; Lois Foltan >>>> Cc: hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>>> >>>> On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >>>>> Hi, >>>>> >>>>> foo.hpp as few includes as possible, to avoid cycles. 
>>>>> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >>>>> (either directly or via the platform files.) >>>>> * should include foo.platform.inline.hpp, so that shared files that >>>>> call functions from foo.platform.inline.hpp need not contain the >>>>> cascade of all the platform files. >>>>> If code in foo.platform.inline.hpp is only used in the platform files, >>>>> it is not necessary to have an umbrella header. >>>>> foo.platform.inline.hpp Should include what is needed in its code. >>>>> >>>>> For client code: >>>>> With this change I now removed all include cascades of platform files except for >>>>> those in the 'natural' headers. >>>>> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >>>>> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >>>>> headers, but include bar.[inline.]hpp.) >>>>> If it's 1:1, I don't care, as discussed before. >>>>> >>>>> Does this make sense? >>>> I find the overall structure somewhat counter-intuitive from an >>>> implementation versus interface perspective. But ... >>>> >>>> Thanks for the explanation. >>>> >>>> David >>>> >>>>> Best regards, >>>>> Goetz. >>>>> >>>>> >>>>> which of the above should #include which others, and which should be >>>>> #include'd by "client" code? >>>>> >>>>> Thanks, >>>>> David >>>>> >>>>>> Thanks, >>>>>> Lois >>>>>> >>>>>>> David >>>>>>> ----- >>>>>>> >>>>>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>>>>> (however this could pull in more code than needed since >>>>>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>>>>> >>>>>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>>>>> - change not related to clean up of umbrella headers, please >>>>>>>> explain/justify. 
>>>>>>>> >>>>>>>> src/share/vm/code/vmreg.hpp >>>>>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>>>>> vmreg.inline.hpp or will >>>>>>>> this introduce a cyclical inclusion situation, since >>>>>>>> vmreg.inline.hpp includes vmreg.hpp? >>>>>>>> >>>>>>>> src/share/vm/classfile/classFileStream.cpp >>>>>>>> - only has a copyright change in the file, no other changes >>>>>>>> present? >>>>>>>> >>>>>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>>>>> - incorrect copyright, no current year? >>>>>>>> >>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>> - incorrect copyright date for a new file >>>>>>>> >>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>> - technically this new file does not need to include >>>>>>>> "asm/register.hpp" since >>>>>>>> vmreg.hpp already includes it >>>>>>>> >>>>>>>> My only lingering concern is the cyclical nature of >>>>>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>>>>> is not much difference between the two? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Lois >>>>>>>> >>>>>>>> >>>>>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I decided to clean up the remaining include cascades, too. >>>>>>>>> >>>>>>>>> This change introduces umbrella headers for the files in the cpu >>>>>>>>> subdirectories: >>>>>>>>> >>>>>>>>> src/share/vm/utilities/bytes.hpp >>>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>>> src/share/vm/code/nativeInst.hpp >>>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>>>>> >>>>>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>>>>> >>>>>>>>> Where possible, this change avoids includes in headers. >>>>>>>>> Where necessary, it adds a forward declaration instead. 
>>>>>>>>> >>>>>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>>>>> rather small. >>>>>>>>> >>>>>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>>>>> contains machine dependent, c2 specific register information. So I >>>>>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>>>>> includes in, >>>>>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>>>>> >>>>>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>>>>> file, I don't think this is performance relevant. But having them in >>>>>>>>> the header requires pulling interp_masm.hpp into interpreter.hpp, and >>>>>>>>> thus all the assembler include headers into a lot of files. >>>>>>>>> >>>>>>>>> Please review and test this change. I please need a sponsor. >>>>>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>>>>> >>>>>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>>>>> linuxppc64, >>>>>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>>>>> aixppc64, ntamd64 >>>>>>>>> in opt, dbg and fastdbg versions. >>>>>>>>> >>>>>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>>>>> arrives in other >>>>>>>>> repos, it will work there, too. (I tested it together with the other >>>>>>>>> change >>>>>>>>> against jdk9/dev, too.) >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Goetz. >>>>>>>>> >>>>>>>>> PS: I also did all the Copyright adaptions ;) From daniel.daugherty at oracle.com Tue Jul 15 14:00:47 2014 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Tue, 15 Jul 2014 08:00:47 -0600 Subject: RFR(XS): 8049441: PPC64: Don't use StubCodeMarks for zero-length stubs In-Reply-To: <53BC1237.2060006@oracle.com> References: <53BC1237.2060006@oracle.com> Message-ID: <53C5340F.80109@oracle.com> > http://cr.openjdk.java.net/~simonis/webrevs/8049441/ src/cpu/ppc/vm/icache_ppc.cpp No comments. src/cpu/ppc/vm/stubGenerator_ppc.cpp No comments. Thumbs up. Dan On 7/8/14 9:45 AM, Daniel D. Daugherty wrote: > Adding the Serviceability Team since JVM/TI belongs to them. > > Dan > > > On 7/8/14 9:41 AM, Volker Simonis wrote: >> Hi, >> >> could somebody please review and push the following small, PPC64-only >> change to any of the hs team repositories: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8049441/ >> https://bugs.openjdk.java.net/browse/JDK-8049441 >> >> Background: >> >> For some stubs we actually do not really generate code on PPC64 but >> instead we use a native C-function with inline-assembly. If the >> generators of these stubs contain a StubCodeMark, they will trigger >> JvmtiExport::post_dynamic_code_generated_internal events with a zero >> length code size. These events may fool clients like Oprofile which >> register for these events (thanks to Maynard Johnson who reported this >> - see >> http://mail.openjdk.java.net/pipermail/ppc-aix-port-dev/2014-June/002032.html). >> >> This change simply removes the StubCodeMark from >> ICacheStubGenerator::generate_icache_flush() and generate_verify_oop() >> because they don't generate assembly code. It also removes the >> StubCodeMark from generate_throw_exception() because it doesn't really >> generate a plain stub but a runtime stub for which the JVMTI dynamic >> code event is already generated by RuntimeStub::new_runtime_stub() -> >> CodeBlob::trace_new_stub() -> >> JvmtiExport::post_dynamic_code_generated(). 
>> >> Thank you and best regards, >> Volker > > > > From mikael.gerdin at oracle.com Tue Jul 15 13:58:49 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Tue, 15 Jul 2014 15:58:49 +0200 Subject: RFR: 8011397: JTREG needs to copy additional WhiteBox class file to JTwork/scratch/sun/hotspot In-Reply-To: <53BACF55.2020301@oracle.com> References: <536B7CF0.6010508@oracle.com> <53AAE5DA.2030700@oracle.com> <53BACF55.2020301@oracle.com> Message-ID: <2443586.qRToXKmNqX@mgerdin03> Andrey, On Monday 07 July 2014 20.48.21 Andrey Zakharov wrote: > Hi, all > Mikael, can you please review it. Sorry, I was on vacation last week. > webrev: > http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ Looks ok for now. We should consider revisiting this by either switching to @run main/bootclasspath or deleting the WhiteboxPermission nested class and using some other way for permission checks (if they are at all needed). /Mikael > > Thanks. > > On 25.06.2014 19:08, Andrey Zakharov wrote: > > Hi, all > > So in progress of previous email - > > webrev: > > http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ > > > > Thanks. > > > > On 16.06.2014 19:57, Andrey Zakharov wrote: > >> Hi, all > >> So the issue is that when tests with the WhiteBox API are invoked with > >> -Xverify:all they fail with Exception java.lang.NoClassDefFoundError: > >> sun/hotspot/WhiteBox$WhiteBoxPermission > >> Solutions that are observed: > >> 1. Copy WhiteBoxPermission with WhiteBox. But > >> > >> >> Perhaps this is a good time to get rid of ClassFileInstaller > >> > >> altogether? > >> > >> 2. Using bootclasspath to hook pre-built whitebox (due @library > >> /testlibrary/whitebox) . Some tests have @run main/othervm, some use > >> ProcessBuilder. > >> > >> - main/othervm/bootclasspath adds ${test.src} and > >> > >> ${test.classes} to options. > >> > >> - With ProcessBuilder we can just add ${test.classes} > >> > >> Question here is, can it break some tests? 
While testing this, I > >> found only https://bugs.openjdk.java.net/browse/JDK-8046231, others > >> look fine. > >> > >> 3. Make ClassFileInstaller deal with inner classes like that: > >> diff -r 6ed24aedeef0 -r c01651363ba8 > >> test/testlibrary/ClassFileInstaller.java > >> --- a/test/testlibrary/ClassFileInstaller.java Thu Jun 05 19:02:56 > >> 2014 +0400 > >> +++ b/test/testlibrary/ClassFileInstaller.java Fri Jun 06 18:18:11 > >> 2014 +0400 > >> @@ -50,6 +50,16 @@ > >> > >> } > >> // Create the class file > >> Files.copy(is, p, StandardCopyOption.REPLACE_EXISTING); > >> > >> + > >> + for (Class cls : > >> Class.forName(arg).getDeclaredClasses()) { > >> + //if (!Modifier.isStatic(cls.getModifiers())) { > >> + String pathNameSub = > >> cls.getCanonicalName().replace('.', '/').concat(".class"); > >> + Path pathSub = Paths.get(pathNameSub); > >> + InputStream streamSub = > >> cl.getResourceAsStream(pathNameSub); > >> + Files.copy(streamSub, pathSub, > >> StandardCopyOption.REPLACE_EXISTING); > >> + //} > >> + } > >> + > >> > >> } > >> > >> } > >> > >> } > >> > >> Works fine for ordinary classes, but fails for WhiteBox because > >> Class.forName initiates the class. WhiteBox has a "static" section, and > >> initialization fails as it cannot bind to native methods > >> "registerNatives" and so on. > >> > >> > >> So, let's return to the first option? Just add everywhere > >> > >> * @run main ClassFileInstaller sun.hotspot.WhiteBox > >> > >> + * @run main ClassFileInstaller sun.hotspot.WhiteBox$WhiteBoxPermission > >> > >> Thanks. > >> > >> On 10.06.2014 19:43, Igor Ignatyev wrote: > >>> Andrey, > >>> > >>> I don't like this idea, since it completely changes the tests. > >>> 'run/othervm/bootclasspath' adds all paths from CP to BCP, so the > >>> tests whose main idea was testing WB methods themselves (sanity, > >>> compiler/whitebox, ...) don't check that it's possible to use WB > >>> when the application isn't in BCP. 
> >>> > >>> Igor > >>> > >>> On 06/09/2014 06:59 PM, Andrey Zakharov wrote: > >>>> Hi, everybody > >>>> I have tested my changes on major platforms and found one bug, filed: > >>>> https://bugs.openjdk.java.net/browse/JDK-8046231 > >>>> Also, i did another try to make ClassFileInstaller to copy all inner > >>>> classes within parent, but this fails for WhiteBox due its static > >>>> "registerNatives" dependency. > >>>> > >>>> Please, review suggested changes: > >>>> - replace ClassFileInstaller and run/othervm with > >>>> > >>>> "run/othervm/bootclasspath". > >>>> > >>>> bootclasspath parameter for othervm adds-Xbootclasspath/a: > >>>> option with ${test.src} and ${test.classes}according to > >>>> http://hg.openjdk.java.net/code-tools/jtreg/file/31003a1c46d9/src/share > >>>> /classes/com/sun/javatest/regtest/MainAction.java. > >>>> > >>>> Is this suitable for our needs - give to test compiled WhiteBox? > >>>> > >>>> - replace explicit -Xbootclasspath option values (".") in > >>>> > >>>> ProcessBuilder invocations to ${test.classes} where WhiteBox has been > >>>> compiled. > >>>> > >>>> Webrev: > >>>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.00/ > >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8011397 > >>>> Thanks. > >>>> > >>>> On 23.05.2014 15:40, Andrey Zakharov wrote: > >>>>> On 22.05.2014 12:47, Igor Ignatyev wrote: > >>>>>> Andrey, > >>>>>> > >>>>>> 1. You changed dozen of tests, have you tested your changes? > >>>>> > >>>>> Locally, aurora on the way. > >>>>> > >>>>>> 2. Your changes of year in copyright is wrong. it has to be > >>>>>> $first_year, [$last_year, ], see Mark's email[1] for details. > >>>>>> > >>>>>> [1] > >>>>>> http://mail.openjdk.java.net/pipermail/jdk7-dev/2010-May/001321.html > >>>>> > >>>>> Thanks, fixed. will be uploaded soon. 
> >>>>> > >>>>>> Igor > >>>>>> > >>>>>> On 05/21/2014 07:37 PM, Andrey Zakharov wrote: > >>>>>>> On 13.05.2014 14:43, Andrey Zakharov wrote: > >>>>>>>> Hi > >>>>>>>> So here is trivial patch - > >>>>>>>> removing ClassFileInstaller sun.hotspot.WhiteBox and adding > >>>>>>>> main/othervm/bootclasspath > >>>>>>>> where this is needed > >>>>>>>> > >>>>>>>> Also, some tests are modified as > >>>>>>>> - "-Xbootclasspath/a:.", > >>>>>>>> + "-Xbootclasspath/a:" + > >>>>>>>> System.getProperty("test.classes"), > >>>>>>>> > >>>>>>>> Thanks. > >>>>>>> > >>>>>>> webrev: http://cr.openjdk.java.net/~jwilhelm/8011397/webrev.02/ > >>>>>>> bug: https://bugs.openjdk.java.net/browse/JDK-8011397 > >>>>>>> Thanks. > >>>>>>> > >>>>>>>> On 09.05.2014 12:13, Mikael Gerdin wrote: > >>>>>>>>> On Thursday 08 May 2014 19.28.13 Igor Ignatyev wrote: > >>>>>>>>>> // cc'ing hotspot-dev instead of compiler, runtime and gc lists. > >>>>>>>>>> > >>>>>>>>>> On 05/08/2014 07:09 PM, Filipp Zhinkin wrote: > >>>>>>>>>>> Andrey, > >>>>>>>>>>> > >>>>>>>>>>> I've CC'ed compiler and runtime mailing list, because your > >>>>>>>>>>> changes > >>>>>>>>>>> affect tests for other components too. > >>>>>>>>>>> > >>>>>>>>>>> I don't like your solution (but I'm not a reviewer, so treat my > >>>>>>>>>>> words > >>>>>>>>>>> just as a suggestion), > >>>>>>>>>>> because we'll have to write more meta information for each test > >>>>>>>>>>> and it > >>>>>>>>>>> is very easy to > >>>>>>>>>>> forget to install WhiteBoxPermission if you don't test your > >>>>>>>>>>> test > >>>>>>>>>>> with > >>>>>>>>>>> some security manager. > >>>>>>>>>>> > >>>>>>>>>>> From my point of view, it will be better to extend > >>>>>>>>>>> > >>>>>>>>>>> ClassFileInstaller > >>>>>>>>>>> > >>>>>>>>>>> so it will copy not only > >>>>>>>>>>> a class whose name was passed as an argument, but also all > >>>>>>>>>>> inner > >>>>>>>>>>> classes of that class. 
> >>>>>>>>>>> And if someone wants to copy only the specified class without inner > >>>>>>>>>>> classes, > >>>>>>>>>>> then some option > >>>>>>>>>>> could be added to ClassFileInstaller to force such behaviour. > >>>>>>>>> > >>>>>>>>> Perhaps this is a good time to get rid of ClassFileInstaller > >>>>>>>>> altogether? > >>>>>>>>> > >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8009117 > >>>>>>>>> > >>>>>>>>> The reason for its existence is that the WhiteBox class needs > >>>>>>>>> to be > >>>>>>>>> on the > >>>>>>>>> boot class path. > >>>>>>>>> If we can live with having all the test's classes on the boot > >>>>>>>>> class > >>>>>>>>> path then > >>>>>>>>> we could use the /bootclasspath option in jtreg as stated in > >>>>>>>>> the RFE. > >>>>>>>>> > >>>>>>>>> /Mikael > >>>>>>>>> > >>>>>>>>>>> Thanks, > >>>>>>>>>>> Filipp. > >>>>>>>>>>> > >>>>>>>>>>> On 05/08/2014 04:47 PM, Andrey Zakharov wrote: > >>>>>>>>>>>> Hi! > >>>>>>>>>>>> Suggesting patch with fixes for > >>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8011397 > >>>>>>>>>>>> > >>>>>>>>>>>> webrev: > >>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20275/8011397.t > >>>>>>>>>>>> gz > >>>>>>>>>>>> > >>>>>>>>>>>> patch: > >>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20274/8011397.W > >>>>>>>>>>>> hiteBoxPer > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> mission > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> Thanks. 
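[Editor's note] The ClassFileInstaller experiment quoted above fails because the one-argument Class.forName loads *and initializes* the class, running WhiteBox's static registerNatives block. The three-argument overload Class.forName(name, false, loader) loads without initializing, so reflective metadata queries such as getDeclaredClasses() stay safe. This is only a sketch of that distinction, not the fix that was adopted; the Main/Probe class names and the "probe.initialized" property are invented for the demo.

```java
// Demonstrates load-without-initialize vs. load-and-initialize.
public class Main {
    static class Probe {
        static {
            // Runs only when the class is initialized, not when it is merely loaded.
            System.setProperty("probe.initialized", "true");
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader cl = Main.class.getClassLoader();

        // initialize=false: the class is loaded but its static block does not run,
        // so e.g. getDeclaredClasses() could be called without triggering it.
        Class<?> probe = Class.forName("Main$Probe", false, cl);
        System.out.println("loaded: " + probe.getName());
        System.out.println("after load-only: " + System.getProperty("probe.initialized"));

        // initialize=true: now the static block runs.
        Class.forName("Main$Probe", true, cl);
        System.out.println("after initialize: " + System.getProperty("probe.initialized"));
    }
}
```

A ClassFileInstaller variant built on this would enumerate nested classes via Class.forName(arg, false, cl).getDeclaredClasses() and never trip over a static initializer that binds native methods.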
From goetz.lindenmaier at sap.com Tue Jul 15 13:59:20 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 15 Jul 2014 13:59:20 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53C5332D.3010604@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> <53C45912.4050905@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA517@DEWDFEMB12A.global.corp.sap> <53C4D63A.5060802@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA584@DEWDFEMB12A.global.corp.sap> <53C5332D.3010604@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDA633@DEWDFEMB12A.global.corp.sap> Hi Coleen, It's a quite common pattern in hotspot to specialize a class per platform by having headers that go into the middle of a class declaration. E.g., it's the same with os.hpp. Best regards, Goetz. -----Original Message----- From: Coleen Phillimore [mailto:coleen.phillimore at oracle.com] Sent: Dienstag, 15. Juli 2014 15:57 To: Lindenmaier, Goetz; David Holmes; hotspot-dev at openjdk.java.net Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories It also seems to me that these vmreg_ppc.hpp inline functions are special in that they are included directly in the class declaration, rather than the preferred separate class declaration. So I think this doesn't follow the "rules" as such because this case is different. It would be nice to clean out these includes in another cleanup pass. 
I hit the same cycles on the closed part but didn't realize it was because of cycles. Thanks, Coleen On 7/15/14, 5:18 AM, Lindenmaier, Goetz wrote: > Hi David, > > There are no clean rules followed, which happens to cause > compile problems here and there. I try to clean this up a bit. > > If inline function foo() calls another inline function bar(), the C++ compiler > must see both implementations to compile foo (else it obviously can't > inline). It must see the declaration of the function to be inlined before > the function where it is inlined. If there are cyclic inlines you need inline.hpp > headers to get a safe state. Also, to be on the safe side, .hpp files may never include > .inline.hpp files, else an implementation can end up above the declaration > it needs. See also the two examples attached. > > If there is no cycle, it doesn't matter. That's why a lot of functions > are not placed according to this scheme. > > For the functions I moved to the header (path_separator etc): > They are used in a lot of .hpp files. By moving them to os.hpp I could easily avoid > including os.inline.hpp in .hpp files, which would be bad. > > Best regards, > Goetz. > > > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 15. Juli 2014 09:20 > To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > On 15/07/2014 4:34 PM, Lindenmaier, Goetz wrote: >> Hi David, >> >> functions that are completely self contained can go into the .hpp. >> Functions that call another inline function defined in another header >> must go to .inline.hpp, as otherwise there could be cycles the C++ compilers can't >> deal with. > A quick survey of the shared *.inline.hpp files shows many don't seem to > fit this definition. Are templates also something that needs special > handling? 
> > I'm not saying anything is wrong with your changes, just trying to > understand what the rules are. > > Thanks, > David > >> Best regards, >> Goetz. >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Dienstag, 15. Juli 2014 00:26 >> To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >> >> On 14/07/2014 10:37 PM, Lindenmaier, Goetz wrote: >>> Hi Coleen, >>> >>> Thanks for sponsoring this! >>> >>> bytes, ad, nativeInst and vmreg.inline were used quite often >>> in shared files, so it definitely makes sense for these to have >>> a shared header. >>> vm_version and register had an umbrella header, but that >>> was not used everywhere, so I cleaned it up. >>> That left adGlobals, jniTypes and interp_masm which >>> are only used a few times. I did these so that all files >>> are treated similarly. >>> In the end, I didn't need a header for all, as they were >>> not really needed in the shared files, or I found >>> another good place, as for adGlobals. >>> >>> I added you and David H. as reviewers to the webrev: >>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>> I hope this is ok with you, David. >> It might be somewhat premature :) I'm somewhat confused by the rules for >> headers and includes and inlines. I now see with this change a bunch of >> inline function definitions being moved out of the .inline.hpp file and >> into the .hpp file. Why? What criteria determine if an inline function >> goes into the .hpp versus the .inline.hpp file ??? >> >> Thanks, >> David >> >>> Thanks, >>> Goetz. >>> >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore >>> Sent: Montag, 14. 
Juli 2014 14:09 >>> To: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>> >>> >>> I think this looks like a good cleanup. I can sponsor it and make the >>> closed changes also again. I initially proposed the #include cascades >>> because the alternative at the time was to blindly create a dispatching >>> header file for each target dependent file. I wanted to see the >>> #includes cleaned up instead and target dependent files included >>> directly. This adds 5 dispatching header files, which is fine. I >>> think the case of interp_masm.hpp is interesting though, because the >>> dispatching file is included in cpu dependent files, which could >>> directly include the cpu version. But there are 3 platform independent >>> files that include it. I'm not going to object though because I'm >>> grateful for this cleanup and I guess it's a matter of opinion which is >>> best to include in the cpu dependent directories. >>> >>> Thanks, >>> Coleen >>> >>> >>> On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: >>>> Hi, >>>> >>>> David, can I consider this a review? >>>> >>>> And I please need a sponsor for this change. Could somebody >>>> please help here? Probably some closed adaptions are needed. >>>> It applies to any repo as my other change traveled around >>>> by now. >>>> >>>> Thanks and best regards, >>>> Goetz. >>>> >>>> >>>> -----Original Message----- >>>> From: David Holmes [mailto:david.holmes at oracle.com] >>>> Sent: Freitag, 11. Juli 2014 07:19 >>>> To: Lindenmaier, Goetz; Lois Foltan >>>> Cc: hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>>> >>>> On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >>>>> Hi, >>>>> >>>>> foo.hpp as few includes as possible, to avoid cycles. 
>>>>> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >>>>> (either directly or via the platform files.) >>>>> * should include foo.platform.inline.hpp, so that shared files that >>>>> call functions from foo.platform.inline.hpp need not contain the >>>>> cascade of all the platform files. >>>>> If code in foo.platform.inline.hpp is only used in the platform files, >>>>> it is not necessary to have an umbrella header. >>>>> foo.platform.inline.hpp Should include what is needed in its code. >>>>> >>>>> For client code: >>>>> With this change I now removed all include cascades of platform files except for >>>>> those in the 'natural' headers. >>>>> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >>>>> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >>>>> headers, but include bar.[inline.]hpp.) >>>>> If it's 1:1, I don't care, as discussed before. >>>>> >>>>> Does this make sense? >>>> I find the overall structure somewhat counter-intuitive from an >>>> implementation versus interface perspective. But ... >>>> >>>> Thanks for the explanation. >>>> >>>> David >>>> >>>>> Best regards, >>>>> Goetz. >>>>> >>>>> >>>>> which of the above should #include which others, and which should be >>>>> #include'd by "client" code? >>>>> >>>>> Thanks, >>>>> David >>>>> >>>>>> Thanks, >>>>>> Lois >>>>>> >>>>>>> David >>>>>>> ----- >>>>>>> >>>>>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>>>>> (however this could pull in more code than needed since >>>>>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>>>>> >>>>>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>>>>> - change not related to clean up of umbrella headers, please >>>>>>>> explain/justify. 
>>>>>>>> >>>>>>>> src/share/vm/code/vmreg.hpp >>>>>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>>>>> vmreg.inline.hpp or will >>>>>>>> this introduce a cyclical inclusion situation, since >>>>>>>> vmreg.inline.hpp includes vmreg.hpp? >>>>>>>> >>>>>>>> src/share/vm/classfile/classFileStream.cpp >>>>>>>> - only has a copyright change in the file, no other changes >>>>>>>> present? >>>>>>>> >>>>>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>>>>> - incorrect copyright, no current year? >>>>>>>> >>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>> - incorrect copyright date for a new file >>>>>>>> >>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>> - technically this new file does not need to include >>>>>>>> "asm/register.hpp" since >>>>>>>> vmreg.hpp already includes it >>>>>>>> >>>>>>>> My only lingering concern is the cyclical nature of >>>>>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>>>>> is not much difference between the two? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Lois >>>>>>>> >>>>>>>> >>>>>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I decided to clean up the remaining include cascades, too. >>>>>>>>> >>>>>>>>> This change introduces umbrella headers for the files in the cpu >>>>>>>>> subdirectories: >>>>>>>>> >>>>>>>>> src/share/vm/utilities/bytes.hpp >>>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>>> src/share/vm/code/nativeInst.hpp >>>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>>>>> >>>>>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>>>>> >>>>>>>>> Where possible, this change avoids includes in headers. >>>>>>>>> Where necessary, it adds a forward declaration instead. 
>>>>>>>>> >>>>>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>>>>> rather small. >>>>>>>>> >>>>>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>>>>> contains machine dependent, c2 specific register information. So I >>>>>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>>>>> includes in, >>>>>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>>>>> >>>>>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>>>>> file, I don't think this is performance relevant. But having them in >>>>>>>>> the header requires pulling interp_masm.hpp into interpreter.hpp, and >>>>>>>>> thus all the assembler include headers into a lot of files. >>>>>>>>> >>>>>>>>> Please review and test this change. I please need a sponsor. >>>>>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>>>>> >>>>>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>>>>> linuxppc64, >>>>>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>>>>> aixppc64, ntamd64 >>>>>>>>> in opt, dbg and fastdbg versions. >>>>>>>>> >>>>>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>>>>> arrives in other >>>>>>>>> repos, it will work there, too. (I tested it together with the other >>>>>>>>> change >>>>>>>>> against jdk9/dev, too.) >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Goetz. 
>>>>>>>>> >>>>>>>>> PS: I also did all the Copyright adaptions ;) From david.simms at oracle.com Tue Jul 15 14:21:35 2014 From: david.simms at oracle.com (David Simms) Date: Tue, 15 Jul 2014 16:21:35 +0200 Subject: RFR (S) JNI Specification Issue: JDK-7172129 Integration of the JNI spec updates for JDK 1.2 was incomplete Message-ID: <53C538EF.3000300@oracle.com> Greetings, Some important updates from way back in JDK 1.2 were never added to the current JNI spec: JDK Bug: https://bugs.openjdk.java.net/browse/JDK-7172129 Although the "GetPrimitiveArrayCritical" issues have been incorporated into JDK-4907359, changes are still required to the "Asynchronous Exceptions" section: Web review: http://cr.openjdk.java.net/~dsimms/jnispec/7172129 HTML: http://cr.openjdk.java.net/~dsimms/jnispec/7172129/raw_files/new/docs/technotes/guides/jni/spec/design.html#asynchronous_exceptions Thank you, /David Simms From coleen.phillimore at oracle.com Tue Jul 15 14:30:59 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 15 Jul 2014 10:30:59 -0400 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C4820C.5000300@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> Message-ID: <53C53B23.6090907@oracle.com> On 7/14/14, 9:21 PM, Vladimir Kozlov wrote: > Impressive work, Tobias! I agree! This was a tricky case and hard to reproduce. Were you able to create a small test case for it that would be useful to add? I have a comment about the code. *+ if (csc->is_call_to_interpreted() && stub_contains_dead_metadata(is_alive, csc->destination())) {* *+ csc->set_to_clean();* *+ }* This appears in each case. Can you fold it and the new function into a function like clean_call_to_interpreted_stub(is_alive, csc)? Thanks, Coleen > > So before the permgen removal embedded method* were oops and they were > processed in relocInfo::oop_type loop. 
> > May be instead of specializing opt_virtual_call_type and > static_call_type call site you can simple add a loop for > relocInfo::metadata_type (similar to oop_type loop)? > > Thanks, > Vladimir > > On 7/14/14 4:56 AM, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch for JDK-8029443. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >> >> *Problem* >> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks >> if a nmethod can be unloaded because it contains dead oops. If class >> unloading occurred we additionally clear all ICs where the cached >> metadata refers to an unloaded klass or method. If the nmethod is not >> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >> metadata is alive. The assert in CheckClass::check_class fails because >> the nmethod contains Method* metadata corresponding to a dead Klass. >> The Method* belongs to a to-interpreter stub [1] of an optimized >> compiled IC. Normally we clear those stubs prior to verification to >> avoid dangling references to Method* [2], but only if the stub is not in >> use, i.e. if the IC is not in to-interpreted mode. In this case the >> to-interpreter stub may be executed and hand a stale Method* to the >> interpreter. >> >> *Solution >> *The implementation of nmethod::do_unloading(..) is changed to clean >> compiled ICs and compiled static calls if they call into a >> to-interpreter stub that references dead Method* metadata. >> >> The patch was affected by the G1 class unloading changes (JDK-8048248) >> because the method nmethod::do_unloading_parallel(..) was added. I >> adapted the implementation as well. >> * >> Testing >> *Failing test (runThese) >> JPRT >> >> Thanks, >> Tobias >> >> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
>> [2] see nmethod::verify_metadata_loaders(..), >> static_stub_reloc()->clear_inline_cache() clears the stub From coleen.phillimore at oracle.com Tue Jul 15 14:35:34 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 15 Jul 2014 10:35:34 -0400 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <9129730.8quV1l9zAl@mgerdin03> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C4DCB8.5020705@oracle.com> <9129730.8quV1l9zAl@mgerdin03> Message-ID: <53C53C36.7000703@oracle.com> On 7/15/14, 7:36 AM, Mikael Gerdin wrote: > Tobias, > > On Tuesday 15 July 2014 09.48.08 Tobias Hartmann wrote: >> Hi Vladimir, >> >>> Impressive work, Tobias! >> Thanks! Took me a while to figure out what's happening. >> >>> So before the permgen removal embedded method* were oops and they were >>> processed in relocInfo::oop_type loop. >> Okay, good to know. That explains why the terms oops and metadata are >> used interchangeably at some points in the code. > Yep, there are a lot of leftover references to metadata as oops, especially in > some compiler/runtime parts such as MDOs and CompiledICs. I forgot to mention that there shouldn't be leftover references. Maybe in comments and naming though. Coleen > >>> May be instead of specializing opt_virtual_call_type and >>> static_call_type call site you can simple add a loop for >>> relocInfo::metadata_type (similar to oop_type loop)? >> The problem with iterating over relocInfo::metadata_type is that we >> don't know to which stub, i.e., to which IC the Method* pointer belongs. >> Since we don't want to unload the entire method but only clear the >> corresponding IC, we need this information. > I'm wondering, is there some way to figure out the IC for the Method*? 
> > In CompiledStaticCall::emit_to_interp_stub a static_stub_Relocation is created > and from the looks of it it points to the call site through some setting of a > "mark". > > The metadata relocation is emitted just after the static_stub_Relocation, so > one approach (untested) could be to have a case for static_stub_Relocations, > create a CompiledIC.at(reloc->static_call()) and check if it's a call to > interpreted. If it is the advance the relocIterator to the next position and > check that metadata for liveness. > > /Mikael > >> Thanks, >> Tobias >> >>> Thanks, >>> Vladimir >>> >>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>> Hi, >>>> >>>> please review the following patch for JDK-8029443. >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>> >>>> *Problem* >>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks >>>> if a nmethod can be unloaded because it contains dead oops. If class >>>> unloading occurred we additionally clear all ICs where the cached >>>> metadata refers to an unloaded klass or method. If the nmethod is not >>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >>>> metadata is alive. The assert in CheckClass::check_class fails because >>>> the nmethod contains Method* metadata corresponding to a dead Klass. >>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>> compiled IC. Normally we clear those stubs prior to verification to >>>> avoid dangling references to Method* [2], but only if the stub is not in >>>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>>> to-interpreter stub may be executed and hand a stale Method* to the >>>> interpreter. >>>> >>>> *Solution >>>> *The implementation of nmethod::do_unloading(..) is changed to clean >>>> compiled ICs and compiled static calls if they call into a >>>> to-interpreter stub that references dead Method* metadata. 
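Mikael's suggestion of starting from the static_stub_Relocation could look roughly like the following untested pseudocode; the relocation-API names are approximations of HotSpot's relocInfo/CompiledIC interfaces, not a reviewed patch:

```
// Untested sketch: find each to-interpreter stub via its static_stub
// relocation, locate the owning call site, and clean the IC if the
// Method* emitted right after the stub is dead.
RelocIterator iter(nm);
while (iter.next()) {
  if (iter.type() == relocInfo::static_stub_type) {
    static_stub_Relocation* stub = iter.static_stub_reloc();
    CompiledIC* ic = CompiledIC_at(nm, stub->static_call());
    if (ic->is_call_to_interpreted()) {
      // advance 'iter' to the metadata relocation emitted just after
      // the stub; if that Method* is not alive, ic->set_to_clean()
    }
  }
}
```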
>>>> >>>> The patch was affected by the G1 class unloading changes (JDK-8048248) >>>> because the method nmethod::do_unloading_parallel(..) was added. I >>>> adapted the implementation as well. >>>> * >>>> Testing >>>> *Failing test (runThese) >>>> JPRT >>>> >>>> Thanks, >>>> Tobias >>>> >>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) >>>> [2] see nmethod::verify_metadata_loaders(..), >>>> static_stub_reloc()->clear_inline_cache() clears the stub From daniel.daugherty at oracle.com Tue Jul 15 14:44:14 2014 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Tue, 15 Jul 2014 08:44:14 -0600 Subject: RFR (S) JNI Specification Issue: JDK-7172129 Integration of the JNI spec updates for JDK 1.2 was incomplete In-Reply-To: <53C538EF.3000300@oracle.com> References: <53C538EF.3000300@oracle.com> Message-ID: <53C53E3E.9030404@oracle.com> > Web review: http://cr.openjdk.java.net/~dsimms/jnispec/7172129 docs/technotes/guides/jni/spec/design.html No comments. Thumbs up. Dan On 7/15/14 8:21 AM, David Simms wrote: > > Greetings, > > Some important updates from way back in JDK 1.2 were never added to > the current JNI spec: > > JDK Bug: https://bugs.openjdk.java.net/browse/JDK-7172129 > > Although the "GetPrimitiveArrayCritical" issues have been incorporated > into JDK-4907359, changes are still required to the "Asynchronous > Exceptions" section: > > Web review: http://cr.openjdk.java.net/~dsimms/jnispec/7172129 > > HTML: > http://cr.openjdk.java.net/~dsimms/jnispec/7172129/raw_files/new/docs/technotes/guides/jni/spec/design.html#asynchronous_exceptions > > Thank you, > /David Simms From andrey.x.zakharov at oracle.com Tue Jul 15 15:26:34 2014 From: andrey.x.zakharov at oracle.com (Andrey Zakharov) Date: Tue, 15 Jul 2014 19:26:34 +0400 Subject: RFR: 8011397: JTREG needs to copy additional WhiteBox class file to JTwork/scratch/sun/hotspot In-Reply-To: <2443586.qRToXKmNqX@mgerdin03> References: <536B7CF0.6010508@oracle.com> <53AAE5DA.2030700@oracle.com> 
<53BACF55.2020301@oracle.com> <2443586.qRToXKmNqX@mgerdin03> Message-ID: <53C5482A.9090001@oracle.com> Hi, Erik, Bengt. Could you, please, review this too. Thanks. On 15.07.2014 17:58, Mikael Gerdin wrote: > Andrey, > > On Monday 07 July 2014 20.48.21 Andrey Zakharov wrote: >> Hi ,all >> Mikael, can you please review it. > Sorry, I was on vacation last week. > >> webrev: >> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ > Looks ok for now. We should consider revisiting this by either switching to > @run main/bootclasspath > or > deleting the WhiteboxPermission nested class and using some other way for > permission checks (if they are at all needed). > > /Mikael > >> Thanks. >> >> On 25.06.2014 19:08, Andrey Zakharov wrote: >>> Hi, all >>> So in progress of previous email - >>> webrev: >>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ >>> >>> Thanks. >>> >>> On 16.06.2014 19:57, Andrey Zakharov wrote: >>>> Hi, all >>>> So issue is that when tests with WhiteBox API has been invoked with >>>> -Xverify:all it fails with Exception java.lang.NoClassDefFoundError: >>>> sun/hotspot/WhiteBox$WhiteBoxPermission >>>> Solutions that are observed: >>>> 1. Copy WhiteBoxPermission with WhiteBox. But >>>> >>>>>> Perhaps this is a good time to get rid of ClassFileInstaller >>>> altogether? >>>> >>>> 2. Using bootclasspath to hook pre-built whitebox (due @library >>>> /testlibrary/whitebox) . Some tests has @run main/othervm, some uses >>>> ProcessBuilder. >>>> >>>> - main/othervm/bootclasspath adds ${test.src} and >>>> >>>> ${test.classes}to options. >>>> >>>> - With ProcessBuilder we can just add ${test.classes} >>>> >>>> Question here is, can it broke some tests ? While testing this, I >>>> found only https://bugs.openjdk.java.net/browse/JDK-8046231, others >>>> looks fine. >>>> >>>> 3. 
Make ClassFileInstaller deal with inner classes like that: >>>> diff -r 6ed24aedeef0 -r c01651363ba8 >>>> test/testlibrary/ClassFileInstaller.java >>>> --- a/test/testlibrary/ClassFileInstaller.java Thu Jun 05 19:02:56 >>>> 2014 +0400 >>>> +++ b/test/testlibrary/ClassFileInstaller.java Fri Jun 06 18:18:11 >>>> 2014 +0400 >>>> @@ -50,6 +50,16 @@ >>>> >>>> } >>>> // Create the class file >>>> Files.copy(is, p, StandardCopyOption.REPLACE_EXISTING); >>>> >>>> + >>>> + for (Class cls : >>>> Class.forName(arg).getDeclaredClasses()) { >>>> + //if (!Modifier.isStatic(cls.getModifiers())) { >>>> + String pathNameSub = >>>> cls.getCanonicalName().replace('.', '/').concat(".class"); >>>> + Path pathSub = Paths.get(pathNameSub); >>>> + InputStream streamSub = >>>> cl.getResourceAsStream(pathNameSub); >>>> + Files.copy(streamSub, pathSub, >>>> StandardCopyOption.REPLACE_EXISTING); >>>> + //} >>>> + } >>>> + >>>> >>>> } >>>> >>>> } >>>> >>>> } >>>> >>>> Works fine for ordinary classes, but fails for WhiteBox due >>>> Class.forName initiate Class. WhiteBox has "static" section, and >>>> initialization fails as it cannot bind to native methods >>>> "registerNatives" and so on. >>>> >>>> >>>> So, lets return to first one option? Just add everywhere >>>> >>>> * @run main ClassFileInstaller sun.hotspot.WhiteBox >>>> >>>> + * @run main ClassFileInstaller sun.hotspot.WhiteBox$WhiteBoxPermission >>>> >>>> Thanks. >>>> >>>> On 10.06.2014 19:43, Igor Ignatyev wrote: >>>>> Andrey, >>>>> >>>>> I don't like this idea, since it completely changes the tests. >>>>> 'run/othervm/bootclasspath' adds all paths from CP to BCP, so the >>>>> tests whose main idea was testing WB methods themselves (sanity, >>>>> compiler/whitebox, ...) don't check that it's possible to use WB >>>>> when the application isn't in BCP. 
>>>>> >>>>> Igor >>>>> >>>>> On 06/09/2014 06:59 PM, Andrey Zakharov wrote: >>>>>> Hi, everybody >>>>>> I have tested my changes on major platforms and found one bug, filed: >>>>>> https://bugs.openjdk.java.net/browse/JDK-8046231 >>>>>> Also, i did another try to make ClassFileInstaller to copy all inner >>>>>> classes within parent, but this fails for WhiteBox due its static >>>>>> "registerNatives" dependency. >>>>>> >>>>>> Please, review suggested changes: >>>>>> - replace ClassFileInstaller and run/othervm with >>>>>> >>>>>> "run/othervm/bootclasspath". >>>>>> >>>>>> bootclasspath parameter for othervm adds-Xbootclasspath/a: >>>>>> option with ${test.src} and ${test.classes}according to >>>>>> http://hg.openjdk.java.net/code-tools/jtreg/file/31003a1c46d9/src/share >>>>>> /classes/com/sun/javatest/regtest/MainAction.java. >>>>>> >>>>>> Is this suitable for our needs - give to test compiled WhiteBox? >>>>>> >>>>>> - replace explicit -Xbootclasspath option values (".") in >>>>>> >>>>>> ProcessBuilder invocations to ${test.classes} where WhiteBox has been >>>>>> compiled. >>>>>> >>>>>> Webrev: >>>>>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.00/ >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>> Thanks. >>>>>> >>>>>> On 23.05.2014 15:40, Andrey Zakharov wrote: >>>>>>> On 22.05.2014 12:47, Igor Ignatyev wrote: >>>>>>>> Andrey, >>>>>>>> >>>>>>>> 1. You changed dozen of tests, have you tested your changes? >>>>>>> Locally, aurora on the way. >>>>>>> >>>>>>>> 2. Your changes of year in copyright is wrong. it has to be >>>>>>>> $first_year, [$last_year, ], see Mark's email[1] for details. >>>>>>>> >>>>>>>> [1] >>>>>>>> http://mail.openjdk.java.net/pipermail/jdk7-dev/2010-May/001321.html >>>>>>> Thanks, fixed. will be uploaded soon. 
>>>>>>> >>>>>>>> Igor >>>>>>>> >>>>>>>> On 05/21/2014 07:37 PM, Andrey Zakharov wrote: >>>>>>>>> On 13.05.2014 14:43, Andrey Zakharov wrote: >>>>>>>>>> Hi >>>>>>>>>> So here is trivial patch - >>>>>>>>>> removing ClassFileInstaller sun.hotspot.WhiteBox and adding >>>>>>>>>> main/othervm/bootclasspath >>>>>>>>>> where this needed >>>>>>>>>> >>>>>>>>>> Also, some tests are modified as >>>>>>>>>> - "-Xbootclasspath/a:.", >>>>>>>>>> + "-Xbootclasspath/a:" + >>>>>>>>>> System.getProperty("test.classes"), >>>>>>>>>> >>>>>>>>>> Thanks. >>>>>>>>> webrev: http://cr.openjdk.java.net/~jwilhelm/8011397/webrev.02/ >>>>>>>>> bug: https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>>>>> Thanks. >>>>>>>>> >>>>>>>>>> On 09.05.2014 12:13, Mikael Gerdin wrote: >>>>>>>>>>> On Thursday 08 May 2014 19.28.13 Igor Ignatyev wrote: >>>>>>>>>>>> // cc'ing hotspot-dev instaed of compiler, runtime and gc lists. >>>>>>>>>>>> >>>>>>>>>>>> On 05/08/2014 07:09 PM, Filipp Zhinkin wrote: >>>>>>>>>>>>> Andrey, >>>>>>>>>>>>> >>>>>>>>>>>>> I've CC'ed compiler and runtime mailing list, because you're >>>>>>>>>>>>> changes >>>>>>>>>>>>> affect test for other components as too. >>>>>>>>>>>>> >>>>>>>>>>>>> I don't like your solution (but I'm not a reviewer, so treat my >>>>>>>>>>>>> words >>>>>>>>>>>>> just as suggestion), >>>>>>>>>>>>> because we'll have to write more meta information for each test >>>>>>>>>>>>> and it >>>>>>>>>>>>> is very easy to >>>>>>>>>>>>> forget to install WhiteBoxPermission if you don't test your >>>>>>>>>>>>> test >>>>>>>>>>>>> with >>>>>>>>>>>>> some security manager. >>>>>>>>>>>>> >>>>>>>>>>>>> From my point of view, it will be better to extend >>>>>>>>>>>>> >>>>>>>>>>>>> ClassFileInstaller >>>>>>>>>>>>> >>>>>>>>>>>>> so it will copy not only >>>>>>>>>>>>> a class whose name was passed as an arguments, but also all >>>>>>>>>>>>> inner >>>>>>>>>>>>> classes of that class. 
>>>>>>>>>>>>> And if someone want copy only specified class without inner >>>>>>>>>>>>> classes, >>>>>>>>>>>>> then some option >>>>>>>>>>>>> could be added to ClassFileInstaller to force such behaviour. >>>>>>>>>>> Perhaps this is a good time to get rid of ClassFileInstaller >>>>>>>>>>> altogether? >>>>>>>>>>> >>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8009117 >>>>>>>>>>> >>>>>>>>>>> The reason for its existence is that the WhiteBox class needs >>>>>>>>>>> to be >>>>>>>>>>> on the >>>>>>>>>>> boot class path. >>>>>>>>>>> If we can live with having all the test's classes on the boot >>>>>>>>>>> class >>>>>>>>>>> path then >>>>>>>>>>> we could use the /bootclasspath option in jtreg as stated in >>>>>>>>>>> the RFE. >>>>>>>>>>> >>>>>>>>>>> /Mikael >>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> Filipp. >>>>>>>>>>>>> >>>>>>>>>>>>> On 05/08/2014 04:47 PM, Andrey Zakharov wrote: >>>>>>>>>>>>>> Hi! >>>>>>>>>>>>>> Suggesting patch with fixes for >>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>>>>>>>>>> >>>>>>>>>>>>>> webrev: >>>>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20275/8011397.t >>>>>>>>>>>>>> gz >>>>>>>>>>>>>> >>>>>>>>>>>>>> patch: >>>>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20274/8011397.W >>>>>>>>>>>>>> hiteBoxPer >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> mission >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks. From mikael.gerdin at oracle.com Tue Jul 15 15:40:34 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Tue, 15 Jul 2014 17:40:34 +0200 Subject: RFR 8004128: NPG: remove stackwalking in Threads::gc_prologue and gc_epilogue code In-Reply-To: <53C4705E.7060407@oracle.com> References: <53C4705E.7060407@oracle.com> Message-ID: <1649311.XsSm0sYPeC@mgerdin03> Hi Coleen, On Monday 14 July 2014 20.05.50 Coleen Phillimore wrote: > Summary: remove bcx and mdx handling. 
We no longer have to convert > bytecode pointers or method data pointers to indices for GC since > Metadata aren't moved. > > Tested with nsk.quick.testlist, jck tests, JPRT. > > Most of this is renaming bcx to bcp and mdx to mdp. The content changes > are in frame.cpp. StefanK implemented 90% of these changes. > > open webrev at http://cr.openjdk.java.net/~coleenp/8004128/ This isn't exactly my area of the code, but I'm happy that we got around to this cleanup! I looked through the change and to my not-so-runtime-familiar eyes it seems good. One thought about the frame accessors 244 intptr_t* interpreter_frame_bcp_addr() const; 245 intptr_t* interpreter_frame_mdp_addr() const; Now that the contents of bcp and mdp in the frames are always pointers, perhaps these accessors should be appropriately typed? Something like 244 address* interpreter_frame_bcp_addr() const; 245 ProfileData** interpreter_frame_mdp_addr() const; Also, BytecodeInterpreter still has a member named _mdx, should that be renamed to _mdp as well? /Mikael > bug link https://bugs.openjdk.java.net/browse/JDK-8004128 > > Thanks, > Coleen From volker.simonis at gmail.com Tue Jul 15 16:54:11 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 15 Jul 2014 18:54:11 +0200 Subject: RFR(XS): 8050228: Rename 'rem_size' in compactibleFreeListSpace.cpp because of name clashes on AIX" -f 8050228_rename_rem_size.patch Message-ID: Hi, could somebody please review and sponsor this little change: http://cr.openjdk.java.net/~simonis/webrevs/8050228/ https://bugs.openjdk.java.net/browse/JDK-8050228 Background: I know this sounds crazy but it's true: there's an AIX header which unconditionally defines rem_size: /usr/include/sys/xmem.h struct xmem { ... #define rem_size u2._subspace_id2 }; This breaks the compilation of CompactibleFreeListSpace::splitChunkAndReturnRemainder() which uses a local variable of the same name. 
Until now, we've worked around this problem by simply undefining 'rem_size' in the platform-specific file os_aix.inline.hpp, but after "8042195: Introduce umbrella header orderAccess.inline.hpp" this doesn't seem to be enough anymore. So before introducing yet another ugly platform dependent hack in shared code or depending on a certain include order of otherwise unrelated platform headers in shared code I suggest to simply give up and rename the local variable. In this change I've renamed 'rem_size' to 'remain_size' because "rem" is used as an abbreviation of "remainder" in the code. But actually I'd be happy with any other name which differs from "rem_size". Thank you and best regards, Volker From coleen.phillimore at oracle.com Tue Jul 15 17:09:39 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 15 Jul 2014 13:09:39 -0400 Subject: RFR 8004128: NPG: remove stackwalking in Threads::gc_prologue and gc_epilogue code In-Reply-To: <1649311.XsSm0sYPeC@mgerdin03> References: <53C4705E.7060407@oracle.com> <1649311.XsSm0sYPeC@mgerdin03> Message-ID: <53C56053.6030700@oracle.com> On 7/15/14, 11:40 AM, Mikael Gerdin wrote: > Hi Coleen, > > On Monday 14 July 2014 20.05.50 Coleen Phillimore wrote: >> Summary: remove bcx and mdx handling. We no longer have to convert >> bytecode pointers or method data pointers to indices for GC since >> Metadata aren't moved. >> >> Tested with nsk.quick.testlist, jck tests, JPRT. >> >> Most of this is renaming bcx to bcp and mdx to mdp. The content changes >> are in frame.cpp. StefanK implemented 90% of these changes. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8004128/ > This isn't exactly my area of the code, but I'm happy that we got around to > this cleanup! > > I looked through the change and to my not-so-runtime-familiar eyes it seems > good. There were GC bits so I'm glad you looked at it. It might be good for performance since you don't have to walk thread stacks for no good reason anymore. 
> > One thought about the frame accessors > 244 intptr_t* interpreter_frame_bcp_addr() const; > 245 intptr_t* interpreter_frame_mdp_addr() const; > Now that the contents of bcp and mdp in the frames are always pointers, > perhaps these accessors should be appropriately typed? > > Something like > 244 address* interpreter_frame_bcp_addr() const; > 245 ProfileData** interpreter_frame_mdp_addr() const; That's a nice idea. I'll see if this change isn't too disruptive. Coleen > > Also, BytecodeInterpreter still has a member named _mdx, should that be > renamed to _mdp as well? > > /Mikael > >> bug link https://bugs.openjdk.java.net/browse/JDK-8004128 >> >> Thanks, >> Coleen From coleen.phillimore at oracle.com Tue Jul 15 17:26:54 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 15 Jul 2014 13:26:54 -0400 Subject: RFR 8004128: NPG: remove stackwalking in Threads::gc_prologue and gc_epilogue code In-Reply-To: <1649311.XsSm0sYPeC@mgerdin03> References: <53C4705E.7060407@oracle.com> <1649311.XsSm0sYPeC@mgerdin03> Message-ID: <53C5645E.2050504@oracle.com> I forgot one comment. On 7/15/14, 11:40 AM, Mikael Gerdin wrote: > Hi Coleen, > > On Monday 14 July 2014 20.05.50 Coleen Phillimore wrote: >> Summary: remove bcx and mdx handling. We no longer have to convert >> bytecode pointers or method data pointers to indices for GC since >> Metadata aren't moved. >> >> Tested with nsk.quick.testlist, jck tests, JPRT. >> >> Most of this is renaming bcx to bcp and mdx to mdp. The content changes >> are in frame.cpp. StefanK implemented 90% of these changes. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8004128/ > This isn't exactly my area of the code, but I'm happy that we got around to > this cleanup! > > I looked through the change and to my not-so-runtime-familiar eyes it seems > good. 
> > One thought about the frame accessors > 244 intptr_t* interpreter_frame_bcp_addr() const; > 245 intptr_t* interpreter_frame_mdp_addr() const; > Now that the contents of bcp and mdp in the frames are always pointers, > perhaps these accessors should be appropriately typed? > > Something like > 244 address* interpreter_frame_bcp_addr() const; > 245 ProfileData** interpreter_frame_mdp_addr() const; > > Also, BytecodeInterpreter still has a member named _mdx, should that be > renamed to _mdp as well? There were too many mdx hits in the C++ interpreter (aka. bytecodeInterpreter) and since we don't really build and test it, I didn't want to change this. mdp would be better but mdx is still an okay name. Coleen > /Mikael > >> bug link https://bugs.openjdk.java.net/browse/JDK-8004128 >> >> Thanks, >> Coleen From vladimir.kozlov at oracle.com Tue Jul 15 17:59:50 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 15 Jul 2014 10:59:50 -0700 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C4DCB8.5020705@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C4DCB8.5020705@oracle.com> Message-ID: <53C56C16.40909@oracle.com> On 7/15/14 12:48 AM, Tobias Hartmann wrote: > Hi Vladimir, > >> Impressive work, Tobias! > > Thanks! Took me a while to figure out what's happening. > >> So before the permgen removal embedded method* were oops and they were >> processed in relocInfo::oop_type loop. > > Okay, good to know. That explains why the terms oops and metadata are > used interchangeably at some points in the code. > >> May be instead of specializing opt_virtual_call_type and >> static_call_type call site you can simple add a loop for >> relocInfo::metadata_type (similar to oop_type loop)? 
> > The problem with iterating over relocInfo::metadata_type is that we > don't know to which stub, i.e., to which IC the Method* pointer belongs. > Since we don't want to unload the entire method but only clear the > corresponding IC, we need this information. Got it: you are cleaning call site IC: ic->set_to_clean(). My point was these to_interp stubs are part of a nmethod (they are in stubs section) and contain dead metadata. Should we unload this nmethod then? We do unloading nmethods if any embedded oops are dead (see can_unload()). Should we do the same if a nmethod (and its stubs) have dead metadata? Note, embedded metadata could be Method* and MethodData*. Vladimir > > Thanks, > Tobias > >> >> Thanks, >> Vladimir >> >> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>> Hi, >>> >>> please review the following patch for JDK-8029443. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>> >>> *Problem* >>> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks >>> if a nmethod can be unloaded because it contains dead oops. If class >>> unloading occurred we additionally clear all ICs where the cached >>> metadata refers to an unloaded klass or method. If the nmethod is not >>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >>> metadata is alive. The assert in CheckClass::check_class fails because >>> the nmethod contains Method* metadata corresponding to a dead Klass. >>> The Method* belongs to a to-interpreter stub [1] of an optimized >>> compiled IC. Normally we clear those stubs prior to verification to >>> avoid dangling references to Method* [2], but only if the stub is not in >>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>> to-interpreter stub may be executed and hand a stale Method* to the >>> interpreter. >>> >>> *Solution >>> *The implementation of nmethod::do_unloading(..) 
is changed to clean >>> compiled ICs and compiled static calls if they call into a >>> to-interpreter stub that references dead Method* metadata. >>> >>> The patch was affected by the G1 class unloading changes (JDK-8048248) >>> because the method nmethod::do_unloading_parallel(..) was added. I >>> adapted the implementation as well. >>> * >>> Testing >>> *Failing test (runThese) >>> JPRT >>> >>> Thanks, >>> Tobias >>> >>> [1] see CompiledStaticCall::emit_to_interp_stub(..) >>> [2] see nmethod::verify_metadata_loaders(..), >>> static_stub_reloc()->clear_inline_cache() clears the stub From zhengyu.gu at oracle.com Tue Jul 15 20:12:04 2014 From: zhengyu.gu at oracle.com (Zhengyu Gu) Date: Tue, 15 Jul 2014 16:12:04 -0400 Subject: RFR(L) 8046598: Scalable Native memory tracking development Message-ID: <53C58B14.9010003@oracle.com> This is an update to the previous RFR 8028541: Native Memory Tracking enhancement; the original one is closed as a duplicate of the current one. The update is mainly based on feedback from Coleen and Christian: - Refactored MemReporter to break up some large functions and eliminate duplicated code. - Minor change to MemBaseline for eliminating duplicated code. - Changed MEMFLAGS type from unsigned short => MemoryType. Also added unit tests for LinkedList. The note from RFR 8028541: ========================= This is a significant rework of native memory tracking introduced in earlier releases. The goal of this enhancement is to improve scalability, from both memory and CPU usage perspectives, so it can scale well with increased memory allocation in large applications. The enhancement is mainly focused on malloc memory tracking, whose activity is several orders of magnitude higher than virtual memory tracking's, and which was the main bottleneck in the early implementation. Instead of using bookkeeping records for tracking malloc activities, the new implementation co-locates tracking data alongside user data by using a prefixed header. 
The header size is 8 bytes on 32-bit systems and 16 bytes on 64-bit systems, which ensures that user data also aligns properly. Virtual memory tracking still uses bookkeeping records, and the ThreadCritical lock is always acquired to alter the records and related data structures. Summary tracking data is maintained in static data structures, via atomic operations. Malloc detail tracking call stacks are maintained in a lock-free hashtable. The key improvements: 1. Up-to-date tracking report. 2. Detail tracking now shows multiple call frames. The number of frames is a compile-time decision, currently defaulting to 4. 3. Malloc tracking is lock-free. 4. The tracking summary is reported in the hs_err file when native memory tracking is enabled. 5. Queries are faster, use little memory and need very little processing. The drawback is that the malloc tracking header is always needed if native memory tracking has ever been enabled, even after tracking is shut down. Impacts: The most noticeable impact for JVM developers is that Arena now also takes a memory type as a constructor parameter, besides the new operators: Arena* a = new (mtCode) Arena() => Arena* a = new (mtCode) Arena(mtCode) The webrev shows modification of about 60 files, but most of them are due to tracking API changes, mainly because the tracking stack is now an object vs. a single pc. 
The most important files for this implementations are: memTracker.hpp/cpp mallocTracker.hpp/cpp and mallocTracker.inline.hpp virtualMemoryTracker.hpp/cpp mallocSiteTable.hpp/cpp allocationSite.hpp nativeCallStack.hpp/cpp linkedlist.hpp Tests: - JPRT - NMT test suite - vm.quick.testlist - Kitchensink stability test for 16+ days - FMW Bug: https://bugs.openjdk.java.net/browse/JDK-8046598 Webrev: http://cr.openjdk.java.net/~zgu/8046598/webrev.00/ Thanks, -Zhengyu From coleen.phillimore at oracle.com Tue Jul 15 20:25:01 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 15 Jul 2014 16:25:01 -0400 Subject: RFR 8004128: NPG: remove stackwalking in Threads::gc_prologue and gc_epilogue code In-Reply-To: <1649311.XsSm0sYPeC@mgerdin03> References: <53C4705E.7060407@oracle.com> <1649311.XsSm0sYPeC@mgerdin03> Message-ID: <53C58E1D.9060802@oracle.com> I didn't make this change to interpreter_frame_bcp or mdp_addr() at the end. The frame code is consistent in returning intptr_t for objects on the frame and then casting them to the right types. I think this is better. Thanks, Coleen On 7/15/14, 11:40 AM, Mikael Gerdin wrote: > Hi Coleen, > > On Monday 14 July 2014 20.05.50 Coleen Phillimore wrote: >> Summary: remove bcx and mdx handling. We no longer have to convert >> bytecode pointers or method data pointers to indices for GC since >> Metadata aren't moved. >> >> Tested with nsk.quick.testlist, jck tests, JPRT. >> >> Most of this is renaming bcx to bcp and mdx to mdp. The content changes >> are in frame.cpp. StefanK implemented 90% of these changes. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8004128/ > This isn't exactly my area of the code, but I'm happy that we got around to > this cleanup! > > I looked through the change and to my not-so-runtime-familiar eyes it seems > good. 
> > One thought about the frame accessors > 244 intptr_t* interpreter_frame_bcp_addr() const; > 245 intptr_t* interpreter_frame_mdp_addr() const; > Now that the contents of bcp and mdp in the frames are always pointers, > perhaps these accessors should be appropriately typed? > > Something like > 244 address* interpreter_frame_bcp_addr() const; > 245 ProfileData** interpreter_frame_mdp_addr() const; > > Also, BytecodeInterpreter still has a member named _mdx, should that be > renamed to _mdp as well? > > /Mikael > >> bug link https://bugs.openjdk.java.net/browse/JDK-8004128 >> >> Thanks, >> Coleen From mikael.vidstedt at oracle.com Tue Jul 15 22:59:07 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Tue, 15 Jul 2014 15:59:07 -0700 Subject: RFR(S): 8050802: Update jprt runthese test suite to jck-8 Message-ID: <53C5B23B.9040604@oracle.com> Please review the below change which switches the 'runThese' test suite over the new, jck-8 based 'runThese8' tests suite. The change also splits up the long running fastdebug-Xcomp test into two separate tests (fastdebug-Xcomp_lang and fastdebug-Xcomp_vm) to take advantage of the parallelism in jprt to reduce the job times further. Bug: https://bugs.openjdk.java.net/browse/JDK-8050802 Webrev (hs-rt/ (top) repo): http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/top/webrev/ Webrev (hs-rt/hotspot repo): http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/hotspot/webrev/ Thanks, Mikael From Gary.Collins at oracle.com Tue Jul 15 23:46:09 2014 From: Gary.Collins at oracle.com (Gary Collins) Date: Tue, 15 Jul 2014 16:46:09 -0700 Subject: RFR(S): 8050802: Update jprt runthese test suite to jck-8 In-Reply-To: <53C5B23B.9040604@oracle.com> References: <53C5B23B.9040604@oracle.com> Message-ID: <71FF23CB-A314-41FF-B2CB-A169711DC877@oracle.com> Looks good to me.. 
On Jul 15, 2014, at 3:59 PM, Mikael Vidstedt wrote: > > Please review the below change which switches the 'runThese' test suite over the new, jck-8 based 'runThese8' tests suite. The change also splits up the long running fastdebug-Xcomp test into two separate tests (fastdebug-Xcomp_lang and fastdebug-Xcomp_vm) to take advantage of the parallelism in jprt to reduce the job times further. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8050802 > Webrev (hs-rt/ (top) repo): http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/top/webrev/ > Webrev (hs-rt/hotspot repo): http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/hotspot/webrev/ > > Thanks, > Mikael > From vladimir.kozlov at oracle.com Wed Jul 16 00:20:27 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 15 Jul 2014 17:20:27 -0700 Subject: RFR(S): 8050802: Update jprt runthese test suite to jck-8 In-Reply-To: <53C5B23B.9040604@oracle.com> References: <53C5B23B.9040604@oracle.com> Message-ID: <53C5C54B.60803@oracle.com> Mikael, I think you should split Xcomp on all platforms (not only for linux.i586) where it runs. thanks, Vladimir On 7/15/14 3:59 PM, Mikael Vidstedt wrote: > > Please review the below change which switches the 'runThese' test suite > over the new, jck-8 based 'runThese8' tests suite. The change also > splits up the long running fastdebug-Xcomp test into two separate tests > (fastdebug-Xcomp_lang and fastdebug-Xcomp_vm) to take advantage of the > parallelism in jprt to reduce the job times further. 
> > Bug: https://bugs.openjdk.java.net/browse/JDK-8050802 > Webrev (hs-rt/ (top) repo): > http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/top/webrev/ > Webrev (hs-rt/hotspot repo): > http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/hotspot/webrev/ > > > Thanks, > Mikael > From mikael.vidstedt at oracle.com Wed Jul 16 01:40:52 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Tue, 15 Jul 2014 18:40:52 -0700 Subject: RFR(S): 8050802: Update jprt runthese test suite to jck-8 In-Reply-To: <53C5C54B.60803@oracle.com> References: <53C5B23B.9040604@oracle.com> <53C5C54B.60803@oracle.com> Message-ID: <53C5D824.7060808@oracle.com> From my empirical data the only test I've seen this "problem" with is the linux-i586-fastdebug-Xcomp; remember that there's a cost/overhead for setting up the individual tests too so splitting up the other Xcomp tests may actually make the job times longer. That said, if you feel that it's important for symmetry I can certainly do it. Cheers, Mikael On 2014-07-15 17:20, Vladimir Kozlov wrote: > Mikael, > > I think you should split Xcomp on all platforms (not only for > linux.i586) where it runs. > > thanks, > Vladimir > > On 7/15/14 3:59 PM, Mikael Vidstedt wrote: >> >> Please review the below change which switches the 'runThese' test suite >> over the new, jck-8 based 'runThese8' tests suite. The change also >> splits up the long running fastdebug-Xcomp test into two separate tests >> (fastdebug-Xcomp_lang and fastdebug-Xcomp_vm) to take advantage of the >> parallelism in jprt to reduce the job times further. 
>> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8050802 >> Webrev (hs-rt/ (top) repo): >> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/top/webrev/ >> Webrev (hs-rt/hotspot repo): >> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/hotspot/webrev/ >> >> >> >> Thanks, >> Mikael >> From david.holmes at oracle.com Wed Jul 16 01:48:47 2014 From: david.holmes at oracle.com (David Holmes) Date: Wed, 16 Jul 2014 11:48:47 +1000 Subject: RFR (S) JNI Specification Issue: JDK-7172129 Integration of the JNI spec updates for JDK 1.2 was incomplete In-Reply-To: <53C538EF.3000300@oracle.com> References: <53C538EF.3000300@oracle.com> Message-ID: <53C5D9FF.4090504@oracle.com> Looks good to me! Thanks Mr Simms! David H. On 16/07/2014 12:21 AM, David Simms wrote: > > Greetings, > > Some important updates from way back in JDK 1.2 were never added to the > current JNI spec: > > JDK Bug: https://bugs.openjdk.java.net/browse/JDK-7172129 > > Although the "GetPrimitiveArrayCritical" issues have been incorporated > into JDK-4907359, changes are still required to the "Asynchronous > Exceptions" section: > > Web review: http://cr.openjdk.java.net/~dsimms/jnispec/7172129 > > HTML: > http://cr.openjdk.java.net/~dsimms/jnispec/7172129/raw_files/new/docs/technotes/guides/jni/spec/design.html#asynchronous_exceptions > > > Thank you, > /David Simms From mikael.vidstedt at oracle.com Wed Jul 16 02:05:18 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Tue, 15 Jul 2014 19:05:18 -0700 Subject: RFR(S): 8050802: Update jprt runthese test suite to jck-8 In-Reply-To: <53C5D824.7060808@oracle.com> References: <53C5B23B.9040604@oracle.com> <53C5C54B.60803@oracle.com> <53C5D824.7060808@oracle.com> Message-ID: <53C5DDDE.4000608@oracle.com> Note, btw, that the reason why this linux-i586-fastdebug-Xcomp is the culprit here is that that's the only platform where we're running Xcomp on fastdebug, the other Xcomp are all on product. 
Cheers, Mikael On 2014-07-15 18:40, Mikael Vidstedt wrote: > > From my empirical data the only test I've seen this "problem" with is > the linux-i586-fastdebug-Xcomp; remember that there's a cost/overhead > for setting up the individual tests too so splitting up the other > Xcomp tests may actually make the job times longer. > > That said, if you feel that it's important for symmetry I can > certainly do it. > > Cheers, > Mikael > > On 2014-07-15 17:20, Vladimir Kozlov wrote: >> Mikael, >> >> I think you should split Xcomp on all platforms (not only for >> linux.i586) where it runs. >> >> thanks, >> Vladimir >> >> On 7/15/14 3:59 PM, Mikael Vidstedt wrote: >>> >>> Please review the below change which switches the 'runThese' test suite >>> over the new, jck-8 based 'runThese8' tests suite. The change also >>> splits up the long running fastdebug-Xcomp test into two separate tests >>> (fastdebug-Xcomp_lang and fastdebug-Xcomp_vm) to take advantage of the >>> parallelism in jprt to reduce the job times further. 
>>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8050802 >>> Webrev (hs-rt/ (top) repo): >>> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/top/webrev/ >>> >>> Webrev (hs-rt/hotspot repo): >>> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/hotspot/webrev/ >>> >>> >>> >>> Thanks, >>> Mikael >>> > From david.holmes at oracle.com Wed Jul 16 02:06:19 2014 From: david.holmes at oracle.com (David Holmes) Date: Wed, 16 Jul 2014 12:06:19 +1000 Subject: RFR(S): 8050802: Update jprt runthese test suite to jck-8 In-Reply-To: <53C5D824.7060808@oracle.com> References: <53C5B23B.9040604@oracle.com> <53C5C54B.60803@oracle.com> <53C5D824.7060808@oracle.com> Message-ID: <53C5DE1B.80804@oracle.com> On 16/07/2014 11:40 AM, Mikael Vidstedt wrote: > > From my empirical data the only test I've seen this "problem" with is > the linux-i586-fastdebug-Xcomp; remember that there's a cost/overhead > for setting up the individual tests too so splitting up the other Xcomp > tests may actually make the job times longer. > > That said, if you feel that it's important for symmetry I can certainly > do it. Current split seems fine to me. Someone could always do additional measurements with the other configurations to see if the split is worthwhile - but as you say only fastdebug Xcomp is really a problem. Thanks, David > Cheers, > Mikael > > On 2014-07-15 17:20, Vladimir Kozlov wrote: >> Mikael, >> >> I think you should split Xcomp on all platforms (not only for >> linux.i586) where it runs. >> >> thanks, >> Vladimir >> >> On 7/15/14 3:59 PM, Mikael Vidstedt wrote: >>> >>> Please review the below change which switches the 'runThese' test suite >>> over the new, jck-8 based 'runThese8' tests suite. The change also >>> splits up the long running fastdebug-Xcomp test into two separate tests >>> (fastdebug-Xcomp_lang and fastdebug-Xcomp_vm) to take advantage of the >>> parallelism in jprt to reduce the job times further. 
>>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8050802 >>> Webrev (hs-rt/ (top) repo): >>> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/top/webrev/ >>> Webrev (hs-rt/hotspot repo): >>> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/hotspot/webrev/ >>> >>> >>> >>> Thanks, >>> Mikael >>> > From david.holmes at oracle.com Wed Jul 16 02:14:19 2014 From: david.holmes at oracle.com (David Holmes) Date: Wed, 16 Jul 2014 12:14:19 +1000 Subject: RFR(XS): 8050228: Rename 'rem_size' in compactibleFreeListSpace.cpp because of name clashes on AIX" -f 8050228_rename_rem_size.patch In-Reply-To: References: Message-ID: <53C5DFFB.5010401@oracle.com> On 16/07/2014 2:54 AM, Volker Simonis wrote: > Hi, > > could somebody please review and sponsor this little change: GC code should be taken by GC folk. (aka buck passing :) ) FWIW change looks fine to me. The name remain_size grates a little but spelling out remaining_size grates even more. I don't have a better suggestion but perhaps GC folk will. Cheers, David > http://cr.openjdk.java.net/~simonis/webrevs/8050228/ > https://bugs.openjdk.java.net/browse/JDK-8050228 > > Background: > > I know this sounds crazy but it's true: there's an AIX header which > unconditionally defines rem_size: > > /usr/include/sys/xmem.h > struct xmem { > ... > #define rem_size u2._subspace_id2 > }; > > This breaks the compilation of > CompactibleFreeListSpace::splitChunkAndReturnRemainder() which uses a > local variable of the same name. > > Until now, we've worked around this problem by simply undefining > 'rem_size' in the platform specific file os_aix.inline.hpp but after > "8042195: Introduce umbrella header orderAccess.inline.hpp" this > doesn't seem to be enough any more. > > So before introducing yet another ugly platform dependent hack in > shared code or depending on a certain include order of otherwise > unrelated platform headers in shared code I suggest to simply give up > and rename the local variable.
> > In this change I've renamed 'rem_size' to 'remain_size' because "rem" > is used as abbreviation of "remainder" in the code. But actually I'd > be happy with any other name which differs from "rem_size". > > Thank you and best regards, > Volker > From vitalyd at gmail.com Wed Jul 16 03:02:44 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Tue, 15 Jul 2014 23:02:44 -0400 Subject: RFR(XS): 8050228: Rename 'rem_size' in compactibleFreeListSpace.cpp because of name clashes on AIX" -f 8050228_rename_rem_size.patch In-Reply-To: <53C5DFFB.5010401@oracle.com> References: <53C5DFFB.5010401@oracle.com> Message-ID: How about rem_sz? :) Sent from my phone On Jul 15, 2014 10:14 PM, "David Holmes" wrote: > On 16/07/2014 2:54 AM, Volker Simonis wrote: > >> Hi, >> >> could somebody please review and sponsor this little change: >> > > GC code should be taken by GC folk. (aka buck passing :) ) > > FWIW change looks fine to me. The name remain_size grates a little but > spelling out remaining_size grates even more. I don't have a better > suggestion but perhaps GC folk will. > > Cheers, > David > > http://cr.openjdk.java.net/~simonis/webrevs/8050228/ >> https://bugs.openjdk.java.net/browse/JDK-8050228 >> >> Background: >> >> I know this sounds crazy but it's true: there's an AIX header which >> unconditionally defines rem_size: >> >> /usr/include/sys/xmem.h >> struct xmem { >> ... >> #define rem_size u2._subspace_id2 >> }; >> >> This breaks the compilation of >> CompactibleFreeListSpace::splitChunkAndReturnRemainder() which uses a >> local variable of the same name. >> >> Until now, we've worked around this problem by simply undefining >> 'rem_size' in the platform specific file os_aix.inline.hpp but after >> "8042195: Introduce umbrella header orderAccess.inline.hpp" this >> doesn't seems to be enough any more. 
>> >> So before introducing yet another ugly platform dependent hack in >> shared code or depending on a certain include order of otherwise >> unrelated platform headers in shared code I suggest so simply give up >> and rename the local variable. >> >> In this change I've renamed 'rem_size' to 'remain_size' because "rem" >> is used as abbreviation of "remainder" in the code. But actually I'd >> be happy with any other name which differs from "rem_size". >> >> Thank you and best regards, >> Volker >> >> From david.simms at oracle.com Wed Jul 16 07:00:33 2014 From: david.simms at oracle.com (David Simms) Date: Wed, 16 Jul 2014 09:00:33 +0200 Subject: RFR (S) JNI Specification Issue: JDK-7172129 Integration of the JNI spec updates for JDK 1.2 was incomplete In-Reply-To: <53C5D9FF.4090504@oracle.com> References: <53C538EF.3000300@oracle.com> <53C5D9FF.4090504@oracle.com> Message-ID: <53C62311.7020005@oracle.com> Thanks for the reviews, Dan and David ! On 2014-07-16 03:48, David Holmes wrote: > Looks good to me! > > Thanks Mr Simms! > > David H. 
> > On 16/07/2014 12:21 AM, David Simms wrote: >> >> Greetings, >> >> Some important updates from way back in JDK 1.2 were never added to the >> current JNI spec: >> >> JDK Bug: https://bugs.openjdk.java.net/browse/JDK-7172129 >> >> Although the "GetPrimitiveArrayCritical" issues have been incorporated >> into JDK-4907359, changes are still required to the "Asynchronous >> Exceptions" section: >> >> Web review: http://cr.openjdk.java.net/~dsimms/jnispec/7172129 >> >> HTML: >> http://cr.openjdk.java.net/~dsimms/jnispec/7172129/raw_files/new/docs/technotes/guides/jni/spec/design.html#asynchronous_exceptions >> >> >> >> Thank you, >> /David Simms From tobias.hartmann at oracle.com Wed Jul 16 07:54:50 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 16 Jul 2014 09:54:50 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C53B23.6090907@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> Message-ID: <53C62FCA.8020302@oracle.com> Hi Coleen, thanks for the review. > *+ if (csc->is_call_to_interpreted() && stub_contains_dead_metadata(is_alive, csc->destination())) {* > *+ csc->set_to_clean();* > *+ }* > > This appears in each case. Can you fold it and the new function into > a function like clean_call_to_interpreted_stub(is_alive, csc)? I folded it into the function clean_call_to_interpreter_stub(..). New webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ Thanks, Tobias > > Thanks, > Coleen > >> >> So before the permgen removal embedded method* were oops and they >> were processed in relocInfo::oop_type loop. >> >> May be instead of specializing opt_virtual_call_type and >> static_call_type call site you can simple add a loop for >> relocInfo::metadata_type (similar to oop_type loop)? 
>> >> Thanks, >> Vladimir >> >> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>> Hi, >>> >>> please review the following patch for JDK-8029443. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>> >>> *Problem* >>> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks >>> if a nmethod can be unloaded because it contains dead oops. If class >>> unloading occurred we additionally clear all ICs where the cached >>> metadata refers to an unloaded klass or method. If the nmethod is not >>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >>> metadata is alive. The assert in CheckClass::check_class fails because >>> the nmethod contains Method* metadata corresponding to a dead Klass. >>> The Method* belongs to a to-interpreter stub [1] of an optimized >>> compiled IC. Normally we clear those stubs prior to verification to >>> avoid dangling references to Method* [2], but only if the stub is >>> not in >>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>> to-interpreter stub may be executed and hand a stale Method* to the >>> interpreter. >>> >>> *Solution >>> *The implementation of nmethod::do_unloading(..) is changed to clean >>> compiled ICs and compiled static calls if they call into a >>> to-interpreter stub that references dead Method* metadata. >>> >>> The patch was affected by the G1 class unloading changes (JDK-8048248) >>> because the method nmethod::do_unloading_parallel(..) was added. I >>> adapted the implementation as well. >>> * >>> Testing >>> *Failing test (runThese) >>> JPRT >>> >>> Thanks, >>> Tobias >>> >>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
>>> [2] see nmethod::verify_metadata_loaders(..), >>> static_stub_reloc()->clear_inline_cache() clears the stub > From mikael.gerdin at oracle.com Wed Jul 16 08:20:45 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Wed, 16 Jul 2014 10:20:45 +0200 Subject: RFR 8004128: NPG: remove stackwalking in Threads::gc_prologue and gc_epilogue code In-Reply-To: <53C58E1D.9060802@oracle.com> References: <53C4705E.7060407@oracle.com> <1649311.XsSm0sYPeC@mgerdin03> <53C58E1D.9060802@oracle.com> Message-ID: <3929573.XurhNyGbnR@mgerdin03> On Tuesday 15 July 2014 16.25.01 Coleen Phillimore wrote: > I didn't make this change to interpreter_frame_bcp or mdp_addr() at the > end. The frame code is consistent in returning intptr_t for objects on > the frame and then casting them to the right types. I think this is better. Ok. /Mikael > > Thanks, > Coleen > > On 7/15/14, 11:40 AM, Mikael Gerdin wrote: > > Hi Coleen, > > > > On Monday 14 July 2014 20.05.50 Coleen Phillimore wrote: > >> Summary: remove bcx and mdx handling. We no longer have to convert > >> bytecode pointers or method data pointers to indices for GC since > >> Metadata aren't moved. > >> > >> Tested with nsk.quick.testlist, jck tests, JPRT. > >> > >> Most of this is renaming bcx to bcp and mdx to mdp. The content changes > >> are in frame.cpp. StefanK implemented 90% of these changes. > >> > >> open webrev at http://cr.openjdk.java.net/~coleenp/8004128/ > > > > This isn't exactly my area of the code, but I'm happy that we got around > > to > > this cleanup! > > > > I looked through the change and to my not-so-runtime-familiar eyes it > > seems > > good. > > > > One thought about the frame accessors > > > > 244 intptr_t* interpreter_frame_bcp_addr() const; > > 245 intptr_t* interpreter_frame_mdp_addr() const; > > > > Now that the contents of bcp and mdp in the frames are always pointers, > > perhaps these accessors should be appropriately typed? 
> > > > Something like > > > > 244 address* interpreter_frame_bcp_addr() const; > > 245 ProfileData** interpreter_frame_mdp_addr() const; > > > > Also, BytecodeInterpreter still has a member named _mdx, should that be > > renamed to _mdp as well? > > > > /Mikael > > > >> bug link https://bugs.openjdk.java.net/browse/JDK-8004128 > >> > >> Thanks, > >> Coleen From tobias.hartmann at oracle.com Wed Jul 16 08:24:16 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 16 Jul 2014 10:24:16 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <9129730.8quV1l9zAl@mgerdin03> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C4DCB8.5020705@oracle.com> <9129730.8quV1l9zAl@mgerdin03> Message-ID: <53C636B0.1080900@oracle.com> Hi Mikael, thanks for the review. Please see comments inline. On 15.07.2014 13:36, Mikael Gerdin wrote: > Tobias, > > On Tuesday 15 July 2014 09.48.08 Tobias Hartmann wrote: >> Hi Vladimir, >> >>> Impressive work, Tobias! >> Thanks! Took me a while to figure out what's happening. >> >>> So before the permgen removal embedded method* were oops and they were >>> processed in relocInfo::oop_type loop. >> Okay, good to know. That explains why the terms oops and metadata are >> used interchangeably at some points in the code. > Yep, there are a lot of leftover references to metadata as oops, especially in > some compiler/runtime parts such as MDOs and CompiledICs. > >>> May be instead of specializing opt_virtual_call_type and >>> static_call_type call site you can simple add a loop for >>> relocInfo::metadata_type (similar to oop_type loop)? >> The problem with iterating over relocInfo::metadata_type is that we >> don't know to which stub, i.e., to which IC the Method* pointer belongs. >> Since we don't want to unload the entire method but only clear the >> corresponding IC, we need this information. 
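The constraint stated just above (a bare metadata entry yields a Method* but no way back to the IC that owns it, so only the call-site relocation types let the pass clean a single IC) gives the cleaning loop its shape. The sketch below is an invented miniature model, not HotSpot's real RelocIterator/CompiledIC API; every type and field is a stand-in used only to show the control flow being discussed.

```cpp
#include <cassert>
#include <vector>

// Invented miniature model of relocation entries and inline caches.
enum RelocType { oop_type, opt_virtual_call_type, static_call_type, metadata_type };

struct ToyIC {
    bool call_to_interpreted;  // IC currently dispatches through its stub
    bool stub_metadata_dead;   // the Method* embedded in the stub is dead
    bool cleaned = false;
    void set_to_clean() { cleaned = true; }
};

struct Reloc {
    RelocType type;
    ToyIC* ic;  // non-null only for call-site relocation entries
};

// Shape of the unloading pass: the call-site relocation types identify the
// owning IC, so just that IC is cleaned instead of unloading the nmethod.
int clean_ics(std::vector<Reloc>& relocs) {
    int cleaned = 0;
    for (Reloc& r : relocs) {
        switch (r.type) {
        case opt_virtual_call_type:
        case static_call_type:
            if (r.ic->call_to_interpreted && r.ic->stub_metadata_dead) {
                r.ic->set_to_clean();
                ++cleaned;
            }
            break;
        default:
            // A metadata_type entry alone gives a Method* but no way back to
            // the IC that owns it -- hence the specialized call-site cases.
            break;
        }
    }
    return cleaned;
}
```

In the actual patch the dead-metadata check and the `set_to_clean()` call are folded into `clean_call_to_interpreter_stub(..)`; here they are inlined to keep the sketch self-contained.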
> I'm wondering, is there some way to figure out the IC for the Method*? > > In CompiledStaticCall::emit_to_interp_stub a static_stub_Relocation is created > and from the looks of it it points to the call site through some setting of a > "mark". > > The metadata relocation is emitted just after the static_stub_Relocation, so > one approach (untested) could be to have a case for static_stub_Relocations, > create a CompiledIC.at(reloc->static_call()) and check if it's a call to > interpreted. If it is the advance the relocIterator to the next position and > check that metadata for liveness. The relocation entries for this particular case are [1]. Looking at the static_stub_Relocation (0xffffffff6ea49cc4) we don't know if the stub belongs to an optimized IC or a compiled static call. We would either have to create both CompiledIC.at(..) and compiledStaticCall_at(..) or check the relocation entry for the call (0xffffffff6ea49bc0) requiring another iteration. Only then we are able to look at the metadata_relocation at the next position. Since we already have case statements for opt_virtual_call_type and static_call_type (at least in nmethod::do_unloading_parallel(..)) I would prefer to infer the Method* from the IC or compiled static call. I adapted the implementation according to Coleen's feedback: http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ What do you think? Thanks, Tobias [1] Relocation entries: [...] 154031 @0xffffffff6ea49b3a: 3005 154032 relocInfo at 0xffffffff6ea49b3a [type=3(opt_virtual_call) addr=0xffffffff6ea49bc0 offset=20] [...] 
154047 @0xffffffff6ea49b52: f801ffe8500a 154048 relocInfo at 0xffffffff6ea49b56 [type=5(static_stub) addr=0xffffffff6ea49cc4 offset=40 data=-24] | [static_call=0xffffffff6ea49bc0] 154049 @0xffffffff6ea49b58: f003c000 154050 relocInfo at 0xffffffff6ea49b5a [type=12(metadata) addr=0xffffffff6ea49cc4 offset=0 data=3] | [metadata_addr=0xffffffff6ea49d68 *=0xffffffff6ae20960 offset=0]metadata_value=0xffffffff6ae20960: {method} {0xffffffff6ae20968} 'newInstance' '([Ljava/lang/Object;)Ljava/lang/Object;' in 'sun/reflect/GeneratedConstructorAccessor3' 154051 @0xffffffff6ea49b5c: f003c007 154052 relocInfo at 0xffffffff6ea49b5e [type=12(metadata) addr=0xffffffff6ea49ce0 offset=28 data=3] | [metadata_addr=0xffffffff6ea49d68 *=0xffffffff6ae20960 offset=0]metadata_value=0xffffffff6ae20960: {method} {0xffffffff6ae20968} 'newInstance' '([Ljava/lang/Object;)Ljava/lang/Object;' in 'sun/reflect/GeneratedConstructorAccessor3' 154053 @0xffffffff6ea49b60: > /Mikael > >> Thanks, >> Tobias >> >>> Thanks, >>> Vladimir >>> >>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>> Hi, >>>> >>>> please review the following patch for JDK-8029443. >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>> >>>> *Problem* >>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks >>>> if a nmethod can be unloaded because it contains dead oops. If class >>>> unloading occurred we additionally clear all ICs where the cached >>>> metadata refers to an unloaded klass or method. If the nmethod is not >>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >>>> metadata is alive. The assert in CheckClass::check_class fails because >>>> the nmethod contains Method* metadata corresponding to a dead Klass. >>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>> compiled IC. 
Normally we clear those stubs prior to verification to >>>> avoid dangling references to Method* [2], but only if the stub is not in >>>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>>> to-interpreter stub may be executed and hand a stale Method* to the >>>> interpreter. >>>> >>>> *Solution >>>> *The implementation of nmethod::do_unloading(..) is changed to clean >>>> compiled ICs and compiled static calls if they call into a >>>> to-interpreter stub that references dead Method* metadata. >>>> >>>> The patch was affected by the G1 class unloading changes (JDK-8048248) >>>> because the method nmethod::do_unloading_parallel(..) was added. I >>>> adapted the implementation as well. >>>> * >>>> Testing >>>> *Failing test (runThese) >>>> JPRT >>>> >>>> Thanks, >>>> Tobias >>>> >>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) >>>> [2] see nmethod::verify_metadata_loaders(..), >>>> static_stub_reloc()->clear_inline_cache() clears the stub From mikael.gerdin at oracle.com Wed Jul 16 08:34:08 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Wed, 16 Jul 2014 10:34:08 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C636B0.1080900@oracle.com> References: <53C3C584.7070008@oracle.com> <9129730.8quV1l9zAl@mgerdin03> <53C636B0.1080900@oracle.com> Message-ID: <4262688.ke7CquU56b@mgerdin03> Hi Tobias, On Wednesday 16 July 2014 10.24.16 Tobias Hartmann wrote: > Hi Mikael, > > thanks for the review. Please see comments inline. > > On 15.07.2014 13:36, Mikael Gerdin wrote: > > Tobias, > > > > On Tuesday 15 July 2014 09.48.08 Tobias Hartmann wrote: > >> Hi Vladimir, > >> > >>> Impressive work, Tobias! > >> > >> Thanks! Took me a while to figure out what's happening. > >> > >>> So before the permgen removal embedded method* were oops and they were > >>> processed in relocInfo::oop_type loop. > >> > >> Okay, good to know. 
That explains why the terms oops and metadata are > >> used interchangeably at some points in the code. > > > > Yep, there are a lot of leftover references to metadata as oops, > > especially in some compiler/runtime parts such as MDOs and CompiledICs. > > > >>> May be instead of specializing opt_virtual_call_type and > >>> static_call_type call site you can simple add a loop for > >>> relocInfo::metadata_type (similar to oop_type loop)? > >> > >> The problem with iterating over relocInfo::metadata_type is that we > >> don't know to which stub, i.e., to which IC the Method* pointer belongs. > >> Since we don't want to unload the entire method but only clear the > >> corresponding IC, we need this information. > > > > I'm wondering, is there some way to figure out the IC for the Method*? > > > > In CompiledStaticCall::emit_to_interp_stub a static_stub_Relocation is > > created and from the looks of it it points to the call site through some > > setting of a "mark". > > > > The metadata relocation is emitted just after the static_stub_Relocation, > > so one approach (untested) could be to have a case for > > static_stub_Relocations, create a CompiledIC.at(reloc->static_call()) and > > check if it's a call to interpreted. If it is the advance the > > relocIterator to the next position and check that metadata for liveness. > > The relocation entries for this particular case are [1]. Looking at the > static_stub_Relocation (0xffffffff6ea49cc4) we don't know if the stub > belongs to an optimized IC or a compiled static call. We would either > have to create both CompiledIC.at(..) and compiledStaticCall_at(..) or > check the relocation entry for the call (0xffffffff6ea49bc0) requiring > another iteration. Only then we are able to look at the > metadata_relocation at the next position. That's a good point. 
> > Since we already have case statements for opt_virtual_call_type and > static_call_type (at least in nmethod::do_unloading_parallel(..)) I > would prefer to infer the Method* from the IC or compiled static call. > > I adapted the implementation according to Coleen's feedback: > > http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ > > What do you think? I think your suggested change is fine. Did you test it with -XX:+UseG1GC to exercise the do_unloading_parallel path as well? /Mikael > > Thanks, > Tobias > > > [1] Relocation entries: > > [...] > 154031 @0xffffffff6ea49b3a: 3005 > 154032 relocInfo at 0xffffffff6ea49b3a [type=3(opt_virtual_call) > addr=0xffffffff6ea49bc0 offset=20] > [...] > 154047 @0xffffffff6ea49b52: f801ffe8500a > 154048 relocInfo at 0xffffffff6ea49b56 [type=5(static_stub) > addr=0xffffffff6ea49cc4 offset=40 data=-24] | > [static_call=0xffffffff6ea49bc0] > 154049 @0xffffffff6ea49b58: f003c000 > 154050 relocInfo at 0xffffffff6ea49b5a [type=12(metadata) > addr=0xffffffff6ea49cc4 offset=0 data=3] | > [metadata_addr=0xffffffff6ea49d68 *=0xffffffff6ae20960 > offset=0]metadata_value=0xffffffff6ae20960: {method} > {0xffffffff6ae20968} 'newInstance' > '([Ljava/lang/Object;)Ljava/lang/Object;' in > 'sun/reflect/GeneratedConstructorAccessor3' > 154051 @0xffffffff6ea49b5c: f003c007 > 154052 relocInfo at 0xffffffff6ea49b5e [type=12(metadata) > addr=0xffffffff6ea49ce0 offset=28 data=3] | > [metadata_addr=0xffffffff6ea49d68 *=0xffffffff6ae20960 > offset=0]metadata_value=0xffffffff6ae20960: {method} > {0xffffffff6ae20968} 'newInstance' > '([Ljava/lang/Object;)Ljava/lang/Object;' in > 'sun/reflect/GeneratedConstructorAccessor3' > > 154053 @0xffffffff6ea49b60: > > /Mikael > > > >> Thanks, > >> Tobias > >> > >>> Thanks, > >>> Vladimir > >>> > >>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: > >>>> Hi, > >>>> > >>>> please review the following patch for JDK-8029443. 
> >>>> > >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 > >>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ > >>>> > >>>> *Problem* > >>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks > >>>> if a nmethod can be unloaded because it contains dead oops. If class > >>>> unloading occurred we additionally clear all ICs where the cached > >>>> metadata refers to an unloaded klass or method. If the nmethod is not > >>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all > >>>> metadata is alive. The assert in CheckClass::check_class fails because > >>>> the nmethod contains Method* metadata corresponding to a dead Klass. > >>>> The Method* belongs to a to-interpreter stub [1] of an optimized > >>>> compiled IC. Normally we clear those stubs prior to verification to > >>>> avoid dangling references to Method* [2], but only if the stub is not > >>>> in > >>>> use, i.e. if the IC is not in to-interpreted mode. In this case the > >>>> to-interpreter stub may be executed and hand a stale Method* to the > >>>> interpreter. > >>>> > >>>> *Solution > >>>> *The implementation of nmethod::do_unloading(..) is changed to clean > >>>> compiled ICs and compiled static calls if they call into a > >>>> to-interpreter stub that references dead Method* metadata. > >>>> > >>>> The patch was affected by the G1 class unloading changes (JDK-8048248) > >>>> because the method nmethod::do_unloading_parallel(..) was added. I > >>>> adapted the implementation as well. > >>>> * > >>>> Testing > >>>> *Failing test (runThese) > >>>> JPRT > >>>> > >>>> Thanks, > >>>> Tobias > >>>> > >>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
> >>>> [2] see nmethod::verify_metadata_loaders(..), > >>>> static_stub_reloc()->clear_inline_cache() clears the stub From tobias.hartmann at oracle.com Wed Jul 16 08:36:50 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 16 Jul 2014 10:36:50 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C62FCA.8020302@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> Message-ID: <53C639A2.3050202@oracle.com> Sorry, forgot to answer this question: > Were you able to create a small test case for it that would be useful > to add? Unfortunately I was not able to create a test. The bug only reproduces on a particular system with a > 30 minute run of runThese. Best, Tobias On 16.07.2014 09:54, Tobias Hartmann wrote: > Hi Coleen, > > thanks for the review. >> *+ if (csc->is_call_to_interpreted() && >> stub_contains_dead_metadata(is_alive, csc->destination())) {* >> *+ csc->set_to_clean();* >> *+ }* >> >> This appears in each case. Can you fold it and the new function into >> a function like clean_call_to_interpreted_stub(is_alive, csc)? > > I folded it into the function clean_call_to_interpreter_stub(..). > > New webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ > > Thanks, > Tobias > >> >> Thanks, >> Coleen >> >>> >>> So before the permgen removal embedded method* were oops and they >>> were processed in relocInfo::oop_type loop. >>> >>> May be instead of specializing opt_virtual_call_type and >>> static_call_type call site you can simple add a loop for >>> relocInfo::metadata_type (similar to oop_type loop)? >>> >>> Thanks, >>> Vladimir >>> >>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>> Hi, >>>> >>>> please review the following patch for JDK-8029443. 
>>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>> >>>> *Problem* >>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) >>>> checks >>>> if a nmethod can be unloaded because it contains dead oops. If class >>>> unloading occurred we additionally clear all ICs where the cached >>>> metadata refers to an unloaded klass or method. If the nmethod is not >>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >>>> metadata is alive. The assert in CheckClass::check_class fails because >>>> the nmethod contains Method* metadata corresponding to a dead Klass. >>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>> compiled IC. Normally we clear those stubs prior to verification to >>>> avoid dangling references to Method* [2], but only if the stub is >>>> not in >>>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>>> to-interpreter stub may be executed and hand a stale Method* to the >>>> interpreter. >>>> >>>> *Solution >>>> *The implementation of nmethod::do_unloading(..) is changed to clean >>>> compiled ICs and compiled static calls if they call into a >>>> to-interpreter stub that references dead Method* metadata. >>>> >>>> The patch was affected by the G1 class unloading changes (JDK-8048248) >>>> because the method nmethod::do_unloading_parallel(..) was added. I >>>> adapted the implementation as well. >>>> * >>>> Testing >>>> *Failing test (runThese) >>>> JPRT >>>> >>>> Thanks, >>>> Tobias >>>> >>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
>>>> [2] see nmethod::verify_metadata_loaders(..), >>>> static_stub_reloc()->clear_inline_cache() clears the stub >> > From volker.simonis at gmail.com Wed Jul 16 10:17:13 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 16 Jul 2014 12:17:13 +0200 Subject: RFR(XS): 8050228: Rename 'rem_size' in compactibleFreeListSpace.cpp because of name clashes on AIX" -f 8050228_rename_rem_size.patch In-Reply-To: References: <53C5DFFB.5010401@oracle.com> Message-ID: On Wed, Jul 16, 2014 at 5:02 AM, Vitaly Davidovich wrote: > How about rem_sz? :) > As I wrote: I would be happy with any name as long as it gets reviewed:) > Sent from my phone > > On Jul 15, 2014 10:14 PM, "David Holmes" wrote: >> >> On 16/07/2014 2:54 AM, Volker Simonis wrote: >>> >>> Hi, >>> >>> could somebody please review and sponsor this little change: >> >> >> GC code should be taken by GC folk. (aka buck passing :) ) >> >> FWIW change looks fine to me. The name remain_size grates a little but >> spelling out remaining_size grates even more. I don't have a better >> suggestion but perhaps GC folk will. >> >> Cheers, >> David >> >>> http://cr.openjdk.java.net/~simonis/webrevs/8050228/ >>> https://bugs.openjdk.java.net/browse/JDK-8050228 >>> >>> Background: >>> >>> I know this sounds crazy but it's true: there's an AIX header which >>> unconditionally defines rem_size: >>> >>> /usr/include/sys/xmem.h >>> struct xmem { >>> ... >>> #define rem_size u2._subspace_id2 >>> }; >>> >>> This breaks the compilation of >>> CompactibleFreeListSpace::splitChunkAndReturnRemainder() which uses a >>> local variable of the same name. >>> >>> Until now, we've worked around this problem by simply undefining >>> 'rem_size' in the platform specific file os_aix.inline.hpp but after >>> "8042195: Introduce umbrella header orderAccess.inline.hpp" this >>> doesn't seems to be enough any more. 
>>> >>> So before introducing yet another ugly platform dependent hack in >>> shared code or depending on a certain include order of otherwise >>> unrelated platform headers in shared code I suggest so simply give up >>> and rename the local variable. >>> >>> In this change I've renamed 'rem_size' to 'remain_size' because "rem" >>> is used as abbreviation of "remainder" in the code. But actually I'd >>> be happy with any other name which differs from "rem_size". >>> >>> Thank you and best regards, >>> Volker >>> > From erik.helin at oracle.com Wed Jul 16 10:39:03 2014 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 16 Jul 2014 12:39:03 +0200 Subject: RFR: 8011397: JTREG needs to copy additional WhiteBox class file to JTwork/scratch/sun/hotspot In-Reply-To: <53C5482A.9090001@oracle.com> References: <536B7CF0.6010508@oracle.com> <2443586.qRToXKmNqX@mgerdin03> <53C5482A.9090001@oracle.com> Message-ID: <12779611.jBGqJ13gfp@ehelin-laptop> On Tuesday 15 July 2014 19:26:34 PM Andrey Zakharov wrote: > Hi, Erik, Bengt. Could you, please, review this too. Andrey, why did you only update a couple of tests to also copy sun.hotspot.WhiteBox$WhiteBoxPermission? You updated 14 tests, there are still 116 tests using sun.hotspot.WhiteBox. Why doesn't these 116 tests have to be updated? Thanks, Erik > Thanks. > > On 15.07.2014 17:58, Mikael Gerdin wrote: > > Andrey, > > > > On Monday 07 July 2014 20.48.21 Andrey Zakharov wrote: > >> Hi ,all > >> Mikael, can you please review it. > > > > Sorry, I was on vacation last week. > > > >> webrev: > >> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ > > > > Looks ok for now. We should consider revisiting this by either switching > > to > > @run main/bootclasspath > > or > > deleting the WhiteboxPermission nested class and using some other way for > > permission checks (if they are at all needed). > > > > /Mikael > > > >> Thanks. 
> >> > >> On 25.06.2014 19:08, Andrey Zakharov wrote: > >>> Hi, all > >>> So in progress of previous email - > >>> webrev: > >>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ > >>> > >>> Thanks. > >>> > >>> On 16.06.2014 19:57, Andrey Zakharov wrote: > >>>> Hi, all > >>>> So issue is that when tests with WhiteBox API has been invoked with > >>>> -Xverify:all it fails with Exception java.lang.NoClassDefFoundError: > >>>> sun/hotspot/WhiteBox$WhiteBoxPermission > >>>> Solutions that are observed: > >>>> 1. Copy WhiteBoxPermission with WhiteBox. But > >>>> > >>>>>> Perhaps this is a good time to get rid of ClassFileInstaller > >>>> > >>>> altogether? > >>>> > >>>> 2. Using bootclasspath to hook pre-built whitebox (due @library > >>>> /testlibrary/whitebox) . Some tests has @run main/othervm, some uses > >>>> ProcessBuilder. > >>>> > >>>> - main/othervm/bootclasspath adds ${test.src} and > >>>> > >>>> ${test.classes}to options. > >>>> > >>>> - With ProcessBuilder we can just add ${test.classes} > >>>> > >>>> Question here is, can it broke some tests ? While testing this, I > >>>> found only https://bugs.openjdk.java.net/browse/JDK-8046231, others > >>>> looks fine. > >>>> > >>>> 3. 
Make ClassFileInstaller deal with inner classes like that: > >>>> diff -r 6ed24aedeef0 -r c01651363ba8 > >>>> test/testlibrary/ClassFileInstaller.java > >>>> --- a/test/testlibrary/ClassFileInstaller.java Thu Jun 05 19:02:56 > >>>> 2014 +0400 > >>>> +++ b/test/testlibrary/ClassFileInstaller.java Fri Jun 06 18:18:11 > >>>> 2014 +0400 > >>>> @@ -50,6 +50,16 @@ > >>>> > >>>> } > >>>> // Create the class file > >>>> Files.copy(is, p, StandardCopyOption.REPLACE_EXISTING); > >>>> > >>>> + > >>>> + for (Class cls : > >>>> Class.forName(arg).getDeclaredClasses()) { > >>>> + //if (!Modifier.isStatic(cls.getModifiers())) { > >>>> + String pathNameSub = > >>>> cls.getCanonicalName().replace('.', '/').concat(".class"); > >>>> + Path pathSub = Paths.get(pathNameSub); > >>>> + InputStream streamSub = > >>>> cl.getResourceAsStream(pathNameSub); > >>>> + Files.copy(streamSub, pathSub, > >>>> StandardCopyOption.REPLACE_EXISTING); > >>>> + //} > >>>> + } > >>>> + > >>>> > >>>> } > >>>> > >>>> } > >>>> > >>>> } > >>>> > >>>> Works fine for ordinary classes, but fails for WhiteBox due > >>>> Class.forName initiate Class. WhiteBox has "static" section, and > >>>> initialization fails as it cannot bind to native methods > >>>> "registerNatives" and so on. > >>>> > >>>> > >>>> So, lets return to first one option? Just add everywhere > >>>> > >>>> * @run main ClassFileInstaller sun.hotspot.WhiteBox > >>>> > >>>> + * @run main ClassFileInstaller > >>>> sun.hotspot.WhiteBox$WhiteBoxPermission > >>>> > >>>> Thanks. > >>>> > >>>> On 10.06.2014 19:43, Igor Ignatyev wrote: > >>>>> Andrey, > >>>>> > >>>>> I don't like this idea, since it completely changes the tests. > >>>>> 'run/othervm/bootclasspath' adds all paths from CP to BCP, so the > >>>>> tests whose main idea was testing WB methods themselves (sanity, > >>>>> compiler/whitebox, ...) don't check that it's possible to use WB > >>>>> when the application isn't in BCP. 
> >>>>> > >>>>> Igor > >>>>> > >>>>> On 06/09/2014 06:59 PM, Andrey Zakharov wrote: > >>>>>> Hi, everybody > >>>>>> I have tested my changes on major platforms and found one bug, filed: > >>>>>> https://bugs.openjdk.java.net/browse/JDK-8046231 > >>>>>> Also, i did another try to make ClassFileInstaller to copy all inner > >>>>>> classes within parent, but this fails for WhiteBox due its static > >>>>>> "registerNatives" dependency. > >>>>>> > >>>>>> Please, review suggested changes: > >>>>>> - replace ClassFileInstaller and run/othervm with > >>>>>> > >>>>>> "run/othervm/bootclasspath". > >>>>>> > >>>>>> bootclasspath parameter for othervm adds-Xbootclasspath/a: > >>>>>> option with ${test.src} and ${test.classes}according to > >>>>>> http://hg.openjdk.java.net/code-tools/jtreg/file/31003a1c46d9/src/sha > >>>>>> re > >>>>>> /classes/com/sun/javatest/regtest/MainAction.java. > >>>>>> > >>>>>> Is this suitable for our needs - give to test compiled WhiteBox? > >>>>>> > >>>>>> - replace explicit -Xbootclasspath option values (".") in > >>>>>> > >>>>>> ProcessBuilder invocations to ${test.classes} where WhiteBox has been > >>>>>> compiled. > >>>>>> > >>>>>> Webrev: > >>>>>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.00/ > >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8011397 > >>>>>> Thanks. > >>>>>> > >>>>>> On 23.05.2014 15:40, Andrey Zakharov wrote: > >>>>>>> On 22.05.2014 12:47, Igor Ignatyev wrote: > >>>>>>>> Andrey, > >>>>>>>> > >>>>>>>> 1. You changed dozen of tests, have you tested your changes? > >>>>>>> > >>>>>>> Locally, aurora on the way. > >>>>>>> > >>>>>>>> 2. Your changes of year in copyright is wrong. it has to be > >>>>>>>> $first_year, [$last_year, ], see Mark's email[1] for details. > >>>>>>>> > >>>>>>>> [1] > >>>>>>>> http://mail.openjdk.java.net/pipermail/jdk7-dev/2010-May/001321.htm > >>>>>>>> l > >>>>>>> > >>>>>>> Thanks, fixed. will be uploaded soon. 
> >>>>>>> > >>>>>>>> Igor > >>>>>>>> > >>>>>>>> On 05/21/2014 07:37 PM, Andrey Zakharov wrote: > >>>>>>>>> On 13.05.2014 14:43, Andrey Zakharov wrote: > >>>>>>>>>> Hi > >>>>>>>>>> So here is trivial patch - > >>>>>>>>>> removing ClassFileInstaller sun.hotspot.WhiteBox and adding > >>>>>>>>>> main/othervm/bootclasspath > >>>>>>>>>> where this needed > >>>>>>>>>> > >>>>>>>>>> Also, some tests are modified as > >>>>>>>>>> - "-Xbootclasspath/a:.", > >>>>>>>>>> + "-Xbootclasspath/a:" + > >>>>>>>>>> System.getProperty("test.classes"), > >>>>>>>>>> > >>>>>>>>>> Thanks. > >>>>>>>>> > >>>>>>>>> webrev: http://cr.openjdk.java.net/~jwilhelm/8011397/webrev.02/ > >>>>>>>>> bug: https://bugs.openjdk.java.net/browse/JDK-8011397 > >>>>>>>>> Thanks. > >>>>>>>>> > >>>>>>>>>> On 09.05.2014 12:13, Mikael Gerdin wrote: > >>>>>>>>>>> On Thursday 08 May 2014 19.28.13 Igor Ignatyev wrote: > >>>>>>>>>>>> // cc'ing hotspot-dev instaed of compiler, runtime and gc > >>>>>>>>>>>> lists. > >>>>>>>>>>>> > >>>>>>>>>>>> On 05/08/2014 07:09 PM, Filipp Zhinkin wrote: > >>>>>>>>>>>>> Andrey, > >>>>>>>>>>>>> > >>>>>>>>>>>>> I've CC'ed compiler and runtime mailing list, because you're > >>>>>>>>>>>>> changes > >>>>>>>>>>>>> affect test for other components as too. > >>>>>>>>>>>>> > >>>>>>>>>>>>> I don't like your solution (but I'm not a reviewer, so treat > >>>>>>>>>>>>> my > >>>>>>>>>>>>> words > >>>>>>>>>>>>> just as suggestion), > >>>>>>>>>>>>> because we'll have to write more meta information for each > >>>>>>>>>>>>> test > >>>>>>>>>>>>> and it > >>>>>>>>>>>>> is very easy to > >>>>>>>>>>>>> forget to install WhiteBoxPermission if you don't test your > >>>>>>>>>>>>> test > >>>>>>>>>>>>> with > >>>>>>>>>>>>> some security manager. 
> >>>>>>>>>>>>> > >>>>>>>>>>>>> From my point of view, it will be better to extend > >>>>>>>>>>>>> > >>>>>>>>>>>>> ClassFileInstaller > >>>>>>>>>>>>> > >>>>>>>>>>>>> so it will copy not only > >>>>>>>>>>>>> a class whose name was passed as an arguments, but also all > >>>>>>>>>>>>> inner > >>>>>>>>>>>>> classes of that class. > >>>>>>>>>>>>> And if someone want copy only specified class without inner > >>>>>>>>>>>>> classes, > >>>>>>>>>>>>> then some option > >>>>>>>>>>>>> could be added to ClassFileInstaller to force such behaviour. > >>>>>>>>>>> > >>>>>>>>>>> Perhaps this is a good time to get rid of ClassFileInstaller > >>>>>>>>>>> altogether? > >>>>>>>>>>> > >>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8009117 > >>>>>>>>>>> > >>>>>>>>>>> The reason for its existence is that the WhiteBox class needs > >>>>>>>>>>> to be > >>>>>>>>>>> on the > >>>>>>>>>>> boot class path. > >>>>>>>>>>> If we can live with having all the test's classes on the boot > >>>>>>>>>>> class > >>>>>>>>>>> path then > >>>>>>>>>>> we could use the /bootclasspath option in jtreg as stated in > >>>>>>>>>>> the RFE. > >>>>>>>>>>> > >>>>>>>>>>> /Mikael > >>>>>>>>>>> > >>>>>>>>>>>>> Thanks, > >>>>>>>>>>>>> Filipp. > >>>>>>>>>>>>> > >>>>>>>>>>>>> On 05/08/2014 04:47 PM, Andrey Zakharov wrote: > >>>>>>>>>>>>>> Hi! > >>>>>>>>>>>>>> Suggesting patch with fixes for > >>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8011397 > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> webrev: > >>>>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20275/8011397 > >>>>>>>>>>>>>> .t > >>>>>>>>>>>>>> gz > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> patch: > >>>>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20274/8011397 > >>>>>>>>>>>>>> .W > >>>>>>>>>>>>>> hiteBoxPer > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> mission > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Thanks. 
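The ClassFileInstaller extension quoted above fails for WhiteBox because `Class.forName(arg)` runs the class's static initializer (which tries to bind `registerNatives`). A minimal, untested sketch of how that can be avoided — the class name `NestedClassInstaller` is illustrative, not from the actual patch — passes `initialize = false` to `Class.forName`, and uses `getName()` rather than the quoted patch's `getCanonicalName()`, since only `getName()` keeps the `$` separator that nested-class resource paths need:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class NestedClassInstaller {
    public static void main(String[] args) throws Exception {
        String arg = args.length > 0 ? args[0] : "java.util.AbstractMap";
        ClassLoader cl = NestedClassInstaller.class.getClassLoader();
        // Load WITHOUT initializing: static initializers (e.g. WhiteBox's
        // registerNatives) are not run, so getDeclaredClasses() is safe.
        Class<?> top = Class.forName(arg, /* initialize = */ false, cl);
        for (Class<?> cls : top.getDeclaredClasses()) {
            // getName() yields e.g. "java.util.AbstractMap$SimpleEntry";
            // getCanonicalName() would lose the '$' and break the lookup.
            String res = cls.getName().replace('.', '/') + ".class";
            try (InputStream in = cl.getResourceAsStream(res)) {
                if (in == null) {
                    continue; // no class file resource available
                }
                Path p = Paths.get(res);
                if (p.getParent() != null) {
                    Files.createDirectories(p.getParent());
                }
                Files.copy(in, p, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}
```

Run against a JDK class with nested classes (e.g. `java.util.AbstractMap`), it copies `java/util/AbstractMap$SimpleEntry.class` and friends into the working directory without ever initializing the outer class.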
From tobias.hartmann at oracle.com Wed Jul 16 11:04:23 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 16 Jul 2014 13:04:23 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C56C16.40909@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C4DCB8.5020705@oracle.com> <53C56C16.40909@oracle.com> Message-ID: <53C65C37.4070700@oracle.com> Hi Vladimir, On 15.07.2014 19:59, Vladimir Kozlov wrote: > On 7/15/14 12:48 AM, Tobias Hartmann wrote: >> Hi Vladimir, >> >>> Impressive work, Tobias! >> >> Thanks! Took me a while to figure out what's happening. >> >>> So before the permgen removal embedded method* were oops and they were >>> processed in relocInfo::oop_type loop. >> >> Okay, good to know. That explains why the terms oops and metadata are >> used interchangeably at some points in the code. >> >>> May be instead of specializing opt_virtual_call_type and >>> static_call_type call site you can simple add a loop for >>> relocInfo::metadata_type (similar to oop_type loop)? >> >> The problem with iterating over relocInfo::metadata_type is that we >> don't know to which stub, i.e., to which IC the Method* pointer belongs. >> Since we don't want to unload the entire method but only clear the >> corresponding IC, we need this information. > > Got it: you are cleaning call site IC: ic->set_to_clean(). > > My point was these to_interp stubs are part of a nmethod (they are in > stubs section) and contain dead metadata. Should we unload this > nmethod then? We do unloading nmethods if any embedded oops are dead > (see can_unload()). Should we do the same if a nmethod (and its stubs) > have dead metadata? Note, embedded metadata could be Method* and > MethodData*. I'm not sure but I think except for the to-interpreter stub the nmethod is still valid and could be executed. In nmethod::verify_metadata_loaders(..) 
we explicitly check for this case and clean such stubs prior to verification. Assuming that this code was added for a reason, it means that nmethods with a to-interpreter stub referencing stale metadata do not necessarily have to be unloaded. But maybe someone else knows better. Thanks, Tobias > > Vladimir > >> >> Thanks, >> Tobias >> >>> >>> Thanks, >>> Vladimir >>> >>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>> Hi, >>>> >>>> please review the following patch for JDK-8029443. >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>> >>>> *Problem* >>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) >>>> checks >>>> if a nmethod can be unloaded because it contains dead oops. If class >>>> unloading occurred we additionally clear all ICs where the cached >>>> metadata refers to an unloaded klass or method. If the nmethod is not >>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >>>> metadata is alive. The assert in CheckClass::check_class fails because >>>> the nmethod contains Method* metadata corresponding to a dead Klass. >>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>> compiled IC. Normally we clear those stubs prior to verification to >>>> avoid dangling references to Method* [2], but only if the stub is >>>> not in >>>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>>> to-interpreter stub may be executed and hand a stale Method* to the >>>> interpreter. >>>> >>>> *Solution >>>> *The implementation of nmethod::do_unloading(..) is changed to clean >>>> compiled ICs and compiled static calls if they call into a >>>> to-interpreter stub that references dead Method* metadata. >>>> >>>> The patch was affected by the G1 class unloading changes (JDK-8048248) >>>> because the method nmethod::do_unloading_parallel(..) was added. I >>>> adapted the implementation as well. 
>>>> * >>>> Testing >>>> *Failing test (runThese) >>>> JPRT >>>> >>>> Thanks, >>>> Tobias >>>> >>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) >>>> [2] see nmethod::verify_metadata_loaders(..), >>>> static_stub_reloc()->clear_inline_cache() clears the stub >> From andrey.x.zakharov at oracle.com Wed Jul 16 11:13:14 2014 From: andrey.x.zakharov at oracle.com (Andrey Zakharov) Date: Wed, 16 Jul 2014 15:13:14 +0400 Subject: RFR: 8011397: JTREG needs to copy additional WhiteBox class file to JTwork/scratch/sun/hotspot In-Reply-To: <12779611.jBGqJ13gfp@ehelin-laptop> References: <536B7CF0.6010508@oracle.com> <2443586.qRToXKmNqX@mgerdin03> <53C5482A.9090001@oracle.com> <12779611.jBGqJ13gfp@ehelin-laptop> Message-ID: <53C65E4A.4020401@oracle.com> On 16.07.2014 14:39, Erik Helin wrote: > On Tuesday 15 July 2014 19:26:34 PM Andrey Zakharov wrote: >> Hi, Erik, Bengt. Could you, please, review this too. > Andrey, why did you only update a couple of tests to also copy > sun.hotspot.WhiteBox$WhiteBoxPermission? You updated 14 tests, there are > still 116 tests using sun.hotspot.WhiteBox. > > Why doesn't these 116 tests have to be updated? > > Thanks, > Erik Thanks Erik. Actually this first one patch 8011397.WhiteBoxPermission is correct. I will rework it and upload to webrev space. > >> Thanks. >> >> On 15.07.2014 17:58, Mikael Gerdin wrote: >>> Andrey, >>> >>> On Monday 07 July 2014 20.48.21 Andrey Zakharov wrote: >>>> Hi ,all >>>> Mikael, can you please review it. >>> Sorry, I was on vacation last week. >>> >>>> webrev: >>>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ >>> Looks ok for now. We should consider revisiting this by either switching >>> to >>> @run main/bootclasspath >>> or >>> deleting the WhiteboxPermission nested class and using some other way for >>> permission checks (if they are at all needed). >>> >>> /Mikael >>> >>>> Thanks. 
>>>> >>>> On 25.06.2014 19:08, Andrey Zakharov wrote: >>>>> Hi, all >>>>> So in progress of previous email - >>>>> webrev: >>>>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ >>>>> >>>>> Thanks. >>>>> >>>>> On 16.06.2014 19:57, Andrey Zakharov wrote: >>>>>> Hi, all >>>>>> So issue is that when tests with WhiteBox API has been invoked with >>>>>> -Xverify:all it fails with Exception java.lang.NoClassDefFoundError: >>>>>> sun/hotspot/WhiteBox$WhiteBoxPermission >>>>>> Solutions that are observed: >>>>>> 1. Copy WhiteBoxPermission with WhiteBox. But >>>>>> >>>>>>>> Perhaps this is a good time to get rid of ClassFileInstaller >>>>>> altogether? >>>>>> >>>>>> 2. Using bootclasspath to hook pre-built whitebox (due @library >>>>>> /testlibrary/whitebox) . Some tests has @run main/othervm, some uses >>>>>> ProcessBuilder. >>>>>> >>>>>> - main/othervm/bootclasspath adds ${test.src} and >>>>>> >>>>>> ${test.classes}to options. >>>>>> >>>>>> - With ProcessBuilder we can just add ${test.classes} >>>>>> >>>>>> Question here is, can it broke some tests ? While testing this, I >>>>>> found only https://bugs.openjdk.java.net/browse/JDK-8046231, others >>>>>> looks fine. >>>>>> >>>>>> 3. 
Make ClassFileInstaller deal with inner classes like that: >>>>>> diff -r 6ed24aedeef0 -r c01651363ba8 >>>>>> test/testlibrary/ClassFileInstaller.java >>>>>> --- a/test/testlibrary/ClassFileInstaller.java Thu Jun 05 19:02:56 >>>>>> 2014 +0400 >>>>>> +++ b/test/testlibrary/ClassFileInstaller.java Fri Jun 06 18:18:11 >>>>>> 2014 +0400 >>>>>> @@ -50,6 +50,16 @@ >>>>>> >>>>>> } >>>>>> // Create the class file >>>>>> Files.copy(is, p, StandardCopyOption.REPLACE_EXISTING); >>>>>> >>>>>> + >>>>>> + for (Class cls : >>>>>> Class.forName(arg).getDeclaredClasses()) { >>>>>> + //if (!Modifier.isStatic(cls.getModifiers())) { >>>>>> + String pathNameSub = >>>>>> cls.getCanonicalName().replace('.', '/').concat(".class"); >>>>>> + Path pathSub = Paths.get(pathNameSub); >>>>>> + InputStream streamSub = >>>>>> cl.getResourceAsStream(pathNameSub); >>>>>> + Files.copy(streamSub, pathSub, >>>>>> StandardCopyOption.REPLACE_EXISTING); >>>>>> + //} >>>>>> + } >>>>>> + >>>>>> >>>>>> } >>>>>> >>>>>> } >>>>>> >>>>>> } >>>>>> >>>>>> Works fine for ordinary classes, but fails for WhiteBox due >>>>>> Class.forName initiate Class. WhiteBox has "static" section, and >>>>>> initialization fails as it cannot bind to native methods >>>>>> "registerNatives" and so on. >>>>>> >>>>>> >>>>>> So, lets return to first one option? Just add everywhere >>>>>> >>>>>> * @run main ClassFileInstaller sun.hotspot.WhiteBox >>>>>> >>>>>> + * @run main ClassFileInstaller >>>>>> sun.hotspot.WhiteBox$WhiteBoxPermission >>>>>> >>>>>> Thanks. >>>>>> >>>>>> On 10.06.2014 19:43, Igor Ignatyev wrote: >>>>>>> Andrey, >>>>>>> >>>>>>> I don't like this idea, since it completely changes the tests. >>>>>>> 'run/othervm/bootclasspath' adds all paths from CP to BCP, so the >>>>>>> tests whose main idea was testing WB methods themselves (sanity, >>>>>>> compiler/whitebox, ...) don't check that it's possible to use WB >>>>>>> when the application isn't in BCP. 
>>>>>>> >>>>>>> Igor >>>>>>> >>>>>>> On 06/09/2014 06:59 PM, Andrey Zakharov wrote: >>>>>>>> Hi, everybody >>>>>>>> I have tested my changes on major platforms and found one bug, filed: >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046231 >>>>>>>> Also, i did another try to make ClassFileInstaller to copy all inner >>>>>>>> classes within parent, but this fails for WhiteBox due its static >>>>>>>> "registerNatives" dependency. >>>>>>>> >>>>>>>> Please, review suggested changes: >>>>>>>> - replace ClassFileInstaller and run/othervm with >>>>>>>> >>>>>>>> "run/othervm/bootclasspath". >>>>>>>> >>>>>>>> bootclasspath parameter for othervm adds-Xbootclasspath/a: >>>>>>>> option with ${test.src} and ${test.classes}according to >>>>>>>> http://hg.openjdk.java.net/code-tools/jtreg/file/31003a1c46d9/src/sha >>>>>>>> re >>>>>>>> /classes/com/sun/javatest/regtest/MainAction.java. >>>>>>>> >>>>>>>> Is this suitable for our needs - give to test compiled WhiteBox? >>>>>>>> >>>>>>>> - replace explicit -Xbootclasspath option values (".") in >>>>>>>> >>>>>>>> ProcessBuilder invocations to ${test.classes} where WhiteBox has been >>>>>>>> compiled. >>>>>>>> >>>>>>>> Webrev: >>>>>>>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.00/ >>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>>>> Thanks. >>>>>>>> >>>>>>>> On 23.05.2014 15:40, Andrey Zakharov wrote: >>>>>>>>> On 22.05.2014 12:47, Igor Ignatyev wrote: >>>>>>>>>> Andrey, >>>>>>>>>> >>>>>>>>>> 1. You changed dozen of tests, have you tested your changes? >>>>>>>>> Locally, aurora on the way. >>>>>>>>> >>>>>>>>>> 2. Your changes of year in copyright is wrong. it has to be >>>>>>>>>> $first_year, [$last_year, ], see Mark's email[1] for details. >>>>>>>>>> >>>>>>>>>> [1] >>>>>>>>>> http://mail.openjdk.java.net/pipermail/jdk7-dev/2010-May/001321.htm >>>>>>>>>> l >>>>>>>>> Thanks, fixed. will be uploaded soon. 
>>>>>>>>> >>>>>>>>>> Igor >>>>>>>>>> >>>>>>>>>> On 05/21/2014 07:37 PM, Andrey Zakharov wrote: >>>>>>>>>>> On 13.05.2014 14:43, Andrey Zakharov wrote: >>>>>>>>>>>> Hi >>>>>>>>>>>> So here is trivial patch - >>>>>>>>>>>> removing ClassFileInstaller sun.hotspot.WhiteBox and adding >>>>>>>>>>>> main/othervm/bootclasspath >>>>>>>>>>>> where this needed >>>>>>>>>>>> >>>>>>>>>>>> Also, some tests are modified as >>>>>>>>>>>> - "-Xbootclasspath/a:.", >>>>>>>>>>>> + "-Xbootclasspath/a:" + >>>>>>>>>>>> System.getProperty("test.classes"), >>>>>>>>>>>> >>>>>>>>>>>> Thanks. >>>>>>>>>>> webrev: http://cr.openjdk.java.net/~jwilhelm/8011397/webrev.02/ >>>>>>>>>>> bug: https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>>>>>>> Thanks. >>>>>>>>>>> >>>>>>>>>>>> On 09.05.2014 12:13, Mikael Gerdin wrote: >>>>>>>>>>>>> On Thursday 08 May 2014 19.28.13 Igor Ignatyev wrote: >>>>>>>>>>>>>> // cc'ing hotspot-dev instaed of compiler, runtime and gc >>>>>>>>>>>>>> lists. >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 05/08/2014 07:09 PM, Filipp Zhinkin wrote: >>>>>>>>>>>>>>> Andrey, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I've CC'ed compiler and runtime mailing list, because you're >>>>>>>>>>>>>>> changes >>>>>>>>>>>>>>> affect test for other components as too. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I don't like your solution (but I'm not a reviewer, so treat >>>>>>>>>>>>>>> my >>>>>>>>>>>>>>> words >>>>>>>>>>>>>>> just as suggestion), >>>>>>>>>>>>>>> because we'll have to write more meta information for each >>>>>>>>>>>>>>> test >>>>>>>>>>>>>>> and it >>>>>>>>>>>>>>> is very easy to >>>>>>>>>>>>>>> forget to install WhiteBoxPermission if you don't test your >>>>>>>>>>>>>>> test >>>>>>>>>>>>>>> with >>>>>>>>>>>>>>> some security manager. 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> From my point of view, it will be better to extend >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ClassFileInstaller >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> so it will copy not only >>>>>>>>>>>>>>> a class whose name was passed as an arguments, but also all >>>>>>>>>>>>>>> inner >>>>>>>>>>>>>>> classes of that class. >>>>>>>>>>>>>>> And if someone want copy only specified class without inner >>>>>>>>>>>>>>> classes, >>>>>>>>>>>>>>> then some option >>>>>>>>>>>>>>> could be added to ClassFileInstaller to force such behaviour. >>>>>>>>>>>>> Perhaps this is a good time to get rid of ClassFileInstaller >>>>>>>>>>>>> altogether? >>>>>>>>>>>>> >>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8009117 >>>>>>>>>>>>> >>>>>>>>>>>>> The reason for its existence is that the WhiteBox class needs >>>>>>>>>>>>> to be >>>>>>>>>>>>> on the >>>>>>>>>>>>> boot class path. >>>>>>>>>>>>> If we can live with having all the test's classes on the boot >>>>>>>>>>>>> class >>>>>>>>>>>>> path then >>>>>>>>>>>>> we could use the /bootclasspath option in jtreg as stated in >>>>>>>>>>>>> the RFE. >>>>>>>>>>>>> >>>>>>>>>>>>> /Mikael >>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>> Filipp. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On 05/08/2014 04:47 PM, Andrey Zakharov wrote: >>>>>>>>>>>>>>>> Hi! >>>>>>>>>>>>>>>> Suggesting patch with fixes for >>>>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> webrev: >>>>>>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20275/8011397 >>>>>>>>>>>>>>>> .t >>>>>>>>>>>>>>>> gz >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> patch: >>>>>>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20274/8011397 >>>>>>>>>>>>>>>> .W >>>>>>>>>>>>>>>> hiteBoxPer >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> mission >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks. 
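The other change discussed in this thread — replacing the hard-coded `"."` in `-Xbootclasspath/a:` with the directory jtreg compiled the test classes into — can be sketched as a small helper. The class and method names here are illustrative, not part of the actual webrev; the only real ingredients are the jtreg-defined `test.classes` system property and the `-Xbootclasspath/a:` prefix:

```java
public class WhiteBoxLaunchArgs {
    /**
     * Builds the boot-classpath append option pointing at the directory
     * where jtreg compiled the test's classes (and thus WhiteBox).
     * Falls back to "." when run outside jtreg, matching the old behavior.
     */
    static String bootClasspathArg() {
        String testClasses = System.getProperty("test.classes", ".");
        return "-Xbootclasspath/a:" + testClasses;
    }

    public static void main(String[] args) {
        // Would typically be passed to a ProcessBuilder spawning the test VM.
        System.out.println(bootClasspathArg());
    }
}
```

With this, a `ProcessBuilder` invocation picks up the compiled WhiteBox class regardless of the child VM's working directory, which is exactly the failure mode the hard-coded `"."` had.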
From tobias.hartmann at oracle.com Wed Jul 16 11:50:40 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 16 Jul 2014 13:50:40 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <4262688.ke7CquU56b@mgerdin03> References: <53C3C584.7070008@oracle.com> <9129730.8quV1l9zAl@mgerdin03> <53C636B0.1080900@oracle.com> <4262688.ke7CquU56b@mgerdin03> Message-ID: <53C66710.3020808@oracle.com> Hi Mikael, thanks for the review. Please see comments inline. On 16.07.2014 10:34, Mikael Gerdin wrote: > Hi Tobias, > > On Wednesday 16 July 2014 10.24.16 Tobias Hartmann wrote: >> Hi Mikael, >> >> thanks for the review. Please see comments inline. >> >> On 15.07.2014 13:36, Mikael Gerdin wrote: >>> Tobias, >>> >>> On Tuesday 15 July 2014 09.48.08 Tobias Hartmann wrote: >>>> Hi Vladimir, >>>> >>>>> Impressive work, Tobias! >>>> Thanks! Took me a while to figure out what's happening. >>>> >>>>> So before the permgen removal embedded method* were oops and they were >>>>> processed in relocInfo::oop_type loop. >>>> Okay, good to know. That explains why the terms oops and metadata are >>>> used interchangeably at some points in the code. >>> Yep, there are a lot of leftover references to metadata as oops, >>> especially in some compiler/runtime parts such as MDOs and CompiledICs. >>> >>>>> May be instead of specializing opt_virtual_call_type and >>>>> static_call_type call site you can simple add a loop for >>>>> relocInfo::metadata_type (similar to oop_type loop)? >>>> The problem with iterating over relocInfo::metadata_type is that we >>>> don't know to which stub, i.e., to which IC the Method* pointer belongs. >>>> Since we don't want to unload the entire method but only clear the >>>> corresponding IC, we need this information. >>> I'm wondering, is there some way to figure out the IC for the Method*? 
>>> >>> In CompiledStaticCall::emit_to_interp_stub a static_stub_Relocation is >>> created and from the looks of it it points to the call site through some >>> setting of a "mark". >>> >>> The metadata relocation is emitted just after the static_stub_Relocation, >>> so one approach (untested) could be to have a case for >>> static_stub_Relocations, create a CompiledIC.at(reloc->static_call()) and >>> check if it's a call to interpreted. If it is the advance the >>> relocIterator to the next position and check that metadata for liveness. >> The relocation entries for this particular case are [1]. Looking at the >> static_stub_Relocation (0xffffffff6ea49cc4) we don't know if the stub >> belongs to an optimized IC or a compiled static call. We would either >> have to create both CompiledIC.at(..) and compiledStaticCall_at(..) or >> check the relocation entry for the call (0xffffffff6ea49bc0) requiring >> another iteration. Only then we are able to look at the >> metadata_relocation at the next position. > That's a good point. > >> Since we already have case statements for opt_virtual_call_type and >> static_call_type (at least in nmethod::do_unloading_parallel(..)) I >> would prefer to infer the Method* from the IC or compiled static call. >> >> I adapted the implementation according to Coleen's feedback: >> >> http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >> >> What do you think? > I think your suggested change is fine. > > Did you test it with -XX:+UseG1GC to exercise the do_unloading_parallel path > as well? I just did some more testing with runThese and -XX:+UseG1GC and it looks fine. Best, Tobias > > /Mikael > >> Thanks, >> Tobias >> >> >> [1] Relocation entries: >> >> [...] >> 154031 @0xffffffff6ea49b3a: 3005 >> 154032 relocInfo at 0xffffffff6ea49b3a [type=3(opt_virtual_call) >> addr=0xffffffff6ea49bc0 offset=20] >> [...] 
>> 154047 @0xffffffff6ea49b52: f801ffe8500a >> 154048 relocInfo at 0xffffffff6ea49b56 [type=5(static_stub) >> addr=0xffffffff6ea49cc4 offset=40 data=-24] | >> [static_call=0xffffffff6ea49bc0] >> 154049 @0xffffffff6ea49b58: f003c000 >> 154050 relocInfo at 0xffffffff6ea49b5a [type=12(metadata) >> addr=0xffffffff6ea49cc4 offset=0 data=3] | >> [metadata_addr=0xffffffff6ea49d68 *=0xffffffff6ae20960 >> offset=0]metadata_value=0xffffffff6ae20960: {method} >> {0xffffffff6ae20968} 'newInstance' >> '([Ljava/lang/Object;)Ljava/lang/Object;' in >> 'sun/reflect/GeneratedConstructorAccessor3' >> 154051 @0xffffffff6ea49b5c: f003c007 >> 154052 relocInfo at 0xffffffff6ea49b5e [type=12(metadata) >> addr=0xffffffff6ea49ce0 offset=28 data=3] | >> [metadata_addr=0xffffffff6ea49d68 *=0xffffffff6ae20960 >> offset=0]metadata_value=0xffffffff6ae20960: {method} >> {0xffffffff6ae20968} 'newInstance' >> '([Ljava/lang/Object;)Ljava/lang/Object;' in >> 'sun/reflect/GeneratedConstructorAccessor3' >> >> 154053 @0xffffffff6ea49b60: >>> /Mikael >>> >>>> Thanks, >>>> Tobias >>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>>> Hi, >>>>>> >>>>>> please review the following patch for JDK-8029443. >>>>>> >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>>> >>>>>> *Problem* >>>>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks >>>>>> if a nmethod can be unloaded because it contains dead oops. If class >>>>>> unloading occurred we additionally clear all ICs where the cached >>>>>> metadata refers to an unloaded klass or method. If the nmethod is not >>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >>>>>> metadata is alive. The assert in CheckClass::check_class fails because >>>>>> the nmethod contains Method* metadata corresponding to a dead Klass. 
>>>>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>>>> compiled IC. Normally we clear those stubs prior to verification to >>>>>> avoid dangling references to Method* [2], but only if the stub is not >>>>>> in >>>>>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>>>>> to-interpreter stub may be executed and hand a stale Method* to the >>>>>> interpreter. >>>>>> >>>>>> *Solution >>>>>> *The implementation of nmethod::do_unloading(..) is changed to clean >>>>>> compiled ICs and compiled static calls if they call into a >>>>>> to-interpreter stub that references dead Method* metadata. >>>>>> >>>>>> The patch was affected by the G1 class unloading changes (JDK-8048248) >>>>>> because the method nmethod::do_unloading_parallel(..) was added. I >>>>>> adapted the implementation as well. >>>>>> * >>>>>> Testing >>>>>> *Failing test (runThese) >>>>>> JPRT >>>>>> >>>>>> Thanks, >>>>>> Tobias >>>>>> >>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) >>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>>> static_stub_reloc()->clear_inline_cache() clears the stub From vladimir.kozlov at oracle.com Wed Jul 16 16:51:48 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 16 Jul 2014 09:51:48 -0700 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C65C37.4070700@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C4DCB8.5020705@oracle.com> <53C56C16.40909@oracle.com> <53C65C37.4070700@oracle.com> Message-ID: <53C6ADA4.5000904@oracle.com> Looks like nmethod::verify_metadata_loaders(..) does check embedded metadata - CheckClass::do_check_class() call. So we will catch dead metadata in debug VM. Okay, your latest version looks fine. 
Thanks, Vladimir On 7/16/14 4:04 AM, Tobias Hartmann wrote: > Hi Vladimir, > > On 15.07.2014 19:59, Vladimir Kozlov wrote: >> On 7/15/14 12:48 AM, Tobias Hartmann wrote: >>> Hi Vladimir, >>> >>>> Impressive work, Tobias! >>> >>> Thanks! Took me a while to figure out what's happening. >>> >>>> So before the permgen removal embedded method* were oops and they were >>>> processed in relocInfo::oop_type loop. >>> >>> Okay, good to know. That explains why the terms oops and metadata are >>> used interchangeably at some points in the code. >>> >>>> May be instead of specializing opt_virtual_call_type and >>>> static_call_type call site you can simple add a loop for >>>> relocInfo::metadata_type (similar to oop_type loop)? >>> >>> The problem with iterating over relocInfo::metadata_type is that we >>> don't know to which stub, i.e., to which IC the Method* pointer belongs. >>> Since we don't want to unload the entire method but only clear the >>> corresponding IC, we need this information. >> >> Got it: you are cleaning call site IC: ic->set_to_clean(). >> >> My point was these to_interp stubs are part of a nmethod (they are in stubs section) and contain dead metadata. Should >> we unload this nmethod then? We do unloading nmethods if any embedded oops are dead (see can_unload()). Should we do >> the same if a nmethod (and its stubs) have dead metadata? Note, embedded metadata could be Method* and MethodData*. > > I'm not sure but I think except for the to-interpreter stub the nmethod is still valid and could be executed. In > nmethod::verify_metadata_loaders(..) we explicitly check for this case and clean such stubs prior to verification. > Assuming that this code was added for a reason, it means that nmethods with a to-interpreter stub referencing stale > metadata do not necessarily have to be unloaded. > > But maybe someone else knows better. 
> > Thanks, > Tobias > >> >> Vladimir >> >>> >>> Thanks, >>> Tobias >>> >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>> Hi, >>>>> >>>>> please review the following patch for JDK-8029443. >>>>> >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>> >>>>> *Problem* >>>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks >>>>> if a nmethod can be unloaded because it contains dead oops. If class >>>>> unloading occurred we additionally clear all ICs where the cached >>>>> metadata refers to an unloaded klass or method. If the nmethod is not >>>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >>>>> metadata is alive. The assert in CheckClass::check_class fails because >>>>> the nmethod contains Method* metadata corresponding to a dead Klass. >>>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>>> compiled IC. Normally we clear those stubs prior to verification to >>>>> avoid dangling references to Method* [2], but only if the stub is not in >>>>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>>>> to-interpreter stub may be executed and hand a stale Method* to the >>>>> interpreter. >>>>> >>>>> *Solution >>>>> *The implementation of nmethod::do_unloading(..) is changed to clean >>>>> compiled ICs and compiled static calls if they call into a >>>>> to-interpreter stub that references dead Method* metadata. >>>>> >>>>> The patch was affected by the G1 class unloading changes (JDK-8048248) >>>>> because the method nmethod::do_unloading_parallel(..) was added. I >>>>> adapted the implementation as well. >>>>> * >>>>> Testing >>>>> *Failing test (runThese) >>>>> JPRT >>>>> >>>>> Thanks, >>>>> Tobias >>>>> >>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>> > From mike.duigou at oracle.com Wed Jul 16 17:13:18 2014 From: mike.duigou at oracle.com (Mike Duigou) Date: Wed, 16 Jul 2014 10:13:18 -0700 Subject: RFR: 8046765: (s) disable FORTIFY_SOURCE for files with optimization disabled. Message-ID: <3F7746F6-B85B-4DFF-95AF-60B442569806@oracle.com> Hello all; In some GCC distributions the FORTIFY_SOURCE option is incompatible with the -O0. This change disables FORTIFY sources for the files we know have optimizations disabled. jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047952 webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047952/0/webrev/ Unfortunately I don't have a Fedora 19 setup to test the change on the reported platform but I did verify that the compiler command line is correct, that fortify is disabled and the resulting build still works on a number of platforms. Additional verifications on other platforms is encouraged. The changeset will be pushed via hotspot-rt forest unless otherwise requested. Mike From vladimir.kozlov at oracle.com Wed Jul 16 19:02:19 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 16 Jul 2014 12:02:19 -0700 Subject: RFR: 8046765: (s) disable FORTIFY_SOURCE for files with optimization disabled. In-Reply-To: <3F7746F6-B85B-4DFF-95AF-60B442569806@oracle.com> References: <3F7746F6-B85B-4DFF-95AF-60B442569806@oracle.com> Message-ID: <53C6CC3B.5060508@oracle.com> No changes in make/linux/makefiles/ppc.make Why next was removed from linux makefiles?: OPT_CFLAGS/compactingPermGenGen.o = -O1 Vladimir On 7/16/14 10:13 AM, Mike Duigou wrote: > Hello all; > > In some GCC distributions the FORTIFY_SOURCE option is incompatible with the -O0. This change disables FORTIFY sources for the files we know have optimizations disabled. 
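The shape of the fix, assuming the per-file optimization-override convention HotSpot's gcc makefiles already use — the exact variable names and file list are in the webrev, not reproduced here:

```make
# _FORTIFY_SOURCE needs optimization to work; some GCC distributions
# error out when it is combined with -O0.  So wherever optimization is
# forced off, undefine the macro as well (illustrative sketch only).
OPT_CFLAGS/NOOPT = -O0 -U_FORTIFY_SOURCE
```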
> > jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047952 > webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047952/0/webrev/ > > Unfortunately I don't have a Fedora 19 setup to test the change on the reported platform but I did verify that the compiler command line is correct, that fortify is disabled and the resulting build still works on a number of platforms. Additional verifications on other platforms is encouraged. > > The changeset will be pushed via hotspot-rt forest unless otherwise requested. > > Mike > From mike.duigou at oracle.com Wed Jul 16 19:13:05 2014 From: mike.duigou at oracle.com (Mike Duigou) Date: Wed, 16 Jul 2014 12:13:05 -0700 Subject: RFR: 8046765: (s) disable FORTIFY_SOURCE for files with optimization disabled. In-Reply-To: <53C6CC3B.5060508@oracle.com> References: <3F7746F6-B85B-4DFF-95AF-60B442569806@oracle.com> <53C6CC3B.5060508@oracle.com> Message-ID: On Jul 16 2014, at 12:02 , Vladimir Kozlov wrote: > No changes in make/linux/makefiles/ppc.make That platform appears to use xlc rather than gcc in some case. I would need to be sure that whatever we changed there did not impact xlc compiles. > Why next was removed from linux makefiles?: > > OPT_CFLAGS/compactingPermGenGen.o = -O1 The file appears to no longer exist. I assumed that it was removed as part of permgen removal. > > Vladimir > > On 7/16/14 10:13 AM, Mike Duigou wrote: >> Hello all; >> >> In some GCC distributions the FORTIFY_SOURCE option is incompatible with the -O0. This change disables FORTIFY sources for the files we know have optimizations disabled. >> >> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047952 >> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047952/0/webrev/ >> >> Unfortunately I don't have a Fedora 19 setup to test the change on the reported platform but I did verify that the compiler command line is correct, that fortify is disabled and the resulting build still works on a number of platforms. 
Additional verifications on other platforms is encouraged. >> >> The changeset will be pushed via hotspot-rt forest unless otherwise requested. >> >> Mike >> From coleen.phillimore at oracle.com Wed Jul 16 19:37:02 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 16 Jul 2014 15:37:02 -0400 Subject: RFR 8004128: NPG: remove stackwalking in Threads::gc_prologue and gc_epilogue code In-Reply-To: <3929573.XurhNyGbnR@mgerdin03> References: <53C4705E.7060407@oracle.com> <1649311.XsSm0sYPeC@mgerdin03> <53C58E1D.9060802@oracle.com> <3929573.XurhNyGbnR@mgerdin03> Message-ID: <53C6D45E.8030609@oracle.com> Mikael, Thank you for the code review. Can someone from the compiler group review the mdx removal? thanks, Coleen On 7/16/14, 4:20 AM, Mikael Gerdin wrote: > > On Tuesday 15 July 2014 16.25.01 Coleen Phillimore wrote: > > > I didn't make this change to interpreter_frame_bcp or mdp_addr() at the > > > end. The frame code is consistent in returning intptr_t for objects on > > > the frame and then casting them to the right types. I think this is > better. > > Ok. > > /Mikael > > > > > > Thanks, > > > Coleen > > > > > > On 7/15/14, 11:40 AM, Mikael Gerdin wrote: > > > > Hi Coleen, > > > > > > > > On Monday 14 July 2014 20.05.50 Coleen Phillimore wrote: > > > >> Summary: remove bcx and mdx handling. We no longer have to convert > > > >> bytecode pointers or method data pointers to indices for GC since > > > >> Metadata aren't moved. > > > >> > > > >> Tested with nsk.quick.testlist, jck tests, JPRT. > > > >> > > > >> Most of this is renaming bcx to bcp and mdx to mdp. The content > changes > > > >> are in frame.cpp. StefanK implemented 90% of these changes. > > > >> > > > >> open webrev at http://cr.openjdk.java.net/~coleenp/8004128/ > > > > > > > > This isn't exactly my area of the code, but I'm happy that we got > around > > > > to > > > > this cleanup! 
> > > > > > > > I looked through the change and to my not-so-runtime-familiar eyes it > > > > seems > > > > good. > > > > > > > > One thought about the frame accessors > > > > > > > > 244 intptr_t* interpreter_frame_bcp_addr() const; > > > > 245 intptr_t* interpreter_frame_mdp_addr() const; > > > > > > > > Now that the contents of bcp and mdp in the frames are always > pointers, > > > > perhaps these accessors should be appropriately typed? > > > > > > > > Something like > > > > > > > > 244 address* interpreter_frame_bcp_addr() const; > > > > 245 ProfileData** interpreter_frame_mdp_addr() const; > > > > > > > > Also, BytecodeInterpreter still has a member named _mdx, should > that be > > > > renamed to _mdp as well? > > > > > > > > /Mikael > > > > > > > >> bug link https://bugs.openjdk.java.net/browse/JDK-8004128 > > > >> > > > >> Thanks, > > > >> Coleen > From vladimir.kozlov at oracle.com Wed Jul 16 20:25:40 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 16 Jul 2014 13:25:40 -0700 Subject: RFR: 8046765: (s) disable FORTIFY_SOURCE for files with optimization disabled. In-Reply-To: References: <3F7746F6-B85B-4DFF-95AF-60B442569806@oracle.com> <53C6CC3B.5060508@oracle.com> Message-ID: <53C6DFC4.7070302@oracle.com> On 7/16/14 12:13 PM, Mike Duigou wrote: > > On Jul 16 2014, at 12:02 , Vladimir Kozlov wrote: > >> No changes in make/linux/makefiles/ppc.make > > That platform appears to use xlc rather than gcc in some case. I would need to be sure that whatever we changed there did not impact xlc compiles. But the file is listed in webrev. > >> Why next was removed from linux makefiles?: >> >> OPT_CFLAGS/compactingPermGenGen.o = -O1 > > The file appears to no longer exist. I assumed that it was removed as part of permgen removal. Okay, make sense. Vladimir > >> >> Vladimir >> >> On 7/16/14 10:13 AM, Mike Duigou wrote: >>> Hello all; >>> >>> In some GCC distributions the FORTIFY_SOURCE option is incompatible with the -O0. 
This change disables FORTIFY sources for the files we know have optimizations disabled. >>> >>> jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047952 >>> webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047952/0/webrev/ >>> >>> Unfortunately I don't have a Fedora 19 setup to test the change on the reported platform but I did verify that the compiler command line is correct, that fortify is disabled and the resulting build still works on a number of platforms. Additional verifications on other platforms is encouraged. >>> >>> The changeset will be pushed via hotspot-rt forest unless otherwise requested. >>> >>> Mike >>> > From jon.masamitsu at oracle.com Wed Jul 16 20:45:02 2014 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Wed, 16 Jul 2014 13:45:02 -0700 Subject: RFR(XS): 8050228: Rename 'rem_size' in compactibleFreeListSpace.cpp because of name clashes on AIX" -f 8050228_rename_rem_size.patch In-Reply-To: References: <53C5DFFB.5010401@oracle.com> Message-ID: <53C6E44E.1080206@oracle.com> On 7/15/2014 8:02 PM, Vitaly Davidovich wrote: > How about rem_sz? :) Yes, rem_sz, please. There are example of such abbreviations in the file. I'll sponsor and push. Sending an hg patch with summary and reviewed-by info that I can import would be the easiest I think. Jon > > Sent from my phone > On Jul 15, 2014 10:14 PM, "David Holmes" wrote: > >> On 16/07/2014 2:54 AM, Volker Simonis wrote: >> >>> Hi, >>> >>> could somebody please review and sponsor this little change: >>> >> GC code should be taken by GC folk. (aka buck passing :) ) >> >> FWIW change looks fine to me. The name remain_size grates a little but >> spelling out remaining_size grates even more. I don't have a better >> suggestion but perhaps GC folk will. 
>> >> Cheers, >> David >> >> http://cr.openjdk.java.net/~simonis/webrevs/8050228/ >>> https://bugs.openjdk.java.net/browse/JDK-8050228 >>> >>> Background: >>> >>> I know this sounds crazy but it's true: there's an AIX header which >>> unconditionally defines rem_size: >>> >>> /usr/include/sys/xmem.h >>> struct xmem { >>> ... >>> #define rem_size u2._subspace_id2 >>> }; >>> >>> This breaks the compilation of >>> CompactibleFreeListSpace::splitChunkAndReturnRemainder() which uses a >>> local variable of the same name. >>> >>> Until now, we've worked around this problem by simply undefining >>> 'rem_size' in the platform specific file os_aix.inline.hpp but after >>> "8042195: Introduce umbrella header orderAccess.inline.hpp" this >>> doesn't seems to be enough any more. >>> >>> So before introducing yet another ugly platform dependent hack in >>> shared code or depending on a certain include order of otherwise >>> unrelated platform headers in shared code I suggest so simply give up >>> and rename the local variable. >>> >>> In this change I've renamed 'rem_size' to 'remain_size' because "rem" >>> is used as abbreviation of "remainder" in the code. But actually I'd >>> be happy with any other name which differs from "rem_size". >>> >>> Thank you and best regards, >>> Volker >>> >>> From mikael.vidstedt at oracle.com Wed Jul 16 21:06:45 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Wed, 16 Jul 2014 14:06:45 -0700 Subject: RFR(S): 8050802: Update jprt runthese test suite to jck-8 In-Reply-To: <53C5DDDE.4000608@oracle.com> References: <53C5B23B.9040604@oracle.com> <53C5C54B.60803@oracle.com> <53C5D824.7060808@oracle.com> <53C5DDDE.4000608@oracle.com> Message-ID: <53C6E965.2080609@oracle.com> Vladimir, Per our conversation off-list I've updated the webrev to split up all the -Xcomp tests into -Xcomp_lang and -Xcomp_vm with the understanding that this will likely add some small amount to job times in favor of symmetry. 
New webrevs: top: http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.01/top/webrev/ hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.01/hotspot/webrev/ Cheers, Mikael On 2014-07-15 19:05, Mikael Vidstedt wrote: > > Note, btw, that the reason why this linux-i586-fastdebug-Xcomp is the > culprit here is that that's the only platform where we're running > Xcomp on fastdebug, the other Xcomp are all on product. > > Cheers, > Mikael > > On 2014-07-15 18:40, Mikael Vidstedt wrote: >> >> From my empirical data the only test I've seen this "problem" with is >> the linux-i586-fastdebug-Xcomp; remember that there's a cost/overhead >> for setting up the individual tests too so splitting up the other >> Xcomp tests may actually make the job times longer. >> >> That said, if you feel that it's important for symmetry I can >> certainly do it. >> >> Cheers, >> Mikael >> >> On 2014-07-15 17:20, Vladimir Kozlov wrote: >>> Mikael, >>> >>> I think you should split Xcomp on all platforms (not only for >>> linux.i586) where it runs. >>> >>> thanks, >>> Vladimir >>> >>> On 7/15/14 3:59 PM, Mikael Vidstedt wrote: >>>> >>>> Please review the below change which switches the 'runThese' test >>>> suite >>>> over the new, jck-8 based 'runThese8' tests suite. The change also >>>> splits up the long running fastdebug-Xcomp test into two separate >>>> tests >>>> (fastdebug-Xcomp_lang and fastdebug-Xcomp_vm) to take advantage of the >>>> parallelism in jprt to reduce the job times further. 
>>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8050802 >>>> Webrev (hs-rt/ (top) repo): >>>> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/top/webrev/ >>>> >>>> Webrev (hs-rt/hotspot repo): >>>> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/hotspot/webrev/ >>>> >>>> >>>> >>>> Thanks, >>>> Mikael >>>> >> > From vladimir.kozlov at oracle.com Wed Jul 16 21:40:21 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 16 Jul 2014 14:40:21 -0700 Subject: RFR(S): 8050802: Update jprt runthese test suite to jck-8 In-Reply-To: <53C6E965.2080609@oracle.com> References: <53C5B23B.9040604@oracle.com> <53C5C54B.60803@oracle.com> <53C5D824.7060808@oracle.com> <53C5DDDE.4000608@oracle.com> <53C6E965.2080609@oracle.com> Message-ID: <53C6F145.1080005@oracle.com> Looks fine to me :) Thanks, Vladimir On 7/16/14 2:06 PM, Mikael Vidstedt wrote: > > Vladimir, > > Per our conversation off-list I've updated the webrev to split up all > the -Xcomp tests into -Xcomp_lang and -Xcomp_vm with the understanding > that this will likely add some small amount to job times in favor of > symmetry. New webrevs: > > top: > http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.01/top/webrev/ > hotspot: > http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.01/hotspot/webrev/ > > > Cheers, > Mikael > > On 2014-07-15 19:05, Mikael Vidstedt wrote: >> >> Note, btw, that the reason why this linux-i586-fastdebug-Xcomp is the >> culprit here is that that's the only platform where we're running >> Xcomp on fastdebug, the other Xcomp are all on product. >> >> Cheers, >> Mikael >> >> On 2014-07-15 18:40, Mikael Vidstedt wrote: >>> >>> From my empirical data the only test I've seen this "problem" with is >>> the linux-i586-fastdebug-Xcomp; remember that there's a cost/overhead >>> for setting up the individual tests too so splitting up the other >>> Xcomp tests may actually make the job times longer. 
>>> >>> That said, if you feel that it's important for symmetry I can >>> certainly do it. >>> >>> Cheers, >>> Mikael >>> >>> On 2014-07-15 17:20, Vladimir Kozlov wrote: >>>> Mikael, >>>> >>>> I think you should split Xcomp on all platforms (not only for >>>> linux.i586) where it runs. >>>> >>>> thanks, >>>> Vladimir >>>> >>>> On 7/15/14 3:59 PM, Mikael Vidstedt wrote: >>>>> >>>>> Please review the below change which switches the 'runThese' test >>>>> suite >>>>> over the new, jck-8 based 'runThese8' tests suite. The change also >>>>> splits up the long running fastdebug-Xcomp test into two separate >>>>> tests >>>>> (fastdebug-Xcomp_lang and fastdebug-Xcomp_vm) to take advantage of the >>>>> parallelism in jprt to reduce the job times further. >>>>> >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8050802 >>>>> Webrev (hs-rt/ (top) repo): >>>>> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/top/webrev/ >>>>> >>>>> Webrev (hs-rt/hotspot repo): >>>>> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/hotspot/webrev/ >>>>> >>>>> >>>>> >>>>> Thanks, >>>>> Mikael >>>>> >>> >> > From mikael.vidstedt at oracle.com Wed Jul 16 21:54:01 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Wed, 16 Jul 2014 14:54:01 -0700 Subject: RFR(S): 8050802: Update jprt runthese test suite to jck-8 In-Reply-To: <53C6F145.1080005@oracle.com> References: <53C5B23B.9040604@oracle.com> <53C5C54B.60803@oracle.com> <53C5D824.7060808@oracle.com> <53C5DDDE.4000608@oracle.com> <53C6E965.2080609@oracle.com> <53C6F145.1080005@oracle.com> Message-ID: <53C6F479.8020603@oracle.com> David/Vladimir - thanks for the reviews! 
Cheers, Mikael On 2014-07-16 14:40, Vladimir Kozlov wrote: > Looks fine to me :) > > Thanks, > Vladimir > > On 7/16/14 2:06 PM, Mikael Vidstedt wrote: >> >> Vladimir, >> >> Per our conversation off-list I've updated the webrev to split up all >> the -Xcomp tests into -Xcomp_lang and -Xcomp_vm with the understanding >> that this will likely add some small amount to job times in favor of >> symmetry. New webrevs: >> >> top: >> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.01/top/webrev/ >> hotspot: >> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.01/hotspot/webrev/ >> >> >> >> Cheers, >> Mikael >> >> On 2014-07-15 19:05, Mikael Vidstedt wrote: >>> >>> Note, btw, that the reason why this linux-i586-fastdebug-Xcomp is the >>> culprit here is that that's the only platform where we're running >>> Xcomp on fastdebug, the other Xcomp are all on product. >>> >>> Cheers, >>> Mikael >>> >>> On 2014-07-15 18:40, Mikael Vidstedt wrote: >>>> >>>> From my empirical data the only test I've seen this "problem" with is >>>> the linux-i586-fastdebug-Xcomp; remember that there's a cost/overhead >>>> for setting up the individual tests too so splitting up the other >>>> Xcomp tests may actually make the job times longer. >>>> >>>> That said, if you feel that it's important for symmetry I can >>>> certainly do it. >>>> >>>> Cheers, >>>> Mikael >>>> >>>> On 2014-07-15 17:20, Vladimir Kozlov wrote: >>>>> Mikael, >>>>> >>>>> I think you should split Xcomp on all platforms (not only for >>>>> linux.i586) where it runs. >>>>> >>>>> thanks, >>>>> Vladimir >>>>> >>>>> On 7/15/14 3:59 PM, Mikael Vidstedt wrote: >>>>>> >>>>>> Please review the below change which switches the 'runThese' test >>>>>> suite >>>>>> over the new, jck-8 based 'runThese8' tests suite. 
The change also >>>>>> splits up the long running fastdebug-Xcomp test into two separate >>>>>> tests >>>>>> (fastdebug-Xcomp_lang and fastdebug-Xcomp_vm) to take advantage >>>>>> of the >>>>>> parallelism in jprt to reduce the job times further. >>>>>> >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8050802 >>>>>> Webrev (hs-rt/ (top) repo): >>>>>> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/top/webrev/ >>>>>> >>>>>> >>>>>> Webrev (hs-rt/hotspot repo): >>>>>> http://cr.openjdk.java.net/~mikael/webrevs/8050802/webrev.00/hotspot/webrev/ >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> Thanks, >>>>>> Mikael >>>>>> >>>> >>> >> From asmundak at google.com Thu Jul 17 00:26:23 2014 From: asmundak at google.com (Alexander Smundak) Date: Wed, 16 Jul 2014 17:26:23 -0700 Subject: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le Message-ID: Hi, Please review the patch that ports template interpreter to the little-endian PowerPC64. I have tested it on linuxppc64le. I need a sponsor. http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.00/ Sasha From goetz.lindenmaier at sap.com Thu Jul 17 08:54:06 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 17 Jul 2014 08:54:06 +0000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> Hi, This webrev fixes an important concurrency issue in nmethod. Please review and test this change. I please need a sponsor. http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ This should be fixed into 8u20, too. The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. 
In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. Best regards, Martin and Goetz. From volker.simonis at gmail.com Thu Jul 17 09:44:19 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 17 Jul 2014 11:44:19 +0200 Subject: RFR(XS): 8050228: Rename 'rem_size' in compactibleFreeListSpace.cpp because of name clashes on AIX" -f 8050228_rename_rem_size.patch In-Reply-To: <53C6E44E.1080206@oracle.com> References: <53C5DFFB.5010401@oracle.com> <53C6E44E.1080206@oracle.com> Message-ID: Hi John, thanks a lot for reviewing and sponsoring his change. You can find the new, adapted webrev here: http://cr.openjdk.java.net/~simonis/webrevs/8050228.v2/ Regards, Volker On Wed, Jul 16, 2014 at 10:45 PM, Jon Masamitsu wrote: > > On 7/15/2014 8:02 PM, Vitaly Davidovich wrote: >> >> How about rem_sz? :) > > > Yes, rem_sz, please. There are example of such abbreviations > in the file. I'll sponsor and push. > > Sending an hg patch with summary and reviewed-by > info that I can import would be the easiest I think. > > Jon > > > >> >> Sent from my phone >> On Jul 15, 2014 10:14 PM, "David Holmes" wrote: >> >>> On 16/07/2014 2:54 AM, Volker Simonis wrote: >>> >>>> Hi, >>>> >>>> could somebody please review and sponsor this little change: >>>> >>> GC code should be taken by GC folk. (aka buck passing :) ) >>> >>> FWIW change looks fine to me. The name remain_size grates a little but >>> spelling out remaining_size grates even more. I don't have a better >>> suggestion but perhaps GC folk will. >>> >>> Cheers, >>> David >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8050228/ >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8050228 >>>> >>>> Background: >>>> >>>> I know this sounds crazy but it's true: there's an AIX header which >>>> unconditionally defines rem_size: >>>> >>>> /usr/include/sys/xmem.h >>>> struct xmem { >>>> ... 
>>>> #define rem_size u2._subspace_id2 >>>> }; >>>> >>>> This breaks the compilation of >>>> CompactibleFreeListSpace::splitChunkAndReturnRemainder() which uses a >>>> local variable of the same name. >>>> >>>> Until now, we've worked around this problem by simply undefining >>>> 'rem_size' in the platform specific file os_aix.inline.hpp but after >>>> "8042195: Introduce umbrella header orderAccess.inline.hpp" this >>>> doesn't seems to be enough any more. >>>> >>>> So before introducing yet another ugly platform dependent hack in >>>> shared code or depending on a certain include order of otherwise >>>> unrelated platform headers in shared code I suggest so simply give up >>>> and rename the local variable. >>>> >>>> In this change I've renamed 'rem_size' to 'remain_size' because "rem" >>>> is used as abbreviation of "remainder" in the code. But actually I'd >>>> be happy with any other name which differs from "rem_size". >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> > From goetz.lindenmaier at sap.com Thu Jul 17 10:20:24 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 17 Jul 2014 10:20:24 +0000 Subject: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le In-Reply-To: References: Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDAA66@DEWDFEMB12A.global.corp.sap> Hi Sasha, I tested your change. Unfortunately it breaks our port. 
You need to fix Unsigned to Signed: --- a/src/cpu/ppc/vm/templateTable_ppc_64.cpp Wed Jul 16 16:53:32 2014 -0700 +++ b/src/cpu/ppc/vm/templateTable_ppc_64.cpp Thu Jul 17 12:14:18 2014 +0200 @@ -1929,7 +1929,7 @@ // default case __ bind(Ldefault_case); - __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Unsigned); + __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Signed); if (ProfileInterpreter) { __ profile_switch_default(Rdef_offset_addr, Rcount/* scratch */); __ b(Lcontinue_execution); If you want to, you can move loading the bci in this bytecode behind the loop. Could you please fix indentation of relocInfo::none in call_c? Should be aligned to call_c. Best regards, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Alexander Smundak Sent: Donnerstag, 17. Juli 2014 02:26 To: HotSpot Open Source Developers Subject: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le Hi, Please review the patch that ports template interpreter to the little-endian PowerPC64. I have tested it on linuxppc64le. I need a sponsor. http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.00/ Sasha From goetz.lindenmaier at sap.com Thu Jul 17 10:47:08 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 17 Jul 2014 10:47:08 +0000 Subject: RFR(S): 8050978: Fix bad field access check in C1 and C2 Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDAAAC@DEWDFEMB12A.global.corp.sap> Hi, This fixes an error doing field access checks in C1 and C2. Please review and test the change. We please need a sponsor. http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ This should be included in 8u20, too. 
JCK8 test vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html fails with -Xbatch -Xcomp due to a bad field access check in C1 and C2 Precondition: ------------- Consider the following class hierarchy: A / \ B1 B2 A declares a field "aa" which both B1 and B2 inherit. Although aa is declared in a super class of B1, methods in B1 may not access the field aa of an object of class B2: class B1 extends A { m(B2 b2) { ... x = b2.aa; // !!! Access not allowed } } This is checked by the test mentioned above. Problem: -------- ciField::will_link() used by C1 and C2 does the access check using the canonical_holder (which is A in this case) and thus the access erroneously succeeds. Fix: ---- In ciField::ciField(), just before the canonical holder is stored into the _holder variable (which is used by ciField::will_link()), perform an additional access check with the holder declared in the class file. If this check fails, store the declared holder instead and ciField::will_link() will bail out of compilation for this field later on. Then, the interpreter will throw a PrivilegedAccessException at runtime.
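The hierarchy above can be sketched in plain Java (hypothetical names and file layout; the restriction described applies when 'aa' is protected and A lives in a different package than B1/B2 — per JLS 6.6.2.1 a protected member is then accessible only through expressions of the accessing subclass's own type. In a single file everything shares one package, so the rejected access is shown as a comment):

```java
// Sketch of the class hierarchy from the bug report (hypothetical names).
// With 'aa' protected and A in another package, JLS 6.6.2.1 lets B1
// touch 'aa' only through a B1-typed expression, never through a B2
// reference; here everything is in one package, so the disallowed case
// is left as a comment.
class A {
    protected int aa = 42;
}

class B2 extends A { }

class B1 extends A {
    int readOwn() { return this.aa; }        // allowed: access via B1
    int readSibling(B2 b2) {
        // return b2.aa;  // would be rejected across packages
        return -1;
    }
}

class AccessSketch {
    public static void main(String[] args) {
        System.out.println(new B1().readOwn());  // prints 42
    }
}
```

The JCK test exercises exactly the rejected variant, which is why the check has to use the holder named in the class file rather than the canonical holder A.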
Ways to reproduce: ------------------ Run the above JCK test with C2 only: -XX:-TieredCompilation -Xbatch -Xcomp or with C1: -Xbatch -Xcomp -XX:-Inline Best regards, Andreas and Goetz From goetz.lindenmaier at sap.com Thu Jul 17 13:43:22 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 17 Jul 2014 13:43:22 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDA633@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> <53C45912.4050905@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA517@DEWDFEMB12A.global.corp.sap> <53C4D63A.5060802@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA584@DEWDFEMB12A.global.corp.sap> <53C5332D.3010604@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA633@DEWDFEMB12A.global.corp.sap> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDAB46@DEWDFEMB12A.global.corp.sap> Hi, I made a new webrev because the old one wouldn't apply any more to rt. in vm_version.hpp the copyright was updated to 2014 so I had to remove my patch. Whitebox.cpp got a new include in the context of my patch. Both just minor adaptions. http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.02/ Best regards, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Lindenmaier, Goetz Sent: Dienstag, 15. 
Juli 2014 15:59 To: Coleen Phillimore; David Holmes; hotspot-dev at openjdk.java.net Subject: RE: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories Hi Coleen, It's a quite common pattern in hotspot to have a class specialized per platform by having headers that go into the middle of a class declaration. E.g., it's the same with os.hpp. Best regards, Goetz. -----Original Message----- From: Coleen Phillimore [mailto:coleen.phillimore at oracle.com] Sent: Dienstag, 15. Juli 2014 15:57 To: Lindenmaier, Goetz; David Holmes; hotspot-dev at openjdk.java.net Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories It also seems to me that these vmreg_ppc.hpp inline functions are special in that they are included directly in the class declaration, rather than the preferred separate class declaration. So I think this doesn't follow the "rules" as such because this case is different. It would be nice to clean out these includes in another cleanup pass. I hit the same cycles on the closed part but didn't realize it was because of cycles. Thanks, Coleen On 7/15/14, 5:18 AM, Lindenmaier, Goetz wrote: > Hi David, > > There are no clean rules followed, which happens to cause > compile problems here and there. I try to clean this up a bit. > > If inline function foo() calls another inline function bar(), the c++ compiler > must see both implementations to compile foo (else it obviously can't > inline). It must see the declaration of the function to be inlined before > the function where it is inlined. If there are cyclic inlines you need inline.hpp > headers to get a safe state. Also, to be on the safe side, .hpp files may never include > .inline.hpp files, else an implementation can end up above the declaration > it needs. See also the two examples attached. > > If there is no cycle, it doesn't matter. That's why a lot of functions > are not placed according to this scheme.
> > For the functions I moved to the header (path_separator etc): > They are used in a lot of .hpp files. Moving them to os.hpp I easily could avoid > including the os.inline.hpp in .hpp files, which would be bad. > > Best regards, > Goetz. > > > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 15. Juli 2014 09:20 > To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > On 15/07/2014 4:34 PM, Lindenmaier, Goetz wrote: >> Hi David, >> >> functions that are completely self contained can go into the .hpp. >> Functions that call another inline function defined in an other header >> must go to .inline.hpp as else there could be cycles the c++ compilers can't >> deal with. > A quick survey of the shared *.inline.hpp files shows many don't seem to > fit this definition. Are templates also something that needs special > handling? > > I'm not saying anything is wrong with your changes, just trying to > understand what the rules are. > > Thanks, > David > >> Best regards, >> Goetz. >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Dienstag, 15. Juli 2014 00:26 >> To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >> >> On 14/07/2014 10:37 PM, Lindenmaier, Goetz wrote: >>> Hi Coleen, >>> >>> Thanks for sponsoring this! >>> >>> bytes, ad, nativeInst and vmreg.inline were used quite often >>> in shared files, so it definitely makes sense for these to have >>> a shared header. >>> vm_version and register had an umbrella header, but that >>> was not used everywhere, so I cleaned it up. >>> That left adGlobals, jniTypes and interp_masm which >>> are only used a few time. 
I did these so that all files >>> are treated similarly. >>> In the end, I didn't need a header for all, as they were >>> not really needed in the shared files, or I found >>> another good place, as for adGlobals. >>> >>> I added you and David H. as reviewer to the webrev: >>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>> I hope this is ok with you, David. >> It might be somewhat premature :) I somewhat confused by the rules for >> headers and includes and inlines. I now see with this change a bunch of >> inline function definitions being moved out of the .inline.hpp file and >> into the .hpp file. Why? What criteria determines if an inline function >> goes into the .hpp versus the .inline.hpp file ??? >> >> Thanks, >> David >> >>> Thanks, >>> Goetz. >>> >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore >>> Sent: Montag, 14. Juli 2014 14:09 >>> To: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>> >>> >>> I think this looks like a good cleanup. I can sponsor it and make the >>> closed changes also again. I initially proposed the #include cascades >>> because the alternative at the time was to blindly create a dispatching >>> header file for each target dependent file. I wanted to see the >>> #includes cleaned up instead and target dependent files included >>> directly. This adds 5 dispatching header files, which is fine. I >>> think the case of interp_masm.hpp is interesting though, because the >>> dispatching file is included in cpu dependent files, which could >>> directly include the cpu version. But there are 3 platform independent >>> files that include it. I'm not going to object though because I'm >>> grateful for this cleanup and I guess it's a matter of opinion which is >>> best to include in the cpu dependent directories. 
>>> >>> Thanks, >>> Coleen >>> >>> >>> On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: >>>> Hi, >>>> >>>> David, can I consider this a review? >>>> >>>> And I please need a sponsor for this change. Could somebody >>>> please help here? Probably some closed adaptions are needed. >>>> It applies to any repo as my other change traveled around >>>> by now. >>>> >>>> Thanks and best regards, >>>> Goetz. >>>> >>>> >>>> -----Original Message----- >>>> From: David Holmes [mailto:david.holmes at oracle.com] >>>> Sent: Freitag, 11. Juli 2014 07:19 >>>> To: Lindenmaier, Goetz; Lois Foltan >>>> Cc: hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>>> >>>> On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >>>>> Hi, >>>>> >>>>> foo.hpp as few includes as possible, to avoid cycles. >>>>> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >>>>> (either directly or via the platform files.) >>>>> * should include foo.platform.inline.hpp, so that shared files that >>>>> call functions from foo.platform.inline.hpp need not contain the >>>>> cascade of all the platform files. >>>>> If code in foo.platform.inline.hpp is only used in the platform files, >>>>> it is not necessary to have an umbrella header. >>>>> foo.platform.inline.hpp Should include what is needed in its code. >>>>> >>>>> For client code: >>>>> With this change I now removed all include cascades of platform files except for >>>>> those in the 'natural' headers. >>>>> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >>>>> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >>>>> headers, but include bar.[inline.]hpp.) >>>>> If it's 1:1, I don't care, as discussed before. >>>>> >>>>> Does this make sense? >>>> I find the overall structure somewhat counter-intuitive from an >>>> implementation versus interface perspective. But ... 
>>>> >>>> Thanks for the explanation. >>>> >>>> David >>>> >>>>> Best regards, >>>>> Goetz. >>>>> >>>>> >>>>> which of the above should #include which others, and which should be >>>>> #include'd by "client" code? >>>>> >>>>> Thanks, >>>>> David >>>>> >>>>>> Thanks, >>>>>> Lois >>>>>> >>>>>>> David >>>>>>> ----- >>>>>>> >>>>>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>>>>> (however this could pull in more code than needed since >>>>>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>>>>> >>>>>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>>>>> - change not related to clean up of umbrella headers, please >>>>>>>> explain/justify. >>>>>>>> >>>>>>>> src/share/vm/code/vmreg.hpp >>>>>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>>>>> vmreg.inline.hpp or will >>>>>>>> this introduce a cyclical inclusion situation, since >>>>>>>> vmreg.inline.hpp includes vmreg.hpp? >>>>>>>> >>>>>>>> src/share/vm/classfile/classFileStream.cpp >>>>>>>> - only has a copyright change in the file, no other changes >>>>>>>> present? >>>>>>>> >>>>>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>>>>> - incorrect copyright, no current year? >>>>>>>> >>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>> - incorrect copyright date for a new file >>>>>>>> >>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>> - technically this new file does not need to include >>>>>>>> "asm/register.hpp" since >>>>>>>> vmreg.hpp already includes it >>>>>>>> >>>>>>>> My only lingering concern is the cyclical nature of >>>>>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>>>>> is not much difference between the two? 
>>>>>>>> >>>>>>>> Thanks, >>>>>>>> Lois >>>>>>>> >>>>>>>> >>>>>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I decided to clean up the remaining include cascades, too. >>>>>>>>> >>>>>>>>> This change introduces umbrella headers for the files in the cpu >>>>>>>>> subdirectories: >>>>>>>>> >>>>>>>>> src/share/vm/utilities/bytes.hpp >>>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>>> src/share/vm/code/nativeInst.hpp >>>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>>>>> >>>>>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>>>>> >>>>>>>>> Where possible, this change avoids includes in headers. >>>>>>>>> Eventually it adds a forward declaration. >>>>>>>>> >>>>>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>>>>> rather small. >>>>>>>>> >>>>>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>>>>> contains machine dependent, c2 specific register information. So I >>>>>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>>>>> includes in, >>>>>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>>>>> >>>>>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>>>>> file, I don't think this is performance relevant. But having them in >>>>>>>>> the header requirs to pull interp_masm.hpp into interpreter.hpp, and >>>>>>>>> thus all the assembler include headers into a lot of files. >>>>>>>>> >>>>>>>>> Please review and test this change. I please need a sponsor. 
>>>>>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>>>>> >>>>>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>>>>> linuxppc64, >>>>>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>>>>> aixppc64, ntamd64 >>>>>>>>> in opt, dbg and fastdbg versions. >>>>>>>>> >>>>>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>>>>> arrives in other >>>>>>>>> repos, it will work there, too. (I tested it together with the other >>>>>>>>> change >>>>>>>>> against jdk9/dev, too.) >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Goetz. >>>>>>>>> >>>>>>>>> PS: I also did all the Copyright adaptions ;) From vladimir.kozlov at oracle.com Thu Jul 17 15:08:56 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 17 Jul 2014 08:08:56 -0700 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> Message-ID: <53C7E708.8060208@oracle.com> Hi Goetz, What is the reason for new typedef? Thanks, Vladimir On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: > Hi, > > This webrev fixes an important concurrency issue in nmethod. > Please review and test this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > This should be fixed into 8u20, too. > > The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. > Best regards, > Martin and Goetz. 
> From martin.doerr at sap.com Thu Jul 17 15:19:45 2014 From: martin.doerr at sap.com (Doerr, Martin) Date: Thu, 17 Jul 2014 15:19:45 +0000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <53C7E708.8060208@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> Message-ID: <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> Hi Vladimir, the following line should also work: PcDesc* volatile _pc_descs[cache_size]; But we thought that the typedef would improve readability. The array elements must be volatile, not the PcDescs which are pointed to. Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov Sent: Donnerstag, 17. Juli 2014 17:09 To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache Hi Goetz, What is the reason for new typedef? Thanks, Vladimir On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: > Hi, > > This webrev fixes an important concurrency issue in nmethod. > Please review and test this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > This should be fixed into 8u20, too. > > The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. > Best regards, > Martin and Goetz. 
> From vladimir.kozlov at oracle.com Thu Jul 17 15:52:22 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 17 Jul 2014 08:52:22 -0700 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> Message-ID: <53C7F136.3000709@oracle.com> First, comments needs to be fixed: "The array elements must be volatile" but in changeset comments: "// Array MUST be volatile!" Second, type name should be camel style (PcDescPtr). Someone have to double check this volatile declaration. Your example is more clear for me than typedef. Thanks, Vladimir On 7/17/14 8:19 AM, Doerr, Martin wrote: > Hi Vladimir, > > the following line should also work: > PcDesc* volatile _pc_descs[cache_size]; > But we thought that the typedef would improve readability. > The array elements must be volatile, not the PcDescs which are pointed to. > > Best regards, > Martin > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov > Sent: Donnerstag, 17. Juli 2014 17:09 > To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > Hi Goetz, > > What is the reason for new typedef? > > Thanks, > Vladimir > > On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> This webrev fixes an important concurrency issue in nmethod. >> Please review and test this change. I please need a sponsor. >> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >> >> This should be fixed into 8u20, too. >> >> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. 
Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. >> Best regards, >> Martin and Goetz. >> From vladimir.kozlov at oracle.com Thu Jul 17 16:01:45 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 17 Jul 2014 09:01:45 -0700 Subject: RFR(S): 8050978: Fix bad field access check in C1 and C2 In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDAAAC@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDAAAC@DEWDFEMB12A.global.corp.sap> Message-ID: <53C7F369.5070706@oracle.com> Please don't put the next part of the comment into the sources: + // This will make the jck8 test + // vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html + // pass with -Xbatch -Xcomp Instead, add something like "canonical_holder should not be used to check access because it can erroneously succeed". Thanks, Vladimir On 7/17/14 3:47 AM, Lindenmaier, Goetz wrote: > Hi, > > This fixes an error doing field access checks in C1 and C2. > Please review and test the change. We please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ > > This should be included in 8u20, too. > > JCK8 test vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html fails with -Xbatch -Xcomp due to bad field access check in C1 and C2 > > Precondition: > ------------- > > Consider the following class hierarchy: > > A > / \ > B1 B2 > > A declares a field "aa" which both B1 and B2 inherit. > > Despite aa is declared in a super class of B1, methods in B1 might not access the field aa of an object of class B2: > > class B1 extends A { > m(B2 b2) { > ... > x = b2.aa; // !!! Access not allowed > } > } > > This is checked by the test mentioned above.
> > Problem: > -------- > > ciField::will_link() used by C1 and C2 does the access check using the canonical_holder (which is A in this case) and thus the access erroneously succeeds. > > Fix: > ---- > > In ciField::ciField(), just before the canonical holder is stored into the _holder variable (and which is used by ciField::will_link()) perform an additional access check with the holder declared in the class file. If this check fails, store the declared holder instead and ciField::will_link() will bail out compilation for this field later on. Then, the interpreter will throw an PrivilegedAccessException at runtime. > > Ways to reproduce: > ------------------ > > Run the above JCK test with > > C2 only: -XX:-TieredCompilation -Xbatch -Xcomp > > or > > with C1: -Xbatch -Xcomp -XX:-Inline > > Best regards, > Andreas and Goetz > > From vitalyd at gmail.com Thu Jul 17 16:39:18 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Thu, 17 Jul 2014 12:39:18 -0400 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> Message-ID: Hi Martin, Is volatile enough though if the entries are read/written concurrently? What about needing, e.g., store-store barriers when writing an entry into the array? Sent from my phone On Jul 17, 2014 11:20 AM, "Doerr, Martin" wrote: > Hi Vladimir, > > the following line should also work: > PcDesc* volatile _pc_descs[cache_size]; > But we thought that the typedef would improve readability. > The array elements must be volatile, not the PcDescs which are pointed to. > > Best regards, > Martin > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf > Of Vladimir Kozlov > Sent: Donnerstag, 17. 
Juli 2014 17:09 > To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > Hi Goetz, > > What is the reason for new typedef? > > Thanks, > Vladimir > > On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: > > Hi, > > > > This webrev fixes an important concurrency issue in nmethod. > > Please review and test this change. I please need a sponsor. > > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > > > This should be fixed into 8u20, too. > > > > The entries of the PcDesc cache in nmethods are not declared as > volatile, but they are accessed and modified by several threads > concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory > accesses to non-volatile fields. In this case, this has led to the > situation that a thread had successfully matched a pc in the cache, but > returned the reloaded value which was already overwritten by another thread. > > Best regards, > > Martin and Goetz. > > > From coleen.phillimore at oracle.com Thu Jul 17 17:59:43 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 17 Jul 2014 13:59:43 -0400 Subject: RFR 8004128: NPG: remove stackwalking in Threads::gc_prologue and gc_epilogue code In-Reply-To: <53C4705E.7060407@oracle.com> References: <53C4705E.7060407@oracle.com> Message-ID: <53C80F0F.8050600@oracle.com> I created another webrev with the better version of webrev (with source file navigation) which makes this much easier to read. open webrev at http://cr.openjdk.java.net/~coleenp/8004128_2/ Thanks! Coleen On 7/14/14, 8:05 PM, Coleen Phillimore wrote: > Summary: remove bcx and mdx handling. We no longer have to convert > bytecode pointers or method data pointers to indices for GC since > Metadata aren't moved. > > Tested with nsk.quick.testlist, jck tests, JPRT. > > Most of this is renaming bcx to bcp and mdx to mdp. The content > changes are in frame.cpp. StefanK implemented 90% of these changes. 
> > open webrev at http://cr.openjdk.java.net/~coleenp/8004128/ > bug link https://bugs.openjdk.java.net/browse/JDK-8004128 > > Thanks, > Coleen From vladimir.kozlov at oracle.com Thu Jul 17 18:27:11 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 17 Jul 2014 11:27:11 -0700 Subject: RFR 8004128: NPG: remove stackwalking in Threads::gc_prologue and gc_epilogue code In-Reply-To: <53C80F0F.8050600@oracle.com> References: <53C4705E.7060407@oracle.com> <53C80F0F.8050600@oracle.com> Message-ID: <53C8157F.3080504@oracle.com> This looks good. I looked through it several times and found nothing suspicious. Thanks, Vladimir On 7/17/14 10:59 AM, Coleen Phillimore wrote: > > I created another webrev with the better version of webrev (with source > file navigation) which makes this much easier to read. > > open webrev at http://cr.openjdk.java.net/~coleenp/8004128_2/ > > Thanks! > Coleen > > > On 7/14/14, 8:05 PM, Coleen Phillimore wrote: >> Summary: remove bcx and mdx handling. We no longer have to convert >> bytecode pointers or method data pointers to indices for GC since >> Metadata aren't moved. >> >> Tested with nsk.quick.testlist, jck tests, JPRT. >> >> Most of this is renaming bcx to bcp and mdx to mdp. The content >> changes are in frame.cpp. StefanK implemented 90% of these changes. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8004128/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8004128 >> >> Thanks, >> Coleen > From coleen.phillimore at oracle.com Thu Jul 17 18:55:50 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 17 Jul 2014 14:55:50 -0400 Subject: RFR 8004128: NPG: remove stackwalking in Threads::gc_prologue and gc_epilogue code In-Reply-To: <53C8157F.3080504@oracle.com> References: <53C4705E.7060407@oracle.com> <53C80F0F.8050600@oracle.com> <53C8157F.3080504@oracle.com> Message-ID: <53C81C36.3030003@oracle.com> Thank you, Vladimir! 
Coleen On 7/17/14, 2:27 PM, Vladimir Kozlov wrote: > This looks good. I looked through it several times and found nothing > suspicious. > > Thanks, > Vladimir > > On 7/17/14 10:59 AM, Coleen Phillimore wrote: >> >> I created another webrev with the better version of webrev (with source >> file navigation) which makes this much easier to read. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8004128_2/ >> >> Thanks! >> Coleen >> >> >> On 7/14/14, 8:05 PM, Coleen Phillimore wrote: >>> Summary: remove bcx and mdx handling. We no longer have to convert >>> bytecode pointers or method data pointers to indices for GC since >>> Metadata aren't moved. >>> >>> Tested with nsk.quick.testlist, jck tests, JPRT. >>> >>> Most of this is renaming bcx to bcp and mdx to mdp. The content >>> changes are in frame.cpp. StefanK implemented 90% of these changes. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8004128/ >>> bug link https://bugs.openjdk.java.net/browse/JDK-8004128 >>> >>> Thanks, >>> Coleen >> From zhengyu.gu at oracle.com Thu Jul 17 18:58:05 2014 From: zhengyu.gu at oracle.com (Zhengyu Gu) Date: Thu, 17 Jul 2014 14:58:05 -0400 Subject: RFR(XS) 8050165: linux-sparcv9: NMT detail causes assert((intptr_t*)younger_sp[FP->sp_offset_in_saved_window()] == (intptr_t*)((intptr_t)sp - STACK_BIAS)) failed: younger_sp must be valid Message-ID: <53C81CBD.6080000@oracle.com> Please review this small fix to enable NMT stack walking, also removed debugging print. This fix is for 7u80. Bug: https://bugs.openjdk.java.net/browse/JDK-8050165 Webrev: http://cr.openjdk.java.net/~zgu/8050165/webrev.00/ Thanks, -Zhengyu From mikael.vidstedt at oracle.com Thu Jul 17 19:49:17 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Thu, 17 Jul 2014 12:49:17 -0700 Subject: [8u40] 8047740: Add hotspot testset to jprt.properties Message-ID: <53C828BD.2030508@oracle.com> Please review this backport of 8047740 from 9 to 8u-dev. 
The backport is almost the same as the original changeset - only the version (jdk9 vs jdk8u20) is different, along with the corresponding --with-update-version configure argument. Currently the update version is set to '20' which matches what the corresponding hotspot/make/jprt.properties file uses. Bug: https://bugs.openjdk.java.net/browse/JDK-8047740 jdk9 changeset: http://hg.openjdk.java.net/jdk9/dev/rev/9f96a36ef77c webrev (8udev): http://cr.openjdk.java.net/~mikael/webrevs/8047740-8udev/webrev.00/webrev/ Cheers, Mikael From coleen.phillimore at oracle.com Thu Jul 17 21:31:46 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 17 Jul 2014 17:31:46 -0400 Subject: RFR(XS) 8050165: linux-sparcv9: NMT detail causes assert((intptr_t*)younger_sp[FP->sp_offset_in_saved_window()] == (intptr_t*)((intptr_t)sp - STACK_BIAS)) failed: younger_sp must be valid In-Reply-To: <53C81CBD.6080000@oracle.com> References: <53C81CBD.6080000@oracle.com> Message-ID: <53C840C2.9040904@oracle.com> Yes, this looks good. Coleen On 7/17/14, 2:58 PM, Zhengyu Gu wrote: > Please review this small fix to enable NMT stack walking, also removed > debugging print. This fix is for 7u80. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8050165 > Webrev: http://cr.openjdk.java.net/~zgu/8050165/webrev.00/ > > > Thanks, > > -Zhengyu From mikael.vidstedt at oracle.com Thu Jul 17 21:34:27 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Thu, 17 Jul 2014 14:34:27 -0700 Subject: RFR(XS) 8050165: linux-sparcv9: NMT detail causes assert((intptr_t*)younger_sp[FP->sp_offset_in_saved_window()] == (intptr_t*)((intptr_t)sp - STACK_BIAS)) failed: younger_sp must be valid In-Reply-To: <53C840C2.9040904@oracle.com> References: <53C81CBD.6080000@oracle.com> <53C840C2.9040904@oracle.com> Message-ID: <53C84163.1090608@oracle.com> Looks good! Cheers, Mikael On 2014-07-17 14:31, Coleen Phillimore wrote: > > Yes, this looks good. 
> Coleen > > On 7/17/14, 2:58 PM, Zhengyu Gu wrote: >> Please review this small fix to enable NMT stack walking, also >> removed debugging print. This fix is for 7u80. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8050165 >> Webrev: http://cr.openjdk.java.net/~zgu/8050165/webrev.00/ >> >> >> Thanks, >> >> -Zhengyu > From asmundak at google.com Fri Jul 18 00:58:17 2014 From: asmundak at google.com (Alexander Smundak) Date: Thu, 17 Jul 2014 17:58:17 -0700 Subject: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDAA66@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDAA66@DEWDFEMB12A.global.corp.sap> Message-ID: On Thu, Jul 17, 2014 at 3:20 AM, Lindenmaier, Goetz wrote: > I tested your change. Unfortunately it breaks our port. You need to fix Unsigned to > Signed: > > --- a/src/cpu/ppc/vm/templateTable_ppc_64.cpp Wed Jul 16 16:53:32 2014 -0700 > +++ b/src/cpu/ppc/vm/templateTable_ppc_64.cpp Thu Jul 17 12:14:18 2014 +0200 > @@ -1929,7 +1929,7 @@ > // default case > __ bind(Ldefault_case); > > - __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Unsigned); > + __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Signed); > if (ProfileInterpreter) { > __ profile_switch_default(Rdef_offset_addr, Rcount/* scratch */); > __ b(Lcontinue_execution); Oops. Fixed. Which test was broken by this, BTW? > If you want to, you can move loading the bci in this bytecode behind the loop. Done. > Could you please fix indentation of relocInfo::none in call_c? Should > be aligned to call_c. Done. The revised patch is at http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.01/ please take another look. 
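The Unsigned-to-Signed fix quoted above matters because the default offset read by get_u4 for the switch bytecodes is a signed 32-bit value: zero-extending a negative offset into a 64-bit register turns a backward branch into a bogus huge forward one. A minimal sketch of the two widenings (hypothetical helper names, not the HotSpot code; the int32_t conversion assumes the usual two's-complement behavior):

```cpp
#include <cstdint>

// Widen a raw 32-bit bytecode offset into a 64-bit register value.
int64_t widen_signed(uint32_t raw)   { return (int32_t)raw; }  // sign-extends
int64_t widen_unsigned(uint32_t raw) { return raw; }           // zero-extends
```

For a backward branch encoded as 0xFFFFFF00 (offset -256), the signed widening yields -256 while the unsigned widening yields 4294967040 — a branch target far outside the method.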
Sasha From david.holmes at oracle.com Fri Jul 18 02:09:54 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 18 Jul 2014 12:09:54 +1000 Subject: [8u40] 8047740: Add hotspot testset to jprt.properties In-Reply-To: <53C828BD.2030508@oracle.com> References: <53C828BD.2030508@oracle.com> Message-ID: <53C881F2.1030700@oracle.com> Hi Mikael, This looks good to me. One request: can you add an additional comment: 92 # i586 platforms have both client and server, but to allow for overriding the exact configuration 93 # on a per-build flavor basis the value is set for the individual build flavors + # All other platforms only build server, which is the default setting from configure Thanks! As a reminder for other readers/reviewers/casual-observers, the JDK testing is unchanged; the hotspot testing is as currently specified by the hotspot jprt.properties files; the non-open platforms are all handled in a non-open jprt.properties files. This doesn't add any JDK testing when doing full builds and pushes. Thanks, David On 18/07/2014 5:49 AM, Mikael Vidstedt wrote: > > Please review this backport of 8047740 from 9 to 8u-dev. The backport is > almost the same as the original changeset - only the version (jdk9 vs > jdk8u20) is different, along with the corresponding > --with-update-version configure argument. Currently the update version > is set to '20' which matches what the corresponding > hotspot/make/jprt.properties file uses. 
> > Bug: https://bugs.openjdk.java.net/browse/JDK-8047740 > jdk9 changeset: http://hg.openjdk.java.net/jdk9/dev/rev/9f96a36ef77c > > webrev (8udev): > http://cr.openjdk.java.net/~mikael/webrevs/8047740-8udev/webrev.00/webrev/ > > Cheers, > Mikael > From david.holmes at oracle.com Fri Jul 18 05:28:07 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 18 Jul 2014 15:28:07 +1000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> Message-ID: <53C8B067.5070100@oracle.com> On 17/07/2014 6:54 PM, Lindenmaier, Goetz wrote: > Hi, > > This webrev fixes an important concurrency issue in nmethod. > Please review and test this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > This should be fixed into 8u20, too. > > The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. Do you mean that code like: int val = ptr->field; int val2 = 2*val; might result in the compiler generating two loads for ptr->field? That would surely break a lot of lock-free algorithms! Or are you saying that would only happen if field (ptr?) is not volatile? Otherwise this change seems insufficient to ensure general MT correctness as Vitaly suggested. David > Best regards, > Martin and Goetz. 
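To make the failure mode in this thread concrete: the cache is an array of pointer slots where writers only ever store a fully constructed entry or NULL, and a reader must load a slot exactly once and return the value it matched. A simplified, hypothetical sketch of that pattern (illustrative names, not the actual nmethod/PcDescCache code):

```cpp
#include <cstddef>

struct PcDesc { int pc_offset; };

class PcDescCache {
  static const int cache_size = 4;
  // Volatile slots: the compiler may neither re-load a slot after the
  // match nor tear the pointer store.
  PcDesc* volatile _pc_descs[cache_size];
 public:
  PcDescCache() { for (int i = 0; i < cache_size; i++) _pc_descs[i] = NULL; }
  // Writers only ever store a fully constructed PcDesc or NULL.
  void add_pc_desc(int slot, PcDesc* pcd) { _pc_descs[slot % cache_size] = pcd; }
  PcDesc* find_pc_desc(int pc_offset) {
    for (int i = 0; i < cache_size; i++) {
      PcDesc* res = _pc_descs[i];  // exactly one load into a local
      if (res != NULL && res->pc_offset == pc_offset) {
        return res;                // return what was matched, never a re-load
      }
    }
    return NULL;                   // cache miss: caller takes the slow path
  }
};
```

Without the volatile qualifier a compiler is free to re-load `_pc_descs[i]` after the match, and a concurrent writer can change the slot between the two loads — the xlC 12 behavior described in this thread.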
> From mikael.vidstedt at oracle.com Fri Jul 18 06:05:15 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Thu, 17 Jul 2014 23:05:15 -0700 Subject: [8u40] 8047740: Add hotspot testset to jprt.properties In-Reply-To: <53C881F2.1030700@oracle.com> References: <53C828BD.2030508@oracle.com> <53C881F2.1030700@oracle.com> Message-ID: <53C8B91B.80100@oracle.com> David, Thanks for the review, I'll add the comment before committing! Cheers, Mikael On 2014-07-17 19:09, David Holmes wrote: > Hi Mikael, > > This looks good to me. One request: can you add an additional comment: > > 92 # i586 platforms have both client and server, but to allow for > overriding the exact configuration > 93 # on a per-build flavor basis the value is set for the individual > build flavors > + # All other platforms only build server, which is the default > setting from configure > > Thanks! > > As a reminder for other readers/reviewers/casual-observers, the JDK > testing is unchanged; the hotspot testing is as currently specified by > the hotspot jprt.properties files; the non-open platforms are all > handled in a non-open jprt.properties files. This doesn't add any JDK > testing when doing full builds and pushes. > > Thanks, > David > > > On 18/07/2014 5:49 AM, Mikael Vidstedt wrote: >> >> Please review this backport of 8047740 from 9 to 8u-dev. The backport is >> almost the same as the original changeset - only the version (jdk9 vs >> jdk8u20) is different, along with the corresponding >> --with-update-version configure argument. Currently the update version >> is set to '20' which matches what the corresponding >> hotspot/make/jprt.properties file uses. 
>> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8047740 >> jdk9 changeset: http://hg.openjdk.java.net/jdk9/dev/rev/9f96a36ef77c >> >> webrev (8udev): >> http://cr.openjdk.java.net/~mikael/webrevs/8047740-8udev/webrev.00/webrev/ >> >> >> Cheers, >> Mikael >> From goetz.lindenmaier at sap.com Fri Jul 18 07:15:01 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 18 Jul 2014 07:15:01 +0000 Subject: RFR(S): 8050978: Fix bad field access check in C1 and C2 In-Reply-To: <53C7F369.5070706@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CEDAAAC@DEWDFEMB12A.global.corp.sap> <53C7F369.5070706@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDAD04@DEWDFEMB12A.global.corp.sap> Hi Vladimir, we updated the changeset with the new comment. http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ Best regards, Goetz. -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Donnerstag, 17. Juli 2014 18:02 To: hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050978: Fix bad field access check in C1 and C2 Please, don't put next part of comment into sources: + // This will make the jck8 test + // vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html + // pass with -Xbatch -Xcomp instead add something like "canonical_holder should not be use to check access becasue it can erroneously succeeds". Thanks, Vladimir On 7/17/14 3:47 AM, Lindenmaier, Goetz wrote: > Hi, > > This fixes an error doing field access checks in C1 and C2. > Please review and test the change. We please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ > > This should be included in 8u20, too. 
> > JCK8 test vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html fails with -Xbatch -Xcomp due to bad field access check in C1 and C2 > > Precondition: > ------------- > > Consider the following class hierarchy: > > A > / \ > B1 B2 > > A declares a field "aa" which both B1 and B2 inherit. > > Despite aa is declared in a super class of B1, methods in B1 might not access the field aa of an object of class B2: > > class B1 extends A { > m(B2 b2) { > ... > x = b2.aa; // !!! Access not allowed > } > } > > This is checked by the test mentioned above. > > Problem: > -------- > > ciField::will_link() used by C1 and C2 does the access check using the canonical_holder (which is A in this case) and thus the access erroneously succeeds. > > Fix: > ---- > > In ciField::ciField(), just before the canonical holder is stored into the _holder variable (and which is used by ciField::will_link()) perform an additional access check with the holder declared in the class file. If this check fails, store the declared holder instead and ciField::will_link() will bail out compilation for this field later on. Then, the interpreter will throw an PrivilegedAccessException at runtime. > > Ways to reproduce: > ------------------ > > Run the above JCK test with > > C2 only: -XX:-TieredCompilation -Xbatch -Xcomp > > or > > with C1: -Xbatch -Xcomp -XX:-Inline > > Best regards, > Andreas and Goetz > > From goetz.lindenmaier at sap.com Fri Jul 18 08:12:59 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 18 Jul 2014 08:12:59 +0000 Subject: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le In-Reply-To: References: <4295855A5C1DE049A61835A1887419CC2CEDAA66@DEWDFEMB12A.global.corp.sap> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDAD4A@DEWDFEMB12A.global.corp.sap> Hi Sasha, thanks, now it works. I just ran jvm98/javac. Comprehensive tests will be executed tonight. Best regards, Goetz. 
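Returning to the 8050978 field-access fix above: the essence of the change is that ciField must check access against the holder named in the class file, not only the canonical holder, and on failure keep the declared holder so that the later will_link() check bails out and the interpreter raises the access error at runtime. A stripped-down model of that decision (hypothetical types and names, not the real ciField code):

```cpp
// Toy stand-in for the compiler-interface class type.
struct Klass { const char* name; };

// Assumed access predicate; the real check applies the JVM access rules.
typedef bool (*AccessCheck)(const Klass* accessor, const Klass* holder);

// Pick the holder that the ciField keeps: the canonical holder when the
// declared holder is accessible, otherwise the declared holder, which a
// later will_link()-style check rejects, bailing out compilation.
const Klass* select_field_holder(AccessCheck can_access,
                                 const Klass* accessor,
                                 const Klass* canonical_holder,
                                 const Klass* declared_holder) {
  if (!can_access(accessor, declared_holder)) {
    return declared_holder;   // will_link() fails for this holder later
  }
  return canonical_holder;    // access legal: keep the canonical holder
}
```

In the A/B1/B2 example from the thread, the canonical holder of `aa` is A but the class-file reference names B2; checking only A is what let the illegal access slip through.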
-----Original Message----- From: Alexander Smundak [mailto:asmundak at google.com] Sent: Freitag, 18. Juli 2014 02:58 To: Lindenmaier, Goetz Cc: HotSpot Open Source Developers Subject: Re: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le On Thu, Jul 17, 2014 at 3:20 AM, Lindenmaier, Goetz wrote: > I tested your change. Unfortunately it breaks our port. You need to fix Unsigned to > Signed: > > --- a/src/cpu/ppc/vm/templateTable_ppc_64.cpp Wed Jul 16 16:53:32 2014 -0700 > +++ b/src/cpu/ppc/vm/templateTable_ppc_64.cpp Thu Jul 17 12:14:18 2014 +0200 > @@ -1929,7 +1929,7 @@ > // default case > __ bind(Ldefault_case); > > - __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Unsigned); > + __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Signed); > if (ProfileInterpreter) { > __ profile_switch_default(Rdef_offset_addr, Rcount/* scratch */); > __ b(Lcontinue_execution); Oops. Fixed. Which test was broken by this, BTW? > If you want to, you can move loading the bci in this bytecode behind the loop. Done. > Could you please fix indentation of relocInfo::none in call_c? Should > be aligned to call_c. Done. The revised patch is at http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.01/ please take another look. Sasha From martin.doerr at sap.com Fri Jul 18 08:27:25 2014 From: martin.doerr at sap.com (Doerr, Martin) Date: Fri, 18 Jul 2014 08:27:25 +0000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <53C8B067.5070100@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C8B067.5070100@oracle.com> Message-ID: <7C9B87B351A4BA4AA9EC95BB418116566ACB7BD1@DEWDFEMB19C.global.corp.sap> Hi David, yes, the compiler is allowed to generate 2 loads in your example. And yes, declaring the field volatile suffices to prevent this. 
Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of David Holmes Sent: Freitag, 18. Juli 2014 07:28 To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache On 17/07/2014 6:54 PM, Lindenmaier, Goetz wrote: > Hi, > > This webrev fixes an important concurrency issue in nmethod. > Please review and test this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > This should be fixed into 8u20, too. > > The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. Do you mean that code like: int val = ptr->field; int val2 = 2*val; might result in the compiler generating two loads for ptr->field? That would surely break a lot of lock-free algorithms! Or are you saying that would only happen if field (ptr?) is not volatile? Otherwise this change seems insufficient to ensure general MT correctness as Vitaly suggested. David > Best regards, > Martin and Goetz. > From martin.doerr at sap.com Fri Jul 18 08:34:46 2014 From: martin.doerr at sap.com (Doerr, Martin) Date: Fri, 18 Jul 2014 08:34:46 +0000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> Message-ID: <7C9B87B351A4BA4AA9EC95BB418116566ACB7BE8@DEWDFEMB19C.global.corp.sap> Hi Vitaly, yes, volatile is enough. There's no requirement on the ordering.

The requirement for the writers is that they must only write valid entries or NULL. The requirement for the readers is that they must not read several times for the matching and the returning of the result. If a reader doesn't find a value in the cache, the slow path will be used. Best regards, Martin From: Vitaly Davidovich [mailto:vitalyd at gmail.com] Sent: Donnerstag, 17. Juli 2014 18:39 To: Doerr, Martin Cc: hotspot-dev developers; Vladimir Kozlov; Lindenmaier, Goetz Subject: RE: RFR(S): 8050972: Concurrency problem in PcDesc cache Hi Martin, Is volatile enough though if the entries are read/written concurrently? What about needing, e.g., store-store barriers when writing an entry into the array? Sent from my phone On Jul 17, 2014 11:20 AM, "Doerr, Martin" > wrote: Hi Vladimir, the following line should also work: PcDesc* volatile _pc_descs[cache_size]; But we thought that the typedef would improve readability. The array elements must be volatile, not the PcDescs which are pointed to. Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov Sent: Donnerstag, 17. Juli 2014 17:09 To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache Hi Goetz, What is the reason for new typedef? Thanks, Vladimir On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: > Hi, > > This webrev fixes an important concurrency issue in nmethod. > Please review and test this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > This should be fixed into 8u20, too. > > The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields.
In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. > Best regards, > Martin and Goetz. > From goetz.lindenmaier at sap.com Fri Jul 18 09:08:35 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 18 Jul 2014 09:08:35 +0000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <53C7F136.3000709@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <53C7F136.3000709@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> Hi Vladimir, We fixed the comment and camel case stuff. http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ We think it looks better if volatile is before the type. Best regards, Martin and Goetz. -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Donnerstag, 17. Juli 2014 17:52 To: Doerr, Martin; Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache First, comments needs to be fixed: "The array elements must be volatile" but in changeset comments: "// Array MUST be volatile!" Second, type name should be camel style (PcDescPtr). Someone have to double check this volatile declaration. Your example is more clear for me than typedef. Thanks, Vladimir On 7/17/14 8:19 AM, Doerr, Martin wrote: > Hi Vladimir, > > the following line should also work: > PcDesc* volatile _pc_descs[cache_size]; > But we thought that the typedef would improve readability. > The array elements must be volatile, not the PcDescs which are pointed to. 
> > Best regards, > Martin > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov > Sent: Donnerstag, 17. Juli 2014 17:09 > To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > Hi Goetz, > > What is the reason for new typedef? > > Thanks, > Vladimir > > On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> This webrev fixes an important concurrency issue in nmethod. >> Please review and test this change. I please need a sponsor. >> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >> >> This should be fixed into 8u20, too. >> >> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. >> Best regards, >> Martin and Goetz. >> From david.holmes at oracle.com Fri Jul 18 10:04:20 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 18 Jul 2014 20:04:20 +1000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <7C9B87B351A4BA4AA9EC95BB418116566ACB7BD1@DEWDFEMB19C.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C8B067.5070100@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7BD1@DEWDFEMB19C.global.corp.sap> Message-ID: <53C8F124.30701@oracle.com> On 18/07/2014 6:27 PM, Doerr, Martin wrote: > Hi David, > > yes, the compiler is allowed to generate 2 loads in your example. > And yes, declaring the field volatile suffices to prevent this. Surprising - but thank goodness it does the right thing for volatiles! Thanks for clarifying the ordering non-issue. 
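As a side note on the declaration being reviewed in this thread, the position of `volatile` relative to the `*` decides whether the pointer or the pointee is volatile — the distinction Martin points out about the array elements. A small illustration (hypothetical `Data` type):

```cpp
struct Data { int x; };

// Pointee is volatile: accesses through p1 (*p1, p1->x) are volatile
// accesses, but the pointer variable p1 itself is ordinary.
volatile Data* p1 = nullptr;

// Pointer is volatile: loads and stores of p2 itself cannot be elided or
// duplicated -- the property the PcDesc cache slots need.
Data* volatile p2 = nullptr;
```

In the cache, `PcDesc* volatile _pc_descs[cache_size]` follows the second form: each array slot is a volatile pointer, while the PcDesc objects pointed to are ordinary.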
David > Best regards, > Martin > > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of David Holmes > Sent: Freitag, 18. Juli 2014 07:28 > To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > On 17/07/2014 6:54 PM, Lindenmaier, Goetz wrote: >> Hi, >> >> This webrev fixes an important concurrency issue in nmethod. >> Please review and test this change. I please need a sponsor. >> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >> >> This should be fixed into 8u20, too. >> >> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. > > Do you mean that code like: > > int val = ptr->field; > int val2 = 2*val; > > might result in the compiler generating two loads for ptr->field? > > That would surely break a lot of lock-free algorithms! Or are you saying > that would only happen if field (ptr?) is not volatile? > > Otherwise this change seems insufficient to ensure general MT > correctness as Vitaly suggested. > > David > >> Best regards, >> Martin and Goetz. 
>> From david.holmes at oracle.com Fri Jul 18 10:05:49 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 18 Jul 2014 20:05:49 +1000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <53C7F136.3000709@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> Message-ID: <53C8F17D.3040706@oracle.com> On 18/07/2014 7:08 PM, Lindenmaier, Goetz wrote: > Hi Vladimir, > > We fixed the comment and camel case stuff. > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ I'm not normally a fan of "documentary" comments but in this case I'm okay with it :) David > We think it looks better if volatile is before the type. > > Best regards, > Martin and Goetz. > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Donnerstag, 17. Juli 2014 17:52 > To: Doerr, Martin; Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > First, comments needs to be fixed: > > "The array elements must be volatile" but in changeset comments: "// Array MUST be volatile!" > > Second, type name should be camel style (PcDescPtr). > > Someone have to double check this volatile declaration. Your example is more clear for me than typedef. > > Thanks, > Vladimir > > On 7/17/14 8:19 AM, Doerr, Martin wrote: >> Hi Vladimir, >> >> the following line should also work: >> PcDesc* volatile _pc_descs[cache_size]; >> But we thought that the typedef would improve readability. >> The array elements must be volatile, not the PcDescs which are pointed to. 
>> >> Best regards, >> Martin >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov >> Sent: Donnerstag, 17. Juli 2014 17:09 >> To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >> >> Hi Goetz, >> >> What is the reason for new typedef? >> >> Thanks, >> Vladimir >> >> On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> This webrev fixes an important concurrency issue in nmethod. >>> Please review and test this change. I please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >>> >>> This should be fixed into 8u20, too. >>> >>> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. >>> Best regards, >>> Martin and Goetz. >>> From tobias.hartmann at oracle.com Fri Jul 18 11:38:40 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 18 Jul 2014 13:38:40 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C639A2.3050202@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> <53C639A2.3050202@oracle.com> Message-ID: <53C90740.40602@oracle.com> Hi, I spend some more days and was finally able to implement a test that deterministically triggers the bug: http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ @Vladimir: The test shows why we should only clean the ICs but not unload the nmethod if possible. 
The method ' doWork' is still valid after WorkerClass was unloaded and depending on the complexity of the method we should avoid unloading it. On Sparc my patch fixes the bug and leads to the nmethod not being unloaded. The compiled version is therefore used even after WorkerClass is unloaded. On x86 the nmethod is unloaded anyway because of a dead oop. This is probably due to a slightly different implementation of the ICs. I'll have a closer look to see if we can improve that. Thanks, Tobias On 16.07.2014 10:36, Tobias Hartmann wrote: > Sorry, forgot to answer this question: >> Were you able to create a small test case for it that would be useful >> to add? > Unfortunately I was not able to create a test. The bug only reproduces > on a particular system with a > 30 minute run of runThese. > > Best, > Tobias > > On 16.07.2014 09:54, Tobias Hartmann wrote: >> Hi Coleen, >> >> thanks for the review. >>> *+ if (csc->is_call_to_interpreted() && >>> stub_contains_dead_metadata(is_alive, csc->destination())) {* >>> *+ csc->set_to_clean();* >>> *+ }* >>> >>> This appears in each case. Can you fold it and the new function >>> into a function like clean_call_to_interpreted_stub(is_alive, csc)? >> >> I folded it into the function clean_call_to_interpreter_stub(..). >> >> New webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >> >> Thanks, >> Tobias >> >>> >>> Thanks, >>> Coleen >>> >>>> >>>> So before the permgen removal embedded method* were oops and they >>>> were processed in relocInfo::oop_type loop. >>>> >>>> May be instead of specializing opt_virtual_call_type and >>>> static_call_type call site you can simple add a loop for >>>> relocInfo::metadata_type (similar to oop_type loop)? >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>> Hi, >>>>> >>>>> please review the following patch for JDK-8029443. 
>>>>> >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>> >>>>> *Problem* >>>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) >>>>> checks >>>>> if a nmethod can be unloaded because it contains dead oops. If class >>>>> unloading occurred we additionally clear all ICs where the cached >>>>> metadata refers to an unloaded klass or method. If the nmethod is not >>>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >>>>> metadata is alive. The assert in CheckClass::check_class fails >>>>> because >>>>> the nmethod contains Method* metadata corresponding to a dead Klass. >>>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>>> compiled IC. Normally we clear those stubs prior to verification to >>>>> avoid dangling references to Method* [2], but only if the stub is >>>>> not in >>>>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>>>> to-interpreter stub may be executed and hand a stale Method* to the >>>>> interpreter. >>>>> >>>>> *Solution >>>>> *The implementation of nmethod::do_unloading(..) is changed to clean >>>>> compiled ICs and compiled static calls if they call into a >>>>> to-interpreter stub that references dead Method* metadata. >>>>> >>>>> The patch was affected by the G1 class unloading changes >>>>> (JDK-8048248) >>>>> because the method nmethod::do_unloading_parallel(..) was added. I >>>>> adapted the implementation as well. >>>>> * >>>>> Testing >>>>> *Failing test (runThese) >>>>> JPRT >>>>> >>>>> Thanks, >>>>> Tobias >>>>> >>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>> >> > From vitalyd at gmail.com Fri Jul 18 11:53:02 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Fri, 18 Jul 2014 07:53:02 -0400 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <7C9B87B351A4BA4AA9EC95BB418116566ACB7BE8@DEWDFEMB19C.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <7C9B87B351A4BA4AA9EC95BB418116566ACB7BE8@DEWDFEMB19C.global.corp.sap> Message-ID: Hi Martin, What if the write of a new entry results in address being set for the entry before the writes into the entry's fields? Is that still ok? Reader may see non-NULL partially written entry, it seems like ... I was thinking you need store-store barrier for that when writing the entry into the array. Sent from my phone On Jul 18, 2014 4:35 AM, "Doerr, Martin" wrote: > Hi Vitaly, > > > > yes, volatile is enough. There's no requirement on the ordering. > > The requirement for the writers is that they must only write valid entries > or NULL. > > The requirement for the readers is that they must not read several times > for the matching and the returning of the result. > > If a reader doesn't find a value in the cache, the slow path will be used. > > > > Best regards, > > Martin > > > > > > *From:* Vitaly Davidovich [mailto:vitalyd at gmail.com] > *Sent:* Donnerstag, 17. Juli 2014 18:39 > *To:* Doerr, Martin > *Cc:* hotspot-dev developers; Vladimir Kozlov; Lindenmaier, Goetz > *Subject:* RE: RFR(S): 8050972: Concurrency problem in PcDesc cache > > > > Hi Martin, > > Is volatile enough though if the entries are read/written concurrently? > What about needing, e.g., store-store barriers when writing an entry into > the array?
> > Sent from my phone > > On Jul 17, 2014 11:20 AM, "Doerr, Martin" wrote: > > Hi Vladimir, > > the following line should also work: > PcDesc* volatile _pc_descs[cache_size]; > But we thought that the typedef would improve readability. > The array elements must be volatile, not the PcDescs which are pointed to. > > Best regards, > Martin > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf > Of Vladimir Kozlov > Sent: Donnerstag, 17. Juli 2014 17:09 > To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > Hi Goetz, > > What is the reason for new typedef? > > Thanks, > Vladimir > > On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: > > Hi, > > > > This webrev fixes an important concurrency issue in nmethod. > > Please review and test this change. I please need a sponsor. > > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > > > This should be fixed into 8u20, too. > > > > The entries of the PcDesc cache in nmethods are not declared as > volatile, but they are accessed and modified by several threads > concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory > accesses to non-volatile fields. In this case, this has led to the > situation that a thread had successfully matched a pc in the cache, but > returned the reloaded value which was already overwritten by another thread. > > Best regards, > > Martin and Goetz. 
> > > From martin.doerr at sap.com Fri Jul 18 12:38:54 2014 From: martin.doerr at sap.com (Doerr, Martin) Date: Fri, 18 Jul 2014 12:38:54 +0000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <7C9B87B351A4BA4AA9EC95BB418116566ACB7BE8@DEWDFEMB19C.global.corp.sap> Message-ID: <7C9B87B351A4BA4AA9EC95BB418116566ACB7C53@DEWDFEMB19C.global.corp.sap> Hi Vitaly, the PcDescs are already there before the cache gets used (nmethod creation). The cache is just an array of pointers. It is true that the pointers need to get written atomically. However, this is also ensured by making them volatile. So this is a second reason why we need this change. Best regards, Martin From: Vitaly Davidovich [mailto:vitalyd at gmail.com] Sent: Freitag, 18. Juli 2014 13:53 To: Doerr, Martin Cc: Vladimir Kozlov; hotspot-dev developers; Lindenmaier, Goetz Subject: RE: RFR(S): 8050972: Concurrency problem in PcDesc cache Hi Martin, What if the write of a new entry results in address being set for the entry before the writes into the entry's fields? Is that still ok? Reader may see non-NULL partially written entry, it seems like ... I was thinking you need store-store barrier for that when writing the entry into the array. Sent from my phone On Jul 18, 2014 4:35 AM, "Doerr, Martin" > wrote: Hi Vitaly, yes, volatile is enough. There's no requirement on the ordering. The requirement for the writers is that they must only write valid entries or NULL. The requirement for the readers is that they must not read several times for the matching and the returning of the result. If a reader doesn't find a value in the cache, the slow path will be used. Best regards, Martin From: Vitaly Davidovich [mailto:vitalyd at gmail.com] Sent: Donnerstag, 17. 
Juli 2014 18:39 To: Doerr, Martin Cc: hotspot-dev developers; Vladimir Kozlov; Lindenmaier, Goetz Subject: RE: RFR(S): 8050972: Concurrency problem in PcDesc cache Hi Martin, Is volatile enough though if the entries are read/written concurrently? What about needing, e.g., store-store barriers when writing an entry into the array? Sent from my phone On Jul 17, 2014 11:20 AM, "Doerr, Martin" > wrote: Hi Vladimir, the following line should also work: PcDesc* volatile _pc_descs[cache_size]; But we thought that the typedef would improve readability. The array elements must be volatile, not the PcDescs which are pointed to. Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov Sent: Donnerstag, 17. Juli 2014 17:09 To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache Hi Goetz, What is the reason for new typedef? Thanks, Vladimir On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: > Hi, > > This webrev fixes an important concurrency issue in nmethod. > Please review and test this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > This should be fixed into 8u20, too. > > The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. > Best regards, > Martin and Goetz. 
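[Editorial note: the fix under review can be sketched as follows; the names are illustrative, not the actual HotSpot sources. Each array element is declared `PcDesc* volatile`, so a writer publishes a complete entry (or NULL) with one atomic pointer store, and a reader loads each slot exactly once into a local and returns the same value it matched:]

```cpp
#include <cassert>
#include <cstddef>

struct PcDesc { int pc_offset; };

enum { cache_size = 4 };

struct PcDescCache {
  // The array *elements* are volatile pointers (PcDesc* volatile), not
  // the PcDescs they point to: the compiler may not duplicate or
  // re-load an element access, and pointer stores publish atomically.
  PcDesc* volatile _pc_descs[cache_size];

  PcDescCache() {
    for (int i = 0; i < cache_size; i++) _pc_descs[i] = NULL;
  }

  // Writers only ever store NULL or a pointer to a fully constructed
  // PcDesc (they all exist from nmethod creation on), so no further
  // ordering is required.
  void add_pc_desc(int slot, PcDesc* pcd) { _pc_descs[slot] = pcd; }

  PcDesc* find_pc_desc(int pc_offset) {
    for (int i = 0; i < cache_size; i++) {
      PcDesc* res = _pc_descs[i];  // single read of the volatile slot
      if (res != NULL && res->pc_offset == pc_offset) {
        return res;  // return the value that matched, never a re-read
      }
    }
    return NULL;  // cache miss: the caller falls back to the slow path
  }
};
```

[This matches the discussed requirements: writers store only valid entries or NULL, and a reader never re-reads a slot between matching and returning.]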
> From vitalyd at gmail.com Fri Jul 18 12:43:03 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Fri, 18 Jul 2014 08:43:03 -0400 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <7C9B87B351A4BA4AA9EC95BB418116566ACB7C53@DEWDFEMB19C.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <7C9B87B351A4BA4AA9EC95BB418116566ACB7BE8@DEWDFEMB19C.global.corp.sap> <7C9B87B351A4BA4AA9EC95BB418116566ACB7C53@DEWDFEMB19C.global.corp.sap> Message-ID: Ah, ok - didn't realize they're in the array already. Thanks for the explanation. Sent from my phone On Jul 18, 2014 8:40 AM, "Doerr, Martin" wrote: > Hi Vitaly, > > > > the PcDescs are already there before the cache gets used (nmethod > creation). The cache is just an array of pointers. It is true that the > pointers need to get written atomically. > > However, this is also ensured by making them volatile. So this is a second > reason why we need this change. > > > > Best regards, > > Martin > > > > > > *From:* Vitaly Davidovich [mailto:vitalyd at gmail.com] > *Sent:* Freitag, 18. Juli 2014 13:53 > *To:* Doerr, Martin > *Cc:* Vladimir Kozlov; hotspot-dev developers; Lindenmaier, Goetz > *Subject:* RE: RFR(S): 8050972: Concurrency problem in PcDesc cache > > > > Hi Martin, > > What if the write of a new entry results in address being set for the > entry before the writes into the entry's fields? Is that still ok? Reader > may see non-NULL partially written entry, it seems like ... I was thinking > you need store-store barrier for that when writing the entry into the array. > > Sent from my phone > > On Jul 18, 2014 4:35 AM, "Doerr, Martin" wrote: > > Hi Vitaly, > > > > yes, volatile is enough. There's no requirement on the ordering. > > The requirement for the writers is that they must only write valid entries > or NULL. 
> > The requirement for the readers is that they must not read several times > for the matching and the returning of the result. > > If a reader doesn't find a value in the cache, the slow path will be used. > > > > Best regards, > > Martin > > > > > > *From:* Vitaly Davidovich [mailto:vitalyd at gmail.com] > *Sent:* Donnerstag, 17. Juli 2014 18:39 > *To:* Doerr, Martin > *Cc:* hotspot-dev developers; Vladimir Kozlov; Lindenmaier, Goetz > *Subject:* RE: RFR(S): 8050972: Concurrency problem in PcDesc cache > > > > Hi Martin, > > Is volatile enough though if the entries are read/written concurrently? > What about needing, e.g., store-store barriers when writing an entry into > the array? > > Sent from my phone > > On Jul 17, 2014 11:20 AM, "Doerr, Martin" wrote: > > Hi Vladimir, > > the following line should also work: > PcDesc* volatile _pc_descs[cache_size]; > But we thought that the typedef would improve readability. > The array elements must be volatile, not the PcDescs which are pointed to. > > Best regards, > Martin > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf > Of Vladimir Kozlov > Sent: Donnerstag, 17. Juli 2014 17:09 > To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > Hi Goetz, > > What is the reason for new typedef? > > Thanks, > Vladimir > > On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: > > Hi, > > > > This webrev fixes an important concurrency issue in nmethod. > > Please review and test this change. I please need a sponsor. > > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > > > This should be fixed into 8u20, too. > > > > The entries of the PcDesc cache in nmethods are not declared as > volatile, but they are accessed and modified by several threads > concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory > accesses to non-volatile fields. 
In this case, this has led to the > situation that a thread had successfully matched a pc in the cache, but > returned the reloaded value which was already overwritten by another thread. > > Best regards, > > Martin and Goetz. > > > From goetz.lindenmaier at sap.com Fri Jul 18 12:47:01 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 18 Jul 2014 12:47:01 +0000 Subject: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDAE6B@DEWDFEMB12A.global.corp.sap> Hi, This fixes two missing Resource and Handle marks. Please review and test this change. We please need a sponsor to push it. http://cr.openjdk.java.net/~goetz/webrevs/8050973-mark/webrev-01/ Should this be pushed to 8u20? Thanks and best regards, Martin and Goetz From goetz.lindenmaier at sap.com Fri Jul 18 12:58:31 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 18 Jul 2014 12:58:31 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> <53C45912.4050905@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA517@DEWDFEMB12A.global.corp.sap> <53C4D63A.5060802@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDAE85@DEWDFEMB12A.global.corp.sap> Hi David, does this clear the situation? Can we consider the change reviewed? Thanks and best regards, Goetz. -----Original Message----- From: Lindenmaier, Goetz Sent: Dienstag, 15. 
Juli 2014 11:18 To: 'David Holmes'; Coleen Phillimore; hotspot-dev at openjdk.java.net Subject: RE: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories Hi David, There are no clean rules followed, which happens to cause compile problems here and there. I try to clean this up a bit. If inline function foo() calls another inline function bar(), the c++ compiler must see both implementations to compile foo (else it obviously can't inline). It must see the declaration of the function to be inlined before the function where it is inlined. If there are cyclic inlines you need inline.hpp headers to get a safe state. Also, to be on the safe side, .hpp files never may include .inline.hpp files, else an implementation can end up above the declaration it needs. See also the two examples attached. If there is no cycle, it doesn't matter. That's why a lot of functions are not placed according to this scheme. For the functions I moved to the header (path_separator etc): They are used in a lot of .hpp files. Moving them to os.hpp I easily could avoid including the os.inline.hpp in .hpp files, which would be bad. Best regards, Goetz. -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Dienstag, 15. Juli 2014 09:20 To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories On 15/07/2014 4:34 PM, Lindenmaier, Goetz wrote: > Hi David, > > functions that are completely self contained can go into the .hpp. > Functions that call another inline function defined in an other header > must go to .inline.hpp as else there could be cycles the c++ compilers can't > deal with. A quick survey of the shared *.inline.hpp files shows many don't seem to fit this definition. Are templates also something that needs special handling? 
I'm not saying anything is wrong with your changes, just trying to understand what the rules are. Thanks, David > Best regards, > Goetz. > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 15. Juli 2014 00:26 > To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > On 14/07/2014 10:37 PM, Lindenmaier, Goetz wrote: >> Hi Coleen, >> >> Thanks for sponsoring this! >> >> bytes, ad, nativeInst and vmreg.inline were used quite often >> in shared files, so it definitely makes sense for these to have >> a shared header. >> vm_version and register had an umbrella header, but that >> was not used everywhere, so I cleaned it up. >> That left adGlobals, jniTypes and interp_masm which >> are only used a few time. I did these so that all files >> are treated similarly. >> In the end, I didn't need a header for all, as they were >> not really needed in the shared files, or I found >> another good place, as for adGlobals. >> >> I added you and David H. as reviewer to the webrev: >> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >> I hope this is ok with you, David. > > It might be somewhat premature :) I somewhat confused by the rules for > headers and includes and inlines. I now see with this change a bunch of > inline function definitions being moved out of the .inline.hpp file and > into the .hpp file. Why? What criteria determines if an inline function > goes into the .hpp versus the .inline.hpp file ??? > > Thanks, > David > >> Thanks, >> Goetz. >> >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore >> Sent: Montag, 14. 
Juli 2014 14:09 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >> >> >> I think this looks like a good cleanup. I can sponsor it and make the >> closed changes also again. I initially proposed the #include cascades >> because the alternative at the time was to blindly create a dispatching >> header file for each target dependent file. I wanted to see the >> #includes cleaned up instead and target dependent files included >> directly. This adds 5 dispatching header files, which is fine. I >> think the case of interp_masm.hpp is interesting though, because the >> dispatching file is included in cpu dependent files, which could >> directly include the cpu version. But there are 3 platform independent >> files that include it. I'm not going to object though because I'm >> grateful for this cleanup and I guess it's a matter of opinion which is >> best to include in the cpu dependent directories. >> >> Thanks, >> Coleen >> >> >> On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> David, can I consider this a review? >>> >>> And I please need a sponsor for this change. Could somebody >>> please help here? Probably some closed adaptions are needed. >>> It applies to any repo as my other change traveled around >>> by now. >>> >>> Thanks and best regards, >>> Goetz. >>> >>> >>> -----Original Message----- >>> From: David Holmes [mailto:david.holmes at oracle.com] >>> Sent: Freitag, 11. Juli 2014 07:19 >>> To: Lindenmaier, Goetz; Lois Foltan >>> Cc: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>> >>> On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >>>> Hi, >>>> >>>> foo.hpp as few includes as possible, to avoid cycles. >>>> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >>>> (either directly or via the platform files.) 
>>>> * should include foo.platform.inline.hpp, so that shared files that >>>> call functions from foo.platform.inline.hpp need not contain the >>>> cascade of all the platform files. >>>> If code in foo.platform.inline.hpp is only used in the platform files, >>>> it is not necessary to have an umbrella header. >>>> foo.platform.inline.hpp Should include what is needed in its code. >>>> >>>> For client code: >>>> With this change I now removed all include cascades of platform files except for >>>> those in the 'natural' headers. >>>> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >>>> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >>>> headers, but include bar.[inline.]hpp.) >>>> If it's 1:1, I don't care, as discussed before. >>>> >>>> Does this make sense? >>> I find the overall structure somewhat counter-intuitive from an >>> implementation versus interface perspective. But ... >>> >>> Thanks for the explanation. >>> >>> David >>> >>>> Best regards, >>>> Goetz. >>>> >>>> >>>> which of the above should #include which others, and which should be >>>> #include'd by "client" code? >>>> >>>> Thanks, >>>> David >>>> >>>>> Thanks, >>>>> Lois >>>>> >>>>>> David >>>>>> ----- >>>>>> >>>>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>>>> (however this could pull in more code than needed since >>>>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>>>> >>>>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>>>> - change not related to clean up of umbrella headers, please >>>>>>> explain/justify. >>>>>>> >>>>>>> src/share/vm/code/vmreg.hpp >>>>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>>>> vmreg.inline.hpp or will >>>>>>> this introduce a cyclical inclusion situation, since >>>>>>> vmreg.inline.hpp includes vmreg.hpp? 
>>>>>>> >>>>>>> src/share/vm/classfile/classFileStream.cpp >>>>>>> - only has a copyright change in the file, no other changes >>>>>>> present? >>>>>>> >>>>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>>>> - incorrect copyright, no current year? >>>>>>> >>>>>>> src/share/vm/opto/ad.hpp >>>>>>> - incorrect copyright date for a new file >>>>>>> >>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>> - technically this new file does not need to include >>>>>>> "asm/register.hpp" since >>>>>>> vmreg.hpp already includes it >>>>>>> >>>>>>> My only lingering concern is the cyclical nature of >>>>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>>>> is not much difference between the two? >>>>>>> >>>>>>> Thanks, >>>>>>> Lois >>>>>>> >>>>>>> >>>>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> I decided to clean up the remaining include cascades, too. >>>>>>>> >>>>>>>> This change introduces umbrella headers for the files in the cpu >>>>>>>> subdirectories: >>>>>>>> >>>>>>>> src/share/vm/utilities/bytes.hpp >>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>> src/share/vm/code/nativeInst.hpp >>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>>>> >>>>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>>>> >>>>>>>> Where possible, this change avoids includes in headers. >>>>>>>> Eventually it adds a forward declaration. >>>>>>>> >>>>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>>>> rather small. 
>>>>>>>> >>>>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>>>> contains machine dependent, c2 specific register information. So I >>>>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>>>> includes in, >>>>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>>>> >>>>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>>>> file, I don't think this is performance relevant. But having them in >>>>>>>> the header requires pulling interp_masm.hpp into interpreter.hpp, and >>>>>>>> thus all the assembler include headers into a lot of files. >>>>>>>> >>>>>>>> Please review and test this change. I please need a sponsor. >>>>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>>>> >>>>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>>>> linuxppc64, >>>>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>>>> aixppc64, ntamd64 >>>>>>>> in opt, dbg and fastdbg versions. >>>>>>>> >>>>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>>>> arrives in other >>>>>>>> repos, it will work there, too. (I tested it together with the other >>>>>>>> change >>>>>>>> against jdk9/dev, too.) >>>>>>>> >>>>>>>> Best regards, >>>>>>>> Goetz. >>>>>>>> >>>>>>>> PS: I also did all the Copyright adaptations ;) >> From daniel.daugherty at oracle.com Fri Jul 18 13:43:20 2014 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Fri, 18 Jul 2014 07:43:20 -0600 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <53C7F136.3000709@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> Message-ID: <53C92478.40909@oracle.com> On 7/18/14 3:08 AM, Lindenmaier, Goetz wrote: > Hi Vladimir, > > We fixed the comment and camel case stuff. > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ Please consider modifying this comment: C++ compiler (namely xlC12) may duplicate C++ field accesses. to: C++ compiler (namely xlC12) may duplicate C++ field accesses if the elements are not volatile. Otherwise it sounds like a C++ compiler might duplicate the C++ field access with the new code. Dan > > We think it looks better if volatile is before the type. > > Best regards, > Martin and Goetz. > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Donnerstag, 17. Juli 2014 17:52 > To: Doerr, Martin; Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > First, comments needs to be fixed: > > "The array elements must be volatile" but in changeset comments: "// Array MUST be volatile!" > > Second, type name should be camel style (PcDescPtr). > > Someone have to double check this volatile declaration. Your example is more clear for me than typedef. > > Thanks, > Vladimir > > On 7/17/14 8:19 AM, Doerr, Martin wrote: >> Hi Vladimir, >> >> the following line should also work: >> PcDesc* volatile _pc_descs[cache_size]; >> But we thought that the typedef would improve readability. >> The array elements must be volatile, not the PcDescs which are pointed to. 
>> >> Best regards, >> Martin >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov >> Sent: Donnerstag, 17. Juli 2014 17:09 >> To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >> >> Hi Goetz, >> >> What is the reason for new typedef? >> >> Thanks, >> Vladimir >> >> On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> This webrev fixes an important concurrency issue in nmethod. >>> Please review and test this change. I please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >>> >>> This should be fixed into 8u20, too. >>> >>> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. >>> Best regards, >>> Martin and Goetz. >>> From vladimir.kozlov at oracle.com Fri Jul 18 14:48:09 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 18 Jul 2014 07:48:09 -0700 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <53C7F136.3000709@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> Message-ID: <53C933A9.7060705@oracle.com> Looks good. Thanks, Vladimir On 7/18/14 2:08 AM, Lindenmaier, Goetz wrote: > Hi Vladimir, > > We fixed the comment and camel case stuff. 
> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > We think it looks better if volatile is before the type. > > Best regards, > Martin and Goetz. > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Donnerstag, 17. Juli 2014 17:52 > To: Doerr, Martin; Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > First, comments needs to be fixed: > > "The array elements must be volatile" but in changeset comments: "// Array MUST be volatile!" > > Second, type name should be camel style (PcDescPtr). > > Someone have to double check this volatile declaration. Your example is more clear for me than typedef. > > Thanks, > Vladimir > > On 7/17/14 8:19 AM, Doerr, Martin wrote: >> Hi Vladimir, >> >> the following line should also work: >> PcDesc* volatile _pc_descs[cache_size]; >> But we thought that the typedef would improve readability. >> The array elements must be volatile, not the PcDescs which are pointed to. >> >> Best regards, >> Martin >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov >> Sent: Donnerstag, 17. Juli 2014 17:09 >> To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >> >> Hi Goetz, >> >> What is the reason for new typedef? >> >> Thanks, >> Vladimir >> >> On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> This webrev fixes an important concurrency issue in nmethod. >>> Please review and test this change. I please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >>> >>> This should be fixed into 8u20, too. >>> >>> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. 
Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. >>> Best regards, >>> Martin and Goetz. >>> From vladimir.kozlov at oracle.com Fri Jul 18 14:59:22 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 18 Jul 2014 07:59:22 -0700 Subject: RFR(S): 8050978: Fix bad field access check in C1 and C2 In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDAD04@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDAAAC@DEWDFEMB12A.global.corp.sap> <53C7F369.5070706@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD04@DEWDFEMB12A.global.corp.sap> Message-ID: <53C9364A.1000202@oracle.com> Good. Thanks, Vladimir On 7/18/14 12:15 AM, Lindenmaier, Goetz wrote: > Hi Vladimir, > > we updated the changeset with the new comment. > http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ > > Best regards, > Goetz. > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Donnerstag, 17. Juli 2014 18:02 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050978: Fix bad field access check in C1 and C2 > > Please, don't put next part of comment into sources: > > + // This will make the jck8 test > + // vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html > + // pass with -Xbatch -Xcomp > > instead add something like "canonical_holder should not be used to check access because it can erroneously succeed". > > Thanks, > Vladimir > > On 7/17/14 3:47 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> This fixes an error doing field access checks in C1 and C2. >> Please review and test the change. We please need a sponsor. 
>> http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ >> >> This should be included in 8u20, too. >> >> JCK8 test vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html fails with -Xbatch -Xcomp due to bad field access check in C1 and C2 >> >> Precondition: >> ------------- >> >> Consider the following class hierarchy: >> >> A >> / \ >> B1 B2 >> >> A declares a field "aa" which both B1 and B2 inherit. >> >> Although aa is declared in a superclass of B1, methods in B1 may not access the field aa of an object of class B2: >> >> class B1 extends A { >> m(B2 b2) { >> ... >> x = b2.aa; // !!! Access not allowed >> } >> } >> >> This is checked by the test mentioned above. >> >> Problem: >> -------- >> >> ciField::will_link() used by C1 and C2 does the access check using the canonical_holder (which is A in this case) and thus the access erroneously succeeds. >> >> Fix: >> ---- >> >> In ciField::ciField(), just before the canonical holder is stored into the _holder variable (which is used by ciField::will_link()) perform an additional access check with the holder declared in the class file. If this check fails, store the declared holder instead and ciField::will_link() will bail out compilation for this field later on. Then, the interpreter will throw a PrivilegedAccessException at runtime. 
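[Editorial note: the rule described above resembles Java's protected-access rule (an assumption here, since the mail does not give the field's modifiers): code in an accessing class may touch an inherited member only if it is a subclass of the declared holder and the accessed object is of the accessing class's type or a subtype. A toy model of that check, not the actual ciField logic:]

```cpp
#include <cassert>
#include <map>
#include <string>

typedef std::map<std::string, std::string> Hierarchy;  // class -> superclass

// Walk the superclass chain to decide subtyping.
static bool is_subclass_of(const Hierarchy& h, std::string k,
                           const std::string& ancestor) {
  while (true) {
    if (k == ancestor) return true;
    Hierarchy::const_iterator it = h.find(k);
    if (it == h.end()) return false;
    k = it->second;
  }
}

// Access check for a field declared in `holder`, referenced from code
// in `accessing` on an object of class `object_class`.
static bool can_access(const Hierarchy& h, const std::string& accessing,
                       const std::string& holder,
                       const std::string& object_class) {
  return is_subclass_of(h, accessing, holder) &&
         is_subclass_of(h, object_class, accessing);
}
```

[With accessing = B1, holder = A, object_class = B2, the second condition fails, which is exactly the access the JCK test expects to be rejected; checking against the canonical holder A alone would let it pass.]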
>> >> Ways to reproduce: >> ------------------ >> >> Run the above JCK test with >> >> C2 only: -XX:-TieredCompilation -Xbatch -Xcomp >> >> or >> >> with C1: -Xbatch -Xcomp -XX:-Inline >> >> Best regards, >> Andreas and Goetz >> >> From vladimir.kozlov at oracle.com Fri Jul 18 15:06:08 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 18 Jul 2014 08:06:08 -0700 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C90740.40602@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> <53C639A2.3050202@oracle.com> <53C90740.40602@oracle.com> Message-ID: <53C937E0.7060304@oracle.com> On 7/18/14 4:38 AM, Tobias Hartmann wrote: > Hi, > > I spend some more days and was finally able to implement a test that deterministically triggers the bug: Why do you need to switch off compressed oops? Do you need to switch off compressed klass pointers too (UseCompressedClassPointers)? > > http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ Very nice! > > @Vladimir: The test shows why we should only clean the ICs but not unload the nmethod if possible. The method ' doWork' > is still valid after WorkerClass was unloaded and depending on the complexity of the method we should avoid unloading it. Make sense. > > On Sparc my patch fixes the bug and leads to the nmethod not being unloaded. The compiled version is therefore used even > after WorkerClass is unloaded. > > On x86 the nmethod is unloaded anyway because of a dead oop. This is probably due to a slightly different implementation > of the ICs. I'll have a closer look to see if we can improve that. Thanks, Vladimir > > Thanks, > Tobias > > On 16.07.2014 10:36, Tobias Hartmann wrote: >> Sorry, forgot to answer this question: >>> Were you able to create a small test case for it that would be useful to add? 
>> Unfortunately I was not able to create a test. The bug only reproduces on a particular system with a > 30 minute run >> of runThese. >> >> Best, >> Tobias >> >> On 16.07.2014 09:54, Tobias Hartmann wrote: >>> Hi Coleen, >>> >>> thanks for the review. >>>> *+ if (csc->is_call_to_interpreted() && stub_contains_dead_metadata(is_alive, csc->destination())) {* >>>> *+ csc->set_to_clean();* >>>> *+ }* >>>> >>>> This appears in each case. Can you fold it and the new function into a function like >>>> clean_call_to_interpreted_stub(is_alive, csc)? >>> >>> I folded it into the function clean_call_to_interpreter_stub(..). >>> >>> New webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >>> >>> Thanks, >>> Tobias >>> >>>> >>>> Thanks, >>>> Coleen >>>> >>>>> >>>>> So before the permgen removal embedded method* were oops and they were processed in relocInfo::oop_type loop. >>>>> >>>>> May be instead of specializing opt_virtual_call_type and static_call_type call site you can simple add a loop for >>>>> relocInfo::metadata_type (similar to oop_type loop)? >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>>> Hi, >>>>>> >>>>>> please review the following patch for JDK-8029443. >>>>>> >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>>> >>>>>> *Problem* >>>>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) checks >>>>>> if a nmethod can be unloaded because it contains dead oops. If class >>>>>> unloading occurred we additionally clear all ICs where the cached >>>>>> metadata refers to an unloaded klass or method. If the nmethod is not >>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if all >>>>>> metadata is alive. The assert in CheckClass::check_class fails because >>>>>> the nmethod contains Method* metadata corresponding to a dead Klass. 
>>>>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>>>> compiled IC. Normally we clear those stubs prior to verification to >>>>>> avoid dangling references to Method* [2], but only if the stub is not in >>>>>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>>>>> to-interpreter stub may be executed and hand a stale Method* to the >>>>>> interpreter. >>>>>> >>>>>> *Solution >>>>>> *The implementation of nmethod::do_unloading(..) is changed to clean >>>>>> compiled ICs and compiled static calls if they call into a >>>>>> to-interpreter stub that references dead Method* metadata. >>>>>> >>>>>> The patch was affected by the G1 class unloading changes (JDK-8048248) >>>>>> because the method nmethod::do_unloading_parallel(..) was added. I >>>>>> adapted the implementation as well. >>>>>> * >>>>>> Testing >>>>>> *Failing test (runThese) >>>>>> JPRT >>>>>> >>>>>> Thanks, >>>>>> Tobias >>>>>> >>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) >>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>>> >>> >> > From volker.simonis at gmail.com Fri Jul 18 15:32:54 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 18 Jul 2014 17:32:54 +0200 Subject: Change "8030763: Validate global memory allocation" breaks the HotSpot build on AIX Message-ID: Hi, unfortunately the merging of 8u11 into 8u-dev and 8u20 broke the HotSpot build on AIX for those code lines. While the fix is trivial (see below) I'm not sure how to proceed with this fix in order to bring it to 8u-dev and 8u20. The problem is that 8030763 is not yet in jdk9, so I can't fix it in 9 and then backport it to 8. Should I just open a new bug for 8u and send out a request for review? Where should this RFR be directed to - to both hotspot-dev and jdk8u-dev? And who will do the actual push? 
Thank you and best regards, Volker diff -r f09d1f6a401e src/os/aix/vm/os_aix.cpp --- a/src/os/aix/vm/os_aix.cpp Mon Jul 14 10:16:34 2014 -0700 +++ b/src/os/aix/vm/os_aix.cpp Fri Jul 18 17:22:32 2014 +0200 @@ -1215,10 +1215,6 @@ ::abort(); } -// Unused on Aix for now. -void os::set_error_file(const char *logfile) {} - - // This method is a copy of JDK's sysGetLastErrorString // from src/solaris/hpi/src/system_md.c From vladimir.kozlov at oracle.com Fri Jul 18 15:53:40 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 18 Jul 2014 08:53:40 -0700 Subject: Change "8030763: Validate global memory allocation" breaks the HotSpot build on AIX In-Reply-To: References: Message-ID: <53C94304.8030309@oracle.com> File a P2 bug on 8u20 and we will try to convince the release team to put your fix into 8u20. Only showstoppers are allowed now, and it looks like you have a showstopper. Later, for jdk9, it will be a forward port. Regards, Vladimir On 7/18/14 8:32 AM, Volker Simonis wrote: > Hi, > > unfortunately the merging of 8u11 into 8u-dev and 8u20 broke the > HotSpot build on AIX for those code lines. > > While the fix is trivial (see below) I'm not sure how to proceed with > this fix in order to bring it to 8u-dev and 8u20. > > The problem is that 8030763 is not in jdk9 until now so I can't fix it > in 9 and then backport it to 8. > > Should I just open a new bug for 8u and send out a request for review? > Where should this RFR be directed to - to both hotspot-dev and > jdk8u-dev? And who will do the actual push? 
> -void os::set_error_file(const char *logfile) {} > - > - > // This method is a copy of JDK's sysGetLastErrorString > // from src/solaris/hpi/src/system_md.c > From sean.coffey at oracle.com Fri Jul 18 16:03:55 2014 From: sean.coffey at oracle.com (=?UTF-8?B?U2XDoW4gQ29mZmV5?=) Date: Fri, 18 Jul 2014 17:03:55 +0100 Subject: Change "8030763: Validate global memory allocation" breaks the HotSpot build on AIX In-Reply-To: <53C94304.8030309@oracle.com> References: <53C94304.8030309@oracle.com> Message-ID: <53C9456B.1070401@oracle.com> Volker, Can you work this fix into the jdk8u hotspot team forest : http://hg.openjdk.java.net/jdk8u/hs-dev/ Unfortunately it looks like the forest hasn't been synced up with 8u11 changes yet. Alejandro - do you have a timeline for that ? If it's not going to happen shortly, we should make an exception and push the hotspot change to the jdk8u-dev team forest which already has the 8u11 changes. Please mark the bug with '8u20-critical-watch' label when you log it. The label can be changed to '8u20-critical-request' once the fix is pushed to the 8u40 mainline. regards, Sean. On 18/07/14 16:53, Vladimir Kozlov wrote: > File bug P2 on 8u20 and we will try to convince release team to put > your fix into 8u20. Only showstopper are allowed now and it looks like > you have showstopper. > > Later for jdk9 it will be forward port. > > Regards, > Vladimir > > On 7/18/14 8:32 AM, Volker Simonis wrote: >> Hi, >> >> unfortunately the merging of 8u11 into 8u-dev and 8u20 broke the >> HotSpot build on AIX for those code lines. >> >> While the fix is trivial (see below) I'm not sure how to proceed with >> this fix in order to bring it to 8u-dev and 8u20. >> >> The problem is that 8030763 is not in jdk9 until now so I can't fix it >> in 9 and then backport it to 8. >> >> Should I just open a new bug for 8u and send out a request for review? >> Where should this RFR be directed to - to both hotspot-dev and >> jdk8u-dev? And who will do the actual push? 
>> >> Thank you and best regards, >> Volker >> >> >> diff -r f09d1f6a401e src/os/aix/vm/os_aix.cpp >> --- a/src/os/aix/vm/os_aix.cpp Mon Jul 14 10:16:34 2014 -0700 >> +++ b/src/os/aix/vm/os_aix.cpp Fri Jul 18 17:22:32 2014 +0200 >> @@ -1215,10 +1215,6 @@ >> ::abort(); >> } >> >> -// Unused on Aix for now. >> -void os::set_error_file(const char *logfile) {} >> - >> - >> // This method is a copy of JDK's sysGetLastErrorString >> // from src/solaris/hpi/src/system_md.c >> From coleen.phillimore at oracle.com Fri Jul 18 18:02:52 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 18 Jul 2014 14:02:52 -0400 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C937E0.7060304@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> <53C639A2.3050202@oracle.com> <53C90740.40602@oracle.com> <53C937E0.7060304@oracle.com> Message-ID: <53C9614C.8080109@oracle.com> On 7/18/14, 11:06 AM, Vladimir Kozlov wrote: > On 7/18/14 4:38 AM, Tobias Hartmann wrote: >> Hi, >> >> I spend some more days and was finally able to implement a test that >> deterministically triggers the bug: > > Why do you need to switch off compressed oops? Do you need to switch > off compressed klass pointers too (UseCompressedClassPointers)? CompressedOops when off turns off CompressedClassPointers. > >> >> http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ > > Very nice! Yes, I agree. Impressive. The refactoring in nmethod.cpp looks good to me. I have no further comments. Thanks! Coleen > >> >> @Vladimir: The test shows why we should only clean the ICs but not >> unload the nmethod if possible. The method ' doWork' >> is still valid after WorkerClass was unloaded and depending on the >> complexity of the method we should avoid unloading it. > > Make sense. 
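Coleen's point just above — turning off UseCompressedOops also turns off UseCompressedClassPointers — is why the test needs only the one flag. A minimal sketch of that one-way dependency (plain Python, not HotSpot's actual ergonomics code; the function name is invented for illustration):

```python
def effective_flags(use_compressed_oops, use_compressed_class_pointers=True):
    """Toy model of the flag dependency discussed above:
    -XX:-UseCompressedOops implies -XX:-UseCompressedClassPointers,
    but not the other way around."""
    if not use_compressed_oops:
        # Compressed class pointers depend on the compressed-oop heap
        # layout, so they are forced off together with compressed oops.
        use_compressed_class_pointers = False
    return {
        "UseCompressedOops": use_compressed_oops,
        "UseCompressedClassPointers": use_compressed_class_pointers,
    }
```

In this model, requesting -XX:-UseCompressedOops alone already yields uncompressed class pointers, so no separate -XX:-UseCompressedClassPointers flag is needed in the test.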
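The clean-the-IC-but-keep-the-nmethod policy that Tobias describes can be pictured with a toy model (plain Python, not HotSpot code; all class and attribute names are invented, and real nmethod unloading involves far more state than this):

```python
class CallSite:
    """A compiled IC or compiled static call in a toy nmethod."""
    def __init__(self, to_interpreted, metadata_alive):
        self.to_interpreted = to_interpreted  # call goes through a to-interpreter stub?
        self.metadata_alive = metadata_alive  # is the stub's Method* holder still alive?
        self.is_clean = False

class NMethod:
    def __init__(self, call_sites, oops_alive=True):
        self.call_sites = call_sites
        self.oops_alive = oops_alive          # do all embedded oops survive GC?
        self.unloaded = False

    def do_unloading(self):
        # A dead embedded oop still forces unloading of the whole nmethod...
        if not self.oops_alive:
            self.unloaded = True
            return
        # ...but dead Method* metadata behind a to-interpreter stub only
        # resets the affected call site; the compiled code stays usable.
        for cs in self.call_sites:
            if cs.to_interpreted and not cs.metadata_alive:
                cs.is_clean = True
```

In this model an nmethod whose only problem is a stale to-interpreter stub survives with the call site reset, which mirrors why 'doWork' can keep running compiled even after WorkerClass is unloaded.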
> >> >> On Sparc my patch fixes the bug and leads to the nmethod not being >> unloaded. The compiled version is therefore used even >> after WorkerClass is unloaded. >> >> On x86 the nmethod is unloaded anyway because of a dead oop. This is >> probably due to a slightly different implementation >> of the ICs. I'll have a closer look to see if we can improve that. > > Thanks, > Vladimir > >> >> Thanks, >> Tobias >> >> On 16.07.2014 10:36, Tobias Hartmann wrote: >>> Sorry, forgot to answer this question: >>>> Were you able to create a small test case for it that would be >>>> useful to add? >>> Unfortunately I was not able to create a test. The bug only >>> reproduces on a particular system with a > 30 minute run >>> of runThese. >>> >>> Best, >>> Tobias >>> >>> On 16.07.2014 09:54, Tobias Hartmann wrote: >>>> Hi Coleen, >>>> >>>> thanks for the review. >>>>> *+ if (csc->is_call_to_interpreted() && >>>>> stub_contains_dead_metadata(is_alive, csc->destination())) {* >>>>> *+ csc->set_to_clean();* >>>>> *+ }* >>>>> >>>>> This appears in each case. Can you fold it and the new function >>>>> into a function like >>>>> clean_call_to_interpreted_stub(is_alive, csc)? >>>> >>>> I folded it into the function clean_call_to_interpreter_stub(..). >>>> >>>> New webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >>>> >>>> Thanks, >>>> Tobias >>>> >>>>> >>>>> Thanks, >>>>> Coleen >>>>> >>>>>> >>>>>> So before the permgen removal embedded method* were oops and they >>>>>> were processed in relocInfo::oop_type loop. >>>>>> >>>>>> May be instead of specializing opt_virtual_call_type and >>>>>> static_call_type call site you can simple add a loop for >>>>>> relocInfo::metadata_type (similar to oop_type loop)? >>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>>>> Hi, >>>>>>> >>>>>>> please review the following patch for JDK-8029443. 
>>>>>>> >>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>>>> >>>>>>> *Problem* >>>>>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) >>>>>>> checks >>>>>>> if a nmethod can be unloaded because it contains dead oops. If >>>>>>> class >>>>>>> unloading occurred we additionally clear all ICs where the cached >>>>>>> metadata refers to an unloaded klass or method. If the nmethod >>>>>>> is not >>>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if >>>>>>> all >>>>>>> metadata is alive. The assert in CheckClass::check_class fails >>>>>>> because >>>>>>> the nmethod contains Method* metadata corresponding to a dead >>>>>>> Klass. >>>>>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>>>>> compiled IC. Normally we clear those stubs prior to verification to >>>>>>> avoid dangling references to Method* [2], but only if the stub >>>>>>> is not in >>>>>>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>>>>>> to-interpreter stub may be executed and hand a stale Method* to the >>>>>>> interpreter. >>>>>>> >>>>>>> *Solution >>>>>>> *The implementation of nmethod::do_unloading(..) is changed to >>>>>>> clean >>>>>>> compiled ICs and compiled static calls if they call into a >>>>>>> to-interpreter stub that references dead Method* metadata. >>>>>>> >>>>>>> The patch was affected by the G1 class unloading changes >>>>>>> (JDK-8048248) >>>>>>> because the method nmethod::do_unloading_parallel(..) was added. I >>>>>>> adapted the implementation as well. >>>>>>> * >>>>>>> Testing >>>>>>> *Failing test (runThese) >>>>>>> JPRT >>>>>>> >>>>>>> Thanks, >>>>>>> Tobias >>>>>>> >>>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
>>>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>>>> >>>> >>> >> From volker.simonis at gmail.com Fri Jul 18 18:06:59 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 18 Jul 2014 20:06:59 +0200 Subject: Change "8030763: Validate global memory allocation" breaks the HotSpot build on AIX In-Reply-To: <53C9456B.1070401@oracle.com> References: <53C94304.8030309@oracle.com> <53C9456B.1070401@oracle.com> Message-ID: Hi Vladimir, Sean, thanks a lot for the fast help. I've just created the following bug and webrev: http://cr.openjdk.java.net/~simonis/webrevs/8051378/ https://bugs.openjdk.java.net/browse/JDK-8051378 I'll send out a RFR in a second. The webrev is against 8u20 but should apply equally well against 8u-dev. Actually, it should also apply against jdk8u/hs-dev/ and if that helps it can also be pushed there first. That will in fact break the HotSpot build on AIX in jdk8u/hs-dev/ until it is synced up with the 8u11 changes, but I think that's not so important. It's much more important to get the fix into 8u20 in time. Thank you and best regards, Volker On Fri, Jul 18, 2014 at 6:03 PM, Seán Coffey wrote: > Volker, > > Can you work this fix into the jdk8u hotspot team forest : > http://hg.openjdk.java.net/jdk8u/hs-dev/ > > Unfortunately it looks like the forest hasn't been synced up with 8u11 > changes yet. Alejandro - do you have a timeline for that ? If it's not going > to happen shortly, we should make an exception and push the hotspot change > to the jdk8u-dev team forest which already has the 8u11 changes. > > Please mark the bug with '8u20-critical-watch' label when you log it. The > label can be changed to '8u20-critical-request' once the fix is pushed to > the 8u40 mainline. > > regards, > Sean. > > > On 18/07/14 16:53, Vladimir Kozlov wrote: >> >> File bug P2 on 8u20 and we will try to convince release team to put your >> fix into 8u20. 
Only showstopper are allowed now and it looks like you have >> showstopper. >> >> Later for jdk9 it will be forward port. >> >> Regards, >> Vladimir >> >> On 7/18/14 8:32 AM, Volker Simonis wrote: >>> >>> Hi, >>> >>> unfortunately the merging of 8u11 into 8u-dev and 8u20 broke the >>> HotSpot build on AIX for those code lines. >>> >>> While the fix is trivial (see below) I'm not sure how to proceed with >>> this fix in order to bring it to 8u-dev and 8u20. >>> >>> The problem is that 8030763 is not in jdk9 until now so I can't fix it >>> in 9 and then backport it to 8. >>> >>> Should I just open a new bug for 8u and send out a request for review? >>> Where should this RFR be directed to - to both hotspot-dev and >>> jdk8u-dev? And who will do the actual push? >>> >>> Thank you and best regards, >>> Volker >>> >>> >>> diff -r f09d1f6a401e src/os/aix/vm/os_aix.cpp >>> --- a/src/os/aix/vm/os_aix.cpp Mon Jul 14 10:16:34 2014 -0700 >>> +++ b/src/os/aix/vm/os_aix.cpp Fri Jul 18 17:22:32 2014 +0200 >>> @@ -1215,10 +1215,6 @@ >>> ::abort(); >>> } >>> >>> -// Unused on Aix for now. 
>>> -void os::set_error_file(const char *logfile) {} >>> - >>> - >>> // This method is a copy of JDK's sysGetLastErrorString >>> // from src/solaris/hpi/src/system_md.c >>> > From vladimir.kozlov at oracle.com Fri Jul 18 18:09:55 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 18 Jul 2014 11:09:55 -0700 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C9614C.8080109@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> <53C639A2.3050202@oracle.com> <53C90740.40602@oracle.com> <53C937E0.7060304@oracle.com> <53C9614C.8080109@oracle.com> Message-ID: <53C962F3.3070405@oracle.com> On 7/18/14 11:02 AM, Coleen Phillimore wrote: > > On 7/18/14, 11:06 AM, Vladimir Kozlov wrote: >> On 7/18/14 4:38 AM, Tobias Hartmann wrote: >>> Hi, >>> >>> I spend some more days and was finally able to implement a test that >>> deterministically triggers the bug: >> >> Why do you need to switch off compressed oops? Do you need to switch >> off compressed klass pointers too (UseCompressedClassPointers)? > > CompressedOops when off turns off CompressedClassPointers. You are right, I forgot that. Still the question is why switch off coop? Vladimir >> >>> >>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ >> >> Very nice! > > Yes, I agree. Impressive. > > The refactoring in nmethod.cpp looks good to me. I have no further > comments. > Thanks! > Coleen > >> >>> >>> @Vladimir: The test shows why we should only clean the ICs but not >>> unload the nmethod if possible. The method ' doWork' >>> is still valid after WorkerClass was unloaded and depending on the >>> complexity of the method we should avoid unloading it. >> >> Make sense. >> >>> >>> On Sparc my patch fixes the bug and leads to the nmethod not being >>> unloaded. 
The compiled version is therefore used even >>> after WorkerClass is unloaded. >>> >>> On x86 the nmethod is unloaded anyway because of a dead oop. This is >>> probably due to a slightly different implementation >>> of the ICs. I'll have a closer look to see if we can improve that. >> >> Thanks, >> Vladimir >> >>> >>> Thanks, >>> Tobias >>> >>> On 16.07.2014 10:36, Tobias Hartmann wrote: >>>> Sorry, forgot to answer this question: >>>>> Were you able to create a small test case for it that would be >>>>> useful to add? >>>> Unfortunately I was not able to create a test. The bug only >>>> reproduces on a particular system with a > 30 minute run >>>> of runThese. >>>> >>>> Best, >>>> Tobias >>>> >>>> On 16.07.2014 09:54, Tobias Hartmann wrote: >>>>> Hi Coleen, >>>>> >>>>> thanks for the review. >>>>>> *+ if (csc->is_call_to_interpreted() && >>>>>> stub_contains_dead_metadata(is_alive, csc->destination())) {* >>>>>> *+ csc->set_to_clean();* >>>>>> *+ }* >>>>>> >>>>>> This appears in each case. Can you fold it and the new function >>>>>> into a function like >>>>>> clean_call_to_interpreted_stub(is_alive, csc)? >>>>> >>>>> I folded it into the function clean_call_to_interpreter_stub(..). >>>>> >>>>> New webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >>>>> >>>>> Thanks, >>>>> Tobias >>>>> >>>>>> >>>>>> Thanks, >>>>>> Coleen >>>>>> >>>>>>> >>>>>>> So before the permgen removal embedded method* were oops and they >>>>>>> were processed in relocInfo::oop_type loop. >>>>>>> >>>>>>> May be instead of specializing opt_virtual_call_type and >>>>>>> static_call_type call site you can simple add a loop for >>>>>>> relocInfo::metadata_type (similar to oop_type loop)? >>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> please review the following patch for JDK-8029443. 
>>>>>>>> >>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>>>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>>>>> >>>>>>>> *Problem* >>>>>>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) >>>>>>>> checks >>>>>>>> if a nmethod can be unloaded because it contains dead oops. If >>>>>>>> class >>>>>>>> unloading occurred we additionally clear all ICs where the cached >>>>>>>> metadata refers to an unloaded klass or method. If the nmethod >>>>>>>> is not >>>>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if >>>>>>>> all >>>>>>>> metadata is alive. The assert in CheckClass::check_class fails >>>>>>>> because >>>>>>>> the nmethod contains Method* metadata corresponding to a dead >>>>>>>> Klass. >>>>>>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>>>>>> compiled IC. Normally we clear those stubs prior to verification to >>>>>>>> avoid dangling references to Method* [2], but only if the stub >>>>>>>> is not in >>>>>>>> use, i.e. if the IC is not in to-interpreted mode. In this case the >>>>>>>> to-interpreter stub may be executed and hand a stale Method* to the >>>>>>>> interpreter. >>>>>>>> >>>>>>>> *Solution >>>>>>>> *The implementation of nmethod::do_unloading(..) is changed to >>>>>>>> clean >>>>>>>> compiled ICs and compiled static calls if they call into a >>>>>>>> to-interpreter stub that references dead Method* metadata. >>>>>>>> >>>>>>>> The patch was affected by the G1 class unloading changes >>>>>>>> (JDK-8048248) >>>>>>>> because the method nmethod::do_unloading_parallel(..) was added. I >>>>>>>> adapted the implementation as well. >>>>>>>> * >>>>>>>> Testing >>>>>>>> *Failing test (runThese) >>>>>>>> JPRT >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Tobias >>>>>>>> >>>>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
>>>>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>>>>> >>>>> >>>> >>> > From volker.simonis at gmail.com Fri Jul 18 18:14:23 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 18 Jul 2014 20:14:23 +0200 Subject: RFR (XXS): 8051378: AIX: Change "8030763: Validate global memory allocation" breaks the HotSpot build Message-ID: Hi, could somebody please review and sponsor the following tiny, AIX-only change which fixes the HotSpot build on AIX for the 8u20/8u-dev code lines: http://cr.openjdk.java.net/~simonis/webrevs/8051378/ https://bugs.openjdk.java.net/browse/JDK-8051378 Details: Unfortunately the merging of 8u11 into 8u-dev and 8u20 broke the HotSpot build on AIX for those code lines. The fix is trivial - just remove the unused function os::set_error_file() in os_aix.cpp (as this has been done on the other platforms by 8030763). The webrev is against 8u20 but should apply equally well against 8u-dev. Actually, it should also apply against jdk8u/hs-dev/ and if that helps it can also be pushed there first. That will in fact break the HotSpot build on AIX in jdk8u/hs-dev/ until it is synced up with the 8u11 changes, but I think that's not so important. It's much more important to get the fix into 8u20 in time. 
Thank you and best regards, Volker From alejandro.murillo at oracle.com Fri Jul 18 18:21:05 2014 From: alejandro.murillo at oracle.com (Alejandro E Murillo) Date: Fri, 18 Jul 2014 12:21:05 -0600 Subject: Change "8030763: Validate global memory allocation" breaks the HotSpot build on AIX In-Reply-To: <53C9456B.1070401@oracle.com> References: <53C94304.8030309@oracle.com> <53C9456B.1070401@oracle.com> Message-ID: <53C96591.80209@oracle.com> On 7/18/2014 10:03 AM, Seán Coffey wrote: > Volker, > > Can you work this fix into the jdk8u hotspot team forest : > http://hg.openjdk.java.net/jdk8u/hs-dev/ > > Unfortunately it looks like the forest hasn't been synced up with 8u11 > changes yet. Alejandro - do you have a timeline for that ? If it's not > going to happen shortly, we should make an exception and push the > hotspot change to the jdk8u-dev team forest which already has the 8u11 > changes. I just synched jdk8u/hs-dev with jdk8u/jdk8u. I wasn't planning on taking a snapshot this week (no new changes) so if this is ready to go in and needs to be in 8u20 next week, let's get it in now and I can start a snapshot later today. If not, it will have to go in through 8u-dev. Volker, do you have the patch? If so, send it out and I (or Vladimir) can push it. Thanks Alejandro > > Please mark the bug with '8u20-critical-watch' label when you log it. > The label can be changed to '8u20-critical-request' once the fix is > pushed to the 8u40 mainline. > > regards, > Sean. > > On 18/07/14 16:53, Vladimir Kozlov wrote: >> File bug P2 on 8u20 and we will try to convince release team to put >> your fix into 8u20. Only showstopper are allowed now and it looks >> like you have showstopper. >> >> Later for jdk9 it will be forward port. >> >> Regards, >> Vladimir >> >> On 7/18/14 8:32 AM, Volker Simonis wrote: >>> Hi, >>> >>> unfortunately the merging of 8u11 into 8u-dev and 8u20 broke the >>> HotSpot build on AIX for those code lines. 
>>> >>> While the fix is trivial (see below) I'm not sure how to proceed with >>> this fix in order to bring it to 8u-dev and 8u20. >>> >>> The problem is that 8030763 is not in jdk9 until now so I can't fix it >>> in 9 and then backport it to 8. >>> >>> Should I just open a new bug for 8u and send out a request for review? >>> Where should this RFR be directed to - to both hotspot-dev and >>> jdk8u-dev? And who will do the actual push? >>> >>> Thank you and best regards, >>> Volker >>> >>> >>> diff -r f09d1f6a401e src/os/aix/vm/os_aix.cpp >>> --- a/src/os/aix/vm/os_aix.cpp Mon Jul 14 10:16:34 2014 -0700 >>> +++ b/src/os/aix/vm/os_aix.cpp Fri Jul 18 17:22:32 2014 +0200 >>> @@ -1215,10 +1215,6 @@ >>> ::abort(); >>> } >>> >>> -// Unused on Aix for now. >>> -void os::set_error_file(const char *logfile) {} >>> - >>> - >>> // This method is a copy of JDK's sysGetLastErrorString >>> // from src/solaris/hpi/src/system_md.c >>> > -- Alejandro From alejandro.murillo at oracle.com Fri Jul 18 18:42:35 2014 From: alejandro.murillo at oracle.com (Alejandro E Murillo) Date: Fri, 18 Jul 2014 12:42:35 -0600 Subject: Change "8030763: Validate global memory allocation" breaks the HotSpot build on AIX In-Reply-To: <53C96591.80209@oracle.com> References: <53C94304.8030309@oracle.com> <53C9456B.1070401@oracle.com> <53C96591.80209@oracle.com> Message-ID: <53C96A9B.10805@oracle.com> On 7/18/2014 12:21 PM, Alejandro E Murillo wrote: > > On 7/18/2014 10:03 AM, Se?n Coffey wrote: >> Volker, >> >> Can you work this fix into the jdk8u hotspot team forest : >> http://hg.openjdk.java.net/jdk8u/hs-dev/ >> >> Unfortunately it looks like the forest hasn't been synced up with >> 8u11 changes yet. Alejandro - do you have a timeline for that ? If >> it's not going to happen shortly, we should make an exception and >> push the hotspot change to the jdk8u-dev team forest which already >> has the 8u11 changes. > I just synched jdk8u/hs-dev with jdk8u/jdk8u. 
I just realized the 8u11 changes are only on jdk8u-dev. It's too risky to bring them straight from there to the jdk8u/hs-dev repo; I usually synch from the master (stable snapshot). So given the urgency, Vladimir, can you push it to jdk8u/jdk8u-dev instead? It will come back down to the hotspot repo along with the 8u11 changes once they are tested and pushed to master. Thanks Alejandro > I wasn't planning on taking a snapshot this week (no new changes) > so if this ready to go in, and need to be in 8u20 next week, let's get it > in now and I can start a snapshot later today. If not, it will have to > go in through 8u-dev. > Volker, do you have the patch? if so send it out and I (or Vladimir) > can push it > Thanks > Alejandro > >> >> Please mark the bug with '8u20-critical-watch' label when you log it. >> The label can be changed to '8u20-critical-request' once the fix is >> pushed to the 8u40 mainline. >> >> regards, >> Sean. >> >> On 18/07/14 16:53, Vladimir Kozlov wrote: >>> File bug P2 on 8u20 and we will try to convince release team to put >>> your fix into 8u20. Only showstopper are allowed now and it looks >>> like you have showstopper. >>> >>> Later for jdk9 it will be forward port. >>> >>> Regards, >>> Vladimir >>> >>> On 7/18/14 8:32 AM, Volker Simonis wrote: >>>> Hi, >>>> >>>> unfortunately the merging of 8u11 into 8u-dev and 8u20 broke the >>>> HotSpot build on AIX for those code lines. >>>> >>>> While the fix is trivial (see below) I'm not sure how to proceed with >>>> this fix in order to bring it to 8u-dev and 8u20. >>>> >>>> The problem is that 8030763 is not in jdk9 until now so I can't fix it >>>> in 9 and then backport it to 8. >>>> >>>> Should I just open a new bug for 8u and send out a request for review? >>>> Where should this RFR be directed to - to both hotspot-dev and >>>> jdk8u-dev? And who will do the actual push? 
>>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> >>>> diff -r f09d1f6a401e src/os/aix/vm/os_aix.cpp >>>> --- a/src/os/aix/vm/os_aix.cpp Mon Jul 14 10:16:34 2014 -0700 >>>> +++ b/src/os/aix/vm/os_aix.cpp Fri Jul 18 17:22:32 2014 +0200 >>>> @@ -1215,10 +1215,6 @@ >>>> ::abort(); >>>> } >>>> >>>> -// Unused on Aix for now. >>>> -void os::set_error_file(const char *logfile) {} >>>> - >>>> - >>>> // This method is a copy of JDK's sysGetLastErrorString >>>> // from src/solaris/hpi/src/system_md.c >>>> >> > -- Alejandro From vladimir.kozlov at oracle.com Fri Jul 18 18:57:12 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 18 Jul 2014 11:57:12 -0700 Subject: Change "8030763: Validate global memory allocation" breaks the HotSpot build on AIX In-Reply-To: <53C96A9B.10805@oracle.com> References: <53C94304.8030309@oracle.com> <53C9456B.1070401@oracle.com> <53C96591.80209@oracle.com> <53C96A9B.10805@oracle.com> Message-ID: <53C96E08.9060404@oracle.com> Okay, I will push into jdk8u/jdk8u-dev. Thanks, Vladimir On 7/18/14 11:42 AM, Alejandro E Murillo wrote: > > On 7/18/2014 12:21 PM, Alejandro E Murillo wrote: >> >> On 7/18/2014 10:03 AM, Se?n Coffey wrote: >>> Volker, >>> >>> Can you work this fix into the jdk8u hotspot team forest : >>> http://hg.openjdk.java.net/jdk8u/hs-dev/ >>> >>> Unfortunately it looks like the forest hasn't been synced up with >>> 8u11 changes yet. Alejandro - do you have a timeline for that ? If >>> it's not going to happen shortly, we should make an exception and >>> push the hotspot change to the jdk8u-dev team forest which already >>> has the 8u11 changes. >> I just synched jdk8u/hs-dev with jdk8u/jdk8u. > I just realized the 8u11 changes are only on jdk8u-dev. > It's too risky to bring them straight from there to the jdk8u/hs-dev repo, > I usually synch from the master (stable snapshot). > so given the urgency, Vladimir, can you push it to jdk8u/jdk8u-dev > instead? 
> it will come back down to the hotspot repo along with the 8u11 changes > once they are tested and pushed to master. > > Thanks > Alejandro > > >> I wasn't planning on taking a snapshot this week (no new changes) >> so if this ready to go in, and need to be in 8u20 next week, let's get it >> in now and I can start a snapshot later today. If not, it will have to >> go in through 8u-dev. >> Volker, do you have the patch? if so send it out and I (or Vladimir) >> can push it >> Thanks >> Alejandro >> >> >>> >>> Please mark the bug with '8u20-critical-watch' label when you log it. >>> The label can be changed to '8u20-critical-request' once the fix is >>> pushed to the 8u40 mainline. >>> >>> regards, >>> Sean. >>> >>> On 18/07/14 16:53, Vladimir Kozlov wrote: >>>> File bug P2 on 8u20 and we will try to convince release team to put >>>> your fix into 8u20. Only showstopper are allowed now and it looks >>>> like you have showstopper. >>>> >>>> Later for jdk9 it will be forward port. >>>> >>>> Regards, >>>> Vladimir >>>> >>>> On 7/18/14 8:32 AM, Volker Simonis wrote: >>>>> Hi, >>>>> >>>>> unfortunately the merging of 8u11 into 8u-dev and 8u20 broke the >>>>> HotSpot build on AIX for those code lines. >>>>> >>>>> While the fix is trivial (see below) I'm not sure how to proceed with >>>>> this fix in order to bring it to 8u-dev and 8u20. >>>>> >>>>> The problem is that 8030763 is not in jdk9 until now so I can't fix it >>>>> in 9 and then backport it to 8. >>>>> >>>>> Should I just open a new bug for 8u and send out a request for review? >>>>> Where should this RFR be directed to - to both hotspot-dev and >>>>> jdk8u-dev? And who will do the actual push? 
>>>>> Thank you and best regards, >>>>> Volker >>>>> >>>>> >>>>> diff -r f09d1f6a401e src/os/aix/vm/os_aix.cpp >>>>> --- a/src/os/aix/vm/os_aix.cpp Mon Jul 14 10:16:34 2014 -0700 >>>>> +++ b/src/os/aix/vm/os_aix.cpp Fri Jul 18 17:22:32 2014 +0200 >>>>> @@ -1215,10 +1215,6 @@ >>>>> ::abort(); >>>>> } >>>>> >>>>> -// Unused on Aix for now. >>>>> -void os::set_error_file(const char *logfile) {} >>>>> - >>>>> - >>>>> // This method is a copy of JDK's sysGetLastErrorString >>>>> // from src/solaris/hpi/src/system_md.c >>>>> >>> >> > From vladimir.kozlov at oracle.com Fri Jul 18 18:57:31 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 18 Jul 2014 11:57:31 -0700 Subject: RFR (XXS): 8051378: AIX: Change "8030763: Validate global memory allocation" breaks the HotSpot build In-Reply-To: References: Message-ID: <53C96E1B.3040804@oracle.com> Looks good. Alejandro suggested that I should push it into jdk8u/jdk8u-dev/hotspot where 8u11 was merged. When the fix is approved, it will be pushed into the 8u20 repo. Thanks, Vladimir On 7/18/14 11:14 AM, Volker Simonis wrote: > Hi, > > could somebody please review and sponsor the following tiny, AIX-only > change which fixes the HotSpot build on AIX for the 8u20/8u-dev code > lines: > > http://cr.openjdk.java.net/~simonis/webrevs/8051378/ > https://bugs.openjdk.java.net/browse/JDK-8051378 > > Details: > > Unfortunately the merging of 8u11 into 8u-dev and 8u20 broke the > HotSpot build on AIX for those code lines. > > The fix is trivial - just remove the unused function > os::set_error_file() in os_aix.cpp (as this has been done on the other > platforms by 8030763) > > The webrev is against 8u20 but should apply equally well against 8u-dev. > > Actually, it sould also apply against jdk8u/hs-dev/ and if that helps > it can also be pushed there first. That will in fact break the HotSpot > build on AIX in jdk8u/hs-dev/ until it will be synced up with the 8u11 > changes, but I think that's not so important. 
It's much more important > to get the fix into 8u20 in time. > > Thank you and best regards, > Volker > From volker.simonis at gmail.com Fri Jul 18 20:40:42 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 18 Jul 2014 22:40:42 +0200 Subject: Change "8030763: Validate global memory allocation" breaks the HotSpot build on AIX In-Reply-To: <53C96E08.9060404@oracle.com> References: <53C94304.8030309@oracle.com> <53C9456B.1070401@oracle.com> <53C96591.80209@oracle.com> <53C96A9B.10805@oracle.com> <53C96E08.9060404@oracle.com> Message-ID: Hi Vladimir, Alejandro, thanks a lot for the fast help! Volker On Friday, July 18, 2014, Vladimir Kozlov wrote: > Okay, I will push into jdk8u/jdk8u-dev. > > Thanks, > Vladimir > > On 7/18/14 11:42 AM, Alejandro E Murillo wrote: > >> >> On 7/18/2014 12:21 PM, Alejandro E Murillo wrote: >> >>> >>> On 7/18/2014 10:03 AM, Seán Coffey wrote: >>> >>>> Volker, >>>> >>>> Can you work this fix into the jdk8u hotspot team forest: >>>> http://hg.openjdk.java.net/jdk8u/hs-dev/ >>>> >>>> Unfortunately it looks like the forest hasn't been synced up with >>>> 8u11 changes yet. Alejandro - do you have a timeline for that? If >>>> it's not going to happen shortly, we should make an exception and >>>> push the hotspot change to the jdk8u-dev team forest which already >>>> has the 8u11 changes. >>>> >>> I just synched jdk8u/hs-dev with jdk8u/jdk8u. >>> >> I just realized the 8u11 changes are only on jdk8u-dev. >> It's too risky to bring them straight from there to the jdk8u/hs-dev repo, >> I usually synch from the master (stable snapshot). >> so given the urgency, Vladimir, can you push it to jdk8u/jdk8u-dev >> instead? >> it will come back down to the hotspot repo along with the 8u11 changes >> once they are tested and pushed to master.
From zhengyu.gu at oracle.com Fri Jul 18 21:37:12 2014 From: zhengyu.gu at oracle.com (Zhengyu Gu) Date: Fri, 18 Jul 2014 17:37:12 -0400 Subject: RFR(XS) 8050167: linux-sparcv9: hs_err file does not show any stack information Message-ID: <53C99388.2030800@oracle.com> This is a small fix to set up the first stack frame from the exception handler. Sparc's sigcontext does not contain a frame pointer, so it uses frame::unpatchable instead. Bug: https://bugs.openjdk.java.net/browse/JDK-8050167 Webrev: http://cr.openjdk.java.net/~zgu/8050167/webrev.00/ Thanks, -Zhengyu From mikael.vidstedt at oracle.com Fri Jul 18 21:45:30 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Fri, 18 Jul 2014 14:45:30 -0700 Subject: RFR(XS) 8050167: linux-sparcv9: hs_err file does not show any stack information In-Reply-To: <53C99388.2030800@oracle.com> References: <53C99388.2030800@oracle.com> Message-ID: <53C9957A.9070706@oracle.com> This looks like another case of code duplication between solaris_sparc and linux_sparc - I wish we could unify it in some way going forward. Apart from that, this appears to be in line with what the Solaris implementation does, so thumbs up. Cheers, Mikael On 2014-07-18 14:37, Zhengyu Gu wrote: > This is a small fix to set up the first stack frame from the exception > handler. Sparc's sigcontext does not contain a frame pointer, so it uses > frame::unpatchable instead.
> > > Bug: https://bugs.openjdk.java.net/browse/JDK-8050167 > Webrev: http://cr.openjdk.java.net/~zgu/8050167/webrev.00/ > > > Thanks, > > -Zhengyu From coleen.phillimore at oracle.com Fri Jul 18 23:01:22 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 18 Jul 2014 19:01:22 -0400 Subject: RFR(XS) 8050167: linux-sparcv9: hs_err file does not show any stack information In-Reply-To: <53C99388.2030800@oracle.com> References: <53C99388.2030800@oracle.com> Message-ID: <53C9A742.6030709@oracle.com> This looks good! Coleen On 7/18/14, 5:37 PM, Zhengyu Gu wrote: > This is a small fix to set up the first stack frame from the exception > handler. Sparc's sigcontext does not contain a frame pointer, so it uses > frame::unpatchable instead. > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8050167 > Webrev: http://cr.openjdk.java.net/~zgu/8050167/webrev.00/ > > > Thanks, > > -Zhengyu From coleen.phillimore at oracle.com Fri Jul 18 23:02:49 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 18 Jul 2014 19:02:49 -0400 Subject: RFR(XS) 8050167: linux-sparcv9: hs_err file does not show any stack information In-Reply-To: <53C99388.2030800@oracle.com> References: <53C99388.2030800@oracle.com> Message-ID: <53C9A799.2080603@oracle.com> You could change ucontext_get_fp in linux-sparc like solaris_sparc: // Solaris X86 only intptr_t* os::Solaris::ucontext_get_fp(ucontext_t *uc) { ShouldNotReachHere(); return NULL; } Coleen On 7/18/14, 5:37 PM, Zhengyu Gu wrote: > This is a small fix to set up the first stack frame from the exception > handler. Sparc's sigcontext does not contain a frame pointer, so it uses > frame::unpatchable instead.
> > > Bug: https://bugs.openjdk.java.net/browse/JDK-8050167 > Webrev: http://cr.openjdk.java.net/~zgu/8050167/webrev.00/ > > > Thanks, > > -Zhengyu From coleen.phillimore at oracle.com Fri Jul 18 23:07:26 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 18 Jul 2014 19:07:26 -0400 Subject: RFR(XS) 8050167: linux-sparcv9: hs_err file does not show any stack information In-Reply-To: <53C9A799.2080603@oracle.com> References: <53C99388.2030800@oracle.com> <53C9A799.2080603@oracle.com> Message-ID: <53C9A8AE.1020300@oracle.com> Never mind, it already has it. Coleen On 7/18/14, 7:02 PM, Coleen Phillimore wrote: > > You could change ucontext_get_fp in linux-sparc like solaris_sparc: > > // Solaris X86 only > intptr_t* os::Solaris::ucontext_get_fp(ucontext_t *uc) { > ShouldNotReachHere(); > return NULL; > } > > > Coleen > > On 7/18/14, 5:37 PM, Zhengyu Gu wrote: >> This is a small fix to set up the first stack frame from the exception >> handler. Sparc's sigcontext does not contain a frame pointer, so it uses >> frame::unpatchable instead. >> >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8050167 >> Webrev: http://cr.openjdk.java.net/~zgu/8050167/webrev.00/ >> >> >> Thanks, >> >> -Zhengyu > From vladimir.kozlov at oracle.com Fri Jul 18 23:43:08 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 18 Jul 2014 16:43:08 -0700 Subject: Change "8030763: Validate global memory allocation" breaks the HotSpot build on AIX In-Reply-To: References: <53C94304.8030309@oracle.com> <53C9456B.1070401@oracle.com> <53C96591.80209@oracle.com> <53C96A9B.10805@oracle.com> <53C96E08.9060404@oracle.com> Message-ID: <53C9B10C.3060702@oracle.com> I should have pushed the 8051378 fix into http://hg.openjdk.java.net/jdk9/jdk9/hotspot/ before Alejandro pushed 8030763 into all our repositories. I will push 8051378 into jdk9/hs-comp/hotspot and main jdk9/hs/hotspot. Alejandro, this change will not affect our testing so you don't need to restart PIT.
Vladimir On 7/18/14 1:40 PM, Volker Simonis wrote: > Hi Vladimir, Alejandro, > > thanks a lot for the fast help! > > Volker > > On Friday, July 18, 2014, Vladimir Kozlov > wrote: > > Okay, I will push into jdk8u/jdk8u-dev. > > Thanks, > Vladimir From david.holmes at oracle.com Sat Jul 19 12:21:07 2014 From: david.holmes at oracle.com (David Holmes) Date: Sat, 19 Jul 2014 22:21:07 +1000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDAE85@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> <53C45912.4050905@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA517@DEWDFEMB12A.global.corp.sap> <53C4D63A.5060802@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAE85@DEWDFEMB12A.global.corp.sap> Message-ID: <53CA62B3.50104@oracle.com> On 18/07/2014 10:58 PM, Lindenmaier, Goetz wrote: > Hi David, > > does this clear the situation? Clear as mud :) > Can we consider the change reviewed? Yes. Thanks, David > Thanks and best regards, > Goetz. > > -----Original Message----- > From: Lindenmaier, Goetz > Sent: Dienstag, 15. Juli 2014 11:18 > To: 'David Holmes'; Coleen Phillimore; hotspot-dev at openjdk.java.net > Subject: RE: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > Hi David, > > There are no clean rules followed, which happens to cause > compile problems here and there. I try to clean this up a bit.
> > If inline function foo() calls another inline function bar(), the C++ compiler > must see both implementations to compile foo (else it obviously can't > inline). It must see the declaration of the function to be inlined before > the function where it is inlined. If there are cyclic inlines you need inline.hpp > headers to get a safe state. Also, to be on the safe side, .hpp files may never include > .inline.hpp files, else an implementation can end up above the declaration > it needs. See also the two examples attached. > > If there is no cycle, it doesn't matter. That's why a lot of functions > are not placed according to this scheme. > > For the functions I moved to the header (path_separator etc): > They are used in a lot of .hpp files. By moving them to os.hpp I could easily avoid > including the os.inline.hpp in .hpp files, which would be bad. > > Best regards, > Goetz. > > > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 15. Juli 2014 09:20 > To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net > Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories > > On 15/07/2014 4:34 PM, Lindenmaier, Goetz wrote: >> Hi David, >> >> functions that are completely self-contained can go into the .hpp. >> Functions that call another inline function defined in another header >> must go to .inline.hpp, as otherwise there could be cycles the C++ compilers can't >> deal with. > > A quick survey of the shared *.inline.hpp files shows many don't seem to > fit this definition. Are templates also something that needs special > handling? > > I'm not saying anything is wrong with your changes, just trying to > understand what the rules are. > > Thanks, > David > >> Best regards, >> Goetz. >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Dienstag, 15.
Juli 2014 00:26 >> To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >> >> On 14/07/2014 10:37 PM, Lindenmaier, Goetz wrote: >>> Hi Coleen, >>> >>> Thanks for sponsoring this! >>> >>> bytes, ad, nativeInst and vmreg.inline were used quite often >>> in shared files, so it definitely makes sense for these to have >>> a shared header. >>> vm_version and register had an umbrella header, but that >>> was not used everywhere, so I cleaned it up. >>> That left adGlobals, jniTypes and interp_masm which >>> are only used a few times. I did these so that all files >>> are treated similarly. >>> In the end, I didn't need a header for all, as they were >>> not really needed in the shared files, or I found >>> another good place, as for adGlobals. >>> >>> I added you and David H. as reviewers to the webrev: >>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>> I hope this is ok with you, David. >> >> It might be somewhat premature :) I'm somewhat confused by the rules for >> headers and includes and inlines. I now see with this change a bunch of >> inline function definitions being moved out of the .inline.hpp file and >> into the .hpp file. Why? What criteria determine if an inline function >> goes into the .hpp versus the .inline.hpp file ??? >> >> Thanks, >> David >> >>> Thanks, >>> Goetz. >>> >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Coleen Phillimore >>> Sent: Montag, 14. Juli 2014 14:09 >>> To: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>> >>> >>> I think this looks like a good cleanup. I can sponsor it and make the >>> closed changes also again.
I initially proposed the #include cascades >>> because the alternative at the time was to blindly create a dispatching >>> header file for each target dependent file. I wanted to see the >>> #includes cleaned up instead and target dependent files included >>> directly. This adds 5 dispatching header files, which is fine. I >>> think the case of interp_masm.hpp is interesting though, because the >>> dispatching file is included in cpu dependent files, which could >>> directly include the cpu version. But there are 3 platform independent >>> files that include it. I'm not going to object though because I'm >>> grateful for this cleanup and I guess it's a matter of opinion which is >>> best to include in the cpu dependent directories. >>> >>> Thanks, >>> Coleen >>> >>> >>> On 7/14/14, 3:56 AM, Lindenmaier, Goetz wrote: >>>> Hi, >>>> >>>> David, can I consider this a review? >>>> >>>> And I please need a sponsor for this change. Could somebody >>>> please help here? Probably some closed adaptions are needed. >>>> It applies to any repo as my other change traveled around >>>> by now. >>>> >>>> Thanks and best regards, >>>> Goetz. >>>> >>>> >>>> -----Original Message----- >>>> From: David Holmes [mailto:david.holmes at oracle.com] >>>> Sent: Freitag, 11. Juli 2014 07:19 >>>> To: Lindenmaier, Goetz; Lois Foltan >>>> Cc: hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories >>>> >>>> On 10/07/2014 12:03 AM, Lindenmaier, Goetz wrote: >>>>> Hi, >>>>> >>>>> foo.hpp as few includes as possible, to avoid cycles. >>>>> foo.inline.hpp * must have foo.hpp, as it contains functions declared in foo.hpp >>>>> (either directly or via the platform files.) >>>>> * should include foo.platform.inline.hpp, so that shared files that >>>>> call functions from foo.platform.inline.hpp need not contain the >>>>> cascade of all the platform files. 
>>>>> If code in foo.platform.inline.hpp is only used in the platform files, >>>>> it is not necessary to have an umbrella header. >>>>> foo.platform.inline.hpp Should include what is needed in its code. >>>>> >>>>> For client code: >>>>> With this change I now removed all include cascades of platform files except for >>>>> those in the 'natural' headers. >>>>> Shared files, or files with less 'diversity' should include the general foo.[inline.]hpp. >>>>> (foo.platform.cpp should not get a cascade with bar.os_platform.[inline.]hpp >>>>> headers, but include bar.[inline.]hpp.) >>>>> If it's 1:1, I don't care, as discussed before. >>>>> >>>>> Does this make sense? >>>> I find the overall structure somewhat counter-intuitive from an >>>> implementation versus interface perspective. But ... >>>> >>>> Thanks for the explanation. >>>> >>>> David >>>> >>>>> Best regards, >>>>> Goetz. >>>>> >>>>> >>>>> which of the above should #include which others, and which should be >>>>> #include'd by "client" code? >>>>> >>>>> Thanks, >>>>> David >>>>> >>>>>> Thanks, >>>>>> Lois >>>>>> >>>>>>> David >>>>>>> ----- >>>>>>> >>>>>>>> src/cpu/sparc/vm/c1_Runtime1_sparc.cpp >>>>>>>> - include nativeInst.hpp instead of nativeInst_sparc.hpp >>>>>>>> - include vmreg.inline.hpp instead of vmreg_sparc.inline.hpp >>>>>>>> (however this could pull in more code than needed since >>>>>>>> vmreg.inline.hpp also includes asm/register.hpp and code/vmreg.hpp) >>>>>>>> >>>>>>>> src/cpu/ppc/vm/stubGenerator_ppc.cpp >>>>>>>> - change not related to clean up of umbrella headers, please >>>>>>>> explain/justify. >>>>>>>> >>>>>>>> src/share/vm/code/vmreg.hpp >>>>>>>> - Can lines #143-#15 be replaced by an inclusion of >>>>>>>> vmreg.inline.hpp or will >>>>>>>> this introduce a cyclical inclusion situation, since >>>>>>>> vmreg.inline.hpp includes vmreg.hpp? >>>>>>>> >>>>>>>> src/share/vm/classfile/classFileStream.cpp >>>>>>>> - only has a copyright change in the file, no other changes >>>>>>>> present? 
>>>>>>>> >>>>>>>> src/share/vm/prims/jvmtiClassFileReconstituter.cpp >>>>>>>> - incorrect copyright, no current year? >>>>>>>> >>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>> - incorrect copyright date for a new file >>>>>>>> >>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>> - technically this new file does not need to include >>>>>>>> "asm/register.hpp" since >>>>>>>> vmreg.hpp already includes it >>>>>>>> >>>>>>>> My only lingering concern is the cyclical nature of >>>>>>>> vmreg.hpp/vmreg.inline.hpp. It might be better to not introduce the new >>>>>>>> file "vmreg.inline.hpp" in favor of having files include vmreg.hpp >>>>>>>> instead? Again since vmreg.inline.hpp includes vmreg.hpp there really >>>>>>>> is not much difference between the two? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Lois >>>>>>>> >>>>>>>> >>>>>>>> On 7/7/2014 4:52 AM, Lindenmaier, Goetz wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I decided to clean up the remaining include cascades, too. >>>>>>>>> >>>>>>>>> This change introduces umbrella headers for the files in the cpu >>>>>>>>> subdirectories: >>>>>>>>> >>>>>>>>> src/share/vm/utilities/bytes.hpp >>>>>>>>> src/share/vm/opto/ad.hpp >>>>>>>>> src/share/vm/code/nativeInst.hpp >>>>>>>>> src/share/vm/code/vmreg.inline.hpp >>>>>>>>> src/share/vm/interpreter/interp_masm.hpp >>>>>>>>> >>>>>>>>> It also cleans up the include cascades for adGlobals*.hpp, >>>>>>>>> jniTypes*.hpp, vm_version*.hpp and register*.hpp. >>>>>>>>> >>>>>>>>> Where possible, this change avoids includes in headers. >>>>>>>>> Eventually it adds a forward declaration. >>>>>>>>> >>>>>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>>>>> rather small. >>>>>>>>> >>>>>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>>>>> contains machine dependent, c2 specific register information. 
So I >>>>>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>>>>> includes in, >>>>>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>>>>> >>>>>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>>>>> file, I don't think this is performance relevant. But having them in >>>>>>>>> the header requires pulling interp_masm.hpp into interpreter.hpp, and >>>>>>>>> thus all the assembler include headers into a lot of files. >>>>>>>>> >>>>>>>>> Please review and test this change. I need a sponsor, please. >>>>>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>>>>> >>>>>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>>>>> linuxppc64, >>>>>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>>>>> aixppc64, ntamd64 >>>>>>>>> in opt, dbg and fastdbg versions. >>>>>>>>> >>>>>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>>>>> arrives in other >>>>>>>>> repos, it will work there, too. (I tested it together with the other >>>>>>>>> change >>>>>>>>> against jdk9/dev, too.) >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Goetz.
>>>>>>>>> >>>>>>>>> PS: I also did all the Copyright adaptions ;) >>> From goetz.lindenmaier at sap.com Mon Jul 21 07:18:21 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Mon, 21 Jul 2014 07:18:21 +0000 Subject: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories In-Reply-To: <53CA62B3.50104@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CED8FB2@DEWDFEMB12A.global.corp.sap> <53BC2D74.4070708@oracle.com> <53BCA7B4.7020201@oracle.com> <53BD3782.3080002@oracle.com> <53BD3C8E.5070804@oracle.com> <4295855A5C1DE049A61835A1887419CC2CED9BE3@DEWDFEMB12A.global.corp.sap> <53BF73A9.3070105@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA2D1@DEWDFEMB12A.global.corp.sap> <53C3C860.6050402@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA3BC@DEWDFEMB12A.global.corp.sap> <53C45912.4050905@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDA517@DEWDFEMB12A.global.corp.sap> <53C4D63A.5060802@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAE85@DEWDFEMB12A.global.corp.sap> <53CA62B3.50104@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDB5DB@DEWDFEMB12A.global.corp.sap> Thanks, David! -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Samstag, 19. Juli 2014 14:21 To: Lindenmaier, Goetz; Coleen Phillimore; hotspot-dev at openjdk.java.net Subject: Re: RFR(L): 8049325: Introduce and clean up umbrella headers for the files in the cpu subdirectories On 18/07/2014 10:58 PM, Lindenmaier, Goetz wrote: > Hi David, > > does this clear the situation? Clear as mud :) > Can we consider the change reviewed? Yes. Thanks, David > Thanks and best regards, > Goetz.
>>>>>>>>> >>>>>>>>> vmreg_.inline.hpp contains functions declared in register_cpu.hpp >>>>>>>>> and vmreg.hpp, so there is no obvious mapping to the shared files. >>>>>>>>> Still, I did not split the files in the cpu directories, as they are >>>>>>>>> rather small. >>>>>>>>> >>>>>>>>> I didn't introduce a file for adGlobals_.hpp, as adGlobals mainly >>>>>>>>> contains machine dependent, c2 specific register information. So I >>>>>>>>> think optoreg.hpp is a good header to place the adGlobals_.hpp >>>>>>>>> includes in, >>>>>>>>> and then use optoreg.hpp where symbols from adGlobals are needed. >>>>>>>>> >>>>>>>>> I moved the constructor and destructor of CodeletMark to the .cpp >>>>>>>>> file, I don't think this is performance relevant. But having them in >>>>>>>>> the header requires pulling interp_masm.hpp into interpreter.hpp, and >>>>>>>>> thus all the assembler include headers into a lot of files. >>>>>>>>> >>>>>>>>> Please review and test this change. I please need a sponsor. >>>>>>>>> http://cr.openjdk.java.net/~goetz/webrevs/8049325-cpuInc/webrev.01/ >>>>>>>>> >>>>>>>>> I compiled and tested this without precompiled headers on linuxx86_64, >>>>>>>>> linuxppc64, >>>>>>>>> windowsx86_64, solaris_sparc64, solaris_sparc32, darwinx86_64, >>>>>>>>> aixppc64, ntamd64 >>>>>>>>> in opt, dbg and fastdbg versions. >>>>>>>>> >>>>>>>>> Currently, the change applies to hs-rt, but once my other change >>>>>>>>> arrives in other >>>>>>>>> repos, it will work there, too. (I tested it together with the other >>>>>>>>> change >>>>>>>>> against jdk9/dev, too.) >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Goetz.
>>>>>>>>> >>>>>>>>> PS: I also did all the Copyright adaptions ;) >>> From tobias.hartmann at oracle.com Mon Jul 21 08:44:55 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Mon, 21 Jul 2014 10:44:55 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53C962F3.3070405@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> <53C639A2.3050202@oracle.com> <53C90740.40602@oracle.com> <53C937E0.7060304@oracle.com> <53C9614C.8080109@oracle.com> <53C962F3.3070405@oracle.com> Message-ID: <53CCD307.7040806@oracle.com> Vladimir, Coleen, thanks for the reviews! On 18.07.2014 20:09, Vladimir Kozlov wrote: > On 7/18/14 11:02 AM, Coleen Phillimore wrote: >> >> On 7/18/14, 11:06 AM, Vladimir Kozlov wrote: >>> On 7/18/14 4:38 AM, Tobias Hartmann wrote: >>>> Hi, >>>> >>>> I spend some more days and was finally able to implement a test that >>>> deterministically triggers the bug: >>> >>> Why do you need to switch off compressed oops? Do you need to switch >>> off compressed klass pointers too (UseCompressedClassPointers)? >> >> CompressedOops when off turns off CompressedClassPointers. > > You are right, I forgot that. Still the question is why switch off coop? I'm only able to reproduce the bug without compressed oops. The original bug also only reproduces with -XX:-UseCompressedOops. I tried to figure out why (on Sparc): With compressed oops enabled, Method* metadata referencing 'WorkerClass' is added to 'doWork' in MacroAssembler::set_narrow_klass(..). In CodeBuffer::finalize_oop_references(..) the metadata is processed and an oop to the class loader 'URLClassLoader' is added. This oop leads to the unloading of 'doWork', hence the verification code is never executed. I'm not sure what set_narrow_klass(..) is used for in this case. 
I assume it stores a 'WorkerClass' Klass* in a register as part of an optimization? Because 'doWork' potentially works on any class. Apparently this optimization is not performed without compressed oops. Best, Tobias > > Vladimir > >>> >>>> >>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ >>> >>> Very nice! >> >> Yes, I agree. Impressive. >> >> The refactoring in nmethod.cpp looks good to me. I have no further >> comments. >> Thanks! >> Coleen >> >>> >>>> >>>> @Vladimir: The test shows why we should only clean the ICs but not >>>> unload the nmethod if possible. The method ' doWork' >>>> is still valid after WorkerClass was unloaded and depending on the >>>> complexity of the method we should avoid unloading it. >>> >>> Make sense. >>> >>>> >>>> On Sparc my patch fixes the bug and leads to the nmethod not being >>>> unloaded. The compiled version is therefore used even >>>> after WorkerClass is unloaded. >>>> >>>> On x86 the nmethod is unloaded anyway because of a dead oop. This is >>>> probably due to a slightly different implementation >>>> of the ICs. I'll have a closer look to see if we can improve that. >>> >>> Thanks, >>> Vladimir >>> >>>> >>>> Thanks, >>>> Tobias >>>> >>>> On 16.07.2014 10:36, Tobias Hartmann wrote: >>>>> Sorry, forgot to answer this question: >>>>>> Were you able to create a small test case for it that would be >>>>>> useful to add? >>>>> Unfortunately I was not able to create a test. The bug only >>>>> reproduces on a particular system with a > 30 minute run >>>>> of runThese. >>>>> >>>>> Best, >>>>> Tobias >>>>> >>>>> On 16.07.2014 09:54, Tobias Hartmann wrote: >>>>>> Hi Coleen, >>>>>> >>>>>> thanks for the review. >>>>>>> *+ if (csc->is_call_to_interpreted() && >>>>>>> stub_contains_dead_metadata(is_alive, csc->destination())) {* >>>>>>> *+ csc->set_to_clean();* >>>>>>> *+ }* >>>>>>> >>>>>>> This appears in each case. 
Can you fold it and the new function >>>>>>> into a function like >>>>>>> clean_call_to_interpreted_stub(is_alive, csc)? >>>>>> >>>>>> I folded it into the function clean_call_to_interpreter_stub(..). >>>>>> >>>>>> New webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >>>>>> >>>>>> Thanks, >>>>>> Tobias >>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Coleen >>>>>>> >>>>>>>> >>>>>>>> So before the permgen removal embedded method* were oops and they >>>>>>>> were processed in relocInfo::oop_type loop. >>>>>>>> >>>>>>>> May be instead of specializing opt_virtual_call_type and >>>>>>>> static_call_type call site you can simple add a loop for >>>>>>>> relocInfo::metadata_type (similar to oop_type loop)? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Vladimir >>>>>>>> >>>>>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> please review the following patch for JDK-8029443. >>>>>>>>> >>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>>>>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>>>>>> >>>>>>>>> *Problem* >>>>>>>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) >>>>>>>>> checks >>>>>>>>> if a nmethod can be unloaded because it contains dead oops. If >>>>>>>>> class >>>>>>>>> unloading occurred we additionally clear all ICs where the cached >>>>>>>>> metadata refers to an unloaded klass or method. If the nmethod >>>>>>>>> is not >>>>>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if >>>>>>>>> all >>>>>>>>> metadata is alive. The assert in CheckClass::check_class fails >>>>>>>>> because >>>>>>>>> the nmethod contains Method* metadata corresponding to a dead >>>>>>>>> Klass. >>>>>>>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>>>>>>> compiled IC. Normally we clear those stubs prior to >>>>>>>>> verification to >>>>>>>>> avoid dangling references to Method* [2], but only if the stub >>>>>>>>> is not in >>>>>>>>> use, i.e. 
if the IC is not in to-interpreted mode. In this >>>>>>>>> case the >>>>>>>>> to-interpreter stub may be executed and hand a stale Method* >>>>>>>>> to the >>>>>>>>> interpreter. >>>>>>>>> >>>>>>>>> *Solution >>>>>>>>> *The implementation of nmethod::do_unloading(..) is changed to >>>>>>>>> clean >>>>>>>>> compiled ICs and compiled static calls if they call into a >>>>>>>>> to-interpreter stub that references dead Method* metadata. >>>>>>>>> >>>>>>>>> The patch was affected by the G1 class unloading changes >>>>>>>>> (JDK-8048248) >>>>>>>>> because the method nmethod::do_unloading_parallel(..) was >>>>>>>>> added. I >>>>>>>>> adapted the implementation as well. >>>>>>>>> * >>>>>>>>> Testing >>>>>>>>> *Failing test (runThese) >>>>>>>>> JPRT >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Tobias >>>>>>>>> >>>>>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) >>>>>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>>>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>>>>>> >>>>>> >>>>> >>>> >> From thomas.schatzl at oracle.com Mon Jul 21 08:54:32 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 21 Jul 2014 10:54:32 +0200 Subject: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDAE6B@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDAE6B@DEWDFEMB12A.global.corp.sap> Message-ID: <1405932872.2723.13.camel@cirrus> Hi, On Fri, 2014-07-18 at 12:47 +0000, Lindenmaier, Goetz wrote: > Hi, > > This fixes two missing Resource and Handle marks. > > Please review and test this change. We please need a sponsor > to push it. > http://cr.openjdk.java.net/~goetz/webrevs/8050973-mark/webrev-01/ I think the resource/handle marks should be added before the work method is called for safety. 
So I would prefer if the Marks were added to at least in the two places in GangWorker::loop() and YieldingFlexibleGangWorker::loop() where the work method is called, if not everywhere where work() is called. The latter might be too big a change for now. > Should this be pushed to 8u20? Can you elaborate the consequences of not applying the patch to 8u20? Only showstoppers can be pushed to 8u20 at this time, so we need a good reason. It seems to be a problem with ParallelRefProcEnabled only, and we have not seen crashes. Thanks, Thomas From winniejayclay at gmail.com Mon Jul 21 12:38:19 2014 From: winniejayclay at gmail.com (Winnie JayClay) Date: Mon, 21 Jul 2014 20:38:19 +0800 Subject: waiting room and monitor lock aq. Message-ID: Hi, is there any order between threads which wake up by notifyAll and those trying to acquire obj monitor (blocked after synchronized invocation)? Is it mentioned in JLS? Thanks. From zhengyu.gu at oracle.com Mon Jul 21 12:49:55 2014 From: zhengyu.gu at oracle.com (Zhengyu Gu) Date: Mon, 21 Jul 2014 08:49:55 -0400 Subject: RFR(XS) 8050167: linux-sparcv9: hs_err file does not show any stack information In-Reply-To: <53C9957A.9070706@oracle.com> References: <53C99388.2030800@oracle.com> <53C9957A.9070706@oracle.com> Message-ID: <53CD0C73.9020608@oracle.com> Thanks for the review. -Zhengyu On 7/18/2014 5:45 PM, Mikael Vidstedt wrote: > > This looks like another case of code duplication between solaris_sparc > and linux_sparc - I wish we could unify it in some way going forward. > > Apart from that, this appears to be in line with what the Solaris > implementation does so thumbs up. > > Cheers, > Mikael > > On 2014-07-18 14:37, Zhengyu Gu wrote: >> This is a small fix to setup the first stack frame from exception >> handler. Sparc's sigcontext does not contain frame pointer, so uses >> frame::unpatchable instead. 
>> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8050167 >> Webrev: http://cr.openjdk.java.net/~zgu/8050167/webrev.00/ >> >> >> Thanks, >> >> -Zhengyu > From zhengyu.gu at oracle.com Mon Jul 21 12:50:12 2014 From: zhengyu.gu at oracle.com (Zhengyu Gu) Date: Mon, 21 Jul 2014 08:50:12 -0400 Subject: RFR(XS) 8050167: linux-sparcv9: hs_err file does not show any stack information In-Reply-To: <53C9A742.6030709@oracle.com> References: <53C99388.2030800@oracle.com> <53C9A742.6030709@oracle.com> Message-ID: <53CD0C84.9000308@oracle.com> Thanks for the review. -Zhengyu On 7/18/2014 7:01 PM, Coleen Phillimore wrote: > > This looks good! > Coleen > > On 7/18/14, 5:37 PM, Zhengyu Gu wrote: >> This is a small fix to setup the first stack frame from exception >> handler. Sparc's sigcontext does not contain frame pointer, so uses >> frame::unpatchable instead. >> >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8050167 >> Webrev: http://cr.openjdk.java.net/~zgu/8050167/webrev.00/ >> >> >> Thanks, >> >> -Zhengyu > From goetz.lindenmaier at sap.com Mon Jul 21 13:01:10 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Mon, 21 Jul 2014 13:01:10 +0000 Subject: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark In-Reply-To: <1405932872.2723.13.camel@cirrus> References: <4295855A5C1DE049A61835A1887419CC2CEDAE6B@DEWDFEMB12A.global.corp.sap> <1405932872.2723.13.camel@cirrus> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDB8D9@DEWDFEMB12A.global.corp.sap> Hi Thomas, we put the marks there as other work functions are set up similarly, as, e.g., CMConcurrentMarkingTask::work(), CMRemarkTask::work(). So we propose to first go with our change to fix the issue, it can be refactored afterwards. It's a minor resource leak. We don't think it's a showstopper, so let's only move it to 8u40. Best regards, Martin and Goetz. -----Original Message----- From: Thomas Schatzl [mailto:thomas.schatzl at oracle.com] Sent: Montag, 21.
Juli 2014 10:55 To: Lindenmaier, Goetz Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark Hi, On Fri, 2014-07-18 at 12:47 +0000, Lindenmaier, Goetz wrote: > Hi, > > This fixes two missing Resource and Handle marks. > > Please review and test this change. We please need a sponsor > to push it. > http://cr.openjdk.java.net/~goetz/webrevs/8050973-mark/webrev-01/ I think the resource/handle marks should be added before the work method is called for safety. So I would prefer if the Marks were added to at least in the two places in GangWorker::loop() and YieldingFlexibleGangWorker::loop() where the work method is called, if not everywhere where work() is called. The latter might be too big a change for now. > Should this be pushed to 8u20? Can you elaborate the consequences of not applying the patch to 8u20? Only showstoppers can be pushed to 8u20 at this time, so we need a good reason. It seems to be a problem with ParallelRefProcEnabled only, and we have not seen crashes. Thanks, Thomas From vladimir.kozlov at oracle.com Mon Jul 21 18:59:00 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 21 Jul 2014 11:59:00 -0700 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53CCD307.7040806@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> <53C639A2.3050202@oracle.com> <53C90740.40602@oracle.com> <53C937E0.7060304@oracle.com> <53C9614C.8080109@oracle.com> <53C962F3.3070405@oracle.com> <53CCD307.7040806@oracle.com> Message-ID: <53CD62F4.1020904@oracle.com> On 7/21/14 1:44 AM, Tobias Hartmann wrote: > Vladimir, Coleen, thanks for the reviews! 
> > On 18.07.2014 20:09, Vladimir Kozlov wrote: >> On 7/18/14 11:02 AM, Coleen Phillimore wrote: >>> >>> On 7/18/14, 11:06 AM, Vladimir Kozlov wrote: >>>> On 7/18/14 4:38 AM, Tobias Hartmann wrote: >>>>> Hi, >>>>> >>>>> I spend some more days and was finally able to implement a test that >>>>> deterministically triggers the bug: >>>> >>>> Why do you need to switch off compressed oops? Do you need to switch >>>> off compressed klass pointers too (UseCompressedClassPointers)? >>> >>> CompressedOops when off turns off CompressedClassPointers. >> >> You are right, I forgot that. Still the question is why switch off coop? > > I'm only able to reproduce the bug without compressed oops. The original > bug also only reproduces with -XX:-UseCompressedOops. I tried to figure > out why (on Sparc): > > With compressed oops enabled, Method* metadata referencing 'WorkerClass' > is added to 'doWork' in MacroAssembler::set_narrow_klass(..). In > CodeBuffer::finalize_oop_references(..) the metadata is processed and an > oop to the class loader 'URLClassLoader' is added. This oop leads to the > unloading of 'doWork', hence the verification code is never executed. > > I'm not sure what set_narrow_klass(..) is used for in this case. I > assume it stores a 'WorkerClass' Klass* in a register as part of an > optimization? Because 'doWork' potentially works on any class. > Apparently this optimization is not performed without compressed oops. I would suggest to compare 'doWork' assembler (-XX:CompileCommand=print,TestMethodUnloading::doWork) with coop and without it. Usually loaded into register class is used for klass compare do guard inlining code. Or to initialize new object. I don't see loading (constructing) uncompressed (whole) klass pointer from constant in sparc.ad. It could be the reason for different behavior. It could be loaded from constants section. But constants section should have metadata relocation info in such case. 
thanks, Vladimir > > Best, > Tobias > >> >> Vladimir >> >>>> >>>>> >>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ >>>> >>>> Very nice! >>> >>> Yes, I agree. Impressive. >>> >>> The refactoring in nmethod.cpp looks good to me. I have no further >>> comments. >>> Thanks! >>> Coleen >>> >>>> >>>>> >>>>> @Vladimir: The test shows why we should only clean the ICs but not >>>>> unload the nmethod if possible. The method ' doWork' >>>>> is still valid after WorkerClass was unloaded and depending on the >>>>> complexity of the method we should avoid unloading it. >>>> >>>> Make sense. >>>> >>>>> >>>>> On Sparc my patch fixes the bug and leads to the nmethod not being >>>>> unloaded. The compiled version is therefore used even >>>>> after WorkerClass is unloaded. >>>>> >>>>> On x86 the nmethod is unloaded anyway because of a dead oop. This is >>>>> probably due to a slightly different implementation >>>>> of the ICs. I'll have a closer look to see if we can improve that. >>>> >>>> Thanks, >>>> Vladimir >>>> >>>>> >>>>> Thanks, >>>>> Tobias >>>>> >>>>> On 16.07.2014 10:36, Tobias Hartmann wrote: >>>>>> Sorry, forgot to answer this question: >>>>>>> Were you able to create a small test case for it that would be >>>>>>> useful to add? >>>>>> Unfortunately I was not able to create a test. The bug only >>>>>> reproduces on a particular system with a > 30 minute run >>>>>> of runThese. >>>>>> >>>>>> Best, >>>>>> Tobias >>>>>> >>>>>> On 16.07.2014 09:54, Tobias Hartmann wrote: >>>>>>> Hi Coleen, >>>>>>> >>>>>>> thanks for the review. >>>>>>>> *+ if (csc->is_call_to_interpreted() && >>>>>>>> stub_contains_dead_metadata(is_alive, csc->destination())) {* >>>>>>>> *+ csc->set_to_clean();* >>>>>>>> *+ }* >>>>>>>> >>>>>>>> This appears in each case. Can you fold it and the new function >>>>>>>> into a function like >>>>>>>> clean_call_to_interpreted_stub(is_alive, csc)? >>>>>>> >>>>>>> I folded it into the function clean_call_to_interpreter_stub(..). 
>>>>>>> >>>>>>> New webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >>>>>>> >>>>>>> Thanks, >>>>>>> Tobias >>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Coleen >>>>>>>> >>>>>>>>> >>>>>>>>> So before the permgen removal embedded method* were oops and they >>>>>>>>> were processed in relocInfo::oop_type loop. >>>>>>>>> >>>>>>>>> May be instead of specializing opt_virtual_call_type and >>>>>>>>> static_call_type call site you can simple add a loop for >>>>>>>>> relocInfo::metadata_type (similar to oop_type loop)? >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Vladimir >>>>>>>>> >>>>>>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> please review the following patch for JDK-8029443. >>>>>>>>>> >>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>>>>>>> Webrev: http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>>>>>>> >>>>>>>>>> *Problem* >>>>>>>>>> After the tracing/marking phase of GC, nmethod::do_unloading(..) >>>>>>>>>> checks >>>>>>>>>> if a nmethod can be unloaded because it contains dead oops. If >>>>>>>>>> class >>>>>>>>>> unloading occurred we additionally clear all ICs where the cached >>>>>>>>>> metadata refers to an unloaded klass or method. If the nmethod >>>>>>>>>> is not >>>>>>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally checks if >>>>>>>>>> all >>>>>>>>>> metadata is alive. The assert in CheckClass::check_class fails >>>>>>>>>> because >>>>>>>>>> the nmethod contains Method* metadata corresponding to a dead >>>>>>>>>> Klass. >>>>>>>>>> The Method* belongs to a to-interpreter stub [1] of an optimized >>>>>>>>>> compiled IC. Normally we clear those stubs prior to >>>>>>>>>> verification to >>>>>>>>>> avoid dangling references to Method* [2], but only if the stub >>>>>>>>>> is not in >>>>>>>>>> use, i.e. if the IC is not in to-interpreted mode. 
In this >>>>>>>>>> case the >>>>>>>>>> to-interpreter stub may be executed and hand a stale Method* >>>>>>>>>> to the >>>>>>>>>> interpreter. >>>>>>>>>> >>>>>>>>>> *Solution >>>>>>>>>> *The implementation of nmethod::do_unloading(..) is changed to >>>>>>>>>> clean >>>>>>>>>> compiled ICs and compiled static calls if they call into a >>>>>>>>>> to-interpreter stub that references dead Method* metadata. >>>>>>>>>> >>>>>>>>>> The patch was affected by the G1 class unloading changes >>>>>>>>>> (JDK-8048248) >>>>>>>>>> because the method nmethod::do_unloading_parallel(..) was >>>>>>>>>> added. I >>>>>>>>>> adapted the implementation as well. >>>>>>>>>> * >>>>>>>>>> Testing >>>>>>>>>> *Failing test (runThese) >>>>>>>>>> JPRT >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Tobias >>>>>>>>>> >>>>>>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) >>>>>>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>>>>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>>>>>>> >>>>>>> >>>>>> >>>>> >>> > From david.holmes at oracle.com Mon Jul 21 22:16:26 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 22 Jul 2014 08:16:26 +1000 Subject: waiting room and monitor lock aq. In-Reply-To: References: Message-ID: <53CD913A.2020007@oracle.com> On 21/07/2014 10:38 PM, Winnie JayClay wrote: > Hi, is there any order between threads which wake up by notifyAll and those > trying to acquire obj monitor (blocked after synchronized invocation)? Is > it mentioned in JLS? Ordering is completely unspecified. An implementation is free to do what it likes to optimize performance (using whatever metric it chooses). So for example a thread that is woken up by a notify/notifyAll and is placed into the monitor acquisition queue need not be given preference. In general in hotspot the queues are simply FIFO, but the monitor implementation also allows barging and monitor release does not perform a hand-off. David > Thanks. 
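[Editorial note: a minimal sketch of the notifyAll semantics David describes above. The class name, thread count, and flag names are my own illustrative assumptions; nothing here asserts a wakeup order — it only shows that every waiter eventually reacquires the monitor and proceeds, while the order of reacquisition is left entirely to the implementation.]

```java
// Sketch: notifyAll wakes every thread in the wait set, but neither the JLS
// nor HotSpot promises the order in which the woken threads (or newly
// arriving ones) reacquire the monitor.
public class NotifyAllOrder {
    private static final Object lock = new Object();
    private static boolean go = false;
    private static int woken = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] waiters = new Thread[4];
        for (int i = 0; i < waiters.length; i++) {
            waiters[i] = new Thread(() -> {
                synchronized (lock) {
                    while (!go) {            // guard against spurious wakeups
                        try {
                            lock.wait();
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                    woken++;                 // reacquisition order is unspecified
                }
            });
            waiters[i].start();
        }
        synchronized (lock) {
            go = true;
            lock.notifyAll();                // wakes all waiters; no ordering promise
        }
        for (Thread t : waiters) {
            t.join();
        }
        System.out.println("woken=" + woken);
    }
}
```

Running this repeatedly always completes with all four threads woken, but instrumenting the order of the increments may show it varying between runs and JVM versions — which is exactly why code must not depend on it.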
> From tobias.hartmann at oracle.com Tue Jul 22 09:22:27 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 22 Jul 2014 11:22:27 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53CD62F4.1020904@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> <53C639A2.3050202@oracle.com> <53C90740.40602@oracle.com> <53C937E0.7060304@oracle.com> <53C9614C.8080109@oracle.com> <53C962F3.3070405@oracle.com> <53CCD307.7040806@oracle.com> <53CD62F4.1020904@oracle.com> Message-ID: <53CE2D53.6040006@oracle.com> On 21.07.2014 20:59, Vladimir Kozlov wrote: > On 7/21/14 1:44 AM, Tobias Hartmann wrote: >> Vladimir, Coleen, thanks for the reviews! >> >> On 18.07.2014 20:09, Vladimir Kozlov wrote: >>> On 7/18/14 11:02 AM, Coleen Phillimore wrote: >>>> >>>> On 7/18/14, 11:06 AM, Vladimir Kozlov wrote: >>>>> On 7/18/14 4:38 AM, Tobias Hartmann wrote: >>>>>> Hi, >>>>>> >>>>>> I spend some more days and was finally able to implement a test that >>>>>> deterministically triggers the bug: >>>>> >>>>> Why do you need to switch off compressed oops? Do you need to switch >>>>> off compressed klass pointers too (UseCompressedClassPointers)? >>>> >>>> CompressedOops when off turns off CompressedClassPointers. >>> >>> You are right, I forgot that. Still the question is why switch off >>> coop? >> >> I'm only able to reproduce the bug without compressed oops. The original >> bug also only reproduces with -XX:-UseCompressedOops. I tried to figure >> out why (on Sparc): >> >> With compressed oops enabled, Method* metadata referencing 'WorkerClass' >> is added to 'doWork' in MacroAssembler::set_narrow_klass(..). In >> CodeBuffer::finalize_oop_references(..) the metadata is processed and an >> oop to the class loader 'URLClassLoader' is added. 
This oop leads to the >> unloading of 'doWork', hence the verification code is never executed. >> >> I'm not sure what set_narrow_klass(..) is used for in this case. I >> assume it stores a 'WorkerClass' Klass* in a register as part of an >> optimization? Because 'doWork' potentially works on any class. >> Apparently this optimization is not performed without compressed oops. > > I would suggest to compare 'doWork' assembler > (-XX:CompileCommand=print,TestMethodUnloading::doWork) with coop and > without it. Usually loaded into register class is used for klass > compare do guard inlining code. Or to initialize new object. > > I don't see loading (constructing) uncompressed (whole) klass pointer > from constant in sparc.ad. It could be the reason for different > behavior. It could be loaded from constants section. But constants > section should have metadata relocation info in such case. I did as you suggested and found the following: During the profiling phase the class given to 'doWork' always is 'WorkerClass'. The C2 compiler therefore optimizes the compiled version to expect a 'WorkerClass'. The branch that instantiates a new object is guarded by an uncommon trap (class_check). The difference between the two versions (with and without compressed oops) is the loading of the 'WorkerClass' Klass to check if the given class is equal: With compressed oops: SET narrowklass: precise klass WorkerClass: 0x00000001004a0d40:Constant:exact *,R_L1 ! compressed klass ptr CWBne R_L2,R_L1,B8 ! compressed ptr P=0.000001 C=-1.000000 Without: SET precise klass WorkerClass: 0x00000001004aeab0:Constant:exact *,R_L1 ! non-oop ptr CXBpne R_L2,R_L1,B8 ! ptr P=0.000001 C=-1.000000 R_L2: class given as parameter B8: location of uncommon trap In the first case, the Klass is loaded by a 'loadConNKlass' instruction that calls MacroAssembler::set_narrow_klass(..) which then creates a metadata_Relocation for the 'WorkerClass'. 
This metadata_Relocation is processed by CodeBuffer::finalize_oop_references(..) and an oop to 'WorkerClass' is added. This oop causes the unloading of the method. In the second case, the Klass is loaded by a 'loadConP_no_oop_cheap' instruction that does not create a metadata_Relocation. I don't understand why the metadata_Relocation in the first case is needed? As the test shows it is better to only unload the method if we hit the uncommon trap because we could still use other (potentially complex) branches of the method. Thanks, Tobias > > thanks, > Vladimir > >> >> Best, >> Tobias >> >>> >>> Vladimir >>> >>>>> >>>>>> >>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ >>>>> >>>>> Very nice! >>>> >>>> Yes, I agree. Impressive. >>>> >>>> The refactoring in nmethod.cpp looks good to me. I have no further >>>> comments. >>>> Thanks! >>>> Coleen >>>> >>>>> >>>>>> >>>>>> @Vladimir: The test shows why we should only clean the ICs but not >>>>>> unload the nmethod if possible. The method ' doWork' >>>>>> is still valid after WorkerClass was unloaded and depending on the >>>>>> complexity of the method we should avoid unloading it. >>>>> >>>>> Make sense. >>>>> >>>>>> >>>>>> On Sparc my patch fixes the bug and leads to the nmethod not being >>>>>> unloaded. The compiled version is therefore used even >>>>>> after WorkerClass is unloaded. >>>>>> >>>>>> On x86 the nmethod is unloaded anyway because of a dead oop. This is >>>>>> probably due to a slightly different implementation >>>>>> of the ICs. I'll have a closer look to see if we can improve that. >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>>> >>>>>> Thanks, >>>>>> Tobias >>>>>> >>>>>> On 16.07.2014 10:36, Tobias Hartmann wrote: >>>>>>> Sorry, forgot to answer this question: >>>>>>>> Were you able to create a small test case for it that would be >>>>>>>> useful to add? >>>>>>> Unfortunately I was not able to create a test.
The bug only >>>>>>> reproduces on a particular system with a > 30 minute run >>>>>>> of runThese. >>>>>>> >>>>>>> Best, >>>>>>> Tobias >>>>>>> >>>>>>> On 16.07.2014 09:54, Tobias Hartmann wrote: >>>>>>>> Hi Coleen, >>>>>>>> >>>>>>>> thanks for the review. >>>>>>>>> *+ if (csc->is_call_to_interpreted() && >>>>>>>>> stub_contains_dead_metadata(is_alive, csc->destination())) {* >>>>>>>>> *+ csc->set_to_clean();* >>>>>>>>> *+ }* >>>>>>>>> >>>>>>>>> This appears in each case. Can you fold it and the new function >>>>>>>>> into a function like >>>>>>>>> clean_call_to_interpreted_stub(is_alive, csc)? >>>>>>>> >>>>>>>> I folded it into the function clean_call_to_interpreter_stub(..). >>>>>>>> >>>>>>>> New webrev: >>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Tobias >>>>>>>> >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Coleen >>>>>>>>> >>>>>>>>>> >>>>>>>>>> So before the permgen removal embedded method* were oops and >>>>>>>>>> they >>>>>>>>>> were processed in relocInfo::oop_type loop. >>>>>>>>>> >>>>>>>>>> May be instead of specializing opt_virtual_call_type and >>>>>>>>>> static_call_type call site you can simple add a loop for >>>>>>>>>> relocInfo::metadata_type (similar to oop_type loop)? >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Vladimir >>>>>>>>>> >>>>>>>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> please review the following patch for JDK-8029443. >>>>>>>>>>> >>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>>>>>>>> Webrev: >>>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>>>>>>>> >>>>>>>>>>> *Problem* >>>>>>>>>>> After the tracing/marking phase of GC, >>>>>>>>>>> nmethod::do_unloading(..) >>>>>>>>>>> checks >>>>>>>>>>> if a nmethod can be unloaded because it contains dead oops. 
If >>>>>>>>>>> class >>>>>>>>>>> unloading occurred we additionally clear all ICs where the >>>>>>>>>>> cached >>>>>>>>>>> metadata refers to an unloaded klass or method. If the nmethod >>>>>>>>>>> is not >>>>>>>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally >>>>>>>>>>> checks if >>>>>>>>>>> all >>>>>>>>>>> metadata is alive. The assert in CheckClass::check_class fails >>>>>>>>>>> because >>>>>>>>>>> the nmethod contains Method* metadata corresponding to a dead >>>>>>>>>>> Klass. >>>>>>>>>>> The Method* belongs to a to-interpreter stub [1] of an >>>>>>>>>>> optimized >>>>>>>>>>> compiled IC. Normally we clear those stubs prior to >>>>>>>>>>> verification to >>>>>>>>>>> avoid dangling references to Method* [2], but only if the stub >>>>>>>>>>> is not in >>>>>>>>>>> use, i.e. if the IC is not in to-interpreted mode. In this >>>>>>>>>>> case the >>>>>>>>>>> to-interpreter stub may be executed and hand a stale Method* >>>>>>>>>>> to the >>>>>>>>>>> interpreter. >>>>>>>>>>> >>>>>>>>>>> *Solution >>>>>>>>>>> *The implementation of nmethod::do_unloading(..) is changed to >>>>>>>>>>> clean >>>>>>>>>>> compiled ICs and compiled static calls if they call into a >>>>>>>>>>> to-interpreter stub that references dead Method* metadata. >>>>>>>>>>> >>>>>>>>>>> The patch was affected by the G1 class unloading changes >>>>>>>>>>> (JDK-8048248) >>>>>>>>>>> because the method nmethod::do_unloading_parallel(..) was >>>>>>>>>>> added. I >>>>>>>>>>> adapted the implementation as well. >>>>>>>>>>> * >>>>>>>>>>> Testing >>>>>>>>>>> *Failing test (runThese) >>>>>>>>>>> JPRT >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Tobias >>>>>>>>>>> >>>>>>>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
>>>>>>>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>>>>>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>> >> From goetz.lindenmaier at sap.com Tue Jul 22 09:55:17 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 22 Jul 2014 09:55:17 +0000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <53C933A9.7060705@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <53C7F136.3000709@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> <53C933A9.7060705@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDBAE4@DEWDFEMB12A.global.corp.sap> Hi, could somebody please sponsor this change? It also needs to go to jdk8u20. Thanks and best regards, Goetz. -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Freitag, 18. Juli 2014 16:48 To: Lindenmaier, Goetz; Doerr, Martin; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache Looks good. Thanks, Vladimir On 7/18/14 2:08 AM, Lindenmaier, Goetz wrote: > Hi Vladimir, > > We fixed the comment and camel case stuff. > http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ > > We think it looks better if volatile is before the type. > > Best regards, > Martin and Goetz. > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Donnerstag, 17. Juli 2014 17:52 > To: Doerr, Martin; Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > First, comments needs to be fixed: > > "The array elements must be volatile" but in changeset comments: "// Array MUST be volatile!" > > Second, type name should be camel style (PcDescPtr). 
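The question of where `volatile` has to go in this declaration has a close Java analogue that may make the intent clearer: declaring an array field `volatile` only makes the field itself volatile, not its elements, so per-element volatile semantics need `AtomicReferenceArray`. The sketch below is an illustrative analogy, not HotSpot code; `CacheSketch` and `findOrAdd` are invented names.

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

public class CacheSketch {
    static final int CACHE_SIZE = 4;

    // 'volatile' here only covers the field holding the array reference;
    // reads and writes of the individual elements are NOT volatile.
    volatile String[] plainCache = new String[CACHE_SIZE];

    // Per-element volatile access: the Java counterpart of making the
    // array *elements* volatile rather than the objects pointed to.
    final AtomicReferenceArray<String> cache = new AtomicReferenceArray<>(CACHE_SIZE);

    String findOrAdd(int slot, String value) {
        String cached = cache.get(slot);  // one volatile read; reuse this value and
        if (cached != null) {             // never re-read the slot, so a concurrent
            return cached;                // writer cannot invalidate a matched entry
        }
        cache.set(slot, value);           // volatile write publishes the element
        return value;
    }

    public static void main(String[] args) {
        CacheSketch c = new CacheSketch();
        System.out.println(c.findOrAdd(0, "a"));  // a
        System.out.println(c.findOrAdd(0, "b"));  // a  (cache hit wins)
    }
}
```

Reading the element once into a local and returning that local mirrors the fix under review: with a non-volatile slot, a compiler is free to reload the element after a successful match and return a value another thread has already overwritten.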
> > Someone have to double check this volatile declaration. Your example is more clear for me than typedef. > > Thanks, > Vladimir > > On 7/17/14 8:19 AM, Doerr, Martin wrote: >> Hi Vladimir, >> >> the following line should also work: >> PcDesc* volatile _pc_descs[cache_size]; >> But we thought that the typedef would improve readability. >> The array elements must be volatile, not the PcDescs which are pointed to. >> >> Best regards, >> Martin >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov >> Sent: Donnerstag, 17. Juli 2014 17:09 >> To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >> >> Hi Goetz, >> >> What is the reason for new typedef? >> >> Thanks, >> Vladimir >> >> On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> This webrev fixes an important concurrency issue in nmethod. >>> Please review and test this change. I please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >>> >>> This should be fixed into 8u20, too. >>> >>> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. >>> Best regards, >>> Martin and Goetz. >>> From aph at redhat.com Tue Jul 22 10:08:03 2014 From: aph at redhat.com (Andrew Haley) Date: Tue, 22 Jul 2014 11:08:03 +0100 Subject: AArch64 in JDK 9? 
In-Reply-To: <5352CEC4.7080500@oracle.com> References: <534C2283.2010108@redhat.com> <53523109.7040002@redhat.com> <5352CD23.5050700@oracle.com> <5352CEC4.7080500@oracle.com> Message-ID: <53CE3803.5050803@redhat.com> I have prepared a diff that is the patch needed in the shared parts of HotSpot for OpenJDK 9. It is at http://cr.openjdk.java.net/~aph/aarch64.1 If it would help to have a webrev I will do that, but it wasn't clear how it should be done. I believe that there are no non-trivial changes to shared code which are not guarded by TARGET_ARCH_aarch64, therefore it is unlikely that any other target will be affected. There are a couple of trivial white space changes which I intend to remove; please ignore these. I hope that this will help you estimate the work required on Oracle's side to integrate this port. Thanks, Andrew. From goetz.lindenmaier at sap.com Tue Jul 22 10:42:30 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 22 Jul 2014 10:42:30 +0000 Subject: RFR(S): 8050978: Fix bad field access check in C1 and C2 In-Reply-To: <53C9364A.1000202@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CEDAAAC@DEWDFEMB12A.global.corp.sap> <53C7F369.5070706@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD04@DEWDFEMB12A.global.corp.sap> <53C9364A.1000202@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDBB30@DEWDFEMB12A.global.corp.sap> Hi, I need a second reviewer for this change, please, as well as a sponsor. It also needs to go to 8u20. Thanks and best regards, Goetz. -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Freitag, 18. Juli 2014 16:59 To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050978: Fix bad field access check in C1 and C2 Good. Thanks, Vladimir On 7/18/14 12:15 AM, Lindenmaier, Goetz wrote: > Hi Vladimir, > > we updated the changeset with the new comment.
> http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ > > Best regards, > Goetz. > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Donnerstag, 17. Juli 2014 18:02 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050978: Fix bad field access check in C1 and C2 > > Please don't put the next part of the comment into the sources: > > + // This will make the jck8 test > + // vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html > + // pass with -Xbatch -Xcomp > > instead add something like "canonical_holder should not be used to check access because it can erroneously succeed". > > Thanks, > Vladimir > > On 7/17/14 3:47 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> This fixes an error in the field access checks in C1 and C2. >> Please review and test the change. We need a sponsor, please. >> http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ >> >> This should be included in 8u20, too. >> >> JCK8 test vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html fails with -Xbatch -Xcomp due to a bad field access check in C1 and C2 >> >> Precondition: >> ------------- >> >> Consider the following class hierarchy: >> >> A >> / \ >> B1 B2 >> >> A declares a field "aa" which both B1 and B2 inherit. >> >> Although aa is declared in a superclass of B1, methods in B1 may not access the field aa of an object of class B2: >> >> class B1 extends A { >> m(B2 b2) { >> ... >> x = b2.aa; // !!! Access not allowed >> } >> } >> >> This is checked by the test mentioned above. >> >> Problem: >> -------- >> >> ciField::will_link() used by C1 and C2 does the access check using the canonical_holder (which is A in this case) and thus the access erroneously succeeds.
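The access rule behind this check can be sketched in plain Java. Note the assumptions: the mail does not say so explicitly, but for the rule (JLS 6.6.2.1) to bite, `aa` must be `protected` and `A` must live in a different package than `B1`/`B2`. To keep this sketch compilable in one file, the illegal access is only shown as a comment, and the method names are invented.

```java
public class AccessSketch {
    // In the scenario from the mail, A would sit in its own package, e.g. "pa".
    static class A { protected int aa = 42; }

    // B1 and B2 would sit in another package, e.g. "pb".
    static class B1 extends A {
        int readOwn() {
            return this.aa;        // always legal: access through B1's own type
        }
        int readSibling(B2 b2) {
            // return b2.aa;       // in the cross-package setup, javac rejects this:
            //                     // b2's type (B2) is neither B1 nor a subclass of B1
            return -1;             // placeholder so the sketch stays compilable
        }
    }
    static class B2 extends A { }

    public static void main(String[] args) {
        B1 b1 = new B1();
        System.out.println(b1.readOwn());              // 42
        System.out.println(b1.readSibling(new B2()));  // -1
    }
}
```

The bug report then makes sense: checking access against the canonical holder A (where the field is declared) instead of the holder named in the class file loses exactly this subclass-of-the-accessor restriction, so the JIT-compiled code accepts an access the interpreter rejects.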
>> >> Fix: >> ---- >> >> In ciField::ciField(), just before the canonical holder is stored into the _holder variable (which is used by ciField::will_link()), perform an additional access check with the holder declared in the class file. If this check fails, store the declared holder instead and ciField::will_link() will bail out compilation for this field later on. Then, the interpreter will throw a PrivilegedAccessException at runtime. >> >> Ways to reproduce: >> ------------------ >> >> Run the above JCK test with >> >> C2 only: -XX:-TieredCompilation -Xbatch -Xcomp >> >> or >> >> with C1: -Xbatch -Xcomp -XX:-Inline >> >> Best regards, >> Andreas and Goetz >> >> From mikael.gerdin at oracle.com Tue Jul 22 11:27:52 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Tue, 22 Jul 2014 13:27:52 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53CE2D53.6040006@oracle.com> References: <53C3C584.7070008@oracle.com> <53CD62F4.1020904@oracle.com> <53CE2D53.6040006@oracle.com> Message-ID: <13924261.0o0go5MNlk@mgerdin03> Tobias, On Tuesday 22 July 2014 11.22.27 Tobias Hartmann wrote: > On 21.07.2014 20:59, Vladimir Kozlov wrote: > > On 7/21/14 1:44 AM, Tobias Hartmann wrote: > >> Vladimir, Coleen, thanks for the reviews! > >> > >> On 18.07.2014 20:09, Vladimir Kozlov wrote: > >>> On 7/18/14 11:02 AM, Coleen Phillimore wrote: > >>>> On 7/18/14, 11:06 AM, Vladimir Kozlov wrote: > >>>>> On 7/18/14 4:38 AM, Tobias Hartmann wrote: > >>>>>> Hi, > >>>>>> > >>>>>> I spent some more days and was finally able to implement a test that > >>>>> > >>>>>> deterministically triggers the bug: > >>>>> Why do you need to switch off compressed oops? Do you need to switch > >>>>> off compressed klass pointers too (UseCompressedClassPointers)? > >>>> > >>>> CompressedOops when off turns off CompressedClassPointers. > >>> > >>> You are right, I forgot that.
Still the question is why switch off > >>> coop? > >> > >> I'm only able to reproduce the bug without compressed oops. The original > >> bug also only reproduces with -XX:-UseCompressedOops. I tried to figure > >> out why (on Sparc): > >> > >> With compressed oops enabled, Method* metadata referencing 'WorkerClass' > >> is added to 'doWork' in MacroAssembler::set_narrow_klass(..). In > >> CodeBuffer::finalize_oop_references(..) the metadata is processed and an > >> oop to the class loader 'URLClassLoader' is added. This oop leads to the > >> unloading of 'doWork', hence the verification code is never executed. > >> > >> I'm not sure what set_narrow_klass(..) is used for in this case. I > >> assume it stores a 'WorkerClass' Klass* in a register as part of an > >> optimization? Because 'doWork' potentially works on any class. > >> Apparently this optimization is not performed without compressed oops. > > > > I would suggest to compare 'doWork' assembler > > (-XX:CompileCommand=print,TestMethodUnloading::doWork) with coop and > > without it. Usually loaded into register class is used for klass > > compare do guard inlining code. Or to initialize new object. > > > > I don't see loading (constructing) uncompressed (whole) klass pointer > > from constant in sparc.ad. It could be the reason for different > > behavior. It could be loaded from constants section. But constants > > section should have metadata relocation info in such case. > > I did as you suggested and found the following: > > During the profiling phase the class given to 'doWork' always is > 'WorkerClass'. The C2 compiler therefore optimizes the compiled version > to expect a 'WorkerClass'. The branch that instantiates a new object is > guarded by an uncommon trap (class_check). 
The difference between the > two versions (with and without compressed oops) is the loading of the > 'WorkerClass' Klass to check if the given class is equal: > > With compressed oops: > SET narrowklass: precise klass WorkerClass: > 0x00000001004a0d40:Constant:exact *,R_L1 ! compressed klass ptr > CWBne R_L2,R_L1,B8 ! compressed ptr P=0.000001 C=-1.000000 > > Without: > SET precise klass WorkerClass: 0x00000001004aeab0:Constant:exact > *,R_L1 ! non-oop ptr > CXBpne R_L2,R_L1,B8 ! ptr P=0.000001 C=-1.000000 > > R_L2: class given as parameter > B8: location of uncommon trap > > In the first case, the Klass is loaded by a 'loadConNKlass' instruction > that calls MacroAssembler::set_narrow_klass(..) which then creates a > metadata_Relocation for the 'WorkerClass'. This metadata_Relocation is > processed by CodeBuffer::finalize_oop_references(..) and an oop to > 'WorkerClass' is added. This oop causes the unloading of the method. > > In the second case, the Klass is loaded by a 'loadConP_no_oop_cheap' > instruction that does not create a metadata_Relocation. > > I don't understand why the metadata_Relocation in the first case is > needed. As the test shows, it is better to only unload the method if we > hit the uncommon trap because we could still use other (potentially > complex) branches of the method. It sounds to me like the case without compressed oops is incorrect. If WorkerClass is unloaded, then the memory address 0x00000001004aeab0 could potentially be used for some other class at some point in the future. This could cause a false positive in the class check and cause corruption and/or a crash, right? Currently the only course of action in this case is to force the method to be unloaded. An alternative approach would be to use a metadata relocation to set the class check to an impossible value, effectively making the code for the dead class unreachable.
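The false-positive scenario described above can be shown with a toy simulation: slot indices stand in for raw Klass* addresses, and the "compiled guard" compares only the address, as the non-coop code emitted without a metadata relocation does. This is purely didactic Java, not HotSpot code, and every name is invented.

```java
public class StaleKlassSketch {
    // "Metaspace": each slot holds the class currently loaded at that address.
    static final String[] metaspace = new String[8];

    // The compiled class check: compares only the raw address, exactly like
    // a guard whose klass constant carries no metadata relocation.
    static boolean guardPasses(int receiverKlassAddr, int bakedInAddr) {
        return receiverKlassAddr == bakedInAddr;
    }

    public static void main(String[] args) {
        metaspace[3] = "WorkerClass";
        int bakedIn = 3;  // address of WorkerClass captured at compile time

        // Correct: a genuine WorkerClass receiver passes the guard.
        System.out.println(guardPasses(3, bakedIn));  // true

        // WorkerClass is unloaded and its address is reused for another class.
        metaspace[3] = "OtherClass";

        // False positive: an OtherClass receiver now also has klass address 3,
        // so the stale raw compare passes for the wrong class.
        System.out.println(guardPasses(3, bakedIn) + " for " + metaspace[3]);
    }
}
```

Forcing the nmethod to be unloaded avoids this reuse window; patching the baked-in constant to an impossible value via a metadata relocation would instead make the guard fail permanently while keeping the rest of the method alive.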
It seems most likely to me that this is a leftover bug from perm gen removal, since Klasses are no longer oops they can be materialized as a immP_no_oop_cheap and not generate a relocation entry even though one should be needed. Is there a similar construct on other platforms? /Mikael > > Thanks, > Tobias > > > thanks, > > Vladimir > > > >> Best, > >> Tobias > >> > >>> Vladimir > >>> > >>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ > >>>>> > >>>>> Very nice! > >>>> > >>>> Yes, I agree. Impressive. > >>>> > >>>> The refactoring in nmethod.cpp looks good to me. I have no further > >>>> comments. > >>>> Thanks! > >>>> Coleen > >>>> > >>>>>> @Vladimir: The test shows why we should only clean the ICs but not > >>>>>> unload the nmethod if possible. The method ' doWork' > >>>>>> is still valid after WorkerClass was unloaded and depending on the > >>>>>> complexity of the method we should avoid unloading it. > >>>>> > >>>>> Make sense. > >>>>> > >>>>>> On Sparc my patch fixes the bug and leads to the nmethod not being > >>>>>> unloaded. The compiled version is therefore used even > >>>>>> after WorkerClass is unloaded. > >>>>>> > >>>>>> On x86 the nmethod is unloaded anyway because of a dead oop. This is > >>>>>> probably due to a slightly different implementation > >>>>>> of the ICs. I'll have a closer look to see if we can improve that. > >>>>> > >>>>> Thanks, > >>>>> Vladimir > >>>>> > >>>>>> Thanks, > >>>>>> Tobias > >>>>>> > >>>>>> On 16.07.2014 10:36, Tobias Hartmann wrote: > >>>>>>> Sorry, forgot to answer this question: > >>>>>>>> Were you able to create a small test case for it that would be > >>>>>>>> useful to add? > >>>>>>> > >>>>>>> Unfortunately I was not able to create a test. The bug only > >>>>>>> reproduces on a particular system with a > 30 minute run > >>>>>>> of runThese. 
> >>>>>>> > >>>>>>> Best, > >>>>>>> Tobias > >>>>>>> > >>>>>>> On 16.07.2014 09:54, Tobias Hartmann wrote: > >>>>>>>> Hi Coleen, > >>>>>>>> > >>>>>>>> thanks for the review. > >>>>>>>> > >>>>>>>>> *+ if (csc->is_call_to_interpreted() && > >>>>>>>>> stub_contains_dead_metadata(is_alive, csc->destination())) {* > >>>>>>>>> *+ csc->set_to_clean();* > >>>>>>>>> *+ }* > >>>>>>>>> > >>>>>>>>> This appears in each case. Can you fold it and the new function > >>>>>>>>> into a function like > >>>>>>>>> clean_call_to_interpreted_stub(is_alive, csc)? > >>>>>>>> > >>>>>>>> I folded it into the function clean_call_to_interpreter_stub(..). > >>>>>>>> > >>>>>>>> New webrev: > >>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ > >>>>>>>> > >>>>>>>> Thanks, > >>>>>>>> Tobias > >>>>>>>> > >>>>>>>>> Thanks, > >>>>>>>>> Coleen > >>>>>>>>> > >>>>>>>>>> So before the permgen removal embedded method* were oops and > >>>>>>>>>> they > >>>>>>>>>> were processed in relocInfo::oop_type loop. > >>>>>>>>>> > >>>>>>>>>> May be instead of specializing opt_virtual_call_type and > >>>>>>>>>> static_call_type call site you can simple add a loop for > >>>>>>>>>> relocInfo::metadata_type (similar to oop_type loop)? > >>>>>>>>>> > >>>>>>>>>> Thanks, > >>>>>>>>>> Vladimir > >>>>>>>>>> > >>>>>>>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: > >>>>>>>>>>> Hi, > >>>>>>>>>>> > >>>>>>>>>>> please review the following patch for JDK-8029443. > >>>>>>>>>>> > >>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 > >>>>>>>>>>> Webrev: > >>>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ > >>>>>>>>>>> > >>>>>>>>>>> *Problem* > >>>>>>>>>>> After the tracing/marking phase of GC, > >>>>>>>>>>> nmethod::do_unloading(..) > >>>>>>>>>>> checks > >>>>>>>>>>> if a nmethod can be unloaded because it contains dead oops. 
If > >>>>>>>>>>> class > >>>>>>>>>>> unloading occurred we additionally clear all ICs where the > >>>>>>>>>>> cached > >>>>>>>>>>> metadata refers to an unloaded klass or method. If the nmethod > >>>>>>>>>>> is not > >>>>>>>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally > >>>>>>>>>>> checks if > >>>>>>>>>>> all > >>>>>>>>>>> metadata is alive. The assert in CheckClass::check_class fails > >>>>>>>>>>> because > >>>>>>>>>>> the nmethod contains Method* metadata corresponding to a dead > >>>>>>>>>>> Klass. > >>>>>>>>>>> The Method* belongs to a to-interpreter stub [1] of an > >>>>>>>>>>> optimized > >>>>>>>>>>> compiled IC. Normally we clear those stubs prior to > >>>>>>>>>>> verification to > >>>>>>>>>>> avoid dangling references to Method* [2], but only if the stub > >>>>>>>>>>> is not in > >>>>>>>>>>> use, i.e. if the IC is not in to-interpreted mode. In this > >>>>>>>>>>> case the > >>>>>>>>>>> to-interpreter stub may be executed and hand a stale Method* > >>>>>>>>>>> to the > >>>>>>>>>>> interpreter. > >>>>>>>>>>> > >>>>>>>>>>> *Solution > >>>>>>>>>>> *The implementation of nmethod::do_unloading(..) is changed to > >>>>>>>>>>> clean > >>>>>>>>>>> compiled ICs and compiled static calls if they call into a > >>>>>>>>>>> to-interpreter stub that references dead Method* metadata. > >>>>>>>>>>> > >>>>>>>>>>> The patch was affected by the G1 class unloading changes > >>>>>>>>>>> (JDK-8048248) > >>>>>>>>>>> because the method nmethod::do_unloading_parallel(..) was > >>>>>>>>>>> added. I > >>>>>>>>>>> adapted the implementation as well. > >>>>>>>>>>> * > >>>>>>>>>>> Testing > >>>>>>>>>>> *Failing test (runThese) > >>>>>>>>>>> JPRT > >>>>>>>>>>> > >>>>>>>>>>> Thanks, > >>>>>>>>>>> Tobias > >>>>>>>>>>> > >>>>>>>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
> >>>>>>>>>>> [2] see nmethod::verify_metadata_loaders(..), > >>>>>>>>>>> static_stub_reloc()->clear_inline_cache() clears the stub From goetz.lindenmaier at sap.com Tue Jul 22 14:33:56 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 22 Jul 2014 14:33:56 +0000 Subject: AArch64 in JDK 9? In-Reply-To: <53CE3803.5050803@redhat.com> References: <534C2283.2010108@redhat.com> <53523109.7040002@redhat.com> <5352CD23.5050700@oracle.com> <5352CEC4.7080500@oracle.com> <53CE3803.5050803@redhat.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDBBB4@DEWDFEMB12A.global.corp.sap> Hi Andrew, I had a look at your change. It's really much less than what we adapted! All the effort with the staging directory would be overkill I guess. I tried to patch the change to hs-rt. Unfortunately, my change cleaning up the shared includes of cpu files was just pushed, so your change does not apply cleanly any more. But in the end this should even more reduce the size of your patch. I would add the includes in alphabetic sorting, i.e., before TARGET_ARCH_arm. Same for lists of defines: c1_LIR.hpp: +#if defined(SPARC) || defined(ARM) || defined(PPC) || defined(AARCH64) I would put AARCH64 before ARM. We also put AIX before BSD -- PPC was there already. Saying that, why don't you reuse the includes etc. for ARM? In the end, only the strings in os_linux must differ, and they could depend on ARM64 as we do it for PPC. In c1_LIR.c|hpp, I would use AARCH64 in defines instead of TARGET_ARCH_aarch64. Best regards, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Andrew Haley Sent: Dienstag, 22. Juli 2014 12:08 To: Vladimir Kozlov; Volker Simonis Cc: hotspot-dev Source Developers Subject: Re: AArch64 in JDK 9? I have prepared a diff that is the patch needed in the shared parts of HotSpot for OpenJDK 9. 
It is at http://cr.openjdk.java.net/~aph/aarch64.1 If it would help to have a webrev I will do that, but it wasn't clear how it should be done. I believe that there are no non-trivial changes to shared code which are not guarded by TARGET_ARCH_aarch64, therefore it is unlikekly that any other target will be affected. There are a couple of trivial white space changes which I intend to remove; please ignore these. I hope that this will help you estimate the work required on Oracle's side to integrate this port. Thanks, Andrew. From daniel.daugherty at oracle.com Tue Jul 22 16:45:58 2014 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Tue, 22 Jul 2014 10:45:58 -0600 Subject: waiting room and monitor lock aq. In-Reply-To: <53CD913A.2020007@oracle.com> References: <53CD913A.2020007@oracle.com> Message-ID: <53CE9546.2040506@oracle.com> On 7/21/14 4:16 PM, David Holmes wrote: > On 21/07/2014 10:38 PM, Winnie JayClay wrote: >> Hi, is there any order between threads which wake up by notifyAll and >> those >> trying to acquire obj monitor (blocked after synchronized >> invocation)? Is >> it mentioned in JLS? > > Ordering is completely unspecified. An implementation is free to do > what it likes to optimize performance (using whatever metric it chooses). > > So for example a thread that is woken up by a notify/notifyAll and is > placed into the monitor acquisition queue need not be given preference. > > In general in hotspot the queues are simply FIFO, but the monitor > implementation also allows barging and monitor release does not > perform a hand-off. > > David > >> Thanks. >> Filling in some details... This link: http://docs.oracle.com/javase/specs/index.html gets you to various versions of the Java Language spec. This section is probably the one that you want: http://docs.oracle.com/javase/specs/jls/se8/html/jls-17.html#jls-17.2 17.2. Wait Sets and Notification Also see Josh Bloch's "Effective Java" book. 
In particular checkout the chapter on "Threads" and the item on "Never invoke wait outside a loop"... Dan From vladimir.kozlov at oracle.com Tue Jul 22 18:04:15 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 22 Jul 2014 11:04:15 -0700 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53CE2D53.6040006@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> <53C639A2.3050202@oracle.com> <53C90740.40602@oracle.com> <53C937E0.7060304@oracle.com> <53C9614C.8080109@oracle.com> <53C962F3.3070405@oracle.com> <53CCD307.7040806@oracle.com> <53CD62F4.1020904@oracle.com> <53CE2D53.6040006@oracle.com> Message-ID: <53CEA79F.6030004@oracle.com> I agree with Mikael, the case without compressed oops is incorrect. The problem is immP_load() and immP_no_oop_cheap() operands miss the check for metadata pointer, they only check for oop. For class loading loadConP_load() should be used instead of loadConP_no_oop_cheap() and it should check relocation type and do what loadConP_set() does. X64 seems fine. X86_64.ad use $$$emit32$src$$constant; in such case (load_immP31) which is expanded to the same code as load_immP: if ( opnd_array(1)->constant_reloc() != relocInfo::none ) { emit_d32_reloc(cbuf, opnd_array(1)->constant(), opnd_array(1)->constant_reloc(), 0); Thanks, Vladimir On 7/22/14 2:22 AM, Tobias Hartmann wrote: > On 21.07.2014 20:59, Vladimir Kozlov wrote: >> On 7/21/14 1:44 AM, Tobias Hartmann wrote: >>> Vladimir, Coleen, thanks for the reviews! 
>>> >>> On 18.07.2014 20:09, Vladimir Kozlov wrote: >>>> On 7/18/14 11:02 AM, Coleen Phillimore wrote: >>>>> >>>>> On 7/18/14, 11:06 AM, Vladimir Kozlov wrote: >>>>>> On 7/18/14 4:38 AM, Tobias Hartmann wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I spend some more days and was finally able to implement a test that >>>>>>> deterministically triggers the bug: >>>>>> >>>>>> Why do you need to switch off compressed oops? Do you need to switch >>>>>> off compressed klass pointers too (UseCompressedClassPointers)? >>>>> >>>>> CompressedOops when off turns off CompressedClassPointers. >>>> >>>> You are right, I forgot that. Still the question is why switch off >>>> coop? >>> >>> I'm only able to reproduce the bug without compressed oops. The original >>> bug also only reproduces with -XX:-UseCompressedOops. I tried to figure >>> out why (on Sparc): >>> >>> With compressed oops enabled, Method* metadata referencing 'WorkerClass' >>> is added to 'doWork' in MacroAssembler::set_narrow_klass(..). In >>> CodeBuffer::finalize_oop_references(..) the metadata is processed and an >>> oop to the class loader 'URLClassLoader' is added. This oop leads to the >>> unloading of 'doWork', hence the verification code is never executed. >>> >>> I'm not sure what set_narrow_klass(..) is used for in this case. I >>> assume it stores a 'WorkerClass' Klass* in a register as part of an >>> optimization? Because 'doWork' potentially works on any class. >>> Apparently this optimization is not performed without compressed oops. >> >> I would suggest to compare 'doWork' assembler >> (-XX:CompileCommand=print,TestMethodUnloading::doWork) with coop and >> without it. Usually loaded into register class is used for klass >> compare do guard inlining code. Or to initialize new object. >> >> I don't see loading (constructing) uncompressed (whole) klass pointer >> from constant in sparc.ad. It could be the reason for different >> behavior. It could be loaded from constants section. 
But constants >> section should have metadata relocation info in such case. > > I did as you suggested and found the following: > > During the profiling phase the class given to 'doWork' always is > 'WorkerClass'. The C2 compiler therefore optimizes the compiled version > to expect a 'WorkerClass'. The branch that instantiates a new object is > guarded by an uncommon trap (class_check). The difference between the > two versions (with and without compressed oops) is the loading of the > 'WorkerClass' Klass to check if the given class is equal: > > With compressed oops: > SET narrowklass: precise klass WorkerClass: > 0x00000001004a0d40:Constant:exact *,R_L1 ! compressed klass ptr > CWBne R_L2,R_L1,B8 ! compressed ptr P=0.000001 C=-1.000000 > > Without: > SET precise klass WorkerClass: 0x00000001004aeab0:Constant:exact > *,R_L1 ! non-oop ptr > CXBpne R_L2,R_L1,B8 ! ptr P=0.000001 C=-1.000000 > > R_L2: class given as parameter > B8: location of uncommon trap > > In the first case, the Klass is loaded by a 'loadConNKlass' instruction > that calls MacroAssembler::set_narrow_klass(..) which then creates a > metadata_Relocation for the 'WorkerClass'. This metada_Relocation is > processed by CodeBuffer::finalize_oop_references(..) and an oop to > 'WorkerClass' is added. This oop causes the unloading of the method. > > In the second case, the Klass is loaded by a 'loadConP_no_oop_cheap' > instruction that does not create a metadata_Relocation. > > I don't understand why the metadata_Relocation in the first case is > needed? As the test shows it is better to only unload the method if we > hit the uncommon trap because we could still use other (potentially > complex) branches of the method. > > Thanks, > Tobias > >> >> thanks, >> Vladimir >> >>> >>> Best, >>> Tobias >>> >>>> >>>> Vladimir >>>> >>>>>> >>>>>>> >>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ >>>>>> >>>>>> Very nice! >>>>> >>>>> Yes, I agree. Impressive. 
>>>>> >>>>> The refactoring in nmethod.cpp looks good to me. I have no further >>>>> comments. >>>>> Thanks! >>>>> Coleen >>>>> >>>>>> >>>>>>> >>>>>>> @Vladimir: The test shows why we should only clean the ICs but not >>>>>>> unload the nmethod if possible. The method ' doWork' >>>>>>> is still valid after WorkerClass was unloaded and depending on the >>>>>>> complexity of the method we should avoid unloading it. >>>>>> >>>>>> Make sense. >>>>>> >>>>>>> >>>>>>> On Sparc my patch fixes the bug and leads to the nmethod not being >>>>>>> unloaded. The compiled version is therefore used even >>>>>>> after WorkerClass is unloaded. >>>>>>> >>>>>>> On x86 the nmethod is unloaded anyway because of a dead oop. This is >>>>>>> probably due to a slightly different implementation >>>>>>> of the ICs. I'll have a closer look to see if we can improve that. >>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Tobias >>>>>>> >>>>>>> On 16.07.2014 10:36, Tobias Hartmann wrote: >>>>>>>> Sorry, forgot to answer this question: >>>>>>>>> Were you able to create a small test case for it that would be >>>>>>>>> useful to add? >>>>>>>> Unfortunately I was not able to create a test. The bug only >>>>>>>> reproduces on a particular system with a > 30 minute run >>>>>>>> of runThese. >>>>>>>> >>>>>>>> Best, >>>>>>>> Tobias >>>>>>>> >>>>>>>> On 16.07.2014 09:54, Tobias Hartmann wrote: >>>>>>>>> Hi Coleen, >>>>>>>>> >>>>>>>>> thanks for the review. >>>>>>>>>> *+ if (csc->is_call_to_interpreted() && >>>>>>>>>> stub_contains_dead_metadata(is_alive, csc->destination())) {* >>>>>>>>>> *+ csc->set_to_clean();* >>>>>>>>>> *+ }* >>>>>>>>>> >>>>>>>>>> This appears in each case. Can you fold it and the new function >>>>>>>>>> into a function like >>>>>>>>>> clean_call_to_interpreted_stub(is_alive, csc)? >>>>>>>>> >>>>>>>>> I folded it into the function clean_call_to_interpreter_stub(..). 
>>>>>>>>> >>>>>>>>> New webrev: >>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Tobias >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Coleen >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> So before the permgen removal embedded method* were oops and >>>>>>>>>>> they >>>>>>>>>>> were processed in relocInfo::oop_type loop. >>>>>>>>>>> >>>>>>>>>>> May be instead of specializing opt_virtual_call_type and >>>>>>>>>>> static_call_type call site you can simple add a loop for >>>>>>>>>>> relocInfo::metadata_type (similar to oop_type loop)? >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Vladimir >>>>>>>>>>> >>>>>>>>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> please review the following patch for JDK-8029443. >>>>>>>>>>>> >>>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>>>>>>>>> Webrev: >>>>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>>>>>>>>> >>>>>>>>>>>> *Problem* >>>>>>>>>>>> After the tracing/marking phase of GC, >>>>>>>>>>>> nmethod::do_unloading(..) >>>>>>>>>>>> checks >>>>>>>>>>>> if a nmethod can be unloaded because it contains dead oops. If >>>>>>>>>>>> class >>>>>>>>>>>> unloading occurred we additionally clear all ICs where the >>>>>>>>>>>> cached >>>>>>>>>>>> metadata refers to an unloaded klass or method. If the nmethod >>>>>>>>>>>> is not >>>>>>>>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally >>>>>>>>>>>> checks if >>>>>>>>>>>> all >>>>>>>>>>>> metadata is alive. The assert in CheckClass::check_class fails >>>>>>>>>>>> because >>>>>>>>>>>> the nmethod contains Method* metadata corresponding to a dead >>>>>>>>>>>> Klass. >>>>>>>>>>>> The Method* belongs to a to-interpreter stub [1] of an >>>>>>>>>>>> optimized >>>>>>>>>>>> compiled IC. 
Normally we clear those stubs prior to >>>>>>>>>>>> verification to >>>>>>>>>>>> avoid dangling references to Method* [2], but only if the stub >>>>>>>>>>>> is not in >>>>>>>>>>>> use, i.e. if the IC is not in to-interpreted mode. In this >>>>>>>>>>>> case the >>>>>>>>>>>> to-interpreter stub may be executed and hand a stale Method* >>>>>>>>>>>> to the >>>>>>>>>>>> interpreter. >>>>>>>>>>>> >>>>>>>>>>>> *Solution >>>>>>>>>>>> *The implementation of nmethod::do_unloading(..) is changed to >>>>>>>>>>>> clean >>>>>>>>>>>> compiled ICs and compiled static calls if they call into a >>>>>>>>>>>> to-interpreter stub that references dead Method* metadata. >>>>>>>>>>>> >>>>>>>>>>>> The patch was affected by the G1 class unloading changes >>>>>>>>>>>> (JDK-8048248) >>>>>>>>>>>> because the method nmethod::do_unloading_parallel(..) was >>>>>>>>>>>> added. I >>>>>>>>>>>> adapted the implementation as well. >>>>>>>>>>>> * >>>>>>>>>>>> Testing >>>>>>>>>>>> *Failing test (runThese) >>>>>>>>>>>> JPRT >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> Tobias >>>>>>>>>>>> >>>>>>>>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
>>>>>>>>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>>>>>>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>> >>> > From vladimir.kozlov at oracle.com Tue Jul 22 19:32:46 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 22 Jul 2014 12:32:46 -0700 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDBAE4@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <53C7F136.3000709@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> <53C933A9.7060705@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDBAE4@DEWDFEMB12A.global.corp.sap> Message-ID: <53CEBC5E.7030106@oracle.com> I am pushing it into hs-comp. Unfortunately it 8u20 is closed, only showstoppers are allowed. I will push it into 8u40 later. Thanks, Vladimir On 7/22/14 2:55 AM, Lindenmaier, Goetz wrote: > Hi, > > could somebody please sponsor this change? It also needs to go to jdk8u20. > > Thanks and best regards, > Goetz. > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Freitag, 18. Juli 2014 16:48 > To: Lindenmaier, Goetz; Doerr, Martin; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > Looks good. > > Thanks, > Vladimir > > On 7/18/14 2:08 AM, Lindenmaier, Goetz wrote: >> Hi Vladimir, >> >> We fixed the comment and camel case stuff. >> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >> >> We think it looks better if volatile is before the type. >> >> Best regards, >> Martin and Goetz. >> >> -----Original Message----- >> From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] >> Sent: Donnerstag, 17. 
Juli 2014 17:52 >> To: Doerr, Martin; Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >> >> First, comments needs to be fixed: >> >> "The array elements must be volatile" but in changeset comments: "// Array MUST be volatile!" >> >> Second, type name should be camel style (PcDescPtr). >> >> Someone have to double check this volatile declaration. Your example is more clear for me than typedef. >> >> Thanks, >> Vladimir >> >> On 7/17/14 8:19 AM, Doerr, Martin wrote: >>> Hi Vladimir, >>> >>> the following line should also work: >>> PcDesc* volatile _pc_descs[cache_size]; >>> But we thought that the typedef would improve readability. >>> The array elements must be volatile, not the PcDescs which are pointed to. >>> >>> Best regards, >>> Martin >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov >>> Sent: Donnerstag, 17. Juli 2014 17:09 >>> To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >>> >>> Hi Goetz, >>> >>> What is the reason for new typedef? >>> >>> Thanks, >>> Vladimir >>> >>> On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: >>>> Hi, >>>> >>>> This webrev fixes an important concurrency issue in nmethod. >>>> Please review and test this change. I please need a sponsor. >>>> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >>>> >>>> This should be fixed into 8u20, too. >>>> >>>> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. 
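
[Editorial note: the distinction the patch relies on, volatile array elements rather than a volatile array, also exists on the Java side, where declaring an array field volatile does not make its elements volatile. A minimal Java sketch using AtomicReferenceArray as the per-element-volatile analogue; the Entry/EntryCache names are illustrative, not from the HotSpot sources.]

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// Illustrative stand-in for HotSpot's PcDesc: an immutable cache entry.
final class Entry {
    final long pc;
    Entry(long pc) { this.pc = pc; }
}

final class EntryCache {
    // Caution: 'volatile' here applies to the array *reference* only;
    // reads and writes of plainCache[i] remain plain accesses.
    volatile Entry[] plainCache = new Entry[4];

    // AtomicReferenceArray makes each *element* access volatile, analogous
    // to the C++ declaration "PcDesc* volatile _pc_descs[cache_size]".
    final AtomicReferenceArray<Entry> cache = new AtomicReferenceArray<>(4);

    Entry find(long pc) {
        for (int i = 0; i < cache.length(); i++) {
            Entry e = cache.get(i);            // single volatile read of the element
            if (e != null && e.pc == pc) {
                return e;                      // return what was read, never a re-load
            }
        }
        return null;
    }

    void add(Entry e) {
        // Hash the pc into a slot; the set() is a volatile write of the element.
        cache.set((int) Math.floorMod(e.pc, (long) cache.length()), e);
    }
}
```

The property mirrored from the fix is in find(): the matched element is read exactly once into a local, and that same reference is returned, so a compiler cannot legally reload the slot after the comparison.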
>>>> Best regards, >>>> Martin and Goetz. >>>> From goetz.lindenmaier at sap.com Wed Jul 23 07:05:29 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Wed, 23 Jul 2014 07:05:29 +0000 Subject: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le References: <4295855A5C1DE049A61835A1887419CC2CEDAA66@DEWDFEMB12A.global.corp.sap> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDBD44@DEWDFEMB12A.global.corp.sap> Hi Sasha, we ran our nightly tests on big-endian with this change. They're all green. reviewed. Best regards, Goetz. -----Original Message----- From: Lindenmaier, Goetz Sent: Freitag, 18. Juli 2014 10:13 To: 'Alexander Smundak' Cc: HotSpot Open Source Developers Subject: RE: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le Hi Sasha, thanks, now it works. I just ran jvm98/javac. Comprehensive tests will be executed tonight. Best regards, Goetz. -----Original Message----- From: Alexander Smundak [mailto:asmundak at google.com] Sent: Freitag, 18. Juli 2014 02:58 To: Lindenmaier, Goetz Cc: HotSpot Open Source Developers Subject: Re: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le On Thu, Jul 17, 2014 at 3:20 AM, Lindenmaier, Goetz wrote: > I tested your change. Unfortunately it breaks our port. You need to fix Unsigned to > Signed: > > --- a/src/cpu/ppc/vm/templateTable_ppc_64.cpp Wed Jul 16 16:53:32 2014 -0700 > +++ b/src/cpu/ppc/vm/templateTable_ppc_64.cpp Thu Jul 17 12:14:18 2014 +0200 > @@ -1929,7 +1929,7 @@ > // default case > __ bind(Ldefault_case); > > - __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Unsigned); > + __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Signed); > if (ProfileInterpreter) { > __ profile_switch_default(Rdef_offset_addr, Rcount/* scratch */); > __ b(Lcontinue_execution); Oops. Fixed. Which test was broken by this, BTW? > If you want to, you can move loading the bci in this bytecode behind the loop. Done. 
> Could you please fix indentation of relocInfo::none in call_c? Should > be aligned to call_c. Done. The revised patch is at http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.01/ please take another look. Sasha From goetz.lindenmaier at sap.com Wed Jul 23 09:14:26 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Wed, 23 Jul 2014 09:14:26 +0000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <53CEBC5E.7030106@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <53C7F136.3000709@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> <53C933A9.7060705@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDBAE4@DEWDFEMB12A.global.corp.sap> <53CEBC5E.7030106@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDBDDD@DEWDFEMB12A.global.corp.sap> Hi Vladimir, Thanks for pushing the change! I would consider this a showstopper. Jvm2008/sunflow does a wrong resolve and then throws an incompatible class change error on AIX if the VM is built with xlc12. This happens on about every second try running this benchmark. Therefore we would appreciate if the fix could go to 8u20. Best regards, Goetz. -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Dienstag, 22. Juli 2014 21:33 To: Lindenmaier, Goetz; Doerr, Martin; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache I am pushing it into hs-comp. Unfortunately it 8u20 is closed, only showstoppers are allowed. I will push it into 8u40 later. Thanks, Vladimir On 7/22/14 2:55 AM, Lindenmaier, Goetz wrote: > Hi, > > could somebody please sponsor this change? It also needs to go to jdk8u20. > > Thanks and best regards, > Goetz. 
> > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Freitag, 18. Juli 2014 16:48 > To: Lindenmaier, Goetz; Doerr, Martin; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > Looks good. > > Thanks, > Vladimir > > On 7/18/14 2:08 AM, Lindenmaier, Goetz wrote: >> Hi Vladimir, >> >> We fixed the comment and camel case stuff. >> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >> >> We think it looks better if volatile is before the type. >> >> Best regards, >> Martin and Goetz. >> >> -----Original Message----- >> From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] >> Sent: Donnerstag, 17. Juli 2014 17:52 >> To: Doerr, Martin; Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >> >> First, comments needs to be fixed: >> >> "The array elements must be volatile" but in changeset comments: "// Array MUST be volatile!" >> >> Second, type name should be camel style (PcDescPtr). >> >> Someone have to double check this volatile declaration. Your example is more clear for me than typedef. >> >> Thanks, >> Vladimir >> >> On 7/17/14 8:19 AM, Doerr, Martin wrote: >>> Hi Vladimir, >>> >>> the following line should also work: >>> PcDesc* volatile _pc_descs[cache_size]; >>> But we thought that the typedef would improve readability. >>> The array elements must be volatile, not the PcDescs which are pointed to. >>> >>> Best regards, >>> Martin >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov >>> Sent: Donnerstag, 17. Juli 2014 17:09 >>> To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >>> >>> Hi Goetz, >>> >>> What is the reason for new typedef? 
>>> >>> Thanks, >>> Vladimir >>> >>> On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: >>>> Hi, >>>> >>>> This webrev fixes an important concurrency issue in nmethod. >>>> Please review and test this change. I please need a sponsor. >>>> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >>>> >>>> This should be fixed into 8u20, too. >>>> >>>> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. >>>> Best regards, >>>> Martin and Goetz. >>>> From winniejayclay at gmail.com Wed Jul 23 13:05:38 2014 From: winniejayclay at gmail.com (Winnie JayClay) Date: Wed, 23 Jul 2014 21:05:38 +0800 Subject: volatile and caches question Message-ID: Hi, Say, if I have a class with a volatile variable, but only one thread operates on that variable (reads and updates it), will memory always be flushed on x86 when the same thread reads and writes? What is the overhead and do you have any optimizations in the JDK? Also, if I have multiple threads which operate on a single volatile variable (one writer and many readers, and the writer doesn't write much), will caches be flushed every time readers access the volatile variable, even when the writer didn't write anything?
Thanks for help, we work on HFT project in java, and performance is super-critical for us. Thanks, Winnie From vitalyd at gmail.com Wed Jul 23 13:27:02 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Wed, 23 Jul 2014 09:27:02 -0400 Subject: volatile and caches question In-Reply-To: References: Message-ID: Winnie, Questions like this are probably best sent to concurrency-interest mailing list (concurrency-interest at cs.oswego.edu). Appropriate fences will be inserted even if single thread is reading/writing a volatile. Assuming no false sharing of the cacheline containing the volatile variable, no further memory transactions should occur in this case (I.e. the core will have owned the line in exclusive mode anyway and no other cores will be snooping it since nobody else is reading it). With multiple readers single writer, readers will read from memory each time. Assuming this cacheline stays in the cache, the read will be serviced from one of the cache hierarchy levels. When a write occurs, the reading cores will have the line invalidated and then snoop in the new one. Generally, single writer multiple readers scales pretty well. This is a high-level answer. If you want to dive into more details, I suggest you drop this alias and use the concurrency interest one instead. HTH, Vitaly Sent from my phone On Jul 23, 2014 9:06 AM, "Winnie JayClay" wrote: > Hi, > > Say, if I have class with volatile variable, but only one thread operates > on that variable (read and update it), will be memory flushed always on x86 > when the same thread read and write? What is the overhead and do you have > any optimizations in JDK? Also if I have multiply threads which operate on > single volatile variable: one writer and many readers and writer doesn't > write too much, will be caches flushed every time readers access volatile > varible and when write didn't write anything? 
I also thought to use normal > non-final non-volatile variable and for writer thread create and invoke > synchronized block somwhere after it will update varible to establish > happens-before, i.e. just a synchronized block to flush caches for reader > threads to pick up the latest value - by the way, is my understanding > correct that if writer somewhere else invokes synchronized block which is > not available for readers, will the readers get the latest value? > > Thanks for help, we work on HFT project in java, and performance is > super-critical for us. > > > Thanks, > Winnie > From aleksey.shipilev at oracle.com Wed Jul 23 13:31:27 2014 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 23 Jul 2014 17:31:27 +0400 Subject: volatile and caches question In-Reply-To: References: Message-ID: <53CFB92F.9060008@oracle.com> Hi, This is a good question for concurrency-interest, not hotspot-dev: http://altair.cs.oswego.edu/mailman/listinfo/concurrency-interest Most of your questions seem to be answered by the JSR 133 Cookbook, which describes the conservative approach to conform to Java Memory Model: http://gee.cs.oswego.edu/dl/jmm/cookbook.html On 07/23/2014 05:05 PM, Winnie JayClay wrote: > Say, if I have class with volatile variable, but only one thread operates > on that variable (read and update it), will be memory flushed always on x86 > when the same thread read and write? What is the overhead and do you have > any optimizations in JDK? "Memory flush" has no place since 2005. VMs are free to optimize volatile accesses, as long as those optimizations fit the Java Memory Model. For example, the multiple volatile accesses in constructor can be optimized since we know whether the variable is not exposed to other threads. For the variables already on heap, it is generally unknown whether we can optimize the accesses without breaking the JMM (there are cases where we can do simple optos, see Cookbook). 
> Also if I have multiply threads which operate on > single volatile variable: one writer and many readers and writer doesn't > write too much, will be caches flushed every time readers access volatile > varible and when write didn't write anything? I'm not following what "caches" need "flushing" in this case. Cache coherency already takes care about propagating the values. Low-level-hardware-speaking, memory semantics around "volatiles" is about exposing the data to cache coherency in proper order. On x86, once writer had committed the volatile write, CPU store buffers have drained into memory subsystem, regular coherency takes care that readers are reading the consistent value. In this parlance, reading the volatile variable is almost no different from reading a plain one (not exactly, since compiler optimizations which may break the provisions of JMM are also inhibited). > I also thought to use normal > non-final non-volatile variable and for writer thread create and invoke > synchronized block somwhere after it will update varible to establish > happens-before, i.e. just a synchronized block to flush caches for reader > threads to pick up the latest value - by the way, is my understanding > correct that if writer somewhere else invokes synchronized block which is > not available for readers, will the readers get the latest value? A lone synchronized{} block used in writer only is not enough to establish happens-before with reader which does not use the synchronized{} on the same object. Moreover, the synchronized{} block on non-escaped object may be eliminated altogether, since its memory effects are not required to be visible by any other code. > Thanks for help, we work on HFT project in java, and performance is > super-critical for us. 
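
A minimal sketch of the safe alternative: publish through a volatile field, whose write/read pair creates the happens-before edge that the writer-only synchronized block lacks (class and field names are illustrative):

```java
final class Publisher {
    private int payload;                 // plain, non-volatile field
    private volatile boolean ready;      // volatile flag used for publication

    void write(int v) {
        payload = v;      // (1) plain write
        ready = true;     // (2) volatile write: makes (1) visible to readers of 'ready'
    }

    // Returns the published value, or null if the volatile flag was not yet observed.
    Integer tryRead() {
        if (!ready) {     // (3) volatile read
            return null;
        }
        return payload;   // (4) guaranteed to see the value written in (1)
    }
}
```

Replacing 'volatile boolean ready' with a plain boolean would make step (4) a data race: a reader could observe ready == true and still see a stale payload, which is exactly the trap behind the writer-only synchronized idea.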
Judging from the questions alone (yes, I read RSDN occasionally), I would personally recommend to learn about what JMM is guaranteeing by itself (Java Language Spec), then learn how JMM is conservatively implemented (JSR133 Cookbook, etc.), then learn more about hardware guarantees (Intel/AMD SDMs), and then clearly keep in mind the difference between all three. If you have questions about these, concurrency-interest@ has a good supply of folks who are ready to discuss this. Thanks, -Aleksey. From winniejayclay at gmail.com Wed Jul 23 14:13:12 2014 From: winniejayclay at gmail.com (Winnie JayClay) Date: Wed, 23 Jul 2014 22:13:12 +0800 Subject: volatile and caches question In-Reply-To: <53CFB92F.9060008@oracle.com> References: <53CFB92F.9060008@oracle.com> Message-ID: Hi Aleksey, Thanks, but I have hotspot and x86 question, not that much about specification. What is the real implemented hotspot x86 behavior for these two scenarios. Also, personally, have no idea why you mentioned RSDN. On Wednesday, July 23, 2014, Aleksey Shipilev wrote: > Hi, > > This is a good question for concurrency-interest, not hotspot-dev: > http://altair.cs.oswego.edu/mailman/listinfo/concurrency-interest > > Most of your questions seem to be answered by the JSR 133 Cookbook, > which describes the conservative approach to conform to Java Memory > Model: http://gee.cs.oswego.edu/dl/jmm/cookbook.html > > On 07/23/2014 05:05 PM, Winnie JayClay wrote: > > Say, if I have class with volatile variable, but only one thread operates > > on that variable (read and update it), will be memory flushed always on > x86 > > when the same thread read and write? What is the overhead and do you have > > any optimizations in JDK? > > "Memory flush" has no place since 2005. > > VMs are free to optimize volatile accesses, as long as those > optimizations fit the Java Memory Model. 
For example, the multiple > volatile accesses in constructor can be optimized since we know whether > the variable is not exposed to other threads. For the variables already > on heap, it is generally unknown whether we can optimize the accesses > without breaking the JMM (there are cases where we can do simple optos, > see Cookbook). > > > Also if I have multiply threads which operate on > > single volatile variable: one writer and many readers and writer doesn't > > write too much, will be caches flushed every time readers access volatile > > varible and when write didn't write anything? > > I'm not following what "caches" need "flushing" in this case. Cache > coherency already takes care about propagating the values. > Low-level-hardware-speaking, memory semantics around "volatiles" is > about exposing the data to cache coherency in proper order. > > On x86, once writer had committed the volatile write, CPU store buffers > have drained into memory subsystem, regular coherency takes care that > readers are reading the consistent value. In this parlance, reading the > volatile variable is almost no different from reading a plain one (not > exactly, since compiler optimizations which may break the provisions of > JMM are also inhibited). > > > I also thought to use normal > > non-final non-volatile variable and for writer thread create and invoke > > synchronized block somwhere after it will update varible to establish > > happens-before, i.e. just a synchronized block to flush caches for reader > > threads to pick up the latest value - by the way, is my understanding > > correct that if writer somewhere else invokes synchronized block which is > > not available for readers, will the readers get the latest value? > > A lone synchronized{} block used in writer only is not enough to > establish happens-before with reader which does not use the > synchronized{} on the same object. 
Moreover, the synchronized{} block on > non-escaped object may be eliminated altogether, since its memory > effects are not required to be visible by any other code. > > > Thanks for help, we work on HFT project in java, and performance is > > super-critical for us. > > Judging from the questions alone (yes, I read RSDN occasionally), I > would personally recommend to learn about what JMM is guaranteeing by > itself (Java Language Spec), then learn how JMM is conservatively > implemented (JSR133 Cookbook, etc.), then learn more about hardware > guarantees (Intel/AMD SDMs), and then clearly keep in mind the > difference between all three. If you have questions about these, > concurrency-interest@ has a good supply of folks who are ready to > discuss this. > > Thanks, > -Aleksey. > From vladimir.kozlov at oracle.com Wed Jul 23 14:28:56 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 23 Jul 2014 07:28:56 -0700 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDBDDD@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <53C7F136.3000709@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> <53C933A9.7060705@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDBAE4@DEWDFEMB12A.global.corp.sap> <53CEBC5E.7030106@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDBDDD@DEWDFEMB12A.global.corp.sap> Message-ID: <53CFC6A8.1070803@oracle.com> But the bug is P4! Change it to P2 (or P1). Add 8u20 to affected versions. Add 8u20-critical-request label and add comments with 8u20-critical-request justification: explaining why it is showstopper. See 8051378 as example. Vladimir On 7/23/14 2:14 AM, Lindenmaier, Goetz wrote: > Hi Vladimir, > > Thanks for pushing the change! > > I would consider this a showstopper. 
> Jvm2008/sunflow does a wrong resolve and then throws an incompatible > class change error on AIX if the VM is built with xlc12. > This happens on about every second try running this benchmark. > > Therefore we would appreciate if the fix could go to 8u20. > > Best regards, > Goetz. > > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Dienstag, 22. Juli 2014 21:33 > To: Lindenmaier, Goetz; Doerr, Martin; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > I am pushing it into hs-comp. > > Unfortunately it 8u20 is closed, only showstoppers are allowed. I will > push it into 8u40 later. > > Thanks, > Vladimir > > On 7/22/14 2:55 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> could somebody please sponsor this change? It also needs to go to jdk8u20. >> >> Thanks and best regards, >> Goetz. >> >> -----Original Message----- >> From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] >> Sent: Freitag, 18. Juli 2014 16:48 >> To: Lindenmaier, Goetz; Doerr, Martin; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >> >> Looks good. >> >> Thanks, >> Vladimir >> >> On 7/18/14 2:08 AM, Lindenmaier, Goetz wrote: >>> Hi Vladimir, >>> >>> We fixed the comment and camel case stuff. >>> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >>> >>> We think it looks better if volatile is before the type. >>> >>> Best regards, >>> Martin and Goetz. >>> >>> -----Original Message----- >>> From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] >>> Sent: Donnerstag, 17. Juli 2014 17:52 >>> To: Doerr, Martin; Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >>> >>> First, comments needs to be fixed: >>> >>> "The array elements must be volatile" but in changeset comments: "// Array MUST be volatile!" 
>>> >>> Second, type name should be camel style (PcDescPtr). >>> >>> Someone have to double check this volatile declaration. Your example is more clear for me than typedef. >>> >>> Thanks, >>> Vladimir >>> >>> On 7/17/14 8:19 AM, Doerr, Martin wrote: >>>> Hi Vladimir, >>>> >>>> the following line should also work: >>>> PcDesc* volatile _pc_descs[cache_size]; >>>> But we thought that the typedef would improve readability. >>>> The array elements must be volatile, not the PcDescs which are pointed to. >>>> >>>> Best regards, >>>> Martin >>>> >>>> -----Original Message----- >>>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov >>>> Sent: Donnerstag, 17. Juli 2014 17:09 >>>> To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >>>> >>>> Hi Goetz, >>>> >>>> What is the reason for new typedef? >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: >>>>> Hi, >>>>> >>>>> This webrev fixes an important concurrency issue in nmethod. >>>>> Please review and test this change. I please need a sponsor. >>>>> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >>>>> >>>>> This should be fixed into 8u20, too. >>>>> >>>>> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. >>>>> Best regards, >>>>> Martin and Goetz. 
>>>>> From goetz.lindenmaier at sap.com Wed Jul 23 14:45:14 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Wed, 23 Jul 2014 14:45:14 +0000 Subject: RFR(S): 8050972: Concurrency problem in PcDesc cache In-Reply-To: <53CFC6A8.1070803@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CEDA9E2@DEWDFEMB12A.global.corp.sap> <53C7E708.8060208@oracle.com> <7C9B87B351A4BA4AA9EC95BB418116566ACB7AD0@DEWDFEMB19C.global.corp.sap> <53C7F136.3000709@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD78@DEWDFEMB12A.global.corp.sap> <53C933A9.7060705@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDBAE4@DEWDFEMB12A.global.corp.sap> <53CEBC5E.7030106@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDBDDD@DEWDFEMB12A.global.corp.sap> <53CFC6A8.1070803@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEDC5B6@DEWDFEMB12A.global.corp.sap> Hi Vladimir, I fixed that in the bug, as well as for 8050978: Fix bad field access check in C1 and C2 Sorry for not doing that right away. Best regards, Goetz. -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Mittwoch, 23. Juli 2014 16:29 To: Lindenmaier, Goetz; Doerr, Martin; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache But the bug is P4! Change it to P2 (or P1). Add 8u20 to affected versions. Add 8u20-critical-request label and add comments with 8u20-critical-request justification: explaining why it is showstopper. See 8051378 as example. Vladimir On 7/23/14 2:14 AM, Lindenmaier, Goetz wrote: > Hi Vladimir, > > Thanks for pushing the change! > > I would consider this a showstopper. > Jvm2008/sunflow does a wrong resolve and then throws an incompatible > class change error on AIX if the VM is built with xlc12. > This happens on about every second try running this benchmark. > > Therefore we would appreciate if the fix could go to 8u20. > > Best regards, > Goetz. 
> > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Dienstag, 22. Juli 2014 21:33 > To: Lindenmaier, Goetz; Doerr, Martin; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache > > I am pushing it into hs-comp. > > Unfortunately it 8u20 is closed, only showstoppers are allowed. I will > push it into 8u40 later. > > Thanks, > Vladimir > > On 7/22/14 2:55 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> could somebody please sponsor this change? It also needs to go to jdk8u20. >> >> Thanks and best regards, >> Goetz. >> >> -----Original Message----- >> From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] >> Sent: Freitag, 18. Juli 2014 16:48 >> To: Lindenmaier, Goetz; Doerr, Martin; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >> >> Looks good. >> >> Thanks, >> Vladimir >> >> On 7/18/14 2:08 AM, Lindenmaier, Goetz wrote: >>> Hi Vladimir, >>> >>> We fixed the comment and camel case stuff. >>> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >>> >>> We think it looks better if volatile is before the type. >>> >>> Best regards, >>> Martin and Goetz. >>> >>> -----Original Message----- >>> From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] >>> Sent: Donnerstag, 17. Juli 2014 17:52 >>> To: Doerr, Martin; Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >>> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >>> >>> First, comments needs to be fixed: >>> >>> "The array elements must be volatile" but in changeset comments: "// Array MUST be volatile!" >>> >>> Second, type name should be camel style (PcDescPtr). >>> >>> Someone have to double check this volatile declaration. Your example is more clear for me than typedef. 
>>> >>> Thanks, >>> Vladimir >>> >>> On 7/17/14 8:19 AM, Doerr, Martin wrote: >>>> Hi Vladimir, >>>> >>>> the following line should also work: >>>> PcDesc* volatile _pc_descs[cache_size]; >>>> But we thought that the typedef would improve readability. >>>> The array elements must be volatile, not the PcDescs which are pointed to. >>>> >>>> Best regards, >>>> Martin >>>> >>>> -----Original Message----- >>>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov >>>> Sent: Donnerstag, 17. Juli 2014 17:09 >>>> To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR(S): 8050972: Concurrency problem in PcDesc cache >>>> >>>> Hi Goetz, >>>> >>>> What is the reason for new typedef? >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 7/17/14 1:54 AM, Lindenmaier, Goetz wrote: >>>>> Hi, >>>>> >>>>> This webrev fixes an important concurrency issue in nmethod. >>>>> Please review and test this change. I please need a sponsor. >>>>> http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ >>>>> >>>>> This should be fixed into 8u20, too. >>>>> >>>>> The entries of the PcDesc cache in nmethods are not declared as volatile, but they are accessed and modified by several threads concurrently. Some compilers (namely xlC 12 on AIX) duplicate some memory accesses to non-volatile fields. In this case, this has led to the situation that a thread had successfully matched a pc in the cache, but returned the reloaded value which was already overwritten by another thread. >>>>> Best regards, >>>>> Martin and Goetz. 
>>>>> From aleksey.shipilev at oracle.com Wed Jul 23 15:06:28 2014 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 23 Jul 2014 19:06:28 +0400 Subject: volatile and caches question In-Reply-To: References: <53CFB92F.9060008@oracle.com> Message-ID: <53CFCF74.10308@oracle.com> On 07/23/2014 06:13 PM, Winnie JayClay wrote: > Thanks, but I have hotspot and x86 question, not that much about > specification. What is the real implemented hotspot x86 behavior for > these two scenarios. HotSpot follows the guidance from JSR 133 Cookbook, please refer there. -Aleksey From vitalyd at gmail.com Wed Jul 23 15:26:25 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Wed, 23 Jul 2014 11:26:25 -0400 Subject: volatile and caches question In-Reply-To: References: <53CFB92F.9060008@oracle.com> Message-ID: Hotspot emits a storeload fence for volatile writes and a compiler-only barrier for volatile loads. Currently, the storeload is implemented as "lock add [r/esp], 0". This is a nop semantically but has a synchronizing effect. The resulting cpu/memory behavior is detailed by various online x86 resources. Sent from my phone On Jul 23, 2014 10:13 AM, "Winnie JayClay" wrote: > Hi Aleksey, > > > Thanks, but I have hotspot and x86 question, not that much about > specification. What is the real implemented hotspot x86 behavior for these > two scenarios. > > Also, personally, have no idea why you mentioned RSDN. 
> > > > On Wednesday, July 23, 2014, Aleksey Shipilev > > wrote: > > > Hi, > > > > This is a good question for concurrency-interest, not hotspot-dev: > > http://altair.cs.oswego.edu/mailman/listinfo/concurrency-interest > > > > Most of your questions seem to be answered by the JSR 133 Cookbook, > > which describes the conservative approach to conform to Java Memory > > Model: http://gee.cs.oswego.edu/dl/jmm/cookbook.html > > > > On 07/23/2014 05:05 PM, Winnie JayClay wrote: > > > Say, if I have class with volatile variable, but only one thread > operates > > > on that variable (read and update it), will be memory flushed always on > > x86 > > > when the same thread read and write? What is the overhead and do you > have > > > any optimizations in JDK? > > > > "Memory flush" has no place since 2005. > > > > VMs are free to optimize volatile accesses, as long as those > > optimizations fit the Java Memory Model. For example, the multiple > > volatile accesses in constructor can be optimized since we know whether > > the variable is not exposed to other threads. For the variables already > > on heap, it is generally unknown whether we can optimize the accesses > > without breaking the JMM (there are cases where we can do simple optos, > > see Cookbook). > > > > > Also if I have multiply threads which operate on > > > single volatile variable: one writer and many readers and writer > doesn't > > > write too much, will be caches flushed every time readers access > volatile > > > varible and when write didn't write anything? > > > > I'm not following what "caches" need "flushing" in this case. Cache > > coherency already takes care about propagating the values. > > Low-level-hardware-speaking, memory semantics around "volatiles" is > > about exposing the data to cache coherency in proper order. 
> > > > On x86, once writer had committed the volatile write, CPU store buffers > > have drained into memory subsystem, regular coherency takes care that > > readers are reading the consistent value. In this parlance, reading the > > volatile variable is almost no different from reading a plain one (not > > exactly, since compiler optimizations which may break the provisions of > > JMM are also inhibited). > > > > > I also thought to use normal > > > non-final non-volatile variable and for writer thread create and invoke > > > synchronized block somwhere after it will update varible to establish > > > happens-before, i.e. just a synchronized block to flush caches for > reader > > > threads to pick up the latest value - by the way, is my understanding > > > correct that if writer somewhere else invokes synchronized block which > is > > > not available for readers, will the readers get the latest value? > > > > A lone synchronized{} block used in writer only is not enough to > > establish happens-before with reader which does not use the > > synchronized{} on the same object. Moreover, the synchronized{} block on > > non-escaped object may be eliminated altogether, since its memory > > effects are not required to be visible by any other code. > > > > > Thanks for help, we work on HFT project in java, and performance is > > > super-critical for us. > > > > Judging from the questions alone (yes, I read RSDN occasionally), I > > would personally recommend to learn about what JMM is guaranteeing by > > itself (Java Language Spec), then learn how JMM is conservatively > > implemented (JSR133 Cookbook, etc.), then learn more about hardware > > guarantees (Intel/AMD SDMs), and then clearly keep in mind the > > difference between all three. If you have questions about these, > > concurrency-interest@ has a good supply of folks who are ready to > > discuss this. > > > > Thanks, > > -Aleksey. 
> > > From vladimir.x.ivanov at oracle.com Wed Jul 23 15:38:59 2014 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Wed, 23 Jul 2014 19:38:59 +0400 Subject: RFR(S): 8050978: Fix bad field access check in C1 and C2 In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDBB30@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDAAAC@DEWDFEMB12A.global.corp.sap> <53C7F369.5070706@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD04@DEWDFEMB12A.global.corp.sap> <53C9364A.1000202@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDBB30@DEWDFEMB12A.global.corp.sap> Message-ID: <53CFD713.6050107@oracle.com> Looks good. I'll sponsor the fix. Best regards, Vladimir Ivanov On 7/22/14, 2:42 PM, Lindenmaier, Goetz wrote: > Hi, > > I please need a second reviewer for this change, as well as a sponsor. > It also needs to go to 8u20. > > Thanks and best regards, > Goetz. > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Freitag, 18. Juli 2014 16:59 > To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050978: Fix bad field access check in C1 and C2 > > Good. > > Thanks, > Vladimir > > On 7/18/14 12:15 AM, Lindenmaier, Goetz wrote: >> Hi Vladimir, >> >> we updated the changeset with the new comment. >> http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ >> >> Best regards, >> Goetz. >> >> -----Original Message----- >> From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] >> Sent: Donnerstag, 17. 
Juli 2014 18:02 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR(S): 8050978: Fix bad field access check in C1 and C2 >> >> Please, don't put the next part of the comment into the sources: >> >> + // This will make the jck8 test >> + // vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html >> + // pass with -Xbatch -Xcomp >> >> instead add something like "canonical_holder should not be used to check access because it can erroneously succeed". >> >> Thanks, >> Vladimir >> >> On 7/17/14 3:47 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> This fixes an error doing field access checks in C1 and C2. >>> Please review and test the change. We please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ >>> >>> This should be included in 8u20, too. >>> >>> JCK8 test vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html fails with -Xbatch -Xcomp due to bad field access check in C1 and C2 >>> >>> Precondition: >>> ------------- >>> >>> Consider the following class hierarchy: >>> >>> A >>> / \ >>> B1 B2 >>> >>> A declares a field "aa" which both B1 and B2 inherit. >>> >>> Although aa is declared in a superclass of B1, methods in B1 might not access the field aa of an object of class B2: >>> >>> class B1 extends A { >>> m(B2 b2) { >>> ... >>> x = b2.aa; // !!! Access not allowed >>> } >>> } >>> >>> This is checked by the test mentioned above. >>> >>> Problem: >>> -------- >>> >>> ciField::will_link() used by C1 and C2 does the access check using the canonical_holder (which is A in this case) and thus the access erroneously succeeds. >>> >>> Fix: >>> ---- >>> >>> In ciField::ciField(), just before the canonical holder is stored into the _holder variable (which is used by ciField::will_link()), perform an additional access check with the holder declared in the class file. 
If this check fails, store the declared holder instead and ciField::will_link() will bail out compilation for this field later on. Then, the interpreter will throw an PrivilegedAccessException at runtime. >>> >>> Ways to reproduce: >>> ------------------ >>> >>> Run the above JCK test with >>> >>> C2 only: -XX:-TieredCompilation -Xbatch -Xcomp >>> >>> or >>> >>> with C1: -Xbatch -Xcomp -XX:-Inline >>> >>> Best regards, >>> Andreas and Goetz >>> >>> From asmundak at google.com Wed Jul 23 17:00:49 2014 From: asmundak at google.com (Alexander Smundak) Date: Wed, 23 Jul 2014 10:00:49 -0700 Subject: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDBD44@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDAA66@DEWDFEMB12A.global.corp.sap> <4295855A5C1DE049A61835A1887419CC2CEDBD44@DEWDFEMB12A.global.corp.sap> Message-ID: Thanks. I need a sponsor, please. Sasha On Wed, Jul 23, 2014 at 12:05 AM, Lindenmaier, Goetz wrote: > Hi Sasha, > > we ran our nightly tests on big-endian with this change. They're all green. > reviewed. > > Best regards, > Goetz. > > > -----Original Message----- > From: Lindenmaier, Goetz > Sent: Freitag, 18. Juli 2014 10:13 > To: 'Alexander Smundak' > Cc: HotSpot Open Source Developers > Subject: RE: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le > > Hi Sasha, > > thanks, now it works. I just ran jvm98/javac. > Comprehensive tests will be executed tonight. > > Best regards, > Goetz. > > > > > > -----Original Message----- > From: Alexander Smundak [mailto:asmundak at google.com] > Sent: Freitag, 18. Juli 2014 02:58 > To: Lindenmaier, Goetz > Cc: HotSpot Open Source Developers > Subject: Re: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le > > On Thu, Jul 17, 2014 at 3:20 AM, Lindenmaier, Goetz > wrote: >> I tested your change. Unfortunately it breaks our port. 
You need to fix Unsigned to >> Signed: >> >> --- a/src/cpu/ppc/vm/templateTable_ppc_64.cpp Wed Jul 16 16:53:32 2014 -0700 >> +++ b/src/cpu/ppc/vm/templateTable_ppc_64.cpp Thu Jul 17 12:14:18 2014 +0200 >> @@ -1929,7 +1929,7 @@ >> // default case >> __ bind(Ldefault_case); >> >> - __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Unsigned); >> + __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Signed); >> if (ProfileInterpreter) { >> __ profile_switch_default(Rdef_offset_addr, Rcount/* scratch */); >> __ b(Lcontinue_execution); > Oops. Fixed. Which test was broken by this, BTW? > >> If you want to, you can move loading the bci in this bytecode behind the loop. > Done. > >> Could you please fix indentation of relocInfo::none in call_c? Should >> be aligned to call_c. > Done. > > The revised patch is at > http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.01/ > please take another look. > > Sasha From aph at redhat.com Wed Jul 23 18:28:34 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 23 Jul 2014 19:28:34 +0100 Subject: volatile and caches question In-Reply-To: <53CFB92F.9060008@oracle.com> References: <53CFB92F.9060008@oracle.com> Message-ID: <53CFFED2.8080302@redhat.com> On 23/07/14 14:31, Aleksey Shipilev wrote: > "Memory flush" has no place since 2005. > > VMs are free to optimize volatile accesses, as long as those > optimizations fit the Java Memory Model. For example, the multiple > volatile accesses in constructor can be optimized since we know whether > the variable is not exposed to other threads. For the variables already > on heap, it is generally unknown whether we can optimize the accesses > without breaking the JMM (there are cases where we can do simple optos, > see Cookbook). 
> >> > Also if I have multiply threads which operate on >> > single volatile variable: one writer and many readers and writer doesn't >> > write too much, will be caches flushed every time readers access volatile >> > varible and when write didn't write anything? > I'm not following what "caches" need "flushing" in this case. Cache > coherency already takes care about propagating the values. > Low-level-hardware-speaking, memory semantics around "volatiles" is > about exposing the data to cache coherency in proper order. For anyone reading this who still thinks that barriers cause cache flushes (not pointing at you, Winnie, this is for anyone) I recommend _Memory Barriers: a Hardware View for Software Hackers_ http://irl.cs.ucla.edu/~yingdi/web/paperreading/whymb.2010.06.07c.pdf Andrew. From vladimir.kozlov at oracle.com Wed Jul 23 18:35:10 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 23 Jul 2014 11:35:10 -0700 Subject: [8u] RFR(S): 8050972: Concurrency problem in PcDesc cache Message-ID: <53D0005E.4000409@oracle.com> 8u (8u40) backport request. The fix was pushed into jdk9 yesterday and nightly testing shows no related problems. Changes from jdk9 applied to 8u without conflicts. http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/82cd02bbfc3a http://cr.openjdk.java.net/~goetz/webrevs/8050972-pcDescConc/webrev-01/ https://bugs.openjdk.java.net/browse/JDK-8050972 Thanks, Vladimir From vladimir.kozlov at oracle.com Thu Jul 24 01:42:20 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 23 Jul 2014 18:42:20 -0700 Subject: AArch64 in JDK 9? 
In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEDBBB4@DEWDFEMB12A.global.corp.sap> References: <534C2283.2010108@redhat.com> <53523109.7040002@redhat.com> <5352CD23.5050700@oracle.com> <5352CEC4.7080500@oracle.com> <53CE3803.5050803@redhat.com> <4295855A5C1DE049A61835A1887419CC2CEDBBB4@DEWDFEMB12A.global.corp.sap> Message-ID: <53D0647C.5050903@oracle.com> Hi Andrew, First, please, update JEP as Mikael suggested in his reply to "Please look at my JEP". About your patch. The patch was applied cleanly to jdk9/hs-comp/hotspot today. I generated webrev for easy review. So far I don't see problems except you need additional patch to jdk build system. http://cr.openjdk.java.net/~kvn/8044552/webrev/ In globals.hpp at line 123 you replaced code instead of only adding new one: - #ifdef TARGET_ARCH_ppc - # include "c1_globals_ppc.hpp" --- + #ifdef TARGET_ARCH_aarch64 + # include "c1_globals_aarch64.hpp" I fixed it. I ran these changes through JPRT (our build and test system). It failed to build: src/os/linux/vm/os_linux.cpp: In static member function 'static void* os::dll_load(const char*, char*, int)': src/os/linux/vm/os_linux.cpp:1951:6: error: 'EM_AARCH64' was not declared in this scope {EM_AARCH64, EM_AARCH64, ELFCLASS64, ELFDATA2LSB, (char*)"AARCH64"}, It looks like GCC we use does not know about it: gcc version 4.8.2 (GCC) I put that line under #ifdef and JPRT passed: +#if defined(AARCH64) + {EM_AARCH64, EM_AARCH64, ELFCLASS64, ELFDATA2LSB, (char*)"AARCH64"} +#endif On 7/22/14 7:33 AM, Lindenmaier, Goetz wrote: > Hi Andrew, > > I had a look at your change. > It's really much less than what we adapted! > All the effort with the staging directory would be overkill I guess. I agree that these Hotspot shared changes are surprisingly small. But you need also changes in JDK build system and remaining hotspot patches. We still need the staging directory to collect all changes as separate reviewed changesets and test them. 
Thanks, Vladimir > > I tried to patch the change to hs-rt. > Unfortunately, my change cleaning up the shared includes of cpu files > was just pushed, so your change does not apply cleanly any more. > But in the end this should even more reduce the size of your patch. > > I would add the includes in alphabetic sorting, i.e., before TARGET_ARCH_arm. > Same for lists of defines: c1_LIR.hpp: > +#if defined(SPARC) || defined(ARM) || defined(PPC) || defined(AARCH64) > I would put AARCH64 before ARM. > We also put AIX before BSD -- PPC was there already. > Saying that, why don't you reuse the includes etc. for ARM? In the end, only > the strings in os_linux must differ, and they could depend on ARM64 as we do > it for PPC. > > In c1_LIR.c|hpp, I would use AARCH64 in defines instead of TARGET_ARCH_aarch64. > > Best regards, > Goetz. > > > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Andrew Haley > Sent: Dienstag, 22. Juli 2014 12:08 > To: Vladimir Kozlov; Volker Simonis > Cc: hotspot-dev Source Developers > Subject: Re: AArch64 in JDK 9? > > I have prepared a diff that is the patch needed in the shared parts > of HotSpot for OpenJDK 9. > > It is at http://cr.openjdk.java.net/~aph/aarch64.1 > > If it would help to have a webrev I will do that, but it wasn't > clear how it should be done. > > I believe that there are no non-trivial changes to shared code which > are not guarded by TARGET_ARCH_aarch64, therefore it is unlikekly that > any other target will be affected. There are a couple of trivial > white space changes which I intend to remove; please ignore these. > > I hope that this will help you estimate the work required on Oracle's > side to integrate this port. > > Thanks, > Andrew. 
> From igor.ignatyev at oracle.com Thu Jul 24 20:37:08 2014 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Fri, 25 Jul 2014 00:37:08 +0400 Subject: RFR(S) : 8051896 : jtreg tests don't use $TESTJAVAOPTS In-Reply-To: <53D14DF4.2040702@oracle.com> References: <53D14A95.5010009@oracle.com> <53D14DF4.2040702@oracle.com> Message-ID: <53D16E74.6090504@oracle.com> // was Re: RFR(XS) : 8051896 : compiler/ciReplay tests don't use $TESTJAVAOPTS: updated webrev: http://cr.openjdk.java.net/~iignatyev/8051896/webrev.01/ 93 lines changed: 15 ins; 40 del; 38 mod; On 07/24/2014 10:18 PM, Vladimir Kozlov wrote: > Looks good. > > Is not this a general problem for all our tests? They use only > TESTVMOPTS. Yeap, it's a general problem, I've updated all tests. I thought jtreg merges TESTVMOPTS and TESTJAVAOPTS flags > together. > > Thanks, > Vladimir > > On 7/24/14 11:04 AM, Igor Ignatyev wrote: >> http://cr.openjdk.java.net/~iignatyev/8051896/webrev.00/ >> 12 lines changed: 2 ins; 0 del; 10 mod >> >> Hi all, >> >> Please review patch: >> >> Problem: >> the tests use only TESTVMOPTS, but jtreg propagates some flags by >> TESTJAVAOPTS variable >> >> Fix: >> usages of TESTVMOPTS were replaced by TESTOPTS which is initialized as >> concatenated values of TESTVMOPTS and TESTJAVAOPTS >> >> jbs: https://bugs.openjdk.java.net/browse/JDK-8051896 >> testing: jprt From vladimir.kozlov at oracle.com Thu Jul 24 21:38:20 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 24 Jul 2014 14:38:20 -0700 Subject: RFR(S) : 8051896 : jtreg tests don't use $TESTJAVAOPTS In-Reply-To: <53D16E74.6090504@oracle.com> References: <53D14A95.5010009@oracle.com> <53D14DF4.2040702@oracle.com> <53D16E74.6090504@oracle.com> Message-ID: <53D17CCC.5010908@oracle.com> The order of flags on command line is important. I think TESTJAVAOPTS should be last to take priority: TESTOPTS="${TESTVMOPTS} ${TESTJAVAOPTS}" Please, add copyright headers to files in compiler/6894807. Otherwise look good. 
Thanks, Vladimir On 7/24/14 1:37 PM, Igor Ignatyev wrote: > // was Re: RFR(XS) : 8051896 : compiler/ciReplay tests don't use > $TESTJAVAOPTS: > > updated webrev: http://cr.openjdk.java.net/~iignatyev/8051896/webrev.01/ > 93 lines changed: 15 ins; 40 del; 38 mod; > > On 07/24/2014 10:18 PM, Vladimir Kozlov wrote: >> Looks good. >> >> Is not this a general problem for all our tests? They use only >> TESTVMOPTS. > Yeap, it's a general problem, I've updated all tests. > I thought jtreg merges TESTVMOPTS and TESTJAVAOPTS flags >> together. >> >> Thanks, >> Vladimir >> >> On 7/24/14 11:04 AM, Igor Ignatyev wrote: >>> http://cr.openjdk.java.net/~iignatyev/8051896/webrev.00/ >>> 12 lines changed: 2 ins; 0 del; 10 mod >>> >>> Hi all, >>> >>> Please review patch: >>> >>> Problem: >>> the tests use only TESTVMOPTS, but jtreg propagates some flags by >>> TESTJAVAOPTS variable >>> >>> Fix: >>> usages of TESTVMOPTS were replaced by TESTOPTS which is initialized as >>> concatenated values of TESTVMOPTS and TESTJAVAOPTS >>> >>> jbs: https://bugs.openjdk.java.net/browse/JDK-8051896 >>> testing: jprt From goetz.lindenmaier at sap.com Fri Jul 25 07:17:23 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 25 Jul 2014 07:17:23 +0000 Subject: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le In-Reply-To: References: <4295855A5C1DE049A61835A1887419CC2CEDAA66@DEWDFEMB12A.global.corp.sap> <4295855A5C1DE049A61835A1887419CC2CEDBD44@DEWDFEMB12A.global.corp.sap> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEE3B91@DEWDFEMB12A.global.corp.sap> HI Alexander, you please also need an official reviewer, I'm only 'committer', so my review only counts as a second one. Best regards, Goetz. -----Original Message----- From: Alexander Smundak [mailto:asmundak at google.com] Sent: Mittwoch, 23. Juli 2014 19:01 To: Lindenmaier, Goetz Cc: HotSpot Open Source Developers Subject: Re: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le Thanks. 
I need a sponsor, please. Sasha On Wed, Jul 23, 2014 at 12:05 AM, Lindenmaier, Goetz wrote: > Hi Sasha, > > we ran our nightly tests on big-endian with this change. They're all green. > reviewed. > > Best regards, > Goetz. > > > -----Original Message----- > From: Lindenmaier, Goetz > Sent: Freitag, 18. Juli 2014 10:13 > To: 'Alexander Smundak' > Cc: HotSpot Open Source Developers > Subject: RE: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le > > Hi Sasha, > > thanks, now it works. I just ran jvm98/javac. > Comprehensive tests will be executed tonight. > > Best regards, > Goetz. > > > > > > -----Original Message----- > From: Alexander Smundak [mailto:asmundak at google.com] > Sent: Freitag, 18. Juli 2014 02:58 > To: Lindenmaier, Goetz > Cc: HotSpot Open Source Developers > Subject: Re: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le > > On Thu, Jul 17, 2014 at 3:20 AM, Lindenmaier, Goetz > wrote: >> I tested your change. Unfortunately it breaks our port. You need to fix Unsigned to >> Signed: >> >> --- a/src/cpu/ppc/vm/templateTable_ppc_64.cpp Wed Jul 16 16:53:32 2014 -0700 >> +++ b/src/cpu/ppc/vm/templateTable_ppc_64.cpp Thu Jul 17 12:14:18 2014 +0200 >> @@ -1929,7 +1929,7 @@ >> // default case >> __ bind(Ldefault_case); >> >> - __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Unsigned); >> + __ get_u4(Roffset, Rdef_offset_addr, 0, InterpreterMacroAssembler::Signed); >> if (ProfileInterpreter) { >> __ profile_switch_default(Rdef_offset_addr, Rcount/* scratch */); >> __ b(Lcontinue_execution); > Oops. Fixed. Which test was broken by this, BTW? > >> If you want to, you can move loading the bci in this bytecode behind the loop. > Done. > >> Could you please fix indentation of relocInfo::none in call_c? Should >> be aligned to call_c. > Done. > > The revised patch is at > http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.01/ > please take another look. 
> > Sasha From asmundak at google.com Fri Jul 25 07:23:17 2014 From: asmundak at google.com (Alexander Smundak) Date: Fri, 25 Jul 2014 00:23:17 -0700 Subject: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEE3B91@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDAA66@DEWDFEMB12A.global.corp.sap> <4295855A5C1DE049A61835A1887419CC2CEDBD44@DEWDFEMB12A.global.corp.sap> <4295855A5C1DE049A61835A1887419CC2CEE3B91@DEWDFEMB12A.global.corp.sap> Message-ID: Official reviewers, please take a look. On Jul 25, 2014 12:17 AM, "Lindenmaier, Goetz" wrote: > HI Alexander, > > you please also need an official reviewer, > I'm only 'committer', so my review only counts as a second one. > > Best regards, > Goetz. > > -----Original Message----- > From: Alexander Smundak [mailto:asmundak at google.com] > Sent: Mittwoch, 23. Juli 2014 19:01 > To: Lindenmaier, Goetz > Cc: HotSpot Open Source Developers > Subject: Re: RFR(M): 8050942 : PPC64: implement template interpreter for > ppc64le > > Thanks. > I need a sponsor, please. > Sasha > > On Wed, Jul 23, 2014 at 12:05 AM, Lindenmaier, Goetz > wrote: > > Hi Sasha, > > > > we ran our nightly tests on big-endian with this change. They're all > green. > > reviewed. > > > > Best regards, > > Goetz. > > > > > > -----Original Message----- > > From: Lindenmaier, Goetz > > Sent: Freitag, 18. Juli 2014 10:13 > > To: 'Alexander Smundak' > > Cc: HotSpot Open Source Developers > > Subject: RE: RFR(M): 8050942 : PPC64: implement template interpreter for > ppc64le > > > > Hi Sasha, > > > > thanks, now it works. I just ran jvm98/javac. > > Comprehensive tests will be executed tonight. > > > > Best regards, > > Goetz. > > > > > > > > > > > > -----Original Message----- > > From: Alexander Smundak [mailto:asmundak at google.com] > > Sent: Freitag, 18. 
Juli 2014 02:58 > > To: Lindenmaier, Goetz > > Cc: HotSpot Open Source Developers > > Subject: Re: RFR(M): 8050942 : PPC64: implement template interpreter for > ppc64le > > > > On Thu, Jul 17, 2014 at 3:20 AM, Lindenmaier, Goetz > > wrote: > >> I tested your change. Unfortunately it breaks our port. You need to > fix Unsigned to > >> Signed: > >> > >> --- a/src/cpu/ppc/vm/templateTable_ppc_64.cpp Wed Jul 16 16:53:32 > 2014 -0700 > >> +++ b/src/cpu/ppc/vm/templateTable_ppc_64.cpp Thu Jul 17 12:14:18 > 2014 +0200 > >> @@ -1929,7 +1929,7 @@ > >> // default case > >> __ bind(Ldefault_case); > >> > >> - __ get_u4(Roffset, Rdef_offset_addr, 0, > InterpreterMacroAssembler::Unsigned); > >> + __ get_u4(Roffset, Rdef_offset_addr, 0, > InterpreterMacroAssembler::Signed); > >> if (ProfileInterpreter) { > >> __ profile_switch_default(Rdef_offset_addr, Rcount/* scratch */); > >> __ b(Lcontinue_execution); > > Oops. Fixed. Which test was broken by this, BTW? > > > >> If you want to, you can move loading the bci in this bytecode behind > the loop. > > Done. > > > >> Could you please fix indentation of relocInfo::none in call_c? Should > >> be aligned to call_c. > > Done. > > > > The revised patch is at > > http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.01/ > > please take another look. 
> > > > Sasha > From goetz.lindenmaier at sap.com Fri Jul 25 07:59:01 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 25 Jul 2014 07:59:01 +0000 Subject: RFR(S): 8050978: Fix bad field access check in C1 and C2 In-Reply-To: <53CFD713.6050107@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CEDAAAC@DEWDFEMB12A.global.corp.sap> <53C7F369.5070706@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDAD04@DEWDFEMB12A.global.corp.sap> <53C9364A.1000202@oracle.com> <4295855A5C1DE049A61835A1887419CC2CEDBB30@DEWDFEMB12A.global.corp.sap> <53CFD713.6050107@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEE3BEE@DEWDFEMB12A.global.corp.sap> Hi, Vladimir, thanks for sponsoring it! And all the people handling JBS, thanks too! Best regards, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Ivanov Sent: Mittwoch, 23. Juli 2014 17:39 To: hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050978: Fix bad field access check in C1 and C2 Looks good. I'll sponsor the fix. Best regards, Vladimir Ivanov On 7/22/14, 2:42 PM, Lindenmaier, Goetz wrote: > Hi, > > I please need a second reviewer for this change, as well as a sponsor. > It also needs to go to 8u20. > > Thanks and best regards, > Goetz. > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Freitag, 18. Juli 2014 16:59 > To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): 8050978: Fix bad field access check in C1 and C2 > > Good. > > Thanks, > Vladimir > > On 7/18/14 12:15 AM, Lindenmaier, Goetz wrote: >> Hi Vladimir, >> >> we updated the changeset with the new comment. >> http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ >> >> Best regards, >> Goetz. >> >> -----Original Message----- >> From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] >> Sent: Donnerstag, 17. 
Juli 2014 18:02 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR(S): 8050978: Fix bad field access check in C1 and C2 >> >> Please, don't put next part of comment into sources: >> >> + // This will make the jck8 test >> + // vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html >> + // pass with -Xbatch -Xcomp >> >> instead add something like "canonical_holder should not be use to check access becasue it can erroneously succeeds". >> >> Thanks, >> Vladimir >> >> On 7/17/14 3:47 AM, Lindenmaier, Goetz wrote: >>> Hi, >>> >>> This fixes an error doing field access checks in C1 and C2. >>> Please review and test the change. We please need a sponsor. >>> http://cr.openjdk.java.net/~goetz/webrevs/8050978-fieldCheck/webrev-01/ >>> >>> This should be included in 8u20, too. >>> >>> JCK8 test vm/constantpool/accessControl/accessControl004/accessControl00402m3/accessControl00402m3.html fails with -Xbatch -Xcomp due to bad field access check in C1 and C2 >>> >>> Precondition: >>> ------------- >>> >>> Consider the following class hierarchy: >>> >>> A >>> / \ >>> B1 B2 >>> >>> A declares a field "aa" which both B1 and B2 inherit. >>> >>> Despite aa is declared in a super class of B1, methods in B1 might not access the field aa of an object of class B2: >>> >>> class B1 extends A { >>> m(B2 b2) { >>> ... >>> x = b2.aa; // !!! Access not allowed >>> } >>> } >>> >>> This is checked by the test mentioned above. >>> >>> Problem: >>> -------- >>> >>> ciField::will_link() used by C1 and C2 does the access check using the canonical_holder (which is A in this case) and thus the access erroneously succeeds. >>> >>> Fix: >>> ---- >>> >>> In ciField::ciField(), just before the canonical holder is stored into the _holder variable (and which is used by ciField::will_link()) perform an additional access check with the holder declared in the class file. 
If this check fails, store the declared holder instead; ciField::will_link() will then bail out of compiling this field access later on, and the interpreter will throw a PrivilegedAccessException at runtime. >>> >>> Ways to reproduce: >>> ------------------ >>> >>> Run the above JCK test with >>> >>> C2 only: -XX:-TieredCompilation -Xbatch -Xcomp >>> >>> or >>> >>> with C1: -Xbatch -Xcomp -XX:-Inline >>> >>> Best regards, >>> Andreas and Goetz >>> >>> From zoltan.majo at oracle.com Fri Jul 25 08:39:03 2014 From: zoltan.majo at oracle.com (Zoltán Majó) Date: Fri, 25 Jul 2014 10:39:03 +0200 Subject: supported platforms for JDK 9 Message-ID: <53D217A7.5000906@oracle.com> Hi, I was wondering recently what the supported platforms for JDK 9 on Solaris SPARC are. It is clear that JDK 8 supports only Solaris 10u9+: https://wiki.se.oracle.com/display/JPGINFRA/JDK+8+Build+Platforms+and+Compilers. The build platform for JDK 9 is Solaris 11.1. Could you please tell me if you think there is any reason to support JDK 9 on a Solaris release older than 11? Thank you and best regards, Zoltan From goetz.lindenmaier at sap.com Fri Jul 25 12:07:13 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 25 Jul 2014 12:07:13 +0000 Subject: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark References: <4295855A5C1DE049A61835A1887419CC2CEDAE6B@DEWDFEMB12A.global.corp.sap> <1405932872.2723.13.camel@cirrus> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEE4A4E@DEWDFEMB12A.global.corp.sap> Hi, could somebody please have a further look at this? We also need a sponsor, please. Best regards, Goetz. -----Original Message----- From: Lindenmaier, Goetz Sent: Montag, 21.
Juli 2014 15:01 To: 'Thomas Schatzl' Cc: hotspot-dev at openjdk.java.net; Doerr, Martin Subject: RE: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark Hi Thomas, we put the marks there because other work functions are set up similarly, e.g. CMConcurrentMarkingTask::work() and CMRemarkTask::work(). So we propose to first go with our change to fix the issue; it can be refactored afterwards. It's a minor resource leak. We don't think it's a showstopper, so let's only move it to 8u40. Best regards, Martin and Goetz. -----Original Message----- From: Thomas Schatzl [mailto:thomas.schatzl at oracle.com] Sent: Montag, 21. Juli 2014 10:55 To: Lindenmaier, Goetz Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark Hi, On Fri, 2014-07-18 at 12:47 +0000, Lindenmaier, Goetz wrote: > Hi, > > This fixes two missing Resource and Handle marks. > > Please review and test this change. We need a sponsor, please, > to push it. > http://cr.openjdk.java.net/~goetz/webrevs/8050973-mark/webrev-01/ I think the resource/handle marks should be added before the work method is called, for safety. So I would prefer if the marks were added at least in the two places in GangWorker::loop() and YieldingFlexibleGangWorker::loop() where the work method is called, if not everywhere work() is called. The latter might be too big a change for now. > Should this be pushed to 8u20? Can you elaborate on the consequences of not applying the patch to 8u20? Only showstoppers can be pushed to 8u20 at this time, so we need a good reason. It seems to be a problem with ParallelRefProcEnabled only, and we have not seen crashes.
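The placement question raised above, whether the marks belong inside the worker loop around each work() call, comes down to the RAII behavior of ResourceMark/HandleMark: allocations made after the mark is created are released when it goes out of scope, so a per-iteration mark keeps one task's temporary allocations from surviving into the next. A self-contained sketch of that pattern (the toy Arena, Mark, work, and worker_loop names are stand-ins for illustration, not HotSpot's real API):

```cpp
#include <cstddef>
#include <vector>

// Toy stand-in for HotSpot's resource area: allocations made after a
// mark is taken are released when that mark goes out of scope.
struct Arena {
    std::vector<char*> blocks;
    char* alloc(std::size_t n) {
        char* p = new char[n];
        blocks.push_back(p);
        return p;
    }
    void release_to(std::size_t depth) {
        while (blocks.size() > depth) {
            delete[] blocks.back();
            blocks.pop_back();
        }
    }
};

// RAII mark, analogous in spirit to ResourceMark/HandleMark.
struct Mark {
    Arena& arena;
    std::size_t depth;
    explicit Mark(Arena& a) : arena(a), depth(a.blocks.size()) {}
    ~Mark() { arena.release_to(depth); }
};

Arena g_arena;

// Stand-in for a GC worker's work() method that makes temporary
// resource allocations while processing one task.
void work(int /*task_id*/) {
    g_arena.alloc(64);  // per-task scratch memory
}

// Worker loop with the mark taken per iteration, as suggested for
// GangWorker::loop(): each task's allocations are freed before the
// next task runs, so nothing accumulates across tasks.
void worker_loop(int tasks) {
    for (int i = 0; i < tasks; i++) {
        Mark m(g_arena);  // placed before the work method is called
        work(i);
    }                     // ~Mark releases this task's allocations
}
```

With the mark outside the loop instead, every iteration's scratch memory would stay live until the worker exits, which is exactly the slow leak described above.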
Thanks, Thomas From tobias.hartmann at oracle.com Fri Jul 25 13:54:38 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 25 Jul 2014 15:54:38 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53CEA79F.6030004@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> <53C639A2.3050202@oracle.com> <53C90740.40602@oracle.com> <53C937E0.7060304@oracle.com> <53C9614C.8080109@oracle.com> <53C962F3.3070405@oracle.com> <53CCD307.7040806@oracle.com> <53CD62F4.1020904@oracle.com> <53CE2D53.6040006@oracle.com> <53CEA79F.6030004@oracle.com> Message-ID: <53D2619E.8010109@oracle.com> Mikael, Vladimir, thanks for the review. The problem is indeed caused by a missing check for a metadata pointer in sparc.ad. Adding 'n->bottom_type()->isa_klassptr()' checks to immP_load() and immP_no_oop_cheap() fixes the problem. The Klass pointer is then loaded from the constant table (loadConP_load()) and a metadata relocation is added by Compile::ConstantTable::emit(). I had a look at the Aurora chessboard and it looks as if the bug has recently occurred on x86_32 as well. I have not yet been able to reproduce it but will try again next week. Thanks, Tobias On 22.07.2014 20:04, Vladimir Kozlov wrote: > I agree with Mikael, the case without compressed oops is incorrect. > The problem is that the immP_load() and immP_no_oop_cheap() operands miss the > check for a metadata pointer; they only check for an oop. For class loading > loadConP_load() should be used instead of loadConP_no_oop_cheap() and > it should check the relocation type and do what loadConP_set() does. > > > X64 seems fine.
X86_64.ad use $$$emit32$src$$constant; in such case > (load_immP31) which is expanded to the same code as load_immP: > > if ( opnd_array(1)->constant_reloc() != relocInfo::none ) { > emit_d32_reloc(cbuf, opnd_array(1)->constant(), > opnd_array(1)->constant_reloc(), 0); > > Thanks, > Vladimir > > On 7/22/14 2:22 AM, Tobias Hartmann wrote: >> On 21.07.2014 20:59, Vladimir Kozlov wrote: >>> On 7/21/14 1:44 AM, Tobias Hartmann wrote: >>>> Vladimir, Coleen, thanks for the reviews! >>>> >>>> On 18.07.2014 20:09, Vladimir Kozlov wrote: >>>>> On 7/18/14 11:02 AM, Coleen Phillimore wrote: >>>>>> >>>>>> On 7/18/14, 11:06 AM, Vladimir Kozlov wrote: >>>>>>> On 7/18/14 4:38 AM, Tobias Hartmann wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> I spend some more days and was finally able to implement a test >>>>>>>> that >>>>>>>> deterministically triggers the bug: >>>>>>> >>>>>>> Why do you need to switch off compressed oops? Do you need to >>>>>>> switch >>>>>>> off compressed klass pointers too (UseCompressedClassPointers)? >>>>>> >>>>>> CompressedOops when off turns off CompressedClassPointers. >>>>> >>>>> You are right, I forgot that. Still the question is why switch off >>>>> coop? >>>> >>>> I'm only able to reproduce the bug without compressed oops. The >>>> original >>>> bug also only reproduces with -XX:-UseCompressedOops. I tried to >>>> figure >>>> out why (on Sparc): >>>> >>>> With compressed oops enabled, Method* metadata referencing >>>> 'WorkerClass' >>>> is added to 'doWork' in MacroAssembler::set_narrow_klass(..). In >>>> CodeBuffer::finalize_oop_references(..) the metadata is processed >>>> and an >>>> oop to the class loader 'URLClassLoader' is added. This oop leads >>>> to the >>>> unloading of 'doWork', hence the verification code is never executed. >>>> >>>> I'm not sure what set_narrow_klass(..) is used for in this case. I >>>> assume it stores a 'WorkerClass' Klass* in a register as part of an >>>> optimization? Because 'doWork' potentially works on any class. 
>>>> Apparently this optimization is not performed without compressed oops. >>> >>> I would suggest to compare 'doWork' assembler >>> (-XX:CompileCommand=print,TestMethodUnloading::doWork) with coop and >>> without it. Usually loaded into register class is used for klass >>> compare do guard inlining code. Or to initialize new object. >>> >>> I don't see loading (constructing) uncompressed (whole) klass pointer >>> from constant in sparc.ad. It could be the reason for different >>> behavior. It could be loaded from constants section. But constants >>> section should have metadata relocation info in such case. >> >> I did as you suggested and found the following: >> >> During the profiling phase the class given to 'doWork' always is >> 'WorkerClass'. The C2 compiler therefore optimizes the compiled version >> to expect a 'WorkerClass'. The branch that instantiates a new object is >> guarded by an uncommon trap (class_check). The difference between the >> two versions (with and without compressed oops) is the loading of the >> 'WorkerClass' Klass to check if the given class is equal: >> >> With compressed oops: >> SET narrowklass: precise klass WorkerClass: >> 0x00000001004a0d40:Constant:exact *,R_L1 ! compressed klass ptr >> CWBne R_L2,R_L1,B8 ! compressed ptr P=0.000001 C=-1.000000 >> >> Without: >> SET precise klass WorkerClass: 0x00000001004aeab0:Constant:exact >> *,R_L1 ! non-oop ptr >> CXBpne R_L2,R_L1,B8 ! ptr P=0.000001 C=-1.000000 >> >> R_L2: class given as parameter >> B8: location of uncommon trap >> >> In the first case, the Klass is loaded by a 'loadConNKlass' instruction >> that calls MacroAssembler::set_narrow_klass(..) which then creates a >> metadata_Relocation for the 'WorkerClass'. This metada_Relocation is >> processed by CodeBuffer::finalize_oop_references(..) and an oop to >> 'WorkerClass' is added. This oop causes the unloading of the method. 
>> >> In the second case, the Klass is loaded by a 'loadConP_no_oop_cheap' >> instruction that does not create a metadata_Relocation. >> >> I don't understand why the metadata_Relocation in the first case is >> needed? As the test shows it is better to only unload the method if we >> hit the uncommon trap because we could still use other (potentially >> complex) branches of the method. >> >> Thanks, >> Tobias >> >>> >>> thanks, >>> Vladimir >>> >>>> >>>> Best, >>>> Tobias >>>> >>>>> >>>>> Vladimir >>>>> >>>>>>> >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ >>>>>>> >>>>>>> Very nice! >>>>>> >>>>>> Yes, I agree. Impressive. >>>>>> >>>>>> The refactoring in nmethod.cpp looks good to me. I have no further >>>>>> comments. >>>>>> Thanks! >>>>>> Coleen >>>>>> >>>>>>> >>>>>>>> >>>>>>>> @Vladimir: The test shows why we should only clean the ICs but not >>>>>>>> unload the nmethod if possible. The method ' doWork' >>>>>>>> is still valid after WorkerClass was unloaded and depending on the >>>>>>>> complexity of the method we should avoid unloading it. >>>>>>> >>>>>>> Make sense. >>>>>>> >>>>>>>> >>>>>>>> On Sparc my patch fixes the bug and leads to the nmethod not being >>>>>>>> unloaded. The compiled version is therefore used even >>>>>>>> after WorkerClass is unloaded. >>>>>>>> >>>>>>>> On x86 the nmethod is unloaded anyway because of a dead oop. >>>>>>>> This is >>>>>>>> probably due to a slightly different implementation >>>>>>>> of the ICs. I'll have a closer look to see if we can improve that. >>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Tobias >>>>>>>> >>>>>>>> On 16.07.2014 10:36, Tobias Hartmann wrote: >>>>>>>>> Sorry, forgot to answer this question: >>>>>>>>>> Were you able to create a small test case for it that would be >>>>>>>>>> useful to add? >>>>>>>>> Unfortunately I was not able to create a test. 
The bug only >>>>>>>>> reproduces on a particular system with a > 30 minute run >>>>>>>>> of runThese. >>>>>>>>> >>>>>>>>> Best, >>>>>>>>> Tobias >>>>>>>>> >>>>>>>>> On 16.07.2014 09:54, Tobias Hartmann wrote: >>>>>>>>>> Hi Coleen, >>>>>>>>>> >>>>>>>>>> thanks for the review. >>>>>>>>>>> *+ if (csc->is_call_to_interpreted() && >>>>>>>>>>> stub_contains_dead_metadata(is_alive, csc->destination())) {* >>>>>>>>>>> *+ csc->set_to_clean();* >>>>>>>>>>> *+ }* >>>>>>>>>>> >>>>>>>>>>> This appears in each case. Can you fold it and the new >>>>>>>>>>> function >>>>>>>>>>> into a function like >>>>>>>>>>> clean_call_to_interpreted_stub(is_alive, csc)? >>>>>>>>>> >>>>>>>>>> I folded it into the function >>>>>>>>>> clean_call_to_interpreter_stub(..). >>>>>>>>>> >>>>>>>>>> New webrev: >>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Tobias >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Coleen >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> So before the permgen removal embedded method* were oops and >>>>>>>>>>>> they >>>>>>>>>>>> were processed in relocInfo::oop_type loop. >>>>>>>>>>>> >>>>>>>>>>>> May be instead of specializing opt_virtual_call_type and >>>>>>>>>>>> static_call_type call site you can simple add a loop for >>>>>>>>>>>> relocInfo::metadata_type (similar to oop_type loop)? >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> Vladimir >>>>>>>>>>>> >>>>>>>>>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>>>>>>>>>> Hi, >>>>>>>>>>>>> >>>>>>>>>>>>> please review the following patch for JDK-8029443. >>>>>>>>>>>>> >>>>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>>>>>>>>>> Webrev: >>>>>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>>>>>>>>>> >>>>>>>>>>>>> *Problem* >>>>>>>>>>>>> After the tracing/marking phase of GC, >>>>>>>>>>>>> nmethod::do_unloading(..) >>>>>>>>>>>>> checks >>>>>>>>>>>>> if a nmethod can be unloaded because it contains dead >>>>>>>>>>>>> oops. 
If >>>>>>>>>>>>> class >>>>>>>>>>>>> unloading occurred we additionally clear all ICs where the >>>>>>>>>>>>> cached >>>>>>>>>>>>> metadata refers to an unloaded klass or method. If the >>>>>>>>>>>>> nmethod >>>>>>>>>>>>> is not >>>>>>>>>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally >>>>>>>>>>>>> checks if >>>>>>>>>>>>> all >>>>>>>>>>>>> metadata is alive. The assert in CheckClass::check_class >>>>>>>>>>>>> fails >>>>>>>>>>>>> because >>>>>>>>>>>>> the nmethod contains Method* metadata corresponding to a dead >>>>>>>>>>>>> Klass. >>>>>>>>>>>>> The Method* belongs to a to-interpreter stub [1] of an >>>>>>>>>>>>> optimized >>>>>>>>>>>>> compiled IC. Normally we clear those stubs prior to >>>>>>>>>>>>> verification to >>>>>>>>>>>>> avoid dangling references to Method* [2], but only if the >>>>>>>>>>>>> stub >>>>>>>>>>>>> is not in >>>>>>>>>>>>> use, i.e. if the IC is not in to-interpreted mode. In this >>>>>>>>>>>>> case the >>>>>>>>>>>>> to-interpreter stub may be executed and hand a stale Method* >>>>>>>>>>>>> to the >>>>>>>>>>>>> interpreter. >>>>>>>>>>>>> >>>>>>>>>>>>> *Solution >>>>>>>>>>>>> *The implementation of nmethod::do_unloading(..) is >>>>>>>>>>>>> changed to >>>>>>>>>>>>> clean >>>>>>>>>>>>> compiled ICs and compiled static calls if they call into a >>>>>>>>>>>>> to-interpreter stub that references dead Method* metadata. >>>>>>>>>>>>> >>>>>>>>>>>>> The patch was affected by the G1 class unloading changes >>>>>>>>>>>>> (JDK-8048248) >>>>>>>>>>>>> because the method nmethod::do_unloading_parallel(..) was >>>>>>>>>>>>> added. I >>>>>>>>>>>>> adapted the implementation as well. >>>>>>>>>>>>> * >>>>>>>>>>>>> Testing >>>>>>>>>>>>> *Failing test (runThese) >>>>>>>>>>>>> JPRT >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> Tobias >>>>>>>>>>>>> >>>>>>>>>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) 
>>>>>>>>>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>>>>>>>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>> >>>> >> From mikael.gerdin at oracle.com Fri Jul 25 14:13:56 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Fri, 25 Jul 2014 16:13:56 +0200 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53D2619E.8010109@oracle.com> References: <53C3C584.7070008@oracle.com> <53CEA79F.6030004@oracle.com> <53D2619E.8010109@oracle.com> Message-ID: <2662926.5arkkomgnm@mgerdin03> Tobias, On Friday 25 July 2014 15.54.38 Tobias Hartmann wrote: > Mikael, Vladimir, thanks for the review. > > The problem is indeed caused by a missing check for a metadata pointer > in sparc.ad. > > Adding 'n->bottom_type()->isa_klassptr()' checks to immP_load() and > immP_no_oop_cheap() fixes the problem. The Klass pointer is then loaded > from the constant table (loadConP_load()) and a metadata relocation is > added by Compile::ConstantTable::emit(). There shouldn't be anything stopping the Klass* from being emitted as an immediate just as long as the appropriate relocation entry is created. We don't need to update the immediate in the instruction stream for the Klass* since we don't move klasses any more. I can't find any usages of {oop,metadata}_Relocation::spec_for_immediate from C2 though, so I don't know how that works in C2-land. /Mikael > > I had a look at the Aurora chessboard and it looks like as if the bug > recently occured on x86_32 as well. I was not yet able to reproduce it > but will try again next week. > > Thanks, > Tobias > > On 22.07.2014 20:04, Vladimir Kozlov wrote: > > I agree with Mikael, the case without compressed oops is incorrect. > > The problem is immP_load() and immP_no_oop_cheap() operands miss the > > check for metadata pointer, they only check for oop. 
For class loading > > loadConP_load() should be used instead of loadConP_no_oop_cheap() and > > it should check relocation type and do what loadConP_set() does. > > > > > > X64 seems fine. X86_64.ad use $$$emit32$src$$constant; in such case > > (load_immP31) which is expanded to the same code as load_immP: > > > > if ( opnd_array(1)->constant_reloc() != relocInfo::none ) { > > > > emit_d32_reloc(cbuf, opnd_array(1)->constant(), > > > > opnd_array(1)->constant_reloc(), 0); > > > > Thanks, > > Vladimir > > > > On 7/22/14 2:22 AM, Tobias Hartmann wrote: > >> On 21.07.2014 20:59, Vladimir Kozlov wrote: > >>> On 7/21/14 1:44 AM, Tobias Hartmann wrote: > >>>> Vladimir, Coleen, thanks for the reviews! > >>>> > >>>> On 18.07.2014 20:09, Vladimir Kozlov wrote: > >>>>> On 7/18/14 11:02 AM, Coleen Phillimore wrote: > >>>>>> On 7/18/14, 11:06 AM, Vladimir Kozlov wrote: > >>>>>>> On 7/18/14 4:38 AM, Tobias Hartmann wrote: > >>>>>>>> Hi, > >>>>>>>> > >>>>>>>> I spend some more days and was finally able to implement a test > >>>>>>>> that > >>>>>>> > >>>>>>>> deterministically triggers the bug: > >>>>>>> Why do you need to switch off compressed oops? Do you need to > >>>>>>> switch > >>>>>>> off compressed klass pointers too (UseCompressedClassPointers)? > >>>>>> > >>>>>> CompressedOops when off turns off CompressedClassPointers. > >>>>> > >>>>> You are right, I forgot that. Still the question is why switch off > >>>>> coop? > >>>> > >>>> I'm only able to reproduce the bug without compressed oops. The > >>>> original > >>>> bug also only reproduces with -XX:-UseCompressedOops. I tried to > >>>> figure > >>>> out why (on Sparc): > >>>> > >>>> With compressed oops enabled, Method* metadata referencing > >>>> 'WorkerClass' > >>>> is added to 'doWork' in MacroAssembler::set_narrow_klass(..). In > >>>> CodeBuffer::finalize_oop_references(..) the metadata is processed > >>>> and an > >>>> oop to the class loader 'URLClassLoader' is added. 
This oop leads > >>>> to the > >>>> unloading of 'doWork', hence the verification code is never executed. > >>>> > >>>> I'm not sure what set_narrow_klass(..) is used for in this case. I > >>>> assume it stores a 'WorkerClass' Klass* in a register as part of an > >>>> optimization? Because 'doWork' potentially works on any class. > >>>> Apparently this optimization is not performed without compressed oops. > >>> > >>> I would suggest to compare 'doWork' assembler > >>> (-XX:CompileCommand=print,TestMethodUnloading::doWork) with coop and > >>> without it. Usually loaded into register class is used for klass > >>> compare do guard inlining code. Or to initialize new object. > >>> > >>> I don't see loading (constructing) uncompressed (whole) klass pointer > >>> from constant in sparc.ad. It could be the reason for different > >>> behavior. It could be loaded from constants section. But constants > >>> section should have metadata relocation info in such case. > >> > >> I did as you suggested and found the following: > >> > >> During the profiling phase the class given to 'doWork' always is > >> 'WorkerClass'. The C2 compiler therefore optimizes the compiled version > >> to expect a 'WorkerClass'. The branch that instantiates a new object is > >> guarded by an uncommon trap (class_check). The difference between the > >> two versions (with and without compressed oops) is the loading of the > >> 'WorkerClass' Klass to check if the given class is equal: > >> > >> With compressed oops: > >> SET narrowklass: precise klass WorkerClass: > >> 0x00000001004a0d40:Constant:exact *,R_L1 ! compressed klass ptr > >> > >> CWBne R_L2,R_L1,B8 ! compressed ptr P=0.000001 C=-1.000000 > >> > >> Without: > >> SET precise klass WorkerClass: 0x00000001004aeab0:Constant:exact > >> > >> *,R_L1 ! non-oop ptr > >> > >> CXBpne R_L2,R_L1,B8 ! 
ptr P=0.000001 C=-1.000000 > >> > >> R_L2: class given as parameter > >> B8: location of uncommon trap > >> > >> In the first case, the Klass is loaded by a 'loadConNKlass' instruction > >> that calls MacroAssembler::set_narrow_klass(..) which then creates a > >> metadata_Relocation for the 'WorkerClass'. This metada_Relocation is > >> processed by CodeBuffer::finalize_oop_references(..) and an oop to > >> 'WorkerClass' is added. This oop causes the unloading of the method. > >> > >> In the second case, the Klass is loaded by a 'loadConP_no_oop_cheap' > >> instruction that does not create a metadata_Relocation. > >> > >> I don't understand why the metadata_Relocation in the first case is > >> needed? As the test shows it is better to only unload the method if we > >> hit the uncommon trap because we could still use other (potentially > >> complex) branches of the method. > >> > >> Thanks, > >> Tobias > >> > >>> thanks, > >>> Vladimir > >>> > >>>> Best, > >>>> Tobias > >>>> > >>>>> Vladimir > >>>>> > >>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ > >>>>>>> > >>>>>>> Very nice! > >>>>>> > >>>>>> Yes, I agree. Impressive. > >>>>>> > >>>>>> The refactoring in nmethod.cpp looks good to me. I have no further > >>>>>> comments. > >>>>>> Thanks! > >>>>>> Coleen > >>>>>> > >>>>>>>> @Vladimir: The test shows why we should only clean the ICs but not > >>>>>>>> unload the nmethod if possible. The method ' doWork' > >>>>>>>> is still valid after WorkerClass was unloaded and depending on the > >>>>>>>> complexity of the method we should avoid unloading it. > >>>>>>> > >>>>>>> Make sense. > >>>>>>> > >>>>>>>> On Sparc my patch fixes the bug and leads to the nmethod not being > >>>>>>>> unloaded. The compiled version is therefore used even > >>>>>>>> after WorkerClass is unloaded. > >>>>>>>> > >>>>>>>> On x86 the nmethod is unloaded anyway because of a dead oop. 
> >>>>>>>> This is > >>>>>>>> probably due to a slightly different implementation > >>>>>>>> of the ICs. I'll have a closer look to see if we can improve that. > >>>>>>> > >>>>>>> Thanks, > >>>>>>> Vladimir > >>>>>>> > >>>>>>>> Thanks, > >>>>>>>> Tobias > >>>>>>>> > >>>>>>>> On 16.07.2014 10:36, Tobias Hartmann wrote: > >>>>>>>>> Sorry, forgot to answer this question: > >>>>>>>>>> Were you able to create a small test case for it that would be > >>>>>>>>>> useful to add? > >>>>>>>>> > >>>>>>>>> Unfortunately I was not able to create a test. The bug only > >>>>>>>>> reproduces on a particular system with a > 30 minute run > >>>>>>>>> of runThese. > >>>>>>>>> > >>>>>>>>> Best, > >>>>>>>>> Tobias > >>>>>>>>> > >>>>>>>>> On 16.07.2014 09:54, Tobias Hartmann wrote: > >>>>>>>>>> Hi Coleen, > >>>>>>>>>> > >>>>>>>>>> thanks for the review. > >>>>>>>>>> > >>>>>>>>>>> *+ if (csc->is_call_to_interpreted() && > >>>>>>>>>>> stub_contains_dead_metadata(is_alive, csc->destination())) {* > >>>>>>>>>>> *+ csc->set_to_clean();* > >>>>>>>>>>> *+ }* > >>>>>>>>>>> > >>>>>>>>>>> This appears in each case. Can you fold it and the new > >>>>>>>>>>> function > >>>>>>>>>>> into a function like > >>>>>>>>>>> clean_call_to_interpreted_stub(is_alive, csc)? > >>>>>>>>>> > >>>>>>>>>> I folded it into the function > >>>>>>>>>> clean_call_to_interpreter_stub(..). > >>>>>>>>>> > >>>>>>>>>> New webrev: > >>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ > >>>>>>>>>> > >>>>>>>>>> Thanks, > >>>>>>>>>> Tobias > >>>>>>>>>> > >>>>>>>>>>> Thanks, > >>>>>>>>>>> Coleen > >>>>>>>>>>> > >>>>>>>>>>>> So before the permgen removal embedded method* were oops and > >>>>>>>>>>>> they > >>>>>>>>>>>> were processed in relocInfo::oop_type loop. > >>>>>>>>>>>> > >>>>>>>>>>>> May be instead of specializing opt_virtual_call_type and > >>>>>>>>>>>> static_call_type call site you can simple add a loop for > >>>>>>>>>>>> relocInfo::metadata_type (similar to oop_type loop)? 
> >>>>>>>>>>>> > >>>>>>>>>>>> Thanks, > >>>>>>>>>>>> Vladimir > >>>>>>>>>>>> > >>>>>>>>>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: > >>>>>>>>>>>>> Hi, > >>>>>>>>>>>>> > >>>>>>>>>>>>> please review the following patch for JDK-8029443. > >>>>>>>>>>>>> > >>>>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 > >>>>>>>>>>>>> Webrev: > >>>>>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ > >>>>>>>>>>>>> > >>>>>>>>>>>>> *Problem* > >>>>>>>>>>>>> After the tracing/marking phase of GC, > >>>>>>>>>>>>> nmethod::do_unloading(..) > >>>>>>>>>>>>> checks > >>>>>>>>>>>>> if a nmethod can be unloaded because it contains dead > >>>>>>>>>>>>> oops. If > >>>>>>>>>>>>> class > >>>>>>>>>>>>> unloading occurred we additionally clear all ICs where the > >>>>>>>>>>>>> cached > >>>>>>>>>>>>> metadata refers to an unloaded klass or method. If the > >>>>>>>>>>>>> nmethod > >>>>>>>>>>>>> is not > >>>>>>>>>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally > >>>>>>>>>>>>> checks if > >>>>>>>>>>>>> all > >>>>>>>>>>>>> metadata is alive. The assert in CheckClass::check_class > >>>>>>>>>>>>> fails > >>>>>>>>>>>>> because > >>>>>>>>>>>>> the nmethod contains Method* metadata corresponding to a dead > >>>>>>>>>>>>> Klass. > >>>>>>>>>>>>> The Method* belongs to a to-interpreter stub [1] of an > >>>>>>>>>>>>> optimized > >>>>>>>>>>>>> compiled IC. Normally we clear those stubs prior to > >>>>>>>>>>>>> verification to > >>>>>>>>>>>>> avoid dangling references to Method* [2], but only if the > >>>>>>>>>>>>> stub > >>>>>>>>>>>>> is not in > >>>>>>>>>>>>> use, i.e. if the IC is not in to-interpreted mode. In this > >>>>>>>>>>>>> case the > >>>>>>>>>>>>> to-interpreter stub may be executed and hand a stale Method* > >>>>>>>>>>>>> to the > >>>>>>>>>>>>> interpreter. > >>>>>>>>>>>>> > >>>>>>>>>>>>> *Solution > >>>>>>>>>>>>> *The implementation of nmethod::do_unloading(..) 
is > >>>>>>>>>>>>> changed to > >>>>>>>>>>>>> clean > >>>>>>>>>>>>> compiled ICs and compiled static calls if they call into a > >>>>>>>>>>>>> to-interpreter stub that references dead Method* metadata. > >>>>>>>>>>>>> > >>>>>>>>>>>>> The patch was affected by the G1 class unloading changes > >>>>>>>>>>>>> (JDK-8048248) > >>>>>>>>>>>>> because the method nmethod::do_unloading_parallel(..) was > >>>>>>>>>>>>> added. I > >>>>>>>>>>>>> adapted the implementation as well. > >>>>>>>>>>>>> * > >>>>>>>>>>>>> Testing > >>>>>>>>>>>>> *Failing test (runThese) > >>>>>>>>>>>>> JPRT > >>>>>>>>>>>>> > >>>>>>>>>>>>> Thanks, > >>>>>>>>>>>>> Tobias > >>>>>>>>>>>>> > >>>>>>>>>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) > >>>>>>>>>>>>> [2] see nmethod::verify_metadata_loaders(..), > >>>>>>>>>>>>> static_stub_reloc()->clear_inline_cache() clears the stub From zoltan.majo at oracle.com Fri Jul 25 15:10:11 2014 From: zoltan.majo at oracle.com (Zoltán Majó) Date: Fri, 25 Jul 2014 17:10:11 +0200 Subject: support for Solaris SPARC < 11 Message-ID: <53D27353.6090201@oracle.com> Hi, I'm currently working on JBS bug JDK-8043913. Problem: remove legacy code in SPARC's VM_Version::platform_features URL: https://bugs.openjdk.java.net/browse/JDK-8043913 Do you, or anyone you are aware of, still need to run or support Solaris SPARC < 11? I would need this information to fix the bug. Thank you and best regards, Zoltan From ludwig.mark at siemens.com Fri Jul 25 15:19:17 2014 From: ludwig.mark at siemens.com (Ludwig, Mark) Date: Fri, 25 Jul 2014 15:19:17 +0000 Subject: support for Solaris SPARC < 11 In-Reply-To: <53D27353.6090201@oracle.com> References: <53D27353.6090201@oracle.com> Message-ID: The Solaris 10 end-of-support date looks like 2018-2021, which certainly is after the JDK 9 expected release date, isn't it? Therefore, I would think Solaris 10 should be supported by JDK 9.
Thanks, Mark Ludwig -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Zoltán Majó Sent: Friday, July 25, 2014 10:10 AM To: jdk9-dev at openjdk.java.net; hotspot-dev at openjdk.java.net Subject: support for Solaris SPARC < 11 Hi, I'm currently working on JBS bug JDK-8043913. Problem: remove legacy code in SPARC's VM_Version::platform_features URL: https://bugs.openjdk.java.net/browse/JDK-8043913 Do you, or anyone you are aware of, still need to run or support Solaris SPARC < 11? I would need this information to fix the bug. Thank you and best regards, Zoltan From daniel.daugherty at oracle.com Fri Jul 25 15:46:03 2014 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Fri, 25 Jul 2014 09:46:03 -0600 Subject: support for Solaris SPARC < 11 In-Reply-To: References: <53D27353.6090201@oracle.com> Message-ID: <53D27BBB.7060702@oracle.com> The Release Engineering build platform for Solaris is Solaris 11u1. Traditionally that means that code built with that platform is only supported on the official build platform and newer. This is why we hung onto Solaris 10u6 for so long as the official build platform for JDK8... I would say that Solaris 10 is not a supported platform in JDK9, but I'm not an official voice at all... :-) Dan On 7/25/14 9:19 AM, Ludwig, Mark wrote: > The Solaris 10 end-of-support date looks like 2018-2021, which certainly is after the JDK 9 expected release date, isn't it? Therefore, I would think Solaris 10 should be supported by JDK 9. > > Thanks, > Mark Ludwig > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Zoltán Majó > Sent: Friday, July 25, 2014 10:10 AM > To: jdk9-dev at openjdk.java.net; hotspot-dev at openjdk.java.net > Subject: support for Solaris SPARC < 11 > > Hi, > > > I'm currently working on JBS bug JDK-8043913.
> > Problem: remove legacy code in SPARC's VM_Version::platform_features > URL: https://bugs.openjdk.java.net/browse/JDK-8043913 > > Do you or are you aware of anyone who still needs need to run or support > Solaris SPARC < 11? I would need this information to fix the bug. > > Thank you and best regards, > > > Zoltan > From ludwig.mark at siemens.com Fri Jul 25 16:08:24 2014 From: ludwig.mark at siemens.com (Ludwig, Mark) Date: Fri, 25 Jul 2014 16:08:24 +0000 Subject: support for Solaris SPARC < 11 In-Reply-To: <53D27BBB.7060702@oracle.com> References: <53D27353.6090201@oracle.com> <53D27BBB.7060702@oracle.com> Message-ID: I'm just expressing the view as an independent software vendor. We have "enterprise" customers using web application servers (running Java code we produce) that have shorter end-of-support plans. To stay on supported software, they get forced into upgrades. I think the normal end-of-support date for Java 7 is before Java 9 FCS (not sure). It is also true that the web app server vendors are slow to move; none have released Java 8 yet. Such customers don't like to be forced into OS upgrades when they already feel artificially forced into application server upgrades. The "hard spots" I foresee would be when security exposures are revealed that require upgrading Java, perhaps because the web app server requires it. Since we don't produce Java or any web app server, we can only be a messenger to our customers about what they need to do. For WebLogic, the pressure will land on Oracle either way in such cases. I'm just surprised that an OS with at least four more years of support life from Oracle would not be planned to be supported. Thanks, Mark Ludwig -----Original Message----- From: Daniel D. 
Daugherty [mailto:daniel.daugherty at oracle.com] Sent: Friday, July 25, 2014 10:46 AM To: Ludwig, Mark Cc: Zoltán Majó; jdk9-dev at openjdk.java.net; hotspot-dev at openjdk.java.net Subject: Re: support for Solaris SPARC < 11 The Release Engineering build platform for Solaris is Solaris 11u1. Traditionally that means that code built with that platform is only supported on the official build platform and newer. This is why we hung onto Solaris 10u6 for so long as the official build platform for JDK8... I would say that Solaris 10 is not a supported platform in JDK9, but I'm not an official voice at all... :-) Dan On 7/25/14 9:19 AM, Ludwig, Mark wrote: > The Solaris 10 end-of-support date looks like 2018-2021, which certainly is after the JDK 9 expected release date, isn't it? Therefore, I would think Solaris 10 should be supported by JDK 9. > > Thanks, > Mark Ludwig > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Zoltán Majó > Sent: Friday, July 25, 2014 10:10 AM > To: jdk9-dev at openjdk.java.net; hotspot-dev at openjdk.java.net > Subject: support for Solaris SPARC < 11 > > Hi, > > > I'm currently working on JBS bug JDK-8043913. > > Problem: remove legacy code in SPARC's VM_Version::platform_features > URL: https://bugs.openjdk.java.net/browse/JDK-8043913 > > Do you, or anyone you are aware of, still need to run or support > Solaris SPARC < 11? I would need this information to fix the bug.
> > Thank you and best regards, > > > Zoltan > From vladimir.kozlov at oracle.com Fri Jul 25 16:16:21 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 25 Jul 2014 09:16:21 -0700 Subject: RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le In-Reply-To: References: <4295855A5C1DE049A61835A1887419CC2CEDAA66@DEWDFEMB12A.global.corp.sap> <4295855A5C1DE049A61835A1887419CC2CEDBD44@DEWDFEMB12A.global.corp.sap> <4295855A5C1DE049A61835A1887419CC2CEE3B91@DEWDFEMB12A.global.corp.sap> Message-ID: <53D282D5.8000004@oracle.com> Seems good to me. I will push it into hs-comp. Thanks, Vladimir On 7/25/14 12:23 AM, Alexander Smundak wrote: > Official reviewers, please take a look. > On Jul 25, 2014 12:17 AM, "Lindenmaier, Goetz" > wrote: > >> HI Alexander, >> >> you please also need an official reviewer, >> I'm only 'committer', so my review only counts as a second one. >> >> Best regards, >> Goetz. >> >> -----Original Message----- >> From: Alexander Smundak [mailto:asmundak at google.com] >> Sent: Mittwoch, 23. Juli 2014 19:01 >> To: Lindenmaier, Goetz >> Cc: HotSpot Open Source Developers >> Subject: Re: RFR(M): 8050942 : PPC64: implement template interpreter for >> ppc64le >> >> Thanks. >> I need a sponsor, please. >> Sasha >> >> On Wed, Jul 23, 2014 at 12:05 AM, Lindenmaier, Goetz >> wrote: >>> Hi Sasha, >>> >>> we ran our nightly tests on big-endian with this change. They're all >> green. >>> reviewed. >>> >>> Best regards, >>> Goetz. >>> >>> >>> -----Original Message----- >>> From: Lindenmaier, Goetz >>> Sent: Freitag, 18. Juli 2014 10:13 >>> To: 'Alexander Smundak' >>> Cc: HotSpot Open Source Developers >>> Subject: RE: RFR(M): 8050942 : PPC64: implement template interpreter for >> ppc64le >>> >>> Hi Sasha, >>> >>> thanks, now it works. I just ran jvm98/javac. >>> Comprehensive tests will be executed tonight. >>> >>> Best regards, >>> Goetz. 
>>> >>> >>> >>> >>> >>> -----Original Message----- >>> From: Alexander Smundak [mailto:asmundak at google.com] >>> Sent: Freitag, 18. Juli 2014 02:58 >>> To: Lindenmaier, Goetz >>> Cc: HotSpot Open Source Developers >>> Subject: Re: RFR(M): 8050942 : PPC64: implement template interpreter for >> ppc64le >>> >>> On Thu, Jul 17, 2014 at 3:20 AM, Lindenmaier, Goetz >>> wrote: >>>> I tested your change. Unfortunately it breaks our port. You need to >> fix Unsigned to >>>> Signed: >>>> >>>> --- a/src/cpu/ppc/vm/templateTable_ppc_64.cpp Wed Jul 16 16:53:32 >> 2014 -0700 >>>> +++ b/src/cpu/ppc/vm/templateTable_ppc_64.cpp Thu Jul 17 12:14:18 >> 2014 +0200 >>>> @@ -1929,7 +1929,7 @@ >>>> // default case >>>> __ bind(Ldefault_case); >>>> >>>> - __ get_u4(Roffset, Rdef_offset_addr, 0, >> InterpreterMacroAssembler::Unsigned); >>>> + __ get_u4(Roffset, Rdef_offset_addr, 0, >> InterpreterMacroAssembler::Signed); >>>> if (ProfileInterpreter) { >>>> __ profile_switch_default(Rdef_offset_addr, Rcount/* scratch */); >>>> __ b(Lcontinue_execution); >>> Oops. Fixed. Which test was broken by this, BTW? >>> >>>> If you want to, you can move loading the bci in this bytecode behind >> the loop. >>> Done. >>> >>>> Could you please fix indentation of relocInfo::none in call_c? Should >>>> be aligned to call_c. >>> Done. >>> >>> The revised patch is at >>> http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.01/ >>> please take another look. 
>>> >>> Sasha >> From vladimir.kozlov at oracle.com Fri Jul 25 17:27:41 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 25 Jul 2014 10:27:41 -0700 Subject: [9] RFR(S): 8029443: 'assert(klass->is_loader_alive(_is_alive)) failed: must be alive' during VM_CollectForMetadataAllocation In-Reply-To: <53D2619E.8010109@oracle.com> References: <53C3C584.7070008@oracle.com> <53C4820C.5000300@oracle.com> <53C53B23.6090907@oracle.com> <53C62FCA.8020302@oracle.com> <53C639A2.3050202@oracle.com> <53C90740.40602@oracle.com> <53C937E0.7060304@oracle.com> <53C9614C.8080109@oracle.com> <53C962F3.3070405@oracle.com> <53CCD307.7040806@oracle.com> <53CD62F4.1020904@oracle.com> <53CE2D53.6040006@oracle.com> <53CEA79F.6030004@oracle.com> <53D2619E.8010109@oracle.com> Message-ID: <53D2938D.1070205@oracle.com> Tobias, you also need to check for general metadata and not just klassptr - is_metadataptr(). Unfortunately they not inherited from one-another in C2 types. Thanks, Vladimir On 7/25/14 6:54 AM, Tobias Hartmann wrote: > Mikael, Vladimir, thanks for the review. > > The problem is indeed caused by a missing check for a metadata pointer > in sparc.ad. > > Adding 'n->bottom_type()->isa_klassptr()' checks to immP_load() and > immP_no_oop_cheap() fixes the problem. The Klass pointer is then loaded > from the constant table (loadConP_load()) and a metadata relocation is > added by Compile::ConstantTable::emit(). > > I had a look at the Aurora chessboard and it looks like as if the bug > recently occured on x86_32 as well. I was not yet able to reproduce it > but will try again next week. > > Thanks, > Tobias > > On 22.07.2014 20:04, Vladimir Kozlov wrote: >> I agree with Mikael, the case without compressed oops is incorrect. >> The problem is immP_load() and immP_no_oop_cheap() operands miss the >> check for metadata pointer, they only check for oop. 
For class loading >> loadConP_load() should be used instead of loadConP_no_oop_cheap() and >> it should check relocation type and do what loadConP_set() does. >> >> >> X64 seems fine. X86_64.ad use $$$emit32$src$$constant; in such case >> (load_immP31) which is expanded to the same code as load_immP: >> >> if ( opnd_array(1)->constant_reloc() != relocInfo::none ) { >> emit_d32_reloc(cbuf, opnd_array(1)->constant(), >> opnd_array(1)->constant_reloc(), 0); >> >> Thanks, >> Vladimir >> >> On 7/22/14 2:22 AM, Tobias Hartmann wrote: >>> On 21.07.2014 20:59, Vladimir Kozlov wrote: >>>> On 7/21/14 1:44 AM, Tobias Hartmann wrote: >>>>> Vladimir, Coleen, thanks for the reviews! >>>>> >>>>> On 18.07.2014 20:09, Vladimir Kozlov wrote: >>>>>> On 7/18/14 11:02 AM, Coleen Phillimore wrote: >>>>>>> >>>>>>> On 7/18/14, 11:06 AM, Vladimir Kozlov wrote: >>>>>>>> On 7/18/14 4:38 AM, Tobias Hartmann wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I spend some more days and was finally able to implement a test >>>>>>>>> that >>>>>>>>> deterministically triggers the bug: >>>>>>>> >>>>>>>> Why do you need to switch off compressed oops? Do you need to >>>>>>>> switch >>>>>>>> off compressed klass pointers too (UseCompressedClassPointers)? >>>>>>> >>>>>>> CompressedOops when off turns off CompressedClassPointers. >>>>>> >>>>>> You are right, I forgot that. Still the question is why switch off >>>>>> coop? >>>>> >>>>> I'm only able to reproduce the bug without compressed oops. The >>>>> original >>>>> bug also only reproduces with -XX:-UseCompressedOops. I tried to >>>>> figure >>>>> out why (on Sparc): >>>>> >>>>> With compressed oops enabled, Method* metadata referencing >>>>> 'WorkerClass' >>>>> is added to 'doWork' in MacroAssembler::set_narrow_klass(..). In >>>>> CodeBuffer::finalize_oop_references(..) the metadata is processed >>>>> and an >>>>> oop to the class loader 'URLClassLoader' is added. 
This oop leads >>>>> to the >>>>> unloading of 'doWork', hence the verification code is never executed. >>>>> >>>>> I'm not sure what set_narrow_klass(..) is used for in this case. I >>>>> assume it stores a 'WorkerClass' Klass* in a register as part of an >>>>> optimization? Because 'doWork' potentially works on any class. >>>>> Apparently this optimization is not performed without compressed oops. >>>> >>>> I would suggest to compare 'doWork' assembler >>>> (-XX:CompileCommand=print,TestMethodUnloading::doWork) with coop and >>>> without it. Usually loaded into register class is used for klass >>>> compare do guard inlining code. Or to initialize new object. >>>> >>>> I don't see loading (constructing) uncompressed (whole) klass pointer >>>> from constant in sparc.ad. It could be the reason for different >>>> behavior. It could be loaded from constants section. But constants >>>> section should have metadata relocation info in such case. >>> >>> I did as you suggested and found the following: >>> >>> During the profiling phase the class given to 'doWork' always is >>> 'WorkerClass'. The C2 compiler therefore optimizes the compiled version >>> to expect a 'WorkerClass'. The branch that instantiates a new object is >>> guarded by an uncommon trap (class_check). The difference between the >>> two versions (with and without compressed oops) is the loading of the >>> 'WorkerClass' Klass to check if the given class is equal: >>> >>> With compressed oops: >>> SET narrowklass: precise klass WorkerClass: >>> 0x00000001004a0d40:Constant:exact *,R_L1 ! compressed klass ptr >>> CWBne R_L2,R_L1,B8 ! compressed ptr P=0.000001 C=-1.000000 >>> >>> Without: >>> SET precise klass WorkerClass: 0x00000001004aeab0:Constant:exact >>> *,R_L1 ! non-oop ptr >>> CXBpne R_L2,R_L1,B8 ! 
ptr P=0.000001 C=-1.000000 >>> >>> R_L2: class given as parameter >>> B8: location of uncommon trap >>> >>> In the first case, the Klass is loaded by a 'loadConNKlass' instruction >>> that calls MacroAssembler::set_narrow_klass(..) which then creates a >>> metadata_Relocation for the 'WorkerClass'. This metada_Relocation is >>> processed by CodeBuffer::finalize_oop_references(..) and an oop to >>> 'WorkerClass' is added. This oop causes the unloading of the method. >>> >>> In the second case, the Klass is loaded by a 'loadConP_no_oop_cheap' >>> instruction that does not create a metadata_Relocation. >>> >>> I don't understand why the metadata_Relocation in the first case is >>> needed? As the test shows it is better to only unload the method if we >>> hit the uncommon trap because we could still use other (potentially >>> complex) branches of the method. >>> >>> Thanks, >>> Tobias >>> >>>> >>>> thanks, >>>> Vladimir >>>> >>>>> >>>>> Best, >>>>> Tobias >>>>> >>>>>> >>>>>> Vladimir >>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.02/ >>>>>>>> >>>>>>>> Very nice! >>>>>>> >>>>>>> Yes, I agree. Impressive. >>>>>>> >>>>>>> The refactoring in nmethod.cpp looks good to me. I have no further >>>>>>> comments. >>>>>>> Thanks! >>>>>>> Coleen >>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> @Vladimir: The test shows why we should only clean the ICs but not >>>>>>>>> unload the nmethod if possible. The method ' doWork' >>>>>>>>> is still valid after WorkerClass was unloaded and depending on the >>>>>>>>> complexity of the method we should avoid unloading it. >>>>>>>> >>>>>>>> Make sense. >>>>>>>> >>>>>>>>> >>>>>>>>> On Sparc my patch fixes the bug and leads to the nmethod not being >>>>>>>>> unloaded. The compiled version is therefore used even >>>>>>>>> after WorkerClass is unloaded. >>>>>>>>> >>>>>>>>> On x86 the nmethod is unloaded anyway because of a dead oop. 
>>>>>>>>> This is >>>>>>>>> probably due to a slightly different implementation >>>>>>>>> of the ICs. I'll have a closer look to see if we can improve that. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Vladimir >>>>>>>> >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Tobias >>>>>>>>> >>>>>>>>> On 16.07.2014 10:36, Tobias Hartmann wrote: >>>>>>>>>> Sorry, forgot to answer this question: >>>>>>>>>>> Were you able to create a small test case for it that would be >>>>>>>>>>> useful to add? >>>>>>>>>> Unfortunately I was not able to create a test. The bug only >>>>>>>>>> reproduces on a particular system with a > 30 minute run >>>>>>>>>> of runThese. >>>>>>>>>> >>>>>>>>>> Best, >>>>>>>>>> Tobias >>>>>>>>>> >>>>>>>>>> On 16.07.2014 09:54, Tobias Hartmann wrote: >>>>>>>>>>> Hi Coleen, >>>>>>>>>>> >>>>>>>>>>> thanks for the review. >>>>>>>>>>>> *+ if (csc->is_call_to_interpreted() && >>>>>>>>>>>> stub_contains_dead_metadata(is_alive, csc->destination())) {* >>>>>>>>>>>> *+ csc->set_to_clean();* >>>>>>>>>>>> *+ }* >>>>>>>>>>>> >>>>>>>>>>>> This appears in each case. Can you fold it and the new >>>>>>>>>>>> function >>>>>>>>>>>> into a function like >>>>>>>>>>>> clean_call_to_interpreted_stub(is_alive, csc)? >>>>>>>>>>> >>>>>>>>>>> I folded it into the function >>>>>>>>>>> clean_call_to_interpreter_stub(..). >>>>>>>>>>> >>>>>>>>>>> New webrev: >>>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.01/ >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Tobias >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> Coleen >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> So before the permgen removal embedded method* were oops and >>>>>>>>>>>>> they >>>>>>>>>>>>> were processed in relocInfo::oop_type loop. >>>>>>>>>>>>> >>>>>>>>>>>>> May be instead of specializing opt_virtual_call_type and >>>>>>>>>>>>> static_call_type call site you can simple add a loop for >>>>>>>>>>>>> relocInfo::metadata_type (similar to oop_type loop)? 
>>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> Vladimir >>>>>>>>>>>>> >>>>>>>>>>>>> On 7/14/14 4:56 AM, Tobias Hartmann wrote: >>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>> >>>>>>>>>>>>>> please review the following patch for JDK-8029443. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8029443 >>>>>>>>>>>>>> Webrev: >>>>>>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8029443/webrev.00/ >>>>>>>>>>>>>> >>>>>>>>>>>>>> *Problem* >>>>>>>>>>>>>> After the tracing/marking phase of GC, >>>>>>>>>>>>>> nmethod::do_unloading(..) >>>>>>>>>>>>>> checks >>>>>>>>>>>>>> if a nmethod can be unloaded because it contains dead >>>>>>>>>>>>>> oops. If >>>>>>>>>>>>>> class >>>>>>>>>>>>>> unloading occurred we additionally clear all ICs where the >>>>>>>>>>>>>> cached >>>>>>>>>>>>>> metadata refers to an unloaded klass or method. If the >>>>>>>>>>>>>> nmethod >>>>>>>>>>>>>> is not >>>>>>>>>>>>>> unloaded, nmethod::verify_metadata_loaders(..) finally >>>>>>>>>>>>>> checks if >>>>>>>>>>>>>> all >>>>>>>>>>>>>> metadata is alive. The assert in CheckClass::check_class >>>>>>>>>>>>>> fails >>>>>>>>>>>>>> because >>>>>>>>>>>>>> the nmethod contains Method* metadata corresponding to a dead >>>>>>>>>>>>>> Klass. >>>>>>>>>>>>>> The Method* belongs to a to-interpreter stub [1] of an >>>>>>>>>>>>>> optimized >>>>>>>>>>>>>> compiled IC. Normally we clear those stubs prior to >>>>>>>>>>>>>> verification to >>>>>>>>>>>>>> avoid dangling references to Method* [2], but only if the >>>>>>>>>>>>>> stub >>>>>>>>>>>>>> is not in >>>>>>>>>>>>>> use, i.e. if the IC is not in to-interpreted mode. In this >>>>>>>>>>>>>> case the >>>>>>>>>>>>>> to-interpreter stub may be executed and hand a stale Method* >>>>>>>>>>>>>> to the >>>>>>>>>>>>>> interpreter. >>>>>>>>>>>>>> >>>>>>>>>>>>>> *Solution >>>>>>>>>>>>>> *The implementation of nmethod::do_unloading(..) 
is >>>>>>>>>>>>>> changed to >>>>>>>>>>>>>> clean >>>>>>>>>>>>>> compiled ICs and compiled static calls if they call into a >>>>>>>>>>>>>> to-interpreter stub that references dead Method* metadata. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The patch was affected by the G1 class unloading changes >>>>>>>>>>>>>> (JDK-8048248) >>>>>>>>>>>>>> because the method nmethod::do_unloading_parallel(..) was >>>>>>>>>>>>>> added. I >>>>>>>>>>>>>> adapted the implementation as well. >>>>>>>>>>>>>> * >>>>>>>>>>>>>> Testing >>>>>>>>>>>>>> *Failing test (runThese) >>>>>>>>>>>>>> JPRT >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>> Tobias >>>>>>>>>>>>>> >>>>>>>>>>>>>> [1] see CompiledStaticCall::emit_to_interp_stub(..) >>>>>>>>>>>>>> [2] see nmethod::verify_metadata_loaders(..), >>>>>>>>>>>>>> static_stub_reloc()->clear_inline_cache() clears the stub >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>> >>>>> >>> > From andrey.x.zakharov at oracle.com Mon Jul 28 14:51:58 2014 From: andrey.x.zakharov at oracle.com (Andrey Zakharov) Date: Mon, 28 Jul 2014 18:51:58 +0400 Subject: RFR: 8011397: JTREG needs to copy additional WhiteBox class file to JTwork/scratch/sun/hotspot In-Reply-To: <53C65E4A.4020401@oracle.com> References: <536B7CF0.6010508@oracle.com> <2443586.qRToXKmNqX@mgerdin03> <53C5482A.9090001@oracle.com> <12779611.jBGqJ13gfp@ehelin-laptop> <53C65E4A.4020401@oracle.com> Message-ID: <53D6638E.4070501@oracle.com> Hi, all. I've prepared rechecked and verified fix for subject. webrev: http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.03/ Bug: https://bugs.openjdk.java.net/browse/JDK-8011397 I've grep'ed workspace by sun.hotspot.WhiteBox and fixed every file. Also I've updated copyright. Testing was done by aurora batch 538304.ute.hs_jtreg.accept.full It looks good. Thanks. On 16.07.2014 15:13, Andrey Zakharov wrote: > > On 16.07.2014 14:39, Erik Helin wrote: >> On Tuesday 15 July 2014 19:26:34 PM Andrey Zakharov wrote: >>> Hi, Erik, Bengt. 
Could you, please, review this too. >> Andrey, why did you only update a couple of tests to also copy >> sun.hotspot.WhiteBox$WhiteBoxPermission? You updated 14 tests, there are >> still 116 tests using sun.hotspot.WhiteBox. >> >> Why doesn't these 116 tests have to be updated? >> >> Thanks, >> Erik > Thanks Erik. Actually this first one patch 8011397.WhiteBoxPermission > > is correct. I will rework it and upload to webrev space. > > >>> Thanks. >>> >>> On 15.07.2014 17:58, Mikael Gerdin wrote: >>>> Andrey, >>>> >>>> On Monday 07 July 2014 20.48.21 Andrey Zakharov wrote: >>>>> Hi ,all >>>>> Mikael, can you please review it. >>>> Sorry, I was on vacation last week. >>>> >>>>> webrev: >>>>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ >>>> Looks ok for now. We should consider revisiting this by either switching >>>> to >>>> @run main/bootclasspath >>>> or >>>> deleting the WhiteboxPermission nested class and using some other way for >>>> permission checks (if they are at all needed). >>>> >>>> /Mikael >>>> >>>>> Thanks. >>>>> >>>>> On 25.06.2014 19:08, Andrey Zakharov wrote: >>>>>> Hi, all >>>>>> So in progress of previous email - >>>>>> webrev: >>>>>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.01/ >>>>>> >>>>>> Thanks. >>>>>> >>>>>> On 16.06.2014 19:57, Andrey Zakharov wrote: >>>>>>> Hi, all >>>>>>> So issue is that when tests with WhiteBox API has been invoked with >>>>>>> -Xverify:all it fails with Exception java.lang.NoClassDefFoundError: >>>>>>> sun/hotspot/WhiteBox$WhiteBoxPermission >>>>>>> Solutions that are observed: >>>>>>> 1. Copy WhiteBoxPermission with WhiteBox. But >>>>>>> >>>>>>>>> Perhaps this is a good time to get rid of ClassFileInstaller >>>>>>> altogether? >>>>>>> >>>>>>> 2. Using bootclasspath to hook pre-built whitebox (due @library >>>>>>> /testlibrary/whitebox) . Some tests has @run main/othervm, some uses >>>>>>> ProcessBuilder. 
>>>>>>> >>>>>>> - main/othervm/bootclasspath adds ${test.src} and >>>>>>> >>>>>>> ${test.classes}to options. >>>>>>> >>>>>>> - With ProcessBuilder we can just add ${test.classes} >>>>>>> >>>>>>> Question here is, can it broke some tests ? While testing this, I >>>>>>> found onlyhttps://bugs.openjdk.java.net/browse/JDK-8046231, others >>>>>>> looks fine. >>>>>>> >>>>>>> 3. Make ClassFileInstaller deal with inner classes like that: >>>>>>> diff -r 6ed24aedeef0 -r c01651363ba8 >>>>>>> test/testlibrary/ClassFileInstaller.java >>>>>>> --- a/test/testlibrary/ClassFileInstaller.java Thu Jun 05 19:02:56 >>>>>>> 2014 +0400 >>>>>>> +++ b/test/testlibrary/ClassFileInstaller.java Fri Jun 06 18:18:11 >>>>>>> 2014 +0400 >>>>>>> @@ -50,6 +50,16 @@ >>>>>>> >>>>>>> } >>>>>>> // Create the class file >>>>>>> Files.copy(is, p, StandardCopyOption.REPLACE_EXISTING); >>>>>>> >>>>>>> + >>>>>>> + for (Class cls : >>>>>>> Class.forName(arg).getDeclaredClasses()) { >>>>>>> + //if (!Modifier.isStatic(cls.getModifiers())) { >>>>>>> + String pathNameSub = >>>>>>> cls.getCanonicalName().replace('.', '/').concat(".class"); >>>>>>> + Path pathSub = Paths.get(pathNameSub); >>>>>>> + InputStream streamSub = >>>>>>> cl.getResourceAsStream(pathNameSub); >>>>>>> + Files.copy(streamSub, pathSub, >>>>>>> StandardCopyOption.REPLACE_EXISTING); >>>>>>> + //} >>>>>>> + } >>>>>>> + >>>>>>> >>>>>>> } >>>>>>> >>>>>>> } >>>>>>> >>>>>>> } >>>>>>> >>>>>>> Works fine for ordinary classes, but fails for WhiteBox due >>>>>>> Class.forName initiate Class. WhiteBox has "static" section, and >>>>>>> initialization fails as it cannot bind to native methods >>>>>>> "registerNatives" and so on. >>>>>>> >>>>>>> >>>>>>> So, lets return to first one option? Just add everywhere >>>>>>> >>>>>>> * @run main ClassFileInstaller sun.hotspot.WhiteBox >>>>>>> >>>>>>> + * @run main ClassFileInstaller >>>>>>> sun.hotspot.WhiteBox$WhiteBoxPermission >>>>>>> >>>>>>> Thanks. 
>>>>>>> >>>>>>> On 10.06.2014 19:43, Igor Ignatyev wrote: >>>>>>>> Andrey, >>>>>>>> >>>>>>>> I don't like this idea, since it completely changes the tests. >>>>>>>> 'run/othervm/bootclasspath' adds all paths from CP to BCP, so the >>>>>>>> tests whose main idea was testing WB methods themselves (sanity, >>>>>>>> compiler/whitebox, ...) don't check that it's possible to use WB >>>>>>>> when the application isn't in BCP. >>>>>>>> >>>>>>>> Igor >>>>>>>> >>>>>>>> On 06/09/2014 06:59 PM, Andrey Zakharov wrote: >>>>>>>>> Hi, everybody >>>>>>>>> I have tested my changes on major platforms and found one bug, filed: >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046231 >>>>>>>>> Also, i did another try to make ClassFileInstaller to copy all inner >>>>>>>>> classes within parent, but this fails for WhiteBox due its static >>>>>>>>> "registerNatives" dependency. >>>>>>>>> >>>>>>>>> Please, review suggested changes: >>>>>>>>> - replace ClassFileInstaller and run/othervm with >>>>>>>>> >>>>>>>>> "run/othervm/bootclasspath". >>>>>>>>> >>>>>>>>> bootclasspath parameter for othervm adds-Xbootclasspath/a: >>>>>>>>> option with ${test.src} and ${test.classes}according to >>>>>>>>> http://hg.openjdk.java.net/code-tools/jtreg/file/31003a1c46d9/src/sha >>>>>>>>> re >>>>>>>>> /classes/com/sun/javatest/regtest/MainAction.java. >>>>>>>>> >>>>>>>>> Is this suitable for our needs - give to test compiled WhiteBox? >>>>>>>>> >>>>>>>>> - replace explicit -Xbootclasspath option values (".") in >>>>>>>>> >>>>>>>>> ProcessBuilder invocations to ${test.classes} where WhiteBox has been >>>>>>>>> compiled. >>>>>>>>> >>>>>>>>> Webrev: >>>>>>>>> http://cr.openjdk.java.net/~fzhinkin/azakharov/8011397/webrev.00/ >>>>>>>>> Bug:https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>>>>> Thanks. >>>>>>>>> >>>>>>>>> On 23.05.2014 15:40, Andrey Zakharov wrote: >>>>>>>>>> On 22.05.2014 12:47, Igor Ignatyev wrote: >>>>>>>>>>> Andrey, >>>>>>>>>>> >>>>>>>>>>> 1. 
You changed dozen of tests, have you tested your changes? >>>>>>>>>> Locally, aurora on the way. >>>>>>>>>> >>>>>>>>>>> 2. Your changes of year in copyright is wrong. it has to be >>>>>>>>>>> $first_year, [$last_year, ], see Mark's email[1] for details. >>>>>>>>>>> >>>>>>>>>>> [1] >>>>>>>>>>> http://mail.openjdk.java.net/pipermail/jdk7-dev/2010-May/001321.htm >>>>>>>>>>> l >>>>>>>>>> Thanks, fixed. will be uploaded soon. >>>>>>>>>> >>>>>>>>>>> Igor >>>>>>>>>>> >>>>>>>>>>> On 05/21/2014 07:37 PM, Andrey Zakharov wrote: >>>>>>>>>>>> On 13.05.2014 14:43, Andrey Zakharov wrote: >>>>>>>>>>>>> Hi >>>>>>>>>>>>> So here is trivial patch - >>>>>>>>>>>>> removing ClassFileInstaller sun.hotspot.WhiteBox and adding >>>>>>>>>>>>> main/othervm/bootclasspath >>>>>>>>>>>>> where this needed >>>>>>>>>>>>> >>>>>>>>>>>>> Also, some tests are modified as >>>>>>>>>>>>> - "-Xbootclasspath/a:.", >>>>>>>>>>>>> + "-Xbootclasspath/a:" + >>>>>>>>>>>>> System.getProperty("test.classes"), >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks. >>>>>>>>>>>> webrev:http://cr.openjdk.java.net/~jwilhelm/8011397/webrev.02/ >>>>>>>>>>>> bug:https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>>>>>>>> Thanks. >>>>>>>>>>>> >>>>>>>>>>>>> On 09.05.2014 12:13, Mikael Gerdin wrote: >>>>>>>>>>>>>> On Thursday 08 May 2014 19.28.13 Igor Ignatyev wrote: >>>>>>>>>>>>>>> // cc'ing hotspot-dev instaed of compiler, runtime and gc >>>>>>>>>>>>>>> lists. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On 05/08/2014 07:09 PM, Filipp Zhinkin wrote: >>>>>>>>>>>>>>>> Andrey, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I've CC'ed compiler and runtime mailing list, because you're >>>>>>>>>>>>>>>> changes >>>>>>>>>>>>>>>> affect test for other components as too. 
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I don't like your solution (but I'm not a reviewer, so treat >>>>>>>>>>>>>>>> my >>>>>>>>>>>>>>>> words >>>>>>>>>>>>>>>> just as suggestion), >>>>>>>>>>>>>>>> because we'll have to write more meta information for each >>>>>>>>>>>>>>>> test >>>>>>>>>>>>>>>> and it >>>>>>>>>>>>>>>> is very easy to >>>>>>>>>>>>>>>> forget to install WhiteBoxPermission if you don't test your >>>>>>>>>>>>>>>> test >>>>>>>>>>>>>>>> with >>>>>>>>>>>>>>>> some security manager. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> From my point of view, it will be better to extend >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> ClassFileInstaller >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> so it will copy not only >>>>>>>>>>>>>>>> a class whose name was passed as an arguments, but also all >>>>>>>>>>>>>>>> inner >>>>>>>>>>>>>>>> classes of that class. >>>>>>>>>>>>>>>> And if someone want copy only specified class without inner >>>>>>>>>>>>>>>> classes, >>>>>>>>>>>>>>>> then some option >>>>>>>>>>>>>>>> could be added to ClassFileInstaller to force such behaviour. >>>>>>>>>>>>>> Perhaps this is a good time to get rid of ClassFileInstaller >>>>>>>>>>>>>> altogether? >>>>>>>>>>>>>> >>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8009117 >>>>>>>>>>>>>> >>>>>>>>>>>>>> The reason for its existence is that the WhiteBox class needs >>>>>>>>>>>>>> to be >>>>>>>>>>>>>> on the >>>>>>>>>>>>>> boot class path. >>>>>>>>>>>>>> If we can live with having all the test's classes on the boot >>>>>>>>>>>>>> class >>>>>>>>>>>>>> path then >>>>>>>>>>>>>> we could use the /bootclasspath option in jtreg as stated in >>>>>>>>>>>>>> the RFE. >>>>>>>>>>>>>> >>>>>>>>>>>>>> /Mikael >>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>> Filipp. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On 05/08/2014 04:47 PM, Andrey Zakharov wrote: >>>>>>>>>>>>>>>>> Hi! 
>>>>>>>>>>>>>>>>> Suggesting patch with fixes for >>>>>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8011397 >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> webrev: >>>>>>>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20275/8011397 >>>>>>>>>>>>>>>>> .t >>>>>>>>>>>>>>>>> gz >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> patch: >>>>>>>>>>>>>>>>> https://bugs.openjdk.java.net/secure/attachment/20274/8011397 >>>>>>>>>>>>>>>>> .W >>>>>>>>>>>>>>>>> hiteBoxPer >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> mission >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Thanks. > From jesper.wilhelmsson at oracle.com Mon Jul 28 18:49:47 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Mon, 28 Jul 2014 20:49:47 +0200 Subject: RFR(s): Backport of 8046715 - Add a way to verify an extended set of command line options Message-ID: <53D69B4B.4080708@oracle.com> Hi, Backport of 8046715 (Add a way to verify an extended set of command line options). The patch applied with an offset, so a new review is required I believe. Webrev: http://cr.openjdk.java.net/~jwilhelm/8046715/webrev.jdk8/ Bug: https://bugs.openjdk.java.net/browse/JDK-8046715 JDK 9 change: http://hg.openjdk.java.net/jdk9/hs-gc/hotspot/rev/c0b3ddf06856 Thanks! 
/Jesper From jesper.wilhelmsson at oracle.com Mon Jul 28 21:56:39 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Mon, 28 Jul 2014 23:56:39 +0200 Subject: PATCH: using mixed types in MIN2/MAX2 functions In-Reply-To: References: <20140612110513.d59301a5c21f3000aa4973d1@danny.cz> <5399F826.4060409@oracle.com> <539A0454.5030906@oracle.com> <20140613095511.5fae7c4b483bb65f073e5628@danny.cz> <539AEA4D.8020204@oracle.com> <20140613145531.559f6d943ef097550580a1f6@danny.cz> <539E8F41.6070001@oracle.com> <20140616091702.28b895ce918bdf4c58d2506f@danny.cz> <539ED1EE.30107@oracle.com> <539EFCE5.9020108@oracle.com> <20140616162130.ac9bec274c625241bb7fd18d@danny.cz> <539F0CA0.3010309@oracle.com> <20140616182920.baccf0cc83debbe522189688@danny.cz> <539F22E0.2080202@oracle.com> <20140618085537.0cc3e3e85856ae67c491c88e@danny.cz> <53A18A50.50001@oracle.com> <53A2DDD8.4010605@oracle.com> <20140619150654.cf6c68f626234f01295fec61@danny.cz> Message-ID: <53D6C717.3040707@oracle.com> Hi Dan, Kim, and others, Trying to get this discussion going again. There isn't too much of non-trivial template usage in HotSpot today and I'm not sure I think it's worth complicating the code to avoid a few type casts. How do other people feel about the non-trivial template usage? /Jesper Kim Barrett skrev 20/6/14 20:03: > On Jun 19, 2014, at 9:06 AM, Dan Hor?k wrote: >> >> On Thu, 19 Jun 2014 14:55:52 +0200 >> Bengt Rutisson wrote: >>> Can you explain more in detail why it is not possible to specialize >>> the MIN2 and MAX2 functions? You are probably correct, because when I >>> read the comment in the code it says: >>> >>> >>> // It is necessary to use templates here. Having normal overloaded >>> // functions does not work because it is necessary to provide both 32- >>> // and 64-bit overloaded functions, which does not work, and having >>> // explicitly-typed versions of these routines (i.e., MAX2I, MAX2L) >>> // will be even more error-prone than macros. 
>>> template inline T MAX2(T a, T b) { return (a > >>> b) ? a : b; } >>> >>> >>> This kind of says what you also said. That it is not possible, but it >>> does not really explain why. >>> >>> Can you explain why we can have definitions like: >>> >>> inline uint MAX2(uint a, size_t b) >>> inline uint MAX2(size_t a, uint b) >> >> if I remember correctly from my previous experience, these 2 definition >> conflict with the existing >> inline uint MAX2(uint a, uint b) >> on platforms where size_t == uint. But I might be wrong, the easiest we >> can do, is to try add them. Or we could add the new definitions only >> for s390 with #if defined(__s390__) && ! defined(__s390x__). Or maybe >> there is another way to add them only when size_t != uint. A C++ expert >> is required :-) > > This isn?t too difficult. The tests at the end should of course be turned into real test cases. > > Requires > > - > - std::numeric_limits::is_specialized > - for specialized types > - std::numeric_limits::is_integer > - std::numeric_limits::is_signed > - partial template specialization > - SFINAE for function return types > > I have no idea whether all of our toolchains support all that. I?ve heard of some > strange defects around numeric_limits with some (older) toolchains. For example, > boost provides > BOOST_NO_LIMIT - does not provide > BOOST_NO_LIMITS_COMPILE_TIME_CONSTANTS > std::numeric_limits::is_signed and similar are not available at compile-time. > Neither of those seem to be applicable to any toolchain relevant to jdk code though. > > I also don't know how folks feel about non-trivial template usage. > > ??????? > > #include > > // MIN2/MAX2(a, b) compare a and b and return the lesser/greater. > // > // a and b must either > // > // - be of the same type, in which case the result is of that type, or > // > // - both be of integer types with the same signed-ness, in which case the > // result is the larger of those two types. 
> > template <typename T, typename U, bool U_Larger = (sizeof(T) < sizeof(U))>
> > struct MINMAX2_result_differ_choose { typedef U type; };
> >
> > template <typename T, typename U>
> > struct MINMAX2_result_differ_choose<T, U, false> { typedef T type; };
> >
> > template <typename T, typename U,
> >           bool T_Integer = std::numeric_limits<T>::is_specialized,
> >           bool U_Integer = std::numeric_limits<U>::is_specialized>
> > struct MINMAX2_result_differ { };
> >
> > template <typename T, typename U,
> >           bool T_Integer = std::numeric_limits<T>::is_integer,
> >           bool U_Integer = std::numeric_limits<U>::is_integer,
> >           bool SameSigned = (std::numeric_limits<T>::is_signed
> >                              == std::numeric_limits<U>::is_signed)>
> > struct MINMAX2_result_differ_aux { };
> >
> > template <typename T, typename U>
> > struct MINMAX2_result_differ_aux<T, U, true, true, true>
> >   : public MINMAX2_result_differ_choose<T, U>
> > { };
> >
> > template <typename T, typename U>
> > struct MINMAX2_result_differ<T, U, true, true>
> >   : public MINMAX2_result_differ_aux<T, U>
> > { };
> >
> > template <typename T, typename U>
> > struct MINMAX2_result_type
> >   : public MINMAX2_result_differ<T, U>
> > { };
> >
> > template <typename T>
> > struct MINMAX2_result_type<T, T> {
> >   typedef T type;
> > };
> >
> > template <typename T, typename U>
> > inline typename MINMAX2_result_type<T, U>::type MAX2(T a, U b) {
> >   // note: if T & U are different integral types, ternary operator will
> >   // perform implicit promotion of the smaller to the larger.
> >   return a > b ? a : b;
> > }
> >
> > template <typename T, typename U>
> > inline typename MINMAX2_result_type<T, U>::type MIN2(T a, U b) {
> >   // note: if T & U are different integral types, ternary operator will
> >   // perform implicit promotion of the smaller to the larger.
> >   return a < b ?
a : b; > } > > // TESTS > > typedef unsigned int uint; > typedef unsigned int uint_size_t; > typedef unsigned long ulong_size_t; > > uint max_same1(uint a, uint_size_t b) { return MAX2(a, b); } > uint max_same2(uint_size_t a, uint b) { return MAX2(a, b); } > > uint min_same1(uint a, uint_size_t b) { return MIN2(a, b); } > uint min_same2(uint_size_t a, uint b) { return MIN2(a, b); } > > ulong_size_t max_diff1(uint a, ulong_size_t b) { return MAX2(a, b); } > ulong_size_t max_diff2(ulong_size_t a, uint b) { return MAX2(a, b); } > > ulong_size_t min_diff1(uint a, ulong_size_t b) { return MIN2(a, b); } > ulong_size_t min_diff2(ulong_size_t a, uint b) { return MIN2(a, b); } > > ulong_size_t max_ulong_size_t(ulong_size_t a, ulong_size_t b) { > return MAX2(a, b); > } > > uint_size_t max_uing_size_t(uint_size_t a, uint_size_t b) { > return MAX2(a, b); > } > > ulong_size_t min_ulong_size_t(ulong_size_t a, ulong_size_t b) { > return MIN2(a, b); > } > > uint_size_t min_uing_size_t(uint_size_t a, uint_size_t b) { > return MIN2(a, b); > } > > // these aren't supposed to compile > > // float max_float1(uint a, float b) { return MAX2(a, b); } > // float max_float2(float a, uint b) { return MAX2(a, b); } > > // float min_float1(uint a, float b) { return MIN2(a, b); } > // float min_float2(float a, uint b) { return MIN2(a, b); } > > // uint max_int1(uint a, int b) { return MAX2(a, b); } > // uint max_int2(int a, uint b) { return MAX2(a, b); } > > // uint min_int1(uint a, int b) { return MIN2(a, b); } > // uint min_int2(int a, uint b) { return MIN2(a, b); } > > // uint max_long1(uint a, long b) { return MAX2(a, b); } > // uint max_long2(long a, uint b) { return MAX2(a, b); } > > // uint min_long1(uint a, long b) { return MIN2(a, b); } > // uint min_long2(long a, uint b) { return MIN2(a, b); } > From erik.osterlund at lnu.se Mon Jul 28 22:43:34 2014 From: erik.osterlund at lnu.se (=?Windows-1252?Q?Erik_=D6sterlund?=) Date: Mon, 28 Jul 2014 22:43:34 +0000 Subject: PATCH: using mixed 
types in MIN2/MAX2 functions In-Reply-To: <53D6C717.3040707@oracle.com> References: <20140612110513.d59301a5c21f3000aa4973d1@danny.cz> <5399F826.4060409@oracle.com> <539A0454.5030906@oracle.com> <20140613095511.5fae7c4b483bb65f073e5628@danny.cz> <539AEA4D.8020204@oracle.com> <20140613145531.559f6d943ef097550580a1f6@danny.cz> <539E8F41.6070001@oracle.com> <20140616091702.28b895ce918bdf4c58d2506f@danny.cz> <539ED1EE.30107@oracle.com> <539EFCE5.9020108@oracle.com> <20140616162130.ac9bec274c625241bb7fd18d@danny.cz> <539F0CA0.3010309@oracle.com> <20140616182920.baccf0cc83debbe522189688@danny.cz> <539F22E0.2080202@oracle.com> <20140618085537.0cc3e3e85856ae67c491c88e@danny.cz> <53A18A50.50001@oracle.com> <53A2DDD8.4010605@oracle.com> <20140619150654.cf6c68f626234f01295fec61@danny.cz> <53D6C717.3040707@oracle.com> Message-ID: <298C75CF-ED79-4673-8CEA-D2D211DAFCC0@lnu.se> Hi guys, Thought I'd put my general opinion on templates vs macros in here. I personally do not understand the fear that "non-trivial" templates complicate things, while macros are embraced as friends. I would, on the contrary, argue that macros often complicate things a lot more than templates would. Apart from providing type safety for such small macro functions, templates can also get rid of the immensely ugly macros used, e.g., for doing full GC and for specialized oop closures. The specialized closures span multiple files and multiple macros with different mechanics, needed for compiler compatibility because the macros grow too large. I do not know in which universe having such macros for closure specialization and for the full-GC logic (hundreds of LOC per macro) is less painful or less complicated than templates. As a matter of fact, I made a similar solution for closure specialization using templates instead of macros to remove virtual calls, and it is a lot cleaner. I strongly believe that by treating templates as friends, people will become a lot happier. 
Rather than complicating the code, I am convinced it will both simplify it and
make it safer.

+1 vote for non-trivial templates!

/Erik

On 28 Jul 2014, at 23:56, Jesper Wilhelmsson wrote:

> Hi Dan, Kim, and others,
>
> Trying to get this discussion going again.
>
> There isn't too much non-trivial template usage in HotSpot today and I'm
> not sure it's worth complicating the code to avoid a few type casts.
>
> How do other people feel about the non-trivial template usage?
> /Jesper
>
>
> Kim Barrett skrev 20/6/14 20:03:
>> On Jun 19, 2014, at 9:06 AM, Dan Horák wrote:
>>>
>>> On Thu, 19 Jun 2014 14:55:52 +0200
>>> Bengt Rutisson wrote:
>>>> Can you explain more in detail why it is not possible to specialize
>>>> the MIN2 and MAX2 functions? You are probably correct, because when I
>>>> read the comment in the code it says:
>>>>
>>>>
>>>> // It is necessary to use templates here. Having normal overloaded
>>>> // functions does not work because it is necessary to provide both 32-
>>>> // and 64-bit overloaded functions, which does not work, and having
>>>> // explicitly-typed versions of these routines (i.e., MAX2I, MAX2L)
>>>> // will be even more error-prone than macros.
>>>> template <class T> inline T MAX2(T a, T b) { return (a > b) ? a : b; }
>>>>
>>>>
>>>> This kind of says what you also said: that it is not possible, but it
>>>> does not really explain why.
>>>>
>>>> Can you explain why we can have definitions like:
>>>>
>>>> inline uint MAX2(uint a, size_t b)
>>>> inline uint MAX2(size_t a, uint b)
>>>
>>> If I remember correctly from my previous experience, these 2 definitions
>>> conflict with the existing
>>> inline uint MAX2(uint a, uint b)
>>> on platforms where size_t == uint. But I might be wrong; the easiest we
>>> can do is to try adding them. Or we could add the new definitions only
>>> for s390 with #if defined(__s390__) && ! defined(__s390x__). Or maybe
>>> there is another way to add them only when size_t != uint.
A C++ expert
>>> is required :-)
>>
>> This isn't too difficult. The tests at the end should of course be turned into real test cases.
>>
>> Requires
>>
>> - <limits>
>> - std::numeric_limits<T>::is_specialized
>> - for specialized types:
>>   - std::numeric_limits<T>::is_integer
>>   - std::numeric_limits<T>::is_signed
>> - partial template specialization
>> - SFINAE for function return types
>>
>> I have no idea whether all of our toolchains support all that. I've heard of some
>> strange defects around numeric_limits with some (older) toolchains. For example,
>> boost provides
>> BOOST_NO_LIMITS - <limits> is not provided
>> BOOST_NO_LIMITS_COMPILE_TIME_CONSTANTS -
>> std::numeric_limits<T>::is_signed and similar are not available at compile-time.
>> Neither of those seems to be applicable to any toolchain relevant to jdk code though.
>>
>> I also don't know how folks feel about non-trivial template usage.
>>
>> -------
>>
>> #include <limits>
>>
>> // MIN2/MAX2(a, b) compare a and b and return the lesser/greater.
>> //
>> // a and b must either
>> //
>> // - be of the same type, in which case the result is of that type, or
>> //
>> // - both be of integer types with the same signed-ness, in which case the
>>   // result is the larger of those two types.
>>
>> template<typename T, typename U,
>>          bool T_larger = (sizeof(U) < sizeof(T))>
>> struct MINMAX2_result_differ_choose { typedef U type; };
>>
>> template<typename T, typename U>
>> struct MINMAX2_result_differ_choose<T, U, true> { typedef T type; };
>>
>> template<typename T, typename U,
>>          bool T_Integer = std::numeric_limits<T>::is_specialized,
>>          bool U_Integer = std::numeric_limits<U>::is_specialized>
>> struct MINMAX2_result_differ { };
>>
>> template<typename T, typename U,
>>          bool T_Integer = std::numeric_limits<T>::is_integer,
>>          bool U_Integer = std::numeric_limits<U>::is_integer,
>>          bool SameSigned = (std::numeric_limits<T>::is_signed
>>                             == std::numeric_limits<U>::is_signed)>
>> struct MINMAX2_result_differ_aux { };
>>
>> template<typename T, typename U>
>> struct MINMAX2_result_differ_aux<T, U, true, true, true>
>>   : public MINMAX2_result_differ_choose<T, U>
>> { };
>>
>> template<typename T, typename U>
>> struct MINMAX2_result_differ<T, U, true, true>
>>   : public MINMAX2_result_differ_aux<T, U>
>> { };
>>
>> template<typename T, typename U>
>> struct MINMAX2_result_type
>>   : public MINMAX2_result_differ<T, U>
>> { };
>>
>> template<typename T>
>> struct MINMAX2_result_type<T, T> {
>>   typedef T type;
>> };
>>
>> template<typename T, typename U>
>> inline typename MINMAX2_result_type<T, U>::type MAX2(T a, U b) {
>>   // note: if T & U are different integral types, ternary operator will
>>   // perform implicit promotion of the smaller to the larger.
>>   return a > b ? a : b;
>> }
>>
>> template<typename T, typename U>
>> inline typename MINMAX2_result_type<T, U>::type MIN2(T a, U b) {
>>   // note: if T & U are different integral types, ternary operator will
>>   // perform implicit promotion of the smaller to the larger.
>>   return a < b ? a : b;
>> }
>>
>> // TESTS
>>
>> typedef unsigned int uint;
>> typedef unsigned int uint_size_t;
>> typedef unsigned long ulong_size_t;
>>
>> uint max_same1(uint a, uint_size_t b) { return MAX2(a, b); }
>> uint max_same2(uint_size_t a, uint b) { return MAX2(a, b); }
>>
>> uint min_same1(uint a, uint_size_t b) { return MIN2(a, b); }
>> uint min_same2(uint_size_t a, uint b) { return MIN2(a, b); }
>>
>> ulong_size_t max_diff1(uint a, ulong_size_t b) { return MAX2(a, b); }
>> ulong_size_t max_diff2(ulong_size_t a, uint b) { return MAX2(a, b); }
>>
>> ulong_size_t min_diff1(uint a, ulong_size_t b) { return MIN2(a, b); }
>> ulong_size_t min_diff2(ulong_size_t a, uint b) { return MIN2(a, b); }
>>
>> ulong_size_t max_ulong_size_t(ulong_size_t a, ulong_size_t b) {
>>   return MAX2(a, b);
>> }
>>
>> uint_size_t max_uint_size_t(uint_size_t a, uint_size_t b) {
>>   return MAX2(a, b);
>> }
>>
>> ulong_size_t min_ulong_size_t(ulong_size_t a, ulong_size_t b) {
>>   return MIN2(a, b);
>> }
>>
>> uint_size_t min_uint_size_t(uint_size_t a, uint_size_t b) {
>>   return MIN2(a, b);
>> }
>>
>> // these aren't supposed to compile
>>
>> // float max_float1(uint a, float b) { return MAX2(a, b); }
>> // float max_float2(float a, uint b) { return MAX2(a, b); }
>>
>> // float min_float1(uint a, float b) { return MIN2(a, b); }
>> // float min_float2(float a, uint b) { return MIN2(a, b); }
>>
>> // uint max_int1(uint a, int b) { return MAX2(a, b); }
>> // uint max_int2(int a, uint b) { return MAX2(a, b); }
>>
>> // uint min_int1(uint a, int b) { return MIN2(a, b); }
>> // uint min_int2(int a, uint b) { return MIN2(a, b); }
>>
>> // uint max_long1(uint a, long b) { return MAX2(a, b); }
>> // uint max_long2(long a, uint b) { return MAX2(a, b); }
>>
>> // uint min_long1(uint a, long b) { return MIN2(a, b); }
>> // uint min_long2(long a, uint b) { return MIN2(a, b); }

From thomas.schatzl at oracle.com Tue Jul 29 07:50:45 2014
From: thomas.schatzl at oracle.com (Thomas Schatzl)
Date: Tue, 29 Jul 2014
09:50:45 +0200 Subject: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CEE4A4E@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDAE6B@DEWDFEMB12A.global.corp.sap> <1405932872.2723.13.camel@cirrus> <4295855A5C1DE049A61835A1887419CC2CEE4A4E@DEWDFEMB12A.global.corp.sap> Message-ID: <1406620245.2620.8.camel@cirrus> Hi, On Fri, 2014-07-25 at 12:07 +0000, Lindenmaier, Goetz wrote: > Hi, > > could somebody please have a further look at this? > We also need a sponsor please. I will sponsor the change. Do you mind removing the wrong comments before the marks? I can do this before pushing. Do you have any preference on who to attribute this change to? Otherwise I will simply add a Contributed-by line with both of you. The system does not support two authors of a change in the "User" line. Thanks, Thomas From martin.doerr at sap.com Tue Jul 29 08:16:31 2014 From: martin.doerr at sap.com (Doerr, Martin) Date: Tue, 29 Jul 2014 08:16:31 +0000 Subject: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark In-Reply-To: <1406620245.2620.8.camel@cirrus> References: <4295855A5C1DE049A61835A1887419CC2CEDAE6B@DEWDFEMB12A.global.corp.sap> <1405932872.2723.13.camel@cirrus> <4295855A5C1DE049A61835A1887419CC2CEE4A4E@DEWDFEMB12A.global.corp.sap> <1406620245.2620.8.camel@cirrus> Message-ID: <7C9B87B351A4BA4AA9EC95BB418116566ACB8618@DEWDFEMB19C.global.corp.sap> Hi Thomas, feel free to remove the wrong comments. You can attribute the change to me ("mdoerr"). Thanks and best regards, Martin -----Original Message----- From: Thomas Schatzl [mailto:thomas.schatzl at oracle.com] Sent: Dienstag, 29. Juli 2014 09:51 To: Lindenmaier, Goetz Cc: hotspot-dev at openjdk.java.net; Doerr, Martin Subject: Re: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark Hi, On Fri, 2014-07-25 at 12:07 +0000, Lindenmaier, Goetz wrote: > Hi, > > could somebody please have a further look at this? 
> We also need a sponsor please. I will sponsor the change. Do you mind removing the wrong comments before the marks? I can do this before pushing. Do you have any preference on who to attribute this change to? Otherwise I will simply add a Contributed-by line with both of you. The system does not support two authors of a change in the "User" line. Thanks, Thomas From goetz.lindenmaier at sap.com Tue Jul 29 08:17:38 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 29 Jul 2014 08:17:38 +0000 Subject: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark In-Reply-To: <7C9B87B351A4BA4AA9EC95BB418116566ACB8618@DEWDFEMB19C.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CEDAE6B@DEWDFEMB12A.global.corp.sap> <1405932872.2723.13.camel@cirrus> <4295855A5C1DE049A61835A1887419CC2CEE4A4E@DEWDFEMB12A.global.corp.sap> <1406620245.2620.8.camel@cirrus> <7C9B87B351A4BA4AA9EC95BB418116566ACB8618@DEWDFEMB19C.global.corp.sap> Message-ID: <4295855A5C1DE049A61835A1887419CC2CEE6F13@DEWDFEMB12A.global.corp.sap> Yep, that's fine. Best regards, Goetz. -----Original Message----- From: Doerr, Martin Sent: Dienstag, 29. Juli 2014 10:17 To: Thomas Schatzl; Lindenmaier, Goetz Cc: hotspot-dev at openjdk.java.net Subject: RE: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark Hi Thomas, feel free to remove the wrong comments. You can attribute the change to me ("mdoerr"). Thanks and best regards, Martin -----Original Message----- From: Thomas Schatzl [mailto:thomas.schatzl at oracle.com] Sent: Dienstag, 29. Juli 2014 09:51 To: Lindenmaier, Goetz Cc: hotspot-dev at openjdk.java.net; Doerr, Martin Subject: Re: RFR(S): 8050973: CMS/G1 GC: add missing Resource and Handle Mark Hi, On Fri, 2014-07-25 at 12:07 +0000, Lindenmaier, Goetz wrote: > Hi, > > could somebody please have a further look at this? > We also need a sponsor please. I will sponsor the change. Do you mind removing the wrong comments before the marks? 
I can do this before pushing. Do you have any preference on who to attribute this change to? Otherwise I will simply add a Contributed-by line with both of you. The system does not support two authors of a change in the "User" line. Thanks, Thomas From mikael.gerdin at oracle.com Tue Jul 29 14:54:38 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Tue, 29 Jul 2014 16:54:38 +0200 Subject: RFR(s): Backport of 8046715 - Add a way to verify an extended set of command line options In-Reply-To: <53D69B4B.4080708@oracle.com> References: <53D69B4B.4080708@oracle.com> Message-ID: <4122534.tWEblkkurM@mgerdin03> Jesper, If the original patch applies, but with an offset to the line numbers I think it's fine to just push the backport according to the 8u-process for HotSpot changes. /Mikael On Monday 28 July 2014 20.49.47 Jesper Wilhelmsson wrote: > Hi, > > Backport of 8046715 (Add a way to verify an extended set of command line > options). > > The patch applied with an offset, so a new review is required I believe. > > Webrev: http://cr.openjdk.java.net/~jwilhelm/8046715/webrev.jdk8/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8046715 > > JDK 9 change: http://hg.openjdk.java.net/jdk9/hs-gc/hotspot/rev/c0b3ddf06856 > > Thanks! > /Jesper From jesper.wilhelmsson at oracle.com Tue Jul 29 15:12:13 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Tue, 29 Jul 2014 17:12:13 +0200 Subject: RFR(s): Backport of 8046715 - Add a way to verify an extended set of command line options In-Reply-To: <4122534.tWEblkkurM@mgerdin03> References: <53D69B4B.4080708@oracle.com> <4122534.tWEblkkurM@mgerdin03> Message-ID: <53D7B9CD.3070306@oracle.com> OK. I'm pushing :) /Jesper Mikael Gerdin skrev 29/7/14 16:54: > Jesper, > > If the original patch applies, but with an offset to the line numbers I think > it's fine to just push the backport according to the 8u-process for HotSpot > changes. 
> > /Mikael > > On Monday 28 July 2014 20.49.47 Jesper Wilhelmsson wrote: >> Hi, >> >> Backport of 8046715 (Add a way to verify an extended set of command line >> options). >> >> The patch applied with an offset, so a new review is required I believe. >> >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8046715/webrev.jdk8/ >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8046715 >> >> JDK 9 change: http://hg.openjdk.java.net/jdk9/hs-gc/hotspot/rev/c0b3ddf06856 >> >> Thanks! >> /Jesper > From igor.ignatyev at oracle.com Tue Jul 29 23:14:39 2014 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Wed, 30 Jul 2014 03:14:39 +0400 Subject: RFR(S) : 8051896 : jtreg tests don't use $TESTJAVAOPTS In-Reply-To: <53D17CCC.5010908@oracle.com> References: <53D14A95.5010009@oracle.com> <53D14DF4.2040702@oracle.com> <53D16E74.6090504@oracle.com> <53D17CCC.5010908@oracle.com> Message-ID: <53D82ADF.2020905@oracle.com> Can I have a 2nd review of these changes? I'd prefer to get it from someone from the runtime team, since several runtime tests are touched. updated webrev: http://cr.openjdk.java.net/~iignatyev/8051896/webrev.02/ Igor On 07/25/2014 01:38 AM, Vladimir Kozlov wrote: > The order of flags on command line is important. I think TESTJAVAOPTS > should be last to take priority: > > TESTOPTS="${TESTVMOPTS} ${TESTJAVAOPTS}" > > Please, add copyright headers to files in compiler/6894807. > > Otherwise look good. > > Thanks, > Vladimir > > On 7/24/14 1:37 PM, Igor Ignatyev wrote: >> // was Re: RFR(XS) : 8051896 : compiler/ciReplay tests don't use >> $TESTJAVAOPTS: >> >> updated webrev: http://cr.openjdk.java.net/~iignatyev/8051896/webrev.01/ >> 93 lines changed: 15 ins; 40 del; 38 mod; >> >> On 07/24/2014 10:18 PM, Vladimir Kozlov wrote: >>> Looks good. >>> >>> Is not this a general problem for all our tests? They use only >>> TESTVMOPTS. >> Yeap, it's a general problem, I've updated all tests. >>> I thought jtreg merges TESTVMOPTS and TESTJAVAOPTS flags >>> together. 
>>> >>> Thanks, >>> Vladimir >>> >>> On 7/24/14 11:04 AM, Igor Ignatyev wrote: >>>> http://cr.openjdk.java.net/~iignatyev/8051896/webrev.00/ >>>> 12 lines changed: 2 ins; 0 del; 10 mod >>>> >>>> Hi all, >>>> >>>> Please review patch: >>>> >>>> Problem: >>>> the tests use only TESTVMOPTS, but jtreg propagates some flags by >>>> TESTJAVAOPTS variable >>>> >>>> Fix: >>>> usages of TESTVMOPTS were replaced by TESTOPTS which is initialized as >>>> concatenated values of TESTVMOPTS and TESTJAVAOPTS >>>> >>>> jbs: https://bugs.openjdk.java.net/browse/JDK-8051896 >>>> testing: jprt From kim.barrett at oracle.com Wed Jul 30 16:19:48 2014 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 30 Jul 2014 12:19:48 -0400 Subject: PATCH: using mixed types in MIN2/MAX2 functions In-Reply-To: <53D6C717.3040707@oracle.com> References: <20140612110513.d59301a5c21f3000aa4973d1@danny.cz> <5399F826.4060409@oracle.com> <539A0454.5030906@oracle.com> <20140613095511.5fae7c4b483bb65f073e5628@danny.cz> <539AEA4D.8020204@oracle.com> <20140613145531.559f6d943ef097550580a1f6@danny.cz> <539E8F41.6070001@oracle.com> <20140616091702.28b895ce918bdf4c58d2506f@danny.cz> <539ED1EE.30107@oracle.com> <539EFCE5.9020108@oracle.com> <20140616162130.ac9bec274c625241bb7fd18d@danny.cz> <539F0CA0.3010309@oracle.com> <20140616182920.baccf0cc83debbe522189688@danny.cz> <539F22E0.2080202@oracle.com> <20140618085537.0cc3e3e85856ae67c491c88e@danny.cz> <53A18A50.50001@oracle.com> <53A2DDD8.4010605@oracle.com> <20140619150654.cf6c68f626234f01295fec61@danny.cz> <53D6C717.3040707@oracle.com> Message-ID: <13DF41DE-16C1-45B5-A4C0-8001134C74E8@oracle.com> On Jul 28, 2014, at 5:56 PM, Jesper Wilhelmsson wrote: > > Trying to get this discussion going again. > > There isn't too much of non-trivial template usage in HotSpot today and I'm not sure I think it's worth complicating the code to avoid a few type casts. > > How do other people feel about the non-trivial template usage? 
For what it's worth, the code I sent out earlier can be further simplified. It also contains a bug, because I didn't think through the problem quite carefully enough. I know how to fix the bug (and also how to improve the tests so it would have been caught!). If there's still interest I can produce an update. I have a very strong dislike for casts in most contexts. The number of casts and other unchecked conversions I'm running across in the hotspot code base makes me cringe. My understanding is that the proposed casts are to work around semantically different types being used (e.g. command flags of uintx type that actually represent size_t values), which happen to be implemented using different primitive types on some small set of platforms. If that's true, a better solution would be to actually fix the semantic type mismatches, though that may be more work. Casts that are only needed for a small set of platforms (and which might not even be included in Oracle testing) seem quite problematic to me - casts generally make for more difficult to understand code, and in a situation like this a future reader is going to have a hard time understanding which casts are truly meaningful and which are workarounds for unusual platforms. Maintenance is also going to be difficult: when is a cast needed in new code, and when can a cast in existing code be eliminated? "Non-trivial" is of course dependent on the reader. And a few comments might help the casual reader. My biggest worry, from a portability standpoint, is the #include of <limits>, which could run afoul of issues similar to https://bugs.openjdk.java.net/browse/JDK-8007770, or of bugs in <limits> itself - Boost.Config has several defect macros to describe bugs encountered in various (generally quite old) versions of <limits>. That said, I don't have any attachment to that code. There was a question about whether mixed type support could be done in this situation where different platforms are using different primitive types. It can. 
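For concreteness, here is a minimal C++98-compatible sketch of the "enable the extra overloads only when size_t is really a distinct type" idea. The helper names are hypothetical and it has not been tried against all of our toolchains; it only illustrates the SFINAE mechanism that makes the mixed overloads disappear on platforms where size_t == uint.

```cpp
#include <cstddef>

typedef unsigned int uint;

// Same-type case, as in the existing hotspot code.
template<class T> inline T MAX2(T a, T b) { return (a > b) ? a : b; }

// Compile-time "is T the same type as U?" test (C++98, no <type_traits>).
template<class T, class U> struct MINMAX2_same { enum { value = 0 }; };
template<class T> struct MINMAX2_same<T, T> { enum { value = 1 }; };

// Minimal enable_if: the false case has no ::type, so substitution
// fails and the overload silently drops out of the candidate set.
template<bool B, class T> struct MINMAX2_enable { };
template<class T> struct MINMAX2_enable<true, T> { typedef T type; };

// Mixed overloads that exist only where size_t != uint. Where the two
// types coincide, the enable_if removes them and the plain template
// above matches, so no ambiguity can arise.
template<class S>
inline typename MINMAX2_enable<!MINMAX2_same<S, uint>::value, S>::type
MAX2(uint a, S b) { S aa = static_cast<S>(a); return (aa > b) ? aa : b; }

template<class S>
inline typename MINMAX2_enable<!MINMAX2_same<S, uint>::value, S>::type
MAX2(S a, uint b) { S bb = static_cast<S>(b); return (a > bb) ? a : bb; }
```

Calls with two identical argument types still resolve to the exact-match template, while uint/size_t mixtures pick the widened overloads only when those actually exist.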
Whether we actually want to do so is a different question. I think a simplified approach that just handles the specific known problematic cases could also be done. I think the amount of template infrastructure involved in that might be less than the more general approach taken in the code I sent. That's kind of ugly, though (IMO) still an improvement on cast littering. A completely different approach to the problem would be to conditionally add the needed extra overloads, using preprocessor conditionalization based on the toolchain. My impression is that the coding conventions of this code base (attempt to) eschew such conditionalizations in otherwise generic code, instead isolating such stuff to platform/target/toolchain-specific files. That might be a little awkward to do in this case, but that might be seen by some as more palatable than the template approach. I don't like this any better than the afore-mentioned template support for specific additional types. From maynardj at us.ibm.com Wed Jul 30 17:05:38 2014 From: maynardj at us.ibm.com (Maynard Johnson) Date: Wed, 30 Jul 2014 12:05:38 -0500 Subject: Fwd: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero In-Reply-To: References: <53AAE839.8050105@us.ibm.com> <53B4300C.7040401@us.ibm.com> <53B43340.6020508@oracle.com> <53BAAC36.8030507@us.ibm.com> Message-ID: <53D925E2.9040103@us.ibm.com> On 07/07/2014 10:51 AM, Volker Simonis wrote: > Hi Maynard, > > I've opened bug "PPC64: Don't use StubCodeMarks for zero-length stubs" > (https://bugs.openjdk.java.net/browse/JDK-8049441) for this issue. > Until it is resolved in the main code line, you can use the attached > patch to work around the problem. Hi, Volker, Just checking on the status of this bug. Thanks. 
-Maynard > > Regards, > Volker > > On Mon, Jul 7, 2014 at 4:18 PM, Maynard Johnson wrote: >> On 07/02/2014 01:21 PM, Volker Simonis wrote: >>> After a quick look I can say that at least for the "flush_icache_stub" >>> and "verify_oop" cases we indeed generate no code. Other platforms >>> like x86 for example generate code for instruction cache flushing. The >>> starting address of this code is saved in a function pointer and >>> called if necessary. On PPC64 we just save the address of a normal >>> C-function in this function pointer and implement the cache flush with >>> the help of inline assembler in the C-function. However this saving of >>> the C-function address in the corresponding function pointer is still >>> done in a helper method which triggers the creation of the >>> JvmtiExport::post_dynamic_code_generated_internal event - but with >>> zero size in that case. >>> >>> I agree that it is questionable if we really need to post these events >>> although they didn't hurt until now. Maybe we can remove them - >>> please let me think one more night about it:) >> Any further thoughts on this, Volker? Thanks. >> >> -Maynard >>> >>> Regards, >>> Volker >>> >>> >>> >>> On Wed, Jul 2, 2014 at 7:38 PM, Volker Simonis wrote: >>>> Hi Maynard, >>>> >>>> I really apologize that I've somehow missed your first message. >>>> ppc-aix-port-dev was the right list to post to. >>>> >>>> I'll analyze this problem instantly and let you know why we post these >>>> zero-code size events. >>>> >>>> Regards, >>>> Volker >>>> >>>> PS: really great to see that somebody is working on oprofile/OpenJDK >>>> integration! >>>> >>>> >>>> On Wed, Jul 2, 2014 at 6:28 PM, Daniel D. Daugherty >>>> wrote: >>>>> Adding the Serviceability team to the thread since JVM/TI is owned >>>>> by them... >>>>> >>>>> Dan >>>>> >>>>> >>>>> >>>>> On 7/2/14 10:15 AM, Maynard Johnson wrote: >>>>>> >>>>>> Cross-posting to see if Hotspot developers can help. 
>>>>>> >>>>>> -Maynard >>>>>> >>>>>> >>>>>> -------- Original Message -------- >>>>>> Subject: PowerPC issue: Some JVMTI dynamic code generated events have code >>>>>> size of zero >>>>>> Date: Wed, 25 Jun 2014 10:18:17 -0500 >>>>>> From: Maynard Johnson >>>>>> To: ppc-aix-port-dev at openjdk.java.net >>>>>> >>>>>> Hello, PowerPC OpenJDK folks, >>>>>> I am just now starting to get involved in the OpenJDK project. My goal is >>>>>> to ensure that the standard serviceability tools and tooling (jdb, JVMTI, >>>>>> jmap, etc.) work correctly on the PowerLinux platform. I selected JVMTI to >>>>>> start with since I have some experience from a client perspective with the >>>>>> JVMTI API. An OSS profiling tool for which I am the maintainer (oprofile) >>>>>> provides an agent library that implements the JVMTI API. Using this agent >>>>>> library to profile Java apps on my Intel-based laptop with OpenJDK (using >>>>>> various versions, up to current jdk9-dev) works fine. But the same >>>>>> profiling scenario attempted on my PowerLinux box (POWER7/Fedora 20) fails >>>>>> miserably. >>>>>> >>>>>> The oprofile agent library registers for callbacks for CompiledMethodLoad, >>>>>> CompiledMethodUnload, and DynamicCodeGenerated. In the callback functions, >>>>>> it writes information about the JVMTI event to a file. After profiling >>>>>> completes, oprofile's post-processing phase involves interpreting the >>>>>> information from the agent library's output file and generating an ELF file >>>>>> to represent the JITed code. When I profile an OpenJDK app on my Power >>>>>> system, the post-processing phase fails while trying to resolve overlapping >>>>>> symbols. The failure is due to the fact that it is unexpectedly finding >>>>>> symbols with code size of zero overlapping at the starting address of some >>>>>> other symbol with non-zero code size. The symbols in question here are from >>>>>> DynamicCodeGenerated events. >>>>>> >>>>>> Are these "code size=0" events valid? 
If so, I can fix the oprofile code >>>>>> to handle them. If they're not valid, then below is some debug information >>>>>> I've collected so far. >>>>>> >>>>>> ---------------------------- >>>>>> >>>>>> I instrumented JvmtiExport::post_dynamic_code_generated_internal (in >>>>>> hotspot/src/share/vm/prims/jvmtiExport.cpp) to print a debug line when a >>>>>> symbol with code size of zero was detected and then ran the following >>>>>> command: >>>>>> >>>>>> java >>>>>> -agentpath:/jvm/openjdk-1.9.0-internal/demo/jvmti/CodeLoadInfo/lib/libCodeLoadInfo.so >>>>>> -version >>>>>> >>>>>> The debug output from my instrumentation was as follows: >>>>>> >>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>> flush_icache_stub; code begin: 0x3fff68000080; code end: 0x3fff68000080 >>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>> throw_exception; code begin: 0x3fff68000a90; code end: 0x3fff68000a90 >>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>>> Code size is ZERO!! Dynamic code generated event sent for verify_oop; >>>>>> code begin: 0x3fff6801665c; code end: 0x3fff6801665c >>>>>> openjdk version "1.9.0-internal" >>>>>> OpenJDK Runtime Environment (build >>>>>> 1.9.0-internal-mpj_2014_06_18_09_55-b00) >>>>>> OpenJDK 64-Bit Server VM (build >>>>>> 1.9.0-internal-mpj_2014_06_18_09_55-b00, mixed mode) >>>>>> >>>>>> >>>>>> I don't have access to an AIX system to know if the same issue would be >>>>>> seen there. Let me know if there's any other information I can provide. >>>>>> >>>>>> Thanks for the help. 
>>>>>> >>>>>> -Maynard >>>>>> >>>>>> >>>>>> >>>>> >>> >> From volker.simonis at gmail.com Wed Jul 30 17:20:48 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 30 Jul 2014 19:20:48 +0200 Subject: Fwd: PowerPC issue: Some JVMTI dynamic code generated events have code size of zero In-Reply-To: <53D925E2.9040103@us.ibm.com> References: <53AAE839.8050105@us.ibm.com> <53B4300C.7040401@us.ibm.com> <53B43340.6020508@oracle.com> <53BAAC36.8030507@us.ibm.com> <53D925E2.9040103@us.ibm.com> Message-ID: Hi Maynard, as you can see from the bug (https://bugs.openjdk.java.net/browse/JDK-8049441) it was already fixed two weeks ago in http://hg.openjdk.java.net/jdk9/hs-rt. Meanwhile it has also arrived in the jdk9 development code line http://hg.openjdk.java.net/jdk9/dev. Regards, Volker PS: unfortunately it is not possible to add people without OpenJDK "Author" status to the watch list of bugs. So the only chance you have is to manually look at the bug reports you're interested in :( On Wed, Jul 30, 2014 at 7:05 PM, Maynard Johnson wrote: > On 07/07/2014 10:51 AM, Volker Simonis wrote: >> Hi Maynard, >> >> I've opened bug "PPC64: Don't use StubCodeMarks for zero-length stubs" >> (https://bugs.openjdk.java.net/browse/JDK-8049441) for this issue. >> Until it is resolved in the main code line, you can use the attached >> patch to work around the problem. > Hi, Volker, > Just checking on the status of this bug. Thanks. > > -Maynard >> >> Regards, >> Volker >> >> >> On Mon, Jul 7, 2014 at 4:18 PM, Maynard Johnson wrote: >>> On 07/02/2014 01:21 PM, Volker Simonis wrote: >>>> After a quick look I can say that at least for the "flush_icache_stub" >>>> and "verify_oop" cases we indeed generate no code. Other platforms >>>> like x86 for example generate code for instruction cache flushing. The >>>> starting address of this code is saved in a function pointer and >>>> called if necessary. 
On PPC64 we just save the address of a normal >>>> C-function in this function pointer and implement the cache flush with >>>> the help of inline assembler in the C-function. However this saving of >>>> the C-function address in the corresponding function pointer is still >>>> done in a helper method which triggers the creation of the >>>> JvmtiExport::post_dynamic_code_generated_internal event - but with >>>> zero size in that case. >>>> >>>> I agree that it is questionable if we really need to post these events >>>> although they didn't hurt until now. Maybe we can remove them - >>>> please let me think one more night about it:) >>> Any further thoughts on this, Volker? Thanks. >>> >>> -Maynard >>>> >>>> Regards, >>>> Volker >>>> >>>> >>>> >>>> On Wed, Jul 2, 2014 at 7:38 PM, Volker Simonis wrote: >>>>> Hi Maynard, >>>>> >>>>> I really apologize that I've somehow missed your first message. >>>>> ppc-aix-port-dev was the right list to post to. >>>>> >>>>> I'll analyze this problem instantly and let you know why we post these >>>>> zero-code size events. >>>>> >>>>> Regards, >>>>> Volker >>>>> >>>>> PS: really great to see that somebody is working on oprofile/OpenJDK >>>>> integration! >>>>> >>>>> >>>>> On Wed, Jul 2, 2014 at 6:28 PM, Daniel D. Daugherty >>>>> wrote: >>>>>> Adding the Serviceability team to the thread since JVM/TI is owned >>>>>> by them... >>>>>> >>>>>> Dan >>>>>> >>>>>> >>>>>> >>>>>> On 7/2/14 10:15 AM, Maynard Johnson wrote: >>>>>>> >>>>>>> Cross-posting to see if Hotspot developers can help. >>>>>>> >>>>>>> -Maynard >>>>>>> >>>>>>> >>>>>>> -------- Original Message -------- >>>>>>> Subject: PowerPC issue: Some JVMTI dynamic code generated events have code >>>>>>> size of zero >>>>>>> Date: Wed, 25 Jun 2014 10:18:17 -0500 >>>>>>> From: Maynard Johnson >>>>>>> To: ppc-aix-port-dev at openjdk.java.net >>>>>>> >>>>>>> Hello, PowerPC OpenJDK folks, >>>>>>> I am just now starting to get involved in the OpenJDK project. 
My goal is >>>>>>> to ensure that the standard serviceability tools and tooling (jdb, JVMTI, >>>>>>> jmap, etc.) work correctly on the PowerLinux platform. I selected JVMTI to >>>>>>> start with since I have some experience from a client perspective with the >>>>>>> JVMTI API. An OSS profiling tool for which I am the maintainer (oprofile) >>>>>>> provides an agent library that implements the JVMTI API. Using this agent >>>>>>> library to profile Java apps on my Intel-based laptop with OpenJDK (using >>>>>>> various versions, up to current jdk9-dev) works fine. But the same >>>>>>> profiling scenario attempted on my PowerLinux box (POWER7/Fedora 20) fails >>>>>>> miserably. >>>>>>> >>>>>>> The oprofile agent library registers for callbacks for CompiledMethodLoad, >>>>>>> CompiledMethodUnload, and DynamicCodeGenerated. In the callback functions, >>>>>>> it writes information about the JVMTI event to a file. After profiling >>>>>>> completes, oprofile's post-processing phase involves interpreting the >>>>>>> information from the agent library's output file and generating an ELF file >>>>>>> to represent the JITed code. When I profile an OpenJDK app on my Power >>>>>>> system, the post-processing phase fails while trying to resolve overlapping >>>>>>> symbols. The failure is due to the fact that it is unexpectedly finding >>>>>>> symbols with code size of zero overlapping at the starting address of some >>>>>>> other symbol with non-zero code size. The symbols in question here are from >>>>>>> DynamicCodeGenerated events. >>>>>>> >>>>>>> Are these "code size=0" events valid? If so, I can fix the oprofile code >>>>>>> to handle them. If they're not valid, then below is some debug information >>>>>>> I've collected so far. 
>>>>>>> >>>>>>> ---------------------------- >>>>>>> >>>>>>> I instrumented JvmtiExport::post_dynamic_code_generated_internal (in >>>>>>> hotspot/src/share/vm/prims/jvmtiExport.cpp) to print a debug line when a >>>>>>> symbol with code size of zero was detected and then ran the following >>>>>>> command: >>>>>>> >>>>>>> java >>>>>>> -agentpath:/jvm/openjdk-1.9.0-internal/demo/jvmti/CodeLoadInfo/lib/libCodeLoadInfo.so >>>>>>> -version >>>>>>> >>>>>>> The debug output from my instrumentation was as follows: >>>>>>> >>>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>>> flush_icache_stub; code begin: 0x3fff68000080; code end: 0x3fff68000080 >>>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>>> throw_exception; code begin: 0x3fff68000a90; code end: 0x3fff68000a90 >>>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>>>> Code size is ZERO!! Dynamic code generated event sent for >>>>>>> throw_exception; code begin: 0x3fff68016600; code end: 0x3fff68016600 >>>>>>> Code size is ZERO!! Dynamic code generated event sent for verify_oop; >>>>>>> code begin: 0x3fff6801665c; code end: 0x3fff6801665c >>>>>>> openjdk version "1.9.0-internal" >>>>>>> OpenJDK Runtime Environment (build >>>>>>> 1.9.0-internal-mpj_2014_06_18_09_55-b00) >>>>>>> OpenJDK 64-Bit Server VM (build >>>>>>> 1.9.0-internal-mpj_2014_06_18_09_55-b00, mixed mode) >>>>>>> >>>>>>> >>>>>>> I don't have access to an AIX system to know if the same issue would be >>>>>>> seen there. Let me know if there's any other information I can provide. >>>>>>> >>>>>>> Thanks for the help. 
>>>>>>> >>>>>>> -Maynard >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>> >>> > From coleen.phillimore at oracle.com Wed Jul 30 19:20:42 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 30 Jul 2014 15:20:42 -0400 Subject: RFR 8051398: jvmti tests fieldacc002, fieldmod002 fail in nightly with errors: (watch#0) wrong location Message-ID: <53D9458A.4050909@oracle.com> Summary: Didn't handle NULL bcp for native methods bcp is set to NULL in the interpreter frame for native methods. x86 generate_native_entry() contains a call_VM that sets the bcp address to the beginning of code, but sparc doesn't. I don't think ppc does either. The code I changed doesn't handle a null bcp, which the post_field_access and post_field_modification tests trigger from JNI for a native method. open webrev at http://cr.openjdk.java.net/~coleenp/8051398/ bug link https://bugs.openjdk.java.net/browse/JDK-8051398 Tested with jck vm/jvmti, jtreg, and NSK internal tests. No test added because there's a test that already tests this. Thanks, Coleen From daniel.daugherty at oracle.com Wed Jul 30 19:58:48 2014 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 30 Jul 2014 13:58:48 -0600 Subject: RFR 8051398: jvmti tests fieldacc002, fieldmod002 fail in nightly with errors: (watch#0) wrong location In-Reply-To: <53D9458A.4050909@oracle.com> References: <53D9458A.4050909@oracle.com> Message-ID: <53D94E78.5050903@oracle.com> On 7/30/14 1:20 PM, Coleen Phillimore wrote: > Summary: Didn't handle NULL bcp for native methods > > bcp is set to NULL in the interpreter frame for native methods. x86 > generate_native_entry() contains a call_VM that sets the bcp address > to the beginning of code, but sparc doesn't. I don't think ppc does > either. The code I changed doesn't handle a null bcp, which the > post_field_access and post_field_modification tests trigger from JNI for > a native method. 
> > open webrev at http://cr.openjdk.java.net/~coleenp/8051398/ src/share/vm/interpreter/interpreterRuntime.cpp I'm not seeing the reason for the code deletion here. Just re-read the bug and I'm still not seeing it. Could be that I've been away from this code for too long. src/share/vm/oops/method.hpp line 652: address bcp_from(address bci) const; Should the prototype parameter name be 'bcp' instead of 'bci' since the type is address? src/share/vm/oops/method.cpp line 287: if (is_native() && bcp == 0) { line 288: return code_base() + (intptr_t)bcp; Why add '(intptr_t)bcp' since you know it is zero? src/share/vm/runtime/frame.cpp No comments. Dan > bug link https://bugs.openjdk.java.net/browse/JDK-8051398 > > Tested with jck vm/jvmti, jtreg, and NSK internal tests. No test > added because there's a test that already tests this. > > Thanks, > Coleen > > From coleen.phillimore at oracle.com Wed Jul 30 20:20:11 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 30 Jul 2014 16:20:11 -0400 Subject: RFR 8051398: jvmti tests fieldacc002, fieldmod002 fail in nightly with errors: (watch#0) wrong location In-Reply-To: <53D94E78.5050903@oracle.com> References: <53D9458A.4050909@oracle.com> <53D94E78.5050903@oracle.com> Message-ID: <53D9537B.2020905@oracle.com> On 7/30/14, 3:58 PM, Daniel D. Daugherty wrote: > On 7/30/14 1:20 PM, Coleen Phillimore wrote: >> Summary: Didn't handle NULL bcp for native methods >> >> bcp is set to NULL in the interpreter frame for native methods. x86 >> generate_native_entry() contains a call_VM that sets the bcp address >> to the beginning of code, but sparc doesn't. I don't think ppc does >> either. The code I changed, doesn't handle a null bcp which >> post_field_access and post_field_modification tests call from JNI for >> a native method. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8051398/ > > src/share/vm/interpreter/interpreterRuntime.cpp > I'm not seeing the reason for the code deletion here. 
> Just re-read the bug and I'm still not seeing it. > Could be that I've been away from this code for too long. I was debugging this and found this useless piece of code, so while not directly related to the cause of the bug, it was in the path of the bug. > > src/share/vm/oops/method.hpp > line 652: address bcp_from(address bci) const; > Should the prototype parameter name be 'bcp' instead > of 'bci' since the type is address? > You're right. I will make that bcp. > src/share/vm/oops/method.cpp > line 287: if (is_native() && bcp == 0) { > line 288: return code_base() + (intptr_t)bcp; > Why add '(intptr_t)bcp' since you know it is zero? > True. I don't need to add bcp. That saves a cast. Thanks! Coleen > src/share/vm/runtime/frame.cpp > No comments. > > > Dan > > > >> bug link https://bugs.openjdk.java.net/browse/JDK-8051398 >> >> Tested with jck vm/jvmti, jtreg, and NSK internal tests. No test >> added because there's a test that already tests this. >> >> Thanks, >> Coleen >> >> > From daniel.daugherty at oracle.com Wed Jul 30 20:29:18 2014 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 30 Jul 2014 14:29:18 -0600 Subject: RFR 8051398: jvmti tests fieldacc002, fieldmod002 fail in nightly with errors: (watch#0) wrong location In-Reply-To: <53D9537B.2020905@oracle.com> References: <53D9458A.4050909@oracle.com> <53D94E78.5050903@oracle.com> <53D9537B.2020905@oracle.com> Message-ID: <53D9559E.8040907@oracle.com> On 7/30/14 2:20 PM, Coleen Phillimore wrote: > > On 7/30/14, 3:58 PM, Daniel D. Daugherty wrote: >> On 7/30/14 1:20 PM, Coleen Phillimore wrote: >>> Summary: Didn't handle NULL bcp for native methods >>> >>> bcp is set to NULL in the interpreter frame for native methods. x86 >>> generate_native_entry() contains a call_VM that sets the bcp address >>> to the beginning of code, but sparc doesn't. I don't think ppc does >>> either. 
The code I changed, doesn't handle a null bcp which >>> post_field_access and post_field_modification tests call from JNI >>> for a native method. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8051398/ >> >> src/share/vm/interpreter/interpreterRuntime.cpp >> I'm not seeing the reason for the code deletion here. >> Just re-read the bug and I'm still not seeing it. >> Could be that I've been away from this code for too long. > > I was debugging this and found this useless piece of code, so while > not directly related to the cause of the bug, it was in the path of > the bug. I'm good with you putting on the CDE hat! >> >> src/share/vm/oops/method.hpp >> line 652: address bcp_from(address bci) const; >> Should the prototype parameter name be 'bcp' instead >> of 'bci' since the type is address? >> > > You're right. I will make that bcp. >> src/share/vm/oops/method.cpp >> line 287: if (is_native() && bcp == 0) { >> line 288: return code_base() + (intptr_t)bcp; >> Why add '(intptr_t)bcp' since you know it is zero? >> > > True. I don't need to add bcp. That saves a cast. All sounds good to me. Dan > > Thanks! > Coleen > >> src/share/vm/runtime/frame.cpp >> No comments. >> >> >> Dan >> >> >> >>> bug link https://bugs.openjdk.java.net/browse/JDK-8051398 >>> >>> Tested with jck vm/jvmti, jtreg, and NSK internal tests. No test >>> added because there's a test that already tests this. >>> >>> Thanks, >>> Coleen >>> >>> >> > From coleen.phillimore at oracle.com Wed Jul 30 20:35:15 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 30 Jul 2014 16:35:15 -0400 Subject: RFR 8051398: jvmti tests fieldacc002, fieldmod002 fail in nightly with errors: (watch#0) wrong location In-Reply-To: <53D9559E.8040907@oracle.com> References: <53D9458A.4050909@oracle.com> <53D94E78.5050903@oracle.com> <53D9537B.2020905@oracle.com> <53D9559E.8040907@oracle.com> Message-ID: <53D95703.6090303@oracle.com> Thanks Dan! Coleen On 7/30/14, 4:29 PM, Daniel D. 
Daugherty wrote: > On 7/30/14 2:20 PM, Coleen Phillimore wrote: >> >> On 7/30/14, 3:58 PM, Daniel D. Daugherty wrote: >>> On 7/30/14 1:20 PM, Coleen Phillimore wrote: >>>> Summary: Didn't handle NULL bcp for native methods >>>> >>>> bcp is set to NULL in the interpreter frame for native methods. x86 >>>> generate_native_entry() contains a call_VM that sets the bcp >>>> address to the beginning of code, but sparc doesn't. I don't think >>>> ppc does either. The code I changed, doesn't handle a null bcp >>>> which post_field_access and post_field_modification tests call from >>>> JNI for a native method. >>>> >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8051398/ >>> >>> src/share/vm/interpreter/interpreterRuntime.cpp >>> I'm not seeing the reason for the code deletion here. >>> Just re-read the bug and I'm still not seeing it. >>> Could be that I've been away from this code for too long. >> >> I was debugging this and found this useless piece of code, so while >> not directly related to the cause of the bug, it was in the path of >> the bug. > > I'm good with you putting on the CDE hat! > > >>> >>> src/share/vm/oops/method.hpp >>> line 652: address bcp_from(address bci) const; >>> Should the prototype parameter name be 'bcp' instead >>> of 'bci' since the type is address? >>> >> >> You're right. I will make that bcp. >>> src/share/vm/oops/method.cpp >>> line 287: if (is_native() && bcp == 0) { >>> line 288: return code_base() + (intptr_t)bcp; >>> Why add '(intptr_t)bcp' since you know it is zero? >>> >> >> True. I don't need to add bcp. That saves a cast. > > All sounds good to me. > > Dan > > >> >> Thanks! >> Coleen >> >>> src/share/vm/runtime/frame.cpp >>> No comments. >>> >>> >>> Dan >>> >>> >>> >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8051398 >>>> >>>> Tested with jck vm/jvmti, jtreg, and NSK internal tests. No test >>>> added because there's a test that already tests this. 
>>>> >>>> Thanks, >>>> Coleen >>>> >>>> >>> >> > From serguei.spitsyn at oracle.com Wed Jul 30 23:28:34 2014 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Wed, 30 Jul 2014 16:28:34 -0700 Subject: RFR 8051398: jvmti tests fieldacc002, fieldmod002 fail in nightly with errors: (watch#0) wrong location In-Reply-To: <53D9537B.2020905@oracle.com> References: <53D9458A.4050909@oracle.com> <53D94E78.5050903@oracle.com> <53D9537B.2020905@oracle.com> Message-ID: <53D97FA2.3010707@oracle.com> Coleen, The fix looks good to me. I only had the same minor comments that Dan already asked below. The code removed in the interpreterRuntime.cpp looks like an assert but I'm not sure how useful it is. Thanks, Serguei On 7/30/14 1:20 PM, Coleen Phillimore wrote: > > On 7/30/14, 3:58 PM, Daniel D. Daugherty wrote: >> On 7/30/14 1:20 PM, Coleen Phillimore wrote: >>> Summary: Didn't handle NULL bcp for native methods >>> >>> bcp is set to NULL in the interpreter frame for native methods. x86 >>> generate_native_entry() contains a call_VM that sets the bcp address >>> to the beginning of code, but sparc doesn't. I don't think ppc does >>> either. The code I changed, doesn't handle a null bcp which >>> post_field_access and post_field_modification tests call from JNI >>> for a native method. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8051398/ >> >> src/share/vm/interpreter/interpreterRuntime.cpp >> I'm not seeing the reason for the code deletion here. >> Just re-read the bug and I'm still not seeing it. >> Could be that I've been away from this code for too long. > > I was debugging this and found this useless piece of code, so while > not directly related to the cause of the bug, it was in the path of > the bug. >> >> src/share/vm/oops/method.hpp >> line 652: address bcp_from(address bci) const; >> Should the prototype parameter name be 'bcp' instead >> of 'bci' since the type is address? >> > > You're right. I will make that bcp. 
>> src/share/vm/oops/method.cpp >> line 287: if (is_native() && bcp == 0) { >> line 288: return code_base() + (intptr_t)bcp; >> Why add '(intptr_t)bcp' since you know it is zero? >> > > True. I don't need to add bcp. That saves a cast. > > Thanks! > Coleen > >> src/share/vm/runtime/frame.cpp >> No comments. >> >> >> Dan >> >> >> >>> bug link https://bugs.openjdk.java.net/browse/JDK-8051398 >>> >>> Tested with jck vm/jvmti, jtreg, and NSK internal tests. No test >>> added because there's a test that already tests this. >>> >>> Thanks, >>> Coleen >>> >>> >> > From coleen.phillimore at oracle.com Thu Jul 31 00:49:52 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 30 Jul 2014 20:49:52 -0400 Subject: RFR 8051398: jvmti tests fieldacc002, fieldmod002 fail in nightly with errors: (watch#0) wrong location In-Reply-To: <53D97FA2.3010707@oracle.com> References: <53D9458A.4050909@oracle.com> <53D94E78.5050903@oracle.com> <53D9537B.2020905@oracle.com> <53D97FA2.3010707@oracle.com> Message-ID: <53D992B0.9050702@oracle.com> Thank you, Serguei! I thought the code removed was something cut/pasted from post_field_modification, which does something for each case. I thought it looked strange because it has all cases except vtos, so essentially it's asserting that the cpCache doesn't get initialized to void for getfield, which should be asserted somewhere else if it's a useful test. It was visual noise. Thanks! Coleen On 7/30/14, 7:28 PM, serguei.spitsyn at oracle.com wrote: > Coleen, > > The fix looks good to me. > I only had the same minor comments that Dan already asked below. > The code removed in the interpreterRuntime.cpp looks like an assert > but I'm not sure how useful it is. > > Thanks, > Serguei > > On 7/30/14 1:20 PM, Coleen Phillimore wrote: >> >> On 7/30/14, 3:58 PM, Daniel D. 
Daugherty wrote: >>> On 7/30/14 1:20 PM, Coleen Phillimore wrote: >>>> Summary: Didn't handle NULL bcp for native methods >>>> >>>> bcp is set to NULL in the interpreter frame for native methods. x86 >>>> generate_native_entry() contains a call_VM that sets the bcp >>>> address to the beginning of code, but sparc doesn't. I don't think >>>> ppc does either. The code I changed, doesn't handle a null bcp >>>> which post_field_access and post_field_modification tests call from >>>> JNI for a native method. >>>> >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8051398/ >>> >>> src/share/vm/interpreter/interpreterRuntime.cpp >>> I'm not seeing the reason for the code deletion here. >>> Just re-read the bug and I'm still not seeing it. >>> Could be that I've been away from this code for too long. >> >> I was debugging this and found this useless piece of code, so while >> not directly related to the cause of the bug, it was in the path of >> the bug. >>> >>> src/share/vm/oops/method.hpp >>> line 652: address bcp_from(address bci) const; >>> Should the prototype parameter name be 'bcp' instead >>> of 'bci' since the type is address? >>> >> >> You're right. I will make that bcp. >>> src/share/vm/oops/method.cpp >>> line 287: if (is_native() && bcp == 0) { >>> line 288: return code_base() + (intptr_t)bcp; >>> Why add '(intptr_t)bcp' since you know it is zero? >>> >> >> True. I don't need to add bcp. That saves a cast. >> >> Thanks! >> Coleen >> >>> src/share/vm/runtime/frame.cpp >>> No comments. >>> >>> >>> Dan >>> >>> >>> >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8051398 >>>> >>>> Tested with jck vm/jvmti, jtreg, and NSK internal tests. No test >>>> added because there's a test that already tests this. 
>>>> >>>> Thanks, >>>> Coleen >>>> >>>> >>> >> > From serguei.spitsyn at oracle.com Thu Jul 31 00:54:33 2014 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Wed, 30 Jul 2014 17:54:33 -0700 Subject: RFR 8051398: jvmti tests fieldacc002, fieldmod002 fail in nightly with errors: (watch#0) wrong location In-Reply-To: <53D992B0.9050702@oracle.com> References: <53D9458A.4050909@oracle.com> <53D94E78.5050903@oracle.com> <53D9537B.2020905@oracle.com> <53D97FA2.3010707@oracle.com> <53D992B0.9050702@oracle.com> Message-ID: <53D993C9.7090004@oracle.com> On 7/30/14 5:49 PM, Coleen Phillimore wrote: > > Thank you, Serguei! I thought the code removed was something > cut/pasted from post_field_modification, which does something for each > case. I thought it looked strange because it has all cases except > vtos, so essentially it's asserting that the cpCache doesn't get > initialized to void for getfield, which should be asserted somewhere > else if it's a useful test. It was visual noise. Agreed. Thanks, Serguei > > Thanks! > Coleen > > > On 7/30/14, 7:28 PM, serguei.spitsyn at oracle.com wrote: >> Coleen, >> >> The fix looks good to me. >> I only had the same minor comments that Dan already asked below. >> The code removed in the interpreterRuntime.cpp looks like an assert >> but I'm not sure how useful it is. >> >> Thanks, >> Serguei >> >> On 7/30/14 1:20 PM, Coleen Phillimore wrote: >>> >>> On 7/30/14, 3:58 PM, Daniel D. Daugherty wrote: >>>> On 7/30/14 1:20 PM, Coleen Phillimore wrote: >>>>> Summary: Didn't handle NULL bcp for native methods >>>>> >>>>> bcp is set to NULL in the interpreter frame for native methods. >>>>> x86 generate_native_entry() contains a call_VM that sets the bcp >>>>> address to the beginning of code, but sparc doesn't. I don't >>>>> think ppc does either. The code I changed, doesn't handle a null >>>>> bcp which post_field_access and post_field_modification tests call >>>>> from JNI for a native method. 
>>>>> >>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8051398/ >>>> >>>> src/share/vm/interpreter/interpreterRuntime.cpp >>>> I'm not seeing the reason for the code deletion here. >>>> Just re-read the bug and I'm still not seeing it. >>>> Could be that I've been away from this code for too long. >>> >>> I was debugging this and found this useless piece of code, so while >>> not directly related to the cause of the bug, it was in the path of >>> the bug. >>>> >>>> src/share/vm/oops/method.hpp >>>> line 652: address bcp_from(address bci) const; >>>> Should the prototype parameter name be 'bcp' instead >>>> of 'bci' since the type is address? >>>> >>> >>> You're right. I will make that bcp. >>>> src/share/vm/oops/method.cpp >>>> line 287: if (is_native() && bcp == 0) { >>>> line 288: return code_base() + (intptr_t)bcp; >>>> Why add '(intptr_t)bcp' since you know it is zero? >>>> >>> >>> True. I don't need to add bcp. That saves a cast. >>> >>> Thanks! >>> Coleen >>> >>>> src/share/vm/runtime/frame.cpp >>>> No comments. >>>> >>>> >>>> Dan >>>> >>>> >>>> >>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8051398 >>>>> >>>>> Tested with jck vm/jvmti, jtreg, and NSK internal tests. No test >>>>> added because there's a test that already tests this. 
>>>>> >>>>> Thanks, >>>>> Coleen >>>>> >>>>> >>>> >>> >> > From jesper.wilhelmsson at oracle.com Thu Jul 31 11:34:30 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Thu, 31 Jul 2014 13:34:30 +0200 Subject: PATCH: using mixed types in MIN2/MAX2 functions In-Reply-To: <13DF41DE-16C1-45B5-A4C0-8001134C74E8@oracle.com> References: <20140612110513.d59301a5c21f3000aa4973d1@danny.cz> <5399F826.4060409@oracle.com> <539A0454.5030906@oracle.com> <20140613095511.5fae7c4b483bb65f073e5628@danny.cz> <539AEA4D.8020204@oracle.com> <20140613145531.559f6d943ef097550580a1f6@danny.cz> <539E8F41.6070001@oracle.com> <20140616091702.28b895ce918bdf4c58d2506f@danny.cz> <539ED1EE.30107@oracle.com> <539EFCE5.9020108@oracle.com> <20140616162130.ac9bec274c625241bb7fd18d@danny.cz> <539F0CA0.3010309@oracle.com> <20140616182920.baccf0cc83debbe522189688@danny.cz> <539F22E0.2080202@oracle.com> <20140618085537.0cc3e3e85856ae67c491c88e@danny.cz> <53A18A50.50001@oracle.com> <53A2DDD8.4010605@oracle.com> <20140619150654.cf6c68f626234f01295fec61@danny.cz> <53D6C717.3040707@oracle.com> <13DF41DE-16C1-45B5-A4C0-8001134C74E8@oracle.com> Message-ID: <53DA29C6.8080600@oracle.com> Kim Barrett skrev 30/7/14 18:19: > On Jul 28, 2014, at 5:56 PM, Jesper Wilhelmsson wrote: >> >> Trying to get this discussion going again. >> >> There isn't too much of non-trivial template usage in HotSpot today and I'm not sure I think it's worth complicating the code to avoid a few type casts. >> >> How do other people feel about the non-trivial template usage? > > For what it?s worth, the code I sent out earlier can be further > simplified. It also contains a bug, because I didn't think through > the problem quite carefully enough. I know how to fix the bug (and > also how to improve the tests so it would have been caught!). If > there's still interest I can produce an update. > > I have a very strong dislike for casts in most contexts. 
The number > of casts and other unchecked conversions I'm running across in the > hotspot code base makes me cringe. > > My understanding is that the proposed casts are to work around > semantically different types being used (e.g. command flags of uintx > type that actually represent size_t values), which happen to be > implemented using different primitive types on some small set of > platforms. If that's true, a better solution would be to actually fix > the semantic type mismatches, though that may be more work. I think you have a really good point here. Is there a reason for the different types being used or should we simply converge to using the same type all over? /Jesper > > Casts that are only needed for a small set of platforms (and which > might not even be included in Oracle testing) seem quite problematic > to me - casts generally make for more difficult to understand code, > and in a situation like this a future reader is going to have a hard > time understanding which casts are truly meaningful and which are > workarounds for unusual platforms. Maintenance is also going to be > difficult: when is a cast needed in new code, and when can a cast in > existing code be eliminated? > > "Non-trivial" is of course dependent on the reader. And a few > comments might help the casual reader. > > My biggest worry, from a portability standpoint, is the #include of > , which could run afoul of issues similar to > https://bugs.openjdk.java.net/browse/JDK-8007770, or to bugs in > itself - Boost.Config has several defect macros to describe > bugs encountered in various (generally quite old) versions of > . > > That said, I don't have any attachment to that code. There was a > question about whether mixed type support could be done in this > situation where different platforms are using different primitive > types. It can. Whether we actually want to do so is a different > question. 
> > I think a simplified approach that just handles the specific known > problematic cases could also be done. I think the amount of template > infrastructure involved in that might be less than the more general > approach taken in the code I sent. That's kind of ugly, though (IMO) > still an improvement on cast littering. > > A completely different approach to the problem would be to > conditionally add the needed extra overloads, using preprocessor > conditionalization based on the toolchain. My impression is that the > coding conventions of this code base (attempt to) eschew such > conditionalizations in otherwise generic code, instead isolating such > stuff to platform/target/toolchain-specific files. That might be a > little awkward to do in this case, but that might be seen by some as > more palatable than the template approach. I don't like this any > better than the afore-mentioned template support for specific > additional types. > From jon.masamitsu at oracle.com Thu Jul 31 14:24:18 2014 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Thu, 31 Jul 2014 07:24:18 -0700 Subject: [jdk8u40] Request for review(s) - 8024366: Make UseNUMA enable UseNUMAInterleaving Message-ID: <53DA5192.3000100@oracle.com> This is a clean transplant of 8024366 from jdk9. For GC's that do not have explicit support for NUMA, UseNUMA will turn on UseNUMAInterleaving . webrev: http://cr.openjdk.java.net/~jmasa/8024366/webrev.01/ CR: https://bugs.openjdk.java.net/browse/JDK-8024366 Thanks. Jon From mikael.gerdin at oracle.com Thu Jul 31 14:33:40 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 31 Jul 2014 16:33:40 +0200 Subject: [jdk8u40] Request for review(s) - 8024366: Make UseNUMA enable UseNUMAInterleaving In-Reply-To: <53DA5192.3000100@oracle.com> References: <53DA5192.3000100@oracle.com> Message-ID: <2092947.qtUIkT14Xq@mgerdin-lap> Jon, On Thursday 31 July 2014 07.24.18 Jon Masamitsu wrote: > This is a clean transplant of 8024366 from jdk9. 
> > For GCs that do not have explicit support for NUMA, > UseNUMA will turn on UseNUMAInterleaving. > > webrev: > > http://cr.openjdk.java.net/~jmasa/8024366/webrev.01/ Looks good to me. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8024366 You need to change the Fix Version field on the CR from 9 to 8u40, otherwise the hg updater will create a backport bug and close it on 8u40. /Mikael > > Thanks. > > Jon From jon.masamitsu at oracle.com Thu Jul 31 16:16:21 2014 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Thu, 31 Jul 2014 09:16:21 -0700 Subject: [jdk8u40] Request for review(s) - 8024366: Make UseNUMA enable UseNUMAInterleaving In-Reply-To: <2092947.qtUIkT14Xq@mgerdin-lap> References: <53DA5192.3000100@oracle.com> <2092947.qtUIkT14Xq@mgerdin-lap> Message-ID: <53DA6BD5.5070300@oracle.com> On 7/31/2014 7:33 AM, Mikael Gerdin wrote: > Jon, > > On Thursday 31 July 2014 07.24.18 Jon Masamitsu wrote: >> This is a clean transplant of 8024366 from jdk9. >> >> For GCs that do not have explicit support for NUMA, >> UseNUMA will turn on UseNUMAInterleaving. >> >> webrev: >> >> http://cr.openjdk.java.net/~jmasa/8024366/webrev.01/ > Looks good to me. Thanks. > >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8024366 > You need to change the Fix Version field on the CR from 9 to 8u40, otherwise > the hg updater will create a backport bug and close it on 8u40. Double thanks. The backport bug would have confused me. Jon > > /Mikael > >> Thanks. >> >> Jon