From david.holmes at oracle.com Thu Jun 1 01:22:31 2017 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Jun 2017 11:22:31 +1000 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: <592EC8F6.5080605@oracle.com> References: <592EC8F6.5080605@oracle.com> Message-ID: <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> Hi Erik, A small change with big questions :) On 31/05/2017 11:45 PM, Erik Österlund wrote: > Hi, > > It would be desirable to be able to use harmless C++ standard library > headers like in the code as long as it does not add any > link-time dependencies to the standard library. What does a 'harmless' C++ standard library header look like? > This is possible on all supported platforms except the ones using the > solaris studio compiler where we enforce -library=%none in both CFLAGS > and LDFLAGS. > I propose to remove the restriction from CFLAGS but keep it on LDFLAGS. > > I have consulted with the studio folks, and they think this is > absolutely fine and thought that the choice of -library=stlport4 should > be fine for our CFLAGS and is indeed what is already used in the gtest > launcher. So what exactly does this mean? IIUC this allows you to use headers for, and compile against "STLport's Standard Library implementation version 4.5.3 instead of the default libCstd". But how do you then not need to link against libstlport.so ?? https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html "STLport is binary incompatible with the default libCstd. If you use the STLport implementation of the standard library, then you must compile and link all files, including third-party libraries, with the option -library=stlport4" There are lots of other comments in that document regarding STLport that makes me think that using it may be introducing a fragile dependency into the OpenJDK code! "STLport is an open source product and does not guarantee compatibility across different releases.
In other words, compiling with a future version of STLport may break applications compiled with STLport 4.5.3. It also might not be possible to link binaries compiled using STLport 4.5.3 with binaries compiled using a future version of STLport." "Future releases of the compiler might not include STLport4. They might include only a later version of STLport. The compiler option -library=stlport4 might not be available in future releases, but could be replaced by an option referring to a later STLport version." None of that sounds very good to me. Cheers, David > Webrev for jdk10-hs top level repository: > http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ > > Webrev for jdk10-hs hotspot repository: > http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ > > Testing: JPRT. > > Will need a sponsor. > > Thanks, > /Erik From david.holmes at oracle.com Thu Jun 1 01:33:58 2017 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Jun 2017 11:33:58 +1000 Subject: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode In-Reply-To: References: Message-ID: <5f437ff4-98f5-91b6-608e-d7c2d6f1e469@oracle.com> Hi Stuart, This looks like an accurate backport of the change. My only minor concern is if there may be tests in 8u that are no longer in 9 which may not work with agentvm mode. What platforms have you tested this on? Thanks, David On 31/05/2017 11:19 PM, Stuart Monteith wrote: > Hello, > Currently the jdk8u codebase fails some JTreg Hotspot tests when > running in the -agentvm mode. This is because the ProcessTools class > is not passing the classpath. There are substantial time savings to be > gained using -agentvm over -othervm. > > Fortunately, there was a fix for jdk9 (8077608) that has not been > backported to jdk8u. 
The details are as follows: > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/017937.html > https://bugs.openjdk.java.net/browse/JDK-8077608 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/af2a1e9f08f3 > > The patch just needed a slight change, to remove the change to the > file "test/compiler/uncommontrap/TestUnstableIfTrap.java" as that test > doesn't exist on jdk8u. > > My colleague Ningsheng has kindly hosted the change here: > > http://cr.openjdk.java.net/~njian/8077608/webrev.00 > > > BR, > Stuart > From david.holmes at oracle.com Thu Jun 1 01:48:31 2017 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Jun 2017 11:48:31 +1000 Subject: RFR(XS) 8181055: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: <592EBB4B.1020909@linux.vnet.ibm.com> References: <3a2a0ef7-5eac-b72c-5dc6-b7594dc70c07@redhat.com> <5928C9AA.6030004@linux.vnet.ibm.com> <38f323bc-7416-5c3d-c534-f5f17be4c7c6@redhat.com> <95147596-caf9-4e49-f954-29fa13df3a56@oracle.com> <592CA97D.4000802@linux.vnet.ibm.com> <937b01c9-5569-ce73-e7a3-ad38aed82ab3@redhat.com> <4dc1ac9e-f35f-209a-761f-96dc584f68a1@oracle.com> <461d3048-88a2-c99d-818a-01de3813a29b@redhat.com> <592EBB4B.1020909@linux.vnet.ibm.com> Message-ID: <3252462d-f318-7287-c609-b84029e40117@oracle.com> Hi Gustavo, On 31/05/2017 10:47 PM, Gustavo Romero wrote: > Hi Zhengyu, > > On 30-05-2017 21:37, Zhengyu Gu wrote: >> Hi David, >> >> Thanks for the review. >> >> Gustavo, might I count you as a reviewer? > > Formally speaking (accordingly to the community Bylaws) I'm not a reviewer, so > I guess no. You are not a Reviewer (capital 'R') but you can certainly review and be listed as a reviewer. Cheers, David > > Kind regards, > Gustavo > >> Thanks, >> >> -Zhengyu >> >> >> >> On 05/30/2017 05:30 PM, David Holmes wrote: >>> Looks fine to me. >>> >>> Thanks, >>> David >>> >>> On 30/05/2017 9:59 PM, Zhengyu Gu wrote: >>>> Hi David and Gustavo, >>>> >>>> Thanks for the review. 
>>>> >>>> Webrev is updated according to your comments: >>>> >>>> http://cr.openjdk.java.net/~zgu/8181055/webrev.02/ >>>> >>>> Thanks, >>>> >>>> -Zhengyu >>>> >>>> >>>> On 05/29/2017 07:06 PM, Gustavo Romero wrote: >>>>> Hi David, >>>>> >>>>> On 29-05-2017 01:34, David Holmes wrote: >>>>>> Hi Zhengyu, >>>>>> >>>>>> On 29/05/2017 12:08 PM, Zhengyu Gu wrote: >>>>>>> Hi Gustavo, >>>>>>> >>>>>>> Thanks for the detail analysis and suggestion. I did not realize >>>>>>> the difference between from bitmask and nodemask. >>>>>>> >>>>>>> As you suggested, numa_interleave_memory_v2 works under this >>>>>>> configuration. >>>>>>> >>>>>>> Please updated Webrev: >>>>>>> http://cr.openjdk.java.net/~zgu/8181055/webrev.01/ >>>>>> >>>>>> The addition of support for the "v2" API seems okay. Though I think >>>>>> this comment needs some clarification for the existing code: >>>>>> >>>>>> 2837 // If we are running with libnuma version > 2, then we should >>>>>> 2838 // be trying to use symbols with versions 1.1 >>>>>> 2839 // If we are running with earlier version, which did not have >>>>>> symbol versions, >>>>>> 2840 // we should use the base version. >>>>>> 2841 void* os::Linux::libnuma_dlsym(void* handle, const char *name) { >>>>>> >>>>>> given that we now explicitly load the v1.2 symbol if present. >>>>>> >>>>>> Gustavo: can you vouch for the suitability of using the v2 API in >>>>>> all cases, if it exists? >>>>> >>>>> My understanding is that in the transition to API v2 only the usage of >>>>> numa_node_to_cpus() by the JVM will have to be adapted in >>>>> os::Linux::rebuild_cpu_to_node_map(). >>>>> The remaining functions (excluding numa_interleave_memory() as >>>>> Zhengyu already addressed it) >>>>> preserve the same functionality and signatures [1]. >>>>> >>>>> Currently JVM NUMA API requires the following libnuma functions: >>>>> >>>>> 1. numa_node_to_cpus v1 != v2 (using v1, JVM has to adapt) >>>>> 2. 
numa_max_node v1 == v2 (using v1, transition is >>>>> straightforward) >>>>> 3. numa_num_configured_nodes v2 (added by gromero: 8175813) >>>>> 4. numa_available v1 == v2 (using v1, transition is >>>>> straightforward) >>>>> 5. numa_tonode_memory v1 == v2 (using v1, transition is >>>>> straightforward) >>>>> 6. numa_interleave_memory v1 != v2 (updated by zhengyu: >>>>> 8181055. Default use of v2, fallback to v1) >>>>> 7. numa_set_bind_policy v1 == v2 (using v1, transition is >>>>> straightforward) >>>>> 8. numa_bitmask_isbitset v2 (added by gromero: 8175813) >>>>> 9. numa_distance v1 == v2 (added by gromero: 8175813. >>>>> Using v1, transition is straightforward) >>>>> >>>>> v1 != v2: function signature in version 1 is different from version 2 >>>>> v1 == v2: function signature in version 1 is equal to version 2 >>>>> v2 : function is only present in API v2 >>>>> >>>>> Thus, to the best of my knowledge, except for case 1. (which JVM need >>>>> to adapt to) >>>>> all other cases are suitable to use v2 API and we could use a >>>>> fallback mechanism as >>>>> proposed by Zhengyu or update directly to API v2 (risky?), given that >>>>> I can't see >>>>> how v2 API would not be available on current (not-EOL) Linux distro >>>>> releases. >>>>> >>>>> Regarding the comment, I agree, it needs an update since we are not >>>>> tied anymore >>>>> to version 1.1 (we are in effect already using v2 for some >>>>> functions). We could >>>>> delete the comment atop libnuma_dlsym() and add something like: >>>>> >>>>> "Handle request to load libnuma symbol version 1.1 (API v1). If it >>>>> fails load symbol from base version instead." >>>>> >>>>> and to libnuma_v2_dlsym() add: >>>>> >>>>> "Handle request to load libnuma symbol version 1.2 (API v2) only. If >>>>> it fails no symbol from any other version - even if present - is >>>>> loaded." 
>>>>> >>>>> I've opened a bug to track the transitions to API v2 (I also >>>>> discussed that with Volker): >>>>> https://bugs.openjdk.java.net/browse/JDK-8181196 >>>>> >>>>> >>>>> Regards, >>>>> Gustavo >>>>> >>>>> [1] API v1 vs API v2: >>>>> >>>>> API v1 >>>>> ====== >>>>> >>>>> int numa_node_to_cpus(int node, unsigned long *buffer, int bufferlen); >>>>> int numa_max_node(void); >>>>> - int numa_num_configured_nodes(void); >>>>> int numa_available(void); >>>>> void numa_tonode_memory(void *start, size_t size, int node); >>>>> void numa_interleave_memory(void *start, size_t size, nodemask_t >>>>> *nodemask); >>>>> void numa_set_bind_policy(int strict); >>>>> - int numa_bitmask_isbitset(const struct bitmask *bmp, unsigned int n); >>>>> int numa_distance(int node1, int node2); >>>>> >>>>> >>>>> API v2 >>>>> ====== >>>>> >>>>> int numa_node_to_cpus(int node, struct bitmask *mask); >>>>> int numa_max_node(void); >>>>> int numa_num_configured_nodes(void); >>>>> int numa_available(void); >>>>> void numa_tonode_memory(void *start, size_t size, int node); >>>>> void numa_interleave_memory(void *start, size_t size, struct bitmask >>>>> *nodemask); >>>>> void numa_set_bind_policy(int strict) >>>>> int numa_bitmask_isbitset(const struct bitmask *bmp, unsigned int n); >>>>> int numa_distance(int node1, int node2); >>>>> >>>>> >>>>>> I'm running this through JPRT now. >>>>>> >>>>>> Thanks, >>>>>> David >>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> -Zhengyu >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 05/26/2017 08:34 PM, Gustavo Romero wrote: >>>>>>>> Hi Zhengyu, >>>>>>>> >>>>>>>> Thanks a lot for taking care of this corner case on PPC64. >>>>>>>> >>>>>>>> On 26-05-2017 10:41, Zhengyu Gu wrote: >>>>>>>>> This is a quick way to kill the symptom (or low risk?). I am not >>>>>>>>> sure if disabling NUMA is a better solution for this >>>>>>>>> circumstance? does 1 NUMA node = UMA? >>>>>>>> >>>>>>>> On PPC64, 1 (configured) NUMA does not necessarily imply UMA. 
In >>>>>>>> the POWER7 >>>>>>>> machine you found the corner case (I copy below the data you >>>>>>>> provided in the >>>>>>>> JBS - thanks for the additional information): >>>>>>>> >>>>>>>> $ numactl -H >>>>>>>> available: 2 nodes (0-1) >>>>>>>> node 0 cpus: 0 1 2 3 4 5 6 7 >>>>>>>> node 0 size: 0 MB >>>>>>>> node 0 free: 0 MB >>>>>>>> node 1 cpus: >>>>>>>> node 1 size: 7680 MB >>>>>>>> node 1 free: 1896 MB >>>>>>>> node distances: >>>>>>>> node 0 1 >>>>>>>> 0: 10 40 >>>>>>>> 1: 40 10 >>>>>>>> >>>>>>>> CPUs in node0 have no other alternative besides allocating memory >>>>>>>> from node1. In >>>>>>>> that case CPUs in node0 are always accessing remote memory from >>>>>>>> node1 in a constant >>>>>>>> distance (40), so in that case we could say that 1 NUMA >>>>>>>> (configured) node == UMA. >>>>>>>> Nonetheless, if you add CPUs in node1 (by filling up the other >>>>>>>> socket present in >>>>>>>> the board) you will end up with CPUs with different distances from >>>>>>>> the node that >>>>>>>> has configured memory (in that case, node1), so it yields a >>>>>>>> configuration where >>>>>>>> 1 NUMA (configured) != UMA (i.e. distances are not always equal to >>>>>>>> a single >>>>>>>> value). >>>>>>>> >>>>>>>> On the other hand, the POWER7 machine configuration in question is >>>>>>>> bad (and >>>>>>>> rare). It's indeed impacting the whole system performance and it >>>>>>>> would be >>>>>>>> reasonable to open the machine and move the memory module from >>>>>>>> bank related to >>>>>>>> node1 to bank related to node0, because all CPUs are accessing >>>>>>>> remote memory >>>>>>>> without any apparent necessity. Once you change it all CPUs will >>>>>>>> have local >>>>>>>> memory (distance = 10). >>>>>>>> >>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> >>>>>>>>> -Zhengyu >>>>>>>>> >>>>>>>>> On 05/26/2017 09:14 AM, Zhengyu Gu wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> There is a corner case that still failed after JDK-8175813. 
>>>>>>>>>> >>>>>>>>>> The system shows that it has multiple NUMA nodes, but only one is >>>>>>>>>> configured. Under this scenario, numa_interleave_memory() call will >>>>>>>>>> result "mbind: Invalid argument" message. >>>>>>>>>> >>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 >>>>>>>>>> Webrev: http://cr.openjdk.java.net/~zgu/8181055/webrev.00/ >>>>>>>> >>>>>>>> Looks like that even for that POWER7 rare numa topology >>>>>>>> numa_interleave_memory() >>>>>>>> should succeed without "mbind: Invalid argument" since the 'mask' >>>>>>>> argument >>>>>>>> should be already a mask with only nodes from which memory can be >>>>>>>> allocated, i.e. >>>>>>>> only a mask of configured nodes (even if mask contains only one >>>>>>>> configured node, >>>>>>>> as in >>>>>>>> http://cr.openjdk.java.net/~gromero/logs/numa_only_one_node.txt). >>>>>>>> >>>>>>>> Inspecting a little bit more, it looks like that the problem boils >>>>>>>> down to the >>>>>>>> fact that the JVM is passing to numa_interleave_memory() >>>>>>>> 'numa_all_nodes' [1] in >>>>>>>> Linux::numa_interleave_memory(). 
>>>>>>>> >>>>>>>> One would expect that 'numa_all_nodes' (which is api v1) would >>>>>>>> track the same >>>>>>>> information as 'numa_all_nodes_ptr' (api v2) [2], however there is >>>>>>>> a subtle but >>>>>>>> important difference: >>>>>>>> >>>>>>>> 'numa_all_nodes' is constructed assuming a consecutive node >>>>>>>> distribution [3]: >>>>>>>> >>>>>>>> 100 max = numa_num_configured_nodes(); >>>>>>>> 101 for (i = 0; i < max; i++) >>>>>>>> 102 nodemask_set_compat((nodemask_t >>>>>>>> *)&numa_all_nodes, i); >>>>>>>> >>>>>>>> >>>>>>>> whilst 'numa_all_nodes_ptr' is constructed parsing >>>>>>>> /proc/self/status [4]: >>>>>>>> >>>>>>>> 499 if (strncmp(buffer,"Mems_allowed:",13) == 0) { >>>>>>>> 500 numprocnode = read_mask(mask, >>>>>>>> numa_all_nodes_ptr); >>>>>>>> >>>>>>>> Thus for a topology like: >>>>>>>> >>>>>>>> available: 4 nodes (0-1,16-17) >>>>>>>> node 0 cpus: 0 8 16 24 32 >>>>>>>> node 0 size: 130706 MB >>>>>>>> node 0 free: 145 MB >>>>>>>> node 1 cpus: 40 48 56 64 72 >>>>>>>> node 1 size: 0 MB >>>>>>>> node 1 free: 0 MB >>>>>>>> node 16 cpus: 80 88 96 104 112 >>>>>>>> node 16 size: 130630 MB >>>>>>>> node 16 free: 529 MB >>>>>>>> node 17 cpus: 120 128 136 144 152 >>>>>>>> node 17 size: 0 MB >>>>>>>> node 17 free: 0 MB >>>>>>>> node distances: >>>>>>>> node 0 1 16 17 >>>>>>>> 0: 10 20 40 40 >>>>>>>> 1: 20 10 40 40 >>>>>>>> 16: 40 40 10 20 >>>>>>>> 17: 40 40 20 10 >>>>>>>> >>>>>>>> numa_all_nodes=0x3 => 0b11 (node0 and node1) >>>>>>>> numa_all_nodes_ptr=0x10001 => 0b10000000000000001 (node0 and node16) >>>>>>>> >>>>>>>> (Please, see details in the following gdb log: >>>>>>>> http://cr.openjdk.java.net/~gromero/logs/numa_api_v1_vs_api_v2.txt) >>>>>>>> >>>>>>>> In that case passing node0 and node1, although being suboptimal, >>>>>>>> does not bother >>>>>>>> mbind() since the following is satisfied: >>>>>>>> >>>>>>>> "[nodemask] must contain at least one node that is on-line, >>>>>>>> allowed by the >>>>>>>> process's current cpuset context, and contains 
memory." >>>>>>>> >>>>>>>> So back to the POWER7 case, I suppose that for: >>>>>>>> >>>>>>>> available: 2 nodes (0-1) >>>>>>>> node 0 cpus: 0 1 2 3 4 5 6 7 >>>>>>>> node 0 size: 0 MB >>>>>>>> node 0 free: 0 MB >>>>>>>> node 1 cpus: >>>>>>>> node 1 size: 7680 MB >>>>>>>> node 1 free: 1896 MB >>>>>>>> node distances: >>>>>>>> node 0 1 >>>>>>>> 0: 10 40 >>>>>>>> 1: 40 10 >>>>>>>> >>>>>>>> numa_all_nodes=0x1 => 0b01 (node0) >>>>>>>> numa_all_nodes_ptr=0x2 => 0b10 (node1) >>>>>>>> >>>>>>>> and hence numa_interleave_memory() gets nodemask = 0x1 (node0), >>>>>>>> which contains >>>>>>>> indeed no memory. That said, I don't know for sure if passing just >>>>>>>> node1 in the >>>>>>>> 'nodemask' will satisfy mbind() as in that case there are no cpus >>>>>>>> available in >>>>>>>> node1. >>>>>>>> >>>>>>>> In summing up, looks like that the root cause is not that >>>>>>>> numa_interleave_memory() >>>>>>>> does not accept only one configured node, but that the configured >>>>>>>> node being >>>>>>>> passed is wrong. I could not find a similar numa topology in my >>>>>>>> poll to test >>>>>>>> more, but it might be worth trying to write a small test using api >>>>>>>> v2 and >>>>>>>> 'numa_all_nodes_ptr' instead of 'numa_all_nodes' to see how >>>>>>>> numa_interleave_memory() >>>>>>>> goes in that machine :) If it behaves well, updating to api v2 >>>>>>>> would be a >>>>>>>> solution. >>>>>>>> >>>>>>>> HTH >>>>>>>> >>>>>>>> Regards, >>>>>>>> Gustavo >>>>>>>> >>>>>>>> >>>>>>>> [1] >>>>>>>> http://hg.openjdk.java.net/jdk10/hs/hotspot/file/4b93e1b1d5b7/src/os/linux/vm/os_linux.hpp#l274 >>>>>>>> >>>>>>>> [2] from libnuma.c:608 numa_all_nodes_ptr: "it only tracks nodes >>>>>>>> with memory from which the calling process can allocate." 
>>>>>>>> [3] >>>>>>>> https://github.com/numactl/numactl/blob/master/libnuma.c#L100-L102 >>>>>>>> [4] >>>>>>>> https://github.com/numactl/numactl/blob/master/libnuma.c#L499-L500 >>>>>>>> >>>>>>>> >>>>>>>>>> >>>>>>>>>> The system NUMA configuration: >>>>>>>>>> >>>>>>>>>> Architecture: ppc64 >>>>>>>>>> CPU op-mode(s): 32-bit, 64-bit >>>>>>>>>> Byte Order: Big Endian >>>>>>>>>> CPU(s): 8 >>>>>>>>>> On-line CPU(s) list: 0-7 >>>>>>>>>> Thread(s) per core: 4 >>>>>>>>>> Core(s) per socket: 1 >>>>>>>>>> Socket(s): 2 >>>>>>>>>> NUMA node(s): 2 >>>>>>>>>> Model: 2.1 (pvr 003f 0201) >>>>>>>>>> Model name: POWER7 (architected), altivec supported >>>>>>>>>> L1d cache: 32K >>>>>>>>>> L1i cache: 32K >>>>>>>>>> NUMA node0 CPU(s): 0-7 >>>>>>>>>> NUMA node1 CPU(s): >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> >>>>>>>>>> -Zhengyu >>>>>>>>> >>>>>>>> >>>>>> >>>>> >> > From david.holmes at oracle.com Thu Jun 1 01:54:58 2017 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Jun 2017 11:54:58 +1000 Subject: RFR(XS) 8181055: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: <13a703cc-9a82-0420-40c9-5c31290c78c4@redhat.com> References: <3a2a0ef7-5eac-b72c-5dc6-b7594dc70c07@redhat.com> <5928C9AA.6030004@linux.vnet.ibm.com> <38f323bc-7416-5c3d-c534-f5f17be4c7c6@redhat.com> <95147596-caf9-4e49-f954-29fa13df3a56@oracle.com> <592CA97D.4000802@linux.vnet.ibm.com> <937b01c9-5569-ce73-e7a3-ad38aed82ab3@redhat.com> <97b44e65-efea-2118-6740-2e197cd72d6b@redhat.com> <13a703cc-9a82-0420-40c9-5c31290c78c4@redhat.com> Message-ID: Hi Zhengyu, On 31/05/2017 11:23 PM, Zhengyu Gu wrote: > Hi David, > > It has two reviewers now. > > > Would you mind to sponsor this change? > I prepared the final patch: > http://cr.openjdk.java.net/~zgu/8181055/webrev.03/ Pushing now. I added Gustavo as a reviewer. 
I also had to tweak the patch as os_linux.hpp was modified by me yesterday :) Cheers, David > Thanks, > > -Zhengyu > > > > On 05/31/2017 09:04 AM, Aleksey Shipilev wrote: >> On 05/30/2017 01:59 PM, Zhengyu Gu wrote: >>> http://cr.openjdk.java.net/~zgu/8181055/webrev.02/ >> >> Looks fine to me too, given Gustavo's comments. >> >> -Aleksey >> >> From zgu at redhat.com Thu Jun 1 02:21:19 2017 From: zgu at redhat.com (Zhengyu Gu) Date: Wed, 31 May 2017 22:21:19 -0400 Subject: RFR(XS) 8181055: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: References: <3a2a0ef7-5eac-b72c-5dc6-b7594dc70c07@redhat.com> <5928C9AA.6030004@linux.vnet.ibm.com> <38f323bc-7416-5c3d-c534-f5f17be4c7c6@redhat.com> <95147596-caf9-4e49-f954-29fa13df3a56@oracle.com> <592CA97D.4000802@linux.vnet.ibm.com> <937b01c9-5569-ce73-e7a3-ad38aed82ab3@redhat.com> <97b44e65-efea-2118-6740-2e197cd72d6b@redhat.com> <13a703cc-9a82-0420-40c9-5c31290c78c4@redhat.com> Message-ID: <79206952-2e81-cf70-cb71-a96672aeab3a@redhat.com> Thank you, David. -Zhengyu On 05/31/2017 09:54 PM, David Holmes wrote: > Hi Zhengyu, > > On 31/05/2017 11:23 PM, Zhengyu Gu wrote: >> Hi David, >> >> It has two reviewers now. >> >> >> Would you mind to sponsor this change? >> I prepared the final patch: >> http://cr.openjdk.java.net/~zgu/8181055/webrev.03/ > > Pushing now. I added Gustavo as a reviewer. I also had to tweak the > patch as os_linux.hpp was modified by me yesterday :) > > Cheers, > David > >> Thanks, >> >> -Zhengyu >> >> >> >> On 05/31/2017 09:04 AM, Aleksey Shipilev wrote: >>> On 05/30/2017 01:59 PM, Zhengyu Gu wrote: >>>> http://cr.openjdk.java.net/~zgu/8181055/webrev.02/ >>> >>> Looks fine to me too, given Gustavo's comments. 
>>> >>> -Aleksey >>> >>> From abdul.kolarkunnu at oracle.com Thu Jun 1 04:31:06 2017 From: abdul.kolarkunnu at oracle.com (Muneer Kolarkunnu) Date: Wed, 31 May 2017 21:31:06 -0700 (PDT) Subject: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode In-Reply-To: <5f437ff4-98f5-91b6-608e-d7c2d6f1e469@oracle.com> References: <5f437ff4-98f5-91b6-608e-d7c2d6f1e469@oracle.com> Message-ID: <03724e7c-f656-473f-9a89-eb78073b518f@default> Hi David and Stuart, I recently reported one bug[1] for the same issue and listed which all test cases are failing with agentvm. I tested in Oracle.Linux.7.0 x64. [1] https://bugs.openjdk.java.net/browse/JDK-8180904 Regards, Muneer -----Original Message----- From: David Holmes Sent: Thursday, June 01, 2017 7:04 AM To: Stuart Monteith; hotspot-dev Source Developers Subject: Re: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode Hi Stuart, This looks like an accurate backport of the change. My only minor concern is if there may be tests in 8u that are no longer in 9 which may not work with agentvm mode. What platforms have you tested this on? Thanks, David On 31/05/2017 11:19 PM, Stuart Monteith wrote: > Hello, > Currently the jdk8u codebase fails some JTreg Hotspot tests when > running in the -agentvm mode. This is because the ProcessTools class > is not passing the classpath. There are substantial time savings to be > gained using -agentvm over -othervm. > > Fortunately, there was a fix for jdk9 (8077608) that has not been > backported to jdk8u. The details are as follows: > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/017937.h > tml > https://bugs.openjdk.java.net/browse/JDK-8077608 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/af2a1e9f08f3 > > The patch just needed a slight change, to remove the change to the > file "test/compiler/uncommontrap/TestUnstableIfTrap.java" as that test > doesn't exist on jdk8u. 
> > My colleague Ningsheng has kindly hosted the change here: > > http://cr.openjdk.java.net/~njian/8077608/webrev.00 > > > BR, > Stuart > From david.holmes at oracle.com Thu Jun 1 05:39:18 2017 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Jun 2017 15:39:18 +1000 Subject: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode In-Reply-To: <03724e7c-f656-473f-9a89-eb78073b518f@default> References: <5f437ff4-98f5-91b6-608e-d7c2d6f1e469@oracle.com> <03724e7c-f656-473f-9a89-eb78073b518f@default> Message-ID: <82c81d81-017f-fdc1-0e33-0f9cd5140e82@oracle.com> Thanks for that information Muneer, that is an unpleasant surprise. Stuart: I think 8180904 has to be fixed before this backport can take place. Thanks, David ----- On 1/06/2017 2:31 PM, Muneer Kolarkunnu wrote: > Hi David and Stuart, > > I recently reported one bug[1] for the same issue and listed which all test cases are failing with agentvm. > I tested in Oracle.Linux.7.0 x64. > > [1] https://bugs.openjdk.java.net/browse/JDK-8180904 > > Regards, > Muneer > > -----Original Message----- > From: David Holmes > Sent: Thursday, June 01, 2017 7:04 AM > To: Stuart Monteith; hotspot-dev Source Developers > Subject: Re: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode > > Hi Stuart, > > This looks like an accurate backport of the change. > > My only minor concern is if there may be tests in 8u that are no longer in 9 which may not work with agentvm mode. > > What platforms have you tested this on? > > Thanks, > David > > On 31/05/2017 11:19 PM, Stuart Monteith wrote: >> Hello, >> Currently the jdk8u codebase fails some JTreg Hotspot tests when >> running in the -agentvm mode. This is because the ProcessTools class >> is not passing the classpath. There are substantial time savings to be >> gained using -agentvm over -othervm. >> >> Fortunately, there was a fix for jdk9 (8077608) that has not been >> backported to jdk8u. 
The details are as follows: >> >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/017937.h >> tml >> https://bugs.openjdk.java.net/browse/JDK-8077608 >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/af2a1e9f08f3 >> >> The patch just needed a slight change, to remove the change to the >> file "test/compiler/uncommontrap/TestUnstableIfTrap.java" as that test >> doesn't exist on jdk8u. >> >> My colleague Ningsheng has kindly hosted the change here: >> >> http://cr.openjdk.java.net/~njian/8077608/webrev.00 >> >> >> BR, >> Stuart >> From kim.barrett at oracle.com Thu Jun 1 05:51:24 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Jun 2017 01:51:24 -0400 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> Message-ID: > On May 31, 2017, at 9:22 PM, David Holmes wrote: > > Hi Erik, > > A small change with big questions :) > > On 31/05/2017 11:45 PM, Erik Österlund wrote: >> Hi, >> It would be desirable to be able to use harmless C++ standard library headers like in the code as long as it does not add any link-time dependencies to the standard library. > > What does a 'harmless' C++ standard library header look like? Header-only (doesn't require linking), doesn't run afoul of our [vm]assert macro, and provides functionality we presently lack (or only handle poorly) and would not be easy to reproduce. The instigator for this is Erik and I are working on a project that needs information that is present in std::numeric_limits<> (provided by the <limits> header). Reproducing that functionality ourselves would require platform-specific code (with all the complexity that can imply). We'd really rather not re-discover and maintain information that is trivially accessible in every standard library.
>> This is possible on all supported platforms except the ones using the solaris studio compiler where we enforce -library=%none in both CFLAGS and LDFLAGS. >> I propose to remove the restriction from CFLAGS but keep it on LDFLAGS. >> I have consulted with the studio folks, and they think this is absolutely fine and thought that the choice of -library=stlport4 should be fine for our CFLAGS and is indeed what is already used in the gtest launcher. > > So what exactly does this mean? IIUC this allows you to use headers for, and compile against "STLport's Standard Library implementation version 4.5.3 instead of the default libCstd". But how do you then not need to link against libstlport.so ?? > > https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html > > "STLport is binary incompatible with the default libCstd. If you use the STLport implementation of the standard library, then you must compile and link all files, including third-party libraries, with the option -library=stlport4" It means we can only use header-only parts of the standard library. This was confirmed / suggested by the Studio folks Erik consulted, providing such limited access while continuing to constrain our dependency on the library. Figuring out what can be used will need to be determined on a case-by-case basis. Maybe we could just link with a standard library on Solaris too. So far as I can tell, Solaris is the only platform where we don't do that. But Erik is trying to be conservative. > There are lots of other comments in that document regarding STLport that makes me think that using it may be introducing a fragile dependency into the OpenJDK code! > "STLport is an open source product and does not guarantee compatibility across different releases. In other words, compiling with a future version of STLport may break applications compiled with STLport 4.5.3.
It also might not be possible to link binaries compiled using STLport 4.5.3 with binaries compiled using a future version of STLport." > > "Future releases of the compiler might not include STLport4. They might include only a later version of STLport. The compiler option -library=stlport4 might not be available in future releases, but could be replaced by an option referring to a later STLport version." > > None of that sounds very good to me. I don't see how this is any different from any other part of the process for using a different version of Solaris Studio. stlport4 is one of the three standard libraries that are presently included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the Studio folks which to use (for the purposes of our present project, we don't have any particular preference, so long as it works), and stlport4 seemed the right choice (libCstd was, I think, described as "ancient"). Perhaps more importantly, we already use stlport4, including linking against it, for gtest builds. Mixing two different standard libraries seems like a bad idea... > > Cheers, > David > > >> Webrev for jdk10-hs top level repository: >> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >> Webrev for jdk10-hs hotspot repository: >> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >> Testing: JPRT. >> Will need a sponsor. 
>> Thanks, >> /Erik From david.holmes at oracle.com Thu Jun 1 06:09:09 2017 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Jun 2017 16:09:09 +1000 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> Message-ID: Hi Kim, On 1/06/2017 3:51 PM, Kim Barrett wrote: >> On May 31, 2017, at 9:22 PM, David Holmes wrote: >> >> Hi Erik, >> >> A small change with big questions :) >> >> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>> Hi, >>> It would be desirable to be able to use harmless C++ standard library headers like in the code as long as it does not add any link-time dependencies to the standard library. >> >> What does a 'harmless' C++ standard library header look like? > > Header-only (doesn't require linking), doesn't run afoul of our > [vm]assert macro, and provides functionality we presently lack (or > only handle poorly) and would not be easy to reproduce. And how does one establish those properties exist for a given header file? Just use it and if no link errors then all is good? > The instigator for this is Erik and I are working on a project that > needs information that is present in std::numeric_limits<> (provided > by the <limits> header). Reproducing that functionality ourselves > would require platform-specific code (with all the complexity that can > imply). We'd really rather not re-discover and maintain information > that is trivially accessible in every standard library. Understood. I have no issue with using <limits> but am concerned by the state of stlport4. Can you use <limits> without changing -library=%none?
>>> I have consulted with the studio folks, and they think this is absolutely fine and thought that the choice of -library=stlport4 should be fine for our CFLAGS and is indeed what is already used in the gtest launcher. >> >> So what exactly does this mean? IIUC this allows you to use headers for, and compile against "STLport's Standard Library implementation version 4.5.3 instead of the default libCstd". But how do you then not need to link against libstlport.so ?? >> >> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >> >> "STLport is binary incompatible with the default libCstd. If you use the STLport implementation of the standard library, then you must compile and link all files, including third-party libraries, with the option -library=stlport4" > > It means we can only use header-only parts of the standard library. > This was confirmed / suggested by the Studio folks Erik consulted, > providing such limited access while continuing to constrain our > dependency on the library. Figuring out what can be used will need to > be determined on a case-by-case basis. Maybe we could just link with > a standard library on Solaris too. So far as I can tell, Solaris is > the only platform where we don't do that. But Erik is trying to be > conservative. Okay, but the docs don't seem to acknowledge the ability to use, but not link to, stlport4. >> There are lots of other comments in that document regarding STLport that make me think that using it may be introducing a fragile dependency into the OpenJDK code! >> "STLport is an open source product and does not guarantee compatibility across different releases. In other words, compiling with a future version of STLport may break applications compiled with STLport 4.5.3. It also might not be possible to link binaries compiled using STLport 4.5.3 with binaries compiled using a future version of STLport." >> >> "Future releases of the compiler might not include STLport4.
They might include only a later version of STLport. The compiler option -library=stlport4 might not be available in future releases, but could be replaced by an option referring to a later STLport version." >> >> None of that sounds very good to me. > > I don't see how this is any different from any other part of the > process for using a different version of Solaris Studio. Well we'd discover the problem when testing the compiler change, but my point was more to the fact that they don't seem very committed to this library - very much a "use at own risk" disclaimer. > stlport4 is one of the three standard libraries that are presently > included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the > Studio folks which to use (for the purposes of our present project, we > don't have any particular preference, so long as it works), and > stlport4 seemed the right choice (libCstd was, I think, described as > "ancient"). Perhaps more importantly, we already use stlport4, > including linking against it, for gtest builds. Mixing two different > standard libraries seems like a bad idea... So we have the choice of "ancient", "unsupported" or gcc :) My confidence in this has not increased :) What we do in gtest doesn't necessarily make things okay to do in the product. If this were part of a compiler upgrade process we'd be comparing binaries with old flag and new to ensure there are no unexpected consequences. Cheers, David >> >> Cheers, >> David >> >> >>> Webrev for jdk10-hs top level repository: >>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>> Webrev for jdk10-hs hotspot repository: >>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>> Testing: JPRT. >>> Will need a sponsor. 
>>> Thanks, >>> /Erik > > From volker.simonis at gmail.com Thu Jun 1 07:05:32 2017 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 1 Jun 2017 09:05:32 +0200 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> Message-ID: On Thu, Jun 1, 2017 at 7:51 AM, Kim Barrett wrote: >> On May 31, 2017, at 9:22 PM, David Holmes wrote: >> >> Hi Erik, >> >> A small change with big questions :) >> >> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>> Hi, >>> It would be desirable to be able to use harmless C++ standard library headers like <limits> in the code as long as it does not add any link-time dependencies to the standard library. >> >> What does a 'harmless' C++ standard library header look like? > > Header-only (doesn't require linking), doesn't run afoul of our > [vm]assert macro, and provides functionality we presently lack (or > only handle poorly) and would not be easy to reproduce. > > The instigator for this is Erik and I are working on a project that > needs information that is present in std::numeric_limits<> (provided > by the <limits> header). Reproducing that functionality ourselves > would require platform-specific code (with all the complexity that can > imply). We'd really rather not re-discover and maintain information > that is trivially accessible in every standard library. > Hi Kim, Erik, can you please explain why you only need this information on Solaris? I'm just a little concerned that if you start this for "Solaris only" it will sooner or later be needed on other platforms as well. As David already asked, how do you ensure to only use functionality from the C++ standard library header which doesn't require link support? Thanks, Volker >>> This is possible on all supported platforms except the ones using the solaris studio compiler where we enforce -library=%none in both CFLAGS and LDFLAGS.
>>> I propose to remove the restriction from CFLAGS but keep it on LDFLAGS. >>> I have consulted with the studio folks, and they think this is absolutely fine and thought that the choice of -library=stlport4 should be fine for our CFLAGS and is indeed what is already used in the gtest launcher. >> >> So what exactly does this mean? IIUC this allows you to use headers for, and compile against "STLport's Standard Library implementation version 4.5.3 instead of the default libCstd". But how do you then not need to link against libstlport.so ?? >> >> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >> >> "STLport is binary incompatible with the default libCstd. If you use the STLport implementation of the standard library, then you must compile and link all files, including third-party libraries, with the option -library=stlport4" > > It means we can only use header-only parts of the standard library. > This was confirmed / suggested by the Studio folks Erik consulted, > providing such limited access while continuing to constrain our > dependency on the library. Figuring out what can be used will need to > be determined on a case-by-case basis. Maybe we could just link with > a standard library on Solaris too. So far as I can tell, Solaris is > the only platform where we don't do that. But Erik is trying to be > conservative. > >> There are lots of other comments in that document regarding STLport that make me think that using it may be introducing a fragile dependency into the OpenJDK code! >> >> "STLport is an open source product and does not guarantee compatibility across different releases. In other words, compiling with a future version of STLport may break applications compiled with STLport 4.5.3. It also might not be possible to link binaries compiled using STLport 4.5.3 with binaries compiled using a future version of STLport." >> >> "Future releases of the compiler might not include STLport4. They might include only a later version of STLport.
The compiler option -library=stlport4 might not be available in future releases, but could be replaced by an option referring to a later STLport version." >> >> None of that sounds very good to me. > > I don't see how this is any different from any other part of the > process for using a different version of Solaris Studio. > > stlport4 is one of the three standard libraries that are presently > included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the > Studio folks which to use (for the purposes of our present project, we > don't have any particular preference, so long as it works), and > stlport4 seemed the right choice (libCstd was, I think, described as > "ancient"). Perhaps more importantly, we already use stlport4, > including linking against it, for gtest builds. Mixing two different > standard libraries seems like a bad idea... > >> >> Cheers, >> David >> >> >>> Webrev for jdk10-hs top level repository: >>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>> Webrev for jdk10-hs hotspot repository: >>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>> Testing: JPRT. >>> Will need a sponsor. >>> Thanks, >>> /Erik > > From kim.barrett at oracle.com Thu Jun 1 07:18:09 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Jun 2017 03:18:09 -0400 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> Message-ID: > On Jun 1, 2017, at 3:05 AM, Volker Simonis wrote: > > On Thu, Jun 1, 2017 at 7:51 AM, Kim Barrett wrote: >> The instigator for this is Erik and I are working on a project that >> needs information that is present in std::numeric_limits<> (provided >> by the <limits> header). Reproducing that functionality ourselves >> would require platform-specific code (with all the complexity that can >> imply).
We'd really rather not re-discover and maintain information >> that is trivially accessible in every standard library. >> > > Hi Kim, Erik, > > can you please explain why you only need this information on Solaris? > > I'm just a little concerned that if you start this for "Solaris only" > it will sooner or later be needed on other platforms as well. The change is only to Solaris because the present Solaris build configuration is the only one which doesn't already provide the access we want. From kim.barrett at oracle.com Thu Jun 1 08:18:36 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Jun 2017 04:18:36 -0400 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: <592EDADC.8040709@oracle.com> References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> Message-ID: > On May 31, 2017, at 11:01 AM, Erik Österlund wrote: > > Hi, > > Excellent. In that case I would like reviews on this patch that does exactly that: > http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ > > Testing: JPRT > > Need a sponsor. FWIW, another option would be to remove all the min/max stuff here, and add -DNOMINMAX (or whatever the proper syntax is) in the Windows build configuration. I did some later research on JDK-8161145, and defining that macro seems to be the "official" way to suppress those macros in the offending windows header. The proposed change also seems fine to me, not surprisingly. Under the circumstances, not sure I should be counted as a reviewer :) I can sponsor though.
From erik.osterlund at oracle.com Thu Jun 1 09:36:00 2017 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 1 Jun 2017 11:36:00 +0200 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> Message-ID: <592FE000.50003@oracle.com> Hi David, On 2017-06-01 08:09, David Holmes wrote: > Hi Kim, > > On 1/06/2017 3:51 PM, Kim Barrett wrote: >>> On May 31, 2017, at 9:22 PM, David Holmes >>> wrote: >>> >>> Hi Erik, >>> >>> A small change with big questions :) >>> >>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>> Hi, >>>> It would be desirable to be able to use harmless C++ standard >>>> library headers like <limits> in the code as long as it does not >>>> add any link-time dependencies to the standard library. >>> >>> What does a 'harmless' C++ standard library header look like? >> >> Header-only (doesn't require linking), doesn't run afoul of our >> [vm]assert macro, and provides functionality we presently lack (or >> only handle poorly) and would not be easy to reproduce. > > And how does one establish those properties exist for a given header > file? Just use it and if no link errors then all is good? Objects from headers that are not ODR-used, such as those in constant-folded expressions, do not impose link-time dependencies on C++ libraries. The -xnolib that we already have in the LDFLAGS will catch any accidental ODR-uses of C++ objects, and the JVM will not build if that happens. As for external headers being included and not playing nicely with macros, this has to be evaluated on a case-by-case basis. Note that this is a problem that occurs when using system headers (that we are already using), just as it is for using C++ standard library headers. We even run into that in our own JVM when e.g. the min/max macros occasionally slap us gently in the face from time to time.
> >> The instigator for this is Erik and I are working on a project that >> needs information that is present in std::numeric_limits<> (provided >> by the <limits> header). Reproducing that functionality ourselves >> would require platform-specific code (with all the complexity that can >> imply). We'd really rather not re-discover and maintain information >> that is trivially accessible in every standard library. > > Understood. I have no issue with using <limits> but am concerned by > the state of stlport4. Can you use <limits> without changing > -library=%none? No, that is precisely why we are here. > >> >>>> This is possible on all supported platforms except the ones using >>>> the solaris studio compiler where we enforce -library=%none in both >>>> CFLAGS and LDFLAGS. >>>> I propose to remove the restriction from CFLAGS but keep it on >>>> LDFLAGS. >>>> I have consulted with the studio folks, and they think this is >>>> absolutely fine and thought that the choice of -library=stlport4 >>>> should be fine for our CFLAGS and is indeed what is already used in >>>> the gtest launcher. >>> >>> So what exactly does this mean? IIUC this allows you to use headers >>> for, and compile against "STLport's Standard Library implementation >>> version 4.5.3 instead of the default libCstd". But how do you then >>> not need to link against libstlport.so ?? >>> >>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>> >>> "STLport is binary incompatible with the default libCstd. If you use >>> the STLport implementation of the standard library, then you must >>> compile and link all files, including third-party libraries, with >>> the option -library=stlport4" >> >> It means we can only use header-only parts of the standard library. >> This was confirmed / suggested by the Studio folks Erik consulted, >> providing such limited access while continuing to constrain our >> dependency on the library. Figuring out what can be used will need to >> be determined on a case-by-case basis.
Maybe we could just link with >> a standard library on Solaris too. So far as I can tell, Solaris is >> the only platform where we don't do that. But Erik is trying to be >> conservative. > > Okay, but the docs don't seem to acknowledge the ability to use, but > not link to, stlport4. Objects that are not ODR-used do not require linkage. (http://en.cppreference.com/w/cpp/language/definition) I have confirmed directly with the studio folks to be certain that accidental linkage would fail by keeping our existing guards in the LDFLAGS rather than the CFLAGS. This is also reasonably well documented already (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). > >>> There are lots of other comments in that document regarding STLport >>> that make me think that using it may be introducing a fragile >>> dependency into the OpenJDK code! >>> >>> "STLport is an open source product and does not guarantee >>> compatibility across different releases. In other words, compiling >>> with a future version of STLport may break applications compiled >>> with STLport 4.5.3. It also might not be possible to link binaries >>> compiled using STLport 4.5.3 with binaries compiled using a future >>> version of STLport." >>> >>> "Future releases of the compiler might not include STLport4. They >>> might include only a later version of STLport. The compiler option >>> -library=stlport4 might not be available in future releases, but >>> could be replaced by an option referring to a later STLport version." >>> >>> None of that sounds very good to me. >> >> I don't see how this is any different from any other part of the >> process for using a different version of Solaris Studio. > > Well we'd discover the problem when testing the compiler change, but > my point was more to the fact that they don't seem very committed to > this library - very much a "use at own risk" disclaimer.
If we eventually need to use something more modern for features that have not been around for a decade, like C++11 features, then we can change the standard library when that day comes. > >> stlport4 is one of the three standard libraries that are presently >> included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the >> Studio folks which to use (for the purposes of our present project, we >> don't have any particular preference, so long as it works), and >> stlport4 seemed the right choice (libCstd was, I think, described as >> "ancient"). Perhaps more importantly, we already use stlport4, >> including linking against it, for gtest builds. Mixing two different >> standard libraries seems like a bad idea... > > So we have the choice of "ancient", "unsupported" or gcc :) > > My confidence in this has not increased :) I trust that e.g. std::numeric_limits::is_signed in the standard libraries has more mileage than whatever simplified rewrite of that we try to replicate in the JVM. So it is not obvious to me that we should have less confidence in the same functionality from a standard library shipped together with the compiler we are using and that has already been used and tested in a variety of C++ applications for over a decade compared to the alternative of reinventing it ourselves. > What we do in gtest doesn't necessarily make things okay to do in the > product. > > If this were part of a compiler upgrade process we'd be comparing > binaries with old flag and new to ensure there are no unexpected > consequences. I would not compare including <limits> to a compiler upgrade process as we are not changing the compiler and hence not the way code is generated, but rather compare it to including a new system header that has previously not been included to use a constant folded expression from that header that has been used and tested for a decade. At least that is how I think of it.
Thanks, /Erik > > Cheers, > David > >>> >>> Cheers, >>> David >>> >>> >>>> Webrev for jdk10-hs top level repository: >>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>> Webrev for jdk10-hs hotspot repository: >>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>> Testing: JPRT. >>>> Will need a sponsor. >>>> Thanks, >>>> /Erik >> >> From per.liden at oracle.com Thu Jun 1 09:49:30 2017 From: per.liden at oracle.com (Per Liden) Date: Thu, 1 Jun 2017 11:49:30 +0200 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> Message-ID: Hi, On 2017-06-01 10:18, Kim Barrett wrote: >>> On May 31, 2017, at 11:01 AM, Erik Österlund wrote: >>> >>> Hi, >>> >>> Excellent. In that case I would like reviews on this patch that does exactly that: >>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ Looks good, but can we please add a comment here describing why we're doing this. It's not obvious :) >> >> Testing: JPRT >> >> Need a sponsor. > > FWIW, another option would be to remove all the min/max stuff here, > and add -DNOMINMAX (or whatever the proper syntax is) in the Windows > build configuration. I did some later research on JDK-8161145, and > defining that macro seems to be the "official" way to suppress those > macros in the offending windows header. We have a similar problem with min/max on Solaris, so it's not only a Windows issue. The approach Erik's patch takes seems more portable. cheers, Per > > The proposed change also seems fine to me, not surprisingly.
Under the > circumstances, not sure I should be counted as a reviewer :) > > I can sponsor though. > From kim.barrett at oracle.com Thu Jun 1 09:54:20 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Jun 2017 05:54:20 -0400 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> Message-ID: > On Jun 1, 2017, at 5:49 AM, Per Liden wrote: >> >> FWIW, another option would be to remove all the min/max stuff here, >> and add -DNOMINMAX (or whatever the proper syntax is) in the Windows >> build configuration. I did some later research on JDK-8161145, and >> defining that macro seems to be the "official" way to suppress those >> macros in the offending windows header. > > We have a similar problem with min/max on Solaris, so it's not only a Windows issue. The approach Erik's patch takes seems more portable. Oh, I didn't know Solaris has this bug too. I retract the -DNOMINMAX suggestion. Go with the blue paint approach. > cheers, > Per > >> >> The proposed change also seems fine to me, not surprisingly. Under the >> circumstances, not sure I should be counted as a reviewer :) >> >> I can sponsor though.
From erik.osterlund at oracle.com Thu Jun 1 10:23:30 2017 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 1 Jun 2017 12:23:30 +0200 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> Message-ID: <592FEB22.2090504@oracle.com> On 2017-06-01 11:54, Kim Barrett wrote: >> On Jun 1, 2017, at 5:49 AM, Per Liden wrote: >>> FWIW, another option would be to remove all the min/max stuff here, >>> and add -DNOMINMAX (or whatever the proper syntax is) in the Windows >>> build configuration. I did some later research on JDK-8161145, and >>> defining that macro seems to be the "official" way to suppress those >>> macros in the offending windows header. >> We have a similar problem with min/max on Solaris, so it's not only a Windows issue. The approach Erik's patch takes seems more portable. > Oh, I didn't know Solaris has this bug too. I retract the -DNOMINMAX suggestion. Go with the blue paint approach. It is a problem for any code that uses the min/max identifiers, such as e.g. std::numeric_limits::max(). That is why this global macro should prevent bad uses of other global macros rather than the identifiers. Thanks, /Erik >> cheers, >> Per >> >>> The proposed change also seems fine to me, not surprisingly. Under the >>> circumstances, not sure I should be counted as a reviewer :) >>> >>> I can sponsor though.
> From erik.osterlund at oracle.com Thu Jun 1 10:34:40 2017 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 1 Jun 2017 12:34:40 +0200 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> Message-ID: <592FEDC0.2090109@oracle.com> Hi Per, On 2017-06-01 11:49, Per Liden wrote: > Hi, > > On 2017-06-01 10:18, Kim Barrett wrote: >>> On May 31, 2017, at 11:01 AM, Erik Österlund >>> wrote: >>> >>> Hi, >>> >>> Excellent. In that case I would like reviews on this patch that does >>> exactly that: >>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ > > Looks good, but can we please add a comment here describing why we're > doing this. It's not obvious :) Thank you for the review. Here is a webrev with the added comment: http://cr.openjdk.java.net/~eosterlund/8161145/webrev.01/ Thanks, /Erik > >>> >>> Testing: JPRT >>> >>> Need a sponsor. >> >> FWIW, another option would be to remove all the min/max stuff here, >> and add -DNOMINMAX (or whatever the proper syntax is) in the Windows >> build configuration. I did some later research on JDK-8161145, and >> defining that macro seems to be the "official" way to suppress those >> macros in the offending windows header. > > We have a similar problem with min/max on Solaris, so it's not only a >> Windows issue. The approach Erik's patch takes seems more portable. > > cheers, > Per > >> >> The proposed change also seems fine to me, not surprisingly. Under the >> circumstances, not sure I should be counted as a reviewer :) >> >> I can sponsor though.
>> From per.liden at oracle.com Thu Jun 1 11:17:16 2017 From: per.liden at oracle.com (Per Liden) Date: Thu, 1 Jun 2017 13:17:16 +0200 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: <592FEDC0.2090109@oracle.com> References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> <592FEDC0.2090109@oracle.com> Message-ID: On 2017-06-01 12:34, Erik Österlund wrote: > Hi Per, > > On 2017-06-01 11:49, Per Liden wrote: >> Hi, >> >> On 2017-06-01 10:18, Kim Barrett wrote: >>>> On May 31, 2017, at 11:01 AM, Erik Österlund >>>> wrote: >>>> >>>> Hi, >>>> >>>> Excellent. In that case I would like reviews on this patch that does >>>> exactly that: >>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ >> >> Looks good, but can we please add a comment here describing why we're >> doing this. It's not obvious :) > > Thank you for the review. Here is a webrev with the added comment: > http://cr.openjdk.java.net/~eosterlund/8161145/webrev.01/ Looks good, thanks! /Per > > Thanks, > /Erik > >> >>>> >>>> Testing: JPRT >>>> >>>> Need a sponsor. >>> >>> FWIW, another option would be to remove all the min/max stuff here, >>> and add -DNOMINMAX (or whatever the proper syntax is) in the Windows >>> build configuration. I did some later research on JDK-8161145, and >>> defining that macro seems to be the "official" way to suppress those >>> macros in the offending windows header. >> >> We have a similar problem with min/max on Solaris, so it's not only a >> Windows issue. The approach Erik's patch takes seems more portable.
>> >> cheers, >> Per >> >>> >>> The proposed change also seems fine to me, not surprisingly. Under the >>> circumstances, not sure I should be counted as a reviewer :) >>> >>> I can sponsor though. >>> > From erik.osterlund at oracle.com Thu Jun 1 12:04:35 2017 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 1 Jun 2017 14:04:35 +0200 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> <592FEDC0.2090109@oracle.com> Message-ID: <593002D3.7080600@oracle.com> Thank you. /Erik On 2017-06-01 13:17, Per Liden wrote: > On 2017-06-01 12:34, Erik Österlund wrote: >> Hi Per, >> >> On 2017-06-01 11:49, Per Liden wrote: >>> Hi, >>> >>> On 2017-06-01 10:18, Kim Barrett wrote: >>>>> On May 31, 2017, at 11:01 AM, Erik Österlund >>>>> wrote: >>>>> >>>>> Hi, >>>>> >>>>> Excellent. In that case I would like reviews on this patch that does >>>>> exactly that: >>>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ >>> >>> Looks good, but can we please add a comment here describing why we're >>> doing this. It's not obvious :) >> >> Thank you for the review. Here is a webrev with the added comment: >> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.01/ > > Looks good, thanks! > > /Per > >> >> Thanks, >> /Erik >> >>> >>>>> >>>>> Testing: JPRT >>>>> >>>>> Need a sponsor. >>>> >>>> FWIW, another option would be to remove all the min/max stuff here, >>>> and add -DNOMINMAX (or whatever the proper syntax is) in the Windows >>>> build configuration.
I did some later research on JDK-8161145, and >>>> defining that macro seems to be the "official" way to suppress those >>>> macros in the offending windows header. >>> >>> We have a similar problem with min/max on Solaris, so it's not only a >>> Windows issue. The approach Erik's patch takes seems more portable. >>> >>> cheers, >>> Per >>> >>>> >>>> The proposed change also seems fine to me, not surprisingly. Under the >>>> circumstances, not sure I should be counted as a reviewer :) >>>> >>>> I can sponsor though. >>>> >> From david.holmes at oracle.com Thu Jun 1 12:33:45 2017 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Jun 2017 22:33:45 +1000 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: <592FE000.50003@oracle.com> References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> Message-ID: Hi Erik, Just to be clear it is not the use of <limits> that I am concerned about, it is the -library=stlport4. It is the use of that flag that I would want to check in terms of having no effect on any existing code generation. I'm finding the actual build situation very confusing. It seems to me in looking at the hotspot build files and the top-level build files that -xnolib is used for C++ compilation & linking whereas -library=%none is used for C compilation & linking. But the change is being applied to $2JVM_CFLAGS which one would think is for C compilation but we don't have $2JVM_CXXFLAGS, so it seems to be used for both!
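The intent of the change, as the thread describes it, is a compile/link split along these lines (a sketch only; the flag values come from the discussion, while the variable names are hypothetical and not the actual build-system code):

```shell
# Compile step: make the Solaris Studio C++ standard library headers
# (e.g. <limits>) visible, using the stlport4 implementation.
JVM_CFLAGS="-library=stlport4"

# Link step: still refuse to pull in any C++ runtime library, so an
# accidental ODR-use of a library object fails the build at link time.
JVM_LDFLAGS="-library=%none -xnolib"

echo "CFLAGS:  $JVM_CFLAGS"
echo "LDFLAGS: $JVM_LDFLAGS"
```

The asymmetry is deliberate: the headers become usable, while the link-time guard against depending on libstlport.so stays in place.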
David On 1/06/2017 7:36 PM, Erik Österlund wrote: > Hi David, > > On 2017-06-01 08:09, David Holmes wrote: >> Hi Kim, >> >> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>> On May 31, 2017, at 9:22 PM, David Holmes >>>> wrote: >>>> >>>> Hi Erik, >>>> >>>> A small change with big questions :) >>>> >>>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>>> Hi, >>>>> It would be desirable to be able to use harmless C++ standard >>>>> library headers like <limits> in the code as long as it does not >>>>> add any link-time dependencies to the standard library. >>>> >>>> What does a 'harmless' C++ standard library header look like? >>> >>> Header-only (doesn't require linking), doesn't run afoul of our >>> [vm]assert macro, and provides functionality we presently lack (or >>> only handle poorly) and would not be easy to reproduce. >> >> And how does one establish those properties exist for a given header >> file? Just use it and if no link errors then all is good? > > Objects from headers that are not ODR-used, such as those in constant-folded > expressions, do not impose link-time dependencies on C++ libraries. > The -xnolib that we already have in the LDFLAGS will catch any > accidental ODR-uses of C++ objects, and the JVM will not build if that > happens. > > As for external headers being included and not playing nicely with > macros, this has to be evaluated on a case-by-case basis. Note that this > is a problem that occurs when using system headers (that we are already > using), just as it is for using C++ standard library headers. We even run > into that in our own JVM when e.g. the min/max macros occasionally slap > us gently in the face from time to time. > >> >>> The instigator for this is Erik and I are working on a project that >>> needs information that is present in std::numeric_limits<> (provided >>> by the <limits> header). Reproducing that functionality ourselves >>> would require platform-specific code (with all the complexity that can >>> imply).
We'd really rather not re-discover and maintain information >>> that is trivially accessible in every standard library. >> >> Understood. I have no issue with using <limits> but am concerned by >> the state of stlport4. Can you use <limits> without changing >> -library=%none? > > No, that is precisely why we are here. > >> >>>>> This is possible on all supported platforms except the ones using >>>>> the solaris studio compiler where we enforce -library=%none in both >>>>> CFLAGS and LDFLAGS. >>>>> I propose to remove the restriction from CFLAGS but keep it on >>>>> LDFLAGS. >>>>> I have consulted with the studio folks, and they think this is >>>>> absolutely fine and thought that the choice of -library=stlport4 >>>>> should be fine for our CFLAGS and is indeed what is already used in >>>>> the gtest launcher. >>>> >>>> So what exactly does this mean? IIUC this allows you to use headers >>>> for, and compile against "STLport's Standard Library implementation >>>> version 4.5.3 instead of the default libCstd". But how do you then >>>> not need to link against libstlport.so ?? >>>> >>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>> >>>> "STLport is binary incompatible with the default libCstd. If you use >>>> the STLport implementation of the standard library, then you must >>>> compile and link all files, including third-party libraries, with >>>> the option -library=stlport4" >>> >>> It means we can only use header-only parts of the standard library. >>> This was confirmed / suggested by the Studio folks Erik consulted, >>> providing such limited access while continuing to constrain our >>> dependency on the library. Figuring out what can be used will need to >>> be determined on a case-by-case basis. Maybe we could just link with >>> a standard library on Solaris too. So far as I can tell, Solaris is >>> the only platform where we don't do that. But Erik is trying to be >>> conservative.
>> >> Okay, but the docs don't seem to acknowledge the ability to use, but >> not link to, stlport4. > > Objects that are not ODR-used do not require linkage. > (http://en.cppreference.com/w/cpp/language/definition) > I have confirmed directly with the studio folks to be certain that > accidental linkage would fail by keeping our existing guards in the > LDFLAGS rather than the CFLAGS. > This is also reasonably well documented already > (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). > >> >>>> There are lots of other comments in that document regarding STLport >>>> that makes me think that using it may be introducing a fragile >>>> dependency into the OpenJDK code! >>>> >>>> "STLport is an open source product and does not guarantee >>>> compatibility across different releases. In other words, compiling >>>> with a future version of STLport may break applications compiled >>>> with STLport 4.5.3. It also might not be possible to link binaries >>>> compiled using STLport 4.5.3 with binaries compiled using a future >>>> version of STLport." >>>> >>>> "Future releases of the compiler might not include STLport4. They >>>> might include only a later version of STLport. The compiler option >>>> -library=stlport4 might not be available in future releases, but >>>> could be replaced by an option referring to a later STLport version." >>>> >>>> None of that sounds very good to me. >>> >>> I don't see how this is any different from any other part of the >>> process for using a different version of Solaris Studio. >> >> Well we'd discover the problem when testing the compiler change, but >> my point was more to the fact that they don't seem very committed to >> this library - very much a "use at own risk" disclaimer. > > If we eventually need to use something more modern for features that > have not been around for a decade, like C++11 features, then we can > change standard library when that day comes.
> >> >>> stlport4 is one of the three standard libraries that are presently >>> included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the >>> Studio folks which to use (for the purposes of our present project, we >>> don't have any particular preference, so long as it works), and >>> stlport4 seemed the right choice (libCstd was, I think, described as >>> "ancient"). Perhaps more importantly, we already use stlport4, >>> including linking against it, for gtest builds. Mixing two different >>> standard libraries seems like a bad idea... >> >> So we have the choice of "ancient", "unsupported" or gcc :) >> >> My confidence in this has not increased :) > > I trust that e.g. std::numeric_limits::is_signed in the standard > libraries has more mileage than whatever simplified rewrite of that we > try to replicate in the JVM. So it is not obvious to me that we should > have less confidence in the same functionality from a standard library > shipped together with the compiler we are using and that has already > been used and tested in a variety of C++ applications for over a decade > compared to the alternative of reinventing it ourselves. > >> What we do in gtest doesn't necessarily make things okay to do in the >> product. >> >> If this were part of a compiler upgrade process we'd be comparing >> binaries with old flag and new to ensure there are no unexpected >> consequences. > > I would not compare including <limits> to a compiler upgrade process as > we are not changing the compiler and hence not the way code is > generated, but rather compare it to including a new system header that > has previously not been included to use a constant folded expression > from that header that has been used and tested for a decade. At least > that is how I think of it.
> > Thanks, > /Erik > >> >> Cheers, >> David >> >>>> >>>> Cheers, >>>> David >>>> >>>> >>>>> Webrev for jdk10-hs top level repository: >>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>> Webrev for jdk10-hs hotspot repository: >>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>> Testing: JPRT. >>>>> Will need a sponsor. >>>>> Thanks, >>>>> /Erik >>> >>> > From stuart.monteith at linaro.org Thu Jun 1 13:26:41 2017 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Thu, 1 Jun 2017 14:26:41 +0100 Subject: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode In-Reply-To: <82c81d81-017f-fdc1-0e33-0f9cd5140e82@oracle.com> References: <5f437ff4-98f5-91b6-608e-d7c2d6f1e469@oracle.com> <03724e7c-f656-473f-9a89-eb78073b518f@default> <82c81d81-017f-fdc1-0e33-0f9cd5140e82@oracle.com> Message-ID: Hello, I tested this on x86 and aarch64. Muneer's bug is an accurate description of the failing tests. I'm not sure what you mean by "8180904 has to be fixed before this backport", as the backport is the fix for the issue Muneer presented. JDK9 doesn't exhibit these failures as it has the fix to be backported. 
Comparing the runs without and with the patch - this is on x86 - I get essentially the same on aarch64:

0: JTwork-without pass: 680; fail: 44; error: 3; not run: 4
1: JTwork-with pass: 718; fail: 6; error: 2; not run: 5

    0     1    Test
fail  pass   compiler/jsr292/PollutedTrapCounts.java
fail  pass   compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java#id0
fail  pass   compiler/loopopts/UseCountedLoopSafepoints.java
pass  fail   compiler/rtm/locking/TestRTMLockingThreshold.java#id0
fail  pass   compiler/types/correctness/OffTest.java#id0
fail  pass   gc/TestVerifySilently.java
fail  pass   gc/TestVerifySubSet.java
fail  pass   gc/class_unloading/TestCMSClassUnloadingEnabledHWM.java
fail  pass   gc/class_unloading/TestG1ClassUnloadingHWM.java
fail  pass   gc/ergonomics/TestDynamicNumberOfGCThreads.java
fail  pass   gc/g1/TestEagerReclaimHumongousRegions.java
fail  pass   gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java
fail  pass   gc/g1/TestEagerReclaimHumongousRegionsWithRefs.java
fail  pass   gc/g1/TestG1TraceEagerReclaimHumongousObjects.java
fail  pass   gc/g1/TestGCLogMessages.java
fail  pass   gc/g1/TestHumongousAllocInitialMark.java
fail  pass   gc/g1/TestPrintGCDetails.java
fail  pass   gc/g1/TestPrintRegionRememberedSetInfo.java
fail  pass   gc/g1/TestShrinkAuxiliaryData00.java
fail  pass   gc/g1/TestShrinkAuxiliaryData05.java
fail  pass   gc/g1/TestShrinkAuxiliaryData10.java
fail  pass   gc/g1/TestShrinkAuxiliaryData15.java
fail  pass   gc/g1/TestShrinkAuxiliaryData20.java
fail  pass   gc/g1/TestShrinkAuxiliaryData25.java
fail  pass   gc/g1/TestShrinkDefragmentedHeap.java#id0
fail  pass   gc/g1/TestStringDeduplicationAgeThreshold.java
fail  pass   gc/g1/TestStringDeduplicationFullGC.java
fail  pass   gc/g1/TestStringDeduplicationInterned.java
fail  pass   gc/g1/TestStringDeduplicationPrintOptions.java
fail  pass   gc/g1/TestStringDeduplicationTableRehash.java
fail  pass   gc/g1/TestStringDeduplicationTableResize.java
fail  pass   gc/g1/TestStringDeduplicationYoungGC.java
fail  pass   gc/g1/TestStringSymbolTableStats.java
fail  pass   gc/logging/TestGCId.java
fail  pass   gc/whitebox/TestWBGC.java
fail  pass   runtime/ErrorHandling/TestOnOutOfMemoryError.java#id0
fail  pass   runtime/NMT/JcmdWithNMTDisabled.java
fail  pass   runtime/memory/ReserveMemory.java
pass  ---    sanity/WhiteBox.java
fail  pass   serviceability/attach/AttachWithStalePidFile.java
fail  pass   serviceability/jvmti/TestRedefineWithUnresolvedClass.java
error pass   serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java#id0

I find that compiler/rtm/locking/TestRTMLockingThreshold.java produces inconsistent results on my machine, regardless of whether or not the patch is applied. BR Stuart On 1 June 2017 at 06:39, David Holmes wrote: > Thanks for that information Muneer, that is an unpleasant surprise. > > Stuart: I think 8180904 has to be fixed before this backport can take place. > > Thanks, > David > ----- > > > On 1/06/2017 2:31 PM, Muneer Kolarkunnu wrote: >> >> Hi David and Stuart, >> >> I recently reported one bug[1] for the same issue and listed which all >> test cases are failing with agentvm. >> I tested in Oracle.Linux.7.0 x64. >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8180904 >> >> Regards, >> Muneer >> >> -----Original Message----- >> From: David Holmes >> Sent: Thursday, June 01, 2017 7:04 AM >> To: Stuart Monteith; hotspot-dev Source Developers >> Subject: Re: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg >> tests to run in agentvm mode >> >> Hi Stuart, >> >> This looks like an accurate backport of the change. >> >> My only minor concern is if there may be tests in 8u that are no longer in >> 9 which may not work with agentvm mode. >> >> What platforms have you tested this on? >> >> Thanks, >> David >> >> On 31/05/2017 11:19 PM, Stuart Monteith wrote: >>> >>> Hello, >>> Currently the jdk8u codebase fails some JTreg Hotspot tests when >>> running in the -agentvm mode. This is because the ProcessTools class >>> is not passing the classpath.
There are substantial time savings to be >>> gained using -agentvm over -othervm. >>> >>> Fortunately, there was a fix for jdk9 (8077608) that has not been >>> backported to jdk8u. The details are as follows: >>> >>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/017937.h >>> tml >>> https://bugs.openjdk.java.net/browse/JDK-8077608 >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/af2a1e9f08f3 >>> >>> The patch just needed a slight change, to remove the change to the >>> file "test/compiler/uncommontrap/TestUnstableIfTrap.java" as that test >>> doesn't exist on jdk8u. >>> >>> My colleague Ningsheng has kindly hosted the change here: >>> >>> http://cr.openjdk.java.net/~njian/8077608/webrev.00 >>> >>> >>> BR, >>> Stuart >>> > From kim.barrett at oracle.com Thu Jun 1 14:42:49 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Jun 2017 10:42:49 -0400 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> Message-ID: <0162ADA7-003A-46C6-90A6-381E8667D969@oracle.com> > On Jun 1, 2017, at 8:33 AM, David Holmes wrote: > > Hi Erik, > > Just to be clear it is not the use of that I am concerned about, it is the -library=stlport4. It is the use of that flag that I would want to check in terms of having no affect on any existing code generation. > > I'm finding the actual build situation very confusing. It seems to me in looking at the hotspot build files and the top-level build files that -xnolib is used for C++ compilation & linking whereas -library=%none is used for C compilation & linking. But the change is being applied to $2JVM_CFLAGS which one would think is for C compilation but we don't have $2JVM_CXXFLAGS, so it seems to be used for both! > > David Yes, it does look like there is some confusion there. The documentation says that if using -xnolib, then -library is ignored! 
Using -xnolib suppresses all the normal support libraries, and one must explicitly add back what's needed. And it looks like we do add -lCrun. From gromero at linux.vnet.ibm.com Thu Jun 1 14:47:40 2017 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Thu, 1 Jun 2017 11:47:40 -0300 Subject: [8u] RFR (S) 8175813: PPC64: "mbind: Invalid argument" when -XX:+UseNUMA is used In-Reply-To: <1ef6fe36-5582-a041-2fde-19b2bf6c9c4f@redhat.com> References: <59258B49.9080602@linux.vnet.ibm.com> <592EE79D.1020104@linux.vnet.ibm.com> <1ef6fe36-5582-a041-2fde-19b2bf6c9c4f@redhat.com> Message-ID: <5930290C.8000406@linux.vnet.ibm.com> Hi Zhengyu, On 31-05-2017 14:15, Zhengyu Gu wrote: > Hi Gustavo, > > On 05/31/2017 11:56 AM, Gustavo Romero wrote: >> Hi David, >> >> On 29-05-2017 02:31, David Holmes wrote: >>> Hi Gustavo, >>> >>> This looks like an accurate backport. >> >> Thanks for reviewing the change. >> >> Does it need a second reviewer or should I proceed to request the approval? >> > You can add me as a reviewer, if needed. Thanks a lot for reviewing the change. Regards, Gustavo > Thanks for doing this backport. > > -Zhengyu > >> Regards, >> Gustavo >> >>> Thanks, >>> David >>> ----- >>> >>> On 24/05/2017 11:31 PM, Gustavo Romero wrote: >>>> Hi, >>>> >>>> Could this backport of 8175813 for jdk8u be reviewed, please? >>>> >>>> It applies cleanly to jdk8u except for a chunk in os::Linux::libnuma_init(), but >>>> it's just due to an indentation change introduced with cleanup [1]. >>>> >>>> It improves JVM NUMA node detection on PPC64. >>>> >>>> Currently there are no Linux distros that package only libnuma v1, so libnuma API >>>> v2 used in that change is always available. >>>> >>>> webrev : http://cr.openjdk.java.net/~gromero/8175813/backport/ >>>> bug : https://bugs.openjdk.java.net/browse/JDK-8175813 >>>> review thread: http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-May/026788.html >>>> >>>> Thank you.
>>>> >>>> Regards, >>>> Gustavo >>>> >>>> [1] https://bugs.openjdk.java.net/browse/JDK-8057107 >>>> >>> >> > From gromero at linux.vnet.ibm.com Thu Jun 1 14:49:40 2017 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Thu, 1 Jun 2017 11:49:40 -0300 Subject: RFR(XS) 8181055: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: <3252462d-f318-7287-c609-b84029e40117@oracle.com> References: <3a2a0ef7-5eac-b72c-5dc6-b7594dc70c07@redhat.com> <5928C9AA.6030004@linux.vnet.ibm.com> <38f323bc-7416-5c3d-c534-f5f17be4c7c6@redhat.com> <95147596-caf9-4e49-f954-29fa13df3a56@oracle.com> <592CA97D.4000802@linux.vnet.ibm.com> <937b01c9-5569-ce73-e7a3-ad38aed82ab3@redhat.com> <4dc1ac9e-f35f-209a-761f-96dc584f68a1@oracle.com> <461d3048-88a2-c99d-818a-01de3813a29b@redhat.com> <592EBB4B.1020909@linux.vnet.ibm.com> <3252462d-f318-7287-c609-b84029e40117@oracle.com> Message-ID: <59302984.6090807@linux.vnet.ibm.com> Hi David, On 31-05-2017 22:48, David Holmes wrote: > Hi Gustavo, > > On 31/05/2017 10:47 PM, Gustavo Romero wrote: >> Hi Zhengyu, >> >> On 30-05-2017 21:37, Zhengyu Gu wrote: >>> Hi David, >>> >>> Thanks for the review. >>> >>> Gustavo, might I count you as a reviewer? >> >> Formally speaking (accordingly to the community Bylaws) I'm not a reviewer, so >> I guess no. > > You are not a Reviewer (capital 'R') but you can certainly review and be listed as a reviewer. Got it! Thanks for clarifying. Cheers, Gustavo > Cheers, > David > > >> >> Kind regards, >> Gustavo >> >>> Thanks, >>> >>> -Zhengyu >>> >>> >>> >>> On 05/30/2017 05:30 PM, David Holmes wrote: >>>> Looks fine to me. >>>> >>>> Thanks, >>>> David >>>> >>>> On 30/05/2017 9:59 PM, Zhengyu Gu wrote: >>>>> Hi David and Gustavo, >>>>> >>>>> Thanks for the review. 
>>>>> >>>>> Webrev is updated according to your comments: >>>>> >>>>> http://cr.openjdk.java.net/~zgu/8181055/webrev.02/ >>>>> >>>>> Thanks, >>>>> >>>>> -Zhengyu >>>>> >>>>> >>>>> On 05/29/2017 07:06 PM, Gustavo Romero wrote: >>>>>> Hi David, >>>>>> >>>>>> On 29-05-2017 01:34, David Holmes wrote: >>>>>>> Hi Zhengyu, >>>>>>> >>>>>>> On 29/05/2017 12:08 PM, Zhengyu Gu wrote: >>>>>>>> Hi Gustavo, >>>>>>>> >>>>>>>> Thanks for the detail analysis and suggestion. I did not realize >>>>>>>> the difference between from bitmask and nodemask. >>>>>>>> >>>>>>>> As you suggested, numa_interleave_memory_v2 works under this >>>>>>>> configuration. >>>>>>>> >>>>>>>> Please updated Webrev: >>>>>>>> http://cr.openjdk.java.net/~zgu/8181055/webrev.01/ >>>>>>> >>>>>>> The addition of support for the "v2" API seems okay. Though I think >>>>>>> this comment needs some clarification for the existing code: >>>>>>> >>>>>>> 2837 // If we are running with libnuma version > 2, then we should >>>>>>> 2838 // be trying to use symbols with versions 1.1 >>>>>>> 2839 // If we are running with earlier version, which did not have >>>>>>> symbol versions, >>>>>>> 2840 // we should use the base version. >>>>>>> 2841 void* os::Linux::libnuma_dlsym(void* handle, const char *name) { >>>>>>> >>>>>>> given that we now explicitly load the v1.2 symbol if present. >>>>>>> >>>>>>> Gustavo: can you vouch for the suitability of using the v2 API in >>>>>>> all cases, if it exists? >>>>>> >>>>>> My understanding is that in the transition to API v2 only the usage of >>>>>> numa_node_to_cpus() by the JVM will have to be adapted in >>>>>> os::Linux::rebuild_cpu_to_node_map(). >>>>>> The remaining functions (excluding numa_interleave_memory() as >>>>>> Zhengyu already addressed it) >>>>>> preserve the same functionality and signatures [1]. >>>>>> >>>>>> Currently JVM NUMA API requires the following libnuma functions: >>>>>> >>>>>> 1. numa_node_to_cpus v1 != v2 (using v1, JVM has to adapt) >>>>>> 2. 
numa_max_node v1 == v2 (using v1, transition is >>>>>> straightforward) >>>>>> 3. numa_num_configured_nodes v2 (added by gromero: 8175813) >>>>>> 4. numa_available v1 == v2 (using v1, transition is >>>>>> straightforward) >>>>>> 5. numa_tonode_memory v1 == v2 (using v1, transition is >>>>>> straightforward) >>>>>> 6. numa_interleave_memory v1 != v2 (updated by zhengyu: >>>>>> 8181055. Default use of v2, fallback to v1) >>>>>> 7. numa_set_bind_policy v1 == v2 (using v1, transition is >>>>>> straightforward) >>>>>> 8. numa_bitmask_isbitset v2 (added by gromero: 8175813) >>>>>> 9. numa_distance v1 == v2 (added by gromero: 8175813. >>>>>> Using v1, transition is straightforward) >>>>>> >>>>>> v1 != v2: function signature in version 1 is different from version 2 >>>>>> v1 == v2: function signature in version 1 is equal to version 2 >>>>>> v2 : function is only present in API v2 >>>>>> >>>>>> Thus, to the best of my knowledge, except for case 1. (which JVM need >>>>>> to adapt to) >>>>>> all other cases are suitable to use v2 API and we could use a >>>>>> fallback mechanism as >>>>>> proposed by Zhengyu or update directly to API v2 (risky?), given that >>>>>> I can't see >>>>>> how v2 API would not be available on current (not-EOL) Linux distro >>>>>> releases. >>>>>> >>>>>> Regarding the comment, I agree, it needs an update since we are not >>>>>> tied anymore >>>>>> to version 1.1 (we are in effect already using v2 for some >>>>>> functions). We could >>>>>> delete the comment atop libnuma_dlsym() and add something like: >>>>>> >>>>>> "Handle request to load libnuma symbol version 1.1 (API v1). If it >>>>>> fails load symbol from base version instead." >>>>>> >>>>>> and to libnuma_v2_dlsym() add: >>>>>> >>>>>> "Handle request to load libnuma symbol version 1.2 (API v2) only. If >>>>>> it fails no symbol from any other version - even if present - is >>>>>> loaded." 
>>>>>> >>>>>> I've opened a bug to track the transitions to API v2 (I also >>>>>> discussed that with Volker): >>>>>> https://bugs.openjdk.java.net/browse/JDK-8181196 >>>>>> >>>>>> >>>>>> Regards, >>>>>> Gustavo >>>>>> >>>>>> [1] API v1 vs API v2: >>>>>> >>>>>> API v1 >>>>>> ====== >>>>>> >>>>>> int numa_node_to_cpus(int node, unsigned long *buffer, int bufferlen); >>>>>> int numa_max_node(void); >>>>>> - int numa_num_configured_nodes(void); >>>>>> int numa_available(void); >>>>>> void numa_tonode_memory(void *start, size_t size, int node); >>>>>> void numa_interleave_memory(void *start, size_t size, nodemask_t >>>>>> *nodemask); >>>>>> void numa_set_bind_policy(int strict); >>>>>> - int numa_bitmask_isbitset(const struct bitmask *bmp, unsigned int n); >>>>>> int numa_distance(int node1, int node2); >>>>>> >>>>>> >>>>>> API v2 >>>>>> ====== >>>>>> >>>>>> int numa_node_to_cpus(int node, struct bitmask *mask); >>>>>> int numa_max_node(void); >>>>>> int numa_num_configured_nodes(void); >>>>>> int numa_available(void); >>>>>> void numa_tonode_memory(void *start, size_t size, int node); >>>>>> void numa_interleave_memory(void *start, size_t size, struct bitmask >>>>>> *nodemask); >>>>>> void numa_set_bind_policy(int strict) >>>>>> int numa_bitmask_isbitset(const struct bitmask *bmp, unsigned int n); >>>>>> int numa_distance(int node1, int node2); >>>>>> >>>>>> >>>>>>> I'm running this through JPRT now. >>>>>>> >>>>>>> Thanks, >>>>>>> David >>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> -Zhengyu >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On 05/26/2017 08:34 PM, Gustavo Romero wrote: >>>>>>>>> Hi Zhengyu, >>>>>>>>> >>>>>>>>> Thanks a lot for taking care of this corner case on PPC64. >>>>>>>>> >>>>>>>>> On 26-05-2017 10:41, Zhengyu Gu wrote: >>>>>>>>>> This is a quick way to kill the symptom (or low risk?). I am not >>>>>>>>>> sure if disabling NUMA is a better solution for this >>>>>>>>>> circumstance? does 1 NUMA node = UMA? 
>>>>>>>>> >>>>>>>>> On PPC64, 1 (configured) NUMA does not necessarily imply UMA. In >>>>>>>>> the POWER7 >>>>>>>>> machine you found the corner case (I copy below the data you >>>>>>>>> provided in the >>>>>>>>> JBS - thanks for the additional information): >>>>>>>>> >>>>>>>>> $ numactl -H >>>>>>>>> available: 2 nodes (0-1) >>>>>>>>> node 0 cpus: 0 1 2 3 4 5 6 7 >>>>>>>>> node 0 size: 0 MB >>>>>>>>> node 0 free: 0 MB >>>>>>>>> node 1 cpus: >>>>>>>>> node 1 size: 7680 MB >>>>>>>>> node 1 free: 1896 MB >>>>>>>>> node distances: >>>>>>>>> node 0 1 >>>>>>>>> 0: 10 40 >>>>>>>>> 1: 40 10 >>>>>>>>> >>>>>>>>> CPUs in node0 have no other alternative besides allocating memory >>>>>>>>> from node1. In >>>>>>>>> that case CPUs in node0 are always accessing remote memory from >>>>>>>>> node1 in a constant >>>>>>>>> distance (40), so in that case we could say that 1 NUMA >>>>>>>>> (configured) node == UMA. >>>>>>>>> Nonetheless, if you add CPUs in node1 (by filling up the other >>>>>>>>> socket present in >>>>>>>>> the board) you will end up with CPUs with different distances from >>>>>>>>> the node that >>>>>>>>> has configured memory (in that case, node1), so it yields a >>>>>>>>> configuration where >>>>>>>>> 1 NUMA (configured) != UMA (i.e. distances are not always equal to >>>>>>>>> a single >>>>>>>>> value). >>>>>>>>> >>>>>>>>> On the other hand, the POWER7 machine configuration in question is >>>>>>>>> bad (and >>>>>>>>> rare). It's indeed impacting the whole system performance and it >>>>>>>>> would be >>>>>>>>> reasonable to open the machine and move the memory module from >>>>>>>>> bank related to >>>>>>>>> node1 to bank related to node0, because all CPUs are accessing >>>>>>>>> remote memory >>>>>>>>> without any apparent necessity. Once you change it all CPUs will >>>>>>>>> have local >>>>>>>>> memory (distance = 10). 
>>>>>>>>> >>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> >>>>>>>>>> -Zhengyu >>>>>>>>>> >>>>>>>>>> On 05/26/2017 09:14 AM, Zhengyu Gu wrote: >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> There is a corner case that still failed after JDK-8175813. >>>>>>>>>>> >>>>>>>>>>> The system shows that it has multiple NUMA nodes, but only one is >>>>>>>>>>> configured. Under this scenario, numa_interleave_memory() call will >>>>>>>>>>> result "mbind: Invalid argument" message. >>>>>>>>>>> >>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 >>>>>>>>>>> Webrev: http://cr.openjdk.java.net/~zgu/8181055/webrev.00/ >>>>>>>>> >>>>>>>>> Looks like that even for that POWER7 rare numa topology >>>>>>>>> numa_interleave_memory() >>>>>>>>> should succeed without "mbind: Invalid argument" since the 'mask' >>>>>>>>> argument >>>>>>>>> should be already a mask with only nodes from which memory can be >>>>>>>>> allocated, i.e. >>>>>>>>> only a mask of configured nodes (even if mask contains only one >>>>>>>>> configured node, >>>>>>>>> as in >>>>>>>>> http://cr.openjdk.java.net/~gromero/logs/numa_only_one_node.txt). >>>>>>>>> >>>>>>>>> Inspecting a little bit more, it looks like that the problem boils >>>>>>>>> down to the >>>>>>>>> fact that the JVM is passing to numa_interleave_memory() >>>>>>>>> 'numa_all_nodes' [1] in >>>>>>>>> Linux::numa_interleave_memory(). 
>>>>>>>>> >>>>>>>>> One would expect that 'numa_all_nodes' (which is api v1) would >>>>>>>>> track the same >>>>>>>>> information as 'numa_all_nodes_ptr' (api v2) [2], however there is >>>>>>>>> a subtle but >>>>>>>>> important difference: >>>>>>>>> >>>>>>>>> 'numa_all_nodes' is constructed assuming a consecutive node >>>>>>>>> distribution [3]: >>>>>>>>> >>>>>>>>> 100 max = numa_num_configured_nodes(); >>>>>>>>> 101 for (i = 0; i < max; i++) >>>>>>>>> 102 nodemask_set_compat((nodemask_t >>>>>>>>> *)&numa_all_nodes, i); >>>>>>>>> >>>>>>>>> >>>>>>>>> whilst 'numa_all_nodes_ptr' is constructed parsing >>>>>>>>> /proc/self/status [4]: >>>>>>>>> >>>>>>>>> 499 if (strncmp(buffer,"Mems_allowed:",13) == 0) { >>>>>>>>> 500 numprocnode = read_mask(mask, >>>>>>>>> numa_all_nodes_ptr); >>>>>>>>> >>>>>>>>> Thus for a topology like: >>>>>>>>> >>>>>>>>> available: 4 nodes (0-1,16-17) >>>>>>>>> node 0 cpus: 0 8 16 24 32 >>>>>>>>> node 0 size: 130706 MB >>>>>>>>> node 0 free: 145 MB >>>>>>>>> node 1 cpus: 40 48 56 64 72 >>>>>>>>> node 1 size: 0 MB >>>>>>>>> node 1 free: 0 MB >>>>>>>>> node 16 cpus: 80 88 96 104 112 >>>>>>>>> node 16 size: 130630 MB >>>>>>>>> node 16 free: 529 MB >>>>>>>>> node 17 cpus: 120 128 136 144 152 >>>>>>>>> node 17 size: 0 MB >>>>>>>>> node 17 free: 0 MB >>>>>>>>> node distances: >>>>>>>>> node 0 1 16 17 >>>>>>>>> 0: 10 20 40 40 >>>>>>>>> 1: 20 10 40 40 >>>>>>>>> 16: 40 40 10 20 >>>>>>>>> 17: 40 40 20 10 >>>>>>>>> >>>>>>>>> numa_all_nodes=0x3 => 0b11 (node0 and node1) >>>>>>>>> numa_all_nodes_ptr=0x10001 => 0b10000000000000001 (node0 and node16) >>>>>>>>> >>>>>>>>> (Please, see details in the following gdb log: >>>>>>>>> http://cr.openjdk.java.net/~gromero/logs/numa_api_v1_vs_api_v2.txt) >>>>>>>>> >>>>>>>>> In that case passing node0 and node1, although being suboptimal, >>>>>>>>> does not bother >>>>>>>>> mbind() since the following is satisfied: >>>>>>>>> >>>>>>>>> "[nodemask] must contain at least one node that is on-line, >>>>>>>>> allowed by the 
>>>>>>>>> process's current cpuset context, and contains memory." >>>>>>>>> >>>>>>>>> So back to the POWER7 case, I suppose that for: >>>>>>>>> >>>>>>>>> available: 2 nodes (0-1) >>>>>>>>> node 0 cpus: 0 1 2 3 4 5 6 7 >>>>>>>>> node 0 size: 0 MB >>>>>>>>> node 0 free: 0 MB >>>>>>>>> node 1 cpus: >>>>>>>>> node 1 size: 7680 MB >>>>>>>>> node 1 free: 1896 MB >>>>>>>>> node distances: >>>>>>>>> node 0 1 >>>>>>>>> 0: 10 40 >>>>>>>>> 1: 40 10 >>>>>>>>> >>>>>>>>> numa_all_nodes=0x1 => 0b01 (node0) >>>>>>>>> numa_all_nodes_ptr=0x2 => 0b10 (node1) >>>>>>>>> >>>>>>>>> and hence numa_interleave_memory() gets nodemask = 0x1 (node0), >>>>>>>>> which contains >>>>>>>>> indeed no memory. That said, I don't know for sure if passing just >>>>>>>>> node1 in the >>>>>>>>> 'nodemask' will satisfy mbind() as in that case there are no cpus >>>>>>>>> available in >>>>>>>>> node1. >>>>>>>>> >>>>>>>>> In summing up, looks like that the root cause is not that >>>>>>>>> numa_interleave_memory() >>>>>>>>> does not accept only one configured node, but that the configured >>>>>>>>> node being >>>>>>>>> passed is wrong. I could not find a similar numa topology in my >>>>>>>>> poll to test >>>>>>>>> more, but it might be worth trying to write a small test using api >>>>>>>>> v2 and >>>>>>>>> 'numa_all_nodes_ptr' instead of 'numa_all_nodes' to see how >>>>>>>>> numa_interleave_memory() >>>>>>>>> goes in that machine :) If it behaves well, updating to api v2 >>>>>>>>> would be a >>>>>>>>> solution. >>>>>>>>> >>>>>>>>> HTH >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> Gustavo >>>>>>>>> >>>>>>>>> >>>>>>>>> [1] >>>>>>>>> http://hg.openjdk.java.net/jdk10/hs/hotspot/file/4b93e1b1d5b7/src/os/linux/vm/os_linux.hpp#l274 >>>>>>>>> >>>>>>>>> [2] from libnuma.c:608 numa_all_nodes_ptr: "it only tracks nodes >>>>>>>>> with memory from which the calling process can allocate." 
>>>>>>>>> [3] >>>>>>>>> https://github.com/numactl/numactl/blob/master/libnuma.c#L100-L102 >>>>>>>>> [4] >>>>>>>>> https://github.com/numactl/numactl/blob/master/libnuma.c#L499-L500 >>>>>>>>> >>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> The system NUMA configuration: >>>>>>>>>>> >>>>>>>>>>> Architecture: ppc64 >>>>>>>>>>> CPU op-mode(s): 32-bit, 64-bit >>>>>>>>>>> Byte Order: Big Endian >>>>>>>>>>> CPU(s): 8 >>>>>>>>>>> On-line CPU(s) list: 0-7 >>>>>>>>>>> Thread(s) per core: 4 >>>>>>>>>>> Core(s) per socket: 1 >>>>>>>>>>> Socket(s): 2 >>>>>>>>>>> NUMA node(s): 2 >>>>>>>>>>> Model: 2.1 (pvr 003f 0201) >>>>>>>>>>> Model name: POWER7 (architected), altivec supported >>>>>>>>>>> L1d cache: 32K >>>>>>>>>>> L1i cache: 32K >>>>>>>>>>> NUMA node0 CPU(s): 0-7 >>>>>>>>>>> NUMA node1 CPU(s): >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> >>>>>>>>>>> -Zhengyu >>>>>>>>>> >>>>>>>>> >>>>>>> >>>>>> >>> >> > From erik.osterlund at oracle.com Thu Jun 1 14:50:51 2017 From: erik.osterlund at oracle.com (Erik Österlund) Date: Thu, 1 Jun 2017 16:50:51 +0200 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> Message-ID: <593029CB.7000100@oracle.com> Hi David, On 2017-06-01 14:33, David Holmes wrote: > Hi Erik, > > Just to be clear it is not the use of <limits> that I am concerned > about, it is the -library=stlport4. It is the use of that flag that I > would want to check in terms of having no effect on any existing code > generation. Thank you for the clarification. The use of -library=stlport4 should not have anything to do with code generation. It only says where to look for the standard library headers such as <limits> that are used in the compilation units. Specifically, the man pages for CC say: -library=lib[,lib...] Incorporates specified CC-provided libraries into compilation and linking.
When the -library option is used to specify a CC-provided library, the proper -I paths are set during compilation and the proper -L, -Y, -P, and -R paths and -l options are set during linking. As we are setting this during compilation and not during linking, this corresponds to setting the right -I paths to find our C++ standard library headers. My studio friends mentioned I could double-check that we did indeed not add a dependency to any C++ standard library by running elfdump on the generated libjvm.so file and check if the NEEDED entries in the dynamic section look right. I did and here are the results:

[0]  NEEDED  0x2918ee  libsocket.so.1
[1]  NEEDED  0x2918fd  libsched.so.1
[2]  NEEDED  0x29190b  libdl.so.1
[3]  NEEDED  0x291916  libm.so.1
[4]  NEEDED  0x291920  libCrun.so.1
[5]  NEEDED  0x29192d  libthread.so.1
[6]  NEEDED  0x29193c  libdoor.so.1
[7]  NEEDED  0x291949  libc.so.1
[8]  NEEDED  0x291953  libdemangle.so.1
[9]  NEEDED  0x291964  libnsl.so.1
[10] NEEDED  0x291970  libkstat.so.1
[11] NEEDED  0x29197e  librt.so.1

This list does not include any C++ standard libraries, as expected (libCrun is always in there even with -library=%none, and as expected no libstlport4.so or libCstd.so files are in there). The NEEDED entries in the dynamic section look identical with and without my patch. > I'm finding the actual build situation very confusing. It seems to me > in looking at the hotspot build files and the top-level build files > that -xnolib is used for C++ compilation & linking whereas > -library=%none is used for C compilation & linking. But the change is > being applied to $2JVM_CFLAGS which one would think is for C > compilation but we don't have $2JVM_CXXFLAGS, so it seems to be used > for both! I have also been confused by this when I tried adding CXX flags through configure that seemed to not be used. But that's a different can of worms I suppose.
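[For reference, the same kind of dependency check can be reproduced on any system (a sketch: readelf stands in for the Solaris elfdump used in the message above, and /bin/ls stands in for libjvm.so):]

```shell
# List the dynamic-section NEEDED entries of a binary. A C++ standard
# library dependency would show up here as libstdc++.so on Linux, or as
# libCstd.so / libstlport.so on Solaris.
readelf -d /bin/ls | grep NEEDED
# Solaris equivalent, as used in the thread:
#   elfdump -d libjvm.so | grep NEEDED
```

[Comparing this output before and after a flag change is the quickest way to confirm that compiling against a library's headers did not add a link-time dependency.]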
Thanks, /Erik > David > > On 1/06/2017 7:36 PM, Erik Österlund wrote: >> Hi David, >> >> On 2017-06-01 08:09, David Holmes wrote: >>> Hi Kim, >>> >>> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>>> On May 31, 2017, at 9:22 PM, David Holmes >>>>> wrote: >>>>> >>>>> Hi Erik, >>>>> >>>>> A small change with big questions :) >>>>> >>>>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>>>> Hi, >>>>>> It would be desirable to be able to use harmless C++ standard >>>>>> library headers like <limits> in the code as long as it does not >>>>>> add any link-time dependencies to the standard library. >>>>> >>>>> What does a 'harmless' C++ standard library header look like? >>>> >>>> Header-only (doesn't require linking), doesn't run afoul of our >>>> [vm]assert macro, and provides functionality we presently lack (or >>>> only handle poorly) and would not be easy to reproduce. >>> >>> And how does one establish those properties exist for a given header >>> file? Just use it and if no link errors then all is good? >> >> Objects from headers that are not ODR-used such as constant folded >> expressions are not imposing link-time dependencies to C++ libraries. >> The -xnolib that we already have in the LDFLAGS will catch any >> accidental ODR-uses of C++ objects, and the JVM will not build if >> that happens. >> >> As for external headers being included and not playing nicely with >> macros, this has to be evaluated on a case by case basis. Note that >> this is a problem that occurs when using system headers (that we are >> already using), as it is for using C++ standard library headers. We >> even run into that in our own JVM when e.g. the min/max macros >> occasionally slap us gently in the face from time to time. >> >>> >>>> The instigator for this is Erik and I are working on a project that >>>> needs information that is present in std::numeric_limits<> (provided >>>> by the <limits> header).
Reproducing that functionality ourselves >>>> would require platform-specific code (with all the complexity that can >>>> imply). We'd really rather not re-discover and maintain information >>>> that is trivially accessible in every standard library. >>> >>> Understood. I have no issue with using <limits> but am concerned by >>> the state of stlport4. Can you use <limits> without changing >>> -library=%none? >> >> No, that is precisely why we are here. >> >>> >>>>>> This is possible on all supported platforms except the ones using >>>>>> the solaris studio compiler where we enforce -library=%none in >>>>>> both CFLAGS and LDFLAGS. >>>>>> I propose to remove the restriction from CFLAGS but keep it on >>>>>> LDFLAGS. >>>>>> I have consulted with the studio folks, and they think this is >>>>>> absolutely fine and thought that the choice of -library=stlport4 >>>>>> should be fine for our CFLAGS and is indeed what is already used >>>>>> in the gtest launcher. >>>>> >>>>> So what exactly does this mean? IIUC this allows you to use >>>>> headers for, and compile against "STLport's Standard Library >>>>> implementation version 4.5.3 instead of the default libCstd". But >>>>> how do you then not need to link against libstlport.so ?? >>>>> >>>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>>> >>>>> "STLport is binary incompatible with the default libCstd. If you >>>>> use the STLport implementation of the standard library, then you >>>>> must compile and link all files, including third-party libraries, >>>>> with the option -library=stlport4" >>>> >>>> It means we can only use header-only parts of the standard library. >>>> This was confirmed / suggested by the Studio folks Erik consulted, >>>> providing such limited access while continuing to constrain our >>>> dependency on the library. Figuring out what can be used will need to >>>> be determined on a case-by-case basis. Maybe we could just link with >>>> a standard library on Solaris too.
So far as I can tell, Solaris is >>>> the only platform where we don't do that. But Erik is trying to be >>>> conservative. >>> >>> Okay, but the docs don't seem to acknowledge the ability to use, but >>> not link to, stlport4. >> >> Not ODR-used objects do not require linkage. >> (http://en.cppreference.com/w/cpp/language/definition) >> I have confirmed directly with the studio folks to be certain that >> accidental linkage would fail by keeping our existing guards in the >> LDFLAGS rather than the CFLAGS. >> This is also reasonably well documented already >> (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). >> >>> >>>>> There are lots of other comments in that document regarding >>>>> STLport that makes me think that using it may be introducing a >>>>> fragile dependency into the OpenJDK code! >>>>> >>>>> "STLport is an open source product and does not guarantee >>>>> compatibility across different releases. In other words, compiling >>>>> with a future version of STLport may break applications compiled >>>>> with STLport 4.5.3. It also might not be possible to link binaries >>>>> compiled using STLport 4.5.3 with binaries compiled using a future >>>>> version of STLport." >>>>> >>>>> "Future releases of the compiler might not include STLport4. They >>>>> might include only a later version of STLport. The compiler option >>>>> -library=stlport4 might not be available in future releases, but >>>>> could be replaced by an option referring to a later STLport version." >>>>> >>>>> None of that sounds very good to me. >>>> >>>> I don't see how this is any different from any other part of the >>>> process for using a different version of Solaris Studio. >>> >>> Well we'd discover the problem when testing the compiler change, but >>> my point was more to the fact that they don't seem very committed to >>> this library - very much a "use at own risk" disclaimer. 
>> >> If we eventually need to use something more modern for features that >> have not been around for a decade, like C++11 features, then we can >> change standard library when that day comes. >> >>> >>>> stlport4 is one of the three standard libraries that are presently >>>> included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the >>>> Studio folks which to use (for the purposes of our present project, we >>>> don't have any particular preference, so long as it works), and >>>> stlport4 seemed the right choice (libCstd was, I think, described as >>>> "ancient"). Perhaps more importantly, we already use stlport4, >>>> including linking against it, for gtest builds. Mixing two different >>>> standard libraries seems like a bad idea... >>> >>> So we have the choice of "ancient", "unsupported" or gcc :) >>> >>> My confidence in this has not increased :) >> >> I trust that e.g. std::numeric_limits::is_signed in the standard >> libraries has more mileage than whatever simplified rewrite of that >> we try to replicate in the JVM. So it is not obvious to me that we >> should have less confidence in the same functionality from a standard >> library shipped together with the compiler we are using and that has >> already been used and tested in a variety of C++ applications for >> over a decade compared to the alternative of reinventing it ourselves. >> >>> What we do in gtest doesn't necessarily make things okay to do in >>> the product. >>> >>> If this were part of a compiler upgrade process we'd be comparing >>> binaries with old flag and new to ensure there are no unexpected >>> consequences. >> >> I would not compare including <limits> to a compiler upgrade process >> as we are not changing the compiler and hence not the way code is >> generated, but rather compare it to including a new system header >> that has previously not been included to use a constant folded >> expression from that header that has been used and tested for a >> decade.
At least that is how I think of it. >> >> Thanks, >> /Erik >> >>> >>> Cheers, >>> David >>> >>>>> >>>>> Cheers, >>>>> David >>>>> >>>>> >>>>>> Webrev for jdk10-hs top level repository: >>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>>> Webrev for jdk10-hs hotspot repository: >>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>>> Testing: JPRT. >>>>>> Will need a sponsor. >>>>>> Thanks, >>>>>> /Erik >>>> >>>> >> From bob.vandette at oracle.com Thu Jun 1 15:12:33 2017 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 1 Jun 2017 11:12:33 -0400 Subject: RFR: 8181093 arm64 crash when relocating address Message-ID: <87F26099-3F86-4A05-8739-FCE73EE48ECF@oracle.com> Please review this fix which avoids a crash when attempting to update the address of a metadata_Relocation in the arm64 port. http://cr.openjdk.java.net/~bobv/8181093/webrev The problem is that the nativeInst NativeMovConstReg logic does not handle the case where NativeMovConstReg::set_data is processing an optimized "or" instruction that was generated by MacroAssembler::mov_metadata -> MacroAssembler::mov_slow_helper. The crash trace shows that this occurred during metadata processing. The fix avoids the updating of the address since the metadata pointers do not move and the references are not PC relative. Note that metadata_Relocation::pd_fix_value is a noop on all other implementations. Current CompileTask: C1: 2052 303 !
3 java.lang.invoke.MemberName::getMethodType (202 bytes) Stack: [0x0000007f7efa9000,0x0000007f7f0a9000], sp=0x0000007f7f0a64e0, free space=1013k Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0xff8838] VMError::report_and_die(int, char const*, char const*, std::__va_list, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x140;; VMError::report_and_die(int, char const*, char const*, std::__va_list, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x140 V [libjvm.so+0xff9448] VMError::report_and_die(Thread*, char const*, int, char const*, char const*, std::__va_list)+0x54;; VMError::report_and_die(Thread*, char const*, int, char const*, char const*, std::__va_list)+0x54 V [libjvm.so+0x6a62b0] report_vm_error(char const*, int, char const*, char const*, ...)+0xe0;; report_vm_error(char const*, int, char const*, char const*, ...)+0xe0 V [libjvm.so+0xcdaa34] NativeMovConstReg::set_data(long)+0x158;; NativeMovConstReg::set_data(long)+0x158 V [libjvm.so+0xe470ec] Relocation::pd_set_data_value(unsigned char*, long, bool)+0x188;; Relocation::pd_set_data_value(unsigned char*, long, bool)+0x188 V [libjvm.so+0xe48768] metadata_Relocation::pd_fix_value(unsigned char*)+0xe4;; metadata_Relocation::pd_fix_value(unsigned char*)+0xe4 V [libjvm.so+0xce337c] nmethod::fix_oop_relocations(unsigned char*, unsigned char*, bool)+0xe0;; nmethod::fix_oop_relocations(unsigned char*, unsigned char*, bool)+0xe0 V [libjvm.so+0xceb014] nmethod::copy_values(GrowableArray<_jobject*>*)+0x154;; nmethod::copy_values(GrowableArray<_jobject*>*)+0x154 V [libjvm.so+0xce1b44] nmethod::nmethod(Method*, CompilerType, int, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x3a0;; nmethod::nmethod(Method*, CompilerType, int, int, int, CodeOffsets*, int, 
DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x3a0 V [libjvm.so+0xce245c] nmethod::new_nmethod(methodHandle const&, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x208;; nmethod::new_nmethod(methodHandle const&, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x208 V [libjvm.so+0x4efae0] ciEnv::register_method(ciMethod*, int, CodeOffsets*, int, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, bool, bool, RTMState)+0x330;; ciEnv::register_method(ciMethod*, int, CodeOffsets*, int, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, bool, bool, RTMState)+0x330 V [libjvm.so+0x3b319c] Compilation::install_code(int)+0x128;; Compilation::install_code(int)+0x128 V [libjvm.so+0x3b5e50] Compilation::compile_method()+0x280;; Compilation::compile_method()+0x280 V [libjvm.so+0x3b6054] Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*, DirectiveSet*)+0x1b8;; Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*, DirectiveSet*)+0x1b8 V [libjvm.so+0x3b7814] Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x118;; Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x118 V [libjvm.so+0x6324e4] CompileBroker::invoke_compiler_on_method(CompileTask*)+0x354;; CompileBroker::invoke_compiler_on_method(CompileTask*)+0x354 V [libjvm.so+0x632ea4] CompileBroker::compiler_thread_loop()+0x2b8;; CompileBroker::compiler_thread_loop()+0x2b8 V [libjvm.so+0xf72964] JavaThread::thread_main_inner()+0x1fc;; JavaThread::thread_main_inner()+0x1fc V [libjvm.so+0xf72bb0] 
JavaThread::run()+0x1c0;; JavaThread::run()+0x1c0 V [libjvm.so+0xd3ba64] thread_native_entry(Thread*)+0x118;; thread_native_entry(Thread*)+0x118 C [libpthread.so.0+0x7e2c] start_thread+0xb0 C [libc.so.6+0xc8430] clone+0x70 Bob. From Paul.Sandoz at oracle.com Thu Jun 1 15:56:32 2017 From: Paul.Sandoz at oracle.com (Paul Sandoz) Date: Thu, 1 Jun 2017 08:56:32 -0700 Subject: RFR 8181292 Backport Rename internal Unsafe.compare methods from 10 to 9 Message-ID: <75E82CFC-EC08-4FDA-AFE0-B7572D0AAB25@oracle.com> Hi, To make it easier on 166 and Graal code to support both 9 and 10 we should back port the renaming of the internal Unsafe.compareAndSwap to Unsafe.compareAndSet: http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8181292-unsafe-cas-rename-jdk/webrev/ http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8181292-unsafe-cas-rename-hotspot/webrev/ This is an explicit back port with a new bug. This is the easiest approach given the current nature of how 9 and 10 are currently kept in sync. The change sets are the same as those associated with the following issues and apply cleanly without modification: Rename internal Unsafe.compare methods https://bugs.openjdk.java.net/browse/JDK-8159995 [TESTBUG] Some hotspot tests broken after internal Unsafe name changes https://bugs.openjdk.java.net/browse/JDK-8180479 When running JPRT tests i observe a Graal test error on linux_x64_3.8-fastdebug-c2-hotspot_fast_compiler [*]. I dunno how this is manifesting given i cannot find any explicit reference to jdk.internal.Unsafe.compareAndSwap. Any idea? Paul. 
[*] [2017-05-31 12:33:08,163] Agent[4]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.caller()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) [2017-05-31 12:33:08,163] Agent[4]: stdout: at parsing app//compiler.calls.common.InvokeInterface.caller(InvokeInterface.java:45) [2017-05-31 12:33:08,213] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.callerNative()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.callerNative(InvokeInterface.java:82) [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.(InvokeInterface.java:31) [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.main([Ljava/lang/String;)V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.main(InvokeInterface.java:35) [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.callee(IJFDLjava/lang/String;)Z: 
org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.callee(InvokeInterface.java:60) [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.caller()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.caller(InvokeInterface.java:45) [2017-05-31 12:33:08,428] Agent[3]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.caller()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) [2017-05-31 12:33:08,429] Agent[3]: stdout: at parsing app//compiler.calls.common.InvokeInterface.caller(InvokeInterface.java:45) TEST: compiler/aot/calls/fromAot/AotInvokeInterface2AotTest.java TEST JDK: /opt/jprt/T/P1/191630.sandoz/testproduct/linux_x64_3.8-fastdebug From gnu.andrew at redhat.com Thu Jun 1 16:25:01 2017 From: gnu.andrew at redhat.com (Andrew Hughes) Date: Thu, 1 Jun 2017 17:25:01 +0100 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> 
<592EDADC.8040709@oracle.com> <592FEDC0.2090109@oracle.com> Message-ID: On 1 June 2017 at 12:17, Per Liden wrote: > On 2017-06-01 12:34, Erik Österlund wrote: >> >> Hi Per, >> >> On 2017-06-01 11:49, Per Liden wrote: >>> >>> Hi, >>> >>> On 2017-06-01 10:18, Kim Barrett wrote: >>>>> >>>>> On May 31, 2017, at 11:01 AM, Erik Österlund >>>>> wrote: >>>>> >>>>> Hi, >>>>> >>>>> Excellent. In that case I would like reviews on this patch that does >>>>> exactly that: >>>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ >>> >>> >>> Looks good, but can we please add a comment here describing why we're >>> doing this. It's not obvious :) >> >> >> Thank you for the review. Here is a webrev with the added comment: >> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.01/ > > > Looks good, thanks! > > /Per > Looks good to me too, and will be great to finally see this fixed. It'll also need backporting to 9 now. Thanks, -- Andrew :) Senior Free Java Software Engineer Red Hat, Inc. (http://www.redhat.com) Web Site: http://fuseyism.com Twitter: https://twitter.com/gnu_andrew_java PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From erik.osterlund at oracle.com Thu Jun 1 16:44:06 2017 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Thu, 1 Jun 2017 18:44:06 +0200 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> <592FEDC0.2090109@oracle.com> Message-ID: <2849B1E3-1125-4A40-B1D9-CDBB8546DA62@oracle.com> Thank you Andrew.
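For readers of the archive, the class of build break behind 8161145 can be sketched in a few lines (illustrative only; the macro below stands in for HotSpot's historical function-like min/max definitions and is not taken from the webrev). GCC 6's standard headers use std::min and std::max internally, so a previously defined function-like macro can expand inside them and break the build; parenthesizing the call is the classic way for user code to sidestep such a macro.

```cpp
#include <algorithm>  // included before the macro, so its internals are safe here

// Stand-in for a HotSpot-style function-like macro.  Any standard header
// included after this point that spells "min(" would have this macro
// expanded inside it -- the failure mode seen with GCC 6's headers.
#define min(a, b) (((a) < (b)) ? (a) : (b))

int smaller(int a, int b) {
  // Parentheses around the name suppress function-like macro expansion,
  // so this calls the real std::min template despite the macro above.
  return (std::min)(a, b);
}
```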
/Erik > On 1 Jun 2017, at 18:25, Andrew Hughes wrote: > >> On 1 June 2017 at 12:17, Per Liden wrote: >>> On 2017-06-01 12:34, Erik Österlund wrote: >>> >>> Hi Per, >>> >>>> On 2017-06-01 11:49, Per Liden wrote: >>>> >>>> Hi, >>>> >>>> On 2017-06-01 10:18, Kim Barrett wrote: >>>>>> >>>>>> On May 31, 2017, at 11:01 AM, Erik Österlund >>>>>> wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> Excellent. In that case I would like reviews on this patch that does >>>>>> exactly that: >>>>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ >>>> >>>> >>>> Looks good, but can we please add a comment here describing why we're >>>> doing this. It's not obvious :) >>> >>> >>> Thank you for the review. Here is a webrev with the added comment: >>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.01/ >> >> >> Looks good, thanks! >> >> /Per >> > > Looks good to me too, and will be great to finally see this fixed. > > It'll also need backporting to 9 now. > > Thanks, > -- > Andrew :) > > Senior Free Java Software Engineer > Red Hat, Inc. (http://www.redhat.com) > > Web Site: http://fuseyism.com > Twitter: https://twitter.com/gnu_andrew_java > PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) > Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From vladimir.kozlov at oracle.com Thu Jun 1 18:14:19 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 1 Jun 2017 11:14:19 -0700 Subject: RFR: 8181093 arm64 crash when relocating address In-Reply-To: <87F26099-3F86-4A05-8739-FCE73EE48ECF@oracle.com> References: <87F26099-3F86-4A05-8739-FCE73EE48ECF@oracle.com> Message-ID: <7de1a062-ca7a-3fd3-9ee6-f4684a0dd992@oracle.com> I agree that it should be fixed in JDK 9. Problem evaluation and fix seems reasonable to me. What performance regression do you see? Fix is more critical than a small regression I think.
Thanks, Vladimir On 6/1/17 8:12 AM, Bob Vandette wrote: > Please review this fix which avoids a crash when attempting to update the address > of a metadata_Relocation in the arm64 port. > > http://cr.openjdk.java.net/~bobv/8181093/webrev > > The problem is that the nativeInst NativeMovConstReg logic does not handle the case > where NativeMovConstReg::set_data is processing an optimized "or" instruction that > was generated by MacroAssembler::mov_metadata -> MacroAssembler::mov_slow_helper. > > The crash trace shows that this occurred during metadata processing. > > The fix avoids the updating of the address since the metadata pointers do not move and > the references are not PC relative. Note that metadata_Relocation::pd_fix_value is > a noop on all other implementations. > > > Current CompileTask: > C1: 2052 303 !
Relocation::pd_set_data_value(unsigned char*, long, bool)+0x188 > V [libjvm.so+0xe48768] metadata_Relocation::pd_fix_value(unsigned char*)+0xe4;; metadata_Relocation::pd_fix_value(unsigned char*)+0xe4 > V [libjvm.so+0xce337c] nmethod::fix_oop_relocations(unsigned char*, unsigned char*, bool)+0xe0;; nmethod::fix_oop_relocations(unsigned char*, unsigned char*, bool)+0xe0 > V [libjvm.so+0xceb014] nmethod::copy_values(GrowableArray<_jobject*>*)+0x154;; nmethod::copy_values(GrowableArray<_jobject*>*)+0x154 > V [libjvm.so+0xce1b44] nmethod::nmethod(Method*, CompilerType, int, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x3a0;; nmethod::nmethod(Method*, CompilerType, int, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x3a0 > V [libjvm.so+0xce245c] nmethod::new_nmethod(methodHandle const&, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x208;; nmethod::new_nmethod(methodHandle const&, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x208 > V [libjvm.so+0x4efae0] ciEnv::register_method(ciMethod*, int, CodeOffsets*, int, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, bool, bool, RTMState)+0x330;; ciEnv::register_method(ciMethod*, int, CodeOffsets*, int, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, bool, bool, RTMState)+0x330 > V [libjvm.so+0x3b319c] Compilation::install_code(int)+0x128;; Compilation::install_code(int)+0x128 > V [libjvm.so+0x3b5e50] 
Compilation::compile_method()+0x280;; Compilation::compile_method()+0x280 > V [libjvm.so+0x3b6054] Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*, DirectiveSet*)+0x1b8;; Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*, DirectiveSet*)+0x1b8 > V [libjvm.so+0x3b7814] Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x118;; Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x118 > V [libjvm.so+0x6324e4] CompileBroker::invoke_compiler_on_method(CompileTask*)+0x354;; CompileBroker::invoke_compiler_on_method(CompileTask*)+0x354 > V [libjvm.so+0x632ea4] CompileBroker::compiler_thread_loop()+0x2b8;; CompileBroker::compiler_thread_loop()+0x2b8 > V [libjvm.so+0xf72964] JavaThread::thread_main_inner()+0x1fc;; JavaThread::thread_main_inner()+0x1fc > V [libjvm.so+0xf72bb0] JavaThread::run()+0x1c0;; JavaThread::run()+0x1c0 > V [libjvm.so+0xd3ba64] thread_native_entry(Thread*)+0x118;; thread_native_entry(Thread*)+0x118 > C [libpthread.so.0+0x7e2c] start_thread+0xb0 > C [libc.so.6+0xc8430] clone+0x70 > > Bob. > From vladimir.kozlov at oracle.com Thu Jun 1 18:28:50 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 1 Jun 2017 11:28:50 -0700 Subject: RFR 8181292 Backport Rename internal Unsafe.compare methods from 10 to 9 In-Reply-To: <75E82CFC-EC08-4FDA-AFE0-B7572D0AAB25@oracle.com> References: <75E82CFC-EC08-4FDA-AFE0-B7572D0AAB25@oracle.com> Message-ID: <007e16b8-b342-53e9-be52-91f5d01e9f55@oracle.com> Thank you, Paul, for backporting it. On 6/1/17 8:56 AM, Paul Sandoz wrote: > Hi, > > To make it easier on 166 and Graal code to support both 9 and 10 we should back port the renaming of the internal Unsafe.compareAndSwap to Unsafe.compareAndSet: > > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8181292-unsafe-cas-rename-jdk/webrev/ > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8181292-unsafe-cas-rename-hotspot/webrev/ Hotspot changes are fine to me. 
> > This is an explicit back port with a new bug. This is the easiest approach given the current nature of how 9 and 10 are currently kept in sync. > > The change sets are the same as those associated with the following issues and apply cleanly without modification: > > Rename internal Unsafe.compare methods > https://bugs.openjdk.java.net/browse/JDK-8159995 > > [TESTBUG] Some hotspot tests broken after internal Unsafe name changes > https://bugs.openjdk.java.net/browse/JDK-8180479 > > > When running JPRT tests i observe a Graal test error on linux_x64_3.8-fastdebug-c2-hotspot_fast_compiler [*]. I dunno how this is manifesting given i cannot find any explicit reference to jdk.internal.Unsafe.compareAndSwap. Any idea? This is Graal bug I told about before - not all places in Graal are fixed with 8181292 changes (only a test was fixed): https://bugs.openjdk.java.net/browse/JDK-8180785 After you do backport we will fix Graal in JDK 9 and JDK 10. So don't worry about those failures. I will update 'Affected' and 'Fix' version later. Thanks, Vladimir > > Paul. 
> > [*] > [2017-05-31 12:33:08,163] Agent[4]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.caller()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) > [2017-05-31 12:33:08,163] Agent[4]: stdout: at parsing app//compiler.calls.common.InvokeInterface.caller(InvokeInterface.java:45) > [2017-05-31 12:33:08,213] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.callerNative()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) > [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.callerNative(InvokeInterface.java:82) > [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) > [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.(InvokeInterface.java:31) > [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.main([Ljava/lang/String;)V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) > [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.main(InvokeInterface.java:35) > [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.callee(IJFDLjava/lang/String;)Z: 
org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) > [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.callee(InvokeInterface.java:60) > [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.caller()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) > [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.caller(InvokeInterface.java:45) > [2017-05-31 12:33:08,428] Agent[3]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.caller()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) > [2017-05-31 12:33:08,429] Agent[3]: stdout: at parsing app//compiler.calls.common.InvokeInterface.caller(InvokeInterface.java:45) > TEST: compiler/aot/calls/fromAot/AotInvokeInterface2AotTest.java > TEST JDK: /opt/jprt/T/P1/191630.sandoz/testproduct/linux_x64_3.8-fastdebug > From bob.vandette at oracle.com Thu Jun 1 19:12:00 2017 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 1 Jun 2017 15:12:00 -0400 Subject: RFR: 8181093 arm64 crash when relocating address In-Reply-To: <7de1a062-ca7a-3fd3-9ee6-f4684a0dd992@oracle.com> References: <87F26099-3F86-4A05-8739-FCE73EE48ECF@oracle.com> <7de1a062-ca7a-3fd3-9ee6-f4684a0dd992@oracle.com> Message-ID: <1CBE17D6-17AA-4989-A9D7-78CCC3E53240@oracle.com> > On Jun 1, 2017, at 2:14 PM, Vladimir Kozlov wrote: > > I agree that it should be fixed in JDK 9. > Problem evaluation and fix seems reasonable to me. 
> What performance regression you see? Fix is more critical than a small regression I think. No regression since the generated code doesn't even change. A specJVM98 run shows no significant difference. Bob. > > Thanks, > Vladimir > > On 6/1/17 8:12 AM, Bob Vandette wrote: >> Please review this fix which avoids a crash when attempting to update the address >> of a metadata_Relocation in the arm64 port. >> http://cr.openjdk.java.net/~bobv/8181093/webrev >> The problem is that the nativeInst NativeMovConstReg logic does not handle the case >> where NativeMovConstReg::set_data is processing an optimized 'or' instruction that >> was generated by MacroAssembler::mov_metadata -> MacroAssembler::mov_slow_helper. >> The crash trace shows that this occurred during metadata processing. >> The fix avoids the updating of the address since the metadata pointers do not move and >> the references are not PC relative. Note that metadata_Relocation::pd_fix_value is >> a noop on all other implementations. >> Current CompileTask: >> C1: 2052 303 !
3 java.lang.invoke.MemberName::getMethodType (202 bytes) >> Stack: [0x0000007f7efa9000,0x0000007f7f0a9000], sp=0x0000007f7f0a64e0, free space=1013k >> Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code) >> V [libjvm.so+0xff8838] VMError::report_and_die(int, char const*, char const*, std::__va_list, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x140;; VMError::report_and_die(int, char const*, char const*, std::__va_list, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x140 >> V [libjvm.so+0xff9448] VMError::report_and_die(Thread*, char const*, int, char const*, char const*, std::__va_list)+0x54;; VMError::report_and_die(Thread*, char const*, int, char const*, char const*, std::__va_list)+0x54 >> V [libjvm.so+0x6a62b0] report_vm_error(char const*, int, char const*, char const*, ...)+0xe0;; report_vm_error(char const*, int, char const*, char const*, ...)+0xe0 >> V [libjvm.so+0xcdaa34] NativeMovConstReg::set_data(long)+0x158;; NativeMovConstReg::set_data(long)+0x158 >> V [libjvm.so+0xe470ec] Relocation::pd_set_data_value(unsigned char*, long, bool)+0x188;; Relocation::pd_set_data_value(unsigned char*, long, bool)+0x188 >> V [libjvm.so+0xe48768] metadata_Relocation::pd_fix_value(unsigned char*)+0xe4;; metadata_Relocation::pd_fix_value(unsigned char*)+0xe4 >> V [libjvm.so+0xce337c] nmethod::fix_oop_relocations(unsigned char*, unsigned char*, bool)+0xe0;; nmethod::fix_oop_relocations(unsigned char*, unsigned char*, bool)+0xe0 >> V [libjvm.so+0xceb014] nmethod::copy_values(GrowableArray<_jobject*>*)+0x154;; nmethod::copy_values(GrowableArray<_jobject*>*)+0x154 >> V [libjvm.so+0xce1b44] nmethod::nmethod(Method*, CompilerType, int, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x3a0;; nmethod::nmethod(Method*, CompilerType, int, int, int, 
CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x3a0 >> V [libjvm.so+0xce245c] nmethod::new_nmethod(methodHandle const&, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x208;; nmethod::new_nmethod(methodHandle const&, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x208 >> V [libjvm.so+0x4efae0] ciEnv::register_method(ciMethod*, int, CodeOffsets*, int, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, bool, bool, RTMState)+0x330;; ciEnv::register_method(ciMethod*, int, CodeOffsets*, int, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, bool, bool, RTMState)+0x330 >> V [libjvm.so+0x3b319c] Compilation::install_code(int)+0x128;; Compilation::install_code(int)+0x128 >> V [libjvm.so+0x3b5e50] Compilation::compile_method()+0x280;; Compilation::compile_method()+0x280 >> V [libjvm.so+0x3b6054] Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*, DirectiveSet*)+0x1b8;; Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*, DirectiveSet*)+0x1b8 >> V [libjvm.so+0x3b7814] Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x118;; Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x118 >> V [libjvm.so+0x6324e4] CompileBroker::invoke_compiler_on_method(CompileTask*)+0x354;; CompileBroker::invoke_compiler_on_method(CompileTask*)+0x354 >> V [libjvm.so+0x632ea4] CompileBroker::compiler_thread_loop()+0x2b8;; CompileBroker::compiler_thread_loop()+0x2b8 >> V [libjvm.so+0xf72964] JavaThread::thread_main_inner()+0x1fc;; 
JavaThread::thread_main_inner()+0x1fc >> V [libjvm.so+0xf72bb0] JavaThread::run()+0x1c0;; JavaThread::run()+0x1c0 >> V [libjvm.so+0xd3ba64] thread_native_entry(Thread*)+0x118;; thread_native_entry(Thread*)+0x118 >> C [libpthread.so.0+0x7e2c] start_thread+0xb0 >> C [libc.so.6+0xc8430] clone+0x70 >> Bob. From vladimir.kozlov at oracle.com Thu Jun 1 19:35:29 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 1 Jun 2017 12:35:29 -0700 Subject: RFR: 8181093 arm64 crash when relocating address In-Reply-To: <1CBE17D6-17AA-4989-A9D7-78CCC3E53240@oracle.com> References: <87F26099-3F86-4A05-8739-FCE73EE48ECF@oracle.com> <7de1a062-ca7a-3fd3-9ee6-f4684a0dd992@oracle.com> <1CBE17D6-17AA-4989-A9D7-78CCC3E53240@oracle.com> Message-ID: On 6/1/17 12:12 PM, Bob Vandette wrote: > >> On Jun 1, 2017, at 2:14 PM, Vladimir Kozlov wrote: >> >> I agree that it should be fixed in JDK 9. >> Problem evaluation and fix seems reasonable to me. >> What performance regression you see? Fix is more critical than a small regression I think. > No regression since the generated code doesn't even change. > > A specJVM98 run shows no significant difference. Typo in the bug report?: "tested this fix using specJVM98 on release and fastdebug binaries and confirmed that there is are performance regressions." Vladimir > > > Bob. > >> >> Thanks, >> Vladimir >> >> On 6/1/17 8:12 AM, Bob Vandette wrote: >>> Please review this fix which avoids a crash when attempting to update the address >>> of a metadata_Relocation in the arm64 port. >>> http://cr.openjdk.java.net/~bobv/8181093/webrev >>> The problem is that the nativeInst NativeMovConstReg logic does not handle the case >>> where NativeMovConstReg::set_data is processing an optimized 'or' instruction that >>> was generated by MacroAssembler::mov_metadata -> MacroAssembler::mov_slow_helper. >>> The crash trace shows that this occurred during metadata processing.
>>> The fix avoids the updating of the address since the metadata pointers do not move and >>> the references are not PC relative. Note that metadata_Relocation::pd_fix_value is >>> a noop on all other implementations. >>> Current CompileTask: >>> C1: 2052 303 ! 3 java.lang.invoke.MemberName::getMethodType (202 bytes) >>> Stack: [0x0000007f7efa9000,0x0000007f7f0a9000], sp=0x0000007f7f0a64e0, free space=1013k >>> Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code) >>> V [libjvm.so+0xff8838] VMError::report_and_die(int, char const*, char const*, std::__va_list, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x140;; VMError::report_and_die(int, char const*, char const*, std::__va_list, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x140 >>> V [libjvm.so+0xff9448] VMError::report_and_die(Thread*, char const*, int, char const*, char const*, std::__va_list)+0x54;; VMError::report_and_die(Thread*, char const*, int, char const*, char const*, std::__va_list)+0x54 >>> V [libjvm.so+0x6a62b0] report_vm_error(char const*, int, char const*, char const*, ...)+0xe0;; report_vm_error(char const*, int, char const*, char const*, ...)+0xe0 >>> V [libjvm.so+0xcdaa34] NativeMovConstReg::set_data(long)+0x158;; NativeMovConstReg::set_data(long)+0x158 >>> V [libjvm.so+0xe470ec] Relocation::pd_set_data_value(unsigned char*, long, bool)+0x188;; Relocation::pd_set_data_value(unsigned char*, long, bool)+0x188 >>> V [libjvm.so+0xe48768] metadata_Relocation::pd_fix_value(unsigned char*)+0xe4;; metadata_Relocation::pd_fix_value(unsigned char*)+0xe4 >>> V [libjvm.so+0xce337c] nmethod::fix_oop_relocations(unsigned char*, unsigned char*, bool)+0xe0;; nmethod::fix_oop_relocations(unsigned char*, unsigned char*, bool)+0xe0 >>> V [libjvm.so+0xceb014] nmethod::copy_values(GrowableArray<_jobject*>*)+0x154;; nmethod::copy_values(GrowableArray<_jobject*>*)+0x154 >>> V [libjvm.so+0xce1b44] 
nmethod::nmethod(Method*, CompilerType, int, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x3a0;; nmethod::nmethod(Method*, CompilerType, int, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x3a0 >>> V [libjvm.so+0xce245c] nmethod::new_nmethod(methodHandle const&, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x208;; nmethod::new_nmethod(methodHandle const&, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x208 >>> V [libjvm.so+0x4efae0] ciEnv::register_method(ciMethod*, int, CodeOffsets*, int, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, bool, bool, RTMState)+0x330;; ciEnv::register_method(ciMethod*, int, CodeOffsets*, int, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, bool, bool, RTMState)+0x330 >>> V [libjvm.so+0x3b319c] Compilation::install_code(int)+0x128;; Compilation::install_code(int)+0x128 >>> V [libjvm.so+0x3b5e50] Compilation::compile_method()+0x280;; Compilation::compile_method()+0x280 >>> V [libjvm.so+0x3b6054] Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*, DirectiveSet*)+0x1b8;; Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*, DirectiveSet*)+0x1b8 >>> V [libjvm.so+0x3b7814] Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x118;; Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x118 >>> V [libjvm.so+0x6324e4] 
CompileBroker::invoke_compiler_on_method(CompileTask*)+0x354;; CompileBroker::invoke_compiler_on_method(CompileTask*)+0x354 >>> V [libjvm.so+0x632ea4] CompileBroker::compiler_thread_loop()+0x2b8;; CompileBroker::compiler_thread_loop()+0x2b8 >>> V [libjvm.so+0xf72964] JavaThread::thread_main_inner()+0x1fc;; JavaThread::thread_main_inner()+0x1fc >>> V [libjvm.so+0xf72bb0] JavaThread::run()+0x1c0;; JavaThread::run()+0x1c0 >>> V [libjvm.so+0xd3ba64] thread_native_entry(Thread*)+0x118;; thread_native_entry(Thread*)+0x118 >>> C [libpthread.so.0+0x7e2c] start_thread+0xb0 >>> C [libc.so.6+0xc8430] clone+0x70 >>> Bob. From bob.vandette at oracle.com Thu Jun 1 19:37:50 2017 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 1 Jun 2017 15:37:50 -0400 Subject: RFR: 8181093 arm64 crash when relocating address In-Reply-To: References: <87F26099-3F86-4A05-8739-FCE73EE48ECF@oracle.com> <7de1a062-ca7a-3fd3-9ee6-f4684a0dd992@oracle.com> <1CBE17D6-17AA-4989-A9D7-78CCC3E53240@oracle.com> Message-ID: <82D26A8D-7F24-4459-9086-73F136D5E3A1@oracle.com> Ooops. I fixed the bug comment. Bob. > On Jun 1, 2017, at 3:35 PM, Vladimir Kozlov wrote: > > On 6/1/17 12:12 PM, Bob Vandette wrote: >>> On Jun 1, 2017, at 2:14 PM, Vladimir Kozlov wrote: >>> >>> I agree that it should be fixed in JDK 9. >>> Problem evaluation and fix seems reasonable to me. >>> What performance regression you see? Fix is more critical than a small regression I think. >> No regression since the generated code doesn't even change. >> A specJVM98 run shows no significant difference. > > Typo in the bug report?: > > "tested this fix using specJVM98 on release and fastdebug binaries and confirmed that there is are performance regressions." > > Vladimir > >> Bob. >>> >>> Thanks, >>> Vladimir >>> >>> On 6/1/17 8:12 AM, Bob Vandette wrote: >>>> Please review this fix which avoids a crash when attempting to update the address >>>> of a metadata_Relocation in the arm64 port.
>>>> http://cr.openjdk.java.net/~bobv/8181093/webrev >>>> The problem is that the nativeInst NativeMovConstReg logic does not handle the case >>>> where NativeMovConstReg::set_data is processing an optimized 'or' instruction that >>>> was generated by MacroAssembler::mov_metadata -> MacroAssembler::mov_slow_helper. >>>> The crash trace shows that this occurred during metadata processing. >>>> The fix avoids the updating of the address since the metadata pointers do not move and >>>> the references are not PC relative. Note that metadata_Relocation::pd_fix_value is >>>> a noop on all other implementations. >>>> Current CompileTask: >>>> C1: 2052 303 ! 3 java.lang.invoke.MemberName::getMethodType (202 bytes) >>>> Stack: [0x0000007f7efa9000,0x0000007f7f0a9000], sp=0x0000007f7f0a64e0, free space=1013k >>>> Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code) >>>> V [libjvm.so+0xff8838] VMError::report_and_die(int, char const*, char const*, std::__va_list, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x140;; VMError::report_and_die(int, char const*, char const*, std::__va_list, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x140 >>>> V [libjvm.so+0xff9448] VMError::report_and_die(Thread*, char const*, int, char const*, char const*, std::__va_list)+0x54;; VMError::report_and_die(Thread*, char const*, int, char const*, char const*, std::__va_list)+0x54 >>>> V [libjvm.so+0x6a62b0] report_vm_error(char const*, int, char const*, char const*, ...)+0xe0;; report_vm_error(char const*, int, char const*, char const*, ...)+0xe0 >>>> V [libjvm.so+0xcdaa34] NativeMovConstReg::set_data(long)+0x158;; NativeMovConstReg::set_data(long)+0x158 >>>> V [libjvm.so+0xe470ec] Relocation::pd_set_data_value(unsigned char*, long, bool)+0x188;; Relocation::pd_set_data_value(unsigned char*, long, bool)+0x188 >>>> V [libjvm.so+0xe48768] metadata_Relocation::pd_fix_value(unsigned char*)+0xe4;;
metadata_Relocation::pd_fix_value(unsigned char*)+0xe4 >>>> V [libjvm.so+0xce337c] nmethod::fix_oop_relocations(unsigned char*, unsigned char*, bool)+0xe0;; nmethod::fix_oop_relocations(unsigned char*, unsigned char*, bool)+0xe0 >>>> V [libjvm.so+0xceb014] nmethod::copy_values(GrowableArray<_jobject*>*)+0x154;; nmethod::copy_values(GrowableArray<_jobject*>*)+0x154 >>>> V [libjvm.so+0xce1b44] nmethod::nmethod(Method*, CompilerType, int, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x3a0;; nmethod::nmethod(Method*, CompilerType, int, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x3a0 >>>> V [libjvm.so+0xce245c] nmethod::new_nmethod(methodHandle const&, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x208;; nmethod::new_nmethod(methodHandle const&, int, int, CodeOffsets*, int, DebugInformationRecorder*, Dependencies*, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, int)+0x208 >>>> V [libjvm.so+0x4efae0] ciEnv::register_method(ciMethod*, int, CodeOffsets*, int, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, bool, bool, RTMState)+0x330;; ciEnv::register_method(ciMethod*, int, CodeOffsets*, int, CodeBuffer*, int, OopMapSet*, ExceptionHandlerTable*, ImplicitExceptionTable*, AbstractCompiler*, bool, bool, RTMState)+0x330 >>>> V [libjvm.so+0x3b319c] Compilation::install_code(int)+0x128;; Compilation::install_code(int)+0x128 >>>> V [libjvm.so+0x3b5e50] Compilation::compile_method()+0x280;; Compilation::compile_method()+0x280 >>>> V [libjvm.so+0x3b6054] Compilation::Compilation(AbstractCompiler*, 
ciEnv*, ciMethod*, int, BufferBlob*, DirectiveSet*)+0x1b8;; Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*, DirectiveSet*)+0x1b8 >>>> V [libjvm.so+0x3b7814] Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x118;; Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x118 >>>> V [libjvm.so+0x6324e4] CompileBroker::invoke_compiler_on_method(CompileTask*)+0x354;; CompileBroker::invoke_compiler_on_method(CompileTask*)+0x354 >>>> V [libjvm.so+0x632ea4] CompileBroker::compiler_thread_loop()+0x2b8;; CompileBroker::compiler_thread_loop()+0x2b8 >>>> V [libjvm.so+0xf72964] JavaThread::thread_main_inner()+0x1fc;; JavaThread::thread_main_inner()+0x1fc >>>> V [libjvm.so+0xf72bb0] JavaThread::run()+0x1c0;; JavaThread::run()+0x1c0 >>>> V [libjvm.so+0xd3ba64] thread_native_entry(Thread*)+0x118;; thread_native_entry(Thread*)+0x118 >>>> C [libpthread.so.0+0x7e2c] start_thread+0xb0 >>>> C [libc.so.6+0xc8430] clone+0x70 >>>> Bob. From mandy.chung at oracle.com Thu Jun 1 21:55:13 2017 From: mandy.chung at oracle.com (Mandy Chung) Date: Thu, 1 Jun 2017 14:55:13 -0700 Subject: RFR 8181292 Backport Rename internal Unsafe.compare methods from 10 to 9 In-Reply-To: <75E82CFC-EC08-4FDA-AFE0-B7572D0AAB25@oracle.com> References: <75E82CFC-EC08-4FDA-AFE0-B7572D0AAB25@oracle.com> Message-ID: > On Jun 1, 2017, at 8:56 AM, Paul Sandoz wrote: > > Hi, > > To make it easier on 166 and Graal code to support both 9 and 10 we should back port the renaming of the internal Unsafe.compareAndSwap to Unsafe.compareAndSet: > > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8181292-unsafe-cas-rename-jdk/webrev/ > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8181292-unsafe-cas-rename-hotspot/webrev/ Looks fine to me. Just to mention - a few test/compiler/unsafe/SunMiscUnsafeAccessTestXXX.java tests only have the copyright header fix and it's okay to backport them as it's part of the JDK 10 changeset.
Mandy From paul.sandoz at oracle.com Thu Jun 1 22:11:30 2017 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 1 Jun 2017 15:11:30 -0700 Subject: RFR 8181292 Backport Rename internal Unsafe.compare methods from 10 to 9 In-Reply-To: References: <75E82CFC-EC08-4FDA-AFE0-B7572D0AAB25@oracle.com> Message-ID: > On 1 Jun 2017, at 14:55, Mandy Chung wrote: > > >> On Jun 1, 2017, at 8:56 AM, Paul Sandoz wrote: >> >> Hi, >> >> To make it easier on 166 and Graal code to support both 9 and 10 we should back port the renaming of the internal Unsafe.compareAndSwap to Unsafe.compareAndSet: >> >> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8181292-unsafe-cas-rename-jdk/webrev/ >> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8181292-unsafe-cas-rename-hotspot/webrev/ > > Looks fine to me. Just to mention - a few test/compiler/unsafe/SunMiscUnsafeAccessTestXXX.java tests only have the copyright header fix and it's okay to backport them as it's part of the JDK 10 changeset. > Thanks, those non-functional updates are a result of updating the template and regenerating all instances. Paul. From david.holmes at oracle.com Fri Jun 2 00:00:34 2017 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Jun 2017 10:00:34 +1000 Subject: (10) (M) RFR: 8174231: Factor out and share PlatformEvent and Parker code for POSIX systems In-Reply-To: <78ee6517-fefb-1a08-e8c8-68bbdfcbca6a@oracle.com> References: <3401d786-e657-35b4-cb0f-70848f5215b4@oracle.com> <368d99c5-a836-088e-b107-486cd4020b34@oracle.com> <96e65645-263f-e5a5-996d-efd7b4cfc01f@oracle.com> <29aef2eb-5870-5923-abe9-fd15f2c4b919@oracle.com> <25e10752-719a-237b-5797-16f225cdbd34@oracle.com> <78ee6517-fefb-1a08-e8c8-68bbdfcbca6a@oracle.com> Message-ID: As is typical my change caused a breakage on newer Mac OS (Sierra) systems that we don't have in JPRT: https://bugs.openjdk.java.net/browse/JDK-8181451 Ironically the proposed fix is the same typedef cleanup that Thomas had suggested earlier, but which I didn't take up.
I'm trying to find a system I can actually investigate this on. IIUC these newer Mac's do have clock_gettime (others don't!), but we don't know how well it works. David On 31/05/2017 6:50 AM, David Holmes wrote: > Hi Dan, > > On 31/05/2017 1:11 AM, Daniel D. Daugherty wrote: >> On 5/28/17 10:19 PM, David Holmes wrote: >>> Dan, Robbin, Thomas, >>> >>> Okay here is the final ready to push version: >>> >>> http://cr.openjdk.java.net/~dholmes/8174231/webrev.hotspot.v2/ >> >> General >> - Approaching the review differently than last round. This time I'm >> focused on the os_posix.[ch]pp changes as if this were all new code. >> - i.e., I'm going to assume that code deleted from the platform >> specific files is all appropriately represented in os_posix.[ch]pp. > > Okay - thanks again. > >> src/os/posix/vm/os_posix.hpp >> No comments. > > Okay I'm leaving the #includes as-is. > >> src/os/posix/vm/os_posix.cpp >> L1518: _use_clock_monotonic_condattr = true; >> L1522: _use_clock_monotonic_condattr = false; >> _use_clock_monotonic_condattr could briefly be observed as >> 'true' >> before being reset to 'false' due to the EINVAL. I think we are >> single threaded at this point so there should be no other thread >> running to be confused by this. > > Right this is single-threaded VM init. > >> An alternative would be to set _use_clock_monotonic_condattr >> to true only when _pthread_condattr_setclock() returns 0. > > Yes - fixed. > >> L1581: // number of seconds, in abstime, is less than >> current_time + 100,000,000. >> L1582: // As it will be over 20 years before "now + 100000000" >> will overflow we can >> L1584: // of "now + 100,000,000". This places a limit on the >> timeout of about 3.17 >> nit - consistency of using ',' or not in 100000000. Personally, >> I would prefer no commas so the comments match MAX_SECS. > > Fixed. 
> >> L1703: if (Atomic::cmpxchg(v-1, &_event, v) == v) break; >> L1743: if (Atomic::cmpxchg(v-1, &_event, v) == v) break; >> nit - please add spaces around the '-' operator. > > Fixed. > >> L1749: to_abstime(&abst, millis * (NANOUNITS/MILLIUNITS), >> false); >> nit - please add spaces around the '/' operator. > > Fixed. > >> src/os/aix/vm/os_aix.hpp >> No comments. >> >> src/os/aix/vm/os_aix.cpp >> No comments. >> >> src/os/bsd/vm/os_bsd.hpp >> No comments. >> >> src/os/bsd/vm/os_bsd.cpp >> No comments. >> >> src/os/linux/vm/os_linux.hpp >> No comments. >> >> src/os/linux/vm/os_linux.cpp >> No comments. >> >> src/os/solaris/vm/os_solaris.hpp >> No comments. >> >> src/os/solaris/vm/os_solaris.cpp >> No comments. >> >> >> Thumbs up. Don't need to see another webrev if you choose to fix >> the bits... > > Thanks again. > > David > >> Dan >> >> >>> >>> this fixes all Dan's nits and refactors the time calculation code as >>> suggested by Robbin. >>> >>> Thomas: if you are around and able, it would be good to get a final >>> sanity check on AIX. Thanks. >>> >>> Testing: >>> - JPRT: -testset hotspot >>> -testset core >>> >>> - manual: >>> - jtreg:java/util/concurrent >>> - various little test programs that try to validate sleep/wait >>> times to show early returns or unexpected delays >>> >>> Thanks again for the reviews. >>> >>> David >>> >>> On 29/05/2017 10:29 AM, David Holmes wrote: >>>> On 27/05/2017 4:19 AM, Daniel D. Daugherty wrote: >>>>> On 5/26/17 1:27 AM, David Holmes wrote: >>>>>> Robbin, Dan, >>>>>> >>>>>> Below is a modified version of the refactored to_abstime code that >>>>>> Robbin suggested. >>>>>> >>>>>> Robbin: there were a couple of issues with your version. For >>>>>> relative time the timeout is always in nanoseconds - the "unit" >>>>>> only tells you what form the "now_part_sec" is - nanos or micros. >>>>>> And the calc_abs_time always has a deadline in millis. 
So I >>>>>> simplified and did a little renaming, and tracked max_secs in >>>>>> debug_only instead of returning it. >>>>>> >>>>>> Please let me know what you think. >>>>> >>>>> Looks OK to me. Nit comments below... >>>> >>>> Thanks Dan - more below. >>>> >>>>>> >>>>>> >>>>>> // Calculate a new absolute time that is "timeout" nanoseconds >>>>>> from "now". >>>>>> // "unit" indicates the unit of "now_part_sec" (may be nanos or >>>>>> micros depending >>>>>> // on which clock is being used). >>>>>> static void calc_rel_time(timespec* abstime, jlong timeout, jlong >>>>>> now_sec, >>>>>> jlong now_part_sec, jlong unit) { >>>>>> time_t max_secs = now_sec + MAX_SECS; >>>>>> >>>>>> jlong seconds = timeout / NANOUNITS; >>>>>> timeout %= NANOUNITS; // remaining nanos >>>>>> >>>>>> if (seconds >= MAX_SECS) { >>>>>> // More seconds than we can add, so pin to max_secs. >>>>>> abstime->tv_sec = max_secs; >>>>>> abstime->tv_nsec = 0; >>>>>> } else { >>>>>> abstime->tv_sec = now_sec + seconds; >>>>>> long nanos = (now_part_sec * (NANOUNITS / unit)) + timeout; >>>>>> if (nanos >= NANOUNITS) { // overflow >>>>>> abstime->tv_sec += 1; >>>>>> nanos -= NANOUNITS; >>>>>> } >>>>>> abstime->tv_nsec = nanos; >>>>>> } >>>>>> } >>>>>> >>>>>> // Unpack the given deadline in milliseconds since the epoch, into >>>>>> the given timespec. >>>>>> // The current time in seconds is also passed in to enforce an >>>>>> upper bound as discussed above. >>>>>> static void unpack_abs_time(timespec* abstime, jlong deadline, >>>>>> jlong now_sec) { >>>>>> time_t max_secs = now_sec + MAX_SECS; >>>>>> >>>>>> jlong seconds = deadline / MILLIUNITS; >>>>>> jlong millis = deadline % MILLIUNITS; >>>>>> >>>>>> if (seconds >= max_secs) { >>>>>> // Absolute seconds exceeds allowed max, so pin to max_secs. 
>>>>>> abstime->tv_sec = max_secs; >>>>>> abstime->tv_nsec = 0; >>>>>> } else { >>>>>> abstime->tv_sec = seconds; >>>>>> abstime->tv_nsec = millis * (NANOUNITS / MILLIUNITS); >>>>>> } >>>>>> } >>>>>> >>>>>> >>>>>> static void to_abstime(timespec* abstime, jlong timeout, bool >>>>>> isAbsolute) { >>>>> >>>>> There's an extra blank line here. >>>> >>>> Fixed. >>>> >>>>>> >>>>>> DEBUG_ONLY(int max_secs = MAX_SECS;) >>>>>> >>>>>> if (timeout < 0) { >>>>>> timeout = 0; >>>>>> } >>>>>> >>>>>> #ifdef SUPPORTS_CLOCK_MONOTONIC >>>>>> >>>>>> if (_use_clock_monotonic_condattr && !isAbsolute) { >>>>>> struct timespec now; >>>>>> int status = _clock_gettime(CLOCK_MONOTONIC, &now); >>>>>> assert_status(status == 0, status, "clock_gettime"); >>>>>> calc_rel_time(abstime, timeout, now.tv_sec, now.tv_nsec, >>>>>> NANOUNITS); >>>>>> DEBUG_ONLY(max_secs += now.tv_sec;) >>>>>> } else { >>>>>> >>>>>> #else >>>>>> >>>>>> { // Match the block scope. >>>>>> >>>>>> #endif // SUPPORTS_CLOCK_MONOTONIC >>>>>> >>>>>> // Time-of-day clock is all we can reliably use. >>>>>> struct timeval now; >>>>>> int status = gettimeofday(&now, NULL); >>>>>> assert(status == 0, "gettimeofday"); >>>>> >>>>> assert_status() is used above, but assert() is used here. Why? >>>> >>>> Historical. assert_status was introduced for the pthread* and other >>>> posix funcs that return the error value rather than returning -1 and >>>> setting errno. gettimeofday is not one of those so still has the old >>>> assert. However, as someone pointed out a while ago you can use >>>> assert_status with these and pass errno as the "status". So I did that. >>>> >>>>> >>>>>> if (isAbsolute) { >>>>>> unpack_abs_time(abstime, timeout, now.tv_sec); >>>>>> } >>>>>> else { >>>>> >>>>> Inconsistent "else-branch" formatting. >>>>> I believe HotSpot style is "} else {" >>>> >>>> Fixed. 
>>>> >>>>>> calc_rel_time(abstime, timeout, now.tv_sec, now.tv_usec, MICROUNITS); >>>>>> } >>>>>> DEBUG_ONLY(max_secs += now.tv_sec;) >>>>>> } >>>>>> >>>>>> assert(abstime->tv_sec >= 0, "tv_sec < 0"); >>>>>> assert(abstime->tv_sec <= max_secs, "tv_sec > max_secs"); >>>>>> assert(abstime->tv_nsec >= 0, "tv_nsec < 0"); >>>>>> assert(abstime->tv_nsec < NANOSECS_PER_SEC, "tv_nsec >= >>>>>> nanos_per_sec"); >>>>> >>>>> Why does the assert mesg have "nanos_per_sec" instead of >>>>> "NANOSECS_PER_SEC"? >>>> >>>> No reason. Actually that should now refer to NANOUNITS. Hmmm I can >>>> not recall why we have NANOUNITS and NANAOSECS_PER_SEC ... possibly >>>> an oversight. >>>> >>>>> There's an extra blank line here. >>>> >>>> Fixed. >>>> >>>> Will send out complete updated webrev soon. >>>> >>>> Thanks, >>>> David >>>> >>>>>> >>>>>> } >>>>> >>>>> Definitely looks and reads much cleaner. >>>>> >>>>> Dan >>>>> >> From kim.barrett at oracle.com Fri Jun 2 00:41:50 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Jun 2017 20:41:50 -0400 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> <592FEDC0.2090109@oracle.com> Message-ID: <64A10C62-8008-4B43-99E9-8E6D82388FD7@oracle.com> > On Jun 1, 2017, at 12:25 PM, Andrew Hughes wrote: > It'll also need backporting to 9 now. > > Thanks, > -- > Andrew :) The current process of forward porting all changes from 9 to 10 means that if this should be in 9 any time soon then we should push there first, after going through the RDP2 approval process. It feels kind of late for that. 
I'm guessing that at some point we'll switch to a more normal push to 10 and backport to 9 process. How urgent is it to get this to 9? Like, if you (Andrew) want that to happen soon, how about you doing the request and justification? We can hold off the push until it's decided where it should go? From david.holmes at oracle.com Fri Jun 2 01:30:11 2017 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Jun 2017 11:30:11 +1000 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: <593029CB.7000100@oracle.com> References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> <593029CB.7000100@oracle.com> Message-ID: Hi Erik, On 2/06/2017 12:50 AM, Erik Österlund wrote: > Hi David, > > On 2017-06-01 14:33, David Holmes wrote: >> Hi Erik, >> >> Just to be clear it is not the use of that I am concerned >> about, it is the -library=stlport4. It is the use of that flag that I >> would want to check in terms of having no effect on any existing code >> generation. > > Thank you for the clarification. The use of -library=stlport4 should not > have anything to do with code generation. It only says where to look for > the standard library headers such as that are used in the > compilation units. The potential problem is that the stlport4 include path eg: ./SS12u4/lib/compilers/include/CC/stlport4/ doesn't only contain the C++ headers (new, limits, string etc) but also a whole bunch of regular 'standard' .h headers that are _different_ to those found outside the stlport4 directory ie the ones we would currently include. I don't know if the differences are significant, nor whether those others may be found ahead of the stlport4 version. But that is my concern about the effects on the code. Thanks, David ----- > Specifically, the man pages for CC say: > > > -library=lib[,lib...] > > Incorporates specified CC-provided libraries into > compilation and > linking.
> > When the -library option is used to specify a CC-provided > library, > the proper -I paths are set during compilation and the > proper -L, > -Y, -P, and -R paths and -l options are set during linking. > > > As we are setting this during compilation and not during linking, this > corresponds to setting the right -I paths to find our C++ standard > library headers. > > My studio friends mentioned I could double-check that we did indeed not > add a dependency to any C++ standard library by running elfdump on the > generated libjvm.so file and check if the NEEDED entries in the dynamic > section look right. I did and here are the results: > > [0] NEEDED 0x2918ee libsocket.so.1 > [1] NEEDED 0x2918fd libsched.so.1 > [2] NEEDED 0x29190b libdl.so.1 > [3] NEEDED 0x291916 libm.so.1 > [4] NEEDED 0x291920 libCrun.so.1 > [5] NEEDED 0x29192d libthread.so.1 > [6] NEEDED 0x29193c libdoor.so.1 > [7] NEEDED 0x291949 libc.so.1 > [8] NEEDED 0x291953 libdemangle.so.1 > [9] NEEDED 0x291964 libnsl.so.1 > [10] NEEDED 0x291970 libkstat.so.1 > [11] NEEDED 0x29197e librt.so.1 > > This list does not include any C++ standard libraries, as expected > (libCrun is always in there even with -library=%none, and as expected no > libstlport4.so or libCstd.so files are in there). The NEEDED entries in > the dynamic section look identical with and without my patch. > >> I'm finding the actual build situation very confusing. It seems to me >> in looking at the hotspot build files and the top-level build files >> that -xnolib is used for C++ compilation & linking whereas >> -library=%none is used for C compilation & linking. But the change is >> being applied to $2JVM_CFLAGS which one would think is for C >> compilation but we don't have $2JVM_CXXFLAGS, so it seems to be used >> for both! > > I have also been confused by this when I tried adding CXX flags through > configure that seemed to not be used. But that's a different can of > worms I suppose. 
> > Thanks, > /Erik > >> David >> >> On 1/06/2017 7:36 PM, Erik Österlund wrote: >>> Hi David, >>> >>> On 2017-06-01 08:09, David Holmes wrote: >>>> Hi Kim, >>>> >>>> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>>>> On May 31, 2017, at 9:22 PM, David Holmes >>>>>> wrote: >>>>>> >>>>>> Hi Erik, >>>>>> >>>>>> A small change with big questions :) >>>>>> >>>>>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>>>>> Hi, >>>>>>> It would be desirable to be able to use harmless C++ standard >>>>>>> library headers like <limits> in the code as long as it does not >>>>>>> add any link-time dependencies to the standard library. >>>>>> >>>>>> What does a 'harmless' C++ standard library header look like? >>>>> >>>>> Header-only (doesn't require linking), doesn't run afoul of our >>>>> [vm]assert macro, and provides functionality we presently lack (or >>>>> only handle poorly) and would not be easy to reproduce. >>>> >>>> And how does one establish those properties exist for a given header >>>> file? Just use it and if no link errors then all is good? >>> >>> Objects from headers that are not ODR-used such as constant folded >>> expressions are not imposing link-time dependencies to C++ libraries. >>> The -xnolib that we already have in the LDFLAGS will catch any >>> accidental ODR-uses of C++ objects, and the JVM will not build if >>> that happens. >>> >>> As for external headers being included and not playing nicely with >>> macros, this has to be evaluated on a case by case basis. Note that >>> this is a problem that occurs when using system headers (that we are >>> already using), as it is for using C++ standard library headers. We >>> even run into that in our own JVM when e.g. the min/max macros >>> occasionally slap us gently in the face from time to time. >>> >>>> >>>>> The instigator for this is Erik and I are working on a project that >>>>> needs information that is present in std::numeric_limits<> (provided >>>>> by the <limits> header). 
Reproducing that functionality ourselves >>>>> would require platform-specific code (with all the complexity that can >>>>> imply). We'd really rather not re-discover and maintain information >>>>> that is trivially accessible in every standard library. >>>> >>>> Understood. I have no issue with using <limits> but am concerned >>>> by the state of stlport4. Can you use <limits> without changing >>>> -library=%none? >>> >>> No, that is precisely why we are here. >>> >>>> >>>>>>> This is possible on all supported platforms except the ones using >>>>>>> the solaris studio compiler where we enforce -library=%none in >>>>>>> both CFLAGS and LDFLAGS. >>>>>>> I propose to remove the restriction from CFLAGS but keep it on >>>>>>> LDFLAGS. >>>>>>> I have consulted with the studio folks, and they think this is >>>>>>> absolutely fine and thought that the choice of -library=stlport4 >>>>>>> should be fine for our CFLAGS and is indeed what is already used >>>>>>> in the gtest launcher. >>>>>> >>>>>> So what exactly does this mean? IIUC this allows you to use >>>>>> headers for, and compile against "STLport's Standard Library >>>>>> implementation version 4.5.3 instead of the default libCstd". But >>>>>> how do you then not need to link against libstlport.so ?? >>>>>> >>>>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>>>> >>>>>> "STLport is binary incompatible with the default libCstd. If you >>>>>> use the STLport implementation of the standard library, then you >>>>>> must compile and link all files, including third-party libraries, >>>>>> with the option -library=stlport4" >>>>> >>>>> It means we can only use header-only parts of the standard library. >>>>> This was confirmed / suggested by the Studio folks Erik consulted, >>>>> providing such limited access while continuing to constrain our >>>>> dependency on the library. Figuring out what can be used will need to >>>>> be determined on a case-by-case basis. 
Maybe we could just link with >>>>> a standard library on Solaris too. So far as I can tell, Solaris is >>>>> the only platform where we don't do that. But Erik is trying to be >>>>> conservative. >>>> >>>> Okay, but the docs don't seem to acknowledge the ability to use, but >>>> not link to, stlport4. >>> >>> Not ODR-used objects do not require linkage. >>> (http://en.cppreference.com/w/cpp/language/definition) >>> I have confirmed directly with the studio folks to be certain that >>> accidental linkage would fail by keeping our existing guards in the >>> LDFLAGS rather than the CFLAGS. >>> This is also reasonably well documented already >>> (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). >>> >>>> >>>>>> There are lots of other comments in that document regarding >>>>>> STLport that makes me think that using it may be introducing a >>>>>> fragile dependency into the OpenJDK code! >>>>>> >>>>>> "STLport is an open source product and does not guarantee >>>>>> compatibility across different releases. In other words, compiling >>>>>> with a future version of STLport may break applications compiled >>>>>> with STLport 4.5.3. It also might not be possible to link binaries >>>>>> compiled using STLport 4.5.3 with binaries compiled using a future >>>>>> version of STLport." >>>>>> >>>>>> "Future releases of the compiler might not include STLport4. They >>>>>> might include only a later version of STLport. The compiler option >>>>>> -library=stlport4 might not be available in future releases, but >>>>>> could be replaced by an option referring to a later STLport version." >>>>>> >>>>>> None of that sounds very good to me. >>>>> >>>>> I don't see how this is any different from any other part of the >>>>> process for using a different version of Solaris Studio. 
>>>> >>>> Well we'd discover the problem when testing the compiler change, but >>>> my point was more to the fact that they don't seem very committed to >>>> this library - very much a "use at own risk" disclaimer. >>> >>> If we eventually need to use something more modern for features that >>> have not been around for a decade, like C++11 features, then we can >>> change standard library when that day comes. >>> >>>> >>>>> stlport4 is one of the three standard libraries that are presently >>>>> included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the >>>>> Studio folks which to use (for the purposes of our present project, we >>>>> don't have any particular preference, so long as it works), and >>>>> stlport4 seemed the right choice (libCstd was, I think, described as >>>>> "ancient"). Perhaps more importantly, we already use stlport4, >>>>> including linking against it, for gtest builds. Mixing two different >>>>> standard libraries seems like a bad idea... >>>> >>>> So we have the choice of "ancient", "unsupported" or gcc :) >>>> >>>> My confidence in this has not increased :) >>> >>> I trust that e.g. std::numeric_limits::is_signed in the standard >>> libraries has more mileage than whatever simplified rewrite of that >>> we try to replicate in the JVM. So it is not obvious to me that we >>> should have less confidence in the same functionality from a standard >>> library shipped together with the compiler we are using and that has >>> already been used and tested in a variety of C++ applications for >>> over a decade compared to the alternative of reinventing it ourselves. >>> >>>> What we do in gtest doesn't necessarily make things okay to do in >>>> the product. >>>> >>>> If this were part of a compiler upgrade process we'd be comparing >>>> binaries with old flag and new to ensure there are no unexpected >>>> consequences. 
>>> >>> I would not compare including to a compiler upgrade process >>> as we are not changing the compiler and hence not the way code is >>> generated, but rather compare it to including a new system header >>> that has previously not been included to use a constant folded >>> expression from that header that has been used and tested for a >>> decade. At least that is how I think of it. >>> >>> Thanks, >>> /Erik >>> >>>> >>>> Cheers, >>>> David >>>> >>>>>> >>>>>> Cheers, >>>>>> David >>>>>> >>>>>> >>>>>>> Webrev for jdk10-hs top level repository: >>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>>>> Webrev for jdk10-hs hotspot repository: >>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>>>> Testing: JPRT. >>>>>>> Will need a sponsor. >>>>>>> Thanks, >>>>>>> /Erik >>>>> >>>>> >>> > From david.holmes at oracle.com Fri Jun 2 01:36:05 2017 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Jun 2017 11:36:05 +1000 Subject: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode In-Reply-To: References: <5f437ff4-98f5-91b6-608e-d7c2d6f1e469@oracle.com> <03724e7c-f656-473f-9a89-eb78073b518f@default> <82c81d81-017f-fdc1-0e33-0f9cd5140e82@oracle.com> Message-ID: Hi Stuart, On 1/06/2017 11:26 PM, Stuart Monteith wrote: > Hello, > I tested this on x86 and aarch64. Muneer's bug is an accurate > description of the failing tests. I'm not sure what you mean by > "8180904 has to be fixed before this backport", as the backport is the > fix for the issue Muneer presented. JDK9 doesn't exhibit these > failures as it has the fix to be backported. As I understood it, 8180904 reports that a whole bunch of tests fail if run in agentvm mode. The current backport would enable agentvm mode and hence all those tests would start to fail. Did I misunderstand something? 
Thanks, David > Comparing the runs without and with the patch - this is on x86 - I get > essentially the same on aarch64: > > 0: JTwork-without pass: 680; fail: 44; error: 3; not run: 4 > 1: JTwork-with pass: 718; fail: 6; error: 2; not run: 5 > > 0 1 Test > fail pass compiler/jsr292/PollutedTrapCounts.java > fail pass compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java#id0 > fail pass compiler/loopopts/UseCountedLoopSafepoints.java > pass fail compiler/rtm/locking/TestRTMLockingThreshold.java#id0 > fail pass compiler/types/correctness/OffTest.java#id0 > fail pass gc/TestVerifySilently.java > fail pass gc/TestVerifySubSet.java > fail pass gc/class_unloading/TestCMSClassUnloadingEnabledHWM.java > fail pass gc/class_unloading/TestG1ClassUnloadingHWM.java > fail pass gc/ergonomics/TestDynamicNumberOfGCThreads.java > fail pass gc/g1/TestEagerReclaimHumongousRegions.java > fail pass gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java > fail pass gc/g1/TestEagerReclaimHumongousRegionsWithRefs.java > fail pass gc/g1/TestG1TraceEagerReclaimHumongousObjects.java > fail pass gc/g1/TestGCLogMessages.java > fail pass gc/g1/TestHumongousAllocInitialMark.java > fail pass gc/g1/TestPrintGCDetails.java > fail pass gc/g1/TestPrintRegionRememberedSetInfo.java > fail pass gc/g1/TestShrinkAuxiliaryData00.java > fail pass gc/g1/TestShrinkAuxiliaryData05.java > fail pass gc/g1/TestShrinkAuxiliaryData10.java > fail pass gc/g1/TestShrinkAuxiliaryData15.java > fail pass gc/g1/TestShrinkAuxiliaryData20.java > fail pass gc/g1/TestShrinkAuxiliaryData25.java > fail pass gc/g1/TestShrinkDefragmentedHeap.java#id0 > fail pass gc/g1/TestStringDeduplicationAgeThreshold.java > fail pass gc/g1/TestStringDeduplicationFullGC.java > fail pass gc/g1/TestStringDeduplicationInterned.java > fail pass gc/g1/TestStringDeduplicationPrintOptions.java > fail pass gc/g1/TestStringDeduplicationTableRehash.java > fail pass gc/g1/TestStringDeduplicationTableResize.java > fail pass 
gc/g1/TestStringDeduplicationYoungGC.java > fail pass gc/g1/TestStringSymbolTableStats.java > fail pass gc/logging/TestGCId.java > fail pass gc/whitebox/TestWBGC.java > fail pass runtime/ErrorHandling/TestOnOutOfMemoryError.java#id0 > fail pass runtime/NMT/JcmdWithNMTDisabled.java > fail pass runtime/memory/ReserveMemory.java > pass --- sanity/WhiteBox.java > fail pass serviceability/attach/AttachWithStalePidFile.java > fail pass serviceability/jvmti/TestRedefineWithUnresolvedClass.java > error pass serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java#id0 > > > I find that compiler/rtm/locking/TestRTMLockingThreshold.java produces > inconsistent results on my machine, regardless of whether or not the > patch is applied. > > BR > Stuart > > > On 1 June 2017 at 06:39, David Holmes wrote: >> Thanks for that information Muneer, that is an unpleasant surprise. >> >> Stuart: I think 8180904 has to be fixed before this backport can take place. >> >> Thanks, >> David >> ----- >> >> >> On 1/06/2017 2:31 PM, Muneer Kolarkunnu wrote: >>> >>> Hi David and Stuart, >>> >>> I recently reported one bug[1] for the same issue and listed which all >>> test cases are failing with agentvm. >>> I tested in Oracle.Linux.7.0 x64. >>> >>> [1] https://bugs.openjdk.java.net/browse/JDK-8180904 >>> >>> Regards, >>> Muneer >>> >>> -----Original Message----- >>> From: David Holmes >>> Sent: Thursday, June 01, 2017 7:04 AM >>> To: Stuart Monteith; hotspot-dev Source Developers >>> Subject: Re: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg >>> tests to run in agentvm mode >>> >>> Hi Stuart, >>> >>> This looks like an accurate backport of the change. >>> >>> My only minor concern is if there may be tests in 8u that are no longer in >>> 9 which may not work with agentvm mode. >>> >>> What platforms have you tested this on? 
>>> >>> Thanks, >>> David >>> >>> On 31/05/2017 11:19 PM, Stuart Monteith wrote: >>>> >>>> Hello, >>>> Currently the jdk8u codebase fails some JTreg Hotspot tests when >>>> running in the -agentvm mode. This is because the ProcessTools class >>>> is not passing the classpath. There are substantial time savings to be >>>> gained using -agentvm over -othervm. >>>> >>>> Fortunately, there was a fix for jdk9 (8077608) that has not been >>>> backported to jdk8u. The details are as follows: >>>> >>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/017937.h >>>> tml >>>> https://bugs.openjdk.java.net/browse/JDK-8077608 >>>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/af2a1e9f08f3 >>>> >>>> The patch just needed a slight change, to remove the change to the >>>> file "test/compiler/uncommontrap/TestUnstableIfTrap.java" as that test >>>> doesn't exist on jdk8u. >>>> >>>> My colleague Ningsheng has kindly hosted the change here: >>>> >>>> http://cr.openjdk.java.net/~njian/8077608/webrev.00 >>>> >>>> >>>> BR, >>>> Stuart >>>> >> From mark.reinhold at oracle.com Fri Jun 2 04:13:57 2017 From: mark.reinhold at oracle.com (mark.reinhold at oracle.com) Date: Thu, 01 Jun 2017 21:13:57 -0700 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: <2849B1E3-1125-4A40-B1D9-CDBB8546DA62@oracle.com> References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> <592FEDC0.2090109@oracle.com> <2849B1E3-1125-4A40-B1D9-CDBB8546DA62@oracle.com> Message-ID: <20170601211357.184958343@eggemoggin.niobe.net> Erik -- I think this is worth fixing in 9, given that GCC 6 is no longer new and the 
sustaining lines of 9 will be around for a while. Would you mind pushing it to 9, from which it will automatically be forward-ported to 10? I'd be happy to approve the fix request. Thanks, - Mark 2017/6/1 9:44:06 -0700, erik.osterlund at oracle.com: > Thank you Andrew. > > /Erik > >> On 1 Jun 2017, at 18:25, Andrew Hughes wrote: >> >>> On 1 June 2017 at 12:17, Per Liden wrote: >>>> On 2017-06-01 12:34, Erik ?sterlund wrote: >>>> >>>> Hi Per, >>>> >>>>> On 2017-06-01 11:49, Per Liden wrote: >>>>> >>>>> Hi, >>>>> >>>>> On 2017-06-01 10:18, Kim Barrett wrote: >>>>>>> >>>>>>> On May 31, 2017, at 11:01 AM, Erik ?sterlund >>>>>>> wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> Excellent. In that case I would like reviews on this patch that does >>>>>>> exactly that: >>>>>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ >>>>> >>>>> >>>>> Looks good, but can we please add a comment here describing why we're >>>>> doing this. It's not obvious :) >>>> >>>> >>>> Thank you for the review. Here is a webrev with the added comment: >>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.01/ >>> >>> >>> Looks good, thanks! >>> >>> /Per >>> >> >> Looks good to me too, and will be great to finally see this fixed. >> >> It'll also need backporting to 9 now. >> >> Thanks, >> -- >> Andrew :) >> >> Senior Free Java Software Engineer >> Red Hat, Inc. 
(http://www.redhat.com) >> >> Web Site: http://fuseyism.com >> Twitter: https://twitter.com/gnu_andrew_java >> PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) >> Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From erik.osterlund at oracle.com Fri Jun 2 06:21:04 2017 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 2 Jun 2017 08:21:04 +0200 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: <20170601211357.184958343@eggemoggin.niobe.net> References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> <592FEDC0.2090109@oracle.com> <2849B1E3-1125-4A40-B1D9-CDBB8546DA62@oracle.com> <20170601211357.184958343@eggemoggin.niobe.net> Message-ID: <593103D0.6080109@oracle.com> Hi Mark, Thank you for supporting a push to 9. /Erik On 2017-06-02 06:13, mark.reinhold at oracle.com wrote: > Erik -- I think this is worth fixing in 9, given that GCC 6 is no longer > new and the sustaining lines of 9 will be around for a while. Would you > mind pushing it to 9, from which it will automatically be forward-ported > to 10? I'd be happy to approve the fix request. > > Thanks, > - Mark > > > 2017/6/1 9:44:06 -0700, erik.osterlund at oracle.com: >> Thank you Andrew. 
>> >> /Erik >> >>> On 1 Jun 2017, at 18:25, Andrew Hughes wrote: >>> >>>> On 1 June 2017 at 12:17, Per Liden wrote: >>>>> On 2017-06-01 12:34, Erik ?sterlund wrote: >>>>> >>>>> Hi Per, >>>>> >>>>>> On 2017-06-01 11:49, Per Liden wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> On 2017-06-01 10:18, Kim Barrett wrote: >>>>>>>> On May 31, 2017, at 11:01 AM, Erik ?sterlund >>>>>>>> wrote: >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> Excellent. In that case I would like reviews on this patch that does >>>>>>>> exactly that: >>>>>>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ >>>>>> >>>>>> Looks good, but can we please add a comment here describing why we're >>>>>> doing this. It's not obvious :) >>>>> >>>>> Thank you for the review. Here is a webrev with the added comment: >>>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.01/ >>>> >>>> Looks good, thanks! >>>> >>>> /Per >>>> >>> Looks good to me too, and will be great to finally see this fixed. >>> >>> It'll also need backporting to 9 now. >>> >>> Thanks, >>> -- >>> Andrew :) >>> >>> Senior Free Java Software Engineer >>> Red Hat, Inc. 
(http://www.redhat.com) >>> >>> Web Site: http://fuseyism.com >>> Twitter: https://twitter.com/gnu_andrew_java >>> PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) >>> Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From bourges.laurent at gmail.com Fri Jun 2 06:33:35 2017 From: bourges.laurent at gmail.com (=?UTF-8?Q?Laurent_Bourg=C3=A8s?=) Date: Fri, 2 Jun 2017 08:33:35 +0200 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: <20170601211357.184958343@eggemoggin.niobe.net> References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> <592FEDC0.2090109@oracle.com> <2849B1E3-1125-4A40-B1D9-CDBB8546DA62@oracle.com> <20170601211357.184958343@eggemoggin.niobe.net> Message-ID: Hi, I confirm this patch let me build openjdk9 on a fresh ubuntu 17.04 install using gcc 6.3 and it is working well. Laurent Le 2 juin 2017 6:15 AM, a ?crit : Erik -- I think this is worth fixing in 9, given that GCC 6 is no longer new and the sustaining lines of 9 will be around for a while. Would you mind pushing it to 9, from which it will automatically be forward-ported to 10? I'd be happy to approve the fix request. Thanks, - Mark 2017/6/1 9:44:06 -0700, erik.osterlund at oracle.com: > Thank you Andrew. 
> > /Erik > >> On 1 Jun 2017, at 18:25, Andrew Hughes wrote: >> >>> On 1 June 2017 at 12:17, Per Liden wrote: >>>> On 2017-06-01 12:34, Erik ?sterlund wrote: >>>> >>>> Hi Per, >>>> >>>>> On 2017-06-01 11:49, Per Liden wrote: >>>>> >>>>> Hi, >>>>> >>>>> On 2017-06-01 10:18, Kim Barrett wrote: >>>>>>> >>>>>>> On May 31, 2017, at 11:01 AM, Erik ?sterlund >>>>>>> wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> Excellent. In that case I would like reviews on this patch that does >>>>>>> exactly that: >>>>>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ >>>>> >>>>> >>>>> Looks good, but can we please add a comment here describing why we're >>>>> doing this. It's not obvious :) >>>> >>>> >>>> Thank you for the review. Here is a webrev with the added comment: >>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.01/ >>> >>> >>> Looks good, thanks! >>> >>> /Per >>> >> >> Looks good to me too, and will be great to finally see this fixed. >> >> It'll also need backporting to 9 now. >> >> Thanks, >> -- >> Andrew :) >> >> Senior Free Java Software Engineer >> Red Hat, Inc. (http://www.redhat.com) >> >> Web Site: http://fuseyism.com >> Twitter: https://twitter.com/gnu_andrew_java >> PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) >> Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From stuart.monteith at linaro.org Fri Jun 2 09:17:23 2017 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Fri, 2 Jun 2017 10:17:23 +0100 Subject: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode In-Reply-To: References: <5f437ff4-98f5-91b6-608e-d7c2d6f1e469@oracle.com> <03724e7c-f656-473f-9a89-eb78073b518f@default> <82c81d81-017f-fdc1-0e33-0f9cd5140e82@oracle.com> Message-ID: Hi David, Yes, I was being a bit unclear. The patch includes a fix to allow the tests that fail under -agentvm to pass successfully. 
Under agentvm, tests that spawn their own processes don't inherit a working classpath, so the patch changes ProcessTools to pass this on. The results I presented before show how the failing tests will then pass with agentvm once the patch is applied. Thanks, Stuart On 2 June 2017 at 02:36, David Holmes wrote: > Hi Stuart, > > On 1/06/2017 11:26 PM, Stuart Monteith wrote: >> >> Hello, >> I tested this on x86 and aarch64. Muneer's bug is an accurate >> description of the failing tests. I'm not sure what you mean by >> "8180904 has to be fixed before this backport", as the backport is the >> fix for the issue Muneer presented. JDK9 doesn't exhibit these >> failures as it has the fix to be backported. > > > As I understood it, 8180904 reports that a whole bunch of tests fail if run > in agentvm mode. The current backport would enable agentvm mode and hence > all those tests would start to fail. > > Did I misunderstand something? > > Thanks, > David > > > >> Comparing the runs without and with the patch - this is on x86 - I get >> essentially the same on aarch64: >> >> 0: JTwork-without pass: 680; fail: 44; error: 3; not run: 4 >> 1: JTwork-with pass: 718; fail: 6; error: 2; not run: 5 >> >> 0 1 Test >> fail pass compiler/jsr292/PollutedTrapCounts.java >> fail pass >> compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java#id0 >> fail pass compiler/loopopts/UseCountedLoopSafepoints.java >> pass fail compiler/rtm/locking/TestRTMLockingThreshold.java#id0 >> fail pass compiler/types/correctness/OffTest.java#id0 >> fail pass gc/TestVerifySilently.java >> fail pass gc/TestVerifySubSet.java >> fail pass gc/class_unloading/TestCMSClassUnloadingEnabledHWM.java >> fail pass gc/class_unloading/TestG1ClassUnloadingHWM.java >> fail pass gc/ergonomics/TestDynamicNumberOfGCThreads.java >> fail pass gc/g1/TestEagerReclaimHumongousRegions.java >> fail pass gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java >> fail pass gc/g1/TestEagerReclaimHumongousRegionsWithRefs.java >> 
fail pass gc/g1/TestG1TraceEagerReclaimHumongousObjects.java >> fail pass gc/g1/TestGCLogMessages.java >> fail pass gc/g1/TestHumongousAllocInitialMark.java >> fail pass gc/g1/TestPrintGCDetails.java >> fail pass gc/g1/TestPrintRegionRememberedSetInfo.java >> fail pass gc/g1/TestShrinkAuxiliaryData00.java >> fail pass gc/g1/TestShrinkAuxiliaryData05.java >> fail pass gc/g1/TestShrinkAuxiliaryData10.java >> fail pass gc/g1/TestShrinkAuxiliaryData15.java >> fail pass gc/g1/TestShrinkAuxiliaryData20.java >> fail pass gc/g1/TestShrinkAuxiliaryData25.java >> fail pass gc/g1/TestShrinkDefragmentedHeap.java#id0 >> fail pass gc/g1/TestStringDeduplicationAgeThreshold.java >> fail pass gc/g1/TestStringDeduplicationFullGC.java >> fail pass gc/g1/TestStringDeduplicationInterned.java >> fail pass gc/g1/TestStringDeduplicationPrintOptions.java >> fail pass gc/g1/TestStringDeduplicationTableRehash.java >> fail pass gc/g1/TestStringDeduplicationTableResize.java >> fail pass gc/g1/TestStringDeduplicationYoungGC.java >> fail pass gc/g1/TestStringSymbolTableStats.java >> fail pass gc/logging/TestGCId.java >> fail pass gc/whitebox/TestWBGC.java >> fail pass runtime/ErrorHandling/TestOnOutOfMemoryError.java#id0 >> fail pass runtime/NMT/JcmdWithNMTDisabled.java >> fail pass runtime/memory/ReserveMemory.java >> pass --- sanity/WhiteBox.java >> fail pass serviceability/attach/AttachWithStalePidFile.java >> fail pass serviceability/jvmti/TestRedefineWithUnresolvedClass.java >> error pass serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java#id0 >> >> >> I find that compiler/rtm/locking/TestRTMLockingThreshold.java produces >> inconsistent results on my machine, regardless of whether or not the >> patch is applied. >> >> BR >> Stuart >> >> >> On 1 June 2017 at 06:39, David Holmes wrote: >>> >>> Thanks for that information Muneer, that is an unpleasant surprise. >>> >>> Stuart: I think 8180904 has to be fixed before this backport can take >>> place. 
>>> >>> Thanks, >>> David >>> ----- >>> >>> >>> On 1/06/2017 2:31 PM, Muneer Kolarkunnu wrote: >>>> >>>> >>>> Hi David and Stuart, >>>> >>>> I recently reported one bug[1] for the same issue and listed which all >>>> test cases are failing with agentvm. >>>> I tested in Oracle.Linux.7.0 x64. >>>> >>>> [1] https://bugs.openjdk.java.net/browse/JDK-8180904 >>>> >>>> Regards, >>>> Muneer >>>> >>>> -----Original Message----- >>>> From: David Holmes >>>> Sent: Thursday, June 01, 2017 7:04 AM >>>> To: Stuart Monteith; hotspot-dev Source Developers >>>> Subject: Re: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg >>>> tests to run in agentvm mode >>>> >>>> Hi Stuart, >>>> >>>> This looks like an accurate backport of the change. >>>> >>>> My only minor concern is if there may be tests in 8u that are no longer >>>> in >>>> 9 which may not work with agentvm mode. >>>> >>>> What platforms have you tested this on? >>>> >>>> Thanks, >>>> David >>>> >>>> On 31/05/2017 11:19 PM, Stuart Monteith wrote: >>>>> >>>>> >>>>> Hello, >>>>> Currently the jdk8u codebase fails some JTreg Hotspot tests when >>>>> running in the -agentvm mode. This is because the ProcessTools class >>>>> is not passing the classpath. There are substantial time savings to be >>>>> gained using -agentvm over -othervm. >>>>> >>>>> Fortunately, there was a fix for jdk9 (8077608) that has not been >>>>> backported to jdk8u. The details are as follows: >>>>> >>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/017937.h >>>>> tml >>>>> https://bugs.openjdk.java.net/browse/JDK-8077608 >>>>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/af2a1e9f08f3 >>>>> >>>>> The patch just needed a slight change, to remove the change to the >>>>> file "test/compiler/uncommontrap/TestUnstableIfTrap.java" as that test >>>>> doesn't exist on jdk8u. 
>>>>> >>>>> My colleague Ningsheng has kindly hosted the change here: >>>>> >>>>> http://cr.openjdk.java.net/~njian/8077608/webrev.00 >>>>> >>>>> >>>>> BR, >>>>> Stuart >>>>> >>> > From david.holmes at oracle.com Fri Jun 2 09:30:22 2017 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Jun 2017 19:30:22 +1000 Subject: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode In-Reply-To: References: <5f437ff4-98f5-91b6-608e-d7c2d6f1e469@oracle.com> <03724e7c-f656-473f-9a89-eb78073b518f@default> <82c81d81-017f-fdc1-0e33-0f9cd5140e82@oracle.com> Message-ID: <9fb25d3c-4b55-3872-f9c7-fc460a675ba1@oracle.com> On 2/06/2017 7:17 PM, Stuart Monteith wrote: > Hi David, > Yes, I was being a bit unclear. The patch includes a fix to allow > the tests that fail under -agentvm to pass successfully. Under > agentvm, tests that spawn their own processes don't inherit a working > classpath, so the patch changes ProcessTools to pass this on. The > results I presented before show how the failing tests will then pass > with agentvm once the patch is applied. Ah I see. Thanks I missed the significance of the ProcessTools change. >>> 1: JTwork-with pass: 718; fail: 6; error: 2; not run: 5 So out of those 8 non-passing tests are any of the failures specifically related to using agentvm? Thanks, David > Thanks, > Stuart > > > On 2 June 2017 at 02:36, David Holmes wrote: >> Hi Stuart, >> >> On 1/06/2017 11:26 PM, Stuart Monteith wrote: >>> >>> Hello, >>> I tested this on x86 and aarch64. Muneer's bug is an accurate >>> description of the failing tests. I'm not sure what you mean by >>> "8180904 has to be fixed before this backport", as the backport is the >>> fix for the issue Muneer presented. JDK9 doesn't exhibit these >>> failures as it has the fix to be backported. >> >> >> As I understood it, 8180904 reports that a whole bunch of tests fail if run >> in agentvm mode. 
The current backport would enable agentvm mode and hence >> all those tests would start to fail. >> >> Did I misunderstand something? >> >> Thanks, >> David >> >> >> >>> Comparing the runs without and with the patch - this is on x86 - I get >>> essentially the same on aarch64: >>> >>> 0: JTwork-without pass: 680; fail: 44; error: 3; not run: 4 >>> 1: JTwork-with pass: 718; fail: 6; error: 2; not run: 5 >>> >>> 0 1 Test >>> fail pass compiler/jsr292/PollutedTrapCounts.java >>> fail pass >>> compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java#id0 >>> fail pass compiler/loopopts/UseCountedLoopSafepoints.java >>> pass fail compiler/rtm/locking/TestRTMLockingThreshold.java#id0 >>> fail pass compiler/types/correctness/OffTest.java#id0 >>> fail pass gc/TestVerifySilently.java >>> fail pass gc/TestVerifySubSet.java >>> fail pass gc/class_unloading/TestCMSClassUnloadingEnabledHWM.java >>> fail pass gc/class_unloading/TestG1ClassUnloadingHWM.java >>> fail pass gc/ergonomics/TestDynamicNumberOfGCThreads.java >>> fail pass gc/g1/TestEagerReclaimHumongousRegions.java >>> fail pass gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java >>> fail pass gc/g1/TestEagerReclaimHumongousRegionsWithRefs.java >>> fail pass gc/g1/TestG1TraceEagerReclaimHumongousObjects.java >>> fail pass gc/g1/TestGCLogMessages.java >>> fail pass gc/g1/TestHumongousAllocInitialMark.java >>> fail pass gc/g1/TestPrintGCDetails.java >>> fail pass gc/g1/TestPrintRegionRememberedSetInfo.java >>> fail pass gc/g1/TestShrinkAuxiliaryData00.java >>> fail pass gc/g1/TestShrinkAuxiliaryData05.java >>> fail pass gc/g1/TestShrinkAuxiliaryData10.java >>> fail pass gc/g1/TestShrinkAuxiliaryData15.java >>> fail pass gc/g1/TestShrinkAuxiliaryData20.java >>> fail pass gc/g1/TestShrinkAuxiliaryData25.java >>> fail pass gc/g1/TestShrinkDefragmentedHeap.java#id0 >>> fail pass gc/g1/TestStringDeduplicationAgeThreshold.java >>> fail pass gc/g1/TestStringDeduplicationFullGC.java >>> fail pass 
gc/g1/TestStringDeduplicationInterned.java >>> fail pass gc/g1/TestStringDeduplicationPrintOptions.java >>> fail pass gc/g1/TestStringDeduplicationTableRehash.java >>> fail pass gc/g1/TestStringDeduplicationTableResize.java >>> fail pass gc/g1/TestStringDeduplicationYoungGC.java >>> fail pass gc/g1/TestStringSymbolTableStats.java >>> fail pass gc/logging/TestGCId.java >>> fail pass gc/whitebox/TestWBGC.java >>> fail pass runtime/ErrorHandling/TestOnOutOfMemoryError.java#id0 >>> fail pass runtime/NMT/JcmdWithNMTDisabled.java >>> fail pass runtime/memory/ReserveMemory.java >>> pass --- sanity/WhiteBox.java >>> fail pass serviceability/attach/AttachWithStalePidFile.java >>> fail pass serviceability/jvmti/TestRedefineWithUnresolvedClass.java >>> error pass serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java#id0 >>> >>> >>> I find that compiler/rtm/locking/TestRTMLockingThreshold.java produces >>> inconsistent results on my machine, regardless of whether or not the >>> patch is applied. >>> >>> BR >>> Stuart >>> >>> >>> On 1 June 2017 at 06:39, David Holmes wrote: >>>> >>>> Thanks for that information Muneer, that is an unpleasant surprise. >>>> >>>> Stuart: I think 8180904 has to be fixed before this backport can take >>>> place. >>>> >>>> Thanks, >>>> David >>>> ----- >>>> >>>> >>>> On 1/06/2017 2:31 PM, Muneer Kolarkunnu wrote: >>>>> >>>>> >>>>> Hi David and Stuart, >>>>> >>>>> I recently reported one bug[1] for the same issue and listed which all >>>>> test cases are failing with agentvm. >>>>> I tested in Oracle.Linux.7.0 x64. 
>>>>> >>>>> [1] https://bugs.openjdk.java.net/browse/JDK-8180904 >>>>> >>>>> Regards, >>>>> Muneer >>>>> >>>>> -----Original Message----- >>>>> From: David Holmes >>>>> Sent: Thursday, June 01, 2017 7:04 AM >>>>> To: Stuart Monteith; hotspot-dev Source Developers >>>>> Subject: Re: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg >>>>> tests to run in agentvm mode >>>>> >>>>> Hi Stuart, >>>>> >>>>> This looks like an accurate backport of the change. >>>>> >>>>> My only minor concern is if there may be tests in 8u that are no longer >>>>> in >>>>> 9 which may not work with agentvm mode. >>>>> >>>>> What platforms have you tested this on? >>>>> >>>>> Thanks, >>>>> David >>>>> >>>>> On 31/05/2017 11:19 PM, Stuart Monteith wrote: >>>>>> >>>>>> >>>>>> Hello, >>>>>> Currently the jdk8u codebase fails some JTreg Hotspot tests when >>>>>> running in the -agentvm mode. This is because the ProcessTools class >>>>>> is not passing the classpath. There are substantial time savings to be >>>>>> gained using -agentvm over -othervm. >>>>>> >>>>>> Fortunately, there was a fix for jdk9 (8077608) that has not been >>>>>> backported to jdk8u. The details are as follows: >>>>>> >>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/017937.h >>>>>> tml >>>>>> https://bugs.openjdk.java.net/browse/JDK-8077608 >>>>>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/af2a1e9f08f3 >>>>>> >>>>>> The patch just needed a slight change, to remove the change to the >>>>>> file "test/compiler/uncommontrap/TestUnstableIfTrap.java" as that test >>>>>> doesn't exist on jdk8u. 
>>>>>> >>>>>> My colleague Ningsheng has kindly hosted the change here: >>>>>> >>>>>> http://cr.openjdk.java.net/~njian/8077608/webrev.00 >>>>>> >>>>>> >>>>>> BR, >>>>>> Stuart >>>>>> >>>> >> From stuart.monteith at linaro.org Fri Jun 2 13:55:55 2017 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Fri, 2 Jun 2017 14:55:55 +0100 Subject: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode In-Reply-To: <9fb25d3c-4b55-3872-f9c7-fc460a675ba1@oracle.com> References: <5f437ff4-98f5-91b6-608e-d7c2d6f1e469@oracle.com> <03724e7c-f656-473f-9a89-eb78073b518f@default> <82c81d81-017f-fdc1-0e33-0f9cd5140e82@oracle.com> <9fb25d3c-4b55-3872-f9c7-fc460a675ba1@oracle.com> Message-ID: Hi David, Good point - runtime/os/AvailableProcessors.java does some custom execution that misses out the classpath. It is easy enough to fix, but there might be follow on patches to backport. compiler/rtm/locking/TestRTMLockingThreshold.java might also have some trouble. I'm investigating further. Thanks, Stuart On 2 June 2017 at 10:30, David Holmes wrote: > On 2/06/2017 7:17 PM, Stuart Monteith wrote: >> >> Hi David, >> Yes, I was being a bit unclear. The patch includes a fix to allow >> the tests that fail under -agentvm to pass successfully. Under >> agentvm, tests that spawn their own processes don't inherit a working >> classpath, so the patch changes ProcessTools to pass this on. The >> results I presented before show how the failing tests will then pass >> with agentvm once the patch is applied. > > > Ah I see. Thanks I missed the significance of the ProcessTools change. > >>>> 1: JTwork-with pass: 718; fail: 6; error: 2; not run: 5 > > So out of those 8 non-passing tests are any of the failures specifically > related to using agentvm? 
> > Thanks, > David > > >> Thanks, >> Stuart >> >> >> On 2 June 2017 at 02:36, David Holmes wrote: >>> >>> Hi Stuart, >>> >>> On 1/06/2017 11:26 PM, Stuart Monteith wrote: >>>> >>>> >>>> Hello, >>>> I tested this on x86 and aarch64. Muneer's bug is an accurate >>>> description of the failing tests. I'm not sure what you mean by >>>> "8180904 has to be fixed before this backport", as the backport is the >>>> fix for the issue Muneer presented. JDK9 doesn't exhibit these >>>> failures as it has the fix to be backported. >>> >>> >>> >>> As I understood it, 8180904 reports that a whole bunch of tests fail if >>> run >>> in agentvm mode. The current backport would enable agentvm mode and hence >>> all those tests would start to fail. >>> >>> Did I misunderstand something? >>> >>> Thanks, >>> David >>> >>> >>> >>>> Comparing the runs without and with the patch - this is on x86 - I get >>>> essentially the same on aarch64: >>>> >>>> 0: JTwork-without pass: 680; fail: 44; error: 3; not run: 4 >>>> 1: JTwork-with pass: 718; fail: 6; error: 2; not run: 5 >>>> >>>> 0 1 Test >>>> fail pass compiler/jsr292/PollutedTrapCounts.java >>>> fail pass >>>> compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java#id0 >>>> fail pass compiler/loopopts/UseCountedLoopSafepoints.java >>>> pass fail compiler/rtm/locking/TestRTMLockingThreshold.java#id0 >>>> fail pass compiler/types/correctness/OffTest.java#id0 >>>> fail pass gc/TestVerifySilently.java >>>> fail pass gc/TestVerifySubSet.java >>>> fail pass gc/class_unloading/TestCMSClassUnloadingEnabledHWM.java >>>> fail pass gc/class_unloading/TestG1ClassUnloadingHWM.java >>>> fail pass gc/ergonomics/TestDynamicNumberOfGCThreads.java >>>> fail pass gc/g1/TestEagerReclaimHumongousRegions.java >>>> fail pass gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java >>>> fail pass gc/g1/TestEagerReclaimHumongousRegionsWithRefs.java >>>> fail pass gc/g1/TestG1TraceEagerReclaimHumongousObjects.java >>>> fail pass gc/g1/TestGCLogMessages.java 
>>>> fail pass gc/g1/TestHumongousAllocInitialMark.java >>>> fail pass gc/g1/TestPrintGCDetails.java >>>> fail pass gc/g1/TestPrintRegionRememberedSetInfo.java >>>> fail pass gc/g1/TestShrinkAuxiliaryData00.java >>>> fail pass gc/g1/TestShrinkAuxiliaryData05.java >>>> fail pass gc/g1/TestShrinkAuxiliaryData10.java >>>> fail pass gc/g1/TestShrinkAuxiliaryData15.java >>>> fail pass gc/g1/TestShrinkAuxiliaryData20.java >>>> fail pass gc/g1/TestShrinkAuxiliaryData25.java >>>> fail pass gc/g1/TestShrinkDefragmentedHeap.java#id0 >>>> fail pass gc/g1/TestStringDeduplicationAgeThreshold.java >>>> fail pass gc/g1/TestStringDeduplicationFullGC.java >>>> fail pass gc/g1/TestStringDeduplicationInterned.java >>>> fail pass gc/g1/TestStringDeduplicationPrintOptions.java >>>> fail pass gc/g1/TestStringDeduplicationTableRehash.java >>>> fail pass gc/g1/TestStringDeduplicationTableResize.java >>>> fail pass gc/g1/TestStringDeduplicationYoungGC.java >>>> fail pass gc/g1/TestStringSymbolTableStats.java >>>> fail pass gc/logging/TestGCId.java >>>> fail pass gc/whitebox/TestWBGC.java >>>> fail pass runtime/ErrorHandling/TestOnOutOfMemoryError.java#id0 >>>> fail pass runtime/NMT/JcmdWithNMTDisabled.java >>>> fail pass runtime/memory/ReserveMemory.java >>>> pass --- sanity/WhiteBox.java >>>> fail pass serviceability/attach/AttachWithStalePidFile.java >>>> fail pass serviceability/jvmti/TestRedefineWithUnresolvedClass.java >>>> error pass >>>> serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java#id0 >>>> >>>> >>>> I find that compiler/rtm/locking/TestRTMLockingThreshold.java produces >>>> inconsistent results on my machine, regardless of whether or not the >>>> patch is applied. >>>> >>>> BR >>>> Stuart >>>> >>>> >>>> On 1 June 2017 at 06:39, David Holmes wrote: >>>>> >>>>> >>>>> Thanks for that information Muneer, that is an unpleasant surprise. >>>>> >>>>> Stuart: I think 8180904 has to be fixed before this backport can take >>>>> place. 
>>>>> >>>>> Thanks, >>>>> David >>>>> ----- >>>>> >>>>> >>>>> On 1/06/2017 2:31 PM, Muneer Kolarkunnu wrote: >>>>>> >>>>>> >>>>>> >>>>>> Hi David and Stuart, >>>>>> >>>>>> I recently reported one bug[1] for the same issue and listed which all >>>>>> test cases are failing with agentvm. >>>>>> I tested in Oracle.Linux.7.0 x64. >>>>>> >>>>>> [1] https://bugs.openjdk.java.net/browse/JDK-8180904 >>>>>> >>>>>> Regards, >>>>>> Muneer >>>>>> >>>>>> -----Original Message----- >>>>>> From: David Holmes >>>>>> Sent: Thursday, June 01, 2017 7:04 AM >>>>>> To: Stuart Monteith; hotspot-dev Source Developers >>>>>> Subject: Re: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg >>>>>> tests to run in agentvm mode >>>>>> >>>>>> Hi Stuart, >>>>>> >>>>>> This looks like an accurate backport of the change. >>>>>> >>>>>> My only minor concern is if there may be tests in 8u that are no >>>>>> longer >>>>>> in >>>>>> 9 which may not work with agentvm mode. >>>>>> >>>>>> What platforms have you tested this on? >>>>>> >>>>>> Thanks, >>>>>> David >>>>>> >>>>>> On 31/05/2017 11:19 PM, Stuart Monteith wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hello, >>>>>>> Currently the jdk8u codebase fails some JTreg Hotspot tests >>>>>>> when >>>>>>> running in the -agentvm mode. This is because the ProcessTools class >>>>>>> is not passing the classpath. There are substantial time savings to >>>>>>> be >>>>>>> gained using -agentvm over -othervm. >>>>>>> >>>>>>> Fortunately, there was a fix for jdk9 (8077608) that has not been >>>>>>> backported to jdk8u. 
The details are as follows: >>>>>>> >>>>>>> >>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/017937.h >>>>>>> tml >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8077608 >>>>>>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/af2a1e9f08f3 >>>>>>> >>>>>>> The patch just needed a slight change, to remove the change to the >>>>>>> file "test/compiler/uncommontrap/TestUnstableIfTrap.java" as that >>>>>>> test >>>>>>> doesn't exist on jdk8u. >>>>>>> >>>>>>> My colleague Ningsheng has kindly hosted the change here: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~njian/8077608/webrev.00 >>>>>>> >>>>>>> >>>>>>> BR, >>>>>>> Stuart >>>>>>> >>>> >> From chris.plummer at oracle.com Fri Jun 2 18:54:14 2017 From: chris.plummer at oracle.com (Chris Plummer) Date: Fri, 2 Jun 2017 11:54:14 -0700 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events Message-ID: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> Hello, [I'd like a compiler team member to comment on this, in addition to runtime or svc] Please review the following: https://bugs.openjdk.java.net/browse/JDK-8171365 http://cr.openjdk.java.net/~cjplummer/8171365/webrev.00/ The CR is closed, so I'll describe the issue here: The test is making sure that all JVMTI_EVENT_DYNAMIC_CODE_GENERATED events that occur during the agent's OnLoad phase also occur when later GenerateEvents() is called. GenerateEvents() is generating some of the events, but most are not sent. The problem is CodeCache::blobs_do() is only iterating over NMethod code heaps, not all of the code heaps, so many code blobs are missed. I changed it to iterate over all the code heaps, and now all the JVMTI_EVENT_DYNAMIC_CODE_GENERATED events are sent. Note there is another version of CodeCache::blobs_do() that takes a closure object instead of a function pointer. It is used by GC and I assume is working properly by only iterating over NMethod code heaps, so I did not change it.
The version that takes a function pointer is only used by JVMTI for implementing GenerateEvents(), so this change should not impact any other part of the VM. However, I do wonder if these blobs_do() methods should be renamed to avoid confusion since they don't (and haven't in the past) iterated over the same set of code blobs. thanks, Chris https://docs.oracle.com/javase/8/docs/platform/jvmti/jvmti.html#DynamicCodeGenerated From kim.barrett at oracle.com Mon Jun 5 05:02:38 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 5 Jun 2017 01:02:38 -0400 Subject: RFR: 8166651: OrderAccess::load_acquire &etc should have const parameters In-Reply-To: References: <5ec3ca7f-f4cd-337c-63d5-3b00fd2839a7@redhat.com> <702f7e19-da8d-6ca2-8277-185a9468ef2a@redhat.com> Message-ID: <7E188A13-A839-4DE4-8AA4-FA1E36F326AD@oracle.com> > On May 26, 2017, at 6:37 PM, Kim Barrett wrote: > > Looking over the changes again, I realized there was a problem with > the changes for zero. The added const qualifier to Atomic::load would > run afoul of a non-const-qualified source for os::atomic_copy64. > > I've updated all three definitions of os::atomic_copy64. Two were in > zero-specific files. One was in os_linux_aarch64.hpp. > > Unfortunately, I wasn't able to test these additional changes, as > building zero is already broken in jdk10/hs for other reasons > (JDK-8181158). > > New webrev: > full: http://cr.openjdk.java.net/~kbarrett/8166651/hotspot.02/ > incr: http://cr.openjdk.java.net/~kbarrett/8166651/hotspot.02.inc/ Waiting for re-reviews for the additional changes. 
From erik.osterlund at oracle.com Mon Jun 5 10:38:53 2017 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 5 Jun 2017 12:38:53 +0200 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> <593029CB.7000100@oracle.com> Message-ID: <593534BD.1090805@oracle.com> Hi David, On 2017-06-02 03:30, David Holmes wrote: > Hi Erik, > > On 2/06/2017 12:50 AM, Erik Österlund wrote: >> Hi David, >> >> On 2017-06-01 14:33, David Holmes wrote: >>> Hi Erik, >>> >>> Just to be clear it is not the use of <limits> that I am concerned >>> about, it is the -library=stlport4. It is the use of that flag that >>> I would want to check in terms of having no effect on any existing >>> code generation. >> >> Thank you for the clarification. The use of -library=stlport4 should >> not have anything to do with code generation. It only says where to >> look for the standard library headers such as <limits> that are used >> in the compilation units. > > The potential problem is that the stlport4 include path eg: > > ./SS12u4/lib/compilers/include/CC/stlport4/ > > doesn't only contain the C++ headers (new, limits, string etc) but > also a whole bunch of regular 'standard' .h headers that are > _different_ to those found outside the stlport4 directory ie the ones > we would currently include. I don't know if the differences are > significant, nor whether those others may be found ahead of the > stlport4 version. But that is my concern about the effects on the code. While I do not think exchanging these headers will have any behavioral impact, I agree that we can not prove so as they are indeed different header files. That is a good point. However, I think that makes the stlport4 case stronger rather than weaker. We already use stlport4 for our gtest testing (because it is required and does not build without it).
And if those headers would indeed have slightly different behaviour as you imply, it further motivates using the same standard library when compiling the product as the testing code. If they were to behave slightly differently, it might be that our gtest tests do not catch hidden bugs that only manifest when building with a different set of headers used for the product build. I therefore find it exceedingly dangerous to stay on two standard libraries (depending on if test code or product code is compiled) compared to consistently using the same standard library across all compilations. So for me, the larger the risk of them behaving differently is, the bigger the motivation is to use stlport4 consistently. Thanks, /Erik > Thanks, > David > ----- > > >> Specifically, the man pages for CC say: >> >> >> -library=lib[,lib...] >> >> Incorporates specified CC-provided libraries into >> compilation and >> linking. >> >> When the -library option is used to specify a CC-provided >> library, >> the proper -I paths are set during compilation and the >> proper -L, >> -Y, -P, and -R paths and -l options are set during linking. >> >> >> As we are setting this during compilation and not during linking, >> this corresponds to setting the right -I paths to find our C++ >> standard library headers. >> >> My studio friends mentioned I could double-check that we did indeed >> not add a dependency to any C++ standard library by running elfdump >> on the generated libjvm.so file and check if the NEEDED entries in >> the dynamic section look right.
I did and here are the results: >> >> [0] NEEDED 0x2918ee libsocket.so.1 >> [1] NEEDED 0x2918fd libsched.so.1 >> [2] NEEDED 0x29190b libdl.so.1 >> [3] NEEDED 0x291916 libm.so.1 >> [4] NEEDED 0x291920 libCrun.so.1 >> [5] NEEDED 0x29192d libthread.so.1 >> [6] NEEDED 0x29193c libdoor.so.1 >> [7] NEEDED 0x291949 libc.so.1 >> [8] NEEDED 0x291953 libdemangle.so.1 >> [9] NEEDED 0x291964 libnsl.so.1 >> [10] NEEDED 0x291970 libkstat.so.1 >> [11] NEEDED 0x29197e librt.so.1 >> >> This list does not include any C++ standard libraries, as expected >> (libCrun is always in there even with -library=%none, and as expected >> no libstlport4.so or libCstd.so files are in there). The NEEDED >> entries in the dynamic section look identical with and without my patch. >> >>> I'm finding the actual build situation very confusing. It seems to >>> me in looking at the hotspot build files and the top-level build >>> files that -xnolib is used for C++ compilation & linking whereas >>> -library=%none is used for C compilation & linking. But the change >>> is being applied to $2JVM_CFLAGS which one would think is for C >>> compilation but we don't have $2JVM_CXXFLAGS, so it seems to be used >>> for both! >> >> I have also been confused by this when I tried adding CXX flags >> through configure that seemed to not be used. But that's a different >> can of worms I suppose. 
>> >> Thanks, >> /Erik >> >>> David >>> >>> On 1/06/2017 7:36 PM, Erik Österlund wrote: >>>> Hi David, >>>> >>>> On 2017-06-01 08:09, David Holmes wrote: >>>>> Hi Kim, >>>>> >>>>> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>>>>> On May 31, 2017, at 9:22 PM, David Holmes >>>>>>> wrote: >>>>>>> >>>>>>> Hi Erik, >>>>>>> >>>>>>> A small change with big questions :) >>>>>>> >>>>>>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>>>>>> Hi, >>>>>>>> It would be desirable to be able to use harmless C++ standard >>>>>>>> library headers like <limits> in the code as long as it does >>>>>>>> not add any link-time dependencies to the standard library. >>>>>>> >>>>>>> What does a 'harmless' C++ standard library header look like? >>>>>> >>>>>> Header-only (doesn't require linking), doesn't run afoul of our >>>>>> [vm]assert macro, and provides functionality we presently lack (or >>>>>> only handle poorly) and would not be easy to reproduce. >>>>> >>>>> And how does one establish those properties exist for a given >>>>> header file? Just use it and if no link errors then all is good? >>>> >>>> Objects from headers that are not ODR-used such as constant folded >>>> expressions are not imposing link-time dependencies to C++ >>>> libraries. The -xnolib that we already have in the LDFLAGS will >>>> catch any accidental ODR-uses of C++ objects, and the JVM will not >>>> build if that happens. >>>> >>>> As for external headers being included and not playing nicely with >>>> macros, this has to be evaluated on a case by case basis. Note that >>>> this is a problem that occurs when using system headers (that we >>>> are already using), as it is for using C++ standard library >>>> headers. We even run into that in our own JVM when e.g. the min/max >>>> macros occasionally slap us gently in the face from time to time.
>>>> >>>>> >>>>>> The instigator for this is that Erik and I are working on a project that >>>>>> needs information that is present in std::numeric_limits<> (provided >>>>>> by the <limits> header). Reproducing that functionality ourselves >>>>>> would require platform-specific code (with all the complexity >>>>>> that can >>>>>> imply). We'd really rather not re-discover and maintain information >>>>>> that is trivially accessible in every standard library. >>>>> >>>>> Understood. I have no issue with using <limits> but am concerned >>>>> by the state of stlport4. Can you use <limits> without changing >>>>> -library=%none? >>>> >>>> No, that is precisely why we are here. >>>> >>>>> >>>>>>>> This is possible on all supported platforms except the ones >>>>>>>> using the solaris studio compiler where we enforce >>>>>>>> -library=%none in both CFLAGS and LDFLAGS. >>>>>>>> I propose to remove the restriction from CFLAGS but keep it on >>>>>>>> LDFLAGS. >>>>>>>> I have consulted with the studio folks, and they think this is >>>>>>>> absolutely fine and thought that the choice of >>>>>>>> -library=stlport4 should be fine for our CFLAGS and is indeed >>>>>>>> what is already used in the gtest launcher. >>>>>>> >>>>>>> So what exactly does this mean? IIUC this allows you to use >>>>>>> headers for, and compile against "STLport's Standard Library >>>>>>> implementation version 4.5.3 instead of the default libCstd". >>>>>>> But how do you then not need to link against libstlport.so ?? >>>>>>> >>>>>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>>>>> >>>>>>> "STLport is binary incompatible with the default libCstd. If you >>>>>>> use the STLport implementation of the standard library, then you >>>>>>> must compile and link all files, including third-party >>>>>>> libraries, with the option -library=stlport4" >>>>>> >>>>>> It means we can only use header-only parts of the standard library.
>>>>>> This was confirmed / suggested by the Studio folks Erik consulted, >>>>>> providing such limited access while continuing to constrain our >>>>>> dependency on the library. Figuring out what can be used will >>>>>> need to >>>>>> be determined on a case-by-case basis. Maybe we could just link >>>>>> with >>>>>> a standard library on Solaris too. So far as I can tell, Solaris is >>>>>> the only platform where we don't do that. But Erik is trying to be >>>>>> conservative. >>>>> >>>>> Okay, but the docs don't seem to acknowledge the ability to use, >>>>> but not link to, stlport4. >>>> >>>> Not ODR-used objects do not require linkage. >>>> (http://en.cppreference.com/w/cpp/language/definition) >>>> I have confirmed directly with the studio folks to be certain that >>>> accidental linkage would fail by keeping our existing guards in the >>>> LDFLAGS rather than the CFLAGS. >>>> This is also reasonably well documented already >>>> (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). >>>> >>>>> >>>>>>> There are lots of other comments in that document regarding >>>>>>> STLport that makes me think that using it may be introducing a >>>>>>> fragile dependency into the OpenJDK code! >>>>>>> >>>>>>> "STLport is an open source product and does not guarantee >>>>>>> compatibility across different releases. In other words, >>>>>>> compiling with a future version of STLport may break >>>>>>> applications compiled with STLport 4.5.3. It also might not be >>>>>>> possible to link binaries compiled using STLport 4.5.3 with >>>>>>> binaries compiled using a future version of STLport." >>>>>>> >>>>>>> "Future releases of the compiler might not include STLport4. >>>>>>> They might include only a later version of STLport. The compiler >>>>>>> option -library=stlport4 might not be available in future >>>>>>> releases, but could be replaced by an option referring to a >>>>>>> later STLport version." >>>>>>> >>>>>>> None of that sounds very good to me. 
>>>>>> >>>>>> I don't see how this is any different from any other part of the >>>>>> process for using a different version of Solaris Studio. >>>>> >>>>> Well we'd discover the problem when testing the compiler change, >>>>> but my point was more to the fact that they don't seem very >>>>> committed to this library - very much a "use at own risk" disclaimer. >>>> >>>> If we eventually need to use something more modern for features >>>> that have not been around for a decade, like C++11 features, then >>>> we can change standard library when that day comes. >>>> >>>>> >>>>>> stlport4 is one of the three standard libraries that are presently >>>>>> included with Solaris Studio (libCstd, stlport4, gcc). Erik asked >>>>>> the >>>>>> Studio folks which to use (for the purposes of our present >>>>>> project, we >>>>>> don't have any particular preference, so long as it works), and >>>>>> stlport4 seemed the right choice (libCstd was, I think, described as >>>>>> "ancient"). Perhaps more importantly, we already use stlport4, >>>>>> including linking against it, for gtest builds. Mixing two >>>>>> different >>>>>> standard libraries seems like a bad idea... >>>>> >>>>> So we have the choice of "ancient", "unsupported" or gcc :) >>>>> >>>>> My confidence in this has not increased :) >>>> >>>> I trust that e.g. std::numeric_limits::is_signed in the standard >>>> libraries has more mileage than whatever simplified rewrite of that >>>> we try to replicate in the JVM. So it is not obvious to me that we >>>> should have less confidence in the same functionality from a >>>> standard library shipped together with the compiler we are using >>>> and that has already been used and tested in a variety of C++ >>>> applications for over a decade compared to the alternative of >>>> reinventing it ourselves. >>>> >>>>> What we do in gtest doesn't necessarily make things okay to do in >>>>> the product. 
>>>>> >>>>> If this were part of a compiler upgrade process we'd be comparing >>>>> binaries with old flag and new to ensure there are no unexpected >>>>> consequences. >>>> >>>> I would not compare including <limits> to a compiler upgrade >>>> process as we are not changing the compiler and hence not the way >>>> code is generated, but rather compare it to including a new system >>>> header that has previously not been included to use a constant >>>> folded expression from that header that has been used and tested >>>> for a decade. At least that is how I think of it. >>>> >>>> Thanks, >>>> /Erik >>>> >>>>> >>>>> Cheers, >>>>> David >>>>> >>>>>>> >>>>>>> Cheers, >>>>>>> David >>>>>>> >>>>>>> >>>>>>>> Webrev for jdk10-hs top level repository: >>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>>>>> Webrev for jdk10-hs hotspot repository: >>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>>>>> Testing: JPRT. >>>>>>>> Will need a sponsor. >>>>>>>> Thanks, >>>>>>>> /Erik >>>>>> >>>>>> >>>> >> From david.holmes at oracle.com Mon Jun 5 12:45:52 2017 From: david.holmes at oracle.com (David Holmes) Date: Mon, 5 Jun 2017 22:45:52 +1000 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: <593534BD.1090805@oracle.com> References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> <593029CB.7000100@oracle.com> <593534BD.1090805@oracle.com> Message-ID: Hi Erik, On 5/06/2017 8:38 PM, Erik Österlund wrote: > Hi David, > > On 2017-06-02 03:30, David Holmes wrote: >> Hi Erik, >> >> On 2/06/2017 12:50 AM, Erik Österlund wrote: >>> Hi David, >>> >>> On 2017-06-01 14:33, David Holmes wrote: >>>> Hi Erik, >>>> >>>> Just to be clear it is not the use of <limits> that I am concerned >>>> about, it is the -library=stlport4. It is the use of that flag that >>>> I would want to check in terms of having no effect on any existing >>>> code generation.
>>> >>> Thank you for the clarification. The use of -library=stlport4 should >>> not have anything to do with code generation. It only says where to >>> look for the standard library headers such as <limits> that are used >>> in the compilation units. >> >> The potential problem is that the stlport4 include path eg: >> >> ./SS12u4/lib/compilers/include/CC/stlport4/ >> >> doesn't only contain the C++ headers (new, limits, string etc) but >> also a whole bunch of regular 'standard' .h headers that are >> _different_ to those found outside the stlport4 directory ie the ones >> we would currently include. I don't know if the differences are >> significant, nor whether those others may be found ahead of the >> stlport4 version. But that is my concern about the effects on the code. > > While I do not think exchanging these headers will have any behavioral > impact, I agree that we can not prove so as they are indeed different > header files. That is a good point. > > However, I think that makes the stlport4 case stronger rather than > weaker. We already use stlport4 for our gtest testing (because it is > required and does not build without it). And if those headers would > indeed have slightly different behaviour as you imply, it further > motivates using the same standard library when compiling the product as > the testing code. If they were to behave slightly differently, it might > be that our gtest tests do not catch hidden bugs that only manifest > when building with a different set of headers used for the product > build. I therefore find it exceedingly dangerous to stay on two standard > libraries (depending on if test code or product code is compiled) > compared to consistently using the same standard library across all > compilations. So for me, the larger the risk of them behaving > differently is, the bigger the motivation is to use stlport4 consistently.
Regardless of what gtest does if you want to switch the standard libraries used by the product then IMHO that should go through a vetting process no weaker than that for changing the toolchain, as you effectively are doing that. Cheers, David > Thanks, > /Erik > >> Thanks, >> David >> ----- >> >> >>> Specifically, the man pages for CC say: >>> >>> >>> -library=lib[,lib...] >>> >>> Incorporates specified CC-provided libraries into >>> compilation and >>> linking. >>> >>> When the -library option is used to specify a CC-provided >>> library, >>> the proper -I paths are set during compilation and the >>> proper -L, >>> -Y, -P, and -R paths and -l options are set during linking. >>> >>> >>> As we are setting this during compilation and not during linking, >>> this corresponds to setting the right -I paths to find our C++ >>> standard library headers. >>> >>> My studio friends mentioned I could double-check that we did indeed >>> not add a dependency to any C++ standard library by running elfdump >>> on the generated libjvm.so file and check if the NEEDED entries in >>> the dynamic section look right. I did and here are the results: >>> >>> [0] NEEDED 0x2918ee libsocket.so.1 >>> [1] NEEDED 0x2918fd libsched.so.1 >>> [2] NEEDED 0x29190b libdl.so.1 >>> [3] NEEDED 0x291916 libm.so.1 >>> [4] NEEDED 0x291920 libCrun.so.1 >>> [5] NEEDED 0x29192d libthread.so.1 >>> [6] NEEDED 0x29193c libdoor.so.1 >>> [7] NEEDED 0x291949 libc.so.1 >>> [8] NEEDED 0x291953 libdemangle.so.1 >>> [9] NEEDED 0x291964 libnsl.so.1 >>> [10] NEEDED 0x291970 libkstat.so.1 >>> [11] NEEDED 0x29197e librt.so.1 >>> >>> This list does not include any C++ standard libraries, as expected >>> (libCrun is always in there even with -library=%none, and as expected >>> no libstlport4.so or libCstd.so files are in there). The NEEDED >>> entries in the dynamic section look identical with and without my patch. >>> >>>> I'm finding the actual build situation very confusing. 
It seems to >>>> me in looking at the hotspot build files and the top-level build >>>> files that -xnolib is used for C++ compilation & linking whereas >>>> -library=%none is used for C compilation & linking. But the change >>>> is being applied to $2JVM_CFLAGS which one would think is for C >>>> compilation but we don't have $2JVM_CXXFLAGS, so it seems to be used >>>> for both! >>> >>> I have also been confused by this when I tried adding CXX flags >>> through configure that seemed to not be used. But that's a different >>> can of worms I suppose. >>> >>> Thanks, >>> /Erik >>> >>>> David >>>> >>>> On 1/06/2017 7:36 PM, Erik Österlund wrote: >>>>> Hi David, >>>>> >>>>> On 2017-06-01 08:09, David Holmes wrote: >>>>>> Hi Kim, >>>>>> >>>>>> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>>>>>> On May 31, 2017, at 9:22 PM, David Holmes >>>>>>>> wrote: >>>>>>>> >>>>>>>> Hi Erik, >>>>>>>> >>>>>>>> A small change with big questions :) >>>>>>>> >>>>>>>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>>>>>>> Hi, >>>>>>>>> It would be desirable to be able to use harmless C++ standard >>>>>>>>> library headers like <limits> in the code as long as it does >>>>>>>>> not add any link-time dependencies to the standard library. >>>>>>>> >>>>>>>> What does a 'harmless' C++ standard library header look like? >>>>>>> >>>>>>> Header-only (doesn't require linking), doesn't run afoul of our >>>>>>> [vm]assert macro, and provides functionality we presently lack (or >>>>>>> only handle poorly) and would not be easy to reproduce. >>>>>> >>>>>> And how does one establish those properties exist for a given >>>>>> header file? Just use it and if no link errors then all is good? >>>>> >>>>> Objects from headers that are not ODR-used, such as constant-folded >>>>> expressions, do not impose link-time dependencies on C++ >>>>> libraries. The -xnolib that we already have in the LDFLAGS will >>>>> catch any accidental ODR-uses of C++ objects, and the JVM will not >>>>> build if that happens. 
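To make the constant-folding point above concrete, here is a minimal sketch of the kind of <limits> usage under discussion (the names `IntIsSigned` and `int_max` are hypothetical illustrations, not anything from the webrev): the enum initializer is folded at compile time without ODR-using the static member, and `max()` is an inline static member function defined in the header, so neither use requires linking against a C++ standard library.

```cpp
#include <cassert>
#include <climits>
#include <limits>

// Hypothetical sketch: both uses below are header-only.
// The enum initializer is a constant expression, so is_signed is
// constant-folded and never ODR-used; max() is an inline static
// member function defined in <limits>. Neither adds a NEEDED entry
// for a C++ standard library to the resulting binary.
enum { IntIsSigned = std::numeric_limits<int>::is_signed };

int int_max() {
  return std::numeric_limits<int>::max();  // inlined from the header
}
```

Under these assumptions, compiling such a translation unit with the standard-library headers available only in CFLAGS, while keeping the restriction in LDFLAGS, would be expected to link cleanly.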
>>>>> >>>>> As for external headers being included and not playing nicely with >>>>> macros, this has to be evaluated on a case-by-case basis. Note that >>>>> this is a problem that occurs when using system headers (that we >>>>> are already using), as it is for using C++ standard library >>>>> headers. We even run into that in our own JVM when e.g. the min/max >>>>> macros occasionally slap us gently in the face from time to time. >>>>> >>>>>> >>>>>>> The instigator for this is that Erik and I are working on a project that >>>>>>> needs information that is present in std::numeric_limits<> (provided >>>>>>> by the <limits> header). Reproducing that functionality ourselves >>>>>>> would require platform-specific code (with all the complexity >>>>>>> that can >>>>>>> imply). We'd really rather not re-discover and maintain information >>>>>>> that is trivially accessible in every standard library. >>>>>> >>>>>> Understood. I have no issue with using <limits> but am concerned >>>>>> by the state of stlport4. Can you use <limits> without changing >>>>>> -library=%none? >>>>> >>>>> No, that is precisely why we are here. >>>>> >>>>>> >>>>>>>>> This is possible on all supported platforms except the ones >>>>>>>>> using the solaris studio compiler where we enforce >>>>>>>>> -library=%none in both CFLAGS and LDFLAGS. >>>>>>>>> I propose to remove the restriction from CFLAGS but keep it on >>>>>>>>> LDFLAGS. >>>>>>>>> I have consulted with the studio folks, and they think this is >>>>>>>>> absolutely fine and thought that the choice of >>>>>>>>> -library=stlport4 should be fine for our CFLAGS and is indeed >>>>>>>>> what is already used in the gtest launcher. >>>>>>>> >>>>>>>> So what exactly does this mean? IIUC this allows you to use >>>>>>>> headers for, and compile against "STLport's Standard Library >>>>>>>> implementation version 4.5.3 instead of the default libCstd". >>>>>>>> But how do you then not need to link against libstlport.so ?? 
>>>>>>>> >>>>>>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>>>>>> >>>>>>>> "STLport is binary incompatible with the default libCstd. If you >>>>>>>> use the STLport implementation of the standard library, then you >>>>>>>> must compile and link all files, including third-party >>>>>>>> libraries, with the option -library=stlport4" >>>>>>> >>>>>>> It means we can only use header-only parts of the standard library. >>>>>>> This was confirmed / suggested by the Studio folks Erik consulted, >>>>>>> providing such limited access while continuing to constrain our >>>>>>> dependency on the library. What can be used will need to >>>>>>> be determined on a case-by-case basis. Maybe we could just link >>>>>>> with >>>>>>> a standard library on Solaris too. So far as I can tell, Solaris is >>>>>>> the only platform where we don't do that. But Erik is trying to be >>>>>>> conservative. >>>>>> >>>>>> Okay, but the docs don't seem to acknowledge the ability to use, >>>>>> but not link to, stlport4. >>>>> >>>>> Objects that are not ODR-used do not require linkage. >>>>> (http://en.cppreference.com/w/cpp/language/definition) >>>>> I have confirmed directly with the studio folks to be certain that >>>>> accidental linkage would fail by keeping our existing guards in the >>>>> LDFLAGS rather than the CFLAGS. >>>>> This is also reasonably well documented already >>>>> (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). >>>>> >>>>>> >>>>>>>> There are lots of other comments in that document regarding >>>>>>>> STLport that make me think that using it may be introducing a >>>>>>>> fragile dependency into the OpenJDK code! >>>>>>>> >>>>>>>> "STLport is an open source product and does not guarantee >>>>>>>> compatibility across different releases. In other words, >>>>>>>> compiling with a future version of STLport may break >>>>>>>> applications compiled with STLport 4.5.3. 
It also might not be >>>>>>>> possible to link binaries compiled using STLport 4.5.3 with >>>>>>>> binaries compiled using a future version of STLport." >>>>>>>> >>>>>>>> "Future releases of the compiler might not include STLport4. >>>>>>>> They might include only a later version of STLport. The compiler >>>>>>>> option -library=stlport4 might not be available in future >>>>>>>> releases, but could be replaced by an option referring to a >>>>>>>> later STLport version." >>>>>>>> >>>>>>>> None of that sounds very good to me. >>>>>>> >>>>>>> I don't see how this is any different from any other part of the >>>>>>> process for using a different version of Solaris Studio. >>>>>> >>>>>> Well we'd discover the problem when testing the compiler change, >>>>>> but my point was more to the fact that they don't seem very >>>>>> committed to this library - very much a "use at own risk" disclaimer. >>>>> >>>>> If we eventually need to use something more modern for features >>>>> that have not been around for a decade, like C++11 features, then >>>>> we can change standard library when that day comes. >>>>> >>>>>> >>>>>>> stlport4 is one of the three standard libraries that are presently >>>>>>> included with Solaris Studio (libCstd, stlport4, gcc). Erik asked >>>>>>> the >>>>>>> Studio folks which to use (for the purposes of our present >>>>>>> project, we >>>>>>> don't have any particular preference, so long as it works), and >>>>>>> stlport4 seemed the right choice (libCstd was, I think, described as >>>>>>> "ancient"). Perhaps more importantly, we already use stlport4, >>>>>>> including linking against it, for gtest builds. Mixing two >>>>>>> different >>>>>>> standard libraries seems like a bad idea... >>>>>> >>>>>> So we have the choice of "ancient", "unsupported" or gcc :) >>>>>> >>>>>> My confidence in this has not increased :) >>>>> >>>>> I trust that e.g. 
std::numeric_limits::is_signed in the standard >>>>> libraries has more mileage than whatever simplified rewrite of that >>>>> we try to replicate in the JVM. So it is not obvious to me that we >>>>> should have less confidence in the same functionality from a >>>>> standard library shipped together with the compiler we are using >>>>> and that has already been used and tested in a variety of C++ >>>>> applications for over a decade compared to the alternative of >>>>> reinventing it ourselves. >>>>> >>>>>> What we do in gtest doesn't necessarily make things okay to do in >>>>>> the product. >>>>>> >>>>>> If this were part of a compiler upgrade process we'd be comparing >>>>>> binaries with old flag and new to ensure there are no unexpected >>>>>> consequences. >>>>> >>>>> I would not compare including to a compiler upgrade >>>>> process as we are not changing the compiler and hence not the way >>>>> code is generated, but rather compare it to including a new system >>>>> header that has previously not been included to use a constant >>>>> folded expression from that header that has been used and tested >>>>> for a decade. At least that is how I think of it. >>>>> >>>>> Thanks, >>>>> /Erik >>>>> >>>>>> >>>>>> Cheers, >>>>>> David >>>>>> >>>>>>>> >>>>>>>> Cheers, >>>>>>>> David >>>>>>>> >>>>>>>> >>>>>>>>> Webrev for jdk10-hs top level repository: >>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>>>>>> Webrev for jdk10-hs hotspot repository: >>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>>>>>> Testing: JPRT. >>>>>>>>> Will need a sponsor. 
>>>>>>>>> Thanks, >>>>>>>>> /Erik >>>>>>> >>>>>>> >>>>> >>> > From adinn at redhat.com Mon Jun 5 15:02:31 2017 From: adinn at redhat.com (Andrew Dinn) Date: Mon, 5 Jun 2017 16:02:31 +0100 Subject: RFR: 8166651: OrderAccess::load_acquire &etc should have const parameters In-Reply-To: <7E188A13-A839-4DE4-8AA4-FA1E36F326AD@oracle.com> References: <5ec3ca7f-f4cd-337c-63d5-3b00fd2839a7@redhat.com> <702f7e19-da8d-6ca2-8277-185a9468ef2a@redhat.com> <7E188A13-A839-4DE4-8AA4-FA1E36F326AD@oracle.com> Message-ID: Hi Kim, On 05/06/17 06:02, Kim Barrett wrote: > > Waiting for re-reviews for the additional changes. The new changes work fine on AArch64. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From chris.plummer at oracle.com Mon Jun 5 15:32:38 2017 From: chris.plummer at oracle.com (Chris Plummer) Date: Mon, 5 Jun 2017 08:32:38 -0700 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events In-Reply-To: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> References: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> Message-ID: <7952815d-bb1d-e0dc-5dfc-f047f5b4b2fb@oracle.com> I could still use a couple of reviewers for this change. thanks, Chris On 6/2/17 11:54 AM, Chris Plummer wrote: > Hello, > > [I'd like a compiler team member to comment on this, in addition to > runtime or svc] > > Please review the following: > > https://bugs.openjdk.java.net/browse/JDK-8171365 > http://cr.openjdk.java.net/~cjplummer/8171365/webrev.00/ > > The CR is closed, so I'll describe the issue here: > > The test is making sure that all |JVMTI_EVENT_DYNAMIC_CODE_GENERATED| > events that occur during the agent's OnLoad phase also occur when > later GenerateEvents() is called. GenerateEvents() is generating some > of the events, but most are not sent. 
The problem is > CodeCache::blobs_do() is only iterating over NMethod code heaps, not > all of the code heaps, so many code blobs are missed. I changed it to > iterate over all the code heaps, and now all the > |JVMTI_EVENT_DYNAMIC_CODE_GENERATED|events are sent. > > Note there is another version of CodeCache::blobs_do() that takes a > closure object instead of a function pointer. It is used by GC and I > assume is working properly by only iterating over NMethod code heaps, > so I did not change it. The version that takes a function pointer is > only used by JVMTI for implementing GenerateEvents(), so this change > should not impact any other part of the VM. However, I do wonder if > these blobs_do() methods should be renamed to avoid confusion since > they don't (and haven't in the past) iterated over the same set of > code blobs. > > thanks, > > Chris > > https://docs.oracle.com/javase/8/docs/platform/jvmti/jvmti.html#DynamicCodeGenerated > From ioi.lam at oracle.com Mon Jun 5 15:56:34 2017 From: ioi.lam at oracle.com (Ioi Lam) Date: Mon, 5 Jun 2017 08:56:34 -0700 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events In-Reply-To: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> References: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> Message-ID: Hi Chris, On 6/2/17 11:54 AM, Chris Plummer wrote: > Hello, > > [I'd like a compiler team member to comment on this, in addition to > runtime or svc] > > Please review the following: > > https://bugs.openjdk.java.net/browse/JDK-8171365 > http://cr.openjdk.java.net/~cjplummer/8171365/webrev.00/ > > The CR is closed, so I'll describe the issue here: > > The test is making sure that all |JVMTI_EVENT_DYNAMIC_CODE_GENERATED| > events that occur during the agent's OnLoad phase also occur when > later GenerateEvents() is called. GenerateEvents() is generating some > of the events, but most are not sent. 
The problem is > CodeCache::blobs_do() is only iterating over NMethod code heaps, not > all of the code heaps, so many code blobs are missed. I changed it to > iterate over all the code heaps, and now all the > |JVMTI_EVENT_DYNAMIC_CODE_GENERATED| events are sent. > > Note there is another version of CodeCache::blobs_do() that takes a > closure object instead of a function pointer. It is used by GC and I > assume it is working properly by only iterating over NMethod code heaps, > so I did not change it. The version that takes a function pointer is > only used by JVMTI for implementing GenerateEvents(), so this change > should not impact any other part of the VM. However, I do wonder if > these blobs_do() methods should be renamed to avoid confusion since > they don't (and haven't in the past) iterated over the same set of > code blobs. > Yes, I think these two functions would indeed be confusing. The second variant also does a liveness check which is missing from the first one:

621 void CodeCache::blobs_do(void f(CodeBlob* nm)) {
622   assert_locked_or_safepoint(CodeCache_lock);
623   FOR_ALL_HEAPS(heap) {
624     FOR_ALL_BLOBS(cb, *heap) {
625       f(cb);
626     }
627   }
628 }

664 void CodeCache::blobs_do(CodeBlobClosure* f) {
665   assert_locked_or_safepoint(CodeCache_lock);
666   FOR_ALL_NMETHOD_HEAPS(heap) {
667     FOR_ALL_BLOBS(cb, *heap) {
668       if (cb->is_alive()) {
669         f->do_code_blob(cb);
670 #ifdef ASSERT
671         if (cb->is_nmethod())
672           ((nmethod*)cb)->verify_scavenge_root_oops();
673 #endif //ASSERT
674       }
675     }
676   }
677 }

The two functions' APIs are equivalent, since CodeBlobClosure has a single function, so I think it's better to stick with one API, i.e. replace the function pointer with CodeBlobClosure:

class CodeBlobClosure : public Closure {
 public:
   // Called for each code blob. 
   virtual void do_code_blob(CodeBlob* cb) = 0;
};

For consistency, maybe we should change the first version to CodeCache::all_blobs_do(CodeBlobClosure* f) and the second to CodeCache::live_nmethod_blobs_do(CodeBlobClosure* f) Thanks - Ioi > thanks, > > Chris > > https://docs.oracle.com/javase/8/docs/platform/jvmti/jvmti.html#DynamicCodeGenerated > From erik.osterlund at oracle.com Mon Jun 5 16:19:30 2017 From: erik.osterlund at oracle.com (Erik Österlund) Date: Mon, 5 Jun 2017 18:19:30 +0200 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> <593029CB.7000100@oracle.com> <593534BD.1090805@oracle.com> Message-ID: <59358492.7030803@oracle.com> Hi David, On 2017-06-05 14:45, David Holmes wrote: > Hi Erik, > > On 5/06/2017 8:38 PM, Erik Österlund wrote: >> Hi David, >> >> On 2017-06-02 03:30, David Holmes wrote: >>> Hi Erik, >>> >>> On 2/06/2017 12:50 AM, Erik Österlund wrote: >>>> Hi David, >>>> >>>> On 2017-06-01 14:33, David Holmes wrote: >>>>> Hi Erik, >>>>> >>>>> Just to be clear it is not the use of <limits> that I am concerned >>>>> about, it is the -library=stlport4. It is the use of that flag >>>>> that I would want to check in terms of having no effect on any >>>>> existing code generation. >>>> >>>> Thank you for the clarification. The use of -library=stlport4 >>>> should not have anything to do with code generation. It only says >>>> where to look for the standard library headers such as <limits> >>>> that are used in the compilation units. >>> >>> The potential problem is that the stlport4 include path eg: >>> >>> ./SS12u4/lib/compilers/include/CC/stlport4/ >>> >>> doesn't only contain the C++ headers (new, limits, string etc) but >>> also a whole bunch of regular 'standard' .h headers that are >>> _different_ to those found outside the stlport4 directory ie the >>> ones we would currently include. 
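Ioi's closure-based suggestion above can be sketched roughly as follows. This is a simplified stand-in, not the real HotSpot code: `CodeBlob`, the container, and the method names `all_blobs_do`/`live_blobs_do` here are illustrative assumptions modelled on the thread, with the two traversals (all blobs for GenerateEvents(), live blobs only for GC) expressed through one closure API.

```cpp
#include <cstddef>
#include <vector>

// Simplified stand-in for HotSpot's CodeBlob (hypothetical).
struct CodeBlob {
  bool alive;
  bool is_alive() const { return alive; }
};

// Single-method closure, in the style of HotSpot's CodeBlobClosure.
struct CodeBlobClosure {
  virtual void do_code_blob(CodeBlob* cb) = 0;
  virtual ~CodeBlobClosure() {}
};

// Example counting closure used below.
struct Counter : CodeBlobClosure {
  int n;
  Counter() : n(0) {}
  virtual void do_code_blob(CodeBlob*) { ++n; }
};

// Sketch of a code cache exposing both traversals through one API style:
// every blob versus live blobs only.
struct CodeCacheSketch {
  std::vector<CodeBlob> blobs;

  // Visits every blob, regardless of liveness.
  void all_blobs_do(CodeBlobClosure* f) {
    for (std::size_t i = 0; i < blobs.size(); ++i) {
      f->do_code_blob(&blobs[i]);
    }
  }

  // Visits only blobs whose liveness check passes.
  void live_blobs_do(CodeBlobClosure* f) {
    for (std::size_t i = 0; i < blobs.size(); ++i) {
      if (blobs[i].is_alive()) {
        f->do_code_blob(&blobs[i]);
      }
    }
  }
};
```

With both entry points taking the same closure type, a caller such as a JVMTI event generator could switch between the two traversals without changing its callback.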
I don't know if the differences are >>> significant, nor whether those others may be found ahead of the >>> stlport4 version. But that is my concern about the effects on the code. >> >> While I do not think exchanging these headers will have any >> behavioral impact, I agree that we can not prove so as they are >> indeed different header files. That is a good point. >> >> However, I think that makes the stlport4 case stronger rather than >> weaker. We already use stlport4 for our gtest testing (because it is >> required and does not build without it). And if those headers would >> indeed have slightly different behaviour as you imply, it further >> motivates using the same standard library when compiling the product >> as the testing code. If they were to behave slightly differently, it >> might be that our gtest tests does not catch hidden bugs that only >> manifest when building with a different set of headers used for the >> product build. I therefore find it exceedingly dangerous to stay on >> two standard libraries (depending on if test code or product code is >> compiled) compared to consistently using the same standard library >> across all compilations. So for me, the larger the risk is of them >> behaving differently is, the bigger the motivation is to use stlport4 >> consistently. > > Regardless of what gtest does if you want to switch the standard > libraries used by the product then IMHO that should go through a > vetting process no weaker than that for changing the toolchain, as you > effectively are doing that. I talked to Erik Joelsson about how to compare two builds. He introduced me to our compare.sh script that is used to compare two builds. I built a baseline without these changes and a new build with these changes applied, both on a Solaris SPARC T7 machine. 
Then I compared them with ./compare.sh -2dirs {$BUILD1}/hotspot/variant-server/libjvm/objs {$BUILD2}/hotspot/variant-server/libjvm/objs -libs --strip This compares the object files produced when compiling hotspot in build 1 and build 2 after stripping symbols. First it reported: Libraries... Size : Symbols : Deps : Disass : :* diff *: : : ./dtrace.o :* diff *: :* 38918*: ./jni.o :* diff *: :* 23226*: ./unsafe.o It seems like all symbols were not stripped here on these mentioned files and constituted all differences in the disassembly. So I made a simple sed filter to filter out symbol names in the disassembly with the regexp <.*>. The result was: Libraries... Size : Symbols : Deps : Disass : :* diff *: : : ./dtrace.o :* diff *: : : ./jni.o :* diff *: : : ./unsafe.o This shows that not a single instruction was emitted differently between the two builds. I also did the filtering manually on jni.o and unsafe.o in emacs to make sure I did not mess up. Are we happy with this, or do you still have doubts that this might result in different code or behavior? Thanks, /Erik > Cheers, > David > > >> Thanks, >> /Erik >> >>> Thanks, >>> David >>> ----- >>> >>> >>>> Specifically, the man pages for CC say: >>>> >>>> >>>> -library=lib[,lib...] >>>> >>>> Incorporates specified CC-provided libraries into >>>> compilation and >>>> linking. >>>> >>>> When the -library option is used to specify a >>>> CC-provided library, >>>> the proper -I paths are set during compilation and >>>> the proper -L, >>>> -Y, -P, and -R paths and -l options are set during >>>> linking. >>>> >>>> >>>> As we are setting this during compilation and not during linking, >>>> this corresponds to setting the right -I paths to find our C++ >>>> standard library headers. 
>>>> >>>> My studio friends mentioned I could double-check that we did indeed >>>> not add a dependency to any C++ standard library by running elfdump >>>> on the generated libjvm.so file and check if the NEEDED entries in >>>> the dynamic section look right. I did and here are the results: >>>> >>>> [0] NEEDED 0x2918ee libsocket.so.1 >>>> [1] NEEDED 0x2918fd libsched.so.1 >>>> [2] NEEDED 0x29190b libdl.so.1 >>>> [3] NEEDED 0x291916 libm.so.1 >>>> [4] NEEDED 0x291920 libCrun.so.1 >>>> [5] NEEDED 0x29192d libthread.so.1 >>>> [6] NEEDED 0x29193c libdoor.so.1 >>>> [7] NEEDED 0x291949 libc.so.1 >>>> [8] NEEDED 0x291953 libdemangle.so.1 >>>> [9] NEEDED 0x291964 libnsl.so.1 >>>> [10] NEEDED 0x291970 libkstat.so.1 >>>> [11] NEEDED 0x29197e librt.so.1 >>>> >>>> This list does not include any C++ standard libraries, as expected >>>> (libCrun is always in there even with -library=%none, and as >>>> expected no libstlport4.so or libCstd.so files are in there). The >>>> NEEDED entries in the dynamic section look identical with and >>>> without my patch. >>>> >>>>> I'm finding the actual build situation very confusing. It seems to >>>>> me in looking at the hotspot build files and the top-level build >>>>> files that -xnolib is used for C++ compilation & linking whereas >>>>> -library=%none is used for C compilation & linking. But the change >>>>> is being applied to $2JVM_CFLAGS which one would think is for C >>>>> compilation but we don't have $2JVM_CXXFLAGS, so it seems to be >>>>> used for both! >>>> >>>> I have also been confused by this when I tried adding CXX flags >>>> through configure that seemed to not be used. But that's a >>>> different can of worms I suppose. 
>>>> >>>> Thanks, >>>> /Erik >>>> >>>>> David >>>>> >>>>> On 1/06/2017 7:36 PM, Erik ?sterlund wrote: >>>>>> Hi David, >>>>>> >>>>>> On 2017-06-01 08:09, David Holmes wrote: >>>>>>> Hi Kim, >>>>>>> >>>>>>> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>>>>>>> On May 31, 2017, at 9:22 PM, David Holmes >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> Hi Erik, >>>>>>>>> >>>>>>>>> A small change with big questions :) >>>>>>>>> >>>>>>>>> On 31/05/2017 11:45 PM, Erik ?sterlund wrote: >>>>>>>>>> Hi, >>>>>>>>>> It would be desirable to be able to use harmless C++ standard >>>>>>>>>> library headers like in the code as long as it does >>>>>>>>>> not add any link-time dependencies to the standard library. >>>>>>>>> >>>>>>>>> What does a 'harmless' C++ standard library header look like? >>>>>>>> >>>>>>>> Header-only (doesn't require linking), doesn't run afoul of our >>>>>>>> [vm]assert macro, and provides functionality we presently lack (or >>>>>>>> only handle poorly) and would not be easy to reproduce. >>>>>>> >>>>>>> And how does one establish those properties exist for a given >>>>>>> header file? Just use it and if no link errors then all is good? >>>>>> >>>>>> Objects from headers that are not ODR-used such as constant >>>>>> folded expressions are not imposing link-time dependencies to C++ >>>>>> libraries. The -xnolib that we already have in the LDFLAGS will >>>>>> catch any accidental ODR-uses of C++ objects, and the JVM will >>>>>> not build if that happens. >>>>>> >>>>>> As for external headers being included and not playing nicely >>>>>> with macros, this has to be evaluated on a case by case basis. >>>>>> Note that this is a problem that occurs when using system headers >>>>>> (that we are already using), as it is for using C++ standard >>>>>> library headers. We even run into that in our own JVM when e.g. >>>>>> the min/max macros occasionally slaps us gently in the face from >>>>>> time to time. 
>>>>>> >>>>>>> >>>>>>>> The instigator for this is Erik and I are working on a project >>>>>>>> that >>>>>>>> needs information that is present in std::numeric_limits<> >>>>>>>> (provided >>>>>>>> by the header). Reproducing that functionality ourselves >>>>>>>> would require platform-specific code (with all the complexity >>>>>>>> that can >>>>>>>> imply). We'd really rather not re-discover and maintain >>>>>>>> information >>>>>>>> that is trivially accessible in every standard library. >>>>>>> >>>>>>> Understood. I have no issue with using but am concerned >>>>>>> by the state of stlport4. Can you use without changing >>>>>>> -library=%none? >>>>>> >>>>>> No, that is precisely why we are here. >>>>>> >>>>>>> >>>>>>>>>> This is possible on all supported platforms except the ones >>>>>>>>>> using the solaris studio compiler where we enforce >>>>>>>>>> -library=%none in both CFLAGS and LDFLAGS. >>>>>>>>>> I propose to remove the restriction from CFLAGS but keep it >>>>>>>>>> on LDFLAGS. >>>>>>>>>> I have consulted with the studio folks, and they think this >>>>>>>>>> is absolutely fine and thought that the choice of >>>>>>>>>> -library=stlport4 should be fine for our CFLAGS and is indeed >>>>>>>>>> what is already used in the gtest launcher. >>>>>>>>> >>>>>>>>> So what exactly does this mean? IIUC this allows you to use >>>>>>>>> headers for, and compile against "STLport?s Standard Library >>>>>>>>> implementation version 4.5.3 instead of the default libCstd". >>>>>>>>> But how do you then not need to link against libstlport.so ?? >>>>>>>>> >>>>>>>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>>>>>>> >>>>>>>>> "STLport is binary incompatible with the default libCstd. If >>>>>>>>> you use the STLport implementation of the standard library, >>>>>>>>> then you must compile and link all files, including >>>>>>>>> third-party libraries, with the option -library=stlport4? 
>>>>>>>> >>>>>>>> It means we can only use header-only parts of the standard >>>>>>>> library. >>>>>>>> This was confirmed / suggested by the Studio folks Erik consulted, >>>>>>>> providing such limited access while continuing to constrain our >>>>>>>> dependency on the library. Figuring out what can be used will >>>>>>>> need to >>>>>>>> be determined on a case-by-case basis. Maybe we could just >>>>>>>> link with >>>>>>>> a standard library on Solaris too. So far as I can tell, >>>>>>>> Solaris is >>>>>>>> the only platform where we don't do that. But Erik is trying >>>>>>>> to be >>>>>>>> conservative. >>>>>>> >>>>>>> Okay, but the docs don't seem to acknowledge the ability to use, >>>>>>> but not link to, stlport4. >>>>>> >>>>>> Not ODR-used objects do not require linkage. >>>>>> (http://en.cppreference.com/w/cpp/language/definition) >>>>>> I have confirmed directly with the studio folks to be certain >>>>>> that accidental linkage would fail by keeping our existing guards >>>>>> in the LDFLAGS rather than the CFLAGS. >>>>>> This is also reasonably well documented already >>>>>> (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). >>>>>> >>>>>>> >>>>>>>>> There are lots of other comments in that document regarding >>>>>>>>> STLport that makes me think that using it may be introducing a >>>>>>>>> fragile dependency into the OpenJDK code! >>>>>>>>> >>>>>>>>> "STLport is an open source product and does not guarantee >>>>>>>>> compatibility across different releases. In other words, >>>>>>>>> compiling with a future version of STLport may break >>>>>>>>> applications compiled with STLport 4.5.3. It also might not be >>>>>>>>> possible to link binaries compiled using STLport 4.5.3 with >>>>>>>>> binaries compiled using a future version of STLport." >>>>>>>>> >>>>>>>>> "Future releases of the compiler might not include STLport4. >>>>>>>>> They might include only a later version of STLport. 
The >>>>>>>>> compiler option -library=stlport4 might not be available in >>>>>>>>> future releases, but could be replaced by an option referring >>>>>>>>> to a later STLport version." >>>>>>>>> >>>>>>>>> None of that sounds very good to me. >>>>>>>> >>>>>>>> I don't see how this is any different from any other part of the >>>>>>>> process for using a different version of Solaris Studio. >>>>>>> >>>>>>> Well we'd discover the problem when testing the compiler change, >>>>>>> but my point was more to the fact that they don't seem very >>>>>>> committed to this library - very much a "use at own risk" >>>>>>> disclaimer. >>>>>> >>>>>> If we eventually need to use something more modern for features >>>>>> that have not been around for a decade, like C++11 features, then >>>>>> we can change standard library when that day comes. >>>>>> >>>>>>> >>>>>>>> stlport4 is one of the three standard libraries that are presently >>>>>>>> included with Solaris Studio (libCstd, stlport4, gcc). Erik >>>>>>>> asked the >>>>>>>> Studio folks which to use (for the purposes of our present >>>>>>>> project, we >>>>>>>> don't have any particular preference, so long as it works), and >>>>>>>> stlport4 seemed the right choice (libCstd was, I think, >>>>>>>> described as >>>>>>>> "ancient"). Perhaps more importantly, we already use stlport4, >>>>>>>> including linking against it, for gtest builds. Mixing two >>>>>>>> different >>>>>>>> standard libraries seems like a bad idea... >>>>>>> >>>>>>> So we have the choice of "ancient", "unsupported" or gcc :) >>>>>>> >>>>>>> My confidence in this has not increased :) >>>>>> >>>>>> I trust that e.g. std::numeric_limits::is_signed in the >>>>>> standard libraries has more mileage than whatever simplified >>>>>> rewrite of that we try to replicate in the JVM. 
So it is not >>>>>> obvious to me that we should have less confidence in the same >>>>>> functionality from a standard library shipped together with the >>>>>> compiler we are using and that has already been used and tested >>>>>> in a variety of C++ applications for over a decade compared to >>>>>> the alternative of reinventing it ourselves. >>>>>> >>>>>>> What we do in gtest doesn't necessarily make things okay to do >>>>>>> in the product. >>>>>>> >>>>>>> If this were part of a compiler upgrade process we'd be >>>>>>> comparing binaries with old flag and new to ensure there are no >>>>>>> unexpected consequences. >>>>>> >>>>>> I would not compare including to a compiler upgrade >>>>>> process as we are not changing the compiler and hence not the way >>>>>> code is generated, but rather compare it to including a new >>>>>> system header that has previously not been included to use a >>>>>> constant folded expression from that header that has been used >>>>>> and tested for a decade. At least that is how I think of it. >>>>>> >>>>>> Thanks, >>>>>> /Erik >>>>>> >>>>>>> >>>>>>> Cheers, >>>>>>> David >>>>>>> >>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> David >>>>>>>>> >>>>>>>>> >>>>>>>>>> Webrev for jdk10-hs top level repository: >>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>>>>>>> Webrev for jdk10-hs hotspot repository: >>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>>>>>>> Testing: JPRT. >>>>>>>>>> Will need a sponsor. >>>>>>>>>> Thanks, >>>>>>>>>> /Erik >>>>>>>> >>>>>>>> >>>>>> >>>> >> From daniel.daugherty at oracle.com Mon Jun 5 16:31:40 2017 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Mon, 5 Jun 2017 10:31:40 -0600 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: <59358492.7030803@oracle.com> References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> <593029CB.7000100@oracle.com> <593534BD.1090805@oracle.com> <59358492.7030803@oracle.com> Message-ID: <997648e7-2880-a626-f866-8892abee2a5d@oracle.com> On 6/5/17 10:19 AM, Erik Österlund wrote: > Hi David, > > On 2017-06-05 14:45, David Holmes wrote: >> Hi Erik, >> >> On 5/06/2017 8:38 PM, Erik Österlund wrote: >>> Hi David, >>> >>> On 2017-06-02 03:30, David Holmes wrote: >>>> Hi Erik, >>>> >>>> On 2/06/2017 12:50 AM, Erik Österlund wrote: >>>>> Hi David, >>>>> >>>>> On 2017-06-01 14:33, David Holmes wrote: >>>>>> Hi Erik, >>>>>> >>>>>> Just to be clear it is not the use of <limits> that I am >>>>>> concerned about, it is the -library=stlport4. It is the use of >>>>>> that flag that I would want to check in terms of having no effect >>>>>> on any existing code generation. >>>>> >>>>> Thank you for the clarification. The use of -library=stlport4 >>>>> should not have anything to do with code generation. It only says >>>>> where to look for the standard library headers such as >>>>> <limits> that are used in the compilation units. >>>> >>>> The potential problem is that the stlport4 include path eg: >>>> >>>> ./SS12u4/lib/compilers/include/CC/stlport4/ >>>> >>>> doesn't only contain the C++ headers (new, limits, string etc) but >>>> also a whole bunch of regular 'standard' .h headers that are >>>> _different_ to those found outside the stlport4 directory ie the >>>> ones we would currently include. I don't know if the differences >>>> are significant, nor whether those others may be found ahead of the >>>> stlport4 version. But that is my concern about the effects on the >>>> code.
>>> >>> While I do not think exchanging these headers will have any >>> behavioral impact, I agree that we can not prove so as they are >>> indeed different header files. That is a good point. >>> >>> However, I think that makes the stlport4 case stronger rather than >>> weaker. We already use stlport4 for our gtest testing (because it is >>> required and does not build without it). And if those headers would >>> indeed have slightly different behaviour as you imply, it further >>> motivates using the same standard library when compiling the product >>> as the testing code. If they were to behave slightly differently, it >>> might be that our gtest tests does not catch hidden bugs that only >>> manifest when building with a different set of headers used for the >>> product build. I therefore find it exceedingly dangerous to stay on >>> two standard libraries (depending on if test code or product code is >>> compiled) compared to consistently using the same standard library >>> across all compilations. So for me, the larger the risk is of them >>> behaving differently is, the bigger the motivation is to use >>> stlport4 consistently. >> >> Regardless of what gtest does if you want to switch the standard >> libraries used by the product then IMHO that should go through a >> vetting process no weaker than that for changing the toolchain, as >> you effectively are doing that. > > I talked to Erik Joelsson about how to compare two builds. He > introduced me to our compare.sh script that is used to compare two > builds. > I built a baseline without these changes and a new build with these > changes applied, both on a Solaris SPARC T7 machine. Then I compared > them with ./compare.sh -2dirs > {$BUILD1}/hotspot/variant-server/libjvm/objs > {$BUILD2}/hotspot/variant-server/libjvm/objs -libs --strip > > This compares the object files produced when compiling hotspot in > build 1 and build 2 after stripping symbols. > > First it reported: > Libraries... 
> Size : Symbols : Deps : Disass : > :* diff *: : : ./dtrace.o > :* diff *: :* 38918*: ./jni.o > :* diff *: :* 23226*: ./unsafe.o > > It seems like all symbols were not stripped here on these mentioned > files and constituted all differences in the disassembly. So I made a > simple sed filter to filter out symbol names in the disassembly with > the regexp <.*>. > > The result was: > Libraries... > Size : Symbols : Deps : Disass : > :* diff *: : : ./dtrace.o > :* diff *: : : ./jni.o > :* diff *: : : ./unsafe.o > > This shows that not a single instruction was emitted differently > between the two builds. > > I also did the filtering manually on jni.o and unsafe.o in emacs to > make sure I did not mess up. > > Are we happy with this, or do you still have doubts that this might > result in different code or behavior? Just to be clear: The current experiment changes both the header and the standard library right? If so, then the compare.sh run works for validating that using the new header file will not result in a change in behavior. However, that comparison doesn't do anything for testing a switch in the standard libraries right? Dan > > Thanks, > /Erik > >> Cheers, >> David >> >> >>> Thanks, >>> /Erik >>> >>>> Thanks, >>>> David >>>> ----- >>>> >>>> >>>>> Specifically, the man pages for CC say: >>>>> >>>>> >>>>> -library=lib[,lib...] >>>>> >>>>> Incorporates specified CC-provided libraries into >>>>> compilation and >>>>> linking. >>>>> >>>>> When the -library option is used to specify a >>>>> CC-provided library, >>>>> the proper -I paths are set during compilation and >>>>> the proper -L, >>>>> -Y, -P, and -R paths and -l options are set during >>>>> linking. >>>>> >>>>> >>>>> As we are setting this during compilation and not during linking, >>>>> this corresponds to setting the right -I paths to find our C++ >>>>> standard library headers. 
>>>>> >>>>> My studio friends mentioned I could double-check that we did >>>>> indeed not add a dependency to any C++ standard library by running >>>>> elfdump on the generated libjvm.so file and check if the NEEDED >>>>> entries in the dynamic section look right. I did and here are the >>>>> results: >>>>> >>>>> [0] NEEDED 0x2918ee libsocket.so.1 >>>>> [1] NEEDED 0x2918fd libsched.so.1 >>>>> [2] NEEDED 0x29190b libdl.so.1 >>>>> [3] NEEDED 0x291916 libm.so.1 >>>>> [4] NEEDED 0x291920 libCrun.so.1 >>>>> [5] NEEDED 0x29192d libthread.so.1 >>>>> [6] NEEDED 0x29193c libdoor.so.1 >>>>> [7] NEEDED 0x291949 libc.so.1 >>>>> [8] NEEDED 0x291953 libdemangle.so.1 >>>>> [9] NEEDED 0x291964 libnsl.so.1 >>>>> [10] NEEDED 0x291970 libkstat.so.1 >>>>> [11] NEEDED 0x29197e librt.so.1 >>>>> >>>>> This list does not include any C++ standard libraries, as expected >>>>> (libCrun is always in there even with -library=%none, and as >>>>> expected no libstlport4.so or libCstd.so files are in there). The >>>>> NEEDED entries in the dynamic section look identical with and >>>>> without my patch. >>>>> >>>>>> I'm finding the actual build situation very confusing. It seems >>>>>> to me in looking at the hotspot build files and the top-level >>>>>> build files that -xnolib is used for C++ compilation & linking >>>>>> whereas -library=%none is used for C compilation & linking. But >>>>>> the change is being applied to $2JVM_CFLAGS which one would think >>>>>> is for C compilation but we don't have $2JVM_CXXFLAGS, so it >>>>>> seems to be used for both! >>>>> >>>>> I have also been confused by this when I tried adding CXX flags >>>>> through configure that seemed to not be used. But that's a >>>>> different can of worms I suppose. 
>>>>> Thanks, >>>>> /Erik >>>>> >>>>>> David >>>>>> >>>>>> On 1/06/2017 7:36 PM, Erik Österlund wrote: >>>>>>> Hi David, >>>>>>> >>>>>>> On 2017-06-01 08:09, David Holmes wrote: >>>>>>>> Hi Kim, >>>>>>>> >>>>>>>> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>>>>>>>> On May 31, 2017, at 9:22 PM, David Holmes >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> Hi Erik, >>>>>>>>>> >>>>>>>>>> A small change with big questions :) >>>>>>>>>> >>>>>>>>>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>>>>>>>>> Hi, >>>>>>>>>>> It would be desirable to be able to use harmless C++ >>>>>>>>>>> standard library headers like <limits> in the code as long >>>>>>>>>>> as it does not add any link-time dependencies to the >>>>>>>>>>> standard library. >>>>>>>>>> >>>>>>>>>> What does a 'harmless' C++ standard library header look like? >>>>>>>>> >>>>>>>>> Header-only (doesn't require linking), doesn't run afoul of our >>>>>>>>> [vm]assert macro, and provides functionality we presently lack >>>>>>>>> (or >>>>>>>>> only handle poorly) and would not be easy to reproduce. >>>>>>>> >>>>>>>> And how does one establish those properties exist for a given >>>>>>>> header file? Just use it and if no link errors then all is good? >>>>>>> >>>>>>> Objects from headers that are not ODR-used such as constant >>>>>>> folded expressions are not imposing link-time dependencies to >>>>>>> C++ libraries. The -xnolib that we already have in the LDFLAGS >>>>>>> will catch any accidental ODR-uses of C++ objects, and the JVM >>>>>>> will not build if that happens. >>>>>>> >>>>>>> As for external headers being included and not playing nicely >>>>>>> with macros, this has to be evaluated on a case by case basis. >>>>>>> Note that this is a problem that occurs when using system >>>>>>> headers (that we are already using), as it is for using C++ >>>>>>> standard library headers. We even run into that in our own JVM >>>>>>> when e.g. the min/max macros occasionally slap us gently in the >>>>>>> face from time to time.
>>>>>>> >>>>>>>> >>>>>>>>> The instigator for this is Erik and I are working on a project >>>>>>>>> that >>>>>>>>> needs information that is present in std::numeric_limits<> >>>>>>>>> (provided >>>>>>>>> by the <limits> header). Reproducing that functionality >>>>>>>>> ourselves >>>>>>>>> would require platform-specific code (with all the complexity >>>>>>>>> that can >>>>>>>>> imply). We'd really rather not re-discover and maintain >>>>>>>>> information >>>>>>>>> that is trivially accessible in every standard library. >>>>>>>> >>>>>>>> Understood. I have no issue with using <limits> but am >>>>>>>> concerned by the state of stlport4. Can you use <limits> >>>>>>>> without changing -library=%none? >>>>>>> >>>>>>> No, that is precisely why we are here. >>>>>>> >>>>>>>> >>>>>>>>>>> This is possible on all supported platforms except the ones >>>>>>>>>>> using the solaris studio compiler where we enforce >>>>>>>>>>> -library=%none in both CFLAGS and LDFLAGS. >>>>>>>>>>> I propose to remove the restriction from CFLAGS but keep it >>>>>>>>>>> on LDFLAGS. >>>>>>>>>>> I have consulted with the studio folks, and they think this >>>>>>>>>>> is absolutely fine and thought that the choice of >>>>>>>>>>> -library=stlport4 should be fine for our CFLAGS and is >>>>>>>>>>> indeed what is already used in the gtest launcher. >>>>>>>>>> >>>>>>>>>> So what exactly does this mean? IIUC this allows you to use >>>>>>>>>> headers for, and compile against "STLport's Standard Library >>>>>>>>>> implementation version 4.5.3 instead of the default libCstd". >>>>>>>>>> But how do you then not need to link against libstlport.so ?? >>>>>>>>>> >>>>>>>>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>>>>>>>> >>>>>>>>>> "STLport is binary incompatible with the default libCstd. If >>>>>>>>>> you use the STLport implementation of the standard library, >>>>>>>>>> then you must compile and link all files, including >>>>>>>>>> third-party libraries, with the option -library=stlport4"
>>>>>>>>> >>>>>>>>> It means we can only use header-only parts of the standard >>>>>>>>> library. >>>>>>>>> This was confirmed / suggested by the Studio folks Erik >>>>>>>>> consulted, >>>>>>>>> providing such limited access while continuing to constrain our >>>>>>>>> dependency on the library. Figuring out what can be used will >>>>>>>>> need to >>>>>>>>> be determined on a case-by-case basis. Maybe we could just >>>>>>>>> link with >>>>>>>>> a standard library on Solaris too. So far as I can tell, >>>>>>>>> Solaris is >>>>>>>>> the only platform where we don't do that. But Erik is trying >>>>>>>>> to be >>>>>>>>> conservative. >>>>>>>> >>>>>>>> Okay, but the docs don't seem to acknowledge the ability to >>>>>>>> use, but not link to, stlport4. >>>>>>> >>>>>>> Not ODR-used objects do not require linkage. >>>>>>> (http://en.cppreference.com/w/cpp/language/definition) >>>>>>> I have confirmed directly with the studio folks to be certain >>>>>>> that accidental linkage would fail by keeping our existing >>>>>>> guards in the LDFLAGS rather than the CFLAGS. >>>>>>> This is also reasonably well documented already >>>>>>> (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). >>>>>>> >>>>>>>> >>>>>>>>>> There are lots of other comments in that document regarding >>>>>>>>>> STLport that makes me think that using it may be introducing >>>>>>>>>> a fragile dependency into the OpenJDK code! >>>>>>>>>> >>>>>>>>>> "STLport is an open source product and does not guarantee >>>>>>>>>> compatibility across different releases. In other words, >>>>>>>>>> compiling with a future version of STLport may break >>>>>>>>>> applications compiled with STLport 4.5.3. It also might not >>>>>>>>>> be possible to link binaries compiled using STLport 4.5.3 >>>>>>>>>> with binaries compiled using a future version of STLport." >>>>>>>>>> >>>>>>>>>> "Future releases of the compiler might not include STLport4. >>>>>>>>>> They might include only a later version of STLport. 
The >>>>>>>>>> compiler option -library=stlport4 might not be available in >>>>>>>>>> future releases, but could be replaced by an option referring >>>>>>>>>> to a later STLport version." >>>>>>>>>> >>>>>>>>>> None of that sounds very good to me. >>>>>>>>> >>>>>>>>> I don't see how this is any different from any other part of the >>>>>>>>> process for using a different version of Solaris Studio. >>>>>>>> >>>>>>>> Well we'd discover the problem when testing the compiler >>>>>>>> change, but my point was more to the fact that they don't seem >>>>>>>> very committed to this library - very much a "use at own risk" >>>>>>>> disclaimer. >>>>>>> >>>>>>> If we eventually need to use something more modern for features >>>>>>> that have not been around for a decade, like C++11 features, >>>>>>> then we can change standard library when that day comes. >>>>>>> >>>>>>>> >>>>>>>>> stlport4 is one of the three standard libraries that are >>>>>>>>> presently >>>>>>>>> included with Solaris Studio (libCstd, stlport4, gcc). Erik >>>>>>>>> asked the >>>>>>>>> Studio folks which to use (for the purposes of our present >>>>>>>>> project, we >>>>>>>>> don't have any particular preference, so long as it works), and >>>>>>>>> stlport4 seemed the right choice (libCstd was, I think, >>>>>>>>> described as >>>>>>>>> "ancient"). Perhaps more importantly, we already use stlport4, >>>>>>>>> including linking against it, for gtest builds. Mixing two >>>>>>>>> different >>>>>>>>> standard libraries seems like a bad idea... >>>>>>>> >>>>>>>> So we have the choice of "ancient", "unsupported" or gcc :) >>>>>>>> >>>>>>>> My confidence in this has not increased :) >>>>>>> >>>>>>> I trust that e.g. std::numeric_limits::is_signed in the >>>>>>> standard libraries has more mileage than whatever simplified >>>>>>> rewrite of that we try to replicate in the JVM. 
So it is not >>>>>>> obvious to me that we should have less confidence in the same >>>>>>> functionality from a standard library shipped together with the >>>>>>> compiler we are using and that has already been used and tested >>>>>>> in a variety of C++ applications for over a decade compared to >>>>>>> the alternative of reinventing it ourselves. >>>>>>> >>>>>>>> What we do in gtest doesn't necessarily make things okay to do >>>>>>>> in the product. >>>>>>>> >>>>>>>> If this were part of a compiler upgrade process we'd be >>>>>>>> comparing binaries with old flag and new to ensure there are no >>>>>>>> unexpected consequences. >>>>>>> >>>>>>> I would not compare including <limits> to a compiler upgrade >>>>>>> process as we are not changing the compiler and hence not the >>>>>>> way code is generated, but rather compare it to including a new >>>>>>> system header that has previously not been included to use a >>>>>>> constant folded expression from that header that has been used >>>>>>> and tested for a decade. At least that is how I think of it. >>>>>>> >>>>>>> Thanks, >>>>>>> /Erik >>>>>>> >>>>>>>> >>>>>>>> Cheers, >>>>>>>> David >>>>>>>> >>>>>>>>>> >>>>>>>>>> Cheers, >>>>>>>>>> David >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Webrev for jdk10-hs top level repository: >>>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>>>>>>>> Webrev for jdk10-hs hotspot repository: >>>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>>>>>>>> Testing: JPRT. >>>>>>>>>>> Will need a sponsor.
>>>>>>>>>>> Thanks, >>>>>>>>>>> /Erik >>>>>>>>> >>>>>>>>> >>>>>>> >>>>> >>> > From kim.barrett at oracle.com Mon Jun 5 16:51:27 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 5 Jun 2017 12:51:27 -0400 Subject: RFR: 8166651: OrderAccess::load_acquire &etc should have const parameters In-Reply-To: References: <5ec3ca7f-f4cd-337c-63d5-3b00fd2839a7@redhat.com> <702f7e19-da8d-6ca2-8277-185a9468ef2a@redhat.com> <7E188A13-A839-4DE4-8AA4-FA1E36F326AD@oracle.com> Message-ID: > On Jun 5, 2017, at 11:02 AM, Andrew Dinn wrote: > > Hi Kim, > > On 05/06/17 06:02, Kim Barrett wrote: >> >> Waiting for re-reviews for the additional changes. > > The new changes work fine on AArch64. Thanks. > regards, > > > Andrew Dinn > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in England and Wales under Company Registration No. 03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From erik.osterlund at oracle.com Mon Jun 5 16:59:38 2017 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Mon, 5 Jun 2017 18:59:38 +0200 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: <997648e7-2880-a626-f866-8892abee2a5d@oracle.com> References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> <593029CB.7000100@oracle.com> <593534BD.1090805@oracle.com> <59358492.7030803@oracle.com> <997648e7-2880-a626-f866-8892abee2a5d@oracle.com> Message-ID: Hi Dan, >> On 5 Jun 2017, at 18:31, Daniel D. 
Daugherty wrote: >> >> On 6/5/17 10:19 AM, Erik Österlund wrote: >> Hi David, >> >>> On 2017-06-05 14:45, David Holmes wrote: >>> Hi Erik, >>> >>>> On 5/06/2017 8:38 PM, Erik Österlund wrote: >>>> Hi David, >>>> >>>>> On 2017-06-02 03:30, David Holmes wrote: >>>>> Hi Erik, >>>>> >>>>>> On 2/06/2017 12:50 AM, Erik Österlund wrote: >>>>>> Hi David, >>>>>> >>>>>>> On 2017-06-01 14:33, David Holmes wrote: >>>>>>> Hi Erik, >>>>>>> >>>>>>> Just to be clear it is not the use of <limits> that I am concerned about, it is the -library=stlport4. It is the use of that flag that I would want to check in terms of having no effect on any existing code generation. >>>>>> >>>>>> Thank you for the clarification. The use of -library=stlport4 should not have anything to do with code generation. It only says where to look for the standard library headers such as <limits> that are used in the compilation units. >>>>> >>>>> The potential problem is that the stlport4 include path eg: >>>>> >>>>> ./SS12u4/lib/compilers/include/CC/stlport4/ >>>>> >>>>> doesn't only contain the C++ headers (new, limits, string etc) but also a whole bunch of regular 'standard' .h headers that are _different_ to those found outside the stlport4 directory ie the ones we would currently include. I don't know if the differences are significant, nor whether those others may be found ahead of the stlport4 version. But that is my concern about the effects on the code. >>>> >>>> While I do not think exchanging these headers will have any behavioral impact, I agree that we can not prove so as they are indeed different header files. That is a good point. >>>> >>>> However, I think that makes the stlport4 case stronger rather than weaker. We already use stlport4 for our gtest testing (because it is required and does not build without it). And if those headers would indeed have slightly different behaviour as you imply, it further motivates using the same standard library when compiling the product as the testing code.
If they were to behave slightly differently, it might be that our gtest tests does not catch hidden bugs that only manifest when building with a different set of headers used for the product build. I therefore find it exceedingly dangerous to stay on two standard libraries (depending on if test code or product code is compiled) compared to consistently using the same standard library across all compilations. So for me, the larger the risk is of them behaving differently is, the bigger the motivation is to use stlport4 consistently. >>> >>> Regardless of what gtest does if you want to switch the standard libraries used by the product then IMHO that should go through a vetting process no weaker than that for changing the toolchain, as you effectively are doing that. >> >> I talked to Erik Joelsson about how to compare two builds. He introduced me to our compare.sh script that is used to compare two builds. >> I built a baseline without these changes and a new build with these changes applied, both on a Solaris SPARC T7 machine. Then I compared them with ./compare.sh -2dirs {$BUILD1}/hotspot/variant-server/libjvm/objs {$BUILD2}/hotspot/variant-server/libjvm/objs -libs --strip >> >> This compares the object files produced when compiling hotspot in build 1 and build 2 after stripping symbols. >> >> First it reported: >> Libraries... >> Size : Symbols : Deps : Disass : >> :* diff *: : : ./dtrace.o >> :* diff *: :* 38918*: ./jni.o >> :* diff *: :* 23226*: ./unsafe.o >> >> It seems like all symbols were not stripped here on these mentioned files and constituted all differences in the disassembly. So I made a simple sed filter to filter out symbol names in the disassembly with the regexp <.*>. >> >> The result was: >> Libraries... >> Size : Symbols : Deps : Disass : >> :* diff *: : : ./dtrace.o >> :* diff *: : : ./jni.o >> :* diff *: : : ./unsafe.o >> >> This shows that not a single instruction was emitted differently between the two builds. 
>> >> I also did the filtering manually on jni.o and unsafe.o in emacs to make sure I did not mess up. >> >> Are we happy with this, or do you still have doubts that this might result in different code or behavior? > > Just to be clear: The current experiment changes both the header and > the standard library right? If so, then the compare.sh run works for > validating that using the new header file will not result in a change > in behavior. However, that comparison doesn't do anything for testing > a switch in the standard libraries right? The -xnolib guards are still there in the LDFLAGS. That is, the linker will not allow anything to link against either standard library. I have manually confirmed this by doing the sanity check of comparing the NEEDED entries in the dynamic section of the libjvm.so elf file using elfdump. It has no references to neither libstlport4 nor libCstd with or without my changes. Summary: the changes do not add any linktime dependencies to either standard library, and we are still guarded in the sense that if such dependencies were to accidentally be introduced, it would not build. The only difference then would be slightly different code generation of object files. And their disassemblies have been confirmed not to differ by even a single instruction generated differently. Thanks, /Erik > Dan > > >> >> Thanks, >> /Erik >> >>> Cheers, >>> David >>> >>> >>>> Thanks, >>>> /Erik >>>> >>>>> Thanks, >>>>> David >>>>> ----- >>>>> >>>>> >>>>>> Specifically, the man pages for CC say: >>>>>> >>>>>> >>>>>> -library=lib[,lib...] >>>>>> >>>>>> Incorporates specified CC-provided libraries into compilation and >>>>>> linking. >>>>>> >>>>>> When the -library option is used to specify a CC-provided library, >>>>>> the proper -I paths are set during compilation and the proper -L, >>>>>> -Y, -P, and -R paths and -l options are set during linking. 
>>>>>> >>>>>> >>>>>> As we are setting this during compilation and not during linking, this corresponds to setting the right -I paths to find our C++ standard library headers. >>>>>> >>>>>> My studio friends mentioned I could double-check that we did indeed not add a dependency to any C++ standard library by running elfdump on the generated libjvm.so file and check if the NEEDED entries in the dynamic section look right. I did and here are the results: >>>>>> >>>>>> [0] NEEDED 0x2918ee libsocket.so.1 >>>>>> [1] NEEDED 0x2918fd libsched.so.1 >>>>>> [2] NEEDED 0x29190b libdl.so.1 >>>>>> [3] NEEDED 0x291916 libm.so.1 >>>>>> [4] NEEDED 0x291920 libCrun.so.1 >>>>>> [5] NEEDED 0x29192d libthread.so.1 >>>>>> [6] NEEDED 0x29193c libdoor.so.1 >>>>>> [7] NEEDED 0x291949 libc.so.1 >>>>>> [8] NEEDED 0x291953 libdemangle.so.1 >>>>>> [9] NEEDED 0x291964 libnsl.so.1 >>>>>> [10] NEEDED 0x291970 libkstat.so.1 >>>>>> [11] NEEDED 0x29197e librt.so.1 >>>>>> >>>>>> This list does not include any C++ standard libraries, as expected (libCrun is always in there even with -library=%none, and as expected no libstlport4.so or libCstd.so files are in there). The NEEDED entries in the dynamic section look identical with and without my patch. >>>>>> >>>>>>> I'm finding the actual build situation very confusing. It seems to me in looking at the hotspot build files and the top-level build files that -xnolib is used for C++ compilation & linking whereas -library=%none is used for C compilation & linking. But the change is being applied to $2JVM_CFLAGS which one would think is for C compilation but we don't have $2JVM_CXXFLAGS, so it seems to be used for both! >>>>>> >>>>>> I have also been confused by this when I tried adding CXX flags through configure that seemed to not be used. But that's a different can of worms I suppose. 
>>>>>> >>>>>> Thanks, >>>>>> /Erik >>>>>> >>>>>>> David >>>>>>> >>>>>>>> On 1/06/2017 7:36 PM, Erik Österlund wrote: >>>>>>>> Hi David, >>>>>>>> >>>>>>>>> On 2017-06-01 08:09, David Holmes wrote: >>>>>>>>> Hi Kim, >>>>>>>>> >>>>>>>>> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>>>>>>>>> On May 31, 2017, at 9:22 PM, David Holmes wrote: >>>>>>>>>>> >>>>>>>>>>> Hi Erik, >>>>>>>>>>> >>>>>>>>>>> A small change with big questions :) >>>>>>>>>>> >>>>>>>>>>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>>>>>>>>>> Hi, >>>>>>>>>>>> It would be desirable to be able to use harmless C++ standard library headers like <limits> in the code as long as it does not add any link-time dependencies to the standard library. >>>>>>>>>>> >>>>>>>>>>> What does a 'harmless' C++ standard library header look like? >>>>>>>>>> >>>>>>>>>> Header-only (doesn't require linking), doesn't run afoul of our >>>>>>>>>> [vm]assert macro, and provides functionality we presently lack (or >>>>>>>>>> only handle poorly) and would not be easy to reproduce. >>>>>>>>> >>>>>>>>> And how does one establish those properties exist for a given header file? Just use it and if no link errors then all is good? >>>>>>>> >>>>>>>> Objects from headers that are not ODR-used such as constant folded expressions are not imposing link-time dependencies to C++ libraries. The -xnolib that we already have in the LDFLAGS will catch any accidental ODR-uses of C++ objects, and the JVM will not build if that happens. >>>>>>>> >>>>>>>> As for external headers being included and not playing nicely with macros, this has to be evaluated on a case by case basis. Note that this is a problem that occurs when using system headers (that we are already using), as it is for using C++ standard library headers. We even run into that in our own JVM when e.g. the min/max macros occasionally slap us gently in the face from time to time.
>>>>>>>> >>>>>>>>> >>>>>>>>>> The instigator for this is Erik and I are working on a project that >>>>>>>>>> needs information that is present in std::numeric_limits<> (provided >>>>>>>>>> by the <limits> header). Reproducing that functionality ourselves >>>>>>>>>> would require platform-specific code (with all the complexity that can >>>>>>>>>> imply). We'd really rather not re-discover and maintain information >>>>>>>>>> that is trivially accessible in every standard library. >>>>>>>>> >>>>>>>>> Understood. I have no issue with using <limits> but am concerned by the state of stlport4. Can you use <limits> without changing -library=%none? >>>>>>>> >>>>>>>> No, that is precisely why we are here. >>>>>>>> >>>>>>>>> >>>>>>>>>>>> This is possible on all supported platforms except the ones using the solaris studio compiler where we enforce -library=%none in both CFLAGS and LDFLAGS. >>>>>>>>>>>> I propose to remove the restriction from CFLAGS but keep it on LDFLAGS. >>>>>>>>>>>> I have consulted with the studio folks, and they think this is absolutely fine and thought that the choice of -library=stlport4 should be fine for our CFLAGS and is indeed what is already used in the gtest launcher. >>>>>>>>>>> >>>>>>>>>>> So what exactly does this mean? IIUC this allows you to use headers for, and compile against "STLport's Standard Library implementation version 4.5.3 instead of the default libCstd". But how do you then not need to link against libstlport.so ?? >>>>>>>>>>> >>>>>>>>>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>>>>>>>>> >>>>>>>>>>> "STLport is binary incompatible with the default libCstd. If you use the STLport implementation of the standard library, then you must compile and link all files, including third-party libraries, with the option -library=stlport4"
>>>>>>>>>> This was confirmed / suggested by the Studio folks Erik consulted, >>>>>>>>>> providing such limited access while continuing to constrain our >>>>>>>>>> dependency on the library. Figuring out what can be used will need to >>>>>>>>>> be determined on a case-by-case basis. Maybe we could just link with >>>>>>>>>> a standard library on Solaris too. So far as I can tell, Solaris is >>>>>>>>>> the only platform where we don't do that. But Erik is trying to be >>>>>>>>>> conservative. >>>>>>>>> >>>>>>>>> Okay, but the docs don't seem to acknowledge the ability to use, but not link to, stlport4. >>>>>>>> >>>>>>>> Not ODR-used objects do not require linkage. (http://en.cppreference.com/w/cpp/language/definition) >>>>>>>> I have confirmed directly with the studio folks to be certain that accidental linkage would fail by keeping our existing guards in the LDFLAGS rather than the CFLAGS. >>>>>>>> This is also reasonably well documented already (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). >>>>>>>> >>>>>>>>> >>>>>>>>>>> There are lots of other comments in that document regarding STLport that makes me think that using it may be introducing a fragile dependency into the OpenJDK code! >>>>>>>>>>> >>>>>>>>>>> "STLport is an open source product and does not guarantee compatibility across different releases. In other words, compiling with a future version of STLport may break applications compiled with STLport 4.5.3. It also might not be possible to link binaries compiled using STLport 4.5.3 with binaries compiled using a future version of STLport." >>>>>>>>>>> >>>>>>>>>>> "Future releases of the compiler might not include STLport4. They might include only a later version of STLport. The compiler option -library=stlport4 might not be available in future releases, but could be replaced by an option referring to a later STLport version." >>>>>>>>>>> >>>>>>>>>>> None of that sounds very good to me. 
>>>>>>>>>> >>>>>>>>>> I don't see how this is any different from any other part of the >>>>>>>>>> process for using a different version of Solaris Studio. >>>>>>>>> >>>>>>>>> Well we'd discover the problem when testing the compiler change, but my point was more to the fact that they don't seem very committed to this library - very much a "use at own risk" disclaimer. >>>>>>>> >>>>>>>> If we eventually need to use something more modern for features that have not been around for a decade, like C++11 features, then we can change standard library when that day comes. >>>>>>>> >>>>>>>>> >>>>>>>>>> stlport4 is one of the three standard libraries that are presently >>>>>>>>>> included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the >>>>>>>>>> Studio folks which to use (for the purposes of our present project, we >>>>>>>>>> don't have any particular preference, so long as it works), and >>>>>>>>>> stlport4 seemed the right choice (libCstd was, I think, described as >>>>>>>>>> "ancient"). Perhaps more importantly, we already use stlport4, >>>>>>>>>> including linking against it, for gtest builds. Mixing two different >>>>>>>>>> standard libraries seems like a bad idea... >>>>>>>>> >>>>>>>>> So we have the choice of "ancient", "unsupported" or gcc :) >>>>>>>>> >>>>>>>>> My confidence in this has not increased :) >>>>>>>> >>>>>>>> I trust that e.g. std::numeric_limits::is_signed in the standard libraries has more mileage than whatever simplified rewrite of that we try to replicate in the JVM. So it is not obvious to me that we should have less confidence in the same functionality from a standard library shipped together with the compiler we are using and that has already been used and tested in a variety of C++ applications for over a decade compared to the alternative of reinventing it ourselves. >>>>>>>> >>>>>>>>> What we do in gtest doesn't necessarily make things okay to do in the product. 
>>>>>>>>> >>>>>>>>> If this were part of a compiler upgrade process we'd be comparing binaries with old flag and new to ensure there are no unexpected consequences. >>>>>>>> I would not compare including <limits> to a compiler upgrade process as we are not changing the compiler and hence not the way code is generated, but rather compare it to including a new system header that has previously not been included to use a constant folded expression from that header that has been used and tested for a decade. At least that is how I think of it. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> /Erik >>>>>>>> >>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> David >>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Cheers, >>>>>>>>>>> David >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Webrev for jdk10-hs top level repository: >>>>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>>>>>>>>> Webrev for jdk10-hs hotspot repository: >>>>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>>>>>>>>> Testing: JPRT. >>>>>>>>>>>> Will need a sponsor. >>>>>>>>>>>> Thanks, >>>>>>>>>>>> /Erik > From daniel.daugherty at oracle.com Mon Jun 5 17:22:32 2017 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Mon, 5 Jun 2017 11:22:32 -0600 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> <593029CB.7000100@oracle.com> <593534BD.1090805@oracle.com> <59358492.7030803@oracle.com> <997648e7-2880-a626-f866-8892abee2a5d@oracle.com> Message-ID: On 6/5/17 10:59 AM, Erik Osterlund wrote: > Hi Dan, > >>> On 5 Jun 2017, at 18:31, Daniel D. 
Daugherty wrote: >>> >>> On 6/5/17 10:19 AM, Erik Österlund wrote: >>> Hi David, >>> >>>> On 2017-06-05 14:45, David Holmes wrote: >>>> Hi Erik, >>>> >>>>> On 5/06/2017 8:38 PM, Erik Österlund wrote: >>>>> Hi David, >>>>> >>>>>> On 2017-06-02 03:30, David Holmes wrote: >>>>>> Hi Erik, >>>>>> >>>>>>> On 2/06/2017 12:50 AM, Erik Österlund wrote: >>>>>>> Hi David, >>>>>>> >>>>>>>> On 2017-06-01 14:33, David Holmes wrote: >>>>>>>> Hi Erik, >>>>>>>> >>>>>>>> Just to be clear it is not the use of <limits> that I am concerned about, it is the -library=stlport4. It is the use of that flag that I would want to check in terms of having no effect on any existing code generation. >>>>>>> Thank you for the clarification. The use of -library=stlport4 should not have anything to do with code generation. It only says where to look for the standard library headers such as <limits> that are used in the compilation units. >>>>>> The potential problem is that the stlport4 include path eg: >>>>>> >>>>>> ./SS12u4/lib/compilers/include/CC/stlport4/ >>>>>> >>>>>> doesn't only contain the C++ headers (new, limits, string etc) but also a whole bunch of regular 'standard' .h headers that are _different_ to those found outside the stlport4 directory ie the ones we would currently include. I don't know if the differences are significant, nor whether those others may be found ahead of the stlport4 version. But that is my concern about the effects on the code. >>>>> While I do not think exchanging these headers will have any behavioral impact, I agree that we can not prove so as they are indeed different header files. That is a good point. >>>>> >>>>> However, I think that makes the stlport4 case stronger rather than weaker. We already use stlport4 for our gtest testing (because it is required and does not build without it). And if those headers would indeed have slightly different behaviour as you imply, it further motivates using the same standard library when compiling the product as the testing code. 
If they were to behave slightly differently, it might be that our gtest tests do not catch hidden bugs that only manifest when building with a different set of headers used for the product build. I therefore find it exceedingly dangerous to stay on two standard libraries (depending on if test code or product code is compiled) compared to consistently using the same standard library across all compilations. So for me, the larger the risk of them behaving differently, the bigger the motivation to use stlport4 consistently. >>>> Regardless of what gtest does if you want to switch the standard libraries used by the product then IMHO that should go through a vetting process no weaker than that for changing the toolchain, as you effectively are doing that. >>> I talked to Erik Joelsson about how to compare two builds. He introduced me to our compare.sh script that is used to compare two builds. >>> I built a baseline without these changes and a new build with these changes applied, both on a Solaris SPARC T7 machine. Then I compared them with ./compare.sh -2dirs {$BUILD1}/hotspot/variant-server/libjvm/objs {$BUILD2}/hotspot/variant-server/libjvm/objs -libs --strip >>> >>> This compares the object files produced when compiling hotspot in build 1 and build 2 after stripping symbols. >>> >>> First it reported: >>> Libraries... >>> Size : Symbols : Deps : Disass : >>> :* diff *: : : ./dtrace.o >>> :* diff *: :* 38918*: ./jni.o >>> :* diff *: :* 23226*: ./unsafe.o >>> >>> It seems that not all symbols were stripped on the mentioned files, and these constituted all the differences in the disassembly. So I made a simple sed filter to filter out symbol names in the disassembly with the regexp <.*>. >>> >>> The result was: >>> Libraries... >>> Size : Symbols : Deps : Disass : >>> :* diff *: : : ./dtrace.o >>> :* diff *: : : ./jni.o >>> :* diff *: : : ./unsafe.o >>> >>> This shows that not a single instruction was emitted differently between the two builds. 
>>> >>> I also did the filtering manually on jni.o and unsafe.o in emacs to make sure I did not mess up. >>> >>> Are we happy with this, or do you still have doubts that this might result in different code or behavior? >> Just to be clear: The current experiment changes both the header and >> the standard library right? If so, then the compare.sh run works for >> validating that using the new header file will not result in a change >> in behavior. However, that comparison doesn't do anything for testing >> a switch in the standard libraries right? > The -xnolib guards are still there in the LDFLAGS. That is, the linker will not allow anything to link against either standard library. I have manually confirmed this by doing the sanity check of comparing the NEEDED entries in the dynamic section of the libjvm.so elf file using elfdump. It has no references to either libstlport4 or libCstd with or without my changes. > > Summary: the changes do not add any link-time dependencies to either standard library, and we are still guarded in the sense that if such dependencies were to accidentally be introduced, it would not build. The only difference then would be slightly different code generation of object files. And their disassemblies have been confirmed not to differ by even a single instruction generated differently. So your current changes use the stlport4 include path for both product build and 'gtest' build. You've verified the following: - The product binaries do not change even one instruction with the new include path. - The options to keep us from linking to anything in stlport4 are still in place. - You've manually verified that there are no linkage dependencies in the resulting binaries. If I have all that right, then I think you've covered your bases. 
Dan > > Thanks, > /Erik > >> Dan >> >> >>> Thanks, >>> /Erik >>> >>>> Cheers, >>>> David >>>> >>>> >>>>> Thanks, >>>>> /Erik >>>>> >>>>>> Thanks, >>>>>> David >>>>>> ----- >>>>>> >>>>>> >>>>>>> Specifically, the man pages for CC say: >>>>>>> >>>>>>> >>>>>>> -library=lib[,lib...] >>>>>>> >>>>>>> Incorporates specified CC-provided libraries into compilation and >>>>>>> linking. >>>>>>> >>>>>>> When the -library option is used to specify a CC-provided library, >>>>>>> the proper -I paths are set during compilation and the proper -L, >>>>>>> -Y, -P, and -R paths and -l options are set during linking. >>>>>>> >>>>>>> >>>>>>> As we are setting this during compilation and not during linking, this corresponds to setting the right -I paths to find our C++ standard library headers. >>>>>>> >>>>>>> My studio friends mentioned I could double-check that we did indeed not add a dependency to any C++ standard library by running elfdump on the generated libjvm.so file and check if the NEEDED entries in the dynamic section look right. I did and here are the results: >>>>>>> >>>>>>> [0] NEEDED 0x2918ee libsocket.so.1 >>>>>>> [1] NEEDED 0x2918fd libsched.so.1 >>>>>>> [2] NEEDED 0x29190b libdl.so.1 >>>>>>> [3] NEEDED 0x291916 libm.so.1 >>>>>>> [4] NEEDED 0x291920 libCrun.so.1 >>>>>>> [5] NEEDED 0x29192d libthread.so.1 >>>>>>> [6] NEEDED 0x29193c libdoor.so.1 >>>>>>> [7] NEEDED 0x291949 libc.so.1 >>>>>>> [8] NEEDED 0x291953 libdemangle.so.1 >>>>>>> [9] NEEDED 0x291964 libnsl.so.1 >>>>>>> [10] NEEDED 0x291970 libkstat.so.1 >>>>>>> [11] NEEDED 0x29197e librt.so.1 >>>>>>> >>>>>>> This list does not include any C++ standard libraries, as expected (libCrun is always in there even with -library=%none, and as expected no libstlport4.so or libCstd.so files are in there). The NEEDED entries in the dynamic section look identical with and without my patch. >>>>>>> >>>>>>>> I'm finding the actual build situation very confusing. 
It seems to me in looking at the hotspot build files and the top-level build files that -xnolib is used for C++ compilation & linking whereas -library=%none is used for C compilation & linking. But the change is being applied to $2JVM_CFLAGS which one would think is for C compilation but we don't have $2JVM_CXXFLAGS, so it seems to be used for both! >>>>>>> I have also been confused by this when I tried adding CXX flags through configure that seemed to not be used. But that's a different can of worms I suppose. >>>>>>> >>>>>>> Thanks, >>>>>>> /Erik >>>>>>> >>>>>>>> David >>>>>>>> >>>>>>>>> On 1/06/2017 7:36 PM, Erik Österlund wrote: >>>>>>>>> Hi David, >>>>>>>>> >>>>>>>>>> On 2017-06-01 08:09, David Holmes wrote: >>>>>>>>>> Hi Kim, >>>>>>>>>> >>>>>>>>>> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>>>>>>>>>> On May 31, 2017, at 9:22 PM, David Holmes wrote: >>>>>>>>>>>> >>>>>>>>>>>> Hi Erik, >>>>>>>>>>>> >>>>>>>>>>>> A small change with big questions :) >>>>>>>>>>>> >>>>>>>>>>>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>>>>>>>>>>> Hi, >>>>>>>>>>>>> It would be desirable to be able to use harmless C++ standard library headers like <limits> in the code as long as it does not add any link-time dependencies to the standard library. >>>>>>>>>>>> What does a 'harmless' C++ standard library header look like? >>>>>>>>>>> Header-only (doesn't require linking), doesn't run afoul of our >>>>>>>>>>> [vm]assert macro, and provides functionality we presently lack (or >>>>>>>>>>> only handle poorly) and would not be easy to reproduce. >>>>>>>>>> And how does one establish those properties exist for a given header file? Just use it and if no link errors then all is good? >>>>>>>>> Objects from headers that are not ODR-used such as constant folded expressions are not imposing link-time dependencies to C++ libraries. The -xnolib that we already have in the LDFLAGS will catch any accidental ODR-uses of C++ objects, and the JVM will not build if that happens. 
>>>>>>>>> >>>>>>>>> As for external headers being included and not playing nicely with macros, this has to be evaluated on a case by case basis. Note that this is a problem that occurs when using system headers (that we are already using), as it is for using C++ standard library headers. We even run into that in our own JVM when e.g. the min/max macros occasionally slap us gently in the face from time to time. >>>>>>>>> >>>>>>>>>>> The instigator for this is that Erik and I are working on a project that >>>>>>>>>>> needs information that is present in std::numeric_limits<> (provided >>>>>>>>>>> by the <limits> header). Reproducing that functionality ourselves >>>>>>>>>>> would require platform-specific code (with all the complexity that can >>>>>>>>>>> imply). We'd really rather not re-discover and maintain information >>>>>>>>>>> that is trivially accessible in every standard library. >>>>>>>>>> Understood. I have no issue with using <limits> but am concerned by the state of stlport4. Can you use <limits> without changing -library=%none? >>>>>>>>> No, that is precisely why we are here. >>>>>>>>> >>>>>>>>>>>>> This is possible on all supported platforms except the ones using the solaris studio compiler where we enforce -library=%none in both CFLAGS and LDFLAGS. >>>>>>>>>>>>> I propose to remove the restriction from CFLAGS but keep it on LDFLAGS. >>>>>>>>>>>>> I have consulted with the studio folks, and they think this is absolutely fine and thought that the choice of -library=stlport4 should be fine for our CFLAGS and is indeed what is already used in the gtest launcher. >>>>>>>>>>>> So what exactly does this mean? IIUC this allows you to use headers for, and compile against "STLport's Standard Library implementation version 4.5.3 instead of the default libCstd". But how do you then not need to link against libstlport.so ?? >>>>>>>>>>>> >>>>>>>>>>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>>>>>>>>>> >>>>>>>>>>>> "STLport is binary incompatible with the default libCstd. 
If you use the STLport implementation of the standard library, then you must compile and link all files, including third-party libraries, with the option -library=stlport4" >>>>>>>>>>> It means we can only use header-only parts of the standard library. >>>>>>>>>>> This was confirmed / suggested by the Studio folks Erik consulted, >>>>>>>>>>> providing such limited access while continuing to constrain our >>>>>>>>>>> dependency on the library. Figuring out what can be used will need to >>>>>>>>>>> be determined on a case-by-case basis. Maybe we could just link with >>>>>>>>>>> a standard library on Solaris too. So far as I can tell, Solaris is >>>>>>>>>>> the only platform where we don't do that. But Erik is trying to be >>>>>>>>>>> conservative. >>>>>>>>>> Okay, but the docs don't seem to acknowledge the ability to use, but not link to, stlport4. >>>>>>>>> Not ODR-used objects do not require linkage. (http://en.cppreference.com/w/cpp/language/definition) >>>>>>>>> I have confirmed directly with the studio folks to be certain that accidental linkage would fail by keeping our existing guards in the LDFLAGS rather than the CFLAGS. >>>>>>>>> This is also reasonably well documented already (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). >>>>>>>>> >>>>>>>>>>>> There are lots of other comments in that document regarding STLport that makes me think that using it may be introducing a fragile dependency into the OpenJDK code! >>>>>>>>>>>> >>>>>>>>>>>> "STLport is an open source product and does not guarantee compatibility across different releases. In other words, compiling with a future version of STLport may break applications compiled with STLport 4.5.3. It also might not be possible to link binaries compiled using STLport 4.5.3 with binaries compiled using a future version of STLport." >>>>>>>>>>>> >>>>>>>>>>>> "Future releases of the compiler might not include STLport4. They might include only a later version of STLport. 
The compiler option -library=stlport4 might not be available in future releases, but could be replaced by an option referring to a later STLport version." >>>>>>>>>>>> >>>>>>>>>>>> None of that sounds very good to me. >>>>>>>>>>> I don't see how this is any different from any other part of the >>>>>>>>>>> process for using a different version of Solaris Studio. >>>>>>>>>> Well we'd discover the problem when testing the compiler change, but my point was more to the fact that they don't seem very committed to this library - very much a "use at own risk" disclaimer. >>>>>>>>> If we eventually need to use something more modern for features that have not been around for a decade, like C++11 features, then we can change standard library when that day comes. >>>>>>>>> >>>>>>>>>>> stlport4 is one of the three standard libraries that are presently >>>>>>>>>>> included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the >>>>>>>>>>> Studio folks which to use (for the purposes of our present project, we >>>>>>>>>>> don't have any particular preference, so long as it works), and >>>>>>>>>>> stlport4 seemed the right choice (libCstd was, I think, described as >>>>>>>>>>> "ancient"). Perhaps more importantly, we already use stlport4, >>>>>>>>>>> including linking against it, for gtest builds. Mixing two different >>>>>>>>>>> standard libraries seems like a bad idea... >>>>>>>>>> So we have the choice of "ancient", "unsupported" or gcc :) >>>>>>>>>> >>>>>>>>>> My confidence in this has not increased :) >>>>>>>>> I trust that e.g. std::numeric_limits::is_signed in the standard libraries has more mileage than whatever simplified rewrite of that we try to replicate in the JVM. 
So it is not obvious to me that we should have less confidence in the same functionality from a standard library shipped together with the compiler we are using and that has already been used and tested in a variety of C++ applications for over a decade compared to the alternative of reinventing it ourselves. >>>>>>>>> >>>>>>>>>> What we do in gtest doesn't necessarily make things okay to do in the product. >>>>>>>>>> >>>>>>>>>> If this were part of a compiler upgrade process we'd be comparing binaries with old flag and new to ensure there are no unexpected consequences. >>>>>>>>> I would not compare including to a compiler upgrade process as we are not changing the compiler and hence not the way code is generated, but rather compare it to including a new system header that has previously not been included to use a constant folded expression from that header that has been used and tested for a decade. At least that is how I think of it. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> /Erik >>>>>>>>> >>>>>>>>>> Cheers, >>>>>>>>>> David >>>>>>>>>> >>>>>>>>>>>> Cheers, >>>>>>>>>>>> David >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> Webrev for jdk10-hs top level repository: >>>>>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>>>>>>>>>> Webrev for jdk10-hs hotspot repository: >>>>>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>>>>>>>>>> Testing: JPRT. >>>>>>>>>>>>> Will need a sponsor. 
>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> /Erik From erik.osterlund at oracle.com Mon Jun 5 17:25:17 2017 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Mon, 5 Jun 2017 19:25:17 +0200 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> <593029CB.7000100@oracle.com> <593534BD.1090805@oracle.com> <59358492.7030803@oracle.com> <997648e7-2880-a626-f866-8892abee2a5d@oracle.com> Message-ID: <883AE90B-C118-44E1-A8AF-8B3F7FB61E63@oracle.com> Hi Dan, You got that right. :) Thanks for the review! /Erik > On 5 Jun 2017, at 19:22, Daniel D. Daugherty wrote: > >> On 6/5/17 10:59 AM, Erik Osterlund wrote: >> Hi Dan, >> >>>> On 5 Jun 2017, at 18:31, Daniel D. Daugherty wrote: >>>> >>>> On 6/5/17 10:19 AM, Erik Österlund wrote: >>>> Hi David, >>>> >>>>> On 2017-06-05 14:45, David Holmes wrote: >>>>> Hi Erik, >>>>> >>>>>> On 5/06/2017 8:38 PM, Erik Österlund wrote: >>>>>> Hi David, >>>>>> >>>>>>> On 2017-06-02 03:30, David Holmes wrote: >>>>>>> Hi Erik, >>>>>>> >>>>>>>> On 2/06/2017 12:50 AM, Erik Österlund wrote: >>>>>>>> Hi David, >>>>>>>> >>>>>>>>> On 2017-06-01 14:33, David Holmes wrote: >>>>>>>>> Hi Erik, >>>>>>>>> >>>>>>>>> Just to be clear it is not the use of <limits> that I am concerned about, it is the -library=stlport4. It is the use of that flag that I would want to check in terms of having no effect on any existing code generation. >>>>>>>> Thank you for the clarification. The use of -library=stlport4 should not have anything to do with code generation. It only says where to look for the standard library headers such as <limits> that are used in the compilation units. 
>>>>>>> The potential problem is that the stlport4 include path eg: >>>>>>> >>>>>>> ./SS12u4/lib/compilers/include/CC/stlport4/ >>>>>>> >>>>>>> doesn't only contain the C++ headers (new, limits, string etc) but also a whole bunch of regular 'standard' .h headers that are _different_ to those found outside the stlport4 directory ie the ones we would currently include. I don't know if the differences are significant, nor whether those others may be found ahead of the stlport4 version. But that is my concern about the effects on the code. >>>>>> While I do not think exchanging these headers will have any behavioral impact, I agree that we can not prove so as they are indeed different header files. That is a good point. >>>>>> >>>>>> However, I think that makes the stlport4 case stronger rather than weaker. We already use stlport4 for our gtest testing (because it is required and does not build without it). And if those headers would indeed have slightly different behaviour as you imply, it further motivates using the same standard library when compiling the product as the testing code. If they were to behave slightly differently, it might be that our gtest tests does not catch hidden bugs that only manifest when building with a different set of headers used for the product build. I therefore find it exceedingly dangerous to stay on two standard libraries (depending on if test code or product code is compiled) compared to consistently using the same standard library across all compilations. So for me, the larger the risk is of them behaving differently is, the bigger the motivation is to use stlport4 consistently. >>>>> Regardless of what gtest does if you want to switch the standard libraries used by the product then IMHO that should go through a vetting process no weaker than that for changing the toolchain, as you effectively are doing that. >>>> I talked to Erik Joelsson about how to compare two builds. 
He introduced me to our compare.sh script that is used to compare two builds. >>>> I built a baseline without these changes and a new build with these changes applied, both on a Solaris SPARC T7 machine. Then I compared them with ./compare.sh -2dirs {$BUILD1}/hotspot/variant-server/libjvm/objs {$BUILD2}/hotspot/variant-server/libjvm/objs -libs --strip >>>> >>>> This compares the object files produced when compiling hotspot in build 1 and build 2 after stripping symbols. >>>> >>>> First it reported: >>>> Libraries... >>>> Size : Symbols : Deps : Disass : >>>> :* diff *: : : ./dtrace.o >>>> :* diff *: :* 38918*: ./jni.o >>>> :* diff *: :* 23226*: ./unsafe.o >>>> >>>> It seems like all symbols were not stripped here on these mentioned files and constituted all differences in the disassembly. So I made a simple sed filter to filter out symbol names in the disassembly with the regexp <.*>. >>>> >>>> The result was: >>>> Libraries... >>>> Size : Symbols : Deps : Disass : >>>> :* diff *: : : ./dtrace.o >>>> :* diff *: : : ./jni.o >>>> :* diff *: : : ./unsafe.o >>>> >>>> This shows that not a single instruction was emitted differently between the two builds. >>>> >>>> I also did the filtering manually on jni.o and unsafe.o in emacs to make sure I did not mess up. >>>> >>>> Are we happy with this, or do you still have doubts that this might result in different code or behavior? >>> Just to be clear: The current experiment changes both the header and >>> the standard library right? If so, then the compare.sh run works for >>> validating that using the new header file will not result in a change >>> in behavior. However, that comparison doesn't do anything for testing >>> a switch in the standard libraries right? >> The -xnolib guards are still there in the LDFLAGS. That is, the linker will not allow anything to link against either standard library. 
I have manually confirmed this by doing the sanity check of comparing the NEEDED entries in the dynamic section of the libjvm.so elf file using elfdump. It has no references to neither libstlport4 nor libCstd with or without my changes. >> >> Summary: the changes do not add any linktime dependencies to either standard library, and we are still guarded in the sense that if such dependencies were to accidentally be introduced, it would not build. The only difference then would be slightly different code generation of object files. And their disassemblies have been confirmed not to differ by even a single instruction generated differently. > > So your current changes use the stlport4 include path for both product > build and 'gtest' build. You've verified the following: > > - The product binaries do not change even one instruction with the > new include path. > - The options to keep us from linking to anything in stlport4 are > still in place. > - You've manually verified that there are no linkage dependencies > in the resulting binaries. > > If I have all that right, then I think you've covered your bases. > > Dan > > > >> >> Thanks, >> /Erik >> >>> Dan >>> >>> >>>> Thanks, >>>> /Erik >>>> >>>>> Cheers, >>>>> David >>>>> >>>>> >>>>>> Thanks, >>>>>> /Erik >>>>>> >>>>>>> Thanks, >>>>>>> David >>>>>>> ----- >>>>>>> >>>>>>> >>>>>>>> Specifically, the man pages for CC say: >>>>>>>> >>>>>>>> >>>>>>>> -library=lib[,lib...] >>>>>>>> >>>>>>>> Incorporates specified CC-provided libraries into compilation and >>>>>>>> linking. >>>>>>>> >>>>>>>> When the -library option is used to specify a CC-provided library, >>>>>>>> the proper -I paths are set during compilation and the proper -L, >>>>>>>> -Y, -P, and -R paths and -l options are set during linking. >>>>>>>> >>>>>>>> >>>>>>>> As we are setting this during compilation and not during linking, this corresponds to setting the right -I paths to find our C++ standard library headers. 
>>>>>>>> >>>>>>>> My studio friends mentioned I could double-check that we did indeed not add a dependency to any C++ standard library by running elfdump on the generated libjvm.so file and check if the NEEDED entries in the dynamic section look right. I did and here are the results: >>>>>>>> >>>>>>>> [0] NEEDED 0x2918ee libsocket.so.1 >>>>>>>> [1] NEEDED 0x2918fd libsched.so.1 >>>>>>>> [2] NEEDED 0x29190b libdl.so.1 >>>>>>>> [3] NEEDED 0x291916 libm.so.1 >>>>>>>> [4] NEEDED 0x291920 libCrun.so.1 >>>>>>>> [5] NEEDED 0x29192d libthread.so.1 >>>>>>>> [6] NEEDED 0x29193c libdoor.so.1 >>>>>>>> [7] NEEDED 0x291949 libc.so.1 >>>>>>>> [8] NEEDED 0x291953 libdemangle.so.1 >>>>>>>> [9] NEEDED 0x291964 libnsl.so.1 >>>>>>>> [10] NEEDED 0x291970 libkstat.so.1 >>>>>>>> [11] NEEDED 0x29197e librt.so.1 >>>>>>>> >>>>>>>> This list does not include any C++ standard libraries, as expected (libCrun is always in there even with -library=%none, and as expected no libstlport4.so or libCstd.so files are in there). The NEEDED entries in the dynamic section look identical with and without my patch. >>>>>>>> >>>>>>>>> I'm finding the actual build situation very confusing. It seems to me in looking at the hotspot build files and the top-level build files that -xnolib is used for C++ compilation & linking whereas -library=%none is used for C compilation & linking. But the change is being applied to $2JVM_CFLAGS which one would think is for C compilation but we don't have $2JVM_CXXFLAGS, so it seems to be used for both! >>>>>>>> I have also been confused by this when I tried adding CXX flags through configure that seemed to not be used. But that's a different can of worms I suppose. 
>>>>>>>> >>>>>>>> Thanks, >>>>>>>> /Erik >>>>>>>> >>>>>>>>> David >>>>>>>>> >>>>>>>>>> On 1/06/2017 7:36 PM, Erik Österlund wrote: >>>>>>>>>> Hi David, >>>>>>>>>> >>>>>>>>>>> On 2017-06-01 08:09, David Holmes wrote: >>>>>>>>>>> Hi Kim, >>>>>>>>>>> >>>>>>>>>>> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>>>>>>>>>>> On May 31, 2017, at 9:22 PM, David Holmes wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> Hi Erik, >>>>>>>>>>>>> >>>>>>>>>>>>> A small change with big questions :) >>>>>>>>>>>>> >>>>>>>>>>>>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>> It would be desirable to be able to use harmless C++ standard library headers like <limits> in the code as long as it does not add any link-time dependencies to the standard library. >>>>>>>>>>>>> What does a 'harmless' C++ standard library header look like? >>>>>>>>>>>> Header-only (doesn't require linking), doesn't run afoul of our >>>>>>>>>>>> [vm]assert macro, and provides functionality we presently lack (or >>>>>>>>>>>> only handle poorly) and would not be easy to reproduce. >>>>>>>>>>> And how does one establish those properties exist for a given header file? Just use it and if no link errors then all is good? >>>>>>>>>> Objects from headers that are not ODR-used such as constant folded expressions are not imposing link-time dependencies to C++ libraries. The -xnolib that we already have in the LDFLAGS will catch any accidental ODR-uses of C++ objects, and the JVM will not build if that happens. >>>>>>>>>> >>>>>>>>>> As for external headers being included and not playing nicely with macros, this has to be evaluated on a case by case basis. Note that this is a problem that occurs when using system headers (that we are already using), as it is for using C++ standard library headers. We even run into that in our own JVM when e.g. the min/max macros occasionally slap us gently in the face from time to time. 
>>>>>>>>>> >>>>>>>>>>>> The instigator for this is that Erik and I are working on a project that >>>>>>>>>>>> needs information that is present in std::numeric_limits<> (provided >>>>>>>>>>>> by the <limits> header). Reproducing that functionality ourselves >>>>>>>>>>>> would require platform-specific code (with all the complexity that can >>>>>>>>>>>> imply). We'd really rather not re-discover and maintain information >>>>>>>>>>>> that is trivially accessible in every standard library. >>>>>>>>>>> Understood. I have no issue with using <limits> but am concerned by the state of stlport4. Can you use <limits> without changing -library=%none? >>>>>>>>>> No, that is precisely why we are here. >>>>>>>>>> >>>>>>>>>>>>>> This is possible on all supported platforms except the ones using the solaris studio compiler where we enforce -library=%none in both CFLAGS and LDFLAGS. >>>>>>>>>>>>>> I propose to remove the restriction from CFLAGS but keep it on LDFLAGS. >>>>>>>>>>>>>> I have consulted with the studio folks, and they think this is absolutely fine and thought that the choice of -library=stlport4 should be fine for our CFLAGS and is indeed what is already used in the gtest launcher. >>>>>>>>>>>>> So what exactly does this mean? IIUC this allows you to use headers for, and compile against "STLport's Standard Library implementation version 4.5.3 instead of the default libCstd". But how do you then not need to link against libstlport.so ?? >>>>>>>>>>>>> >>>>>>>>>>>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>>>>>>>>>>> >>>>>>>>>>>>> "STLport is binary incompatible with the default libCstd. If you use the STLport implementation of the standard library, then you must compile and link all files, including third-party libraries, with the option -library=stlport4" >>>>>>>>>>>> It means we can only use header-only parts of the standard library. 
>>>>>>>>>>>> This was confirmed / suggested by the Studio folks Erik consulted, >>>>>>>>>>>> providing such limited access while continuing to constrain our >>>>>>>>>>>> dependency on the library. Figuring out what can be used will need to >>>>>>>>>>>> be determined on a case-by-case basis. Maybe we could just link with >>>>>>>>>>>> a standard library on Solaris too. So far as I can tell, Solaris is >>>>>>>>>>>> the only platform where we don't do that. But Erik is trying to be >>>>>>>>>>>> conservative. >>>>>>>>>>> Okay, but the docs don't seem to acknowledge the ability to use, but not link to, stlport4. >>>>>>>>>> Not ODR-used objects do not require linkage. (http://en.cppreference.com/w/cpp/language/definition) >>>>>>>>>> I have confirmed directly with the studio folks to be certain that accidental linkage would fail by keeping our existing guards in the LDFLAGS rather than the CFLAGS. >>>>>>>>>> This is also reasonably well documented already (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). >>>>>>>>>> >>>>>>>>>>>>> There are lots of other comments in that document regarding STLport that makes me think that using it may be introducing a fragile dependency into the OpenJDK code! >>>>>>>>>>>>> >>>>>>>>>>>>> "STLport is an open source product and does not guarantee compatibility across different releases. In other words, compiling with a future version of STLport may break applications compiled with STLport 4.5.3. It also might not be possible to link binaries compiled using STLport 4.5.3 with binaries compiled using a future version of STLport." >>>>>>>>>>>>> >>>>>>>>>>>>> "Future releases of the compiler might not include STLport4. They might include only a later version of STLport. The compiler option -library=stlport4 might not be available in future releases, but could be replaced by an option referring to a later STLport version." >>>>>>>>>>>>> >>>>>>>>>>>>> None of that sounds very good to me. 
>>>>>>>>>>>> I don't see how this is any different from any other part of the >>>>>>>>>>>> process for using a different version of Solaris Studio. >>>>>>>>>>> Well we'd discover the problem when testing the compiler change, but my point was more to the fact that they don't seem very committed to this library - very much a "use at own risk" disclaimer. >>>>>>>>>> If we eventually need to use something more modern for features that have not been around for a decade, like C++11 features, then we can change standard library when that day comes. >>>>>>>>>> >>>>>>>>>>>> stlport4 is one of the three standard libraries that are presently >>>>>>>>>>>> included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the >>>>>>>>>>>> Studio folks which to use (for the purposes of our present project, we >>>>>>>>>>>> don't have any particular preference, so long as it works), and >>>>>>>>>>>> stlport4 seemed the right choice (libCstd was, I think, described as >>>>>>>>>>>> "ancient"). Perhaps more importantly, we already use stlport4, >>>>>>>>>>>> including linking against it, for gtest builds. Mixing two different >>>>>>>>>>>> standard libraries seems like a bad idea... >>>>>>>>>>> So we have the choice of "ancient", "unsupported" or gcc :) >>>>>>>>>>> >>>>>>>>>>> My confidence in this has not increased :) >>>>>>>>>> I trust that e.g. std::numeric_limits::is_signed in the standard libraries has more mileage than whatever simplified rewrite of that we try to replicate in the JVM. So it is not obvious to me that we should have less confidence in the same functionality from a standard library shipped together with the compiler we are using and that has already been used and tested in a variety of C++ applications for over a decade compared to the alternative of reinventing it ourselves. >>>>>>>>>> >>>>>>>>>>> What we do in gtest doesn't necessarily make things okay to do in the product. 
>>>>>>>>>>> >>>>>>>>>>> If this were part of a compiler upgrade process we'd be comparing binaries with old flag and new to ensure there are no unexpected consequences. >>>>>>>>>> I would not compare including <limits> to a compiler upgrade process as we are not changing the compiler and hence not the way code is generated, but rather compare it to including a new system header that has previously not been included to use a constant folded expression from that header that has been used and tested for a decade. At least that is how I think of it. >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> /Erik >>>>>>>>>> >>>>>>>>>>> Cheers, >>>>>>>>>>> David >>>>>>>>>>> >>>>>>>>>>>>> Cheers, >>>>>>>>>>>>> David >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> Webrev for jdk10-hs top level repository: >>>>>>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>>>>>>>>>>> Webrev for jdk10-hs hotspot repository: >>>>>>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>>>>>>>>>>> Testing: JPRT. >>>>>>>>>>>>>> Will need a sponsor. 
>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>> /Erik > From chris.plummer at oracle.com Mon Jun 5 20:30:05 2017 From: chris.plummer at oracle.com (Chris Plummer) Date: Mon, 5 Jun 2017 13:30:05 -0700 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events In-Reply-To: References: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> Message-ID: <64c515ad-8aed-befb-8a0a-1c1833032584@oracle.com> On 6/5/17 8:56 AM, Ioi Lam wrote: > Hi Chris, > > > On 6/2/17 11:54 AM, Chris Plummer wrote: >> Hello, >> >> [I'd like a compiler team member to comment on this, in addition to >> runtime or svc] >> >> Please review the following: >> >> https://bugs.openjdk.java.net/browse/JDK-8171365 >> http://cr.openjdk.java.net/~cjplummer/8171365/webrev.00/ >> >> The CR is closed, so I'll describe the issue here: >> >> The test is making sure that all JVMTI_EVENT_DYNAMIC_CODE_GENERATED >> events that occur during the agent's OnLoad phase also occur when >> later GenerateEvents() is called. GenerateEvents() is generating some >> of the events, but most are not sent. The problem is >> CodeCache::blobs_do() is only iterating over NMethod code heaps, not >> all of the code heaps, so many code blobs are missed. I changed it to >> iterate over all the code heaps, and now all the >> JVMTI_EVENT_DYNAMIC_CODE_GENERATED events are sent. >> >> Note there is another version of CodeCache::blobs_do() that takes a >> closure object instead of a function pointer. It is used by GC and I >> assume is working properly by only iterating over NMethod code heaps, >> so I did not change it. The version that takes a function pointer is >> only used by JVMTI for implementing GenerateEvents(), so this change >> should not impact any other part of the VM. However, I do wonder if >> these blobs_do() methods should be renamed to avoid confusion since >> they don't (and haven't in the past) iterated over the same set of >> code blobs. 
> Yes, I think these two functions would indeed be confusing. The second > variant also does a liveness check which is missing from the first one: > > 621 void CodeCache::blobs_do(void f(CodeBlob* nm)) { > 622 assert_locked_or_safepoint(CodeCache_lock); > 623 FOR_ALL_HEAPS(heap) { > 624 FOR_ALL_BLOBS(cb, *heap) { > 625 f(cb); > 626 } > 627 } > 628 } > > 664 void CodeCache::blobs_do(CodeBlobClosure* f) { > 665 assert_locked_or_safepoint(CodeCache_lock); > 666 FOR_ALL_NMETHOD_HEAPS(heap) { > 667 FOR_ALL_BLOBS(cb, *heap) { > 668 if (cb->is_alive()) { > 669 f->do_code_blob(cb); > 670 #ifdef ASSERT > 671 if (cb->is_nmethod()) > 672 ((nmethod*)cb)->verify_scavenge_root_oops(); > 673 #endif //ASSERT > 674 } > 675 } > 676 } > 677 } > > The two functions' APIs are equivalent, since CodeBlobClosure has a > single function, so I think it's better to stick with one API, i.e. > replace the function pointer with CodeBlobClosure > > class CodeBlobClosure : public Closure { > public: > // Called for each code blob. > virtual void do_code_blob(CodeBlob* cb) = 0; > }; > > For consistency, maybe we should change the first version to > > CodeCache::all_blobs_do(CodeBlobClosure* f) > > and the second to > > CodeCache::live_nmethod_blobs_do(CodeBlobClosure* f) > > Thanks > - Ioi Hi Ioi, Thanks for the review. I don't see a good reason why a closure should be used here. There's no state belonging to the iteration that needs to be accessed by the callback. It's just a simple functional callback, so it seems converting this to use a closure is overkill. I'd have to say the same is true of the closure version of blobs_do() also. Why use a closure for that? BTW, there is also: void CodeCache::nmethods_do(void f(nmethod* nm)) { assert_locked_or_safepoint(CodeCache_lock); NMethodIterator iter; while(iter.next()) { f(iter.method()); } } So another functional version of a blob iterator. 
It's not real clear to me all the ways this differs from the closure version of blobs_do(), other than it allows you to provide an nmethod as a starting point for the iteration. Chris > >> thanks, >> >> Chris >> >> https://docs.oracle.com/javase/8/docs/platform/jvmti/jvmti.html#DynamicCodeGenerated >> > From kim.barrett at oracle.com Mon Jun 5 20:52:03 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 5 Jun 2017 16:52:03 -0400 Subject: RFR: 8161145: The min/max macros make hotspot tests fail to build with GCC 6 In-Reply-To: <20170601211357.184958343@eggemoggin.niobe.net> References: <20160711184513.GA1485@redhat.com> <20D15A8D-1C0A-4D44-9E83-A99B38622A6C@oracle.com> <1655965499.4239256.1468347672234.JavaMail.zimbra@redhat.com> <27891E85-5736-4D44-8D79-1C44B01499EE@oracle.com> <1461764727.4609223.1468437153322.JavaMail.zimbra@redhat.com> <1471353948.2985.22.camel@redhat.com> <592EA400.9010004@oracle.com> <1496231707.3749.4.camel@redhat.com> <592EDADC.8040709@oracle.com> <592FEDC0.2090109@oracle.com> <2849B1E3-1125-4A40-B1D9-CDBB8546DA62@oracle.com> <20170601211357.184958343@eggemoggin.niobe.net> Message-ID: <8C251954-5EA5-44FC-B8D7-30A04F1E5B45@oracle.com> > On Jun 2, 2017, at 12:13 AM, mark.reinhold at oracle.com wrote: > > Erik -- I think this is worth fixing in 9, given that GCC 6 is no longer > new and the sustaining lines of 9 will be around for a while. Would you > mind pushing it to 9, from which it will automatically be forward-ported > to 10? I'd be happy to approve the fix request. > > Thanks, > - Mark Thanks for supporting the suggestion of fixing in 9. I just added the fix request. > > > 2017/6/1 9:44:06 -0700, erik.osterlund at oracle.com: >> Thank you Andrew. 
>> >> /Erik >> >>> On 1 Jun 2017, at 18:25, Andrew Hughes wrote: >>> >>>> On 1 June 2017 at 12:17, Per Liden wrote: >>>>> On 2017-06-01 12:34, Erik Österlund wrote: >>>>> >>>>> Hi Per, >>>>> >>>>>> On 2017-06-01 11:49, Per Liden wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> On 2017-06-01 10:18, Kim Barrett wrote: >>>>>>>> >>>>>>>> On May 31, 2017, at 11:01 AM, Erik Österlund >>>>>>>> wrote: >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> Excellent. In that case I would like reviews on this patch that does >>>>>>>> exactly that: >>>>>>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.00/ >>>>>> >>>>>> >>>>>> Looks good, but can we please add a comment here describing why we're >>>>>> doing this. It's not obvious :) >>>>> >>>>> >>>>> Thank you for the review. Here is a webrev with the added comment: >>>>> http://cr.openjdk.java.net/~eosterlund/8161145/webrev.01/ >>>> >>>> >>>> Looks good, thanks! >>>> >>>> /Per >>>> >>> >>> Looks good to me too, and will be great to finally see this fixed. >>> >>> It'll also need backporting to 9 now. >>> >>> Thanks, >>> -- >>> Andrew :) >>> >>> Senior Free Java Software Engineer >>> Red Hat, Inc. 
(http://www.redhat.com) >>> >>> Web Site: http://fuseyism.com >>> Twitter: https://twitter.com/gnu_andrew_java >>> PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) >>> Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From david.holmes at oracle.com Mon Jun 5 21:39:18 2017 From: david.holmes at oracle.com (David Holmes) Date: Tue, 6 Jun 2017 07:39:18 +1000 Subject: RFR (2xS): 8181318: Allow C++ library headers on Solaris Studio In-Reply-To: <883AE90B-C118-44E1-A8AF-8B3F7FB61E63@oracle.com> References: <592EC8F6.5080605@oracle.com> <0cfb3eb8-a19b-60f8-fdb1-249dc4798e7c@oracle.com> <592FE000.50003@oracle.com> <593029CB.7000100@oracle.com> <593534BD.1090805@oracle.com> <59358492.7030803@oracle.com> <997648e7-2880-a626-f866-8892abee2a5d@oracle.com> <883AE90B-C118-44E1-A8AF-8B3F7FB61E63@oracle.com> Message-ID: Hi Erik, Thanks - yes this is the level of checking that was needed. Cheers, David On 6/06/2017 3:25 AM, Erik Osterlund wrote: > Hi Dan, > > You got that right. :) > Thanks for the review! > > /Erik > >> On 5 Jun 2017, at 19:22, Daniel D. Daugherty wrote: >> >>> On 6/5/17 10:59 AM, Erik Osterlund wrote: >>> Hi Dan, >>> >>>>> On 5 Jun 2017, at 18:31, Daniel D. Daugherty wrote: >>>>> >>>>> On 6/5/17 10:19 AM, Erik Österlund wrote: >>>>> Hi David, >>>>> >>>>>> On 2017-06-05 14:45, David Holmes wrote: >>>>>> Hi Erik, >>>>>> >>>>>>> On 5/06/2017 8:38 PM, Erik Österlund wrote: >>>>>>> Hi David, >>>>>>> >>>>>>>> On 2017-06-02 03:30, David Holmes wrote: >>>>>>>> Hi Erik, >>>>>>>> >>>>>>>>> On 2/06/2017 12:50 AM, Erik Österlund wrote: >>>>>>>>> Hi David, >>>>>>>>> >>>>>>>>>> On 2017-06-01 14:33, David Holmes wrote: >>>>>>>>>> Hi Erik, >>>>>>>>>> >>>>>>>>>> Just to be clear it is not the use of <limits> that I am concerned about, it is the -library=stlport4. It is the use of that flag that I would want to check in terms of having no effect on any existing code generation. >>>>>>>>> Thank you for the clarification. 
The use of -library=stlport4 should not have anything to do with code generation. It only says where to look for the standard library headers such as <limits> that are used in the compilation units. >>>>>>>> The potential problem is that the stlport4 include path eg: >>>>>>>> >>>>>>>> ./SS12u4/lib/compilers/include/CC/stlport4/ >>>>>>>> >>>>>>>> doesn't only contain the C++ headers (new, limits, string etc) but also a whole bunch of regular 'standard' .h headers that are _different_ to those found outside the stlport4 directory ie the ones we would currently include. I don't know if the differences are significant, nor whether those others may be found ahead of the stlport4 version. But that is my concern about the effects on the code. >>>>>>> While I do not think exchanging these headers will have any behavioral impact, I agree that we can not prove so as they are indeed different header files. That is a good point. >>>>>>> >>>>>>> However, I think that makes the stlport4 case stronger rather than weaker. We already use stlport4 for our gtest testing (because it is required and does not build without it). And if those headers would indeed have slightly different behaviour as you imply, it further motivates using the same standard library when compiling the product as the testing code. If they were to behave slightly differently, it might be that our gtest tests do not catch hidden bugs that only manifest when building with a different set of headers used for the product build. I therefore find it exceedingly dangerous to stay on two standard libraries (depending on if test code or product code is compiled) compared to consistently using the same standard library across all compilations. So for me, the larger the risk of them behaving differently is, the bigger the motivation to use stlport4 consistently. 
>>>>>> Regardless of what gtest does if you want to switch the standard libraries used by the product then IMHO that should go through a vetting process no weaker than that for changing the toolchain, as you effectively are doing that. >>>>> I talked to Erik Joelsson about how to compare two builds. He introduced me to our compare.sh script that is used to compare two builds. >>>>> I built a baseline without these changes and a new build with these changes applied, both on a Solaris SPARC T7 machine. Then I compared them with ./compare.sh -2dirs {$BUILD1}/hotspot/variant-server/libjvm/objs {$BUILD2}/hotspot/variant-server/libjvm/objs -libs --strip >>>>> >>>>> This compares the object files produced when compiling hotspot in build 1 and build 2 after stripping symbols. >>>>> >>>>> First it reported: >>>>> Libraries... >>>>> Size : Symbols : Deps : Disass : >>>>> :* diff *: : : ./dtrace.o >>>>> :* diff *: :* 38918*: ./jni.o >>>>> :* diff *: :* 23226*: ./unsafe.o >>>>> >>>>> It seems like all symbols were not stripped here on these mentioned files and constituted all differences in the disassembly. So I made a simple sed filter to filter out symbol names in the disassembly with the regexp <.*>. >>>>> >>>>> The result was: >>>>> Libraries... >>>>> Size : Symbols : Deps : Disass : >>>>> :* diff *: : : ./dtrace.o >>>>> :* diff *: : : ./jni.o >>>>> :* diff *: : : ./unsafe.o >>>>> >>>>> This shows that not a single instruction was emitted differently between the two builds. >>>>> >>>>> I also did the filtering manually on jni.o and unsafe.o in emacs to make sure I did not mess up. >>>>> >>>>> Are we happy with this, or do you still have doubts that this might result in different code or behavior? >>>> Just to be clear: The current experiment changes both the header and >>>> the standard library right? If so, then the compare.sh run works for >>>> validating that using the new header file will not result in a change >>>> in behavior. 
However, that comparison doesn't do anything for testing >>>> a switch in the standard libraries right? >>> The -xnolib guards are still there in the LDFLAGS. That is, the linker will not allow anything to link against either standard library. I have manually confirmed this by doing the sanity check of comparing the NEEDED entries in the dynamic section of the libjvm.so elf file using elfdump. It has no references to neither libstlport4 nor libCstd with or without my changes. >>> >>> Summary: the changes do not add any linktime dependencies to either standard library, and we are still guarded in the sense that if such dependencies were to accidentally be introduced, it would not build. The only difference then would be slightly different code generation of object files. And their disassemblies have been confirmed not to differ by even a single instruction generated differently. >> >> So your current changes use the stlport4 include path for both product >> build and 'gtest' build. You've verified the following: >> >> - The product binaries do not change even one instruction with the >> new include path. >> - The options to keep us from linking to anything in stlport4 are >> still in place. >> - You've manually verified that there are no linkage dependencies >> in the resulting binaries. >> >> If I have all that right, then I think you've covered your bases. >> >> Dan >> >> >> >>> >>> Thanks, >>> /Erik >>> >>>> Dan >>>> >>>> >>>>> Thanks, >>>>> /Erik >>>>> >>>>>> Cheers, >>>>>> David >>>>>> >>>>>> >>>>>>> Thanks, >>>>>>> /Erik >>>>>>> >>>>>>>> Thanks, >>>>>>>> David >>>>>>>> ----- >>>>>>>> >>>>>>>> >>>>>>>>> Specifically, the man pages for CC say: >>>>>>>>> >>>>>>>>> >>>>>>>>> -library=lib[,lib...] >>>>>>>>> >>>>>>>>> Incorporates specified CC-provided libraries into compilation and >>>>>>>>> linking. 
>>>>>>>>> >>>>>>>>> When the -library option is used to specify a CC-provided library, >>>>>>>>> the proper -I paths are set during compilation and the proper -L, >>>>>>>>> -Y, -P, and -R paths and -l options are set during linking. >>>>>>>>> >>>>>>>>> >>>>>>>>> As we are setting this during compilation and not during linking, this corresponds to setting the right -I paths to find our C++ standard library headers. >>>>>>>>> >>>>>>>>> My studio friends mentioned I could double-check that we did indeed not add a dependency to any C++ standard library by running elfdump on the generated libjvm.so file and check if the NEEDED entries in the dynamic section look right. I did and here are the results: >>>>>>>>> >>>>>>>>> [0] NEEDED 0x2918ee libsocket.so.1 >>>>>>>>> [1] NEEDED 0x2918fd libsched.so.1 >>>>>>>>> [2] NEEDED 0x29190b libdl.so.1 >>>>>>>>> [3] NEEDED 0x291916 libm.so.1 >>>>>>>>> [4] NEEDED 0x291920 libCrun.so.1 >>>>>>>>> [5] NEEDED 0x29192d libthread.so.1 >>>>>>>>> [6] NEEDED 0x29193c libdoor.so.1 >>>>>>>>> [7] NEEDED 0x291949 libc.so.1 >>>>>>>>> [8] NEEDED 0x291953 libdemangle.so.1 >>>>>>>>> [9] NEEDED 0x291964 libnsl.so.1 >>>>>>>>> [10] NEEDED 0x291970 libkstat.so.1 >>>>>>>>> [11] NEEDED 0x29197e librt.so.1 >>>>>>>>> >>>>>>>>> This list does not include any C++ standard libraries, as expected (libCrun is always in there even with -library=%none, and as expected no libstlport4.so or libCstd.so files are in there). The NEEDED entries in the dynamic section look identical with and without my patch. >>>>>>>>> >>>>>>>>>> I'm finding the actual build situation very confusing. It seems to me in looking at the hotspot build files and the top-level build files that -xnolib is used for C++ compilation & linking whereas -library=%none is used for C compilation & linking. But the change is being applied to $2JVM_CFLAGS which one would think is for C compilation but we don't have $2JVM_CXXFLAGS, so it seems to be used for both! 
I have also been confused by this when I tried adding CXX flags through configure that seemed to not be used. But that's a different can of worms I suppose. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> /Erik >>>>>>>>> >>>>>>>>>> David >>>>>>>>>> >>>>>>>>>>> On 1/06/2017 7:36 PM, Erik Österlund wrote: >>>>>>>>>>> Hi David, >>>>>>>>>>> >>>>>>>>>>>> On 2017-06-01 08:09, David Holmes wrote: >>>>>>>>>>>> Hi Kim, >>>>>>>>>>>> >>>>>>>>>>>> On 1/06/2017 3:51 PM, Kim Barrett wrote: >>>>>>>>>>>>>> On May 31, 2017, at 9:22 PM, David Holmes wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> Hi Erik, >>>>>>>>>>>>>> >>>>>>>>>>>>>> A small change with big questions :) >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 31/05/2017 11:45 PM, Erik Österlund wrote: >>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>> It would be desirable to be able to use harmless C++ standard library headers like <limits> in the code as long as it does not add any link-time dependencies to the standard library. >>>>>>>>>>>>>> What does a 'harmless' C++ standard library header look like? >>>>>>>>>>>>> Header-only (doesn't require linking), doesn't run afoul of our >>>>>>>>>>>>> [vm]assert macro, and provides functionality we presently lack (or >>>>>>>>>>>>> only handle poorly) and would not be easy to reproduce. >>>>>>>>>>>> And how does one establish those properties exist for a given header file? Just use it and if no link errors then all is good? >>>>>>>>>>> Objects from headers that are not ODR-used such as constant folded expressions are not imposing link-time dependencies to C++ libraries. The -xnolib that we already have in the LDFLAGS will catch any accidental ODR-uses of C++ objects, and the JVM will not build if that happens. >>>>>>>>>>> >>>>>>>>>>> As for external headers being included and not playing nicely with macros, this has to be evaluated on a case by case basis. Note that this is a problem that occurs when using system headers (that we are already using), as it is for using C++ standard library headers. 
We even run into that in our own JVM when e.g. the min/max macros occasionally slap us gently in the face from time to time. >>>>>>>>>>> >>>>>>>>>>>>> The instigator for this is Erik and I are working on a project that >>>>>>>>>>>>> needs information that is present in std::numeric_limits<> (provided >>>>>>>>>>>>> by the <limits> header). Reproducing that functionality ourselves >>>>>>>>>>>>> would require platform-specific code (with all the complexity that can >>>>>>>>>>>>> imply). We'd really rather not re-discover and maintain information >>>>>>>>>>>>> that is trivially accessible in every standard library. >>>>>>>>>>>> Understood. I have no issue with using <limits> but am concerned by the state of stlport4. Can you use <limits> without changing -library=%none? >>>>>>>>>>> No, that is precisely why we are here. >>>>>>>>>>> >>>>>>>>>>>>>>> This is possible on all supported platforms except the ones using the solaris studio compiler where we enforce -library=%none in both CFLAGS and LDFLAGS. >>>>>>>>>>>>>>> I propose to remove the restriction from CFLAGS but keep it on LDFLAGS. >>>>>>>>>>>>>>> I have consulted with the studio folks, and they think this is absolutely fine and thought that the choice of -library=stlport4 should be fine for our CFLAGS and is indeed what is already used in the gtest launcher. >>>>>>>>>>>>>> So what exactly does this mean? IIUC this allows you to use headers for, and compile against "STLport's Standard Library implementation version 4.5.3 instead of the default libCstd". But how do you then not need to link against libstlport.so ?? >>>>>>>>>>>>>> >>>>>>>>>>>>>> https://docs.oracle.com/cd/E19205-01/819-5267/bkakg/index.html >>>>>>>>>>>>>> >>>>>>>>>>>>>> "STLport is binary incompatible with the default libCstd. If you use the STLport implementation of the standard library, then you must compile and link all files, including third-party libraries, with the option -library=stlport4" >>>>>>>>>>>>> It means we can only use header-only parts of the standard library. 
>>>>>>>>>>>>> This was confirmed / suggested by the Studio folks Erik consulted, >>>>>>>>>>>>> providing such limited access while continuing to constrain our >>>>>>>>>>>>> dependency on the library. Figuring out what can be used will need to >>>>>>>>>>>>> be determined on a case-by-case basis. Maybe we could just link with >>>>>>>>>>>>> a standard library on Solaris too. So far as I can tell, Solaris is >>>>>>>>>>>>> the only platform where we don't do that. But Erik is trying to be >>>>>>>>>>>>> conservative. >>>>>>>>>>>> Okay, but the docs don't seem to acknowledge the ability to use, but not link to, stlport4. >>>>>>>>>>> Not ODR-used objects do not require linkage. (http://en.cppreference.com/w/cpp/language/definition) >>>>>>>>>>> I have confirmed directly with the studio folks to be certain that accidental linkage would fail by keeping our existing guards in the LDFLAGS rather than the CFLAGS. >>>>>>>>>>> This is also reasonably well documented already (https://docs.oracle.com/cd/E19205-01/819-5267/bkbeq/index.html). >>>>>>>>>>> >>>>>>>>>>>>>> There are lots of other comments in that document regarding STLport that makes me think that using it may be introducing a fragile dependency into the OpenJDK code! >>>>>>>>>>>>>> >>>>>>>>>>>>>> "STLport is an open source product and does not guarantee compatibility across different releases. In other words, compiling with a future version of STLport may break applications compiled with STLport 4.5.3. It also might not be possible to link binaries compiled using STLport 4.5.3 with binaries compiled using a future version of STLport." >>>>>>>>>>>>>> >>>>>>>>>>>>>> "Future releases of the compiler might not include STLport4. They might include only a later version of STLport. The compiler option -library=stlport4 might not be available in future releases, but could be replaced by an option referring to a later STLport version." >>>>>>>>>>>>>> >>>>>>>>>>>>>> None of that sounds very good to me. 
>>>>>>>>>>>>> I don't see how this is any different from any other part of the >>>>>>>>>>>>> process for using a different version of Solaris Studio. >>>>>>>>>>>> Well we'd discover the problem when testing the compiler change, but my point was more to the fact that they don't seem very committed to this library - very much a "use at own risk" disclaimer. >>>>>>>>>>> If we eventually need to use something more modern for features that have not been around for a decade, like C++11 features, then we can change standard library when that day comes. >>>>>>>>>>> >>>>>>>>>>>>> stlport4 is one of the three standard libraries that are presently >>>>>>>>>>>>> included with Solaris Studio (libCstd, stlport4, gcc). Erik asked the >>>>>>>>>>>>> Studio folks which to use (for the purposes of our present project, we >>>>>>>>>>>>> don't have any particular preference, so long as it works), and >>>>>>>>>>>>> stlport4 seemed the right choice (libCstd was, I think, described as >>>>>>>>>>>>> "ancient"). Perhaps more importantly, we already use stlport4, >>>>>>>>>>>>> including linking against it, for gtest builds. Mixing two different >>>>>>>>>>>>> standard libraries seems like a bad idea... >>>>>>>>>>>> So we have the choice of "ancient", "unsupported" or gcc :) >>>>>>>>>>>> >>>>>>>>>>>> My confidence in this has not increased :) >>>>>>>>>>> I trust that e.g. std::numeric_limits::is_signed in the standard libraries has more mileage than whatever simplified rewrite of that we try to replicate in the JVM. So it is not obvious to me that we should have less confidence in the same functionality from a standard library shipped together with the compiler we are using and that has already been used and tested in a variety of C++ applications for over a decade compared to the alternative of reinventing it ourselves. >>>>>>>>>>> >>>>>>>>>>>> What we do in gtest doesn't necessarily make things okay to do in the product. 
>>>>>>>>>>>> >>>>>>>>>>>> If this were part of a compiler upgrade process we'd be comparing binaries with old flag and new to ensure there are no unexpected consequences. >>>>>>>>>>> I would not compare including <limits> to a compiler upgrade process as we are not changing the compiler and hence not the way code is generated, but rather compare it to including a new system header that has previously not been included to use a constant folded expression from that header that has been used and tested for a decade. At least that is how I think of it. >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> /Erik >>>>>>>>>>> >>>>>>>>>>>> Cheers, >>>>>>>>>>>> David >>>>>>>>>>>> >>>>>>>>>>>>>> Cheers, >>>>>>>>>>>>>> David >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Webrev for jdk10-hs top level repository: >>>>>>>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.00/ >>>>>>>>>>>>>>> Webrev for jdk10-hs hotspot repository: >>>>>>>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8181318/webrev.01/ >>>>>>>>>>>>>>> Testing: JPRT. >>>>>>>>>>>>>>> Will need a sponsor. >>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>> /Erik >> > From vladimir.kozlov at oracle.com Mon Jun 5 22:43:32 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 5 Jun 2017 15:43:32 -0700 Subject: RFR 8181292 Backport Rename internal Unsafe.compare methods from 10 to 9 In-Reply-To: <007e16b8-b342-53e9-be52-91f5d01e9f55@oracle.com> References: <75E82CFC-EC08-4FDA-AFE0-B7572D0AAB25@oracle.com> <007e16b8-b342-53e9-be52-91f5d01e9f55@oracle.com> Message-ID: To clarify. I agree with this renaming to be pushed into JDK 9. AOT testing failures will be fixed separately as fix for 8180785 bug which requires changes in Graal. Renaming should be pushed first before we fix Graal to simplify Graal changes (no need to condition for JDK 10 and 9). Thanks, Vladimir On 6/1/17 11:28 AM, Vladimir Kozlov wrote: > Thank you, Paul, for backporting it. 
> > On 6/1/17 8:56 AM, Paul Sandoz wrote: >> Hi, >> >> To make it easier on 166 and Graal code to support both 9 and 10 we should back port the renaming of the internal Unsafe.compareAndSwap to Unsafe.compareAndSet: >> >> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8181292-unsafe-cas-rename-jdk/webrev/ >> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8181292-unsafe-cas-rename-hotspot/webrev/ > > Hotspot changes are fine to me. > >> >> This is an explicit back port with a new bug. This is the easiest approach given the current nature of how 9 and 10 are currently kept in sync. >> >> The change sets are the same as those associated with the following issues and apply cleanly without modification: >> >> Rename internal Unsafe.compare methods >> https://bugs.openjdk.java.net/browse/JDK-8159995 >> >> [TESTBUG] Some hotspot tests broken after internal Unsafe name changes >> https://bugs.openjdk.java.net/browse/JDK-8180479 >> >> >> When running JPRT tests i observe a Graal test error on linux_x64_3.8-fastdebug-c2-hotspot_fast_compiler [*]. I dunno how this is manifesting given i cannot find any explicit reference to >> jdk.internal.Unsafe.compareAndSwap. Any idea? > > This is Graal bug I told about before - not all places in Graal are fixed with 8181292 changes (only a test was fixed): > > https://bugs.openjdk.java.net/browse/JDK-8180785 > > After you do backport we will fix Graal in JDK 9 and JDK 10. So don't worry about those failures. I will update 'Affected' and 'Fix' version later. > > Thanks, > Vladimir > >> >> Paul. 
>> >> [*] >> [2017-05-31 12:33:08,163] Agent[4]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.caller()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: >> java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) >> [2017-05-31 12:33:08,163] Agent[4]: stdout: at parsing app//compiler.calls.common.InvokeInterface.caller(InvokeInterface.java:45) >> [2017-05-31 12:33:08,213] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.callerNative()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: >> java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) >> [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.callerNative(InvokeInterface.java:82) >> [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: >> java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) >> [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.(InvokeInterface.java:31) >> [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.main([Ljava/lang/String;)V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: >> java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) >> [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.main(InvokeInterface.java:35) >> [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.callee(IJFDLjava/lang/String;)Z: >> 
org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) >> [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.callee(InvokeInterface.java:60) >> [2017-05-31 12:33:08,214] Agent[5]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.caller()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: >> java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) >> [2017-05-31 12:33:08,214] Agent[5]: stdout: at parsing app//compiler.calls.common.InvokeInterface.caller(InvokeInterface.java:45) >> [2017-05-31 12:33:08,428] Agent[3]: stdout: Error: Failed compilation: compiler.calls.common.InvokeInterface.caller()V: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: >> java.lang.InternalError: java.lang.NoSuchMethodException: jdk.internal.misc.Unsafe.compareAndSwapInt(java.lang.Object, long, int, int) >> [2017-05-31 12:33:08,429] Agent[3]: stdout: at parsing app//compiler.calls.common.InvokeInterface.caller(InvokeInterface.java:45) >> TEST: compiler/aot/calls/fromAot/AotInvokeInterface2AotTest.java >> TEST JDK: /opt/jprt/T/P1/191630.sandoz/testproduct/linux_x64_3.8-fastdebug >> From paul.sandoz at oracle.com Mon Jun 5 22:48:45 2017 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 5 Jun 2017 15:48:45 -0700 Subject: RFR 8181292 Backport Rename internal Unsafe.compare methods from 10 to 9 In-Reply-To: References: <75E82CFC-EC08-4FDA-AFE0-B7572D0AAB25@oracle.com> <007e16b8-b342-53e9-be52-91f5d01e9f55@oracle.com> Message-ID: Thanks. I accidentally deleted your first message so did not read it. Apologies for the confusion. Paul. > On 5 Jun 2017, at 15:43, Vladimir Kozlov wrote: > > To clarify. I agree with this renaming to be pushed into JDK 9. 
> > AOT testing failures will be fixed separately as the fix for bug 8180785, which requires changes in Graal. The renaming should be pushed first, before we fix Graal, to simplify the Graal changes (no need for conditional code for JDK 10 and 9). > > Thanks, > Vladimir From ioi.lam at oracle.com Tue Jun 6 01:57:13 2017 From: ioi.lam at oracle.com (Ioi Lam) Date: Mon, 5 Jun 2017 18:57:13 -0700 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events In-Reply-To: <64c515ad-8aed-befb-8a0a-1c1833032584@oracle.com> References: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> <64c515ad-8aed-befb-8a0a-1c1833032584@oracle.com> Message-ID: <1dfa33ea-b913-81fc-b1c3-54e64483b93a@oracle.com> On 6/5/17 1:30 PM, Chris Plummer wrote: > On 6/5/17 8:56 AM, Ioi Lam wrote: >> Hi Chris, >> >> >> On 6/2/17 11:54 AM, Chris Plummer wrote: >>> Hello, >>> >>> [I'd like a compiler team member to comment on this, in addition to >>> runtime or svc] >>> >>> Please review the following: >>> >>> https://bugs.openjdk.java.net/browse/JDK-8171365 >>> http://cr.openjdk.java.net/~cjplummer/8171365/webrev.00/ >>> >>> The CR is closed, so I'll describe the issue here: >>> >>> The test is making sure that all >>> JVMTI_EVENT_DYNAMIC_CODE_GENERATED events that occur during the >>> agent's OnLoad phase also occur when GenerateEvents() is later >>> called. GenerateEvents() is generating some of the events, but most >>> are not sent. The problem is that CodeCache::blobs_do() is only iterating >>> over NMethod code heaps, not all of the code heaps, so many code >>> blobs are missed. I changed it to iterate over all the code heaps, >>> and now all the JVMTI_EVENT_DYNAMIC_CODE_GENERATED events are sent. >>> >>> Note there is another version of CodeCache::blobs_do() that takes a >>> closure object instead of a function pointer. It is used by GC and I >>> assume is working properly by only iterating over NMethod code >>> heaps, so I did not change it.
The version that takes a function >>> pointer is only used by JVMTI for implementing GenerateEvents(), so >>> this change should not impact any other part of the VM. However, I >>> do wonder if these blobs_do() methods should be renamed to avoid >>> confusion since they don't (and haven't in the past) iterated over >>> the same set of code blobs. >>> >> Yes, I think these two functions would indeed be confusing. The second >> variant also does a liveness check which is missing from the first one: >> >> 621 void CodeCache::blobs_do(void f(CodeBlob* nm)) { >> 622 assert_locked_or_safepoint(CodeCache_lock); >> 623 FOR_ALL_HEAPS(heap) { >> 624 FOR_ALL_BLOBS(cb, *heap) { >> 625 f(cb); >> 626 } >> 627 } >> 628 } >> >> 664 void CodeCache::blobs_do(CodeBlobClosure* f) { >> 665 assert_locked_or_safepoint(CodeCache_lock); >> 666 FOR_ALL_NMETHOD_HEAPS(heap) { >> 667 FOR_ALL_BLOBS(cb, *heap) { >> 668 if (cb->is_alive()) { >> 669 f->do_code_blob(cb); >> 670 #ifdef ASSERT >> 671 if (cb->is_nmethod()) >> 672 ((nmethod*)cb)->verify_scavenge_root_oops(); >> 673 #endif //ASSERT >> 674 } >> 675 } >> 676 } >> 677 } >> >> The two functions' APIs are equivalent, since CodeBlobClosure has a >> single function, so I think it's better to stick with one API, i.e. >> replace the function pointer with CodeBlobClosure >> >> class CodeBlobClosure : public Closure { >> public: >> // Called for each code blob. >> virtual void do_code_blob(CodeBlob* cb) = 0; >> }; >> >> For consistency, maybe we should change the first version to >> >> CodeCache::all_blobs_do(CodeBlobClosure* f) >> >> and the second to >> >> CodeCache::live_nmethod_blobs_do(CodeBlobClosure* f) >> >> Thanks >> - Ioi > Hi Ioi, > > Thanks for the review. > > I don't see a good reason why a closure should be used here. There's > no state belonging to the iteration that needs to be accessed by the > callback. It's just a simple functional callback, so it seems > converting this to use a closure is overkill.
I'd have to say the same > is true of the closure version of blobs_do also. Why use a closure for > that? > Some of the closures actually have instance fields. E.g., class G1CodeBlobClosure : public CodeBlobClosure { class HeapRegionGatheringOopClosure : public OopClosure { ... }; HeapRegionGatheringOopClosure _oc; ... }; I think in general using a closure is better than a function pointer. It makes evolution of the code easier when in the future you need to handle more complicated situations that would require state. Not sure what the HotSpot convention is, but my preference would be to use closures unless there's a strict performance requirement for using a function pointer. Thanks - Ioi > BTW, there's also: > > void CodeCache::nmethods_do(void f(nmethod* nm)) { > assert_locked_or_safepoint(CodeCache_lock); > NMethodIterator iter; > while(iter.next()) { > f(iter.method()); > } > } > > So another functional version of a blob iterator. It's not really clear > to me all the ways this differs from the closure version of > blobs_do(), other than it allows you to provide an nmethod as a > starting point for the iteration.
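The state argument for closures can be made concrete with a small self-contained sketch. The names below (Blob, BlobClosure, Cache) are toy stand-ins for CodeBlob, CodeBlobClosure and CodeCache, not the real HotSpot declarations; the point is only that the closure variant can accumulate per-iteration state in member fields, while the function-pointer variant has to smuggle it through a global:

```cpp
#include <cstddef>
#include <vector>

// Toy stand-in for CodeBlob; the single field is assumed for the demo.
struct Blob {
  bool alive;
  Blob() : alive(true) {}
};

// Toy stand-in for CodeBlobClosure: the callback is an object,
// so it can carry state across iterations.
class BlobClosure {
public:
  virtual void do_blob(Blob* b) = 0;
  virtual ~BlobClosure() {}
};

// Toy stand-in for CodeCache, offering both iteration styles.
class Cache {
  std::vector<Blob> _blobs;
public:
  explicit Cache(std::size_t n) : _blobs(n) {}
  // Function-pointer style, like CodeCache::blobs_do(void f(CodeBlob*)).
  void blobs_do(void f(Blob*)) {
    for (std::size_t i = 0; i < _blobs.size(); i++) f(&_blobs[i]);
  }
  // Closure style, like CodeCache::blobs_do(CodeBlobClosure*).
  void blobs_do(BlobClosure* cl) {
    for (std::size_t i = 0; i < _blobs.size(); i++) cl->do_blob(&_blobs[i]);
  }
};

// The closure keeps its count in a member field...
class CountingClosure : public BlobClosure {
public:
  int count;
  CountingClosure() : count(0) {}
  virtual void do_blob(Blob* b) { if (b->alive) count++; }
};

int count_with_closure(std::size_t n) {
  Cache c(n);
  CountingClosure cl;
  c.blobs_do(&cl);
  return cl.count;
}

// ...while the function-pointer style needs a global (or TLS) to
// accumulate anything, which is what makes closures easier to evolve.
static int g_count = 0;
static void count_blob(Blob* b) { if (b->alive) g_count++; }

int count_with_fn_ptr(std::size_t n) {
  Cache c(n);
  g_count = 0;
  c.blobs_do(count_blob);
  return g_count;
}
```

Either style works for the single, stateless JVMTI call site discussed here; the closure only pays off once a caller needs state like the count above.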
> > Chris >> >>> thanks, >>> >>> Chris >>> >>> https://docs.oracle.com/javase/8/docs/platform/jvmti/jvmti.html#DynamicCodeGenerated >>> >> > From chris.plummer at oracle.com Tue Jun 6 02:48:33 2017 From: chris.plummer at oracle.com (Chris Plummer) Date: Mon, 5 Jun 2017 19:48:33 -0700 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events In-Reply-To: <1dfa33ea-b913-81fc-b1c3-54e64483b93a@oracle.com> References: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> <64c515ad-8aed-befb-8a0a-1c1833032584@oracle.com> <1dfa33ea-b913-81fc-b1c3-54e64483b93a@oracle.com> Message-ID: <311a9c18-7bb8-9b97-dcda-74def73b278a@oracle.com> On 6/5/17 6:57 PM, Ioi Lam wrote: > > > On 6/5/17 1:30 PM, Chris Plummer wrote: >> On 6/5/17 8:56 AM, Ioi Lam wrote: >>> Hi Chris, >>> >>> >>> On 6/2/17 11:54 AM, Chris Plummer wrote: >>>> Hello, >>>> >>>> [I'd like a compiler team member to comment on this, in addition to >>>> runtime or svc] >>>> >>>> Please review the following: >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8171365 >>>> http://cr.openjdk.java.net/~cjplummer/8171365/webrev.00/ >>>> >>>> The CR is closed, so I'll describe the issue here: >>>> >>>> The test is making sure that all >>>> |JVMTI_EVENT_DYNAMIC_CODE_GENERATED| events that occur during the >>>> agent's OnLoad phase also occur when later GenerateEvents() is >>>> called. GenerateEvents() is generating some of the events, but most >>>> are not sent. The problem is CodeCache::blobs_do() is only >>>> iterating over NMethod code heaps, not all of the code heaps, so >>>> many code blobs are missed. I changed it to iterate over all the >>>> code heaps, and now all the >>>> |JVMTI_EVENT_DYNAMIC_CODE_GENERATED|events are sent. >>>> >>>> Note there is another version of CodeCache::blobs_do() that takes a >>>> closure object instead of a function pointer. 
It is used by GC and >>>> I assume is working properly by only iterating over NMethod code >>>> heaps, so I did not change it. The version that takes a function >>>> pointer is only used by JVMTI for implementing GenerateEvents(), so >>>> this change should not impact any other part of the VM. However, I >>>> do wonder if these blobs_do() methods should be renamed to avoid >>>> confusion since they don't (and haven't in the past) iterated over >>>> the same set of code blobs. >>>> >>> Yes, I think these two function would indeed be confusing. The >>> second variant also does a liveness check which is missing from the >>> first one: >>> >>> 621 void CodeCache::blobs_do(void f(CodeBlob* nm)) { >>> 622 assert_locked_or_safepoint(CodeCache_lock); >>> 623 FOR_ALL_HEAPS(heap) { >>> 624 FOR_ALL_BLOBS(cb, *heap) { >>> 625 f(cb); >>> 626 } >>> 627 } >>> 628 } >>> >>> 664 void CodeCache::blobs_do(CodeBlobClosure* f) { >>> 665 assert_locked_or_safepoint(CodeCache_lock); >>> 666 FOR_ALL_NMETHOD_HEAPS(heap) { >>> 667 FOR_ALL_BLOBS(cb, *heap) { >>> 668 if (cb->is_alive()) { >>> 669 f->do_code_blob(cb); >>> 670 #ifdef ASSERT >>> 671 if (cb->is_nmethod()) >>> 672 ((nmethod*)cb)->verify_scavenge_root_oops(); >>> 673 #endif //ASSERT >>> 674 } >>> 675 } >>> 676 } >>> 677 } >>> >>> The two function's APIs are equivalent, since CodeBlobClosure has a >>> single function, so I think it's better to stick with one API, i.e. >>> replace the function pointer with CodeBlobClosure >>> >>> class CodeBlobClosure : public Closure { >>> public: >>> // Called for each code blob. >>> virtual void do_code_blob(CodeBlob* cb) = 0; >>> }; >>> >>> For consistency, maybe we should change the first version to >>> >>> CodeCache::all_blobs_do(CodeBlobClosure* f) >>> >>> and the second to >>> >>> CodeCache::live_nmethod_blobs_do(CodeBlobClosure* f) >>> >>> Thanks >>> - Ioi >> Hi Ioi, >> >> Thanks for the review. >> >> I don't see a good reason why a closure should be used here. 
There's >> no state belonging to the iteration that needs to be accessed by the >> callback. It's just a simple functional callback, so it seems >> converting this to use a closure is overkill. I'd have to say the >> same is true of the closure version of do_blobs also. Why use a >> closure for that? >> > Some of the closures actually have instance fields. E.g., > > class G1CodeBlobClosure : public CodeBlobClosure { > class HeapRegionGatheringOopClosure : public OopClosure { > ... > }; > > HeapRegionGatheringOopClosure _oc; > ... > }; Ok, I missed that. So at least some of the gc usage requires a closure. Not sure that's enough to make JVMTI switch over to it. > > I think in general using a closure is better than a function pointer. > It makes evolution of the code easier when in the future you need to > handle more complicated situations that would require states. Not sure > what the HotSpot convention is, but my preference would be to use > closures unless there's a strict performance requirement for using a > function pointer. I guess I have the opposite thinking here. Unnecessary abstraction drives me batty, and I like to keep it simple unless there's clear future need for something more flexible. We have one call site in JVMTI for this method, and if a new user ever did require a closure, JVMTI could be changed at that time. I'll wait for the compiler team to chime in. I've already asked them to have a look (this issue notwithstanding) because it is their code. thanks, Chris > > > Thanks > - Ioi > > >> BTW, there also: >> >> void CodeCache::nmethods_do(void f(nmethod* nm)) { >> assert_locked_or_safepoint(CodeCache_lock); >> NMethodIterator iter; >> while(iter.next()) { >> f(iter.method()); >> } >> } >> >> So another functional version of a blob iterator. It's not real clear >> to me all the ways this differs from the closure version of >> blobs_do(), other than it allows you to provide an nmethod as a >> starting point for the iteration.
>> >> Chris >>> >>>> thanks, >>>> >>>> Chris >>>> >>>> https://docs.oracle.com/javase/8/docs/platform/jvmti/jvmti.html#DynamicCodeGenerated >>>> >>> >> > From tobias.hartmann at oracle.com Tue Jun 6 06:19:26 2017 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 6 Jun 2017 08:19:26 +0200 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events In-Reply-To: <311a9c18-7bb8-9b97-dcda-74def73b278a@oracle.com> References: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> <64c515ad-8aed-befb-8a0a-1c1833032584@oracle.com> <1dfa33ea-b913-81fc-b1c3-54e64483b93a@oracle.com> <311a9c18-7bb8-9b97-dcda-74def73b278a@oracle.com> Message-ID: <7215f9c5-fb06-5026-837e-7567d88a7122@oracle.com> Hi Chris, On 06.06.2017 04:48, Chris Plummer wrote: > I'll wait for the compiler team to chime in. I've already asked them to have a look (this issue not withstanding) because it is their code. Your change looks good to me and I'm fine with the simple fix without a closure. This is actually a regression introduced by the AOT integration (JDK-8171008): http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/777aaa19c4b1#l116.123 Before, CodeCache::blobs_do() used to iterate over all code heaps. Best regards, Tobias From thomas.stuefe at gmail.com Tue Jun 6 09:40:02 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 6 Jun 2017 11:40:02 +0200 Subject: stringStream in UL and nested ResourceMarks Message-ID: Hi all, In our VM we recently hit something similar to https://bugs.openjdk.java.net/browse/JDK-8167995 or https://bugs.openjdk.java.net/browse/JDK-8149557: A stringStream* was handed down to nested print functions which create their own ResourceMarks and, while being down the stack under the scope of that new ResourceMark, the stringStream needed to enlarge its internal buffer. 
This is the situation the assert inside stringStream::write() attempts to catch (assert(Thread::current()->current_resource_mark() == rm)); in our case this was a release build, so we just crashed. The solution for both JDK-8167995 and JDK-8149557 seemed to be to just remove the offending ResourceMarks, or shuffle them around, but generally this is not an optimal solution, is it? We actually question whether using resource area memory is a good idea for outputStream child objects at all: outputStream instances typically travel down the stack a lot by getting handed to sub-print-functions, so they run the danger of crossing resource mark boundaries like above. The sub functions are usually oblivious to the type of outputStream* handed down, and as well they should be. And if they allocate resource area memory themselves, it makes sense to guard them with a ResourceMark in case they are called in a loop. The assert inside stringStream::write() is not a real help either, because whether or not it hits depends on pure luck - whether the realloc code path is hit at just the right moment while printing. Which depends on the buffer size and the print history, which is variable, especially with logging. The only advantage over bufferedStream (C-Heap) is a small performance improvement when allocating. The question is whether this is really worth the risk of using resource area memory in this fashion. Especially in the context of UL, where we are about to do expensive IO operations (writing to the log file) or may lock (os::flockfile). Also, the difference between bufferedStream and stringStream might be reduced by improving bufferedStream (e.g. by using a member char array for small allocations and delaying malloc() for larger arrays.) What do you think? Should we get rid of stringStream and only use a (possibly improved) bufferedStream? I also imagine this could make UL coding a bit simpler.
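To make the suggested bufferedStream improvement concrete, here is a minimal sketch. This is not the HotSpot class; the name, the 64-byte threshold and the doubling growth policy are all made up for illustration, and error handling for a failed malloc() is elided. The point is only that short output never touches malloc(), while large output grows into C-heap and is freed by the destructor:

```cpp
#include <cstdlib>
#include <cstring>
#include <string>

// Illustrative only: a stream-like buffer with a small inline array,
// falling back to C-heap (malloc) once the output outgrows it.
class SmallBufferStream {
  char        _small[64];  // inline storage; short output stays here
  char*       _buf;
  std::size_t _cap;
  std::size_t _len;
public:
  SmallBufferStream() : _buf(_small), _cap(sizeof(_small)), _len(0) {
    _small[0] = '\0';
  }
  ~SmallBufferStream() {
    if (_buf != _small) std::free(_buf);  // only heap storage is freed
  }
  void write(const char* s, std::size_t n) {
    if (_len + n + 1 > _cap) {  // grow: copy into a larger malloc'd block
      std::size_t new_cap = _cap * 2;
      if (new_cap < _len + n + 1) new_cap = _len + n + 1;
      char* p = static_cast<char*>(std::malloc(new_cap));
      std::memcpy(p, _buf, _len);
      if (_buf != _small) std::free(_buf);
      _buf = p;
      _cap = new_cap;
    }
    std::memcpy(_buf + _len, s, n);
    _len += n;
    _buf[_len] = '\0';
  }
  const char* base() const { return _buf; }
  std::size_t size() const { return _len; }
  bool spilled_to_heap() const { return _buf != _small; }
};

// Helpers exercising the two paths: n characters of output.
std::string written(std::size_t n) {
  SmallBufferStream st;
  for (std::size_t i = 0; i < n; i++) st.write("x", 1);
  return std::string(st.base(), st.size());
}

bool spills(std::size_t n) {
  SmallBufferStream st;
  for (std::size_t i = 0; i < n; i++) st.write("x", 1);
  return st.spilled_to_heap();
}
```

Crucially, nothing here depends on a ResourceMark, so handing such a stream down through print functions that set their own marks would be safe.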
Thank you, Kind Regards, Thomas From glaubitz at physik.fu-berlin.de Wed Jun 7 00:10:38 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 7 Jun 2017 02:10:38 +0200 Subject: Cannot link against memset_with_concurrent_readers_sparc.cpp Message-ID: <1b9a20ca-9481-dfc4-125a-6d779c380f3e@physik.fu-berlin.de> Hi! I'm still working on fixing OpenJDK-9 on Linux/sparc64 and I'm currently running into what should be a trivial Makefile issue: a linker problem (see the paste below; full log in [1]). Now, the problem obviously happens on SPARC only because it has its own custom implementation of the memset_with_concurrent_readers() function in ./src/cpu/sparc/vm/memset_with_concurrent_readers_sparc.cpp, the other platforms use ./src/share/vm/gc/shared/memset_with_concurrent_readers.hpp. From the full build log, it's obvious that memset_with_concurrent_readers_sparc.cpp is being compiled earlier, but it's apparently missing on the linker command line later. I have been trying to understand the hand-written Makefiles but I can't seem to find the place I need to patch. Does anyone who is more familiar with the build system have an idea where to look?
Thanks, Adrian > [1] http://people.debian.org/~glaubitz/openjdk-9_9~b170-2_sparc64.build.gz === Output from failing command(s) repeated here === /usr/bin/printf "* For target hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link:\n" * For target hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link: (/bin/grep -v -e "^Note: including file:" < /<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link.log || true) | /usr/bin/head -n 12 /<>/build-zero/hotspot/variant-zero/libjvm/gtest/objs/test_memset_with_concurrent_readers.o: In function `gc_memset_with_concurrent_readers_test_Test::TestBody()': ./src/hotspot/make/./src/hotspot/test/native/gc/shared/test_memset_with_concurrent_readers.cpp:66: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetSharedArray::fill_range(unsigned long, unsigned long, unsigned char)': ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o:./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: more undefined references to `memset_with_concurrent_readers(void*, int, unsigned long)' follow collect2: error: ld returned 1 exit status if test `/usr/bin/wc -l < 
/<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link:\n" * For target hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link: (/bin/grep -v -e "^Note: including file:" < /<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link.log || true) | /usr/bin/head -n 12 /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetSharedArray::fill_range(unsigned long, unsigned long, unsigned char)': ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o:./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: more undefined references to `memset_with_concurrent_readers(void*, int, unsigned long)' follow collect2: error: ld returned 1 exit status if test `/usr/bin/wc -l < /<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link.log` -gt 12; then /bin/echo " ... 
(rest of output omitted)" ; fi /usr/bin/printf "\n* All command lines available in /<>/build-zero/make-support/failure-logs.\n" * All command lines available in /<>/build-zero/make-support/failure-logs. /usr/bin/printf "=== End of repeated output ===\n" === End of repeated output === if /bin/grep -q "recipe for target .* failed" /<>/build-zero/build.log 2> /dev/null; then /usr/bin/printf "\n=== Make failed targets repeated here ===\n" ; /bin/grep "recipe for target .* failed" /<>/build-zero/build.log ; /usr/bin/printf "=== End of repeated output ===\n" ; /usr/bin/printf "\nHint: Try searching the build log for the name of the first failed target.\n" ; else /usr/bin/printf "\nNo indication of failed target found.\n" ; /usr/bin/printf "Hint: Try searching the build log for '] Error'.\n" ; fi === Make failed targets repeated here === lib/CompileJvm.gmk:212: recipe for target '/<>/build-zero/support/modules_libs/java.base/server/libjvm.so' failed lib/CompileGtest.gmk:64: recipe for target '/<>/build-zero/hotspot/variant-zero/libjvm/gtest/libjvm.so' failed make/Main.gmk:263: recipe for target 'hotspot-zero-libs' failed === End of repeated output === -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From stefan.karlsson at oracle.com Wed Jun 7 07:20:12 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 7 Jun 2017 09:20:12 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: References: Message-ID: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> Hi Thomas, On 2017-06-06 11:40, Thomas Stüfe wrote: > Hi all, > > In our VM we recently hit something similar to > https://bugs.openjdk.java.net/browse/JDK-8167995 or > https://bugs.openjdk.java.net/browse/JDK-8149557: > > A stringStream* was handed down to nested print functions which create > their own ResourceMarks and, while being down the stack under the scope of > that new ResourceMark, the stringStream needed to enlarge its internal > buffer. This is the situation the assert inside stringStream::write() > attempts to catch (assert(Thread::current()->current_resource_mark() == > rm); in our case this was a release build, so we just crashed. > > The solution for both JDK-8167995 and JDK-8149557 seemed to be to just > remove the offending ResourceMarks, or shuffle them around, but generally > this is not an optimal solution, or? > > We actually question whether using resource area memory is a good idea for > outputStream child objects at all: > > outputStream instances typically travel down the stack a lot by getting > handed sub-print-functions, so they run danger of crossing resource mark > boundaries like above. The sub functions are usually oblivious to the type > of outputStream* handed down, and as well they should be. And if the > allocate resource area memory themselves, it makes sense to guard them with > ResourceMark in case they are called in a loop.
> > The assert inside stringStream::write() is not a real help either, because > whether or not it hits depends on pure luck - whether the realloc code path > is hit just in the right moment while printing. Which depends on the buffer > size and the print history, which is variable, especially with logging. > > The only advantage to using bufferedStream (C-Heap) is a small performance > improvement when allocating. The question is whether this is really worth > the risk of using resource area memory in this fashion. Especially in the > context of UL where we are about to do expensive IO operations (writing to > log file) or may lock (os::flockfile). > > Also, the difference between bufferedStream and stringStream might be > reduced by improving bufferedStream (e.g. by using a member char array for > small allocations and delay using malloc() for larger arrays.) > > What you think? Should we get rid of stringStream and only use an (possibly > improved) bufferedStream? I also imagine this could make UL coding a bit > simpler. Not answering your questions, but I want to point out that we already have a UL stream that uses C-Heap: logging/logStream.hpp: // The backing buffer is allocated in CHeap memory. 
typedef LogStreamBase<bufferedStream> LogStreamCHeap; StefanK > > Thank you, > > Kind Regards, Thomas > From tobias.hartmann at oracle.com Wed Jun 7 07:49:16 2017 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 7 Jun 2017 09:49:16 +0200 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events In-Reply-To: References: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> <64c515ad-8aed-befb-8a0a-1c1833032584@oracle.com> <1dfa33ea-b913-81fc-b1c3-54e64483b93a@oracle.com> <311a9c18-7bb8-9b97-dcda-74def73b278a@oracle.com> <7215f9c5-fb06-5026-837e-7567d88a7122@oracle.com> <6015eee1-fe2d-a36d-b574-1d60ebccf270@oracle.com> <643c1117-6739-c284-1236-4ccf87336dd0@oracle.com> Message-ID: <170a33a7-ce04-cc94-e08d-be2fbcb422a4@oracle.com> Hi, On 07.06.2017 00:35, serguei.spitsyn at oracle.com wrote: > I agree. > Just wanted to highlight the regressed patch might have more issues. > So, the Compiler team needs to double-check it. I quickly checked all cases and they look okay to me. Will double check with the team though. Best regards, Tobias From thomas.stuefe at gmail.com Wed Jun 7 09:15:59 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 7 Jun 2017 11:15:59 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> Message-ID: Hi Stefan, I saw this, but I also see LogStreamNoResourceMark being used as a default for the (trace|debug|info|warning|error)_stream() methods of Log. In this form it is used quite a lot. Looking further, I see that one cannot just exchange LogStreamNoResourceMark with LogStreamCHeap, because there are hidden usage conventions I was not aware of: LogStreamNoResourceMark is allocated with new() in create_log_stream(). LogStreamNoResourceMark is an outputStream, which is a ResourceObj.
In its current form ResourceObj cannot be deleted, so destructors for ResourceObj children cannot be called. So, we could not use malloc in the stringStream - or exchange stringStream for bufferedStream - because we would need a non-empty destructor to free the malloc'd memory, and that destructor cannot exist. Looking further, I see that this imposes subtle usage restrictions for UL: LogStreamNoResourceMark objects are used via "log.debug_stream()" or similar. For example: codecache_print(log.debug_stream(), /* detailed= */ false); debug_stream() will allocate a LogStreamNoResourceMark object which lives in the resource area. This is a bit surprising, because "debug_stream()" feels like it returns a singleton or a member variable of log. If one wants to use LogStreamCHeap instead, it must not be created with new() - which would be a subtle memory leak because the destructor would never be called - but instead on the stack as an automatic variable: LogStreamCHeap log_stream(log); log_stream.print("hallo"); I may be understanding this wrong, but if this is true, this is quite a difficult API. I have two classes which look like siblings, but LogStreamCHeap can only be allocated on the local stack - otherwise I'll get a memory leak - while LogStreamNoResourceMark gets created in the resource area, which prevents its destructor from running and may fill the resource area up with temporary stream objects if used in a certain way. Have I understood this right so far? If yes, would it be possible to simplify this?
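The ownership difference described above can be modeled in isolation. The class below only mimics the relevant behavior of a C-heap-backed stream; it is not the real LogStreamCHeap, the buffer size and counter are invented for the demo, and the deliberate leak is of course only there to show why the destructor - and therefore stack allocation - matters:

```cpp
#include <cstdlib>

// Toy model of a stream whose backing buffer lives in C-heap.
// g_live_buffers counts buffers that have not been freed yet.
static int g_live_buffers = 0;

class CHeapBackedStream {
  char* _buf;
public:
  CHeapBackedStream() : _buf(static_cast<char*>(std::malloc(128))) {
    g_live_buffers++;
    _buf[0] = '\0';
  }
  ~CHeapBackedStream() {  // must run, or the buffer leaks
    std::free(_buf);
    g_live_buffers--;
  }
  void print(const char* s) { (void)s; /* appending elided */ }
};

// Automatic (stack) variable: the destructor runs at end of scope,
// so nothing leaks.
int leak_count_after_stack_use() {
  g_live_buffers = 0;
  {
    CHeapBackedStream stream;
    stream.print("hallo");
  }  // destructor runs here
  return g_live_buffers;
}

// new() without delete: the destructor never runs - the same situation
// as a ResourceObj whose destructor cannot be invoked.
int leak_count_after_new_without_delete() {
  g_live_buffers = 0;
  CHeapBackedStream* stream = new CHeapBackedStream();
  stream->print("hallo");
  return g_live_buffers;  // the buffer is still live: leaked
}
```

This is why a CHeap-backed stream works as a local variable but silently leaks when handed out from a factory like create_log_stream() that news it and never deletes it.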
Kind Regards, Thomas On Wed, Jun 7, 2017 at 9:20 AM, Stefan Karlsson wrote: > Hi Thomas, > > On 2017-06-06 11:40, Thomas Stüfe wrote: > >> Hi all, >> >> In our VM we recently hit something similar to >> https://bugs.openjdk.java.net/browse/JDK-8167995 or >> https://bugs.openjdk.java.net/browse/JDK-8149557: >> >> A stringStream* was handed down to nested print functions which create >> their own ResourceMarks and, while being down the stack under the scope of >> that new ResourceMark, the stringStream needed to enlarge its internal >> buffer. This is the situation the assert inside stringStream::write() >> attempts to catch (assert(Thread::current()->current_resource_mark() == >> rm); in our case this was a release build, so we just crashed. >> >> The solution for both JDK-8167995 and JDK-8149557 seemed to be to just >> remove the offending ResourceMarks, or shuffle them around, but generally >> this is not an optimal solution, or? >> >> We actually question whether using resource area memory is a good idea for >> outputStream child objects at all: >> >> outputStream instances typically travel down the stack a lot by getting >> handed sub-print-functions, so they run danger of crossing resource mark >> boundaries like above. The sub functions are usually oblivious to the type >> of outputStream* handed down, and as well they should be. And if the >> allocate resource area memory themselves, it makes sense to guard them >> with >> ResourceMark in case they are called in a loop. >> >> The assert inside stringStream::write() is not a real help either, because >> whether or not it hits depends on pure luck - whether the realloc code >> path >> is hit just in the right moment while printing. Which depends on the >> buffer >> size and the print history, which is variable, especially with logging. >> >> The only advantage to using bufferedStream (C-Heap) is a small performance >> improvement when allocating.
The question is whether this is really worth >> the risk of using resource area memory in this fashion. Especially in the >> context of UL where we are about to do expensive IO operations (writing to >> log file) or may lock (os::flockfile). >> >> Also, the difference between bufferedStream and stringStream might be >> reduced by improving bufferedStream (e.g. by using a member char array for >> small allocations and delay using malloc() for larger arrays.) >> >> What you think? Should we get rid of stringStream and only use an >> (possibly >> improved) bufferedStream? I also imagine this could make UL coding a bit >> simpler. >> > > Not answering your questions, but I want to point out that we already have > a UL stream that uses C-Heap: > > logging/logStream.hpp: > > // The backing buffer is allocated in CHeap memory. > typedef LogStreamBase LogStreamCHeap; > > StefanK > > > >> Thank you, >> >> Kind Regards, Thomas >> >> From sgehwolf at redhat.com Wed Jun 7 09:29:39 2017 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Wed, 07 Jun 2017 11:29:39 +0200 Subject: Cannot link against memset_with_concurrent_readers_sparc.cpp In-Reply-To: <1b9a20ca-9481-dfc4-125a-6d779c380f3e@physik.fu-berlin.de> References: <1b9a20ca-9481-dfc4-125a-6d779c380f3e@physik.fu-berlin.de> Message-ID: <1496827779.3701.5.camel@redhat.com> Hi, On Wed, 2017-06-07 at 02:10 +0200, John Paul Adrian Glaubitz wrote: > Hi! > > I'm still working on fixing OpenJDK-9 on Linux/sparc64 and I'm currently > running into something which should be a trivial Makefile issue which > is a linker problem (see for the paste below, full log in [1]). > > Now, the problem obviously happens on SPARC only because it has its own > custom implementation of the memset_with_concurrent_readers() function > in ./src/cpu/sparc/vm/memset_with_concurrent_readers_sparc.cpp, the > other platforms use ./src/share/vm/gc/shared/memset_with_concurrent_readers.hpp. 
> > From the full build log, it's obvious that memset_with_concurrent_readers_sparc.cpp > is being compiled earlier, but it's apparently missing on the linker command > line later. > > I have been trying to understand the hand-written Makefiles but I can't > seem to find the place which I need to patch. > > Does anyone who is more familiar with the build system have an idea > where to look? This is likely a question for build-dev (CC). They might have some pointers. Thanks, Severin > Thanks, > Adrian > > > [1] http://people.debian.org/~glaubitz/openjdk-9_9~b170-2_sparc64.build.gz > > === Output from failing command(s) repeated here === > /usr/bin/printf "* For target hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link:\n" > * For target hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link: > (/bin/grep -v -e "^Note: including file:" < > /<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link.log || true) | /usr/bin/head -n 12 > /<>/build-zero/hotspot/variant-zero/libjvm/gtest/objs/test_memset_with_concurrent_readers.o: In function > `gc_memset_with_concurrent_readers_test_Test::TestBody()': > ./src/hotspot/make/./src/hotspot/test/native/gc/shared/test_memset_with_concurrent_readers.cpp:66: undefined reference to `memset_with_concurrent_readers(void*, > int, unsigned long)' > /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetSharedArray::fill_range(unsigned long, unsigned long, > unsigned char)': > ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' > ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' > ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to 
`memset_with_concurrent_readers(void*, int, unsigned long)' > ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' > /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o:./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: > more undefined references to `memset_with_concurrent_readers(void*, int, unsigned long)' follow > collect2: error: ld returned 1 exit status > if test `/usr/bin/wc -l < /<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link.log` -gt 12; then > /bin/echo "???... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link:\n" > * For target hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link: > (/bin/grep -v -e "^Note: including file:" >/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link.log || > true) | /usr/bin/head -n 12 > /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetSharedArray::fill_range(unsigned long, unsigned long, > unsigned char)': > ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' > ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' > ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' > ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' > ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to 
`memset_with_concurrent_readers(void*, int, unsigned long)' > /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o:./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: > more undefined references to `memset_with_concurrent_readers(void*, int, unsigned long)' follow > collect2: error: ld returned 1 exit status > if test `/usr/bin/wc -l < /<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link.log` -gt 12; then /bin/echo " > ? ... (rest of output omitted)" ; fi > /usr/bin/printf "\n* All command lines available in /<>/build-zero/make-support/failure-logs.\n" > > * All command lines available in /<>/build-zero/make-support/failure-logs. > /usr/bin/printf "=== End of repeated output ===\n" > === End of repeated output === > if /bin/grep -q "recipe for target .* failed" /<>/build-zero/build.log 2> /dev/null; then /usr/bin/printf "\n=== Make failed targets repeated here > ===\n" ; /bin/grep "recipe for target .* failed" /<>/build-zero/build.log ; /usr/bin/printf "=== End of repeated output ===\n" ; /usr/bin/printf > "\nHint: Try searching the build log for the name of the first failed target.\n" ; else /usr/bin/printf "\nNo indication of failed target found.\n" ; > /usr/bin/printf "Hint: Try searching the build log for '] Error'.\n" ; fi > > === Make failed targets repeated here === > lib/CompileJvm.gmk:212: recipe for target '/<>/build-zero/support/modules_libs/java.base/server/libjvm.so' failed > lib/CompileGtest.gmk:64: recipe for target '/<>/build-zero/hotspot/variant-zero/libjvm/gtest/libjvm.so' failed > make/Main.gmk:263: recipe for target 'hotspot-zero-libs' failed > === End of repeated output === > From glaubitz at physik.fu-berlin.de Wed Jun 7 09:34:21 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 7 Jun 2017 11:34:21 +0200 Subject: Cannot link against memset_with_concurrent_readers_sparc.cpp In-Reply-To: <1496827779.3701.5.camel@redhat.com> 
References: <1b9a20ca-9481-dfc4-125a-6d779c380f3e@physik.fu-berlin.de> <1496827779.3701.5.camel@redhat.com> Message-ID: <20170607093421.GC6481@physik.fu-berlin.de> On Wed, Jun 07, 2017 at 11:29:39AM +0200, Severin Gehwolf wrote: > > Does anyone who is more familiar with the build system have an idea > > where to look? > > This is likely a question for build-dev (CC). They might have some > pointers. Ok, thanks, I'll ask over there. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From erik.joelsson at oracle.com Wed Jun 7 10:05:26 2017 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Wed, 7 Jun 2017 12:05:26 +0200 Subject: Cannot link against memset_with_concurrent_readers_sparc.cpp In-Reply-To: <1496827779.3701.5.camel@redhat.com> References: <1b9a20ca-9481-dfc4-125a-6d779c380f3e@physik.fu-berlin.de> <1496827779.3701.5.camel@redhat.com> Message-ID: <7cafff82-8413-ea5b-ebde-7479f9337570@oracle.com> Hello John, If the cpp file is compiled for libjvm.so, the object file should automatically end up on the link command line. I found your build log in the original email. 
The link command line is: /usr/bin/sparc64-linux-gnu-g++-6 -Wl,-z,defs -Wl,-z,noexecstack -Wl,-O1 -Wl,-z,relro -shared -Xlinker -z -Xli nker relro -Xlinker -Bsymbolic-functions -Wl,-version-script=/<>/build-zero/hotspot/variant-zero/l ibjvm/mapfile -Wl,-soname=libjvm.so -o /<>/build-zero/support/modules_libs/java.base/server/libjvm .so @/<>/build-zero/hotspot/variant-zero/libjvm/objs/_BUILD_LIBJVM_objectfilenames.txt -lm -ldl -l pthread -lffi_pic Can you check the contents of this file to verify that the object file in question is actually missing: /<>/build-zero/hotspot/variant-zero/libjvm/objs/_BUILD_LIBJVM_objectfilenames.txt /Erik On 2017-06-07 11:29, Severin Gehwolf wrote: > Hi, > > On Wed, 2017-06-07 at 02:10 +0200, John Paul Adrian Glaubitz wrote: >> Hi! >> >> I'm still working on fixing OpenJDK-9 on Linux/sparc64 and I'm currently >> running into something which should be a trivial Makefile issue which >> is a linker problem (see for the paste below, full log in [1]). >> >> Now, the problem obviously happens on SPARC only because it has its own >> custom implementation of the memset_with_concurrent_readers() function >> in ./src/cpu/sparc/vm/memset_with_concurrent_readers_sparc.cpp, the >> other platforms use ./src/share/vm/gc/shared/memset_with_concurrent_readers.hpp. >> >> From the full build log, it's obvious that memset_with_concurrent_readers_sparc.cpp >> is being compiled earlier, but it's apparently missing on the linker command >> line later. >> >> I have been trying to understand the hand-written Makefiles but I can't >> seem to find the place which I need to patch. >> >> Does anyone who is more familiar with the build system have an idea >> where to look? > This is likely a question for build-dev (CC). They might have some > pointers. 
> > Thanks, > Severin > >> Thanks, >> Adrian >> >>> [1] http://people.debian.org/~glaubitz/openjdk-9_9~b170-2_sparc64.build.gz >> === Output from failing command(s) repeated here === >> /usr/bin/printf "* For target hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link:\n" >> * For target hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link: >> (/bin/grep -v -e "^Note: including file:" < >> /<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link.log || true) | /usr/bin/head -n 12 >> /<>/build-zero/hotspot/variant-zero/libjvm/gtest/objs/test_memset_with_concurrent_readers.o: In function >> `gc_memset_with_concurrent_readers_test_Test::TestBody()': >> ./src/hotspot/make/./src/hotspot/test/native/gc/shared/test_memset_with_concurrent_readers.cpp:66: undefined reference to `memset_with_concurrent_readers(void*, >> int, unsigned long)' >> /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetSharedArray::fill_range(unsigned long, unsigned long, >> unsigned char)': >> ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' >> ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' >> ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' >> ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' >> /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o:./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: >> more undefined references to `memset_with_concurrent_readers(void*, int, unsigned long)' 
follow >> collect2: error: ld returned 1 exit status >> if test `/usr/bin/wc -l < /<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link.log` -gt 12; then >> /bin/echo " ... (rest of output omitted)" ; fi >> /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link:\n" >> * For target hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link: >> (/bin/grep -v -e "^Note: including file:" < /<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link.log || >> true) | /usr/bin/head -n 12 >> /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetSharedArray::fill_range(unsigned long, unsigned long, >> unsigned char)': >> ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' >> ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' >> ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' >> ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' >> ./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' >> /<>/build-zero/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o:./src/hotspot/make/./src/hotspot/src/share/vm/gc/shared/blockOffsetTable.hpp:159: >> more undefined references to `memset_with_concurrent_readers(void*, int, unsigned long)' follow >> collect2: error: ld returned 1 exit status >> if test `/usr/bin/wc -l < 
/<>/build-zero/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link.log` -gt 12; then /bin/echo " >> ... (rest of output omitted)" ; fi >> /usr/bin/printf "\n* All command lines available in /<>/build-zero/make-support/failure-logs.\n" >> >> * All command lines available in /<>/build-zero/make-support/failure-logs. >> /usr/bin/printf "=== End of repeated output ===\n" >> === End of repeated output === >> if /bin/grep -q "recipe for target .* failed" /<>/build-zero/build.log 2> /dev/null; then /usr/bin/printf "\n=== Make failed targets repeated here >> ===\n" ; /bin/grep "recipe for target .* failed" /<>/build-zero/build.log ; /usr/bin/printf "=== End of repeated output ===\n" ; /usr/bin/printf >> "\nHint: Try searching the build log for the name of the first failed target.\n" ; else /usr/bin/printf "\nNo indication of failed target found.\n" ; >> /usr/bin/printf "Hint: Try searching the build log for '] Error'.\n" ; fi >> >> === Make failed targets repeated here === >> lib/CompileJvm.gmk:212: recipe for target '/<>/build-zero/support/modules_libs/java.base/server/libjvm.so' failed >> lib/CompileGtest.gmk:64: recipe for target '/<>/build-zero/hotspot/variant-zero/libjvm/gtest/libjvm.so' failed >> make/Main.gmk:263: recipe for target 'hotspot-zero-libs' failed >> === End of repeated output === >> From stefan.karlsson at oracle.com Wed Jun 7 10:17:32 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 7 Jun 2017 12:17:32 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> Message-ID: <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> Hi Thomas, On 2017-06-07 11:15, Thomas St?fe wrote: > Hi Stefan, > > I saw this, but I also see LogStreamNoResourceMark being used as a > default for the (trace|debug|info|warning|error)_stream() methods of > Log. In this form it is used quite a lot. 
> > Looking further, I see that one cannot just exchange > LogStreamNoResourceMark with LogStreamCHeap, because there are hidden > usage conventions I was not aware of: Just to be clear, I didn't propose that you did a wholesale replacement of LogStreamNoResourceMark with LogStreamCHeap. I merely pointed out the existence of this class in case you had missed it. > > LogStreamNoResourceMark is allocated with new() in create_log_stream(). > LogStreamNoResourceMark is an outputStream, which is a ResourceObj. In > its current form ResourceObj cannot be deleted, so destructors for > ResourceObj child cannot be called. By default ResourceObj classes are allocated in the resource area, but the class also supports CHeap allocations. For example, see some of the allocations of GrowableArray instances: _deallocate_list = new (ResourceObj::C_HEAP, mtClass) GrowableArray(100, true); These can still be deleted: delete _deallocate_list; > > So, we could not use malloc in the stringStream - or exchange > stringStream for bufferedStream - because we would need a non-empty > destructor to free the malloc'd memory, and that destructor cannot exist. > > Looking further, I see that this imposes subtle usage restrictions for UL: > > LogStreamNoResourceMark objects are used via "log.debug_stream()" or > similar. For example: > > codecache_print(log.debug_stream(), /* detailed= */ false); > > debug_stream() will allocate a LogStreamNoResourceMark object which > lives in the resourcearea. This is a bit surprising, because > "debug_stream()" feels like it returns a singleton or a member variable > of log. IIRC, this was done to: 1) break up a cyclic dependencies between logStream.hpp and log.hpp 2) Not have log.hpp depend on the stream.hpp. This used to be important, but the includes in stream.hpp has been fixed so this might be a non-issue. 
> > If one wants to use LogStreamCHeap instead, it must not be created with > new() - which would be a subtle memory leak because the destructor would > never be called - but instead on the stack as automatic variable: > > LogStreamCHeap log_stream(log); > log_stream.print("hallo"); > > I may understand this wrong, but if this is true, this is quite a > difficult API. Feel free to rework this and propose a simpler model. Anything that would simplify this would be helpful. I have two classes which look like siblings but > LogStreamCHeap can only be allocated on the local stack - otherwise I'll > get a memory leak - while LogStreamNoResourceMark gets created in the > resource area, which prevents its destructor from running and may fill > the resource area up with temporary stream objects if used in a certain way. > > Have I understood this right so far? If yes, would it be possible to > simplify this? I think you understand the code correctly, and yes, there are probably ways to make this simpler. Thanks, StefanK > > Kind Regards, Thomas > > > > > On Wed, Jun 7, 2017 at 9:20 AM, Stefan Karlsson > > wrote: > > Hi Thomas, > > > On 2017-06-06 11:40, Thomas St?fe wrote: > > Hi all, > > In our VM we recently hit something similar to > https://bugs.openjdk.java.net/browse/JDK-8167995 > or > https://bugs.openjdk.java.net/browse/JDK-8149557 > : > > A stringStream* was handed down to nested print functions which > create > their own ResourceMarks and, while being down the stack under > the scope of > that new ResourceMark, the stringStream needed to enlarge its > internal > buffer. This is the situation the assert inside > stringStream::write() > attempts to catch > (assert(Thread::current()->current_resource_mark() == > rm); in our case this was a release build, so we just crashed. > > The solution for both JDK-816795 and JDK-8149557 seemed to be to > just > remove the offending ResourceMarks, or shuffle them around, but > generally > this is not an optimal solution, or? 
> > We actually question whether using resource area memory is a > good idea for > outputStream chuild objects at all: > > outputStream instances typically travel down the stack a lot by > getting > handed sub-print-functions, so they run danger of crossing > resource mark > boundaries like above. The sub functions are usually oblivious > to the type > of outputStream* handed down, and as well they should be. And if the > allocate resource area memory themselves, it makes sense to > guard them with > ResourceMark in case they are called in a loop. > > The assert inside stringStream::write() is not a real help > either, because > whether or not it hits depends on pure luck - whether the > realloc code path > is hit just in the right moment while printing. Which depends on > the buffer > size and the print history, which is variable, especially with > logging. > > The only advantage to using bufferedStream (C-Heap) is a small > performance > improvement when allocating. The question is whether this is > really worth > the risk of using resource area memory in this fashion. > Especially in the > context of UL where we are about to do expensive IO operations > (writing to > log file) or may lock (os::flockfile). > > Also, the difference between bufferedStream and stringStream > might be > reduced by improving bufferedStream (e.g. by using a member char > array for > small allocations and delay using malloc() for larger arrays.) > > What you think? Should we get rid of stringStream and only use > an (possibly > improved) bufferedStream? I also imagine this could make UL > coding a bit > simpler. > > > Not answering your questions, but I want to point out that we > already have a UL stream that uses C-Heap: > > logging/logStream.hpp: > > // The backing buffer is allocated in CHeap memory. 
> typedef LogStreamBase LogStreamCHeap; > > StefanK > > > > Thank you, > > Kind Regards, Thomas > > From thomas.stuefe at gmail.com Wed Jun 7 10:25:15 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 7 Jun 2017 12:25:15 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> Message-ID: Hi Stefan, On Wed, Jun 7, 2017 at 12:17 PM, Stefan Karlsson wrote: > Hi Thomas, > > On 2017-06-07 11:15, Thomas St?fe wrote: > >> Hi Stefan, >> >> I saw this, but I also see LogStreamNoResourceMark being used as a >> default for the (trace|debug|info|warning|error)_stream() methods of >> Log. In this form it is used quite a lot. >> >> Looking further, I see that one cannot just exchange >> LogStreamNoResourceMark with LogStreamCHeap, because there are hidden usage >> conventions I was not aware of: >> > > Just to be clear, I didn't propose that you did a wholesale replacement of > LogStreamNoResourceMark with LogStreamCHeap. I merely pointed out the > existence of this class in case you had missed it. > > Sure! I implied this myself with my original post which proposed to replace the resource area allocation inside stringStream with malloc'd memory. > >> LogStreamNoResourceMark is allocated with new() in create_log_stream(). >> LogStreamNoResourceMark is an outputStream, which is a ResourceObj. In its >> current form ResourceObj cannot be deleted, so destructors for ResourceObj >> child cannot be called. >> > > By default ResourceObj classes are allocated in the resource area, but the > class also supports CHeap allocations. 
For example, see some of the > allocations of GrowableArray instances: > > _deallocate_list = new (ResourceObj::C_HEAP, mtClass) > GrowableArray(100, true); > > These can still be deleted: > > delete _deallocate_list; > > >> So, we could not use malloc in the stringStream - or exchange >> stringStream for bufferedStream - because we would need a non-empty >> destructor to free the malloc'd memory, and that destructor cannot exist. >> >> Looking further, I see that this imposes subtle usage restrictions for UL: >> >> LogStreamNoResourceMark objects are used via "log.debug_stream()" or >> similar. For example: >> >> codecache_print(log.debug_stream(), /* detailed= */ false); >> >> debug_stream() will allocate a LogStreamNoResourceMark object which lives >> in the resourcearea. This is a bit surprising, because "debug_stream()" >> feels like it returns a singleton or a member variable of log. >> > > IIRC, this was done to: > > 1) break up a cyclic dependencies between logStream.hpp and log.hpp > > 2) Not have log.hpp depend on the stream.hpp. This used to be important, > but the includes in stream.hpp has been fixed so this might be a non-issue. > > >> If one wants to use LogStreamCHeap instead, it must not be created with >> new() - which would be a subtle memory leak because the destructor would >> never be called - but instead on the stack as automatic variable: >> >> LogStreamCHeap log_stream(log); >> log_stream.print("hallo"); >> >> I may understand this wrong, but if this is true, this is quite a >> difficult API. >> > > Feel free to rework this and propose a simpler model. Anything that would > simplify this would be helpful. > > I will mull over this a bit (and I would be thankful for other viewpoints as well). A bottomline question which is difficult to answer is whether folks value the slight performance increase of resource area backed memory allocation in stringStream more than simplicity and robustness which would come with switching to malloced memory. 
And then, there is the second question of why outputStream objects should be ResourceObj at all; for me, they feel much more at home as stack objects. They themselves are small and do not allocate a lot of memory (if they do, they do it dynamically). And they are not allocated in vast amounts... Let's see what others think. > I have two classes which look like siblings but > >> LogStreamCHeap can only be allocated on the local stack - otherwise I'll >> get a memory leak - while LogStreamNoResourceMark gets created in the >> resource area, which prevents its destructor from running and may fill the >> resource area up with temporary stream objects if used in a certain way. >> >> Have I understood this right so far? If yes, would it be possible to >> simplify this? >> > > I think you understand the code correctly, and yes, there are probably > ways to make this simpler. > > Thanks for your input! Kind regards, Thomas > Thanks, > StefanK > > >> Kind Regards, Thomas >> >> >> >> >> >> On Wed, Jun 7, 2017 at 9:20 AM, Stefan Karlsson < >> stefan.karlsson at oracle.com > wrote: >> >> Hi Thomas, >> >> >> On 2017-06-06 11:40, Thomas Stüfe wrote: >> >> Hi all, >> >> In our VM we recently hit something similar to >> https://bugs.openjdk.java.net/browse/JDK-8167995 >> or >> https://bugs.openjdk.java.net/browse/JDK-8149557 >> : >> >> A stringStream* was handed down to nested print functions which >> create >> their own ResourceMarks and, while being down the stack under >> the scope of >> that new ResourceMark, the stringStream needed to enlarge its >> internal >> buffer. This is the situation the assert inside >> stringStream::write() >> attempts to catch >> (assert(Thread::current()->current_resource_mark() == >> rm)); in our case this was a release build, so we just crashed. >> >> The solution for both JDK-8167995 and JDK-8149557 seemed to be to >> just >> remove the offending ResourceMarks, or shuffle them around, but >> generally >> this is not an optimal solution, is it?
>> >> We actually question whether using resource area memory is a >> good idea for >> outputStream chuild objects at all: >> >> outputStream instances typically travel down the stack a lot by >> getting >> handed sub-print-functions, so they run danger of crossing >> resource mark >> boundaries like above. The sub functions are usually oblivious >> to the type >> of outputStream* handed down, and as well they should be. And if >> the >> allocate resource area memory themselves, it makes sense to >> guard them with >> ResourceMark in case they are called in a loop. >> >> The assert inside stringStream::write() is not a real help >> either, because >> whether or not it hits depends on pure luck - whether the >> realloc code path >> is hit just in the right moment while printing. Which depends on >> the buffer >> size and the print history, which is variable, especially with >> logging. >> >> The only advantage to using bufferedStream (C-Heap) is a small >> performance >> improvement when allocating. The question is whether this is >> really worth >> the risk of using resource area memory in this fashion. >> Especially in the >> context of UL where we are about to do expensive IO operations >> (writing to >> log file) or may lock (os::flockfile). >> >> Also, the difference between bufferedStream and stringStream >> might be >> reduced by improving bufferedStream (e.g. by using a member char >> array for >> small allocations and delay using malloc() for larger arrays.) >> >> What you think? Should we get rid of stringStream and only use >> an (possibly >> improved) bufferedStream? I also imagine this could make UL >> coding a bit >> simpler. >> >> >> Not answering your questions, but I want to point out that we >> already have a UL stream that uses C-Heap: >> >> logging/logStream.hpp: >> >> // The backing buffer is allocated in CHeap memory. 
>> typedef LogStreamBase LogStreamCHeap; >> >> StefanK >> >> >> >> Thank you, >> >> Kind Regards, Thomas >> >> >> From glaubitz at physik.fu-berlin.de Wed Jun 7 11:05:06 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 7 Jun 2017 13:05:06 +0200 Subject: Cannot link against memset_with_concurrent_readers_sparc.cpp In-Reply-To: <7cafff82-8413-ea5b-ebde-7479f9337570@oracle.com> References: <1b9a20ca-9481-dfc4-125a-6d779c380f3e@physik.fu-berlin.de> <1496827779.3701.5.camel@redhat.com> <7cafff82-8413-ea5b-ebde-7479f9337570@oracle.com> Message-ID: <20170607110506.GF6481@physik.fu-berlin.de> On Wed, Jun 07, 2017 at 12:05:26PM +0200, Erik Joelsson wrote: > Can you check the contents of this file to verify that the object file in > question is actually missing: > > /<>/build-zero/hotspot/variant-zero/libjvm/objs/_BUILD_LIBJVM_objectfilenames.txt Yes, it's missing, see [1]. Adrian > [1] https://people.debian.org/~glaubitz/_BUILD_LIBJVM_objectfilenames.txt -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From mikael.gerdin at oracle.com Wed Jun 7 11:42:47 2017 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Wed, 7 Jun 2017 13:42:47 +0200 Subject: Cannot link against memset_with_concurrent_readers_sparc.cpp In-Reply-To: <20170607110506.GF6481@physik.fu-berlin.de> References: <1b9a20ca-9481-dfc4-125a-6d779c380f3e@physik.fu-berlin.de> <1496827779.3701.5.camel@redhat.com> <7cafff82-8413-ea5b-ebde-7479f9337570@oracle.com> <20170607110506.GF6481@physik.fu-berlin.de> Message-ID: <9e6409be-5823-bef4-96ec-2c4603c41ecc@oracle.com> Hi, On 2017-06-07 13:05, John Paul Adrian Glaubitz wrote: > On Wed, Jun 07, 2017 at 12:05:26PM +0200, Erik Joelsson wrote: >> Can you check the contents of this file to verify that the object file in >> question is actually missing: >> >> /<>/build-zero/hotspot/variant-zero/libjvm/objs/_BUILD_LIBJVM_objectfilenames.txt > > Yes, it's missing, see [1]. > I think the problem is that your build configuration is ZERO but the file memset_with_concurrent_readers_sparc.cpp is in the cpu/sparc directory and will only be visible if building a native (non-zero) SPARC config. Perhaps there is a mismatch between -DSPARC and the source directory selection? 
/Mikael > Adrian > >> [1] https://people.debian.org/~glaubitz/_BUILD_LIBJVM_objectfilenames.txt > From glaubitz at physik.fu-berlin.de Wed Jun 7 11:49:47 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 7 Jun 2017 13:49:47 +0200 Subject: Cannot link against memset_with_concurrent_readers_sparc.cpp In-Reply-To: <9e6409be-5823-bef4-96ec-2c4603c41ecc@oracle.com> References: <1b9a20ca-9481-dfc4-125a-6d779c380f3e@physik.fu-berlin.de> <1496827779.3701.5.camel@redhat.com> <7cafff82-8413-ea5b-ebde-7479f9337570@oracle.com> <20170607110506.GF6481@physik.fu-berlin.de> <9e6409be-5823-bef4-96ec-2c4603c41ecc@oracle.com> Message-ID: <20170607114947.GH6481@physik.fu-berlin.de> On Wed, Jun 07, 2017 at 01:42:47PM +0200, Mikael Gerdin wrote: > I think the problem is that your build configuration is ZERO but the file > memset_with_concurrent_readers_sparc.cpp is in the cpu/sparc directory and > will only be visible if building a native (non-zero) SPARC config. That's a *very* good hint. I just saw there is actually a related patch in the Debian package which came from the openjdk-8 package and is currently disabled for the openjdk-9 package! Attaching the old patch; it pretty much looks like what I need to be looking into. Thanks for the pointer! Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 -------------- next part -------------- A non-text attachment was scrubbed... Name: zero-sparc.diff Type: text/x-diff Size: 13366 bytes Desc: not available URL: From zgu at redhat.com Wed Jun 7 18:07:26 2017 From: zgu at redhat.com (Zhengyu Gu) Date: Wed, 7 Jun 2017 14:07:26 -0400 Subject: RFR(XS) [8u backport] 8181055: PPC64: "mbind: Invalid argument" still seen after 8175813 Message-ID: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> Hi, Please review the 8u backport of JDK-8181055.
There is only one minor conflict from original patch. Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 Webrev: http://cr.openjdk.java.net/~zgu/8181055/8u/webrev.00/ Thanks, -Zhengyu From shade at redhat.com Wed Jun 7 18:11:10 2017 From: shade at redhat.com (Aleksey Shipilev) Date: Wed, 7 Jun 2017 20:11:10 +0200 Subject: RFR(XS) [8u backport] 8181055: PPC64: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> References: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> Message-ID: <25ab002d-0728-4aaa-eeb7-d3b9a3ea8254@redhat.com> On 06/07/2017 08:07 PM, Zhengyu Gu wrote: > Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 > Webrev: http://cr.openjdk.java.net/~zgu/8181055/8u/webrev.00/ Looks almost the same as 9. Looks good to me. Shouldn't you do this at jdk8-dev? -Aleksey From zgu at redhat.com Wed Jun 7 18:16:29 2017 From: zgu at redhat.com (Zhengyu Gu) Date: Wed, 7 Jun 2017 14:16:29 -0400 Subject: RFR(XS) [8u backport] 8181055: PPC64: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: <25ab002d-0728-4aaa-eeb7-d3b9a3ea8254@redhat.com> References: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> <25ab002d-0728-4aaa-eeb7-d3b9a3ea8254@redhat.com> Message-ID: Hi Aleksey, Thanks for the review. On 06/07/2017 02:11 PM, Aleksey Shipilev wrote: > On 06/07/2017 08:07 PM, Zhengyu Gu wrote: >> Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 >> Webrev: http://cr.openjdk.java.net/~zgu/8181055/8u/webrev.00/ > > Looks almost the same as 9. Looks good to me. > > Shouldn't you do this at jdk8-dev? > There is not jdk8-dev and jdk8 is read-only. I think jdk8u-dev is right one. Thanks, -Zhengyu > -Aleksey > From daniel.daugherty at oracle.com Wed Jun 7 20:14:55 2017 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Wed, 7 Jun 2017 14:14:55 -0600 Subject: RFR(XS) [8u backport] 8181055: PPC64: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: References: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> <25ab002d-0728-4aaa-eeb7-d3b9a3ea8254@redhat.com> Message-ID: <62524c62-ded1-6fab-e00c-8666fbff1645@oracle.com> hotspot-dev at ... is the right place for the RFR (which this is). jdk8u-dev at ... is the right place for the RFA (Request For Approval) after the RFR is approved. Dan On 6/7/17 12:16 PM, Zhengyu Gu wrote: > Hi Aleksey, > > Thanks for the review. > > On 06/07/2017 02:11 PM, Aleksey Shipilev wrote: >> On 06/07/2017 08:07 PM, Zhengyu Gu wrote: >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 >>> Webrev: http://cr.openjdk.java.net/~zgu/8181055/8u/webrev.00/ >> >> Looks almost the same as 9. Looks good to me. >> >> Shouldn't you do this at jdk8-dev? >> > There is not jdk8-dev and jdk8 is read-only. I think jdk8u-dev is > right one. > > Thanks, > > -Zhengyu > > >> -Aleksey >> > From zgu at redhat.com Wed Jun 7 20:37:07 2017 From: zgu at redhat.com (Zhengyu Gu) Date: Wed, 7 Jun 2017 16:37:07 -0400 Subject: RFR(XS) [8u backport] 8181055: PPC64: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: <62524c62-ded1-6fab-e00c-8666fbff1645@oracle.com> References: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> <25ab002d-0728-4aaa-eeb7-d3b9a3ea8254@redhat.com> <62524c62-ded1-6fab-e00c-8666fbff1645@oracle.com> Message-ID: <2239bacf-1760-1f73-20a1-ce792a191741@redhat.com> Thanks for the clarification, Dan. -Zhengyu On 06/07/2017 04:14 PM, Daniel D. Daugherty wrote: > hotspot-dev at ... is the right place for the RFR (which this is). > > jdk8u-dev at ... is the right place for the RFA (Request For Approval) > after the RFR is approved. > > Dan > > > On 6/7/17 12:16 PM, Zhengyu Gu wrote: >> Hi Aleksey, >> >> Thanks for the review. 
>> >> On 06/07/2017 02:11 PM, Aleksey Shipilev wrote: >>> On 06/07/2017 08:07 PM, Zhengyu Gu wrote: >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 >>>> Webrev: http://cr.openjdk.java.net/~zgu/8181055/8u/webrev.00/ >>> >>> Looks almost the same as 9. Looks good to me. >>> >>> Shouldn't you do this at jdk8-dev? >>> >> There is not jdk8-dev and jdk8 is read-only. I think jdk8u-dev is >> right one. >> >> Thanks, >> >> -Zhengyu >> >> >>> -Aleksey >>> >> > From vladimir.kozlov at oracle.com Wed Jun 7 22:17:42 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 7 Jun 2017 15:17:42 -0700 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events In-Reply-To: <7215f9c5-fb06-5026-837e-7567d88a7122@oracle.com> References: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> <64c515ad-8aed-befb-8a0a-1c1833032584@oracle.com> <1dfa33ea-b913-81fc-b1c3-54e64483b93a@oracle.com> <311a9c18-7bb8-9b97-dcda-74def73b278a@oracle.com> <7215f9c5-fb06-5026-837e-7567d88a7122@oracle.com> Message-ID: Thank you Chris for finding the issue. I would like Dean to look on it since he did this changes. One concern is change to FOR_ALL_HEAPS() will also scan AOT heap. We need to figure-out to how to scan all blobs in regular CodeCache excluding only AOT. Thanks, Vladimir On 6/5/17 11:19 PM, Tobias Hartmann wrote: > Hi Chris, > > On 06.06.2017 04:48, Chris Plummer wrote: >> I'll wait for the compiler team to chime in. I've already asked them to have a look (this issue not withstanding) because it is their code. > > Your change looks good to me and I'm fine with the simple fix without a closure. > > This is actually a regression introduced by the AOT integration (JDK-8171008): > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/777aaa19c4b1#l116.123 > > Before, CodeCache::blobs_do() used to iterate over all code heaps. 
> > Best regards, > Tobias > From dean.long at oracle.com Wed Jun 7 22:20:40 2017 From: dean.long at oracle.com (dean.long at oracle.com) Date: Wed, 7 Jun 2017 15:20:40 -0700 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events In-Reply-To: <170a33a7-ce04-cc94-e08d-be2fbcb422a4@oracle.com> References: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> <64c515ad-8aed-befb-8a0a-1c1833032584@oracle.com> <1dfa33ea-b913-81fc-b1c3-54e64483b93a@oracle.com> <311a9c18-7bb8-9b97-dcda-74def73b278a@oracle.com> <7215f9c5-fb06-5026-837e-7567d88a7122@oracle.com> <6015eee1-fe2d-a36d-b574-1d60ebccf270@oracle.com> <643c1117-6739-c284-1236-4ccf87336dd0@oracle.com> <170a33a7-ce04-cc94-e08d-be2fbcb422a4@oracle.com> Message-ID: <943a1478-75a0-f5c1-8885-e369fbb4bcca@oracle.com> Sorry for the late review, but I think that anything called blobs_do should use FOR_ALL_HEAPS, and if we want to restrict to just FOR_ALL_NMETHOD_HEAPS then that should be called nmethod_blobs_do. We are just getting lucky that all the GC uses are only interested in nmethods. If all uses turn out to be interested in nmethods only, then only a rename from blobs_do to nmethods_blobs_do is needed. dl On 6/7/17 12:49 AM, Tobias Hartmann wrote: > Hi, > > On 07.06.2017 00:35, serguei.spitsyn at oracle.com wrote: >> I agree. >> Just wanted to highlight the regressed patch might have more issues. >> So, the Compiler team needs to double-check it. > I quickly checked all cases and they look okay to me. Will double check with the team though.
> > Best regards, > Tobias From chris.plummer at oracle.com Wed Jun 7 23:06:26 2017 From: chris.plummer at oracle.com (Chris Plummer) Date: Wed, 7 Jun 2017 16:06:26 -0700 Subject: RFR(10)(XS): JDK-8171365: nsk/jvmti/scenarios/events/EM04/em04t001: many errors for missed events In-Reply-To: References: <75c8ac80-dbe8-cd34-cfc9-9b2a709fd25e@oracle.com> <64c515ad-8aed-befb-8a0a-1c1833032584@oracle.com> <1dfa33ea-b913-81fc-b1c3-54e64483b93a@oracle.com> <311a9c18-7bb8-9b97-dcda-74def73b278a@oracle.com> <7215f9c5-fb06-5026-837e-7567d88a7122@oracle.com> Message-ID: Hi Vladimir, Tobias already looked at the changes and they have been pushed. If Dean finds any AOT related concerns, we'll need to address them with a separate CR. However, I don't think there will be an issue with AOT, at least not for JVMTI's use (and it is the only user of this API), since it is ignoring all nmethods (it just wants blobs that don't represent java methods). thanks, Chris On 6/7/17 3:17 PM, Vladimir Kozlov wrote: > Thank you Chris for finding the issue. > > I would like Dean to look on it since he did this changes. > One concern is change to FOR_ALL_HEAPS() will also scan AOT heap. We > need to figure-out to how to scan all blobs in regular CodeCache > excluding only AOT. > > Thanks, > Vladimir > > On 6/5/17 11:19 PM, Tobias Hartmann wrote: >> Hi Chris, >> >> On 06.06.2017 04:48, Chris Plummer wrote: >>> I'll wait for the compiler team to chime in. I've already asked them >>> to have a look (this issue not withstanding) because it is their code. >> >> Your change looks good to me and I'm fine with the simple fix without >> a closure. >> >> This is actually a regression introduced by the AOT integration >> (JDK-8171008): >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/777aaa19c4b1#l116.123 >> >> Before, CodeCache::blobs_do() used to iterate over all code heaps.
>> >> Best regards, >> Tobias >> From kim.barrett at oracle.com Wed Jun 7 23:51:32 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 7 Jun 2017 19:51:32 -0400 Subject: RFR: 8086005: Define __STDC_xxx_MACROS config macros globally via build system Message-ID: Please review this change to the build of hotspot to globally define the __STDC_xxx_MACROS macros via the command line, rather than via #defines scattered through several header files. CR: https://bugs.openjdk.java.net/browse/JDK-8086005 Webrev: http://cr.openjdk.java.net/~kbarrett/8086005/hs.00/ http://cr.openjdk.java.net/~kbarrett/8086005/hotspot.00/ Testing: JPRT From erik.joelsson at oracle.com Thu Jun 8 06:28:06 2017 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Thu, 8 Jun 2017 08:28:06 +0200 Subject: RFR: 8086005: Define __STDC_xxx_MACROS config macros globally via build system In-Reply-To: References: Message-ID: Build changes look good to me. /Erik On 2017-06-08 01:51, Kim Barrett wrote: > Please review this change to the build of hotspot to globally define > the __STDC_xxx_MACROS macros via the command line, rather than > via #defines scattered through several header files. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8086005 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8086005/hs.00/ > http://cr.openjdk.java.net/~kbarrett/8086005/hotspot.00/ > > Testing: > JPRT > From kim.barrett at oracle.com Thu Jun 8 10:36:37 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 8 Jun 2017 06:36:37 -0400 Subject: RFR: 8086005: Define __STDC_xxx_MACROS config macros globally via build system In-Reply-To: References: Message-ID: <8D1B5CE1-782E-4942-B069-E31067F3E8A3@oracle.com> > On Jun 8, 2017, at 2:28 AM, Erik Joelsson wrote: > > Build changes look good to me. Thanks. 
> /Erik > > > On 2017-06-08 01:51, Kim Barrett wrote: >> Please review this change to the build of hotspot to globally define >> the __STDC_xxx_MACROS macros via the command line, rather than >> via #defines scattered through several header files. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8086005 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8086005/hs.00/ >> http://cr.openjdk.java.net/~kbarrett/8086005/hotspot.00/ >> >> Testing: >> JPRT From gerard.ziemski at oracle.com Thu Jun 8 21:21:01 2017 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 8 Jun 2017 16:21:01 -0500 Subject: RFR: 8086005: Define __STDC_xxx_MACROS config macros globally via build system In-Reply-To: References: Message-ID: <38B1C99D-0DE4-481E-A971-311CB16B59F3@oracle.com> hi Kim, My understanding is that to enable c++11, for example, we need to do 2 things (at least on Mac OS X): #1 For the compilation phase we need to add "-std=c++11 -stdlib=libc++", where "-std=c++11" selects the language model, and "-stdlib=libc++" selects the corresponding headers. #2 For the linking phase we need to add "-stdlib=libc++" to select the corresponding c++ standard lib. Ie, we need to set both cflags and ldflags, but you are only allowing to add to JVM_CFLAGS. Without the ability to also modify JVM_LDFLAGS, this fix, as is, is not complete on Mac OS X. Unless I'm mistaken, please correct me if I'm wrong, can we include modifying JVM_LDFLAGS in this fix as well? cheers > On Jun 7, 2017, at 6:51 PM, Kim Barrett wrote: > > Please review this change to the build of hotspot to globally define > the __STDC_xxx_MACROS macros via the command line, rather than > via #defines scattered through several header files.
> > CR: > https://bugs.openjdk.java.net/browse/JDK-8086005 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8086005/hs.00/ > http://cr.openjdk.java.net/~kbarrett/8086005/hotspot.00/ > > Testing: > JPRT > From thomas.stuefe at gmail.com Fri Jun 9 07:48:03 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 9 Jun 2017 09:48:03 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> Message-ID: Hi Stefan, just a small question to verify that I understood everything correctly. The LogStream classes (LogStreamBase and children) are basically the write-to-UL frontend classes, right? Their purpose is to collect input via various print.. methods until \n is encountered, then pipe the assembled line to the UL backend. To do that it needs a backing store for the to-be-assembled-line, and this is the whole reason stringStream is used (via the "streamClass" template parameter for LogStreamBase)? So, basically the whole rather involved class tree rooted at LogStreamBase only deals with the various ways that one line backing store is allocated? Including LogStream itself, which contains - I was very surprised to see - an embedded ResourceMark (stringStreamWithResourceMark). There are no other reasons for this ResourceMark? I am currently experimenting with changing LogStream to use a simple malloc'd backing store, in combination with a small fixed size member buffer for small lines; I'd like to see if that had any measurable negative performance impact. The coding is halfway done already, but the callers need fixing up, because due to my change LogStreamBase children cannot be allocated with new anymore, because of the ResourceObj-destructor problem. What do you think, is this worthwhile and am I overlooking something obvious? The UL coding is quite large, after all. 
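[Editorial note: the small-fixed-buffer-plus-malloc idea described above can be sketched as a standalone toy class. The class and member names below are hypothetical illustrations, not the actual HotSpot LogStream/stringStream code.]

```cpp
#include <cstdlib>
#include <cstring>

// Toy line buffer: short lines live in an embedded array, so the common
// case needs no allocation at all; only long lines fall back to malloc().
// Because instances are stack objects, the destructor always runs, so the
// malloc'd memory cannot leak (unlike a ResourceObj whose destructor is
// never invoked).
class LineBuffer {
  static const size_t SmallSize = 64;
  char   _small[SmallSize]; // embedded buffer for the common (short) case
  char*  _buf;              // points at _small or at malloc'd memory
  size_t _cap;
  size_t _len;
public:
  LineBuffer() : _buf(_small), _cap(SmallSize), _len(0) { _buf[0] = '\0'; }
  ~LineBuffer() { if (_buf != _small) free(_buf); }
  void append(const char* s) {
    size_t n = strlen(s);
    if (_len + n + 1 > _cap) {            // grow only when the line gets long
      size_t new_cap = (_len + n + 1) * 2;
      char* p = (char*)malloc(new_cap);
      memcpy(p, _buf, _len + 1);          // copy existing content + NUL
      if (_buf != _small) free(_buf);
      _buf = p;
      _cap = new_cap;
    }
    memcpy(_buf + _len, s, n + 1);
    _len += n;
  }
  const char* line() const { return _buf; }
  size_t length() const { return _len; }
};
```

The point of the sketch is only the allocation strategy: a stack-allocated object with an embedded buffer keeps the fast path allocation-free, while the destructor makes the malloc fallback leak-free.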
Kind Regards, Thomas On Wed, Jun 7, 2017 at 12:25 PM, Thomas Stüfe wrote: > Hi Stefan, > > On Wed, Jun 7, 2017 at 12:17 PM, Stefan Karlsson < > stefan.karlsson at oracle.com> wrote: >> Hi Thomas, >> >> On 2017-06-07 11:15, Thomas Stüfe wrote: >> >>> Hi Stefan, >>> >>> I saw this, but I also see LogStreamNoResourceMark being used as a >>> default for the (trace|debug|info|warning|error)_stream() methods of >>> Log. In this form it is used quite a lot. >>> >>> Looking further, I see that one cannot just exchange >>> LogStreamNoResourceMark with LogStreamCHeap, because there are hidden usage >>> conventions I was not aware of: >>> >> >> Just to be clear, I didn't propose that you did a wholesale replacement >> of LogStreamNoResourceMark with LogStreamCHeap. I merely pointed out the >> existence of this class in case you had missed it. >> >> > Sure! I implied this myself with my original post which proposed to > replace the resource area allocation inside stringStream with malloc'd > memory. > > >> >>> LogStreamNoResourceMark is allocated with new() in create_log_stream(). >>> LogStreamNoResourceMark is an outputStream, which is a ResourceObj. In its >>> current form ResourceObj cannot be deleted, so destructors for ResourceObj >>> child cannot be called. >>> >> >> By default ResourceObj classes are allocated in the resource area, but >> the class also supports CHeap allocations. For example, see some of the >> allocations of GrowableArray instances: >> >> _deallocate_list = new (ResourceObj::C_HEAP, mtClass) >> GrowableArray(100, true); >> >> These can still be deleted: >> >> delete _deallocate_list; >> >> >>> So, we could not use malloc in the stringStream - or exchange >>> stringStream for bufferedStream - because we would need a non-empty >>> destructor to free the malloc'd memory, and that destructor cannot exist.
>>> >>> Looking further, I see that this imposes subtle usage restrictions for >>> UL: >>> >>> LogStreamNoResourceMark objects are used via "log.debug_stream()" or >>> similar. For example: >>> >>> codecache_print(log.debug_stream(), /* detailed= */ false); >>> >>> debug_stream() will allocate a LogStreamNoResourceMark object which >>> lives in the resourcearea. This is a bit surprising, because >>> "debug_stream()" feels like it returns a singleton or a member variable of >>> log. >>> >> >> IIRC, this was done to: >> >> 1) break up a cyclic dependencies between logStream.hpp and log.hpp >> >> 2) Not have log.hpp depend on the stream.hpp. This used to be important, >> but the includes in stream.hpp has been fixed so this might be a non-issue. >> >> >>> If one wants to use LogStreamCHeap instead, it must not be created with >>> new() - which would be a subtle memory leak because the destructor would >>> never be called - but instead on the stack as automatic variable: >>> >>> LogStreamCHeap log_stream(log); >>> log_stream.print("hallo"); >>> >>> I may understand this wrong, but if this is true, this is quite a >>> difficult API. >>> >> >> Feel free to rework this and propose a simpler model. Anything that would >> simplify this would be helpful. >> >> > I will mull over this a bit (and I would be thankful for other viewpoints > as well). A bottomline question which is difficult to answer is whether > folks value the slight performance increase of resource area backed memory > allocation in stringStream more than simplicity and robustness which would > come with switching to malloced memory. And then, there is the second > question of why outputStream objects should be ResourceObj at all; for me, > they feel much more at home as stack objects. They themselves are small and > do not allocate a lot of memory (if they do, they do it dynamically). And > they are not allocated in vast amounts... > > Lets see what others think. 
> > >> I have two classes which look like siblings but >>> LogStreamCHeap can only be allocated on the local stack - otherwise I'll >>> get a memory leak - while LogStreamNoResourceMark gets created in the >>> resource area, which prevents its destructor from running and may fill the >>> resource area up with temporary stream objects if used in a certain way. >>> >>> Have I understood this right so far? If yes, would it be possible to >>> simplify this? >>> >> >> I think you understand the code correctly, and yes, there are probably >> ways to make this simpler. >> >> > Thanks for your input! > > Kind regards, Thomas > > >> Thanks, >> StefanK >> >> >>> Kind Regards, Thomas >>> >>> >>> >>> >>> >>> On Wed, Jun 7, 2017 at 9:20 AM, Stefan Karlsson < >>> stefan.karlsson at oracle.com > wrote: >>> >>> Hi Thomas, >>> >>> >>> On 2017-06-06 11:40, Thomas Stüfe wrote: >>> >>> Hi all, >>> >>> In our VM we recently hit something similar to >>> https://bugs.openjdk.java.net/browse/JDK-8167995 >>> or >>> https://bugs.openjdk.java.net/browse/JDK-8149557 >>> : >>> >>> A stringStream* was handed down to nested print functions which >>> create >>> their own ResourceMarks and, while being down the stack under >>> the scope of >>> that new ResourceMark, the stringStream needed to enlarge its >>> internal >>> buffer. This is the situation the assert inside >>> stringStream::write() >>> attempts to catch >>> (assert(Thread::current()->current_resource_mark() == >>> rm); in our case this was a release build, so we just crashed. >>> >>> The solution for both JDK-8167995 and JDK-8149557 seemed to be to >>> just >>> remove the offending ResourceMarks, or shuffle them around, but >>> generally >>> this is not an optimal solution, or?
>>> >>> We actually question whether using resource area memory is a >>> good idea for >>> outputStream child objects at all: >>> >>> outputStream instances typically travel down the stack a lot by >>> getting >>> handed sub-print-functions, so they run danger of crossing >>> resource mark >>> boundaries like above. The sub functions are usually oblivious >>> to the type >>> of outputStream* handed down, and as well they should be. And if >>> they >>> allocate resource area memory themselves, it makes sense to >>> guard them with >>> ResourceMark in case they are called in a loop. >>> >>> The assert inside stringStream::write() is not a real help >>> either, because >>> whether or not it hits depends on pure luck - whether the >>> realloc code path >>> is hit just in the right moment while printing. Which depends on >>> the buffer >>> size and the print history, which is variable, especially with >>> logging. >>> >>> The only advantage to using bufferedStream (C-Heap) is a small >>> performance >>> improvement when allocating. The question is whether this is >>> really worth >>> the risk of using resource area memory in this fashion. >>> Especially in the >>> context of UL where we are about to do expensive IO operations >>> (writing to >>> log file) or may lock (os::flockfile). >>> >>> Also, the difference between bufferedStream and stringStream >>> might be >>> reduced by improving bufferedStream (e.g. by using a member char >>> array for >>> small allocations and delay using malloc() for larger arrays.) >>> >>> What do you think? Should we get rid of stringStream and only use >>> a (possibly >>> improved) bufferedStream? I also imagine this could make UL >>> coding a bit >>> simpler. >>> >>> >>> Not answering your questions, but I want to point out that we >>> already have a UL stream that uses C-Heap: >>> >>> logging/logStream.hpp: >>> >>> // The backing buffer is allocated in CHeap memory.
>>> typedef LogStreamBase LogStreamCHeap; >>> >>> StefanK >>> >>> >>> >>> Thank you, >>> >>> Kind Regards, Thomas >>> >>> >>> > From glaubitz at physik.fu-berlin.de Fri Jun 9 10:20:42 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Jun 2017 12:20:42 +0200 Subject: [PATCH] linux-sparc build fixes Message-ID: <20170609102041.GA2477@physik.fu-berlin.de> Hi! I am currently working on fixing OpenJDK-9 on all non-mainstream targets available in Debian. For Debian/sparc64, the attached four patches were necessary to make the build succeed [1]. I know the patches cannot be merged right now, but I'm posting them anyway in case someone else is interested in using them. All patches are: Signed-off-by: John Paul Adrian Glaubitz I also signed the OCA. I'm now looking into fixing the builds on alpha (DEC Alpha), armel (ARMv4T), m68k (680x0), powerpc (PPC32) and sh4 (SuperH/J-Core). Cheers, Adrian > [1] https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=sparc64&ver=9%7Eb170-2&stamp=1496931563&raw=0 -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 -------------- next part -------------- A non-text attachment was scrubbed... Name: hotspot-add-missing-log-header.diff Type: text/x-diff Size: 352 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: hotspot-fix-checkbytebuffer.diff Type: text/x-diff Size: 792 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rename-sparc-linux-atomic-header.diff Type: text/x-diff Size: 16060 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: fix-zero-build-on-sparc.diff Type: text/x-diff Size: 11316 bytes Desc: not available URL: From glaubitz at physik.fu-berlin.de Fri Jun 9 13:15:21 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Jun 2017 15:15:21 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc Message-ID: <20170609131521.GE2477@physik.fu-berlin.de> Hi! I'm currently trying to fix the build of openjdk-9 (b170) on linux-powerpc (PPC32) on Debian which fails with the following segmentation fault (full build log available in [1]): Creating images/jmods/java.desktop.jmod /bin/rm -f /<>/build/images/jmods/java.desktop.jmod /<>/build/support/jmods/java.desktop.jmod /<>/build/jdk/bin/jmod -J-XX:+UseSerialGC -J-Xms32M -J-Xmx512M -J-XX:TieredStopAtLevel=1 create \ --module-version 9-Debian \ --target-platform 'linux-ppc' \ --module-path /<>/build/images/jmods \ --exclude '**{_the.*,_*.marker,*.diz,*.debuginfo,*.dSYM/**,*.dSYM,*.pdb,*.map}' \ --libs /<>/build/support/modules_libs/java.desktop --cmds /<>/build/support/modules_cmds/java.desktop --config /<>/build/support/modules_conf/java.desktop --class-path /<>/build/jdk/modules/java.desktop --header-fil es /<>/build/support/modules_include/java.desktop --legal-notices "/<>/build/support/modules_legal/java.base:/<>/src/jdk/src/java.desktop/unix/legal:/<>/src/jdk/src/java.desktop/share/legal" /<>/build/suppo rt/jmods/java.desktop.jmod (...) 
make[4]: *** [/<>/build/images/jmods/java.desktop.jmod] Segmentation fault CreateJmods.gmk:133: recipe for target '/<>/build/images/jmods/java.desktop.jmod' failed make[4]: Leaving directory '/<>/src/make' make/Main.gmk:305: recipe for target 'java.desktop-jmod' failed make[3]: *** [java.desktop-jmod] Error 2 make[3]: Leaving directory '/<>/src' To reproduce the segfault, I tried running the command above but whichever way I try to run the jmod command, it just bails out with an error message about insufficient memory: (sid-powerpc-sbuild)root at kapitsa:/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk/bin# ./jmod Error occurred during initialization of boot layer java.lang.OutOfMemoryError: Direct buffer memory (sid-powerpc-sbuild)root at kapitsa:/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk/bin# This applies to all the various Java commands I tried (javac, java etc) as well. Does anyone have an idea what I am overlooking here? Also, does anyone have any pointers to debugging such issues in JVM? It's probably not just a matter of loading the offending command into gdb with the proper version of libjvm.so preloaded, is it? Thanks, Adrian > [1] https://people.debian.org/~glaubitz/openjdk-9_9~b170-2_powerpc.build.gz -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From gromero at linux.vnet.ibm.com Fri Jun 9 14:04:48 2017 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Fri, 9 Jun 2017 11:04:48 -0300 Subject: RFR(XS) [8u backport] 8181055: PPC64: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: <2239bacf-1760-1f73-20a1-ce792a191741@redhat.com> References: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> <25ab002d-0728-4aaa-eeb7-d3b9a3ea8254@redhat.com> <62524c62-ded1-6fab-e00c-8666fbff1645@oracle.com> <2239bacf-1760-1f73-20a1-ce792a191741@redhat.com> Message-ID: <593AAB00.4060801@linux.vnet.ibm.com> Hi Zhengyu, Dan On 07-06-2017 17:37, Zhengyu Gu wrote: > Thanks for the clarification, Dan. > > -Zhengyu > > > On 06/07/2017 04:14 PM, Daniel D. Daugherty wrote: >> hotspot-dev at ... is the right place for the RFR (which this is). >> >> jdk8u-dev at ... is the right place for the RFA (Request For Approval) >> after the RFR is approved. >> >> Dan Does it mean that besides Aleksey's review it's still missing one additional review in order to proceed with the request for approval in the jdk8u-dev ML or this bug? Is my understanding correct? Thanks! Regards, Gustavo >> >> On 6/7/17 12:16 PM, Zhengyu Gu wrote: >>> Hi Aleksey, >>> >>> Thanks for the review. >>> >>> On 06/07/2017 02:11 PM, Aleksey Shipilev wrote: >>>> On 06/07/2017 08:07 PM, Zhengyu Gu wrote: >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 >>>>> Webrev: http://cr.openjdk.java.net/~zgu/8181055/8u/webrev.00/ >>>> >>>> Looks almost the same as 9. Looks good to me. >>>> >>>> Shouldn't you do this at jdk8-dev? >>>> >>> There is not jdk8-dev and jdk8 is read-only. I think jdk8u-dev is >>> right one. 
>>> >>> Thanks, >>> >>> -Zhengyu >>> >>> >>>> -Aleksey >>>> >>> >> > From gromero at linux.vnet.ibm.com Fri Jun 9 14:30:29 2017 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Fri, 9 Jun 2017 11:30:29 -0300 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <20170609131521.GE2477@physik.fu-berlin.de> References: <20170609131521.GE2477@physik.fu-berlin.de> Message-ID: <593AB105.5020303@linux.vnet.ibm.com> Hi John, On 09-06-2017 10:15, John Paul Adrian Glaubitz wrote: > Hi! > > I'm currently trying to fix the build of openjdk-9 (b170) on linux-powerpc > (PPC32) on Debian which fails with the following segmentation fault > (full build log available in [1]): > > Creating images/jmods/java.desktop.jmod > /bin/rm -f /<>/build/images/jmods/java.desktop.jmod /<>/build/support/jmods/java.desktop.jmod > /<>/build/jdk/bin/jmod -J-XX:+UseSerialGC -J-Xms32M -J-Xmx512M -J-XX:TieredStopAtLevel=1 create \ > --module-version 9-Debian \ > --target-platform 'linux-ppc' \ > --module-path /<>/build/images/jmods \ > --exclude '**{_the.*,_*.marker,*.diz,*.debuginfo,*.dSYM/**,*.dSYM,*.pdb,*.map}' \ > --libs /<>/build/support/modules_libs/java.desktop --cmds /<>/build/support/modules_cmds/java.desktop --config /<>/build/support/modules_conf/java.desktop --class-path /<>/build/jdk/modules/java.desktop --header-fil > es /<>/build/support/modules_include/java.desktop --legal-notices "/<>/build/support/modules_legal/java.base:/<>/src/jdk/src/java.desktop/unix/legal:/<>/src/jdk/src/java.desktop/share/legal" /<>/build/suppo > rt/jmods/java.desktop.jmod > (...) 
> make[4]: *** [/<>/build/images/jmods/java.desktop.jmod] Segmentation fault > CreateJmods.gmk:133: recipe for target '/<>/build/images/jmods/java.desktop.jmod' failed > make[4]: Leaving directory '/<>/src/make' > make/Main.gmk:305: recipe for target 'java.desktop-jmod' failed > make[3]: *** [java.desktop-jmod] Error 2 > make[3]: Leaving directory '/<>/src' > > To reproduce the segfault, I tried running the command above but whichever > way I try to run the jmod command, it just bails out with an error message > about insufficient memory: > > (sid-powerpc-sbuild)root at kapitsa:/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk/bin# ./jmod > Error occurred during initialization of boot layer > java.lang.OutOfMemoryError: Direct buffer memory > (sid-powerpc-sbuild)root at kapitsa:/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk/bin# > > This applies to all the various Java commands I tried (javac, java etc) as > well. Does anyone have an idea what I am overlooking here? > > Also, does anyone have any pointers to debugging such issues in JVM? It's probably > not just a matter of loading the offending command into gdb with the proper > version of libjvm.so preloaded, is it? You can attach gdb when the error occurs passing to the JVM: -XX:OnError="gdb %p" -XX:OnOutOfMemoryError="gdb %p" Another thing is that the JVM will use SIGSEGV for some state transitions, hence in gdb I usually let SIGSEGV be passed to the JVM: (gdb) handle SIGSEGV pass noprint nostop On PPC64 (not sure what's the current state on PPC32) we can also have SIGTRAP for state transitions and I had some trouble in the past debugging with -XX:+UseSIGTRAP enabled (basically gdb halts on some specific thread types that generates such a type of signal), so I also usually ask to the JVM to not use SIGTRAP and use SIGILL instead, enabling the passthrough of SIGILL: (gdb) handle SIGILL pass noprint nostop and calling the JVM with "-XX:-UseSIGTRAP". 
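[Editorial note: collected into one place, the advice above amounts to roughly the following gdb command file. This is a sketch; "jvm.gdb" and the launcher invocation are placeholders, and -XX:-UseSIGTRAP is passed to the JVM separately as described.]

```gdb
# jvm.gdb - start with: gdb -x jvm.gdb --args <launcher> <args>
# Let the JVM's intentional SIGSEGVs (state transitions) pass through:
handle SIGSEGV pass noprint nostop
# With -XX:-UseSIGTRAP the JVM uses SIGILL instead of SIGTRAP; pass it too:
handle SIGILL pass noprint nostop
run
```

With these dispositions set, gdb should only stop on genuine crashes rather than on the JVM's implicit-null-check and safepoint signals.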
Starting the JVM from gdb is also fine, given that you handle the signals; otherwise all can go well until init_globals(), but beyond that, after some threads are created, the debugging can halt for no apparent reason. HTH Cheers, Gustavo > Thanks, > Adrian > >> [1] https://people.debian.org/~glaubitz/openjdk-9_9~b170-2_powerpc.build.gz > From glaubitz at physik.fu-berlin.de Fri Jun 9 15:02:26 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Jun 2017 17:02:26 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <593AB105.5020303@linux.vnet.ibm.com> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> Message-ID: <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> Hi Gustavo! On 06/09/2017 04:30 PM, Gustavo Romero wrote: > You can attach gdb when the error occurs passing to the JVM: > > -XX:OnError="gdb %p" > -XX:OnOutOfMemoryError="gdb %p" Aha, that's a very useful feature. Thanks for the tip. > Another thing is that the JVM will use SIGSEGV for some state transitions, hence > in gdb I usually let SIGSEGV be passed to the JVM: > > (gdb) handle SIGSEGV pass noprint nostop But it does not mean the segmentation faults I have observed are actually intentional, are they? What confuses me most is that the JVM segfaults during the build but bails out with the out-of-memory error when I manually run any of the commands after the failed build. It almost looks like I forgot to set some environment variables.
> On PPC64 (not sure what's the current state on PPC32) we can also have SIGTRAP > for state transitions and I had some trouble in the past debugging with > -XX:+UseSIGTRAP enabled (basically gdb halts on some specific thread types that > generates such a type of signal), so I also usually ask to the JVM to not use > SIGTRAP and use SIGILL instead, enabling the passthrough of SIGILL: > > (gdb) handle SIGILL pass noprint nostop > > and calling the JVM with "-XX:-UseSIGTRAP". Good to know. I would have been confused by that behavior for sure. > Starting the JVM from gdb it's also fine given that you handle the signals, > otherwise all can go well until init_globals() but beyond that, after some > threads are created, the debugging can halt for no apparently reason. What's the best way to actually start the JVM from gdb? Do I just load "java" into gdb and run it with the suggested parameters? Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From daniel.daugherty at oracle.com Fri Jun 9 15:18:11 2017 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Fri, 9 Jun 2017 09:18:11 -0600 Subject: RFR(XS) [8u backport] 8181055: PPC64: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: <593AAB00.4060801@linux.vnet.ibm.com> References: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> <25ab002d-0728-4aaa-eeb7-d3b9a3ea8254@redhat.com> <62524c62-ded1-6fab-e00c-8666fbff1645@oracle.com> <2239bacf-1760-1f73-20a1-ce792a191741@redhat.com> <593AAB00.4060801@linux.vnet.ibm.com> Message-ID: <64ae8569-6fa7-ac4b-e088-9cb64cc27c3c@oracle.com> On 6/9/17 8:04 AM, Gustavo Romero wrote: > Hi Zhengyu, Dan > > On 07-06-2017 17:37, Zhengyu Gu wrote: >> Thanks for the clarification, Dan. >> >> -Zhengyu >> >> >> On 06/07/2017 04:14 PM, Daniel D. Daugherty wrote: >>> hotspot-dev at ... 
is the right place for the RFR (which this is). >>> >>> jdk8u-dev at ... is the right place for the RFA (Request For Approval) >>> after the RFR is approved. >>> >>> Dan > Does it mean that besides Aleksey's review it's still missing one additional > review in order to proceed with the request for approval in the jdk8u-dev ML > or this bug? Is my understanding correct? I believe that for a backport that is almost identical to the original, all we require is a single reviewer, even in HotSpot. Dan > > Thanks! > > Regards, > Gustavo > > >>> On 6/7/17 12:16 PM, Zhengyu Gu wrote: >>>> Hi Aleksey, >>>> >>>> Thanks for the review. >>>> >>>> On 06/07/2017 02:11 PM, Aleksey Shipilev wrote: >>>>> On 06/07/2017 08:07 PM, Zhengyu Gu wrote: >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 >>>>>> Webrev: http://cr.openjdk.java.net/~zgu/8181055/8u/webrev.00/ >>>>> Looks almost the same as 9. Looks good to me. >>>>> >>>>> Shouldn't you do this at jdk8-dev? >>>>> >>>> There is not jdk8-dev and jdk8 is read-only. I think jdk8u-dev is >>>> right one. >>>> >>>> Thanks, >>>> >>>> -Zhengyu >>>> >>>> >>>>> -Aleksey >>>>> > From glaubitz at physik.fu-berlin.de Fri Jun 9 15:20:32 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Jun 2017 17:20:32 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> Message-ID: <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> On 06/09/2017 05:02 PM, John Paul Adrian Glaubitz wrote: > On 06/09/2017 04:30 PM, Gustavo Romero wrote: >> You can attach gdb when the error occurs passing to the JVM: >> >> -XX:OnError="gdb %p" >> -XX:OnOutOfMemoryError="gdb %p" > > Aha, that's a very useful feature. Thanks for the tip. 
Hmm, that doesn't seem to work: root at kapitsa:/srv/sid-powerpc-sbuild/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk# ./bin/java -XX:OnError="gdb %p" -XX:OnOutOfMemoryError="gdb %p" Error occurred during initialization of boot layer java.lang.OutOfMemoryError: Direct buffer memory root at kapitsa:/srv/sid-powerpc-sbuild/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk# Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From glaubitz at physik.fu-berlin.de Fri Jun 9 15:21:48 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Jun 2017 17:21:48 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> Message-ID: <6c02c929-d9a3-3e1c-b42f-d9d477f82112@physik.fu-berlin.de> On 06/09/2017 05:02 PM, John Paul Adrian Glaubitz wrote: > On 06/09/2017 04:30 PM, Gustavo Romero wrote: >> You can attach gdb when the error occurs passing to the JVM: >> >> -XX:OnError="gdb %p" >> -XX:OnOutOfMemoryError="gdb %p" > > Aha, that's a very useful feature. Thanks for the tip. Hmm, that doesn't seem to work though: root at kapitsa:/srv/sid-powerpc-sbuild/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk# ./bin/java -XX:OnError="gdb %p" -XX:OnOutOfMemoryError="gdb %p" Error occurred during initialization of boot layer java.lang.OutOfMemoryError: Direct buffer memory root at kapitsa:/srv/sid-powerpc-sbuild/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk# Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From Alan.Bateman at oracle.com Fri Jun 9 15:26:16 2017 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Fri, 9 Jun 2017 16:26:16 +0100 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> Message-ID: <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> On 09/06/2017 16:20, John Paul Adrian Glaubitz wrote: > : > Hmm, that doesn't seem to work: > > root at kapitsa:/srv/sid-powerpc-sbuild/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk# ./bin/java -XX:OnError="gdb %p" -XX:OnOutOfMemoryError="gdb %p" > Error occurred during initialization of boot layer > java.lang.OutOfMemoryError: Direct buffer memory > root at kapitsa:/srv/sid-powerpc-sbuild/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk# > Something fishy if you are running out of direct memory during startup. Does -Xlog:init=debug print any more? -Alan From glaubitz at physik.fu-berlin.de Fri Jun 9 15:27:18 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Jun 2017 17:27:18 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> Message-ID: On 06/09/2017 05:26 PM, Alan Bateman wrote: > Something fishy if you are running out of direct memory during startup. Does -Xlog:init=debug print any more?
It does: root at kapitsa:/srv/sid-powerpc-sbuild/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk/bin# ./java -Xlog:init=debug Error occurred during initialization of boot layer java.lang.OutOfMemoryError: Direct buffer memory at java.base/java.nio.Bits.reserveMemory(Bits.java:187) at java.base/java.nio.DirectByteBuffer.(DirectByteBuffer.java:123) at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:310) at java.base/sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:232) at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:195) at java.base/sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:165) at java.base/sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:65) at java.base/sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109) at java.base/sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:265) at java.base/java.io.DataInputStream.readInt(DataInputStream.java:392) at java.base/jdk.internal.module.ModuleInfo.doRead(ModuleInfo.java:185) at java.base/jdk.internal.module.ModuleInfo.read(ModuleInfo.java:129) at java.base/jdk.internal.module.ModulePath.readExplodedModule(ModulePath.java:669) at java.base/jdk.internal.module.ModulePath.readModule(ModulePath.java:321) at java.base/jdk.internal.module.ModulePath.scanDirectory(ModulePath.java:285) at java.base/jdk.internal.module.ModulePath.scan(ModulePath.java:233) at java.base/jdk.internal.module.ModulePath.scanNextEntry(ModulePath.java:191) at java.base/jdk.internal.module.ModulePath.find(ModulePath.java:155) at java.base/java.lang.module.ModuleFinder$1.lambda$find$0(ModuleFinder.java:195) at java.base/java.security.AccessController.doPrivileged(Native Method) at java.base/java.lang.module.ModuleFinder$1.find(ModuleFinder.java:196) at java.base/jdk.internal.module.ModuleBootstrap.boot(ModuleBootstrap.java:136) at 
java.base/java.lang.System.initPhase2(System.java:2003) root at kapitsa:/srv/sid-powerpc-sbuild/build/openjdk-9-2gWg6b/openjdk-9-9~b170/build/jdk/bin# -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From zgu at redhat.com Fri Jun 9 15:39:07 2017 From: zgu at redhat.com (Zhengyu Gu) Date: Fri, 9 Jun 2017 11:39:07 -0400 Subject: RFR(XS) [8u backport] 8181055: PPC64: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: <593AAB00.4060801@linux.vnet.ibm.com> References: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> <25ab002d-0728-4aaa-eeb7-d3b9a3ea8254@redhat.com> <62524c62-ded1-6fab-e00c-8666fbff1645@oracle.com> <2239bacf-1760-1f73-20a1-ce792a191741@redhat.com> <593AAB00.4060801@linux.vnet.ibm.com> Message-ID: On 06/09/2017 10:04 AM, Gustavo Romero wrote: > Hi Zhengyu, Dan > > On 07-06-2017 17:37, Zhengyu Gu wrote: >> Thanks for the clarification, Dan. >> >> -Zhengyu >> >> >> On 06/07/2017 04:14 PM, Daniel D. Daugherty wrote: >>> hotspot-dev at ... is the right place for the RFR (which this is). >>> >>> jdk8u-dev at ... is the right place for the RFA (Request For Approval) >>> after the RFR is approved. >>> >>> Dan > > Does it mean that besides Aleksey's review it's still missing one additional > review in order to proceed with the request for approval in the jdk8u-dev ML > or this bug? Is my understanding correct? > Hi Gustavo, If the backport has the same review process, then it needs a "R"eviewer. Thanks, -Zhengyu > Thanks! > > Regards, > Gustavo > > >>> >>> On 6/7/17 12:16 PM, Zhengyu Gu wrote: >>>> Hi Aleksey, >>>> >>>> Thanks for the review.
>>>> >>>> On 06/07/2017 02:11 PM, Aleksey Shipilev wrote: >>>>> On 06/07/2017 08:07 PM, Zhengyu Gu wrote: >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 >>>>>> Webrev: http://cr.openjdk.java.net/~zgu/8181055/8u/webrev.00/ >>>>> >>>>> Looks almost the same as 9. Looks good to me. >>>>> >>>>> Shouldn't you do this at jdk8-dev? >>>>> >>>> There is no jdk8-dev and jdk8 is read-only. I think jdk8u-dev is the >>>> right one. >>>> >>>> Thanks, >>>> >>>> -Zhengyu >>>> >>>> >>>>> -Aleksey >>>>> >>>> >>> >> > From gerard.ziemski at oracle.com Fri Jun 9 15:43:46 2017 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Fri, 9 Jun 2017 10:43:46 -0500 Subject: RFR: 8086005: Define __STDC_xxx_MACROS config macros globally via build system In-Reply-To: <38B1C99D-0DE4-481E-A971-311CB16B59F3@oracle.com> References: <38B1C99D-0DE4-481E-A971-311CB16B59F3@oracle.com> Message-ID: <2E5939FD-B3E6-4650-9DBE-742E870F5169@oracle.com> hi Kim, I'm withdrawing my objection. Making this mechanism cover ldflags is beyond the scope of this fix and deserves its own feature request. cheers > On Jun 8, 2017, at 4:21 PM, Gerard Ziemski wrote: > > hi Kim, > > My understanding is that to enable c++11, for example, we need to do 2 things (at least on Mac OS X): > > #1 For the compilation phase we need to add "-std=c++11 -stdlib=libc++", where "-std=c++11" selects the language model, and "-stdlib=libc++" selects the corresponding headers. > > #2 For the linking phase we need to add "-stdlib=libc++" to select the corresponding c++ standard lib. > > I.e., we need to set both cflags and ldflags, but you are only allowing to add to JVM_CFLAGS. Without the ability to also modify JVM_LDFLAGS, this fix, as is, is not complete on Mac OS X. > > Unless I'm mistaken, please correct me if I'm wrong, can we include modifying JVM_LDFLAGS in this fix as well?
> > > cheers > > >> On Jun 7, 2017, at 6:51 PM, Kim Barrett wrote: >> >> Please review this change to the build of hotspot to globally define >> the __STDC_xxx_MACROS macros via the command line, rather than >> via #defines scattered through several header files. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8086005 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8086005/hs.00/ >> http://cr.openjdk.java.net/~kbarrett/8086005/hotspot.00/ >> >> Testing: >> JPRT >> > From glaubitz at physik.fu-berlin.de Fri Jun 9 15:58:19 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Jun 2017 17:58:19 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> Message-ID: On 06/09/2017 05:27 PM, John Paul Adrian Glaubitz wrote: > On 06/09/2017 05:26 PM, Alan Bateman wrote: >> Something fishy if you are running our of direct memory during startup. Does -Xlog:init=debug print any more? > > It does: I'll rebuild everything with --enable-debug --with-debug-level=slowdebug now to see if I can get more verbosity during the crash. I still have the old build root available in any case. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From glaubitz at physik.fu-berlin.de Fri Jun 9 17:54:20 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Jun 2017 19:54:20 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> Message-ID: On 06/09/2017 05:58 PM, John Paul Adrian Glaubitz wrote: > I'll rebuild everything with --enable-debug --with-debug-level=slowdebug Just rebuilding with "--with-debug-level=slowdebug", of course. Both options are mutually exclusive. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From gromero at linux.vnet.ibm.com Fri Jun 9 18:19:38 2017 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Fri, 9 Jun 2017 15:19:38 -0300 Subject: RFR(XS) [8u backport] 8181055: PPC64: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: References: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> <25ab002d-0728-4aaa-eeb7-d3b9a3ea8254@redhat.com> <62524c62-ded1-6fab-e00c-8666fbff1645@oracle.com> <2239bacf-1760-1f73-20a1-ce792a191741@redhat.com> <593AAB00.4060801@linux.vnet.ibm.com> Message-ID: <593AE6BA.9030608@linux.vnet.ibm.com> Hi Zhengyu, On 09-06-2017 12:39, Zhengyu Gu wrote: > > > On 06/09/2017 10:04 AM, Gustavo Romero wrote: >> Hi Zhengyu, Dan >> >> On 07-06-2017 17:37, Zhengyu Gu wrote: >>> Thanks for the clarification, Dan. >>> >>> -Zhengyu >>> >>> >>> On 06/07/2017 04:14 PM, Daniel D. Daugherty wrote: >>>> hotspot-dev at ... 
is the right place for the RFR (which this is). >>>> >>>> jdk8u-dev at ... is the right place for the RFA (Request For Approval) >>>> after the RFR is approved. >>>> >>>> Dan >> >> Does it mean that besides Aleksey's review it's still missing one additional >> review in order to proceed with the request for approval in the jdk8u-dev ML >> or this bug? Is my understanding correct? >> > Hi Gustavo, > > If backport has the same review process, then it needs a "R"eviwer. Got it! Thanks for clarifying. Regards, Gustavo > Thanks, > > -Zhengyu > > >> Thanks! >> >> Regards, >> Gustavo >> >> >>>> >>>> On 6/7/17 12:16 PM, Zhengyu Gu wrote: >>>>> Hi Aleksey, >>>>> >>>>> Thanks for the review. >>>>> >>>>> On 06/07/2017 02:11 PM, Aleksey Shipilev wrote: >>>>>> On 06/07/2017 08:07 PM, Zhengyu Gu wrote: >>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 >>>>>>> Webrev: http://cr.openjdk.java.net/~zgu/8181055/8u/webrev.00/ >>>>>> >>>>>> Looks almost the same as 9. Looks good to me. >>>>>> >>>>>> Shouldn't you do this at jdk8-dev? >>>>>> >>>>> There is not jdk8-dev and jdk8 is read-only. I think jdk8u-dev is >>>>> right one. >>>>> >>>>> Thanks, >>>>> >>>>> -Zhengyu >>>>> >>>>> >>>>>> -Aleksey >>>>>> >>>>> >>>> >>> >> > From gromero at linux.vnet.ibm.com Fri Jun 9 18:21:07 2017 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Fri, 9 Jun 2017 15:21:07 -0300 Subject: RFR(XS) [8u backport] 8181055: PPC64: "mbind: Invalid argument" still seen after 8175813 In-Reply-To: <64ae8569-6fa7-ac4b-e088-9cb64cc27c3c@oracle.com> References: <5361e308-00e1-3041-c728-b6ebae7586ae@redhat.com> <25ab002d-0728-4aaa-eeb7-d3b9a3ea8254@redhat.com> <62524c62-ded1-6fab-e00c-8666fbff1645@oracle.com> <2239bacf-1760-1f73-20a1-ce792a191741@redhat.com> <593AAB00.4060801@linux.vnet.ibm.com> <64ae8569-6fa7-ac4b-e088-9cb64cc27c3c@oracle.com> Message-ID: <593AE713.5070201@linux.vnet.ibm.com> Hi Dan, On 09-06-2017 12:18, Daniel D. 
Daugherty wrote: > On 6/9/17 8:04 AM, Gustavo Romero wrote: >> Hi Zhengyu, Dan >> >> On 07-06-2017 17:37, Zhengyu Gu wrote: >>> Thanks for the clarification, Dan. >>> >>> -Zhengyu >>> >>> >>> On 06/07/2017 04:14 PM, Daniel D. Daugherty wrote: >>>> hotspot-dev at ... is the right place for the RFR (which this is). >>>> >>>> jdk8u-dev at ... is the right place for the RFA (Request For Approval) >>>> after the RFR is approved. >>>> >>>> Dan >> Does it mean that besides Aleksey's review it's still missing one additional >> review in order to proceed with the request for approval in the jdk8u-dev ML >> or this bug? Is my understanding correct? > > I believe that for a backport that is almost identical to the > original, all we require is a single reviewer, even in HotSpot. I see now. Thanks for clarifying. Regards, Gustavo > Dan > > >> >> Thanks! >> >> Regards, >> Gustavo >> >> >>>> On 6/7/17 12:16 PM, Zhengyu Gu wrote: >>>>> Hi Aleksey, >>>>> >>>>> Thanks for the review. >>>>> >>>>> On 06/07/2017 02:11 PM, Aleksey Shipilev wrote: >>>>>> On 06/07/2017 08:07 PM, Zhengyu Gu wrote: >>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8181055 >>>>>>> Webrev: http://cr.openjdk.java.net/~zgu/8181055/8u/webrev.00/ >>>>>> Looks almost the same as 9. Looks good to me. >>>>>> >>>>>> Shouldn't you do this at jdk8-dev? >>>>>> >>>>> There is not jdk8-dev and jdk8 is read-only. I think jdk8u-dev is >>>>> right one. 
>>>>> >>>>> Thanks, >>>>> -Zhengyu >>>>> >>>>> >>>>>> -Aleksey >>>>>> >> > From gromero at linux.vnet.ibm.com Fri Jun 9 18:57:17 2017 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Fri, 9 Jun 2017 15:57:17 -0300 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> Message-ID: <593AEF8D.1000600@linux.vnet.ibm.com> Hello Adrian, On 09-06-2017 12:02, John Paul Adrian Glaubitz wrote: > Hi Gustavo! > > On 06/09/2017 04:30 PM, Gustavo Romero wrote: >> You can attach gdb when the error occurs passing to the JVM: >> >> -XX:OnError="gdb %p" >> -XX:OnOutOfMemoryError="gdb %p" > > Aha, that's a very useful feature. Thanks for the tip. > >> Another thing is that the JVM will use SIGSEGV for some state transitions, hence >> in gdb I usually let SIGSEGV be passed to the JVM: >> >> (gdb) handle SIGSEGV pass noprint nostop > > But it does not mean the segmentation faults I have observed are actually > intentional, are they? No, if an intended SIGSEGV is caught in an expected transition the JVM will not abort, exit, or crash. It will instead handle it and move on. It also looks like the segfault you observed happened in a way that did not reach the JVM signal handler, since apparently not even an hs_err log was generated. > What confuses me most is that the JVM segfaults during the build but > bails out with the out-of-memory error when I manually run any of > the commands after the failed build. It almost looks like I forgot > to set some environment variables.
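Gustavo's hs_err point can be checked directly; a small sketch (locations and MyApp are placeholders — by default HotSpot writes hs_err_pid<pid>.log into the working directory of the crashed process):

```shell
# Look for a crash report left behind by the failed build step:
find . -name 'hs_err_pid*.log' 2>/dev/null

# Or pin the report location down explicitly for the next run;
# %p expands to the pid of the crashing JVM:
java -XX:ErrorFile=/tmp/hs_err_%p.log MyApp
```

If no hs_err file turns up at all, the fault most likely never reached HotSpot's own signal handler, matching the behaviour described above.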
> >> On PPC64 (not sure what's the current state on PPC32) we can also have SIGTRAP >> for state transitions and I had some trouble in the past debugging with >> -XX:+UseSIGTRAP enabled (basically gdb halts on some specific thread types that >> generates such a type of signal), so I also usually ask to the JVM to not use >> SIGTRAP and use SIGILL instead, enabling the passthrough of SIGILL: >> >> (gdb) handle SIGILL pass noprint nostop >> >> and calling the JVM with "-XX:-UseSIGTRAP". > > Good to know. I would have been confused by that behavior for sure. > >> Starting the JVM from gdb it's also fine given that you handle the signals, >> otherwise all can go well until init_globals() but beyond that, after some >> threads are created, the debugging can halt for no apparently reason. > > What's the best way to actually start the JVM from gdb? Do I just > load "java" into gdb and run it with the suggested parameters? Yes, exactly. Also use a slow/fast debug build, not a release one. With a debug build you can also use, from the gdb, "call help()" which is very helpful to walk the native and the JVM stack, for instance: (gdb) call help() "Executing help" basic pp(void* p) - try to make sense of p pv(intptr_t p)- ((PrintableResourceObj*) p)->print() ps() - print current thread stack pss() - print all thread stacks pm(int pc) - print Method* given compiled PC findm(intptr_t pc) - finds Method* find(intptr_t x) - finds & prints nmethod/stub/bytecode/oop based on pointer into it pns(void* sp, void* fp, void* pc) - print native (i.e. mixed) stack trace. E.g. pns($sp, $rbp, $pc) on Linux/amd64 and Solaris/amd64 or pns($sp, $ebp, $pc) on Linux/x86 or pns($sp, 0, $pc) on Linux/ppc64 or pns($sp + 0x7ff, 0, $pc) on Solaris/SPARC - in gdb do 'set overload-resolution off' before calling pns() - in dbx do 'frame 1' before calling pns() misc. 
flush() - flushes the log file events() - dump events from ring buffers compiler debugging debug() - to set things up for compiler debugging ndebug() - undo debug (gdb) call pns($sp, 0, $pc) "Executing pns" Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code) v ~StubRoutines::SafeFetch32 V [libjvm.so+0xd0bf9c] init_globals()+0x19c V [libjvm.so+0x15acf04] Threads::create_vm(JavaVMInitArgs*, bool*)+0x364 V [libjvm.so+0xdb085c] JNI_CreateJavaVM+0x10c C [libjli.so+0x48c0] JavaMain+0xd0 C [libpthread.so.0+0x8070] start_thread+0xf0 C [libc.so.6+0x123230] clone+0x98 Regards, Gustavo > Adrian > From glaubitz at physik.fu-berlin.de Fri Jun 9 18:57:46 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Jun 2017 20:57:46 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> Message-ID: <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> On 06/09/2017 07:54 PM, John Paul Adrian Glaubitz wrote: > On 06/09/2017 05:58 PM, John Paul Adrian Glaubitz wrote: >> I'll rebuild everything with --enable-debug --with-debug-level=slowdebug > > Just rebuilding with "--with-debug-level=slowdebug", of course. Both options > are mutually exclusive. Surprise, surprise. Building with "--with-debug-level=slowdebug" instead of "--with-debug-level=release" made the crash go away. Does gcc optimize too aggressively here? It's still building, let's see if it actually succeeds. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From daniel.daugherty at oracle.com Fri Jun 9 20:00:01 2017 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Fri, 9 Jun 2017 14:00:01 -0600 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> Message-ID: This bug seems relevant to this discussion: JDK-8181807 Graal internal error "StringStream is re-allocated with a different ResourceMark" https://bugs.openjdk.java.net/browse/JDK-8181807 Dan On 6/9/17 1:48 AM, Thomas Stüfe wrote: > Hi Stefan, > > just a small question to verify that I understood everything correctly. > > The LogStream classes (LogStreamBase and children) are basically the > write-to-UL frontend classes, right? Their purpose is to collect input via > various print.. methods until \n is encountered, then pipe the assembled > line to the UL backend. To do that it needs a backing store for the > to-be-assembled-line, and this is the whole reason stringStream is used > (via the "streamClass" template parameter for LogStreamBase)? > > So, basically the whole rather involved class tree rooted at LogStreamBase > only deals with the various ways that one line backing store is allocated? > Including LogStream itself, which contains - I was very surprised to see - > an embedded ResourceMark (stringStreamWithResourceMark). There are no other > reasons for this ResourceMark? > > I am currently experimenting with changing LogStream to use a simple > malloc'd backing store, in combination with a small fixed size member > buffer for small lines; I'd like to see if that had any measurable negative > performance impact.
The coding is halfway done already, but the callers > need fixing up, because due to my change LogStreamBase children cannot be > allocated with new anymore, because of the ResourceObj-destructor problem. > > What do you think, is this worthwhile and am I overlooking something > obvious? The UL coding is quite large, after all. > > Kind Regards, Thomas > > > > > > > > > On Wed, Jun 7, 2017 at 12:25 PM, Thomas Stüfe > wrote: > >> Hi Stefan, >> >> On Wed, Jun 7, 2017 at 12:17 PM, Stefan Karlsson < >> stefan.karlsson at oracle.com> wrote: >> >>> Hi Thomas, >>> >>> On 2017-06-07 11:15, Thomas Stüfe wrote: >>> >>>> Hi Stefan, >>>> >>>> I saw this, but I also see LogStreamNoResourceMark being used as a >>>> default for the (trace|debug|info|warning|error)_stream() methods of >>>> Log. In this form it is used quite a lot. >>>> >>>> Looking further, I see that one cannot just exchange >>>> LogStreamNoResourceMark with LogStreamCHeap, because there are hidden usage >>>> conventions I was not aware of: >>>> >>> Just to be clear, I didn't propose that you did a wholesale replacement >>> of LogStreamNoResourceMark with LogStreamCHeap. I merely pointed out the >>> existence of this class in case you had missed it. >>> >>> >> Sure! I implied this myself with my original post which proposed to >> replace the resource area allocation inside stringStream with malloc'd >> memory. >> >> >>>> LogStreamNoResourceMark is allocated with new() in create_log_stream(). >>>> LogStreamNoResourceMark is an outputStream, which is a ResourceObj. In its >>>> current form ResourceObj cannot be deleted, so destructors for ResourceObj >>>> child cannot be called. >>>> >>> By default ResourceObj classes are allocated in the resource area, but >>> the class also supports CHeap allocations.
For example, see some of the >>> allocations of GrowableArray instances: >>> >>> _deallocate_list = new (ResourceObj::C_HEAP, mtClass) >>> GrowableArray(100, true); >>> >>> These can still be deleted: >>> >>> delete _deallocate_list; >>> >>> >>>> So, we could not use malloc in the stringStream - or exchange >>>> stringStream for bufferedStream - because we would need a non-empty >>>> destructor to free the malloc'd memory, and that destructor cannot exist. >>>> >>>> Looking further, I see that this imposes subtle usage restrictions for >>>> UL: >>>> >>>> LogStreamNoResourceMark objects are used via "log.debug_stream()" or >>>> similar. For example: >>>> >>>> codecache_print(log.debug_stream(), /* detailed= */ false); >>>> >>>> debug_stream() will allocate a LogStreamNoResourceMark object which >>>> lives in the resourcearea. This is a bit surprising, because >>>> "debug_stream()" feels like it returns a singleton or a member variable of >>>> log. >>>> >>> IIRC, this was done to: >>> >>> 1) break up a cyclic dependencies between logStream.hpp and log.hpp >>> >>> 2) Not have log.hpp depend on the stream.hpp. This used to be important, >>> but the includes in stream.hpp has been fixed so this might be a non-issue. >>> >>> >>>> If one wants to use LogStreamCHeap instead, it must not be created with >>>> new() - which would be a subtle memory leak because the destructor would >>>> never be called - but instead on the stack as automatic variable: >>>> >>>> LogStreamCHeap log_stream(log); >>>> log_stream.print("hallo"); >>>> >>>> I may understand this wrong, but if this is true, this is quite a >>>> difficult API. >>>> >>> Feel free to rework this and propose a simpler model. Anything that would >>> simplify this would be helpful. >>> >>> >> I will mull over this a bit (and I would be thankful for other viewpoints >> as well). 
A bottomline question which is difficult to answer is whether >> folks value the slight performance increase of resource area backed memory >> allocation in stringStream more than simplicity and robustness which would >> come with switching to malloced memory. And then, there is the second >> question of why outputStream objects should be ResourceObj at all; for me, >> they feel much more at home as stack objects. They themselves are small and >> do not allocate a lot of memory (if they do, they do it dynamically). And >> they are not allocated in vast amounts... >> >> Lets see what others think. >> >> >>> I have two classes which look like siblings but >>> >>>> LogStreamCHeap can only be allocated on the local stack - otherwise I'll >>>> get a memory leak - while LogStreamNoResourceMark gets created in the >>>> resource area, which prevents its destructor from running and may fill the >>>> resource area up with temporary stream objects if used in a certain way. >>>> >>>> Have I understood this right so far? If yes, would it be possible to >>>> simplify this? >>>> >>> I think you understand the code correctly, and yes, there are probably >>> ways to make this simpler. >>> >>> >> Thanks for your input! >> >> Kind regards, Thomas >> >> >>> Thanks, >>> StefanK >>> >>> >>>> Kind Regards, Thomas >>>> >>>> >>>> >>>> >>>> >>>> On Wed, Jun 7, 2017 at 9:20 AM, Stefan Karlsson < >>>> stefan.karlsson at oracle.com > wrote: >>>> >>>> Hi Thomas, >>>> >>>> >>>> On 2017-06-06 11:40, Thomas St?fe wrote: >>>> >>>> Hi all, >>>> >>>> In our VM we recently hit something similar to >>>> https://bugs.openjdk.java.net/browse/JDK-8167995 >>>> or >>>> https://bugs.openjdk.java.net/browse/JDK-8149557 >>>> : >>>> >>>> A stringStream* was handed down to nested print functions which >>>> create >>>> their own ResourceMarks and, while being down the stack under >>>> the scope of >>>> that new ResourceMark, the stringStream needed to enlarge its >>>> internal >>>> buffer. 
This is the situation the assert inside stringStream::write() attempts to
>>>> catch (assert(Thread::current()->current_resource_mark() == rm);); in our
>>>> case this was a release build, so we just crashed.
>>>>
>>>> The solution for both JDK-8167995 and JDK-8149557 seemed to be to just
>>>> remove the offending ResourceMarks, or shuffle them around, but generally
>>>> this is not an optimal solution, is it?
>>>>
>>>> We actually question whether using resource area memory is a good idea
>>>> for outputStream child objects at all:
>>>>
>>>> outputStream instances typically travel down the stack a lot by getting
>>>> handed to sub-print-functions, so they run the danger of crossing
>>>> resource mark boundaries like above. The sub-functions are usually
>>>> oblivious to the type of outputStream* handed down, and so they should
>>>> be. And if they allocate resource area memory themselves, it makes sense
>>>> to guard them with a ResourceMark in case they are called in a loop.
>>>>
>>>> The assert inside stringStream::write() is not a real help either,
>>>> because whether or not it hits depends on pure luck - on whether the
>>>> realloc code path is hit at just the right moment while printing. Which
>>>> depends on the buffer size and the print history, which is variable,
>>>> especially with logging.
>>>>
>>>> The only advantage stringStream (resource area) has over bufferedStream
>>>> (C-Heap) is a small performance improvement when allocating. The question
>>>> is whether this is really worth the risk of using resource area memory in
>>>> this fashion. Especially in the context of UL, where we are about to do
>>>> expensive IO operations (writing to the log file) or may lock
>>>> (os::flockfile).
>>>>
>>>> Also, the difference between bufferedStream and stringStream might be
>>>> reduced by improving bufferedStream (e.g.
by using a member char array for small allocations and delaying the use of
>>>> malloc() for larger arrays.)
>>>>
>>>> What do you think? Should we get rid of stringStream and only use a
>>>> (possibly improved) bufferedStream? I also imagine this could make UL
>>>> coding a bit simpler.
>>>>
>>>> Not answering your questions, but I want to point out that we already
>>>> have a UL stream that uses C-Heap:
>>>>
>>>> logging/logStream.hpp:
>>>>
>>>> // The backing buffer is allocated in CHeap memory.
>>>> typedef LogStreamBase LogStreamCHeap;
>>>>
>>>> StefanK
>>>>
>>>> Thank you,
>>>>
>>>> Kind Regards, Thomas

From thomas.stuefe at gmail.com Sat Jun 10 06:33:27 2017
From: thomas.stuefe at gmail.com (Thomas Stüfe)
Date: Sat, 10 Jun 2017 08:33:27 +0200
Subject: stringStream in UL and nested ResourceMarks
In-Reply-To:
References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com>
Message-ID:

Yes, this seems to be the same issue.

..Thomas

On Fri, Jun 9, 2017 at 10:00 PM, Daniel D. Daugherty <daniel.daugherty at oracle.com> wrote:
> This bug seems relevant to this discussion:
>
> JDK-8181807 Graal internal error "StringStream is re-allocated with a
> different ResourceMark"
> https://bugs.openjdk.java.net/browse/JDK-8181807
>
> Dan
>
> On 6/9/17 1:48 AM, Thomas Stüfe wrote:
>
>> Hi Stefan,
>>
>> just a small question to verify that I understood everything correctly.
>>
>> The LogStream classes (LogStreamBase and children) are basically the
>> write-to-UL frontend classes, right? Their purpose is to collect input via
>> various print.. methods until \n is encountered, then pipe the assembled
>> line to the UL backend. To do that, it needs a backing store for the
>> to-be-assembled line, and this is the whole reason stringStream is used
>> (via the "streamClass" template parameter for LogStreamBase)?
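The "collect input until \n, then pipe the assembled line to the backend" model described above can be sketched in a few lines. This is not UL's actual LogStreamBase; the names LineLog and the vector-of-strings backend are invented for illustration.

```cpp
#include <cstring>
#include <string>
#include <vector>

// Minimal sketch of a line-assembling log stream: characters accumulate in a
// backing store until a newline completes the line, which is then handed to
// the backend as one unit.
class LineLog {
    std::string _line;                  // backing store for the line being assembled
    std::vector<std::string>& _backend; // stands in for the UL backend
public:
    explicit LineLog(std::vector<std::string>& backend) : _backend(backend) {}

    void write(const char* s, unsigned long len) {
        for (unsigned long i = 0; i < len; i++) {
            if (s[i] == '\n') {            // line complete: pipe it to the backend
                _backend.push_back(_line);
                _line.clear();
            } else {
                _line.push_back(s[i]);     // still assembling
            }
        }
    }

    void print(const char* s) { write(s, std::strlen(s)); }
};
```

With this model, several print() calls may contribute to one backend entry, and output after the last newline stays buffered until the next one arrives, which is exactly why each stream needs its own per-line backing store.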
>> So, basically the whole rather involved class tree rooted at LogStreamBase
>> only deals with the various ways that the one-line backing store is
>> allocated? Including LogStream itself, which contains - I was very
>> surprised to see - an embedded ResourceMark (stringStreamWithResourceMark).
>> There are no other reasons for this ResourceMark?
>>
>> I am currently experimenting with changing LogStream to use a simple
>> malloc'd backing store, in combination with a small fixed-size member
>> buffer for small lines; I'd like to see if that has any measurable negative
>> performance impact. The coding is halfway done already, but the callers
>> need fixing up, because due to my change LogStreamBase children cannot be
>> allocated with new anymore, because of the ResourceObj-destructor problem.
>>
>> What do you think - is this worthwhile, and am I overlooking something
>> obvious? The UL coding is quite large, after all.
>>
>> Kind Regards, Thomas
>>
>> On Wed, Jun 7, 2017 at 12:25 PM, Thomas Stüfe wrote:
>>
>>> Hi Stefan,
>>>
>>> On Wed, Jun 7, 2017 at 12:17 PM, Stefan Karlsson <stefan.karlsson at oracle.com> wrote:
>>>
>>>> Hi Thomas,
>>>>
>>>> On 2017-06-07 11:15, Thomas Stüfe wrote:
>>>>
>>>>> Hi Stefan,
>>>>>
>>>>> I saw this, but I also see LogStreamNoResourceMark being used as a
>>>>> default for the (trace|debug|info|warning|error)_stream() methods of
>>>>> Log. In this form it is used quite a lot.
>>>>>
>>>>> Looking further, I see that one cannot just exchange
>>>>> LogStreamNoResourceMark with LogStreamCHeap, because there are hidden
>>>>> usage conventions I was not aware of:
>>>>>
>>>> Just to be clear, I didn't propose that you did a wholesale replacement
>>>> of LogStreamNoResourceMark with LogStreamCHeap. I merely pointed out the
>>>> existence of this class in case you had missed it.
>>>>
>>> Sure!
I implied this myself with my original post, which proposed to
>>> replace the resource area allocation inside stringStream with malloc'd
>>> memory.
>>>
>>>>> LogStreamNoResourceMark is allocated with new() in create_log_stream().
>>>>> LogStreamNoResourceMark is an outputStream, which is a ResourceObj. In
>>>>> its current form ResourceObj cannot be deleted, so destructors for
>>>>> ResourceObj children cannot be called.
>>>>>
>>>> By default ResourceObj classes are allocated in the resource area, but
>>>> the class also supports CHeap allocations. For example, see some of the
>>>> allocations of GrowableArray instances:
>>>>
>>>> _deallocate_list = new (ResourceObj::C_HEAP, mtClass) GrowableArray(100, true);
>>>>
>>>> These can still be deleted:
>>>>
>>>> delete _deallocate_list;
>>>>
>>>>> So, we could not use malloc in the stringStream - or exchange
>>>>> stringStream for bufferedStream - because we would need a non-empty
>>>>> destructor to free the malloc'd memory, and that destructor cannot
>>>>> exist.
>>>>>
>>>>> Looking further, I see that this imposes subtle usage restrictions for
>>>>> UL:
>>>>>
>>>>> LogStreamNoResourceMark objects are used via "log.debug_stream()" or
>>>>> similar. For example:
>>>>>
>>>>> codecache_print(log.debug_stream(), /* detailed= */ false);
>>>>>
>>>>> debug_stream() will allocate a LogStreamNoResourceMark object which
>>>>> lives in the resource area. This is a bit surprising, because
>>>>> "debug_stream()" feels like it returns a singleton or a member variable
>>>>> of log.
>>>>>
>>>> IIRC, this was done to:
>>>>
>>>> 1) break up cyclic dependencies between logStream.hpp and log.hpp
>>>>
>>>> 2) not have log.hpp depend on stream.hpp. This used to be important,
>>>> but the includes in stream.hpp have been fixed, so this might be a
>>>> non-issue.
>>>> >>>> >>>> If one wants to use LogStreamCHeap instead, it must not be created with >>>>> new() - which would be a subtle memory leak because the destructor >>>>> would >>>>> never be called - but instead on the stack as automatic variable: >>>>> >>>>> LogStreamCHeap log_stream(log); >>>>> log_stream.print("hallo"); >>>>> >>>>> I may understand this wrong, but if this is true, this is quite a >>>>> difficult API. >>>>> >>>>> Feel free to rework this and propose a simpler model. Anything that >>>> would >>>> simplify this would be helpful. >>>> >>>> >>>> I will mull over this a bit (and I would be thankful for other >>> viewpoints >>> as well). A bottomline question which is difficult to answer is whether >>> folks value the slight performance increase of resource area backed >>> memory >>> allocation in stringStream more than simplicity and robustness which >>> would >>> come with switching to malloced memory. And then, there is the second >>> question of why outputStream objects should be ResourceObj at all; for >>> me, >>> they feel much more at home as stack objects. They themselves are small >>> and >>> do not allocate a lot of memory (if they do, they do it dynamically). And >>> they are not allocated in vast amounts... >>> >>> Lets see what others think. >>> >>> >>> I have two classes which look like siblings but >>>> >>>> LogStreamCHeap can only be allocated on the local stack - otherwise I'll >>>>> get a memory leak - while LogStreamNoResourceMark gets created in the >>>>> resource area, which prevents its destructor from running and may fill >>>>> the >>>>> resource area up with temporary stream objects if used in a certain >>>>> way. >>>>> >>>>> Have I understood this right so far? If yes, would it be possible to >>>>> simplify this? >>>>> >>>>> I think you understand the code correctly, and yes, there are probably >>>> ways to make this simpler. >>>> >>>> >>>> Thanks for your input! 
>>> Kind regards, Thomas
>>>
>>>> Thanks,
>>>> StefanK
>>>>
>>>>> Kind Regards, Thomas
>>>>>
>>>>> On Wed, Jun 7, 2017 at 9:20 AM, Stefan Karlsson <stefan.karlsson at oracle.com> wrote:
>>>>>
>>>>> Hi Thomas,
>>>>>
>>>>> On 2017-06-06 11:40, Thomas Stüfe wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> In our VM we recently hit something similar to
>>>>> https://bugs.openjdk.java.net/browse/JDK-8167995 or
>>>>> https://bugs.openjdk.java.net/browse/JDK-8149557:
>>>>>
>>>>> A stringStream* was handed down to nested print functions which create
>>>>> their own ResourceMarks and, while down the stack under the scope of
>>>>> that new ResourceMark, the stringStream needed to enlarge its internal
>>>>> buffer. This is the situation the assert inside stringStream::write()
>>>>> attempts to catch (assert(Thread::current()->current_resource_mark() ==
>>>>> rm);); in our case this was a release build, so we just crashed.
>>>>>
>>>>> The solution for both JDK-8167995 and JDK-8149557 seemed to be to just
>>>>> remove the offending ResourceMarks, or shuffle them around, but
>>>>> generally this is not an optimal solution, is it?
>>>>>
>>>>> We actually question whether using resource area memory is a good idea
>>>>> for outputStream child objects at all:
>>>>>
>>>>> outputStream instances typically travel down the stack a lot by getting
>>>>> handed to sub-print-functions, so they run the danger of crossing
>>>>> resource mark boundaries like above. The sub-functions are usually
>>>>> oblivious to the type of outputStream* handed down, and so they should
>>>>> be. And if they allocate resource area memory themselves, it makes
>>>>> sense to guard them with a ResourceMark in case they are called in a
>>>>> loop.
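The hazard described in the quoted email (a stream whose arena-backed buffer must grow after a nested ResourceMark was set) can be reproduced with a toy arena. All names here (Arena, Mark, ToyStream) are invented; this is a sketch of the failure mode, not HotSpot's ResourceArea or stringStream.

```cpp
#include <cstring>

// Toy bump arena with marks, to illustrate the mark-crossing hazard.
struct Arena {
    char buf[1 << 16];
    unsigned long top = 0;
    int mark_serial = 0;                 // identifies the innermost live mark
    char* alloc(unsigned long n) { char* p = buf + top; top += n; return p; }
};

struct Mark {                            // RAII, like ResourceMark
    Arena& a; unsigned long saved_top; int saved_serial;
    explicit Mark(Arena& ar) : a(ar), saved_top(ar.top), saved_serial(ar.mark_serial) { a.mark_serial++; }
    ~Mark() { a.top = saved_top; a.mark_serial = saved_serial; }
};

// Stream whose buffer lives in the arena; it remembers which mark was active
// at creation, mirroring the idea behind stringStream's assert.
struct ToyStream {
    Arena& a; char* buf; unsigned long cap, len; int born_under;
    explicit ToyStream(Arena& ar)
        : a(ar), buf(ar.alloc(8)), cap(8), len(0), born_under(ar.mark_serial) {}

    // Returns false (instead of asserting) when a grow would cross a mark:
    // the grown buffer would be released when the inner mark pops.
    bool put(char c) {
        if (len == cap) {
            if (a.mark_serial != born_under) return false;
            char* nbuf = a.alloc(cap * 2);   // arena "realloc": copy to a new block
            std::memcpy(nbuf, buf, len);
            buf = nbuf; cap *= 2;
        }
        buf[len++] = c;
        return true;
    }
};

// A sub-print-function sets its own mark, then prints enough to force growth.
bool nested_print(ToyStream& s) {
    Mark m(s.a);
    for (int i = 0; i < 16; i++)
        if (!s.put('x')) return false;
    return true;
}
```

Note that whether the problem shows up depends entirely on whether the grow path is hit under the nested mark, which matches the "pure luck" point above: print 7 characters and nothing happens, print 16 and it trips.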
>>>>> The assert inside stringStream::write() is not a real help either,
>>>>> because whether or not it hits depends on pure luck - on whether the
>>>>> realloc code path is hit at just the right moment while printing. Which
>>>>> depends on the buffer size and the print history, which is variable,
>>>>> especially with logging.
>>>>>
>>>>> The only advantage stringStream (resource area) has over bufferedStream
>>>>> (C-Heap) is a small performance improvement when allocating. The
>>>>> question is whether this is really worth the risk of using resource
>>>>> area memory in this fashion. Especially in the context of UL, where we
>>>>> are about to do expensive IO operations (writing to the log file) or
>>>>> may lock (os::flockfile).
>>>>>
>>>>> Also, the difference between bufferedStream and stringStream might be
>>>>> reduced by improving bufferedStream (e.g. by using a member char array
>>>>> for small allocations and delaying the use of malloc() for larger
>>>>> arrays.)
>>>>>
>>>>> What do you think? Should we get rid of stringStream and only use a
>>>>> (possibly improved) bufferedStream? I also imagine this could make UL
>>>>> coding a bit simpler.
>>>>>
>>>>> Not answering your questions, but I want to point out that we already
>>>>> have a UL stream that uses C-Heap:
>>>>>
>>>>> logging/logStream.hpp:
>>>>>
>>>>> // The backing buffer is allocated in CHeap memory.
>>>>> typedef LogStreamBase LogStreamCHeap; >>>>> >>>>> StefanK >>>>> >>>>> >>>>> >>>>> Thank you, >>>>> >>>>> Kind Regards, Thomas >>>>> >>>>> >>>>> >>>>> > From glaubitz at physik.fu-berlin.de Sat Jun 10 17:09:26 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sat, 10 Jun 2017 19:09:26 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> Message-ID: <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> On 06/09/2017 08:57 PM, John Paul Adrian Glaubitz wrote: > It's still building, let's see if it actually succeeds. It does. And it fails again with "--with-debug-level=fastdebug": # Internal Error (/<>/src/hotspot/src/os_cpu/linux_zero/vm/os_linux_zero.cpp:260), pid=46604, tid=46606 # fatal error: # # /--------------------\ # | segmentation fault | # \---\ /--------------/ # / # [-] |\_/| # (+)=C |o o|__ # | | =-*-=__\ # OOO c_c_(___) # # JRE version: OpenJDK Runtime Environment (9.0) (fastdebug build 9-Debian+0-9b170-2) # Java VM: OpenJDK Zero VM (fastdebug 9-Debian+0-9b170-2, interpreted mode, serial gc, linux-ppc) # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again Logs: --with-debug-level=slowdebug (succeeds): > https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=powerpc&ver=9%7Eb170-2&stamp=1497054989&raw=0 --with-debug-level=fastdebug (fails): > https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=powerpc&ver=9%7Eb170-2&stamp=1497076948&raw=0 Does any of the JVM wizards have an idea? 
I haven't looked at the code yet myself :). Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From coleen.phillimore at oracle.com Sat Jun 10 19:21:13 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Sat, 10 Jun 2017 15:21:13 -0400 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> Message-ID: <51f083ac-d4f2-5a0c-fb4f-5851ede08b49@oracle.com> It looks like you're building the zero interpreter which doesn't get the same level support as other platforms, well, it gets actually almost no support. What I would do is create two repositories, build one without the zero options. Build the other with --with-build-jdk= and zero. Debug zero like: gunzip libjvm.diz where it is built gdb --args java -version (or whatever fails). Good luck, Coleen On 6/10/17 1:09 PM, John Paul Adrian Glaubitz wrote: > On 06/09/2017 08:57 PM, John Paul Adrian Glaubitz wrote: >> It's still building, let's see if it actually succeeds. > It does. 
And it fails again with "--with-debug-level=fastdebug": > > # Internal Error (/<>/src/hotspot/src/os_cpu/linux_zero/vm/os_linux_zero.cpp:260), pid=46604, tid=46606 > # fatal error: > # > # /--------------------\ > # | segmentation fault | > # \---\ /--------------/ > # / > # [-] |\_/| > # (+)=C |o o|__ > # | | =-*-=__\ > # OOO c_c_(___) > # > # JRE version: OpenJDK Runtime Environment (9.0) (fastdebug build 9-Debian+0-9b170-2) > # Java VM: OpenJDK Zero VM (fastdebug 9-Debian+0-9b170-2, interpreted mode, serial gc, linux-ppc) > # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again > > Logs: > > --with-debug-level=slowdebug (succeeds): > >> https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=powerpc&ver=9%7Eb170-2&stamp=1497054989&raw=0 > --with-debug-level=fastdebug (fails): > >> https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=powerpc&ver=9%7Eb170-2&stamp=1497076948&raw=0 > Does any of the JVM wizards have an idea? I haven't looked at the code yet myself :). 
> > Adrian > From glaubitz at physik.fu-berlin.de Sat Jun 10 19:52:39 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sat, 10 Jun 2017 21:52:39 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <51f083ac-d4f2-5a0c-fb4f-5851ede08b49@oracle.com> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> <51f083ac-d4f2-5a0c-fb4f-5851ede08b49@oracle.com> Message-ID: <7fae59e0-88a4-7803-a017-617589a4f8e2@physik.fu-berlin.de> On 06/10/2017 09:21 PM, coleen.phillimore at oracle.com wrote: > It looks like you're building the zero interpreter which doesn't get the same > level support as other platforms, well, it gets actually almost no support. Well, that's a pity being it the only processor-independent implementation. > What I would do is create two repositories, build one without the zero options. > Build the other with --with-build-jdk= and zero. Debug > zero like: I'm not sure what you mean. I am building on linux-ppc (32-bit). There is only the zero implementation I can use here. Btw, zero on linux-x86_64 does not show this problem. > gunzip libjvm.diz where it is built What's that for? > gdb --args java -version (or whatever fails). Yeah. That was the plan anyway. -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From glaubitz at physik.fu-berlin.de Sat Jun 10 21:57:34 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sat, 10 Jun 2017 23:57:34 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> Message-ID: <3a2e4168-9d1e-a81b-6bb7-0b711a44a029@physik.fu-berlin.de> On 06/10/2017 07:09 PM, John Paul Adrian Glaubitz wrote: > On 06/09/2017 08:57 PM, John Paul Adrian Glaubitz wrote: >> It's still building, let's see if it actually succeeds. > > It does. 
And it fails again with "--with-debug-level=fastdebug": And here's the backtrace: (sid-powerpc-sbuild)root at kapitsa:/build/openjdk-9-fz188x/openjdk-9-9~b170# gdb --args /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod -J-XX:+UseSerialGC -J-Xms32M -J-Xmx512M -J-XX:TieredStopAtLevel=1 create --module-version 9-Debian --target-platform 'linux-ppc' --module-path /build/openjdk-9-fz188x/openjdk-9-9~b170/build/images/jmods --exclude '**{_the.*,_*.marker,*.diz,*.debuginfo,*.dSYM/**,*.dSYM,*.pdb,*.map}' --libs /build/openjdk-9-fz188x/openjdk-9-9~b170/build/support/modules_libs/java.management --class-path /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/modules/java.management --legal-notices "/build/openjdk-9-fz188x/openjdk-9-9~b170/build/support/modules_legal/java.base" /build/openjdk-9-fz188x/openjdk-9-9~b170/build/support/jmods/java.management.jmod GNU gdb (Debian 7.12-6) 7.12.0.20161007-git Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "powerpc-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: . Find the GDB manual and other documentation resources online at: . For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod...Reading symbols from /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod.debuginfo...done. done. 
(gdb) r Starting program: /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod -J-XX:+UseSerialGC -J-Xms32M -J-Xmx512M -J-XX:TieredStopAtLevel=1 create --module-version 9-Debian --target-platform linux-ppc --module-path /build/openjdk-9-fz188x/openjdk-9-9\~b170/build/images/jmods --exclude \*\*\{_the.\*,_\*.marker,\*.diz,\*.debuginfo,\*.dSYM/\*\*,\*.dSYM,\*.pdb,\*.map\} --libs /build/openjdk-9-fz188x/openjdk-9-9\~b170/build/support/modules_libs/java.management --class-path /build/openjdk-9-fz188x/openjdk-9-9\~b170/build/jdk/modules/java.management --legal-notices /build/openjdk-9-fz188x/openjdk-9-9\~b170/build/support/modules_legal/java.base /build/openjdk-9-fz188x/openjdk-9-9\~b170/build/support/jmods/java.management.jmod [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/powerpc-linux-gnu/libthread_db.so.1". [New Thread 0xf7f9f460 (LWP 19187)] Thread 2 "jmod" received signal SIGSEGV, Segmentation fault. [Switching to Thread 0xf7f9f460 (LWP 19187)] StubGenerator::SafeFetch32 (adr=0xabc, errValue=2748) at ./src/hotspot/src/cpu/zero/vm/stubGenerator_zero.cpp:211 211 value = *adr; (gdb) bt #0 StubGenerator::SafeFetch32 (adr=0xabc, errValue=2748) at ./src/hotspot/src/cpu/zero/vm/stubGenerator_zero.cpp:211 #1 0x0f9c6c0c in SafeFetch32 (errValue=2748, adr=0xabc) at ./src/hotspot/src/share/vm/runtime/stubRoutines.hpp:464 #2 test_safefetch32 () at ./src/hotspot/src/share/vm/runtime/stubRoutines.cpp:243 #3 StubRoutines::initialize2 () at ./src/hotspot/src/share/vm/runtime/stubRoutines.cpp:364 #4 0x0f9c7544 in stubRoutines_init2 () at ./src/hotspot/src/share/vm/runtime/stubRoutines.cpp:373 #5 0x0f4f04a0 in init_globals () at ./src/hotspot/src/share/vm/runtime/init.cpp:143 #6 0x0fa115e8 in Threads::create_vm (args=args at entry=0xf7f9ed3c, canTryAgain=canTryAgain at entry=0xf7f9ecd0) at ./src/hotspot/src/share/vm/runtime/thread.cpp:3630 #7 0x0f56da68 in JNI_CreateJavaVM_inner (args=0xf7f9ed3c, penv=0xf7f9ed38, vm=0xf7f9ed34) at 
./src/hotspot/src/share/vm/prims/jni.cpp:3937 #8 JNI_CreateJavaVM (vm=vm at entry=0xf7f9ed34, penv=penv at entry=0xf7f9ed38, args=args at entry=0xf7f9ed3c) at ./src/hotspot/src/share/vm/prims/jni.cpp:4032 #9 0x0ff244cc in InitializeJVM (ifn=, penv=0xf7f9ed38, pvm=0xf7f9ed34) at ./src/jdk/src/java.base/share/native/libjli/java.c:1481 #10 JavaMain (_args=) at ./src/jdk/src/java.base/share/native/libjli/java.c:408 #11 0x0ff28aa0 in call_continuation (_args=) at ./src/jdk/src/java.base/unix/native/libjli/java_md_solinux.c:895 #12 0x0ff67500 in start_thread () from /lib/powerpc-linux-gnu/libpthread.so.0 #13 0x0fe239b0 in clone () from /lib/powerpc-linux-gnu/libc.so.6 (gdb) -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From glaubitz at physik.fu-berlin.de Sat Jun 10 22:18:28 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sun, 11 Jun 2017 00:18:28 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <3a2e4168-9d1e-a81b-6bb7-0b711a44a029@physik.fu-berlin.de> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> <3a2e4168-9d1e-a81b-6bb7-0b711a44a029@physik.fu-berlin.de> Message-ID: <2730f60d-a854-f849-9027-28c7917d4354@physik.fu-berlin.de> On 06/10/2017 11:57 PM, John Paul Adrian Glaubitz wrote: > Thread 2 "jmod" received signal SIGSEGV, Segmentation fault. 
> [Switching to Thread 0xf7f9f460 (LWP 19187)]
> StubGenerator::SafeFetch32 (adr=0xabc, errValue=2748) at ./src/hotspot/src/cpu/zero/vm/stubGenerator_zero.cpp:211
> 211 value = *adr;

From the description of SafeFetch32():

// Safefetch allows to load a value from a location that's not known
// to be valid. If the load causes a fault, the error value is returned.

So, it seems SafeFetch32() is not able to properly determine whether it's
safe to read from a particular location, and then causes a segfault when
trying to read from that location because it assumed that was safe.

Adrian

-- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913

From glaubitz at physik.fu-berlin.de Sat Jun 10 23:02:57 2017
From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz)
Date: Sun, 11 Jun 2017 01:02:57 +0200
Subject: Debugging segmentation faults in the JVM on linux-powerpc
In-Reply-To: <2730f60d-a854-f849-9027-28c7917d4354@physik.fu-berlin.de>
References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> <3a2e4168-9d1e-a81b-6bb7-0b711a44a029@physik.fu-berlin.de> <2730f60d-a854-f849-9027-28c7917d4354@physik.fu-berlin.de>
Message-ID:

And more:

(sid-powerpc-sbuild)root at kapitsa:/build/openjdk-9-fz188x/openjdk-9-9~b170# gdb --args /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod -J-XX:+UseSerialGC -J-Xms32M -J-Xmx512M -J-XX:TieredStopAtLevel=1 create --module-version 9-Debian --target-platform 'linux-ppc' --module-path /build/openjdk-9-fz188x/openjdk-9-9~b170/build/images/jmods --exclude
'**{_the.*,_*.marker,*.diz,*.debuginfo,*.dSYM/**,*.dSYM,*.pdb,*.map}' --libs /build/openjdk-9-fz188x/openjdk-9-9~b170/build/support/modules_libs/java.management --class-path /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/modules/java.management --legal-notices "/build/openjdk-9-fz188x/openjdk-9-9~b170/build/support/modules_legal/java.base" /build/openjdk-9-fz188x/openjdk-9-9~b170/build/support/jmods/java.management.jmod GNU gdb (Debian 7.12-6) 7.12.0.20161007-git Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "powerpc-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: . Find the GDB manual and other documentation resources online at: . For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod...Reading symbols from /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod.debuginfo...done. done. 
(gdb) r Starting program: /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod -J-XX:+UseSerialGC -J-Xms32M -J-Xmx512M -J-XX:TieredStopAtLevel=1 create --module-version 9-Debian --target-platform linux-ppc --module-path /build/openjdk-9-fz188x/openjdk-9-9\~b170/build/images/jmods --exclude \*\*\{_the.\*,_\*.marker,\*.diz,\*.debuginfo,\*.dSYM/\*\*,\*.dSYM,\*.pdb,\*.map\} --libs /build/openjdk-9-fz188x/openjdk-9-9\~b170/build/support/modules_libs/java.management --class-path /build/openjdk-9-fz188x/openjdk-9-9\~b170/build/jdk/modules/java.management --legal-notices /build/openjdk-9-fz188x/openjdk-9-9\~b170/build/support/modules_legal/java.base /build/openjdk-9-fz188x/openjdk-9-9\~b170/build/support/jmods/java.management.jmod [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/powerpc-linux-gnu/libthread_db.so.1". [New Thread 0xf7f9f460 (LWP 22075)] Thread 2 "jmod" received signal SIGSEGV, Segmentation fault. [Switching to Thread 0xf7f9f460 (LWP 22075)] StubGenerator::SafeFetch32 (adr=0xabc, errValue=2748) at ./src/hotspot/src/cpu/zero/vm/stubGenerator_zero.cpp:211 211 value = *adr; (gdb) si signalHandler (sig=11, info=0xf7f9d9c0, uc=0xf7f9da40) at ./src/hotspot/src/os/linux/vm/os_linux.cpp:4229 4229 void signalHandler(int sig, siginfo_t* info, void* uc) { (gdb) bt #0 signalHandler (sig=11, info=0xf7f9d9c0, uc=0xf7f9da40) at ./src/hotspot/src/os/linux/vm/os_linux.cpp:4229 #1 #2 StubGenerator::SafeFetch32 (adr=0xabc, errValue=2748) at ./src/hotspot/src/cpu/zero/vm/stubGenerator_zero.cpp:211 #3 0x0f9c6c0c in SafeFetch32 (errValue=2748, adr=0xabc) at ./src/hotspot/src/share/vm/runtime/stubRoutines.hpp:464 #4 test_safefetch32 () at ./src/hotspot/src/share/vm/runtime/stubRoutines.cpp:243 #5 StubRoutines::initialize2 () at ./src/hotspot/src/share/vm/runtime/stubRoutines.cpp:364 #6 0x0f9c7544 in stubRoutines_init2 () at ./src/hotspot/src/share/vm/runtime/stubRoutines.cpp:373 #7 0x0f4f04a0 in init_globals () at 
./src/hotspot/src/share/vm/runtime/init.cpp:143 #8 0x0fa115e8 in Threads::create_vm (args=args at entry=0xf7f9ed3c, canTryAgain=canTryAgain at entry=0xf7f9ecd0) at ./src/hotspot/src/share/vm/runtime/thread.cpp:3630 #9 0x0f56da68 in JNI_CreateJavaVM_inner (args=0xf7f9ed3c, penv=0xf7f9ed38, vm=0xf7f9ed34) at ./src/hotspot/src/share/vm/prims/jni.cpp:3937 #10 JNI_CreateJavaVM (vm=vm at entry=0xf7f9ed34, penv=penv at entry=0xf7f9ed38, args=args at entry=0xf7f9ed3c) at ./src/hotspot/src/share/vm/prims/jni.cpp:4032 #11 0x0ff244cc in InitializeJVM (ifn=, penv=0xf7f9ed38, pvm=0xf7f9ed34) at ./src/jdk/src/java.base/share/native/libjli/java.c:1481 #12 JavaMain (_args=) at ./src/jdk/src/java.base/share/native/libjli/java.c:408 #13 0x0ff28aa0 in call_continuation (_args=) at ./src/jdk/src/java.base/unix/native/libjli/java_md_solinux.c:895 #14 0x0ff67500 in start_thread () from /lib/powerpc-linux-gnu/libpthread.so.0 #15 0x0fe239b0 in clone () from /lib/powerpc-linux-gnu/libc.so.6 (gdb) n 4230 assert(info != NULL && uc != NULL, "it must be old kernel"); (gdb) n 4229 void signalHandler(int sig, siginfo_t* info, void* uc) { (gdb) n 4230 assert(info != NULL && uc != NULL, "it must be old kernel"); (gdb) n 4231 int orig_errno = errno; // Preserve errno value over signal handler. (gdb) n 4232 JVM_handle_linux_signal(sig, info, uc, true); (gdb) n 4231 int orig_errno = errno; // Preserve errno value over signal handler. (gdb) n 4232 JVM_handle_linux_signal(sig, info, uc, true); (gdb) n Thread 2 "jmod" received signal SIGSEGV, Segmentation fault. 
StubGenerator::SafeFetchN (adr=0xabc, errValue=-553787651) at ./src/hotspot/src/cpu/zero/vm/stubGenerator_zero.cpp:232
232         value = *adr;
(gdb) n
signalHandler (sig=11, info=0xf7f9d9c0, uc=0xf7f9da40) at ./src/hotspot/src/os/linux/vm/os_linux.cpp:4229
4229    void signalHandler(int sig, siginfo_t* info, void* uc) {
(gdb) n
4230      assert(info != NULL && uc != NULL, "it must be old kernel");
(gdb) n
4229    void signalHandler(int sig, siginfo_t* info, void* uc) {
(gdb) n
4230      assert(info != NULL && uc != NULL, "it must be old kernel");
(gdb) n
4231      int orig_errno = errno;  // Preserve errno value over signal handler.
(gdb) n
4232      JVM_handle_linux_signal(sig, info, uc, true);
(gdb) n
4231      int orig_errno = errno;  // Preserve errno value over signal handler.
(gdb) n
4232      JVM_handle_linux_signal(sig, info, uc, true);
(gdb) n
[New Thread 0xf55bf460 (LWP 22078)]
[New Thread 0xf53ff460 (LWP 22079)]
[New Thread 0xf50ff460 (LWP 22080)]
[New Thread 0xf4dff460 (LWP 22081)]
[New Thread 0xf4aff460 (LWP 22082)]
[New Thread 0xf45ff460 (LWP 22083)]
[New Thread 0xf447f460 (LWP 22084)]

Thread 2 "jmod" received signal SIGSEGV, Segmentation fault.
Java_java_util_zip_Deflater_deflateBytes (env=0xf7d0f460, this=0xf7f9e4e8, addr=<optimized out>, b=0xf7f9e4dc, off=<optimized out>, len=512, flush=0) at ./src/jdk/src/java.base/share/native/libzip/Deflater.c:199
199         strm->next_in = (Bytef *) (in_buf + this_off);
(gdb) bt
#0  Java_java_util_zip_Deflater_deflateBytes (env=0xf7d0f460, this=0xf7f9e4e8, addr=<optimized out>, b=0xf7f9e4dc, off=<optimized out>, len=512, flush=0) at ./src/jdk/src/java.base/share/native/libzip/Deflater.c:199
#1  0x0fb002d0 in ffi_call_SYSV () from /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/lib/server/libjvm.so
#2  0x0faff2f4 in ffi_call () from /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/lib/server/libjvm.so
#3  0x0f2a7dfc in CppInterpreter::native_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:372
#4  0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf4120f68, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#5  CppInterpreter::invoke_method (method=method at entry=0xf4120f68, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#6  0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#7  0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#8  0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf41207a0, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#9  CppInterpreter::invoke_method (method=method at entry=0xf41207a0, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#10 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#11 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#12 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf4120598, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#13 CppInterpreter::invoke_method (method=method at entry=0xf4120598, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#14 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#15 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#16 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf4116f98, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#17 CppInterpreter::invoke_method (method=method at entry=0xf4116f98, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#18 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#19 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#20 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf4116d88, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#21 CppInterpreter::invoke_method (method=method at entry=0xf4116d88, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#22 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#23 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#24 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf4115070, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#25 CppInterpreter::invoke_method (method=method at entry=0xf4115070, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#26 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#27 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#28 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf57daaf0, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#29 CppInterpreter::invoke_method (method=method at entry=0xf57daaf0, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#30 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#31 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#32 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf41083e0, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#33 CppInterpreter::invoke_method (method=method at entry=0xf41083e0, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#34 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#35 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#36 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf41054e0, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#37 CppInterpreter::invoke_method (method=method at entry=0xf41054e0, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#38 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#39 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#40 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf4104fb0, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#41 CppInterpreter::invoke_method (method=method at entry=0xf4104fb0, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#42 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#43 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#44 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf5a15c88, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#45 CppInterpreter::invoke_method (method=method at entry=0xf5a15c88, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#46 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#47 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#48 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf5a15078, this=<optimized out>) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#49 CppInterpreter::invoke_method (method=method at entry=0xf5a15078, entry_point=<optimized out>, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#50 0x0f2a5628 in CppInterpreter::main_loop (recurse=recurse at entry=0, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:150
#51 0x0f2a7138 in CppInterpreter::normal_entry (method=<optimized out>, UNUSED=<optimized out>, __the_thread__=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp:79
#52 0x0f2a412c in ZeroEntry::invoke (__the_thread__=0xf7d0f308, method=method at entry=0xf5a0a870, this=this at entry=0xf5d00140) at ./src/hotspot/src/cpu/zero/vm/entry_zero.hpp:59
#53 CppInterpreter::invoke_method (method=method at entry=0xf5a0a870, entry_point=entry_point at entry=0xf5d00140 "\017*p\300\017*p\300\017*\201P\017*\202p\017", __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/interpreter/cppInterpreter.cpp:66
#54 0x0f9c5630 in StubGenerator::call_stub (call_wrapper=call_wrapper at entry=0xf7f9ea48, result=result at entry=0xf7f9ec60, result_type=result_type at entry=T_INT, method=method at entry=0xf5a0a870, entry_point=entry_point at entry=0xf5d00140 "\017*p\300\017*p\300\017*\201P\017*\202p\017", parameters=<optimized out>, parameter_words=1, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/cpu/zero/vm/stubGenerator_zero.cpp:98
#55 0x0f53cccc in JavaCalls::call_helper (result=result at entry=0xf7f9ec58, method=..., args=args at entry=0xf7f9eb28, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/runtime/javaCalls.cpp:419
#56 0x0f868838 in os::os_exception_wrapper (f=f at entry=0xf53c500 <JavaCalls::call_helper(JavaValue*, methodHandle const&, JavaCallArguments*, Thread*)>, value=value at entry=0xf7f9ec58, method=..., args=args at entry=0xf7f9eb28, thread=thread at entry=0xf7d0f308) at ./src/hotspot/src/os/linux/vm/os_linux.cpp:5130
#57 0x0f53b91c in JavaCalls::call (result=result at entry=0xf7f9ec58, method=..., args=args at entry=0xf7f9eb28, __the_thread__=__the_thread__ at entry=0xf7d0f308) at ./src/hotspot/src/share/vm/runtime/javaCalls.cpp:306
#58 0x0f56b8e8 in jni_invoke_static (env=env at entry=0xf7d0f460, result=result at entry=0xf7f9ec58, method_id=method_id at entry=0xf469d808, args=args at entry=0xf7f9ec18, __the_thread__=__the_thread__ at entry=0xf7d0f308, call_type=JNI_STATIC, receiver=0x0) at ./src/hotspot/src/share/vm/prims/jni.cpp:1119
#59 0x0f58d228 in jni_CallStaticVoidMethod (env=0xf7d0f460, cls=<optimized out>, methodID=0xf469d808) at ./src/hotspot/src/share/vm/prims/jni.cpp:1989
#60 0x0ff24f94 in JavaMain (_args=<optimized out>) at ./src/jdk/src/java.base/share/native/libjli/java.c:546
#61 0x0ff28aa0 in call_continuation (_args=<optimized out>) at ./src/jdk/src/java.base/unix/native/libjli/java_md_solinux.c:895
#62 0x0ff67500 in start_thread () from /lib/powerpc-linux-gnu/libpthread.so.0
#63 0x0fe239b0 in clone () from /lib/powerpc-linux-gnu/libc.so.6
(gdb) p in_buf
$1 = (jbyte *) 0xd43cb264 "\312\376\272\276"
(gdb) p this_off
$2 = 0
(gdb) p strm
$3 = (z_stream *) 0x10006
(gdb) p *strm
Cannot access memory at address 0x10006
(gdb)

--
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer - glaubitz at debian.org
`.
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de
`- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913

From thomas.stuefe at gmail.com  Sun Jun 11 06:45:36 2017
From: thomas.stuefe at gmail.com (Thomas Stüfe)
Date: Sun, 11 Jun 2017 08:45:36 +0200
Subject: Debugging segmentation faults in the JVM on linux-powerpc
In-Reply-To: 
References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> <3a2e4168-9d1e-a81b-6bb7-0b711a44a029@physik.fu-berlin.de> <2730f60d-a854-f849-9027-28c7917d4354@physik.fu-berlin.de>
Message-ID: 

Hi John Paul,

I'll take a look at it; I believe the final SafeFetch implementation for zero was last done by me: https://bugs.openjdk.java.net/browse/JDK-8076185 .

SafeFetch loads data from a potentially unmapped address and is mainly used in error reporting. If the load triggers a segfault, that fault is caught and the function returns a special value to indicate that the address was unmapped.

In debug builds this mechanism is tested at VM startup, which is the segfault you are seeing. If it worked correctly, the signal handler would recognize the segfault as originating from a SafeFetch call and, instead of crashing, would return the special value mentioned above.

On almost all platforms this is implemented via assembler stubs, but as zero aims to be pure C we implemented it using POSIX setjmp. I'll take a look at why this stopped working.

In the meantime, as a workaround, just comment out the calls to test_safefetch32() and test_safefetchN() in StubRoutines::initialize2().
Kind Regards,
Thomas

On Sun, Jun 11, 2017 at 1:02 AM, John Paul Adrian Glaubitz <glaubitz at physik.fu-berlin.de> wrote:

> And more:
>
> (sid-powerpc-sbuild)root at kapitsa:/build/openjdk-9-fz188x/openjdk-9-9~b170# gdb --args /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod -J-XX:+UseSerialGC -J-Xms32M -J-Xmx512M -J-XX:TieredStopAtLevel=1 create --module-version 9-Debian --target-platform 'linux-ppc' --module-path /build/openjdk-9-fz188x/openjdk-9-9~b170/build/images/jmods --exclude '**{_the.*,_*.marker,*.diz,*.debuginfo,*.dSYM/**,*.dSYM,*.pdb,*.map}' --libs /build/openjdk-9-fz188x/openjdk-9-9~b170/build/support/modules_libs/java.management --class-path /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/modules/java.management --legal-notices "/build/openjdk-9-fz188x/openjdk-9-9~b170/build/support/modules_legal/java.base" /build/openjdk-9-fz188x/openjdk-9-9~b170/build/support/jmods/java.management.jmod
> GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
> Copyright (C) 2016 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
> This GDB was configured as "powerpc-linux-gnu".
> Type "show configuration" for configuration details.
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>.
> Find the GDB manual and other documentation resources online at:
> <http://www.gnu.org/software/gdb/documentation/>.
> For help, type "help".
> Type "apropos word" to search for commands related to "word"...
> Reading symbols from /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod...Reading symbols from /build/openjdk-9-fz188x/openjdk-9-9~b170/build/jdk/bin/jmod.debuginfo...done.
> done.
> [...]
(gdb) p in_buf > $1 = (jbyte *) 0xd43cb264 "\312\376\272\276" > (gdb) p this_off > $2 = 0 > (gdb) p strm > $3 = (z_stream *) 0x10006 > (gdb) p *strm > Cannot access memory at address 0x10006 > (gdb) > > -- > .''`. John Paul Adrian Glaubitz > : :' : Debian Developer - glaubitz at debian.org > `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de > `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 > From glaubitz at physik.fu-berlin.de Sun Jun 11 10:53:15 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sun, 11 Jun 2017 12:53:15 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> <3a2e4168-9d1e-a81b-6bb7-0b711a44a029@physik.fu-berlin.de> <2730f60d-a854-f849-9027-28c7917d4354@physik.fu-berlin.de> Message-ID: Hi Thomas! On 06/11/2017 08:45 AM, Thomas St?fe wrote: > I'll take a look at it, I believe the final SafeFetch implementation for zero was last done by me: https://bugs.openjdk.java.net/browse/JDK-8076185 . Thanks. I'm very glad to hear that someone more knowledgeable with the code will have a look. > SafeFetch is used to load data from a potentially unmapped address, mainly used in error reporting. If that load triggers a segfault, that fault is catched and > the function returns a special value to indicate the address was unmapped. Yeah. I have learned that now as well ;). > Its function is in the debug build tested at VM startup, which is the segfault you are seeing. 
If it were to work correctly, the signal handler would recognize the
> segfault as originating from a SafeFetch call and, instead of crashing,
> return the mentioned special value.
>
> On almost all platforms this is implemented via stub assembler, but as
> zero aims to be pure C we implemented this using POSIX setjmp. I'll take
> a look at why this stopped working.
>
> In the meantime, as a workaround, just comment out the calls to
> test_safefetch32() and test_safefetchN() in StubRoutines::initialize2().

That doesn't seem to work, though; it still crashes [1].

I made this change:

--- a/hotspot/src/share/vm/runtime/stubRoutines.cpp~	2017-05-11 15:11:42.000000000 +0300
+++ b/hotspot/src/share/vm/runtime/stubRoutines.cpp	2017-06-11 12:25:56.068000000 +0300
@@ -358,13 +358,6 @@
   test_arraycopy_func(CAST_FROM_FN_PTR(address, Copy::aligned_conjoint_words), sizeof(jlong));
   test_arraycopy_func(CAST_FROM_FN_PTR(address, Copy::aligned_disjoint_words), sizeof(jlong));

-  // test safefetch routines
-  // Not on Windows 32bit until 8074860 is fixed
-#if ! (defined(_WIN32) && defined(_M_IX86))
-  test_safefetch32();
-  test_safefetchN();
-#endif
-
 #endif
 }

But it still segfaults. Are there other places where safefetch*() needs to
be disabled?

Please note: I cannot reproduce the problem on x86_64, which makes me think
that there might be some code guarded out on x86_64 which is only used on
the generic zero targets.

Thanks!
Adrian

> [1] https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=powerpc&ver=9%7Eb170-2&stamp=1497177935&raw=0

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer - glaubitz at debian.org
`.
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From thomas.stuefe at gmail.com Sun Jun 11 18:58:11 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Sun, 11 Jun 2017 20:58:11 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> <3a2e4168-9d1e-a81b-6bb7-0b711a44a029@physik.fu-berlin.de> <2730f60d-a854-f849-9027-28c7917d4354@physik.fu-berlin.de> Message-ID: Hi Adrian, On Sun, Jun 11, 2017 at 12:53 PM, John Paul Adrian Glaubitz < glaubitz at physik.fu-berlin.de> wrote: > Hi Thomas! > > On 06/11/2017 08:45 AM, Thomas St?fe wrote: > > I'll take a look at it, I believe the final SafeFetch implementation for > zero was last done by me: https://bugs.openjdk.java.net/browse/JDK-8076185 > . > > Thanks. I'm very glad to hear that someone more knowledgeable with the > code will have a look. > > > SafeFetch is used to load data from a potentially unmapped address, > mainly used in error reporting. If that load triggers a segfault, that > fault is catched and > > the function returns a special value to indicate the address was > unmapped. > > Yeah. I have learned that now as well ;). > > > Its function is in the debug build tested at VM startup, which is the > segfault you are seeing. If it were to work correctly, signal handler would > recognize the > > segfault to be originating from a safefetch call and not crash but > return the mentioned special value. 
> > > > On almost all platforms this is implemented via stub assembler but as > zero aims to be pure C we did implement this using posix setjmp. I'll take > a look at why > > this stopped working. > > > > In the meantime, as a workaround just comment out the calls to > test_safefetch32() and test_safefetchN() in StubRoutines::initialize2(). > > That doesn't seem to work though, it still crashes [1]. > > I made this change: > > --- a/hotspot/src/share/vm/runtime/stubRoutines.cpp~ 2017-05-11 > 15:11:42.000000000 +0300 > +++ b/hotspot/src/share/vm/runtime/stubRoutines.cpp 2017-06-11 > 12:25:56.068000000 +0300 > @@ -358,13 +358,6 @@ > test_arraycopy_func(CAST_FROM_FN_PTR(address, > Copy::aligned_conjoint_words), sizeof(jlong)); > test_arraycopy_func(CAST_FROM_FN_PTR(address, > Copy::aligned_disjoint_words), sizeof(jlong)); > > - // test safefetch routines > - // Not on Windows 32bit until 8074860 is fixed > -#if ! (defined(_WIN32) && defined(_M_IX86)) > - test_safefetch32(); > - test_safefetchN(); > -#endif > - > #endif > } > > But it still segfaults. Are there other places where safefetch*() needs to > be disabled? > > Sorry, I was wrong, the workaround cannot work. SafeFetch is required to work in a number of other places, so there is no easy way around fixing this. I also am very curious as to why we crash, the SafeFetch mechanism on Zero is quite simple. > Please note: > > I cannot reproduce the problem on x86_64 which made me believe to think > that there might > be some code guarded out on x86_64 which is only used on the generic zero > targets. > Unfortunately, I ran into a number of issues building zero on ppc and x64, which is quite annoying. Zero gets not enough love :) Finally managed to build it on x64, but as you said, SafeFetch works fine here. I was not able to build it on ppc64 (the only ppc machines I have available). Will retry next week. Without reproducing this error, it is difficult to fix. Note that I have no ppc32 (big endian?) machine available. 
If you feel like investigating yourself, feel free. The mechanism for safefetch on zero is quite simple, see the patch for JDK-8076185: http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/633053d4d137 Basically, before the questionable memory access a jump buffer is set up and its pointer is stored in pthread TLS; which then is recovered in the signal handler. Its existence is taken as proof that the SIGSEGV was caused by SafeFetch; and we use longjmp to jump back to before the crash. Simple and unexciting :) Kind Regards, Thomas > Thanks! > Adrian > > > [1] https://buildd.debian.org/status/fetch.php?pkg=openjdk-9& > arch=powerpc&ver=9%7Eb170-2&stamp=1497177935&raw=0 > > -- > .''`. John Paul Adrian Glaubitz > : :' : Debian Developer - glaubitz at debian.org > `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de > `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 > From glaubitz at physik.fu-berlin.de Sun Jun 11 19:15:26 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sun, 11 Jun 2017 21:15:26 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1c5c0bef-691d-98a1-6ad3-82be7db0fe36@physik.fu-berlin.de> <3a2e4168-9d1e-a81b-6bb7-0b711a44a029@physik.fu-berlin.de> <2730f60d-a854-f849-9027-28c7917d4354@physik.fu-berlin.de> Message-ID: On 06/11/2017 08:58 PM, Thomas St?fe wrote: > Sorry, I was wrong, the workaround cannot work. SafeFetch is required to work in a number > of other places, so there is no easy way around fixing this. I also am > very curious as to why we crash, the SafeFetch mechanism on Zero is quite simple. No problem. 
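The setjmp-based scheme Thomas describes above can be sketched, much simplified, like this. All names here (safe_fetch32, g_safefetch_jmpbuf, install_segv_handler) are illustrative, not the actual HotSpot identifiers; the real Zero implementation is in the JDK-8076185 patch linked above:

```cpp
#include <csetjmp>
#include <csignal>

// Per-thread pointer to the active jump buffer; non-null only while a
// safe_fetch32() call is in flight. Its existence is the "proof" that a
// SIGSEGV came from SafeFetch rather than from a genuine crash.
static __thread sigjmp_buf* g_safefetch_jmpbuf = nullptr;

static void segv_handler(int sig) {
  if (g_safefetch_jmpbuf != nullptr) {
    // Jump back to the sigsetjmp() in safe_fetch32(). The saved signal
    // mask is restored because sigsetjmp() was called with savesigs=1.
    siglongjmp(*g_safefetch_jmpbuf, 1);
  }
  // Not our fault: restore the default action and re-raise.
  signal(sig, SIG_DFL);
  raise(sig);
}

void install_segv_handler() {
  struct sigaction sa;
  sa.sa_handler = segv_handler;
  sigemptyset(&sa.sa_mask);
  sa.sa_flags = 0;
  sigaction(SIGSEGV, &sa, nullptr);
  sigaction(SIGBUS, &sa, nullptr);  // some platforms deliver SIGBUS instead
}

// Load *adr if the address is mapped; otherwise return errval.
int safe_fetch32(int* adr, int errval) {
  sigjmp_buf jb;
  g_safefetch_jmpbuf = &jb;
  int result;
  if (sigsetjmp(jb, 1) == 0) {
    result = *(volatile int*)adr;  // may fault; handler longjmps back
  } else {
    result = errval;               // we arrived here via siglongjmp
  }
  g_safefetch_jmpbuf = nullptr;
  return result;
}
```

Note that the `result` variable is only written after the return point of sigsetjmp() is decided, which sidesteps the usual clobbering rules for locals modified between setjmp and longjmp; an over-aggressive optimizer is still a plausible suspect in exactly this kind of code.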
As for the why of the crash, I have discussed this with another friend and expert today and he suggested that gcc might be optimizing too aggressively here. > I cannot reproduce the problem on x86_64 which made me believe to think that there might > be some code guarded out on x86_64 which is only used on the generic zero targets. > > > Unfortunately, I ran into a number of issues building zero on ppc and x64, which is quite annoying. > Zero gets not enough love :) Finally managed to build it on > x64, but as you said, SafeFetch works fine here. Yep. > I was not able to build it on ppc64 (the only ppc machines I have available). Will retry next > week. Without reproducing this error, it is difficult to fix. I can just create an account for you on the PPC machine which I used. It's actually a POWER8 machine, so rebuilding everything is reasonably fast - even for zero. So, if you send me a private email, preferably signed by some trusted key together with your public SSH key and your desired user account, I'm happy to create an account for you on this machine. From your name, I assume you're German, so we can also speak German then. I have been providing accounts to Debian porter boxes for various upstream projects in the past :). > Note that I have no ppc32 (big endian?) machine available. Yes. That's big-endian. Not sure whether little-endian ppc32 actually exists. > If you feel like investigating yourself, feel free. The mechanism for safefetch on zero is quite simple, see the patch for > JDK-8076185: http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/633053d4d137 > > Basically, before the questionable memory access a jump buffer is set up and its pointer is stored in pthread TLS; which then is recovered in the signal > handler. Its existence is taken as proof that the SIGSEGV was caused by SafeFetch; and we use longjmp to jump back to before the crash. Simple and unexciting :) Thanks. I will have a look, too. 
But as I said, if you want, I'll just create an account for you.

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer - glaubitz at debian.org
`. `'   Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de
  `-    GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913

From ioi.lam at oracle.com  Mon Jun 12 04:19:49 2017
From: ioi.lam at oracle.com (Ioi Lam)
Date: Sun, 11 Jun 2017 21:19:49 -0700
Subject: Calling C++ destructor directly in resourceHash.hpp
Message-ID: <032526f1-b90a-b32e-f039-05d6e6ec5d77@oracle.com>

Yet another "you can do that in C++??" moment.

I am looking at these two functions in "utilities/resourceHash.hpp":

  bool put(K const& key, V const& value) {
    unsigned hv = HASH(key);
    Node** ptr = lookup_node(hv, key);
    if (*ptr != NULL) {
      (*ptr)->_value = value;    // <----------- point (a)
      return false;
    } else {
      *ptr = new (ALLOC_TYPE, MEM_TYPE) Node(hv, key, value);
      return true;
    }
  }

  bool remove(K const& key) {
    unsigned hv = HASH(key);
    Node** ptr = lookup_node(hv, key);
    Node* node = *ptr;
    if (node != NULL) {
      *ptr = node->_next;
      node->_value.~V();         // <------------ point (b): add this??
      if (ALLOC_TYPE == C_HEAP) {
        delete node;
      }
      return true;
    }
    return false;
  }

At point (a), the assignment could invoke a copy constructor V::V(const V&),
which could do complex allocation. Shouldn't we add a corresponding
destructor call at point (b), so V::~V() can be called to perform the
appropriate deallocation?

Similarly, in ~ResourceHashtable(), should we also add the V::~V() calls?

... and also call V::~V() at point (a) to destroy the old value?
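For illustration, here is a toy value type (not HotSpot code; all names are invented) that makes points (a) and (b) concrete: if V's copy-assignment operator releases the old resource, overwriting at (a) leaks nothing, and `delete node` already runs ~V() for the member, so an extra explicit `_value.~V()` before the delete would destroy it twice:

```cpp
#include <cstdlib>
#include <cstring>

// Toy value type owning a heap buffer. A live-instance counter lets us
// watch whether every construction is matched by a destruction.
struct V {
  static int live;          // live instances = constructions - destructions
  char* buf;
  explicit V(const char* s) : buf(strdup(s)) { ++live; }
  V(const V& o) : buf(strdup(o.buf)) { ++live; }  // copy constructor
  V& operator=(const V& o) {                      // point (a) runs this
    if (this != &o) {
      free(buf);            // release the OLD resource here...
      buf = strdup(o.buf);  // ...before taking over the new one
    }
    return *this;
  }
  ~V() { free(buf); --live; }  // delete of the owner runs this transparently
};
int V::live = 0;

// Toy stand-in for the hashtable Node holding a V by value.
struct Node {
  V value;
  explicit Node(const V& v) : value(v) {}
};
```

With operator= written this way, the put()/remove() pair above is balanced without any explicit destructor call.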
Thanks
- Ioi

From john.r.rose at oracle.com  Mon Jun 12 05:20:31 2017
From: john.r.rose at oracle.com (John Rose)
Date: Sun, 11 Jun 2017 22:20:31 -0700
Subject: Calling C++ destructor directly in resourceHash.hpp
In-Reply-To: <032526f1-b90a-b32e-f039-05d6e6ec5d77@oracle.com>
References: <032526f1-b90a-b32e-f039-05d6e6ec5d77@oracle.com>
Message-ID:

On Jun 11, 2017, at 9:19 PM, Ioi Lam wrote:
>
> I am looking at these two functions in "utilities/resourceHash.hpp":

You are worried about the V destructor running, where the V struct is a
member of Node (Node::_value). In the normal case, running the destructor
of Node transparently runs the destructors of the K and V members of Node.

The place where dropped destructors can happen in this sort of pattern is
when you overwrite a variable, which is point (a) in your example. Your
V::operator= is responsible for retiring any resources used by the previous
value of Node::_value which are not going to be used by the new value.

Eventually, when "delete node" happens, whatever resources were in use in
Node::_value will be freed.

So I don't think you have to do anything with point (b). Your problem, if
you have one, is operator=. Those are hard to get right.

- John

From thomas.stuefe at gmail.com  Mon Jun 12 06:52:28 2017
From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=)
Date: Mon, 12 Jun 2017 08:52:28 +0200
Subject: stringStream in UL and nested ResourceMarks
In-Reply-To:
References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com>
 <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com>
Message-ID:

Hi guys,

After our Friday discussion I'll try to collect my thoughts about how to
solve this. Could you please check whether what I think makes sense:

-> LogStream, an outputStream child, is used to give us outputStream
compatibility for writing to UL. It assembles a line and, once complete,
sends it off to the UL backend. It needs memory to do this, and it often
uses resource memory for this. This is a problem if we cross resource
marks.
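The line-assembly behaviour described here can be sketched independently of UL. In this toy version (illustrative names; a heap-backed std::string stands in for a malloc'd buffer) the line buffer lives on the C heap and is released by the destructor, so no ResourceMark is involved:

```cpp
#include <cstring>
#include <functional>
#include <string>

// Sketch of a line-assembling stream: collects characters in a private
// heap buffer and hands one complete line at a time to a backend sink.
// Because the buffer is heap-allocated and freed in the destructor, the
// stream can safely cross ResourceMark boundaries.
class LineStream {
 public:
  explicit LineStream(std::function<void(const std::string&)> sink)
      : _sink(std::move(sink)) {}
  ~LineStream() {
    if (!_line.empty()) _sink(_line);  // flush a trailing partial line
  }
  void write(const char* s, size_t len) {
    for (size_t i = 0; i < len; i++) {
      if (s[i] == '\n') {   // line complete: send it off to the backend
        _sink(_line);
        _line.clear();
      } else {
        _line.push_back(s[i]);
      }
    }
  }
  void print(const char* s) { write(s, strlen(s)); }

 private:
  std::string _line;  // heap-backed line buffer (no resource area)
  std::function<void(const std::string&)> _sink;
};
```

The sink callback plays the role of the UL backend; everything up to it is just buffering.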
-> We do not want to use resource memory. Alternatives would be
a) using a fixed-size char array as a member in LogStream. This would be a
very simple solution, but it imposes a maximum line length, which we do not
want (?), and, where LogStream is used as a stack object, it may consume
too much stack space.
b) using malloc-ed memory (perhaps in combination with a small fixed-size
char array for short lines, to reduce the number of malloc invocations).
But that requires us to be able to free the malloced memory, so the
destructor of LogStream must be called.

-> If possible, we want the UL API to stay the same, so we want to avoid
having to fix up many UL call sites.

Let's look at alternative (b) more closely, because I think (a) may be out
of the question (?).

------

LogStreams are used in three ways which I can see:

A) As a stack object:

  LogTarget(Error, logging) target;
  LogStreamCHeap errstream(target);
  return LogConfiguration::parse_log_arguments(_gc_log_filename, gc_conf,
                                               NULL, NULL, &errstream);

No problem here: LogStream is an automatic variable and its destructor will
be called when it goes out of scope, so we can free malloced memory.

B) More often, by calling the static method LogImpl<>::xxx_stream():

Either like this:

1) Log(jit, compilation) log;
   if (log.is_debug()) {
     print_impl(log.debug_stream(), ... );
   }

Or like this:

2) outputStream* st = Log(stackwalk)::debug_stream();

(So, either via an instance of LogImpl<>, or directly via the class::.)

Here, LogImpl<>::debug_stream() creates a LogStream with
ResourceObj::operator new(). For this LogStream the destructor cannot be
called (not without rewriting the call site).

----

If we want to keep the syntax of the call sites as much as possible, we
need to keep the xxx_stream() methods as they are.

But instead of allocating the LogStream objects each time anew, we could
change LogImpl<> to manage pointers to its LogStream objects.
This would help in all those cases where an instance of LogImpl<> is placed
on the stack, which seems to be the majority of call sites. So, if there is
a LogImpl<> object on the stack, like in (B1), its destructor would run
when it goes out of scope and it would clean up its LogStream objects.

Care must be taken that the construction of the LogImpl<> object stays
cheap, because we do not want to pay for logging if we do not log. Which
means that construction of the LogStream objects must be deferred to when
they are actually needed.

For instance:

LogImpl<> {
  outputStream* _debugStream;
  outputStream* _errorStream;
  ...
public:
  LogImpl<>() : _debugStream(NULL), _errorStream(NULL), ... {}
  ~LogImpl<>() {
    if (_debugStream) { cleanup debugStream; }
    if (_errorStream) { cleanup errorStream; }
    ...
  }
  debugStream* debug_stream() {
    if (!_debugStream) _debugStream = new LogStream();
    return _debugStream;
  }
};

Unfortunately, this would not help for cases where xxx_stream() is called
directly as a static method (Log(..)::xx_stream()) - I do not see a
possibility to keep this syntax. If we go with the "LogImpl<> manages its
associated LogStream objects" solution, these sites would have to be
rewritten:

From:
  outputStream* st = Log(stackwalk)::debug_stream();
to:
  Log(stackwalk) log;
  outputStream* st = log.debug_stream();

---

What do you think? Does any of this make sense?

Also, I opened https://bugs.openjdk.java.net/browse/JDK-8181917 to track
this issue. Note that even though I assigned this to myself for now, if any
of you would rather take this over, that is fine by me.

Kind regards, Thomas

On Sat, Jun 10, 2017 at 8:33 AM, Thomas Stüfe wrote:

> Yes, this seems to be the same issue.
>
> ..Thomas
>
> On Fri, Jun 9, 2017 at 10:00 PM, Daniel D.
Daugherty < > daniel.daugherty at oracle.com> wrote: > >> This bug seems relevant to this discussion: >> >> JDK-8181807 Graal internal error "StringStream is re-allocated with a >> different ResourceMark" >> https://bugs.openjdk.java.net/browse/JDK-8181807 >> >> Dan >> >> >> >> On 6/9/17 1:48 AM, Thomas St?fe wrote: >> >>> Hi Stefan, >>> >>> just a small question to verify that I understood everything correctly. >>> >>> The LogStream classes (LogStreamBase and children) are basically the >>> write-to-UL frontend classes, right? Their purpose is to collect input >>> via >>> various print.. methods until \n is encountered, then pipe the assembled >>> line to the UL backend. To do that it needs a backing store for the >>> to-be-assembled-line, and this is the whole reason stringStream is used >>> (via the "streamClass" template parameter for LogStreamBase)? >>> >>> So, basically the whole rather involved class tree rooted at >>> LogStreamBase >>> only deals with the various ways that one line backing store is >>> allocated? >>> Including LogStream itself, which contains - I was very surprised to see >>> - >>> an embedded ResourceMark (stringStreamWithResourceMark). There are no >>> other >>> reasons for this ResourceMark? >>> >>> I am currently experimenting with changing LogStream to use a simple >>> malloc'd backing store, in combination with a small fixed size member >>> buffer for small lines; I'd like to see if that had any measurable >>> negative >>> performance impact. The coding is halfway done already, but the callers >>> need fixing up, because due to my change LogStreamBase children cannot be >>> allocated with new anymore, because of the ResourceObj-destructor >>> problem. >>> >>> What do you think, is this worthwhile and am I overlooking something >>> obvious? The UL coding is quite large, after all. 
>>> >>> Kind Regards, Thomas >>> >>> >>> >>> >>> >>> >>> >>> >>> On Wed, Jun 7, 2017 at 12:25 PM, Thomas St?fe >>> wrote: >>> >>> Hi Stefan, >>>> >>>> On Wed, Jun 7, 2017 at 12:17 PM, Stefan Karlsson < >>>> stefan.karlsson at oracle.com> wrote: >>>> >>>> Hi Thomas, >>>>> >>>>> On 2017-06-07 11:15, Thomas St?fe wrote: >>>>> >>>>> Hi Stefan, >>>>>> >>>>>> I saw this, but I also see LogStreamNoResourceMark being used as a >>>>>> default for the (trace|debug|info|warning|error)_stream() methods of >>>>>> Log. In this form it is used quite a lot. >>>>>> >>>>>> Looking further, I see that one cannot just exchange >>>>>> LogStreamNoResourceMark with LogStreamCHeap, because there are hidden >>>>>> usage >>>>>> conventions I was not aware of: >>>>>> >>>>>> Just to be clear, I didn't propose that you did a wholesale >>>>> replacement >>>>> of LogStreamNoResourceMark with LogStreamCHeap. I merely pointed out >>>>> the >>>>> existence of this class in case you had missed it. >>>>> >>>>> >>>>> Sure! I implied this myself with my original post which proposed to >>>> replace the resource area allocation inside stringStream with malloc'd >>>> memory. >>>> >>>> >>>> LogStreamNoResourceMark is allocated with new() in create_log_stream(). >>>>>> LogStreamNoResourceMark is an outputStream, which is a ResourceObj. >>>>>> In its >>>>>> current form ResourceObj cannot be deleted, so destructors for >>>>>> ResourceObj >>>>>> child cannot be called. >>>>>> >>>>>> By default ResourceObj classes are allocated in the resource area, but >>>>> the class also supports CHeap allocations. 
For example, see some of the >>>>> allocations of GrowableArray instances: >>>>> >>>>> _deallocate_list = new (ResourceObj::C_HEAP, mtClass) >>>>> GrowableArray(100, true); >>>>> >>>>> These can still be deleted: >>>>> >>>>> delete _deallocate_list; >>>>> >>>>> >>>>> So, we could not use malloc in the stringStream - or exchange >>>>>> stringStream for bufferedStream - because we would need a non-empty >>>>>> destructor to free the malloc'd memory, and that destructor cannot >>>>>> exist. >>>>>> >>>>>> Looking further, I see that this imposes subtle usage restrictions for >>>>>> UL: >>>>>> >>>>>> LogStreamNoResourceMark objects are used via "log.debug_stream()" or >>>>>> similar. For example: >>>>>> >>>>>> codecache_print(log.debug_stream(), /* detailed= */ false); >>>>>> >>>>>> debug_stream() will allocate a LogStreamNoResourceMark object which >>>>>> lives in the resourcearea. This is a bit surprising, because >>>>>> "debug_stream()" feels like it returns a singleton or a member >>>>>> variable of >>>>>> log. >>>>>> >>>>>> IIRC, this was done to: >>>>> >>>>> 1) break up a cyclic dependencies between logStream.hpp and log.hpp >>>>> >>>>> 2) Not have log.hpp depend on the stream.hpp. This used to be >>>>> important, >>>>> but the includes in stream.hpp has been fixed so this might be a >>>>> non-issue. >>>>> >>>>> >>>>> If one wants to use LogStreamCHeap instead, it must not be created with >>>>>> new() - which would be a subtle memory leak because the destructor >>>>>> would >>>>>> never be called - but instead on the stack as automatic variable: >>>>>> >>>>>> LogStreamCHeap log_stream(log); >>>>>> log_stream.print("hallo"); >>>>>> >>>>>> I may understand this wrong, but if this is true, this is quite a >>>>>> difficult API. >>>>>> >>>>>> Feel free to rework this and propose a simpler model. Anything that >>>>> would >>>>> simplify this would be helpful. >>>>> >>>>> >>>>> I will mull over this a bit (and I would be thankful for other >>>> viewpoints >>>> as well). 
A bottomline question which is difficult to answer is whether >>>> folks value the slight performance increase of resource area backed >>>> memory >>>> allocation in stringStream more than simplicity and robustness which >>>> would >>>> come with switching to malloced memory. And then, there is the second >>>> question of why outputStream objects should be ResourceObj at all; for >>>> me, >>>> they feel much more at home as stack objects. They themselves are small >>>> and >>>> do not allocate a lot of memory (if they do, they do it dynamically). >>>> And >>>> they are not allocated in vast amounts... >>>> >>>> Lets see what others think. >>>> >>>> >>>> I have two classes which look like siblings but >>>>> >>>>> LogStreamCHeap can only be allocated on the local stack - otherwise >>>>>> I'll >>>>>> get a memory leak - while LogStreamNoResourceMark gets created in the >>>>>> resource area, which prevents its destructor from running and may >>>>>> fill the >>>>>> resource area up with temporary stream objects if used in a certain >>>>>> way. >>>>>> >>>>>> Have I understood this right so far? If yes, would it be possible to >>>>>> simplify this? >>>>>> >>>>>> I think you understand the code correctly, and yes, there are probably >>>>> ways to make this simpler. >>>>> >>>>> >>>>> Thanks for your input! 
>>>> >>>> Kind regards, Thomas >>>> >>>> >>>> Thanks, >>>>> StefanK >>>>> >>>>> >>>>> Kind Regards, Thomas >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Wed, Jun 7, 2017 at 9:20 AM, Stefan Karlsson < >>>>>> stefan.karlsson at oracle.com > >>>>>> wrote: >>>>>> >>>>>> Hi Thomas, >>>>>> >>>>>> >>>>>> On 2017-06-06 11:40, Thomas St?fe wrote: >>>>>> >>>>>> Hi all, >>>>>> >>>>>> In our VM we recently hit something similar to >>>>>> https://bugs.openjdk.java.net/browse/JDK-8167995 >>>>>> or >>>>>> https://bugs.openjdk.java.net/browse/JDK-8149557 >>>>>> : >>>>>> >>>>>> A stringStream* was handed down to nested print functions >>>>>> which >>>>>> create >>>>>> their own ResourceMarks and, while being down the stack under >>>>>> the scope of >>>>>> that new ResourceMark, the stringStream needed to enlarge its >>>>>> internal >>>>>> buffer. This is the situation the assert inside >>>>>> stringStream::write() >>>>>> attempts to catch >>>>>> (assert(Thread::current()->current_resource_mark() == >>>>>> rm); in our case this was a release build, so we just >>>>>> crashed. >>>>>> >>>>>> The solution for both JDK-816795 and JDK-8149557 seemed to >>>>>> be to >>>>>> just >>>>>> remove the offending ResourceMarks, or shuffle them around, >>>>>> but >>>>>> generally >>>>>> this is not an optimal solution, or? >>>>>> >>>>>> We actually question whether using resource area memory is a >>>>>> good idea for >>>>>> outputStream chuild objects at all: >>>>>> >>>>>> outputStream instances typically travel down the stack a lot >>>>>> by >>>>>> getting >>>>>> handed sub-print-functions, so they run danger of crossing >>>>>> resource mark >>>>>> boundaries like above. The sub functions are usually >>>>>> oblivious >>>>>> to the type >>>>>> of outputStream* handed down, and as well they should be. >>>>>> And if >>>>>> the >>>>>> allocate resource area memory themselves, it makes sense to >>>>>> guard them with >>>>>> ResourceMark in case they are called in a loop. 
>>>>>> >>>>>> The assert inside stringStream::write() is not a real help >>>>>> either, because >>>>>> whether or not it hits depends on pure luck - whether the >>>>>> realloc code path >>>>>> is hit just in the right moment while printing. Which >>>>>> depends on >>>>>> the buffer >>>>>> size and the print history, which is variable, especially >>>>>> with >>>>>> logging. >>>>>> >>>>>> The only advantage to using bufferedStream (C-Heap) is a >>>>>> small >>>>>> performance >>>>>> improvement when allocating. The question is whether this is >>>>>> really worth >>>>>> the risk of using resource area memory in this fashion. >>>>>> Especially in the >>>>>> context of UL where we are about to do expensive IO >>>>>> operations >>>>>> (writing to >>>>>> log file) or may lock (os::flockfile). >>>>>> >>>>>> Also, the difference between bufferedStream and stringStream >>>>>> might be >>>>>> reduced by improving bufferedStream (e.g. by using a member >>>>>> char >>>>>> array for >>>>>> small allocations and delay using malloc() for larger >>>>>> arrays.) >>>>>> >>>>>> What you think? Should we get rid of stringStream and only >>>>>> use >>>>>> an (possibly >>>>>> improved) bufferedStream? I also imagine this could make UL >>>>>> coding a bit >>>>>> simpler. >>>>>> >>>>>> >>>>>> Not answering your questions, but I want to point out that we >>>>>> already have a UL stream that uses C-Heap: >>>>>> >>>>>> logging/logStream.hpp: >>>>>> >>>>>> // The backing buffer is allocated in CHeap memory. 
>>>>>> typedef LogStreamBase LogStreamCHeap; >>>>>> >>>>>> StefanK >>>>>> >>>>>> >>>>>> >>>>>> Thank you, >>>>>> >>>>>> Kind Regards, Thomas >>>>>> >>>>>> >>>>>> >>>>>> >> > From thomas.stuefe at gmail.com Mon Jun 12 07:16:31 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Jun 2017 09:16:31 +0200 Subject: Calling C++ destructor directly in resourceHash.hpp In-Reply-To: References: <032526f1-b90a-b32e-f039-05d6e6ec5d77@oracle.com> Message-ID: On Mon, Jun 12, 2017 at 7:20 AM, John Rose wrote: > On Jun 11, 2017, at 9:19 PM, Ioi Lam wrote: > > > > I am looking at these two functions in "utilities/resourceHash.hpp": > > You are worried about the V destructor running, > where the V struct is a member of Node (Node::_value). > In the normal case, running the destructor of Node > transparently runs the destructors of the K and V > members of Node. > > The place where dropped destructors can happen > in this sort of pattern is when you overwrite a variable, > which is point (a) in your example. Your V::operator= > is responsible for retiring any resources used by the > previous value of Node::_value which are not going to > be used by the new value. > > Eventually, when "delete node" happens, whatever > resources were in use in Node::_value will be freed. > > So I don't think you have to do anything with point (b). > Your problem, if you have one, is operator=. Those are > hard to get right. > > ? John > > Also, all usages of ResourceHashTable I can see have either primitive data types as value type or they do their own cleanup (e.g. Handle in jvmci). Do you see any concrete usage where a real object needing destruction is placed in a ResourceHashTable? 
..Thomas From marcus.larsson at oracle.com Mon Jun 12 08:28:54 2017 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Mon, 12 Jun 2017 10:28:54 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> Message-ID: <4a9cc5cf-6e6c-ffeb-32c7-a5428e706fe9@oracle.com> Hi, Thanks for looking into this. On 2017-06-12 08:52, Thomas St?fe wrote: > Hi guys, > > After our friday discussion I try to collect my thoughts about how to > solve this. Could you please check if what I think makes sense: > > -> LogStream, an outputStream child, is used to give us outputStream > compatibility to write to UL. It assembles a line and, once complete, > sends it off to UL backend. It needs memory to do this, and it often > uses resource memory for this. This is a problem if we cross resource > marks. > > -> We do not want to use resource memory. Alternatives would be > a) using a fixed sized char array as member in LogStream. This would > be a very simple solution. But that imposes a max. line length, which > we do not want(?) and, where LogStream is used as a Stack object, may > consume too much stack space. > b) using malloc-ed memory (perhaps in combination with having a small > fixed sized char array for small lines to reduce the number of malloc > invocations). But that requires us to be able to free the malloced > memory, so the destructor of LogStream must be called. > > -> If possible, we want the UL API to stay the same, so we want to > avoid having to fix up many UL call sites. > > Lets look at alternative (b) closer, because I think (a) may be out of > question (?). Yes, I believe (a) is not an option. 
> > ------ > > LogStreams are used in three ways which I can see: > > A) As a stack object: > LogTarget(Error, logging) target; > LogStreamCHeap errstream(target); > return LogConfiguration::parse_log_arguments(_gc_log_filename, > gc_conf, NULL, NULL, &errstream); > > No problem here: LogStream is an automatic variable and its destructor > will be called when it goes out of scope. So we can free malloced memory. > > B) More often, by calling the static method LogImpl<>::xxx_stream(): > > Either like this: > > 1) Log(jit, compilation) log; > if (log.is_debug()) { > print_impl(log.debug_stream(), ... ); > } > > Or like this: > > 2) outputStream* st = Log(stackwalk)::debug_stream(); > > (So, either via an instance of LogImpl<>, or directly via the class::). > > Here, LogImpl<>::debug_stream() creates a LogStream with > ResourceObj::operator new(). For this LogStream the destructor cannot > be called (not without rewriting the call site). > > ---- > > If we want to keep the syntax of the call sites as much as possible, > we need to keep the xxxx_stream() methods as they are. > > But instead of allocating the LogStream objects each time anew, we > could change LogImpl<> to manage pointers to its LogStream objects. > This would help in all those cases where an instance of LogImpl<> is > placed on the stack, which seems to be the majority of call sites. So, > if there is a LogImpl<> object on the stack, like in (B1), its > destructor would run when it goes out of scope and it would clean up > its LogStream objects. > > Care must be taken that the construction of the LogImpl<> object stays > cheap, because we do not want to pay for logging if we do not log. > Which means that construction of the LogStream objects must be > deferred to when they are actually needed. > > for instance: > > LogImpl<> { > outputStream* _debugStream; > outputStream* _errorStream; > ... > public: > LogImpl<> :: _debugStream(NULL), _errorStream(NULL), ... 
{} > ~LogImpl<> () { > if (_debugStream) { cleanup debugStream; } > if (_errorStream) { cleanup errorStream; } > ... > } > debugStream* debug_stream() { > if (!_debugStream) _debugStream = new LogStream(); > return _debugStream; > } > } > }; > > Unfortunately, this would not help for cases where xxx_stream() is > called directly as static method (Log(..)::xx_stream()) - I do not see > a possibility to keep this syntax. If we go with the "LogImpl<> > manages its associated LogStream objects" solution, these sites would > have to be rewritten: > > From: > outputStream* st = Log(stackwalk)::debug_stream(); > to: > Log(stackwalk) log; > outputStream* st = log.debug_stream(); > > --- > > What do you think? Does any of this make sense? I think it makes sense. IMO, we could as well get rid of the xxx_info() functions as a whole, and instead always require stack objects (LogStream instance). We would need to change some of the call sites with the proposed solution, so why not change all of them while we're at it. It saves complexity and makes it more obvious to the user where and how the memory is allocated. Thanks, Marcus > > > Also, I opened https://bugs.openjdk.java.net/browse/JDK-8181917 to > track this issue. Note that even though I assigned this to myself for > now, if any of you rather wants to take it this over it is fine by me. > > > Kind regards, Thomas > > > > > > On Sat, Jun 10, 2017 at 8:33 AM, Thomas St?fe > wrote: > > Yes, this seems to be the same issue. > > ..Thomas > > On Fri, Jun 9, 2017 at 10:00 PM, Daniel D. Daugherty > > > wrote: > > This bug seems relevant to this discussion: > > JDK-8181807 Graal internal error "StringStream is re-allocated > with a different ResourceMark" > https://bugs.openjdk.java.net/browse/JDK-8181807 > > > Dan > > > > On 6/9/17 1:48 AM, Thomas St?fe wrote: > > Hi Stefan, > > just a small question to verify that I understood > everything correctly. 
> > The LogStream classes (LogStreamBase and children) are > basically the > write-to-UL frontend classes, right? Their purpose is to > collect input via > various print.. methods until \n is encountered, then pipe > the assembled > line to the UL backend. To do that it needs a backing > store for the > to-be-assembled-line, and this is the whole reason > stringStream is used > (via the "streamClass" template parameter for LogStreamBase)? > > So, basically the whole rather involved class tree rooted > at LogStreamBase > only deals with the various ways that one line backing > store is allocated? > Including LogStream itself, which contains - I was very > surprised to see - > an embedded ResourceMark (stringStreamWithResourceMark). > There are no other > reasons for this ResourceMark? > > I am currently experimenting with changing LogStream to > use a simple > malloc'd backing store, in combination with a small fixed > size member > buffer for small lines; I'd like to see if that had any > measurable negative > performance impact. The coding is halfway done already, > but the callers > need fixing up, because due to my change LogStreamBase > children cannot be > allocated with new anymore, because of the > ResourceObj-destructor problem. > > What do you think, is this worthwhile and am I overlooking > something > obvious? The UL coding is quite large, after all. > > Kind Regards, Thomas > > > > > > > > > On Wed, Jun 7, 2017 at 12:25 PM, Thomas St?fe > > > wrote: > > Hi Stefan, > > On Wed, Jun 7, 2017 at 12:17 PM, Stefan Karlsson < > stefan.karlsson at oracle.com > > wrote: > > Hi Thomas, > > On 2017-06-07 11:15, Thomas St?fe wrote: > > Hi Stefan, > > I saw this, but I also see > LogStreamNoResourceMark being used as a > default for the > (trace|debug|info|warning|error)_stream() > methods of > Log. In this form it is used quite a lot. 
> > Looking further, I see that one cannot just > exchange > LogStreamNoResourceMark with LogStreamCHeap, > because there are hidden usage > conventions I was not aware of: > > Just to be clear, I didn't propose that you did a > wholesale replacement > of LogStreamNoResourceMark with LogStreamCHeap. I > merely pointed out the > existence of this class in case you had missed it. > > > Sure! I implied this myself with my original post > which proposed to > replace the resource area allocation inside > stringStream with malloc'd > memory. > > > LogStreamNoResourceMark is allocated with > new() in create_log_stream(). > LogStreamNoResourceMark is an outputStream, > which is a ResourceObj. In its > current form ResourceObj cannot be deleted, so > destructors for ResourceObj > child cannot be called. > > By default ResourceObj classes are allocated in > the resource area, but > the class also supports CHeap allocations. For > example, see some of the > allocations of GrowableArray instances: > > _deallocate_list = new (ResourceObj::C_HEAP, mtClass) > GrowableArray(100, true); > > These can still be deleted: > > delete _deallocate_list; > > > So, we could not use malloc in the > stringStream - or exchange > stringStream for bufferedStream - because we > would need a non-empty > destructor to free the malloc'd memory, and > that destructor cannot exist. > > Looking further, I see that this imposes > subtle usage restrictions for > UL: > > LogStreamNoResourceMark objects are used via > "log.debug_stream()" or > similar. For example: > > codecache_print(log.debug_stream(), /* > detailed= */ false); > > debug_stream() will allocate a > LogStreamNoResourceMark object which > lives in the resourcearea. This is a bit > surprising, because > "debug_stream()" feels like it returns a > singleton or a member variable of > log. > > IIRC, this was done to: > > 1) break up a cyclic dependencies between > logStream.hpp and log.hpp > > 2) Not have log.hpp depend on the stream.hpp. 
This > used to be important, > but the includes in stream.hpp has been fixed so > this might be a non-issue. > > > If one wants to use LogStreamCHeap instead, it > must not be created with > new() - which would be a subtle memory leak > because the destructor would > never be called - but instead on the stack as > automatic variable: > > LogStreamCHeap log_stream(log); > log_stream.print("hallo"); > > I may understand this wrong, but if this is > true, this is quite a > difficult API. > > Feel free to rework this and propose a simpler > model. Anything that would > simplify this would be helpful. > > > I will mull over this a bit (and I would be thankful > for other viewpoints > as well). A bottomline question which is difficult to > answer is whether > folks value the slight performance increase of > resource area backed memory > allocation in stringStream more than simplicity and > robustness which would > come with switching to malloced memory. And then, > there is the second > question of why outputStream objects should be > ResourceObj at all; for me, > they feel much more at home as stack objects. They > themselves are small and > do not allocate a lot of memory (if they do, they do > it dynamically). And > they are not allocated in vast amounts... > > Lets see what others think. > > > I have two classes which look like siblings but > > LogStreamCHeap can only be allocated on the > local stack - otherwise I'll > get a memory leak - while > LogStreamNoResourceMark gets created in the > resource area, which prevents its destructor > from running and may fill the > resource area up with temporary stream objects > if used in a certain way. > > Have I understood this right so far? If yes, > would it be possible to > simplify this? > > I think you understand the code correctly, and > yes, there are probably > ways to make this simpler. > > > Thanks for your input! 
> > Kind regards, Thomas > > > Thanks, > StefanK > > > Kind Regards, Thomas > > > > > > On Wed, Jun 7, 2017 at 9:20 AM, Stefan Karlsson < > stefan.karlsson at oracle.com > > >> wrote: > > Hi Thomas, > > > On 2017-06-06 11:40, Thomas St?fe wrote: > > Hi all, > > In our VM we recently hit something > similar to > https://bugs.openjdk.java.net/browse/JDK-8167995 > > > > > or > https://bugs.openjdk.java.net/browse/JDK-8149557 > > > >: > > A stringStream* was handed down to > nested print functions which > create > their own ResourceMarks and, while > being down the stack under > the scope of > that new ResourceMark, the > stringStream needed to enlarge its > internal > buffer. This is the situation the > assert inside > stringStream::write() > attempts to catch > (assert(Thread::current()->current_resource_mark() > == > rm); in our case this was a release > build, so we just crashed. > > The solution for both JDK-816795 and > JDK-8149557 seemed to be to > just > remove the offending ResourceMarks, > or shuffle them around, but > generally > this is not an optimal solution, or? > > We actually question whether using > resource area memory is a > good idea for > outputStream chuild objects at all: > > outputStream instances typically > travel down the stack a lot by > getting > handed sub-print-functions, so they > run danger of crossing > resource mark > boundaries like above. The sub > functions are usually oblivious > to the type > of outputStream* handed down, and as > well they should be. And if > the > allocate resource area memory > themselves, it makes sense to > guard them with > ResourceMark in case they are called > in a loop. > > The assert inside > stringStream::write() is not a real help > either, because > whether or not it hits depends on > pure luck - whether the > realloc code path > is hit just in the right moment while > printing. Which depends on > the buffer > size and the print history, which is > variable, especially with > logging. 
> > The only advantage to using > bufferedStream (C-Heap) is a small > performance > improvement when allocating. The > question is whether this is > really worth > the risk of using resource area > memory in this fashion. > Especially in the > context of UL where we are about to > do expensive IO operations > (writing to > log file) or may lock (os::flockfile). > > Also, the difference between > bufferedStream and stringStream > might be > reduced by improving bufferedStream > (e.g. by using a member char > array for > small allocations and delay using > malloc() for larger arrays.) > > What you think? Should we get rid of > stringStream and only use > an (possibly > improved) bufferedStream? I also > imagine this could make UL > coding a bit > simpler. > > > Not answering your questions, but I want > to point out that we > already have a UL stream that uses C-Heap: > > logging/logStream.hpp: > > // The backing buffer is allocated in > CHeap memory. > typedef LogStreamBase > LogStreamCHeap; > > StefanK > > > > Thank you, > > Kind Regards, Thomas > > > > > > From thomas.stuefe at gmail.com Mon Jun 12 10:16:02 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Jun 2017 12:16:02 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: <4a9cc5cf-6e6c-ffeb-32c7-a5428e706fe9@oracle.com> References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> <4a9cc5cf-6e6c-ffeb-32c7-a5428e706fe9@oracle.com> Message-ID: Hi Marcus, On Mon, Jun 12, 2017 at 10:28 AM, Marcus Larsson wrote: > Hi, > > Thanks for looking into this. > > On 2017-06-12 08:52, Thomas St?fe wrote: > > Hi guys, > > After our friday discussion I try to collect my thoughts about how to > solve this. Could you please check if what I think makes sense: > > -> LogStream, an outputStream child, is used to give us outputStream > compatibility to write to UL. 
It assembles a line and, once complete, sends > it off to UL backend. It needs memory to do this, and it often uses > resource memory for this. This is a problem if we cross resource marks. > > -> We do not want to use resource memory. Alternatives would be > a) using a fixed sized char array as member in LogStream. This would be a > very simple solution. But that imposes a max. line length, which we do not > want(?) and, where LogStream is used as a Stack object, may consume too > much stack space. > b) using malloc-ed memory (perhaps in combination with having a small > fixed sized char array for small lines to reduce the number of malloc > invocations). But that requires us to be able to free the malloced memory, > so the destructor of LogStream must be called. > > -> If possible, we want the UL API to stay the same, so we want to avoid > having to fix up many UL call sites. > > Lets look at alternative (b) closer, because I think (a) may be out of > question (?). > > > Yes, I believe (a) is not an option. > > > > ------ > > LogStreams are used in three ways which I can see: > > A) As a stack object: > LogTarget(Error, logging) target; > LogStreamCHeap errstream(target); > return LogConfiguration::parse_log_arguments(_gc_log_filename, > gc_conf, NULL, NULL, &errstream); > > No problem here: LogStream is an automatic variable and its destructor > will be called when it goes out of scope. So we can free malloced memory. > > B) More often, by calling the static method LogImpl<>::xxx_stream(): > > Either like this: > > 1) Log(jit, compilation) log; > if (log.is_debug()) { > print_impl(log.debug_stream(), ... ); > } > > Or like this: > > 2) outputStream* st = Log(stackwalk)::debug_stream(); > > (So, either via an instance of LogImpl<>, or directly via the class::). > > Here, LogImpl<>::debug_stream() creates a LogStream with > ResourceObj::operator new(). For this LogStream the destructor cannot be > called (not without rewriting the call site). 
> > ---- > > If we want to keep the syntax of the call sites as much as possible, we > need to keep the xxxx_stream() methods as they are. > > But instead of allocating the LogStream objects each time anew, we could > change LogImpl<> to manage pointers to its LogStream objects. This would > help in all those cases where an instance of LogImpl<> is placed on the > stack, which seems to be the majority of call sites. So, if there is a > LogImpl<> object on the stack, like in (B1), its destructor would run when > it goes out of scope and it would clean up its LogStream objects. > > Care must be taken that the construction of the LogImpl<> object stays > cheap, because we do not want to pay for logging if we do not log. Which > means that construction of the LogStream objects must be deferred to when > they are actually needed. > > for instance: > > LogImpl<> { > outputStream* _debugStream; > outputStream* _errorStream; > ... > public: > LogImpl<> :: _debugStream(NULL), _errorStream(NULL), ... {} > ~LogImpl<> () { > if (_debugStream) { cleanup debugStream; } > if (_errorStream) { cleanup errorStream; } > ... > } > debugStream* debug_stream() { > if (!_debugStream) _debugStream = new LogStream(); > return _debugStream; > } > } > }; > > Unfortunately, this would not help for cases where xxx_stream() is called > directly as static method (Log(..)::xx_stream()) - I do not see a > possibility to keep this syntax. If we go with the "LogImpl<> manages its > associated LogStream objects" solution, these sites would have to be > rewritten: > > From: > outputStream* st = Log(stackwalk)::debug_stream(); > to: > Log(stackwalk) log; > outputStream* st = log.debug_stream(); > > --- > > What do you think? Does any of this make sense? > > > I think it makes sense. > > IMO, we could as well get rid of the xxx_info() functions as a whole, and > instead always require stack objects (LogStream instance). 
We would need to > change some of the call sites with the proposed solution, so why not change > all of them while we're at it. It saves complexity and makes it more > obvious to the user where and how the memory is allocated. > > I'd be all for it but if we really change the UL Api that drastically and fix up all call sites, will this not make it harder to backport it to jdk9 if we want that? Other than this worry, I like it, and it is also simpler than my proposal. I will attempt a patch reworking UL in that matter: remove the xx_stream() functions and make LogStream malloc-backed and stack-allocatable-only. ...Thomas Thanks, > Marcus > > > > > Also, I opened https://bugs.openjdk.java.net/browse/JDK-8181917 to track > this issue. Note that even though I assigned this to myself for now, if any > of you rather wants to take it this over it is fine by me. > > > Kind regards, Thomas > > > > > > On Sat, Jun 10, 2017 at 8:33 AM, Thomas St?fe > wrote: > >> Yes, this seems to be the same issue. >> >> ..Thomas >> >> On Fri, Jun 9, 2017 at 10:00 PM, Daniel D. Daugherty < >> daniel.daugherty at oracle.com> wrote: >> >>> This bug seems relevant to this discussion: >>> >>> JDK-8181807 Graal internal error "StringStream is re-allocated with a >>> different ResourceMark" >>> https://bugs.openjdk.java.net/browse/JDK-8181807 >>> >>> Dan >>> >>> >>> >>> On 6/9/17 1:48 AM, Thomas St?fe wrote: >>> >>>> Hi Stefan, >>>> >>>> just a small question to verify that I understood everything correctly. >>>> >>>> The LogStream classes (LogStreamBase and children) are basically the >>>> write-to-UL frontend classes, right? Their purpose is to collect input >>>> via >>>> various print.. methods until \n is encountered, then pipe the assembled >>>> line to the UL backend. To do that it needs a backing store for the >>>> to-be-assembled-line, and this is the whole reason stringStream is used >>>> (via the "streamClass" template parameter for LogStreamBase)? 
>>>> >>>> So, basically the whole rather involved class tree rooted at >>>> LogStreamBase >>>> only deals with the various ways that one line backing store is >>>> allocated? >>>> Including LogStream itself, which contains - I was very surprised to >>>> see - >>>> an embedded ResourceMark (stringStreamWithResourceMark). There are no >>>> other >>>> reasons for this ResourceMark? >>>> >>>> I am currently experimenting with changing LogStream to use a simple >>>> malloc'd backing store, in combination with a small fixed size member >>>> buffer for small lines; I'd like to see if that had any measurable >>>> negative >>>> performance impact. The coding is halfway done already, but the callers >>>> need fixing up, because due to my change LogStreamBase children cannot >>>> be >>>> allocated with new anymore, because of the ResourceObj-destructor >>>> problem. >>>> >>>> What do you think, is this worthwhile and am I overlooking something >>>> obvious? The UL coding is quite large, after all. >>>> >>>> Kind Regards, Thomas >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Wed, Jun 7, 2017 at 12:25 PM, Thomas St?fe >>>> wrote: >>>> >>>> Hi Stefan, >>>>> >>>>> On Wed, Jun 7, 2017 at 12:17 PM, Stefan Karlsson < >>>>> stefan.karlsson at oracle.com> wrote: >>>>> >>>>> Hi Thomas, >>>>>> >>>>>> On 2017-06-07 11:15, Thomas St?fe wrote: >>>>>> >>>>>> Hi Stefan, >>>>>>> >>>>>>> I saw this, but I also see LogStreamNoResourceMark being used as a >>>>>>> default for the (trace|debug|info|warning|error)_stream() methods of >>>>>>> Log. In this form it is used quite a lot. >>>>>>> >>>>>>> Looking further, I see that one cannot just exchange >>>>>>> LogStreamNoResourceMark with LogStreamCHeap, because there are >>>>>>> hidden usage >>>>>>> conventions I was not aware of: >>>>>>> >>>>>>> Just to be clear, I didn't propose that you did a wholesale >>>>>> replacement >>>>>> of LogStreamNoResourceMark with LogStreamCHeap. 
I merely pointed out >>>>>> the >>>>>> existence of this class in case you had missed it. >>>>>> >>>>>> >>>>>> Sure! I implied this myself with my original post which proposed to >>>>> replace the resource area allocation inside stringStream with malloc'd >>>>> memory. >>>>> >>>>> >>>>> LogStreamNoResourceMark is allocated with new() in >>>>>>> create_log_stream(). >>>>>>> LogStreamNoResourceMark is an outputStream, which is a ResourceObj. >>>>>>> In its >>>>>>> current form ResourceObj cannot be deleted, so destructors for >>>>>>> ResourceObj >>>>>>> child cannot be called. >>>>>>> >>>>>>> By default ResourceObj classes are allocated in the resource area, >>>>>> but >>>>>> the class also supports CHeap allocations. For example, see some of >>>>>> the >>>>>> allocations of GrowableArray instances: >>>>>> >>>>>> _deallocate_list = new (ResourceObj::C_HEAP, mtClass) >>>>>> GrowableArray(100, true); >>>>>> >>>>>> These can still be deleted: >>>>>> >>>>>> delete _deallocate_list; >>>>>> >>>>>> >>>>>> So, we could not use malloc in the stringStream - or exchange >>>>>>> stringStream for bufferedStream - because we would need a non-empty >>>>>>> destructor to free the malloc'd memory, and that destructor cannot >>>>>>> exist. >>>>>>> >>>>>>> Looking further, I see that this imposes subtle usage restrictions >>>>>>> for >>>>>>> UL: >>>>>>> >>>>>>> LogStreamNoResourceMark objects are used via "log.debug_stream()" or >>>>>>> similar. For example: >>>>>>> >>>>>>> codecache_print(log.debug_stream(), /* detailed= */ false); >>>>>>> >>>>>>> debug_stream() will allocate a LogStreamNoResourceMark object which >>>>>>> lives in the resourcearea. This is a bit surprising, because >>>>>>> "debug_stream()" feels like it returns a singleton or a member >>>>>>> variable of >>>>>>> log. >>>>>>> >>>>>>> IIRC, this was done to: >>>>>> >>>>>> 1) break up a cyclic dependencies between logStream.hpp and log.hpp >>>>>> >>>>>> 2) Not have log.hpp depend on the stream.hpp. 
This used to be >>>>>> important, >>>>>> but the includes in stream.hpp has been fixed so this might be a >>>>>> non-issue. >>>>>> >>>>>> >>>>>> If one wants to use LogStreamCHeap instead, it must not be created >>>>>>> with >>>>>>> new() - which would be a subtle memory leak because the destructor >>>>>>> would >>>>>>> never be called - but instead on the stack as automatic variable: >>>>>>> >>>>>>> LogStreamCHeap log_stream(log); >>>>>>> log_stream.print("hallo"); >>>>>>> >>>>>>> I may understand this wrong, but if this is true, this is quite a >>>>>>> difficult API. >>>>>>> >>>>>>> Feel free to rework this and propose a simpler model. Anything that >>>>>> would >>>>>> simplify this would be helpful. >>>>>> >>>>>> >>>>>> I will mull over this a bit (and I would be thankful for other >>>>> viewpoints >>>>> as well). A bottomline question which is difficult to answer is whether >>>>> folks value the slight performance increase of resource area backed >>>>> memory >>>>> allocation in stringStream more than simplicity and robustness which >>>>> would >>>>> come with switching to malloced memory. And then, there is the second >>>>> question of why outputStream objects should be ResourceObj at all; for >>>>> me, >>>>> they feel much more at home as stack objects. They themselves are >>>>> small and >>>>> do not allocate a lot of memory (if they do, they do it dynamically). >>>>> And >>>>> they are not allocated in vast amounts... >>>>> >>>>> Lets see what others think. >>>>> >>>>> >>>>> I have two classes which look like siblings but >>>>>> >>>>>> LogStreamCHeap can only be allocated on the local stack - otherwise >>>>>>> I'll >>>>>>> get a memory leak - while LogStreamNoResourceMark gets created in the >>>>>>> resource area, which prevents its destructor from running and may >>>>>>> fill the >>>>>>> resource area up with temporary stream objects if used in a certain >>>>>>> way. >>>>>>> >>>>>>> Have I understood this right so far? 
If yes, would it be possible to >>>>>>> simplify this? >>>>>>> >>>>>>> I think you understand the code correctly, and yes, there are >>>>>> probably >>>>>> ways to make this simpler. >>>>>> >>>>>> >>>>>> Thanks for your input! >>>>> >>>>> Kind regards, Thomas >>>>> >>>>> >>>>> Thanks, >>>>>> StefanK >>>>>> >>>>>> >>>>>> Kind Regards, Thomas >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, Jun 7, 2017 at 9:20 AM, Stefan Karlsson < >>>>>>> stefan.karlsson at oracle.com > >>>>>>> wrote: >>>>>>> >>>>>>> Hi Thomas, >>>>>>> >>>>>>> >>>>>>> On 2017-06-06 11:40, Thomas St?fe wrote: >>>>>>> >>>>>>> Hi all, >>>>>>> >>>>>>> In our VM we recently hit something similar to >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8167995 >>>>>>> or >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8149557 >>>>>>> : >>>>>>> >>>>>>> A stringStream* was handed down to nested print functions >>>>>>> which >>>>>>> create >>>>>>> their own ResourceMarks and, while being down the stack >>>>>>> under >>>>>>> the scope of >>>>>>> that new ResourceMark, the stringStream needed to enlarge >>>>>>> its >>>>>>> internal >>>>>>> buffer. This is the situation the assert inside >>>>>>> stringStream::write() >>>>>>> attempts to catch >>>>>>> (assert(Thread::current()->current_resource_mark() == >>>>>>> rm); in our case this was a release build, so we just >>>>>>> crashed. >>>>>>> >>>>>>> The solution for both JDK-816795 and JDK-8149557 seemed to >>>>>>> be to >>>>>>> just >>>>>>> remove the offending ResourceMarks, or shuffle them around, >>>>>>> but >>>>>>> generally >>>>>>> this is not an optimal solution, or? >>>>>>> >>>>>>> We actually question whether using resource area memory is a >>>>>>> good idea for >>>>>>> outputStream chuild objects at all: >>>>>>> >>>>>>> outputStream instances typically travel down the stack a >>>>>>> lot by >>>>>>> getting >>>>>>> handed sub-print-functions, so they run danger of crossing >>>>>>> resource mark >>>>>>> boundaries like above. 
The sub functions are usually >>>>>>> oblivious >>>>>>> to the type >>>>>>> of outputStream* handed down, and as well they should be. >>>>>>> And if >>>>>>> the >>>>>>> allocate resource area memory themselves, it makes sense to >>>>>>> guard them with >>>>>>> ResourceMark in case they are called in a loop. >>>>>>> >>>>>>> The assert inside stringStream::write() is not a real help >>>>>>> either, because >>>>>>> whether or not it hits depends on pure luck - whether the >>>>>>> realloc code path >>>>>>> is hit just in the right moment while printing. Which >>>>>>> depends on >>>>>>> the buffer >>>>>>> size and the print history, which is variable, especially >>>>>>> with >>>>>>> logging. >>>>>>> >>>>>>> The only advantage to using bufferedStream (C-Heap) is a >>>>>>> small >>>>>>> performance >>>>>>> improvement when allocating. The question is whether this is >>>>>>> really worth >>>>>>> the risk of using resource area memory in this fashion. >>>>>>> Especially in the >>>>>>> context of UL where we are about to do expensive IO >>>>>>> operations >>>>>>> (writing to >>>>>>> log file) or may lock (os::flockfile). >>>>>>> >>>>>>> Also, the difference between bufferedStream and stringStream >>>>>>> might be >>>>>>> reduced by improving bufferedStream (e.g. by using a member >>>>>>> char >>>>>>> array for >>>>>>> small allocations and delay using malloc() for larger >>>>>>> arrays.) >>>>>>> >>>>>>> What you think? Should we get rid of stringStream and only >>>>>>> use >>>>>>> an (possibly >>>>>>> improved) bufferedStream? I also imagine this could make UL >>>>>>> coding a bit >>>>>>> simpler. >>>>>>> >>>>>>> >>>>>>> Not answering your questions, but I want to point out that we >>>>>>> already have a UL stream that uses C-Heap: >>>>>>> >>>>>>> logging/logStream.hpp: >>>>>>> >>>>>>> // The backing buffer is allocated in CHeap memory. 
>>>>>>> typedef LogStreamBase LogStreamCHeap; >>>>>>> >>>>>>> StefanK >>>>>>> >>>>>>> >>>>>>> >>>>>>> Thank you, >>>>>>> >>>>>>> Kind Regards, Thomas >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>> >> > > From sgehwolf at redhat.com Mon Jun 12 11:51:20 2017 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Mon, 12 Jun 2017 13:51:20 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> Message-ID: <1497268280.3582.11.camel@redhat.com> Hi, On Fri, 2017-06-09 at 20:57 +0200, John Paul Adrian Glaubitz wrote: > On 06/09/2017 07:54 PM, John Paul Adrian Glaubitz wrote: > > On 06/09/2017 05:58 PM, John Paul Adrian Glaubitz wrote: > > > I'll rebuild everything with --enable-debug --with-debug-level=slowdebug > > > > Just rebuilding with "--with-debug-level=slowdebug", of course. Both options > > are mutually exclusive. > > Surprise, surprise. Building with "--with-debug-level=slowdebug" instead of > "--with-debug-level=release" made the crash go away. Does gcc optimize too > aggressively here? It smells a lot like the GCC issues we've run across when building with newer compilers in Fedora/RHEL. Here is one example which turned out to be a GCC bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63341 Others were UB in hotspot code: https://bugs.openjdk.java.net/browse/JDK-8078666 Which GCC version is this? Provided this is a GCC issue: In order to figure out which new optimization might have caused this, I'd suggest to go through the list of new opto flags that are on by default in the new version and disable one by one. 
Once you know which opto flag causes it, you might be able to figure out which object file causes the problem with something like this: https://github.com/jerboaa/hotspot-tools-find-bad-object It might be something else entirely. Cheers, Severin From glaubitz at physik.fu-berlin.de Mon Jun 12 12:01:48 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 12 Jun 2017 14:01:48 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <1497268280.3582.11.camel@redhat.com> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1497268280.3582.11.camel@redhat.com> Message-ID: <20170612120148.GB23760@physik.fu-berlin.de> Hi Severin! On Mon, Jun 12, 2017 at 01:51:20PM +0200, Severin Gehwolf wrote: > It smells a lot like the GCC issues we've run across when building with > newer compilers in Fedora/RHEL. That's something that I have been thinking of as well. Especially since the problem goes away when building with --with-debug-level=slowdebug. > Here is one example which turned out to be a GCC bug: > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63341 Aha, thanks for the pointer. > Others were UB in hotspot code: > https://bugs.openjdk.java.net/browse/JDK-8078666 > > Which GCC version is this? gcc version 6.3.0 20170516 (SVN r248076 from the 6 branch). > Provided this is a GCC issue: > > In order to figure out which new optimization might have caused this, > I'd suggest to go through the list of new opto flags that are on by > default in the new version and disable one by one. Good idea, I will try that. Thanks! 
> Once you know which opto flag causes it, you might be able to figure > out which object file causes the problem with something like this: > https://github.com/jerboaa/hotspot-tools-find-bad-object > > It might be something else entirely. Very good pointers. I will try what you have suggested. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From thomas.stuefe at gmail.com Mon Jun 12 12:08:46 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Jun 2017 14:08:46 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: <1497268280.3582.11.camel@redhat.com> References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1497268280.3582.11.camel@redhat.com> Message-ID: On Mon, Jun 12, 2017 at 1:51 PM, Severin Gehwolf wrote: > Hi, > > On Fri, 2017-06-09 at 20:57 +0200, John Paul Adrian Glaubitz wrote: > > On 06/09/2017 07:54 PM, John Paul Adrian Glaubitz wrote: > > > On 06/09/2017 05:58 PM, John Paul Adrian Glaubitz wrote: > > > > I'll rebuild everything with --enable-debug > --with-debug-level=slowdebug > > > > > > Just rebuilding with "--with-debug-level=slowdebug", of course. Both > options > > > are mutually exclusive. > > > > Surprise, surprise. Building with "--with-debug-level=slowdebug" instead > of > > "--with-debug-level=release" made the crash go away. Does gcc optimize > too > > aggressively here? > > It smells a lot like the GCC issues we've run across when building with > newer compilers in Fedora/RHEL. 
> > Here is one example which turned out to be a GCC bug: > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63341 > > Others were UB in hotspot code: > https://bugs.openjdk.java.net/browse/JDK-8078666 > > Which GCC version is this? > > Provided this is a GCC issue: > > In order to figure out which new optimization might have caused this, > I'd suggest to go through the list of new opto flags that are on by > default in the new version and disable one by one. > > Once you know which opto flag causes it, you might be able to figure > out which object file causes the problem with something like this: > https://github.com/jerboaa/hotspot-tools-find-bad-object > > It might be something else entirely. > > Cheers, > Severin > I agree with Severin. It makes sense to rule out a compiler failure before spending more work on it. The coding in question is completely architecture independent, and the fact that it only happens on one platform, and that the error vanishes in slowdebug, is suspicious. Kind Regards, Thomas From aph at redhat.com Mon Jun 12 12:22:53 2017 From: aph at redhat.com (Andrew Haley) Date: Mon, 12 Jun 2017 13:22:53 +0100 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1497268280.3582.11.camel@redhat.com> Message-ID: On 12/06/17 13:08, Thomas St?fe wrote: > I agree with Severin. It makes sense to rule out a compiler failure before > spending more work on it. The coding in question is completely architecture > independent, and the fact that it only happens on one platform, and that > the error vanishes in slowdebug, is suspicious. Not necessarily. 
The problem seems to be that we get a failure when allocating a direct byte buffer, in Bits.reserveMemory(). The limit is controlled by -XX:MaxDirectMemorySize. It might well be a bug in the Zero interpreter. I'd start by adding a couple of lines in Bits.tryReserveMemory() to print out the totalCapacity when allocation fails. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From glaubitz at physik.fu-berlin.de Mon Jun 12 12:28:16 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 12 Jun 2017 14:28:16 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1497268280.3582.11.camel@redhat.com> Message-ID: <20170612122816.GE23760@physik.fu-berlin.de> On Mon, Jun 12, 2017 at 02:08:46PM +0200, Thomas Stüfe wrote: > I agree with Severin. It makes sense to rule out a compiler failure before > spending more work on it. The coding in question is completely architecture > independent, and the fact that it only happens on one platform, and that > the error vanishes in slowdebug, is suspicious. Actually, it doesn't just crash on PowerPC. I had to build with "--with-debug-level=slowdebug" on m68k as well to make the build succeed. So far, it's only been x86_64 where it built without changing the debug level. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `.
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From thomas.stuefe at gmail.com Mon Jun 12 13:15:44 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Jun 2017 15:15:44 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1497268280.3582.11.camel@redhat.com> Message-ID: On Mon, Jun 12, 2017 at 2:22 PM, Andrew Haley wrote: > On 12/06/17 13:08, Thomas St?fe wrote: > > I agree with Severin. It makes sense to rule out a compiler failure > before > > spending more work on it. The coding in question is completely > architecture > > independent, and the fact that it only happens on one platform, and that > > the error vanishes in slowdebug, is suspicious. > > Not necessarily. The problem seems to be that we get a failure when > allocating a direct byte bufer, in Bits.reserveMemory(). The limit is > controlled by -XX:MaxDirectMemorySize. > > It might well be a bug in the Zero interpreter. I'd start by adding a > couple of lines in Bits.tryReserveMemory() to print out the > totalCapacity when allocation fails. > > I think there are several problems. The original problem, the bytebuffer OOM, happens in his release build. He then did a debug build. Slowdebug worked (?) but fastdebug crashed very early in SafeFetch initialization in tests which are omitted in release build. So, multiple problems in optimized code. ..Thomas > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. 
> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 > From aph at redhat.com Mon Jun 12 13:23:24 2017 From: aph at redhat.com (Andrew Haley) Date: Mon, 12 Jun 2017 14:23:24 +0100 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <20170609131521.GE2477@physik.fu-berlin.de> <593AB105.5020303@linux.vnet.ibm.com> <6e2530d9-294c-7fd6-2117-178fcb295f1e@physik.fu-berlin.de> <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1497268280.3582.11.camel@redhat.com> Message-ID: On 12/06/17 14:15, Thomas Stüfe wrote: > On Mon, Jun 12, 2017 at 2:22 PM, Andrew Haley wrote: > >> On 12/06/17 13:08, Thomas Stüfe wrote: >>> I agree with Severin. It makes sense to rule out a compiler failure >> before >>> spending more work on it. The coding in question is completely >> architecture >>> independent, and the fact that it only happens on one platform, and that >>> the error vanishes in slowdebug, is suspicious. >> >> Not necessarily. The problem seems to be that we get a failure when >> allocating a direct byte buffer, in Bits.reserveMemory(). The limit is >> controlled by -XX:MaxDirectMemorySize. >> >> It might well be a bug in the Zero interpreter. I'd start by adding a >> couple of lines in Bits.tryReserveMemory() to print out the >> totalCapacity when allocation fails. >> >> > I think there are several problems. The original problem, the bytebuffer > OOM, happens in his release build. He then did a debug build. Slowdebug > worked (?) but fastdebug crashed very early in SafeFetch initialization in > tests which are omitted in release build. Ignore the SafeFetch problem; it's something else. Let's concentrate on one thing at a time. > So, multiple problems in optimized code. > > ..Thomas > > >> -- >> Andrew Haley >> Java Platform Lead Engineer >> Red Hat UK Ltd.
>> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 >> > -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From glaubitz at physik.fu-berlin.de Mon Jun 12 13:29:40 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 12 Jun 2017 15:29:40 +0200 Subject: Debugging segmentation faults in the JVM on linux-powerpc In-Reply-To: References: <0e758a35-fa4a-96d2-5ab4-14e04ff3c3de@physik.fu-berlin.de> <89c98506-3586-80a7-1058-41c86f391f3e@oracle.com> <6b65524c-ec36-bef7-b7cf-948ff2b3f03b@physik.fu-berlin.de> <1497268280.3582.11.camel@redhat.com> Message-ID: <20170612132940.GH23760@physik.fu-berlin.de> On Mon, Jun 12, 2017 at 03:15:44PM +0200, Thomas St?fe wrote: > I think there are several problems. The original problem, the bytebuffer > OOM, happens in his release build. He then did a debug build. Slowdebug > worked (?) but fastdebug crashed very early in SafeFetch initialization in > tests which are omitted in release build. To summarize: On linux-ppc: --with-debug-level=release => Segfault > https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=powerpc&ver=9%7Eb170-2&stamp=1497029141&raw=0 Running the built jmod binary without arguments binary resulted in the out-of-memory error. --with-debug-level=fastdebug => Segfault > https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=powerpc&ver=9%7Eb170-2&stamp=1497076948&raw=0 Running the built jmod binary without arguments actually worked and printed the usage text. Trying to run the full jmod command line from the build resulted in a segmentation fault. 
--with-debug-level=slowdebug => Build succeeds > https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=powerpc&ver=9%7Eb170-2&stamp=1497054989&raw=0 On linux-m68k (with patches to add m68k support): --with-debug-level=release => JVM Exception /<>/build/jdk/bin/java -Xms64M -Xmx1024M -XX:ThreadStackSize=768 -XX:+UseSerialGC -Xms32M -Xmx512M -XX:TieredStopAtLevel=1 -cp /<>/build/buildtool s/tools_jigsaw_classes --add-exports java.base/jdk.internal.module=ALL-UNNAMED build.tools.jigsaw.AddPackagesAttribute /<>/build/jdk Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit. > https://people.debian.org/~glaubitz/openjdk-9_9~b170-2_m68k-fail.build --with-debug-level=slowdebug => Works > https://people.debian.org/~glaubitz/openjdk-9_9~b170-2_m68k-success.build Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From gerard.ziemski at oracle.com Mon Jun 12 14:29:51 2017 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Mon, 12 Jun 2017 09:29:51 -0500 Subject: RFR(10)(S): 8181503: Can't compile hotspot with c++11 Message-ID: hi all, Please review this small fix, which addresses 4 issues caught by c++11 compiler on a Mac: #1 Error in src/share/vm/utilities/debug.hpp jdk10/hotspot/src/share/vm/utilities/vmError.cpp:450:13: error: case value evaluates to 3758096384, which cannot be narrowed to type 'int' [-Wc++11-narrowing] case INTERNAL_ERROR: For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/share/vm/utilities/vmError.cpp.udiff.html and http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/share/vm/utilities/vmError.hpp.udiff.html #2 Error in src/share/vm/compiler/methodMatcher.cpp jdk10/hotspot/src/share/vm/compiler/methodMatcher.cpp:99:19: error: comparison between pointer and integer ('char *' and 'int') if (colon + 2 != 
'\0') { ~~~~~~~~~ ^ ~~~~ For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/share/vm/compiler/methodMatcher.cpp.udiff.html #3 Error in src/os_cpu/bsd_x86/vm/os_bsd_x86.cpp jdk10/hotspot/src/os_cpu/bsd_x86/vm/os_bsd_x86.cpp:282:19: error: invalid suffix on literal; C++11 requires a space between literal and identifier [-Wreserved-user-defined-literal] __asm__("mov %%"SPELL_REG_SP", %0":"=r"(esp)); For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/os_cpu/bsd_x86/vm/os_bsd_x86.cpp.udiff.html #4 Error in src/share/vm/code/compiledIC.cpp /Volumes/Work/jdk10/hotspot/src/share/vm/code/compiledIC.cpp:227:15: error: comparison between pointer and integer ('address' (aka 'unsigned char *') and 'int') if (entry == false) { ~~~~~ ^ ~~~~~ For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/share/vm/code/compiledIC.cpp.udiff.html References: bug link at https://bugs.openjdk.java.net/browse/JDK-8181503 webrev at http://cr.openjdk.java.net/~gziemski/8181503_rev1 Tested with JPRT hotspot. cheers From thomas.stuefe at gmail.com Mon Jun 12 15:47:22 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Jun 2017 17:47:22 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> <4a9cc5cf-6e6c-ffeb-32c7-a5428e706fe9@oracle.com> Message-ID: On Mon, Jun 12, 2017 at 12:16 PM, Thomas St?fe wrote: > Hi Marcus, > > On Mon, Jun 12, 2017 at 10:28 AM, Marcus Larsson < > marcus.larsson at oracle.com> wrote: > >> Hi, >> >> Thanks for looking into this. >> >> On 2017-06-12 08:52, Thomas St?fe wrote: >> >> Hi guys, >> >> After our friday discussion I try to collect my thoughts about how to >> solve this. Could you please check if what I think makes sense: >> >> -> LogStream, an outputStream child, is used to give us outputStream >> compatibility to write to UL. 
It assembles a line and, once complete, sends >> it off to UL backend. It needs memory to do this, and it often uses >> resource memory for this. This is a problem if we cross resource marks. >> >> -> We do not want to use resource memory. Alternatives would be >> a) using a fixed sized char array as member in LogStream. This would be a >> very simple solution. But that imposes a max. line length, which we do not >> want(?) and, where LogStream is used as a Stack object, may consume too >> much stack space. >> b) using malloc-ed memory (perhaps in combination with having a small >> fixed sized char array for small lines to reduce the number of malloc >> invocations). But that requires us to be able to free the malloced memory, >> so the destructor of LogStream must be called. >> >> -> If possible, we want the UL API to stay the same, so we want to avoid >> having to fix up many UL call sites. >> >> Lets look at alternative (b) closer, because I think (a) may be out of >> question (?). >> >> >> Yes, I believe (a) is not an option. >> >> >> >> ------ >> >> LogStreams are used in three ways which I can see: >> >> A) As a stack object: >> LogTarget(Error, logging) target; >> LogStreamCHeap errstream(target); >> return LogConfiguration::parse_log_arguments(_gc_log_filename, >> gc_conf, NULL, NULL, &errstream); >> >> No problem here: LogStream is an automatic variable and its destructor >> will be called when it goes out of scope. So we can free malloced memory. >> >> B) More often, by calling the static method LogImpl<>::xxx_stream(): >> >> Either like this: >> >> 1) Log(jit, compilation) log; >> if (log.is_debug()) { >> print_impl(log.debug_stream(), ... ); >> } >> >> Or like this: >> >> 2) outputStream* st = Log(stackwalk)::debug_stream(); >> >> (So, either via an instance of LogImpl<>, or directly via the class::). >> >> Here, LogImpl<>::debug_stream() creates a LogStream with >> ResourceObj::operator new(). 
For this LogStream the destructor cannot be >> called (not without rewriting the call site). >> >> ---- >> >> If we want to keep the syntax of the call sites as much as possible, we >> need to keep the xxxx_stream() methods as they are. >> >> But instead of allocating the LogStream objects each time anew, we could >> change LogImpl<> to manage pointers to its LogStream objects. This would >> help in all those cases where an instance of LogImpl<> is placed on the >> stack, which seems to be the majority of call sites. So, if there is a >> LogImpl<> object on the stack, like in (B1), its destructor would run when >> it goes out of scope and it would clean up its LogStream objects. >> >> Care must be taken that the construction of the LogImpl<> object stays >> cheap, because we do not want to pay for logging if we do not log. Which >> means that construction of the LogStream objects must be deferred to when >> they are actually needed. >> >> for instance: >> >> LogImpl<> { >> outputStream* _debugStream; >> outputStream* _errorStream; >> ... >> public: >> LogImpl<> :: _debugStream(NULL), _errorStream(NULL), ... {} >> ~LogImpl<> () { >> if (_debugStream) { cleanup debugStream; } >> if (_errorStream) { cleanup errorStream; } >> ... >> } >> debugStream* debug_stream() { >> if (!_debugStream) _debugStream = new LogStream(); >> return _debugStream; >> } >> } >> }; >> >> Unfortunately, this would not help for cases where xxx_stream() is called >> directly as static method (Log(..)::xx_stream()) - I do not see a >> possibility to keep this syntax. If we go with the "LogImpl<> manages its >> associated LogStream objects" solution, these sites would have to be >> rewritten: >> >> From: >> outputStream* st = Log(stackwalk)::debug_stream(); >> to: >> Log(stackwalk) log; >> outputStream* st = log.debug_stream(); >> >> --- >> >> What do you think? Does any of this make sense? >> >> >> I think it makes sense. 
>> >> IMO, we could as well get rid of the xxx_info() functions as a whole, and >> instead always require stack objects (LogStream instance). We would need to >> change some of the call sites with the proposed solution, so why not change >> all of them while we're at it. It saves complexity and makes it more >> obvious to the user where and how the memory is allocated. >> >> > I'd be all for it but if we really change the UL Api that drastically and > fix up all call sites, will this not make it harder to backport it to jdk9 > if we want that? > > Other than this worry, I like it, and it is also simpler than my proposal. > I will attempt a patch reworking UL in that matter: remove the xx_stream() > functions and make LogStream malloc-backed and stack-allocatable-only. > > Okay, I put a bit of work into it (stack-only LogStream) and encountered nothing difficult, just a lot of boilerplate work. This is how I am changing the callsites: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-UL-should-not-use-resource-memory-for-LogStream/current-work/webrev/ Please tell me if you had something different in mind; I want to make sure we have the same idea before changing such a lot of code. Notes: - I only remove ResourceMark objects where I can be sure that they are intended to scope the LogStream itself. If I cannot be sure, I leave them in for now. - using LogStream as a stack object means we need to include logStream.hpp (or decide to move LogStream to log.hpp). - If there is only one debug level involved (true in a lot of places), I prefer LogTarget to Log, because it minimizes error possibilities - In some places the coding gets more complicated, usually if the code wants to decide the log level dynamically, but the log level is a template parameter to LogTarget. See e.g. linkresolver.cpp. - I see the macro "log_develop_is_enabled" and wonder why it exists and why the code is not just surrounded by #ifndef PRODUCT? 
Kind Regards, Thomas > ...Thomas > > Thanks, >> Marcus >> >> >> >> >> Also, I opened https://bugs.openjdk.java.net/browse/JDK-8181917 to track >> this issue. Note that even though I assigned this to myself for now, if any >> of you rather wants to take it this over it is fine by me. >> >> >> Kind regards, Thomas >> >> >> >> >> >> On Sat, Jun 10, 2017 at 8:33 AM, Thomas St?fe >> wrote: >> >>> Yes, this seems to be the same issue. >>> >>> ..Thomas >>> >>> On Fri, Jun 9, 2017 at 10:00 PM, Daniel D. Daugherty < >>> daniel.daugherty at oracle.com> wrote: >>> >>>> This bug seems relevant to this discussion: >>>> >>>> JDK-8181807 Graal internal error "StringStream is re-allocated with a >>>> different ResourceMark" >>>> https://bugs.openjdk.java.net/browse/JDK-8181807 >>>> >>>> Dan >>>> >>>> >>>> >>>> On 6/9/17 1:48 AM, Thomas St?fe wrote: >>>> >>>>> Hi Stefan, >>>>> >>>>> just a small question to verify that I understood everything correctly. >>>>> >>>>> The LogStream classes (LogStreamBase and children) are basically the >>>>> write-to-UL frontend classes, right? Their purpose is to collect input >>>>> via >>>>> various print.. methods until \n is encountered, then pipe the >>>>> assembled >>>>> line to the UL backend. To do that it needs a backing store for the >>>>> to-be-assembled-line, and this is the whole reason stringStream is used >>>>> (via the "streamClass" template parameter for LogStreamBase)? >>>>> >>>>> So, basically the whole rather involved class tree rooted at >>>>> LogStreamBase >>>>> only deals with the various ways that one line backing store is >>>>> allocated? >>>>> Including LogStream itself, which contains - I was very surprised to >>>>> see - >>>>> an embedded ResourceMark (stringStreamWithResourceMark). There are no >>>>> other >>>>> reasons for this ResourceMark? 
>>>>> >>>>> I am currently experimenting with changing LogStream to use a simple >>>>> malloc'd backing store, in combination with a small fixed size member >>>>> buffer for small lines; I'd like to see if that had any measurable >>>>> negative >>>>> performance impact. The coding is halfway done already, but the callers >>>>> need fixing up, because due to my change LogStreamBase children cannot >>>>> be >>>>> allocated with new anymore, because of the ResourceObj-destructor >>>>> problem. >>>>> >>>>> What do you think, is this worthwhile and am I overlooking something >>>>> obvious? The UL coding is quite large, after all. >>>>> >>>>> Kind Regards, Thomas >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Wed, Jun 7, 2017 at 12:25 PM, Thomas St?fe >>>> > >>>>> wrote: >>>>> >>>>> Hi Stefan, >>>>>> >>>>>> On Wed, Jun 7, 2017 at 12:17 PM, Stefan Karlsson < >>>>>> stefan.karlsson at oracle.com> wrote: >>>>>> >>>>>> Hi Thomas, >>>>>>> >>>>>>> On 2017-06-07 11:15, Thomas St?fe wrote: >>>>>>> >>>>>>> Hi Stefan, >>>>>>>> >>>>>>>> I saw this, but I also see LogStreamNoResourceMark being used as a >>>>>>>> default for the (trace|debug|info|warning|error)_stream() methods >>>>>>>> of >>>>>>>> Log. In this form it is used quite a lot. >>>>>>>> >>>>>>>> Looking further, I see that one cannot just exchange >>>>>>>> LogStreamNoResourceMark with LogStreamCHeap, because there are >>>>>>>> hidden usage >>>>>>>> conventions I was not aware of: >>>>>>>> >>>>>>>> Just to be clear, I didn't propose that you did a wholesale >>>>>>> replacement >>>>>>> of LogStreamNoResourceMark with LogStreamCHeap. I merely pointed out >>>>>>> the >>>>>>> existence of this class in case you had missed it. >>>>>>> >>>>>>> >>>>>>> Sure! I implied this myself with my original post which proposed to >>>>>> replace the resource area allocation inside stringStream with malloc'd >>>>>> memory. >>>>>> >>>>>> >>>>>> LogStreamNoResourceMark is allocated with new() in >>>>>>>> create_log_stream(). 
>>>>>>>> LogStreamNoResourceMark is an outputStream, which is a ResourceObj. >>>>>>>> In its >>>>>>>> current form ResourceObj cannot be deleted, so destructors for >>>>>>>> ResourceObj >>>>>>>> child cannot be called. >>>>>>>> >>>>>>>> By default ResourceObj classes are allocated in the resource area, >>>>>>> but >>>>>>> the class also supports CHeap allocations. For example, see some of >>>>>>> the >>>>>>> allocations of GrowableArray instances: >>>>>>> >>>>>>> _deallocate_list = new (ResourceObj::C_HEAP, mtClass) >>>>>>> GrowableArray(100, true); >>>>>>> >>>>>>> These can still be deleted: >>>>>>> >>>>>>> delete _deallocate_list; >>>>>>> >>>>>>> >>>>>>> So, we could not use malloc in the stringStream - or exchange >>>>>>>> stringStream for bufferedStream - because we would need a non-empty >>>>>>>> destructor to free the malloc'd memory, and that destructor cannot >>>>>>>> exist. >>>>>>>> >>>>>>>> Looking further, I see that this imposes subtle usage restrictions >>>>>>>> for >>>>>>>> UL: >>>>>>>> >>>>>>>> LogStreamNoResourceMark objects are used via "log.debug_stream()" or >>>>>>>> similar. For example: >>>>>>>> >>>>>>>> codecache_print(log.debug_stream(), /* detailed= */ false); >>>>>>>> >>>>>>>> debug_stream() will allocate a LogStreamNoResourceMark object which >>>>>>>> lives in the resourcearea. This is a bit surprising, because >>>>>>>> "debug_stream()" feels like it returns a singleton or a member >>>>>>>> variable of >>>>>>>> log. >>>>>>>> >>>>>>>> IIRC, this was done to: >>>>>>> >>>>>>> 1) break up a cyclic dependencies between logStream.hpp and log.hpp >>>>>>> >>>>>>> 2) Not have log.hpp depend on the stream.hpp. This used to be >>>>>>> important, >>>>>>> but the includes in stream.hpp has been fixed so this might be a >>>>>>> non-issue. 
>>>>>>> >>>>>>> >>>>>>> If one wants to use LogStreamCHeap instead, it must not be created >>>>>>>> with >>>>>>>> new() - which would be a subtle memory leak because the destructor >>>>>>>> would >>>>>>>> never be called - but instead on the stack as automatic variable: >>>>>>>> >>>>>>>> LogStreamCHeap log_stream(log); >>>>>>>> log_stream.print("hallo"); >>>>>>>> >>>>>>>> I may understand this wrong, but if this is true, this is quite a >>>>>>>> difficult API. >>>>>>>> >>>>>>>> Feel free to rework this and propose a simpler model. Anything that >>>>>>> would >>>>>>> simplify this would be helpful. >>>>>>> >>>>>>> >>>>>>> I will mull over this a bit (and I would be thankful for other >>>>>> viewpoints >>>>>> as well). A bottomline question which is difficult to answer is >>>>>> whether >>>>>> folks value the slight performance increase of resource area backed >>>>>> memory >>>>>> allocation in stringStream more than simplicity and robustness which >>>>>> would >>>>>> come with switching to malloced memory. And then, there is the second >>>>>> question of why outputStream objects should be ResourceObj at all; >>>>>> for me, >>>>>> they feel much more at home as stack objects. They themselves are >>>>>> small and >>>>>> do not allocate a lot of memory (if they do, they do it dynamically). >>>>>> And >>>>>> they are not allocated in vast amounts... >>>>>> >>>>>> Lets see what others think. >>>>>> >>>>>> >>>>>> I have two classes which look like siblings but >>>>>>> >>>>>>> LogStreamCHeap can only be allocated on the local stack - otherwise >>>>>>>> I'll >>>>>>>> get a memory leak - while LogStreamNoResourceMark gets created in >>>>>>>> the >>>>>>>> resource area, which prevents its destructor from running and may >>>>>>>> fill the >>>>>>>> resource area up with temporary stream objects if used in a certain >>>>>>>> way. >>>>>>>> >>>>>>>> Have I understood this right so far? If yes, would it be possible to >>>>>>>> simplify this? 
>>>>>>>> >>>>>>>> I think you understand the code correctly, and yes, there are >>>>>>> probably >>>>>>> ways to make this simpler. >>>>>>> >>>>>>> >>>>>>> Thanks for your input! >>>>>> >>>>>> Kind regards, Thomas >>>>>> >>>>>> >>>>>> Thanks, >>>>>>> StefanK >>>>>>> >>>>>>> >>>>>>> Kind Regards, Thomas >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Jun 7, 2017 at 9:20 AM, Stefan Karlsson < >>>>>>>> stefan.karlsson at oracle.com > >>>>>>>> wrote: >>>>>>>> >>>>>>>> Hi Thomas, >>>>>>>> >>>>>>>> >>>>>>>> On 2017-06-06 11:40, Thomas Stüfe wrote: >>>>>>>> >>>>>>>> Hi all, >>>>>>>> >>>>>>>> In our VM we recently hit something similar to >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8167995 >>>>>>>> or >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8149557 >>>>>>>> : >>>>>>>> >>>>>>>> A stringStream* was handed down to nested print functions >>>>>>>> which >>>>>>>> create >>>>>>>> their own ResourceMarks and, while being down the stack >>>>>>>> under >>>>>>>> the scope of >>>>>>>> that new ResourceMark, the stringStream needed to enlarge >>>>>>>> its >>>>>>>> internal >>>>>>>> buffer. This is the situation the assert inside >>>>>>>> stringStream::write() >>>>>>>> attempts to catch >>>>>>>> (assert(Thread::current()->current_resource_mark() == >>>>>>>> rm); in our case this was a release build, so we just >>>>>>>> crashed. >>>>>>>> >>>>>>>> The solution for both JDK-8167995 and JDK-8149557 seemed to >>>>>>>> be to >>>>>>>> just >>>>>>>> remove the offending ResourceMarks, or shuffle them >>>>>>>> around, but >>>>>>>> generally >>>>>>>> this is not an optimal solution, or? >>>>>>>> >>>>>>>> We actually question whether using resource area memory is >>>>>>>> a >>>>>>>> good idea for >>>>>>>> outputStream child objects at all: >>>>>>>> >>>>>>>> outputStream instances typically travel down the stack a >>>>>>>> lot by >>>>>>>> getting >>>>>>>> handed sub-print-functions, so they run danger of crossing >>>>>>>> resource mark >>>>>>>> boundaries like above.
The sub functions are usually >>>>>>>> oblivious >>>>>>>> to the type >>>>>>>> of outputStream* handed down, and as well they should be. >>>>>>>> And if >>>>>>>> the >>>>>>>> allocate resource area memory themselves, it makes sense to >>>>>>>> guard them with >>>>>>>> ResourceMark in case they are called in a loop. >>>>>>>> >>>>>>>> The assert inside stringStream::write() is not a real help >>>>>>>> either, because >>>>>>>> whether or not it hits depends on pure luck - whether the >>>>>>>> realloc code path >>>>>>>> is hit just in the right moment while printing. Which >>>>>>>> depends on >>>>>>>> the buffer >>>>>>>> size and the print history, which is variable, especially >>>>>>>> with >>>>>>>> logging. >>>>>>>> >>>>>>>> The only advantage to using bufferedStream (C-Heap) is a >>>>>>>> small >>>>>>>> performance >>>>>>>> improvement when allocating. The question is whether this >>>>>>>> is >>>>>>>> really worth >>>>>>>> the risk of using resource area memory in this fashion. >>>>>>>> Especially in the >>>>>>>> context of UL where we are about to do expensive IO >>>>>>>> operations >>>>>>>> (writing to >>>>>>>> log file) or may lock (os::flockfile). >>>>>>>> >>>>>>>> Also, the difference between bufferedStream and >>>>>>>> stringStream >>>>>>>> might be >>>>>>>> reduced by improving bufferedStream (e.g. by using a >>>>>>>> member char >>>>>>>> array for >>>>>>>> small allocations and delay using malloc() for larger >>>>>>>> arrays.) >>>>>>>> >>>>>>>> What you think? Should we get rid of stringStream and only >>>>>>>> use >>>>>>>> an (possibly >>>>>>>> improved) bufferedStream? I also imagine this could make UL >>>>>>>> coding a bit >>>>>>>> simpler. >>>>>>>> >>>>>>>> >>>>>>>> Not answering your questions, but I want to point out that we >>>>>>>> already have a UL stream that uses C-Heap: >>>>>>>> >>>>>>>> logging/logStream.hpp: >>>>>>>> >>>>>>>> // The backing buffer is allocated in CHeap memory. 
>>>>>>>> typedef LogStreamBase LogStreamCHeap; >>>>>>>> >>>>>>>> StefanK >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Thank you, >>>>>>>> >>>>>>>> Kind Regards, Thomas >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>> >>> >> >> > From thomas.stuefe at gmail.com Mon Jun 12 16:13:51 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Jun 2017 18:13:51 +0200 Subject: RFR(10)(S): 8181503: Can't compile hotspot with c++11 In-Reply-To: References: Message-ID: Hi Gerard, looks fine. I cannot comment on the asm syntax changes in the bsd code. methodMatcher.cpp: this is a real bug. compiledIC.cpp: this too but as false is usually defined as 0, so it probably never mattered. Kind Regards, Thomas On Mon, Jun 12, 2017 at 4:29 PM, Gerard Ziemski wrote: > hi all, > > Please review this small fix, which addresses 4 issues caught by c++11 > compiler on a Mac: > > > #1 Error in src/share/vm/utilities/debug.hpp > > jdk10/hotspot/src/share/vm/utilities/vmError.cpp:450:13: error: case > value evaluates to 3758096384, which cannot be narrowed to type 'int' > [-Wc++11-narrowing] > case INTERNAL_ERROR: > > For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/ > share/vm/utilities/vmError.cpp.udiff.html and http://cr.openjdk.java.net/~ > gziemski/8181503_rev1/src/share/vm/utilities/vmError.hpp.udiff.html > > > #2 Error in src/share/vm/compiler/methodMatcher.cpp > > jdk10/hotspot/src/share/vm/compiler/methodMatcher.cpp:99:19: error: > comparison between pointer and integer ('char *' and 'int') > if (colon + 2 != '\0') { > ~~~~~~~~~ ^ ~~~~ > > For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/ > share/vm/compiler/methodMatcher.cpp.udiff.html > > > #3 Error in src/os_cpu/bsd_x86/vm/os_bsd_x86.cpp > > jdk10/hotspot/src/os_cpu/bsd_x86/vm/os_bsd_x86.cpp:282:19: error: invalid > suffix on literal; C++11 requires a space between literal and identifier > [-Wreserved-user-defined-literal] > __asm__("mov %%"SPELL_REG_SP", %0":"=r"(esp)); > > For a fix 
see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/os_ > cpu/bsd_x86/vm/os_bsd_x86.cpp.udiff.html > > > #4 Error in src/share/vm/code/compiledIC.cpp > > /Volumes/Work/jdk10/hotspot/src/share/vm/code/compiledIC.cpp:227:15: > error: comparison between pointer and integer ('address' (aka 'unsigned > char *') and 'int') > if (entry == false) { > ~~~~~ ^ ~~~~~ > > For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/ > share/vm/code/compiledIC.cpp.udiff.html > > References: > bug link at https://bugs.openjdk.java.net/browse/JDK-8181503 > webrev at http://cr.openjdk.java.net/~gziemski/8181503_rev1 > > Tested with JPRT hotspot. > > > cheers From ioi.lam at oracle.com Mon Jun 12 16:15:36 2017 From: ioi.lam at oracle.com (Ioi Lam) Date: Mon, 12 Jun 2017 09:15:36 -0700 Subject: Calling C++ destructor directly in resourceHash.hpp In-Reply-To: References: <032526f1-b90a-b32e-f039-05d6e6ec5d77@oracle.com> Message-ID: <27e78231-1f84-1f39-6312-b3e6018c275e@oracle.com> On 6/11/17 10:20 PM, John Rose wrote: > On Jun 11, 2017, at 9:19 PM, Ioi Lam > wrote: >> >> I am looking at these two functions in "utilities/resourceHash.hpp": > > You are worried about the V destructor running, > where the V struct is a member of Node (Node::_value). > In the normal case, running the destructor of Node > transparently runs the destructors of the K and V > members of Node. > > The place where dropped destructors can happen > in this sort of pattern is when you overwrite a variable, > which is point (a) in your example. Your V::operator= > is responsible for retiring any resources used by the > previous value of Node::_value which are not going to > be used by the new value. > > Eventually, when "delete node" happens, whatever > resources were in use in Node::_value will be freed. > I checked and you're right: when "delete node" is called, _value's destructor is called. Our problem here is that if the ResourceHashtable was resource-allocated, node is not deleted. 
if (ALLOC_TYPE == C_HEAP) { delete node; } I think the reason is here: void ResourceObj::operator delete(void* p) { assert(((ResourceObj *)p)->allocated_on_C_heap(), "delete only allowed for C_HEAP objects"); DEBUG_ONLY(((ResourceObj *)p)->_allocation_t[0] = (uintptr_t)badHeapOopVal;) FreeHeap(p); } The assert seems wrong -- if the object is allocated_on_res_area(), we should just fall-through and do nothing. That would allow the destructor of Node to be called. > So I don't think you have to do anything with point (b). > Your problem, if you have one, is operator=. Those are > hard to get right. > Ah! I just found out "C++ copy constructor is called when a new object is created from an existing object; assignment operator is called when an already initialized object is assigned a new value from another existing object." Thanks - Ioi > ? John > > From ioi.lam at oracle.com Mon Jun 12 16:17:19 2017 From: ioi.lam at oracle.com (Ioi Lam) Date: Mon, 12 Jun 2017 09:17:19 -0700 Subject: Calling C++ destructor directly in resourceHash.hpp In-Reply-To: References: <032526f1-b90a-b32e-f039-05d6e6ec5d77@oracle.com> Message-ID: On 6/12/17 12:16 AM, Thomas St?fe wrote: > > > On Mon, Jun 12, 2017 at 7:20 AM, John Rose > wrote: > > On Jun 11, 2017, at 9:19 PM, Ioi Lam > wrote: > > > > I am looking at these two functions in "utilities/resourceHash.hpp": > > You are worried about the V destructor running, > where the V struct is a member of Node (Node::_value). > In the normal case, running the destructor of Node > transparently runs the destructors of the K and V > members of Node. > > The place where dropped destructors can happen > in this sort of pattern is when you overwrite a variable, > which is point (a) in your example. Your V::operator= > is responsible for retiring any resources used by the > previous value of Node::_value which are not going to > be used by the new value. 
> > Eventually, when "delete node" happens, whatever > resources were in use in Node::_value will be freed. > > So I don't think you have to do anything with point (b). > Your problem, if you have one, is operator=. Those are > hard to get right. > > ? John > > > Also, all usages of ResourceHashTable I can see have either primitive > data types as value type or they do their own cleanup (e.g. Handle in > jvmci). Do you see any concrete usage where a real object needing > destruction is placed in a ResourceHashTable? > I am adding a new ResourceHashTable that needs destruction of the stored object. And I found out that my destructor is not called. Thanks - Ioi > ..Thomas > From thomas.stuefe at gmail.com Mon Jun 12 16:26:06 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Jun 2017 18:26:06 +0200 Subject: Calling C++ destructor directly in resourceHash.hpp In-Reply-To: References: <032526f1-b90a-b32e-f039-05d6e6ec5d77@oracle.com> Message-ID: On Mon, Jun 12, 2017 at 6:17 PM, Ioi Lam wrote: > > > On 6/12/17 12:16 AM, Thomas St?fe wrote: > > > > On Mon, Jun 12, 2017 at 7:20 AM, John Rose wrote: > >> On Jun 11, 2017, at 9:19 PM, Ioi Lam wrote: >> > >> > I am looking at these two functions in "utilities/resourceHash.hpp": >> >> You are worried about the V destructor running, >> where the V struct is a member of Node (Node::_value). >> In the normal case, running the destructor of Node >> transparently runs the destructors of the K and V >> members of Node. >> >> The place where dropped destructors can happen >> in this sort of pattern is when you overwrite a variable, >> which is point (a) in your example. Your V::operator= >> is responsible for retiring any resources used by the >> previous value of Node::_value which are not going to >> be used by the new value. >> >> Eventually, when "delete node" happens, whatever >> resources were in use in Node::_value will be freed. >> >> So I don't think you have to do anything with point (b). 
>> Your problem, if you have one, is operator=. Those are >> hard to get right. >> >> ? John >> >> > Also, all usages of ResourceHashTable I can see have either primitive data > types as value type or they do their own cleanup (e.g. Handle in jvmci). Do > you see any concrete usage where a real object needing destruction is > placed in a ResourceHashTable? > > I am adding a new ResourceHashTable that needs destruction of the stored > object. And I found out that my destructor is not called. > > Thanks > - Ioi > > Thanks, now I understand the problem. Kind Regards, Thomas > ..Thomas > > > From kim.barrett at oracle.com Mon Jun 12 22:07:24 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 12 Jun 2017 18:07:24 -0400 Subject: RFR(10)(S): 8181503: Can't compile hotspot with c++11 In-Reply-To: References: Message-ID: <159B33A6-6932-4E34-BFF1-E8AA0D3A783D@oracle.com> > On Jun 12, 2017, at 10:29 AM, Gerard Ziemski wrote: > > hi all, > > Please review this small fix, which addresses 4 issues caught by c++11 compiler on a Mac: > > [?] > References: > bug link at https://bugs.openjdk.java.net/browse/JDK-8181503 > webrev at http://cr.openjdk.java.net/~gziemski/8181503_rev1 > > Tested with JPRT hotspot. ------------------------------------------------------------------------------ src/share/vm/utilities/vmError.hpp 38 static uint _id; // Solaris/Linux signals: 0 - SIGRTMAX I think changing the type of _id from int to uint is really not so simple. There's a bit of a type mess in this area, with some functions expecting or using int and others uint. _id is set from an int value. It is passed to os::exception_name, which takes an int argument. The windows implementation of that function immediately casts that argument to a uint, but the posix implementation actually wants an int value. OTOH, there are other places that expect or treat _id as a uint. So the proposed change is really just rearranging the deck chairs in that mess, and is not really much of an improvement. 
I *think* using uint consistently throughout for this value could be made to work, but I haven't completely worked through it. Also, in chasing through some of this, I noticed os::Posix::is_valid_signal calls sigaddset with an uninitialized sigset_t (neither sigemptyset nor sigfillset has been applied). The documentation says the results are undefined if that initialization hasn't been done. ------------------------------------------------------------------------------ Other changes look good. From thomas.stuefe at gmail.com Tue Jun 13 06:56:25 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Jun 2017 08:56:25 +0200 Subject: RFR(10)(S): 8181503: Can't compile hotspot with c++11 In-Reply-To: <159B33A6-6932-4E34-BFF1-E8AA0D3A783D@oracle.com> References: <159B33A6-6932-4E34-BFF1-E8AA0D3A783D@oracle.com> Message-ID: hi Kim, On Tue, Jun 13, 2017 at 12:07 AM, Kim Barrett wrote: > > On Jun 12, 2017, at 10:29 AM, Gerard Ziemski > wrote: > > > > hi all, > > > > Please review this small fix, which addresses 4 issues caught by c++11 > compiler on a Mac: > > > > [?] > > References: > > bug link at https://bugs.openjdk.java.net/browse/JDK-8181503 > > webrev at http://cr.openjdk.java.net/~gziemski/8181503_rev1 > > > > Tested with JPRT hotspot. > > > > Also, in chasing through some of this, I noticed > os::Posix::is_valid_signal calls sigaddset with an uninitialized > sigset_t (neither sigemptyset nor sigfillset has been applied). The > documentation says the results are undefined if that initialization > hasn't been done. > > You are right, this is relying on undefined behavior. It never caused an error, but still invalid. I opened a separate issue: https://bugs.openjdk.java.net/browse/JDK-8182034 ..Thomas > ------------------------------------------------------------ > ------------------ > > Other changes look good. 
> > From marcus.larsson at oracle.com Tue Jun 13 08:32:43 2017 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Tue, 13 Jun 2017 10:32:43 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> <4a9cc5cf-6e6c-ffeb-32c7-a5428e706fe9@oracle.com> Message-ID: <21d4c4ee-e386-d908-2c7f-4573e715f91e@oracle.com> On 2017-06-12 17:47, Thomas St?fe wrote: > > > Okay, I put a bit of work into it (stack-only LogStream) and > encountered nothing difficult, just a lot of boilerplate work. > > This is how I am changing the callsites: > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-UL-should-not-use-resource-memory-for-LogStream/current-work/webrev/ > > > Please tell me if you had something different in mind; I want to make > sure we have the same idea before changing such a lot of code. Nice work. This is what I expected to see. Just a small suggestion I thought of when looking at it: Instead of 769 Log(itables) logi; 770 LogStream lsi(logi.trace()); you could do 769 LogStream lsi(Log(itables)::trace()); to save a line. Of course this only works if the log instance isn't used for anything else than initializing the LogStream. Touching on this is a suggestion that was brought up here when discussing this locally, which is to add is_enabled() functions to the LogStream class. By doing that we wouldn't need the log instance at all in places where only the stream API is used, which is good. If we're going for that, it would be great to do it before this change so we could fix up the call sites to use that as well. Let me know what you think. > > Notes: > - I only remove ResourceMark objects where I can be sure that they are > intended to scope the LogStream itself. If I cannot be sure, I leave > them in for now. > - using LogStream as a stack object means we need to include > logStream.hpp (or decide to move LogStream to log.hpp). 
I don't think it's a big deal to include logStream.hpp as well, but we can consider moving it into log.hpp if people prefer that. I don't really have a strong opinion on this. > - If there is only one debug level involved (true in a lot of places), > I prefer LogTarget to Log, because it minimizes error possibilities > - In some places the coding gets more complicated, usually if the code > wants to decide the log level dynamically, but the log level is a > template parameter to LogTarget. See e.g. linkresolver.cpp. > - I see the macro "log_develop_is_enabled" and wonder why it exists > and why the code is not just surrounded by #ifndef PRODUCT? That's only for convenience. It could as well have been ifndef:ed. Thanks, Marcus From thomas.stuefe at gmail.com Tue Jun 13 09:53:22 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Jun 2017 11:53:22 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: <21d4c4ee-e386-d908-2c7f-4573e715f91e@oracle.com> References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> <4a9cc5cf-6e6c-ffeb-32c7-a5428e706fe9@oracle.com> <21d4c4ee-e386-d908-2c7f-4573e715f91e@oracle.com> Message-ID: Hi Marcus, On Tue, Jun 13, 2017 at 10:32 AM, Marcus Larsson wrote: > > On 2017-06-12 17:47, Thomas St?fe wrote: > > > > Okay, I put a bit of work into it (stack-only LogStream) and encountered > nothing difficult, just a lot of boilerplate work. > > This is how I am changing the callsites: > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-UL- > should-not-use-resource-memory-for-LogStream/current-work/webrev/ > > Please tell me if you had something different in mind; I want to make sure > we have the same idea before changing such a lot of code. > > > Nice work. This is what I expected to see. > > Great. 
> Just a small suggestion I thought of when looking at it: > > Instead of > > 769 Log(itables) logi; 770 LogStream lsi(logi.trace()); > > you could do > > 769 LogStream lsi(Log(itables)::trace()); > > to save a line. > Of course this only works if the log instance isn't used for anything else > than initializing the LogStream. > > Touching on this is a suggestion that was brought up here when discussing > this locally, which is to add is_enabled() functions to the LogStream class. > By doing that we wouldn't need the log instance at all in places where > only the stream API is used, which is good. > If we're going for that, it would be great to do it before this change so > we could fix up the call sites to use that as well. > Let me know what you think. > > Makes sense. But then, how about making the syntax consistent with Log and LogTarget: LogStream(Debug, gc) ls; if (ls.is_enabled()) {... } We actually already have this already, "LogStreamHandle". I'd propose instead to make this a default for LogStream: template class LogStreamImpl : public outputStream { LogTargetImpl _log_target; public: static bool is_enabled() { return LogImpl::is_level(level); } void write(const char* s, size_t len) { _log_target.print("%s", s); } }; #define LogStream(level, ...) LogStreamImpl This is now made easy because with the planned simplification of LogStream line buffer handling, the LogStream class hierarchy can be simplified down to this one class LogStreamImpl. Then, syntax for using Log and LogTarget is the same as for LogStream. Also, implementation is similar for all three cases, which helps maintenance. > Notes: > - I only remove ResourceMark objects where I can be sure that they are > intended to scope the LogStream itself. If I cannot be sure, I leave them > in for now. > - using LogStream as a stack object means we need to include logStream.hpp > (or decide to move LogStream to log.hpp). 
> > > I don't think it's a big deal to include logStream.hpp as well, but we can > consider moving it into log.hpp if people prefer that. I don't really have > a strong opinion on this. > > I prefer keeping logStream.hpp around and to add the second include if you need an outputStream. > - If there is only one debug level involved (true in a lot of places), I > prefer LogTarget to Log, because it minimizes error possibilities > - In some places the coding gets more complicated, usually if the code > wants to decide the log level dynamically, but the log level is a template > parameter to LogTarget. See e.g. linkresolver.cpp. > - I see the macro "log_develop_is_enabled" and wonder why it exists and > why the code is not just surrounded by #ifndef PRODUCT? > > > That's only for convenience. It could as well have been ifndef:ed. > > Thanks, > Marcus > Note that I am currently in the process of doing these changes, because I like to get this fix done. The number of callsites to fix up is large (~100). I do this partly with scripting, but still a bit of work. I would appreciate it if we could agree quickly, to prevent having to rewrite larger sections later. ..Thomas From erik.osterlund at oracle.com Tue Jun 13 12:49:45 2017 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 13 Jun 2017 14:49:45 +0200 Subject: RFR: 8086005: Define __STDC_xxx_MACROS config macros globally via build system In-Reply-To: References: Message-ID: <593FDF69.4030701@oracle.com> Hi, This looks good to me. Relying on header include order is never a good idea. Thanks, /Erik On 2017-06-08 01:51, Kim Barrett wrote: > Please review this change to the build of hotspot to globally define > the __STDC_xxx_MACROS macros via the command line, rather than > via #defines scattered through several header files. 
> > CR: > https://bugs.openjdk.java.net/browse/JDK-8086005 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8086005/hs.00/ > http://cr.openjdk.java.net/~kbarrett/8086005/hotspot.00/ > > Testing: > JPRT > From kim.barrett at oracle.com Tue Jun 13 13:16:10 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 13 Jun 2017 09:16:10 -0400 Subject: RFR: 8086005: Define __STDC_xxx_MACROS config macros globally via build system In-Reply-To: <593FDF69.4030701@oracle.com> References: <593FDF69.4030701@oracle.com> Message-ID: <2D79DE80-C5B3-48E4-830F-669A090490C0@oracle.com> > On Jun 13, 2017, at 8:49 AM, Erik ?sterlund wrote: > > Hi, > > This looks good to me. Relying on header include order is never a good idea. Thanks. > > Thanks, > /Erik > > On 2017-06-08 01:51, Kim Barrett wrote: >> Please review this change to the build of hotspot to globally define >> the __STDC_xxx_MACROS macros via the command line, rather than >> via #defines scattered through several header files. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8086005 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8086005/hs.00/ >> http://cr.openjdk.java.net/~kbarrett/8086005/hotspot.00/ >> >> Testing: >> JPRT From gromero at linux.vnet.ibm.com Tue Jun 13 14:44:19 2017 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Tue, 13 Jun 2017 11:44:19 -0300 Subject: Jtreg JVMCI test failures Message-ID: <593FFA43.5030005@linux.vnet.ibm.com> Hi, I'm trying to run the jtreg JVMCI tests (both standalone and using 'make test-hotspot-jtreg') but I'm getting something like that in all of them: -------------------------------------------------- TEST: compiler/jvmci/compilerToVM/AllocateCompileIdTest.java TEST JDK: /home/gromero/hg/jdk9/dev/build/linux-x86_64-normal-server-release/images/jdk ACTION: build -- Not run. Test running... 
REASON: User specified action: run build jdk.internal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox TIME: rnal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox seconds messages: command: build jdk.internal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox reason: User specified action: run build jdk.internal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox TEST RESULT: Error. can't find module jdk.internal.vm.ci in test directory or libraries -------------------------------------------------- It's on x64. I've tried tips of jdk9/dev, jdk9/jdk9, and jdk10/hs. Any clue on that? Thanks, Gustavo From cthalinger at twitter.com Tue Jun 13 15:32:05 2017 From: cthalinger at twitter.com (Christian Thalinger) Date: Tue, 13 Jun 2017 08:32:05 -0700 Subject: Jtreg JVMCI test failures In-Reply-To: <593FFA43.5030005@linux.vnet.ibm.com> References: <593FFA43.5030005@linux.vnet.ibm.com> Message-ID: > On Jun 13, 2017, at 7:44 AM, Gustavo Romero wrote: > > Hi, > > I'm trying to run the jtreg JVMCI tests (both standalone and using > 'make test-hotspot-jtreg') but I'm getting something like that in all of them: > > -------------------------------------------------- > TEST: compiler/jvmci/compilerToVM/AllocateCompileIdTest.java > TEST JDK: /home/gromero/hg/jdk9/dev/build/linux-x86_64-normal-server-release/images/jdk > > ACTION: build -- Not run. Test running... > REASON: User specified action: run build jdk.internal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox > TIME: rnal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox seconds > messages: > command: build jdk.internal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox > reason: User specified action: run build jdk.internal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox > > TEST RESULT: Error. 
can't find module jdk.internal.vm.ci in test directory or libraries > -------------------------------------------------- > > It's on x64. I've tried tips of jdk9/dev, jdk9/jdk9, and jdk10/hs. > > Any clue on that? That?s odd. It works for me with jdk9: $ jtreg -verbose:summary -noreport -jdk:$PWD/build/macosx-x86_64-normal-server-release/images/jdk hotspot/test/compiler/jvmci/compilerToVM/AllocateCompileIdTest.java Passed: compiler/jvmci/compilerToVM/AllocateCompileIdTest.java Test results: passed: 1 Do you get this? $ ./build/macosx-x86_64-normal-server-release/images/jdk/bin/java --list-modules | grep jdk.internal.vm.ci jdk.internal.vm.ci at 9-internal > > > Thanks, > Gustavo > From thomas.stuefe at gmail.com Tue Jun 13 16:23:03 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Jun 2017 18:23:03 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> <4a9cc5cf-6e6c-ffeb-32c7-a5428e706fe9@oracle.com> <21d4c4ee-e386-d908-2c7f-4573e715f91e@oracle.com> Message-ID: Hi, On Tue, Jun 13, 2017 at 11:53 AM, Thomas St?fe wrote: > Hi Marcus, > > On Tue, Jun 13, 2017 at 10:32 AM, Marcus Larsson < > marcus.larsson at oracle.com> wrote: > >> >> On 2017-06-12 17:47, Thomas St?fe wrote: >> >> >> >> Okay, I put a bit of work into it (stack-only LogStream) and encountered >> nothing difficult, just a lot of boilerplate work. >> >> This is how I am changing the callsites: >> >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-UL-should >> -not-use-resource-memory-for-LogStream/current-work/webrev/ >> >> Please tell me if you had something different in mind; I want to make >> sure we have the same idea before changing such a lot of code. >> >> >> Nice work. This is what I expected to see. >> >> > Great. 
> > >> Just a small suggestion I thought of when looking at it: >> >> Instead of >> >> 769 Log(itables) logi; 770 LogStream lsi(logi.trace()); >> >> you could do >> >> 769 LogStream lsi(Log(itables)::trace()); >> >> to save a line. >> Of course this only works if the log instance isn't used for anything >> else than initializing the LogStream. >> >> Touching on this is a suggestion that was brought up here when discussing >> this locally, which is to add is_enabled() functions to the LogStream class. >> By doing that we wouldn't need the log instance at all in places where >> only the stream API is used, which is good. >> If we're going for that, it would be great to do it before this change so >> we could fix up the call sites to use that as well. >> Let me know what you think. >> >> > Makes sense. But then, how about making the syntax consistent with Log and > LogTarget: > > LogStream(Debug, gc) ls; > if (ls.is_enabled()) {... } > > We actually already have this already, "LogStreamHandle". I'd propose > instead to make this a default for LogStream: > > template LogTag::__NO_TAG, ......> > class LogStreamImpl : public outputStream { > LogTargetImpl _log_target; > public: > static bool is_enabled() { > return LogImpl::is_level(level); > } > void write(const char* s, size_t len) { > _log_target.print("%s", s); > } > }; > > #define LogStream(level, ...) LogStreamImpl LOG_TAGS(__VA_ARGS__)> > > This is now made easy because with the planned simplification of LogStream > line buffer handling, the LogStream class hierarchy can be simplified down > to this one class LogStreamImpl. > > Then, syntax for using Log and LogTarget is the same as for LogStream. > Also, implementation is similar for all three cases, which helps > maintenance. > > >> Notes: >> - I only remove ResourceMark objects where I can be sure that they are >> intended to scope the LogStream itself. If I cannot be sure, I leave them >> in for now. 
>> - using LogStream as a stack object means we need to include >> logStream.hpp (or decide to move LogStream to log.hpp). >> >> >> I don't think it's a big deal to include logStream.hpp as well, but we >> can consider moving it into log.hpp if people prefer that. I don't really >> have a strong opinion on this. >> >> > I prefer keeping logStream.hpp around and to add the second include if you > need an outputStream. > > >> - If there is only one debug level involved (true in a lot of places), I >> prefer LogTarget to Log, because it minimizes error possibilities >> - In some places the coding gets more complicated, usually if the code >> wants to decide the log level dynamically, but the log level is a template >> parameter to LogTarget. See e.g. linkresolver.cpp. >> - I see the macro "log_develop_is_enabled" and wonder why it exists and >> why the code is not just surrounded by #ifndef PRODUCT? >> >> >> That's only for convenience. It could as well have been ifndef:ed. >> >> Thanks, >> Marcus >> > > Note that I am currently in the process of doing these changes, because I > like to get this fix done. The number of callsites to fix up is large > (~100). I do this partly with scripting, but still a bit of work. I would > appreciate it if we could agree quickly, to prevent having to rewrite > larger sections later. > > ..Thomas > So, I changed a whole bunch of callsites to stack-only LogStreams and my brain is slowly turning to cheese :) therefore, lets do a sanity check if this is still what we want. Current snapshot of my work here: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-UL-should-not-use-resource-memory-for-LogStream/current-work-2/webrev/ Some thoughts: After talking this over with Eric off-list, I do not think anymore that reducing the: LogTarget(...) log; if (log.is_enabled()) { LogStream ls(log)... } to just LogStream ls(..); if (ls.is_enabled()) { .. } is really a good idea. We want logging to not cause costs if logging is disabled. 
But this way, we would always to pay the cost for initializing the LogStream, which means initializing outputStream at least once (for the parent class) and maybe twice (if the line buffer is an outputStream class too). outputStream constructor just assigns a bunch of member variables, but this is still more than nothing. --- Funnily, when translating all these callsites, I almost never used Log, but mostly LogTarget. This is because I wanted to avoid repeating the (level, tag, tag..) declarations, and the only way to do this is via LogTarget. Compare: Log(gc, metaspace, freelist) log; if (log.is_debug()) { LogStream ls(log.debug()); } repeats the "debug" info. Even worse are cases where the whole taglist would be repeated: if (log_is_enabled(Info, class, loader, constraints)) { LogStream ls(Log( class, loader, constraints)::info()); } --- I found cases where the usage of "xx_stream()" was not guarded by any is_enabled() flag but executed unconditionally, e.g. metaspace.cpp (VirtualSpaceNode::take_from_committed()): 1016 if (!is_available(chunk_word_size)) { 1017 Log(gc, metaspace, freelist) log; 1018 log.debug("VirtualSpaceNode::take_from_committed() not available " SIZE_FORMAT " words ", chunk_word_size); 1019 // Dump some information about the virtual space that is nearly full 1020 ResourceMark rm; 1021 print_on(log.debug_stream()); 1022 return NULL; 1023 } So I really wondered: print_on(log.debug_stream()) is executed unconditionally, what happens here? What happens is that the whole printing is executed, first inside the LogStream, then down to LogTargetImpl, and somewhere deep down in UL (in LogTagSet::log()) the assembled message is ignored because there is no output connected to it. So we always pay for the whole printing. I consider this an error, right? I wonder how this could be prevented. --- After doing all these changes, I am unsure. Is this the right direction? 
The alternative would still be my original proposal (tying the LogStream instances as members to the LogTarget instance on the stack). What do you think? I also think that if we go this direction, it might make sense to do this in jdk9, because auto-merging jdk9 to jdk10 may be a problem with so many changes. Or am I too pessimistic? Kind regards, Thomas From gerard.ziemski at oracle.com Tue Jun 13 19:28:40 2017 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Tue, 13 Jun 2017 14:28:40 -0500 Subject: RFR(10)(S): 8181503: Can't compile hotspot with c++11 In-Reply-To: References: Message-ID: Thank you Thomas! > On Jun 12, 2017, at 11:13 AM, Thomas St?fe wrote: > > Hi Gerard, > > looks fine. > > I cannot comment on the asm syntax changes in the bsd code. > > methodMatcher.cpp: this is a real bug. > > compiledIC.cpp: this too but as false is usually defined as 0, so it probably never mattered. > > Kind Regards, Thomas > > > On Mon, Jun 12, 2017 at 4:29 PM, Gerard Ziemski wrote: > hi all, > > Please review this small fix, which addresses 4 issues caught by c++11 compiler on a Mac: > > > #1 Error in src/share/vm/utilities/debug.hpp > > jdk10/hotspot/src/share/vm/utilities/vmError.cpp:450:13: error: case value evaluates to 3758096384, which cannot be narrowed to type 'int' [-Wc++11-narrowing] > case INTERNAL_ERROR: > > For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/share/vm/utilities/vmError.cpp.udiff.html and http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/share/vm/utilities/vmError.hpp.udiff.html > > > #2 Error in src/share/vm/compiler/methodMatcher.cpp > > jdk10/hotspot/src/share/vm/compiler/methodMatcher.cpp:99:19: error: comparison between pointer and integer ('char *' and 'int') > if (colon + 2 != '\0') { > ~~~~~~~~~ ^ ~~~~ > > For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/share/vm/compiler/methodMatcher.cpp.udiff.html > > > #3 Error in src/os_cpu/bsd_x86/vm/os_bsd_x86.cpp > > 
jdk10/hotspot/src/os_cpu/bsd_x86/vm/os_bsd_x86.cpp:282:19: error: invalid suffix on literal; C++11 requires a space between literal and identifier [-Wreserved-user-defined-literal] > __asm__("mov %%"SPELL_REG_SP", %0":"=r"(esp)); > > For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/os_cpu/bsd_x86/vm/os_bsd_x86.cpp.udiff.html > > > #4 Error in src/share/vm/code/compiledIC.cpp > > /Volumes/Work/jdk10/hotspot/src/share/vm/code/compiledIC.cpp:227:15: error: comparison between pointer and integer ('address' (aka 'unsigned char *') and 'int') > if (entry == false) { > ~~~~~ ^ ~~~~~ > > For a fix see http://cr.openjdk.java.net/~gziemski/8181503_rev1/src/share/vm/code/compiledIC.cpp.udiff.html > > References: > bug link at https://bugs.openjdk.java.net/browse/JDK-8181503 > webrev at http://cr.openjdk.java.net/~gziemski/8181503_rev1 > > Tested with JPRT hotspot. > > > cheers > From gerard.ziemski at oracle.com Tue Jun 13 19:59:21 2017 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Tue, 13 Jun 2017 14:59:21 -0500 Subject: RFR(10)(S): 8181503: Can't compile hotspot with c++11 In-Reply-To: <159B33A6-6932-4E34-BFF1-E8AA0D3A783D@oracle.com> References: <159B33A6-6932-4E34-BFF1-E8AA0D3A783D@oracle.com> Message-ID: <8496489F-C159-45B9-A5AE-1B40B58BA3DA@oracle.com> > On Jun 12, 2017, at 5:07 PM, Kim Barrett wrote: > > src/share/vm/utilities/vmError.hpp > 38 static uint _id; // Solaris/Linux signals: 0 - SIGRTMAX > > I think changing the type of _id from int to uint is really not so > simple. There's a bit of a type mess in this area, with some functions > expecting or using int and others uint. _id is set from an int value. > It is passed to os::exception_name, which takes an int argument. The > windows implementation of that function immediately casts that > argument to a uint, but the posix implementation actually wants an int > value. OTOH, there are other places that expect or treat _id as a > uint. 
So the proposed change is really just rearranging the deck > chairs in that mess, and is not really much of an improvement. > I *think* using uint consistently throughout for this value could be > made to work, but I haven't completely worked through it. Thanks Kim, I didn't see anywhere in the code the _id being compared using arithmetic (ex: "if (_id < 0)"), so I thought we were good using uint. Thanks for taking a closer look. Would redefining the 3 troublesome enums from: enum VMErrorType { INTERNAL_ERROR = 0xe0000000, OOM_MALLOC_ERROR = 0xe0000001, OOM_MMAP_ERROR = 0xe0000002 }; to: enum VMErrorType { INTERNAL_ERROR = 0xe000000, OOM_MALLOC_ERROR = 0xe000001, OOM_MMAP_ERROR = 0xe000002 }; (i.e. removing one 0 from the defined values) be a good fix? SIGRTMAX is 64 (on Linux tested using "kill -l") so anything well above that would be user defined and therefore safe? cheers From igor.ignatyev at oracle.com Tue Jun 13 23:09:45 2017 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Tue, 13 Jun 2017 16:09:45 -0700 Subject: RFR(S) : 8181053 : port basicvmtest to jtreg Message-ID: <4DE9B200-A792-452D-9AAD-E1BC1B1E7001@oracle.com> http://cr.openjdk.java.net/~iignatyev//8181053/webrev.00/index.html > 121 lines changed: 54 ins; 67 del; 0 mod; Hi all, could you please review this small patch which introduces jtreg version of basicvmtest test? make version of basicvmtest also included sanity testing for CDS on client JVM, but this testing modified the product binaries, so it might interfere with results of other tests and is not very reliable. I have consulted w/ Misha about this, and he assured me that there are other better CDS tests which check the same functionality, so we should not lose test coverage there. 
webrev: http://cr.openjdk.java.net/~iignatyev//8181053/webrev.00/index.html jbs: https://bugs.openjdk.java.net/browse/JDK-8181053 testing: jprt, new added test Thanks, -- Igor From kim.barrett at oracle.com Wed Jun 14 02:07:12 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 13 Jun 2017 22:07:12 -0400 Subject: RFR(10)(S): 8181503: Can't compile hotspot with c++11 In-Reply-To: <8496489F-C159-45B9-A5AE-1B40B58BA3DA@oracle.com> References: <159B33A6-6932-4E34-BFF1-E8AA0D3A783D@oracle.com> <8496489F-C159-45B9-A5AE-1B40B58BA3DA@oracle.com> Message-ID: > On Jun 13, 2017, at 3:59 PM, Gerard Ziemski wrote: > > >> On Jun 12, 2017, at 5:07 PM, Kim Barrett wrote: >> >> src/share/vm/utilities/vmError.hpp >> 38 static uint _id; // Solaris/Linux signals: 0 - SIGRTMAX >> >> I think changing the type of _id from int to uint is really not so >> simple. There's a bit of a type mess in this area, with some functions >> expecting or using int and others uint. _id is set from an int value. >> It is passed to os::exception_name, which takes an int argument. The >> windows implementation of that function immediately casts that >> argument to a uint, but the posix implementation actually wants an int >> value. OTOH, there are other places that expect or treat _id as a >> uint. So the proposed change is really just rearranging the deck >> chairs in that mess, and is not really much of an improvement. >> I *think* using uint consistently throughout for this value could be >> made to work, but I haven't completely worked through it. > > Thanks Kim, > > I didn't see anywhere in the code the _id being compared using arithmetic (ex: "if (_id < 0)"), so I thought we were good using uint. Thanks for taking a closer look. 
> > Would redefining the 3 troublesome enums from: > > enum VMErrorType { > INTERNAL_ERROR = 0xe0000000, > OOM_MALLOC_ERROR = 0xe0000001, > OOM_MMAP_ERROR = 0xe0000002 > }; > > to: > > enum VMErrorType { > INTERNAL_ERROR = 0xe000000, > OOM_MALLOC_ERROR = 0xe000001, > OOM_MMAP_ERROR = 0xe000002 > }; > > (i.e. removing one 0 from the defined values) be a good fix? SIGRTMAX is 64 (on Linux tested using "kill -l") so anything well above that would be user defined and therefore safe? I *think* the current values were chosen for consistency with other Windows error codes. There's a table in os_windows, used by os::exception_name. And then one would need to deal with the question of a VMErrorType possibly having a different type on different platforms. As is, its representation type is forced to uint (for all of LP32, LP64, or LLP64), but with that change of values it could be representationally either int or uint, and I'm pretty sure different platforms make different choices in that regard. So such a value change is less simple than it might appear. I'm not sure there is an easy, clean, and portable answer. It looks like windows works best here with uint, while for posix int would be preferred, so that regardless of which is used there will be inconsistencies somewhere. 
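To make the underlying-type point above concrete, here is a small self-contained C++11 sketch. Only the two candidate enum values are taken from the thread; the type names are invented for illustration. With 0xe0000000 the value does not fit in a signed 32-bit int, so (on the usual ILP32/LP64/LLP64 models) the underlying type is forced to unsigned int, which is exactly why `case INTERNAL_ERROR:` narrows when the switch operand is an int:

```cpp
#include <cassert>
#include <type_traits>

// Value does not fit in a signed 32-bit int: the underlying type must be
// able to represent it and may not be wider than int, so it is unsigned int.
enum BigErrorType   { BIG_INTERNAL   = 0xe0000000 };

// With one zero removed the value fits in int; common compilers then pick
// int, but the choice is implementation-defined -- the portability concern
// Kim raises above.
enum SmallErrorType { SMALL_INTERNAL = 0xe000000 };

static_assert(std::is_unsigned<std::underlying_type<BigErrorType>::type>::value,
              "0xe0000000 forces an unsigned underlying type");
```

The static_assert documents only the forced-unsigned case; the smaller-value case is deliberately left unasserted because its underlying type is implementation-defined.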
From thomas.stuefe at gmail.com Wed Jun 14 04:36:13 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 14 Jun 2017 06:36:13 +0200 Subject: RFR(10)(S): 8181503: Can't compile hotspot with c++11 In-Reply-To: <8496489F-C159-45B9-A5AE-1B40B58BA3DA@oracle.com> References: <159B33A6-6932-4E34-BFF1-E8AA0D3A783D@oracle.com> <8496489F-C159-45B9-A5AE-1B40B58BA3DA@oracle.com> Message-ID: Hi Gerard, Kim, On Tue, Jun 13, 2017 at 9:59 PM, Gerard Ziemski wrote: > > > On Jun 12, 2017, at 5:07 PM, Kim Barrett wrote: > > > > src/share/vm/utilities/vmError.hpp > > 38 static uint _id; // Solaris/Linux signals: 0 > - SIGRTMAX > > > > I think changing the type of _id from int to uint is really not so > > simple. There's a bit of a type mess in this area, with some functions > > expecting or using int and others uint. _id is set from an int value. > > It is passed to os::exception_name, which takes an int argument. The > > windows implementation of that function immediately casts that > > argument to a uint, but the posix implementation actually wants an int > > value. OTOH, there are other places that expect or treat _id as a > > uint. So the proposed change is really just rearranging the deck > > chairs in that mess, and is not really much of an improvement. > > I *think* using uint consistently throughout for this value could be > > made to work, but I haven't completely worked through it. > > Thanks Kim, > > I didn't see anywhere in the code the _id being compared using arithmetic > (ex: "if (_id < 0)"), so I thought we were good using uint. Thanks for > taking a closer look. > > Would redefining the 3 troublesome enums from: > > enum VMErrorType { > INTERNAL_ERROR = 0xe0000000, > OOM_MALLOC_ERROR = 0xe0000001, > OOM_MMAP_ERROR = 0xe0000002 > }; > > to: > > enum VMErrorType { > INTERNAL_ERROR = 0xe000000, > OOM_MALLOC_ERROR = 0xe000001, > OOM_MMAP_ERROR = 0xe000002 > }; > > (i.e. removing one 0 from the defined values) be a good fix? 
SIGRTMAX is > 64 (on Linux tested using "kill -l") so anything well above that would be > user defined and therefore safe? > > We did something similar in our (SAP's) port to prevent collisions between VMErrorType values and existing SEH exception numbers. We extended the _id type to uint64_t and moved the VMErrorType enum values up to the upper 32bit (SEH numbers are DWORD, so 32bit unsigned). We also visibly changed them to not look in the debugger like typical SEH exception numbers. enum VMErrorType { INTERNAL_ERROR = 0xab0000000, OOM_MALLOC_ERROR = 0xab0000001, OOM_MMAP_ERROR = 0xab0000002, } Code has been running in our VM for many years without issues. ..Thomas > > cheers > > From erik.helin at oracle.com Wed Jun 14 07:41:57 2017 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 14 Jun 2017 09:41:57 +0200 Subject: RFR(S) : 8181053 : port basicvmtest to jtreg In-Reply-To: <4DE9B200-A792-452D-9AAD-E1BC1B1E7001@oracle.com> References: <4DE9B200-A792-452D-9AAD-E1BC1B1E7001@oracle.com> Message-ID: <15a84631-f799-92f8-8a51-bfd3dab6bf90@oracle.com> On 06/14/2017 01:09 AM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8181053/webrev.00/index.html >> 121 lines changed: 54 ins; 67 del; 0 mod; > > Hi all, > > could you please review this small patch which introduces jtreg version of basicvmtest test? > > make version of basicvmtest also included sanity testing for CDS on client JVM, but this testing modified the product binaries, so it might interfere with results of other tests and is not very reliable. I have consulted w/ Misha about this, and he assured me that there are other better CDS tests which check the same functionality, so we should not lose test coverage there. > > webrev: http://cr.openjdk.java.net/~iignatyev//8181053/webrev.00/index.html Looks good, Reviewed. Thank you for this patch Igor! I've been meaning to fix this for a long time but never got around to it... 
Erik > jbs: https://bugs.openjdk.java.net/browse/JDK-8181053 > testing: jprt, new added test > > Thanks, > -- Igor > From erik.helin at oracle.com Wed Jun 14 09:39:55 2017 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 14 Jun 2017 11:39:55 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> <4a9cc5cf-6e6c-ffeb-32c7-a5428e706fe9@oracle.com> <21d4c4ee-e386-d908-2c7f-4573e715f91e@oracle.com> Message-ID: <7528b942-a591-401a-433d-5e16b85bc10c@oracle.com> On 06/13/2017 06:23 PM, Thomas Stüfe wrote: > So, I changed a whole bunch of callsites to stack-only LogStreams and my > brain is slowly turning to cheese :) therefore, let's do a sanity check > if this is still what we want. Current snapshot of my work here: > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-UL-should-not-use-resource-memory-for-LogStream/current-work-2/webrev/ I think this looks really good! A few comments: --- old/src/cpu/sparc/vm/vm_version_sparc.cpp +++ new/src/cpu/sparc/vm/vm_version_sparc.cpp @@ -381,7 +382,8 @@ - outputStream* log = Log(os, cpu)::info_stream(); + LogStream ls(Log(os, cpu)::info()); + outputStream* log = &ls; I think the above pattern, LogStream ls(Log(foo, bar)::info()), turned out very good, succinct and readable. Great work. 
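For readers following the thread, the pattern being praised can be mocked up as a self-contained sketch. None of the names below are the real UL classes; the flush-on-destruction behavior is an assumption chosen to mirror the discussion (a stack-allocated stream buffers a line and hands it to its target when it goes out of scope):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Stand-in for HotSpot's abstract printing interface.
struct outputStream {
    virtual void print(const std::string& s) = 0;
    virtual ~outputStream() {}
};

// Stand-in for what Log(os, cpu)::info() would hand back.
struct LogTargetHandle {
    bool enabled;
    std::string* sink;   // where flushed text ends up
};

// Stack-allocated stream: buffers text, flushes to the target on destruction.
struct LogStream : outputStream {
    LogTargetHandle target;
    std::ostringstream buf;
    explicit LogStream(LogTargetHandle t) : target(t) {}
    void print(const std::string& s) { buf << s; }
    ~LogStream() {
        if (target.enabled && target.sink != 0) *target.sink += buf.str();
    }
};

// The reviewed pattern: construct the stream on the stack, then pass an
// outputStream* to code that only knows how to print.
std::string demo(bool enabled) {
    std::string out;
    LogTargetHandle info = { enabled, &out };
    {
        LogStream ls(info);            // LogStream ls(Log(os, cpu)::info());
        outputStream* log = &ls;       // outputStream* log = &ls;
        log->print("cpu features: vis3");
    }                                  // flushed (or dropped) here
    return out;
}
```

demo(true) returns the printed text; demo(false) returns an empty string because a disabled target never flushes.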
--- old/src/share/vm/classfile/classLoaderData.cpp +++ new/src/share/vm/classfile/classLoaderData.cpp @@ -831,16 +833,17 @@ - outputStream* log = Log(class, loader, data)::debug_stream(); - log->print("create class loader data " INTPTR_FORMAT, p2i(cld)); - log->print(" for instance " INTPTR_FORMAT " of %s", p2i((void *)cld->class_loader()), + Log(class, loader, data) log; + LogStream ls(log.debug()); + ls.print("create class loader data " INTPTR_FORMAT, p2i(cld)); + ls.print(" for instance " INTPTR_FORMAT " of %s", p2i((void *)cld->class_loader()), cld->loader_name()); if (string.not_null()) { - log->print(": "); - java_lang_String::print(string(), log); + ls.print(": "); + java_lang_String::print(string(), &ls); } - log->cr(); + ls.cr(); Do you really need the `log` variable here? It seems to me that only `ls` is used? Or did you mean to do the `outputStream* log = &ls` pattern here as well? Or maybe I missed something? --- old/src/share/vm/classfile/loaderConstraints.cpp +++ new/src/share/vm/classfile/loaderConstraints.cpp @@ -98,14 +101,14 @@ if (klass != NULL && klass->class_loader_data()->is_unloading()) { probe->set_klass(NULL); - if (log_is_enabled(Info, class, loader, constraints)) { + if (lt.is_enabled()) { ResourceMark rm; - outputStream* out = Log(class, loader, constraints)::info_stream(); - out->print_cr("purging class object from constraint for name %s," + LogStream ls(lt); + ls.print_cr("purging class object from constraint for name %s," " loader list:", probe->name()->as_C_string()); for (int i = 0; i < probe->num_loaders(); i++) { - out->print_cr(" [%d]: %s", i, + ls.print_cr(" [%d]: %s", i, probe->loader_data(i)->loader_name()); } } Could the pattern LogStream ls(lt); ls.print_cr("hello, brave new logging world"); become LogStream(lt).print_cr("hello, brave new logging world"); in order to have fewer lines? Not sure if it is better, but it is at least shorter :) Seems to be a rather common pattern as well... 
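The proposed one-liner works because the temporary stream lives until the end of the full expression, so a destructor flush delivers the line immediately. A mock (stand-in names, destructor flush assumed, not the real UL API) shows the mechanics:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical target the stream writes into.
struct MockTarget { std::string out; };

// Hypothetical stream: buffers the line, flushes in the destructor.
struct MockLogStream {
    MockTarget& t;
    std::ostringstream buf;
    explicit MockLogStream(MockTarget& target) : t(target) {}
    void print_cr(const std::string& s) { buf << s << '\n'; }
    ~MockLogStream() { t.out += buf.str(); }   // flush on destruction
};

std::string one_liner_demo() {
    MockTarget lt;
    // The temporary is destroyed at the ';', which flushes the line.
    MockLogStream(lt).print_cr("hello, brave new logging world");
    return lt.out;   // the text has already arrived here
}
```

A design note on the trade-off: the one-liner saves a line but gives the caller no handle for a second print_cr on the same buffered stream, so it only suits single-line messages.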
--- old/src/share/vm/gc/g1/g1AllocRegion.cpp +++ new/src/share/vm/gc/g1/g1AllocRegion.cpp @@ -211,12 +213,9 @@ if ((actual_word_size == 0 && result == NULL) || detailed_info) { ResourceMark rm; - outputStream* out; - if (detailed_info) { - out = log.trace_stream(); - } else { - out = log.debug_stream(); - } + LogStream ls_trace(log.trace()); + LogStream ls_debug(log.debug()); + outputStream* out = detailed_info ? &ls_trace : &ls_debug; Could this be LogStream out = LogStream(detailed_info ? log.trace() : log.debug()); or is this too succinct? Anyways, nice use of the ternary operator here, makes the code much more readable. I didn't have time to look through the entire patch (got approx 50% of the way), but I think the patch is becoming really good. > Some thoughts: > > After talking this over with Erik off-list, Well, off-list, but on IRC ;) #openjdk on irc.oftc.net for those that want to follow along or join in on the discussion. > I do not think anymore that reducing the: > > LogTarget(...) log; > if (log.is_enabled()) { > LogStream ls(log)... > } > > to just > > LogStream ls(..); > if (ls.is_enabled()) { > .. > } > > is really a good idea. We want logging to not cause costs if logging is > disabled. But this way, we would always have to pay the cost for initializing > the LogStream, which means initializing outputStream at least once (for > the parent class) and maybe twice (if the line buffer is an outputStream > class too). outputStream constructor just assigns a bunch of member > variables, but this is still more than nothing. Yep, I still agree with this. 
Even worse are cases where the whole taglist > would be repeated: > > if (log_is_enabled(Info, class, loader, constraints)) { > LogStream ls(Log( class, loader, constraints)::info()); > } I think using LogTarget makes a lot of sense in these situations, I prefer that solution. > --- > > I found cases where the usage of "xx_stream()" was not guarded by any > is_enabled() flag but executed unconditionally, e.g. metaspace.cpp > (VirtualSpaceNode::take_from_committed()): > > 1016 if (!is_available(chunk_word_size)) { > 1017 Log(gc, metaspace, freelist) log; > 1018 log.debug("VirtualSpaceNode::take_from_committed() not > available " SIZE_FORMAT " words ", chunk_word_size); > 1019 // Dump some information about the virtual space that is nearly > full > 1020 ResourceMark rm; > 1021 print_on(log.debug_stream()); > 1022 return NULL; > 1023 } > > So I really wondered: print_on(log.debug_stream()) is executed > unconditionally, what happens here? What happens is that the whole > printing is executed, first inside the LogStream, then down to > LogTargetImpl, and somewhere deep down in UL (in LogTagSet::log()) the > assembled message is ignored because there is no output connected to it. > So we always pay for the whole printing. I consider this an error, > right? I wonder how this could be prevented. Hmm, I'm not an expert on UL internals, so take my ideas with a large grain of salt :) Would it be possible to have log.debug() do the is_enabled() check (and just do nothing if the check is false)? That would unfortunately penalize code that wants to call log multiple times, such as: Log(foo, bar) log; if (log.is_enabled()) { log.debug(...); log.debug(...); log.debug(...); } In the above snippet, we would with my suggestion do the is_enabled check 4 times instead of 1. OTOH, one could then remove the first check and just have: Log(foo, bar) log; log.debug(...); log.debug(...); log.debug(...); (but still, this is 3 checks compared to 1). How expensive is the is_enabled check? 
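The cost question can be illustrated with a toy model. All names here are invented (this is not the real UL API); the point is only that the guard skips evaluating the log arguments entirely, while an internal check inside debug() still pays for computing them:

```cpp
#include <cassert>

// Counts how often the "expensive" argument is actually computed.
static int g_expensive_calls = 0;

int expensive_to_compute() { ++g_expensive_calls; return 42; }

// Hypothetical logger with an internal enabled check.
struct MockLog {
    bool enabled;
    int  last;   // records the last value "logged"
    explicit MockLog(bool e) : enabled(e), last(-1) {}
    bool is_enabled() const { return enabled; }
    void debug(int v)       { if (enabled) last = v; }
};

void log_guarded(MockLog& log) {
    if (log.is_enabled()) {                 // the guard under discussion
        log.debug(expensive_to_compute());  // argument computed only if enabled
    }
}

void log_unguarded(MockLog& log) {
    log.debug(expensive_to_compute());      // argument computed regardless
}
```

With a disabled logger, log_guarded never calls expensive_to_compute, while log_unguarded computes the value and then throws it away, which is the unguarded print_on case described above in miniature.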
I _think_ (others, please correct me if I'm wrong) that code is meant to use the "if (is_enabled()" pattern if either the logging or getting the data for logging is expensive. Hence, if code doesn't do this (and instead relies on log.debug() to discard the data), then it should be fine with this costing a bit more (or we have a bug). > --- > > After doing all these changes, I am unsure. Is this the right direction? > The alternative would still be my original proposal (tying the LogStream > instances as members to the LogTarget instance on the stack). What do > you think? I also think that if we go this direction, it might make > sense to do this in jdk9, because auto-merging jdk9 to jdk10 may be a > problem with so many changes. Or am I too pessimistic? IMHO you are definitely heading in the right direction. Again, IMO, I don't think we should do this in JDK 9. Focus on 10 and if backporting turns out to be problematic due to this change, then we fix it then ;) Again, others might have a different view (if so, please chime in). Thanks, Erik > Kind regards, Thomas > > > > > From erik.helin at oracle.com Wed Jun 14 11:50:06 2017 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 14 Jun 2017 13:50:06 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <20170609102041.GA2477@physik.fu-berlin.de> References: <20170609102041.GA2477@physik.fu-berlin.de> Message-ID: Hey Adrian, thanks for contributing and signing the OCA! I think the first three patches (hotspot-add-missing-log-header.diff, hotspot-fix-checkbytebuffer.diff, rename-sparc-linux-atomic-header.diff) all look good, thanks for fixing broken code. Consider them Reviewed by me. 
Every patch needs a corresponding issue in the bug tracker, so I went ahead and created: - https://bugs.openjdk.java.net/browse/JDK-8182163 - https://bugs.openjdk.java.net/browse/JDK-8182164 - https://bugs.openjdk.java.net/browse/JDK-8182165 For the last of those three patches, rename-sparc-linux-atomic-header.diff, did you do `hg mv` when renaming the file (in order to preserve version control history)? For the fourth patch, fix-zero-build-on-sparc.diff, I'm not so sure. For example, the following is a bit surprising to me (mostly because I'm not familiar with zero): --- a/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp +++ b/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp @@ -37,7 +37,7 @@ // understanding that there may be concurrent readers of that memory. void memset_with_concurrent_readers(void* to, int value, size_t size); -#ifdef SPARC +#if defined(SPARC) && !defined(ZERO) When this code was written, the intent was clearly to have a specialized version of this function for SPARC. When writing such code, do we always have to take into account the zero case with !defined(ZERO)? That doesn't seem like the right (or a scalable) approach to me. Severin and/or Roman, do you guys know more about Zero and how this should work? If I want to write a function that I want to specialize for e.g. x86-64 or arm, do I always have to take Zero into account? Or should some other define be used, like #ifdef TARGET_ARCH_sparc? Thanks, Erik On 06/09/2017 12:20 PM, John Paul Adrian Glaubitz wrote: > Hi! > > I am currently working on fixing OpenJDK-9 on all non-mainstream > targets available in Debian. For Debian/sparc64, the attached four > patches were necessary to make the build succeed [1]. > > I know the patches cannot be merged right now, but I'm posting them > anyway in case someone else is interested in using them. > > All patches are: > > Signed-off-by: John Paul Adrian Glaubitz > > I also signed the OCA. 
> > I'm now looking into fixing the builds on alpha (DEC Alpha), armel > > (ARMv4T), m68k (680x0), powerpc (PPC32) and sh4 (SuperH/J-Core). > > > > Cheers, > > Adrian > > > > > [1] https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=sparc64&ver=9%7Eb170-2&stamp=1496931563&raw=0 > From glaubitz at physik.fu-berlin.de Wed Jun 14 12:04:08 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 14 Jun 2017 14:04:08 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: References: <20170609102041.GA2477@physik.fu-berlin.de> Message-ID: <20170614120408.GB16230@physik.fu-berlin.de> Hi Erik! On Wed, Jun 14, 2017 at 01:50:06PM +0200, Erik Helin wrote: > thanks for contributing and signing the OCA! Thanks for reviewing my patches ;-). > I think the first three patches (hotspot-add-missing-log-header.diff, > hotspot-fix-checkbytebuffer.diff, rename-sparc-linux-atomic-header.diff) all > look good, thanks for fixing broken code. Consider them Reviewed by me. > Every patch needs a corresponding issue in the bug tracker, so I went ahead > and created: > - https://bugs.openjdk.java.net/browse/JDK-8182163 > - https://bugs.openjdk.java.net/browse/JDK-8182164 > - https://bugs.openjdk.java.net/browse/JDK-8182165 Great, thank you! > For the last of those three patches, rename-sparc-linux-atomic-header.diff, > did you do `hg mv` when renaming the file (in order to preserve version > control history)? I'm not 100% sure whether I did that. I'm not very familiar with mercurial as I'm more used to git. If the patch format looks wrong to you, I can resend a revised version of this patch. > For the fourth patch, fix-zero-build-on-sparc.diff, I'm not so sure. For > example, the following is a bit surprising to me (mostly because I'm not > familiar with zero): The fourth patch may not be 100% clean as it's more a result of fixing compile errors until the build finished. I can definitely send a revised, cleaner version of this patch after more extensive testing. 
> When this code was written, the intent was clearly to have a specialized > version of this function for SPARC. When writing such code, do we always > have to take into account the zero case with !defined(ZERO)? That doesn't > seem like the right (or a scalable) approach to me. I agree. It's rather suboptimal. > Severin and/or Roman, do you guys know more about Zero and how this should > work? If I want to write a function that I want to specialize for e.g. > x86-64 or arm, do I always have to take Zero into account? Or should some > other define be used, like #ifdef TARGET_ARCH_sparc? Thanks a lot for the review! Can't wait for my first patches getting merged into OpenJDK ;-). Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From sgehwolf at redhat.com Wed Jun 14 12:21:24 2017 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Wed, 14 Jun 2017 14:21:24 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: References: <20170609102041.GA2477@physik.fu-berlin.de> Message-ID: <1497442884.3741.1.camel@redhat.com> Hi Erik, On Wed, 2017-06-14 at 13:50 +0200, Erik Helin wrote: > For the fourth patch, fix-zero-build-on-sparc.diff, I'm not so sure. For > example, the following is a bit surprising to me (mostly because I'm not > familiar with zero): > > --- a/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp > +++ b/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp > @@ -37,7 +37,7 @@ > // understanding that there may be concurrent readers of that memory. > void memset_with_concurrent_readers(void* to, int value, size_t size); > > -#ifdef SPARC > +#if defined(SPARC) && !defined(ZERO) > > When this code was written, the intent was clearly to have a specialized > version of this function for SPARC. When writing such code, do we always 
> have to take into account the zero case with !defined(ZERO)? As of now, yes I think so. The thing is that Zero is supposed to be architecture agnostic for the most part. That is, you can build Zero on x86_64, SPARC, aarch64, etc. > That > doesn't seem like the right (or a scalable) approach to me. Agreed. That's how it is at the moment, though. > Severin and/or Roman, do you guys know more about Zero and how this > should work? If I want to write a function that I want to specialize for > e.g. x86-64 or arm, do I always have to take Zero into account? Or > should some other define be used, like #ifdef TARGET_ARCH_sparc? So the ZERO define can happen regardless of arch. I don't really know any define which does what you want except #if defined() && !defined(ZERO) perhaps. Thanks, Severin > Thanks, > Erik > > On 06/09/2017 12:20 PM, John Paul Adrian Glaubitz wrote: > > Hi! > > > > I am currently working on fixing OpenJDK-9 on all non-mainstream > > targets available in Debian. For Debian/sparc64, the attached four > > patches were necessary to make the build succeed [1]. > > > > I know the patches cannot be merged right now, but I'm posting them > > anyway in case someone else is interested in using them. > > > > All patches are: > > > > Signed-off-by: John Paul Adrian Glaubitz > > > > I also signed the OCA. 
> > > > Cheers, > > Adrian > > > > > [1] https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=sparc64&ver=9%7Eb170-2&stamp=1496931563&raw=0 From erik.helin at oracle.com Wed Jun 14 12:30:24 2017 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 14 Jun 2017 14:30:24 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <20170614120408.GB16230@physik.fu-berlin.de> References: <20170609102041.GA2477@physik.fu-berlin.de> <20170614120408.GB16230@physik.fu-berlin.de> Message-ID: <2979e5ff-5bc7-fd83-8b15-b62dfa9cf593@oracle.com> On 06/14/2017 02:04 PM, John Paul Adrian Glaubitz wrote: > Hi Erik! > > On Wed, Jun 14, 2017 at 01:50:06PM +0200, Erik Helin wrote: >> thanks for contributing and signing the OCA! > > Thanks for reviewing my patches ;-). > >> I think the first three patches (hotspot-add-missing-log-header.diff, >> hotspot-fix-checkbytebuffer.diff, rename-sparc-linux-atomic-header.diff) all >> look good, thanks for fixing broken code. Consider them Reviewed by me. >> Every patch needs a corresponding issue in the bug tracker, so I went ahead >> and created: >> - https://bugs.openjdk.java.net/browse/JDK-8182163 >> - https://bugs.openjdk.java.net/browse/JDK-8182164 >> - https://bugs.openjdk.java.net/browse/JDK-8182165 > > Great, thank you! > >> For the last of those three patches, rename-sparc-linux-atomic-header.diff, >> did you do `hg mv` when renaming the file (in order to preserve version >> control history)? > > I'm not 100% sure whether I did that. I'm not very familiar with mercurial > as I'm more used to git. If the patch format looks wrong to you, I can > resend a revised version of this patch. No worries, someone will have to commit your patches anyway (most likely me). I can have a look then and ensure that `hg mv` is used for renaming the file. >> For the fourth patch, fix-zero-build-on-sparc.diff, I'm not so sure.
For >> example, the following is a bit surprising to me (mostly because I'm not >> familiar with zero): > > The fourth patch may not be 100% clean as it's more a result of > fixing compile errors until the build finished. I can definitely send > a revised, cleaner version of this patch after more extensive testing. Yeah, I guessed that was the case :) Without the fourth patch (fix-zero-build-on-sparc.diff), does the "regular" linux/sparc build compile and run? Is that something you can test? Also, have you run the tier 1 testing for hotspot (the tests that need to pass for every commit)? You can run those tests by running (from the top-level "root" repo): $ make test TEST=hotspot_tier1 or, if you want to try the new run-test functionality $ make run-test-hotspot_tier1 >> When this code was written, the intent was clearly to have a specialized >> version of this function for SPARC. When writing such code, do we always >> have to take into account the zero case with !defined(ZERO)? That doesn't >> seem like the right (or a scalable) approach to me. > > I agree. It's rather suboptimal. Yes, which is why I want to get a better understanding before I give a "thumbs up" for this last patch. I hope (suspect) that there is a better way to do this. >> Severin and/or Roman, do you guys know more about Zero and how this should >> work? If I want to write a function that I want to specialize for e.g. >> x86-64 or arm, do I always have to take Zero into account? Or should some >> other define be used, like #ifdef TARGET_ARCH_sparc? > > Thanks a lot for the review! You are welcome :) > Can't wait for my first patches getting merged into OpenJDK ;-). Well, you do need one more reviewer for your patches. Hotspot requires at least two reviewers for every patch (and one of the reviewers has to have the Reviewer role). 
Thanks, Erik > Adrian > From glaubitz at physik.fu-berlin.de Wed Jun 14 12:46:07 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 14 Jun 2017 14:46:07 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <2979e5ff-5bc7-fd83-8b15-b62dfa9cf593@oracle.com> References: <20170609102041.GA2477@physik.fu-berlin.de> <20170614120408.GB16230@physik.fu-berlin.de> <2979e5ff-5bc7-fd83-8b15-b62dfa9cf593@oracle.com> Message-ID: <20170614124607.GD16230@physik.fu-berlin.de> On Wed, Jun 14, 2017 at 02:30:24PM +0200, Erik Helin wrote: > >I'm not 100% sure whether I did that. I'm not very familiar with mercurial > >as I'm more used to git. If the patch format looks wrong to you, I can > >resend a revised version of this patch. > > No worries, someone will have to commit your patches anyway (most likely > me). I can have a look then and ensure that `hg mv` is used for renaming the > file. Great, thank you! > >>For the fourth patch, fix-zero-build-on-sparc.diff, I'm not so sure. For > >>example, the following is a bit surprising to me (mostly because I'm not > >>familiar with zero): > > > >The fourth patch may not be 100% clean as it's more a result of > >fixing compile errors until the build finished. I can definitely send > >a revised, cleaner version of this patch after more extensive testing. > > Yeah, I guessed that was the case :) Without the fourth patch > (fix-zero-build-on-sparc.diff), does the "regular" linux/sparc build compile > and run? Is that something you can test? I will have a look tonight. I'm currently at work. > Also, have you run the tier 1 testing for hotspot (the tests that need to > pass for every commit)? You can run those tests by running (from the > top-level "root" repo): > > $ make test TEST=hotspot_tier1 > > or, if you want to try the new run-test functionality > > $ make run-test-hotspot_tier1 Ok, I will give that a try. 
> >>When this code was written, the intent was clearly to have a specialized > >>version of this function for SPARC. When writing such code, do we always > >>have to take into account the zero case with !defined(ZERO)? That doesn't > >>seem like the right (or a scalable) approach to me. > > > >I agree. It's rather suboptimal. > > Yes, which is why I want to get a better understanding before I give a > "thumbs up" for this last patch. I hope (suspect) that there is a better way > to do this. Ok. > >Can't wait for my first patches getting merged into OpenJDK ;-). > > Well, you do need one more reviewer for your patches. Hotspot requires at > least two reviewers for every patch (and one of the reviewers has to have > the Reviewer role). Hey, it's a first step ;-). Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From gerard.ziemski at oracle.com Wed Jun 14 13:23:09 2017 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Wed, 14 Jun 2017 08:23:09 -0500 Subject: RFR(10)(S): 8181503: Can't compile hotspot with c++11 In-Reply-To: References: <159B33A6-6932-4E34-BFF1-E8AA0D3A783D@oracle.com> <8496489F-C159-45B9-A5AE-1B40B58BA3DA@oracle.com> Message-ID: <46967906-E255-4D3F-BE51-13C1BFFF077D@oracle.com> hi Thomas, Kim, > On Jun 13, 2017, at 11:36 PM, Thomas Stüfe wrote: > > Hi Gerard, Kim, > > On Tue, Jun 13, 2017 at 9:59 PM, Gerard Ziemski wrote: > > > On Jun 12, 2017, at 5:07 PM, Kim Barrett wrote: > > > > src/share/vm/utilities/vmError.hpp > > 38 static uint _id; // Solaris/Linux signals: 0 - SIGRTMAX > > > > I think changing the type of _id from int to uint is really not so > > simple. There's a bit of a type mess in this area, with some functions > > expecting or using int and others uint. _id is set from an int value. > > It is passed to os::exception_name, which takes an int argument.
The > > windows implementation of that function immediately casts that > > argument to a uint, but the posix implementation actually wants an int > > value. OTOH, there are other places that expect or treat _id as a > > uint. So the proposed change is really just rearranging the deck > > chairs in that mess, and is not really much of an improvement. > > I *think* using uint consistently throughout for this value could be > > made to work, but I haven't completely worked through it. > > Thanks Kim, > > I didn't see anywhere in the code the _id being compared using arithmetic (ex: "if (_id < 0)"), so I thought we were good using uint. Thanks for taking a closer look. > > Would redefining the 3 troublesome enums from: > > enum VMErrorType { > INTERNAL_ERROR = 0xe0000000, > OOM_MALLOC_ERROR = 0xe0000001, > OOM_MMAP_ERROR = 0xe0000002 > }; > > to: > > enum VMErrorType { > INTERNAL_ERROR = 0xe000000, > OOM_MALLOC_ERROR = 0xe000001, > OOM_MMAP_ERROR = 0xe000002 > }; > > (i.e. removing one 0 from the defined values) be a good fix? SIGRTMAX is 64 (on Linux, tested using "kill -l") so anything well above that would be user defined and therefore safe? > > > We did something similar in our (SAP's) port to prevent collisions between VMErrorType and existing SEH exception numbers. We extended the _id type to uint64_t and moved the VMErrorType enum values up to the upper 32bit (SEH numbers are DWORD, so 32bit unsigned). We also visibly changed them to not look in the debugger like typical SEH exception numbers. > > enum VMErrorType { > INTERNAL_ERROR = 0xab0000000, > OOM_MALLOC_ERROR = 0xab0000001, > OOM_MMAP_ERROR = 0xab0000002, > } > > Code has been running in our VM for many years without issues. Thank you for reviews and Thomas' suggestion. I'm off until the end of this week, but I will think about this next week when I'm back.
cheers From erik.helin at oracle.com Wed Jun 14 14:38:34 2017 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 14 Jun 2017 16:38:34 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <1497442884.3741.1.camel@redhat.com> References: <20170609102041.GA2477@physik.fu-berlin.de> <1497442884.3741.1.camel@redhat.com> Message-ID: <70fb60d3-7d19-ed9b-840e-bbb7315ae864@oracle.com> On 06/14/2017 02:21 PM, Severin Gehwolf wrote: > Hi Eric, > > On Wed, 2017-06-14 at 13:50 +0200, Erik Helin wrote: >> For the fourth patch, fix-zero-build-on-sparc.diff, I'm not so sure. For >> example, the following is a bit surprising to me (mostly because I'm not >> familiar with zero): >> >> --- a/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp >> +++ b/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp >> @@ -37,7 +37,7 @@ >> // understanding that there may be concurrent readers of that memory. >> void memset_with_concurrent_readers(void* to, int value, size_t size); >> >> -#ifdef SPARC >> +#if defined(SPARC) && !defined(ZERO) >> >> When this code was written, the intent was clearly to have a specialized >> version of this function for SPARC. When writing such code, do we always >> have to take into account the zero case with !defined(ZERO)? > > As of now, yes I think so. The thing is that Zero is supposed to be > architecture agnostic for the most part. That is, you can build Zero on > x86_64, SPARC, aarch64, etc. Ok. But if Zero is architecture agnostic, why do we have the directory hotspot/src/cpu/zero? Sorry, I don't know much about Zero... >> That >> doesn't seem like the right (or a scalable) approach to me. > > Agreed. That's how it is at the moment, though. > >> Severin and/or Roman, do you guys know more about Zero and how this >> should work? If I want to write a function that I want to specialize for >> e.g. x86-64 or arm, do I always have to take Zero into account? 
Or >> should some other define be used, like #ifdef TARGET_ARCH_sparc? > > So the ZERO define can happen regardless of arch. I don't really know > any define which does what you want except #if defined() && > !defined(ZERO) perhaps. Hmm, ok, but for the above code snippet, if we are running with Zero on Sparc, can't we use the Sparc optimized version of memset_with_concurrent_readers? Or can't we use Sparc assembly in the runtime when running with Zero? Thanks, Erik > Thanks, > Severin > >> Thanks, >> Erik >> >> On 06/09/2017 12:20 PM, John Paul Adrian Glaubitz wrote: >>> Hi! >>> >>> I am currently working on fixing OpenJDK-9 on all non-mainstream >>> targets available in Debian. For Debian/sparc64, the attached four >>> patches were necessary to make the build succeed [1]. >>> >>> I know the patches cannot be merged right now, but I'm posting them >>> anyway in case someone else is interested in using them. >>> >>> All patches are: >>> >>> Signed-off-by: John Paul Adrian Glaubitz >>> >>> I also signed the OCA. >>> >>> I'm now looking into fixing the builds on alpha (DEC Alpha), armel >>> (ARMv4T), m68k (680x0), powerpc (PPC32) and sh4 (SuperH/J-Core). 
>>> >>> Cheers, >>> Adrian >>> >>>> [1] https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=sparc64&ver=9%7Eb170-2&stamp=1496931563&raw=0 > From sgehwolf at redhat.com Wed Jun 14 16:05:42 2017 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Wed, 14 Jun 2017 18:05:42 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <70fb60d3-7d19-ed9b-840e-bbb7315ae864@oracle.com> References: <20170609102041.GA2477@physik.fu-berlin.de> <1497442884.3741.1.camel@redhat.com> <70fb60d3-7d19-ed9b-840e-bbb7315ae864@oracle.com> Message-ID: <1497456342.3741.15.camel@redhat.com> Hi Erik, On Wed, 2017-06-14 at 16:38 +0200, Erik Helin wrote: > On 06/14/2017 02:21 PM, Severin Gehwolf wrote: > > Hi Erik, > > > > On Wed, 2017-06-14 at 13:50 +0200, Erik Helin wrote: > > > For the fourth patch, fix-zero-build-on-sparc.diff, I'm not so sure. For > > > example, the following is a bit surprising to me (mostly because I'm not > > > familiar with zero): > > > > > > --- a/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp > > > +++ b/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp > > > @@ -37,7 +37,7 @@ > > > // understanding that there may be concurrent readers of that memory. > > > void memset_with_concurrent_readers(void* to, int value, size_t size); > > > > > > -#ifdef SPARC > > > +#if defined(SPARC) && !defined(ZERO) > > > > > > When this code was written, the intent was clearly to have a specialized > > > version of this function for SPARC. When writing such code, do we always > > > have to take into account the zero case with !defined(ZERO)? > > > > As of now, yes I think so. The thing is that Zero is supposed to be > > architecture agnostic for the most part. That is, you can build Zero on > > x86_64, SPARC, aarch64, etc. > > Ok. But if Zero is architecture agnostic, why do we have the directory > hotspot/src/cpu/zero? Sorry, I don't know much about Zero...
I don't know a lot about Zero either ;-) Zero uses the C++ interpreter and is supposed to be a "Zero assembler port". In contrast to the old C++ interpreter, Zero uses no platform specific code to set up frames. It's glue code specific to Zero. Zero isn't a cpu arch, though. It predates me as to why the code ended up in src/cpu/zero. > > > That > > > doesn't seem like the right (or a scalable) approach to me. > > > > Agreed. That's how it is at the moment, though. > > > > > Severin and/or Roman, do you guys know more about Zero and how this > > > should work? If I want to write a function that I want to specialize for > > > e.g. x86-64 or arm, do I always have to take Zero into account? Or > > > should some other define be used, like #ifdef TARGET_ARCH_sparc? > > > > So the ZERO define can happen regardless of arch. I don't really know > > any define which does what you want except #if defined() && > > !defined(ZERO) perhaps. > > Hmm, ok, but for the above code snippet, if we are running with Zero on > Sparc, can't we use the Sparc optimized version of > memset_with_concurrent_readers? Or can't we use Sparc assembly in the > runtime when running with Zero? Zero == Zero assembler. So the latter. Yet, I'm unsure as to what that assembler is doing exactly. What's more, I've never built Zero on SPARC, so I don't know whether or not the patch in question fixes a compile or runtime issue. It might be technically possible to use assembler, but it hinders its goal of Zero being a porter's tool[1]. HTH, Severin [1] http://icedtea.classpath.org/wiki/ZeroSharkFaq#Why_was_Zero_written.3F > Thanks, > Erik > > > Thanks, > > Severin > > > > > Thanks, > > > Erik > > > > > > On 06/09/2017 12:20 PM, John Paul Adrian Glaubitz wrote: > > > > Hi! > > > > > > > > I am currently working on fixing OpenJDK-9 on all non-mainstream > > > > targets available in Debian. For Debian/sparc64, the attached four > > > > patches were necessary to make the build succeed [1].
> > > > > > > > I know the patches cannot be merged right now, but I'm posting them > > > > anyway in case someone else is interested in using them. > > > > > > > > All patches are: > > > > > > > > Signed-off-by: John Paul Adrian Glaubitz > > > > > > > > I also signed the OCA. > > > > > > > > I'm now looking into fixing the builds on alpha (DEC Alpha), armel > > > > (ARMv4T), m68k (680x0), powerpc (PPC32) and sh4 (SuperH/J-Core). > > > > > > > > Cheers, > > > > Adrian > > > > > > > > > [1] https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=sparc64&ver=9%7Eb170-2&stamp=1496931563&raw=0 From patric.hedlin at oracle.com Thu Jun 15 16:03:15 2017 From: patric.hedlin at oracle.com (Patric Hedlin) Date: Thu, 15 Jun 2017 18:03:15 +0200 Subject: JDK10/RFR(L): 8172231: SPARC ISA/CPU feature detection is broken/insufficient (on Solaris). Message-ID: <33a21702-285a-e2f7-c6ed-8b530215cca5@oracle.com> Dear all, I would like to ask for help to review the following change/update: Issue: https://bugs.openjdk.java.net/browse/JDK-8172231 Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8172231/ This is review #2. Thanks to Stefan Anzinger David Holmes -> follow-up in JDK-8181852 Vladimir Kozlov -> follow-up in JDK-8181853 for reviewing the previous version. Slight change after review #1 (agreed with Vladimir Kozlov): Revoked instruction fetch alignment based on derived caps. (gain is small on T4 and T7/M7 even for perfectly slotted bundles, and absent on M8). 8172231: SPARC ISA/CPU feature detection is broken/insufficient (on Solaris). Updating SPARC feature/capability detection (incorporating changes from Martin Walsh). More complete set of features as provided by 'getisax(2)' interface, propagated via JVMCI. More robust hardware probing for additional features (up to Core S4). Removing support for old, pre-Niagara hardware. Removing support for old, pre-11.1 Solaris.
Changed behaviour: Changing SPARC setup for AllocatePrefetchLines and AllocateInstancePrefetchLines such that they will (still) be doubled when cache-line size is small (32 bytes), but more moderately increased on new/contemporary hardware (inc >= 50%). The above changes also subsume: 8035146: assert(is_T_family(features) == is_niagara(features), "Niagara should be T series") is incorrect 8054979: Remove unnecessary defines in SPARC's VM_Version::platform_features Rationale: Current hardware detection on Solaris/SPARC is not up to date with the "latest" (here, meaning commercially available server solutions, i.e. T7/M7). To facilitate improved use of the new hardware features provided (by Core S3&S4) these capabilities need to be recognised by the JVM. NOTE: This update is limited to Core S3&S4, i.e. not including Core S5. Caveat: This update will introduce some redundancies into the code base, features and definitions currently not used, as well as a (small) number of FIXMEs, addressed by subsequent bug or feature updates/patches. Fujitsu HW is treated very conservatively. Testing: Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp).
Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp). Best regards, Patric From patric.hedlin at oracle.com Thu Jun 15 16:04:00 2017 From: patric.hedlin at oracle.com (Patric Hedlin) Date: Thu, 15 Jun 2017 18:04:00 +0200 Subject: JDK10/RFR(M): 8181853: Remove use of 'v9_only()' Message-ID: <7114241b-95b0-9ec5-16cd-f697d9bfa258@oracle.com> Dear all, I would like to ask for help to review the following change/update: Issue: https://bugs.openjdk.java.net/browse/JDK-8181853 Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8181853/ Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8172231 8181853: Remove use of 'v9_only()' After JDK-8172231 we will no longer identify SPARC V8 as a valid ISA, i.e. there is no need to distinguish between V8 and V9 instructions. Testing: Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp). Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp). Best regards, Patric From patric.hedlin at oracle.com Thu Jun 15 16:04:59 2017 From: patric.hedlin at oracle.com (Patric Hedlin) Date: Thu, 15 Jun 2017 18:04:59 +0200 Subject: JDK10/RFR(S): 8181868: Remove use of 'has_fast_fxtof()' Message-ID: <995d76a2-15ae-8acd-9bc0-ff1546c84516@oracle.com> Dear all, I would like to ask for help to review the following change/update: Issue: https://bugs.openjdk.java.net/browse/JDK-8181868 Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8181868/ Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8172231 8181868: Remove use of 'has_fast_fxtof()' After JDK-8172231 we no longer support old SPARC HW lacking fast integer/floating-point conversion. Testing: Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp). Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp). Best regards, Patric From patric.hedlin at oracle.com Thu Jun 15 16:06:27 2017 From: patric.hedlin at oracle.com (Patric Hedlin) Date: Thu, 15 Jun 2017 18:06:27 +0200 Subject: JDK10/RFR(L): 8144448: Avoid placing CTI immediately following or preceding RDPC instruction. 
Message-ID: <2258a67a-c9d3-d9a9-2de1-4464321cb983@oracle.com> Dear all, I would like to ask for help to review the following change/update: Issue: https://bugs.openjdk.java.net/browse/JDK-8144448 Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8144448/ Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8181853 *** As a comment to the discussion on how to simplify processing of "mundane" changes, this change/update comes with an additional prerequisite (patch) cleaning-up whitespace and two lingering uses of 'NOT_LP64' and 'LP64_ONLY'. Prerequisite: http://cr.openjdk.java.net/~neliasso/phedlin/tr8144448.pre/ 8144448: Avoid placing CTI immediately following or preceding RDPC instruction. Approach taken here is to handle 'rdpc' in the same manner as 'cbcond', using a simple scheme to prohibit the assembler from emitting any 'rdpc' instruction back-to-back with other CTI ('rdpc' itself included), inserting 'nop' as needed. Caveat: This change is applied to all generations of SPARC cores even though it is the SPARC Core S5 that is the actual target. Benchmarking on T4 and M7 suggests that there is no penalty. This choice (which is subject to change) has been made in order to give the update some mileage while waiting for Core S5 hardware to be available in regular testing. Testing: Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp). Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp). Best regards, Patric From patric.hedlin at oracle.com Thu Jun 15 16:07:20 2017 From: patric.hedlin at oracle.com (Patric Hedlin) Date: Thu, 15 Jun 2017 18:07:20 +0200 Subject: JDK10/RFR(M): 8164888: Intrinsify fused mac operations on SPARC.
Message-ID: <3cc811c8-27b4-1ab0-8d78-92d64443b789@oracle.com> Dear all, I would like to ask for help to review the following change/update: Issue: https://bugs.openjdk.java.net/browse/JDK-8164888 Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8164888/ Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8172231 8164888: Intrinsify fused mac operations on SPARC. Added C2, C1 and interpreter part of FMA support for SPARC. Testing: Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp). Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp). Best regards, Patric From vladimir.kozlov at oracle.com Thu Jun 15 16:31:15 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 15 Jun 2017 09:31:15 -0700 Subject: JDK10/RFR(XS): 8181852: Remove option 'UseV8InstrsOnly' In-Reply-To: References: Message-ID: Good. Thanks, Vladimir On 6/15/17 9:03 AM, Patric Hedlin wrote: > Dear all, > > I would like to ask for help to review the following change/update: > > Issue: https://bugs.openjdk.java.net/browse/JDK-8181852 > > Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8181852/ > > Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8172231 > > > 8181852: Remove option 'UseV8InstrsOnly' > > After JDK-8172231 we will no longer identify SPARC V8 as a valid ISA. > > > Testing: > > Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp). > Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp). > > > Best regards, > Patric From vladimir.kozlov at oracle.com Thu Jun 15 16:50:06 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 15 Jun 2017 09:50:06 -0700 Subject: JDK10/RFR(M): 8181853: Remove use of 'v9_only()' In-Reply-To: <7114241b-95b0-9ec5-16cd-f697d9bfa258@oracle.com> References: <7114241b-95b0-9ec5-16cd-f697d9bfa258@oracle.com> Message-ID: <7feaa22d-16d7-48df-24d3-d6d9ac5ab3fb@oracle.com> Nice. 
Thanks, Vladimir On 6/15/17 9:04 AM, Patric Hedlin wrote: > Dear all, > > I would like to ask for help to review the following change/update: > > Issue: https://bugs.openjdk.java.net/browse/JDK-8181853 > > Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8181853/ > > Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8172231 > > > 8181853: Remove use of 'v9_only()' > > After JDK-8172231 we will no longer identify SPARC V8 as a valid ISA, > i.e. there is no need to distinguish between V8 and V9 instructions. > > > Testing: > > Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp). > Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp). > > > Best regards, > Patric From vladimir.kozlov at oracle.com Thu Jun 15 17:02:51 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 15 Jun 2017 10:02:51 -0700 Subject: JDK10/RFR(M): 8164888: Intrinsify fused mac operations on SPARC. In-Reply-To: <3cc811c8-27b4-1ab0-8d78-92d64443b789@oracle.com> References: <3cc811c8-27b4-1ab0-8d78-92d64443b789@oracle.com> Message-ID: <439892b9-ac64-887c-a9ff-9f9d4d17027e@oracle.com> Looks good. Thanks, Vladimir On 6/15/17 9:07 AM, Patric Hedlin wrote: > Dear all, > > I would like to ask for help to review the following change/update: > > Issue: https://bugs.openjdk.java.net/browse/JDK-8164888 > > Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8164888/ > > Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8172231 > > > 8164888: Intrinsify fused mac operations on SPARC. > > Added C2, C1 and interpreter part of FMA support for SPARC. > > > Testing: > > Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp). > Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp). > > > Best regards, > Patric From vladimir.kozlov at oracle.com Thu Jun 15 17:51:09 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 15 Jun 2017 10:51:09 -0700 Subject: JDK10/RFR(L): 8144448: Avoid placing CTI immediately following or preceding RDPC instruction. 
In-Reply-To: <2258a67a-c9d3-d9a9-2de1-4464321cb983@oracle.com> References: <2258a67a-c9d3-d9a9-2de1-4464321cb983@oracle.com> Message-ID: Patric, assembler_sparc.cpp - maybe use BytesPerInstWord instead of 4. In sparc.ad in MachConstantBaseNode::emit() can you use nop() in case disp == 0? To avoid changing O7 reg. Thanks, Vladimir On 6/15/17 9:06 AM, Patric Hedlin wrote: > Dear all, > > I would like to ask for help to review the following change/update: > > Issue: https://bugs.openjdk.java.net/browse/JDK-8144448 > > Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8144448/ > > Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8181853 > > > *** As a comment to the discussion on how to simplify processing of > "mundane" changes, > this change/update comes with an additional prerequisite (patch) > cleaning-up > whitespace and two lingering uses of 'NOT_LP64' and 'LP64_ONLY'. > > Prerequisite: http://cr.openjdk.java.net/~neliasso/phedlin/tr8144448.pre/ > > > 8144448: Avoid placing CTI immediately following or preceding RDPC > instruction. > > Approach taken here is to handle 'rdpc' in the same manner as > 'cbcond', using > a simple scheme to prohibit the assembler from emitting any 'rdpc' > instruction > back-to-back with other CTI ('rdpc' itself included), inserting > 'nop' as needed. > > > Caveat: > > This change is applied to all generations of SPARC cores even > though it is the > SPARC Core S5 that is the actual target. Benchmarking on T4 and M7 > suggests that > there is no penalty. This choice (which is subject to change) has > been made in > order to give the update some mileage while waiting for Core S5 > hardware to be > available in regular testing. > > > Testing: > > Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp). > Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp).
> > > Best regards, > Patric From coleen.phillimore at oracle.com Fri Jun 16 13:50:59 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Jun 2017 09:50:59 -0400 Subject: JDK10/RFR(L): 8172231: SPARC ISA/CPU feature detection is broken/insufficient (on Solaris). In-Reply-To: <33a21702-285a-e2f7-c6ed-8b530215cca5@oracle.com> References: <33a21702-285a-e2f7-c6ed-8b530215cca5@oracle.com> Message-ID: <5330356b-8c66-f614-4357-49a50f4c8572@oracle.com> http://cr.openjdk.java.net/~neliasso/phedlin/tr8172231/webrev/src/cpu/sparc/vm/vmStructs_sparc.hpp.udiff.html This isn't a review but if the SA doesn't use these hardware features, I suggest to just remove them. In the unlikely event that the SA does add support for this (not sure what it would do with these), they can be added in at that time. thanks, Coleen On 6/15/17 12:03 PM, Patric Hedlin wrote: > Dear all, > > I would like to ask for help to review the following change/update: > > Issue: https://bugs.openjdk.java.net/browse/JDK-8172231 > > Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8172231/ > > > This is review #2. > > Thanks to Stefan Anzinger > David Holmes -> follow-up in > JDK-8181852 > Vladimir Kozlov -> follow-up in > JDK-8181853 > for reviewing the previous version. > > Slight change after review #1 (agreed with Vladimir Kozlov): > > Revoked instruction fetch alignment based on derived caps. (gain > is small on > T4 and T7/M7 even for perfectly slotted bundles, and absent on M8). > > > 8172231: SPARC ISA/CPU feature detection is broken/insufficient (on > Solaris). > > Updating SPARC feature/capability detection (incorporating changes > from Martin Walsh). > More complete set of features as provided by 'getisax(2)' > interface, propagated via JVMCI. > More robust hardware probing for additional features (up to Core S4). > Removing support for old, pre Niagara, hardware. > Removing support for old, pre 11.1, Solaris. 
> > Changed behaviour: > Changing SPARC setup for AllocatePrefetchLines and > AllocateInstancePrefetchLines > such that they will (still) be doubled when cache-line size is > small (32 bytes), > but more moderately increased on new/contemporary hardware (inc >= > 50%). > > The above changes also subsumes: > 8035146: assert(is_T_family(features) == is_niagara(features), > "Niagara should be T series") is incorrect > 8054979: Remove unnecessary defines in SPARC's > VM_Version::platform_features > > > Rationale: > > Current hardware detection on Solaris/SPARC is not up to date with > the "latest" (here, > meaning commercially available server solutions, i.e. T7/M7). To > facilitate improved > use of the new hardware features provided (by Core S3&S4) these > capabilities need to > be recognised by the JVM. > > NOTE: This update is limited to Core S3&S4, i.e. not including > Core S5. > > > Caveat: > > This update will introduce some redundancies into the code base, > features and definitions > currently not used, as well as a (small) number of FIXMEs, > addressed by subsequent bug or > feature updates/patches. Fujitsu HW is treated very conservatively. > > > Testing: > > Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp). > Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp). > > > Benchmarking: > > No additional benchmarking results produced (since previous review). > > > Best regards, > Patric > From patric.hedlin at oracle.com Fri Jun 16 13:58:58 2017 From: patric.hedlin at oracle.com (Patric Hedlin) Date: Fri, 16 Jun 2017 15:58:58 +0200 Subject: JDK10/RFR(L): 8144448: Avoid placing CTI immediately following or preceding RDPC instruction. In-Reply-To: References: <2258a67a-c9d3-d9a9-2de1-4464321cb983@oracle.com> Message-ID: <16102289-d402-e3aa-34ab-4282a0b95b08@oracle.com> On 2017-06-15 19:51, Vladimir Kozlov wrote: > Patric, > > assembler_sparc.cpp - may be use BytesPerInstWord instead of 4. Indeed... sloppy me. 
> > In sparc.ad in MachConstantBaseNode::emit() can you use nop() in case > disp == 0? To avoid changing O7 reg. Sure, updated accordingly. Thanks for your time, Patric > > Thanks, > Vladimir > > On 6/15/17 9:06 AM, Patric Hedlin wrote: >> Dear all, >> >> I would like to ask for help to review the following change/update: >> >> Issue: https://bugs.openjdk.java.net/browse/JDK-8144448 >> >> Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8144448/ >> >> Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8181853 >> >> >> *** As a comment to the discussion on how to simplify processing of >> "mundane" changes, >> this change/update comes with an additional prerequisite (patch) >> cleaning-up >> whitespace and two lingering uses of 'NOT_LP64' and 'LP64_ONLY'. >> >> Prerequisite: >> http://cr.openjdk.java.net/~neliasso/phedlin/tr8144448.pre/ >> >> >> 8144448: Avoid placing CTI immediately following or preceding RDPC >> instruction. >> >> Approach taken here is to handle 'rdpc' in the same manner as >> 'cbcond', using >> a simple scheme to prohibit the assembler from emitting any >> 'rdpc' instruction >> back-to-back with other CTI ('rdpc' itself included), inserting >> 'nop' as needed. >> >> >> Caveat: >> >> This change is applied to all generations of SPARC cores event >> though it is the >> SPARC Core S5 that is the actual target. Benchmarking on T4 and >> M7 suggests that >> there is no penalty. This choice (which is subject to change) >> has been made in >> order to give the update some mileage while waiting for Core S5 >> hardware to be >> available in regular testing. >> >> >> Testing: >> >> Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp). >> Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp). 
>> >> Best regards, >> Patric From stuart.monteith at linaro.org Fri Jun 16 14:31:59 2017 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Fri, 16 Jun 2017 15:31:59 +0100 Subject: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode In-Reply-To: References: <5f437ff4-98f5-91b6-608e-d7c2d6f1e469@oracle.com> <03724e7c-f656-473f-9a89-eb78073b518f@default> <82c81d81-017f-fdc1-0e33-0f9cd5140e82@oracle.com> <9fb25d3c-4b55-3872-f9c7-fc460a675ba1@oracle.com> Message-ID: With the following patches, I get the following results (comparing -othervm without the patches against -agentvm with the patches). Comparing the results, before and after, I get: 0: JTwork-otherwithout pass: 720; fail: 4; error: 2; not run: 5 1: JTwork-agentwithnew pass: 721; fail: 4; error: 2; not run: 4 0 1 Test --- pass sanity/MismatchedWhiteBox/WhiteBox.java 1 differences # HG changeset patch # User iignatyev # Date 1397746449 -14400 # Thu Apr 17 18:54:09 2014 +0400 # Node ID 17b08aa75d401874f200fcb3347d485b65d32d3f # Parent 68758c5ab0c1ef01e89bea8a9b799714831a177f 8039260: c.o.j.t.ProcessTools::createJavaProcessBuilder(boolean, String... ) must also take TestJavaOptions Reviewed-by: kvn, iignatyev Contributed-by: lev.priima at oracle.com diff -r 68758c5ab0c1 -r 17b08aa75d40 test/testlibrary/com/oracle/java/testlibrary/ProcessTools.java --- a/test/testlibrary/com/oracle/java/testlibrary/ProcessTools.java Sun Jun 11 07:45:07 2017 -0700 +++ b/test/testlibrary/com/oracle/java/testlibrary/ProcessTools.java Thu Apr 17 18:54:09 2014 +0400 @@ -145,18 +145,15 @@ return createJavaProcessBuilder(false, command); } - public static ProcessBuilder createJavaProcessBuilder(boolean addTestVmOptions, String... command) throws Exception { + public static ProcessBuilder createJavaProcessBuilder(boolean addTestVmAndJavaOptions, String...
command) throws Exception { String javapath = JDKToolFinder.getJDKTool("java"); ArrayList args = new ArrayList<>(); args.add(javapath); Collections.addAll(args, getPlatformSpecificVMArgs()); - if (addTestVmOptions) { - String vmopts = System.getProperty("test.vm.opts"); - if (vmopts != null && vmopts.length() > 0) { - Collections.addAll(args, vmopts.split("\\s")); - } + if (addTestVmAndJavaOptions) { + Collections.addAll(args, Utils.getTestJavaOpts()); } Collections.addAll(args, command); # HG changeset patch # User ctornqvi # Date 1429312336 25200 # Fri Apr 17 16:12:16 2015 -0700 # Node ID 7be2e2cd1390df9e13b76925398d152c96585b47 # Parent 17b08aa75d401874f200fcb3347d485b65d32d3f 8077608: [TESTBUG] Enable Hotspot jtreg tests to run in agentvm mode Reviewed-by: sla, gtriantafill diff -r 17b08aa75d40 -r 7be2e2cd1390 test/Makefile --- a/test/Makefile Thu Apr 17 18:54:09 2014 +0400 +++ b/test/Makefile Fri Apr 17 16:12:16 2015 -0700 @@ -262,6 +262,8 @@ # Default JTREG to run JTREG = $(JT_HOME)/bin/jtreg +# Use agent mode +JTREG_BASIC_OPTIONS += -agentvm # Only run automatic tests JTREG_BASIC_OPTIONS += -a # Report details on all failed or error tests, times too diff -r 17b08aa75d40 -r 7be2e2cd1390 test/compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java --- a/test/compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java Thu Apr 17 18:54:09 2014 +0400 +++ b/test/compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java Fri Apr 17 16:12:16 2015 -0700 @@ -26,7 +26,7 @@ * @bug 8042235 * @summary redefining method used by multiple MethodHandles crashes VM * @compile -XDignore.symbol.file RedefineMethodUsedByMultipleMethodHandles.java - * @run main RedefineMethodUsedByMultipleMethodHandles + * @run main/othervm RedefineMethodUsedByMultipleMethodHandles */ import java.io.*; diff -r 17b08aa75d40 -r 7be2e2cd1390 test/sanity/MismatchedWhiteBox/WhiteBox.java --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/sanity/MismatchedWhiteBox/WhiteBox.java Fri 
Apr 17 16:12:16 2015 -0700 @@ -0,0 +1,57 @@ +/* + * Copyright (c) 2013, 2014, Oracle and/or its affiliates. All rights reserved. + * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. + * + * This code is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 only, as + * published by the Free Software Foundation. + * + * This code is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License + * version 2 for more details (a copy is included in the LICENSE file that + * accompanied this code). + * + * You should have received a copy of the GNU General Public License version + * 2 along with this work; if not, write to the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. + * + * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA + * or visit www.oracle.com if you need additional information or have any + * questions. + */ + +/* + * @test WhiteBox + * @bug 8011675 + * @summary verify that whitebox can be used even if not all functions are declared in java-part + * @author igor.ignatyev at oracle.com + * @library /testlibrary + * @compile WhiteBox.java + * @run main ClassFileInstaller sun.hotspot.WhiteBox + * @run main/othervm -Xbootclasspath/a:. 
-XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI sun.hotspot.WhiteBox + */ + +package sun.hotspot; + +public class WhiteBox { + private static native void registerNatives(); + static { registerNatives(); } + public native int notExistedMethod(); + public native int getHeapOopSize(); + public static void main(String[] args) { + WhiteBox wb = new WhiteBox(); + if (wb.getHeapOopSize() < 0) { + throw new Error("wb.getHeapOopSize() < 0"); + } + boolean catched = false; + try { + wb.notExistedMethod(); + } catch (UnsatisfiedLinkError e) { + catched = true; + } + if (!catched) { + throw new Error("wb.notExistedMethod() was invoked"); + } + } +} diff -r 17b08aa75d40 -r 7be2e2cd1390 test/sanity/WhiteBox.java --- a/test/sanity/WhiteBox.java Thu Apr 17 18:54:09 2014 +0400 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,58 +0,0 @@ -/* - * Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved. - * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. - * - * This code is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 only, as - * published by the Free Software Foundation. - * - * This code is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License - * version 2 for more details (a copy is included in the LICENSE file that - * accompanied this code). - * - * You should have received a copy of the GNU General Public License version - * 2 along with this work; if not, write to the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. - * - * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA - * or visit www.oracle.com if you need additional information or have any - * questions. 
- */ - -/* - * @test WhiteBox - * @bug 8011675 - * @summary verify that whitebox can be used even if not all functions are declared in java-part - * @author igor.ignatyev at oracle.com - * @library /testlibrary - * @compile WhiteBox.java - * @run main ClassFileInstaller sun.hotspot.WhiteBox - * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI sun.hotspot.WhiteBox - * @clean sun.hotspot.WhiteBox - */ - -package sun.hotspot; - -public class WhiteBox { - private static native void registerNatives(); - static { registerNatives(); } - public native int notExistedMethod(); - public native int getHeapOopSize(); - public static void main(String[] args) { - WhiteBox wb = new WhiteBox(); - if (wb.getHeapOopSize() < 0) { - throw new Error("wb.getHeapOopSize() < 0"); - } - boolean catched = false; - try { - wb.notExistedMethod(); - } catch (UnsatisfiedLinkError e) { - catched = true; - } - if (!catched) { - throw new Error("wb.notExistedMethod() was invoked"); - } - } -} diff -r 17b08aa75d40 -r 7be2e2cd1390 test/testlibrary/com/oracle/java/testlibrary/ProcessTools.java --- a/test/testlibrary/com/oracle/java/testlibrary/ProcessTools.java Thu Apr 17 18:54:09 2014 +0400 +++ b/test/testlibrary/com/oracle/java/testlibrary/ProcessTools.java Fri Apr 17 16:12:16 2015 -0700 @@ -152,6 +152,9 @@ args.add(javapath); Collections.addAll(args, getPlatformSpecificVMArgs()); + args.add("-cp"); + args.add(System.getProperty("java.class.path")); + if (addTestVmAndJavaOptions) { Collections.addAll(args, Utils.getTestJavaOpts()); } On 2 June 2017 at 14:55, Stuart Monteith wrote: > Hi David, > Good point - runtime/os/AvailableProcessors.java does some custom > execution that misses out the classpath. It is easy enough to fix, but > there might be follow on patches to backport. > compiler/rtm/locking/TestRTMLockingThreshold.java might also have some > trouble. > > I'm investigating further. 
> > > Thanks, > Stuart > > On 2 June 2017 at 10:30, David Holmes wrote: >> On 2/06/2017 7:17 PM, Stuart Monteith wrote: >>> >>> Hi David, >>> Yes, I was being a bit unclear. The patch includes a fix to allow >>> the tests that fail under -agentvm to pass successfully. Under >>> agentvm, tests that spawn their own processes don't inherit a working >>> classpath, so the patch changes ProcessTools to pass this on. The >>> results I presented before show how the failing tests will then pass >>> with agentvm once the patch is applied. >> >> >> Ah I see. Thanks I missed the significance of the ProcessTools change. >> >>>>> 1: JTwork-with pass: 718; fail: 6; error: 2; not run: 5 >> >> So out of those 8 non-passing tests are any of the failures specifically >> related to using agentvm? >> >> Thanks, >> David >> >> >>> Thanks, >>> Stuart >>> >>> >>> On 2 June 2017 at 02:36, David Holmes wrote: >>>> >>>> Hi Stuart, >>>> >>>> On 1/06/2017 11:26 PM, Stuart Monteith wrote: >>>>> >>>>> >>>>> Hello, >>>>> I tested this on x86 and aarch64. Muneer's bug is an accurate >>>>> description of the failing tests. I'm not sure what you mean by >>>>> "8180904 has to be fixed before this backport", as the backport is the >>>>> fix for the issue Muneer presented. JDK9 doesn't exhibit these >>>>> failures as it has the fix to be backported. >>>> >>>> >>>> >>>> As I understood it, 8180904 reports that a whole bunch of tests fail if >>>> run >>>> in agentvm mode. The current backport would enable agentvm mode and hence >>>> all those tests would start to fail. >>>> >>>> Did I misunderstand something? 
>>>> >>>> Thanks, >>>> David >>>> >>>> >>>> >>>>> Comparing the runs without and with the patch - this is on x86 - I get >>>>> essentially the same on aarch64: >>>>> >>>>> 0: JTwork-without pass: 680; fail: 44; error: 3; not run: 4 >>>>> 1: JTwork-with pass: 718; fail: 6; error: 2; not run: 5 >>>>> >>>>> 0 1 Test >>>>> fail pass compiler/jsr292/PollutedTrapCounts.java >>>>> fail pass >>>>> compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java#id0 >>>>> fail pass compiler/loopopts/UseCountedLoopSafepoints.java >>>>> pass fail compiler/rtm/locking/TestRTMLockingThreshold.java#id0 >>>>> fail pass compiler/types/correctness/OffTest.java#id0 >>>>> fail pass gc/TestVerifySilently.java >>>>> fail pass gc/TestVerifySubSet.java >>>>> fail pass gc/class_unloading/TestCMSClassUnloadingEnabledHWM.java >>>>> fail pass gc/class_unloading/TestG1ClassUnloadingHWM.java >>>>> fail pass gc/ergonomics/TestDynamicNumberOfGCThreads.java >>>>> fail pass gc/g1/TestEagerReclaimHumongousRegions.java >>>>> fail pass gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java >>>>> fail pass gc/g1/TestEagerReclaimHumongousRegionsWithRefs.java >>>>> fail pass gc/g1/TestG1TraceEagerReclaimHumongousObjects.java >>>>> fail pass gc/g1/TestGCLogMessages.java >>>>> fail pass gc/g1/TestHumongousAllocInitialMark.java >>>>> fail pass gc/g1/TestPrintGCDetails.java >>>>> fail pass gc/g1/TestPrintRegionRememberedSetInfo.java >>>>> fail pass gc/g1/TestShrinkAuxiliaryData00.java >>>>> fail pass gc/g1/TestShrinkAuxiliaryData05.java >>>>> fail pass gc/g1/TestShrinkAuxiliaryData10.java >>>>> fail pass gc/g1/TestShrinkAuxiliaryData15.java >>>>> fail pass gc/g1/TestShrinkAuxiliaryData20.java >>>>> fail pass gc/g1/TestShrinkAuxiliaryData25.java >>>>> fail pass gc/g1/TestShrinkDefragmentedHeap.java#id0 >>>>> fail pass gc/g1/TestStringDeduplicationAgeThreshold.java >>>>> fail pass gc/g1/TestStringDeduplicationFullGC.java >>>>> fail pass gc/g1/TestStringDeduplicationInterned.java >>>>> fail pass 
gc/g1/TestStringDeduplicationPrintOptions.java >>>>> fail pass gc/g1/TestStringDeduplicationTableRehash.java >>>>> fail pass gc/g1/TestStringDeduplicationTableResize.java >>>>> fail pass gc/g1/TestStringDeduplicationYoungGC.java >>>>> fail pass gc/g1/TestStringSymbolTableStats.java >>>>> fail pass gc/logging/TestGCId.java >>>>> fail pass gc/whitebox/TestWBGC.java >>>>> fail pass runtime/ErrorHandling/TestOnOutOfMemoryError.java#id0 >>>>> fail pass runtime/NMT/JcmdWithNMTDisabled.java >>>>> fail pass runtime/memory/ReserveMemory.java >>>>> pass --- sanity/WhiteBox.java >>>>> fail pass serviceability/attach/AttachWithStalePidFile.java >>>>> fail pass serviceability/jvmti/TestRedefineWithUnresolvedClass.java >>>>> error pass >>>>> serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java#id0 >>>>> >>>>> >>>>> I find that compiler/rtm/locking/TestRTMLockingThreshold.java produces >>>>> inconsistent results on my machine, regardless of whether or not the >>>>> patch is applied. >>>>> >>>>> BR >>>>> Stuart >>>>> >>>>> >>>>> On 1 June 2017 at 06:39, David Holmes wrote: >>>>>> >>>>>> >>>>>> Thanks for that information Muneer, that is an unpleasant surprise. >>>>>> >>>>>> Stuart: I think 8180904 has to be fixed before this backport can take >>>>>> place. >>>>>> >>>>>> Thanks, >>>>>> David >>>>>> ----- >>>>>> >>>>>> >>>>>> On 1/06/2017 2:31 PM, Muneer Kolarkunnu wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi David and Stuart, >>>>>>> >>>>>>> I recently reported one bug[1] for the same issue and listed which all >>>>>>> test cases are failing with agentvm. >>>>>>> I tested in Oracle.Linux.7.0 x64. 
>>>>>>> >>>>>>> [1] https://bugs.openjdk.java.net/browse/JDK-8180904 >>>>>>> >>>>>>> Regards, >>>>>>> Muneer >>>>>>> >>>>>>> -----Original Message----- >>>>>>> From: David Holmes >>>>>>> Sent: Thursday, June 01, 2017 7:04 AM >>>>>>> To: Stuart Monteith; hotspot-dev Source Developers >>>>>>> Subject: Re: RFR 8u backport: 8077608: [TESTBUG] Enable Hotspot jtreg >>>>>>> tests to run in agentvm mode >>>>>>> >>>>>>> Hi Stuart, >>>>>>> >>>>>>> This looks like an accurate backport of the change. >>>>>>> >>>>>>> My only minor concern is if there may be tests in 8u that are no >>>>>>> longer >>>>>>> in >>>>>>> 9 which may not work with agentvm mode. >>>>>>> >>>>>>> What platforms have you tested this on? >>>>>>> >>>>>>> Thanks, >>>>>>> David >>>>>>> >>>>>>> On 31/05/2017 11:19 PM, Stuart Monteith wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Hello, >>>>>>>> Currently the jdk8u codebase fails some JTreg Hotspot tests >>>>>>>> when >>>>>>>> running in the -agentvm mode. This is because the ProcessTools class >>>>>>>> is not passing the classpath. There are substantial time savings to >>>>>>>> be >>>>>>>> gained using -agentvm over -othervm. >>>>>>>> >>>>>>>> Fortunately, there was a fix for jdk9 (8077608) that has not been >>>>>>>> backported to jdk8u. The details are as follows: >>>>>>>> >>>>>>>> >>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/017937.h >>>>>>>> tml >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8077608 >>>>>>>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/af2a1e9f08f3 >>>>>>>> >>>>>>>> The patch just needed a slight change, to remove the change to the >>>>>>>> file "test/compiler/uncommontrap/TestUnstableIfTrap.java" as that >>>>>>>> test >>>>>>>> doesn't exist on jdk8u. 
>>>>>>>> >>>>>>>> My colleague Ningsheng has kindly hosted the change here: >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~njian/8077608/webrev.00 >>>>>>>> >>>>>>>> >>>>>>>> BR, >>>>>>>> Stuart >>>>>>>> >>>>>> >>>> >> From magnus.ihse.bursie at oracle.com Fri Jun 16 16:03:56 2017 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 16 Jun 2017 18:03:56 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <70fb60d3-7d19-ed9b-840e-bbb7315ae864@oracle.com> References: <20170609102041.GA2477@physik.fu-berlin.de> <1497442884.3741.1.camel@redhat.com> <70fb60d3-7d19-ed9b-840e-bbb7315ae864@oracle.com> Message-ID: On 2017-06-14 16:38, Erik Helin wrote: > On 06/14/2017 02:21 PM, Severin Gehwolf wrote: >> Hi Eric, >> >> On Wed, 2017-06-14 at 13:50 +0200, Erik Helin wrote: >>> For the fourth patch, fix-zero-build-on-sparc.diff, I'm not so sure. For >>> example, the following is a bit surprising to me (mostly because I'm not >>> familiar with zero): >>> >>> --- a/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp >>> +++ b/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp >>> @@ -37,7 +37,7 @@ >>> // understanding that there may be concurrent readers of that memory. >>> void memset_with_concurrent_readers(void* to, int value, size_t size); >>> >>> -#ifdef SPARC >>> +#if defined(SPARC) && !defined(ZERO) >>> >>> When this code was written, the intent was clearly to have a specialized >>> version of this function for SPARC. When writing such code, do we always >>> have to take into account the zero case with !defined(ZERO)? >> As of now, yes I think so. The thing is that Zero is supposed to be >> architecture agnostic for the most part. That is, you can build Zero on >> x86_64, SPARC, aarch64, etc. > Ok. But if Zero is architecture agnostic, why do we have the directory > hotspot/src/cpu/zero? Sorry, I don't know much about Zero... Zero is a strange beast. 
:-& It behaves partially as a separate architecture, and partially as a "jvm variant" (like server, client), and partially as a "turn this special flag on". Long term, there's probably a bunch of clarity to gain from cleaning this up. /Magnus > >>> That >>> doesn't seem like the right (or a scalable) approach to me. >> Agreed. That's how it is at the moment, though. >> >>> Severin and/or Roman, do you guys know more about Zero and how this >>> should work? If I want to write a function that I want to specialize for >>> e.g. x86-64 or arm, do I always have to take Zero into account? Or >>> should some other define be used, like #ifdef TARGET_ARCH_sparc? >> So the ZERO define can happen regardless of arch. I don't really know >> any define which does what you want except #if defined() && >> !defined(ZERO) perhaps. > Hmm, ok, but for the above code snippet, if we are running with Zero on > Sparc, can't we use the Sparc optimized version of > memset_with_concurrent_readers? Or can't we use Sparc assembly in the > runtime when running with Zero? > > Thanks, > Erik > >> Thanks, >> Severin >> >>> Thanks, >>> Erik >>> >>> On 06/09/2017 12:20 PM, John Paul Adrian Glaubitz wrote: >>>> Hi! >>>> >>>> I am currently working on fixing OpenJDK-9 on all non-mainstream >>>> targets available in Debian. For Debian/sparc64, the attached four >>>> patches were necessary to make the build succeed [1]. >>>> >>>> I know the patches cannot be merged right now, but I'm posting them >>>> anyway in case someone else is interested in using them. >>>> >>>> All patches are: >>>> >>>> Signed-off-by: John Paul Adrian Glaubitz >>>> >>>> I also signed the OCA. >>>> >>>> I'm now looking into fixing the builds on alpha (DEC Alpha), armel >>>> (ARMv4T), m68k (680x0), powerpc (PPC32) and sh4 (SuperH/J-Core). 
>>>> >>>> Cheers, >>>> Adrian >>>> >>>>> [1] https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=sparc64&ver=9%7Eb170-2&stamp=1496931563&raw=0 From kim.barrett at oracle.com Sat Jun 17 00:33:24 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 16 Jun 2017 20:33:24 -0400 Subject: RFR(XL): 8181449: Fix debug.hpp / globalDefinitions.hpp dependency inversion Message-ID: Please review this refactoring of debug.hpp and globalDefinitions.hpp so that debug.hpp no longer includes globalDefinitions.hpp. Instead, the include dependency is now in the other direction. Among other things, this permits the use of the assert macros by inline functions defined in globalDefinitions.hpp. There are a few functions declared toward the end of debug.hpp that now seem somewhat misplaced there. I'm leaving them there for now, but will file a new CR to figure out a better place for them, possibly in vmError. There are a number of additional cleanups for dead code and the like that I'll be filing as followups; this change is already rather large and I didn't want to add more stuff to it. CR: https://bugs.openjdk.java.net/browse/JDK-8181449 Testing: jprt Webrev: http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.00/ The full webrev is somewhat large. However, much of the bulk involves either adding #includes to files or moving code from one place to another without changing it. To simplify reviewing the changes, I've broken it down into a sequence of patches, each associated with a particular bit of the refactoring. The full change is equivalent to applying these patches in the given order. (Note: I don't know if applying a subset gives a working repository.) (1) http://cr.openjdk.java.net/~kbarrett/8181449/jvm_h/ a. In preparation for removing the #include of jvm.h from debug.hpp (see move_format_buffer webrev), ensured all files that contain references to jio_printf variants include jvm.h. This mostly involved adding a #include to lots of files. b. 
For a few header files that referenced jio_printf variants, moved the function definition from the .hpp to the corresponding .cpp, and added #include of jvm.h to the .cpp. - macroAssembler_sparc.[ch]pp - macroAssembler_x86.[ch]pp - macroAssembler_aarch64.[ch]pp (2) http://cr.openjdk.java.net/~kbarrett/8181449/move_format_buffer/ a. Moved FormatBuffer and related stuff from debug.[ch]pp to new formatBuffer.[ch]pp, and updated users to #include the new files. This includes moving the #include of jvm.h, which is no longer needed by debug.hpp. b. Made the #include of debug.hpp explicit when already otherwise modifying a file that uses assert-like macros, rather than depending on indirect inclusion of that file. (3) http://cr.openjdk.java.net/~kbarrett/8181449/move_pns/ a. Moved print_native_stack to VMError class. b. Removed unused and undefined pd_obfuscate_location. c. Simplified #ifdef PRODUCT guard in ps(). (4) http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/ a. Moved / combined definitions of BREAKPOINT macro from globalDefinitions_*.hpp to new breakpoint.hpp. b. Deleted all definitions of unused DEBUG_EXCEPTION macro. c. Moved gcc-specific ATTRIBUTE_PRINTF, pragma macros, and friends from globalDefinitions_gcc.hpp to new compilerWarnings.hpp. Also moved the default definitions for those macros from globalDefinitions.hpp to compilerWarnings.hpp. d. Added TARGET_COMPILER_HEADER[_INLINE] macros, similar to the CPU/OS/OS_CPU/_HEADER[_INLINE] macros, for including files based on new INCLUDE_SUFFIX_TARGET_COMPILER macro provided by the build system. (5) http://cr.openjdk.java.net/~kbarrett/8181449/flip_depend/ a. Changed globalDefinitions.hpp to #include debug.hpp, rather than the other way around. b. Changed globals.hpp to #include globalDefinitions.hpp rather than debug.hpp, since it actually needs to former and not the latter. c. 
Changed jvmci_globals.cpp to #include jvm.h, since it's no longer being indirectly included via an indirect include of debug.hpp that was including globalDefinitions.hpp. d. Moved printf-style formatters earlier in globalDefinitions.hpp, so they can be used in assert messages in this file. e. In globalDefinitions.hpp, changed some #ifdef ASSERT blocks of conditional calls to basic_fatal to instead use assert. While doing so, made the error messages more informative. In addition to globals.hpp, there are about 90 files that #include debug.hpp but not globalDefinitions.hpp. The few changes mentioned were sufficient to fix missing indirect includes resulting from debug.hpp no longer including globalDefinitions.hpp. There could be additional problems with platforms not supported by Oracle though. There are also about 40 files which directly include debug.hpp but don't appear to use any of the assertion macros. From glaubitz at physik.fu-berlin.de Sat Jun 17 23:40:06 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sun, 18 Jun 2017 01:40:06 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <20170614120408.GB16230@physik.fu-berlin.de> References: <20170609102041.GA2477@physik.fu-berlin.de> <20170614120408.GB16230@physik.fu-berlin.de> Message-ID: <5d613e41-a982-ec67-3a48-5befbf3a2808@physik.fu-berlin.de> Hi Erik! On 06/14/2017 02:04 PM, John Paul Adrian Glaubitz wrote: > On Wed, Jun 14, 2017 at 01:50:06PM +0200, Erik Helin wrote: >> thanks for contributing and signing the OCA! > > Thanks for reviewing my patches ;-). My OCA has been completed now and I'm now showing up on the list of signees: > http://www.oracle.com/technetwork/community/oca-486395.html So, at least from the bureaucracy side, I should be all set now. >> I think the first three patches (hotspot-add-missing-log-header.diff, >> hotspot-fix-checkbytebuffer.diff, rename-sparc-linux-atomic-header.diff) all >> look good, thanks for fixing broken code. 
Consider them Reviewed by me. >> Every patch needs a corresponding issue in the bug tracker, so I went ahead >> and created: >> - https://bugs.openjdk.java.net/browse/JDK-8182163 >> - https://bugs.openjdk.java.net/browse/JDK-8182164 >> - https://bugs.openjdk.java.net/browse/JDK-8182165 So, for these three patches. What else needs to be done to get them in a state so they can be merged? I understand they need to be reviewed by a second reviewer as they concern Hotspot code. In another message, you also mentioned: > Also, have you run the tier 1 testing for hotspot (the tests that need to > pass for every commit)? You can run those tests by running (from the > top-level "root" repo): > > $ make test TEST=hotspot_tier1 > > or, if you want to try the new run-test functionality > > $ make run-test-hotspot_tier1 Should those be run on the Linux sparc64 machine or just on Linux x86_64? I'm asking because running the testsuite on Linux sparc64 will only be possible with all four patches applied as they are build fixes. Running the testsuite on Linux x86_64 will be possible, of course. After fixing the build on Linux sparc, I will have more patches ready to fix the Zero builds on even more targets ;-). Cheers, Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From thomas.stuefe at gmail.com Sun Jun 18 07:30:34 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Sun, 18 Jun 2017 09:30:34 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area Message-ID: Dear all, please take a look at this change. It attempts to solve the problem of UL using resource area in LogStream classes and tripping over ResourceMark when logging. There have been a couple of issues in that area and it continues to be an accident waiting to happen. 
issue: https://bugs.openjdk.java.net/browse/JDK-8181917 Prior discussion at hs-dev: http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-June/027090.html The problem is that LogStreams use resource area backed memory to assemble a log output line. Log output lines can be lengthy, so this memory may need to be expanded. If that expansion happens down the stack in a sub function which spans an own ResourceMark, we do assert or crash, see e.g. JDK-8181807 , JDK-8149557 , JDK-8167995 . There is no real good way around this but refraining from using resource area at all. Please see the bug report description for more details. The way to solve this is not to use Resource Memory for the Line memory but instead to use C-Heap memory. Refraining from using Resource Memory also makes UL more robust (not dependent on the Arena memory subsystem to work correctly and not dependent of having a current Thread* available). It also means it is less likely to change application behaviour - something logging should avoid if possible. Now, replacing resource area memory with C-Heap would be very simple, if only for one detail: deallocation. Currently, most LogStream instances are never deleted explicitly. They (both the instance and its line memory) usually live in resource area: They are returned from the "(trace|debug|info|warning|error)_stream()" function of LogImpl and used like this or similar to this: Log(jit, compilation) log; if (log.is_debug()) { print_impl(log.debug_stream(), ... ); } log.debug_stream() returns a one-time-use-only-resource-area-allocated instance of LogStream(), which is never cleaned up. So, its destructor is never run. That means that simply replacing the internal line memory of LogStream with C-Heap alone is not sufficient, because the memory could never get freed. This problem has been discussed in hotspot-dev and in #openjdk IRC with the UL developers. 
There are various solutions, but in the discussions the UL developers preferred a plain and simple solution: to just remove the "(trace|debug|info|warning|error)_stream()" completely and make LogStream a stack-allocatable object only. That way it is used as a plain RAII object and will free its internal memory when going out of scope. In the above example, the callsite would have to be changed to something like this: Log(jit, compilation) log; if (log.is_debug()) { LogStream ls(log.debug()); print_impl(&ls, ... ); } This solution requires fixing up a large number of callsites, but has the benefit of making the API simpler. ---- This is the first draft of the fix mentioned above. All callsites were fixed (but maybe it can be done better) and the LogStream API was greatly simplified. This is the complete webrev: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all-together-changes.webrev.00/webrev/index.html Because it is a bit lengthy, but many changes are mechanical and unexciting, I split the webrev in parts for easier review. -- http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/callsite-changes.webrev.00/webrev/ These are the - mostly mechanical - changes to the many callsites. Most of these changes follow the same pattern. A code sequence using "xxx_stream()" was split into declaration of LogTarget or Log (usually the former), an "is_enabled()" query and declaration of LogStream on the stack afterwards. This follows the principle that the logging itself should be cheap if unused: the declaration of LogTarget is a noop, because LogTarget has no members. Only if is_enabled() is true, more expensive things are done, e.g. initializing the outputStream object and allocating ResourceMarks. Note that I encountered some places where logging was done without enclosing "is_enabled()" query - for instance, see gc/g1/heapRegion.cpp, or cms/compactibleFreeListSpace.cpp. 
As far as I understand, in these cases we actually do print (assemble the line to be printed), only to discard all that work in the UL backend because logging is not enabled for that log target. So, we pay quite a bit for nothing. I marked these questionable code sections with an "// Unconditional write?" comment and we may want to fix those later in a follow up issue? -- http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/api-changes.webrev.00/webrev/ The API changes mostly are simplifications. Before, we had a whole hierarchy of LogStream... classes whose only difference was how the backing memory was allocated. Because we now always use C-Heap, all this can be folded together into a single simple LogStream class which uses Cheap as line memory. Please note that I left "LogStreamNoResourceMark" and "LogStreamCHeap" for now and typedef'ed them to the one single LogStream class; I will fix those later as part of this refactoring. -- http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/logstream-optimization.webrev.00/webrev/ Finally, this is a small optimization for LogStream (in case we are worried switching from resource area to malloc would be a performance issue). LogStream was changed to use, for small log lines, a small internal fixed sized char buffer and only switch to malloced memory for larger lines. The small char array is a member of LogStream and therefore placed on the stack. ----- The fix builds on Windows and Linux x64, and I found not yet any regressions. Will run more jtreg tests next week. Thanks in advance for the reviewing effort! 
Kind Regards, Thomas From thomas.stuefe at gmail.com Sun Jun 18 07:40:17 2017 From: thomas.stuefe at gmail.com (Thomas Stüfe) Date: Sun, 18 Jun 2017 09:40:17 +0200 Subject: stringStream in UL and nested ResourceMarks In-Reply-To: <7528b942-a591-401a-433d-5e16b85bc10c@oracle.com> References: <13b9c852-293c-e657-7ed1-d6644669a1f8@oracle.com> <93824b7c-598b-3c68-9b05-2922bc71ec7f@oracle.com> <4a9cc5cf-6e6c-ffeb-32c7-a5428e706fe9@oracle.com> <21d4c4ee-e386-d908-2c7f-4573e715f91e@oracle.com> <7528b942-a591-401a-433d-5e16b85bc10c@oracle.com> Message-ID: Hi Erik, thanks for the review! I reposted the next iteration as a first "real" fix under a new subject. As for your suggestions, please find my answers inline. On Wed, Jun 14, 2017 at 11:39 AM, Erik Helin wrote: > On 06/13/2017 06:23 PM, Thomas Stüfe wrote: > >> So, I changed a whole bunch of callsites to stack-only LogStreams and my >> brain is slowly turning to cheese :) therefore, let's do a sanity check >> if this is still what we want. Current snapshot of my work here: >> >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-UL-should >> -not-use-resource-memory-for-LogStream/current-work-2/webrev/ >> > > I think this looks really good! A few comments: > > --- old/src/cpu/sparc/vm/vm_version_sparc.cpp > +++ new/src/cpu/sparc/vm/vm_version_sparc.cpp > @@ -381,7 +382,8 @@ > - outputStream* log = Log(os, cpu)::info_stream(); > + LogStream ls(Log(os, cpu)::info()); > + outputStream* log = &ls; > > I think the above pattern, LogStream ls(Log(foo, bar)::info()), turned out > very good, succinct and readable. Great work. > > thanks!
> --- old/src/share/vm/classfile/classLoaderData.cpp > +++ new/src/share/vm/classfile/classLoaderData.cpp > @@ -831,16 +833,17 @@ > - outputStream* log = Log(class, loader, data)::debug_stream(); > - log->print("create class loader data " INTPTR_FORMAT, p2i(cld)); > - log->print(" for instance " INTPTR_FORMAT " of %s", p2i((void > *)cld->class_loader()), > + Log(class, loader, data) log; > + LogStream ls(log.debug()); > + ls.print("create class loader data " INTPTR_FORMAT, p2i(cld)); > + ls.print(" for instance " INTPTR_FORMAT " of %s", p2i((void > *)cld->class_loader()), > cld->loader_name()); > > if (string.not_null()) { > - log->print(": "); > - java_lang_String::print(string(), log); > + ls.print(": "); > + java_lang_String::print(string(), &ls); > } > - log->cr(); > + ls.cr(); > > Do you really need the `log` variable here? It seems to that only `ls` is > used? Or did you mean to do the `outputStream* log = &ls` pattern here as > well? Or maybe I missed something? > > I ended up reformulating the function. Because I did not like that we have the "is_enabled(level, tags....)" upstairs in the caller frame, and again unconditional write in this frame. So I redid the function, it now does know nothing of UL but writes to an outputStream, and the usual LogStream logic is in the caller frame. 
> --- old/src/share/vm/classfile/loaderConstraints.cpp > +++ new/src/share/vm/classfile/loaderConstraints.cpp > @@ -98,14 +101,14 @@ > if (klass != NULL && > klass->class_loader_data()->is_unloading()) { > probe->set_klass(NULL); > - if (log_is_enabled(Info, class, loader, constraints)) { > + if (lt.is_enabled()) { > ResourceMark rm; > - outputStream* out = Log(class, loader, > constraints)::info_stream(); > - out->print_cr("purging class object from constraint for name > %s," > + LogStream ls(lt); > + ls.print_cr("purging class object from constraint for name %s," > " loader list:", > probe->name()->as_C_string()); > for (int i = 0; i < probe->num_loaders(); i++) { > - out->print_cr(" [%d]: %s", i, > + ls.print_cr(" [%d]: %s", i, > probe->loader_data(i)->loader_name()); > } > } > > Could the pattern > LogStream ls(lt); > ls.print_cr("hello, brave new logging world"); > > become > LogStream(lt).print_cr("hello, brave new logging world"); > > in order to have less line? Not sure if it is better, but it is at least > shorter :) Seems to be a rather common pattern as well... > > It could, but "ls" is used twice, so I guess I need the variable name. > --- old/src/share/vm/gc/g1/g1AllocRegion.cpp > +++ new/src/share/vm/gc/g1/g1AllocRegion.cpp > @@ -211,12 +213,9 @@ > > if ((actual_word_size == 0 && result == NULL) || detailed_info) { > ResourceMark rm; > - outputStream* out; > - if (detailed_info) { > - out = log.trace_stream(); > - } else { > - out = log.debug_stream(); > - } > + LogStream ls_trace(log.trace()); > + LogStream ls_debug(log.debug()); > + outputStream* out = detailed_info ? &ls_trace : &ls_debug; > > Could this be > LogStream out = LogStream(detailed_info ? log.trace() : log.debug()); > > or is this too succinct? Anyways, nice use of the ternary operator here, > makes the code much more readable. > > Ah, that was annoying. This is the price you pay for too much templates. In short, I did not find a better way to express this. 
I played around with LogStream ls(detailled? LogTargetHandle(log.trace()):LogTargetHandle(log.debug())); but neither could get it to work nor was too impressed with the shortness of that expression.... If you have a better way to do this, please tell me :) I didn't have to look through the entire patch (got approx 50% of the way), > but I think the patch is becoming really good. > > Some thoughts: >> >> After talking this over with Eric off-list, >> > > Well, off-list, but on IRC ;) #openjdk on irc.oftc.net for those that > want to follow along or join in on the discussion. > > I do not think anymore that reducing the: >> >> LogTarget(...) log; >> if (log.is_enabled()) { >> LogStream ls(log)... >> } >> >> to just >> >> LogStream ls(..); >> if (ls.is_enabled()) { >> .. >> } >> >> is really a good idea. We want logging to not cause costs if logging is >> disabled. But this way, we would always to pay the cost for initializing >> the LogStream, which means initializing outputStream at least once (for >> the parent class) and maybe twice (if the line buffer is an outputStream >> class too). outputStream constructor just assigns a bunch of member >> variables, but this is still more than nothing. >> > > Yep, I still agree with this. > > --- >> >> Funnily, when translating all these callsites, I almost never used Log, >> but mostly LogTarget. This is because I wanted to avoid repeating the >> (level, tag, tag..) declarations, and the only way to do this is via >> LogTarget. Compare: >> >> Log(gc, metaspace, freelist) log; >> if (log.is_debug()) { >> LogStream ls(log.debug()); >> } >> >> repeats the "debug" info. Even worse are cases where the whole taglist >> would be repeated: >> >> if (log_is_enabled(Info, class, loader, constraints)) { >> LogStream ls(Log( class, loader, constraints)::info()); >> } >> > > I think using LogTarget makes a lot sense in these situations, I prefer > that solution. 
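[Editor's note: the ternary workaround from the g1AllocRegion hunk can be illustrated with hypothetical types. TraceStream/DebugStream below stand in for the two distinct LogStream template instantiations; they are not the real UL classes.]

```cpp
#include <string>

// Two hypothetical stream types behind a common base, standing in for
// the trace- and debug-level LogStream instantiations.
struct OutStreamBase {
  virtual ~OutStreamBase() {}
  virtual std::string level() const = 0;
};
struct TraceStream : OutStreamBase { std::string level() const { return "trace"; } };
struct DebugStream : OutStreamBase { std::string level() const { return "debug"; } };

// Mirrors the webrev: both streams live on the stack and a ternary picks
// a base-class pointer. A ternary over the two value types themselves
// (as in LogStream(cond ? log.trace() : log.debug())) has no common type
// and would not compile, which is why the two-variable form was kept.
std::string pick_level(bool detailed_info) {
  TraceStream ls_trace;
  DebugStream ls_debug;
  OutStreamBase* out = detailed_info ? static_cast<OutStreamBase*>(&ls_trace)
                                     : static_cast<OutStreamBase*>(&ls_debug);
  return out->level();
}
```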
> > --- >> >> I found cases where the usage of "xx_stream()" was not guarded by any >> is_enabled() flag but executed unconditionally, e.g. metaspace.cpp >> (VirtualSpaceNode::take_from_committed()): >> >> 1016 if (!is_available(chunk_word_size)) { >> 1017 Log(gc, metaspace, freelist) log; >> 1018 log.debug("VirtualSpaceNode::take_from_committed() not >> available " SIZE_FORMAT " words ", chunk_word_size); >> 1019 // Dump some information about the virtual space that is nearly >> full >> 1020 ResourceMark rm; >> 1021 print_on(log.debug_stream()); >> 1022 return NULL; >> 1023 } >> >> So I really wondered: print_on(log.debug_stream()) is executed >> unconditionally, what happens here? What happens is that the whole >> printing is executed, first inside the LogStream, then down to >> LogTargetImpl, and somewhere deep down in UL (in LogTagSet::log()) the >> assembled message is ignored because there is no output connected to it. >> So we always pay for the whole printing. I consider this an error, >> right? I wonder how this could be prevented. >> > > Hmm, I'm not an expert on UL internals, so take my ideas with a large > grain of salt :) Would be possible to have log.debug() do the is_enabled() > check (and just do nothing if the check is false)? That would unfortunately > penalize code that want to call log multiple times, such as: > > Log(foo, bar) log; > if (log.is_enabled()) { > log.debug(...); > log.debug(...); > log.debug(...); > } > > In the above snippet, we would with my suggestion do the is_enabled check > 4 times instead of 1. OTOH, then one could then remove the first check and > just have: > > Log(foo, bar) log; > log.debug(...); > log.debug(...); > log.debug(...); > > (but still, this is 3 checks compared to 1). > > How expensive is the is_enabled check? I _think_ (others, please correct > me if I'm wrong) that code is meant to use the "if (is_enabled()" pattern > if either the logging or getting the data for logging is expensive. 
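[Editor's note: the cost issue debated here — an unguarded call still evaluates its arguments even when the backend discards the message — can be shown with a small mock. MockLog and expensive_dump() are hypothetical; the real UL call path differs, but the argument-evaluation behavior is plain C++.]

```cpp
#include <string>

// Counts how often the expensive formatting work actually runs.
static int g_expensive_calls = 0;

// Hypothetical expensive work, e.g. assembling a metaspace dump string.
std::string expensive_dump() {
  ++g_expensive_calls;
  return "...large dump...";
}

// Hypothetical log front end: the message is simply dropped when the
// target is disabled, but only *after* the caller built it.
struct MockLog {
  bool enabled;
  bool is_debug() const { return enabled; }
  void debug(const std::string& /*msg*/) { /* backend drops it if disabled */ }
};

// Unguarded: expensive_dump() is evaluated unconditionally at the call
// site; the work is wasted when logging is off.
void log_unguarded(MockLog& log) {
  log.debug(expensive_dump());
}

// Guarded: the pattern the thread recommends. Cost is only paid when the
// target is enabled.
void log_guarded(MockLog& log) {
  if (log.is_debug()) {
    log.debug(expensive_dump());
  }
}
```

This is why the thread treats missing is_enabled() guards around expensive printing (print_on(log.debug_stream()) and friends) as bugs.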
Hence, > if code doesn't do this (and instead rely on log.debug() to discard the > data), then it should be fine with this costing a bit more (or we have a > bug). > > I think so too (that the author just forgot to use is_enabled() enclosure). I re-asked the question in my official RFR, so lets see what people say. I think that maybe UL should print a warning if the UL frontend attempts to write something and there is no LogOutput for it available. But I think this can be done as a separate issue. > --- >> >> After doing all these changes, I am unsure. Is this the right direction? >> The alternative would still be my original proposal (tying the LogStream >> instances as members to the LogTarget instance on the stack). What do >> you think? I also think that if we go this direction, it might make >> sense to do this in jdk9, because auto-merging jdk9 to jdk10 may be a >> problem with so many changes. Or am I too pessimistic? >> > > IMHO you are definitely heading in the right direction. Again, IMO, I > don't think we should do this in JDK 9. Focus on 10 and if backporting > turns to be problematic due to this change, then we fix it then ;) > > Again, others might have a different view (if so, please chime in). > > Thanks, > Erik > > Thanks, Thomas > Kind regards, Thomas >> >> >> >> >> >> From erik.helin at oracle.com Mon Jun 19 06:59:38 2017 From: erik.helin at oracle.com (Erik Helin) Date: Mon, 19 Jun 2017 08:59:38 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <5d613e41-a982-ec67-3a48-5befbf3a2808@physik.fu-berlin.de> References: <20170609102041.GA2477@physik.fu-berlin.de> <20170614120408.GB16230@physik.fu-berlin.de> <5d613e41-a982-ec67-3a48-5befbf3a2808@physik.fu-berlin.de> Message-ID: <31eeeb60-1b0d-a0cb-238c-ca2361430786@oracle.com> On 06/18/2017 01:40 AM, John Paul Adrian Glaubitz wrote: > Hi Erik! 
> > On 06/14/2017 02:04 PM, John Paul Adrian Glaubitz wrote: >> On Wed, Jun 14, 2017 at 01:50:06PM +0200, Erik Helin wrote: >>> thanks for contributing and signing the OCA! >> >> Thanks for reviewing my patches ;-). > > My OCA has been completed now and I'm now showing up on the list > of signees: > >> http://www.oracle.com/technetwork/community/oca-486395.html > > So, at least from the bureaucracy side, I should be all set now. Great, thanks! >>> I think the first three patches (hotspot-add-missing-log-header.diff, >>> hotspot-fix-checkbytebuffer.diff, rename-sparc-linux-atomic-header.diff) all >>> look good, thanks for fixing broken code. Consider them Reviewed by me. >>> Every patch needs a corresponding issue in the bug tracker, so I went ahead >>> and created: >>> - https://bugs.openjdk.java.net/browse/JDK-8182163 >>> - https://bugs.openjdk.java.net/browse/JDK-8182164 >>> - https://bugs.openjdk.java.net/browse/JDK-8182165 > > So, for these three patches. What else needs to be done to get them > in a state so they can be merged? I understand they need to be reviewed > by a second reviewer as they concern Hotspot code. Yep, you need a second reviewer for these patches. > In another message, you also mentioned: > >> Also, have you run the tier 1 testing for hotspot (the tests that need to >> pass for every commit)? You can run those tests by running (from the >> top-level "root" repo): >> >> $ make test TEST=hotspot_tier1 >> >> or, if you want to try the new run-test functionality >> >> $ make run-test-hotspot_tier1 > > Should those be run on the Linux sparc64 machine or just on Linux x86_64? The tests should be run in order to verify your changes. Since your changes make Linux/sparc64 work again, they should be run on Linux/sparc64. > I'm asking because running the testsuite on Linux sparc64 will only be > possible with all four patches applied as they are build fixes. Running > the testsuite on Linux x86_64 will be possible, of course. 
Ok, this is the part I don't really understand yet. Why run with Zero on Linux/sparc64? Shouldn't it be possible to run HotSpot "natively" (with the template interpreter, C1, C2) on Linux/sparc64? Thanks, Erik > After fixing the build on Linux sparc, I will have more patches ready > to fix the Zero builds on even more targets ;-). > > Cheers, > Adrian > From glaubitz at physik.fu-berlin.de Mon Jun 19 07:06:13 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 19 Jun 2017 09:06:13 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <31eeeb60-1b0d-a0cb-238c-ca2361430786@oracle.com> References: <20170609102041.GA2477@physik.fu-berlin.de> <20170614120408.GB16230@physik.fu-berlin.de> <5d613e41-a982-ec67-3a48-5befbf3a2808@physik.fu-berlin.de> <31eeeb60-1b0d-a0cb-238c-ca2361430786@oracle.com> Message-ID: <20170619070613.GE28760@physik.fu-berlin.de> On Mon, Jun 19, 2017 at 08:59:38AM +0200, Erik Helin wrote: > > I'm asking because running the testsuite on Linux sparc64 will only be > > possible with all four patches applied as they are build fixes. Running > > the testsuite on Linux x86_64 will be possible, of course. > > Ok, this is the part I don't really understand yet. Why run with Zero on > Linux/sparc64? Shouldn't it be possible to run HotSpot "natively" (with > the template interpreter, C1, C2) on Linux/sparc64? The first three patches are needed to get native Hotspot to build on Linux sparc. Without the patches, the builds bails out with compiler errors. So, should I just run the testsuite with all three patches applied? The fourth patch just fixes Zero on Linux sparc. If I understand correctly, Debian's openjdk packages always build the Zero VM even on targets with a native Hotspot. And without the last patch, the Zero build fails on Linux sparc. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From marcus.larsson at oracle.com Mon Jun 19 12:30:11 2017 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Mon, 19 Jun 2017 14:30:11 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: Message-ID: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Hi Thomas, Thanks for your effort on this, great work! On 2017-06-18 09:30, Thomas St?fe wrote: > > -- > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/callsite-changes.webrev.00/webrev/ > > (Although this is a pre-existing issue, it might be a good opportunity to clean it up now.) In file loaderConstraints.cpp, class LoaderConstraintTable, for functions purge_loader_constraints(), add_entry(), check_or_update(), extend_loader_constraint() and merge_loader_constraints(): A LogStream is created, but only ever used for print_cr():s, which sort of defeats its purpose. It would be much simpler just to use the LogTarget directly. This is actually what's done for the converted log_ldr_constraint_msg(). A similar but worse issue is present in sharedPathsMiscInfo.cpp: Here, a LogStream is used to print incomplete lines without any CR at the end. These messages will never be logged. Also, the use of a stream here is unnecessary as well. In compactibleFreeListSpace.cpp: 2200 ResourceMark rm; It should be safe to remove this ResourceMark. > These are the - mostly mechanical - changes to the many callsites. > Most of these changes follow the same pattern. A code sequence using > "xxx_stream()" was split into declaration of LogTarget or Log (usually > the former), an "is_enabled()" query and declaration of LogStream on > the stack afterwards. This follows the principle that the logging > itself should be cheap if unused: the declaration of LogTarget is a > noop, because LogTarget has no members. 
Only if is_enabled() is true, > more expensive things are done, e.g. initializing the outputStream > object and allocating ResourceMarks. > > Note that I encountered some places where logging was done without > enclosing "is_enabled()" query - for instance, see > gc/g1/heapRegion.cpp, or cms/compactibleFreeListSpace.cpp. As far as I > understand, in these cases we actually do print (assemble the line to > be printed), only to discard all that work in the UL backend because > logging is not enabled for that log target. So, we pay quite a bit for > nothing. I marked these questionable code sections with an "// > Unconditional write?" comment and we may want to fix those later in a > follow up issue? That sounds good to me. I found more sites where the logging is unconditional (compactibleFreeListSpace.cpp, parOopClosures.inline.hpp, g1RemSet.cpp), but we should fix them all as a separate issue. I filed https://bugs.openjdk.java.net/browse/JDK-8182466. > > -- > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/api-changes.webrev.00/webrev/ > > > The API changes mostly are simplifications. Before, we had a whole > hierarchy of LogStream... classes whose only difference was how the > backing memory was allocated. Because we now always use C-Heap, all > this can be folded together into a single simple LogStream class which > uses Cheap as line memory. Please note that I left > "LogStreamNoResourceMark" and "LogStreamCHeap" for now and typedef'ed > them to the one single LogStream class; I will fix those later as part > of this refactoring. Looks good to me, as long as we get rid of the typedefs too eventually. :) > > -- > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/logstream-optimization.webrev.00/webrev/ > 56 // Prevent operator new for LogStream. 57 // static void* operator new (size_t); 58 // static void* operator new[] (size_t); 59 Should these be uncommented? 
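[Editor's note: the commented-out "prevent operator new" lines discussed above follow a standard C++ idiom. A minimal sketch, assuming the goal is a stack-only class (StackOnlyStream is a hypothetical name; in C++11 one would write `= delete` instead of the private-undefined declarations shown here):]

```cpp
#include <cstddef>

// A class that may only be instantiated on the stack: class-specific
// operator new is declared private and never defined, so any
// `new StackOnlyStream` fails to compile (or link, from within the class).
class StackOnlyStream {
private:
  static void* operator new(std::size_t);    // declared, intentionally undefined
  static void* operator new[](std::size_t);  // same for array new
public:
  int lines_written;
  StackOnlyStream() : lines_written(0) {}
  void print_cr() { ++lines_written; }
};
```

Note this blocks plain `new` at the language level, which also sidesteps any inherited ResourceObj::operator new; stack and member usage remain unaffected.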
Thanks again, Marcus From erik.helin at oracle.com Mon Jun 19 12:48:39 2017 From: erik.helin at oracle.com (Erik Helin) Date: Mon, 19 Jun 2017 14:48:39 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <20170619070613.GE28760@physik.fu-berlin.de> References: <20170609102041.GA2477@physik.fu-berlin.de> <20170614120408.GB16230@physik.fu-berlin.de> <5d613e41-a982-ec67-3a48-5befbf3a2808@physik.fu-berlin.de> <31eeeb60-1b0d-a0cb-238c-ca2361430786@oracle.com> <20170619070613.GE28760@physik.fu-berlin.de> Message-ID: On 06/19/2017 09:06 AM, John Paul Adrian Glaubitz wrote: > On Mon, Jun 19, 2017 at 08:59:38AM +0200, Erik Helin wrote: >>> I'm asking because running the testsuite on Linux sparc64 will only be >>> possible with all four patches applied as they are build fixes. Running >>> the testsuite on Linux x86_64 will be possible, of course. >> >> Ok, this is the part I don't really understand yet. Why run with Zero on >> Linux/sparc64? Shouldn't it be possible to run HotSpot "natively" (with >> the template interpreter, C1, C2) on Linux/sparc64? > > The first three patches are needed to get native Hotspot to build on > Linux sparc. Without the patches, the builds bails out with compiler > errors. > > So, should I just run the testsuite with all three patches applied? Yes, please run the testsuite with the three patches applied. This should work (famous last words ;)) for the "native" Linux/sparc64 version of hotspot (if not, I would to curious to learn why). To test Linux/sparc64+zero you obviously need the fourth patch applied as well. > The fourth patch just fixes Zero on Linux sparc. If I understand > correctly, Debian's openjdk packages always build the Zero VM even on > targets with a native Hotspot. And without the last patch, the Zero > build fails on Linux sparc. Ah, now I think I get it :) This is for the openjdk-9-jre-zero package, right? 
Does the openjdk-9-jre package provide the "native" (using template interpreter, C1, C2) version of hotspot if possible? Thanks, Erik > Adrian > From thomas.stuefe at gmail.com Mon Jun 19 13:16:32 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 19 Jun 2017 15:16:32 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: Hi Marcus, On Mon, Jun 19, 2017 at 2:30 PM, Marcus Larsson wrote: > Hi Thomas, > > Thanks for your effort on this, great work! > > On 2017-06-18 09:30, Thomas St?fe wrote: > > > -- > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/ > callsite-changes.webrev.00/webrev/ > > > (Although this is a pre-existing issue, it might be a good opportunity to > clean it up now.) > In file loaderConstraints.cpp, class LoaderConstraintTable, for functions > purge_loader_constraints(), add_entry(), check_or_update(), > extend_loader_constraint() and merge_loader_constraints(): > A LogStream is created, but only ever used for print_cr():s, which sort of > defeats its purpose. It would be much simpler just to use the LogTarget > directly. This is actually what's done for the converted > log_ldr_constraint_msg(). > > I found some of those too, but can a sequence of LogStream::print() calls really be replaced by a sequence of LogTarget::print()/LogImpl::print()? LogStream buffers all input till newline is encountered, whereas LogTarget::print (LogImpl::vwrite()->LogTagSet::vwrite()...) will print one log line for each invocation, no? A similar but worse issue is present in sharedPathsMiscInfo.cpp: > Here, a LogStream is used to print incomplete lines without any CR at the > end. These messages will never be logged. Also, the use of a stream here is > unnecessary as well. > > Yes, you are right, and it is also not enclosed in is_enabled(). 
Also, seems this coding is never tested, otherwise the assert in ~LogStreamBase() would have fired because of missing stream flush(). > In compactibleFreeListSpace.cpp: > > 2200 ResourceMark rm; > > It should be safe to remove this ResourceMark. > > Right. > These are the - mostly mechanical - changes to the many callsites. Most of > these changes follow the same pattern. A code sequence using "xxx_stream()" > was split into declaration of LogTarget or Log (usually the former), an > "is_enabled()" query and declaration of LogStream on the stack afterwards. > This follows the principle that the logging itself should be cheap if > unused: the declaration of LogTarget is a noop, because LogTarget has no > members. Only if is_enabled() is true, more expensive things are done, e.g. > initializing the outputStream object and allocating ResourceMarks. > > Note that I encountered some places where logging was done without > enclosing "is_enabled()" query - for instance, see gc/g1/heapRegion.cpp, or > cms/compactibleFreeListSpace.cpp. As far as I understand, in these cases > we actually do print (assemble the line to be printed), only to discard all > that work in the UL backend because logging is not enabled for that log > target. So, we pay quite a bit for nothing. I marked these questionable > code sections with an "// Unconditional write?" comment and we may want to > fix those later in a follow up issue? > > > That sounds good to me. I found more sites where the logging is > unconditional (compactibleFreeListSpace.cpp, parOopClosures.inline.hpp, > g1RemSet.cpp), but we should fix them all as a separate issue. I filed > https://bugs.openjdk.java.net/browse/JDK-8182466. > > Ok! I'll add whatever I find as comments. > > -- > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917- > refactor-ul-logstream/api-changes.webrev.00/webrev/ > > The API changes mostly are simplifications. Before, we had a whole > hierarchy of LogStream... 
classes whose only difference was how the backing > memory was allocated. Because we now always use C-Heap, all this can be > folded together into a single simple LogStream class which uses Cheap as > line memory. Please note that I left "LogStreamNoResourceMark" and > "LogStreamCHeap" for now and typedef'ed them to the one single LogStream > class; I will fix those later as part of this refactoring. > > > Looks good to me, as long as we get rid of the typedefs too eventually. :) > Sure! > > > -- > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/ > logstream-optimization.webrev.00/webrev/ > > > 56 // Prevent operator new for LogStream. 57 // static void* operator new (size_t); 58 // static void* operator new[] (size_t); 59 > > Should these be uncommented? > Oh, right. I have to play a bit more, was unsure whether that coding actually does what I want it to do, prevent ResourceObj::operator new from running. Will test a bit more. > > Thanks again, > Marcus > > Thanks for your review! I'll prepare an updated patch. ..Thomas From marcus.larsson at oracle.com Mon Jun 19 13:37:11 2017 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Mon, 19 Jun 2017 15:37:11 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: On 2017-06-19 15:16, Thomas St?fe wrote: > Hi Marcus, > > On Mon, Jun 19, 2017 at 2:30 PM, Marcus Larsson > > wrote: > > Hi Thomas, > > Thanks for your effort on this, great work! > > > On 2017-06-18 09:30, Thomas St?fe wrote: >> >> -- >> >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/callsite-changes.webrev.00/webrev/ >> >> > > (Although this is a pre-existing issue, it might be a good > opportunity to clean it up now.) 
> In file loaderConstraints.cpp, class LoaderConstraintTable, for > functions purge_loader_constraints(), add_entry(), > check_or_update(), extend_loader_constraint() and > merge_loader_constraints(): > A LogStream is created, but only ever used for print_cr():s, which > sort of defeats its purpose. It would be much simpler just to use > the LogTarget directly. This is actually what's done for the > converted log_ldr_constraint_msg(). > > > I found some of those too, but can a sequence of LogStream::print() > calls really be replaced by a sequence of > LogTarget::print()/LogImpl::print()? > > LogStream buffers all input till newline is encountered, whereas > LogTarget::print (LogImpl::vwrite()->LogTagSet::vwrite()...) will > print one log line for each invocation, no? Indeed they can. There's nothing special going on in LogStreams to make the lines grouped in any way. Each print_cr on a LogStream is equivalent to a LogImpl::write call. > > > A similar but worse issue is present in sharedPathsMiscInfo.cpp: > Here, a LogStream is used to print incomplete lines without any CR > at the end. These messages will never be logged. Also, the use of > a stream here is unnecessary as well. > > > Yes, you are right, and it is also not enclosed in is_enabled(). Also, > seems this coding is never tested, otherwise the assert in > ~LogStreamBase() would have fired because of missing stream flush(). True. The old code would never had a chance to hit the assert due to the resource allocation destructor problem, but now we should be able to hit it with proper testing. > In compactibleFreeListSpace.cpp: > > 2200 ResourceMark rm; > > It should be safe to remove this ResourceMark. > > > Right. > >> These are the - mostly mechanical - changes to the many >> callsites. Most of these changes follow the same pattern. 
A code >> sequence using "xxx_stream()" was split into declaration of >> LogTarget or Log (usually the former), an "is_enabled()" query >> and declaration of LogStream on the stack afterwards. This >> follows the principle that the logging itself should be cheap if >> unused: the declaration of LogTarget is a noop, because LogTarget >> has no members. Only if is_enabled() is true, more expensive >> things are done, e.g. initializing the outputStream object and >> allocating ResourceMarks. >> >> Note that I encountered some places where logging was done >> without enclosing "is_enabled()" query - for instance, see >> gc/g1/heapRegion.cpp, or cms/compactibleFreeListSpace.cpp. As far >> as I understand, in these cases we actually do print (assemble >> the line to be printed), only to discard all that work in the UL >> backend because logging is not enabled for that log target. So, >> we pay quite a bit for nothing. I marked these questionable code >> sections with an "// Unconditional write?" comment and we may >> want to fix those later in a follow up issue? > > That sounds good to me. I found more sites where the logging is > unconditional (compactibleFreeListSpace.cpp, > parOopClosures.inline.hpp, g1RemSet.cpp), but we should fix them > all as a separate issue. I filed > https://bugs.openjdk.java.net/browse/JDK-8182466 > . > > > Ok! I'll add whatever I find as comments. > >> >> -- >> >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/api-changes.webrev.00/webrev/ >> >> >> The API changes mostly are simplifications. Before, we had a >> whole hierarchy of LogStream... classes whose only difference was >> how the backing memory was allocated. Because we now always use >> C-Heap, all this can be folded together into a single simple >> LogStream class which uses Cheap as line memory. 
Please note that >> I left "LogStreamNoResourceMark" and "LogStreamCHeap" for now and >> typedef'ed them to the one single LogStream class; I will fix >> those later as part of this refactoring. > > Looks good to me, as long as we get rid of the typedefs too > eventually. :) > > > Sure! > > >> >> -- >> >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/logstream-optimization.webrev.00/webrev/ >> > > 56 // Prevent operator new for LogStream. > 57 // static void* operator new (size_t); > 58 // static void* operator new[] (size_t); > 59 > > Should these be uncommented? > > > Oh, right. I have to play a bit more, was unsure whether that coding > actually does what I want it to do, prevent ResourceObj::operator new > from running. Will test a bit more. Alright, sounds good. Marcus > > Thanks again, > Marcus > > > Thanks for your review! I'll prepare an updated patch. > > ..Thomas > From thomas.stuefe at gmail.com Mon Jun 19 13:47:44 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 19 Jun 2017 15:47:44 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: On Mon, Jun 19, 2017 at 3:37 PM, Marcus Larsson wrote: > > On 2017-06-19 15:16, Thomas St?fe wrote: > > Hi Marcus, > > On Mon, Jun 19, 2017 at 2:30 PM, Marcus Larsson > wrote: > >> Hi Thomas, >> >> Thanks for your effort on this, great work! >> >> On 2017-06-18 09:30, Thomas St?fe wrote: >> >> >> -- >> >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor- >> ul-logstream/callsite-changes.webrev.00/webrev/ >> >> >> (Although this is a pre-existing issue, it might be a good opportunity to >> clean it up now.) 
>> In file loaderConstraints.cpp, class LoaderConstraintTable, for functions >> purge_loader_constraints(), add_entry(), check_or_update(), >> extend_loader_constraint() and merge_loader_constraints(): >> A LogStream is created, but only ever used for print_cr():s, which sort >> of defeats its purpose. It would be much simpler just to use the LogTarget >> directly. This is actually what's done for the converted >> log_ldr_constraint_msg(). >> >> > I found some of those too, but can a sequence of LogStream::print() calls > really be replaced by a sequence of LogTarget::print()/LogImpl::print()? > > LogStream buffers all input till newline is encountered, whereas > LogTarget::print (LogImpl::vwrite()->LogTagSet::vwrite()...) will print > one log line for each invocation, no? > > > Indeed they can. There's nothing special going on in LogStreams to make > the lines grouped in any way. Each print_cr on a LogStream is equivalent to > a LogImpl::write call. > > Yes, but what about print() (without cr?) Maybe I am just slow... A sequence of Logstream ls; ls.print("<"); ls.print("br"); ls.print(">"); ls.cr(); is really equivalent to Log(,,) log; log.info("<"); log.info("br"); log.info(">\n"); ? I would have thought that only LogStream::cr() will cause the stream to be flushed, hence it will write "
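[Editor's note: the sharedPathsMiscInfo.cpp pitfall mentioned here — print() without a terminating cr() leaves text stuck in the line buffer — can be sketched as follows. BufferedLogStream is a hypothetical mock; the refactored LogStream reportedly asserts on unflushed content in its destructor, whereas this sketch silently drops it.]

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Line-buffered stream: only cr() pushes a completed line to the sink.
class BufferedLogStream {
  std::vector<std::string>& _sink;
  std::string _buf;
public:
  explicit BufferedLogStream(std::vector<std::string>& sink) : _sink(sink) {}
  void print(const std::string& s) { _buf += s; }
  void cr() { _sink.push_back(_buf); _buf.clear(); }
  ~BufferedLogStream() {
    // The real fix asserts on !_buf.empty() here; this sketch just
    // drops the partial line, as the pre-fix code effectively did.
  }
};

// Returns how many lines actually reached the sink. Forgetting the final
// cr() means the message is never logged at all.
std::size_t lines_logged(bool forget_cr) {
  std::vector<std::string> sink;
  {
    BufferedLogStream ls(sink);
    ls.print("expecting top level class list");
    if (!forget_cr) {
      ls.cr();
    }
  }
  return sink.size();
}
```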
", whereas all log.info() (or LogTarget::print()) calls would be logged as separate lines? > > A similar but worse issue is present in sharedPathsMiscInfo.cpp: >> Here, a LogStream is used to print incomplete lines without any CR at the >> end. These messages will never be logged. Also, the use of a stream here is >> unnecessary as well. >> >> > Yes, you are right, and it is also not enclosed in is_enabled(). Also, > seems this coding is never tested, otherwise the assert in ~LogStreamBase() > would have fired because of missing stream flush(). > > > True. The old code would never had a chance to hit the assert due to the > resource allocation destructor problem, but now we should be able to hit it > with proper testing. > > > >> In compactibleFreeListSpace.cpp: >> >> 2200 ResourceMark rm; >> >> It should be safe to remove this ResourceMark. >> >> > Right. > >> These are the - mostly mechanical - changes to the many callsites. Most >> of these changes follow the same pattern. A code sequence using >> "xxx_stream()" was split into declaration of LogTarget or Log (usually the >> former), an "is_enabled()" query and declaration of LogStream on the stack >> afterwards. This follows the principle that the logging itself should be >> cheap if unused: the declaration of LogTarget is a noop, because LogTarget >> has no members. Only if is_enabled() is true, more expensive things are >> done, e.g. initializing the outputStream object and allocating >> ResourceMarks. >> >> Note that I encountered some places where logging was done without >> enclosing "is_enabled()" query - for instance, see gc/g1/heapRegion.cpp, or >> cms/compactibleFreeListSpace.cpp. As far as I understand, in these cases >> we actually do print (assemble the line to be printed), only to discard all >> that work in the UL backend because logging is not enabled for that log >> target. So, we pay quite a bit for nothing. I marked these questionable >> code sections with an "// Unconditional write?" 
comment and we may want to >> fix those later in a follow up issue? >> >> >> That sounds good to me. I found more sites where the logging is >> unconditional (compactibleFreeListSpace.cpp, parOopClosures.inline.hpp, >> g1RemSet.cpp), but we should fix them all as a separate issue. I filed >> https://bugs.openjdk.java.net/browse/JDK-8182466. >> >> > Ok! I'll add whatever I find as comments. > > >> >> -- >> >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor- >> ul-logstream/api-changes.webrev.00/webrev/ >> >> The API changes mostly are simplifications. Before, we had a whole >> hierarchy of LogStream... classes whose only difference was how the backing >> memory was allocated. Because we now always use C-Heap, all this can be >> folded together into a single simple LogStream class which uses Cheap as >> line memory. Please note that I left "LogStreamNoResourceMark" and >> "LogStreamCHeap" for now and typedef'ed them to the one single LogStream >> class; I will fix those later as part of this refactoring. >> >> >> Looks good to me, as long as we get rid of the typedefs too eventually. :) >> > > Sure! > > >> >> >> -- >> >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor- >> ul-logstream/logstream-optimization.webrev.00/webrev/ >> >> >> 56 // Prevent operator new for LogStream. 57 // static void* operator new (size_t); 58 // static void* operator new[] (size_t); 59 >> >> Should these be uncommented? >> > > Oh, right. I have to play a bit more, was unsure whether that coding > actually does what I want it to do, prevent ResourceObj::operator new from > running. Will test a bit more. > > > Alright, sounds good. > > Marcus > > > >> >> Thanks again, >> Marcus >> >> > Thanks for your review! I'll prepare an updated patch. 
> > ..Thomas > > > From marcus.larsson at oracle.com Mon Jun 19 13:49:29 2017 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Mon, 19 Jun 2017 15:49:29 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: <42e543ac-e4b1-5dca-fa13-55f68e147f59@oracle.com> On 2017-06-19 15:47, Thomas St?fe wrote: > > > On Mon, Jun 19, 2017 at 3:37 PM, Marcus Larsson > > wrote: > > > On 2017-06-19 15:16, Thomas St?fe wrote: >> Hi Marcus, >> >> On Mon, Jun 19, 2017 at 2:30 PM, Marcus Larsson >> > wrote: >> >> Hi Thomas, >> >> Thanks for your effort on this, great work! >> >> >> On 2017-06-18 09:30, Thomas St?fe wrote: >>> >>> -- >>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/callsite-changes.webrev.00/webrev/ >>> >>> >> >> (Although this is a pre-existing issue, it might be a good >> opportunity to clean it up now.) >> In file loaderConstraints.cpp, class LoaderConstraintTable, >> for functions purge_loader_constraints(), add_entry(), >> check_or_update(), extend_loader_constraint() and >> merge_loader_constraints(): >> A LogStream is created, but only ever used for print_cr():s, >> which sort of defeats its purpose. It would be much simpler >> just to use the LogTarget directly. This is actually what's >> done for the converted log_ldr_constraint_msg(). >> >> >> I found some of those too, but can a sequence of >> LogStream::print() calls really be replaced by a sequence of >> LogTarget::print()/LogImpl::print()? >> >> LogStream buffers all input till newline is encountered, whereas >> LogTarget::print (LogImpl::vwrite()->LogTagSet::vwrite()...) will >> print one log line for each invocation, no? > > Indeed they can. There's nothing special going on in LogStreams to > make the lines grouped in any way. Each print_cr on a LogStream is > equivalent to a LogImpl::write call. > > > Yes, but what about print() (without cr?) 
Maybe I am just slow... > > A sequence of > > Logstream ls; > ls.print("<"); > ls.print("br"); > ls.print(">"); > ls.cr (); > > is really equivalent to > > Log(,,) log; > log.info ("<"); > log.info ("br"); > log.info (">\n"); > > ? > > I would have thought that only LogStream::cr() will cause the stream > to be flushed, hence it will write "
", whereas all log.info > () (or LogTarget::print()) calls would be logged as > separate lines? No you're right. I was just talking about cases where there's *only* print_cr() calls, not when there are print() calls mixed in. > > >> >> >> A similar but worse issue is present in sharedPathsMiscInfo.cpp: >> Here, a LogStream is used to print incomplete lines without >> any CR at the end. These messages will never be logged. Also, >> the use of a stream here is unnecessary as well. >> >> >> Yes, you are right, and it is also not enclosed in is_enabled(). >> Also, seems this coding is never tested, otherwise the assert in >> ~LogStreamBase() would have fired because of missing stream flush(). > > True. The old code would never had a chance to hit the assert due > to the resource allocation destructor problem, but now we should > be able to hit it with proper testing. > >> In compactibleFreeListSpace.cpp: >> >> 2200 ResourceMark rm; >> >> It should be safe to remove this ResourceMark. >> >> >> Right. >> >>> These are the - mostly mechanical - changes to the many >>> callsites. Most of these changes follow the same pattern. A >>> code sequence using "xxx_stream()" was split into >>> declaration of LogTarget or Log (usually the former), an >>> "is_enabled()" query and declaration of LogStream on the >>> stack afterwards. This follows the principle that the >>> logging itself should be cheap if unused: the declaration of >>> LogTarget is a noop, because LogTarget has no members. Only >>> if is_enabled() is true, more expensive things are done, >>> e.g. initializing the outputStream object and allocating >>> ResourceMarks. >>> >>> Note that I encountered some places where logging was done >>> without enclosing "is_enabled()" query - for instance, see >>> gc/g1/heapRegion.cpp, or cms/compactibleFreeListSpace.cpp. 
>>> As far as I understand, in these cases we actually do print >>> (assemble the line to be printed), only to discard all that >>> work in the UL backend because logging is not enabled for >>> that log target. So, we pay quite a bit for nothing. I >>> marked these questionable code sections with an "// >>> Unconditional write?" comment and we may want to fix those >>> later in a follow up issue? >> >> That sounds good to me. I found more sites where the logging >> is unconditional (compactibleFreeListSpace.cpp, >> parOopClosures.inline.hpp, g1RemSet.cpp), but we should fix >> them all as a separate issue. I filed >> https://bugs.openjdk.java.net/browse/JDK-8182466 >> . >> >> >> Ok! I'll add whatever I find as comments. >> >>> >>> -- >>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/api-changes.webrev.00/webrev/ >>> >>> >>> The API changes mostly are simplifications. Before, we had a >>> whole hierarchy of LogStream... classes whose only >>> difference was how the backing memory was allocated. Because >>> we now always use C-Heap, all this can be folded together >>> into a single simple LogStream class which uses Cheap as >>> line memory. Please note that I left >>> "LogStreamNoResourceMark" and "LogStreamCHeap" for now and >>> typedef'ed them to the one single LogStream class; I will >>> fix those later as part of this refactoring. >> >> Looks good to me, as long as we get rid of the typedefs too >> eventually. :) >> >> >> Sure! >> >> >>> >>> -- >>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/logstream-optimization.webrev.00/webrev/ >>> >> >> 56 // Prevent operator new for LogStream. >> 57 // static void* operator new (size_t); >> 58 // static void* operator new[] (size_t); >> 59 >> >> Should these be uncommented? >> >> >> Oh, right. I have to play a bit more, was unsure whether that >> coding actually does what I want it to do, prevent >> ResourceObj::operator new from running. Will test a bit more. 
> > Alright, sounds good. > > Marcus > >> >> Thanks again, >> Marcus >> >> >> Thanks for your review! I'll prepare an updated patch. >> >> ..Thomas >> > > From thomas.stuefe at gmail.com Mon Jun 19 13:58:02 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 19 Jun 2017 15:58:02 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: <42e543ac-e4b1-5dca-fa13-55f68e147f59@oracle.com> References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> <42e543ac-e4b1-5dca-fa13-55f68e147f59@oracle.com> Message-ID: On Mon, Jun 19, 2017 at 3:49 PM, Marcus Larsson wrote: > > > On 2017-06-19 15:47, Thomas St?fe wrote: > > > > On Mon, Jun 19, 2017 at 3:37 PM, Marcus Larsson > wrote: > >> >> On 2017-06-19 15:16, Thomas St?fe wrote: >> >> Hi Marcus, >> >> On Mon, Jun 19, 2017 at 2:30 PM, Marcus Larsson < >> marcus.larsson at oracle.com> wrote: >> >>> Hi Thomas, >>> >>> Thanks for your effort on this, great work! >>> >>> On 2017-06-18 09:30, Thomas St?fe wrote: >>> >>> >>> -- >>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor- >>> ul-logstream/callsite-changes.webrev.00/webrev/ >>> >>> >>> (Although this is a pre-existing issue, it might be a good opportunity >>> to clean it up now.) >>> In file loaderConstraints.cpp, class LoaderConstraintTable, for >>> functions purge_loader_constraints(), add_entry(), check_or_update(), >>> extend_loader_constraint() and merge_loader_constraints(): >>> A LogStream is created, but only ever used for print_cr():s, which sort >>> of defeats its purpose. It would be much simpler just to use the LogTarget >>> directly. This is actually what's done for the converted >>> log_ldr_constraint_msg(). >>> >>> >> I found some of those too, but can a sequence of LogStream::print() calls >> really be replaced by a sequence of LogTarget::print()/LogImpl::print()? 
>> >> LogStream buffers all input till newline is encountered, whereas >> LogTarget::print (LogImpl::vwrite()->LogTagSet::vwrite()...) will print >> one log line for each invocation, no? >> >> >> Indeed they can. There's nothing special going on in LogStreams to make >> the lines grouped in any way. Each print_cr on a LogStream is equivalent to >> a LogImpl::write call. >> >> > Yes, but what about print() (without cr?) Maybe I am just slow... > > A sequence of > > Logstream ls; > ls.print("<"); > ls.print("br"); > ls.print(">"); > ls.cr(); > > is really equivalent to > > Log(,,) log; > log.info("<"); > log.info("br"); > log.info(">\n"); > > ? > > I would have thought that only LogStream::cr() will cause the stream to be > flushed, hence it will write "
", whereas all log.info() (or > LogTarget::print()) calls would be logged as separate lines? > > > No you're right. I was just talking about cases where there's *only* > print_cr() calls, not when there are print() calls mixed in. > > > Ah, ok! Maybe this is something to ponder. I think your original idea was that LogStream's only purpose was just to be plugged into outputStream* compatible pre-existing print functions, right? But it is so darn useful, e.g. for assembling lines piece-wise. So I think people use it way more than expected. Maybe somewhere in the future, LogStream could be the only interface for printing to UL... ? This would certainly simplify the UL API and make our (SAPs) developers very happy, who constantly complain about the problem that it is difficult to grep for UL log sites, because they can come in so many forms. > > > >> >> A similar but worse issue is present in sharedPathsMiscInfo.cpp: >>> Here, a LogStream is used to print incomplete lines without any CR at >>> the end. These messages will never be logged. Also, the use of a stream >>> here is unnecessary as well. >>> >>> >> Yes, you are right, and it is also not enclosed in is_enabled(). Also, >> seems this coding is never tested, otherwise the assert in ~LogStreamBase() >> would have fired because of missing stream flush(). >> >> >> True. The old code would never had a chance to hit the assert due to the >> resource allocation destructor problem, but now we should be able to hit it >> with proper testing. >> >> >> >>> In compactibleFreeListSpace.cpp: >>> >>> 2200 ResourceMark rm; >>> >>> It should be safe to remove this ResourceMark. >>> >>> >> Right. >> >>> These are the - mostly mechanical - changes to the many callsites. Most >>> of these changes follow the same pattern. A code sequence using >>> "xxx_stream()" was split into declaration of LogTarget or Log (usually the >>> former), an "is_enabled()" query and declaration of LogStream on the stack >>> afterwards. 
This follows the principle that the logging itself should be >>> cheap if unused: the declaration of LogTarget is a noop, because LogTarget >>> has no members. Only if is_enabled() is true, more expensive things are >>> done, e.g. initializing the outputStream object and allocating >>> ResourceMarks. >>> >>> Note that I encountered some places where logging was done without >>> enclosing "is_enabled()" query - for instance, see gc/g1/heapRegion.cpp, or >>> cms/compactibleFreeListSpace.cpp. As far as I understand, in these >>> cases we actually do print (assemble the line to be printed), only to >>> discard all that work in the UL backend because logging is not enabled for >>> that log target. So, we pay quite a bit for nothing. I marked these >>> questionable code sections with an "// Unconditional write?" comment and we >>> may want to fix those later in a follow up issue? >>> >>> >>> That sounds good to me. I found more sites where the logging is >>> unconditional (compactibleFreeListSpace.cpp, parOopClosures.inline.hpp, >>> g1RemSet.cpp), but we should fix them all as a separate issue. I filed >>> https://bugs.openjdk.java.net/browse/JDK-8182466. >>> >>> >> Ok! I'll add whatever I find as comments. >> >> >>> >>> -- >>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor- >>> ul-logstream/api-changes.webrev.00/webrev/ >>> >>> The API changes mostly are simplifications. Before, we had a whole >>> hierarchy of LogStream... classes whose only difference was how the backing >>> memory was allocated. Because we now always use C-Heap, all this can be >>> folded together into a single simple LogStream class which uses Cheap as >>> line memory. Please note that I left "LogStreamNoResourceMark" and >>> "LogStreamCHeap" for now and typedef'ed them to the one single LogStream >>> class; I will fix those later as part of this refactoring. >>> >>> >>> Looks good to me, as long as we get rid of the typedefs too eventually. >>> :) >>> >> >> Sure! 
>> >> >>> >>> >>> -- >>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor- >>> ul-logstream/logstream-optimization.webrev.00/webrev/ >>> >>> >>> 56 // Prevent operator new for LogStream. 57 // static void* operator new (size_t); 58 // static void* operator new[] (size_t); 59 >>> >>> Should these be uncommented? >>> >> >> Oh, right. I have to play a bit more, was unsure whether that coding >> actually does what I want it to do, prevent ResourceObj::operator new from >> running. Will test a bit more. >> >> >> Alright, sounds good. >> >> Marcus >> >> >> >>> >>> Thanks again, >>> Marcus >>> >>> >> Thanks for your review! I'll prepare an updated patch. >> >> ..Thomas >> >> >> > > From mbrandy at linux.vnet.ibm.com Mon Jun 19 16:27:36 2017 From: mbrandy at linux.vnet.ibm.com (Matthew Brandyberry) Date: Mon, 19 Jun 2017 11:27:36 -0500 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 Message-ID: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> This is a PPC-specific hotspot optimization that leverages the mtfprd/mffprd instructions for for movement between general purpose and floating point registers (rather than through memory). It yields a ~35% improvement measured via a microbenchmark. Please review: Bug: https://bugs.openjdk.java.net/browse/JDK-8181809 Webrev: http://cr.openjdk.java.net/~gromero/8181809/v1/ Thanks, Matt From mbrandy at linux.vnet.ibm.com Mon Jun 19 16:30:28 2017 From: mbrandy at linux.vnet.ibm.com (Matthew Brandyberry) Date: Mon, 19 Jun 2017 11:30:28 -0500 Subject: RFR(S) JDK-8181810 PPC64: Leverage extrdi for bitfield extract Message-ID: <2f852cff-938e-b383-cf2b-66f97fe652ff@linux.vnet.ibm.com> This is a PPC-specific hotspot optimization that leverages the mtfprd/mffprd instructions for for movement between general purpose and floating point registers (rather than through memory). It yields a ~35% improvement measured via a microbenchmark. 
Please review: Bug: https://bugs.openjdk.java.net/browse/JDK-8181809 Webrev: http://cr.openjdk.java.net/~gromero/8181809/v1/ Thanks, Matt From mbrandy at linux.vnet.ibm.com Mon Jun 19 16:35:23 2017 From: mbrandy at linux.vnet.ibm.com (Matthew Brandyberry) Date: Mon, 19 Jun 2017 11:35:23 -0500 Subject: RFR(S) JDK-8181810 PPC64: Leverage extrdi for bitfield extract In-Reply-To: <2f852cff-938e-b383-cf2b-66f97fe652ff@linux.vnet.ibm.com> References: <2f852cff-938e-b383-cf2b-66f97fe652ff@linux.vnet.ibm.com> Message-ID: <49e34c46-6207-7766-d339-05f1ce6a0eb9@linux.vnet.ibm.com> Apologies for the bad cut-and-paste job.. the body of this review request should read as follows: This is a PPC-specific hotspot optimization that leverages the extrdi instruction for bitfield extract operations (shift-right and mask-with-and). It yields a ~25% improvement measured via a microbenchmark. Please review: Bug: https://bugs.openjdk.java.net/browse/JDK-8181810 Webrev: http://cr.openjdk.java.net/~gromero/8181810/v1/ Thanks, Matt On 6/19/17 11:30 AM, Matthew Brandyberry wrote: > This is a PPC-specific hotspot optimization that leverages the > mtfprd/mffprd > instructions for for movement between general purpose and floating point > registers (rather than through memory). It yields a ~35% improvement > measured > via a microbenchmark. > > Please review: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8181809 > Webrev: http://cr.openjdk.java.net/~gromero/8181809/v1/ > > Thanks, > Matt > From gromero at linux.vnet.ibm.com Mon Jun 19 18:17:18 2017 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Mon, 19 Jun 2017 15:17:18 -0300 Subject: Jtreg JVMCI test failures In-Reply-To: References: <593FFA43.5030005@linux.vnet.ibm.com> Message-ID: <5948152E.9090104@linux.vnet.ibm.com> Hi Christian, Sorry for the delay, it was national holiday last week. Thanks a lot for confirming all was/is fine. Indeed I found that updating the my jtreg with last build found in cloudbees fixed the issue [1]. 
Best regards, Gustavo [1] https://adopt-openjdk.ci.cloudbees.com/job/jtreg/lastSuccessfulBuild/artifact/ On 13-06-2017 12:32, Christian Thalinger wrote: > >> On Jun 13, 2017, at 7:44 AM, Gustavo Romero wrote: >> >> Hi, >> >> I'm trying to run the jtreg JVMCI tests (both standalone and using >> 'make test-hotspot-jtreg') but I'm getting something like that in all of them: >> >> -------------------------------------------------- >> TEST: compiler/jvmci/compilerToVM/AllocateCompileIdTest.java >> TEST JDK: /home/gromero/hg/jdk9/dev/build/linux-x86_64-normal-server-release/images/jdk >> >> ACTION: build -- Not run. Test running... >> REASON: User specified action: run build jdk.internal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox >> TIME: rnal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox seconds >> messages: >> command: build jdk.internal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox >> reason: User specified action: run build jdk.internal.vm.ci/jdk.vm.ci.hotspot.CompilerToVMHelper sun.hotspot.WhiteBox >> >> TEST RESULT: Error. can't find module jdk.internal.vm.ci in test directory or libraries >> -------------------------------------------------- >> >> It's on x64. I've tried tips of jdk9/dev, jdk9/jdk9, and jdk10/hs. >> >> Any clue on that? > > That?s odd. It works for me with jdk9: > > $ jtreg -verbose:summary -noreport -jdk:$PWD/build/macosx-x86_64-normal-server-release/images/jdk hotspot/test/compiler/jvmci/compilerToVM/AllocateCompileIdTest.java > Passed: compiler/jvmci/compilerToVM/AllocateCompileIdTest.java > Test results: passed: 1 > > Do you get this? 
> > $ ./build/macosx-x86_64-normal-server-release/images/jdk/bin/java --list-modules | grep jdk.internal.vm.ci > jdk.internal.vm.ci at 9-internal > >> >> >> Thanks, >> Gustavo >> > > From coleen.phillimore at oracle.com Mon Jun 19 20:56:50 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 19 Jun 2017 16:56:50 -0400 Subject: RFR(XL): 8181449: Fix debug.hpp / globalDefinitions.hpp dependency inversion In-Reply-To: References: Message-ID: <47b9fe06-71a0-ac46-4ae5-a3c3e6c75685@oracle.com> On 6/16/17 8:33 PM, Kim Barrett wrote: > Please review this refactoring of debug.hpp and globalDefinitions.hpp > so that debug.hpp no longer includes globalDefinitions.hpp. Instead, > the include dependency is now in the other direction. Among other > things, this permits the use of the assert macros by inline functions > defined in globalDefinitions.hpp. > > There are a few functions declared toward the end of debug.hpp that > now seem somewhat misplaced there. I'm leaving them there for now, > but will file a new CR to figure out a better place for them, possibly > in vmError. There are a number of additional cleanups for dead code > and the like that I'll be filing as followups; this change is already > rather large and I didn't want to add more stuff to it. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8181449 > > Testing: > jprt > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.00/ > > The full webrev is somewhat large. However, much of the bulk involves > either adding #includes to files or moving code from one place to > another without changing it. To simplify reviewing the changes, I've > broken it down into a sequence of patches, each associated with a > particular bit of the refactoring. The full change is equivalent to > applying these patches in the given order. (Note: I don't know if > applying a subset gives a working repository.) > > (1) http://cr.openjdk.java.net/~kbarrett/8181449/jvm_h/ > a. 
In preparation for removing the #include of jvm.h from debug.hpp > (see move_format_buffer webrev), ensured all files that contain > references to jio_printf variants include jvm.h. This mostly involved > adding a #include to lots of files. > > b. For a few header files that referenced jio_printf variants, moved > the function definition from the .hpp to the corresponding .cpp, and > added #include of jvm.h to the .cpp. > - macroAssembler_sparc.[ch]pp > - macroAssembler_x86.[ch]pp > - macroAssembler_aarch64.[ch]pp Well that was boring. I assume that these files adding #include jvm.h actually got an error without the include. > > (2) http://cr.openjdk.java.net/~kbarrett/8181449/move_format_buffer/ > a. Moved FormatBuffer and related stuff from debug.[ch]pp to new > formatBuffer.[ch]pp, and updated users to #include the new files. > This includes moving the #include of jvm.h, which is no longer needed > by debug.hpp. > > b. Made the #include of debug.hpp explicit when already otherwise > modifying a file that uses assert-like macros, rather than depending > on indirect inclusion of that file. Good. > > (3) http://cr.openjdk.java.net/~kbarrett/8181449/move_pns/ > a. Moved print_native_stack to VMError class. > b. Removed unused and undefined pd_obfuscate_location. > c. Simplified #ifdef PRODUCT guard in ps(). Why can't print_native_stack be in debug.cpp? I see. Actually it might be better in os.cpp than vmError.cpp. > > (4) http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/ > > a. Moved / combined definitions of BREAKPOINT macro from > globalDefinitions_*.hpp to new breakpoint.hpp. > > b. Deleted all definitions of unused DEBUG_EXCEPTION macro. > > c. Moved gcc-specific ATTRIBUTE_PRINTF, pragma macros, and friends > from globalDefinitions_gcc.hpp to new compilerWarnings.hpp. Also > moved the default definitions for those macros from > globalDefinitions.hpp to compilerWarnings.hpp. > > d. 
Added TARGET_COMPILER_HEADER[_INLINE] macros, similar to the > CPU/OS/OS_CPU/_HEADER[_INLINE] macros, for including files based on > new INCLUDE_SUFFIX_TARGET_COMPILER macro provided by the build system. Ok. > > (5) http://cr.openjdk.java.net/~kbarrett/8181449/flip_depend/ > > a. Changed globalDefinitions.hpp to #include debug.hpp, rather than > the other way around. > > b. Changed globals.hpp to #include globalDefinitions.hpp rather than > debug.hpp, since it actually needs to former and not the latter. > > c. Changed jvmci_globals.cpp to #include jvm.h, since it's no longer > being indirectly included via an indirect include of debug.hpp that > was including globalDefinitions.hpp. > > d. Moved printf-style formatters earlier in globalDefinitions.hpp, so > they can be used in assert messages in this file. > > e. In globalDefinitions.hpp, changed some #ifdef ASSERT blocks of > conditional calls to basic_fatal to instead use assert. While doing > so, made the error messages more informative. Also looks good. > > In addition to globals.hpp, there are about 90 files that #include > debug.hpp but not globalDefinitions.hpp. The few changes mentioned > were sufficient to fix missing indirect includes resulting from > debug.hpp no longer including globalDefinitions.hpp. There could be > additional problems with platforms not supported by Oracle though. > > There are also about 40 files which directly include debug.hpp but > don't appear to use any of the assertion macros. > Hm, might be worth another round to clean these up too. These changes look good though. Thank you for the cleanup!! It wasn't as large as I expected (at least this round). 
Coleen From kim.barrett at oracle.com Tue Jun 20 01:38:15 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 19 Jun 2017 21:38:15 -0400 Subject: RFR(XL): 8181449: Fix debug.hpp / globalDefinitions.hpp dependency inversion In-Reply-To: <47b9fe06-71a0-ac46-4ae5-a3c3e6c75685@oracle.com> References: <47b9fe06-71a0-ac46-4ae5-a3c3e6c75685@oracle.com> Message-ID: <743711A6-AF26-41F0-9111-8ABD70A6D472@oracle.com> > On Jun 19, 2017, at 4:56 PM, coleen.phillimore at oracle.com wrote: > On 6/16/17 8:33 PM, Kim Barrett wrote: >> Please review this refactoring of debug.hpp and globalDefinitions.hpp >> so that debug.hpp no longer includes globalDefinitions.hpp. >> >> [?] >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8181449 >> >> Testing: >> jprt >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.00/ >> >> The full webrev is somewhat large. However, much of the bulk involves >> either adding #includes to files or moving code from one place to >> another without changing it. To simplify reviewing the changes, I've >> broken it down into a sequence of patches, each associated with a >> particular bit of the refactoring. The full change is equivalent to >> applying these patches in the given order. (Note: I don't know if >> applying a subset gives a working repository.) >> >> (1) http://cr.openjdk.java.net/~kbarrett/8181449/jvm_h/ >> a. In preparation for removing the #include of jvm.h from debug.hpp >> (see move_format_buffer webrev), ensured all files that contain >> references to jio_printf variants include jvm.h. This mostly involved >> adding a #include to lots of files. >> >> b. For a few header files that referenced jio_printf variants, moved >> the function definition from the .hpp to the corresponding .cpp, and >> added #include of jvm.h to the .cpp. >> - macroAssembler_sparc.[ch]pp >> - macroAssembler_x86.[ch]pp >> - macroAssembler_aarch64.[ch]pp > > Well that was boring. 
I assume that these files adding #include jvm.h actually got an error without the include. Only in an open build; Oracle's closed code includes jvm.h frequently enough via other (closed) headers that our supported platforms only needed a very small number (like 1-3, I forget exactly) additional includes. Fortunately, jprt includes some pure open builds in its testing. I eventually got tired of adding includes and re-running jprt, and just mass added the includes. >> [?] >> (3) http://cr.openjdk.java.net/~kbarrett/8181449/move_pns/ >> a. Moved print_native_stack to VMError class. >> b. Removed unused and undefined pd_obfuscate_location. >> c. Simplified #ifdef PRODUCT guard in ps(). > > Why can't print_native_stack be in debug.cpp? I see. Actually it might be better in os.cpp than vmError.cpp. print_native_stack seems to be the "portable" and less verbose alternative to os::platform_print_native_stack. VMError::report selects between them based on requested verbosity. So VMError seems like a better place than os. >> [?] >> >> There are also about 40 files which directly include debug.hpp but >> don't appear to use any of the assertion macros. >> > Hm, might be worth another round to clean these up too. I?m hoping you are not asking for that to be added to this change. > These changes look good though. Thank you for the cleanup!! It wasn't as large as I expected (at least this round). Thanks. From stefan.karlsson at oracle.com Tue Jun 20 09:49:50 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 20 Jun 2017 11:49:50 +0200 Subject: RFR(XL): 8181449: Fix debug.hpp / globalDefinitions.hpp dependency inversion In-Reply-To: References: Message-ID: Hi Kim, On 2017-06-17 02:33, Kim Barrett wrote: > Please review this refactoring of debug.hpp and globalDefinitions.hpp > so that debug.hpp no longer includes globalDefinitions.hpp. Instead, > the include dependency is now in the other direction. 
Among other > things, this permits the use of the assert macros by inline functions > defined in globalDefinitions.hpp. > > There are a few functions declared toward the end of debug.hpp that > now seem somewhat misplaced there. I'm leaving them there for now, > but will file a new CR to figure out a better place for them, possibly > in vmError. There are a number of additional cleanups for dead code > and the like that I'll be filing as followups; this change is already > rather large and I didn't want to add more stuff to it. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8181449 > > Testing: > jprt > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.00/ > > The full webrev is somewhat large. However, much of the bulk involves > either adding #includes to files or moving code from one place to > another without changing it. To simplify reviewing the changes, I've > broken it down into a sequence of patches, each associated with a > particular bit of the refactoring. The full change is equivalent to > applying these patches in the given order. (Note: I don't know if > applying a subset gives a working repository.) > > (1) http://cr.openjdk.java.net/~kbarrett/8181449/jvm_h/ > a. In preparation for removing the #include of jvm.h from debug.hpp > (see move_format_buffer webrev), ensured all files that contain > references to jio_printf variants include jvm.h. This mostly involved > adding a #include to lots of files. > > b. For a few header files that referenced jio_printf variants, moved > the function definition from the .hpp to the corresponding .cpp, and > added #include of jvm.h to the .cpp. > - macroAssembler_sparc.[ch]pp > - macroAssembler_x86.[ch]pp > - macroAssembler_aarch64.[ch]pp OK > > (2) http://cr.openjdk.java.net/~kbarrett/8181449/move_format_buffer/ > a. Moved FormatBuffer and related stuff from debug.[ch]pp to new > formatBuffer.[ch]pp, and updated users to #include the new files. 
> This includes moving the #include of jvm.h, which is no longer needed > by debug.hpp. > > b. Made the #include of debug.hpp explicit when already otherwise > modifying a file that uses assert-like macros, rather than depending > on indirect inclusion of that file. The following has now been moved to formatBuffer.hpp: 116 // Used to format messages. 117 typedef FormatBuffer<> err_msg; but formatBuffer.hpp is not explicitly included in all files using err_msg. > > (3) http://cr.openjdk.java.net/~kbarrett/8181449/move_pns/ > a. Moved print_native_stack to VMError class. > b. Removed unused and undefined pd_obfuscate_location. > c. Simplified #ifdef PRODUCT guard in ps(). You removed the call to p->trace_stack(). Was that intentional? if (p->has_last_Java_frame()) { // If the last_Java_fp is set we are in C land and // can call the standard stack_trace function. -#ifdef PRODUCT p->print_stack(); } else { +#ifdef PRODUCT tty->print_cr("Cannot find the last Java frame, printing stack disabled."); #else // !PRODUCT - p->trace_stack(); - } else { > > (4) http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/ > > a. Moved / combined definitions of BREAKPOINT macro from > globalDefinitions_*.hpp to new breakpoint.hpp. > > b. Deleted all definitions of unused DEBUG_EXCEPTION macro. > > c. Moved gcc-specific ATTRIBUTE_PRINTF, pragma macros, and friends > from globalDefinitions_gcc.hpp to new compilerWarnings.hpp. Also > moved the default definitions for those macros from > globalDefinitions.hpp to compilerWarnings.hpp. > > d. Added TARGET_COMPILER_HEADER[_INLINE] macros, similar to the > CPU/OS/OS_CPU/_HEADER[_INLINE] macros, for including files based on > new INCLUDE_SUFFIX_TARGET_COMPILER macro provided by the build system. 
====================================================================== http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/globalDefinitions.hpp.frames.html sort order?: 28 #include "utilities/macros.hpp" 29 #include "utilities/compilerWarnings.hpp" or is this intentional? I see that compilerWarnings.hpp has the comment: 64 // Defaults when not defined for the TARGET_COMPILER_xxx. which seems to suggest that macros.hpp need to be included before compilerWarnings.hpp. This used to work when globalDefinitions.hpp dispatched to globalDefinitions_.hpp, but now this seems fragile. debug.hpp even includes compilerWarnings.hpp before macros.hpp, so it seems like the following attribute is not used when debug.hpp is included!: -#ifndef ATTRIBUTE_PRINTF -#define ATTRIBUTE_PRINTF(fmt,vargs) __attribute__((format(printf, fmt, vargs))) -#endif That seems like a bug to me. ====================================================================== http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/globalDefinitions_gcc.hpp.udiff.html Maybe also get rid of the following line: //---------------------------------------------------------------------------------------------------- -// Debugging or get rid of the following stray new line so that the code in the different globalDefinitions files are more consistent. 
====================================================================== http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/macros.hpp.frames.html 490 #define CPU_HEADER_STEM(basename) PASTE_TOKENS(basename, INCLUDE_SUFFIX_CPU) 491 #define OS_HEADER_STEM(basename) PASTE_TOKENS(basename, INCLUDE_SUFFIX_OS) 492 #define OS_CPU_HEADER_STEM(basename) PASTE_TOKENS(basename, PASTE_TOKENS(INCLUDE_SUFFIX_OS, INCLUDE_SUFFIX_CPU)) 493 #define TARGET_COMPILER_HEADER_STEM(basename) PASTE_TOKENS(basename, INCLUDE_SUFFIX_TARGET_COMPILER) We used to use the TARGET prefix for cpu/arch and os, for example: -#ifdef TARGET_ARCH_x86 -# include "c1_globals_x86.hpp" -#endif -#ifdef TARGET_ARCH_sparc -# include "c1_globals_sparc.hpp" -#endif -#ifdef TARGET_ARCH_arm -# include "c1_globals_arm.hpp" -#endif -#ifdef TARGET_ARCH_ppc -# include "c1_globals_ppc.hpp" -#endif -#ifdef TARGET_ARCH_aarch64 -# include "c1_globals_aarch64.hpp" -#endif -#ifdef TARGET_OS_FAMILY_linux -# include "c1_globals_linux.hpp" but changed it to: +#include "utilities/macros.hpp" + +#include CPU_HEADER(c1_globals) +#include OS_HEADER(c1_globals) with this patch: http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/8a5735c11a84 Do we want to name the macro COMPILER_HEADER instead of TARGET_COMPILER_HEADER? > > (5) http://cr.openjdk.java.net/~kbarrett/8181449/flip_depend/ > > a. Changed globalDefinitions.hpp to #include debug.hpp, rather than > the other way around. ====================================================================== http://cr.openjdk.java.net/~kbarrett/8181449/flip_depend/src/share/vm/utilities/debug.hpp.frames.html Same comment as in (4) above. > > b. Changed globals.hpp to #include globalDefinitions.hpp rather than > debug.hpp, since it actually needs to former and not the latter. > > c. Changed jvmci_globals.cpp to #include jvm.h, since it's no longer > being indirectly included via an indirect include of debug.hpp that > was including globalDefinitions.hpp. 
I don't see that change. > > d. Moved printf-style formatters earlier in globalDefinitions.hpp, so > they can be used in assert messages in this file. OK > > e. In globalDefinitions.hpp, changed some #ifdef ASSERT blocks of > conditional calls to basic_fatal to instead use assert. While doing > so, made the error messages more informative. OK > > In addition to globals.hpp, there are about 90 files that #include > debug.hpp but not globalDefinitions.hpp. The few changes mentioned > were sufficient to fix missing indirect includes resulting from > debug.hpp no longer including globalDefinitions.hpp. There could be > additional problems with platforms not supported by Oracle though. > > There are also about 40 files which directly include debug.hpp but > don't appear to use any of the assertion macros. > OK. Thanks, StefanK From martin.doerr at sap.com Tue Jun 20 09:50:12 2017 From: martin.doerr at sap.com (Doerr, Martin) Date: Tue, 20 Jun 2017 09:50:12 +0000 Subject: RFR(S) JDK-8181810 PPC64: Leverage extrdi for bitfield extract In-Reply-To: <49e34c46-6207-7766-d339-05f1ce6a0eb9@linux.vnet.ibm.com> References: <2f852cff-938e-b383-cf2b-66f97fe652ff@linux.vnet.ibm.com> <49e34c46-6207-7766-d339-05f1ce6a0eb9@linux.vnet.ibm.com> Message-ID: <19e9eb9d5fec463380398d626ba8c78f@sap.com> Hi Matt, nice change. I have reviewed it and didn't see any mistakes. We will build and test it, too. I can also sponsor it after a 2nd review. Thanks and best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Matthew Brandyberry Sent: Montag, 19. Juni 2017 18:35 To: hotspot-dev at openjdk.java.net Subject: Re: RFR(S) JDK-8181810 PPC64: Leverage extrdi for bitfield extract Apologies for the bad cut-and-paste job.. the body of this review request should read as follows: This is a PPC-specific hotspot optimization that leverages the extrdi instruction for bitfield extract operations (shift-right and mask-with-and). 
It yields a ~25% improvement measured via a microbenchmark. Please review: Bug: https://bugs.openjdk.java.net/browse/JDK-8181810 Webrev: http://cr.openjdk.java.net/~gromero/8181810/v1/ Thanks, Matt On 6/19/17 11:30 AM, Matthew Brandyberry wrote: > This is a PPC-specific hotspot optimization that leverages the > mtfprd/mffprd > instructions for for movement between general purpose and floating point > registers (rather than through memory). It yields a ~35% improvement > measured > via a microbenchmark. > > Please review: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8181809 > Webrev: http://cr.openjdk.java.net/~gromero/8181809/v1/ > > Thanks, > Matt > From dalibor.topic at oracle.com Tue Jun 20 12:53:52 2017 From: dalibor.topic at oracle.com (dalibor topic) Date: Tue, 20 Jun 2017 14:53:52 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <20170609102041.GA2477@physik.fu-berlin.de> References: <20170609102041.GA2477@physik.fu-berlin.de> Message-ID: Hi John, I took a quick look at the build logs. It seems that the jtreg tests for aren't run, because the packaged jtreg is too old: Error: The testsuite at /?PKGBUILDDIR?/src/hotspot/test requires jtreg version 4.2 b07 or higher and this is jtreg version 4.2 b05. from https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=amd64&ver=9~b170-2&stamp=1495173713&raw=0 It should get picked up for next builds, thanks to https://tracker.debian.org/news/850162 , I guess. cheers, dalibor topic On 09.06.2017 12:20, John Paul Adrian Glaubitz wrote: > Hi! > > I am currently working on fixing OpenJDK-9 on all non-mainstream > targets available in Debian. For Debian/sparc64, the attached four > patches were necessary to make the build succeed [1]. > > I know the patches cannot be merged right now, but I'm posting them > anyway in case someone else is interested in using them. > > All patches are: > > Signed-off-by: John Paul Adrian Glaubitz > > I also signed the OCA. 
> > I'm now looking into fixing the builds on alpha (DEC Alpha), armel > (ARMv4T), m68k (680x0), powerpc (PPC32) and sh4 (SuperH/J-Core). > > Cheers, > Adrian > >> [1] https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=sparc64&ver=9%7Eb170-2&stamp=1496931563&raw=0 > -- Dalibor Topic | Principal Product Manager Phone: +494089091214 | Mobile: +491737185961 ORACLE Deutschland B.V. & Co. KG | Kühnehöfe 5 | 22761 Hamburg ORACLE Deutschland B.V. & Co. KG Hauptverwaltung: Riesstr. 25, D-80992 München Registergericht: Amtsgericht München, HRA 95603 Komplementärin: ORACLE Deutschland Verwaltung B.V. Hertogswetering 163/167, 3543 AS Utrecht, Niederlande Handelsregister der Handelskammer Midden-Niederlande, Nr. 30143697 Geschäftsführer: Alexander van der Ven, Jan Schultheiss, Val Maher Oracle is committed to developing practices and products that help protect the environment From martin.doerr at sap.com Tue Jun 20 13:33:59 2017 From: martin.doerr at sap.com (Doerr, Martin) Date: Tue, 20 Jun 2017 13:33:59 +0000 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> Message-ID: <2a4fcb315f4d44199e8cc66935886f41@sap.com> Hi Matt, thanks for providing this webrev. I had already thought about using these instructions for this purpose and your change matches pretty much what I'd do. Here a couple of comments: ppc.ad: This was a lot of work. Thanks for doing it. effect(DEF dst, USE src); is redundant if a match rule match(Set dst (MoveL2D src)); exists. vm_version: This part is in conflict with Michihiro's change which is already pushed in jdk10, but it's trivial to resolve. I'm ok with using has_vpmsumb() for has_mtfprd(). In the past, we sometimes had trouble with assuming that a certain Power processor supports all new instructions if it supports certain ones.
We also use the hotspot code on as400 where certain instruction subsets were disabled while other Power 8 instructions were usable. Maybe you can double-check if there may exist configurations in which has_vpmsumb() doesn't match has_mtfprd(). C1: It should also be possible to use the instructions in C1 compiler. Maybe you would like to take a look at it as well and see if it can be done with feasible effort. Here are some hints: The basic decisions are made in LIRGenerator::do_Convert. You could skip the force_to_spill or must_start_in_memory steps. The final assembly code gets emitted in LIR_Assembler::emit_opConvert where you could replace the store instructions. For testing, you can use -XX:TieredStopAtLevel=1, for example. Thanks and best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Matthew Brandyberry Sent: Montag, 19. Juni 2017 18:28 To: ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 This is a PPC-specific hotspot optimization that leverages the mtfprd/mffprd instructions for for movement between general purpose and floating point registers (rather than through memory). It yields a ~35% improvement measured via a microbenchmark. Please review: Bug: https://bugs.openjdk.java.net/browse/JDK-8181809 Webrev: http://cr.openjdk.java.net/~gromero/8181809/v1/ Thanks, Matt From mbrandy at linux.vnet.ibm.com Tue Jun 20 13:38:57 2017 From: mbrandy at linux.vnet.ibm.com (Matthew Brandyberry) Date: Tue, 20 Jun 2017 08:38:57 -0500 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <2a4fcb315f4d44199e8cc66935886f41@sap.com> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <2a4fcb315f4d44199e8cc66935886f41@sap.com> Message-ID: <651ebdd4-3854-ac42-8e9c-54df77cbb5fc@linux.vnet.ibm.com> Hi Martin, Thanks for the review. 
I'll take a look at these areas and report back -- especially the integration into C1. On 6/20/17 8:33 AM, Doerr, Martin wrote: > Hi Matt, > > thanks for providing this webrev. I had already thought about using these instructions for this purpose and your change matches pretty much what I'd do. > > Here a couple of comments: > ppc.ad: > This was a lot of work. Thanks for doing it. > effect(DEF dst, USE src); is redundant if a match rule match(Set dst (MoveL2D src)); exists. > > vm_version: > This part is in conflict with Michihiro's change which is already pushed in jdk10, but it's trivial to resolve. I'm ok with using has_vpmsumb() for has_mtfprd(). In the past, we sometimes had trouble with assuming that a certain Power processor supports all new instructions if it supports certain ones. We also use the hotspot code on as400 where certain instruction subsets were disabled while other Power 8 instructions were usable. Maybe you can double-check if there may exist configurations in which has_vpmsumb() doesn't match has_mtfprd(). > > C1: > It should also be possible to use the instructions in C1 compiler. Maybe you would like to take a look at it as well and see if it can be done with feasible effort. > Here are some hints: > The basic decisions are made in LIRGenerator::do_Convert. You could skip the force_to_spill or must_start_in_memory steps. > The final assembly code gets emitted in LIR_Assembler::emit_opConvert where you could replace the store instructions. > For testing, you can use -XX:TieredStopAtLevel=1, for example. > > Thanks and best regards, > Martin > > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Matthew Brandyberry > Sent: Montag, 19. 
Juni 2017 18:28 > To: ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net > Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 > > This is a PPC-specific hotspot optimization that leverages the > mtfprd/mffprd instructions for for movement between general purpose and > floating point registers (rather than through memory). It yields a ~35% > improvement measured via a microbenchmark. Please review: Bug: > https://bugs.openjdk.java.net/browse/JDK-8181809 Webrev: > http://cr.openjdk.java.net/~gromero/8181809/v1/ Thanks, Matt > > From glaubitz at physik.fu-berlin.de Tue Jun 20 13:44:07 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 20 Jun 2017 15:44:07 +0200 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> Message-ID: <20170620134406.GD14487@physik.fu-berlin.de> Hi Matthew! On Mon, Jun 19, 2017 at 11:27:36AM -0500, Matthew Brandyberry wrote: > This is a PPC-specific hotspot optimization that leverages the mtfprd/mffprd > instructions for for movement between general purpose and floating point > registers (rather than through memory). Do these instructions also exist on POWER5 or is this POWER8+ only? Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From mbrandy at linux.vnet.ibm.com Tue Jun 20 13:46:25 2017 From: mbrandy at linux.vnet.ibm.com (Matthew Brandyberry) Date: Tue, 20 Jun 2017 08:46:25 -0500 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <20170620134406.GD14487@physik.fu-berlin.de> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <20170620134406.GD14487@physik.fu-berlin.de> Message-ID: Hi Adrian, These instructions were introduced with POWER8 and are not available in earlier ISAs. On 6/20/17 8:44 AM, John Paul Adrian Glaubitz wrote: > Hi Matthew! > > On Mon, Jun 19, 2017 at 11:27:36AM -0500, Matthew Brandyberry wrote: >> This is a PPC-specific hotspot optimization that leverages the mtfprd/mffprd >> instructions for for movement between general purpose and floating point >> registers (rather than through memory). > Do these instructions also exist on POWER5 or is this POWER8+ only? > > Adrian > From glaubitz at physik.fu-berlin.de Tue Jun 20 13:58:07 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 20 Jun 2017 15:58:07 +0200 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <20170620134406.GD14487@physik.fu-berlin.de> Message-ID: <20170620135807.GE14487@physik.fu-berlin.de> On Tue, Jun 20, 2017 at 08:46:25AM -0500, Matthew Brandyberry wrote: > These instructions were introduced with POWER8 and are not available in > earlier ISAs. Isn't your patch then missing some guarding such that these instructions aren't used when building on ppc64be with POWER5? We're still building openjdk-9 in Debian on ppc64 with POWER5 besides the ppc64el build with POWER8. Thanks, Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From mbrandy at linux.vnet.ibm.com Tue Jun 20 14:03:07 2017 From: mbrandy at linux.vnet.ibm.com (Matthew Brandyberry) Date: Tue, 20 Jun 2017 09:03:07 -0500 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <20170620135807.GE14487@physik.fu-berlin.de> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <20170620134406.GD14487@physik.fu-berlin.de> <20170620135807.GE14487@physik.fu-berlin.de> Message-ID: <1b51ebad-0b4c-9f95-1d34-5119ca121131@linux.vnet.ibm.com> I think they are all guarded properly -- some at a higher level via predicate directives. Which instance(s) in particular look unguarded? On 6/20/17 8:58 AM, John Paul Adrian Glaubitz wrote: > On Tue, Jun 20, 2017 at 08:46:25AM -0500, Matthew Brandyberry wrote: >> These instructions were introduced with POWER8 and are not available in >> earlier ISAs. > Isn't your patch then missing some guarding such that these > instructions aren't used when building on ppc64be with POWER5? > > We're still building openjdk-9 in Debian on ppc64 with POWER5 besides > the ppc64el build with POWER8. > > Thanks, > Adrian > From glaubitz at physik.fu-berlin.de Tue Jun 20 14:06:09 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 20 Jun 2017 16:06:09 +0200 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <1b51ebad-0b4c-9f95-1d34-5119ca121131@linux.vnet.ibm.com> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <20170620134406.GD14487@physik.fu-berlin.de> <20170620135807.GE14487@physik.fu-berlin.de> <1b51ebad-0b4c-9f95-1d34-5119ca121131@linux.vnet.ibm.com> Message-ID: <20170620140609.GF14487@physik.fu-berlin.de> On Tue, Jun 20, 2017 at 09:03:07AM -0500, Matthew Brandyberry wrote: > I think they are all guarded properly -- some at a higher level via > predicate directives. 
Which instance(s) in particular look unguarded? Ah, I was just looking for #ifdefs when in fact there is "if (VM_Version::has_mtfprd())" which is apparently what I should have been looking for. Sorry for my ignorance and thanks for the quick replies! Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From thomas.stuefe at gmail.com Wed Jun 21 07:29:57 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 21 Jun 2017 09:29:57 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: Hi Marcus, thank you for reviewing! New webrev: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.01/webrev/ Delta to last version: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/delta-all-00-to-01/webrev/ Changes: - classfile/loaderConstraints.cpp: as you suggested, I fixed up all cases of "superfluous LogStream usage" I found and converted them to direct LogTarget::print() calls. - classfile/sharedPathsMiscInfo.cpp: here I opted for removing any notion of UL from this method, instead I just hand in an outputStream*. Both the "is_enabled" check and the LogStream creation is now handed in the caller frame. I also added a trailing cr(). - gc/cms/compactibleFreeListSpace.cpp: removed the superfluous ResourceMark - logging/logStream.hpp: enabled the private operator new() declarations to disable heap allocations for class LogStream. I also gave it a try, works fine, if you do new(), now you get a linker error. The rest of the changes is concerned with the removal of "LogStreamCHeap" which is not needed anymore. 
Note that I found some new instances of "unguarded printing" and I updated comments at JDK-8182466 . Thanks & Regards, Thomas On Mon, Jun 19, 2017 at 2:30 PM, Marcus Larsson wrote: > Hi Thomas, > > Thanks for your effort on this, great work! > > On 2017-06-18 09:30, Thomas Stüfe wrote: > > > -- > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/ > callsite-changes.webrev.00/webrev/ > > > (Although this is a pre-existing issue, it might be a good opportunity to > clean it up now.) > In file loaderConstraints.cpp, class LoaderConstraintTable, for functions > purge_loader_constraints(), add_entry(), check_or_update(), > extend_loader_constraint() and merge_loader_constraints(): > A LogStream is created, but only ever used for print_cr():s, which sort of > defeats its purpose. It would be much simpler just to use the LogTarget > directly. This is actually what's done for the converted > log_ldr_constraint_msg(). > > A similar but worse issue is present in sharedPathsMiscInfo.cpp: > Here, a LogStream is used to print incomplete lines without any CR at the > end. These messages will never be logged. Also, the use of a stream here is > unnecessary as well. > > In compactibleFreeListSpace.cpp: > > 2200 ResourceMark rm; > > It should be safe to remove this ResourceMark. > > These are the - mostly mechanical - changes to the many callsites. Most of > these changes follow the same pattern. A code sequence using "xxx_stream()" > was split into declaration of LogTarget or Log (usually the former), an > "is_enabled()" query and declaration of LogStream on the stack afterwards. > This follows the principle that the logging itself should be cheap if > unused: the declaration of LogTarget is a noop, because LogTarget has no > members. Only if is_enabled() is true, more expensive things are done, e.g. > initializing the outputStream object and allocating ResourceMarks.
> > Note that I encountered some places where logging was done without > enclosing "is_enabled()" query - for instance, see gc/g1/heapRegion.cpp, or > cms/compactibleFreeListSpace.cpp. As far as I understand, in these cases > we actually do print (assemble the line to be printed), only to discard all > that work in the UL backend because logging is not enabled for that log > target. So, we pay quite a bit for nothing. I marked these questionable > code sections with an "// Unconditional write?" comment and we may want to > fix those later in a follow up issue? > > > That sounds good to me. I found more sites where the logging is > unconditional (compactibleFreeListSpace.cpp, parOopClosures.inline.hpp, > g1RemSet.cpp), but we should fix them all as a separate issue. I filed > https://bugs.openjdk.java.net/browse/JDK-8182466. > > > -- > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917- > refactor-ul-logstream/api-changes.webrev.00/webrev/ > > The API changes mostly are simplifications. Before, we had a whole > hierarchy of LogStream... classes whose only difference was how the backing > memory was allocated. Because we now always use C-Heap, all this can be > folded together into a single simple LogStream class which uses Cheap as > line memory. Please note that I left "LogStreamNoResourceMark" and > "LogStreamCHeap" for now and typedef'ed them to the one single LogStream > class; I will fix those later as part of this refactoring. > > > Looks good to me, as long as we get rid of the typedefs too eventually. :) > > > -- > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/ > logstream-optimization.webrev.00/webrev/ > > > 56 // Prevent operator new for LogStream. 57 // static void* operator new (size_t); 58 // static void* operator new[] (size_t); 59 > > Should these be uncommented? 
> > Thanks again, > Marcus > > From marcus.larsson at oracle.com Wed Jun 21 10:18:50 2017 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Wed, 21 Jun 2017 12:18:50 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: Hi, On 2017-06-21 09:29, Thomas Stüfe wrote: > Hi Marcus, > > thank you for reviewing! > > New webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.01/webrev/ > > > Delta to last version: > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/delta-all-00-to-01/webrev/ > > > Changes: > > - classfile/loaderConstraints.cpp: as you suggested, I fixed up all > cases of "superfluous LogStream usage" I found and converted them to > direct LogTarget::print() calls. > > - classfile/sharedPathsMiscInfo.cpp: here I opted for removing any > notion of UL from this method, instead I just hand in an > outputStream*. Both the "is_enabled" check and the LogStream creation > is now handed in the caller frame. I also added a trailing cr(). > > - gc/cms/compactibleFreeListSpace.cpp: removed the superfluous > ResourceMark > > - logging/logStream.hpp: enabled the private operator new() > declarations to disable heap allocations for class LogStream. I also > gave it a try, works fine, if you do new(), now you get a linker error. > > The rest of the changes is concerned with the removal of > "LogStreamCHeap" which is not needed anymore. Note that I found some > new instances of "unguarded printing" and I updated comments at > JDK-8182466 . > > Thanks & Regards, Thomas This looks great. One last thing just hit me: We should have a unit test to cover the allocation case for LogStream. Preferably one that forces it to grow twice (once from stackbuffer, and once from heap), to make sure we exercise that code.
Thanks, Marcus From thomas.stuefe at gmail.com Wed Jun 21 16:16:38 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 21 Jun 2017 18:16:38 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: Hi Marcus, On Wed, Jun 21, 2017 at 12:18 PM, Marcus Larsson wrote: > Hi, > > > On 2017-06-21 09:29, Thomas St?fe wrote: > >> Hi Marcus, >> >> thank you for reviewing! >> >> New webrev: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor- >> ul-logstream/all.webrev.01/webrev/ > Estuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.01/webrev/> >> >> Delta to last version: http://cr.openjdk.java.net/~st >> uefe/webrevs/8181917-refactor-ul-logstream/delta-all-00-to-01/webrev/ < >> http://cr.openjdk.java.net/%7Estuefe/webrevs/8181917-refact >> or-ul-logstream/delta-all-00-to-01/webrev/> >> >> Changes: >> >> - classfile/loaderConstraints.cpp: as you suggested, I fixed up all >> cases of "superfluous LogStream usage" I found and converted them to direct >> LogTarget::print() calls. >> >> - classfile/sharedPathsMiscInfo.cpp: here I opted for removing any >> notion of UL from this method, instead I just hand in an outputStream*. >> Both the "is_enabled" check and the LogStream creation is now handed in the >> caller frame. I also added a trailing cr(). >> >> - gc/cms/compactibleFreeListSpace.cpp: removed the superfluous >> ResourceMark >> >> - logging/logStream.hpp: enabled the private operator new() declarations >> to disable heap allocations for class LogStream. I also gave it a try, >> works fine, if you do new(), now you get a linker error. >> >> The rest of the changes is concerned with the removal of "LogStreamCHeap" >> which is not needed anymore. Note that I found some new instances of >> "unguarded printing" and I updated comments at JDK-8182466 < >> https://bugs.openjdk.java.net/browse/JDK-8182466> . 
>> >> Thanks & Regards, Thomas >> > > This looks great. > > One last thing just hit me: We should have a unit test to cover the > allocation case for LogStream. Preferably one that forces it to grow twice > (once from stackbuffer, and once from heap), to make sure we exercise that > code. > > Good idea, especially since I had a bug in the line buffer handling :P New Webrev: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.02/webrev/ Delta to last: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/delta-01-to-02/webrev/ Changes: fixed a bug where, on buffer resize in LogStream, the new buffer size was calculated wrong. Added a test to test LogStream resizing. I ran the gtests on win-x64 and linux-x64, and the jtreg servicability/logging tests on linux-x64. All good. I am currently running the whole hotspot jtreg tests, but that will take a while. Kind Regards, Thomas > Thanks, > Marcus > From daniel.daugherty at oracle.com Thu Jun 22 00:21:27 2017 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 21 Jun 2017 18:21:27 -0600 Subject: RFR(XL): 8181449: Fix debug.hpp / globalDefinitions.hpp dependency inversion In-Reply-To: References: Message-ID: <86238df5-bcc7-56e6-d2d3-74cedd52be16@oracle.com> On 6/16/17 6:33 PM, Kim Barrett wrote: > Please review this refactoring of debug.hpp and globalDefinitions.hpp > so that debug.hpp no longer includes globalDefinitions.hpp. Instead, > the include dependency is now in the other direction. Among other > things, this permits the use of the assert macros by inline functions > defined in globalDefinitions.hpp. > > There are a few functions declared toward the end of debug.hpp that > now seem somewhat misplaced there. I'm leaving them there for now, > but will file a new CR to figure out a better place for them, possibly > in vmError. 
There are a number of additional cleanups for dead code > and the like that I'll be filing as followups; this change is already > rather large and I didn't want to add more stuff to it. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8181449 > > Testing: > jprt > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.00/ Normally, I would drop a list of the files here and hit them all one by one. That style's not going to work well with this review. I'm going to review the sub-webrevs one by one and only call out a file if I have comments. I'm going to lose the sanity check of knowing that I reviewed every file, but... > The full webrev is somewhat large. However, much of the bulk involves > either adding #includes to files or moving code from one place to > another without changing it. To simplify reviewing the changes, I've > broken it down into a sequence of patches, each associated with a > particular bit of the refactoring. The full change is equivalent to > applying these patches in the given order. (Note: I don't know if > applying a subset gives a working repository.) > > (1) http://cr.openjdk.java.net/~kbarrett/8181449/jvm_h/ No comments on this sub-webrev. > a. In preparation for removing the #include of jvm.h from debug.hpp > (see move_format_buffer webrev), ensured all files that contain > references to jio_printf variants include jvm.h. This mostly involved > adding a #include to lots of files. > > b. For a few header files that referenced jio_printf variants, moved > the function definition from the .hpp to the corresponding .cpp, and > added #include of jvm.h to the .cpp. > - macroAssembler_sparc.[ch]pp > - macroAssembler_x86.[ch]pp > - macroAssembler_aarch64.[ch]pp > > (2) http://cr.openjdk.java.net/~kbarrett/8181449/move_format_buffer/ src/share/vm/utilities/formatBuffer.cpp jcheck won't like the blank line at the end of the file. > a. 
Moved FormatBuffer and related stuff from debug.[ch]pp to new > formatBuffer.[ch]pp, and updated users to #include the new files. > This includes moving the #include of jvm.h, which is no longer needed > by debug.hpp. > > b. Made the #include of debug.hpp explicit when already otherwise > modifying a file that uses assert-like macros, rather than depending > on indirect inclusion of that file. > > (3) http://cr.openjdk.java.net/~kbarrett/8181449/move_pns/ src/share/vm/utilities/vmError.cpp L210: if (fr.pc()) { L219: if (t && t->is_Java_thread()) { nit - uses implied boolean (not your bug, you just moved the code). > a. Moved print_native_stack to VMError class. > b. Removed unused and undefined pd_obfuscate_location. > c. Simplified #ifdef PRODUCT guard in ps(). > > (4) http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/ No comments on this sub-webrev. > a. Moved / combined definitions of BREAKPOINT macro from > globalDefinitions_*.hpp to new breakpoint.hpp. > > b. Deleted all definitions of unused DEBUG_EXCEPTION macro. > > c. Moved gcc-specific ATTRIBUTE_PRINTF, pragma macros, and friends > from globalDefinitions_gcc.hpp to new compilerWarnings.hpp. Also > moved the default definitions for those macros from > globalDefinitions.hpp to compilerWarnings.hpp. > > d. Added TARGET_COMPILER_HEADER[_INLINE] macros, similar to the > CPU/OS/OS_CPU/_HEADER[_INLINE] macros, for including files based on > new INCLUDE_SUFFIX_TARGET_COMPILER macro provided by the build system. > > (5) http://cr.openjdk.java.net/~kbarrett/8181449/flip_depend/ No comments on this sub-webrev. > a. Changed globalDefinitions.hpp to #include debug.hpp, rather than > the other way around. > > b. Changed globals.hpp to #include globalDefinitions.hpp rather than > debug.hpp, since it actually needs to former and not the latter. > > c. 
Changed jvmci_globals.cpp to #include jvm.h, since it's no longer > being indirectly included via an indirect include of debug.hpp that > was including globalDefinitions.hpp. > > d. Moved printf-style formatters earlier in globalDefinitions.hpp, so > they can be used in assert messages in this file. > > e. In globalDefinitions.hpp, changed some #ifdef ASSERT blocks of > conditional calls to basic_fatal to instead use assert. While doing > so, made the error messages more informative. > > In addition to globals.hpp, there are about 90 files that #include > debug.hpp but not globalDefinitions.hpp. The few changes mentioned > were sufficient to fix missing indirect includes resulting from > debug.hpp no longer including globalDefinitions.hpp. There could be > additional problems with platforms not supported by Oracle though. > > There are also about 40 files which directly include debug.hpp but > don't appear to use any of the assertion macros. Based on the last two paragraphs, I'm now wondering if there are files changed in the main webrev that aren't in a sub-webrev. I think I need to go back to the main webrev to make sure I didn't miss anything: > http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.00/ No new comments looking at the changes this way. Thumbs up! Very nice job of detangling... Dan From kim.barrett at oracle.com Thu Jun 22 00:29:24 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 21 Jun 2017 20:29:24 -0400 Subject: RFR(XL): 8181449: Fix debug.hpp / globalDefinitions.hpp dependency inversion In-Reply-To: References: Message-ID: <0EC0C8CD-CB86-4A96-B25F-86D9619E2469@oracle.com> > On Jun 20, 2017, at 5:49 AM, Stefan Karlsson wrote: > > Hi Kim, Thanks for such a careful review of this largish and mostly very boring to read change. Responses inline. 
New webrevs: full: http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.01/ incr: http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.01.inc/ > On 2017-06-17 02:33, Kim Barrett wrote: >> (2) http://cr.openjdk.java.net/~kbarrett/8181449/move_format_buffer/ >> a. Moved FormatBuffer and related stuff from debug.[ch]pp to new >> formatBuffer.[ch]pp, and updated users to #include the new files. >> This includes moving the #include of jvm.h, which is no longer needed >> by debug.hpp. >> b. Made the #include of debug.hpp explicit when already otherwise >> modifying a file that uses assert-like macros, rather than depending >> on indirect inclusion of that file. > > The following has now been moved to formatBuffer.hpp: > > 116 // Used to format messages. > 117 typedef FormatBuffer<> err_msg; > > but formatBuffer.hpp is not explicitly included in all files using err_msg. It seems to be available everywhere needed via indirect includes, since there weren't any build failures. There are 25 files referring to err_msg, of which 18 lack a direct #include of the new formatBuffer.hpp. Some uses look like they would be better eliminated by changing surrounding code. For example, callers of os::commit_memory_or_exit pay the cost of building the error message in an err_msg, even in the normal no-error case. Addressing that should be done in its own RFE. https://bugs.openjdk.java.net/browse/JDK-8182679 Since I think existing uses of err_msg ought to be examined and some eliminated, and there isn't presently a build problem, I'd prefer to not address these missing includes at this time. But I could have my arm twisted... >> (3) http://cr.openjdk.java.net/~kbarrett/8181449/move_pns/ >> a. Moved print_native_stack to VMError class. >> b. Removed unused and undefined pd_obfuscate_location. >> c. Simplified #ifdef PRODUCT guard in ps(). > > You removed the call to p->trace_stack(). Was that intentional? 
> > if (p->has_last_Java_frame()) { > // If the last_Java_fp is set we are in C land and > // can call the standard stack_trace function. > -#ifdef PRODUCT > p->print_stack(); > } else { > +#ifdef PRODUCT > tty->print_cr("Cannot find the last Java frame, printing stack disabled."); > #else // !PRODUCT > - p->trace_stack(); > - } else { I kept getting confused by this code while trying to figure out what to do with pd_ps, which is why I changed it. Seems I was even more confused than I thought, since I failed to see the difference between p->print_stack() and p->trace_stack(). Well spotted. I've reverted this change. >> (4) http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/ >> a. Moved / combined definitions of BREAKPOINT macro from >> globalDefinitions_*.hpp to new breakpoint.hpp. >> b. Deleted all definitions of unused DEBUG_EXCEPTION macro. >> c. Moved gcc-specific ATTRIBUTE_PRINTF, pragma macros, and friends >> from globalDefinitions_gcc.hpp to new compilerWarnings.hpp. Also >> moved the default definitions for those macros from >> globalDefinitions.hpp to compilerWarnings.hpp. >> d. Added TARGET_COMPILER_HEADER[_INLINE] macros, similar to the >> CPU/OS/OS_CPU/_HEADER[_INLINE] macros, for including files based on >> new INCLUDE_SUFFIX_TARGET_COMPILER macro provided by the build system. > > ====================================================================== > http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/globalDefinitions.hpp.frames.html > > sort order?: > 28 #include "utilities/macros.hpp" > 29 #include "utilities/compilerWarnings.hpp" > > or is this intentional? I see that compilerWarnings.hpp has the comment: > 64 // Defaults when not defined for the TARGET_COMPILER_xxx. > > which seems to suggest that macros.hpp need to be included before compilerWarnings.hpp. > > This used to work when globalDefinitions.hpp dispatched to globalDefinitions_.hpp, but now this seems fragile. 
> > debug.hpp even includes compilerWarnings.hpp before macros.hpp, so it seems like the following attribute is not used when debug.hpp is included!: > > -#ifndef ATTRIBUTE_PRINTF > -#define ATTRIBUTE_PRINTF(fmt,vargs) __attribute__((format(printf, fmt, vargs))) > -#endif > > That seems like a bug to me. The mis-ordering was not intentional. TARGET_COMPILER_xxx is defined by the build system, not by macros.hpp. If compilerWarnings.hpp were changed to use the new TARGET_COMPILER_HEADER() dispatch (defined in macros.hpp), then the include would need to be added to compilerWarnings.hpp. So I don't think there's a bug here, but I'll fix the mis-ordered includes. > ====================================================================== > http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/globalDefinitions_gcc.hpp.udiff.html > > Maybe also get rid of the following line: > //---------------------------------------------------------------------------------------------------- > -// Debugging Gone now. > http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/macros.hpp.frames.html > > 490 #define CPU_HEADER_STEM(basename) PASTE_TOKENS(basename, INCLUDE_SUFFIX_CPU) > 491 #define OS_HEADER_STEM(basename) PASTE_TOKENS(basename, INCLUDE_SUFFIX_OS) > 492 #define OS_CPU_HEADER_STEM(basename) PASTE_TOKENS(basename, PASTE_TOKENS(INCLUDE_SUFFIX_OS, INCLUDE_SUFFIX_CPU)) > 493 #define TARGET_COMPILER_HEADER_STEM(basename) PASTE_TOKENS(basename, INCLUDE_SUFFIX_TARGET_COMPILER) > > We used to use the TARGET prefix for cpu/arch and os, for example: > [?] > but changed it to: > +#include "utilities/macros.hpp" > + > +#include CPU_HEADER(c1_globals) > +#include OS_HEADER(c1_globals) > > with this patch: > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/8a5735c11a84 > > Do we want to name the macro COMPILER_HEADER instead of TARGET_COMPILER_HEADER? I'd forgotten about the TARGET_ prefix in the old usage. 
Yes, dropping it for the new COMPILER_xxx macros would be consistent, and I mostly like it now that you've pointed it out. My only concern is that we've also got COMPILER1 and COMPILER2 based macros, and the visual distinction is a little subtle. But I'll go ahead and make the change. > http://cr.openjdk.java.net/~kbarrett/8181449/flip_depend/src/share/vm/utilities/debug.hpp.frames.html > > Same comment as in (4) above. See reply above. >> b. Changed globals.hpp to #include globalDefinitions.hpp rather than >> debug.hpp, since it actually needs to former and not the latter. >> c. Changed jvmci_globals.cpp to #include jvm.h, since it's no longer >> being indirectly included via an indirect include of debug.hpp that >> was including globalDefinitions.hpp. > > I don't see that change. That change got moved to the earlier jvm_h patch, and I forgot to update this patch description. From kim.barrett at oracle.com Thu Jun 22 00:51:52 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 21 Jun 2017 20:51:52 -0400 Subject: RFR(XL): 8181449: Fix debug.hpp / globalDefinitions.hpp dependency inversion In-Reply-To: <86238df5-bcc7-56e6-d2d3-74cedd52be16@oracle.com> References: <86238df5-bcc7-56e6-d2d3-74cedd52be16@oracle.com> Message-ID: > On Jun 21, 2017, at 8:21 PM, Daniel D. Daugherty wrote: > > On 6/16/17 6:33 PM, Kim Barrett wrote: >> Please review this refactoring of debug.hpp and globalDefinitions.hpp >> so that debug.hpp no longer includes globalDefinitions.hpp. [?] >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8181449 >> >> Testing: >> jprt >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.00/ > > Normally, I would drop a list of the files here and hit them > all one by one. That style's not going to work well with this > review. :) >> (2) http://cr.openjdk.java.net/~kbarrett/8181449/move_format_buffer/ > > src/share/vm/utilities/formatBuffer.cpp > jcheck won't like the blank line at the end of the file. Fixed. 
>> (3) http://cr.openjdk.java.net/~kbarrett/8181449/move_pns/ > > src/share/vm/utilities/vmError.cpp > L210: if (fr.pc()) { > L219: if (t && t->is_Java_thread()) { > nit - uses implied boolean > (not your bug, you just moved the code). There are quite a few more examples of implied booleans in this file. I'd prefer to leave this alone for this changeset. > Thumbs up! Very nice job of detangling... Thanks. From stefan.karlsson at oracle.com Thu Jun 22 06:49:05 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Jun 2017 08:49:05 +0200 Subject: RFR(XL): 8181449: Fix debug.hpp / globalDefinitions.hpp dependency inversion In-Reply-To: <0EC0C8CD-CB86-4A96-B25F-86D9619E2469@oracle.com> References: <0EC0C8CD-CB86-4A96-B25F-86D9619E2469@oracle.com> Message-ID: <5588314d-1908-93f5-a7c8-e77a22cee89a@oracle.com> Hi Kim, On 2017-06-22 02:29, Kim Barrett wrote: >> On Jun 20, 2017, at 5:49 AM, Stefan Karlsson wrote: >> >> Hi Kim, > > Thanks for such a careful review of this largish and mostly very > boring to read change. Responses inline. > > New webrevs: > full: http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.01/ > incr: http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.01.inc/ Looks good. Just a couple of minor nits that would be nice to squash. > > >> On 2017-06-17 02:33, Kim Barrett wrote: >>> (2) http://cr.openjdk.java.net/~kbarrett/8181449/move_format_buffer/ >>> a. Moved FormatBuffer and related stuff from debug.[ch]pp to new >>> formatBuffer.[ch]pp, and updated users to #include the new files. >>> This includes moving the #include of jvm.h, which is no longer needed >>> by debug.hpp. >>> b. Made the #include of debug.hpp explicit when already otherwise >>> modifying a file that uses assert-like macros, rather than depending >>> on indirect inclusion of that file. >> >> The following has now been moved to formatBuffer.hpp: >> >> 116 // Used to format messages. 
>> 117 typedef FormatBuffer<> err_msg; >> >> but formatBuffer.hpp is not explicitly included in all files using err_msg. > > It seems to be available everywhere needed via indirect includes, > since there weren't any build failures. > > There are 25 files referring to err_msg, of which 18 lack a direct > #include of the new formatBuffer.hpp. > > Some uses look like they would be better eliminated by changing > surrounding code. For example, callers of os::commit_memory_or_exit > pay the cost of building the error message in an err_msg, even in the > normal no-error case. Addressing that should be done in its own RFE. > https://bugs.openjdk.java.net/browse/JDK-8182679 > > Since I think existing uses of err_msg ought to be examined and some > eliminated, and there isn't presently a build problem, I'd prefer to > not address these missing includes at this time. But I could have my > arm twisted... Fine. > >>> (3) http://cr.openjdk.java.net/~kbarrett/8181449/move_pns/ >>> a. Moved print_native_stack to VMError class. >>> b. Removed unused and undefined pd_obfuscate_location. >>> c. Simplified #ifdef PRODUCT guard in ps(). >> >> You removed the call to p->trace_stack(). Was that intentional? >> >> if (p->has_last_Java_frame()) { >> // If the last_Java_fp is set we are in C land and >> // can call the standard stack_trace function. >> -#ifdef PRODUCT >> p->print_stack(); >> } else { >> +#ifdef PRODUCT >> tty->print_cr("Cannot find the last Java frame, printing stack disabled."); >> #else // !PRODUCT >> - p->trace_stack(); >> - } else { > > I kept getting confused by this code while trying to figure out what > to do with pd_ps, which is why I changed it. Seems I was even more > confused than I thought, since I failed to see the difference between > p->print_stack() and p->trace_stack(). Well spotted. I've reverted > this change. OK. I wonder if this code would have been easier to read with two #ifdef PRODUCT sections. 
> >>> (4) http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/ >>> a. Moved / combined definitions of BREAKPOINT macro from >>> globalDefinitions_*.hpp to new breakpoint.hpp. >>> b. Deleted all definitions of unused DEBUG_EXCEPTION macro. >>> c. Moved gcc-specific ATTRIBUTE_PRINTF, pragma macros, and friends >>> from globalDefinitions_gcc.hpp to new compilerWarnings.hpp. Also >>> moved the default definitions for those macros from >>> globalDefinitions.hpp to compilerWarnings.hpp. >>> d. Added TARGET_COMPILER_HEADER[_INLINE] macros, similar to the >>> CPU/OS/OS_CPU/_HEADER[_INLINE] macros, for including files based on >>> new INCLUDE_SUFFIX_TARGET_COMPILER macro provided by the build system. >> >> ====================================================================== >> http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/globalDefinitions.hpp.frames.html >> >> sort order?: >> 28 #include "utilities/macros.hpp" >> 29 #include "utilities/compilerWarnings.hpp" >> >> or is this intentional? I see that compilerWarnings.hpp has the comment: >> 64 // Defaults when not defined for the TARGET_COMPILER_xxx. >> >> which seems to suggest that macros.hpp need to be included before compilerWarnings.hpp. >> >> This used to work when globalDefinitions.hpp dispatched to globalDefinitions_.hpp, but now this seems fragile. >> >> debug.hpp even includes compilerWarnings.hpp before macros.hpp, so it seems like the following attribute is not used when debug.hpp is included!: >> >> -#ifndef ATTRIBUTE_PRINTF >> -#define ATTRIBUTE_PRINTF(fmt,vargs) __attribute__((format(printf, fmt, vargs))) >> -#endif >> >> That seems like a bug to me. > > The mis-ordering was not intentional. > > TARGET_COMPILER_xxx is defined by the build system, not by macros.hpp. > > If compilerWarnings.hpp were changed to use the new > TARGET_COMPILER_HEADER() dispatch (defined in macros.hpp), then the > include would need to be added to compilerWarnings.hpp. 
> > So I don't think there's a bug here, but I'll fix the mis-ordered > includes. I agree that this isn't a bug. > >> ====================================================================== >> http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/globalDefinitions_gcc.hpp.udiff.html >> >> Maybe also get rid of the following line: >> //---------------------------------------------------------------------------------------------------- >> -// Debugging > > Gone now. I wasn't particularly clear on this request, and you cut out the following part of my mail: "or get rid of the following stray new line so that the code in the different globalDefinitions files are more consistent." With your last change the layout is still inconsistent: globalDefinitions_gcc.hpp 195 196 // checking for nanness globalDefinitions_sparcWorks.hpp 213 //---------------------------------------------------------------------------------------------------- 214 215 // checking for nanness globalDefinitions_visCPP.hpp 128 //---------------------------------------------------------------------------------------------------- 129 // Checking for nanness globalDefinitions_xlc.hpp 110 //---------------------------------------------------------------------------------------------------- 111 112 // checking for nanness Could you make all these files consistent in this regard before pushing this? > >> http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/macros.hpp.frames.html >> >> 490 #define CPU_HEADER_STEM(basename) PASTE_TOKENS(basename, INCLUDE_SUFFIX_CPU) >> 491 #define OS_HEADER_STEM(basename) PASTE_TOKENS(basename, INCLUDE_SUFFIX_OS) >> 492 #define OS_CPU_HEADER_STEM(basename) PASTE_TOKENS(basename, PASTE_TOKENS(INCLUDE_SUFFIX_OS, INCLUDE_SUFFIX_CPU)) >> 493 #define TARGET_COMPILER_HEADER_STEM(basename) PASTE_TOKENS(basename, INCLUDE_SUFFIX_TARGET_COMPILER) >> >> We used to use the TARGET prefix for cpu/arch and os, for example: >> [...] 
>> but changed it to: >> +#include "utilities/macros.hpp" >> + >> +#include CPU_HEADER(c1_globals) >> +#include OS_HEADER(c1_globals) >> >> with this patch: >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/8a5735c11a84 >> >> Do we want to name the macro COMPILER_HEADER instead of TARGET_COMPILER_HEADER? > > I'd forgotten about the TARGET_ prefix in the old usage. Yes, dropping > it for the new COMPILER_xxx macros would be consistent, and I mostly > like it now that you've pointed it out. My only concern is that we've > also got COMPILER1 and COMPILER2 based macros, and the visual > distinction is a little subtle. But I'll go ahead and make the change. OK > >> http://cr.openjdk.java.net/~kbarrett/8181449/flip_depend/src/share/vm/utilities/debug.hpp.frames.html >> >> Same comment as in (4) above. > > See reply above. > >>> b. Changed globals.hpp to #include globalDefinitions.hpp rather than >>> debug.hpp, since it actually needs the former and not the latter. >>> c. Changed jvmci_globals.cpp to #include jvm.h, since it's no longer >>> being indirectly included via an indirect include of debug.hpp that >>> was including globalDefinitions.hpp. >> >> I don't see that change. > > That change got moved to the earlier jvm_h patch, and I forgot to > update this patch description. OK http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.01.inc/src/share/vm/utilities/globalDefinitions.hpp.udiff.html 28 #include "utilities/macros.hpp" 29 #include "utilities/compilerWarnings.hpp" 30 #include "utilities/debug.hpp" 31 32 #include TARGET_COMPILER_HEADER(utilities/globalDefinitions) 33 // Defaults for macros that might be defined per compiler. 34 #ifndef NOINLINE 35 #define NOINLINE 36 #endif Would you mind adding a newline between 32 and 33 before pushing? 
Thanks, StefanK > > > From glaubitz at physik.fu-berlin.de Thu Jun 22 10:22:03 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Thu, 22 Jun 2017 12:22:03 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: References: <20170609102041.GA2477@physik.fu-berlin.de> Message-ID: <20170622102203.GB18516@physik.fu-berlin.de> Hi Dalibor! On Tue, Jun 20, 2017 at 02:53:52PM +0200, dalibor topic wrote: > I took a quick look at the build logs. It seems that the jtreg tests for > aren't run, because the packaged jtreg is too old: > > Error: The testsuite at /«PKGBUILDDIR»/src/hotspot/test requires jtreg > version 4.2 b07 or higher and this is jtreg version 4.2 b05. > > from https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=amd64&ver=9~b170-2&stamp=1495173713&raw=0 Aha, thanks for the heads-up! > It should get picked up for next builds, thanks to > https://tracker.debian.org/news/850162 , I guess. Yes, the next time Matthias Klose uploads a new openjdk-9 package, the testsuite should be run. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From glaubitz at physik.fu-berlin.de Thu Jun 22 10:27:03 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Thu, 22 Jun 2017 12:27:03 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: References: <20170609102041.GA2477@physik.fu-berlin.de> <20170614120408.GB16230@physik.fu-berlin.de> <5d613e41-a982-ec67-3a48-5befbf3a2808@physik.fu-berlin.de> <31eeeb60-1b0d-a0cb-238c-ca2361430786@oracle.com> Message-ID: <20170622102703.GC18516@physik.fu-berlin.de> On Mon, Jun 19, 2017 at 02:48:39PM +0200, Erik Helin wrote: > >So, should I just run the testsuite with all three patches applied? > > Yes, please run the testsuite with the three patches applied. 
This should > work (famous last words ;)) for the "native" Linux/sparc64 version of > hotspot (if not, I would to curious to learn why). To test > Linux/sparc64+zero you obviously need the fourth patch applied as well. Ok, I will give it a try. I will do a hotspot-native build first, run the testsuite and post the results. Let's tackle zero later. I've got anoter bunch of zero-related fixes that we're carrying in the Debian package and that should be upstreamed to be available for other downstreams as well. > >The fourth patch just fixes Zero on Linux sparc. If I understand > >correctly, Debian's openjdk packages always build the Zero VM even on > >targets with a native Hotspot. And without the last patch, the Zero > >build fails on Linux sparc. > > Ah, now I think I get it :) This is for the openjdk-9-jre-zero package, > right? Does the openjdk-9-jre package provide the "native" (using template > interpreter, C1, C2) version of hotspot if possible? >From the description of the -zero package you mentioned: > The package provides an alternative runtime using the Zero VM and > the Shark Just In Time Compiler (JIT). Built on architectures in > addition to the Hotspot VM as a debugging aid for those architectures > which don't have a Hotspot VM. > The VM is started with the option `-zero'. See the README.Debian for > details. For the -jre-headless package, we have: > Minimal Java runtime - needed for executing non GUI Java programs, > using Hotspot JIT. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From marcus.larsson at oracle.com Thu Jun 22 13:33:17 2017 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Thu, 22 Jun 2017 15:33:17 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: <79fac42d-1f8f-e1b4-4ea5-f6195dfcf4ad@oracle.com> On 2017-06-21 18:16, Thomas Stüfe wrote: > Hi Marcus, > > On Wed, Jun 21, 2017 at 12:18 PM, Marcus Larsson > > wrote: > > Hi, > > > On 2017-06-21 09:29, Thomas Stüfe wrote: > > Hi Marcus, > > thank you for reviewing! > > New webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.01/webrev/ > > > > > Delta to last version: > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/delta-all-00-to-01/webrev/ > > > > > Changes: > > - classfile/loaderConstraints.cpp: as you suggested, I fixed > up all cases of "superfluous LogStream usage" I found and > converted them to direct LogTarget::print() calls. > > - classfile/sharedPathsMiscInfo.cpp: here I opted for removing > any notion of UL from this method, instead I just hand in an > outputStream*. Both the "is_enabled" check and the LogStream > creation is now handed in the caller frame. I also added a > trailing cr(). > > - gc/cms/compactibleFreeListSpace.cpp: removed the superfluous > ResourceMark > > - logging/logStream.hpp: enabled the private operator new() > declarations to disable heap allocations for class LogStream. > I also gave it a try, works fine, if you do new(), now you get > a linker error. > > The rest of the changes is concerned with the removal of > "LogStreamCHeap" which is not needed anymore. Note that I > found some new instances of "unguarded printing" and I updated > comments at JDK-8182466 > > . > > Thanks & Regards, Thomas > > > This looks great. 
> > One last thing just hit me: We should have a unit test to cover > the allocation case for LogStream. Preferably one that forces it > to grow twice (once from stackbuffer, and once from heap), to make > sure we exercise that code. > > > Good idea, especially since I had a bug in the line buffer handling :P Aha! :) > > New Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.02/webrev/ > > > Delta to last: > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/delta-01-to-02/webrev/ > Looks good to me. Thanks, Marcus > > Changes: fixed a bug where, on buffer resize in LogStream, the new > buffer size was calculated wrong. Added a test to test LogStream resizing. > > I ran the gtests on win-x64 and linux-x64, and the jtreg > serviceability/logging tests on linux-x64. All good. I am currently > running the whole hotspot jtreg tests, but that will take a while. > > Kind Regards, Thomas > > Thanks, > Marcus > > From thomas.stuefe at gmail.com Thu Jun 22 13:54:14 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Jun 2017 15:54:14 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: <79fac42d-1f8f-e1b4-4ea5-f6195dfcf4ad@oracle.com> References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> <79fac42d-1f8f-e1b4-4ea5-f6195dfcf4ad@oracle.com> Message-ID: Thank you Marcus! Would you mind running the change through jtreg? I ran it on Linux x64 (hotspot/test branch) and I got a number of errors, but nothing suspicious. I repeated most of the tests with an unpatched VM and got the same errors. There is one suspicious test failing: runtime/modules/PatchModule/PatchModuleCDS.java, with "Failed. Execution failed: `main' threw exception: java.lang.RuntimeException: '[class,load] java.lang.Thread source: jrt:/java.base' missing from stdout/stderr". However, the same error also happens without my change. 
Still I would feel better if you would do an independent test. Thank you! @ all: I'd like to have another reviewer (maybe someone from the gc team? a lot of gc log sites changed), and as usual a sponsor. Kind Regards, Thomas On Thu, Jun 22, 2017 at 3:33 PM, Marcus Larsson wrote: > > > On 2017-06-21 18:16, Thomas St?fe wrote: > >> Hi Marcus, >> >> On Wed, Jun 21, 2017 at 12:18 PM, Marcus Larsson < >> marcus.larsson at oracle.com > wrote: >> >> Hi, >> >> >> On 2017-06-21 09:29, Thomas St?fe wrote: >> >> Hi Marcus, >> >> thank you for reviewing! >> >> New webrev: >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor- >> ul-logstream/all.webrev.01/webrev/ >> > or-ul-logstream/all.webrev.01/webrev/> >> > or-ul-logstream/all.webrev.01/webrev/ >> > or-ul-logstream/all.webrev.01/webrev/>> >> >> Delta to last version: >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor- >> ul-logstream/delta-all-00-to-01/webrev/ >> > or-ul-logstream/delta-all-00-to-01/webrev/> >> > or-ul-logstream/delta-all-00-to-01/webrev/ >> >> > or-ul-logstream/delta-all-00-to-01/webrev/>> >> >> Changes: >> >> - classfile/loaderConstraints.cpp: as you suggested, I fixed >> up all cases of "superfluous LogStream usage" I found and >> converted them to direct LogTarget::print() calls. >> >> - classfile/sharedPathsMiscInfo.cpp: here I opted for removing >> any notion of UL from this method, instead I just hand in an >> outputStream*. Both the "is_enabled" check and the LogStream >> creation is now handed in the caller frame. I also added a >> trailing cr(). >> >> - gc/cms/compactibleFreeListSpace.cpp: removed the superfluous >> ResourceMark >> >> - logging/logStream.hpp: enabled the private operator new() >> declarations to disable heap allocations for class LogStream. >> I also gave it a try, works fine, if you do new(), now you get >> a linker error. >> >> The rest of the changes is concerned with the removal of >> "LogStreamCHeap" which is not needed anymore. 
Note that I >> found some new instances of "unguarded printing" and I updated >> comments at JDK-8182466 >> > > . >> >> Thanks & Regards, Thomas >> >> >> This looks great. >> >> One last thing just hit me: We should have a unit test to cover >> the allocation case for LogStream. Preferably one that forces it >> to grow twice (once from stackbuffer, and once from heap), to make >> sure we exercise that code. >> >> >> Good idea, especially since I had a bug in the line buffer handling :P >> > > Aha! :) > > >> New Webrev: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor- >> ul-logstream/all.webrev.02/webrev/ > Estuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.02/webrev/> >> >> Delta to last: http://cr.openjdk.java.net/~st >> uefe/webrevs/8181917-refactor-ul-logstream/delta-01-to-02/webrev/ < >> http://cr.openjdk.java.net/%7Estuefe/webrevs/8181917-refact >> or-ul-logstream/delta-01-to-02/webrev/> >> > > Looks good to me. > > Thanks, > Marcus > > > >> Changes: fixed a bug where, on buffer resize in LogStream, the new buffer >> size was calculated wrong. Added a test to test LogStream resizing. >> >> I ran the gtests on win-x64 and linux-x64, and the jtreg >> servicability/logging tests on linux-x64. All good. I am currently running >> the whole hotspot jtreg tests, but that will take a while. >> >> Kind Regards, Thomas >> >> Thanks, >> Marcus >> >> >> > From erik.helin at oracle.com Thu Jun 22 14:13:57 2017 From: erik.helin at oracle.com (Erik Helin) Date: Thu, 22 Jun 2017 16:13:57 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> <79fac42d-1f8f-e1b4-4ea5-f6195dfcf4ad@oracle.com> Message-ID: On 06/22/2017 03:54 PM, Thomas St?fe wrote: > Thank you Marcus! > > Would you mind running the change through jtreg? I ran it on Linux x64 > (hotspot/test branch) and I got a number of errors, but nothing > suspicious. 
I repeated a most of the tests with an unpatched VM and got > the same errors. > > There is one suspicious test failing: > runtime/modules/PatchModule/PatchModuleCDS.java, with "Failed. Execution > failed: `main' threw exception: java.lang.RuntimeException: > '[class,load] java.lang.Thread source: jrt:/java.base' missing from > stdout/stderr". However, the same error also happens without my change. > Still I would feel better if you would do an independent test. > > Thank you! > > @ all: > I'd like to have another reviewer (maybe someone from the gc team? a lot > of gc log sites changed), and as usual a sponsor. I am on it :) I just had a lot of other to stuff to catch up on, but hopefully I can return to this patch very soon. Given what I've seen so far, I don't foresee any problems at all with this patch. Thanks, Erik > Kind Regards, Thomas > > On Thu, Jun 22, 2017 at 3:33 PM, Marcus Larsson > > wrote: > > > > On 2017-06-21 18:16, Thomas St?fe wrote: > > Hi Marcus, > > On Wed, Jun 21, 2017 at 12:18 PM, Marcus Larsson > > >> wrote: > > Hi, > > > On 2017-06-21 09:29, Thomas St?fe wrote: > > Hi Marcus, > > thank you for reviewing! > > New webrev: > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.01/webrev/ > > > > > > > > >> > > Delta to last version: > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/delta-all-00-to-01/webrev/ > > > > > > > > > >> > > Changes: > > - classfile/loaderConstraints.cpp: as you suggested, I fixed > up all cases of "superfluous LogStream usage" I found and > converted them to direct LogTarget::print() calls. > > - classfile/sharedPathsMiscInfo.cpp: here I opted for > removing > any notion of UL from this method, instead I just hand in an > outputStream*. Both the "is_enabled" check and the LogStream > creation is now handed in the caller frame. I also added a > trailing cr(). 
> > - gc/cms/compactibleFreeListSpace.cpp: removed the > superfluous > ResourceMark > > - logging/logStream.hpp: enabled the private operator new() > declarations to disable heap allocations for class > LogStream. > I also gave it a try, works fine, if you do new(), now > you get > a linker error. > > The rest of the changes is concerned with the removal of > "LogStreamCHeap" which is not needed anymore. Note that I > found some new instances of "unguarded printing" and I > updated > comments at JDK-8182466 > > >> . > > Thanks & Regards, Thomas > > > This looks great. > > One last thing just hit me: We should have a unit test to cover > the allocation case for LogStream. Preferably one that forces it > to grow twice (once from stackbuffer, and once from heap), > to make > sure we exercise that code. > > > Good idea, especially since I had a bug in the line buffer > handling :P > > > Aha! :) > > > New Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.02/webrev/ > > > > > Delta to last: > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/delta-01-to-02/webrev/ > > > > > > Looks good to me. > > Thanks, > Marcus > > > > Changes: fixed a bug where, on buffer resize in LogStream, the > new buffer size was calculated wrong. Added a test to test > LogStream resizing. > > I ran the gtests on win-x64 and linux-x64, and the jtreg > servicability/logging tests on linux-x64. All good. I am > currently running the whole hotspot jtreg tests, but that will > take a while. 
> > Kind Regards, Thomas > > Thanks, > Marcus > > > > From thomas.stuefe at gmail.com Thu Jun 22 14:46:58 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Jun 2017 16:46:58 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> <79fac42d-1f8f-e1b4-4ea5-f6195dfcf4ad@oracle.com> Message-ID: On Thu, Jun 22, 2017 at 4:13 PM, Erik Helin wrote: > On 06/22/2017 03:54 PM, Thomas St?fe wrote: > > Thank you Marcus! > > > > Would you mind running the change through jtreg? I ran it on Linux x64 > > (hotspot/test branch) and I got a number of errors, but nothing > > suspicious. I repeated a most of the tests with an unpatched VM and got > > the same errors. > > > > There is one suspicious test failing: > > runtime/modules/PatchModule/PatchModuleCDS.java, with "Failed. Execution > > failed: `main' threw exception: java.lang.RuntimeException: > > '[class,load] java.lang.Thread source: jrt:/java.base' missing from > > stdout/stderr". However, the same error also happens without my change. > > Still I would feel better if you would do an independent test. > > > > Thank you! > > > > @ all: > > I'd like to have another reviewer (maybe someone from the gc team? a lot > > of gc log sites changed), and as usual a sponsor. > > I am on it :) I just had a lot of other to stuff to catch up on, but > hopefully I can return to this patch very soon. Given what I've seen so > far, I don't foresee any problems at all with this patch. > > Thanks, > Erik > Thanks, Erik! No rush, take your time. 
..Thomas > > > Kind Regards, Thomas > > > > On Thu, Jun 22, 2017 at 3:33 PM, Marcus Larsson > > > wrote: > > > > > > > > On 2017-06-21 18:16, Thomas St?fe wrote: > > > > Hi Marcus, > > > > On Wed, Jun 21, 2017 at 12:18 PM, Marcus Larsson > > > > > >> wrote: > > > > Hi, > > > > > > On 2017-06-21 09:29, Thomas St?fe wrote: > > > > Hi Marcus, > > > > thank you for reviewing! > > > > New webrev: > > > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917- > refactor-ul-logstream/all.webrev.01/webrev/ > > refactor-ul-logstream/all.webrev.01/webrev/> > > > > refactor-ul-logstream/all.webrev.01/webrev/ > > refactor-ul-logstream/all.webrev.01/webrev/>> > > > > refactor-ul-logstream/all.webrev.01/webrev/ > > refactor-ul-logstream/all.webrev.01/webrev/> > > > > refactor-ul-logstream/all.webrev.01/webrev/ > > refactor-ul-logstream/all.webrev.01/webrev/>>> > > > > Delta to last version: > > > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917- > refactor-ul-logstream/delta-all-00-to-01/webrev/ > > refactor-ul-logstream/delta-all-00-to-01/webrev/> > > > > refactor-ul-logstream/delta-all-00-to-01/webrev/ > > refactor-ul-logstream/delta-all-00-to-01/webrev/>> > > > > refactor-ul-logstream/delta-all-00-to-01/webrev/ > > refactor-ul-logstream/delta-all-00-to-01/webrev/> > > > > > > refactor-ul-logstream/delta-all-00-to-01/webrev/ > > refactor-ul-logstream/delta-all-00-to-01/webrev/>>> > > > > Changes: > > > > - classfile/loaderConstraints.cpp: as you suggested, I > fixed > > up all cases of "superfluous LogStream usage" I found and > > converted them to direct LogTarget::print() calls. > > > > - classfile/sharedPathsMiscInfo.cpp: here I opted for > > removing > > any notion of UL from this method, instead I just hand > in an > > outputStream*. Both the "is_enabled" check and the > LogStream > > creation is now handed in the caller frame. I also added > a > > trailing cr(). 
> > > > - gc/cms/compactibleFreeListSpace.cpp: removed the > > superfluous > > ResourceMark > > > > - logging/logStream.hpp: enabled the private operator > new() > > declarations to disable heap allocations for class > > LogStream. > > I also gave it a try, works fine, if you do new(), now > > you get > > a linker error. > > > > The rest of the changes is concerned with the removal of > > "LogStreamCHeap" which is not needed anymore. Note that I > > found some new instances of "unguarded printing" and I > > updated > > comments at JDK-8182466 > > > > > > >> . > > > > Thanks & Regards, Thomas > > > > > > This looks great. > > > > One last thing just hit me: We should have a unit test to > cover > > the allocation case for LogStream. Preferably one that > forces it > > to grow twice (once from stackbuffer, and once from heap), > > to make > > sure we exercise that code. > > > > > > Good idea, especially since I had a bug in the line buffer > > handling :P > > > > > > Aha! :) > > > > > > New Webrev: > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917- > refactor-ul-logstream/all.webrev.02/webrev/ > > refactor-ul-logstream/all.webrev.02/webrev/> > > refactor-ul-logstream/all.webrev.02/webrev/ > > refactor-ul-logstream/all.webrev.02/webrev/>> > > > > Delta to last: > > http://cr.openjdk.java.net/~stuefe/webrevs/8181917- > refactor-ul-logstream/delta-01-to-02/webrev/ > > refactor-ul-logstream/delta-01-to-02/webrev/> > > refactor-ul-logstream/delta-01-to-02/webrev/ > > refactor-ul-logstream/delta-01-to-02/webrev/>> > > > > > > Looks good to me. > > > > Thanks, > > Marcus > > > > > > > > Changes: fixed a bug where, on buffer resize in LogStream, the > > new buffer size was calculated wrong. Added a test to test > > LogStream resizing. > > > > I ran the gtests on win-x64 and linux-x64, and the jtreg > > servicability/logging tests on linux-x64. All good. I am > > currently running the whole hotspot jtreg tests, but that will > > take a while. 
> > > > Kind Regards, Thomas > > > > Thanks, > > Marcus > > > > > > > > > From alexander.harlap at oracle.com Thu Jun 22 16:16:26 2017 From: alexander.harlap at oracle.com (Alexander Harlap) Date: Thu, 22 Jun 2017 12:16:26 -0400 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests Message-ID: Please review change for JDK-8178507 - co-locate nsk.regression.gc tests JDK-8178507 is last remaining sub-task ofJDK-8178482 - Co-locate remaining GC tests Proposed change located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ Co-located and converted to JTREG tests are: nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java nsk/regression/b4396719 => hotspot/test/gc/TestStackOverflow.java nsk/regression/b4668531 => hotspot/test/gc/TestMemoryInitialization.java nsk/regression/b6186200 => hotspot/test/gc/cslocker/TestCSLocker.java Thank you, Alex From volker.simonis at gmail.com Thu Jun 22 16:28:52 2017 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 22 Jun 2017 18:28:52 +0200 Subject: RFR(S) JDK-8181810 PPC64: Leverage extrdi for bitfield extract In-Reply-To: <19e9eb9d5fec463380398d626ba8c78f@sap.com> References: <2f852cff-938e-b383-cf2b-66f97fe652ff@linux.vnet.ibm.com> <49e34c46-6207-7766-d339-05f1ce6a0eb9@linux.vnet.ibm.com> <19e9eb9d5fec463380398d626ba8c78f@sap.com> Message-ID: Hi Matthew, Martin, thanks for contributing and reviewing this change. It looks good! @Martin: can you please update the copyright year before pushing? Regards, Volker On Tue, Jun 20, 2017 at 11:50 AM, Doerr, Martin wrote: > Hi Matt, > > nice change. I have reviewed it and didn't see any mistakes. We will build and test it, too. > I can also sponsor it after a 2nd review. > > Thanks and best regards, > Martin > > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Matthew Brandyberry > Sent: Montag, 19. 
Juni 2017 18:35 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR(S) JDK-8181810 PPC64: Leverage extrdi for bitfield extract > > Apologies for the bad cut-and-paste job.. the body of this review > request should > read as follows: > > This is a PPC-specific hotspot optimization that leverages the extrdi > instruction > for bitfield extract operations (shift-right and mask-with-and). It > yields a > ~25% improvement measured via a microbenchmark. > > Please review: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8181810 > Webrev: http://cr.openjdk.java.net/~gromero/8181810/v1/ > > Thanks, > Matt > > > On 6/19/17 11:30 AM, Matthew Brandyberry wrote: >> This is a PPC-specific hotspot optimization that leverages the >> mtfprd/mffprd >> instructions for for movement between general purpose and floating point >> registers (rather than through memory). It yields a ~35% improvement >> measured >> via a microbenchmark. >> >> Please review: >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8181809 >> Webrev: http://cr.openjdk.java.net/~gromero/8181809/v1/ >> >> Thanks, >> Matt >> > From martin.doerr at sap.com Thu Jun 22 16:44:50 2017 From: martin.doerr at sap.com (Doerr, Martin) Date: Thu, 22 Jun 2017 16:44:50 +0000 Subject: RFR(S) JDK-8181810 PPC64: Leverage extrdi for bitfield extract In-Reply-To: References: <2f852cff-938e-b383-cf2b-66f97fe652ff@linux.vnet.ibm.com> <49e34c46-6207-7766-d339-05f1ce6a0eb9@linux.vnet.ibm.com> <19e9eb9d5fec463380398d626ba8c78f@sap.com> Message-ID: Hi, I've pushed it with updated copyright. Thanks, Martin -----Original Message----- From: Volker Simonis [mailto:volker.simonis at gmail.com] Sent: Donnerstag, 22. Juni 2017 18:29 To: Doerr, Martin Cc: Matthew Brandyberry ; hotspot-dev at openjdk.java.net Subject: Re: RFR(S) JDK-8181810 PPC64: Leverage extrdi for bitfield extract Hi Matthew, Martin, thanks for contributing and reviewing this change. It looks good! @Martin: can you please update the copyright year before pushing? 
Regards, Volker On Tue, Jun 20, 2017 at 11:50 AM, Doerr, Martin wrote: > Hi Matt, > > nice change. I have reviewed it and didn't see any mistakes. We will build and test it, too. > I can also sponsor it after a 2nd review. > > Thanks and best regards, > Martin > > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Matthew Brandyberry > Sent: Montag, 19. Juni 2017 18:35 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR(S) JDK-8181810 PPC64: Leverage extrdi for bitfield extract > > Apologies for the bad cut-and-paste job.. the body of this review > request should > read as follows: > > This is a PPC-specific hotspot optimization that leverages the extrdi > instruction > for bitfield extract operations (shift-right and mask-with-and). It > yields a > ~25% improvement measured via a microbenchmark. > > Please review: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8181810 > Webrev: http://cr.openjdk.java.net/~gromero/8181810/v1/ > > Thanks, > Matt > > > On 6/19/17 11:30 AM, Matthew Brandyberry wrote: >> This is a PPC-specific hotspot optimization that leverages the >> mtfprd/mffprd >> instructions for for movement between general purpose and floating point >> registers (rather than through memory). It yields a ~35% improvement >> measured >> via a microbenchmark. 
>> >> Please review: >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8181809 >> Webrev: http://cr.openjdk.java.net/~gromero/8181809/v1/ >> >> Thanks, >> Matt >> > From kim.barrett at oracle.com Thu Jun 22 20:54:05 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 22 Jun 2017 16:54:05 -0400 Subject: RFR(XL): 8181449: Fix debug.hpp / globalDefinitions.hpp dependency inversion In-Reply-To: <5588314d-1908-93f5-a7c8-e77a22cee89a@oracle.com> References: <0EC0C8CD-CB86-4A96-B25F-86D9619E2469@oracle.com> <5588314d-1908-93f5-a7c8-e77a22cee89a@oracle.com> Message-ID: > On Jun 22, 2017, at 2:49 AM, Stefan Karlsson wrote: > > Hi Kim, > > On 2017-06-22 02:29, Kim Barrett wrote: >>>> (3) http://cr.openjdk.java.net/~kbarrett/8181449/move_pns/ >>>> a. Moved print_native_stack to VMError class. >>>> b. Removed unused and undefined pd_obfuscate_location. >>>> c. Simplified #ifdef PRODUCT guard in ps(). >>> >>> You removed the call to p->trace_stack(). Was that intentional? >>> >>> if (p->has_last_Java_frame()) { >>> // If the last_Java_fp is set we are in C land and >>> // can call the standard stack_trace function. >>> -#ifdef PRODUCT >>> p->print_stack(); >>> } else { >>> +#ifdef PRODUCT >>> tty->print_cr("Cannot find the last Java frame, printing stack disabled."); >>> #else // !PRODUCT >>> - p->trace_stack(); >>> - } else { >> I kept getting confused by this code while trying to figure out what >> to do with pd_ps, which is why I changed it. Seems I was even more >> confused than I thought, since I failed to see the difference between >> p->print_stack() and p->trace_stack(). Well spotted. I've reverted >> this change. > > OK. I wonder if this code would have been easier to read with to #ifdef PRODUCT sections. I looked at that before simply reverting, and it was still pretty ugly. And I?d have needed to figure out how to test. I shouldn?t have touched this code as part of this set of changes in the first place; I?m just glad you spotted the mistake. 
>>> http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/globalDefinitions_gcc.hpp.udiff.html >>> >>> Maybe also get rid of the following line: >>> //---------------------------------------------------------------------------------------------------- >>> -// Debugging >> Gone now. > > I wasn't particularly clear on this request, and you cut out the following part of my mail: > "or get rid of the following stray new line so that the code in the different globalDefinitions files are more consistent.? Yeah, I misunderstood what you were asking for. What I've ended up doing is deleting all the "//--- ... ---" lines in the globalDefinitions_xxx.hpp files. There weren't very many, all in this general vicinity, and just making the one near the nanness comment consistent would have left other similarly odd file to file variations. > http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.01.inc/src/share/vm/utilities/globalDefinitions.hpp.udiff.html > > 31 > 32 #include TARGET_COMPILER_HEADER(utilities/globalDefinitions) > 33 // Defaults for macros that might be defined per compiler. > 34 #ifndef NOINLINE > 35 #define NOINLINE > 36 #endif > > Would you mind adding a newline between 32 and 32 before pushing? Done. I also updated the copyrights for globalDefinitions.hpp and macros.hpp, which got lost somewhere in the patch splitting. 
From stefan.karlsson at oracle.com Thu Jun 22 21:06:28 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Jun 2017 23:06:28 +0200 Subject: RFR(XL): 8181449: Fix debug.hpp / globalDefinitions.hpp dependency inversion In-Reply-To: References: <0EC0C8CD-CB86-4A96-B25F-86D9619E2469@oracle.com> <5588314d-1908-93f5-a7c8-e77a22cee89a@oracle.com> Message-ID: <73aefc6f-bbf5-984a-db87-e80171f7f7b1@oracle.com> On 2017-06-22 22:54, Kim Barrett wrote: >> On Jun 22, 2017, at 2:49 AM, Stefan Karlsson wrote: >> >> Hi Kim, >> >> On 2017-06-22 02:29, Kim Barrett wrote: >>>>> (3) http://cr.openjdk.java.net/~kbarrett/8181449/move_pns/ >>>>> a. Moved print_native_stack to VMError class. >>>>> b. Removed unused and undefined pd_obfuscate_location. >>>>> c. Simplified #ifdef PRODUCT guard in ps(). >>>> You removed the call to p->trace_stack(). Was that intentional? >>>> >>>> if (p->has_last_Java_frame()) { >>>> // If the last_Java_fp is set we are in C land and >>>> // can call the standard stack_trace function. >>>> -#ifdef PRODUCT >>>> p->print_stack(); >>>> } else { >>>> +#ifdef PRODUCT >>>> tty->print_cr("Cannot find the last Java frame, printing stack disabled."); >>>> #else // !PRODUCT >>>> - p->trace_stack(); >>>> - } else { >>> I kept getting confused by this code while trying to figure out what >>> to do with pd_ps, which is why I changed it. Seems I was even more >>> confused than I thought, since I failed to see the difference between >>> p->print_stack() and p->trace_stack(). Well spotted. I've reverted >>> this change. >> OK. I wonder if this code would have been easier to read with to #ifdef PRODUCT sections. to -> two :) > I looked at that before simply reverting, and it was still pretty ugly. And I?d have > needed to figure out how to test. I shouldn?t have touched this code as part of > this set of changes in the first place; I?m just glad you spotted the mistake. OK. 
> >>>> http://cr.openjdk.java.net/~kbarrett/8181449/target_macros/src/share/vm/utilities/globalDefinitions_gcc.hpp.udiff.html >>>> >>>> Maybe also get rid of the following line: >>>> //---------------------------------------------------------------------------------------------------- >>>> -// Debugging >>> Gone now. >> I wasn't particularly clear on this request, and you cut out the following part of my mail: >> "or get rid of the following stray new line so that the code in the different globalDefinitions files are more consistent.? > Yeah, I misunderstood what you were asking for. > > What I've ended up doing is deleting all the "//--- ... ---" lines in > the globalDefinitions_xxx.hpp files. There weren't very many, all in > this general vicinity, and just making the one near the nanness comment > consistent would have left other similarly odd file to file variations. > >> http://cr.openjdk.java.net/~kbarrett/8181449/hotspot.01.inc/src/share/vm/utilities/globalDefinitions.hpp.udiff.html >> >> 31 >> 32 #include TARGET_COMPILER_HEADER(utilities/globalDefinitions) >> 33 // Defaults for macros that might be defined per compiler. >> 34 #ifndef NOINLINE >> 35 #define NOINLINE >> 36 #endif >> >> Would you mind adding a newline between 32 and 32 before pushing? 32 and 33 ... > Done. > > I also updated the copyrights for globalDefinitions.hpp and > macros.hpp, which got lost somewhere in the patch splitting. 
Thanks, StefanK > > From mbrandy at linux.vnet.ibm.com Fri Jun 23 02:53:30 2017 From: mbrandy at linux.vnet.ibm.com (Matthew Brandyberry) Date: Thu, 22 Jun 2017 21:53:30 -0500 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <651ebdd4-3854-ac42-8e9c-54df77cbb5fc@linux.vnet.ibm.com> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <2a4fcb315f4d44199e8cc66935886f41@sap.com> <651ebdd4-3854-ac42-8e9c-54df77cbb5fc@linux.vnet.ibm.com> Message-ID: <56073cca-0c7a-0436-4e95-6d74a1bbe404@linux.vnet.ibm.com> Updated webrev: http://cr.openjdk.java.net/~gromero/8181809/v2/ See below for responses inline. On 6/20/17 8:38 AM, Matthew Brandyberry wrote: > Hi Martin, > > Thanks for the review. I'll take a look at these areas and report > back -- especially the integration into C1. > > On 6/20/17 8:33 AM, Doerr, Martin wrote: >> Hi Matt, >> >> thanks for providing this webrev. I had already thought about using >> these instructions for this purpose and your change matches pretty >> much what I'd do. >> >> Here a couple of comments: >> ppc.ad: >> This was a lot of work. Thanks for doing it. >> effect(DEF dst, USE src); is redundant if a match rule match(Set dst >> (MoveL2D src)); exists. Fixed. >> >> vm_version: >> This part is in conflict with Michihiro's change which is already >> pushed in jdk10, but it's trivial to resolve. I'm ok with using >> has_vpmsumb() for has_mtfprd(). In the past, we sometimes had trouble >> with assuming that a certain Power processor supports all new >> instructions if it supports certain ones. We also use the hotspot >> code on as400 where certain instruction subsets were disabled while >> other Power 8 instructions were usable. Maybe you can double-check if >> there may exist configurations in which has_vpmsumb() doesn't match >> has_mtfprd(). I could not find evidence of any config that includes vpmsumb but not mtfprd. >> >> C1: >> It should also be possible to use the instructions in C1 compiler. 
>> Maybe you would like to take a look at it as well and see if it can >> be done with feasible effort. >> Here are some hints: >> The basic decisions are made in LIRGenerator::do_Convert. You could >> skip the force_to_spill or must_start_in_memory steps. >> The final assembly code gets emitted in LIR_Assembler::emit_opConvert >> where you could replace the store instructions. >> For testing, you can use -XX:TieredStopAtLevel=1, for example. Done. Please take a look. >> >> Thanks and best regards, >> Martin >> >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of Matthew Brandyberry >> Sent: Montag, 19. Juni 2017 18:28 >> To: ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net >> Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 >> >> This is a PPC-specific hotspot optimization that leverages the >> mtfprd/mffprd instructions for for movement between general purpose and >> floating point registers (rather than through memory). It yields a ~35% >> improvement measured via a microbenchmark. Please review: Bug: >> https://bugs.openjdk.java.net/browse/JDK-8181809 Webrev: >> http://cr.openjdk.java.net/~gromero/8181809/v1/ Thanks, Matt >> >> > From martin.doerr at sap.com Fri Jun 23 09:30:08 2017 From: martin.doerr at sap.com (Doerr, Martin) Date: Fri, 23 Jun 2017 09:30:08 +0000 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <56073cca-0c7a-0436-4e95-6d74a1bbe404@linux.vnet.ibm.com> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <2a4fcb315f4d44199e8cc66935886f41@sap.com> <651ebdd4-3854-ac42-8e9c-54df77cbb5fc@linux.vnet.ibm.com> <56073cca-0c7a-0436-4e95-6d74a1bbe404@linux.vnet.ibm.com> Message-ID: Excellent. Thanks for the update. The C1 part looks good, too. Also, thanks for checking "I could not find evidence of any config that includes vpmsumb but not mtfprd." 
There are only a few formally required things: - The new C1 code contains Tab characters. It's not possible to push it without fixing this. - Copyright messages should be updated. - Minor resolution to get vm_version_ppc applied to recent jdk10/hs. If no other changes get requested, I can handle these issues this time before pushing. But we need another review, first. Thanks and best regards, Martin -----Original Message----- From: Matthew Brandyberry [mailto:mbrandy at linux.vnet.ibm.com] Sent: Freitag, 23. Juni 2017 04:54 To: Doerr, Martin ; ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net Subject: Re: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 Updated webrev: http://cr.openjdk.java.net/~gromero/8181809/v2/ See below for responses inline. On 6/20/17 8:38 AM, Matthew Brandyberry wrote: > Hi Martin, > > Thanks for the review. I'll take a look at these areas and report > back -- especially the integration into C1. > > On 6/20/17 8:33 AM, Doerr, Martin wrote: >> Hi Matt, >> >> thanks for providing this webrev. I had already thought about using >> these instructions for this purpose and your change matches pretty >> much what I'd do. >> >> Here a couple of comments: >> ppc.ad: >> This was a lot of work. Thanks for doing it. >> effect(DEF dst, USE src); is redundant if a match rule match(Set dst >> (MoveL2D src)); exists. Fixed. >> >> vm_version: >> This part is in conflict with Michihiro's change which is already >> pushed in jdk10, but it's trivial to resolve. I'm ok with using >> has_vpmsumb() for has_mtfprd(). In the past, we sometimes had trouble >> with assuming that a certain Power processor supports all new >> instructions if it supports certain ones. We also use the hotspot >> code on as400 where certain instruction subsets were disabled while >> other Power 8 instructions were usable. Maybe you can double-check if >> there may exist configurations in which has_vpmsumb() doesn't match >> has_mtfprd(). 
I could not find evidence of any config that includes vpmsumb but not mtfprd. >> >> C1: >> It should also be possible to use the instructions in C1 compiler. >> Maybe you would like to take a look at it as well and see if it can >> be done with feasible effort. >> Here are some hints: >> The basic decisions are made in LIRGenerator::do_Convert. You could >> skip the force_to_spill or must_start_in_memory steps. >> The final assembly code gets emitted in LIR_Assembler::emit_opConvert >> where you could replace the store instructions. >> For testing, you can use -XX:TieredStopAtLevel=1, for example. Done. Please take a look. >> >> Thanks and best regards, >> Martin >> >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of Matthew Brandyberry >> Sent: Montag, 19. Juni 2017 18:28 >> To: ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net >> Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 >> >> This is a PPC-specific hotspot optimization that leverages the >> mtfprd/mffprd instructions for for movement between general purpose and >> floating point registers (rather than through memory). It yields a ~35% >> improvement measured via a microbenchmark. Please review: Bug: >> https://bugs.openjdk.java.net/browse/JDK-8181809 Webrev: >> http://cr.openjdk.java.net/~gromero/8181809/v1/ Thanks, Matt >> >> > From mbrandy at linux.vnet.ibm.com Fri Jun 23 16:38:35 2017 From: mbrandy at linux.vnet.ibm.com (Matthew Brandyberry) Date: Fri, 23 Jun 2017 11:38:35 -0500 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <2a4fcb315f4d44199e8cc66935886f41@sap.com> <651ebdd4-3854-ac42-8e9c-54df77cbb5fc@linux.vnet.ibm.com> <56073cca-0c7a-0436-4e95-6d74a1bbe404@linux.vnet.ibm.com> Message-ID: <20c43bf4-a66b-2cc8-e62f-d58eb66df278@linux.vnet.ibm.com> Thanks Martin. 
Are there tools to help detect formatting errors like the tab characters? I'll keep an eye on this to see if I need to do anything else. -Matt On 6/23/17 4:30 AM, Doerr, Martin wrote: > Excellent. Thanks for the update. The C1 part looks good, too. > > Also, thanks for checking "I could not find evidence of any config that includes vpmsumb but not > mtfprd." > > There are only a few formally required things: > - The new C1 code contains Tab characters. It's not possible to push it without fixing this. > - Copyright messages should be updated. > - Minor resolution to get vm_version_ppc applied to recent jdk10/hs. > > If no other changes get requested, I can handle these issues this time before pushing. > But we need another review, first. > > Thanks and best regards, > Martin > > > -----Original Message----- > From: Matthew Brandyberry [mailto:mbrandy at linux.vnet.ibm.com] > Sent: Freitag, 23. Juni 2017 04:54 > To: Doerr, Martin ; ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net > Subject: Re: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 > > Updated webrev: http://cr.openjdk.java.net/~gromero/8181809/v2/ > > See below for responses inline. > > On 6/20/17 8:38 AM, Matthew Brandyberry wrote: >> Hi Martin, >> >> Thanks for the review. I'll take a look at these areas and report >> back -- especially the integration into C1. >> >> On 6/20/17 8:33 AM, Doerr, Martin wrote: >>> Hi Matt, >>> >>> thanks for providing this webrev. I had already thought about using >>> these instructions for this purpose and your change matches pretty >>> much what I'd do. >>> >>> Here a couple of comments: >>> ppc.ad: >>> This was a lot of work. Thanks for doing it. >>> effect(DEF dst, USE src); is redundant if a match rule match(Set dst >>> (MoveL2D src)); exists. > Fixed. >>> vm_version: >>> This part is in conflict with Michihiro's change which is already >>> pushed in jdk10, but it's trivial to resolve. 
I'm ok with using >>> has_vpmsumb() for has_mtfprd(). In the past, we sometimes had trouble >>> with assuming that a certain Power processor supports all new >>> instructions if it supports certain ones. We also use the hotspot >>> code on as400 where certain instruction subsets were disabled while >>> other Power 8 instructions were usable. Maybe you can double-check if >>> there may exist configurations in which has_vpmsumb() doesn't match >>> has_mtfprd(). > I could not find evidence of any config that includes vpmsumb but not > mtfprd. >>> C1: >>> It should also be possible to use the instructions in C1 compiler. >>> Maybe you would like to take a look at it as well and see if it can >>> be done with feasible effort. >>> Here are some hints: >>> The basic decisions are made in LIRGenerator::do_Convert. You could >>> skip the force_to_spill or must_start_in_memory steps. >>> The final assembly code gets emitted in LIR_Assembler::emit_opConvert >>> where you could replace the store instructions. >>> For testing, you can use -XX:TieredStopAtLevel=1, for example. > Done. Please take a look. >>> Thanks and best regards, >>> Martin >>> >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >>> Behalf Of Matthew Brandyberry >>> Sent: Montag, 19. Juni 2017 18:28 >>> To: ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net >>> Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 >>> >>> This is a PPC-specific hotspot optimization that leverages the >>> mtfprd/mffprd instructions for for movement between general purpose and >>> floating point registers (rather than through memory). It yields a ~35% >>> improvement measured via a microbenchmark. 
Please review: Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8181809 Webrev: >>> http://cr.openjdk.java.net/~gromero/8181809/v1/ Thanks, Matt >>> >>> From leonid.mesnik at oracle.com Fri Jun 23 22:18:26 2017 From: leonid.mesnik at oracle.com (Leonid Mesnik) Date: Fri, 23 Jun 2017 15:18:26 -0700 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: References: Message-ID: Hi Basically the changes look good. Below are some comments: > On Jun 22, 2017, at 9:16 AM, Alexander Harlap wrote: > > Please review change for JDK-8178507 - co-locate nsk.regression.gc tests > > JDK-8178507 is the last remaining sub-task of JDK-8178482 - Co-locate remaining GC tests > > > Proposed change located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ > > Co-located and converted to JTREG tests are: > > nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java The out variable is not used and the return code is not checked in method 'run'. Wouldn't it be simpler just to move println into main and remove method 'run' completely? > > nsk/regression/b4396719 => hotspot/test/gc/TestStackOverflow.java The method 'run' always returns 0. It would be better to make it void or just remove it. The test never throws any exception, so it makes sense to note in the comments that the test verifies only that the VM doesn't crash but throws the expected Error. > > nsk/regression/b4668531 => hotspot/test/gc/TestMemoryInitialization.java The variable buffer is 'read-only'. It makes sense to make the variable 'buffer' a public static member of class TestMemoryInitialization, so the compiler cannot optimize away its usage during any optimization like escape analysis. > > nsk/regression/b6186200 => hotspot/test/gc/cslocker/TestCSLocker.java > Port looks good. It seems that the test doesn't verify that the lock really happened. Could this be improved as part of this fix or by filing a separate RFE?
Leonid > > Thank you, > > Alex > > From coleen.phillimore at oracle.com Fri Jun 23 23:42:17 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 23 Jun 2017 19:42:17 -0400 Subject: RFR (L) 7133093: Improve system dictionary performance Message-ID: <89f0b98c-3cbd-7b87-d76b-5e89a5a676fb@oracle.com> Summary: Implement one dictionary per ClassLoaderData for faster lookup and removal during class unloading See RFE for more details. open webrev at http://cr.openjdk.java.net/~coleenp/7133093.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-7133093 Tested with full "nightly" run in rbt, plus locally class loading and unloading tests: jtreg hotspot/test/runtime/ClassUnload jtreg hotspot/test/runtime/modules jtreg hotspot/test/gc/class_unloading make test-hotspot-closed-tonga FILTER=quick TEST_JOBS=4 TEST=vm.parallel_class_loading csh ~/testing/run_jck9 (vm/lang/java_lang) runThese -jck - uses class loader isolation to run each jck test and unloads tests when done (at -gc:5 intervals) Thanks, Coleen From martin.doerr at sap.com Mon Jun 26 08:44:11 2017 From: martin.doerr at sap.com (Doerr, Martin) Date: Mon, 26 Jun 2017 08:44:11 +0000 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <20c43bf4-a66b-2cc8-e62f-d58eb66df278@linux.vnet.ibm.com> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <2a4fcb315f4d44199e8cc66935886f41@sap.com> <651ebdd4-3854-ac42-8e9c-54df77cbb5fc@linux.vnet.ibm.com> <56073cca-0c7a-0436-4e95-6d74a1bbe404@linux.vnet.ibm.com> <20c43bf4-a66b-2cc8-e62f-d58eb66df278@linux.vnet.ibm.com> Message-ID: <0f82ddbcf57348e8ac6e6cd9e51674f3@sap.com> Hi Matt, you can run the pre-push check stand-alone: hg jcheck See: http://openjdk.java.net/projects/code-tools/jcheck/ I just had to add the commit message: 8181809: PPC64: Leverage mtfprd/mffprd on POWER8 Reviewed-by: mdoerr Contributed-by: Matthew Brandyberry (Note that the ':' after the bug id is important.) 
and replace the Tabs the 2 C1 files to get it passing. (I think that "Illegal tag name" warnings can be ignored.) So only the copyright dates are missing which are not checked by jcheck. But I don't need a new webrev if that's all which needs to be changed. Best regards, Martin -----Original Message----- From: Matthew Brandyberry [mailto:mbrandy at linux.vnet.ibm.com] Sent: Freitag, 23. Juni 2017 18:39 To: Doerr, Martin ; ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net Subject: Re: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 Thanks Martin. Are there tools to help detect formatting errors like the tab characters? I'll keep an eye on this to see if I need to do anything else. -Matt On 6/23/17 4:30 AM, Doerr, Martin wrote: > Excellent. Thanks for the update. The C1 part looks good, too. > > Also, thanks for checking "I could not find evidence of any config that includes vpmsumb but not > mtfprd." > > There are only a few formally required things: > - The new C1 code contains Tab characters. It's not possible to push it without fixing this. > - Copyright messages should be updated. > - Minor resolution to get vm_version_ppc applied to recent jdk10/hs. > > If no other changes get requested, I can handle these issues this time before pushing. > But we need another review, first. > > Thanks and best regards, > Martin > > > -----Original Message----- > From: Matthew Brandyberry [mailto:mbrandy at linux.vnet.ibm.com] > Sent: Freitag, 23. Juni 2017 04:54 > To: Doerr, Martin ; ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net > Subject: Re: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 > > Updated webrev: http://cr.openjdk.java.net/~gromero/8181809/v2/ > > See below for responses inline. > > On 6/20/17 8:38 AM, Matthew Brandyberry wrote: >> Hi Martin, >> >> Thanks for the review. I'll take a look at these areas and report >> back -- especially the integration into C1. 
>> >> On 6/20/17 8:33 AM, Doerr, Martin wrote: >>> Hi Matt, >>> >>> thanks for providing this webrev. I had already thought about using >>> these instructions for this purpose and your change matches pretty >>> much what I'd do. >>> >>> Here a couple of comments: >>> ppc.ad: >>> This was a lot of work. Thanks for doing it. >>> effect(DEF dst, USE src); is redundant if a match rule match(Set dst >>> (MoveL2D src)); exists. > Fixed. >>> vm_version: >>> This part is in conflict with Michihiro's change which is already >>> pushed in jdk10, but it's trivial to resolve. I'm ok with using >>> has_vpmsumb() for has_mtfprd(). In the past, we sometimes had trouble >>> with assuming that a certain Power processor supports all new >>> instructions if it supports certain ones. We also use the hotspot >>> code on as400 where certain instruction subsets were disabled while >>> other Power 8 instructions were usable. Maybe you can double-check if >>> there may exist configurations in which has_vpmsumb() doesn't match >>> has_mtfprd(). > I could not find evidence of any config that includes vpmsumb but not > mtfprd. >>> C1: >>> It should also be possible to use the instructions in C1 compiler. >>> Maybe you would like to take a look at it as well and see if it can >>> be done with feasible effort. >>> Here are some hints: >>> The basic decisions are made in LIRGenerator::do_Convert. You could >>> skip the force_to_spill or must_start_in_memory steps. >>> The final assembly code gets emitted in LIR_Assembler::emit_opConvert >>> where you could replace the store instructions. >>> For testing, you can use -XX:TieredStopAtLevel=1, for example. > Done. Please take a look. >>> Thanks and best regards, >>> Martin >>> >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >>> Behalf Of Matthew Brandyberry >>> Sent: Montag, 19. 
Juni 2017 18:28 >>> To: ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net >>> Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 >>> >>> This is a PPC-specific hotspot optimization that leverages the >>> mtfprd/mffprd instructions for for movement between general purpose and >>> floating point registers (rather than through memory). It yields a ~35% >>> improvement measured via a microbenchmark. Please review: Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8181809 Webrev: >>> http://cr.openjdk.java.net/~gromero/8181809/v1/ Thanks, Matt >>> >>> From martin.doerr at sap.com Mon Jun 26 12:47:45 2017 From: martin.doerr at sap.com (Doerr, Martin) Date: Mon, 26 Jun 2017 12:47:45 +0000 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <0f82ddbcf57348e8ac6e6cd9e51674f3@sap.com> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <2a4fcb315f4d44199e8cc66935886f41@sap.com> <651ebdd4-3854-ac42-8e9c-54df77cbb5fc@linux.vnet.ibm.com> <56073cca-0c7a-0436-4e95-6d74a1bbe404@linux.vnet.ibm.com> <20c43bf4-a66b-2cc8-e62f-d58eb66df278@linux.vnet.ibm.com> <0f82ddbcf57348e8ac6e6cd9e51674f3@sap.com> Message-ID: <95d5cb36271e4ebf8398223702b61ac8@sap.com> Hi Matt, after some testing and reviewing the C1 part again, I found 2 bugs: c1_LIRAssembler: is_stack() can't be used for this purpose as the value may be available in a register even though it was forced to stack. I just changed src_in_memory = !VM_Version::has_mtfprd() to make it consistent with LIRGenerator and removed the assertions which have become redundant. c1_LIRGenerator: value.set_destroys_register() is still needed for conversion from FP to GP registers because they kill the src value by fctiwz/fctidz. I just fixed these issues here in a copy of your webrev v2: http://cr.openjdk.java.net/~mdoerr/8181809_ppc64_mtfprd/v2/ Please take a look and use this one for 2nd review. 
Best regards, Martin -----Original Message----- From: Doerr, Martin Sent: Montag, 26. Juni 2017 10:44 To: 'Matthew Brandyberry' ; ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net Subject: RE: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 Hi Matt, you can run the pre-push check stand-alone: hg jcheck See: http://openjdk.java.net/projects/code-tools/jcheck/ I just had to add the commit message: 8181809: PPC64: Leverage mtfprd/mffprd on POWER8 Reviewed-by: mdoerr Contributed-by: Matthew Brandyberry (Note that the ':' after the bug id is important.) and replace the Tabs the 2 C1 files to get it passing. (I think that "Illegal tag name" warnings can be ignored.) So only the copyright dates are missing which are not checked by jcheck. But I don't need a new webrev if that's all which needs to be changed. Best regards, Martin -----Original Message----- From: Matthew Brandyberry [mailto:mbrandy at linux.vnet.ibm.com] Sent: Freitag, 23. Juni 2017 18:39 To: Doerr, Martin ; ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net Subject: Re: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 Thanks Martin. Are there tools to help detect formatting errors like the tab characters? I'll keep an eye on this to see if I need to do anything else. -Matt On 6/23/17 4:30 AM, Doerr, Martin wrote: > Excellent. Thanks for the update. The C1 part looks good, too. > > Also, thanks for checking "I could not find evidence of any config that includes vpmsumb but not > mtfprd." > > There are only a few formally required things: > - The new C1 code contains Tab characters. It's not possible to push it without fixing this. > - Copyright messages should be updated. > - Minor resolution to get vm_version_ppc applied to recent jdk10/hs. > > If no other changes get requested, I can handle these issues this time before pushing. > But we need another review, first. 
> > Thanks and best regards, > Martin > > > -----Original Message----- > From: Matthew Brandyberry [mailto:mbrandy at linux.vnet.ibm.com] > Sent: Freitag, 23. Juni 2017 04:54 > To: Doerr, Martin ; ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net > Subject: Re: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 > > Updated webrev: http://cr.openjdk.java.net/~gromero/8181809/v2/ > > See below for responses inline. > > On 6/20/17 8:38 AM, Matthew Brandyberry wrote: >> Hi Martin, >> >> Thanks for the review. I'll take a look at these areas and report >> back -- especially the integration into C1. >> >> On 6/20/17 8:33 AM, Doerr, Martin wrote: >>> Hi Matt, >>> >>> thanks for providing this webrev. I had already thought about using >>> these instructions for this purpose and your change matches pretty >>> much what I'd do. >>> >>> Here a couple of comments: >>> ppc.ad: >>> This was a lot of work. Thanks for doing it. >>> effect(DEF dst, USE src); is redundant if a match rule match(Set dst >>> (MoveL2D src)); exists. > Fixed. >>> vm_version: >>> This part is in conflict with Michihiro's change which is already >>> pushed in jdk10, but it's trivial to resolve. I'm ok with using >>> has_vpmsumb() for has_mtfprd(). In the past, we sometimes had trouble >>> with assuming that a certain Power processor supports all new >>> instructions if it supports certain ones. We also use the hotspot >>> code on as400 where certain instruction subsets were disabled while >>> other Power 8 instructions were usable. Maybe you can double-check if >>> there may exist configurations in which has_vpmsumb() doesn't match >>> has_mtfprd(). > I could not find evidence of any config that includes vpmsumb but not > mtfprd. >>> C1: >>> It should also be possible to use the instructions in C1 compiler. >>> Maybe you would like to take a look at it as well and see if it can >>> be done with feasible effort. 
>>> Here are some hints: >>> The basic decisions are made in LIRGenerator::do_Convert. You could >>> skip the force_to_spill or must_start_in_memory steps. >>> The final assembly code gets emitted in LIR_Assembler::emit_opConvert >>> where you could replace the store instructions. >>> For testing, you can use -XX:TieredStopAtLevel=1, for example. > Done. Please take a look. >>> Thanks and best regards, >>> Martin >>> >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >>> Behalf Of Matthew Brandyberry >>> Sent: Montag, 19. Juni 2017 18:28 >>> To: ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net >>> Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 >>> >>> This is a PPC-specific hotspot optimization that leverages the >>> mtfprd/mffprd instructions for for movement between general purpose and >>> floating point registers (rather than through memory). It yields a ~35% >>> improvement measured via a microbenchmark. Please review: Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8181809 Webrev: >>> http://cr.openjdk.java.net/~gromero/8181809/v1/ Thanks, Matt >>> >>> From robbin.ehn at oracle.com Mon Jun 26 13:17:57 2017 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 26 Jun 2017 15:17:57 +0200 Subject: RFR: 8180421: Change default value of BiasedLockingStartupDelay to 0 Message-ID: <35ec59f7-8ffd-a06a-7ad5-4988c4024d5a@oracle.com> Hi all, please review. On behalf of Stefan J, this patch changes the default value of BiasedLockingStartupDelay to 0. "The delay is however a problem for the some of the GC algorithms that use the mark-word." "Benchmark runs doesn't show any regressions for either startup times or steady state performance when setting it to 0." CSR: https://bugs.openjdk.java.net/browse/JDK-8181778 Issue: https://bugs.openjdk.java.net/browse/JDK-8180421 Thanks! 
/Robbin diff -r 26a2358e2796 src/share/vm/runtime/globals.hpp --- a/src/share/vm/runtime/globals.hpp Fri Jun 23 15:16:23 2017 -0700 +++ b/src/share/vm/runtime/globals.hpp Mon Jun 26 15:03:34 2017 +0200 @@ -1307,1 +1307,1 @@ - product(intx, BiasedLockingStartupDelay, 4000, \ + product(intx, BiasedLockingStartupDelay, 0, \ From erik.helin at oracle.com Mon Jun 26 15:28:11 2017 From: erik.helin at oracle.com (Erik Helin) Date: Mon, 26 Jun 2017 17:28:11 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: <28382d8f-1b63-9475-91eb-a03166430915@oracle.com> On 06/21/2017 06:16 PM, Thomas St?fe wrote: > New Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.02/webrev/ > Just a quick, very minor, nit (still reviewing it all): --- old/src/share/vm/logging/log.hpp 2017-06-21 17:40:35.171829500 +0200 +++ new/src/share/vm/logging/log.hpp 2017-06-21 17:40:34.157130800 +0200 @@ -105,10 +105,6 @@ // #define LogTarget(level, ...) LogTargetImpl -// Forward declaration to decouple this file from the outputStream API. -class outputStream; -outputStream* create_log_stream(LogLevelType level, LogTagSet* tagset); - template class LogTargetImpl; @@ -173,9 +169,6 @@ static bool is_##name() { \ return is_level(LogLevel::level); \ } \ - static outputStream* name##_stream() { \ - return create_log_stream(LogLevel::level, &LogTagSetMapping::tagset()); \ - } \ static LogTargetImpl* name() { \ return (LogTargetImpl*)NULL; \ } @@ -204,9 +197,8 @@ va_end(args); } - static outputStream* stream() { - return create_log_stream(level, &LogTagSetMapping::tagset()); - } }; + + Would you please revert those two empty lines you added? No need to re-review. 
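The name##_stream() removal Erik quotes above trades streams handed out by a factory (and backed by a shared resource area) for stream objects the caller owns. A standalone Java analogue of that ownership model (illustrative only — HotSpot's actual replacement is the C++ LogStream class; the names here are invented):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Illustrative Java analogue (HotSpot's actual API is the C++ LogStream
// class): instead of a factory handing out a stream backed by a shared
// resource arena, each logging site owns a short-lived, scope-bound buffer
// that is released when the call returns.
public class ScopedStreamSketch {
    static String logLine(String tag, String msg) {
        StringWriter buf = new StringWriter();   // per-call buffer
        PrintWriter out = new PrintWriter(buf);
        out.print("[" + tag + "] " + msg);
        out.flush();
        return buf.toString();                   // buffer dies with the call
    }

    public static void main(String[] args) {
        String line = logLine("gc", "heap resized");
        if (!line.equals("[gc] heap resized")) {
            throw new AssertionError(line);
        }
        System.out.println("ok");
    }
}
```

Scope-bound buffers avoid the lifetime questions a shared arena raises (who resets it, and when), which is the motivation behind the resource-area refactor under review.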
Now I'm gonna dive back into this patch :) Thanks, Erik > Kind Regards, Thomas > > > > Thanks, > Marcus > > From erik.helin at oracle.com Mon Jun 26 15:42:22 2017 From: erik.helin at oracle.com (Erik Helin) Date: Mon, 26 Jun 2017 17:42:22 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: <28382d8f-1b63-9475-91eb-a03166430915@oracle.com> References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> <28382d8f-1b63-9475-91eb-a03166430915@oracle.com> Message-ID: Seems like you have an extra newline at the end of logStream.hpp as well :) Erik On 06/26/2017 05:28 PM, Erik Helin wrote: > On 06/21/2017 06:16 PM, Thomas St?fe wrote: >> New Webrev: >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.02/webrev/ >> >> > > Just a quick, very minor, nit (still reviewing it all): > > --- old/src/share/vm/logging/log.hpp 2017-06-21 17:40:35.171829500 +0200 > +++ new/src/share/vm/logging/log.hpp 2017-06-21 17:40:34.157130800 +0200 > @@ -105,10 +105,6 @@ > // > #define LogTarget(level, ...) LogTargetImpl LOG_TAGS(__VA_ARGS__)> > > -// Forward declaration to decouple this file from the outputStream API. > -class outputStream; > -outputStream* create_log_stream(LogLevelType level, LogTagSet* tagset); > - > template T2, LogTagType T3, LogTagType T4, LogTagType GuardTag> > class LogTargetImpl; > > @@ -173,9 +169,6 @@ > static bool is_##name() { \ > return is_level(LogLevel::level); \ > } \ > - static outputStream* name##_stream() { \ > - return create_log_stream(LogLevel::level, &LogTagSetMapping T2, T3, T4>::tagset()); \ > - } \ > static LogTargetImpl* > name() { \ > return (LogTargetImpl GuardTag>*)NULL; \ > } > @@ -204,9 +197,8 @@ > va_end(args); > } > > - static outputStream* stream() { > - return create_log_stream(level, &LogTagSetMapping T4>::tagset()); > - } > }; > > + > + > > Would you please revert those two empty lines you added? No need to > re-review. 
Now I'm gonna dive back into this patch :) > > Thanks, > Erik > >> Kind Regards, Thomas >> >> >> >> Thanks, >> Marcus >> >> From alexander.harlap at oracle.com Mon Jun 26 20:04:35 2017 From: alexander.harlap at oracle.com (Alexander Harlap) Date: Mon, 26 Jun 2017 16:04:35 -0400 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: References: Message-ID: Hi Leonid, I accommodated your suggestions. New version of changeset located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/ Alex On 6/23/2017 6:18 PM, Leonid Mesnik wrote: > Hi > > Basically changes looks good. Below are some comments: > >> On Jun 22, 2017, at 9:16 AM, Alexander Harlap wrote: >> >> Please review change for JDK-8178507 - co-locate nsk.regression.gc tests >> >> JDK-8178507 is last remaining sub-task ofJDK-8178482 - Co-locate remaining GC tests >> >> >> Proposed change located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ >> >> Co-located and converted to JTREG tests are: >> >> nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java > The out variable is no used and return code is not checked in method ?run?. Wouldn't it simpler just to move println into main and remove method ?run? completely? >> nsk/regression/b4396719 => hotspot/test/gc/TestStackOverflow.java > The method ?run? always returns 0. It would be better to make it void or just remove it. Test never throws any exception. So it make a sense to write in comments that test verifies only that VM doesn?t crash but throw expected Error. > >> nsk/regression/b4668531 => hotspot/test/gc/TestMemoryInitialization.java > The variable buffer is ?read-only?. It make a sense to make variable ?buffer' public static member of class TestMemoryInitialization. So compiler could not optimize it usage during any optimization like escape analysis. >> nsk/regression/b6186200 => hotspot/test/gc/cslocker/TestCSLocker.java >> > Port looks good. 
It seems that test doesn?t verify that lock really happened. Could be this improved as a part of this fix or by filing separate RFE? > > Leonid >> Thank you, >> >> Alex >> >> From leonid.mesnik at oracle.com Mon Jun 26 20:11:01 2017 From: leonid.mesnik at oracle.com (Leonid Mesnik) Date: Mon, 26 Jun 2017 13:11:01 -0700 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: References: Message-ID: Hi New changes looks good for me. Please get review from Reviewer. The only 2 small nits which don?t require separate review from me: http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestFullGCALot.java.html typo in 37 System.out.println("Hellow world!"); http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html return is not needed in 58 return; Thanks Leonid > On Jun 26, 2017, at 1:04 PM, Alexander Harlap wrote: > > Hi Leonid, > > I accommodated your suggestions. > > New version of changeset located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/ > > > Alex > > > On 6/23/2017 6:18 PM, Leonid Mesnik wrote: >> Hi >> >> Basically changes looks good. Below are some comments: >> >>> On Jun 22, 2017, at 9:16 AM, Alexander Harlap wrote: >>> >>> Please review change for JDK-8178507 - co-locate nsk.regression.gc tests >>> >>> JDK-8178507 is last remaining sub-task ofJDK-8178482 - Co-locate remaining GC tests >>> >>> >>> Proposed change located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ >>> >>> Co-located and converted to JTREG tests are: >>> >>> nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java >> The out variable is no used and return code is not checked in method ?run?. Wouldn't it simpler just to move println into main and remove method ?run? completely? >>> nsk/regression/b4396719 => hotspot/test/gc/TestStackOverflow.java >> The method ?run? always returns 0. It would be better to make it void or just remove it. Test never throws any exception. 
So it make a sense to write in comments that test verifies only that VM doesn?t crash but throw expected Error. >> >>> nsk/regression/b4668531 => hotspot/test/gc/TestMemoryInitialization.java >> The variable buffer is ?read-only?. It make a sense to make variable ?buffer' public static member of class TestMemoryInitialization. So compiler could not optimize it usage during any optimization like escape analysis. >>> nsk/regression/b6186200 => hotspot/test/gc/cslocker/TestCSLocker.java >>> >> Port looks good. It seems that test doesn?t verify that lock really happened. Could be this improved as a part of this fix or by filing separate RFE? >> >> Leonid >>> Thank you, >>> >>> Alex >>> >>> > From igor.ignatyev at oracle.com Mon Jun 26 20:32:39 2017 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Mon, 26 Jun 2017 13:32:39 -0700 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: References: Message-ID: Hi Alexander, besides the small nits which Leonid mentioned, there is one in http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html: > 28 * @summary Test verifies only that VM doesn???t crash but throw expected Error. I guess "doesn???t" is 'doesn't' w/ a fancy apostrophe. otherwise looks good to me, Reviewed. -- Igor > On Jun 26, 2017, at 1:11 PM, Leonid Mesnik wrote: > > Hi > > New changes looks good for me. Please get review from Reviewer. > > The only 2 small nits which don?t require separate review from me: > > http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestFullGCALot.java.html > typo in > 37 System.out.println("Hellow world!"); > > http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html > return is not needed in > 58 return; > > Thanks > Leonid >> On Jun 26, 2017, at 1:04 PM, Alexander Harlap wrote: >> >> Hi Leonid, >> >> I accommodated your suggestions. 
>> >> New version of changeset located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/ >> >> >> Alex >> >> >> On 6/23/2017 6:18 PM, Leonid Mesnik wrote: >>> Hi >>> >>> Basically changes looks good. Below are some comments: >>> >>>> On Jun 22, 2017, at 9:16 AM, Alexander Harlap wrote: >>>> >>>> Please review change for JDK-8178507 - co-locate nsk.regression.gc tests >>>> >>>> JDK-8178507 is last remaining sub-task ofJDK-8178482 - Co-locate remaining GC tests >>>> >>>> >>>> Proposed change located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ >>>> >>>> Co-located and converted to JTREG tests are: >>>> >>>> nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java >>> The out variable is no used and return code is not checked in method ?run?. Wouldn't it simpler just to move println into main and remove method ?run? completely? >>>> nsk/regression/b4396719 => hotspot/test/gc/TestStackOverflow.java >>> The method ?run? always returns 0. It would be better to make it void or just remove it. Test never throws any exception. So it make a sense to write in comments that test verifies only that VM doesn?t crash but throw expected Error. >>> >>>> nsk/regression/b4668531 => hotspot/test/gc/TestMemoryInitialization.java >>> The variable buffer is ?read-only?. It make a sense to make variable ?buffer' public static member of class TestMemoryInitialization. So compiler could not optimize it usage during any optimization like escape analysis. >>>> nsk/regression/b6186200 => hotspot/test/gc/cslocker/TestCSLocker.java >>>> >>> Port looks good. It seems that test doesn?t verify that lock really happened. Could be this improved as a part of this fix or by filing separate RFE? 
>>> >>> Leonid >>>> Thank you, >>>> >>>> Alex >>>> >>>> >> > From alexander.harlap at oracle.com Mon Jun 26 21:54:36 2017 From: alexander.harlap at oracle.com (Alexander Harlap) Date: Mon, 26 Jun 2017 17:54:36 -0400 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: References: Message-ID: <73e5c873-1ddd-afd7-b958-0367b9376f1d@oracle.com> Thank you Igor and Leonid, I fixed mentioned typos and unnecessary return (see http://cr.openjdk.java.net/~aharlap/8178507/webrev.02/) Do I need more reviews? Alex On 6/26/2017 4:32 PM, Igor Ignatyev wrote: > Hi Alexander, > > besides the small nits which Leonid mentioned, there is one in > http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html: > > >> 28 * @summary Test verifies only that VM doesn???t crash but throw expected Error. > I guess "doesn???t" is 'doesn't' w/ a fancy apostrophe. otherwise > looks good to me, Reviewed. > > -- Igor > >> On Jun 26, 2017, at 1:11 PM, Leonid Mesnik > > wrote: >> >> Hi >> >> New changes looks good for me. Please get review from Reviewer. >> >> The only 2 small nits which don?t require separate review from me: >> >> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestFullGCALot.java.html >> >> > > >> typo in >> 37 System.out.println("Hellow world!"); >> >> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html >> >> > > >> return is not needed in >> 58 return; >> >> Thanks >> Leonid >>> On Jun 26, 2017, at 1:04 PM, Alexander Harlap >>> > >>> wrote: >>> >>> Hi Leonid, >>> >>> I accommodated your suggestions. >>> >>> New version of changeset located at >>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/ >>> >>> >>> >>> Alex >>> >>> >>> On 6/23/2017 6:18 PM, Leonid Mesnik wrote: >>>> Hi >>>> >>>> Basically changes looks good. 
Below are some comments: >>>> >>>>> On Jun 22, 2017, at 9:16 AM, Alexander Harlap >>>>> > >>>>> wrote: >>>>> >>>>> Please review change for JDK-8178507 >>>>> - co-locate >>>>> nsk.regression.gc tests >>>>> >>>>> JDK-8178507 is >>>>> last remaining sub-task ofJDK-8178482 >>>>> - Co-locate >>>>> remaining GC tests >>>>> >>>>> >>>>> Proposed change located at >>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ >>>>> >>>>> >>>>> Co-located and converted to JTREG tests are: >>>>> >>>>> nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java >>>> The out variable is no used and return code is not checked in >>>> method ?run?. Wouldn't it simpler just to move println into main >>>> and remove method ?run? completely? >>>>> nsk/regression/b4396719 => hotspot/test/gc/TestStackOverflow.java >>>> The method ?run? always returns 0. It would be better to make it >>>> void or just remove it. Test never throws any exception. So it make >>>> a sense to write in comments that test verifies only that VM >>>> doesn?t crash but throw expected Error. >>>> >>>>> nsk/regression/b4668531 => >>>>> hotspot/test/gc/TestMemoryInitialization.java >>>> The variable buffer is ?read-only?. It make a sense to make >>>> variable ?buffer' public static member of class >>>> TestMemoryInitialization. So compiler could not optimize it usage >>>> during any optimization like escape analysis. >>>>> nsk/regression/b6186200 => >>>>> hotspot/test/gc/cslocker/TestCSLocker.java >>>>> >>>> Port looks good. It seems that test doesn?t verify that lock really >>>> happened. Could be this improved as a part of this fix or by filing >>>> separate RFE? 
>>>> >>>> Leonid >>>>> Thank you, >>>>> >>>>> Alex >>>>> >>>>> >>> >> > From igor.ignatyev at oracle.com Mon Jun 26 23:04:41 2017 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Mon, 26 Jun 2017 16:04:41 -0700 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: <73e5c873-1ddd-afd7-b958-0367b9376f1d@oracle.com> References: <73e5c873-1ddd-afd7-b958-0367b9376f1d@oracle.com> Message-ID: > On Jun 26, 2017, at 2:54 PM, Alexander Harlap wrote: > > Thank you Igor and Leonid, > > I fixed mentioned typos and unnecessary return (see http://cr.openjdk.java.net/~aharlap/8178507/webrev.02/ ) > perfect. > Do I need more reviews? > no, you can go ahead and integrate it. -- Igor > Alex > > On 6/26/2017 4:32 PM, Igor Ignatyev wrote: >> Hi Alexander, >> >> besides the small nits which Leonid mentioned, there is one in http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html: >>> 28 * @summary Test verifies only that VM doesn???t crash but throw expected Error. >> I guess "doesn???t" is 'doesn't' w/ a fancy apostrophe. otherwise looks good to me, Reviewed. >> >> -- Igor >> >>> On Jun 26, 2017, at 1:11 PM, Leonid Mesnik > wrote: >>> >>> Hi >>> >>> New changes looks good for me. Please get review from Reviewer. >>> >>> The only 2 small nits which don?t require separate review from me: >>> >>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestFullGCALot.java.html > >>> typo in >>> 37 System.out.println("Hellow world!"); >>> >>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html > >>> return is not needed in >>> 58 return; >>> >>> Thanks >>> Leonid >>>> On Jun 26, 2017, at 1:04 PM, Alexander Harlap > wrote: >>>> >>>> Hi Leonid, >>>> >>>> I accommodated your suggestions. 
>>>> >>>> New version of changeset located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/ >>>> >>>> >>>> Alex >>>> >>>> >>>> On 6/23/2017 6:18 PM, Leonid Mesnik wrote: >>>>> Hi >>>>> >>>>> Basically changes looks good. Below are some comments: >>>>> >>>>>> On Jun 22, 2017, at 9:16 AM, Alexander Harlap > wrote: >>>>>> >>>>>> Please review change for JDK-8178507 > - co-locate nsk.regression.gc tests >>>>>> >>>>>> JDK-8178507 > is last remaining sub-task ofJDK-8178482 > - Co-locate remaining GC tests >>>>>> >>>>>> >>>>>> Proposed change located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ >>>>>> >>>>>> Co-located and converted to JTREG tests are: >>>>>> >>>>>> nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java >>>>> The out variable is no used and return code is not checked in method ?run?. Wouldn't it simpler just to move println into main and remove method ?run? completely? >>>>>> nsk/regression/b4396719 => hotspot/test/gc/TestStackOverflow.java >>>>> The method ?run? always returns 0. It would be better to make it void or just remove it. Test never throws any exception. So it make a sense to write in comments that test verifies only that VM doesn?t crash but throw expected Error. >>>>> >>>>>> nsk/regression/b4668531 => hotspot/test/gc/TestMemoryInitialization.java >>>>> The variable buffer is ?read-only?. It make a sense to make variable ?buffer' public static member of class TestMemoryInitialization. So compiler could not optimize it usage during any optimization like escape analysis. >>>>>> nsk/regression/b6186200 => hotspot/test/gc/cslocker/TestCSLocker.java >>>>>> >>>>> Port looks good. It seems that test doesn?t verify that lock really happened. Could be this improved as a part of this fix or by filing separate RFE? 
>>>>> >>>>> Leonid >>>>>> Thank you, >>>>>> >>>>>> Alex >>>>>> >>>>>> >>>> >>> >> > From thomas.stuefe at gmail.com Tue Jun 27 06:24:21 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Jun 2017 08:24:21 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> <28382d8f-1b63-9475-91eb-a03166430915@oracle.com> Message-ID: Thanks Eric! I'll fix this, but will wait for your full review before posting a new webrev. Thanks, Thomas On Mon, Jun 26, 2017 at 5:42 PM, Erik Helin wrote: > Seems like you have an extra newline at the end of logStream.hpp as well :) > > Erik > > > On 06/26/2017 05:28 PM, Erik Helin wrote: > >> On 06/21/2017 06:16 PM, Thomas St?fe wrote: >> >>> New Webrev: >>> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor >>> -ul-logstream/all.webrev.02/webrev/ >>> >>> >>> >> Just a quick, very minor, nit (still reviewing it all): >> >> --- old/src/share/vm/logging/log.hpp 2017-06-21 17:40:35.171829500 >> +0200 >> +++ new/src/share/vm/logging/log.hpp 2017-06-21 17:40:34.157130800 >> +0200 >> @@ -105,10 +105,6 @@ >> // >> #define LogTarget(level, ...) LogTargetImpl> LOG_TAGS(__VA_ARGS__)> >> >> -// Forward declaration to decouple this file from the outputStream API. 
>> -class outputStream; >> -outputStream* create_log_stream(LogLevelType level, LogTagSet* tagset); >> - >> template > T2, LogTagType T3, LogTagType T4, LogTagType GuardTag> >> class LogTargetImpl; >> >> @@ -173,9 +169,6 @@ >> static bool is_##name() { \ >> return is_level(LogLevel::level); \ >> } \ >> - static outputStream* name##_stream() { \ >> - return create_log_stream(LogLevel::level, &LogTagSetMapping> T2, T3, T4>::tagset()); \ >> - } \ >> static LogTargetImpl* >> name() { \ >> return (LogTargetImpl> GuardTag>*)NULL; \ >> } >> @@ -204,9 +197,8 @@ >> va_end(args); >> } >> >> - static outputStream* stream() { >> - return create_log_stream(level, &LogTagSetMapping> T4>::tagset()); >> - } >> }; >> >> + >> + >> >> Would you please revert those two empty lines you added? No need to >> re-review. Now I'm gonna dive back into this patch :) >> >> Thanks, >> Erik >> >> Kind Regards, Thomas >>> >>> >>> >>> Thanks, >>> Marcus >>> >>> >>> From erik.helin at oracle.com Tue Jun 27 06:45:34 2017 From: erik.helin at oracle.com (Erik Helin) Date: Tue, 27 Jun 2017 08:45:34 +0200 Subject: RFR(S) : 8181053 : port basicvmtest to jtreg In-Reply-To: <15a84631-f799-92f8-8a51-bfd3dab6bf90@oracle.com> References: <4DE9B200-A792-452D-9AAD-E1BC1B1E7001@oracle.com> <15a84631-f799-92f8-8a51-bfd3dab6bf90@oracle.com> Message-ID: <95ff2a59-5078-d132-bfea-5c782fa5f34a@oracle.com> Hi Igor, looking at this one extra time, I realized that the following change to hotspot/test/Makefile: -# clienttest (make sure various basic java client options work) - -hotspot_clienttest clienttest: sanitytest - $(RM) $(PRODUCT_HOME)/jre/lib/*/client/classes.jsa - $(RM) $(PRODUCT_HOME)/jre/bin/client/classes.jsa - $(PRODUCT_HOME)/bin/java $(JAVA_OPTIONS) -Xshare:dump - -PHONY_LIST += hotspot_clienttest clienttest actually removes one additional test that isn't covered by your newly added file hotspot/test/sanity/BasicVMTest.java: sanity testing -Xshare:dump. 
Do we want to add a small test in hotspot/test/sanity for -Xshare:dump? Or is this functionality tested elsewhere? A related question: if multiple tests were running concurrently (testing the same JDK), won't there be a race condition with the above test? For example if two JTReg tests are running (and -conc > 1) and both JTReg tests try to remove classes.jsa and then regenerate them, seems like there could be a race? What do you think? Just scrap the -Xshare:dump sanity test or add a JTReg version? Thanks, Erik On 06/14/2017 09:41 AM, Erik Helin wrote: > On 06/14/2017 01:09 AM, Igor Ignatyev wrote: >> http://cr.openjdk.java.net/~iignatyev//8181053/webrev.00/index.html >>> 121 lines changed: 54 ins; 67 del; 0 mod; >> >> Hi all, >> >> could you please review this small patch which introduces jtreg >> version of basicvmtest test? >> >> make version of basicvmtest also included sanity testing for CDS on >> client JVM, but this testing modified the product binaries, so it >> might interfere with results of other tests and is not very reliable. >> I have consulted w/ Misha about this, and he assured me that there are >> other better CDS tests which check the same functionality, so we >> should not lose test coverage there. >> >> webrev: >> http://cr.openjdk.java.net/~iignatyev//8181053/webrev.00/index.html > > Looks good, Reviewed. > > Thank you for this patch Igor! I've been meaning to fix this for a long > time but never got around to it... > > Erik > >> jbs: https://bugs.openjdk.java.net/browse/JDK-8181053 >> testing: jprt, new added test >> >> Thanks, >> -- Igor >> From patric.hedlin at oracle.com Tue Jun 27 12:30:34 2017 From: patric.hedlin at oracle.com (Patric Hedlin) Date: Tue, 27 Jun 2017 14:30:34 +0200 Subject: JDK10/RFR(S): 8182711: Re/Introduce private interface for HW-specific prefetch options in SPARC VM_Version. 
Message-ID: <972dfee0-7a6b-f115-419d-1738ce5006f1@oracle.com> Dear all, I would like to ask for help to review the following change/update: Issue: https://bugs.openjdk.java.net/browse/JDK-8182711 Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8182711/ 8182711: Re/Introduce private interface for HW-specific prefetch options in SPARC VM_Version. This is essentially to revoke the SPARC part of the solution to JDK-8016470. Testing: Testing on JDK10 (jtreg/hotspot) Best regards, Patric From erik.helin at oracle.com Tue Jun 27 12:53:45 2017 From: erik.helin at oracle.com (Erik Helin) Date: Tue, 27 Jun 2017 14:53:45 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: On 06/21/2017 06:16 PM, Thomas Stüfe wrote: > New Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.02/webrev/ I have spent most of my time in log.hpp, logStream.{hpp,cpp}, so I will start with those comments/questions and then continue reviewing the callsites. log.hpp: - two extra newlines near end of file logStream.hpp: - one extra newline near end of file - should you add declarations of delete as well to prevent someone from calling delete on a LogStream pointer? Thinking about this, it seems a bit odd to pass a function an outputStream* (a pointer to a ResourceObj) that doesn't obey the ResourceObj contract. An outputStream* coming from &ls, where ls is a LogStream instance on the stack, is a pointer to a ResourceObj that in practice is not a ResourceObj. Thinking about this some more, I think this is safe. No function that is passed an outputStream* can assume that it can call delete on that pointer. - is 128 bytes as default too much for _smallbuf? Should we start with 64? 
- the keyword `public` is repeated unnecessarily in the LogStream class (a superfluous `public` before the write method) logStream.cpp - one extra newline at line 74 - the pointer returned from os::malloc is not checked. What should we do if a NULL pointer is returned? Print whatever is buffered and then do vm_out_of_memory? - is growing by doubling too aggressive? should the LineBuffer instead grow by chunking (e.g. always add 64 more bytes)? - instead of growing the LineBuffer by reallocating and then copying over the old bytes, should we use a chunked linked list instead (in order to avoid copying the same data multiple times)? The only "requirements" on the LineBuffer are fast append and fast iteration (it doesn't have to support fast index lookup). - LogStream::write is no longer in inline.hpp, is this a potential performance problem? I think not; the old LogStream::write definition was most likely in .inline.hpp because of template usage Great work thus far Thomas, the patch is becoming really solid! If you want to discuss over IM, then you can (as always) find me in #openjdk on irc.oftc.net. Thanks, Erik > Kind Regards, Thomas From patric.hedlin at oracle.com Tue Jun 27 12:59:12 2017 From: patric.hedlin at oracle.com (Patric Hedlin) Date: Tue, 27 Jun 2017 14:59:12 +0200 Subject: JDK10/RFR(M): 8182279: Add HW feature detection support for SPARC Core S5 (on Solaris). Message-ID: <266604ad-be28-bad8-fb49-c0649cde1725@oracle.com> Dear all, I would like to ask for help to perform an early (p)review of the following change/update: Issue: https://bugs.openjdk.java.net/browse/JDK-8182279 Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8182279/ Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8172231 Note 1. This is an early (p)review. Note 2. SPARC Core S5 disclosure has been approved for reviewing on this list(s). Note 3. SPARC Core S5 and M8 servers are currently not available to regular testing. 
8182279: Add HW feature detection support for SPARC Core S5 (on Solaris). Updating SPARC feature/capability detection to include support for the SPARC Core S5. Caveat: This update will introduce some redundancies into the code base, features and definitions currently not used, addressed by subsequent bug or feature updates/patches. Fujitsu HW is treated very conservatively. Testing: Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp), RBT on non-M8 only. Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp), RBT on non-M8 only. Benchmarking: Not performed at this point. Best regards, Patric From glaubitz at physik.fu-berlin.de Tue Jun 27 13:40:55 2017 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 27 Jun 2017 15:40:55 +0200 Subject: [PATCH] linux-sparc build fixes In-Reply-To: <20170622102703.GC18516@physik.fu-berlin.de> References: <20170609102041.GA2477@physik.fu-berlin.de> <20170614120408.GB16230@physik.fu-berlin.de> <5d613e41-a982-ec67-3a48-5befbf3a2808@physik.fu-berlin.de> <31eeeb60-1b0d-a0cb-238c-ca2361430786@oracle.com> <20170619070613.GE28760@physik.fu-berlin.de> <20170622102703.GC18516@physik.fu-berlin.de> Message-ID: <20170627134055.GA30354@physik.fu-berlin.de> On Thu, Jun 22, 2017 at 12:27:03PM +0200, John Paul Adrian Glaubitz wrote: > On Mon, Jun 19, 2017 at 02:48:39PM +0200, Erik Helin wrote: > > >So, should I just run the testsuite with all three patches applied? > > > > Yes, please run the testsuite with the three patches applied. This should > > work (famous last words ;)) for the "native" Linux/sparc64 version of > > hotspot (if not, I would be curious to learn why). To test > > Linux/sparc64+zero you obviously need the fourth patch applied as well. > > Ok, I will give it a try. I will do a hotspot-native build first, run > the testsuite and post the results. Let's tackle zero later. 
I've got > another bunch of zero-related fixes that we're carrying in the Debian > package and that should be upstreamed to be available for other > downstreams as well. Here's a build with the patches applied and the testsuite enabled: > https://people.debian.org/~glaubitz/openjdk-9_9~b170-2_sparc64.build Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From claes.redestad at oracle.com Tue Jun 27 14:10:35 2017 From: claes.redestad at oracle.com (Claes Redestad) Date: Tue, 27 Jun 2017 16:10:35 +0200 Subject: RFR: 8180421: Change default value of BiasedLockingStartupDelay to 0 In-Reply-To: <35ec59f7-8ffd-a06a-7ad5-4988c4024d5a@oracle.com> References: <35ec59f7-8ffd-a06a-7ad5-4988c4024d5a@oracle.com> Message-ID: Hi Robbin, looks good to me! While there are some near-regressions in a few startup tests, they are not statistically significant. At the same time the peak performance and latency benefits on some benchmarks and applications of not delaying is clearly measurable, which makes this seem like the right thing to do to me. /Claes On 06/26/2017 03:17 PM, Robbin Ehn wrote: > Hi all, please review. > > On behalf of Stefan J, this patch changes the default value of > BiasedLockingStartupDelay to 0. > "The delay is however a problem for some of the GC algorithms that > use the mark-word." > "Benchmark runs don't show any regressions for either startup times > or steady state performance when setting it to 0." > > CSR: https://bugs.openjdk.java.net/browse/JDK-8181778 > Issue: https://bugs.openjdk.java.net/browse/JDK-8180421 > > Thanks! 
> > /Robbin > > diff -r 26a2358e2796 src/share/vm/runtime/globals.hpp > --- a/src/share/vm/runtime/globals.hpp Fri Jun 23 15:16:23 2017 -0700 > +++ b/src/share/vm/runtime/globals.hpp Mon Jun 26 15:03:34 2017 +0200 > @@ -1307,1 +1307,1 @@ > - product(intx, BiasedLockingStartupDelay, > 4000, \ > + product(intx, BiasedLockingStartupDelay, > 0, \ From thomas.schatzl at oracle.com Tue Jun 27 14:20:59 2017 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 27 Jun 2017 16:20:59 +0200 Subject: RFR: 8180421: Change default value of BiasedLockingStartupDelay to 0 In-Reply-To: <35ec59f7-8ffd-a06a-7ad5-4988c4024d5a@oracle.com> References: <35ec59f7-8ffd-a06a-7ad5-4988c4024d5a@oracle.com> Message-ID: <1498573259.2924.1.camel@oracle.com> Hi, On Mon, 2017-06-26 at 15:17 +0200, Robbin Ehn wrote: > Hi all, please review. > > On behalf of Stefan J, this patch changes the default value of > BiasedLockingStartupDelay to 0. > "The delay is however a problem for some of the GC algorithms > that use the mark-word." > "Benchmark runs don't show any regressions for either startup times > or steady state performance when setting it to 0." > > CSR: https://bugs.openjdk.java.net/browse/JDK-8181778 > Issue: https://bugs.openjdk.java.net/browse/JDK-8180421 Looks good. Thomas 
>> "The delay is however a problem for some of the GC algorithms >> that use the mark-word." >> >> "Benchmark runs don't show any regressions for either startup times >> or steady state performance when setting it to 0." >> >> CSR: https://bugs.openjdk.java.net/browse/JDK-8181778 >> Issue: https://bugs.openjdk.java.net/browse/JDK-8180421 > > looks good. > > Thomas > From robbin.ehn at oracle.com Tue Jun 27 14:43:26 2017 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 27 Jun 2017 16:43:26 +0200 Subject: RFR: 8180421: Change default value of BiasedLockingStartupDelay to 0 In-Reply-To: References: <35ec59f7-8ffd-a06a-7ad5-4988c4024d5a@oracle.com> Message-ID: <65413379-4bb9-bb88-7de3-4d8cdf8aaaa8@oracle.com> On 06/27/2017 04:10 PM, Claes Redestad wrote: > Hi Robbin, > > looks good to me! > > While there are some near-regressions in a few startup tests, they are not statistically significant. At the same time the peak performance and latency benefits on some > benchmarks and applications of not delaying is clearly measurable, which makes this seem like the right thing to do to me. Thanks and thanks for looking at the numbers! /Robbin > > /Claes > > On 06/26/2017 03:17 PM, Robbin Ehn wrote: >> Hi all, please review. >> >> On behalf of Stefan J, this patch changes the default value of BiasedLockingStartupDelay to 0. >> "The delay is however a problem for some of the GC algorithms that use the mark-word." >> "Benchmark runs don't show any regressions for either startup times or steady state performance when setting it to 0." >> >> CSR: https://bugs.openjdk.java.net/browse/JDK-8181778 >> Issue: https://bugs.openjdk.java.net/browse/JDK-8180421 >> >> Thanks! 
>> >> /Robbin >> >> diff -r 26a2358e2796 src/share/vm/runtime/globals.hpp >> --- a/src/share/vm/runtime/globals.hpp Fri Jun 23 15:16:23 2017 -0700 >> +++ b/src/share/vm/runtime/globals.hpp Mon Jun 26 15:03:34 2017 +0200 >> @@ -1307,1 +1307,1 @@ >> - product(intx, BiasedLockingStartupDelay, 4000, \ >> + product(intx, BiasedLockingStartupDelay, 0, \ > From daniel.daugherty at oracle.com Tue Jun 27 14:45:52 2017 From: daniel.daugherty at oracle.com (Daniel Daugherty) Date: Tue, 27 Jun 2017 07:45:52 -0700 (PDT) Subject: RFR: 8180421: Change default value of BiasedLockingStartupDelay to 0 Message-ID: <56d8761c-5b34-4e07-bf33-2b03b61d0681@default> ----- robbin.ehn at oracle.com wrote: > Hi all, please review. > > On behalf of Stefan J, this patch changes the default value of > BiasedLockingStartupDelay to 0. > "The delay is however a problem for some of the GC algorithms that > use the mark-word." > "Benchmark runs don't show any regressions for either startup times > or steady state performance when setting it to 0." > > CSR: https://bugs.openjdk.java.net/browse/JDK-8181778 > Issue: https://bugs.openjdk.java.net/browse/JDK-8180421 > > Thanks! > > /Robbin > > diff -r 26a2358e2796 src/share/vm/runtime/globals.hpp > --- a/src/share/vm/runtime/globals.hpp Fri Jun 23 15:16:23 2017 -0700 > +++ b/src/share/vm/runtime/globals.hpp Mon Jun 26 15:03:34 2017 +0200 > @@ -1307,1 +1307,1 @@ > - product(intx, BiasedLockingStartupDelay, 4000, > \ > + product(intx, BiasedLockingStartupDelay, 0, > \ Thumbs up! Dan From robbin.ehn at oracle.com Tue Jun 27 14:48:36 2017 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 27 Jun 2017 16:48:36 +0200 Subject: RFR: 8180421: Change default value of BiasedLockingStartupDelay to 0 In-Reply-To: <56d8761c-5b34-4e07-bf33-2b03b61d0681@default> References: <56d8761c-5b34-4e07-bf33-2b03b61d0681@default> Message-ID: Thanks Dan! 
/Robbin On 06/27/2017 04:45 PM, Daniel Daugherty wrote: > ----- robbin.ehn at oracle.com wrote: > >> Hi all, please review. >> >> On behalf of Stefan J, this patch changes the default value of >> BiasedLockingStartupDelay to 0. >> "The delay is however a problem for the some of the GC algorithms that >> use the mark-word." >> "Benchmark runs doesn't show any regressions for either startup times >> or steady state performance when setting it to 0." >> >> CSR: https://bugs.openjdk.java.net/browse/JDK-8181778 >> Issue: https://bugs.openjdk.java.net/browse/JDK-8180421 >> >> Thanks! >> >> /Robbin >> >> diff -r 26a2358e2796 src/share/vm/runtime/globals.hpp >> --- a/src/share/vm/runtime/globals.hpp Fri Jun 23 15:16:23 2017 -0700 >> +++ b/src/share/vm/runtime/globals.hpp Mon Jun 26 15:03:34 2017 +0200 >> @@ -1307,1 +1307,1 @@ >> - product(intx, BiasedLockingStartupDelay, 4000, >> \ >> + product(intx, BiasedLockingStartupDelay, 0, >> \ > > Thumbs up! > > Dan > From igor.ignatyev at oracle.com Tue Jun 27 14:48:44 2017 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Tue, 27 Jun 2017 07:48:44 -0700 Subject: RFR(S) : 8181053 : port basicvmtest to jtreg In-Reply-To: <95ff2a59-5078-d132-bfea-5c782fa5f34a@oracle.com> References: <4DE9B200-A792-452D-9AAD-E1BC1B1E7001@oracle.com> <15a84631-f799-92f8-8a51-bfd3dab6bf90@oracle.com> <95ff2a59-5078-d132-bfea-5c782fa5f34a@oracle.com> Message-ID: Hi Erik, I have mentioned this in my 1st email. anyhow, changing the product and the associated potential race are the exact reasons why it was decided not to port this test. I have checked w/ Misha and he assured me we have other tests for -Xshare:dump. so answering your question, scrap this -Xshare:dump sanity test in favor of existing jtreg tests, e.g. runtime/SharedArchiveFile. thank you one more time for your review. 
Cheers, -- Igor > On Jun 26, 2017, at 11:45 PM, Erik Helin wrote: > > Hi Igor, > > looking at this one extra time, I realized that the following change to > hotspot/test/Makefile: > > -# clienttest (make sure various basic java client options work) > - > -hotspot_clienttest clienttest: sanitytest > - $(RM) $(PRODUCT_HOME)/jre/lib/*/client/classes.jsa > - $(RM) $(PRODUCT_HOME)/jre/bin/client/classes.jsa > - $(PRODUCT_HOME)/bin/java $(JAVA_OPTIONS) -Xshare:dump > - > -PHONY_LIST += hotspot_clienttest clienttest > > actually removes one additional test that isn't covered by your newly > added file hotspot/test/sanity/BasicVMTest.java: sanity testing > -Xshare:dump. > > Do we want to add a small test in hotspot/test/sanity for -Xshare:dump? > Or is this functionality tested elsewhere? A related question: if > multiple tests were running concurrently (testing the same JDK), won't > there be a race condition with the above test? For example if two JTReg > tests are running (and -conc > 1) and both JTReg tests tries to remove > classes.jsa and then regenerate them, seems like there could be a race? > > What do you think? Just scrap the -Xshare:dump sanity test or add a > JTReg version? > > Thanks, > Erik > > On 06/14/2017 09:41 AM, Erik Helin wrote: >> On 06/14/2017 01:09 AM, Igor Ignatyev wrote: >>> http://cr.openjdk.java.net/~iignatyev//8181053/webrev.00/index.html >>>> 121 lines changed: 54 ins; 67 del; 0 mod; >>> >>> Hi all, >>> >>> could you please review this small patch which introduces jtreg >>> version of basicvmtest test? >>> >>> make version of basicvmtest also included sanity testing for CDS on >>> client JVM, but this testing modified the product binaries, so it >>> might interfere with results of other tests and is not very reliable. >>> I have consulted w/ Misha about this, and he assured me that there are >>> other better CDS tests which check the same functionality, so we >>> should not lose test coverage there. 
>>> >>> webrev: >>> http://cr.openjdk.java.net/~iignatyev//8181053/webrev.00/index.html >> >> Looks good, Reviewed. >> >> Thank you for this patch Igor! I've been meaning to fix this for a long >> time but never got around to it... >> >> Erik >> >>> jbs: https://bugs.openjdk.java.net/browse/JDK-8181053 >>> testing: jprt, new added test >>> >>> Thanks, >>> -- Igor >>> From erik.helin at oracle.com Tue Jun 27 14:52:22 2017 From: erik.helin at oracle.com (Erik Helin) Date: Tue, 27 Jun 2017 16:52:22 +0200 Subject: RFR(S) : 8181053 : port basicvmtest to jtreg In-Reply-To: References: <4DE9B200-A792-452D-9AAD-E1BC1B1E7001@oracle.com> <15a84631-f799-92f8-8a51-bfd3dab6bf90@oracle.com> <95ff2a59-5078-d132-bfea-5c782fa5f34a@oracle.com> Message-ID: <97c0cc4e-d871-0acd-1796-b31f08a76d2a@oracle.com> On 06/27/2017 04:48 PM, Igor Ignatyev wrote: > Hi Erik, > > I have mentioned this in my 1st email. anyhow, changing the product and the associated potential race are the exact reasons why it was decided not to port this test. I have checked w/ Misha and he assured me we have other tests for -Xshare:dump. so answering your question, scrap this -Xshare:dump sanity test in favor of existing jtreg tests, e.g. runtime/SharedArchiveFile. *sigh*, sorry, I forgot you explained this in the first email. Sounds good then, now go ahead and push this :) Erik > thank you one more time for your review. 
> > Cheers, > -- Igor >> On Jun 26, 2017, at 11:45 PM, Erik Helin wrote: >> >> Hi Igor, >> >> looking at this one extra time, I realized that the following change to >> hotspot/test/Makefile: >> >> -# clienttest (make sure various basic java client options work) >> - >> -hotspot_clienttest clienttest: sanitytest >> - $(RM) $(PRODUCT_HOME)/jre/lib/*/client/classes.jsa >> - $(RM) $(PRODUCT_HOME)/jre/bin/client/classes.jsa >> - $(PRODUCT_HOME)/bin/java $(JAVA_OPTIONS) -Xshare:dump >> - >> -PHONY_LIST += hotspot_clienttest clienttest >> >> actually removes one additional test that isn't covered by your newly >> added file hotspot/test/sanity/BasicVMTest.java: sanity testing >> -Xshare:dump. >> >> Do we want to add a small test in hotspot/test/sanity for -Xshare:dump? >> Or is this functionality tested elsewhere? A related question: if >> multiple tests were running concurrently (testing the same JDK), won't >> there be a race condition with the above test? For example if two JTReg >> tests are running (and -conc > 1) and both JTReg tests tries to remove >> classes.jsa and then regenerate them, seems like there could be a race? >> >> What do you think? Just scrap the -Xshare:dump sanity test or add a >> JTReg version? >> >> Thanks, >> Erik >> >> On 06/14/2017 09:41 AM, Erik Helin wrote: >>> On 06/14/2017 01:09 AM, Igor Ignatyev wrote: >>>> http://cr.openjdk.java.net/~iignatyev//8181053/webrev.00/index.html >>>>> 121 lines changed: 54 ins; 67 del; 0 mod; >>>> >>>> Hi all, >>>> >>>> could you please review this small patch which introduces jtreg >>>> version of basicvmtest test? >>>> >>>> make version of basicvmtest also included sanity testing for CDS on >>>> client JVM, but this testing modified the product binaries, so it >>>> might interfere with results of other tests and is not very reliable. 
>>>> I have consulted w/ Misha and he assured me that there are >>>> other better CDS tests which check the same functionality, so we >>>> should not lose test coverage there. >>>> >>>> webrev: >>>> http://cr.openjdk.java.net/~iignatyev//8181053/webrev.00/index.html >>> >>> Looks good, Reviewed. >>> >>> Thank you for this patch Igor! I've been meaning to fix this for a long >>> time but never got around to it... >>> >>> Erik >>> >>>> jbs: https://bugs.openjdk.java.net/browse/JDK-8181053 >>>> testing: jprt, new added test >>>> >>>> Thanks, >>>> -- Igor >>>> > From hohensee at amazon.com Tue Jun 27 17:03:24 2017 From: hohensee at amazon.com (Hohensee, Paul) Date: Tue, 27 Jun 2017 17:03:24 +0000 Subject: RFR(XL): 8182299: Enable disabled clang warnings, build on OSX 10 + Xcode 8 Message-ID: <27FD0413-52BC-42E6-A5B0-3C92A49A2D6F@amazon.com> https://bugs.openjdk.java.net/browse/JDK-8182299 http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_jdk.00/ http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_hotspot.00/ Jesper has been kind enough to host the webrevs while I get my cr.openjdk.net account set up, and to be the sponsor. This RFE is a combination of enabling disabled clang warnings and getting jdk10 to build on OSX 10 and Xcode 8. At least one enabled warning (delete-non-virtual-dtor) detected what seems to me a real potential bug, with the rest enforcing good code hygiene. These changes are only in OpenJDK, so I'm looking for a volunteer to make the closed changes. Thanks, Paul Here are the jdk notes: java_md_macosx.c splashscreen_sys.m: Removed objc_registerThreadWithCollector() since it's obsolete and of questionable value in any case. NSApplicationAWT.m: Use the correct NSEventMask rather than NSUInteger. jdhuff.c jdphuff.c: Shifting a negative signed value is undefined. 
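The delete-non-virtual-dtor warning mentioned here flags deleting a derived object through a base-class pointer whose destructor is not virtual; in that case the derived destructor is silently skipped (undefined behavior in C++), which is why such warnings can point at real resource leaks. A minimal illustration with hypothetical class names (not HotSpot code):

```cpp
#include <cassert>

// Tracks whether the derived destructor actually ran.
int destroyed = 0;

struct Closure {                    // base class handed around by pointer
  virtual void do_it() = 0;
  virtual ~Closure() {}             // remove 'virtual' here and clang reports
                                    // -Wdelete-non-virtual-dtor at the delete below
};

struct CountingClosure : public Closure {
  virtual void do_it() {}
  ~CountingClosure() { destroyed++; }  // e.g. a destructor releasing a resource
};

// Deleting through the base pointer is only safe because ~Closure() is virtual.
int delete_through_base() {
  Closure* c = new CountingClosure();
  delete c;                         // dispatches to ~CountingClosure
  return destroyed;
}
```

With a non-virtual base destructor the counter would stay at 0 (or worse), which is the "latent bug" class of problem the warning catches.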
Here are the hotspot notes: Here are the lists of files affected by enabling a given warning: switch (lack of default clause): c1_LIRAssembler_x86.cpp c1_LIRGenerator_x86.cpp c1_LinearScan_x86.hpp jniFastGetField_x86_64.cpp assembler.cpp c1_Canonicalizer.cpp c1_GraphBuilder.cpp c1_Instruction.cpp c1_LIR.cpp c1_LIRGenerator.cpp c1_LinearScan.cpp c1_ValueStack.hpp c1_ValueType.cpp bcEscapeAnalyzer.cpp ciArray.cpp ciEnv.cpp ciInstance.cpp ciMethod.cpp ciMethodBlocks.cpp ciMethodData.cpp ciTypeFlow.cpp compiledMethod.cpp dependencies.cpp nmethod.cpp compileTask.hpp heapRegionType.cpp abstractInterpreter.cpp bytecodes.cpp invocationCounter.cpp linkResolver.cpp rewriter.cpp jvmciCompilerToVM.cpp jvmciEnv.cpp universe.cpp cpCache.cpp generateOopMap.cpp method.cpp methodData.cpp compile.cpp connode.cpp gcm.cpp graphKit.cpp ifnode.cpp library_call.cpp memnode.cpp parse1.cpp parse2.cpp phaseX.cpp superword.cpp type.cpp vectornode.cpp jvmtiClassFileReconstituter.cpp jvmtiEnter.xsl jvmtiEventController.cpp jvmtiImpl.cpp jvmtiRedefineClasses.cpp methodComparator.cpp methodHandles.cpp advancedThresholdPolicy.cpp reflection.cpp relocator.cpp sharedRuntime.cpp simpleThresholdPolicy.cpp writeableFlags.cpp globalDefinitions.hpp delete-non-virtual-dtor (may be real latent bugs due to possible failure to execute destructor(s) ): decoder_aix.hpp decoder_machO.hpp classLoader.hpp g1RootClosures.hpp jvmtiImpl.hpp perfData.hpp decoder.hpp decoder_elf.hpp dynamic-class-memaccess: method.cpp empty-body: objectMonitor.cpp mallocSiteTable.cpp format (debug output will be affected by incorrect code changes to these): macroAssembler_x86.cpp os_bsd.cpp os_bsd_x86.cpp ciMethodData.cpp javaClasses.cpp debugInfo.cpp logFileOutput.cpp constantPool.cpp jvmtiEnter.xsl jvmtiRedefineClasses.cpp safepoint.cpp thread.cpp logical-op-parentheses: nativeInst_x86.hpp archDesc.cpp output_c.cpp output_h.cpp c1_GraphBuilder.cpp c1_LIRGenerator.cpp c1_LinearScan.cpp bcEscapeAnalyzer.cpp ciMethod.cpp 
stackMapTableFormat.hpp compressedStream.cpp dependencies.cpp heapRegion.cpp ptrQueue.cpp psPromotionManager.cpp jvmciCompilerToVM.cpp cfgnode.cpp chaitin.cpp compile.cpp compile.hpp escape.cpp graphKit.cpp lcm.cpp loopTransform.cpp loopnode.cpp loopopts.cpp macro.cpp memnode.cpp output.cpp parse1.cpp parseHelper.cpp reg_split.cpp superword.cpp superword.hpp jniCheck.cpp jvmtiEventController.cpp arguments.cpp javaCalls.cpp sharedRuntime.cpp parentheses: adlparse.cpp parentheses-equality: output_c.cpp javaAssertions.cpp gcm.cpp File-specific details: GensrcAdlc.gmk: Left tautological-compare in place to allow null 'this' pointer checks in methods intended to be called from a debugger. CompileGTest.gmk: Please ignore this one, since it requires changes to Google's gtest source, which I doubt we want to do. CompileJvm.gmk: Left tautological-compare in place to allow null 'this' pointer checks in methods intended to be called from a debugger. MacosxDebuggerLocal.m: PT_ATTACH has been replaced by PT_ATTACHEXC ciMethodData.cpp: " 0x%" FORMAT64_MODIFIER "x" reduces to "0x%llx", whereas " " INTPTRNZ_FORMAT reduces to "0x%lx" generateOopMap.cpp: Refactored duplicate code in print_current_state() binaryTreeDictionary.cpp/hpp, hashtable.cpp/hpp: These provoke "instantiation of variable required here, but no definition is available". globalDefinitions_gcc.hpp: Define FORMAT64_MODIFIER properly for Apple, needed by os.cpp. globalDefinitions.hpp: Add INTPTRNZ_FORMAT, needed by ciMethodData.cpp. From hohensee at amazon.com Tue Jun 27 19:34:00 2017 From: hohensee at amazon.com (Hohensee, Paul) Date: Tue, 27 Jun 2017 19:34:00 +0000 Subject: FW: RFR(XL): 8182299: Enable disabled clang warnings, build on OSX 10 + Xcode 8 In-Reply-To: <1CBD62A9-9B1B-4B05-AAF9-BE2D52DE8C79@amazon.com> References: <27FD0413-52BC-42E6-A5B0-3C92A49A2D6F@amazon.com> <1CBD62A9-9B1B-4B05-AAF9-BE2D52DE8C79@amazon.com> Message-ID: An attempt at better formatting. 
https://bugs.openjdk.java.net/browse/JDK-8182299 http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_jdk.00/ http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_hotspot.00/ Jesper has been kind enough to host the webrevs while I get my cr.openjdk.net account set up, and to be the sponsor. This RFE is a combination of enabling disabled clang warnings and getting jdk10 to build on OSX 10 and Xcode 8. At least one enabled warning (delete-non-virtual-dtor) detected what seems to me a real potential bug, with the rest enforcing good code hygiene. These changes are only in OpenJDK, so I'm looking for a volunteer to make the closed changes. Thanks, Paul Here are the jdk file-specific details: java_md_macosx.c splashscreen_sys.m Removed objc_registerThreadWithCollector() since it's obsolete and of questionable value in any case. NSApplicationAWT.m Use the correct NSEventMask rather than NSUInteger. jdhuff.c jdphuff.c Shifting a negative signed value is undefined. Here are the hotspot notes: Here are the lists of files affected by enabling a given warning: switch: all of these lack a default clause c1_LIRAssembler_x86.cpp c1_LIRGenerator_x86.cpp c1_LinearScan_x86.hpp jniFastGetField_x86_64.cpp assembler.cpp c1_Canonicalizer.cpp c1_GraphBuilder.cpp c1_Instruction.cpp c1_LIR.cpp c1_LIRGenerator.cpp c1_LinearScan.cpp c1_ValueStack.hpp c1_ValueType.cpp bcEscapeAnalyzer.cpp ciArray.cpp ciEnv.cpp ciInstance.cpp ciMethod.cpp ciMethodBlocks.cpp ciMethodData.cpp ciTypeFlow.cpp compiledMethod.cpp dependencies.cpp nmethod.cpp compileTask.hpp heapRegionType.cpp abstractInterpreter.cpp bytecodes.cpp invocationCounter.cpp linkResolver.cpp rewriter.cpp jvmciCompilerToVM.cpp jvmciEnv.cpp universe.cpp cpCache.cpp generateOopMap.cpp method.cpp methodData.cpp compile.cpp connode.cpp gcm.cpp graphKit.cpp ifnode.cpp library_call.cpp memnode.cpp parse1.cpp parse2.cpp phaseX.cpp superword.cpp type.cpp vectornode.cpp jvmtiClassFileReconstituter.cpp jvmtiEnter.xsl jvmtiEventController.cpp 
jvmtiImpl.cpp jvmtiRedefineClasses.cpp methodComparator.cpp methodHandles.cpp advancedThresholdPolicy.cpp reflection.cpp relocator.cpp sharedRuntime.cpp simpleThresholdPolicy.cpp writeableFlags.cpp globalDefinitions.hpp delete-non-virtual-dtor: these may be real latent bugs due to possible failure to execute destructor(s) decoder_aix.hpp decoder_machO.hpp classLoader.hpp g1RootClosures.hpp jvmtiImpl.hpp perfData.hpp decoder.hpp decoder_elf.hpp dynamic-class-memaccess: obscure use of memcpy method.cpp empty-body: ';' isn't good enough for clang, it prefers {} objectMonitor.cpp mallocSiteTable.cpp format: matches printf format strings against arguments. debug output will be affected by incorrect code changes to these. macroAssembler_x86.cpp os_bsd.cpp os_bsd_x86.cpp ciMethodData.cpp javaClasses.cpp debugInfo.cpp logFileOutput.cpp constantPool.cpp jvmtiEnter.xsl jvmtiRedefineClasses.cpp safepoint.cpp thread.cpp logical-op-parentheses: can be tricky to get correct. There are a few very long-winded predicates. nativeInst_x86.hpp archDesc.cpp output_c.cpp output_h.cpp c1_GraphBuilder.cpp c1_LIRGenerator.cpp c1_LinearScan.cpp bcEscapeAnalyzer.cpp ciMethod.cpp stackMapTableFormat.hpp compressedStream.cpp dependencies.cpp heapRegion.cpp ptrQueue.cpp psPromotionManager.cpp jvmciCompilerToVM.cpp cfgnode.cpp chaitin.cpp compile.cpp compile.hpp escape.cpp graphKit.cpp lcm.cpp loopTransform.cpp loopnode.cpp loopopts.cpp macro.cpp memnode.cpp output.cpp parse1.cpp parseHelper.cpp reg_split.cpp superword.cpp superword.hpp jniCheck.cpp jvmtiEventController.cpp arguments.cpp javaCalls.cpp sharedRuntime.cpp parentheses adlparse.cpp parentheses-equality output_c.cpp javaAssertions.cpp gcm.cpp File-specific details: GensrcAdlc.gmk CompileJvm.gmk Left tautological-compare in place to allow null 'this' pointer checks in methods intended to be called from a debugger. CompileGTest.gmk Just an enhanced comment. 
MacosxDebuggerLocal.m PT_ATTACH has been replaced by PT_ATTACHEXC ciMethodData.cpp " 0x%" FORMAT64_MODIFIER "x" reduces to "0x%llx", whereas " " INTPTRNZ_FORMAT reduces to "0x%lx", the latter of which is what clang wants. generateOopMap.cpp Refactored duplicate code in print_current_state(). binaryTreeDictionary.cpp/hpp, hashtable.cpp/hpp These provoked 'instantiation of variable required here, but no definition is available'. globalDefinitions_gcc.hpp Define FORMAT64_MODIFIER properly for Apple, needed by os.cpp. globalDefinitions.hpp Add INTPTRNZ_FORMAT, needed by ciMethodData.cpp. From stefan.karlsson at oracle.com Wed Jun 28 15:08:27 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 28 Jun 2017 17:08:27 +0200 Subject: RFR: 8178495: Bug in the align_size_up_ macro Message-ID: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> Hi all, Please review this patch to fix a bug in the align_size_up_ macro. http://cr.openjdk.java.net/~stefank/8178495/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8178495 The following:

align_size_up_((uintptr_t)0x512345678ULL, (int8_t) 16);
align_size_up_((uintptr_t)0x512345678ULL, (int16_t) 16);
align_size_up_((uintptr_t)0x512345678ULL, (int32_t) 16);
align_size_up_((uintptr_t)0x512345678ULL, (int64_t) 16);
align_size_up_((uintptr_t)0x512345678ULL, (uint8_t) 16);
align_size_up_((uintptr_t)0x512345678ULL, (uint16_t)16);
align_size_up_((uintptr_t)0x512345678ULL, (uint32_t)16);
align_size_up_((uintptr_t)0x512345678ULL, (uint64_t)16);

Gives this output:

0x512345680
0x512345680
0x512345680
0x512345680
0x512345680
0x512345680
0x12345680
0x512345680

So, align_size_up_((uintptr_t)0x512345678ULL, (uint32_t)16) returns an unexpected, truncated value.
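[Editor's note: the truncation can be reproduced outside HotSpot with a standalone copy of the macro. This is a sketch, not the HotSpot source, and it assumes a 64-bit platform where uintptr_t is 64 bits.]

```cpp
#include <cstdint>

// Standalone copy of the macro under discussion:
#define align_size_up_(size, alignment) \
  (((size) + ((alignment) - 1)) & ~((alignment) - 1))

// Signed 32-bit alignment: the high bits of the 64-bit size survive.
static const uintptr_t with_int32 =
    align_size_up_((uintptr_t)0x512345678ULL, (int32_t)16);   // 0x512345680

// Unsigned 32-bit alignment: bit 32 of the size is masked away.
static const uintptr_t with_uint32 =
    align_size_up_((uintptr_t)0x512345678ULL, (uint32_t)16);  // 0x12345680
```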
This happens because in this macro:

#define align_size_up_(size, alignment) (((size) + ((alignment) - 1)) & ~((alignment) - 1))

~((alignment) - 1) returns 0x00000000FFFFFFF0 instead of 0xFFFFFFFFFFFFFFF0. This isn't a problem for the 64-bit types, and, perhaps less obviously, it doesn't happen for the 8-bit and 16-bit types either. For the 8-bit and 16-bit types, (alignment - 1) is promoted to a signed int; when it is later used in the & expression it is sign-extended into a signed 64-bit value. When the type is an unsigned 32-bit integer, it isn't promoted to a signed int, and therefore it is not sign-extended to 64 bits, but instead zero-extended to 64 bits. This bug currently does not affect the code base, since the inline align functions promote all integers to intptr_t before passing them down to the align macros. However, when/if JDK-8178489 is pushed, the macro is actually used with 32-bit unsigned ints. Tested with the unit test and JPRT with and without patches for JDK-8178489. Thanks, StefanK From coleen.phillimore at oracle.com Wed Jun 28 15:38:10 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Jun 2017 11:38:10 -0400 Subject: RFR (S) 8182848: Some functions misplaced in debug.hpp Message-ID: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> Summary: moved to vmError.hpp,cpp where they seemed more appropriate Moved the function pd_ps() into frame_sparc.cpp eliminating debug_cpu.cpp files. You can pick a better name for this if you want. open webrev at http://cr.openjdk.java.net/~coleenp/8182848.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8182848 Tested with JPRT. Thanks, Coleen From coleen.phillimore at oracle.com Wed Jun 28 15:46:17 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Jun 2017 11:46:17 -0400 Subject: RFR (S) 8182554: Code for os::random() assumes long is 32 bits Message-ID: Summary: And make updating the _rand_seed thread safe.
See bug for more details. open webrev at http://cr.openjdk.java.net/~coleenp/8182554.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8182554 Tested with JPRT and performance tested addition of cas in os::random call (no regressions). The only thing that uses os::random more than like once is Symbol creation. Thanks, Coleen From vladimir.kozlov at oracle.com Wed Jun 28 16:18:42 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 28 Jun 2017 09:18:42 -0700 Subject: JDK10/RFR(S): 8182711: Re/Introduce private interface for HW-specific prefetch options in SPARC VM_Version. In-Reply-To: <972dfee0-7a6b-f115-419d-1738ce5006f1@oracle.com> References: <972dfee0-7a6b-f115-419d-1738ce5006f1@oracle.com> Message-ID: <33845021-2b66-c882-031c-f69e637da456@oracle.com> Hi Patric, I think simple undo 8016470 changes will not do. As discussed I agreed that you will restore methods in .hpp files but they should not check flags. They should only return HW specific or default values similar to other methods in .hpp files. Settings based on flags should be done in .cpp files. You also mixed in sparcv9 changes which is fine with me. Thanks, Vladimir On 6/27/17 5:30 AM, Patric Hedlin wrote: > > Dear all, > > I would like to ask for help to review the following change/update: > > Issue: https://bugs.openjdk.java.net/browse/JDK-8182711 > > Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8182711/ > > > 8182711: Re/Introduce private interface for HW-specific prefetch options > in SPARC VM_Version. > > This is essentially to revoke the SPARC part of the solution to > JDK-8016470. > > > Testing: > > Testing on JDK10 (jtreg/hotspot) > > > Best regards, > Patric > From vladimir.kozlov at oracle.com Wed Jun 28 16:25:26 2017 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 28 Jun 2017 09:25:26 -0700 Subject: JDK10/RFR(M): 8182279: Add HW feature detection support for SPARC Core S5 (on Solaris). 
In-Reply-To: <266604ad-be28-bad8-fb49-c0649cde1725@oracle.com> References: <266604ad-be28-bad8-fb49-c0649cde1725@oracle.com> Message-ID: <15211d6e-2a07-f3da-e7a1-c092bb41ff8e@oracle.com> On 6/27/17 5:59 AM, Patric Hedlin wrote: > Dear all, > > I would like to ask for help to perform an early (p)review of the > following change/update: > > Issue: https://bugs.openjdk.java.net/browse/JDK-8182279 > > Webrev: http://cr.openjdk.java.net/~neliasso/phedlin/tr8182279/ This looks good. > > Prerequisite: https://bugs.openjdk.java.net/browse/JDK-8172231 Why is this not pushed? It was reviewed already. Are you waiting for another reviewer to look? Thanks, Vladimir > > > Note 1. This is an early (p)review. > Note 2. SPARC Core S5 disclosure has been approved for reviewing on this > list(s). > Note 3. SPARC Core S5 and M8 servers are currently not available to > regular testing. > > > 8182279: Add HW feature detection support for SPARC Core S5 (on Solaris). > > Updating SPARC feature/capability detection to include support for > the SPARC Core S5. > > > Caveat: > > This update will introduce some redundancies into the code base, > features and definitions > currently not used, addressed by subsequent bug or feature > updates/patches. Fujitsu HW is > treated very conservatively. > > > Testing: > > Mostly tested on JDK9 (jtreg/RBT/hotspot/tier0-comp), RBT on non-M8 > only. > Testing on JDK10 (jtreg/RBT/hotspot/precheckin-comp), RBT on non-M8 > only. > > > Benchmarking: > > Not performed at this point.
> > > Best regards, > Patric > > From stefan.karlsson at oracle.com Wed Jun 28 16:45:33 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 28 Jun 2017 18:45:33 +0200 Subject: RFR (S) 8182848: Some functions misplaced in debug.hpp In-Reply-To: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> References: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> Message-ID: <49b1df23-b488-906a-6dd6-6f2b2622be38@oracle.com> Hi Coleen, http://cr.openjdk.java.net/~coleenp/8182848.01/webrev/src/share/vm/runtime/frame.hpp.udiff.html + DEBUG_ONLY(void pd_ps();) // platform dependent frame printing Shouldn't this be guarded by NOT_PRODUCT when all implementations are guarded by #ifndef PRODUCT ? Time again to consider removing the optimized target ;) http://cr.openjdk.java.net/~coleenp/8182848.01/webrev/src/cpu/sparc/vm/frame_sparc.cpp.frames.html In this file you didn't add frame::pd_ps() after one of the frame constructors. Maybe move it to line 407 to be consistent with the other platform files? I know you didn't change this, but it's weird that findpc is declared as: 790 extern "C" void findpc(int x); when all other platforms use intptr_t x. and the code below casts to intptr_t: 803 findpc((intptr_t)pc); http://cr.openjdk.java.net/~coleenp/8182848.01/webrev/src/cpu/x86/vm/frame_x86.cpp.frames.html 684 void frame::pd_ps() {} 685 686 #endif You inserted a stray blank line at 685, that is not present in the other platform files. Other than that, this looks good. Thanks, StefanK On 2017-06-28 17:38, coleen.phillimore at oracle.com wrote: > Summary: moved to vmError.hpp,cpp where they seemed more appropriate > > Moved the function pd_ps() into frame_sparc.cpp eliminating > debug_cpu.cpp files. You can pick a better name for this if you want. > > open webrev at http://cr.openjdk.java.net/~coleenp/8182848.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8182848 > > Tested with JPRT. 
> > Thanks, > Coleen > From thomas.stuefe at gmail.com Wed Jun 28 17:25:05 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 28 Jun 2017 19:25:05 +0200 Subject: RFR (S) 8182848: Some functions misplaced in debug.hpp In-Reply-To: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> References: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> Message-ID: Hi Coleen, this is a sensible cleanup! Some small nits: vmError.cpp: - controlled_crash(): I would leave it either a first class global function, maybe in vmError.hpp, or make it file scope static. - can we move #include up to the general includes? Not a part of your patch, but looking at the includes at the beginning, I see: ... 25 #include 26 #include "precompiled.hpp" 27 #include "code/codeCache.hpp" ... which is probably wrong, I am sure fcntl.h is never picked up because of the precompiled header line following it. So, maybe move the system headers past the hotspot headers? I always thought that was the way we do this. debug.hpp While looking at debug.hpp, I saw both report_out_of_shared_space() and report_insufficient_metaspace(). Both seem awfully specific for a general purpose assert file. The implementation for report_insufficient_metaspace() in particular needs to know "MaxMetaspaceSize", so I think it would be better placed in metaspace.cpp (it is also only called from that one file, so it does not even have to be exposed beyond metaspace.cpp). Thank you, and Kind Regards, Thomas On Wed, Jun 28, 2017 at 5:38 PM, wrote: > Summary: moved to vmError.hpp,cpp where they seemed more appropriate > > Moved the function pd_ps() into frame_sparc.cpp eliminating debug_cpu.cpp > files. You can pick a better name for this if you want. > > open webrev at http://cr.openjdk.java.net/~coleenp/8182848.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8182848 > > Tested with JPRT. 
> > Thanks, > Coleen > > From jesper.wilhelmsson at oracle.com Wed Jun 28 17:36:11 2017 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 28 Jun 2017 19:36:11 +0200 Subject: RFR(XL): 8182299: Enable disabled clang warnings, build on OSX 10 + Xcode 8 In-Reply-To: References: <27FD0413-52BC-42E6-A5B0-3C92A49A2D6F@amazon.com> <1CBD62A9-9B1B-4B05-AAF9-BE2D52DE8C79@amazon.com> Message-ID: <6B296ABE-66C0-4C32-AC4E-8674BE103514@oracle.com> Hi Paul, Thanks for doing this change! In general everything looks really good, there are a lot of really nice cleanups here. I just have two minor questions/nits: * In hotspot/cpu/x86/vm/nativeInst_x86.hpp it seems the expression already has parentheses around the & operations and the change here is "only" cleaning up the layout of the code, which is not a bad thing in itself, but you move the logical operators to the beginning of each line, which is a quite different style than the rest of the code in the same function where the operators are at the end of the line. * In hotspot/share/vm/opto/graphKit.cpp you moved the #ifdef ASSERT so that Action_none and Action_make_not_compilable are available also when ASSERT is not defined. I don't see this mentioned in your description of the change. Was this change intentional? Thanks, /Jesper > On 27 Jun 2017, at 21:34, Hohensee, Paul wrote: > > An attempt at better formatting. > > https://bugs.openjdk.java.net/browse/JDK-8182299 > http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_jdk.00/ > http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_hotspot.00/ > > Jesper has been kind enough to host the webrevs while I get my cr.openjdk.net account set up, and to be the sponsor. > > This RFE is a combination of enabling disabled clang warnings and getting jdk10 to build on OSX 10 and Xcode 8. At least one enabled warning (delete-non-virtual-dtor) detected what seems to me a real potential bug, with the rest enforcing good code hygiene.
> > These changes are only in OpenJDK, so I'm looking for a volunteer to make the closed changes. > > Thanks, > > Paul > > > Here are the jdk file-specific details: > > java_md_macosx.c splashscreen_sys.m > > Removed objc_registerThreadWithCollector() since it's obsolete and of questionable value in any case. > > NSApplicationAWT.m > > Use the correct NSEventMask rather than NSUInteger. > > jdhuff.c jdphuff.c > > Shifting a negative signed value is undefined. > > Here are the hotspot notes: > > Here are the lists of files affected by enabling a given warning: > > switch: all of these lack a default clause > > c1_LIRAssembler_x86.cpp c1_LIRGenerator_x86.cpp c1_LinearScan_x86.hpp > jniFastGetField_x86_64.cpp assembler.cpp c1_Canonicalizer.cpp > c1_GraphBuilder.cpp c1_Instruction.cpp c1_LIR.cpp c1_LIRGenerator.cpp > c1_LinearScan.cpp c1_ValueStack.hpp c1_ValueType.cpp > bcEscapeAnalyzer.cpp ciArray.cpp ciEnv.cpp ciInstance.cpp ciMethod.cpp > ciMethodBlocks.cpp ciMethodData.cpp ciTypeFlow.cpp > compiledMethod.cpp dependencies.cpp nmethod.cpp compileTask.hpp > heapRegionType.cpp abstractInterpreter.cpp bytecodes.cpp > invocationCounter.cpp linkResolver.cpp rewriter.cpp jvmciCompilerToVM.cpp > jvmciEnv.cpp universe.cpp cpCache.cpp generateOopMap.cpp > method.cpp methodData.cpp compile.cpp connode.cpp gcm.cpp graphKit.cpp > ifnode.cpp library_call.cpp memnode.cpp parse1.cpp > parse2.cpp phaseX.cpp superword.cpp type.cpp vectornode.cpp > jvmtiClassFileReconstituter.cpp jvmtiEnter.xsl jvmtiEventController.cpp > jvmtiImpl.cpp jvmtiRedefineClasses.cpp methodComparator.cpp methodHandles.cpp > advancedThresholdPolicy.cpp reflection.cpp relocator.cpp sharedRuntime.cpp > simpleThresholdPolicy.cpp writeableFlags.cpp globalDefinitions.hpp > > delete-non-virtual-dtor: these may be real latent bugs due to possible failure to execute destructor(s) > > decoder_aix.hpp decoder_machO.hpp classLoader.hpp g1RootClosures.hpp > jvmtiImpl.hpp perfData.hpp decoder.hpp decoder_elf.hpp >
> dynamic-class-memaccess: obscure use of memcpy > > method.cpp > > empty-body: ';' isn't good enough for clang, it prefers {} > > objectMonitor.cpp mallocSiteTable.cpp > > format: matches printf format strings against arguments. debug output will be affected by > incorrect code changes to these. > > macroAssembler_x86.cpp os_bsd.cpp os_bsd_x86.cpp ciMethodData.cpp javaClasses.cpp > debugInfo.cpp logFileOutput.cpp constantPool.cpp jvmtiEnter.xsl jvmtiRedefineClasses.cpp > safepoint.cpp thread.cpp > > logical-op-parentheses: can be tricky to get correct. There are a few very long-winded predicates. > > nativeInst_x86.hpp archDesc.cpp output_c.cpp output_h.cpp c1_GraphBuilder.cpp > c1_LIRGenerator.cpp c1_LinearScan.cpp bcEscapeAnalyzer.cpp ciMethod.cpp > stackMapTableFormat.hpp compressedStream.cpp dependencies.cpp heapRegion.cpp > ptrQueue.cpp psPromotionManager.cpp jvmciCompilerToVM.cpp cfgnode.cpp > chaitin.cpp compile.cpp compile.hpp escape.cpp graphKit.cpp lcm.cpp > loopTransform.cpp loopnode.cpp loopopts.cpp macro.cpp memnode.cpp > output.cpp parse1.cpp parseHelper.cpp reg_split.cpp superword.cpp > superword.hpp jniCheck.cpp jvmtiEventController.cpp arguments.cpp > javaCalls.cpp sharedRuntime.cpp > > parentheses > > adlparse.cpp > > parentheses-equality > > output_c.cpp javaAssertions.cpp gcm.cpp > > File-specific details: > > GensrcAdlc.gmk CompileJvm.gmk > Left tautological-compare in place to allow null 'this' pointer checks in methods > intended to be called from a debugger. > > CompileGTest.gmk > Just an enhanced comment. > > MacosxDebuggerLocal.m > PT_ATTACH has been replaced by PT_ATTACHEXC > > ciMethodData.cpp > " 0x%" FORMAT64_MODIFIER "x" reduces to "0x%llx", whereas > " " INTPTRNZ_FORMAT reduces to "0x%lx", the latter of which is what clang wants. > > generateOopMap.cpp > Refactored duplicate code in print_current_state().
> > binaryTreeDictionary.cpp/hpp, hashtable.cpp/hpp > These provoked 'instantiation of variable required here, > but no definition is available'. > > globalDefinitions_gcc.hpp > Define FORMAT64_MODIFIER properly for Apple, needed by os.cpp. > > globalDefinitions.hpp > Add INTPTRNZ_FORMAT, needed by ciMethodData.cpp. From thomas.stuefe at gmail.com Wed Jun 28 17:35:19 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 28 Jun 2017 19:35:19 +0200 Subject: RFR (S) 8182554: Code for os::random() assumes long is 32 bits In-Reply-To: References: Message-ID: Hi Coleen, long->int: this makes sense. thread safety: So, if I understand this correctly, before it could happen that two threads calling at the same time would return the same value from the random sequence? Your patch seems fine and solves this. Kind Regards, Thomas On Wed, Jun 28, 2017 at 5:46 PM, wrote: > Summary: And make updating the _rand_seed thread safe. > > See bug for more details. > > open webrev at http://cr.openjdk.java.net/~coleenp/8182554.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8182554 > > Tested with JPRT and performance tested addition of cas in os::random call > (no regressions). The only thing that uses os::random more than like once > is Symbol creation.
> > open webrev at http://cr.openjdk.java.net/~coleenp/8182554.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8182554 > > Tested with JPRT and performance tested addition of cas in os::random call (no regressions). The only thing that uses os::random more than like once is Symbol creation. > > Thanks, > Coleen Something I missed during pre-review. Sorry! - random_helper ought to be static (or added to os::). Also, the seed in test_random is still (unsigned) long, and we're trying to eliminate potentially confusing uses of long. Looks good. I don't need another webrev for those nits. From alexander.harlap at oracle.com Wed Jun 28 18:27:26 2017 From: alexander.harlap at oracle.com (Alexander Harlap) Date: Wed, 28 Jun 2017 14:27:26 -0400 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: References: <73e5c873-1ddd-afd7-b958-0367b9376f1d@oracle.com> Message-ID: <5bda3da9-18fd-d00a-cc1f-19bf36ff3709@oracle.com> Hi Leonid and Igor, It looks like we need an extra round of review: New version is here: http://cr.openjdk.java.net/~aharlap/8178507/webrev.03/ Two issues: 1. TestFullGCALot.java - it may take too long. So I added option -XX:FullGCALotInterval=120 to make sure we do not hit a timeout and do not slow down testing, also -XX:+IgnoreUnrecognizedVMOptions - do not fail in product mode 2. TestMemoryInitialization.java - the feature to initialize debug memory to some special words is currently supported only for CMS and Serial gc.
So I modified Test to run now only for these gc's: * @requires vm.gc.Serial | vm.gc.ConcMarkSweep * @summary Simple test for -XX:+CheckMemoryInitialization doesn't crash VM * @run main/othervm -XX:+UseSerialGC -XX:+IgnoreUnrecognizedVMOptions -XX:+CheckMemoryInitialization TestMemoryInitialization * @run main/othervm -XX:+UseConcMarkSweepGC -XX:+IgnoreUnrecognizedVMOptions -XX:+CheckMemoryInitialization TestMemoryInitialization I will add enhancement request to support CheckMemoryInitialization flag in G1. Alex On 6/26/2017 7:04 PM, Igor Ignatyev wrote: > >> On Jun 26, 2017, at 2:54 PM, Alexander Harlap >> > wrote: >> >> Thank you Igor and Leonid, >> >> I fixed mentioned typos and unnecessary return (see >> http://cr.openjdk.java.net/~aharlap/8178507/webrev.02/) >> > perfect. >> >> Do I need more reviews? >> > no, you can go ahead and integrate it. > > -- Igor >> >> Alex >> >> >> On 6/26/2017 4:32 PM, Igor Ignatyev wrote: >>> Hi Alexander, >>> >>> besides the small nits which Leonid mentioned, there is one in >>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html: >>> >>> >>>> 28 * @summary Test verifies only that VM doesn???t crash but throw expected Error. >>> I guess "doesn???t" is 'doesn't' w/ a fancy apostrophe. otherwise >>> looks good to me, Reviewed. >>> >>> -- Igor >>> >>>> On Jun 26, 2017, at 1:11 PM, Leonid Mesnik >>>> > wrote: >>>> >>>> Hi >>>> >>>> New changes looks good for me. Please get review from Reviewer. 
>>>> >>>> The only 2 small nits which don?t require separate review from me: >>>> >>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestFullGCALot.java.html >>>> >>>> >>> > >>>> typo in >>>> 37 System.out.println("Hellow world!"); >>>> >>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html >>>> >>>> >>> > >>>> return is not needed in >>>> 58 return; >>>> >>>> Thanks >>>> Leonid >>>>> On Jun 26, 2017, at 1:04 PM, Alexander Harlap >>>>> > >>>>> wrote: >>>>> >>>>> Hi Leonid, >>>>> >>>>> I accommodated your suggestions. >>>>> >>>>> New version of changeset located at >>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/ >>>>> >>>>> >>>>> >>>>> Alex >>>>> >>>>> >>>>> On 6/23/2017 6:18 PM, Leonid Mesnik wrote: >>>>>> Hi >>>>>> >>>>>> Basically changes looks good. Below are some comments: >>>>>> >>>>>>> On Jun 22, 2017, at 9:16 AM, Alexander Harlap >>>>>>> >>>>>> > wrote: >>>>>>> >>>>>>> Please review change for JDK-8178507 >>>>>>> - co-locate >>>>>>> nsk.regression.gc tests >>>>>>> >>>>>>> JDK-8178507 >>>>>>> is last remaining sub-task ofJDK-8178482 >>>>>>> - Co-locate >>>>>>> remaining GC tests >>>>>>> >>>>>>> >>>>>>> Proposed change located at >>>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ >>>>>>> >>>>>>> >>>>>>> Co-located and converted to JTREG tests are: >>>>>>> >>>>>>> nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java >>>>>> The out variable is no used and return code is not checked in >>>>>> method ?run?. Wouldn't it simpler just to move println into main >>>>>> and remove method ?run? completely? >>>>>>> nsk/regression/b4396719 => hotspot/test/gc/TestStackOverflow.java >>>>>> The method ?run? always returns 0. It would be better to make it >>>>>> void or just remove it. Test never throws any exception. So it >>>>>> make a sense to write in comments that test verifies only that VM >>>>>> doesn?t crash but throw expected Error. 
>>>>>> >>>>>>> nsk/regression/b4668531 => >>>>>>> hotspot/test/gc/TestMemoryInitialization.java >>>>>> The variable buffer is 'read-only'. It makes sense to make >>>>>> variable 'buffer' a public static member of class >>>>>> TestMemoryInitialization, so the compiler could not optimize its usage >>>>>> during any optimization like escape analysis. >>>>>>> nsk/regression/b6186200 => >>>>>>> hotspot/test/gc/cslocker/TestCSLocker.java >>>>>>> >>>>>> Port looks good. It seems that the test doesn't verify that the lock >>>>>> really happened. Could this be improved as a part of this fix or >>>>>> by filing a separate RFE? >>>>>> >>>>>> Leonid >>>>>>> Thank you, >>>>>>> >>>>>>> Alex >>>>>>> >>>>>>> >>>>> >>>> >>> >> > From coleen.phillimore at oracle.com Wed Jun 28 18:32:40 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Jun 2017 14:32:40 -0400 Subject: RFR (S) 8182848: Some functions misplaced in debug.hpp In-Reply-To: <49b1df23-b488-906a-6dd6-6f2b2622be38@oracle.com> References: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> <49b1df23-b488-906a-6dd6-6f2b2622be38@oracle.com> Message-ID: On 6/28/17 12:45 PM, Stefan Karlsson wrote: > Hi Coleen, > > http://cr.openjdk.java.net/~coleenp/8182848.01/webrev/src/share/vm/runtime/frame.hpp.udiff.html > + DEBUG_ONLY(void pd_ps();) // platform dependent frame printing > > Shouldn't this be guarded by NOT_PRODUCT when all implementations are > guarded by #ifndef PRODUCT ? Yes it should be. I should build optimized (if it still builds). > > Time again to consider removing the optimized target ;) > > http://cr.openjdk.java.net/~coleenp/8182848.01/webrev/src/cpu/sparc/vm/frame_sparc.cpp.frames.html > > In this file you didn't add frame::pd_ps() after one of the frame > constructors. Maybe move it to line 407 to be consistent with the > other platform files? ok, done.
> > I know you didn't change this, but it's weird that findpc is declared as: > 790 extern "C" void findpc(int x); > when all other platforms use intptr_t x. Yes, good spot. Fixed declaration and use without cast below. > > and the code below casts to intptr_t: > 803 findpc((intptr_t)pc); > http://cr.openjdk.java.net/~coleenp/8182848.01/webrev/src/cpu/x86/vm/frame_x86.cpp.frames.html > > 684 void frame::pd_ps() {} > 685 > 686 #endif > > You inserted a stray blank line at 685, that is not present in the > other platform files. gone! fixed. > > Other than that, this looks good. Thank you for the code review! Coleen > > Thanks, > StefanK > On 2017-06-28 17:38, coleen.phillimore at oracle.com wrote: >> Summary: moved to vmError.hpp,cpp where they seemed more appropriate >> >> Moved the function pd_ps() into frame_sparc.cpp eliminating >> debug_cpu.cpp files. You can pick a better name for this if you want. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8182848.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8182848 >> >> Tested with JPRT. >> >> Thanks, >> Coleen >> > From coleen.phillimore at oracle.com Wed Jun 28 18:38:59 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Jun 2017 14:38:59 -0400 Subject: RFR (S) 8182848: Some functions misplaced in debug.hpp In-Reply-To: References: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> Message-ID: On 6/28/17 1:25 PM, Thomas Stüfe wrote: > Hi Coleen, > > this is a sensible cleanup! > > Some small nits: :) It's the kind of change that invites such things, but that's ok. > > vmError.cpp: > > - controlled_crash(): I would leave it either a first class global > function, maybe in vmError.hpp, or make it file scope static. I could add controlled_crash to class VMError like the other functions, although it violates the new implied rule that we only declare functions in header files that are used externally. But this seems reasonable to be in the header, I guess.
> > - can we move #include <fcntl.h> up to the general includes? > > Not a part of your patch, but looking at the includes at the > beginning, I see: > ... > 25 #include <fcntl.h> > 26 #include "precompiled.hpp" > 27 #include "code/codeCache.hpp" > ... > > which is probably wrong, I am sure fcntl.h is never picked up because > of the precompiled header line following it. So, maybe move the system > headers past the hotspot headers? I always thought that was the way we > do this. I'll put the system headers (signal.h and this one) below the hotspot header #includes. I think this is how they are supposed to be. If it's not using fcntl.h, then I'll remove it. > > debug.hpp > > While looking at debug.hpp, I saw both report_out_of_shared_space() > and report_insufficient_metaspace(). Both seem awfully specific for a > general purpose assert file. The implementation for > report_insufficient_metaspace() in particular needs to know > "MaxMetaspaceSize", so I think it would be better placed in > metaspace.cpp (it is also only called from that one file, so it does > not even have to be exposed beyond metaspace.cpp). This metaspace change may be removed with some work that Ioi is doing. I'll tell him about it. Thanks, Coleen
> > Thanks, > Coleen > > From coleen.phillimore at oracle.com Wed Jun 28 18:41:22 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Jun 2017 14:41:22 -0400 Subject: RFR (S) 8182848: Some functions misplaced in debug.hpp In-Reply-To: <49b1df23-b488-906a-6dd6-6f2b2622be38@oracle.com> References: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> <49b1df23-b488-906a-6dd6-6f2b2622be38@oracle.com> Message-ID: On 6/28/17 12:45 PM, Stefan Karlsson wrote: > Hi Coleen, > > http://cr.openjdk.java.net/~coleenp/8182848.01/webrev/src/share/vm/runtime/frame.hpp.udiff.html > + DEBUG_ONLY(void pd_ps();) // platform dependent frame printing > > Shouldn't this be guarded by NOT_PRODUCT when all implementations are > guarded by #ifndef PRODUCT ? > > Time again to consider removing the optimized target ;) I don't know how to build the optimized target anymore. yes, we should remove it!! Coleen > > http://cr.openjdk.java.net/~coleenp/8182848.01/webrev/src/cpu/sparc/vm/frame_sparc.cpp.frames.html > > In this file you didn't add frame::pd_ps() after one of the frame > constructors. Maybe move it to line 407 to be consistent with the > other platform files? > > I know you didn't change this, but it's weird that findpc is declared as: > 790 extern "C" void findpc(int x); > when all other platforms use intptr_t x. > > and the code below casts to intptr_t: > 803 findpc((intptr_t)pc); > http://cr.openjdk.java.net/~coleenp/8182848.01/webrev/src/cpu/x86/vm/frame_x86.cpp.frames.html > > 684 void frame::pd_ps() {} > 685 > 686 #endif > > You inserted a stray blank line at 685, that is not present in the > other platform files. > > Other than that, this looks good. > > Thanks, > StefanK > On 2017-06-28 17:38, coleen.phillimore at oracle.com wrote: >> Summary: moved to vmError.hpp,cpp where they seemed more appropriate >> >> Moved the function pd_ps() into frame_sparc.cpp eliminating >> debug_cpu.cpp files. You can pick a better name for this if you want. 
>> open webrev at http://cr.openjdk.java.net/~coleenp/8182848.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8182848 >> >> Tested with JPRT. >> >> Thanks, >> Coleen >> > From coleen.phillimore at oracle.com Wed Jun 28 18:44:03 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Jun 2017 14:44:03 -0400 Subject: RFR (S) 8182554: Code for os::random() assumes long is 32 bits In-Reply-To: References: Message-ID: <62fe181d-4448-dccb-7188-5ec220d4b8cc@oracle.com> Hi Thomas, Thanks for reviewing this. On 6/28/17 1:35 PM, Thomas Stüfe wrote: > Hi Coleen, > > long->int: this makes sense. > thread safety: So, if I understand this correctly, before it could > happen that two threads calling at the same time would return the same > value from the random sequence? Your patch seems fine and solves this. Yes, that would be the case. It's highly unlikely now since most os::random calls are at the beginning and the Symbols call during Symbol creation while holding the SymbolTable_lock. Thanks! Coleen > > Kind Regards, Thomas > > On Wed, Jun 28, 2017 at 5:46 PM, > wrote: > > Summary: And make updating the _rand_seed thread safe. > > See bug for more details. > > open webrev at > http://cr.openjdk.java.net/~coleenp/8182554.01/webrev > > bug link https://bugs.openjdk.java.net/browse/JDK-8182554 > > > Tested with JPRT and performance tested addition of cas in > os::random call (no regressions). The only thing that uses > os::random more than like once is Symbol creation.
> > Thanks, > Coleen > > From coleen.phillimore at oracle.com Wed Jun 28 18:48:46 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Jun 2017 14:48:46 -0400 Subject: RFR (S) 8182554: Code for os::random() assumes long is 32 bits In-Reply-To: <2CC055A2-01B5-44ED-88B7-3F6AADECB567@oracle.com> References: <2CC055A2-01B5-44ED-88B7-3F6AADECB567@oracle.com> Message-ID: <3c02072a-f56b-4b8d-0bc6-f4bb0abdfdb4@oracle.com> On 6/28/17 2:01 PM, Kim Barrett wrote: >> On Jun 28, 2017, at 11:46 AM, coleen.phillimore at oracle.com wrote: >> >> Summary: And make updating the _rand_seed thread safe. >> >> See bug for more details. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8182554.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8182554 >> >> Tested with JPRT and performance tested addition of cas in os::random call (no regressions). The only thing that uses os::random more than like once is Symbol creation. >> >> Thanks, >> Coleen > Something I missed during pre-review. Sorry! > - random_helper ought to be static (or added to os::). Ah yes, I forgot to do that with this version. Thanks! > > Also, the seed in test_random is still (unsigned) long, and we're > trying to eliminate potentially confusing uses of long. ahh, I missed one. Thanks! Coleen > > Looks good. I don't need another webrev for those nits. > > From coleen.phillimore at oracle.com Wed Jun 28 18:52:21 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Jun 2017 14:52:21 -0400 Subject: RFR (S) 8182554: Code for os::random() assumes long is 32 bits In-Reply-To: <3c02072a-f56b-4b8d-0bc6-f4bb0abdfdb4@oracle.com> References: <2CC055A2-01B5-44ED-88B7-3F6AADECB567@oracle.com> <3c02072a-f56b-4b8d-0bc6-f4bb0abdfdb4@oracle.com> Message-ID: I'm going to push with two Reviewers, mostly trivial change. 
thanks, Coleen On 6/28/17 2:48 PM, coleen.phillimore at oracle.com wrote: > > > On 6/28/17 2:01 PM, Kim Barrett wrote: >>> On Jun 28, 2017, at 11:46 AM, coleen.phillimore at oracle.com wrote: >>> >>> Summary: And make updating the _rand_seed thread safe. >>> >>> See bug for more details. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8182554.01/webrev >>> bug link https://bugs.openjdk.java.net/browse/JDK-8182554 >>> >>> Tested with JPRT and performance tested addition of cas in >>> os::random call (no regressions). The only thing that uses >>> os::random more than like once is Symbol creation. >>> >>> Thanks, >>> Coleen >> Something I missed during pre-review. Sorry! >> - random_helper ought to be static (or added to os::). > > Ah yes, I forgot to do that with this version. Thanks! >> >> Also, the seed in test_random is still (unsigned) long, and we're >> trying to eliminate potentially confusing uses of long. > > ahh, I missed one. > > Thanks! > Coleen > >> >> Looks good. I don't need another webrev for those nits. >> >> > From coleen.phillimore at oracle.com Wed Jun 28 19:54:49 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Jun 2017 15:54:49 -0400 Subject: RFR (S) 8182554: Code for os::random() assumes long is 32 bits In-Reply-To: References: <2CC055A2-01B5-44ED-88B7-3F6AADECB567@oracle.com> <3c02072a-f56b-4b8d-0bc6-f4bb0abdfdb4@oracle.com> Message-ID: Stefan had comments offline so here's another version to review that removes more (int) casts and reverts the return type of althashing object_hash(), and fixed the comment in synchronizer.cpp. open webrev at http://cr.openjdk.java.net/~coleenp/8182554.02/webrev Let me know if this looks good. thanks, Coleen On 6/28/17 2:52 PM, coleen.phillimore at oracle.com wrote: > > I'm going to push with two Reviewers, mostly trivial change. 
> thanks, > Coleen > > On 6/28/17 2:48 PM, coleen.phillimore at oracle.com wrote: >> >> >> On 6/28/17 2:01 PM, Kim Barrett wrote: >>>> On Jun 28, 2017, at 11:46 AM, coleen.phillimore at oracle.com wrote: >>>> >>>> Summary: And make updating the _rand_seed thread safe. >>>> >>>> See bug for more details. >>>> >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8182554.01/webrev >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8182554 >>>> >>>> Tested with JPRT and performance tested addition of cas in >>>> os::random call (no regressions). The only thing that uses >>>> os::random more than like once is Symbol creation. >>>> >>>> Thanks, >>>> Coleen >>> Something I missed during pre-review. Sorry! >>> - random_helper ought to be static (or added to os::). >> >> Ah yes, I forgot to do that with this version. Thanks! >>> >>> Also, the seed in test_random is still (unsigned) long, and we're >>> trying to eliminate potentially confusing uses of long. >> >> ahh, I missed one. >> >> Thanks! >> Coleen >> >>> >>> Looks good. I don't need another webrev for those nits. >>> >>> >> > From robbin.ehn at oracle.com Wed Jun 28 20:00:39 2017 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Wed, 28 Jun 2017 22:00:39 +0200 Subject: RFR: 8178495: Bug in the align_size_up_ macro In-Reply-To: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> References: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> Message-ID: Looks good. Is there a problem with always widening it to ULL? E.g. ~((alignment) - 1ULL) Your widen_to_type_of is obviously much cleaner. Thanks for fixing! /Robbin On 06/28/2017 05:08 PM, Stefan Karlsson wrote: > Hi all, > > Please review this patch to fix a bug in the align_size_up_ macro.
> > http://cr.openjdk.java.net/~stefank/8178495/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8178495 > > The following: > align_size_up_((uintptr_t)0x512345678ULL, (int8_t) 16); > align_size_up_((uintptr_t)0x512345678ULL, (int16_t) 16); > align_size_up_((uintptr_t)0x512345678ULL, (int32_t) 16); > align_size_up_((uintptr_t)0x512345678ULL, (int64_t) 16); > > align_size_up_((uintptr_t)0x512345678ULL, (uint8_t) 16); > align_size_up_((uintptr_t)0x512345678ULL, (uint16_t)16); > align_size_up_((uintptr_t)0x512345678ULL, (uint32_t)16); > align_size_up_((uintptr_t)0x512345678ULL, (uint64_t)16); > > Gives this output: > 0x512345680 > 0x512345680 > 0x512345680 > 0x512345680 > > 0x512345680 > 0x512345680 > 0x12345680 > 0x512345680 > > So, align_size_up_((uintptr_t)0x512345678ULL, (uint32_t)16) returns an unexpected, truncated value. > > This happens because in this macro: > #define align_size_up_(size, alignment) (((size) + ((alignment) - 1)) & ~((alignment) - 1)) > > ~((alignment) - 1) returns 0x00000000FFFFFFF0 instead of 0xFFFFFFFFFFFFFFF0 > > This isn't a problem for the 64-bit types, and perhaps less obvious is that it doesn't happen for the 8-bit and 16-bit types. > > For the 8-bit and 16-bit types, the (alignment - 1) is promoted to a signed int; when it later is used in the & expression it is sign extended into a signed 64-bit value. > > When the type is an unsigned 32-bit integer, it isn't promoted to a signed int, and therefore it is not sign extended to 64 bits, but instead zero extended to 64 bits. > > This bug is currently not affecting the code base, since the inline align functions promote all integers to intptr_t, before passing them down to the align macros. However, > when/if JDK-8178489 is pushed the macro is actually used with 32-bit unsigned ints. > > Tested with the unit test and JPRT with and without patches for JDK-8178489 .
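Stefan's analysis can be reproduced in a few lines. This is a minimal sketch: the macro is the one quoted in the mail, while `align_up_widened` only illustrates the widening idea behind the webrev's `widen_to_type_of` helper and is not the actual patch. It assumes a 64-bit `uintptr_t`.

```cpp
#include <cstdint>

// The buggy macro, exactly as quoted above.
#define align_size_up_(size, alignment) \
  (((size) + ((alignment) - 1)) & ~((alignment) - 1))

// With a signed 32-bit alignment of 16, (alignment - 1) is the int 15,
// so ~((alignment) - 1) is the int -16 and gets sign extended to
// 0xFFFFFFFFFFFFFFF0 when &-ed with the 64-bit size:
//   align_size_up_((uintptr_t)0x512345678ULL, (int32_t)16) -> 0x512345680
// With an unsigned 32-bit alignment, ~15u is 0xFFFFFFF0u and gets zero
// extended to 0x00000000FFFFFFF0, masking away the high bits of size:
//   align_size_up_((uintptr_t)0x512345678ULL, (uint32_t)16) -> 0x12345680

// Hypothetical sketch of the widening fix: convert the alignment to the
// size's (wider) type before computing the mask, so the ~ happens at
// full 64-bit width regardless of the alignment's original type.
template <typename T, typename A>
T align_up_widened(T size, A alignment) {
  T mask = static_cast<T>(alignment) - 1;
  return (size + mask) & ~mask;
}
```

The 8-bit and 16-bit cases escape the bug because integer promotion turns them into signed int before the subtraction, after which sign extension does the right thing; uint32_t is the one unsigned type that is neither promoted nor already 64 bits wide.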
> > Thanks, > StefanK From stefan.karlsson at oracle.com Wed Jun 28 20:09:16 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 28 Jun 2017 22:09:16 +0200 Subject: RFR (S) 8182554: Code for os::random() assumes long is 32 bits In-Reply-To: References: <2CC055A2-01B5-44ED-88B7-3F6AADECB567@oracle.com> <3c02072a-f56b-4b8d-0bc6-f4bb0abdfdb4@oracle.com> Message-ID: On 2017-06-28 21:54, coleen.phillimore at oracle.com wrote: > > Stefan had comments offline so here's another version to review that > removes more (int) casts and reverts the return type of althashing > object_hash(), and fixed the comment in synchronizer.cpp. > > open webrev at http://cr.openjdk.java.net/~coleenp/8182554.02/webrev > > Let me know if this looks good. Looks good to me. StefanK > thanks, > Coleen > > > On 6/28/17 2:52 PM, coleen.phillimore at oracle.com wrote: >> >> I'm going to push with two Reviewers, mostly trivial change. >> thanks, >> Coleen >> >> On 6/28/17 2:48 PM, coleen.phillimore at oracle.com wrote: >>> >>> >>> On 6/28/17 2:01 PM, Kim Barrett wrote: >>>>> On Jun 28, 2017, at 11:46 AM, coleen.phillimore at oracle.com wrote: >>>>> >>>>> Summary: And make updating the _rand_seed thread safe. >>>>> >>>>> See bug for more details. >>>>> >>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8182554.01/webrev >>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8182554 >>>>> >>>>> Tested with JPRT and performance tested addition of cas in >>>>> os::random call (no regressions). The only thing that uses >>>>> os::random more than like once is Symbol creation. >>>>> >>>>> Thanks, >>>>> Coleen >>>> Something I missed during pre-review. Sorry! >>>> - random_helper ought to be static (or added to os::). >>> >>> Ah yes, I forgot to do that with this version. Thanks! >>>> >>>> Also, the seed in test_random is still (unsigned) long, and we're >>>> trying to eliminate potentially confusing uses of long. >>> >>> ahh, I missed one. >>> >>> Thanks! 
>>> Coleen >>> >>>> >>>> Looks good. I don't need another webrev for those nits. >>>> >>>> >>> >> > From coleen.phillimore at oracle.com Wed Jun 28 20:11:12 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Jun 2017 16:11:12 -0400 Subject: RFR (S) 8182554: Code for os::random() assumes long is 32 bits In-Reply-To: References: <2CC055A2-01B5-44ED-88B7-3F6AADECB567@oracle.com> <3c02072a-f56b-4b8d-0bc6-f4bb0abdfdb4@oracle.com> Message-ID: On 6/28/17 4:09 PM, Stefan Karlsson wrote: > On 2017-06-28 21:54, coleen.phillimore at oracle.com wrote: >> >> Stefan had comments offline so here's another version to review that >> removes more (int) casts and reverts the return type of althashing >> object_hash(), and fixed the comment in synchronizer.cpp. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8182554.02/webrev >> >> Let me know if this looks good. > > Looks good to me. Thanks!! Coleen > > StefanK > >> thanks, >> Coleen >> >> >> On 6/28/17 2:52 PM, coleen.phillimore at oracle.com wrote: >>> >>> I'm going to push with two Reviewers, mostly trivial change. >>> thanks, >>> Coleen >>> >>> On 6/28/17 2:48 PM, coleen.phillimore at oracle.com wrote: >>>> >>>> >>>> On 6/28/17 2:01 PM, Kim Barrett wrote: >>>>>> On Jun 28, 2017, at 11:46 AM, coleen.phillimore at oracle.com wrote: >>>>>> >>>>>> Summary: And make updating the _rand_seed thread safe. >>>>>> >>>>>> See bug for more details. >>>>>> >>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8182554.01/webrev >>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8182554 >>>>>> >>>>>> Tested with JPRT and performance tested addition of cas in >>>>>> os::random call (no regressions). The only thing that uses >>>>>> os::random more than like once is Symbol creation. >>>>>> >>>>>> Thanks, >>>>>> Coleen >>>>> Something I missed during pre-review. Sorry! >>>>> - random_helper ought to be static (or added to os::). >>>> >>>> Ah yes, I forgot to do that with this version. 
Thanks! >>>>> >>>>> Also, the seed in test_random is still (unsigned) long, and we're >>>>> trying to eliminate potentially confusing uses of long. >>>> >>>> ahh, I missed one. >>>> >>>> Thanks! >>>> Coleen >>>> >>>>> >>>>> Looks good. I don't need another webrev for those nits. >>>>> >>>>> >>>> >>> >> > From leonid.mesnik at oracle.com Wed Jun 28 21:35:35 2017 From: leonid.mesnik at oracle.com (Leonid Mesnik) Date: Wed, 28 Jun 2017 14:35:35 -0700 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: <5bda3da9-18fd-d00a-cc1f-19bf36ff3709@oracle.com> References: <73e5c873-1ddd-afd7-b958-0367b9376f1d@oracle.com> <5bda3da9-18fd-d00a-cc1f-19bf36ff3709@oracle.com> Message-ID: <64280D64-9F9B-42BE-B348-0F1449DFEAC6@oracle.com> > On Jun 28, 2017, at 11:27 AM, Alexander Harlap wrote: > > Hi Leonid and Igor, > > It looks like we need an extra round of review: > > New version is here: http://cr.openjdk.java.net/~aharlap/8178507/webrev.03/ > Two issues: > > 1. TestFullGCALot.java - it may take too long. So I added option -XX:+FullGCALotInterval=120 to make sure we do not hit timeout and do not slow down testing, also -XX:+IgnoreUnrecognizedVMOptions - do not fail in product mode > It would be better to use requires to skip test for product bits. (vm.debug) > 2. TestMemoryInitialization.java - feature to initialize debug memory to some special words currently is supported only for CMS and Serial gc. So I modified Test to run now only for these gc's: > > * @requires vm.gc.Serial | vm.gc.ConcMarkSweep > * @summary Simple test for -XX:+CheckMemoryInitialization doesn't crash VM > * @run main/othervm -XX:+UseSerialGC -XX:+IgnoreUnrecognizedVMOptions -XX:+CheckMemoryInitialization TestMemoryInitialization > * @run main/othervm -XX:+UseConcMarkSweepGC -XX:+IgnoreUnrecognizedVMOptions -XX:+CheckMemoryInitialization TestMemoryInitialization > Test tries to run VM 2 times with Serial/CMS GC in the case if any of them is supported or/and set.
So the test fails if CMS is not supported. In the case that either GC is set explicitly it should fail with unsupported GC combinations. It would be better to split the test into 2 single tests, TestMemoryInitializationSerialGC & TestMemoryInitializationCMSGC, which share java code. Also CMS has been deprecated in JDK9 so I don't know if it makes sense to test it in JDK10. Leonid > I will add enhancement request to support CheckMemoryInitialization flag in G1. > > Alex > On 6/26/2017 7:04 PM, Igor Ignatyev wrote: >> >>> On Jun 26, 2017, at 2:54 PM, Alexander Harlap > wrote: >>> >>> Thank you Igor and Leonid, >>> >>> I fixed mentioned typos and unnecessary return (see http://cr.openjdk.java.net/~aharlap/8178507/webrev.02/ ) >>> >> perfect. >>> Do I need more reviews? >>> >> no, you can go ahead and integrate it. >> >> -- Igor >>> Alex >>> >>> On 6/26/2017 4:32 PM, Igor Ignatyev wrote: >>>> Hi Alexander, >>>> >>>> besides the small nits which Leonid mentioned, there is one in http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html: >>>>> 28 * @summary Test verifies only that VM doesn???t crash but throw expected Error. >>>> I guess "doesn???t" is 'doesn't' w/ a fancy apostrophe. otherwise looks good to me, Reviewed. >>>> >>>> -- Igor >>>> >>>>> On Jun 26, 2017, at 1:11 PM, Leonid Mesnik > wrote: >>>>> >>>>> Hi >>>>> >>>>> New changes looks good for me. Please get review from Reviewer. >>>>> >>>>> The only 2 small nits which don't require separate review from me: >>>>> >>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestFullGCALot.java.html > >>>>> typo in >>>>> 37 System.out.println("Hellow world!"); >>>>> >>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html > >>>>> return is not needed in >>>>> 58 return; >>>>> >>>>> Thanks >>>>> Leonid >>>>>> On Jun 26, 2017, at 1:04 PM, Alexander Harlap > wrote: >>>>>> >>>>>> Hi Leonid, >>>>>> >>>>>> I accommodated your suggestions.
>>>>>> >>>>>> New version of changeset located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/ >>>>>> >>>>>> >>>>>> Alex >>>>>> >>>>>> >>>>>> On 6/23/2017 6:18 PM, Leonid Mesnik wrote: >>>>>>> Hi >>>>>>> >>>>>>> Basically the changes look good. Below are some comments: >>>>>>> >>>>>>>> On Jun 22, 2017, at 9:16 AM, Alexander Harlap > wrote: >>>>>>>> >>>>>>>> Please review change for JDK-8178507 > - co-locate nsk.regression.gc tests >>>>>>>> >>>>>>>> JDK-8178507 > is last remaining sub-task of JDK-8178482 > - Co-locate remaining GC tests >>>>>>>> >>>>>>>> >>>>>>>> Proposed change located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ >>>>>>>> >>>>>>>> Co-located and converted to JTREG tests are: >>>>>>>> >>>>>>>> nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java >>>>>>> The out variable is not used and the return code is not checked in method 'run'. Wouldn't it be simpler just to move println into main and remove method 'run' completely? >>>>>>>> nsk/regression/b4396719 => hotspot/test/gc/TestStackOverflow.java >>>>>>> The method 'run' always returns 0. It would be better to make it void or just remove it. Test never throws any exception. So it makes sense to write in comments that the test verifies only that the VM doesn't crash but throws the expected Error. >>>>>>> >>>>>>>> nsk/regression/b4668531 => hotspot/test/gc/TestMemoryInitialization.java >>>>>>> The variable buffer is 'read-only'. It makes sense to make variable 'buffer' a public static member of class TestMemoryInitialization. So the compiler could not optimize its usage during any optimization like escape analysis. >>>>>>>> nsk/regression/b6186200 => hotspot/test/gc/cslocker/TestCSLocker.java >>>>>>>> >>>>>>> Port looks good. It seems that the test doesn't verify that the lock really happened. Could this be improved as a part of this fix or by filing a separate RFE?
>>>>>>> >>>>>>> Leonid >>>>>>>> Thank you, >>>>>>>> >>>>>>>> Alex >>>>>>>> >>>>>>>> >>>>>> >>>>> >>>> >>> >> > From kim.barrett at oracle.com Wed Jun 28 21:41:31 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 28 Jun 2017 17:41:31 -0400 Subject: RFR (S) 8182554: Code for os::random() assumes long is 32 bits In-Reply-To: References: <2CC055A2-01B5-44ED-88B7-3F6AADECB567@oracle.com> <3c02072a-f56b-4b8d-0bc6-f4bb0abdfdb4@oracle.com> Message-ID: <8AD7C957-CF29-4FED-9885-EAD1ED33B057@oracle.com> > On Jun 28, 2017, at 3:54 PM, coleen.phillimore at oracle.com wrote: > > > Stefan had comments offline so here's another version to review that removes more (int) casts and reverts the return type of althashing object_hash(), and fixed the comment in synchronizer.cpp. > > open webrev at http://cr.openjdk.java.net/~coleenp/8182554.02/webrev > > Let me know if this looks good. > thanks, > Coleen Looks good. From hohensee at amazon.com Wed Jun 28 22:50:33 2017 From: hohensee at amazon.com (Hohensee, Paul) Date: Wed, 28 Jun 2017 22:50:33 +0000 Subject: RFR(XL): 8182299: Enable disabled clang warnings, build on OSX 10 + Xcode 8 In-Reply-To: <6B296ABE-66C0-4C32-AC4E-8674BE103514@oracle.com> References: <27FD0413-52BC-42E6-A5B0-3C92A49A2D6F@amazon.com> <1CBD62A9-9B1B-4B05-AAF9-BE2D52DE8C79@amazon.com> <6B296ABE-66C0-4C32-AC4E-8674BE103514@oracle.com> Message-ID: <3ACE75CF-CEA6-4081-952A-BBA138763582@amazon.com> Thanks for the review, Jesper. New webrev sent, has only a change to nativeInst_x86.cpp. 
In nativeInst_x86.cpp, I formatted the expression so I could easily understand it, but you're right, not everyone does it that way (maybe only me!), so I've changed it to

if (((ubyte_at(0) & NativeTstRegMem::instruction_rex_prefix_mask) == NativeTstRegMem::instruction_rex_prefix &&
     ubyte_at(1) == NativeTstRegMem::instruction_code_memXregl &&
     (ubyte_at(2) & NativeTstRegMem::modrm_mask) == NativeTstRegMem::modrm_reg) ||
    (ubyte_at(0) == NativeTstRegMem::instruction_code_memXregl &&
     (ubyte_at(1) & NativeTstRegMem::modrm_mask) == NativeTstRegMem::modrm_reg)) {

In graphKit.cpp, the old code was

#ifdef ASSERT
  case Deoptimization::Action_none:
  case Deoptimization::Action_make_not_compilable:
    break;
  default:
    fatal("unknown action %d: %s", action, Deoptimization::trap_action_name(action));
    break;
#endif

In the non-ASSERT case, the compiler complained about the lack of Action_none, Action_make_not_compilable and default. If the warning had been turned off, the result would have been 'break;' for all three. In the ASSERT case, Action_none and Action_make_not_compilable result in 'break;', and in the default case 'fatal(); break;'. The new code is

  case Deoptimization::Action_none:
  case Deoptimization::Action_make_not_compilable:
    break;
  default:
#ifdef ASSERT
    fatal("unknown action %d: %s", action, Deoptimization::trap_action_name(action));
#endif
    break;

The compiler doesn't complain about Action_none, Action_make_not_compilable or default anymore. In the non-ASSERT case, the result is 'break;' for all three, same as for the old code. In the ASSERT case, Action_none and Action_make_not_compilable result in 'break;', and in the default case 'fatal(); break;', again same as for the old code. Thanks, Paul On 6/28/17, 10:36 AM, "jesper.wilhelmsson at oracle.com" wrote: Hi Paul, Thanks for doing this change! In general everything looks really good, there are a lot of really nice cleanups here.
I just have two minor questions/nits: * In hotspot/cpu/x86/vm/nativeInst_x86.hpp it seems the expression already has parentheses around the & operations and the change here is "only" cleaning up the layout of the code which is not a bad thing in itself, but you move the logical operators to the beginning of each line which is quite a different style than the rest of the code in the same function where the operators are at the end of the line. * In hotspot/share/vm/opto/graphKit.cpp you moved the #ifdef ASSERT so that Action_none and Action_make_not_compilable are available also when ASSERT is not defined. I don't see this mentioned in your description of the change. Was this change intentional? Thanks, /Jesper > On 27 Jun 2017, at 21:34, Hohensee, Paul wrote: > > An attempt at better formatting. > > https://bugs.openjdk.java.net/browse/JDK-8182299 > http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_jdk.00/ > http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_hotspot.00/ > > Jesper has been kind enough to host the webrevs while I get my cr.openjdk.net account set up, and to be the sponsor. > > This RFE is a combination of enabling disabled clang warnings and getting jdk10 to build on OSX 10 and Xcode 8. At least one enabled warning (delete-non-virtual-dtor) detected what seems to me a real potential bug, with the rest enforcing good code hygiene. > > These changes are only in OpenJDK, so I'm looking for a volunteer to make the closed changes. > > Thanks, > > Paul > > > Here are the jdk file-specific details: > > java_md_macosx.c splashscreen_sys.m > > Removed objc_registerThreadWithCollector() since it's obsolete and of questionable value in any case. > > NSApplicationAWT.m > > Use the correct NSEventMask rather than NSUInteger. > > jdhuff.c jdphuff.c > > Shifting a negative signed value is undefined.
> > Here are the hotspot notes: > > Here are the lists of files affected by enabling a given warning: > > switch: all of these are lack of a default clause > > c1_LIRAssembler_x86.cpp c1_LIRGenerator_x86.cpp c1_LinearScan_x86.hpp > jniFastGetField_x86_64.cpp assembler.cpp c1_Canonicalizer.cpp > c1_GraphBuilder.cpp c1_Instruction.cpp c1_LIR.cpp c1_LIRGenerator.cpp > c1_LinearScan.cpp c1_ValueStack.hpp c1_ValueType.cpp > bcEscapeAnalyzer.cpp ciArray.cpp ciEnv.cpp ciInstance.cpp ciMethod.cpp > ciMethodBlocks.cpp ciMethodData.cpp ciTypeFlow.cpp > compiledMethod.cpp dependencies.cpp nmethod.cpp compileTask.hpp > heapRegionType.cpp abstractInterpreter.cpp bytecodes.cpp > invocationCounter.cpp linkResolver.cpp rewriter.cpp jvmciCompilerToVM.cpp > jvmciEnv.cpp universe.cpp cpCache.cpp generateOopMap.cpp > method.cpp methodData.cpp compile.cpp connode.cpp gcm.cpp graphKit.cpp > ifnode.cpp library_call.cpp memnode.cpp parse1.cpp > parse2.cpp phaseX.cpp superword.cpp type.cpp vectornode.cpp > jvmtiClassFileReconstituter.cpp jvmtiEnter.xsl jvmtiEventController.cpp > jvmtiImpl.cpp jvmtiRedefineClasses.cpp methodComparator.cpp methodHandles.cpp > advancedThresholdPolicy.cpp reflection.cpp relocator.cpp sharedRuntime.cpp > simpleThresholdPolicy.cpp writeableFlags.cpp globalDefinitions.hpp > > delete-non-virtual-dtor: these may be real latent bugs due to possible failure to execute destructor(s) > > decoder_aix.hpp decoder_machO.hpp classLoader.hpp g1RootClosures.hpp > jvmtiImpl.hpp perfData.hpp decoder.hpp decoder_elf.hpp > > dynamic-class-memaccess: obscure use of memcpy > > method.cpp > > empty-body: ';' isn't good enough for clang, it prefers {} > > objectMonitor.cpp mallocSiteTable.cpp > > format: matches printf format strings against arguments. debug output will be affected by > incorrect code changes to these.
> > macroAssembler_x86.cpp os_bsd.cpp os_bsd_x86.cpp ciMethodData.cpp javaClasses.cpp > debugInfo.cpp logFileOutput.cpp constantPool.cpp jvmtiEnter.xsl jvmtiRedefineClasses.cpp > safepoint.cpp thread.cpp > > logical-op-parentheses: can be tricky to get correct. There are a few very long-winded predicates. > > nativeInst_x86.hpp archDesc.cpp output_c.cpp output_h.cpp c1_GraphBuilder.cpp > c1_LIRGenerator.cpp c1_LinearScan.cpp bcEscapeAnalyzer.cpp ciMethod.cpp > stackMapTableFormat.hpp compressedStream.cpp dependencies.cpp heapRegion.cpp > ptrQueue.cpp psPromotionManager.cpp jvmciCompilerToVM.cpp cfgnode.cpp > chaitin.cpp compile.cpp compile.hpp escape.cpp graphKit.cpp lcm.cpp > loopTransform.cpp loopnode.cpp loopopts.cpp macro.cpp memnode.cpp > output.cpp parse1.cpp parseHelper.cpp reg_split.cpp superword.cpp > superword.hpp jniCheck.cpp jvmtiEventController.cpp arguments.cpp > javaCalls.cpp sharedRuntime.cpp > > parentheses > > adlparse.cpp > > parentheses-equality > > output_c.cpp javaAssertions.cpp gcm.cpp > > File-specific details: > > GensrcAdlc.gmk CompileJvm.gmk > Left tautological-compare in place to allow null 'this' pointer checks in methods > intended to be called from a debugger. > > CompileGTest.gmk > Just an enhanced comment. > > MacosxDebuggerLocal.m > PT_ATTACH has been replaced by PT_ATTACHEXC > > ciMethodData.cpp > " 0x%" FORMAT64_MODIFIER "x" reduces to "0x%llx", whereas > " " INTPTRNZ_FORMAT reduces to "0x%lx", the latter being what clang wants. > > generateOopMap.cpp > Refactored duplicate code in print_current_state(). > > binaryTreeDictionary.cpp/hpp, hashtable.cpp/hpp > These provoked 'instantiation of variable required here, > but no definition is available'. > > globalDefinitions_gcc.hpp > Define FORMAT64_MODIFIER properly for Apple, needed by os.cpp. > > globalDefinitions.hpp > Add INTPTRNZ_FORMAT, needed by ciMethodData.cpp.
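For readers unfamiliar with the delete-non-virtual-dtor warning Paul singles out as a potential real bug, a minimal illustration follows. The types here are invented for the demo, not taken from HotSpot.

```cpp
// Counts how many times ~Derived runs, so the behavior is observable.
static int derived_dtors_run = 0;

struct Base {
  // Making this destructor virtual is the fix. Without 'virtual',
  // 'delete p' in destroy() below would invoke only ~Base (formally
  // undefined behavior, in practice skipping ~Derived and leaking
  // whatever Derived owns), and clang warns at the delete site.
  virtual ~Base() {}
};

struct Derived : Base {
  ~Derived() override { ++derived_dtors_run; }
};

// Deleting through a base pointer is safe only because ~Base is virtual.
void destroy(Base* p) { delete p; }
```

This is why the warning points at "real latent bugs": any class in the list that is deleted polymorphically without a virtual destructor silently skips derived cleanup.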
From kim.barrett at oracle.com Thu Jun 29 00:54:09 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 28 Jun 2017 20:54:09 -0400 Subject: RFR (S) 8182848: Some functions misplaced in debug.hpp In-Reply-To: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> References: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> Message-ID: <4FAC05DC-53C1-4EFD-886A-E22FB5AC78DF@oracle.com> > On Jun 28, 2017, at 11:38 AM, coleen.phillimore at oracle.com wrote: > > Summary: moved to vmError.hpp,cpp where they seemed more appropriate > > Moved the function pd_ps() into frame_sparc.cpp eliminating debug_cpu.cpp files. You can pick a better name for this if you want. > > open webrev at http://cr.openjdk.java.net/~coleenp/8182848.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8182848 > > Tested with JPRT. > > Thanks, > Coleen I didn't notice anything other than minor things that Stefan or Thomas have already commented on. Looks good to me. From mikael.gerdin at oracle.com Thu Jun 29 09:47:02 2017 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 29 Jun 2017 11:47:02 +0200 Subject: RFR (S) 8183203: Remove stubRoutines_os Message-ID: <8ac671a6-1c48-a786-c47d-ecc677dadc34@oracle.com> Hi all, Please review this change to remove stubRoutines_.cpp. They have been empty for ages, the Linux and Windows ones have never had a single source line besides #includes since their creation almost 16 years ago and the Solaris one has been empty for almost 15 years. Testing: JPRT build-only job Bug: https://bugs.openjdk.java.net/browse/JDK-8183203 Webrev: http://cr.openjdk.java.net/~mgerdin/8183203/webrev/ /Mikael From stefan.karlsson at oracle.com Thu Jun 29 09:48:36 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 29 Jun 2017 11:48:36 +0200 Subject: RFR (S) 8183203: Remove stubRoutines_os In-Reply-To: <8ac671a6-1c48-a786-c47d-ecc677dadc34@oracle.com> References: <8ac671a6-1c48-a786-c47d-ecc677dadc34@oracle.com> Message-ID: Looks awesome!
;) StefanK On 2017-06-29 11:47, Mikael Gerdin wrote: > Hi all, > > Please review this change to remove stubRoutines_.cpp. > They have been empty for ages, the Linux and Windows ones have never > had a single source line besides #includes since their creation almost > 16 years ago and the Solaris one has been empty for almost 15 years. > > Testing: JPRT build-only job > Bug: https://bugs.openjdk.java.net/browse/JDK-8183203 > Webrev: http://cr.openjdk.java.net/~mgerdin/8183203/webrev/ > > /Mikael From thomas.schatzl at oracle.com Thu Jun 29 09:52:04 2017 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 29 Jun 2017 11:52:04 +0200 Subject: RFR (S) 8183203: Remove stubRoutines_os In-Reply-To: <8ac671a6-1c48-a786-c47d-ecc677dadc34@oracle.com> References: <8ac671a6-1c48-a786-c47d-ecc677dadc34@oracle.com> Message-ID: <1498729924.2900.8.camel@oracle.com> Hi, On Thu, 2017-06-29 at 11:47 +0200, Mikael Gerdin wrote: > Hi all, > > Please review this change to remove stubRoutines_.cpp. > They have been empty for ages, the Linux and Windows ones have never > had a single source line besides #includes since their creation almost > 16 years ago and the Solaris one has been empty for almost 15 years. > > Testing: JPRT build-only job > Bug: https://bugs.openjdk.java.net/browse/JDK-8183203 > Webrev: http://cr.openjdk.java.net/~mgerdin/8183203/webrev/ > > /Mikael I like that change :) Ship it before anyone wants to add something to these files... Thanks,
Thomas From thomas.schatzl at oracle.com Thu Jun 29 12:36:02 2017 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 29 Jun 2017 14:36:02 +0200 Subject: RFR: 8178495: Bug in the align_size_up_ macro In-Reply-To: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> References: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> Message-ID: <1498739762.2961.2.camel@oracle.com> Hi, On Wed, 2017-06-28 at 17:08 +0200, Stefan Karlsson wrote: > Hi all, > > Please review this patch to fix a bug in the align_size_up_ macro. > > http://cr.openjdk.java.net/~stefank/8178495/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8178495 > looks good to me. Thomas From coleen.phillimore at oracle.com Thu Jun 29 12:38:19 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 29 Jun 2017 08:38:19 -0400 Subject: RFR (S) 8182848: Some functions misplaced in debug.hpp In-Reply-To: <4FAC05DC-53C1-4EFD-886A-E22FB5AC78DF@oracle.com> References: <62f6d0e5-51ee-9993-75fd-492893459e6a@oracle.com> <4FAC05DC-53C1-4EFD-886A-E22FB5AC78DF@oracle.com> Message-ID: Thanks Kim. Coleen On 6/28/17 8:54 PM, Kim Barrett wrote: >> On Jun 28, 2017, at 11:38 AM, coleen.phillimore at oracle.com wrote: >> >> Summary: moved to vmError.hpp,cpp where they seemed more appropriate >> >> Moved the function pd_ps() into frame_sparc.cpp eliminating debug_cpu.cpp files. You can pick a better name for this if you want. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8182848.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8182848 >> >> Tested with JPRT. >> >> Thanks, >> Coleen > I didn't notice anything other than minor things that Stefan or Thomas have already commented on. > > Looks good to me.
> From mikael.gerdin at oracle.com Thu Jun 29 12:50:34 2017 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 29 Jun 2017 14:50:34 +0200 Subject: RFR (S) 8183229: Implement WindowsSemaphore::trywait Message-ID: Hi all, To help with some upcoming patches I'd like to add the trywait operation to the platform-independent Semaphore class. It's currently lacking a Windows-implementation. Testing: JPRT testing on Windows platforms Webrev: http://cr.openjdk.java.net/~mgerdin/8183229/webrev.0/ Bug: https://bugs.openjdk.java.net/browse/JDK-8183229 Thanks /Mikael From claes.redestad at oracle.com Thu Jun 29 12:55:33 2017 From: claes.redestad at oracle.com (Claes Redestad) Date: Thu, 29 Jun 2017 14:55:33 +0200 Subject: RFR (S) 8183229: Implement WindowsSemaphore::trywait In-Reply-To: References: Message-ID: <0bf8f2af-86a8-9ca2-5e40-53e9674d67d0@oracle.com> Hi, On 06/29/2017 02:50 PM, Mikael Gerdin wrote: > Hi all, > > To help with some upcoming patches I'd like to add the trywait > operation to the platform-independent Semaphore class. It's currently > lacking a Windows-implementation. > > Testing: JPRT testing on Windows platforms > Webrev: http://cr.openjdk.java.net/~mgerdin/8183229/webrev.0/ looks good! 
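For context on the semantics being added: trywait must return immediately, reporting whether a permit could be taken, and on Windows the natural building block is a zero-timeout wait such as WaitForSingleObject(handle, 0). A tiny lock-free sketch of those semantics follows; it is purely illustrative and is not HotSpot's actual Semaphore class.

```cpp
#include <atomic>

// Minimal counting semaphore, just to pin down what trywait() means:
// it never blocks; it decrements and returns true if a permit is
// available, and returns false immediately otherwise.
class TinySemaphore {
  std::atomic<unsigned> _count;
 public:
  explicit TinySemaphore(unsigned initial) : _count(initial) {}

  void signal() { _count.fetch_add(1); }

  bool trywait() {
    unsigned c = _count.load();
    while (c != 0) {
      // On failure compare_exchange_weak reloads c; if it dropped to
      // zero we fall out of the loop and report failure.
      if (_count.compare_exchange_weak(c, c - 1)) return true;
    }
    return false;  // count was zero: would have blocked, so fail at once
  }
};
```

The blocking wait() differs only in that it retries (or parks) instead of returning false when the count is zero.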
/Claes From hohensee at amazon.com Thu Jun 29 13:31:17 2017 From: hohensee at amazon.com (Hohensee, Paul) Date: Thu, 29 Jun 2017 13:31:17 +0000 Subject: RFR(XL): 8182299: Enable disabled clang warnings, build on OSX 10 + Xcode 8 In-Reply-To: <3ACE75CF-CEA6-4081-952A-BBA138763582@amazon.com> References: <27FD0413-52BC-42E6-A5B0-3C92A49A2D6F@amazon.com> <1CBD62A9-9B1B-4B05-AAF9-BE2D52DE8C79@amazon.com> <6B296ABE-66C0-4C32-AC4E-8674BE103514@oracle.com> <3ACE75CF-CEA6-4081-952A-BBA138763582@amazon.com> Message-ID: <435C96F2-417E-4A47-8649-28E34E2908AE@amazon.com> I now have access to cr.openjdk.java.net, so the latest webrevs are at http://cr.openjdk.java.net/~phh/8182299/webrev_jdk.00/ http://cr.openjdk.java.net/~phh/8182299/webrev_hotspot.00/ On 6/28/17, 3:50 PM, "hotspot-dev on behalf of Hohensee, Paul" wrote: Thanks for the review, Jesper. New webrev sent, has only a change to nativeInst_x86.cpp. In nativeInst_x86.cpp, I formatted the expression so I could easily understand it, but you're right, not everyone does it that way (maybe only me!), so I've changed it to if (((ubyte_at(0) & NativeTstRegMem::instruction_rex_prefix_mask) == NativeTstRegMem::instruction_rex_prefix && ubyte_at(1) == NativeTstRegMem::instruction_code_memXregl && (ubyte_at(2) & NativeTstRegMem::modrm_mask) == NativeTstRegMem::modrm_reg) || (ubyte_at(0) == NativeTstRegMem::instruction_code_memXregl && (ubyte_at(1) & NativeTstRegMem::modrm_mask) == NativeTstRegMem::modrm_reg)) { In graphKit.cpp, the old code was #ifdef ASSERT case Deoptimization::Action_none: case Deoptimization::Action_make_not_compilable: break; default: fatal("unknown action %d: %s", action, Deoptimization::trap_action_name(action)); break; #endif In the non-ASSERT case, the compiler complained about the lack of Action_none, Action_make_not_compilable and default. If the warning had been turned off, the result would have been "break;" for all three.
In the ASSERT case, Action_none and Action_make_not_compilable result in "break;", and in the default case "fatal(); break;" The new code is case Deoptimization::Action_none: case Deoptimization::Action_make_not_compilable: break; default: #ifdef ASSERT fatal("unknown action %d: %s", action, Deoptimization::trap_action_name(action)); #endif break; The compiler doesn't complain about Action_none, Action_make_not_compilable or default anymore. In the non-ASSERT case, the result is "break;" for all three, same as for the old code. In the ASSERT case, Action_none and Action_make_not_compilable result in "break;", and in the default case "fatal(); break;", again same as for the old code. Thanks, Paul On 6/28/17, 10:36 AM, "jesper.wilhelmsson at oracle.com" wrote: Hi Paul, Thanks for doing this change! In general everything looks really good, there are a lot of really nice cleanups here. I just have two minor questions/nits: * In hotspot/cpu/x86/vm/nativeInst_x86.hpp it seems the expression already has parentheses around the & operations and the change here is "only" cleaning up the layout of the code, which is not a bad thing in itself, but you move the logical operators to the beginning of each line which is a quite different style than the rest of the code in the same function where the operators are at the end of the line. * In hotspot/share/vm/opto/graphKit.cpp you moved the #ifdef ASSERT so that Action_none and Action_make_not_compilable are available also when ASSERT is not defined. I don't see this mentioned in your description of the change. Was this change intentional? Thanks, /Jesper > On 27 Jun 2017, at 21:34, Hohensee, Paul wrote: > > An attempt at better formatting.
> > https://bugs.openjdk.java.net/browse/JDK-8182299 > http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_jdk.00/ > http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_hotspot.00/ > > Jesper has been kind enough to host the webrevs while I get my cr.openjdk.net account set up, and to be the sponsor. > > This RFE is a combination of enabling disabled clang warnings and getting jdk10 to build on OSX 10 and Xcode 8. At least one enabled warning (delete-non-virtual-dtor) detected what seems to me a real potential bug, with the rest enforcing good code hygiene. > > These changes are only in OpenJDK, so I'm looking for a volunteer to make the closed changes. > > Thanks, > > Paul > > > Here are the jdk file-specific details: > > java_md_macosx.c splashscreen_sys.m > > Removed objc_registerThreadWithCollector() since it's obsolete and of questionable value in any case. > > NSApplicationAWT.m > > Use the correct NSEventMask rather than NSUInteger. > > jdhuff.c jdphuff.c > > Shifting a negative signed value is undefined.
> > Here are the hotspot notes: > > Here are the lists of files affected by enabling a given warning: > > switch: all of these are lack of a default clause > > c1_LIRAssembler_x86.cpp c1_LIRGenerator_x86.cpp c1_LinearScan_x86.hpp > jniFastGetField_x86_64.cpp assembler.cpp c1_Canonicalizer.cpp > c1_GraphBuilder.cpp c1_Instruction.cpp c1_LIR.cpp c1_LIRGenerator.cpp > c1_LinearScan.cpp c1_ValueStack.hpp c1_ValueType.cpp > bcEscapeAnalyzer.cpp ciArray.cpp ciEnv.cpp ciInstance.cpp ciMethod.cpp > ciMethodBlocks.cpp ciMethodData.cpp ciTypeFlow.cpp > compiledMethod.cpp dependencies.cpp nmethod.cpp compileTask.hpp > heapRegionType.cpp abstractInterpreter.cpp bytecodes.cpp > invocationCounter.cpp linkResolver.cpp rewriter.cpp jvmciCompilerToVM.cpp > jvmciEnv.cpp universe.cpp cpCache.cpp generateOopMap.cpp > method.cpp methodData.cpp compile.cpp connode.cpp gcm.cpp graphKit.cpp > ifnode.cpp library_call.cpp memnode.cpp parse1.cpp > parse2.cpp phaseX.cpp superword.cpp type.cpp vectornode.cpp > jvmtiClassFileReconstituter.cpp jvmtiEnter.xsl jvmtiEventController.cpp > jvmtiImpl.cpp jvmtiRedefineClasses.cpp methodComparator.cpp methodHandles.cpp > advancedThresholdPolicy.cpp reflection.cpp relocator.cpp sharedRuntime.cpp > simpleThresholdPolicy.cpp writeableFlags.cpp globalDefinitions.hpp > > delete-non-virtual-dtor: these may be real latent bugs due to possible failure to execute destructor(s) > > decoder_aix.hpp decoder_machO.hpp classLoader.hpp g1RootClosures.hpp > jvmtiImpl.hpp perfData.hpp decoder.hpp decoder_elf.hpp > > dynamic-class-memaccess: obscure use of memcpy > > method.cpp > > empty-body: ';' isn't good enough for clang, it prefers {} > > objectMonitor.cpp mallocSiteTable.cpp > > format: matches printf format strings against arguments. debug output will be affected by > incorrect code changes to these.
> > macroAssembler_x86.cpp os_bsd.cpp os_bsd_x86.cpp ciMethodData.cpp javaClasses.cpp > debugInfo.cpp logFileOutput.cpp constantPool.cpp jvmtiEnter.xsl jvmtiRedefineClasses.cpp > safepoint.cpp thread.cpp > > logical-op-parentheses: can be tricky to get correct. There are a few very long-winded predicates. > > nativeInst_x86.hpp archDesc.cpp output_c.cpp output_h.cpp c1_GraphBuilder.cpp > c1_LIRGenerator.cpp c1_LinearScan.cpp bcEscapeAnalyzer.cpp ciMethod.cpp > stackMapTableFormat.hpp compressedStream.cpp dependencies.cpp heapRegion.cpp > ptrQueue.cpp psPromotionManager.cpp jvmciCompilerToVM.cpp cfgnode.cpp > chaitin.cpp compile.cpp compile.hpp escape.cpp graphKit.cpp lcm.cpp > loopTransform.cpp loopnode.cpp loopopts.cpp macro.cpp memnode.cpp > output.cpp parse1.cpp parseHelper.cpp reg_split.cpp superword.cpp > superword.hpp jniCheck.cpp jvmtiEventController.cpp arguments.cpp > javaCalls.cpp sharedRuntime.cpp > > parentheses > > adlparse.cpp > > parentheses-equality > > output_c.cpp javaAssertions.cpp gcm.cpp > > File-specific details: > > GensrcAdlc.gmk CompileJvm.gmk > Left tautological-compare in place to allow null 'this' pointer checks in methods > intended to be called from a debugger. > > CompileGTest.gmk > Just an enhanced comment. > > MacosxDebuggerLocal.m > PT_ATTACH has been replaced by PT_ATTACHEXC > > ciMethodData.cpp > " 0x%" FORMAT64_MODIFIER "x" reduces to "0x%llx", whereas > " " INTPTRNZ_FORMAT reduces to "0x%lx", which latter is what clang wants. > > generateOopMap.cpp > Refactored duplicate code in print_current_state(). > > binaryTreeDictionary.cpp/hpp, hashtable.cpp/hpp > These provoked 'instantiation of variable required here, > but no definition is available'. > > globalDefinitions_gcc.hpp > Define FORMAT64_MODIFIER properly for Apple, needed by os.cpp. > > globalDefinitions.hpp > Add INTPTRNZ_FORMAT, needed by ciMethodData.cpp.
From mbrandy at linux.vnet.ibm.com Thu Jun 29 13:37:07 2017 From: mbrandy at linux.vnet.ibm.com (Matthew Brandyberry) Date: Thu, 29 Jun 2017 08:37:07 -0500 Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 In-Reply-To: <95d5cb36271e4ebf8398223702b61ac8@sap.com> References: <8b53b855-f540-bb89-dcbb-bf0f5bb35b17@linux.vnet.ibm.com> <2a4fcb315f4d44199e8cc66935886f41@sap.com> <651ebdd4-3854-ac42-8e9c-54df77cbb5fc@linux.vnet.ibm.com> <56073cca-0c7a-0436-4e95-6d74a1bbe404@linux.vnet.ibm.com> <20c43bf4-a66b-2cc8-e62f-d58eb66df278@linux.vnet.ibm.com> <0f82ddbcf57348e8ac6e6cd9e51674f3@sap.com> <95d5cb36271e4ebf8398223702b61ac8@sap.com> Message-ID: <232e0b79-9022-bf4b-4c40-88880c68d22e@linux.vnet.ibm.com> Thanks Martin. That looks good. Is there anything I need to do to request a 2nd review? On 6/26/17 7:47 AM, Doerr, Martin wrote: > Hi Matt, > > after some testing and reviewing the C1 part again, I found 2 bugs: > > c1_LIRAssembler: is_stack() can't be used for this purpose as the value may be available in a register even though it was forced to stack. I just changed src_in_memory = !VM_Version::has_mtfprd() to make it consistent with LIRGenerator and removed the assertions which have become redundant. > > c1_LIRGenerator: value.set_destroys_register() is still needed for conversion from FP to GP registers because they kill the src value by fctiwz/fctidz. > > I just fixed these issues here in a copy of your webrev v2: > http://cr.openjdk.java.net/~mdoerr/8181809_ppc64_mtfprd/v2/ > > Please take a look and use this one for 2nd review. > > Best regards, > Martin > > > -----Original Message----- > From: Doerr, Martin > Sent: Montag, 26. 
Juni 2017 10:44 > To: 'Matthew Brandyberry' ; ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net > Subject: RE: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 > > Hi Matt, > > you can run the pre-push check stand-alone: hg jcheck > See: > http://openjdk.java.net/projects/code-tools/jcheck/ > > I just had to add the commit message: > 8181809: PPC64: Leverage mtfprd/mffprd on POWER8 > Reviewed-by: mdoerr > Contributed-by: Matthew Brandyberry > > (Note that the ':' after the bug id is important.) > > and replace the Tabs the 2 C1 files to get it passing. > (I think that "Illegal tag name" warnings can be ignored.) > > So only the copyright dates are missing which are not checked by jcheck. > But I don't need a new webrev if that's all which needs to be changed. > > Best regards, > Martin > > > -----Original Message----- > From: Matthew Brandyberry [mailto:mbrandy at linux.vnet.ibm.com] > Sent: Freitag, 23. Juni 2017 18:39 > To: Doerr, Martin ; ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net > Subject: Re: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 > > Thanks Martin. Are there tools to help detect formatting errors like the > tab characters? > > I'll keep an eye on this to see if I need to do anything else. > > -Matt > > On 6/23/17 4:30 AM, Doerr, Martin wrote: >> Excellent. Thanks for the update. The C1 part looks good, too. >> >> Also, thanks for checking "I could not find evidence of any config that includes vpmsumb but not >> mtfprd." >> >> There are only a few formally required things: >> - The new C1 code contains Tab characters. It's not possible to push it without fixing this. >> - Copyright messages should be updated. >> - Minor resolution to get vm_version_ppc applied to recent jdk10/hs. >> >> If no other changes get requested, I can handle these issues this time before pushing. >> But we need another review, first. 
>> >> Thanks and best regards, >> Martin >> >> >> -----Original Message----- >> From: Matthew Brandyberry [mailto:mbrandy at linux.vnet.ibm.com] >> Sent: Freitag, 23. Juni 2017 04:54 >> To: Doerr, Martin ; ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net >> Subject: Re: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 >> >> Updated webrev: http://cr.openjdk.java.net/~gromero/8181809/v2/ >> >> See below for responses inline. >> >> On 6/20/17 8:38 AM, Matthew Brandyberry wrote: >>> Hi Martin, >>> >>> Thanks for the review. I'll take a look at these areas and report >>> back -- especially the integration into C1. >>> >>> On 6/20/17 8:33 AM, Doerr, Martin wrote: >>>> Hi Matt, >>>> >>>> thanks for providing this webrev. I had already thought about using >>>> these instructions for this purpose and your change matches pretty >>>> much what I'd do. >>>> >>>> Here a couple of comments: >>>> ppc.ad: >>>> This was a lot of work. Thanks for doing it. >>>> effect(DEF dst, USE src); is redundant if a match rule match(Set dst >>>> (MoveL2D src)); exists. >> Fixed. >>>> vm_version: >>>> This part is in conflict with Michihiro's change which is already >>>> pushed in jdk10, but it's trivial to resolve. I'm ok with using >>>> has_vpmsumb() for has_mtfprd(). In the past, we sometimes had trouble >>>> with assuming that a certain Power processor supports all new >>>> instructions if it supports certain ones. We also use the hotspot >>>> code on as400 where certain instruction subsets were disabled while >>>> other Power 8 instructions were usable. Maybe you can double-check if >>>> there may exist configurations in which has_vpmsumb() doesn't match >>>> has_mtfprd(). >> I could not find evidence of any config that includes vpmsumb but not >> mtfprd. >>>> C1: >>>> It should also be possible to use the instructions in C1 compiler. >>>> Maybe you would like to take a look at it as well and see if it can >>>> be done with feasible effort. 
>>>> Here are some hints: >>>> The basic decisions are made in LIRGenerator::do_Convert. You could >>>> skip the force_to_spill or must_start_in_memory steps. >>>> The final assembly code gets emitted in LIR_Assembler::emit_opConvert >>>> where you could replace the store instructions. >>>> For testing, you can use -XX:TieredStopAtLevel=1, for example. >> Done. Please take a look. >>>> Thanks and best regards, >>>> Martin >>>> >>>> >>>> -----Original Message----- >>>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >>>> Behalf Of Matthew Brandyberry >>>> Sent: Montag, 19. Juni 2017 18:28 >>>> To: ppc-aix-port-dev at openjdk.java.net; hotspot-dev at openjdk.java.net >>>> Subject: RFR(M) JDK-8181809 PPC64: Leverage mtfprd/mffprd on POWER8 >>>> >>>> This is a PPC-specific hotspot optimization that leverages the >>>> mtfprd/mffprd instructions for movement between general purpose and >>>> floating point registers (rather than through memory). It yields a ~35% >>>> improvement measured via a microbenchmark. Please review: Bug: >>>> https://bugs.openjdk.java.net/browse/JDK-8181809 Webrev: >>>> http://cr.openjdk.java.net/~gromero/8181809/v1/ Thanks, Matt >>>> >>>> From stefan.karlsson at oracle.com Thu Jun 29 13:50:18 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 29 Jun 2017 15:50:18 +0200 Subject: RFR: 8178495: Bug in the align_size_up_ macro In-Reply-To: References: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> Message-ID: <1b547206-e163-ceb4-dd2c-b908ce7d3c0e@oracle.com> On 2017-06-28 22:00, Robbin Ehn wrote: > Looks good. > > Is there a problem with always widening it to ULL? Yes, you get problems on 32-bit builds as well as problems with signed vs unsigned comparisons. > > E.g. > ~((alignment) - 1ULL) > > Your widen_to_type_of is obviously much cleaner. > > Thanks for fixing! Thanks for reviewing.
StefanK > > /Robbin > > On 06/28/2017 05:08 PM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to fix a bug in the align_size_up_ macro. >> >> http://cr.openjdk.java.net/~stefank/8178495/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8178495 >> >> The following: >> align_size_up_((uintptr_t)0x512345678ULL, (int8_t) 16); >> align_size_up_((uintptr_t)0x512345678ULL, (int16_t) 16); >> align_size_up_((uintptr_t)0x512345678ULL, (int32_t) 16); >> align_size_up_((uintptr_t)0x512345678ULL, (int64_t) 16); >> >> align_size_up_((uintptr_t)0x512345678ULL, (uint8_t) 16); >> align_size_up_((uintptr_t)0x512345678ULL, (uint16_t)16); >> align_size_up_((uintptr_t)0x512345678ULL, (uint32_t)16); >> align_size_up_((uintptr_t)0x512345678ULL, (uint64_t)16); >> >> Gives this output: >> 0x512345680 >> 0x512345680 >> 0x512345680 >> 0x512345680 >> >> 0x512345680 >> 0x512345680 >> 0x12345680 >> 0x512345680 >> >> So, align_size_up_((uintptr_t)0x512345678ULL, (uint32_t)16) returns >> an unexpected, truncated value. >> >> This happens because in this macro: >> #define align_size_up_(size, alignment) (((size) + ((alignment) - 1)) >> & ~((alignment) - 1)) >> >> ~((alignment) - 1) returns 0x00000000FFFFFFF0 instead of >> 0xFFFFFFFFFFFFFFF0 >> >> This isn't a problem for the 64-bit types, and maybe more non-obvious >> is that it doesn't happen for the 8-bit and 16-bit types. >> >> For the 8-bit and 16-bit types, the (alignment - 1) is promoted to a >> signed int, and when it later is used in the & expression it is sign >> extended into a signed 64-bit value. >> >> When the type is an unsigned 32-bit integer, it isn't promoted to a >> signed int, and therefore it is not sign extended to 64 bits, but >> instead zero extended to 64 bits. >> >> This bug is currently not affecting the code base, since the inline >> align functions promote all integers to intptr_t, before passing them >> down to the align macros.
However, when/if JDK-8178489 >> is pushed the >> macro is actually used with 32 bits unsigned ints. >> >> Tested with the unit test and JPRT with and without patches for >> JDK-8178489 . >> >> Thanks, >> StefanK From stefan.karlsson at oracle.com Thu Jun 29 13:50:41 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 29 Jun 2017 15:50:41 +0200 Subject: RFR: 8178495: Bug in the align_size_up_ macro In-Reply-To: <1498739762.2961.2.camel@oracle.com> References: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> <1498739762.2961.2.camel@oracle.com> Message-ID: <3bef683a-e5ed-2bb3-a6a8-8a9909ae1ece@oracle.com> Thanks, Thomas. StefanK On 2017-06-29 14:36, Thomas Schatzl wrote: > Hi, > > On Wed, 2017-06-28 at 17:08 +0200, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to fix a bug in the align_size_up_ macro. >> >> http://cr.openjdk.java.net/~stefank/8178495/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8178495 >> > looks good to me. > > Thomas > From jesper.wilhelmsson at oracle.com Thu Jun 29 16:10:02 2017 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Thu, 29 Jun 2017 18:10:02 +0200 Subject: RFR(XL): 8182299: Enable disabled clang warnings, build on OSX 10 + Xcode 8 In-Reply-To: <435C96F2-417E-4A47-8649-28E34E2908AE@amazon.com> References: <27FD0413-52BC-42E6-A5B0-3C92A49A2D6F@amazon.com> <1CBD62A9-9B1B-4B05-AAF9-BE2D52DE8C79@amazon.com> <6B296ABE-66C0-4C32-AC4E-8674BE103514@oracle.com> <3ACE75CF-CEA6-4081-952A-BBA138763582@amazon.com> <435C96F2-417E-4A47-8649-28E34E2908AE@amazon.com> Message-ID: <3E582353-F462-4002-80B4-CC5AF88C9E59@oracle.com> Thanks for changing this! Looks good to me. 
/Jesper > On 29 Jun 2017, at 15:31, Hohensee, Paul wrote: > > I now have access to cr.openjdk.java.net, so the latest webrevs are at > > http://cr.openjdk.java.net/~phh/8182299/webrev_jdk.00/ > http://cr.openjdk.java.net/~phh/8182299/webrev_hotspot.00/ > > On 6/28/17, 3:50 PM, "hotspot-dev on behalf of Hohensee, Paul" wrote: > > Thanks for the review, Jesper. > > New webrev sent, has only a change to nativeInst_x86.cpp. > > In nativeInst_x86.cpp, I formatted the expression so I could easily understand it, but you're right, not everyone does it that way (maybe only me!), so I've changed it to > > if (((ubyte_at(0) & NativeTstRegMem::instruction_rex_prefix_mask) == NativeTstRegMem::instruction_rex_prefix && > ubyte_at(1) == NativeTstRegMem::instruction_code_memXregl && > (ubyte_at(2) & NativeTstRegMem::modrm_mask) == NativeTstRegMem::modrm_reg) || > (ubyte_at(0) == NativeTstRegMem::instruction_code_memXregl && > (ubyte_at(1) & NativeTstRegMem::modrm_mask) == NativeTstRegMem::modrm_reg)) { > > In graphKit.cpp, the old code was > > #ifdef ASSERT > case Deoptimization::Action_none: > case Deoptimization::Action_make_not_compilable: > break; > default: > fatal("unknown action %d: %s", action, Deoptimization::trap_action_name(action)); > break; > #endif > > In the non-ASSERT case, the compiler complained about the lack of Action_none, Action_make_not_compilable and default. If the warning had been turned off, the result would have been "break;" for all three. In the ASSERT case, Action_none and Action_make_not_compilable result in "break;", and in the default case "fatal(); break;" > > The new code is > > case Deoptimization::Action_none: > case Deoptimization::Action_make_not_compilable: > break; > default: > #ifdef ASSERT > fatal("unknown action %d: %s", action, Deoptimization::trap_action_name(action)); > #endif > break; > > The compiler doesn't complain about Action_none, Action_make_not_compilable or default anymore. In the non-ASSERT case, the result is "break;"
for all three, same as for the old code. In the ASSERT case, Action_none and Action_make_not_compilable result in "break;", and in the default case "fatal(); break;", again same as for the old code. > > Thanks, > > Paul > > On 6/28/17, 10:36 AM, "jesper.wilhelmsson at oracle.com" wrote: > > Hi Paul, > > Thanks for doing this change! In general everything looks really good, there are a lot of really nice cleanups here. I just have two minor questions/nits: > > * In hotspot/cpu/x86/vm/nativeInst_x86.hpp it seems the expression already has parentheses around the & operations and the change here is "only" cleaning up the layout of the code, which is not a bad thing in itself, but you move the logical operators to the beginning of each line which is a quite different style than the rest of the code in the same function where the operators are at the end of the line. > > * In hotspot/share/vm/opto/graphKit.cpp you moved the #ifdef ASSERT so that Action_none and Action_make_not_compilable are available also when ASSERT is not defined. I don't see this mentioned in your description of the change. Was this change intentional? > > Thanks, > /Jesper > > >> On 27 Jun 2017, at 21:34, Hohensee, Paul wrote: >> >> An attempt at better formatting. >> >> https://bugs.openjdk.java.net/browse/JDK-8182299 >> http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_jdk.00/ >> http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_hotspot.00/ >> >> Jesper has been kind enough to host the webrevs while I get my cr.openjdk.net account set up, and to be the sponsor. >> >> This RFE is a combination of enabling disabled clang warnings and getting jdk10 to build on OSX 10 and Xcode 8. At least one enabled warning (delete-non-virtual-dtor) detected what seems to me a real potential bug, with the rest enforcing good code hygiene. >> >> These changes are only in OpenJDK, so I'm looking for a volunteer to make the closed changes.
>> >> Thanks, >> >> Paul >> >> >> Here are the jdk file-specific details: >> >> java_md_macosx.c splashscreen_sys.m >> >> Removed objc_registerThreadWithCollector() since it's obsolete and of questionable value in any case. >> >> NSApplicationAWT.m >> >> Use the correct NSEventMask rather than NSUInteger. >> >> jdhuff.c jdphuff.c >> >> Shifting a negative signed value is undefined. >> >> Here are the hotspot notes: >> >> Here are the lists of files affected by enabling a given warning: >> >> switch: all of these are lack of a default clause >> >> c1_LIRAssembler_x86.cpp c1_LIRGenerator_x86.cpp c1_LinearScan_x86.hpp >> jniFastGetField_x86_64.cpp assembler.cpp c1_Canonicalizer.cpp >> c1_GraphBuilder.cpp c1_Instruction.cpp c1_LIR.cpp c1_LIRGenerator.cpp >> c1_LinearScan.cpp c1_ValueStack.hpp c1_ValueType.cpp >> bcEscapeAnalyzer.cpp ciArray.cpp ciEnv.cpp ciInstance.cpp ciMethod.cpp >> ciMethodBlocks.cpp ciMethodData.cpp ciTypeFlow.cpp >> compiledMethod.cpp dependencies.cpp nmethod.cpp compileTask.hpp >> heapRegionType.cpp abstractInterpreter.cpp bytecodes.cpp >> invocationCounter.cpp linkResolver.cpp rewriter.cpp jvmciCompilerToVM.cpp >> jvmciEnv.cpp universe.cpp cpCache.cpp generateOopMap.cpp >> method.cpp methodData.cpp compile.cpp connode.cpp gcm.cpp graphKit.cpp >> ifnode.cpp library_call.cpp memnode.cpp parse1.cpp >> parse2.cpp phaseX.cpp superword.cpp type.cpp vectornode.cpp >> jvmtiClassFileReconstituter.cpp jvmtiEnter.xsl jvmtiEventController.cpp >> jvmtiImpl.cpp jvmtiRedefineClasses.cpp methodComparator.cpp methodHandles.cpp >> advancedThresholdPolicy.cpp reflection.cpp relocator.cpp sharedRuntime.cpp >> simpleThresholdPolicy.cpp writeableFlags.cpp globalDefinitions.hpp >> >> delete-non-virtual-dtor: these may be real latent bugs due to possible failure to execute destructor(s) >> >> decoder_aix.hpp decoder_machO.hpp classLoader.hpp g1RootClosures.hpp >> jvmtiImpl.hpp perfData.hpp decoder.hpp decoder_elf.hpp >> >> dynamic-class-memaccess: obscure use of memcpy
>> >> method.cpp >> >> empty-body: ';' isn't good enough for clang, it prefers {} >> >> objectMonitor.cpp mallocSiteTable.cpp >> >> format: matches printf format strings against arguments. debug output will be affected by >> incorrect code changes to these. >> >> macroAssembler_x86.cpp os_bsd.cpp os_bsd_x86.cpp ciMethodData.cpp javaClasses.cpp >> debugInfo.cpp logFileOutput.cpp constantPool.cpp jvmtiEnter.xsl jvmtiRedefineClasses.cpp >> safepoint.cpp thread.cpp >> >> logical-op-parentheses: can be tricky to get correct. There are a few very long-winded predicates. >> >> nativeInst_x86.hpp archDesc.cpp output_c.cpp output_h.cpp c1_GraphBuilder.cpp >> c1_LIRGenerator.cpp c1_LinearScan.cpp bcEscapeAnalyzer.cpp ciMethod.cpp >> stackMapTableFormat.hpp compressedStream.cpp dependencies.cpp heapRegion.cpp >> ptrQueue.cpp psPromotionManager.cpp jvmciCompilerToVM.cpp cfgnode.cpp >> chaitin.cpp compile.cpp compile.hpp escape.cpp graphKit.cpp lcm.cpp >> loopTransform.cpp loopnode.cpp loopopts.cpp macro.cpp memnode.cpp >> output.cpp parse1.cpp parseHelper.cpp reg_split.cpp superword.cpp >> superword.hpp jniCheck.cpp jvmtiEventController.cpp arguments.cpp >> javaCalls.cpp sharedRuntime.cpp >> >> parentheses >> >> adlparse.cpp >> >> parentheses-equality >> >> output_c.cpp javaAssertions.cpp gcm.cpp >> >> File-specific details: >> >> GensrcAdlc.gmk CompileJvm.gmk >> Left tautological-compare in place to allow null 'this' pointer checks in methods >> intended to be called from a debugger. >> >> CompileGTest.gmk >> Just an enhanced comment. >> >> MacosxDebuggerLocal.m >> PT_ATTACH has been replaced by PT_ATTACHEXC >> >> ciMethodData.cpp >> " 0x%" FORMAT64_MODIFIER "x" reduces to "0x%llx", whereas >> " " INTPTRNZ_FORMAT reduces to "0x%lx", which latter is what clang wants. >> >> generateOopMap.cpp >> Refactored duplicate code in print_current_state().
>> >> binaryTreeDictionary.cpp/hpp, hashtable.cpp/hpp >> These provoked 'instantiation of variable required here, >> but no definition is available'. >> >> globalDefinitions_gcc.hpp >> Define FORMAT64_MODIFIER properly for Apple, needed by os.cpp. >> >> globalDefinitions.hpp >> Add INTPTRNZ_FORMAT, needed by ciMethodData.cpp. > > > > > From alexander.harlap at oracle.com Thu Jun 29 18:08:40 2017 From: alexander.harlap at oracle.com (Alexander Harlap) Date: Thu, 29 Jun 2017 14:08:40 -0400 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: <64280D64-9F9B-42BE-B348-0F1449DFEAC6@oracle.com> References: <73e5c873-1ddd-afd7-b958-0367b9376f1d@oracle.com> <5bda3da9-18fd-d00a-cc1f-19bf36ff3709@oracle.com> <64280D64-9F9B-42BE-B348-0F1449DFEAC6@oracle.com> Message-ID: Here is the new version: http://cr.openjdk.java.net/~aharlap/8178507/webrev.04/ I made the changes recommended by Leonid: vm.debug for debug-only options (very good!) and split TestMemoryInitialization into two separate tests. Alex On 6/28/2017 5:35 PM, Leonid Mesnik wrote: > >> On Jun 28, 2017, at 11:27 AM, Alexander Harlap >> > wrote: >> >> Hi Leonid and Igor, >> >> It looks like we need an extra round of review: >> >> New version is here: >> http://cr.openjdk.java.net/~aharlap/8178507/webrev.03/ >> >> Two issues: >> >> 1. TestFullGCALot.java - it may take too long. So I added option >> -XX:+FullGCALotInterval=120 to make sure we do not hit timeout and do >> not slow down testing, also -XX:+IgnoreUnrecognizedVMOptions - do not >> fail in product mode >> > It would be better to use requires to skip test for product bits. > (vm.debug) >> >> 2. TestMemoryInitialization.java - feature to initialize debug memory >> to some special words currently is supported only for CMS and Serial >> gc.
So I modified the test to run now only for these gc's: >> >> * @requires vm.gc.Serial | vm.gc.ConcMarkSweep >> * @summary Simple test for -XX:+CheckMemoryInitialization doesn't >> crash VM >> * @run main/othervm -XX:+UseSerialGC >> -XX:+IgnoreUnrecognizedVMOptions -XX:+CheckMemoryInitialization >> TestMemoryInitialization >> * @run main/othervm -XX:+UseConcMarkSweepGC >> -XX:+IgnoreUnrecognizedVMOptions -XX:+CheckMemoryInitialization >> TestMemoryInitialization >> > Test tries to run VM 2 times with Serial/CMS GC regardless of which of > them is supported or/and set. So test fails if CMS is not supported. > If any GC is set explicitly it should fail with > unsupported GC combinations. > > It would be better to split the test into 2 single tests > TestMemoryInitializationSerialGC & TestMemoryInitializationCMSGC which > share java code. Also CMS has been deprecated in JDK9 so I don't know > if it makes sense to test it in JDK10. > > Leonid >> >> I will add an enhancement request to support CheckMemoryInitialization >> flag in G1. >> >> Alex >> >> On 6/26/2017 7:04 PM, Igor Ignatyev wrote: >>> >>>> On Jun 26, 2017, at 2:54 PM, Alexander Harlap >>>> > >>>> wrote: >>>> >>>> Thank you Igor and Leonid, >>>> >>>> I fixed mentioned typos and unnecessary return (see >>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.02/) >>> perfect. >>>> >>>> Do I need more reviews? >>> no, you can go ahead and integrate it. >>> >>> -- Igor >>>> >>>> Alex >>>> >>>> >>>> On 6/26/2017 4:32 PM, Igor Ignatyev wrote: >>>>> Hi Alexander, >>>>> >>>>> besides the small nits which Leonid mentioned, there is one in >>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html: >>>>> >>>>> >>>>>> 28 * @summary Test verifies only that VM doesn???t crash but throw expected Error. >>>>> I guess "doesn???t" is 'doesn't' w/ a fancy apostrophe. otherwise >>>>> looks good to me, Reviewed.
>>>>> >>>>> -- Igor >>>>> >>>>>> On Jun 26, 2017, at 1:11 PM, Leonid Mesnik >>>>>> > wrote: >>>>>> >>>>>> Hi >>>>>> >>>>>> New changes look good to me. Please get review from Reviewer. >>>>>> >>>>>> The only 2 small nits which don't require separate review from me: >>>>>> >>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestFullGCALot.java.html >>>>>> >>>>>> typo in >>>>>> 37 System.out.println("Hellow world!"); >>>>>> >>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html >>>>>> >>>>>> return is not needed in >>>>>> 58 return; >>>>>> >>>>>> Thanks >>>>>> Leonid >>>>>>> On Jun 26, 2017, at 1:04 PM, Alexander Harlap >>>>>>> > wrote: >>>>>>> >>>>>>> Hi Leonid, >>>>>>> >>>>>>> I accommodated your suggestions. >>>>>>> >>>>>>> New version of changeset located at >>>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/ >>>>>>> >>>>>>> Alex >>>>>>> >>>>>>> >>>>>>> On 6/23/2017 6:18 PM, Leonid Mesnik wrote: >>>>>>>> Hi >>>>>>>> >>>>>>>> Basically the changes look good. Below are some comments: >>>>>>>> >>>>>>>>> On Jun 22, 2017, at 9:16 AM, Alexander Harlap >>>>>>>>> > wrote: >>>>>>>>> >>>>>>>>> Please review change for JDK-8178507 >>>>>>>>> - co-locate >>>>>>>>> nsk.regression.gc tests >>>>>>>>> >>>>>>>>> JDK-8178507 >>>>>>>>> is last remaining sub-task of JDK-8178482 >>>>>>>>> - Co-locate >>>>>>>>> remaining GC tests >>>>>>>>> >>>>>>>>> Proposed change located at >>>>>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ >>>>>>>>> >>>>>>>>> Co-located and converted to JTREG tests are: >>>>>>>>> >>>>>>>>> nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java >>>>>>>> The out variable is not used and return code is not checked in >>>>>>>> method 'run'. Wouldn't it be simpler just to move println into >>>>>>>> main and remove method 'run' completely?
>>>>>>>>> nsk/regression/b4396719 => >>>>>>>>> hotspot/test/gc/TestStackOverflow.java >>>>>>>> The method 'run' always returns 0. It would be better to make >>>>>>>> it void or just remove it. Test never throws any exception. So >>>>>>>> it makes sense to write in comments that test verifies only >>>>>>>> that VM doesn't crash but throws the expected Error. >>>>>>>> >>>>>>>>> nsk/regression/b4668531 => >>>>>>>>> hotspot/test/gc/TestMemoryInitialization.java >>>>>>>> The variable buffer is 'read-only'. It makes sense to make the >>>>>>>> variable 'buffer' a public static member of class >>>>>>>> TestMemoryInitialization, so the compiler could not optimize its >>>>>>>> usage during any optimization like escape analysis. >>>>>>>>> nsk/regression/b6186200 => >>>>>>>>> hotspot/test/gc/cslocker/TestCSLocker.java >>>>>>>>> >>>>>>>> Port looks good. It seems that the test doesn't verify that the lock >>>>>>>> really happened. Could this be improved as a part of this fix >>>>>>>> or by filing a separate RFE? >>>>>>>> >>>>>>>> Leonid >>>>>>>>> Thank you, >>>>>>>>> >>>>>>>>> Alex >>>>>>>>> >>>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > From kim.barrett at oracle.com Thu Jun 29 22:04:42 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 29 Jun 2017 18:04:42 -0400 Subject: RFR: 8178495: Bug in the align_size_up_ macro In-Reply-To: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> References: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> Message-ID: > On Jun 28, 2017, at 11:08 AM, Stefan Karlsson wrote: > > Hi all, > > Please review this patch to fix a bug in the align_size_up_ macro.
> > http://cr.openjdk.java.net/~stefank/8178495/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8178495 ------------------------------------------------------------------------------ src/share/vm/utilities/globalDefinitions.hpp 514 #define widen_to_type_of(what, type_carrier) ((what) | ((type_carrier) & 0)) I think a better form of widen_to_type_of is the following: #define widen_to_type_of(what, type_carrier) (true ? (what) : (type_carrier)) The difference is that this just promotes as needed, and will never execute any part of type_carrier. The definition in the webrev may not be able to completely optimize away the carrier expression under some conditions. ------------------------------------------------------------------------------ test/native/utilities/test_align.cpp 35 struct TypeInfo : AllStatic { 36 static const bool is_unsigned = T(-1) > T(0); 37 static const T max = is_unsigned We recently (JDK-8181318) made it possible to #include and use std::numeric_limits, so you could get is_signed and max() from there. max_alignment could then just be a static helper function template. Also here: 54 static const intptr_t max_intptr = (intptr_t)max_intx; ------------------------------------------------------------------------------ test/native/utilities/test_align.cpp 25 #include "logging/log.hpp" and 44 const static bool logging_enabled = false; 45 #define log(...) \ ... I'm guessing the logging macro is because logging/log stuff isn't initialized yet when executing a TEST (as opposed to a TEST_VM)? So the #include is superfluous? ------------------------------------------------------------------------------ test/native/utilities/test_align.cpp 56 template 57 static void test_alignments() { ... and 117 TEST(Align, functions_and_macros) { ... calls to test_alignments for various types. I keep meaning to find out if we support Google Test's "Typed Tests" [1], and if not, try to figure out how hard it would be to change that. 
[1] https://github.com/google/googletest/blob/master/googletest/docs/AdvancedGuide.md Search for "Typed Tests". ------------------------------------------------------------------------------ test/native/utilities/test_align.cpp 79 ASSERT_EQ(align_size_up(value, alignment), (intptr_t)up); and elsewhere The Google Test docs suggest that the expected value be first, and the value being tested be second, and that the failure reporting assumes that order when printing a failure message. Hm, I see the Google Test docs have been changed since I read that, and now say Historical note: Before February 2016 *_EQ had a convention of calling it as ASSERT_EQ(expected, actual), so lots of existing code uses this order. Now *_EQ treats both parameters in the same way. I wonder how old our Google Test code snapshot might be. ------------------------------------------------------------------------------ From kim.barrett at oracle.com Thu Jun 29 22:46:16 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 29 Jun 2017 18:46:16 -0400 Subject: RFR: 8178495: Bug in the align_size_up_ macro In-Reply-To: References: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> Message-ID: <2D3D3981-4349-4D62-B92C-672A57C5764E@oracle.com> > On Jun 29, 2017, at 6:04 PM, Kim Barrett wrote: > test/native/utilities/test_align.cpp > 56 template > 57 static void test_alignments() { > ... > and > 117 TEST(Align, functions_and_macros) { > ... calls to test_alignments for various types. > > I keep meaning to find out if we support Google Test's "Typed Tests" [1], > and if not, try to figure out how hard it would be to change that. > > [1] https://github.com/google/googletest/blob/master/googletest/docs/AdvancedGuide.md > Search for "Typed Tests". Without Typed Tests, the SCOPED_TRACE macro might be useful to improve the reporting, and might eliminate the need for some of the logging.
From kim.barrett at oracle.com Fri Jun 30 00:01:30 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 29 Jun 2017 20:01:30 -0400 Subject: RFR (S) 8183229: Implement WindowsSemaphore::trywait In-Reply-To: References: Message-ID: <4FC8F3FB-B5ED-4618-BF89-9D46A1C56560@oracle.com> > On Jun 29, 2017, at 8:50 AM, Mikael Gerdin wrote: > > Hi all, > > To help with some upcoming patches I'd like to add the trywait operation to the platform-independent Semaphore class. It's currently lacking a Windows-implementation. > > Testing: JPRT testing on Windows platforms > Webrev: http://cr.openjdk.java.net/~mgerdin/8183229/webrev.0/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8183229 > > Thanks > /Mikael Code looks good, but needs a test for the new function in hotspot/test/native/runtime/test_semaphore.cpp. From leonid.mesnik at oracle.com Fri Jun 30 03:24:32 2017 From: leonid.mesnik at oracle.com (Leonid Mesnik) Date: Thu, 29 Jun 2017 20:24:32 -0700 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: References: <73e5c873-1ddd-afd7-b958-0367b9376f1d@oracle.com> <5bda3da9-18fd-d00a-cc1f-19bf36ff3709@oracle.com> <64280D64-9F9B-42BE-B348-0F1449DFEAC6@oracle.com> Message-ID: <2D5E673A-2CC7-481E-A0F3-E97AA59272DE@oracle.com> Hi The changes looks good. Leonid > On Jun 29, 2017, at 11:08 AM, Alexander Harlap wrote: > > Here is new version: http://cr.openjdk.java.net/~aharlap/8178507/webrev.04/ > I made changes , recommended by Leonid: vm.debug for debug-only options (very good!) > and spitted TestMemoryInitialization into two separate tests. > Alex > > > On 6/28/2017 5:35 PM, Leonid Mesnik wrote: >> >>> On Jun 28, 2017, at 11:27 AM, Alexander Harlap > wrote: >>> >>> Hi Leonid and Igor, >>> >>> It looks like we need extra round of review: >>> >>> New version is here: http://cr.openjdk.java.net/~aharlap/8178507/webrev.03/ >>> Two issues: >>> >>> 1. TestFullGCALot.java - it may take too long. 
So I added option -XX:+FullGCALotInterval=120 to make sure we do not hit timeout and do not slow down testing, also -XX:+IgnoreUnrecognizedVMOptions - do not fail in product mode >> It would be better to use requires to skip test for product bits. (vm.debug) >>> >>> 2. TestMemoryInitiazation.java - feature to initialize debug memory to some special words currently is supported only for CMS and Serial gc. So I modified Test to run now only for these gc's: >>> >>> * @requires vm.gc.Serial | vm.gc.ConcMarkSweep >>> * @summary Simple test for -XX:+CheckMemoryInitialization doesn't crash VM >>> * @run main/othervm -XX:+UseSerialGC -XX:+IgnoreUnrecognizedVMOptions -XX:+CheckMemoryInitialization TestMemoryInitialization >>> * @run main/othervm -XX:+UseConcMarkSweepGC -XX:+IgnoreUnrecognizedVMOptions -XX:+CheckMemoryInitialization TestMemoryInitialization >> Test tries to run VM 2 times with Serial/CMS GC in the case if any of them is supported or/and set. So test fails if CMS is not supported. In the case if any of GC is set explicitly it should fail with unsupported GC combinations. >> >> The better would be to split test into 2 single tests TestMemoryInitializationSerialGC & TestMemoryInitializationCMSGC which shares java code. Also CMS has been deprecated in JDK9 so I don?t know it make a sense to test it JDK10. >> >> Leonid >>> >>> I will add enhancement request to support CheckMemoryInitialization flag in G1. >>> >>> Alex >>> On 6/26/2017 7:04 PM, Igor Ignatyev wrote: >>>> >>>>> On Jun 26, 2017, at 2:54 PM, Alexander Harlap > wrote: >>>>> >>>>> Thank you Igor and Leonid, >>>>> >>>>> I fixed mentioned typos and unnecessary return (see http://cr.openjdk.java.net/~aharlap/8178507/webrev.02/ ) >>>>> >>>> perfect. >>>>> Do I need more reviews? >>>>> >>>> no, you can go ahead and integrate it. 
>>>> >>>> -- Igor >>>>> Alex >>>>> >>>>> On 6/26/2017 4:32 PM, Igor Ignatyev wrote: >>>>>> Hi Alexander, >>>>>> >>>>>> besides the small nits which Leonid mentioned, there is one in http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html: >>>>>>> 28 * @summary Test verifies only that VM doesn???t crash but throw expected Error. >>>>>> I guess "doesn???t" is 'doesn't' w/ a fancy apostrophe. otherwise looks good to me, Reviewed. >>>>>> >>>>>> -- Igor >>>>>> >>>>>>> On Jun 26, 2017, at 1:11 PM, Leonid Mesnik > wrote: >>>>>>> >>>>>>> Hi >>>>>>> >>>>>>> New changes looks good for me. Please get review from Reviewer. >>>>>>> >>>>>>> The only 2 small nits which don?t require separate review from me: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestFullGCALot.java.html > >>>>>>> typo in >>>>>>> 37 System.out.println("Hellow world!"); >>>>>>> >>>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html > >>>>>>> return is not needed in >>>>>>> 58 return; >>>>>>> >>>>>>> Thanks >>>>>>> Leonid >>>>>>>> On Jun 26, 2017, at 1:04 PM, Alexander Harlap > wrote: >>>>>>>> >>>>>>>> Hi Leonid, >>>>>>>> >>>>>>>> I accommodated your suggestions. >>>>>>>> >>>>>>>> New version of changeset located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/ >>>>>>>> >>>>>>>> >>>>>>>> Alex >>>>>>>> >>>>>>>> >>>>>>>> On 6/23/2017 6:18 PM, Leonid Mesnik wrote: >>>>>>>>> Hi >>>>>>>>> >>>>>>>>> Basically changes looks good. 
Below are some comments: >>>>>>>>> >>>>>>>>>> On Jun 22, 2017, at 9:16 AM, Alexander Harlap > wrote: >>>>>>>>>> >>>>>>>>>> Please review change for JDK-8178507 > - co-locate nsk.regression.gc tests >>>>>>>>>> >>>>>>>>>> JDK-8178507 > is last remaining sub-task ofJDK-8178482 > - Co-locate remaining GC tests >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Proposed change located at http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ >>>>>>>>>> >>>>>>>>>> Co-located and converted to JTREG tests are: >>>>>>>>>> >>>>>>>>>> nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java >>>>>>>>> The out variable is no used and return code is not checked in method ?run?. Wouldn't it simpler just to move println into main and remove method ?run? completely? >>>>>>>>>> nsk/regression/b4396719 => hotspot/test/gc/TestStackOverflow.java >>>>>>>>> The method ?run? always returns 0. It would be better to make it void or just remove it. Test never throws any exception. So it make a sense to write in comments that test verifies only that VM doesn?t crash but throw expected Error. >>>>>>>>> >>>>>>>>>> nsk/regression/b4668531 => hotspot/test/gc/TestMemoryInitialization.java >>>>>>>>> The variable buffer is ?read-only?. It make a sense to make variable ?buffer' public static member of class TestMemoryInitialization. So compiler could not optimize it usage during any optimization like escape analysis. >>>>>>>>>> nsk/regression/b6186200 => hotspot/test/gc/cslocker/TestCSLocker.java >>>>>>>>>> >>>>>>>>> Port looks good. It seems that test doesn?t verify that lock really happened. Could be this improved as a part of this fix or by filing separate RFE? 
>>>>>>>>> >>>>>>>>> Leonid >>>>>>>>>> Thank you, >>>>>>>>>> >>>>>>>>>> Alex >>>>>>>>>> >>>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > From stefan.karlsson at oracle.com Fri Jun 30 08:06:43 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 30 Jun 2017 10:06:43 +0200 Subject: RFR: 8178495: Bug in the align_size_up_ macro In-Reply-To: References: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> Message-ID: <7aea18aa-7aab-8f80-72f5-b8d4304a829d@oracle.com> Hi Kim, Updated webrevs: http://cr.openjdk.java.net/~stefank/8178495/webrev.01 http://cr.openjdk.java.net/~stefank/8178495/webrev.01.delta Inlined: On 2017-06-30 00:04, Kim Barrett wrote: >> On Jun 28, 2017, at 11:08 AM, Stefan Karlsson wrote: >> >> Hi all, >> >> Please review this patch to fix a bug in the align_size_up_ macro. >> >> http://cr.openjdk.java.net/~stefank/8178495/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8178495 > ------------------------------------------------------------------------------ > src/share/vm/utilities/globalDefinitions.hpp > 514 #define widen_to_type_of(what, type_carrier) ((what) | ((type_carrier) & 0)) > > I think a better form of widen_to_type_of is the following: > > #define widen_to_type_of(what, type_carrier) (true ? (what) : (type_carrier)) > > The difference is that this just promotes as needed, and will never > execute any part of type_carrier. The definition in the webrev may not > be able to completely optimize away the carrier expression under some > conditions. Thanks for the suggestion. > > ------------------------------------------------------------------------------ > test/native/utilities/test_align.cpp > 35 struct TypeInfo : AllStatic { > 36 static const bool is_unsigned = T(-1) > T(0); > 37 static const T max = is_unsigned > > We recently (JDK-8181318) made it possible to #include and > use std::numeric_limits, so you could get is_signed and max() from > there. 
max_alignment could then just be a static helper function > template. > > Also here: > 54 static const intptr_t max_intptr = (intptr_t)max_intx; Done. > > ------------------------------------------------------------------------------ > test/native/utilities/test_align.cpp > 25 #include "logging/log.hpp" > and > 44 const static bool logging_enabled = false; > 45 #define log(...) \ > ... > > I'm guessing the logging macro is because logging/log stuff isn't > initialized yet when executing a TEST (as opposed to a TEST_VM)? So > the #include is superfluous? You are right. I changed the implementation to use SCOPED_TRACE, as you suggested in another mail. Thanks, StefanK From erik.helin at oracle.com Fri Jun 30 08:35:17 2017 From: erik.helin at oracle.com (Erik Helin) Date: Fri, 30 Jun 2017 10:35:17 +0200 Subject: RFR(XL): 8182299: Enable disabled clang warnings, build on OSX 10 + Xcode 8 In-Reply-To: <435C96F2-417E-4A47-8649-28E34E2908AE@amazon.com> References: <27FD0413-52BC-42E6-A5B0-3C92A49A2D6F@amazon.com> <1CBD62A9-9B1B-4B05-AAF9-BE2D52DE8C79@amazon.com> <6B296ABE-66C0-4C32-AC4E-8674BE103514@oracle.com> <3ACE75CF-CEA6-4081-952A-BBA138763582@amazon.com> <435C96F2-417E-4A47-8649-28E34E2908AE@amazon.com> Message-ID: <1d06f5fe-c3f8-f599-58bc-09b2099c819c@oracle.com> Hi Paul, thanks for contributing! Please see my comments regarding the GC changes below. On 06/29/2017 03:31 PM, Hohensee, Paul wrote: > I now have access to cr.openjdk.java.net, so the latest webrevs are at > > http://cr.openjdk.java.net/~phh/8182299/webrev_jdk.00/ > http://cr.openjdk.java.net/~phh/8182299/webrev_hotspot.00/ gc/g1/g1RootClosures.cpp: - // Closures to process raw oops in the root set. + virtual ~G1RootClosures() {} + +// Closures to process raw oops in the root set. I assume this is added because there is some warning about having only pure virtual methods but not having a virtual destructor. 
None of the classes inheriting from G1RootClosures needs a destructor, nor does G1RootClosures itself (it is just an "interface"). So there is no problem with correctness here :) However, I like to have lots of warnings enabled, so for me it is fine to an empty virtual destructor here just to please the compiler. gc/g1/heapRegion.cpp: looks correct gc/g1/heapRegionType.cpp: The indentation seems a bit funky to me here :) Also, have you compiled this on some other platforms? I think the last return is outside of the switch just to, as the comment says, "keep some compilers happy". Would clang be ok with having an empty default clause with just a break? And then fall through to the return outside of the switch? gc/g1/ptrQueue.cpp: looks correct gc/parallel/psPromotionManager.cpp: looks correct Thanks, Erik > On 6/28/17, 3:50 PM, "hotspot-dev on behalf of Hohensee, Paul" wrote: > > Thanks for the review, Jesper. > > New webrev sent, has only a change to nativeInst_x86.cpp. > > In nativeInst_x86.cpp, I formatted the expression so I could easily understand it, but you?re right, not everyone does it that way (maybe only me!), so I?ve changed it to > > if (((ubyte_at(0) & NativeTstRegMem::instruction_rex_prefix_mask) == NativeTstRegMem::instruction_rex_prefix && > ubyte_at(1) == NativeTstRegMem::instruction_code_memXregl && > (ubyte_at(2) & NativeTstRegMem::modrm_mask) == NativeTstRegMem::modrm_reg) || > (ubyte_at(0) == NativeTstRegMem::instruction_code_memXregl && > (ubyte_at(1) & NativeTstRegMem::modrm_mask) == NativeTstRegMem::modrm_reg)) { > > In graphKit.cpp, the old code was > > #ifdef ASSERT > case Deoptimization::Action_none: > case Deoptimization::Action_make_not_compilable: > break; > default: > fatal("unknown action %d: %s", action, Deoptimization::trap_action_name(action)); > break; > #endif > > In the non-ASSERT case, the compiler complained about the lack of Action_none, Action_make_not_compilable and default. 
If the warning had been turned off, the result would have been ?break;? for all three. In the ASSERT case, Action_none and Action_make_not_compilable result in ?break;?, and in the default case ?fatal(); break;? > > The new code is > > case Deoptimization::Action_none: > case Deoptimization::Action_make_not_compilable: > break; > default: > #ifdef ASSERT > fatal("unknown action %d: %s", action, Deoptimization::trap_action_name(action)); > #endif > break; > > The compiler doesn?t complain about Action_none, Action_make_not_compilable or default anymore. In the non-ASSERT case, the result is ?break;? for all three, same as for the old code. In the ASSERT case, Action_none and Action_make_not_compilable result in ?break;?, and in the default case ?fatal(); break;?, again same as for the old code. > > Thanks, > > Paul > > On 6/28/17, 10:36 AM, "jesper.wilhelmsson at oracle.com" wrote: > > Hi Paul, > > Thanks for doing this change! In general everything looks really good, there are a lot of really nice cleanups here. I just have two minor questions/nits: > > * In hotspot/cpu/x86/vm/nativeInst_x86.hpp it seems the expression already have parenthesis around the & operations and the change here is "only" cleaning up the layout of the code which is not a bad thing in it self, but you move the logical operators to the beginning of each line which is a quite different style than the rest of the code in the same function where the operators are at the end of the line. > > * In hotspot/share/vm/opto/graphKit.cpp you moved the #ifdef ASSERT so that Action_none and Action_make_not_compilable are available also when ASSERT is not defined. I don't see this mentioned in your description of the change. Was this change intentional? > > Thanks, > /Jesper > > > > On 27 Jun 2017, at 21:34, Hohensee, Paul wrote: > > > > An attempt at better formatting. 
> > > > https://bugs.openjdk.java.net/browse/JDK-8182299 > > http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_jdk.00/ > > http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_hotspot.00/ > > > > Jesper has been kind enough to host the webrevs while I get my cr.openjdk.net account set up, and to be the sponsor. > > > > This rfe a combination of enabling disabled clang warnings and getting jdk10 to build on OSX 10 and Xcode 8. At least one enabled warning (delete-non-virtual-dtor) detected what seems to me a real potential bug, with the rest enforcing good code hygiene. > > > > These changes are only in OpenJDK, so I?m looking for a volunteer to make the closed changes. > > > > Thanks, > > > > Paul > > > > > > Here are the jdk file-specific details: > > > > java_md_macosx.c splashscreen_sys.m > > > > Removed objc_registerThreadWithCollector() since it's obsolete and of questionable value in any case. > > > > NSApplicationAWT.m > > > > Use the correct NSEventMask rather than NSUInteger. > > > > jdhuff.c jdphuff.c > > > > Shifting a negative signed value is undefined. 
> > > > Here are the hotspot notes: > > > > Here are the lists of files affected by enabling a given warning: > > > > switch: all of these are lack of a default clause > > > > c1_LIRAssembler_x86.cpp c1_LIRGenerator_x86.cpp c1_LinearScan_x86.hpp > > jniFastGetField_x86_64.cpp assembler.cpp c1_Canonicalizer.cpp > > c1_GraphBuilder.cpp c1_Instruction.cpp c1_LIR.cpp c1_LIRGenerator.cpp > > c1_LinearScan.cpp c1_ValueStack.hpp c1_ValueType.cpp > > bcEscapeAnalyzer.cpp ciArray.cpp ciEnv.cpp ciInstance.cpp ciMethod.cpp > > ciMethodBlocks.cpp ciMethodData.cpp ciTypeFlow.cpp > > compiledMethod.cpp dependencies.cpp nmethod.cpp compileTask.hpp > > heapRegionType.cpp abstractInterpreter.cpp bytecodes.cpp > > invocationCounter.cpp linkResolver.cpp rewriter.cpp jvmciCompilerToVM.cpp > > jvmciEnv.cpp universe.cpp cpCache.cpp generateOopMap.cpp > > method.cpp methodData.cpp compile.cpp connode.cpp gcm.cpp graphKit.cpp > > ifnode.cpp library_call.cpp memnode.cpp parse1.cpp > > parse2.cpp phaseX.cpp superword.cpp type.cpp vectornode.cpp > > jvmtiClassFileReconstituter.cpp jvmtiEnter.xsl jvmtiEventController.cpp > > jvmtiImpl.cpp jvmtiRedefineClasses.cpp methodComparator.cpp methodHandles.cpp > > advancedThresholdPolicy.cpp reflection.cpp relocator.cpp sharedRuntime.cpp > > simpleThresholdPolicy.cpp writeableFlags.cpp globalDefinitions.hpp > > > > delete-non-virtual-dtor: these may be real latent bugs due to possible failure to execute destructor(s) > > > > decoder_aix.hpp decoder_machO.hpp classLoader.hpp g1RootClosures.hpp > > jvmtiImpl.hpp perfData.hpp decoder.hpp decoder_elf.hpp > > > > dynamic-class-memaccess: obscure use of memcpy > > > > method.cpp > > > > empty-body: ?;? isn?t good enough for clang, it prefers {} > > > > objectMonitor.cpp mallocSiteTable.cpp > > > > format: matches printf format strings against arguments. debug output will be affected by > > incorrect code changes to these. 
> > > > macroAssembler_x86.cpp os_bsd.cpp os_bsd_x86.cpp ciMethodData.cpp javaClasses.cpp > > debugInfo.cpp logFileOutput.cpp constantPool.cpp jvmtiEnter.xsl jvmtiRedefineClasses.cpp > > safepoint.cpp thread.cpp > > > > logical-op-parentheses: can be tricky to get correct. There are a few very long-winded predicates. > > > > nativeInst_x86.hpp archDesc.cpp output_c.cpp output_h.cpp c1_GraphBuilder.cpp > > c1_LIRGenerator.cpp c1_LinearScan.cpp bcEscapeAnalyzer.cpp ciMethod.cpp > > stackMapTableFormat.hpp compressedStream.cpp dependencies.cpp heapRegion.cpp > > ptrQueue.cpp psPromotionManager.cpp jvmciCompilerToVM.cpp cfgnode.cpp > > chaitin.cpp compile.cpp compile.hpp escape.cpp graphKit.cpp lcm.cpp > > loopTransform.cpp loopnode.cpp loopopts.cpp macro.cpp memnode.cpp > > output.cpp parse1.cpp parseHelper.cpp reg_split.cpp superword.cpp > > superword.hpp jniCheck.cpp jvmtiEventController.cpp arguments.cpp > > javaCalls.cpp sharedRuntime.cpp > > > > parentheses > > > > adlparse.cpp > > > > parentheses-equality > > > > output_c.cpp javaAssertions.cpp gcm.cpp > > > > File-specific details: > > > > GensrcAdlc.gmk CompileJvm.gmk > > Left tautological-compare in place to allow null 'this' pointer checks in methods > > intended to be called from a debugger. > > > > CompileGTest.gmk > > Just an enhanced comment. > > > > MacosxDebuggerLocal.m > > PT_ATTACH has been replaced by PT_ATTACHEXC > > > > ciMethodData.cp > > " 0x%" FORMAT64_MODIFIER "x" reduces to "0x%llx", whereas > > " " INTPTRNZ_FORMAT reduces to "0x%lx", which latter is what clang want. > > > > generateOopMap.cpp > > Refactored duplicate code in print_current_state(). > > > > binaryTreeDictionary.cpp/hpp, hashtable.cpp/hpp > > These provoked ?instantiation of variable required here, > > but no definition is available?. > > > > globalDefinitions_gcc.hpp > > Define FORMAT64_MODIFIER properly for Apple, needed by os.cpp. > > > > globalDefinitions.hpp > > Add INTPTRNZ_FORMAT, needed by ciMethodData.cpp. 
> > > > > From mikael.gerdin at oracle.com Fri Jun 30 08:40:15 2017 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Fri, 30 Jun 2017 10:40:15 +0200 Subject: RFR (S) 8183203: Remove stubRoutines_os In-Reply-To: <1498729924.2900.8.camel@oracle.com> References: <8ac671a6-1c48-a786-c47d-ecc677dadc34@oracle.com> <1498729924.2900.8.camel@oracle.com> Message-ID: <7a999767-062b-ed88-72fb-99b4c9159d4f@oracle.com> Hi Thomas, Thanks for the review. /Mikael On 2017-06-29 11:52, Thomas Schatzl wrote: > Hi, > > On Thu, 2017-06-29 at 11:47 +0200, Mikael Gerdin wrote: >> Hi all, >> >> Please review this change to remove stubRoutines_.cpp. >> They have been empty for ages, the Linux and Windows ones have never >> had >> a single source line besides #includes since their creation almost >> 16 >> years ago and the Solaris one has been empty for almost 15 years. >> >> Testing: JPRT build-only job >> Bug: https://bugs.openjdk.java.net/browse/JDK-8183203 >> Webrev: http://cr.openjdk.java.net/~mgerdin/8183203/webrev/ >> >> /Mikael > > I like that change :) Ship it before anyone wants to add something to > these files... > > Thanks, > Thomas > From mikael.gerdin at oracle.com Fri Jun 30 08:40:31 2017 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Fri, 30 Jun 2017 10:40:31 +0200 Subject: RFR (S) 8183203: Remove stubRoutines_os In-Reply-To: References: <8ac671a6-1c48-a786-c47d-ecc677dadc34@oracle.com> Message-ID: <6f2ba816-3d34-2cf9-8451-0cd3c3367fe9@oracle.com> Hi Stefan, On 2017-06-29 11:48, Stefan Karlsson wrote: > Looks awesome! ;) Thanks! :) /m > > StefanK > > On 2017-06-29 11:47, Mikael Gerdin wrote: >> Hi all, >> >> Please review this change to remove stubRoutines_.cpp. >> They have been empty for ages, the Linux and Windows ones have never >> had a single source line besides #includes since their creation almost >> 16 years ago and the Solaris one has been empty for almost 15 years. 
>> >> Testing: JPRT build-only job >> Bug: https://bugs.openjdk.java.net/browse/JDK-8183203 >> Webrev: http://cr.openjdk.java.net/~mgerdin/8183203/webrev/ >> >> /Mikael > > From stefan.karlsson at oracle.com Fri Jun 30 09:16:32 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 30 Jun 2017 11:16:32 +0200 Subject: RFR: 8178489: Make align functions more type safe and consistent Message-ID: <23768f79-d356-a7fc-833c-4932f18fc0d1@oracle.com> Hi all, Please review this patch to make the align functions more type safe and consistent. http://cr.openjdk.java.net/~stefank/8178489/webrev.00 https://bugs.openjdk.java.net/browse/JDK-8178489 Note that this patch needs to be applied on top of the following patches that are out for review: http://cr.openjdk.java.net/~stefank/8178491/webrev.02 http://cr.openjdk.java.net/~stefank/8178495/webrev.01 Currently, the align functions forces the user to often explicitly cast either the input parameters, or the return type, or both. Two examples of the current API: inline intptr_t align_size_up(intptr_t size, intptr_t alignment); inline void* align_ptr_up(const void* ptr, size_t alignment); I propose that we change the API to use templates to return the aligned value as the same type as the type of the unaligned input. The proposed API would look like this: template inline T align_size_up(T size, A alignment); template inline T* align_ptr_up(T* ptr, A alignment); and a follow-up RFE (JDK-8178499) would get rid of _size_ and _ptr_ from the names. Usages of these align functions would then look like: size_t aligned_size = align_up(alloc_size, os::vm_page_size()) HeapWord* aligned_top = align_up(top, region_size) Please, take an extra close look at the reinterpret_cast I added in atomic.hpp. This was needed because the align_ptr_down now returns T* and not void*, and the compiler complained when we tried to do a static cast from a volatile jbyte* to a volatile jint*. Tested with the align unit test and JPRT. 
Thanks, StefanK From stefan.karlsson at oracle.com Fri Jun 30 09:40:25 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 30 Jun 2017 11:40:25 +0200 Subject: RFR: 8178499: Remove _ptr_ and _size_ infixes from align functions Message-ID: Hi all, Please review this patch to remove the _ptr_ and _size_ infixes from align functions. http://cr.openjdk.java.net/~stefank/8178499/webrev.00 http://bugs.openjdk.java.net/browse/JDK-8178499 Rename functions (and corresponding macros) from: align_ptr_up align_size_up align_ptr_down align_size_down is_ptr_aligned is_size_aligned to: align_up align_down is_aligned The patch builds upon the changes in: http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-June/027328.html Thanks, StefanK From jesper.wilhelmsson at oracle.com Fri Jun 30 10:00:50 2017 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Fri, 30 Jun 2017 12:00:50 +0200 Subject: RFR(XL): 8182299: Enable disabled clang warnings, build on OSX 10 + Xcode 8 In-Reply-To: <1d06f5fe-c3f8-f599-58bc-09b2099c819c@oracle.com> References: <27FD0413-52BC-42E6-A5B0-3C92A49A2D6F@amazon.com> <1CBD62A9-9B1B-4B05-AAF9-BE2D52DE8C79@amazon.com> <6B296ABE-66C0-4C32-AC4E-8674BE103514@oracle.com> <3ACE75CF-CEA6-4081-952A-BBA138763582@amazon.com> <435C96F2-417E-4A47-8649-28E34E2908AE@amazon.com> <1d06f5fe-c3f8-f599-58bc-09b2099c819c@oracle.com> Message-ID: I ran the change through JPRT so at least it builds and runs on all platforms we support there. I'm currently fixing our closed code to build on Mac with this change. /Jesper > On 30 Jun 2017, at 10:35, Erik Helin wrote: > > Hi Paul, > > thanks for contributing! Please see my comments regarding the GC changes below. 
> > On 06/29/2017 03:31 PM, Hohensee, Paul wrote: >> I now have access to cr.openjdk.java.net, so the latest webrevs are at >> >> http://cr.openjdk.java.net/~phh/8182299/webrev_jdk.00/ >> http://cr.openjdk.java.net/~phh/8182299/webrev_hotspot.00/ > > gc/g1/g1RootClosures.cpp: > - // Closures to process raw oops in the root set. > + virtual ~G1RootClosures() {} > + > +// Closures to process raw oops in the root set. > > I assume this is added because there is some warning about having only pure virtual methods but not having a virtual destructor. None of the classes inheriting from G1RootClosures needs a destructor, nor does G1RootClosures itself (it is just an "interface"). So there is no problem with correctness here :) > > However, I like to have lots of warnings enabled, so for me it is fine to an empty virtual destructor here just to please the compiler. > > gc/g1/heapRegion.cpp: looks correct > > gc/g1/heapRegionType.cpp: > The indentation seems a bit funky to me here :) Also, have you compiled this on some other platforms? I think the last return is outside of the switch just to, as the comment says, "keep some compilers happy". > > Would clang be ok with having an empty default clause with just a break? And then fall through to the return outside of the switch? > > gc/g1/ptrQueue.cpp: looks correct > > gc/parallel/psPromotionManager.cpp: looks correct > > Thanks, > Erik > >> On 6/28/17, 3:50 PM, "hotspot-dev on behalf of Hohensee, Paul" wrote: >> >> Thanks for the review, Jesper. >> >> New webrev sent, has only a change to nativeInst_x86.cpp. 
>> >> In nativeInst_x86.cpp, I formatted the expression so I could easily understand it, but you're right, not everyone does it that way (maybe only me!), so I've changed it to >> >> if (((ubyte_at(0) & NativeTstRegMem::instruction_rex_prefix_mask) == NativeTstRegMem::instruction_rex_prefix && >> ubyte_at(1) == NativeTstRegMem::instruction_code_memXregl && >> (ubyte_at(2) & NativeTstRegMem::modrm_mask) == NativeTstRegMem::modrm_reg) || >> (ubyte_at(0) == NativeTstRegMem::instruction_code_memXregl && >> (ubyte_at(1) & NativeTstRegMem::modrm_mask) == NativeTstRegMem::modrm_reg)) { >> >> In graphKit.cpp, the old code was >> >> #ifdef ASSERT >> case Deoptimization::Action_none: >> case Deoptimization::Action_make_not_compilable: >> break; >> default: >> fatal("unknown action %d: %s", action, Deoptimization::trap_action_name(action)); >> break; >> #endif >> >> In the non-ASSERT case, the compiler complained about the lack of Action_none, Action_make_not_compilable and default. If the warning had been turned off, the result would have been "break;" for all three. In the ASSERT case, Action_none and Action_make_not_compilable result in "break;", and in the default case "fatal(); break;" >> >> The new code is >> >> case Deoptimization::Action_none: >> case Deoptimization::Action_make_not_compilable: >> break; >> default: >> #ifdef ASSERT >> fatal("unknown action %d: %s", action, Deoptimization::trap_action_name(action)); >> #endif >> break; >> >> The compiler doesn't complain about Action_none, Action_make_not_compilable or default anymore. In the non-ASSERT case, the result is "break;" for all three, same as for the old code. In the ASSERT case, Action_none and Action_make_not_compilable result in "break;", and in the default case "fatal(); break;", again same as for the old code. >> >> Thanks, >> >> Paul >> >> On 6/28/17, 10:36 AM, "jesper.wilhelmsson at oracle.com" wrote: >> >> Hi Paul, >> >> Thanks for doing this change! 
In general everything looks really good, there are a lot of really nice cleanups here. I just have two minor questions/nits: >> >> * In hotspot/cpu/x86/vm/nativeInst_x86.hpp it seems the expression already has parentheses around the & operations and the change here is "only" cleaning up the layout of the code, which is not a bad thing in itself, but you move the logical operators to the beginning of each line which is a quite different style than the rest of the code in the same function where the operators are at the end of the line. >> >> * In hotspot/share/vm/opto/graphKit.cpp you moved the #ifdef ASSERT so that Action_none and Action_make_not_compilable are available also when ASSERT is not defined. I don't see this mentioned in your description of the change. Was this change intentional? >> >> Thanks, >> /Jesper >> >> >> > On 27 Jun 2017, at 21:34, Hohensee, Paul wrote: >> > >> > An attempt at better formatting. >> > >> > https://bugs.openjdk.java.net/browse/JDK-8182299 >> > http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_jdk.00/ >> > http://cr.openjdk.java.net/~jwilhelm/8182299/webrev_hotspot.00/ >> > >> > Jesper has been kind enough to host the webrevs while I get my cr.openjdk.net account set up, and to be the sponsor. >> > >> > This RFE is a combination of enabling disabled clang warnings and getting jdk10 to build on OSX 10 and Xcode 8. At least one enabled warning (delete-non-virtual-dtor) detected what seems to me a real potential bug, with the rest enforcing good code hygiene. >> > >> > These changes are only in OpenJDK, so I'm looking for a volunteer to make the closed changes. >> > >> > Thanks, >> > >> > Paul >> > >> > >> > Here are the jdk file-specific details: >> > >> > java_md_macosx.c splashscreen_sys.m >> > >> > Removed objc_registerThreadWithCollector() since it's obsolete and of questionable value in any case. >> > >> > NSApplicationAWT.m >> > >> > Use the correct NSEventMask rather than NSUInteger. 
>> > >> > jdhuff.c jdphuff.c >> > >> > Shifting a negative signed value is undefined. >> > >> > Here are the hotspot notes: >> > >> > Here are the lists of files affected by enabling a given warning: >> > >> > switch: all of these are lack of a default clause >> > >> > c1_LIRAssembler_x86.cpp c1_LIRGenerator_x86.cpp c1_LinearScan_x86.hpp >> > jniFastGetField_x86_64.cpp assembler.cpp c1_Canonicalizer.cpp >> > c1_GraphBuilder.cpp c1_Instruction.cpp c1_LIR.cpp c1_LIRGenerator.cpp >> > c1_LinearScan.cpp c1_ValueStack.hpp c1_ValueType.cpp >> > bcEscapeAnalyzer.cpp ciArray.cpp ciEnv.cpp ciInstance.cpp ciMethod.cpp >> > ciMethodBlocks.cpp ciMethodData.cpp ciTypeFlow.cpp >> > compiledMethod.cpp dependencies.cpp nmethod.cpp compileTask.hpp >> > heapRegionType.cpp abstractInterpreter.cpp bytecodes.cpp >> > invocationCounter.cpp linkResolver.cpp rewriter.cpp jvmciCompilerToVM.cpp >> > jvmciEnv.cpp universe.cpp cpCache.cpp generateOopMap.cpp >> > method.cpp methodData.cpp compile.cpp connode.cpp gcm.cpp graphKit.cpp >> > ifnode.cpp library_call.cpp memnode.cpp parse1.cpp >> > parse2.cpp phaseX.cpp superword.cpp type.cpp vectornode.cpp >> > jvmtiClassFileReconstituter.cpp jvmtiEnter.xsl jvmtiEventController.cpp >> > jvmtiImpl.cpp jvmtiRedefineClasses.cpp methodComparator.cpp methodHandles.cpp >> > advancedThresholdPolicy.cpp reflection.cpp relocator.cpp sharedRuntime.cpp >> > simpleThresholdPolicy.cpp writeableFlags.cpp globalDefinitions.hpp >> > >> > delete-non-virtual-dtor: these may be real latent bugs due to possible failure to execute destructor(s) >> > >> > decoder_aix.hpp decoder_machO.hpp classLoader.hpp g1RootClosures.hpp >> > jvmtiImpl.hpp perfData.hpp decoder.hpp decoder_elf.hpp >> > >> > dynamic-class-memaccess: obscure use of memcpy >> > >> > method.cpp >> > >> > empty-body: ?;? isn?t good enough for clang, it prefers {} >> > >> > objectMonitor.cpp mallocSiteTable.cpp >> > >> > format: matches printf format strings against arguments. 
debug output will be affected by >> > incorrect code changes to these. >> > >> > macroAssembler_x86.cpp os_bsd.cpp os_bsd_x86.cpp ciMethodData.cpp javaClasses.cpp >> > debugInfo.cpp logFileOutput.cpp constantPool.cpp jvmtiEnter.xsl jvmtiRedefineClasses.cpp >> > safepoint.cpp thread.cpp >> > >> > logical-op-parentheses: can be tricky to get correct. There are a few very long-winded predicates. >> > >> > nativeInst_x86.hpp archDesc.cpp output_c.cpp output_h.cpp c1_GraphBuilder.cpp >> > c1_LIRGenerator.cpp c1_LinearScan.cpp bcEscapeAnalyzer.cpp ciMethod.cpp >> > stackMapTableFormat.hpp compressedStream.cpp dependencies.cpp heapRegion.cpp >> > ptrQueue.cpp psPromotionManager.cpp jvmciCompilerToVM.cpp cfgnode.cpp >> > chaitin.cpp compile.cpp compile.hpp escape.cpp graphKit.cpp lcm.cpp >> > loopTransform.cpp loopnode.cpp loopopts.cpp macro.cpp memnode.cpp >> > output.cpp parse1.cpp parseHelper.cpp reg_split.cpp superword.cpp >> > superword.hpp jniCheck.cpp jvmtiEventController.cpp arguments.cpp >> > javaCalls.cpp sharedRuntime.cpp >> > >> > parentheses >> > >> > adlparse.cpp >> > >> > parentheses-equality >> > >> > output_c.cpp javaAssertions.cpp gcm.cpp >> > >> > File-specific details: >> > >> > GensrcAdlc.gmk CompileJvm.gmk >> > Left tautological-compare in place to allow null 'this' pointer checks in methods >> > intended to be called from a debugger. >> > >> > CompileGTest.gmk >> > Just an enhanced comment. >> > >> > MacosxDebuggerLocal.m >> > PT_ATTACH has been replaced by PT_ATTACHEXC >> > >> > ciMethodData.cp >> > " 0x%" FORMAT64_MODIFIER "x" reduces to "0x%llx", whereas >> > " " INTPTRNZ_FORMAT reduces to "0x%lx", which latter is what clang want. >> > >> > generateOopMap.cpp >> > Refactored duplicate code in print_current_state(). >> > >> > binaryTreeDictionary.cpp/hpp, hashtable.cpp/hpp >> > These provoked ?instantiation of variable required here, >> > but no definition is available?. 
>> > globalDefinitions_gcc.hpp >> > Define FORMAT64_MODIFIER properly for Apple, needed by os.cpp. >> > >> > globalDefinitions.hpp >> > Add INTPTRNZ_FORMAT, needed by ciMethodData.cpp. >> >> >> >> >> From alexander.harlap at oracle.com Fri Jun 30 12:16:22 2017 From: alexander.harlap at oracle.com (Alexander Harlap) Date: Fri, 30 Jun 2017 08:16:22 -0400 Subject: Request for review for JDK-8178507 - co-locate nsk.regression.gc tests In-Reply-To: <2D5E673A-2CC7-481E-A0F3-E97AA59272DE@oracle.com> References: <73e5c873-1ddd-afd7-b958-0367b9376f1d@oracle.com> <5bda3da9-18fd-d00a-cc1f-19bf36ff3709@oracle.com> <64280D64-9F9B-42BE-B348-0F1449DFEAC6@oracle.com> <2D5E673A-2CC7-481E-A0F3-E97AA59272DE@oracle.com> Message-ID: Hi Igor, Leonid helped me to polish the tests. Could you look at the final version? Alex On 6/29/2017 11:24 PM, Leonid Mesnik wrote: > Hi > > The changes look good. > > Leonid >> On Jun 29, 2017, at 11:08 AM, Alexander Harlap >> > wrote: >> >> Here is the new version: >> http://cr.openjdk.java.net/~aharlap/8178507/webrev.04/ >> >> I made the changes recommended by Leonid: vm.debug for debug-only >> options (very good!) >> >> and split TestMemoryInitialization into two separate tests. >> >> Alex >> >> >> >> On 6/28/2017 5:35 PM, Leonid Mesnik wrote: >>> >>>> On Jun 28, 2017, at 11:27 AM, Alexander Harlap >>>> > >>>> wrote: >>>> >>>> Hi Leonid and Igor, >>>> >>>> It looks like we need an extra round of review: >>>> >>>> The new version is here: >>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.03/ >>>> >>>> Two issues: >>>> >>>> 1. TestFullGCALot.java - it may take too long. So I added the option >>>> -XX:FullGCALotInterval=120 to make sure we do not hit the timeout and >>>> do not slow down testing, also -XX:+IgnoreUnrecognizedVMOptions - >>>> do not fail in product mode >>>> >>> It would be better to use requires to skip the test for product bits. >>> (vm.debug) >>>> >>>> 2. 
TestMemoryInitialization.java - the feature to initialize debug memory >>>> to some special words is currently supported only for CMS and >>>> Serial GC. So I modified the test to run now only for these GCs: >>>> >>>> * @requires vm.gc.Serial | vm.gc.ConcMarkSweep >>>> * @summary Simple test for -XX:+CheckMemoryInitialization doesn't >>>> crash VM >>>> * @run main/othervm -XX:+UseSerialGC >>>> -XX:+IgnoreUnrecognizedVMOptions -XX:+CheckMemoryInitialization >>>> TestMemoryInitialization >>>> * @run main/othervm -XX:+UseConcMarkSweepGC >>>> -XX:+IgnoreUnrecognizedVMOptions -XX:+CheckMemoryInitialization >>>> TestMemoryInitialization >>>> >>> The test tries to run the VM 2 times with Serial/CMS GC in case any >>> of them is supported and/or set. So the test fails if CMS is not >>> supported. If any GC is set explicitly, it should fail >>> with an unsupported GC combination. >>> >>> It would be better to split the test into 2 single tests, >>> TestMemoryInitializationSerialGC & TestMemoryInitializationCMSGC, >>> which share Java code. Also, CMS has been deprecated in JDK 9, so I >>> don't know if it makes sense to test it in JDK 10. >>> >>> Leonid >>>> >>>> I will add an enhancement request to support the CheckMemoryInitialization >>>> flag in G1. >>>> >>>> Alex >>>> >>>> On 6/26/2017 7:04 PM, Igor Ignatyev wrote: >>>>> >>>>>> On Jun 26, 2017, at 2:54 PM, Alexander Harlap >>>>>> >>>>> > wrote: >>>>>> >>>>>> Thank you Igor and Leonid, >>>>>> >>>>>> I fixed the mentioned typos and the unnecessary return (see >>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.02/) >>>>>> >>>>> perfect. >>>>>> >>>>>> Do I need more reviews? >>>>>> >>>>> no, you can go ahead and integrate it. 
>>>>> >>>>> -- Igor >>>>>> >>>>>> Alex >>>>>> >>>>>> >>>>>> On 6/26/2017 4:32 PM, Igor Ignatyev wrote: >>>>>>> Hi Alexander, >>>>>>> >>>>>>> besides the small nits which Leonid mentioned, there is one in >>>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html: >>>>>>> >>>>>>> >>>>>>>> 28 * @summary Test verifies only that VM doesn???t crash but throw expected Error. >>>>>>> I guess "doesn???t" is 'doesn't' w/ a fancy apostrophe. >>>>>>> otherwise looks good to me, Reviewed. >>>>>>> >>>>>>> -- Igor >>>>>>> >>>>>>>> On Jun 26, 2017, at 1:11 PM, Leonid Mesnik >>>>>>>> > wrote: >>>>>>>> >>>>>>>> Hi >>>>>>>> >>>>>>>> New changes looks good for me. Please get review from Reviewer. >>>>>>>> >>>>>>>> The only 2 small nits which don?t require separate review from me: >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestFullGCALot.java.html >>>>>>>> >>>>>>>> >>>>>>> > >>>>>>>> typo in >>>>>>>> 37 System.out.println("Hellow world!"); >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/test/gc/TestStackOverflow.java.html >>>>>>>> >>>>>>>> >>>>>>> > >>>>>>>> return is not needed in >>>>>>>> 58 return; >>>>>>>> >>>>>>>> Thanks >>>>>>>> Leonid >>>>>>>>> On Jun 26, 2017, at 1:04 PM, Alexander Harlap >>>>>>>>> >>>>>>>> > wrote: >>>>>>>>> >>>>>>>>> Hi Leonid, >>>>>>>>> >>>>>>>>> I accommodated your suggestions. >>>>>>>>> >>>>>>>>> New version of changeset located at >>>>>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.01/ >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Alex >>>>>>>>> >>>>>>>>> >>>>>>>>> On 6/23/2017 6:18 PM, Leonid Mesnik wrote: >>>>>>>>>> Hi >>>>>>>>>> >>>>>>>>>> Basically changes looks good. 
Below are some comments: >>>>>>>>>> >>>>>>>>>>> On Jun 22, 2017, at 9:16 AM, Alexander Harlap >>>>>>>>>>> >>>>>>>>>> > wrote: >>>>>>>>>>> >>>>>>>>>>> Please review change for JDK-8178507 >>>>>>>>>>> - >>>>>>>>>>> co-locate nsk.regression.gc tests >>>>>>>>>>> >>>>>>>>>>> JDK-8178507 >>>>>>>>>>> is last >>>>>>>>>>> remaining sub-task ofJDK-8178482 >>>>>>>>>>> - >>>>>>>>>>> Co-locate remaining GC tests >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Proposed change located at >>>>>>>>>>> http://cr.openjdk.java.net/~aharlap/8178507/webrev.00/ >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Co-located and converted to JTREG tests are: >>>>>>>>>>> >>>>>>>>>>> nsk/regression/b4187687 => hotspot/test/gc/TestFullGCALot.java >>>>>>>>>> The out variable is no used and return code is not checked in >>>>>>>>>> method ?run?. Wouldn't it simpler just to move println into >>>>>>>>>> main and remove method ?run? completely? >>>>>>>>>>> nsk/regression/b4396719 => >>>>>>>>>>> hotspot/test/gc/TestStackOverflow.java >>>>>>>>>> The method ?run? always returns 0. It would be better to make >>>>>>>>>> it void or just remove it. Test never throws any exception. >>>>>>>>>> So it make a sense to write in comments that test verifies >>>>>>>>>> only that VM doesn?t crash but throw expected Error. >>>>>>>>>> >>>>>>>>>>> nsk/regression/b4668531 => >>>>>>>>>>> hotspot/test/gc/TestMemoryInitialization.java >>>>>>>>>> The variable buffer is ?read-only?. It make a sense to make >>>>>>>>>> variable ?buffer' public static member of class >>>>>>>>>> TestMemoryInitialization. So compiler could not optimize it >>>>>>>>>> usage during any optimization like escape analysis. >>>>>>>>>>> nsk/regression/b6186200 => >>>>>>>>>>> hotspot/test/gc/cslocker/TestCSLocker.java >>>>>>>>>>> >>>>>>>>>> Port looks good. It seems that test doesn?t verify that lock >>>>>>>>>> really happened. Could be this improved as a part of this fix >>>>>>>>>> or by filing separate RFE? 
>>>>>>>>>> >>>>>>>>>> Leonid >>>>>>>>>>> Thank you, >>>>>>>>>>> >>>>>>>>>>> Alex >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > From aph at redhat.com Fri Jun 30 15:03:37 2017 From: aph at redhat.com (Andrew Haley) Date: Fri, 30 Jun 2017 16:03:37 +0100 Subject: RFR: JDK-8172791 fixing issues with Reserved Stack Area In-Reply-To: <3cd1c352-33fa-96fe-d2f6-a78a25c0543e@oracle.com> References: <3cd1c352-33fa-96fe-d2f6-a78a25c0543e@oracle.com> Message-ID: Just to explain why this patch is needed: without it, JEP 270 (ReservedStackArea) does not work at all if a method with a ReservedStack annotation is inlined. Which, in practice, is all the time, because aggressive inlining is what C2 does. Can somebody please have a look at this? On 18/04/17 15:47, Frederic Parain wrote: > Greetings, > > Please review this fix which addresses several issues with the > ReservedStackArea implementation: > 1 - the method look_for_reserved_stack_annotated_method() fails to > detect in-lined > annotated methods, making the annotation ineffective for > in-lined methods > 2 - sometime an assertion failure related to reserved area status > occurs after incorrect > restoring of guards pages during a return from runtime > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8172791 > > webrev: > http://cr.openjdk.java.net/~fparain/8172791/webrev.00/index.html > > This fix has been contributed by Andrew Haley. > > Thank you, > > Fred -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From stefan.karlsson at oracle.com Fri Jun 30 15:15:49 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 30 Jun 2017 17:15:49 +0200 Subject: RFR: 8178495: Bug in the align_size_up_ macro In-Reply-To: <7aea18aa-7aab-8f80-72f5-b8d4304a829d@oracle.com> References: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> <7aea18aa-7aab-8f80-72f5-b8d4304a829d@oracle.com> Message-ID: Fixing the logging with SCOPED_TRACE was only tested locally, but failed on OSX. Here's a fix for that problem: http://cr.openjdk.java.net/~stefank/8178495/webrev.02 http://cr.openjdk.java.net/~stefank/8178495/webrev.02.delta Passes JPRT now. StefanK On 2017-06-30 10:06, Stefan Karlsson wrote: > Hi Kim, > > Updated webrevs: > http://cr.openjdk.java.net/~stefank/8178495/webrev.01 > http://cr.openjdk.java.net/~stefank/8178495/webrev.01.delta > > Inlined: > > On 2017-06-30 00:04, Kim Barrett wrote: >>> On Jun 28, 2017, at 11:08 AM, Stefan Karlsson >>> wrote: >>> >>> Hi all, >>> >>> Please review this patch to fix a bug in the align_size_up_ macro. >>> >>> http://cr.openjdk.java.net/~stefank/8178495/webrev.00/ >>> https://bugs.openjdk.java.net/browse/JDK-8178495 >> ------------------------------------------------------------------------------ >> >> src/share/vm/utilities/globalDefinitions.hpp >> 514 #define widen_to_type_of(what, type_carrier) ((what) | >> ((type_carrier) & 0)) >> >> I think a better form of widen_to_type_of is the following: >> >> #define widen_to_type_of(what, type_carrier) (true ? (what) : >> (type_carrier)) >> >> The difference is that this just promotes as needed, and will never >> execute any part of type_carrier. The definition in the webrev may not >> be able to completely optimize away the carrier expression under some >> conditions. > > Thanks for the suggestion. 
> >> >> ------------------------------------------------------------------------------ >> >> test/native/utilities/test_align.cpp >> 35 struct TypeInfo : AllStatic { >> 36 static const bool is_unsigned = T(-1) > T(0); >> 37 static const T max = is_unsigned >> >> We recently (JDK-8181318) made it possible to #include and >> use std::numeric_limits, so you could get is_signed and max() from >> there. max_alignment could then just be a static helper function >> template. >> >> Also here: >> 54 static const intptr_t max_intptr = (intptr_t)max_intx; > > Done. > >> >> ------------------------------------------------------------------------------ >> >> test/native/utilities/test_align.cpp >> 25 #include "logging/log.hpp" >> and >> 44 const static bool logging_enabled = false; >> 45 #define log(...) \ >> ... >> >> I'm guessing the logging macro is because logging/log stuff isn't >> initialized yet when executing a TEST (as opposed to a TEST_VM)? So >> the #include is superfluous? > > You are right. > > I changed the implementation to use SCOPED_TRACE, as you suggested in > another mail. > > Thanks, > StefanK > From stefan.karlsson at oracle.com Fri Jun 30 16:12:43 2017 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 30 Jun 2017 18:12:43 +0200 Subject: RFR: 8178500: Replace usages of round_to and round_down with align_up and align_down Message-ID: Hi all, Please review this patch to remove round_to and round_down and replace their usages with align_up and align_down. http://cr.openjdk.java.net/~stefank/8178500/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8178500 The round_to and round_down functions asserted that alignment was a power-of-2. I've added asserts to the corresponding align functions. 
This patch builds upon all the patches referred to in: http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-June/027329.html Thanks, StefanK From hohensee at amazon.com Fri Jun 30 16:30:20 2017 From: hohensee at amazon.com (Hohensee, Paul) Date: Fri, 30 Jun 2017 16:30:20 +0000 Subject: RFR(XL): 8182299: Enable disabled clang warnings, build on OSX 10 + Xcode 8 In-Reply-To: References: <27FD0413-52BC-42E6-A5B0-3C92A49A2D6F@amazon.com> <1CBD62A9-9B1B-4B05-AAF9-BE2D52DE8C79@amazon.com> <6B296ABE-66C0-4C32-AC4E-8674BE103514@oracle.com> <3ACE75CF-CEA6-4081-952A-BBA138763582@amazon.com> <435C96F2-417E-4A47-8649-28E34E2908AE@amazon.com> <1d06f5fe-c3f8-f599-58bc-09b2099c819c@oracle.com> Message-ID: Thanks for picking up the closed work, Jesper, and thanks for the review, Erik. It's good to be back working on server Java. ( Yes, I added the G1RootClosures virtual destructor because of the warning. In heapRegionType.cpp, I just followed along with the existing funky indentation and made the default clause also have a return because all the other clauses do. Clang would indeed be ok with a default clause with just a break. Also, I've used the same idiom in other files, so I'd change those too if I change these. Which would you prefer? I'm not invested in any particular format. Thanks, Paul On 6/30/17, 3:00 AM, "jesper.wilhelmsson at oracle.com" wrote: I ran the change through JPRT so at least it builds and runs on all platforms we support there. I'm currently fixing our closed code to build on Mac with this change. /Jesper > On 30 Jun 2017, at 10:35, Erik Helin wrote: > > Hi Paul, > > thanks for contributing! Please see my comments regarding the GC changes below. 
> [...]
>> > >> > globalDefinitions_gcc.hpp >> > Define FORMAT64_MODIFIER properly for Apple, needed by os.cpp. >> > >> > globalDefinitions.hpp >> > Add INTPTRNZ_FORMAT, needed by ciMethodData.cpp. >> >> >> >> >> From thomas.stuefe at gmail.com Fri Jun 30 18:32:46 2017 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 30 Jun 2017 20:32:46 +0200 Subject: RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area In-Reply-To: References: <337330d2-512c-bc77-5a55-c05d33e376e5@oracle.com> Message-ID: Hi Eric, thank you for the review! New Version: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.03/webrev/ Delta to last: http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/delta-02-to-03/webrev/ Please find comments inline. On Tue, Jun 27, 2017 at 2:53 PM, Erik Helin wrote: > On 06/21/2017 06:16 PM, Thomas St?fe wrote: > >> New Webrev: >> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor >> -ul-logstream/all.webrev.02/webrev/ >> > > I have spent most of my time in log.hpp, logStream.{hpp,cpp}, so I will > start with those comments/questions and then continue reviewing the > callsites. > > log.hpp: > - two extra newlines near end of file > > logStream.hpp: > - one extra newline near end of file > I cleaned up all extra newlines I added. > - should you add declarations of delete as well to prevent someone from > calling delete on LogStream pointer? Tried this, but I think I ran into linker errors. I do not think it is needed, though. > Thinking about this, it seems a > bit odd to pass a function an outputStream* (a pointer to a > ResourceObj) that doesn't obey the ResourceObj contract. An > outputStream* coming from &ls, where ls is an LogStream instance on > the stack, is a pointer to a ResourceObj that in practice is not a > ResourceObj. > Yes I agree. 
Deriving outputStream from ResourceObj seems an odd choice, but as far as I can tell it has always been this way and I did not want to change it. Especially since, as you noted, the contract is not really tight :) You still can place them on the stack or allocate them from C-Heap. > Thinking about this some more, I think this is safe. No function that > is passed an outputStream* can assume that it can call delete on that > pointer. > > Ok! > - is 128 bytes as default too much for _smallbuf? Should we start with > 64? > I changed it to 64. > - the keyword `public` is repeated unnecessarily in the LogStream class (a > superfluous `public` before the write method) > > Fixed. > logStream.cpp > - one extra newline at line 74 > - the pointer returned from os::malloc is not checked. What should we > do if a NULL pointer is returned? Print whatever is buffered and then > do vm_out_of_memory? > - is growing by doubling too aggressive? should the LineBuffer instead > grow by chunking (e.g. always add 64 more bytes)? > Okay. I changed the allocation of the line buffer such that - we do not allocate beyond a reasonable limit (1M hardwired), to prevent runaway leaks. - we cope with OOM. In both cases, LogStream continues to work but may truncate the output. I also added a test case for the former scenario (the 1M cap). > - instead of growing the LineBuffer by reallocating and then copying > over the old bytes, should we use a chunked linked list instead (in > order to avoid copying the same data multiple times)? The only > "requirements" on the LineBuffer are fast append and fast iteration > (it doesn't have to support fast index lookup). > Not really sure how this would work. The whole point of LogStream is to assemble one log line, in the form of a single contiguous zero-terminated string, and pass that to the UL backend as one single log call, yes? How would you do this with disjoint chunks? - LogStream::write is no longer in inline.hpp, is this a potential > performance problem? 
I think not; the old LogStream::write > definition was most likely in .inline.hpp because of template usage > I do not think so either. I do not like implementations in headers (clutters the interface), unless it is worth it performance-wise. Here, LogStream::write is usually called from one of the outputStream::print()... functions via virtual function call; not sure how that could even be inlined. > > Great work thus far Thomas, the patch is becoming really solid! If you > want to discuss over IM, then you can (as always) find me in #openjdk on > irc.oftc.net. > > Thanks :) And thanks for reviewing! ...Thomas > > Thanks, > Erik > > Kind Regards, Thomas >> > From ioi.lam at oracle.com Fri Jun 30 18:39:23 2017 From: ioi.lam at oracle.com (Ioi Lam) Date: Fri, 30 Jun 2017 11:39:23 -0700 Subject: RFR (L) 7133093: Improve system dictionary performance In-Reply-To: <89f0b98c-3cbd-7b87-d76b-5e89a5a676fb@oracle.com> References: <89f0b98c-3cbd-7b87-d76b-5e89a5a676fb@oracle.com> Message-ID: Hi Coleen, Maybe the bug should be renamed to "Use one Dictionary per class loader instance"? That way it's more obvious what it is when you look at the repo history. 1. Why is assert_locked_or_safepoint(SystemDictionary_lock) necessary in SystemDictionary::find_class (line 1826), but not necessary in SystemDictionary::find (line 951)? Since you removed NoSafepointVerifier nosafepoint in the latter, maybe this means it's safe to remove the assert_locked_or_safepoint in the former? 2. 455 static ClassLoaderData* _current_loader_data = NULL; 456 static Klass* _current_class_entry = NULL; 457 458 InstanceKlass* ClassLoaderDataGraph::try_get_next_class() { How about moving the static fields into an iterator object? That way you don't need to keep track of the globals ClassLoaderDataGraphIterator { ClassLoaderData* _current_loader_data Klass* _current_class_entry; InstanceKlass* try_get_next_class() { ....} }; 3. 
Double check locking in ClassLoaderData::dictionary() -- someone else should look at this :-) 4. We may need a better strategy for deciding the size of each dictionary. 565 const int _primelist[10] = {1, 107, 1009}; 571 Dictionary* ClassLoaderData::dictionary() { 579 if ((dictionary = _dictionary) == NULL) { 580 int size; 581 if (this == the_null_class_loader_data() || is_system_class_loader_data()) { 582 size = _primelist[2]; 583 } else if (class_loader()->is_a(SystemDictionary::reflect_DelegatingClassLoader_klass())) { 584 size = _primelist[0]; 585 } else { 586 size = _primelist[1]; 587 } 588 dictionary = new Dictionary(this, size); I'll do some investigation on this issue and get back to you. The rest of the changes look good to me. Thanks - Ioi On 6/23/17 4:42 PM, coleen.phillimore at oracle.com wrote: > Summary: Implement one dictionary per ClassLoaderData for faster > lookup and removal during class unloading > > See RFE for more details. > > open webrev at http://cr.openjdk.java.net/~coleenp/7133093.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-7133093 > > Tested with full "nightly" run in rbt, plus locally class loading and > unloading tests: > > jtreg hotspot/test/runtime/ClassUnload > > jtreg hotspot/test/runtime/modules > > jtreg hotspot/test/gc/class_unloading > > make test-hotspot-closed-tonga FILTER=quick TEST_JOBS=4 > TEST=vm.parallel_class_loading > > csh ~/testing/run_jck9 (vm/lang/java_lang) > > runThese -jck - uses class loader isolation to run each jck test and > unloads tests when done (at -gc:5 intervals) > > > Thanks, > Coleen > > From coleen.phillimore at oracle.com Fri Jun 30 20:45:10 2017 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 30 Jun 2017 16:45:10 -0400 Subject: RFR (L) 7133093: Improve system dictionary performance In-Reply-To: References: <89f0b98c-3cbd-7b87-d76b-5e89a5a676fb@oracle.com> Message-ID: Ioi, Thank you for looking at this change. 
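[Aside: the lazily-created, double-checked dictionary under discussion in Ioi's point 3 can be sketched in self-contained standard C++. The types and names below are hypothetical simplifications; HotSpot's real code uses its own Atomic and Mutex primitives rather than the standard library.]

```cpp
#include <atomic>
#include <mutex>

struct Dictionary {
  int size;
  explicit Dictionary(int s) : size(s) {}
};

class LoaderData {
  std::atomic<Dictionary*> _dictionary{nullptr};
  std::mutex _lock;
 public:
  // Double-checked locking: the fast path is a single acquire load;
  // the lock is taken only on first use, and the release store
  // publishes the fully-constructed Dictionary to other threads.
  Dictionary* dictionary(int size) {
    Dictionary* d = _dictionary.load(std::memory_order_acquire);
    if (d == nullptr) {
      std::lock_guard<std::mutex> g(_lock);
      d = _dictionary.load(std::memory_order_relaxed);
      if (d == nullptr) {
        d = new Dictionary(size);
        _dictionary.store(d, std::memory_order_release);
      }
    }
    return d;
  }
};
```

The acquire/release pair is what makes the idiom safe: without it, another thread could observe the pointer before the Dictionary's fields are visible.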
On 6/30/17 2:39 PM, Ioi Lam wrote: > Hi Coleen, > > Maybe the bug should be renamed to "Use one Dictionary per class > loader instance"? That way it's more obvious what it is when you look > at the repo history. I can do that. I made it One Dictionary per ClassLoaderData > > 1. > > Why is assert_locked_or_safepoint(SystemDictionary_lock) necessary in > SystemDictionary::find_class (line 1826), but not necessary > SystemDictionary::find (line 951)? Since you removed > NoSafepointVerifier nosafepoint in the latter, maybe this means it's > safe to remove the assert_locked_or_safepoint in the former? The call to SystemDictionary::find() is the (I believe) usual lock free lookup and the SystemDictionary::find_class() is used to verify that a class we're about to add or want to add hasn't been added by another thread. Or certain cases where we already have a lock to do something else, like add loader constraints. I took out the NoSafepointVerifier because it assumes that system dictionary entries would be moved by GC, which they won't. The old hash function could safepoint when getting the hash for the class_loader object. > > 2. > > 455 static ClassLoaderData* _current_loader_data = NULL; > 456 static Klass* _current_class_entry = NULL; > 457 > 458 InstanceKlass* ClassLoaderDataGraph::try_get_next_class() { > > How about moving the static fields into an iterator object. That way > you don't need to keep track of the globals > > ClassLoaderDataGraphIterator { > ClassLoaderData* _current_loader_data > Klass* _current_class_entry; > > InstanceKlass* try_get_next_class() { ....} > }; Ok, there's a different iterator that iterates over all of the classes for GC. I will adapt that for this use. That would be better. > > 3. Double check locking in ClassLoaderData::dictionary() -- someone > else should look at this :-) I copied code that David Holmes added for modules. > > 4. We may need a better strategy for deciding the size of each > dictionary. 
> > 565 const int _primelist[10] = {1, 107, 1009}; > 571 Dictionary* ClassLoaderData::dictionary() { > 579 if ((dictionary = _dictionary) == NULL) { > 580 int size; > 581 if (this == the_null_class_loader_data() || > is_system_class_loader_data()) { > 582 size = _primelist[2]; > 583 } else if > (class_loader()->is_a(SystemDictionary::reflect_DelegatingClassLoader_klass())) > { > 584 size = _primelist[0]; > 585 } else { > 586 size = _primelist[1]; > 587 } > 588 dictionary = new Dictionary(this, size); > > I'll do some investigation on this issue and get back to you. > How about if someone uses PredictedLoadedClassCount, then we use that to size all but the reflection and boot class loader? Then if there's an application that has a class loader with a huge amount of classes loaded in it, that would help this? It might cost some footprint but an oversized table would simply be a bigger array of pointers, so it might not be that bad to oversize. I think the long term solution is being able to resize these entries and make the application provide arguments. Please let me know what you find in your investigation and if that would work. > The rest of the changes look good to me. Thank you! Coleen > > Thanks > - Ioi > > > On 6/23/17 4:42 PM, coleen.phillimore at oracle.com wrote: >> Summary: Implement one dictionary per ClassLoaderData for faster >> lookup and removal during class unloading >> >> See RFE for more details. 
>> >> open webrev at http://cr.openjdk.java.net/~coleenp/7133093.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-7133093 >> >> Tested with full "nightly" run in rbt, plus locally class loading and >> unloading tests: >> >> jtreg hotspot/test/runtime/ClassUnload >> >> jtreg hotspot/test/runtime/modules >> >> jtreg hotspot/test/gc/class_unloading >> >> make test-hotspot-closed-tonga FILTER=quick TEST_JOBS=4 >> TEST=vm.parallel_class_loading >> >> csh ~/testing/run_jck9 (vm/lang/java_lang) >> >> runThese -jck - uses class loader isolation to run each jck test and >> unloads tests when done (at -gc:5 intervals) >> >> >> Thanks, >> Coleen >> >> > From kim.barrett at oracle.com Fri Jun 30 23:21:43 2017 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 30 Jun 2017 19:21:43 -0400 Subject: RFR: 8178495: Bug in the align_size_up_ macro In-Reply-To: References: <90b9016c-42b3-31e7-8c9c-28dafcad879a@oracle.com> <7aea18aa-7aab-8f80-72f5-b8d4304a829d@oracle.com> Message-ID: > On Jun 30, 2017, at 11:15 AM, Stefan Karlsson wrote: > > Fixing the logging with SCOPED_TRACE was only tested locally, but failed on OSX. Here's a fix for that problem: > http://cr.openjdk.java.net/~stefank/8178495/webrev.02 > http://cr.openjdk.java.net/~stefank/8178495/webrev.02.delta > > Passes JPRT now. > > StefanK Looks good. I think I see where things might have gone awry, but I'm curious what the OSX failure was. I'm a bit surprised that it complained but other compilers didn't. From daniel.daugherty at oracle.com Fri Jun 30 23:56:41 2017 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Fri, 30 Jun 2017 17:56:41 -0600 Subject: RFR: JDK-8172791 fixing issues with Reserved Stack Area In-Reply-To: References: <3cd1c352-33fa-96fe-d2f6-a78a25c0543e@oracle.com> Message-ID: <0b99ee01-4e40-dc92-b116-1abcd6ff257a@oracle.com> On 6/30/17 9:03 AM, Andrew Haley wrote: > Just to explain why this patch is needed: without it, JEP 270 > (ReservedStackArea) does not work at all if a method with a > ReservedStack annotation is inlined. Which, in practice, is all the > time, because aggressive inlining is what C2 does. > > Can somebody please have a look at this? > > > On 18/04/17 15:47, Frederic Parain wrote: >> Greetings, >> >> Please review this fix which addresses several issues with the >> ReservedStackArea implementation: >> 1 - the method look_for_reserved_stack_annotated_method() fails to >> detect in-lined >> annotated methods, making the annotation ineffective for >> in-lined methods >> 2 - sometimes an assertion failure related to reserved area status >> occurs after incorrect >> restoring of guard pages during a return from runtime >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8172791 >> >> webrev: >> http://cr.openjdk.java.net/~fparain/8172791/webrev.00/index.html src/cpu/x86/vm/interp_masm_x86.cpp Is the deletion of: L1083: push(rthread); related to the assertion failure part of the fix? It looks like it is just fixing a call protocol error (pushing rthread when it is not needed), but I'm not sure. src/share/vm/code/compiledMethod.cpp No comments. src/share/vm/code/compiledMethod.hpp No comments. src/share/vm/opto/compile.cpp No comments. src/share/vm/runtime/sharedRuntime.cpp L3157: for (ScopeDesc *sd = nm->scope_desc_near(fr.pc()); sd; sd = sd->sender()) { nit: implied boolean with 'sd;' please change to: 'sd != NULL;' test/runtime/ReservedStack/ReservedStackTest.java Why stop running the test on -Xint? Thumbs up on the fix. This fix is going to need a review from someone on the Compiler team also. 
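[Aside: the loop quoted above from sharedRuntime.cpp is the heart of the fix Andrew describes — it walks the chain of scopes at a PC so an annotated method is found even after C2 inlines it into a caller. A simplified, self-contained sketch follows; the Method and ScopeDesc types here are hypothetical stand-ins for HotSpot's real classes.]

```cpp
#include <cstddef>

// Hypothetical stand-ins: each ScopeDesc describes one (possibly
// inlined) method at a PC; sender() is the next-outer caller scope.
struct Method {
  bool has_reserved_stack_access;
};
struct ScopeDesc {
  Method* method;
  ScopeDesc* sender_;
  ScopeDesc* sender() const { return sender_; }
};

// Walk the entire inlining chain, not just the outermost method, so a
// reserved-stack-annotated callee is still detected after inlining.
static Method* look_for_reserved_stack_annotated_method(ScopeDesc* top) {
  for (ScopeDesc* sd = top; sd != NULL; sd = sd->sender()) {
    if (sd->method->has_reserved_stack_access) {
      return sd->method;
    }
  }
  return NULL;
}
```

Checking only the top scope is exactly the bug being fixed: the annotated method usually sits in an inner (inlined) scope, so the walk must continue through sender() until the chain ends.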
Fred, who did your JEP-270 reviews from the Compiler team? Dan >> >> This fix has been contributed by Andrew Haley. >> >> Thank you, >> >> Fred