From david.holmes at oracle.com Mon Feb 1 00:18:47 2016 From: david.holmes at oracle.com (David Holmes) Date: Mon, 1 Feb 2016 10:18:47 +1000 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56ACC86A.8080102@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> Message-ID: <56AEA467.6050801@oracle.com> Hi Coleen, I think what Chris was referring to was the CDS compaction work - which has since been abandoned. To be honest it has been so long since I was working on this that I can't recall the details. At one point Ioi commented how all MSO's were allocated with 8-byte alignment which was unnecessary, and that we could do better and account for it in the size() method. He also noted if we somehow messed up the alignment when doing this that it should be quickly detectable on sparc. These current changes will affect the apparent wasted space in the archive as the expected usage would be based on size() while the actual usage would be determined by the allocator. Ioi was really the best person to comment-on/review this. David ----- On 31/01/2016 12:27 AM, Coleen Phillimore wrote: > > > On 1/29/16 2:20 PM, Coleen Phillimore wrote: >> >> Thanks Chris, >> >> On 1/29/16 2:15 PM, Chris Plummer wrote: >>> Hi Coleen, >>> >>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>> >>>> Hi Chris, >>>> >>>> I made a few extra changes because of your question that I didn't >>>> answer below, a few HeapWordSize became wordSize. I apologize that >>>> I don't know how to create incremental webrevs. See discussion below. >>>> >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>> >>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>> >>>>>> Thank you, Chris for looking at this change. >>>>>> >>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>> Hi Coleen, >>>>>>> >>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>> something other than 8? >>>>>> >>>>>> Okay, I can run one of the testsets with that. I verified it in >>>>>> the debugger mostly. >>>>>>> >>>>>>> Someone from GC team should apply your patch, grep for >>>>>>> align_object_size(), and confirm that the ones you didn't change >>>>>>> are correct. I gave a quick look and they look right to me, but I >>>>>>> wasn't always certain if object alignment was appropriate in all >>>>>>> cases. >>>>>> >>>>>> thanks - this is why I'd changed the align_object_size to >>>>>> align_heap_object_size before testing and changed it back, to >>>>>> verify that I didn't miss any. >>>>>>> >>>>>>> I see some remaining HeapWordSize references that are suspect, >>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go through >>>>>>> all of them since there are about 428. Do they need closer >>>>>>> inspection? >>>>> ??? Any comment? >>>> >>>> Actually, I tried to get a lot of HeapWordSize in the metadata but >>>> the primary focus of the change, despite the title, was to fix >>>> align_object_size wasn't used on metadata. >>> ok. >>>> That said a quick look at the instances of HeapWordSize led to some >>>> that weren't in the heap. I didn't look in Array.java because it's >>>> in the SA which isn't maintainable anyway, but I changed a few. 
>>>> There were very few that were not referring to objects in the Java >>>> heap. bytecodeTracer was one and there were a couple in metaspace.cpp. >>> Ok. If you think there may be more, or a more thorough analysis is >>> needed, perhaps just file a bug to get the rest later. >> >> From my look yesterday, there aren't a lot of HeapWordSize left. There >> are probably still a lot of HeapWord* casts for things that aren't in >> the Java heap. This is a bigger cleanup that might not make sense to >> do in one change, but maybe in incremental changes to related code. >> >>> >>> As for reviewing your incremental changes, as long as it was just >>> more changes of HeapWordSize to wordSize, I'm sure they are fine. >>> (And yes, I did see that the removal of Symbol size alignment was >>> also added). >> >> Good, thanks. >> >>>> >>>> The bad news is that's more code to review. See above webrev link. >>>> >>>>>>> >>>>>>> align_metadata_offset() is not used. It can be removed. >>>>>> >>>>>> Okay, I'll remove it. That's a good idea. >>>>>> >>>>>>> >>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>> align_object_size() did, and not align to word size? Isn't that >>>>>>> what we agreed to? Have you tested CDS? David had concerns about >>>>>>> the InstanceKlass::size() not returning the same aligned size as >>>>>>> Metachunk::object_alignment(). >>>>>> >>>>>> I ran the CDS tests but I could test some more with CDS. We don't >>>>>> want to force the size of objects to be 64 bit (especially Symbol) >>>>>> because Metachunk::object_alignment() is 64 bits. >>>>> Do you mean "just" because? I wasn't necessarily suggesting that >>>>> all metadata be 64-bit aligned. However, the ones that have their >>>>> allocation size 64-bit aligned should be. I think David's concern >>>>> is that he wrote code that computes how much memory is needed for >>>>> the archive, and it uses size() for that. If the Metachunk >>>>> allocator allocates more than size() due to the 64-bit alignment of >>>>> Metachunk::object_alignment(), then he will underestimate the size. >>>>> You'll need to double check with David to see if I got this right. >>>> >>>> I don't know what code this is but yes, it would be wrong. It also >>>> would be wrong if there's any other alignment gaps or space in >>>> metaspace chunks because chunks themselves have an allocation >>>> granularity. >>>> >>>> It could be changed back by changing the function >>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>> >>>> I fixed Symbol so that it didn't call align_metaspace_size if this >>>> change is needed in the future. >>>> >>>> I was trying to limit the size of this change to correct >>>> align_object_size for metadata. >>> Well, there a few issues being addressed by fixing align_object_size. >>> Using align_object_size was incorrect from a code purity standpoint >>> (it was used on values unrelated to java objects), and was also >>> incorrect when ObjectAlignmentInBytes was not 8. This was the main >>> motivation for making this change. >> >> Exactly. This was higher priority because it was wrong. >>> >>> The 3rd issue is that align_object_size by default was doing 8 byte >>> alignment, and this wastes memory on 32-bit. However, as I mentioned >>> there may be some dependencies on this 8 byte alignment due to the >>> metaspace allocator doing 8 byte alignment. If you can get David to >>> say he's ok with just 4-byte size alignment on 32-bit, then I'm ok >>> with this change. 
Otherwise I think maybe you should stay with 8 byte >>> alignment (including symbols), and file a bug to someday change it to >>> word alignment, and have the metaspace allocator require that you >>> pass in alignment requirements. >> >> Okay, I can see what David says but I wouldn't change Symbol back. >> That's mostly unrelated to metadata storage and I can get 32 bit >> packing for symbols on 32 bit platforms. It probably saves more space >> than the other more invasive ideas that we've had. > > This is reviewed now. If David wants metadata sizing to change back to > 64 bits on 32 bit platforms, it's a one line change. I'm going to push > it to get the rest in. > Thanks, > Coleen >> >> Thanks, >> Coleen >> >>>> >>>> Thanks for looking at this in detail. >>> No problem. Thanks for cleaning this up. >>> >>> Chris >>>> >>>> Coleen >>>> >>>> >>>>>> Unfortunately, with the latter, metadata is never aligned on 32 >>>>>> bit boundaries for 32 bit platforms, but to fix this, we have to >>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>> because the alignment is not a function of the size of the object >>>>>> but what is required from its nonstatic data members. >>>>> Correct. >>>>>> I found MethodCounters, Klass (and subclasses) and ConstantPool >>>>>> has such alignment constraints. Not sizing metadata to 64 bit >>>>>> sizes is a start for making this change. >>>>> I agree with that, but just wanted to point out why David may be >>>>> concerned with this change. >>>>>>> >>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>> >>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>> Fixed, Thanks! >>>>> thanks, >>>>> >>>>> Chris >>>>>> >>>>>> Coleen >>>>>> >>>>>>> >>>>>>> thanks, >>>>>>> >>>>>>> Chris >>>>>>> >>>>>>> >>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>> Summary: Use align_metadata_size, align_metadata_offset and >>>>>>>> is_metadata_aligned for metadata rather >>>>>>>> than align_object_size, etc. Use wordSize rather than >>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>> rather than align_pointer_up (all the related functions are ptr). >>>>>>>> >>>>>>>> Ran RBT quick tests on all platforms along with Chris's Plummers >>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>> changes. Reran subset of this after merging. >>>>>>>> >>>>>>>> I have a script to update copyrights on commit. It's not a big >>>>>>>> change, just mostly boring. See the bug comments for more >>>>>>>> details about the change. >>>>>>>> >>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>> >>>>>>>> thanks, >>>>>>>> Coleen >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > From dmitry.samersoff at oracle.com Mon Feb 1 07:37:19 2016 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Mon, 1 Feb 2016 10:37:19 +0300 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AADD07.4080604@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> Message-ID: <56AF0B2F.6010207@oracle.com> Coleen, Didn't look over all changes but SA changes looks good to me. 
-Dmitry On 2016-01-29 06:31, Coleen Phillimore wrote: > > Hi Chris, > > I made a few extra changes because of your question that I didn't answer > below, a few HeapWordSize became wordSize. I apologize that I don't > know how to create incremental webrevs. See discussion below. > > open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ > > On 1/28/16 4:52 PM, Chris Plummer wrote: >> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>> >>> Thank you, Chris for looking at this change. >>> >>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>> Hi Coleen, >>>> >>>> Can you do some testing with ObjectAlignmentInBytes set to something >>>> other than 8? >>> >>> Okay, I can run one of the testsets with that. I verified it in the >>> debugger mostly. >>>> >>>> Someone from GC team should apply your patch, grep for >>>> align_object_size(), and confirm that the ones you didn't change are >>>> correct. I gave a quick look and they look right to me, but I wasn't >>>> always certain if object alignment was appropriate in all cases. >>> >>> thanks - this is why I'd changed the align_object_size to >>> align_heap_object_size before testing and changed it back, to verify >>> that I didn't miss any. >>>> >>>> I see some remaining HeapWordSize references that are suspect, like >>>> in Array.java and bytecodeTracer.cpp. I didn't go through all of >>>> them since there are about 428. Do they need closer inspection? >> ??? Any comment? > > Actually, I tried to get a lot of HeapWordSize in the metadata but the > primary focus of the change, despite the title, was to fix > align_object_size wasn't used on metadata. That said a quick look at > the instances of HeapWordSize led to some that weren't in the heap. I > didn't look in Array.java because it's in the SA which isn't > maintainable anyway, but I changed a few. There were very few that were > not referring to objects in the Java heap. bytecodeTracer was one and > there were a couple in metaspace.cpp. > > The bad news is that's more code to review. See above webrev link. > >>>> >>>> align_metadata_offset() is not used. It can be removed. >>> >>> Okay, I'll remove it. That's a good idea. >>> >>>> >>>> Shouldn't align_metadata_size() align to 64-bit like >>>> align_object_size() did, and not align to word size? Isn't that what >>>> we agreed to? Have you tested CDS? David had concerns about the >>>> InstanceKlass::size() not returning the same aligned size as >>>> Metachunk::object_alignment(). >>> >>> I ran the CDS tests but I could test some more with CDS. We don't >>> want to force the size of objects to be 64 bit (especially Symbol) >>> because Metachunk::object_alignment() is 64 bits. >> Do you mean "just" because? I wasn't necessarily suggesting that all >> metadata be 64-bit aligned. However, the ones that have their >> allocation size 64-bit aligned should be. I think David's concern is >> that he wrote code that computes how much memory is needed for the >> archive, and it uses size() for that. If the Metachunk allocator >> allocates more than size() due to the 64-bit alignment of >> Metachunk::object_alignment(), then he will underestimate the size. >> You'll need to double check with David to see if I got this right. > > I don't know what code this is but yes, it would be wrong. It also > would be wrong if there's any other alignment gaps or space in metaspace > chunks because chunks themselves have an allocation granularity. > > It could be changed back by changing the function align_metaspace_size > from 1 to WordsPerLong if you wanted to. 
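To make the alignment trade-off discussed above concrete, here is a small standalone sketch of the helpers in question. The names (align_size_up, align_object_size, align_metadata_size, WordsPerLong, MinObjAlignment) match the HotSpot ones, but the definitions and constants below are simplified stand-ins hard-coded for an assumed 32-bit platform, not the VM's actual code: align_object_size scales with -XX:ObjectAlignmentInBytes, while the metadata variant only rounds to a word, and restoring the old 64-bit sizing would simply mean using WordsPerLong as the granularity.

```cpp
#include <cstdio>
#include <cstddef>

// Stand-in constants, hard-coded for an assumed 32-bit platform purely for
// illustration.  In the real VM, wordSize/HeapWordSize come from the build
// and ObjectAlignmentInBytes is the -XX flag discussed in this thread.
const size_t wordSize               = 4;            // bytes per VM word (32-bit)
const size_t WordsPerLong           = 8 / wordSize; // words per 64-bit slot == 2
const size_t ObjectAlignmentInBytes = 8;            // default heap object alignment
const size_t MinObjAlignment        = ObjectAlignmentInBytes / wordSize;

// Round a size (in words) up to a multiple of 'alignment' (a power of two).
size_t align_size_up(size_t size, size_t alignment) {
  return (size + alignment - 1) & ~(alignment - 1);
}

// Heap-object sizing: tied to ObjectAlignmentInBytes, so it is the wrong
// tool for metadata and changes behavior when the flag is not 8.
size_t align_object_size(size_t word_size) {
  return align_size_up(word_size, MinObjAlignment);
}

// Metadata sizing after the change: plain word granularity, independent of
// the Java heap's object alignment.
size_t align_metadata_size(size_t word_size) {
  return align_size_up(word_size, 1);
}

int main() {
  size_t raw = 5;  // e.g. a small Symbol needing 5 words on a 32-bit VM
  std::printf("align_object_size(%zu)   = %zu words\n", raw, align_object_size(raw));
  std::printf("align_metadata_size(%zu) = %zu words\n", raw, align_metadata_size(raw));
  // Restoring the old 64-bit sizing would just mean using WordsPerLong here:
  std::printf("64-bit aligned size      = %zu words\n", align_size_up(raw, WordsPerLong));
  return 0;
}
```

Note that the metaspace chunk allocator still hands out storage at Metachunk::object_alignment() (64-bit) granularity, which is why a size() computed with word alignment can underestimate the space actually consumed, as discussed above.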
> > I fixed Symbol so that it didn't call align_metaspace_size if this > change is needed in the future. > > I was trying to limit the size of this change to correct > align_object_size for metadata. > > Thanks for looking at this in detail. > > Coleen > > >>> Unfortunately, with the latter, metadata is never aligned on 32 bit >>> boundaries for 32 bit platforms, but to fix this, we have to pass a >>> minimum_alignment parameter to Metaspace::allocate() because the >>> alignment is not a function of the size of the object but what is >>> required from its nonstatic data members. >> Correct. >>> I found MethodCounters, Klass (and subclasses) and ConstantPool has >>> such alignment constraints. Not sizing metadata to 64 bit sizes is a >>> start for making this change. >> I agree with that, but just wanted to point out why David may be >> concerned with this change. >>>> >>>> instanceKlass.hpp: Need to fix the following comment: >>>> >>>> 97 // sizeof(OopMapBlock) in HeapWords. >>> Fixed, Thanks! >> thanks, >> >> Chris >>> >>> Coleen >>> >>>> >>>> thanks, >>>> >>>> Chris >>>> >>>> >>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>> Summary: Use align_metadata_size, align_metadata_offset and >>>>> is_metadata_aligned for metadata rather >>>>> than align_object_size, etc. Use wordSize rather than HeapWordSize >>>>> for metadata. Use align_ptr_up >>>>> rather than align_pointer_up (all the related functions are ptr). >>>>> >>>>> Ran RBT quick tests on all platforms along with Chris's Plummers >>>>> change for 8143608, ran jtreg hotspot tests and nsk.sajdi.testlist >>>>> co-located tests because there are SA changes. Reran subset of >>>>> this after merging. >>>>> >>>>> I have a script to update copyrights on commit. It's not a big >>>>> change, just mostly boring. See the bug comments for more details >>>>> about the change. >>>>> >>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>> >>>>> thanks, >>>>> Coleen >>>>> >>>>> >>>> >>> >> > -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From mikael.gerdin at oracle.com Mon Feb 1 07:57:29 2016 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Mon, 1 Feb 2016 08:57:29 +0100 Subject: RFR(M) 8148047: Move the vtable length field to Klass In-Reply-To: <56A24F77.6030804@oracle.com> References: <56A24F77.6030804@oracle.com> Message-ID: <56AF0FE9.9060803@oracle.com> Hi, can I get another review, from a Reviewer, on this change? /Mikael On 2016-01-22 16:49, Mikael Gerdin wrote: > Hi all, > > Here's the second part of the set of changes to move most of the vtable > related code to Klass. > > This change consists of the following parts: > * Move the field _vtable_len to Klass, making its accessor nonvirtual. > -> Ensure that this does not result in any footprint regression by > moving TRACE_DEFINE_KLASS_TRACE_ID in Klass and _itable_len in > InstanceKlass to fill out alignment gaps. > -> Move vtable_length_offset to Klass. Move vtable_start_offset to > Klass to keep the code consistent. vtable_start_offset depends on the > size of InstanceKlass and must therefore be defined outside of klass.hpp. > > * Update all locations to refer to Klass::vtable_{length,start}_offset > instead of InstanceKlass. > > * Modify SA to look for _vtable_len in Klass. 
> > * Rename CompilerToVM::Data::InstanceKlass_vtable_{length,start}_offset > and > jdk.vm.ci.hotspot.HotSpotVMConfig.instanceKlassVtable{Length,Start}Offset to > properly represent where the offsets are coming from. > > > Webrev: http://cr.openjdk.java.net/~mgerdin/8148047/webrev.0/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8148047 > > Testing: JPRT on Oracle supported platforms. > > As in the previous change I've updated the PPC64 and AARCH64 ports but I > have not tested the changes. Build and test feedback from porters is > most welcome! > > Thanks > /Mikael From magnus.ihse.bursie at oracle.com Mon Feb 1 12:12:29 2016 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Mon, 1 Feb 2016 13:12:29 +0100 Subject: RFR(M) 8069540: Remove universal binaries support from hotspot build In-Reply-To: <56AB9B98.9000403@oracle.com> References: <56AB9B98.9000403@oracle.com> Message-ID: <56AF4BAD.8060206@oracle.com> On 2016-01-29 18:04, Erik Joelsson wrote: > (adding build-dev) > > Looks good enough to me. Looks good to me too. /Magnus > > /Erik > > On 2016-01-29 17:51, Gerard Ziemski wrote: >> Hi all (and especially the makefiles experts), >> >> This fix removes support for building hotspot universal libraries on >> Mac OS X and simplifies the makefiles. >> >> We are still building Mac OS X hotspot libraries as universal >> libraries, but with only one architecture (x86_64). The rest of JDK >> is built as plain single architecture libraries and there is no >> longer any need for this complexity (since we haven't supported 32bit >> platform on Mac OS X for a while now) >> >> Bug link: https://bugs.openjdk.java.net/browse/JDK-8069540 >> Webrev: http://cr.openjdk.java.net/~gziemski/8069540_jdk_rev2/ >> Webrev: http://cr.openjdk.java.net/~gziemski/8069540_hotspot_rev2/ >> >> Testing: JPRT + RBT on all platforms. >> >> >> cheers > From kim.barrett at oracle.com Mon Feb 1 12:25:44 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 1 Feb 2016 07:25:44 -0500 Subject: RFR(M) 8148047: Move the vtable length field to Klass In-Reply-To: <56AF0FE9.9060803@oracle.com> References: <56A24F77.6030804@oracle.com> <56AF0FE9.9060803@oracle.com> Message-ID: > On Feb 1, 2016, at 2:57 AM, Mikael Gerdin wrote: > > Hi, can I get another review, from a Reviewer, on this change? I?ll take a look? From magnus.ihse.bursie at oracle.com Mon Feb 1 12:55:52 2016 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Mon, 1 Feb 2016 13:55:52 +0100 Subject: build_vm_def.sh broken on macosx? In-Reply-To: <56A0B4F0.1040103@oracle.com> References: <56A0B4F0.1040103@oracle.com> Message-ID: <56AF55D8.3030005@oracle.com> cc:ing serviceability-dev, since I believe the SA agent is the primary consumer of dynamic symbols in the jvm library. On 2016-01-21 11:37, Magnus Ihse Bursie wrote: > Hi, > > It seems that build_vm_def.sh is broken on macosx. The script lists > all from *.o using nm, and filters them using this awk expression: > '{ if ($3 ~ /^_ZTV/ || $3 ~ /^gHotSpotVM/) print "\t" $3 }' > > However, the typical output from nm on macosx looks like this: > __ZTV10methodOper > __ZTV11MachNopNode > > That is, only a single column, and two leading underscore. The awk > expression will fail to match anything, and an empty vm.def will be > produced. > > If I modify the script to: > '{ if ($1 ~ /^__ZTV/ || $1 ~ /^_gHotSpotVM/) print "\t" $1 }' > then it will match and print these symbols. 
> > The build_vm_def.sh script has not been modified since 2013, so if > this ever worked, then most likely the nm output has changed in Xcode > at some point. > > My main concern here is the new hotspot build. Does this mean that the > vm.def fills no purpose on the macosx build, and that the whole > process of running nm on all object files can be skipped? Or is this a > bug that has not been discovered? If so, it should be fixed in the old > build. Is it really the case that no one cares about the dynamic symbols in libjvm.dylib? If so, we can just skip this code altogether, rather than running a lots of command that seem to have an effect but have none. /Magnus From coleen.phillimore at oracle.com Mon Feb 1 13:13:11 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 1 Feb 2016 08:13:11 -0500 Subject: RFR(M) 8148047: Move the vtable length field to Klass In-Reply-To: <56AF0FE9.9060803@oracle.com> References: <56A24F77.6030804@oracle.com> <56AF0FE9.9060803@oracle.com> Message-ID: <56AF59E7.2080009@oracle.com> Mikael, This looks good. It looks like vtable length fills an alignment gap in the class. Did you run the colocated nsk.sajdi.testlist on this? It's quarantined but they all pass on linux x64. Thanks, Coleen On 2/1/16 2:57 AM, Mikael Gerdin wrote: > Hi, can I get another review, from a Reviewer, on this change? > > /Mikael > > On 2016-01-22 16:49, Mikael Gerdin wrote: >> Hi all, >> >> Here's the second part of the set of changes to move most of the vtable >> related code to Klass. >> >> This change consists of the following parts: >> * Move the field _vtable_len to Klass, making its accessor nonvirtual. >> -> Ensure that this does not result in any footprint regression by >> moving TRACE_DEFINE_KLASS_TRACE_ID in Klass and _itable_len in >> InstanceKlass to fill out alignment gaps. >> -> Move vtable_length_offset to Klass. Move vtable_start_offset to >> Klass to keep the code consistent. vtable_start_offset depends on the >> size of InstanceKlass and must therefore be defined outside of >> klass.hpp. >> >> * Update all locations to refer to Klass::vtable_{length,start}_offset >> instead of InstanceKlass. >> >> * Modify SA to look for _vtable_len in Klass. >> >> * Rename CompilerToVM::Data::InstanceKlass_vtable_{length,start}_offset >> and >> jdk.vm.ci.hotspot.HotSpotVMConfig.instanceKlassVtable{Length,Start}Offset >> to >> properly represent where the offsets are coming from. >> >> >> Webrev: http://cr.openjdk.java.net/~mgerdin/8148047/webrev.0/ >> Bug: https://bugs.openjdk.java.net/browse/JDK-8148047 >> >> Testing: JPRT on Oracle supported platforms. >> >> As in the previous change I've updated the PPC64 and AARCH64 ports but I >> have not tested the changes. Build and test feedback from porters is >> most welcome! >> >> Thanks >> /Mikael > From coleen.phillimore at oracle.com Mon Feb 1 13:22:16 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 1 Feb 2016 08:22:16 -0500 Subject: RFR(M) 8148481: Devirtualize Klass::vtable In-Reply-To: <56AA3E25.60409@oracle.com> References: <56AA3E25.60409@oracle.com> Message-ID: <56AF5C08.10104@oracle.com> Mikael, This looks good also. http://cr.openjdk.java.net/~mgerdin/8148481/webrev.0/src/share/vm/oops/instanceKlass.cpp.udiff.html Why didn't you move print_vtable to klass.cpp too rather than casting to intptr_t? That looks odd. 
Thanks, Coleen On 1/28/16 11:13 AM, Mikael Gerdin wrote: > Hi all, > > Due to my recent changes in this area, Klass is now responsible for > maintaining the offsets and length of the embedded vtable and > therefore it makes sense to move all code related to the Java vtables > to Klass. > > This also allows us to remove a few unsafe casts where an ArrayKlass > was cast to an InstanceKlass just to get at the methods_at_vtable(). > These casts were removed from reflection.cpp, jni.cpp, > jvmciCompilerToVM.cpp and linkResolver.cpp, in cpCache.cpp there was > an alternate approach to the same problem which I've rewritten to use > the new way of accessing the vtable directly through Klass. > > Some notes: > * I took the liberty of changing the return type of start_of_vtable() > to vtableEntry* since that is in fact what it is. > * method_at_vtable is no longer inline, but I don't think that should > be a performance problem since it's usually only being called from > link resolving, cpCache or JNI calls, all of which are not > particularly hot paths. > > > Bug link: https://bugs.openjdk.java.net/browse/JDK-8148481 > Webrev: http://cr.openjdk.java.net/~mgerdin/8148481/webrev.0 > > Testing: JPRT + RBT on Oracle platforms. > > /Mikael From kim.barrett at oracle.com Mon Feb 1 13:41:10 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 1 Feb 2016 08:41:10 -0500 Subject: RFR(M) 8148047: Move the vtable length field to Klass In-Reply-To: <56A24F77.6030804@oracle.com> References: <56A24F77.6030804@oracle.com> Message-ID: <704415CD-EF70-4186-A32A-3F32619A8369@oracle.com> > On Jan 22, 2016, at 10:49 AM, Mikael Gerdin wrote: > > Hi all, > > Here's the second part of the set of changes to move most of the vtable related code to Klass. > > This change consists of the following parts: > * Move the field _vtable_len to Klass, making its accessor nonvirtual. > -> Ensure that this does not result in any footprint regression by moving TRACE_DEFINE_KLASS_TRACE_ID in Klass and _itable_len in InstanceKlass to fill out alignment gaps. > -> Move vtable_length_offset to Klass. Move vtable_start_offset to Klass to keep the code consistent. vtable_start_offset depends on the size of InstanceKlass and must therefore be defined outside of klass.hpp. > > * Update all locations to refer to Klass::vtable_{length,start}_offset instead of InstanceKlass. > > * Modify SA to look for _vtable_len in Klass. > > * Rename CompilerToVM::Data::InstanceKlass_vtable_{length,start}_offset > and jdk.vm.ci.hotspot.HotSpotVMConfig.instanceKlassVtable{Length,Start}Offset to properly represent where the offsets are coming from. > > > Webrev: http://cr.openjdk.java.net/~mgerdin/8148047/webrev.0/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8148047 > > Testing: JPRT on Oracle supported platforms. > > As in the previous change I've updated the PPC64 and AARCH64 ports but I have not tested the changes. Build and test feedback from porters is most welcome! Looks good. A couple of surprises noted. ------------------------------------------------------------------------------ src/share/vm/oops/arrayKlass.cpp 85 ArrayKlass::ArrayKlass(Symbol* name) : src/share/vm/oops/instanceKlass.cpp 211 InstanceKlass::InstanceKlass(const ClassFileParser& parser, unsigned kind) : [Pre-existing] I was surprised by the indentation of the bodies of these constructors. I would have placed the opening brace on its own line (to separate the init-list from the body) and indented the body normally. 
I'm guessing there are more like this, so perhaps I should get over my surprise. ------------------------------------------------------------------------------ src/share/vm/oops/arrayKlass.cpp 91 set_vtable_length(Universe::base_vtable_size()); src/share/vm/oops/instanceKlass.cpp 216 set_vtable_length(parser.vtable_size()); [Sort of pre-existing] I was surprised that the Klass constructor left the new _vtable_len field uninitialized, with assignment done in subclasses. I was expecting the Klass constructor to be called with arguments that would be used to initialize various fields, with no need for the setter functions. But what's in the webrev seems to be of the general style used in this vicinity. Perhaps a followup cleanup is called for? ------------------------------------------------------------------------------ I agree with Chris that something has gone awry with 662 ByteSize Klass::vtable_start_offset() { 663 return in_ByteSize(InstanceKlass::header_size() * wordSize); 664 } But I don't have a suggestion for improvement right now. ------------------------------------------------------------------------------ From mikael.gerdin at oracle.com Mon Feb 1 14:02:27 2016 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Mon, 1 Feb 2016 15:02:27 +0100 Subject: RFR(M) 8148047: Move the vtable length field to Klass In-Reply-To: <56AF59E7.2080009@oracle.com> References: <56A24F77.6030804@oracle.com> <56AF0FE9.9060803@oracle.com> <56AF59E7.2080009@oracle.com> Message-ID: <56AF6573.8030005@oracle.com> Hi Coleen On 2016-02-01 14:13, Coleen Phillimore wrote: > > Mikael, > > This looks good. It looks like vtable length fills an alignment gap in > the class. > > Did you run the colocated nsk.sajdi.testlist on this? It's quarantined > but they all pass on linux x64. I've run through some SA testing but I can't recall exactly which ones. I'll run through that one as well. Thanks /Mikael > > Thanks, > Coleen > > On 2/1/16 2:57 AM, Mikael Gerdin wrote: >> Hi, can I get another review, from a Reviewer, on this change? >> >> /Mikael >> >> On 2016-01-22 16:49, Mikael Gerdin wrote: >>> Hi all, >>> >>> Here's the second part of the set of changes to move most of the vtable >>> related code to Klass. >>> >>> This change consists of the following parts: >>> * Move the field _vtable_len to Klass, making its accessor nonvirtual. >>> -> Ensure that this does not result in any footprint regression by >>> moving TRACE_DEFINE_KLASS_TRACE_ID in Klass and _itable_len in >>> InstanceKlass to fill out alignment gaps. >>> -> Move vtable_length_offset to Klass. Move vtable_start_offset to >>> Klass to keep the code consistent. vtable_start_offset depends on the >>> size of InstanceKlass and must therefore be defined outside of >>> klass.hpp. >>> >>> * Update all locations to refer to Klass::vtable_{length,start}_offset >>> instead of InstanceKlass. >>> >>> * Modify SA to look for _vtable_len in Klass. >>> >>> * Rename CompilerToVM::Data::InstanceKlass_vtable_{length,start}_offset >>> and >>> jdk.vm.ci.hotspot.HotSpotVMConfig.instanceKlassVtable{Length,Start}Offset >>> to >>> properly represent where the offsets are coming from. >>> >>> >>> Webrev: http://cr.openjdk.java.net/~mgerdin/8148047/webrev.0/ >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8148047 >>> >>> Testing: JPRT on Oracle supported platforms. >>> >>> As in the previous change I've updated the PPC64 and AARCH64 ports but I >>> have not tested the changes. Build and test feedback from porters is >>> most welcome! 
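As an aside on the footprint point in the change description above: the sketch below is not the real Klass layout (the field names are invented stand-ins), it only illustrates why moving a 4-byte field such as _vtable_len into an existing alignment gap keeps the class size unchanged on LP64.

```cpp
#include <cstdio>

// Minimal stand-in layouts, not the real Klass: the point is only that a
// 4-byte field placed next to another 4-byte field lands in padding the
// compiler would have inserted anyway, so sizeof() does not grow.

struct KlassWithoutVtableLen {
  void* _vptr_standin;    // stands in for the C++ vtable pointer, 8 bytes on LP64
  void* _name;            // another 8-byte field
  int   _modifier_flags;  // 4 bytes, followed by 4 bytes of tail padding
};

struct KlassWithVtableLen {
  void* _vptr_standin;
  void* _name;
  int   _modifier_flags;
  int   _vtable_len;      // fills the former tail padding: same total size
};

int main() {
  std::printf("without _vtable_len: %zu bytes\n", sizeof(KlassWithoutVtableLen));
  std::printf("with    _vtable_len: %zu bytes\n", sizeof(KlassWithVtableLen));
  return 0;
}
```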
>>> >>> Thanks >>> /Mikael >> > From mikael.gerdin at oracle.com Mon Feb 1 14:32:25 2016 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Mon, 1 Feb 2016 15:32:25 +0100 Subject: RFR(M) 8148481: Devirtualize Klass::vtable In-Reply-To: <56AF5C08.10104@oracle.com> References: <56AA3E25.60409@oracle.com> <56AF5C08.10104@oracle.com> Message-ID: <56AF6C79.4070402@oracle.com> Hi Coleen, On 2016-02-01 14:22, Coleen Phillimore wrote: > > Mikael, > This looks good also. > > http://cr.openjdk.java.net/~mgerdin/8148481/webrev.0/src/share/vm/oops/instanceKlass.cpp.udiff.html > > > Why didn't you move print_vtable to klass.cpp too rather than casting to > intptr_t? That looks odd. Well, what I actually did was overload print_vtable with a variant that casts the parameter to intptr_t*. The reason it can't take a vtableEntry* is that print_vtable is called for printing itables as well, and they have a more complex layout. I'm not entirely happy with the solution but I'd rather not have start_of_vtable return an intptr_t* just to avoid casting when dumping some debug output, and doing a cast in the print line felt ugly for some reason. print_vtable should probably be named maybe_dump_metadata since it actually dumps memory contents and if it's metadata it tries to call the virtual print function on the metadata. /Mikael > > Thanks, > Coleen > > On 1/28/16 11:13 AM, Mikael Gerdin wrote: >> Hi all, >> >> Due to my recent changes in this area, Klass is now responsible for >> maintaining the offsets and length of the embedded vtable and >> therefore it makes sense to move all code related to the Java vtables >> to Klass. >> >> This also allows us to remove a few unsafe casts where an ArrayKlass >> was cast to an InstanceKlass just to get at the methods_at_vtable(). >> These casts were removed from reflection.cpp, jni.cpp, >> jvmciCompilerToVM.cpp and linkResolver.cpp, in cpCache.cpp there was >> an alternate approach to the same problem which I've rewritten to use >> the new way of accessing the vtable directly through Klass. >> >> Some notes: >> * I took the liberty of changing the return type of start_of_vtable() >> to vtableEntry* since that is in fact what it is. >> * method_at_vtable is no longer inline, but I don't think that should >> be a performance problem since it's usually only being called from >> link resolving, cpCache or JNI calls, all of which are not >> particularly hot paths. >> >> >> Bug link: https://bugs.openjdk.java.net/browse/JDK-8148481 >> Webrev: http://cr.openjdk.java.net/~mgerdin/8148481/webrev.0 >> >> Testing: JPRT + RBT on Oracle platforms. >> >> /Mikael > From mikael.gerdin at oracle.com Mon Feb 1 14:32:36 2016 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Mon, 1 Feb 2016 15:32:36 +0100 Subject: RFR(M) 8148047: Move the vtable length field to Klass In-Reply-To: <704415CD-EF70-4186-A32A-3F32619A8369@oracle.com> References: <56A24F77.6030804@oracle.com> <704415CD-EF70-4186-A32A-3F32619A8369@oracle.com> Message-ID: <56AF6C84.4090808@oracle.com> Hi Kim, On 2016-02-01 14:41, Kim Barrett wrote: >> On Jan 22, 2016, at 10:49 AM, Mikael Gerdin wrote: >> >> Hi all, >> >> Here's the second part of the set of changes to move most of the vtable related code to Klass. >> >> This change consists of the following parts: >> * Move the field _vtable_len to Klass, making its accessor nonvirtual. >> -> Ensure that this does not result in any footprint regression by moving TRACE_DEFINE_KLASS_TRACE_ID in Klass and _itable_len in InstanceKlass to fill out alignment gaps. 
>> -> Move vtable_length_offset to Klass. Move vtable_start_offset to Klass to keep the code consistent. vtable_start_offset depends on the size of InstanceKlass and must therefore be defined outside of klass.hpp. >> >> * Update all locations to refer to Klass::vtable_{length,start}_offset instead of InstanceKlass. >> >> * Modify SA to look for _vtable_len in Klass. >> >> * Rename CompilerToVM::Data::InstanceKlass_vtable_{length,start}_offset >> and jdk.vm.ci.hotspot.HotSpotVMConfig.instanceKlassVtable{Length,Start}Offset to properly represent where the offsets are coming from. >> >> >> Webrev: http://cr.openjdk.java.net/~mgerdin/8148047/webrev.0/ >> Bug: https://bugs.openjdk.java.net/browse/JDK-8148047 >> >> Testing: JPRT on Oracle supported platforms. >> >> As in the previous change I've updated the PPC64 and AARCH64 ports but I have not tested the changes. Build and test feedback from porters is most welcome! > > Looks good. > > A couple of surprises noted. > > ------------------------------------------------------------------------------ > src/share/vm/oops/arrayKlass.cpp > 85 ArrayKlass::ArrayKlass(Symbol* name) : > src/share/vm/oops/instanceKlass.cpp > 211 InstanceKlass::InstanceKlass(const ClassFileParser& parser, unsigned kind) : > > [Pre-existing] > > I was surprised by the indentation of the bodies of these > constructors. I would have placed the opening brace on its own line > (to separate the init-list from the body) and indented the body > normally. I'm guessing there are more like this, so perhaps I should > get over my surprise. There are indeed some weird things in the deepest, darkest corners :) > > ------------------------------------------------------------------------------ > src/share/vm/oops/arrayKlass.cpp > 91 set_vtable_length(Universe::base_vtable_size()); > src/share/vm/oops/instanceKlass.cpp > 216 set_vtable_length(parser.vtable_size()); > > [Sort of pre-existing] > I was surprised that the Klass constructor left the new _vtable_len > field uninitialized, with assignment done in subclasses. I was > expecting the Klass constructor to be called with arguments that would > be used to initialize various fields, with no need for the setter > functions. But what's in the webrev seems to be of the general style > used in this vicinity. Perhaps a followup cleanup is called for? Yeah, the fact that Klass only has a no-args constructor surprised me as well. I can file a followup cleanup but I don't really have the time to do further cleanups in this area at this time. > > ------------------------------------------------------------------------------ > > I agree with Chris that something has gone awry with > > 662 ByteSize Klass::vtable_start_offset() { > 663 return in_ByteSize(InstanceKlass::header_size() * wordSize); > 664 } > > But I don't have a suggestion for improvement right now. Ok. Thanks for the review, Kim. /Mikael > > ------------------------------------------------------------------------------ > From mikael.gerdin at oracle.com Mon Feb 1 14:44:45 2016 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Mon, 1 Feb 2016 15:44:45 +0100 Subject: RFR(M) 8148047: Move the vtable length field to Klass In-Reply-To: <56AF6573.8030005@oracle.com> References: <56A24F77.6030804@oracle.com> <56AF0FE9.9060803@oracle.com> <56AF59E7.2080009@oracle.com> <56AF6573.8030005@oracle.com> Message-ID: <56AF6F5D.4080100@oracle.com> On 2016-02-01 15:02, Mikael Gerdin wrote: > Hi Coleen > > On 2016-02-01 14:13, Coleen Phillimore wrote: >> >> Mikael, >> >> This looks good. 
It looks like vtable length fills an alignment gap in >> the class. >> >> Did you run the colocated nsk.sajdi.testlist on this? It's quarantined >> but they all pass on linux x64. > > I've run through some SA testing but I can't recall exactly which ones. > I'll run through that one as well. For the record, they all pass on my local workstation. /Mikael > > Thanks > /Mikael > >> >> Thanks, >> Coleen >> >> On 2/1/16 2:57 AM, Mikael Gerdin wrote: >>> Hi, can I get another review, from a Reviewer, on this change? >>> >>> /Mikael >>> >>> On 2016-01-22 16:49, Mikael Gerdin wrote: >>>> Hi all, >>>> >>>> Here's the second part of the set of changes to move most of the vtable >>>> related code to Klass. >>>> >>>> This change consists of the following parts: >>>> * Move the field _vtable_len to Klass, making its accessor nonvirtual. >>>> -> Ensure that this does not result in any footprint regression by >>>> moving TRACE_DEFINE_KLASS_TRACE_ID in Klass and _itable_len in >>>> InstanceKlass to fill out alignment gaps. >>>> -> Move vtable_length_offset to Klass. Move vtable_start_offset to >>>> Klass to keep the code consistent. vtable_start_offset depends on the >>>> size of InstanceKlass and must therefore be defined outside of >>>> klass.hpp. >>>> >>>> * Update all locations to refer to Klass::vtable_{length,start}_offset >>>> instead of InstanceKlass. >>>> >>>> * Modify SA to look for _vtable_len in Klass. >>>> >>>> * Rename CompilerToVM::Data::InstanceKlass_vtable_{length,start}_offset >>>> and >>>> jdk.vm.ci.hotspot.HotSpotVMConfig.instanceKlassVtable{Length,Start}Offset >>>> >>>> to >>>> properly represent where the offsets are coming from. >>>> >>>> >>>> Webrev: http://cr.openjdk.java.net/~mgerdin/8148047/webrev.0/ >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8148047 >>>> >>>> Testing: JPRT on Oracle supported platforms. >>>> >>>> As in the previous change I've updated the PPC64 and AARCH64 ports >>>> but I >>>> have not tested the changes. Build and test feedback from porters is >>>> most welcome! >>>> >>>> Thanks >>>> /Mikael >>> >> > From coleen.phillimore at oracle.com Mon Feb 1 15:41:19 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 1 Feb 2016 10:41:19 -0500 Subject: RFR(M) 8148481: Devirtualize Klass::vtable In-Reply-To: <56AF6C79.4070402@oracle.com> References: <56AA3E25.60409@oracle.com> <56AF5C08.10104@oracle.com> <56AF6C79.4070402@oracle.com> Message-ID: <56AF7C9F.3050206@oracle.com> On 2/1/16 9:32 AM, Mikael Gerdin wrote: > Hi Coleen, > > On 2016-02-01 14:22, Coleen Phillimore wrote: >> >> Mikael, >> This looks good also. >> >> http://cr.openjdk.java.net/~mgerdin/8148481/webrev.0/src/share/vm/oops/instanceKlass.cpp.udiff.html >> >> >> >> Why didn't you move print_vtable to klass.cpp too rather than casting to >> intptr_t? That looks odd. > > Well, what I actually did was overload print_vtable with a variant > that casts the parameter to intptr_t*. > The reason it can't take a vtableEntry* is that print_vtable is called > for printing itables as well, and they have a more complex layout. Ah, I see. > > I'm not entirely happy with the solution but I'd rather not have > start_of_vtable return an intptr_t* just to avoid casting when dumping > some debug output, and doing a cast in the print line felt ugly for > some reason. Agree. > > print_vtable should probably be named maybe_dump_metadata since it > actually dumps memory contents and if it's metadata it tries to call > the virtual print function on the metadata. > I don't know but you should leave it. 
I like when printing functions start with print. Coleen > /Mikael > >> >> Thanks, >> Coleen >> >> On 1/28/16 11:13 AM, Mikael Gerdin wrote: >>> Hi all, >>> >>> Due to my recent changes in this area, Klass is now responsible for >>> maintaining the offsets and length of the embedded vtable and >>> therefore it makes sense to move all code related to the Java vtables >>> to Klass. >>> >>> This also allows us to remove a few unsafe casts where an ArrayKlass >>> was cast to an InstanceKlass just to get at the methods_at_vtable(). >>> These casts were removed from reflection.cpp, jni.cpp, >>> jvmciCompilerToVM.cpp and linkResolver.cpp, in cpCache.cpp there was >>> an alternate approach to the same problem which I've rewritten to use >>> the new way of accessing the vtable directly through Klass. >>> >>> Some notes: >>> * I took the liberty of changing the return type of start_of_vtable() >>> to vtableEntry* since that is in fact what it is. >>> * method_at_vtable is no longer inline, but I don't think that should >>> be a performance problem since it's usually only being called from >>> link resolving, cpCache or JNI calls, all of which are not >>> particularly hot paths. >>> >>> >>> Bug link: https://bugs.openjdk.java.net/browse/JDK-8148481 >>> Webrev: http://cr.openjdk.java.net/~mgerdin/8148481/webrev.0 >>> >>> Testing: JPRT + RBT on Oracle platforms. >>> >>> /Mikael >> > From mikael.gerdin at oracle.com Mon Feb 1 15:44:53 2016 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Mon, 1 Feb 2016 16:44:53 +0100 Subject: RFR(M) 8148481: Devirtualize Klass::vtable In-Reply-To: <56AF7C9F.3050206@oracle.com> References: <56AA3E25.60409@oracle.com> <56AF5C08.10104@oracle.com> <56AF6C79.4070402@oracle.com> <56AF7C9F.3050206@oracle.com> Message-ID: <56AF7D75.3020800@oracle.com> Hi Coleen, On 2016-02-01 16:41, Coleen Phillimore wrote: > > > On 2/1/16 9:32 AM, Mikael Gerdin wrote: >> Hi Coleen, >> >> On 2016-02-01 14:22, Coleen Phillimore wrote: >>> >>> Mikael, >>> This looks good also. >>> >>> http://cr.openjdk.java.net/~mgerdin/8148481/webrev.0/src/share/vm/oops/instanceKlass.cpp.udiff.html >>> >>> >>> >>> Why didn't you move print_vtable to klass.cpp too rather than casting to >>> intptr_t? That looks odd. >> >> Well, what I actually did was overload print_vtable with a variant >> that casts the parameter to intptr_t*. >> The reason it can't take a vtableEntry* is that print_vtable is called >> for printing itables as well, and they have a more complex layout. > > Ah, I see. >> >> I'm not entirely happy with the solution but I'd rather not have >> start_of_vtable return an intptr_t* just to avoid casting when dumping >> some debug output, and doing a cast in the print line felt ugly for >> some reason. > > Agree. >> >> print_vtable should probably be named maybe_dump_metadata since it >> actually dumps memory contents and if it's metadata it tries to call >> the virtual print function on the metadata. >> > I don't know but you should leave it. I like when printing functions > start with print. Ok, in this case I agree that the name should stay. Thanks for the review! /Mikael > > Coleen > >> /Mikael >> >>> >>> Thanks, >>> Coleen >>> >>> On 1/28/16 11:13 AM, Mikael Gerdin wrote: >>>> Hi all, >>>> >>>> Due to my recent changes in this area, Klass is now responsible for >>>> maintaining the offsets and length of the embedded vtable and >>>> therefore it makes sense to move all code related to the Java vtables >>>> to Klass. 
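For readers skimming the thread, a rough self-contained sketch of the shape of such a devirtualized vtable accessor follows. It is not the actual webrev code: the class contents here are invented stand-ins and the real vtableEntry/klassVtable types are much richer; the point is only that once the vtable length and start offset live in Klass itself, a nonvirtual method_at_vtable() can serve InstanceKlass and ArrayKlass alike without casts.

```cpp
#include <cstdio>
#include <new>

class Method;                      // opaque in this sketch

struct vtableEntry {
  Method* _method;                 // one slot of the embedded Java vtable
};

class Klass {
  const char* _name;               // representative 8-byte field (stand-in)
  int         _vtable_len;         // moved here; its accessor is no longer virtual
 public:
  Klass(const char* name, int vtable_len) : _name(name), _vtable_len(vtable_len) {}

  int vtable_length() const { return _vtable_len; }

  // The embedded vtable begins right after the fixed part of the klass;
  // the real code computes this offset from the subclass header size.
  vtableEntry* start_of_vtable() {
    return reinterpret_cast<vtableEntry*>(this + 1);
  }

  // Nonvirtual accessor usable for InstanceKlass and ArrayKlass alike,
  // so callers no longer need to cast an ArrayKlass to InstanceKlass.
  Method* method_at_vtable(int index) {
    return start_of_vtable()[index]._method;
  }
};

int main() {
  // Reserve space for one Klass followed by two embedded vtable slots.
  alignas(Klass) unsigned char storage[sizeof(Klass) + 2 * sizeof(vtableEntry)];
  Klass* k = new (storage) Klass("Demo", 2);
  k->start_of_vtable()[0]._method = nullptr;
  k->start_of_vtable()[1]._method = nullptr;
  std::printf("vtable_length = %d\n", k->vtable_length());
  return 0;
}
```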
>>>> >>>> This also allows us to remove a few unsafe casts where an ArrayKlass >>>> was cast to an InstanceKlass just to get at the methods_at_vtable(). >>>> These casts were removed from reflection.cpp, jni.cpp, >>>> jvmciCompilerToVM.cpp and linkResolver.cpp, in cpCache.cpp there was >>>> an alternate approach to the same problem which I've rewritten to use >>>> the new way of accessing the vtable directly through Klass. >>>> >>>> Some notes: >>>> * I took the liberty of changing the return type of start_of_vtable() >>>> to vtableEntry* since that is in fact what it is. >>>> * method_at_vtable is no longer inline, but I don't think that should >>>> be a performance problem since it's usually only being called from >>>> link resolving, cpCache or JNI calls, all of which are not >>>> particularly hot paths. >>>> >>>> >>>> Bug link: https://bugs.openjdk.java.net/browse/JDK-8148481 >>>> Webrev: http://cr.openjdk.java.net/~mgerdin/8148481/webrev.0 >>>> >>>> Testing: JPRT + RBT on Oracle platforms. >>>> >>>> /Mikael >>> >> > From coleen.phillimore at oracle.com Mon Feb 1 15:49:13 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 1 Feb 2016 10:49:13 -0500 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AF0B2F.6010207@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56AF0B2F.6010207@oracle.com> Message-ID: <56AF7E79.60600@oracle.com> Thank you, Dmitry! I checked it in before seeing this so couldn't list you as a reviewer. Coleen On 2/1/16 2:37 AM, Dmitry Samersoff wrote: > Coleen, > > Didn't look over all changes but SA changes looks good to me. > > -Dmitry > > On 2016-01-29 06:31, Coleen Phillimore wrote: >> Hi Chris, >> >> I made a few extra changes because of your question that I didn't answer >> below, a few HeapWordSize became wordSize. I apologize that I don't >> know how to create incremental webrevs. See discussion below. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ >> >> On 1/28/16 4:52 PM, Chris Plummer wrote: >>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>> Thank you, Chris for looking at this change. >>>> >>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>> Hi Coleen, >>>>> >>>>> Can you do some testing with ObjectAlignmentInBytes set to something >>>>> other than 8? >>>> Okay, I can run one of the testsets with that. I verified it in the >>>> debugger mostly. >>>>> Someone from GC team should apply your patch, grep for >>>>> align_object_size(), and confirm that the ones you didn't change are >>>>> correct. I gave a quick look and they look right to me, but I wasn't >>>>> always certain if object alignment was appropriate in all cases. >>>> thanks - this is why I'd changed the align_object_size to >>>> align_heap_object_size before testing and changed it back, to verify >>>> that I didn't miss any. >>>>> I see some remaining HeapWordSize references that are suspect, like >>>>> in Array.java and bytecodeTracer.cpp. I didn't go through all of >>>>> them since there are about 428. Do they need closer inspection? >>> ??? Any comment? >> Actually, I tried to get a lot of HeapWordSize in the metadata but the >> primary focus of the change, despite the title, was to fix >> align_object_size wasn't used on metadata. That said a quick look at >> the instances of HeapWordSize led to some that weren't in the heap. 
I >> didn't look in Array.java because it's in the SA which isn't >> maintainable anyway, but I changed a few. There were very few that were >> not referring to objects in the Java heap. bytecodeTracer was one and >> there were a couple in metaspace.cpp. >> >> The bad news is that's more code to review. See above webrev link. >> >>>>> align_metadata_offset() is not used. It can be removed. >>>> Okay, I'll remove it. That's a good idea. >>>> >>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>> align_object_size() did, and not align to word size? Isn't that what >>>>> we agreed to? Have you tested CDS? David had concerns about the >>>>> InstanceKlass::size() not returning the same aligned size as >>>>> Metachunk::object_alignment(). >>>> I ran the CDS tests but I could test some more with CDS. We don't >>>> want to force the size of objects to be 64 bit (especially Symbol) >>>> because Metachunk::object_alignment() is 64 bits. >>> Do you mean "just" because? I wasn't necessarily suggesting that all >>> metadata be 64-bit aligned. However, the ones that have their >>> allocation size 64-bit aligned should be. I think David's concern is >>> that he wrote code that computes how much memory is needed for the >>> archive, and it uses size() for that. If the Metachunk allocator >>> allocates more than size() due to the 64-bit alignment of >>> Metachunk::object_alignment(), then he will underestimate the size. >>> You'll need to double check with David to see if I got this right. >> I don't know what code this is but yes, it would be wrong. It also >> would be wrong if there's any other alignment gaps or space in metaspace >> chunks because chunks themselves have an allocation granularity. >> >> It could be changed back by changing the function align_metaspace_size >> from 1 to WordsPerLong if you wanted to. >> >> I fixed Symbol so that it didn't call align_metaspace_size if this >> change is needed in the future. >> >> I was trying to limit the size of this change to correct >> align_object_size for metadata. >> >> Thanks for looking at this in detail. >> >> Coleen >> >> >>>> Unfortunately, with the latter, metadata is never aligned on 32 bit >>>> boundaries for 32 bit platforms, but to fix this, we have to pass a >>>> minimum_alignment parameter to Metaspace::allocate() because the >>>> alignment is not a function of the size of the object but what is >>>> required from its nonstatic data members. >>> Correct. >>>> I found MethodCounters, Klass (and subclasses) and ConstantPool has >>>> such alignment constraints. Not sizing metadata to 64 bit sizes is a >>>> start for making this change. >>> I agree with that, but just wanted to point out why David may be >>> concerned with this change. >>>>> instanceKlass.hpp: Need to fix the following comment: >>>>> >>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>> Fixed, Thanks! >>> thanks, >>> >>> Chris >>>> Coleen >>>> >>>>> thanks, >>>>> >>>>> Chris >>>>> >>>>> >>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>> Summary: Use align_metadata_size, align_metadata_offset and >>>>>> is_metadata_aligned for metadata rather >>>>>> than align_object_size, etc. Use wordSize rather than HeapWordSize >>>>>> for metadata. Use align_ptr_up >>>>>> rather than align_pointer_up (all the related functions are ptr). >>>>>> >>>>>> Ran RBT quick tests on all platforms along with Chris's Plummers >>>>>> change for 8143608, ran jtreg hotspot tests and nsk.sajdi.testlist >>>>>> co-located tests because there are SA changes. 
Reran subset of >>>>>> this after merging. >>>>>> >>>>>> I have a script to update copyrights on commit. It's not a big >>>>>> change, just mostly boring. See the bug comments for more details >>>>>> about the change. >>>>>> >>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>> >>>>>> thanks, >>>>>> Coleen >>>>>> >>>>>> > From gerard.ziemski at oracle.com Mon Feb 1 16:07:10 2016 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Mon, 1 Feb 2016 10:07:10 -0600 Subject: RFR(M) 8069540: Remove universal binaries support from hotspot build In-Reply-To: <56AB9B98.9000403@oracle.com> References: <56AB9B98.9000403@oracle.com> Message-ID: <330BF8D5-EB11-4187-ADEC-90F8C17AB66F@oracle.com> Thank you for the review! > On Jan 29, 2016, at 11:04 AM, Erik Joelsson wrote: > > (adding build-dev) > > Looks good enough to me. > > /Erik > > On 2016-01-29 17:51, Gerard Ziemski wrote: >> Hi all (and especially the makefiles experts), >> >> This fix removes support for building hotspot universal libraries on Mac OS X and simplifies the makefiles. >> >> We are still building Mac OS X hotspot libraries as universal libraries, but with only one architecture (x86_64). The rest of JDK is built as plain single architecture libraries and there is no longer any need for this complexity (since we haven't supported 32bit platform on Mac OS X for a while now) >> >> Bug link: https://bugs.openjdk.java.net/browse/JDK-8069540 >> Webrev: http://cr.openjdk.java.net/~gziemski/8069540_jdk_rev2/ >> Webrev: http://cr.openjdk.java.net/~gziemski/8069540_hotspot_rev2/ >> >> Testing: JPRT + RBT on all platforms. >> >> >> cheers > From gerard.ziemski at oracle.com Mon Feb 1 16:07:17 2016 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Mon, 1 Feb 2016 10:07:17 -0600 Subject: RFR(M) 8069540: Remove universal binaries support from hotspot build In-Reply-To: <56AF4BAD.8060206@oracle.com> References: <56AB9B98.9000403@oracle.com> <56AF4BAD.8060206@oracle.com> Message-ID: Thank you for the review! > On Feb 1, 2016, at 6:12 AM, Magnus Ihse Bursie wrote: > > On 2016-01-29 18:04, Erik Joelsson wrote: >> (adding build-dev) >> >> Looks good enough to me. > > Looks good to me too. > > /Magnus > >> >> /Erik >> >> On 2016-01-29 17:51, Gerard Ziemski wrote: >>> Hi all (and especially the makefiles experts), >>> >>> This fix removes support for building hotspot universal libraries on Mac OS X and simplifies the makefiles. >>> >>> We are still building Mac OS X hotspot libraries as universal libraries, but with only one architecture (x86_64). The rest of JDK is built as plain single architecture libraries and there is no longer any need for this complexity (since we haven't supported 32bit platform on Mac OS X for a while now) >>> >>> Bug link: https://bugs.openjdk.java.net/browse/JDK-8069540 >>> Webrev: http://cr.openjdk.java.net/~gziemski/8069540_jdk_rev2/ >>> Webrev: http://cr.openjdk.java.net/~gziemski/8069540_hotspot_rev2/ >>> >>> Testing: JPRT + RBT on all platforms. >>> >>> >>> cheers >> > From daniel.daugherty at oracle.com Mon Feb 1 17:21:38 2016 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Mon, 1 Feb 2016 10:21:38 -0700 Subject: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc In-Reply-To: <56ACCB06.7060900@oracle.com> References: <56AB5F39.1060005@oracle.com> <56AB8D2E.7090902@oracle.com> <56AB9A43.1000905@oracle.com> <56ACCB06.7060900@oracle.com> Message-ID: <56AF9422.5010300@oracle.com> On 1/30/16 7:39 AM, Coleen Phillimore wrote: > > I've moved the SafeFetch to has_method_vptr as suggested and retested. > > http://cr.openjdk.java.net/~coleenp/8146984.02/webrev/ src/share/vm/oops/method.cpp (old) L2114: return has_method_vptr((const void*)this); (new) L2120: return has_method_vptr(this); Just curious. I don't see anything that explains why the cast is no longer needed (no type changes). Was this simply cleaning up an unnecessary cast? Thumbs up. Dan > > Thanks, > Coleen > > On 1/29/16 11:58 AM, Coleen Phillimore wrote: >> >> >> On 1/29/16 11:02 AM, Daniel D. Daugherty wrote: >>> On 1/29/16 5:46 AM, Coleen Phillimore wrote: >>>> Summary: Add address check and use SafeFetchN for Method* vptr >>>> access when Method* may be bad pointer. >>>> >>>> Tested with RBT and failing test case (reproduced 1 in 100 times) >>>> with fatal in the 'return's in the change to verify. >>>> >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8146984/ >>> >>> This one caught my eye because it has to do with sampling... >> >> I should mention sampling in all my RFRs then! >>> >>> src/share/vm/oops/method.cpp >>> The old code checked "!is_metaspace_object()" and used >>> has_method_vptr((const void*)this). >>> >>> The new code skips the "!is_metaspace_object()" check even after >>> sanity >>> checking the pointer, but you don't really explain why that's OK. >> >> is_metaspace_object is a very expensive check. It has to traverse >> all the metaspace mmap chunks. The new code is more robust in that >> it sanity checks the pointer first but uses Safefetch to get the vptr. >> >> >>> >>> The new code also picks up parts of Method::has_method_vptr() which >>> makes me wonder if that's the right place for the fix. Won't other >>> callers to Method::has_method_vptr() be subject to the same >>> crashing >>> mode? Or was the crashing mode only due to the >>> "!is_metaspace_object()" >>> check... >> >> I should have moved the SafeFetch in to the has_method_vptr. I can't >> remember why I copied it now. It crashed because the pointer was in >> metaspace (is_metaspace_object returned true) but wasn't aligned, but >> the pointer could come from anywhere. >> >> Thanks, I'll test out this fix and resend it. 
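For context, here is a rough POSIX-only sketch of the SafeFetch idea being discussed. The real HotSpot SafeFetchN is a generated stub whose faults are fixed up by the VM's own signal handler, and the real Method::has_method_vptr() compares against the vptr taken from a known-good Method; the sigsetjmp-based fetch and the expected_vptr parameter below are simplifications for illustration only.

```cpp
#include <setjmp.h>
#include <signal.h>
#include <cstdint>
#include <cstdio>

// Simplified, single-threaded stand-in for HotSpot's SafeFetchN: try to load
// one word from 'adr'; if the load faults (SIGSEGV/SIGBUS), return errValue
// instead of crashing.
static sigjmp_buf g_fetch_env;

static void fetch_fault_handler(int) {
  siglongjmp(g_fetch_env, 1);
}

intptr_t SafeFetchN(const intptr_t* adr, intptr_t errValue) {
  struct sigaction sa = {}, old_segv, old_bus;
  sa.sa_handler = fetch_fault_handler;
  sigemptyset(&sa.sa_mask);
  sigaction(SIGSEGV, &sa, &old_segv);
  sigaction(SIGBUS,  &sa, &old_bus);

  intptr_t result = errValue;
  if (sigsetjmp(g_fetch_env, 1) == 0) {
    result = *adr;                    // may fault; the handler jumps back here
  }
  sigaction(SIGSEGV, &old_segv, nullptr);
  sigaction(SIGBUS,  &old_bus,  nullptr);
  return result;
}

// The shape of the check discussed above: reject null or misaligned pointers
// cheaply first, then use the guarded load to read what would be the C++
// vptr and compare it with a known-good value.  'expected_vptr' is a
// hypothetical parameter added for this sketch.
bool has_method_vptr(const void* ptr, intptr_t expected_vptr) {
  uintptr_t p = reinterpret_cast<uintptr_t>(ptr);
  if (p == 0 || (p & (sizeof(intptr_t) - 1)) != 0) {
    return false;                     // null or not word-aligned
  }
  return SafeFetchN(reinterpret_cast<const intptr_t*>(ptr), 0) == expected_vptr;
}

int main() {
  intptr_t good = 42;
  std::printf("readable word: %ld\n", (long)SafeFetchN(&good, -1));               // 42
  std::printf("unmapped word: %ld\n", (long)SafeFetchN((const intptr_t*)16, -1)); // -1
  std::printf("null Method*:  %d\n", (int)has_method_vptr(nullptr, 0));           // 0
  return 0;
}
```

The ordering mirrors the fix described above: the cheap alignment and null checks come first, and only then does the guarded load decide whether the memory is readable at all.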
>> Coleen >> >>> >>> Dan >>> >>> >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8146984 >>>> >>>> Thanks, >>>> Coleen >>>> >>> >> > From chris.plummer at oracle.com Mon Feb 1 17:56:14 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Mon, 1 Feb 2016 09:56:14 -0800 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AEA467.6050801@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> <56AEA467.6050801@oracle.com> Message-ID: <56AF9C3E.70407@oracle.com> It seems the allocators always align the size up to at least a 64-bit boundary, so doesn't that make it pointless to attempt to save memory by keeping the allocation request size word aligned instead of 64-bit aligned? Chris On 1/31/16 4:18 PM, David Holmes wrote: > Hi Coleen, > > I think what Chris was referring to was the CDS compaction work - > which has since been abandoned. To be honest it has been so long since > I was working on this that I can't recall the details. At one point > Ioi commented how all MSO's were allocated with 8-byte alignment which > was unnecessary, and that we could do better and account for it in the > size() method. He also noted if we somehow messed up the alignment > when doing this that it should be quickly detectable on sparc. > > These current changes will affect the apparent wasted space in the > archive as the expected usage would be based on size() while the > actual usage would be determined by the allocator. > > Ioi was really the best person to comment-on/review this. > > David > ----- > > On 31/01/2016 12:27 AM, Coleen Phillimore wrote: >> >> >> On 1/29/16 2:20 PM, Coleen Phillimore wrote: >>> >>> Thanks Chris, >>> >>> On 1/29/16 2:15 PM, Chris Plummer wrote: >>>> Hi Coleen, >>>> >>>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>>> >>>>> Hi Chris, >>>>> >>>>> I made a few extra changes because of your question that I didn't >>>>> answer below, a few HeapWordSize became wordSize. I apologize that >>>>> I don't know how to create incremental webrevs. See discussion below. >>>>> >>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>>> >>>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>>> >>>>>>> Thank you, Chris for looking at this change. >>>>>>> >>>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>>> Hi Coleen, >>>>>>>> >>>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>>> something other than 8? >>>>>>> >>>>>>> Okay, I can run one of the testsets with that. I verified it in >>>>>>> the debugger mostly. >>>>>>>> >>>>>>>> Someone from GC team should apply your patch, grep for >>>>>>>> align_object_size(), and confirm that the ones you didn't change >>>>>>>> are correct. I gave a quick look and they look right to me, but I >>>>>>>> wasn't always certain if object alignment was appropriate in all >>>>>>>> cases. >>>>>>> >>>>>>> thanks - this is why I'd changed the align_object_size to >>>>>>> align_heap_object_size before testing and changed it back, to >>>>>>> verify that I didn't miss any. >>>>>>>> >>>>>>>> I see some remaining HeapWordSize references that are suspect, >>>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go through >>>>>>>> all of them since there are about 428. 
Do they need closer >>>>>>>> inspection? >>>>>> ??? Any comment? >>>>> >>>>> Actually, I tried to get a lot of HeapWordSize in the metadata but >>>>> the primary focus of the change, despite the title, was to fix >>>>> align_object_size wasn't used on metadata. >>>> ok. >>>>> That said a quick look at the instances of HeapWordSize led to some >>>>> that weren't in the heap. I didn't look in Array.java because it's >>>>> in the SA which isn't maintainable anyway, but I changed a few. >>>>> There were very few that were not referring to objects in the Java >>>>> heap. bytecodeTracer was one and there were a couple in >>>>> metaspace.cpp. >>>> Ok. If you think there may be more, or a more thorough analysis is >>>> needed, perhaps just file a bug to get the rest later. >>> >>> From my look yesterday, there aren't a lot of HeapWordSize left. There >>> are probably still a lot of HeapWord* casts for things that aren't in >>> the Java heap. This is a bigger cleanup that might not make sense to >>> do in one change, but maybe in incremental changes to related code. >>> >>>> >>>> As for reviewing your incremental changes, as long as it was just >>>> more changes of HeapWordSize to wordSize, I'm sure they are fine. >>>> (And yes, I did see that the removal of Symbol size alignment was >>>> also added). >>> >>> Good, thanks. >>> >>>>> >>>>> The bad news is that's more code to review. See above webrev link. >>>>> >>>>>>>> >>>>>>>> align_metadata_offset() is not used. It can be removed. >>>>>>> >>>>>>> Okay, I'll remove it. That's a good idea. >>>>>>> >>>>>>>> >>>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>>> align_object_size() did, and not align to word size? Isn't that >>>>>>>> what we agreed to? Have you tested CDS? David had concerns about >>>>>>>> the InstanceKlass::size() not returning the same aligned size as >>>>>>>> Metachunk::object_alignment(). >>>>>>> >>>>>>> I ran the CDS tests but I could test some more with CDS. We don't >>>>>>> want to force the size of objects to be 64 bit (especially Symbol) >>>>>>> because Metachunk::object_alignment() is 64 bits. >>>>>> Do you mean "just" because? I wasn't necessarily suggesting that >>>>>> all metadata be 64-bit aligned. However, the ones that have their >>>>>> allocation size 64-bit aligned should be. I think David's concern >>>>>> is that he wrote code that computes how much memory is needed for >>>>>> the archive, and it uses size() for that. If the Metachunk >>>>>> allocator allocates more than size() due to the 64-bit alignment of >>>>>> Metachunk::object_alignment(), then he will underestimate the size. >>>>>> You'll need to double check with David to see if I got this right. >>>>> >>>>> I don't know what code this is but yes, it would be wrong. It also >>>>> would be wrong if there's any other alignment gaps or space in >>>>> metaspace chunks because chunks themselves have an allocation >>>>> granularity. >>>>> >>>>> It could be changed back by changing the function >>>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>>> >>>>> I fixed Symbol so that it didn't call align_metaspace_size if this >>>>> change is needed in the future. >>>>> >>>>> I was trying to limit the size of this change to correct >>>>> align_object_size for metadata. >>>> Well, there a few issues being addressed by fixing align_object_size. >>>> Using align_object_size was incorrect from a code purity standpoint >>>> (it was used on values unrelated to java objects), and was also >>>> incorrect when ObjectAlignmentInBytes was not 8. 
This was the main >>>> motivation for making this change. >>> >>> Exactly. This was higher priority because it was wrong. >>>> >>>> The 3rd issue is that align_object_size by default was doing 8 byte >>>> alignment, and this wastes memory on 32-bit. However, as I mentioned >>>> there may be some dependencies on this 8 byte alignment due to the >>>> metaspace allocator doing 8 byte alignment. If you can get David to >>>> say he's ok with just 4-byte size alignment on 32-bit, then I'm ok >>>> with this change. Otherwise I think maybe you should stay with 8 byte >>>> alignment (including symbols), and file a bug to someday change it to >>>> word alignment, and have the metaspace allocator require that you >>>> pass in alignment requirements. >>> >>> Okay, I can see what David says but I wouldn't change Symbol back. >>> That's mostly unrelated to metadata storage and I can get 32 bit >>> packing for symbols on 32 bit platforms. It probably saves more space >>> than the other more invasive ideas that we've had. >> >> This is reviewed now. If David wants metadata sizing to change back to >> 64 bits on 32 bit platforms, it's a one line change. I'm going to push >> it to get the rest in. >> Thanks, >> Coleen >>> >>> Thanks, >>> Coleen >>> >>>>> >>>>> Thanks for looking at this in detail. >>>> No problem. Thanks for cleaning this up. >>>> >>>> Chris >>>>> >>>>> Coleen >>>>> >>>>> >>>>>>> Unfortunately, with the latter, metadata is never aligned on 32 >>>>>>> bit boundaries for 32 bit platforms, but to fix this, we have to >>>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>>> because the alignment is not a function of the size of the object >>>>>>> but what is required from its nonstatic data members. >>>>>> Correct. >>>>>>> I found MethodCounters, Klass (and subclasses) and ConstantPool >>>>>>> has such alignment constraints. Not sizing metadata to 64 bit >>>>>>> sizes is a start for making this change. >>>>>> I agree with that, but just wanted to point out why David may be >>>>>> concerned with this change. >>>>>>>> >>>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>>> >>>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>>> Fixed, Thanks! >>>>>> thanks, >>>>>> >>>>>> Chris >>>>>>> >>>>>>> Coleen >>>>>>> >>>>>>>> >>>>>>>> thanks, >>>>>>>> >>>>>>>> Chris >>>>>>>> >>>>>>>> >>>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>>> Summary: Use align_metadata_size, align_metadata_offset and >>>>>>>>> is_metadata_aligned for metadata rather >>>>>>>>> than align_object_size, etc. Use wordSize rather than >>>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>>> rather than align_pointer_up (all the related functions are ptr). >>>>>>>>> >>>>>>>>> Ran RBT quick tests on all platforms along with Chris's Plummers >>>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>>> changes. Reran subset of this after merging. >>>>>>>>> >>>>>>>>> I have a script to update copyrights on commit. It's not a big >>>>>>>>> change, just mostly boring. See the bug comments for more >>>>>>>>> details about the change. 
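The distinction in the summary quoted above, sizing heap objects with the Java object alignment versus sizing metadata in plain machine words, can be pictured with this standalone sketch. The constants are stand-ins, not the real HotSpot globals, and the helpers only mimic the ones named in the webrev:

  #include <cstdio>
  #include <cstddef>

  // Stand-ins for the HotSpot globals (not the real declarations).
  static const size_t HeapWordSize           = sizeof(void*); // bytes per heap word
  static const size_t ObjectAlignmentInBytes = 16;            // -XX:ObjectAlignmentInBytes, may be > 8

  static size_t align_up(size_t value, size_t units) {        // 'units' must be a power of two
    return (value + units - 1) & ~(units - 1);
  }

  // Java heap objects: sized in heap words, padded to ObjectAlignmentInBytes.
  static size_t align_object_size(size_t heap_words) {
    return align_up(heap_words, ObjectAlignmentInBytes / HeapWordSize);
  }

  // Metadata after the change: sized in machine words, no Java-object padding
  // (the thread notes this could be bumped back to WordsPerLong if needed).
  static size_t align_metadata_size(size_t words) {
    return align_up(words, 1);
  }

  int main() {
    printf("object   size of 5 words pads to %zu words\n", align_object_size(5));
    printf("metadata size of 5 words stays   %zu words\n", align_metadata_size(5));
    return 0;
  }
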
>>>>>>>>> >>>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>>> >>>>>>>>> thanks, >>>>>>>>> Coleen >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> From coleen.phillimore at oracle.com Mon Feb 1 18:01:50 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 1 Feb 2016 13:01:50 -0500 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AF9C3E.70407@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> <56AEA467.6050801@oracle.com> <56AF9C3E.70407@oracle.com> Message-ID: <56AF9D8E.1070408@oracle.com> On 2/1/16 12:56 PM, Chris Plummer wrote: > It seems the allocators always align the size up to at least a 64-bit > boundary, so doesn't that make it pointless to attempt to save memory > by keeping the allocation request size word aligned instead of 64-bit > aligned? Sort of, except you need a size as a multiple of 32 bit words to potentially fix this, so it's a step towards that (if wanted). Coleen > > Chris > > On 1/31/16 4:18 PM, David Holmes wrote: >> Hi Coleen, >> >> I think what Chris was referring to was the CDS compaction work - >> which has since been abandoned. To be honest it has been so long >> since I was working on this that I can't recall the details. At one >> point Ioi commented how all MSO's were allocated with 8-byte >> alignment which was unnecessary, and that we could do better and >> account for it in the size() method. He also noted if we somehow >> messed up the alignment when doing this that it should be quickly >> detectable on sparc. >> >> These current changes will affect the apparent wasted space in the >> archive as the expected usage would be based on size() while the >> actual usage would be determined by the allocator. >> >> Ioi was really the best person to comment-on/review this. >> >> David >> ----- >> >> On 31/01/2016 12:27 AM, Coleen Phillimore wrote: >>> >>> >>> On 1/29/16 2:20 PM, Coleen Phillimore wrote: >>>> >>>> Thanks Chris, >>>> >>>> On 1/29/16 2:15 PM, Chris Plummer wrote: >>>>> Hi Coleen, >>>>> >>>>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>>>> >>>>>> Hi Chris, >>>>>> >>>>>> I made a few extra changes because of your question that I didn't >>>>>> answer below, a few HeapWordSize became wordSize. I apologize that >>>>>> I don't know how to create incremental webrevs. See discussion >>>>>> below. >>>>>> >>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>>>> >>>>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>>>> >>>>>>>> Thank you, Chris for looking at this change. >>>>>>>> >>>>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>>>> Hi Coleen, >>>>>>>>> >>>>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>>>> something other than 8? >>>>>>>> >>>>>>>> Okay, I can run one of the testsets with that. I verified it in >>>>>>>> the debugger mostly. >>>>>>>>> >>>>>>>>> Someone from GC team should apply your patch, grep for >>>>>>>>> align_object_size(), and confirm that the ones you didn't change >>>>>>>>> are correct. 
I gave a quick look and they look right to me, but I >>>>>>>>> wasn't always certain if object alignment was appropriate in all >>>>>>>>> cases. >>>>>>>> >>>>>>>> thanks - this is why I'd changed the align_object_size to >>>>>>>> align_heap_object_size before testing and changed it back, to >>>>>>>> verify that I didn't miss any. >>>>>>>>> >>>>>>>>> I see some remaining HeapWordSize references that are suspect, >>>>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go through >>>>>>>>> all of them since there are about 428. Do they need closer >>>>>>>>> inspection? >>>>>>> ??? Any comment? >>>>>> >>>>>> Actually, I tried to get a lot of HeapWordSize in the metadata but >>>>>> the primary focus of the change, despite the title, was to fix >>>>>> align_object_size wasn't used on metadata. >>>>> ok. >>>>>> That said a quick look at the instances of HeapWordSize led to some >>>>>> that weren't in the heap. I didn't look in Array.java because it's >>>>>> in the SA which isn't maintainable anyway, but I changed a few. >>>>>> There were very few that were not referring to objects in the Java >>>>>> heap. bytecodeTracer was one and there were a couple in >>>>>> metaspace.cpp. >>>>> Ok. If you think there may be more, or a more thorough analysis is >>>>> needed, perhaps just file a bug to get the rest later. >>>> >>>> From my look yesterday, there aren't a lot of HeapWordSize left. There >>>> are probably still a lot of HeapWord* casts for things that aren't in >>>> the Java heap. This is a bigger cleanup that might not make sense to >>>> do in one change, but maybe in incremental changes to related code. >>>> >>>>> >>>>> As for reviewing your incremental changes, as long as it was just >>>>> more changes of HeapWordSize to wordSize, I'm sure they are fine. >>>>> (And yes, I did see that the removal of Symbol size alignment was >>>>> also added). >>>> >>>> Good, thanks. >>>> >>>>>> >>>>>> The bad news is that's more code to review. See above webrev link. >>>>>> >>>>>>>>> >>>>>>>>> align_metadata_offset() is not used. It can be removed. >>>>>>>> >>>>>>>> Okay, I'll remove it. That's a good idea. >>>>>>>> >>>>>>>>> >>>>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>>>> align_object_size() did, and not align to word size? Isn't that >>>>>>>>> what we agreed to? Have you tested CDS? David had concerns about >>>>>>>>> the InstanceKlass::size() not returning the same aligned size as >>>>>>>>> Metachunk::object_alignment(). >>>>>>>> >>>>>>>> I ran the CDS tests but I could test some more with CDS. We don't >>>>>>>> want to force the size of objects to be 64 bit (especially Symbol) >>>>>>>> because Metachunk::object_alignment() is 64 bits. >>>>>>> Do you mean "just" because? I wasn't necessarily suggesting that >>>>>>> all metadata be 64-bit aligned. However, the ones that have their >>>>>>> allocation size 64-bit aligned should be. I think David's concern >>>>>>> is that he wrote code that computes how much memory is needed for >>>>>>> the archive, and it uses size() for that. If the Metachunk >>>>>>> allocator allocates more than size() due to the 64-bit alignment of >>>>>>> Metachunk::object_alignment(), then he will underestimate the size. >>>>>>> You'll need to double check with David to see if I got this right. >>>>>> >>>>>> I don't know what code this is but yes, it would be wrong. It also >>>>>> would be wrong if there's any other alignment gaps or space in >>>>>> metaspace chunks because chunks themselves have an allocation >>>>>> granularity. 
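The kind of under-estimate being described can be seen with a little arithmetic. This is a standalone illustration with made-up numbers, not the CDS sizing code itself:

  #include <cstdio>
  #include <cstddef>

  // Expected usage is the sum of size() values; actual usage is the sum after
  // the allocator rounds each request up to its own alignment granularity.
  static size_t round_up(size_t words, size_t align_words) {
    return (words + align_words - 1) / align_words * align_words;
  }

  int main() {
    const size_t allocator_align_words = 2;        // 64-bit granularity on a 32-bit VM
    const size_t sizes[] = { 3, 5, 7, 9 };         // hypothetical size() results, in words

    size_t expected = 0, actual = 0;
    for (size_t s : sizes) {
      expected += s;                                   // what a size()-based estimate adds up
      actual   += round_up(s, allocator_align_words);  // what the allocator really hands out
    }
    printf("expected %zu words, actual %zu words, gap %zu words\n",
           expected, actual, actual - expected);
    return 0;
  }
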
>>>>>> >>>>>> It could be changed back by changing the function >>>>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>>>> >>>>>> I fixed Symbol so that it didn't call align_metaspace_size if this >>>>>> change is needed in the future. >>>>>> >>>>>> I was trying to limit the size of this change to correct >>>>>> align_object_size for metadata. >>>>> Well, there a few issues being addressed by fixing align_object_size. >>>>> Using align_object_size was incorrect from a code purity standpoint >>>>> (it was used on values unrelated to java objects), and was also >>>>> incorrect when ObjectAlignmentInBytes was not 8. This was the main >>>>> motivation for making this change. >>>> >>>> Exactly. This was higher priority because it was wrong. >>>>> >>>>> The 3rd issue is that align_object_size by default was doing 8 byte >>>>> alignment, and this wastes memory on 32-bit. However, as I mentioned >>>>> there may be some dependencies on this 8 byte alignment due to the >>>>> metaspace allocator doing 8 byte alignment. If you can get David to >>>>> say he's ok with just 4-byte size alignment on 32-bit, then I'm ok >>>>> with this change. Otherwise I think maybe you should stay with 8 byte >>>>> alignment (including symbols), and file a bug to someday change it to >>>>> word alignment, and have the metaspace allocator require that you >>>>> pass in alignment requirements. >>>> >>>> Okay, I can see what David says but I wouldn't change Symbol back. >>>> That's mostly unrelated to metadata storage and I can get 32 bit >>>> packing for symbols on 32 bit platforms. It probably saves more space >>>> than the other more invasive ideas that we've had. >>> >>> This is reviewed now. If David wants metadata sizing to change back to >>> 64 bits on 32 bit platforms, it's a one line change. I'm going to push >>> it to get the rest in. >>> Thanks, >>> Coleen >>>> >>>> Thanks, >>>> Coleen >>>> >>>>>> >>>>>> Thanks for looking at this in detail. >>>>> No problem. Thanks for cleaning this up. >>>>> >>>>> Chris >>>>>> >>>>>> Coleen >>>>>> >>>>>> >>>>>>>> Unfortunately, with the latter, metadata is never aligned on 32 >>>>>>>> bit boundaries for 32 bit platforms, but to fix this, we have to >>>>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>>>> because the alignment is not a function of the size of the object >>>>>>>> but what is required from its nonstatic data members. >>>>>>> Correct. >>>>>>>> I found MethodCounters, Klass (and subclasses) and ConstantPool >>>>>>>> has such alignment constraints. Not sizing metadata to 64 bit >>>>>>>> sizes is a start for making this change. >>>>>>> I agree with that, but just wanted to point out why David may be >>>>>>> concerned with this change. >>>>>>>>> >>>>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>>>> >>>>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>>>> Fixed, Thanks! >>>>>>> thanks, >>>>>>> >>>>>>> Chris >>>>>>>> >>>>>>>> Coleen >>>>>>>> >>>>>>>>> >>>>>>>>> thanks, >>>>>>>>> >>>>>>>>> Chris >>>>>>>>> >>>>>>>>> >>>>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>>>> Summary: Use align_metadata_size, align_metadata_offset and >>>>>>>>>> is_metadata_aligned for metadata rather >>>>>>>>>> than align_object_size, etc. Use wordSize rather than >>>>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>>>> rather than align_pointer_up (all the related functions are >>>>>>>>>> ptr). 
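For the pointer side mentioned just above, a ptr-alignment helper in the spirit of align_ptr_up looks roughly like this. It is a standalone stand-in, not the HotSpot implementation:

  #include <cstdint>
  #include <cstdio>

  // Round a pointer up to the next 'alignment'-byte boundary (power of two).
  static void* align_ptr_up(void* ptr, uintptr_t alignment) {
    uintptr_t p = reinterpret_cast<uintptr_t>(ptr);
    return reinterpret_cast<void*>((p + alignment - 1) & ~(alignment - 1));
  }

  int main() {
    char buffer[64];
    void* raw     = buffer + 3;              // deliberately misaligned
    void* aligned = align_ptr_up(raw, 8);    // bumped to the next 8-byte boundary
    printf("%p -> %p\n", raw, aligned);
    return 0;
  }
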
>>>>>>>>>> >>>>>>>>>> Ran RBT quick tests on all platforms along with Chris's Plummers >>>>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>>>> changes. Reran subset of this after merging. >>>>>>>>>> >>>>>>>>>> I have a script to update copyrights on commit. It's not a big >>>>>>>>>> change, just mostly boring. See the bug comments for more >>>>>>>>>> details about the change. >>>>>>>>>> >>>>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>>>> >>>>>>>>>> thanks, >>>>>>>>>> Coleen >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> > From coleen.phillimore at oracle.com Mon Feb 1 18:04:59 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 1 Feb 2016 13:04:59 -0500 Subject: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc In-Reply-To: <56AF9422.5010300@oracle.com> References: <56AB5F39.1060005@oracle.com> <56AB8D2E.7090902@oracle.com> <56AB9A43.1000905@oracle.com> <56ACCB06.7060900@oracle.com> <56AF9422.5010300@oracle.com> Message-ID: <56AF9E4B.8050704@oracle.com> Thanks Dan! On 2/1/16 12:21 PM, Daniel D. Daugherty wrote: > On 1/30/16 7:39 AM, Coleen Phillimore wrote: >> >> I've moved the SafeFetch to has_method_vptr as suggested and retested. >> >> http://cr.openjdk.java.net/~coleenp/8146984.02/webrev/ > > src/share/vm/oops/method.cpp > (old) L2114: return has_method_vptr((const void*)this); > (new) L2120: return has_method_vptr(this); > Just curious. I don't see anything that explains why the > cast is no longer needed (no type changes). Was this > simply cleaning up an unnecessary cast? The cast is unnecessary. I didn't add it back when I added the call to has_method_vptr back. thanks, Coleen > > Thumbs up. > > Dan > > >> >> Thanks, >> Coleen >> >> On 1/29/16 11:58 AM, Coleen Phillimore wrote: >>> >>> >>> On 1/29/16 11:02 AM, Daniel D. Daugherty wrote: >>>> On 1/29/16 5:46 AM, Coleen Phillimore wrote: >>>>> Summary: Add address check and use SafeFetchN for Method* vptr >>>>> access when Method* may be bad pointer. >>>>> >>>>> Tested with RBT and failing test case (reproduced 1 in 100 times) >>>>> with fatal in the 'return's in the change to verify. >>>>> >>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8146984/ >>>> >>>> This one caught my eye because it has to do with sampling... >>> >>> I should mention sampling in all my RFRs then! >>>> >>>> src/share/vm/oops/method.cpp >>>> The old code checked "!is_metaspace_object()" and used >>>> has_method_vptr((const void*)this). >>>> >>>> The new code skips the "!is_metaspace_object()" check even >>>> after sanity >>>> checking the pointer, but you don't really explain why that's OK. >>> >>> is_metaspace_object is a very expensive check. It has to traverse >>> all the metaspace mmap chunks. The new code is more robust in that >>> it sanity checks the pointer first but uses Safefetch to get the vptr. >>> >>> >>>> >>>> The new code also picks up parts of Method::has_method_vptr() >>>> which >>>> makes me wonder if that's the right place for the fix. Won't other >>>> callers to Method::has_method_vptr() be subject to the same >>>> crashing >>>> mode? Or was the crashing mode only due to the >>>> "!is_metaspace_object()" >>>> check... >>> >>> I should have moved the SafeFetch in to the has_method_vptr. I can't >>> remember why I copied it now. 
It crashed because the pointer was in >>> metaspace (is_metaspace_object returned true) but wasn't aligned, >>> but the pointer could come from anywhere. >>> >>> Thanks, I'll test out this fix and resend it. >>> Coleen >>> >>>> >>>> Dan >>>> >>>> >>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8146984 >>>>> >>>>> Thanks, >>>>> Coleen >>>>> >>>> >>> >> > From chris.plummer at oracle.com Mon Feb 1 19:54:05 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Mon, 1 Feb 2016 11:54:05 -0800 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AF9D8E.1070408@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> <56AEA467.6050801@oracle.com> <56AF9C3E.70407@oracle.com> <56AF9D8E.1070408@oracle.com> Message-ID: <56AFB7DD.8010500@oracle.com> On 2/1/16 10:01 AM, Coleen Phillimore wrote: > > > On 2/1/16 12:56 PM, Chris Plummer wrote: >> It seems the allocators always align the size up to at least a 64-bit >> boundary, so doesn't that make it pointless to attempt to save memory >> by keeping the allocation request size word aligned instead of 64-bit >> aligned? > > Sort of, except you need a size as a multiple of 32 bit words to > potentially fix this, so it's a step towards that (if wanted). What you need is (1) don't automatically pad the size up to 64-bit alignment in the allocator, (2) don't pad the size up to 64-bit in the size computations, and (3) the ability for the allocator to maintain an unaligned "top" pointer, and to fix the alignment if necessary during the allocation. This last one implies knowing the alignment requirements of the caller, so that means either passing in the alignment requirement or having allocators configured to the alignment requirements of its users. You need all 3 of these. Leave any one out and you don't recoup any of the wasted memory. We were doing all 3. You eliminated at least some of the cases of (2). Chris > > Coleen > >> >> Chris >> >> On 1/31/16 4:18 PM, David Holmes wrote: >>> Hi Coleen, >>> >>> I think what Chris was referring to was the CDS compaction work - >>> which has since been abandoned. To be honest it has been so long >>> since I was working on this that I can't recall the details. At one >>> point Ioi commented how all MSO's were allocated with 8-byte >>> alignment which was unnecessary, and that we could do better and >>> account for it in the size() method. He also noted if we somehow >>> messed up the alignment when doing this that it should be quickly >>> detectable on sparc. >>> >>> These current changes will affect the apparent wasted space in the >>> archive as the expected usage would be based on size() while the >>> actual usage would be determined by the allocator. >>> >>> Ioi was really the best person to comment-on/review this. >>> >>> David >>> ----- >>> >>> On 31/01/2016 12:27 AM, Coleen Phillimore wrote: >>>> >>>> >>>> On 1/29/16 2:20 PM, Coleen Phillimore wrote: >>>>> >>>>> Thanks Chris, >>>>> >>>>> On 1/29/16 2:15 PM, Chris Plummer wrote: >>>>>> Hi Coleen, >>>>>> >>>>>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>>>>> >>>>>>> Hi Chris, >>>>>>> >>>>>>> I made a few extra changes because of your question that I didn't >>>>>>> answer below, a few HeapWordSize became wordSize. 
I apologize that >>>>>>> I don't know how to create incremental webrevs. See discussion >>>>>>> below. >>>>>>> >>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>>>>> >>>>>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>>>>> >>>>>>>>> Thank you, Chris for looking at this change. >>>>>>>>> >>>>>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>>>>> Hi Coleen, >>>>>>>>>> >>>>>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>>>>> something other than 8? >>>>>>>>> >>>>>>>>> Okay, I can run one of the testsets with that. I verified it in >>>>>>>>> the debugger mostly. >>>>>>>>>> >>>>>>>>>> Someone from GC team should apply your patch, grep for >>>>>>>>>> align_object_size(), and confirm that the ones you didn't change >>>>>>>>>> are correct. I gave a quick look and they look right to me, >>>>>>>>>> but I >>>>>>>>>> wasn't always certain if object alignment was appropriate in all >>>>>>>>>> cases. >>>>>>>>> >>>>>>>>> thanks - this is why I'd changed the align_object_size to >>>>>>>>> align_heap_object_size before testing and changed it back, to >>>>>>>>> verify that I didn't miss any. >>>>>>>>>> >>>>>>>>>> I see some remaining HeapWordSize references that are suspect, >>>>>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go through >>>>>>>>>> all of them since there are about 428. Do they need closer >>>>>>>>>> inspection? >>>>>>>> ??? Any comment? >>>>>>> >>>>>>> Actually, I tried to get a lot of HeapWordSize in the metadata but >>>>>>> the primary focus of the change, despite the title, was to fix >>>>>>> align_object_size wasn't used on metadata. >>>>>> ok. >>>>>>> That said a quick look at the instances of HeapWordSize led to some >>>>>>> that weren't in the heap. I didn't look in Array.java because it's >>>>>>> in the SA which isn't maintainable anyway, but I changed a few. >>>>>>> There were very few that were not referring to objects in the Java >>>>>>> heap. bytecodeTracer was one and there were a couple in >>>>>>> metaspace.cpp. >>>>>> Ok. If you think there may be more, or a more thorough analysis is >>>>>> needed, perhaps just file a bug to get the rest later. >>>>> >>>>> From my look yesterday, there aren't a lot of HeapWordSize left. >>>>> There >>>>> are probably still a lot of HeapWord* casts for things that aren't in >>>>> the Java heap. This is a bigger cleanup that might not make sense to >>>>> do in one change, but maybe in incremental changes to related code. >>>>> >>>>>> >>>>>> As for reviewing your incremental changes, as long as it was just >>>>>> more changes of HeapWordSize to wordSize, I'm sure they are fine. >>>>>> (And yes, I did see that the removal of Symbol size alignment was >>>>>> also added). >>>>> >>>>> Good, thanks. >>>>> >>>>>>> >>>>>>> The bad news is that's more code to review. See above webrev >>>>>>> link. >>>>>>> >>>>>>>>>> >>>>>>>>>> align_metadata_offset() is not used. It can be removed. >>>>>>>>> >>>>>>>>> Okay, I'll remove it. That's a good idea. >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>>>>> align_object_size() did, and not align to word size? Isn't that >>>>>>>>>> what we agreed to? Have you tested CDS? David had concerns about >>>>>>>>>> the InstanceKlass::size() not returning the same aligned size as >>>>>>>>>> Metachunk::object_alignment(). >>>>>>>>> >>>>>>>>> I ran the CDS tests but I could test some more with CDS. 
We don't >>>>>>>>> want to force the size of objects to be 64 bit (especially >>>>>>>>> Symbol) >>>>>>>>> because Metachunk::object_alignment() is 64 bits. >>>>>>>> Do you mean "just" because? I wasn't necessarily suggesting that >>>>>>>> all metadata be 64-bit aligned. However, the ones that have their >>>>>>>> allocation size 64-bit aligned should be. I think David's concern >>>>>>>> is that he wrote code that computes how much memory is needed for >>>>>>>> the archive, and it uses size() for that. If the Metachunk >>>>>>>> allocator allocates more than size() due to the 64-bit >>>>>>>> alignment of >>>>>>>> Metachunk::object_alignment(), then he will underestimate the >>>>>>>> size. >>>>>>>> You'll need to double check with David to see if I got this right. >>>>>>> >>>>>>> I don't know what code this is but yes, it would be wrong. It also >>>>>>> would be wrong if there's any other alignment gaps or space in >>>>>>> metaspace chunks because chunks themselves have an allocation >>>>>>> granularity. >>>>>>> >>>>>>> It could be changed back by changing the function >>>>>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>>>>> >>>>>>> I fixed Symbol so that it didn't call align_metaspace_size if this >>>>>>> change is needed in the future. >>>>>>> >>>>>>> I was trying to limit the size of this change to correct >>>>>>> align_object_size for metadata. >>>>>> Well, there a few issues being addressed by fixing >>>>>> align_object_size. >>>>>> Using align_object_size was incorrect from a code purity standpoint >>>>>> (it was used on values unrelated to java objects), and was also >>>>>> incorrect when ObjectAlignmentInBytes was not 8. This was the main >>>>>> motivation for making this change. >>>>> >>>>> Exactly. This was higher priority because it was wrong. >>>>>> >>>>>> The 3rd issue is that align_object_size by default was doing 8 byte >>>>>> alignment, and this wastes memory on 32-bit. However, as I mentioned >>>>>> there may be some dependencies on this 8 byte alignment due to the >>>>>> metaspace allocator doing 8 byte alignment. If you can get David to >>>>>> say he's ok with just 4-byte size alignment on 32-bit, then I'm ok >>>>>> with this change. Otherwise I think maybe you should stay with 8 >>>>>> byte >>>>>> alignment (including symbols), and file a bug to someday change >>>>>> it to >>>>>> word alignment, and have the metaspace allocator require that you >>>>>> pass in alignment requirements. >>>>> >>>>> Okay, I can see what David says but I wouldn't change Symbol back. >>>>> That's mostly unrelated to metadata storage and I can get 32 bit >>>>> packing for symbols on 32 bit platforms. It probably saves more >>>>> space >>>>> than the other more invasive ideas that we've had. >>>> >>>> This is reviewed now. If David wants metadata sizing to change >>>> back to >>>> 64 bits on 32 bit platforms, it's a one line change. I'm going to >>>> push >>>> it to get the rest in. >>>> Thanks, >>>> Coleen >>>>> >>>>> Thanks, >>>>> Coleen >>>>> >>>>>>> >>>>>>> Thanks for looking at this in detail. >>>>>> No problem. Thanks for cleaning this up. >>>>>> >>>>>> Chris >>>>>>> >>>>>>> Coleen >>>>>>> >>>>>>> >>>>>>>>> Unfortunately, with the latter, metadata is never aligned on 32 >>>>>>>>> bit boundaries for 32 bit platforms, but to fix this, we have to >>>>>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>>>>> because the alignment is not a function of the size of the object >>>>>>>>> but what is required from its nonstatic data members. >>>>>>>> Correct. 
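A rough sketch of what an alignment-aware allocation path could look like follows. The interface is hypothetical, illustrating the proposed minimum_alignment idea rather than the current Metaspace::allocate() signature:

  #include <cstdint>
  #include <cstddef>
  #include <cstdio>

  // Hypothetical bump allocator that takes the caller's alignment requirement,
  // i.e. what its non-static data members need (e.g. 8 for a jlong field).
  class BumpAllocator {
    char* _top;
    char* _end;
   public:
    BumpAllocator(char* start, char* end) : _top(start), _end(end) {}

    void* allocate(size_t word_size, size_t min_alignment) {   // min_alignment in bytes, power of two
      uintptr_t aligned = (reinterpret_cast<uintptr_t>(_top) + min_alignment - 1)
                          & ~(uintptr_t)(min_alignment - 1);
      char* result  = reinterpret_cast<char*>(aligned);
      char* new_top = result + word_size * sizeof(void*);
      if (new_top > _end) return nullptr;   // out of space in this chunk
      _top = new_top;                       // top may be left word-aligned only
      return result;
    }
  };

  int main() {
    static char arena[1024];
    BumpAllocator a(arena, arena + sizeof(arena));
    void* p1 = a.allocate(3, sizeof(void*));  // plain word alignment is enough
    void* p2 = a.allocate(4, 8);              // caller that embeds a 64-bit field
    printf("%p %p\n", p1, p2);
    return 0;
  }
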
>>>>>>>>> I found MethodCounters, Klass (and subclasses) and ConstantPool >>>>>>>>> has such alignment constraints. Not sizing metadata to 64 bit >>>>>>>>> sizes is a start for making this change. >>>>>>>> I agree with that, but just wanted to point out why David may be >>>>>>>> concerned with this change. >>>>>>>>>> >>>>>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>>>>> >>>>>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>>>>> Fixed, Thanks! >>>>>>>> thanks, >>>>>>>> >>>>>>>> Chris >>>>>>>>> >>>>>>>>> Coleen >>>>>>>>> >>>>>>>>>> >>>>>>>>>> thanks, >>>>>>>>>> >>>>>>>>>> Chris >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>>>>> Summary: Use align_metadata_size, align_metadata_offset and >>>>>>>>>>> is_metadata_aligned for metadata rather >>>>>>>>>>> than align_object_size, etc. Use wordSize rather than >>>>>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>>>>> rather than align_pointer_up (all the related functions are >>>>>>>>>>> ptr). >>>>>>>>>>> >>>>>>>>>>> Ran RBT quick tests on all platforms along with Chris's >>>>>>>>>>> Plummers >>>>>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>>>>> changes. Reran subset of this after merging. >>>>>>>>>>> >>>>>>>>>>> I have a script to update copyrights on commit. It's not a big >>>>>>>>>>> change, just mostly boring. See the bug comments for more >>>>>>>>>>> details about the change. >>>>>>>>>>> >>>>>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>>>>> >>>>>>>>>>> thanks, >>>>>>>>>>> Coleen >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >> > From chris.plummer at oracle.com Mon Feb 1 19:59:55 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Mon, 1 Feb 2016 11:59:55 -0800 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AFB7DD.8010500@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> <56AEA467.6050801@oracle.com> <56AF9C3E.70407@oracle.com> <56AF9D8E.1070408@oracle.com> <56AFB7DD.8010500@oracle.com> Message-ID: <56AFB93B.2020007@oracle.com> On 2/1/16 11:54 AM, Chris Plummer wrote: > On 2/1/16 10:01 AM, Coleen Phillimore wrote: >> >> >> On 2/1/16 12:56 PM, Chris Plummer wrote: >>> It seems the allocators always align the size up to at least a >>> 64-bit boundary, so doesn't that make it pointless to attempt to >>> save memory by keeping the allocation request size word aligned >>> instead of 64-bit aligned? >> >> Sort of, except you need a size as a multiple of 32 bit words to >> potentially fix this, so it's a step towards that (if wanted). > What you need is (1) don't automatically pad the size up to 64-bit > alignment in the allocator, (2) don't pad the size up to 64-bit in the > size computations, and (3) the ability for the allocator to maintain > an unaligned "top" pointer, and to fix the alignment if necessary > during the allocation. This last one implies knowing the alignment > requirements of the caller, so that means either passing in the > alignment requirement or having allocators configured to the alignment > requirements of its users. 
You need all 3 of these. Leave any one out > and you don't recoup any of the wasted memory. We were doing all 3. > You eliminated at least some of the cases of (2). Sorry, my wording near then end there was kind of backwards. I meant we were NOT doing any of the 3. You made is so in some cases we are now doing (2). Chris > > Chris >> >> Coleen >> >>> >>> Chris >>> >>> On 1/31/16 4:18 PM, David Holmes wrote: >>>> Hi Coleen, >>>> >>>> I think what Chris was referring to was the CDS compaction work - >>>> which has since been abandoned. To be honest it has been so long >>>> since I was working on this that I can't recall the details. At one >>>> point Ioi commented how all MSO's were allocated with 8-byte >>>> alignment which was unnecessary, and that we could do better and >>>> account for it in the size() method. He also noted if we somehow >>>> messed up the alignment when doing this that it should be quickly >>>> detectable on sparc. >>>> >>>> These current changes will affect the apparent wasted space in the >>>> archive as the expected usage would be based on size() while the >>>> actual usage would be determined by the allocator. >>>> >>>> Ioi was really the best person to comment-on/review this. >>>> >>>> David >>>> ----- >>>> >>>> On 31/01/2016 12:27 AM, Coleen Phillimore wrote: >>>>> >>>>> >>>>> On 1/29/16 2:20 PM, Coleen Phillimore wrote: >>>>>> >>>>>> Thanks Chris, >>>>>> >>>>>> On 1/29/16 2:15 PM, Chris Plummer wrote: >>>>>>> Hi Coleen, >>>>>>> >>>>>>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>>>>>> >>>>>>>> Hi Chris, >>>>>>>> >>>>>>>> I made a few extra changes because of your question that I didn't >>>>>>>> answer below, a few HeapWordSize became wordSize. I apologize >>>>>>>> that >>>>>>>> I don't know how to create incremental webrevs. See discussion >>>>>>>> below. >>>>>>>> >>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>>>>>> >>>>>>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>>>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>>>>>> >>>>>>>>>> Thank you, Chris for looking at this change. >>>>>>>>>> >>>>>>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>>>>>> Hi Coleen, >>>>>>>>>>> >>>>>>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>>>>>> something other than 8? >>>>>>>>>> >>>>>>>>>> Okay, I can run one of the testsets with that. I verified it in >>>>>>>>>> the debugger mostly. >>>>>>>>>>> >>>>>>>>>>> Someone from GC team should apply your patch, grep for >>>>>>>>>>> align_object_size(), and confirm that the ones you didn't >>>>>>>>>>> change >>>>>>>>>>> are correct. I gave a quick look and they look right to me, >>>>>>>>>>> but I >>>>>>>>>>> wasn't always certain if object alignment was appropriate in >>>>>>>>>>> all >>>>>>>>>>> cases. >>>>>>>>>> >>>>>>>>>> thanks - this is why I'd changed the align_object_size to >>>>>>>>>> align_heap_object_size before testing and changed it back, to >>>>>>>>>> verify that I didn't miss any. >>>>>>>>>>> >>>>>>>>>>> I see some remaining HeapWordSize references that are suspect, >>>>>>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go through >>>>>>>>>>> all of them since there are about 428. Do they need closer >>>>>>>>>>> inspection? >>>>>>>>> ??? Any comment? >>>>>>>> >>>>>>>> Actually, I tried to get a lot of HeapWordSize in the metadata but >>>>>>>> the primary focus of the change, despite the title, was to fix >>>>>>>> align_object_size wasn't used on metadata. >>>>>>> ok. 
>>>>>>>> That said a quick look at the instances of HeapWordSize led to >>>>>>>> some >>>>>>>> that weren't in the heap. I didn't look in Array.java because >>>>>>>> it's >>>>>>>> in the SA which isn't maintainable anyway, but I changed a few. >>>>>>>> There were very few that were not referring to objects in the Java >>>>>>>> heap. bytecodeTracer was one and there were a couple in >>>>>>>> metaspace.cpp. >>>>>>> Ok. If you think there may be more, or a more thorough analysis is >>>>>>> needed, perhaps just file a bug to get the rest later. >>>>>> >>>>>> From my look yesterday, there aren't a lot of HeapWordSize left. >>>>>> There >>>>>> are probably still a lot of HeapWord* casts for things that >>>>>> aren't in >>>>>> the Java heap. This is a bigger cleanup that might not make >>>>>> sense to >>>>>> do in one change, but maybe in incremental changes to related code. >>>>>> >>>>>>> >>>>>>> As for reviewing your incremental changes, as long as it was just >>>>>>> more changes of HeapWordSize to wordSize, I'm sure they are fine. >>>>>>> (And yes, I did see that the removal of Symbol size alignment was >>>>>>> also added). >>>>>> >>>>>> Good, thanks. >>>>>> >>>>>>>> >>>>>>>> The bad news is that's more code to review. See above webrev >>>>>>>> link. >>>>>>>> >>>>>>>>>>> >>>>>>>>>>> align_metadata_offset() is not used. It can be removed. >>>>>>>>>> >>>>>>>>>> Okay, I'll remove it. That's a good idea. >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>>>>>> align_object_size() did, and not align to word size? Isn't that >>>>>>>>>>> what we agreed to? Have you tested CDS? David had concerns >>>>>>>>>>> about >>>>>>>>>>> the InstanceKlass::size() not returning the same aligned >>>>>>>>>>> size as >>>>>>>>>>> Metachunk::object_alignment(). >>>>>>>>>> >>>>>>>>>> I ran the CDS tests but I could test some more with CDS. We >>>>>>>>>> don't >>>>>>>>>> want to force the size of objects to be 64 bit (especially >>>>>>>>>> Symbol) >>>>>>>>>> because Metachunk::object_alignment() is 64 bits. >>>>>>>>> Do you mean "just" because? I wasn't necessarily suggesting that >>>>>>>>> all metadata be 64-bit aligned. However, the ones that have their >>>>>>>>> allocation size 64-bit aligned should be. I think David's concern >>>>>>>>> is that he wrote code that computes how much memory is needed for >>>>>>>>> the archive, and it uses size() for that. If the Metachunk >>>>>>>>> allocator allocates more than size() due to the 64-bit >>>>>>>>> alignment of >>>>>>>>> Metachunk::object_alignment(), then he will underestimate the >>>>>>>>> size. >>>>>>>>> You'll need to double check with David to see if I got this >>>>>>>>> right. >>>>>>>> >>>>>>>> I don't know what code this is but yes, it would be wrong. It >>>>>>>> also >>>>>>>> would be wrong if there's any other alignment gaps or space in >>>>>>>> metaspace chunks because chunks themselves have an allocation >>>>>>>> granularity. >>>>>>>> >>>>>>>> It could be changed back by changing the function >>>>>>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>>>>>> >>>>>>>> I fixed Symbol so that it didn't call align_metaspace_size if this >>>>>>>> change is needed in the future. >>>>>>>> >>>>>>>> I was trying to limit the size of this change to correct >>>>>>>> align_object_size for metadata. >>>>>>> Well, there a few issues being addressed by fixing >>>>>>> align_object_size. 
>>>>>>> Using align_object_size was incorrect from a code purity standpoint >>>>>>> (it was used on values unrelated to java objects), and was also >>>>>>> incorrect when ObjectAlignmentInBytes was not 8. This was the main >>>>>>> motivation for making this change. >>>>>> >>>>>> Exactly. This was higher priority because it was wrong. >>>>>>> >>>>>>> The 3rd issue is that align_object_size by default was doing 8 byte >>>>>>> alignment, and this wastes memory on 32-bit. However, as I >>>>>>> mentioned >>>>>>> there may be some dependencies on this 8 byte alignment due to the >>>>>>> metaspace allocator doing 8 byte alignment. If you can get David to >>>>>>> say he's ok with just 4-byte size alignment on 32-bit, then I'm ok >>>>>>> with this change. Otherwise I think maybe you should stay with 8 >>>>>>> byte >>>>>>> alignment (including symbols), and file a bug to someday change >>>>>>> it to >>>>>>> word alignment, and have the metaspace allocator require that you >>>>>>> pass in alignment requirements. >>>>>> >>>>>> Okay, I can see what David says but I wouldn't change Symbol back. >>>>>> That's mostly unrelated to metadata storage and I can get 32 bit >>>>>> packing for symbols on 32 bit platforms. It probably saves more >>>>>> space >>>>>> than the other more invasive ideas that we've had. >>>>> >>>>> This is reviewed now. If David wants metadata sizing to change >>>>> back to >>>>> 64 bits on 32 bit platforms, it's a one line change. I'm going to >>>>> push >>>>> it to get the rest in. >>>>> Thanks, >>>>> Coleen >>>>>> >>>>>> Thanks, >>>>>> Coleen >>>>>> >>>>>>>> >>>>>>>> Thanks for looking at this in detail. >>>>>>> No problem. Thanks for cleaning this up. >>>>>>> >>>>>>> Chris >>>>>>>> >>>>>>>> Coleen >>>>>>>> >>>>>>>> >>>>>>>>>> Unfortunately, with the latter, metadata is never aligned on 32 >>>>>>>>>> bit boundaries for 32 bit platforms, but to fix this, we have to >>>>>>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>>>>>> because the alignment is not a function of the size of the >>>>>>>>>> object >>>>>>>>>> but what is required from its nonstatic data members. >>>>>>>>> Correct. >>>>>>>>>> I found MethodCounters, Klass (and subclasses) and >>>>>>>>>> ConstantPool >>>>>>>>>> has such alignment constraints. Not sizing metadata to 64 bit >>>>>>>>>> sizes is a start for making this change. >>>>>>>>> I agree with that, but just wanted to point out why David may be >>>>>>>>> concerned with this change. >>>>>>>>>>> >>>>>>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>>>>>> >>>>>>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>>>>>> Fixed, Thanks! >>>>>>>>> thanks, >>>>>>>>> >>>>>>>>> Chris >>>>>>>>>> >>>>>>>>>> Coleen >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> thanks, >>>>>>>>>>> >>>>>>>>>>> Chris >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>>>>>> Summary: Use align_metadata_size, align_metadata_offset and >>>>>>>>>>>> is_metadata_aligned for metadata rather >>>>>>>>>>>> than align_object_size, etc. Use wordSize rather than >>>>>>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>>>>>> rather than align_pointer_up (all the related functions are >>>>>>>>>>>> ptr). >>>>>>>>>>>> >>>>>>>>>>>> Ran RBT quick tests on all platforms along with Chris's >>>>>>>>>>>> Plummers >>>>>>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>>>>>> changes. Reran subset of this after merging. 
>>>>>>>>>>>> >>>>>>>>>>>> I have a script to update copyrights on commit. It's not a big >>>>>>>>>>>> change, just mostly boring. See the bug comments for more >>>>>>>>>>>> details about the change. >>>>>>>>>>>> >>>>>>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>>>>>> >>>>>>>>>>>> thanks, >>>>>>>>>>>> Coleen >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>> >> > From coleen.phillimore at oracle.com Mon Feb 1 20:59:47 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 1 Feb 2016 15:59:47 -0500 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AFB93B.2020007@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> <56AEA467.6050801@oracle.com> <56AF9C3E.70407@oracle.com> <56AF9D8E.1070408@oracle.com> <56AFB7DD.8010500@oracle.com> <56AFB93B.2020007@oracle.com> Message-ID: <56AFC743.3030006@oracle.com> On 2/1/16 2:59 PM, Chris Plummer wrote: > On 2/1/16 11:54 AM, Chris Plummer wrote: >> On 2/1/16 10:01 AM, Coleen Phillimore wrote: >>> >>> >>> On 2/1/16 12:56 PM, Chris Plummer wrote: >>>> It seems the allocators always align the size up to at least a >>>> 64-bit boundary, so doesn't that make it pointless to attempt to >>>> save memory by keeping the allocation request size word aligned >>>> instead of 64-bit aligned? >>> >>> Sort of, except you need a size as a multiple of 32 bit words to >>> potentially fix this, so it's a step towards that (if wanted). >> What you need is (1) don't automatically pad the size up to 64-bit >> alignment in the allocator, (2) don't pad the size up to 64-bit in >> the size computations, and (3) the ability for the allocator to >> maintain an unaligned "top" pointer, and to fix the alignment if >> necessary during the allocation. This last one implies knowing the >> alignment requirements of the caller, so that means either passing in >> the alignment requirement or having allocators configured to the >> alignment requirements of its users. Yes, exactly. I think we need to add another parameter to Metaspace::allocate() to allow the caller to specify alignment requirements. >> You need all 3 of these. Leave any one out and you don't recoup any >> of the wasted memory. We were doing all 3. You eliminated at least >> some of the cases of (2). > Sorry, my wording near then end there was kind of backwards. I meant > we were NOT doing any of the 3. You made is so in some cases we are > now doing (2). True. Why I said "if wanted" above was that we'd need to file an RFE to reclaim the wasted memory. Coleen > > Chris >> >> Chris >>> >>> Coleen >>> >>>> >>>> Chris >>>> >>>> On 1/31/16 4:18 PM, David Holmes wrote: >>>>> Hi Coleen, >>>>> >>>>> I think what Chris was referring to was the CDS compaction work - >>>>> which has since been abandoned. To be honest it has been so long >>>>> since I was working on this that I can't recall the details. At >>>>> one point Ioi commented how all MSO's were allocated with 8-byte >>>>> alignment which was unnecessary, and that we could do better and >>>>> account for it in the size() method. 
He also noted if we somehow >>>>> messed up the alignment when doing this that it should be quickly >>>>> detectable on sparc. >>>>> >>>>> These current changes will affect the apparent wasted space in the >>>>> archive as the expected usage would be based on size() while the >>>>> actual usage would be determined by the allocator. >>>>> >>>>> Ioi was really the best person to comment-on/review this. >>>>> >>>>> David >>>>> ----- >>>>> >>>>> On 31/01/2016 12:27 AM, Coleen Phillimore wrote: >>>>>> >>>>>> >>>>>> On 1/29/16 2:20 PM, Coleen Phillimore wrote: >>>>>>> >>>>>>> Thanks Chris, >>>>>>> >>>>>>> On 1/29/16 2:15 PM, Chris Plummer wrote: >>>>>>>> Hi Coleen, >>>>>>>> >>>>>>>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>>>>>>> >>>>>>>>> Hi Chris, >>>>>>>>> >>>>>>>>> I made a few extra changes because of your question that I didn't >>>>>>>>> answer below, a few HeapWordSize became wordSize. I apologize >>>>>>>>> that >>>>>>>>> I don't know how to create incremental webrevs. See discussion >>>>>>>>> below. >>>>>>>>> >>>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>>>>>>> >>>>>>>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>>>>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>>>>>>> >>>>>>>>>>> Thank you, Chris for looking at this change. >>>>>>>>>>> >>>>>>>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>>>>>>> Hi Coleen, >>>>>>>>>>>> >>>>>>>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>>>>>>> something other than 8? >>>>>>>>>>> >>>>>>>>>>> Okay, I can run one of the testsets with that. I verified it in >>>>>>>>>>> the debugger mostly. >>>>>>>>>>>> >>>>>>>>>>>> Someone from GC team should apply your patch, grep for >>>>>>>>>>>> align_object_size(), and confirm that the ones you didn't >>>>>>>>>>>> change >>>>>>>>>>>> are correct. I gave a quick look and they look right to me, >>>>>>>>>>>> but I >>>>>>>>>>>> wasn't always certain if object alignment was appropriate >>>>>>>>>>>> in all >>>>>>>>>>>> cases. >>>>>>>>>>> >>>>>>>>>>> thanks - this is why I'd changed the align_object_size to >>>>>>>>>>> align_heap_object_size before testing and changed it back, to >>>>>>>>>>> verify that I didn't miss any. >>>>>>>>>>>> >>>>>>>>>>>> I see some remaining HeapWordSize references that are suspect, >>>>>>>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go through >>>>>>>>>>>> all of them since there are about 428. Do they need closer >>>>>>>>>>>> inspection? >>>>>>>>>> ??? Any comment? >>>>>>>>> >>>>>>>>> Actually, I tried to get a lot of HeapWordSize in the metadata >>>>>>>>> but >>>>>>>>> the primary focus of the change, despite the title, was to fix >>>>>>>>> align_object_size wasn't used on metadata. >>>>>>>> ok. >>>>>>>>> That said a quick look at the instances of HeapWordSize led to >>>>>>>>> some >>>>>>>>> that weren't in the heap. I didn't look in Array.java because >>>>>>>>> it's >>>>>>>>> in the SA which isn't maintainable anyway, but I changed a few. >>>>>>>>> There were very few that were not referring to objects in the >>>>>>>>> Java >>>>>>>>> heap. bytecodeTracer was one and there were a couple in >>>>>>>>> metaspace.cpp. >>>>>>>> Ok. If you think there may be more, or a more thorough analysis is >>>>>>>> needed, perhaps just file a bug to get the rest later. >>>>>>> >>>>>>> From my look yesterday, there aren't a lot of HeapWordSize left. >>>>>>> There >>>>>>> are probably still a lot of HeapWord* casts for things that >>>>>>> aren't in >>>>>>> the Java heap. 
This is a bigger cleanup that might not make >>>>>>> sense to >>>>>>> do in one change, but maybe in incremental changes to related code. >>>>>>> >>>>>>>> >>>>>>>> As for reviewing your incremental changes, as long as it was just >>>>>>>> more changes of HeapWordSize to wordSize, I'm sure they are fine. >>>>>>>> (And yes, I did see that the removal of Symbol size alignment was >>>>>>>> also added). >>>>>>> >>>>>>> Good, thanks. >>>>>>> >>>>>>>>> >>>>>>>>> The bad news is that's more code to review. See above webrev >>>>>>>>> link. >>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> align_metadata_offset() is not used. It can be removed. >>>>>>>>>>> >>>>>>>>>>> Okay, I'll remove it. That's a good idea. >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>>>>>>> align_object_size() did, and not align to word size? Isn't >>>>>>>>>>>> that >>>>>>>>>>>> what we agreed to? Have you tested CDS? David had concerns >>>>>>>>>>>> about >>>>>>>>>>>> the InstanceKlass::size() not returning the same aligned >>>>>>>>>>>> size as >>>>>>>>>>>> Metachunk::object_alignment(). >>>>>>>>>>> >>>>>>>>>>> I ran the CDS tests but I could test some more with CDS. We >>>>>>>>>>> don't >>>>>>>>>>> want to force the size of objects to be 64 bit (especially >>>>>>>>>>> Symbol) >>>>>>>>>>> because Metachunk::object_alignment() is 64 bits. >>>>>>>>>> Do you mean "just" because? I wasn't necessarily suggesting that >>>>>>>>>> all metadata be 64-bit aligned. However, the ones that have >>>>>>>>>> their >>>>>>>>>> allocation size 64-bit aligned should be. I think David's >>>>>>>>>> concern >>>>>>>>>> is that he wrote code that computes how much memory is needed >>>>>>>>>> for >>>>>>>>>> the archive, and it uses size() for that. If the Metachunk >>>>>>>>>> allocator allocates more than size() due to the 64-bit >>>>>>>>>> alignment of >>>>>>>>>> Metachunk::object_alignment(), then he will underestimate the >>>>>>>>>> size. >>>>>>>>>> You'll need to double check with David to see if I got this >>>>>>>>>> right. >>>>>>>>> >>>>>>>>> I don't know what code this is but yes, it would be wrong. It >>>>>>>>> also >>>>>>>>> would be wrong if there's any other alignment gaps or space in >>>>>>>>> metaspace chunks because chunks themselves have an allocation >>>>>>>>> granularity. >>>>>>>>> >>>>>>>>> It could be changed back by changing the function >>>>>>>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>>>>>>> >>>>>>>>> I fixed Symbol so that it didn't call align_metaspace_size if >>>>>>>>> this >>>>>>>>> change is needed in the future. >>>>>>>>> >>>>>>>>> I was trying to limit the size of this change to correct >>>>>>>>> align_object_size for metadata. >>>>>>>> Well, there a few issues being addressed by fixing >>>>>>>> align_object_size. >>>>>>>> Using align_object_size was incorrect from a code purity >>>>>>>> standpoint >>>>>>>> (it was used on values unrelated to java objects), and was also >>>>>>>> incorrect when ObjectAlignmentInBytes was not 8. This was the main >>>>>>>> motivation for making this change. >>>>>>> >>>>>>> Exactly. This was higher priority because it was wrong. >>>>>>>> >>>>>>>> The 3rd issue is that align_object_size by default was doing 8 >>>>>>>> byte >>>>>>>> alignment, and this wastes memory on 32-bit. However, as I >>>>>>>> mentioned >>>>>>>> there may be some dependencies on this 8 byte alignment due to the >>>>>>>> metaspace allocator doing 8 byte alignment. 
If you can get >>>>>>>> David to >>>>>>>> say he's ok with just 4-byte size alignment on 32-bit, then I'm ok >>>>>>>> with this change. Otherwise I think maybe you should stay with >>>>>>>> 8 byte >>>>>>>> alignment (including symbols), and file a bug to someday change >>>>>>>> it to >>>>>>>> word alignment, and have the metaspace allocator require that you >>>>>>>> pass in alignment requirements. >>>>>>> >>>>>>> Okay, I can see what David says but I wouldn't change Symbol back. >>>>>>> That's mostly unrelated to metadata storage and I can get 32 bit >>>>>>> packing for symbols on 32 bit platforms. It probably saves more >>>>>>> space >>>>>>> than the other more invasive ideas that we've had. >>>>>> >>>>>> This is reviewed now. If David wants metadata sizing to change >>>>>> back to >>>>>> 64 bits on 32 bit platforms, it's a one line change. I'm going to >>>>>> push >>>>>> it to get the rest in. >>>>>> Thanks, >>>>>> Coleen >>>>>>> >>>>>>> Thanks, >>>>>>> Coleen >>>>>>> >>>>>>>>> >>>>>>>>> Thanks for looking at this in detail. >>>>>>>> No problem. Thanks for cleaning this up. >>>>>>>> >>>>>>>> Chris >>>>>>>>> >>>>>>>>> Coleen >>>>>>>>> >>>>>>>>> >>>>>>>>>>> Unfortunately, with the latter, metadata is never aligned on 32 >>>>>>>>>>> bit boundaries for 32 bit platforms, but to fix this, we >>>>>>>>>>> have to >>>>>>>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>>>>>>> because the alignment is not a function of the size of the >>>>>>>>>>> object >>>>>>>>>>> but what is required from its nonstatic data members. >>>>>>>>>> Correct. >>>>>>>>>>> I found MethodCounters, Klass (and subclasses) and >>>>>>>>>>> ConstantPool >>>>>>>>>>> has such alignment constraints. Not sizing metadata to 64 bit >>>>>>>>>>> sizes is a start for making this change. >>>>>>>>>> I agree with that, but just wanted to point out why David may be >>>>>>>>>> concerned with this change. >>>>>>>>>>>> >>>>>>>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>>>>>>> >>>>>>>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>>>>>>> Fixed, Thanks! >>>>>>>>>> thanks, >>>>>>>>>> >>>>>>>>>> Chris >>>>>>>>>>> >>>>>>>>>>> Coleen >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> thanks, >>>>>>>>>>>> >>>>>>>>>>>> Chris >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>>>>>>> Summary: Use align_metadata_size, align_metadata_offset and >>>>>>>>>>>>> is_metadata_aligned for metadata rather >>>>>>>>>>>>> than align_object_size, etc. Use wordSize rather than >>>>>>>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>>>>>>> rather than align_pointer_up (all the related functions >>>>>>>>>>>>> are ptr). >>>>>>>>>>>>> >>>>>>>>>>>>> Ran RBT quick tests on all platforms along with Chris's >>>>>>>>>>>>> Plummers >>>>>>>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>>>>>>> changes. Reran subset of this after merging. >>>>>>>>>>>>> >>>>>>>>>>>>> I have a script to update copyrights on commit. It's not a >>>>>>>>>>>>> big >>>>>>>>>>>>> change, just mostly boring. See the bug comments for more >>>>>>>>>>>>> details about the change. 
>>>>>>>>>>>>> >>>>>>>>>>>>> open webrev at >>>>>>>>>>>>> http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>>>>>>> >>>>>>>>>>>>> thanks, >>>>>>>>>>>>> Coleen >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>> >>> >> > From chris.plummer at oracle.com Mon Feb 1 21:20:06 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Mon, 1 Feb 2016 13:20:06 -0800 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AFC743.3030006@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> <56AEA467.6050801@oracle.com> <56AF9C3E.70407@oracle.com> <56AF9D8E.1070408@oracle.com> <56AFB7DD.8010500@oracle.com> <56AFB93B.2020007@oracle.com> <56AFC743.3030006@oracle.com> Message-ID: <56AFCC06.7010804@oracle.com> On 2/1/16 12:59 PM, Coleen Phillimore wrote: > > > On 2/1/16 2:59 PM, Chris Plummer wrote: >> On 2/1/16 11:54 AM, Chris Plummer wrote: >>> On 2/1/16 10:01 AM, Coleen Phillimore wrote: >>>> >>>> >>>> On 2/1/16 12:56 PM, Chris Plummer wrote: >>>>> It seems the allocators always align the size up to at least a >>>>> 64-bit boundary, so doesn't that make it pointless to attempt to >>>>> save memory by keeping the allocation request size word aligned >>>>> instead of 64-bit aligned? >>>> >>>> Sort of, except you need a size as a multiple of 32 bit words to >>>> potentially fix this, so it's a step towards that (if wanted). >>> What you need is (1) don't automatically pad the size up to 64-bit >>> alignment in the allocator, (2) don't pad the size up to 64-bit in >>> the size computations, and (3) the ability for the allocator to >>> maintain an unaligned "top" pointer, and to fix the alignment if >>> necessary during the allocation. This last one implies knowing the >>> alignment requirements of the caller, so that means either passing >>> in the alignment requirement or having allocators configured to the >>> alignment requirements of its users. > > Yes, exactly. I think we need to add another parameter to > Metaspace::allocate() to allow the caller to specify alignment > requirements. And I was looking at Amalloc() also, which does: x = ARENA_ALIGN(x); And then the following defines: #define ARENA_ALIGN_M1 (((size_t)(ARENA_AMALLOC_ALIGNMENT)) - 1) #define ARENA_ALIGN_MASK (~((size_t)ARENA_ALIGN_M1)) #define ARENA_ALIGN(x) ((((size_t)(x)) + ARENA_ALIGN_M1) & ARENA_ALIGN_MASK) #define ARENA_AMALLOC_ALIGNMENT (2*BytesPerWord) I think this all adds up to Amalloc doing 64-bit size alignment on 32-bit systems and 128-bit alignment on 64-bit systems. So I'm not so sure you Symbol changes are having an impact, at least not for Symbols allocated out of Arenas. If I'm reading the code right, symbols created for the null ClassLoader are allocated out of an arena and all others out of the C heap. Chris > >>> You need all 3 of these. Leave any one out and you don't recoup any >>> of the wasted memory. We were doing all 3. You eliminated at least >>> some of the cases of (2). >> Sorry, my wording near then end there was kind of backwards. I meant >> we were NOT doing any of the 3. You made is so in some cases we are >> now doing (2). > > True. 
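For concreteness, here is a minimal standalone sketch of the rounding arithmetic in the ARENA_ALIGN macros quoted above. The word sizes (4 bytes on a 32-bit VM, 8 bytes on a 64-bit VM) and the request sizes are illustrative assumptions; this is not the HotSpot arena code, only a worked example of the padding behaviour being described.

#include <cstdio>
#include <cstddef>

// Same arithmetic as ARENA_ALIGN(x), written as a function so the word size
// can be varied: round x up to a multiple of 2*BytesPerWord.
static size_t arena_align(size_t x, size_t bytes_per_word) {
  size_t alignment = 2 * bytes_per_word;          // ARENA_AMALLOC_ALIGNMENT
  return (x + alignment - 1) & ~(alignment - 1);  // ARENA_ALIGN_M1 / ARENA_ALIGN_MASK
}

int main() {
  const size_t word_sizes[]    = { 4, 8 };        // assumed 32-bit and 64-bit word sizes
  const size_t request_sizes[] = { 20, 24, 36 };  // arbitrary example allocation sizes, in bytes
  for (size_t w : word_sizes) {
    for (size_t x : request_sizes) {
      printf("BytesPerWord=%zu: Amalloc request of %zu bytes is padded to %zu bytes\n",
             w, x, arena_align(x, w));
    }
  }
  return 0;
}

With the 4-byte word every request rounds up to a multiple of 8, and with the 8-byte word to a multiple of 16, which is the 64-bit/128-bit size padding discussed above.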
Why I said "if wanted" above was that we'd need to file an RFE > to reclaim the wasted memory. > > Coleen > >> >> Chris >>> >>> Chris >>>> >>>> Coleen >>>> >>>>> >>>>> Chris >>>>> >>>>> On 1/31/16 4:18 PM, David Holmes wrote: >>>>>> Hi Coleen, >>>>>> >>>>>> I think what Chris was referring to was the CDS compaction work - >>>>>> which has since been abandoned. To be honest it has been so long >>>>>> since I was working on this that I can't recall the details. At >>>>>> one point Ioi commented how all MSO's were allocated with 8-byte >>>>>> alignment which was unnecessary, and that we could do better and >>>>>> account for it in the size() method. He also noted if we somehow >>>>>> messed up the alignment when doing this that it should be quickly >>>>>> detectable on sparc. >>>>>> >>>>>> These current changes will affect the apparent wasted space in >>>>>> the archive as the expected usage would be based on size() while >>>>>> the actual usage would be determined by the allocator. >>>>>> >>>>>> Ioi was really the best person to comment-on/review this. >>>>>> >>>>>> David >>>>>> ----- >>>>>> >>>>>> On 31/01/2016 12:27 AM, Coleen Phillimore wrote: >>>>>>> >>>>>>> >>>>>>> On 1/29/16 2:20 PM, Coleen Phillimore wrote: >>>>>>>> >>>>>>>> Thanks Chris, >>>>>>>> >>>>>>>> On 1/29/16 2:15 PM, Chris Plummer wrote: >>>>>>>>> Hi Coleen, >>>>>>>>> >>>>>>>>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>>>>>>>> >>>>>>>>>> Hi Chris, >>>>>>>>>> >>>>>>>>>> I made a few extra changes because of your question that I >>>>>>>>>> didn't >>>>>>>>>> answer below, a few HeapWordSize became wordSize. I apologize >>>>>>>>>> that >>>>>>>>>> I don't know how to create incremental webrevs. See >>>>>>>>>> discussion below. >>>>>>>>>> >>>>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>>>>>>>> >>>>>>>>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>>>>>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>>>>>>>> >>>>>>>>>>>> Thank you, Chris for looking at this change. >>>>>>>>>>>> >>>>>>>>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>>>>>>>> Hi Coleen, >>>>>>>>>>>>> >>>>>>>>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>>>>>>>> something other than 8? >>>>>>>>>>>> >>>>>>>>>>>> Okay, I can run one of the testsets with that. I verified >>>>>>>>>>>> it in >>>>>>>>>>>> the debugger mostly. >>>>>>>>>>>>> >>>>>>>>>>>>> Someone from GC team should apply your patch, grep for >>>>>>>>>>>>> align_object_size(), and confirm that the ones you didn't >>>>>>>>>>>>> change >>>>>>>>>>>>> are correct. I gave a quick look and they look right to >>>>>>>>>>>>> me, but I >>>>>>>>>>>>> wasn't always certain if object alignment was appropriate >>>>>>>>>>>>> in all >>>>>>>>>>>>> cases. >>>>>>>>>>>> >>>>>>>>>>>> thanks - this is why I'd changed the align_object_size to >>>>>>>>>>>> align_heap_object_size before testing and changed it back, to >>>>>>>>>>>> verify that I didn't miss any. >>>>>>>>>>>>> >>>>>>>>>>>>> I see some remaining HeapWordSize references that are >>>>>>>>>>>>> suspect, >>>>>>>>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go >>>>>>>>>>>>> through >>>>>>>>>>>>> all of them since there are about 428. Do they need closer >>>>>>>>>>>>> inspection? >>>>>>>>>>> ??? Any comment? >>>>>>>>>> >>>>>>>>>> Actually, I tried to get a lot of HeapWordSize in the >>>>>>>>>> metadata but >>>>>>>>>> the primary focus of the change, despite the title, was to fix >>>>>>>>>> align_object_size wasn't used on metadata. >>>>>>>>> ok. 
>>>>>>>>>> That said a quick look at the instances of HeapWordSize led >>>>>>>>>> to some >>>>>>>>>> that weren't in the heap. I didn't look in Array.java >>>>>>>>>> because it's >>>>>>>>>> in the SA which isn't maintainable anyway, but I changed a few. >>>>>>>>>> There were very few that were not referring to objects in the >>>>>>>>>> Java >>>>>>>>>> heap. bytecodeTracer was one and there were a couple in >>>>>>>>>> metaspace.cpp. >>>>>>>>> Ok. If you think there may be more, or a more thorough >>>>>>>>> analysis is >>>>>>>>> needed, perhaps just file a bug to get the rest later. >>>>>>>> >>>>>>>> From my look yesterday, there aren't a lot of HeapWordSize >>>>>>>> left. There >>>>>>>> are probably still a lot of HeapWord* casts for things that >>>>>>>> aren't in >>>>>>>> the Java heap. This is a bigger cleanup that might not make >>>>>>>> sense to >>>>>>>> do in one change, but maybe in incremental changes to related >>>>>>>> code. >>>>>>>> >>>>>>>>> >>>>>>>>> As for reviewing your incremental changes, as long as it was just >>>>>>>>> more changes of HeapWordSize to wordSize, I'm sure they are fine. >>>>>>>>> (And yes, I did see that the removal of Symbol size alignment was >>>>>>>>> also added). >>>>>>>> >>>>>>>> Good, thanks. >>>>>>>> >>>>>>>>>> >>>>>>>>>> The bad news is that's more code to review. See above webrev >>>>>>>>>> link. >>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> align_metadata_offset() is not used. It can be removed. >>>>>>>>>>>> >>>>>>>>>>>> Okay, I'll remove it. That's a good idea. >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>>>>>>>> align_object_size() did, and not align to word size? Isn't >>>>>>>>>>>>> that >>>>>>>>>>>>> what we agreed to? Have you tested CDS? David had concerns >>>>>>>>>>>>> about >>>>>>>>>>>>> the InstanceKlass::size() not returning the same aligned >>>>>>>>>>>>> size as >>>>>>>>>>>>> Metachunk::object_alignment(). >>>>>>>>>>>> >>>>>>>>>>>> I ran the CDS tests but I could test some more with CDS. We >>>>>>>>>>>> don't >>>>>>>>>>>> want to force the size of objects to be 64 bit (especially >>>>>>>>>>>> Symbol) >>>>>>>>>>>> because Metachunk::object_alignment() is 64 bits. >>>>>>>>>>> Do you mean "just" because? I wasn't necessarily suggesting >>>>>>>>>>> that >>>>>>>>>>> all metadata be 64-bit aligned. However, the ones that have >>>>>>>>>>> their >>>>>>>>>>> allocation size 64-bit aligned should be. I think David's >>>>>>>>>>> concern >>>>>>>>>>> is that he wrote code that computes how much memory is >>>>>>>>>>> needed for >>>>>>>>>>> the archive, and it uses size() for that. If the Metachunk >>>>>>>>>>> allocator allocates more than size() due to the 64-bit >>>>>>>>>>> alignment of >>>>>>>>>>> Metachunk::object_alignment(), then he will underestimate >>>>>>>>>>> the size. >>>>>>>>>>> You'll need to double check with David to see if I got this >>>>>>>>>>> right. >>>>>>>>>> >>>>>>>>>> I don't know what code this is but yes, it would be wrong. >>>>>>>>>> It also >>>>>>>>>> would be wrong if there's any other alignment gaps or space in >>>>>>>>>> metaspace chunks because chunks themselves have an allocation >>>>>>>>>> granularity. >>>>>>>>>> >>>>>>>>>> It could be changed back by changing the function >>>>>>>>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>>>>>>>> >>>>>>>>>> I fixed Symbol so that it didn't call align_metaspace_size if >>>>>>>>>> this >>>>>>>>>> change is needed in the future. 
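A rough sketch of the kind of helper that the "one line change" above refers to. The names (align_metadata_size, WordsPerLong) are taken from the discussion; the body below only illustrates the knob, it is not the actual patch.

#include <cstddef>

// Round a size given in words up to a multiple of 'alignment' (also in words).
// 'alignment' must be a power of two.
inline size_t align_words_up(size_t word_size, size_t alignment) {
  return (word_size + alignment - 1) & ~(alignment - 1);
}

// With an alignment of 1 a metadata size is only padded out to a whole word;
// substituting the number of words in a 64-bit long (2 on a 32-bit VM, 1 on a
// 64-bit VM) would restore 64-bit size alignment -- the one-line change
// mentioned in the thread.
inline size_t align_metadata_size(size_t word_size) {
  const size_t metadata_size_alignment = 1;   // hypothetically: WordsPerLong
  return align_words_up(word_size, metadata_size_alignment);
}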
>>>>>>>>>> >>>>>>>>>> I was trying to limit the size of this change to correct >>>>>>>>>> align_object_size for metadata. >>>>>>>>> Well, there a few issues being addressed by fixing >>>>>>>>> align_object_size. >>>>>>>>> Using align_object_size was incorrect from a code purity >>>>>>>>> standpoint >>>>>>>>> (it was used on values unrelated to java objects), and was also >>>>>>>>> incorrect when ObjectAlignmentInBytes was not 8. This was the >>>>>>>>> main >>>>>>>>> motivation for making this change. >>>>>>>> >>>>>>>> Exactly. This was higher priority because it was wrong. >>>>>>>>> >>>>>>>>> The 3rd issue is that align_object_size by default was doing 8 >>>>>>>>> byte >>>>>>>>> alignment, and this wastes memory on 32-bit. However, as I >>>>>>>>> mentioned >>>>>>>>> there may be some dependencies on this 8 byte alignment due to >>>>>>>>> the >>>>>>>>> metaspace allocator doing 8 byte alignment. If you can get >>>>>>>>> David to >>>>>>>>> say he's ok with just 4-byte size alignment on 32-bit, then >>>>>>>>> I'm ok >>>>>>>>> with this change. Otherwise I think maybe you should stay with >>>>>>>>> 8 byte >>>>>>>>> alignment (including symbols), and file a bug to someday >>>>>>>>> change it to >>>>>>>>> word alignment, and have the metaspace allocator require that you >>>>>>>>> pass in alignment requirements. >>>>>>>> >>>>>>>> Okay, I can see what David says but I wouldn't change Symbol back. >>>>>>>> That's mostly unrelated to metadata storage and I can get 32 bit >>>>>>>> packing for symbols on 32 bit platforms. It probably saves >>>>>>>> more space >>>>>>>> than the other more invasive ideas that we've had. >>>>>>> >>>>>>> This is reviewed now. If David wants metadata sizing to change >>>>>>> back to >>>>>>> 64 bits on 32 bit platforms, it's a one line change. I'm going >>>>>>> to push >>>>>>> it to get the rest in. >>>>>>> Thanks, >>>>>>> Coleen >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Coleen >>>>>>>> >>>>>>>>>> >>>>>>>>>> Thanks for looking at this in detail. >>>>>>>>> No problem. Thanks for cleaning this up. >>>>>>>>> >>>>>>>>> Chris >>>>>>>>>> >>>>>>>>>> Coleen >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>> Unfortunately, with the latter, metadata is never aligned >>>>>>>>>>>> on 32 >>>>>>>>>>>> bit boundaries for 32 bit platforms, but to fix this, we >>>>>>>>>>>> have to >>>>>>>>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>>>>>>>> because the alignment is not a function of the size of the >>>>>>>>>>>> object >>>>>>>>>>>> but what is required from its nonstatic data members. >>>>>>>>>>> Correct. >>>>>>>>>>>> I found MethodCounters, Klass (and subclasses) and >>>>>>>>>>>> ConstantPool >>>>>>>>>>>> has such alignment constraints. Not sizing metadata to 64 bit >>>>>>>>>>>> sizes is a start for making this change. >>>>>>>>>>> I agree with that, but just wanted to point out why David >>>>>>>>>>> may be >>>>>>>>>>> concerned with this change. >>>>>>>>>>>>> >>>>>>>>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>>>>>>>> >>>>>>>>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>>>>>>>> Fixed, Thanks! >>>>>>>>>>> thanks, >>>>>>>>>>> >>>>>>>>>>> Chris >>>>>>>>>>>> >>>>>>>>>>>> Coleen >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> thanks, >>>>>>>>>>>>> >>>>>>>>>>>>> Chris >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>>>>>>>> Summary: Use align_metadata_size, align_metadata_offset and >>>>>>>>>>>>>> is_metadata_aligned for metadata rather >>>>>>>>>>>>>> than align_object_size, etc. 
Use wordSize rather than >>>>>>>>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>>>>>>>> rather than align_pointer_up (all the related functions >>>>>>>>>>>>>> are ptr). >>>>>>>>>>>>>> >>>>>>>>>>>>>> Ran RBT quick tests on all platforms along with Chris's >>>>>>>>>>>>>> Plummers >>>>>>>>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>>>>>>>> changes. Reran subset of this after merging. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I have a script to update copyrights on commit. It's not >>>>>>>>>>>>>> a big >>>>>>>>>>>>>> change, just mostly boring. See the bug comments for more >>>>>>>>>>>>>> details about the change. >>>>>>>>>>>>>> >>>>>>>>>>>>>> open webrev at >>>>>>>>>>>>>> http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>>>>>>>> >>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>> Coleen >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>> >>>> >>> >> > From coleen.phillimore at oracle.com Mon Feb 1 21:24:08 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 1 Feb 2016 16:24:08 -0500 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AFCC06.7010804@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> <56AEA467.6050801@oracle.com> <56AF9C3E.70407@oracle.com> <56AF9D8E.1070408@oracle.com> <56AFB7DD.8010500@oracle.com> <56AFB93B.2020007@oracle.com> <56AFC743.3030006@oracle.com> <56AFCC06.7010804@oracle.com> Message-ID: <56AFCCF8.3010805@oracle.com> On 2/1/16 4:20 PM, Chris Plummer wrote: > On 2/1/16 12:59 PM, Coleen Phillimore wrote: >> >> >> On 2/1/16 2:59 PM, Chris Plummer wrote: >>> On 2/1/16 11:54 AM, Chris Plummer wrote: >>>> On 2/1/16 10:01 AM, Coleen Phillimore wrote: >>>>> >>>>> >>>>> On 2/1/16 12:56 PM, Chris Plummer wrote: >>>>>> It seems the allocators always align the size up to at least a >>>>>> 64-bit boundary, so doesn't that make it pointless to attempt to >>>>>> save memory by keeping the allocation request size word aligned >>>>>> instead of 64-bit aligned? >>>>> >>>>> Sort of, except you need a size as a multiple of 32 bit words to >>>>> potentially fix this, so it's a step towards that (if wanted). >>>> What you need is (1) don't automatically pad the size up to 64-bit >>>> alignment in the allocator, (2) don't pad the size up to 64-bit in >>>> the size computations, and (3) the ability for the allocator to >>>> maintain an unaligned "top" pointer, and to fix the alignment if >>>> necessary during the allocation. This last one implies knowing the >>>> alignment requirements of the caller, so that means either passing >>>> in the alignment requirement or having allocators configured to the >>>> alignment requirements of its users. >> >> Yes, exactly. I think we need to add another parameter to >> Metaspace::allocate() to allow the caller to specify alignment >> requirements. 
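A toy model of what that extra parameter could look like, combined with point (3) from earlier in the thread (an allocator whose top pointer is only as aligned as its callers require). All names and the structure are hypothetical; this is a sketch of the idea, not HotSpot's Metaspace code.

#include <cstddef>
#include <cstdint>
#include <cassert>

// Hypothetical bump allocator: the caller passes its minimum alignment, the
// top pointer is only realigned when a caller actually needs it, and the
// request size is not padded up to 64 bits by the allocator itself.
class ToyMetaspaceArena {
  uintptr_t _top;
  uintptr_t _end;
 public:
  ToyMetaspaceArena(void* base, size_t bytes)
    : _top((uintptr_t)base), _end((uintptr_t)base + bytes) {}

  // word_size is in machine words (a word assumed to be sizeof(void*) bytes);
  // min_alignment_bytes would be e.g. 8 for metadata containing 64-bit fields
  // and sizeof(void*) for everything else.
  void* allocate(size_t word_size, size_t min_alignment_bytes) {
    assert((min_alignment_bytes & (min_alignment_bytes - 1)) == 0);  // power of two
    uintptr_t aligned = (_top + min_alignment_bytes - 1) & ~(uintptr_t)(min_alignment_bytes - 1);
    uintptr_t new_top = aligned + word_size * sizeof(void*);
    if (new_top > _end) return NULL;   // a real allocator would grab a new chunk here
    _top = new_top;                    // top stays only word aligned if callers allow it
    return (void*)aligned;
  }
};

On a 32-bit VM such an allocator would let word-aligned Symbol-sized requests pack on 4-byte boundaries, while a following request that asks for 8-byte alignment (Klass, ConstantPool, MethodCounters per the constraints listed in this thread) still gets it.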
> And I was looking at Amalloc() also, which does: > > x = ARENA_ALIGN(x); > > And then the following defines: > > #define ARENA_ALIGN_M1 (((size_t)(ARENA_AMALLOC_ALIGNMENT)) - 1) > #define ARENA_ALIGN_MASK (~((size_t)ARENA_ALIGN_M1)) > #define ARENA_ALIGN(x) ((((size_t)(x)) + ARENA_ALIGN_M1) & > ARENA_ALIGN_MASK) > > #define ARENA_AMALLOC_ALIGNMENT (2*BytesPerWord) > > I think this all adds up to Amalloc doing 64-bit size alignment on > 32-bit systems and 128-bit alignment on 64-bit systems. So I'm not so > sure you Symbol changes are having an impact, at least not for Symbols > allocated out of Arenas. If I'm reading the code right, symbols > created for the null ClassLoader are allocated out of an arena and all > others out of the C heap. The Symbol arena version of operator 'new' calls Amalloc_4. Not sure about 128 bit alignment for 64 bit systems, is that right? Coleen > > Chris >> >>>> You need all 3 of these. Leave any one out and you don't recoup any >>>> of the wasted memory. We were doing all 3. You eliminated at least >>>> some of the cases of (2). >>> Sorry, my wording near then end there was kind of backwards. I meant >>> we were NOT doing any of the 3. You made is so in some cases we are >>> now doing (2). >> >> True. Why I said "if wanted" above was that we'd need to file an RFE >> to reclaim the wasted memory. >> >> Coleen >> >>> >>> Chris >>>> >>>> Chris >>>>> >>>>> Coleen >>>>> >>>>>> >>>>>> Chris >>>>>> >>>>>> On 1/31/16 4:18 PM, David Holmes wrote: >>>>>>> Hi Coleen, >>>>>>> >>>>>>> I think what Chris was referring to was the CDS compaction work >>>>>>> - which has since been abandoned. To be honest it has been so >>>>>>> long since I was working on this that I can't recall the >>>>>>> details. At one point Ioi commented how all MSO's were allocated >>>>>>> with 8-byte alignment which was unnecessary, and that we could >>>>>>> do better and account for it in the size() method. He also noted >>>>>>> if we somehow messed up the alignment when doing this that it >>>>>>> should be quickly detectable on sparc. >>>>>>> >>>>>>> These current changes will affect the apparent wasted space in >>>>>>> the archive as the expected usage would be based on size() while >>>>>>> the actual usage would be determined by the allocator. >>>>>>> >>>>>>> Ioi was really the best person to comment-on/review this. >>>>>>> >>>>>>> David >>>>>>> ----- >>>>>>> >>>>>>> On 31/01/2016 12:27 AM, Coleen Phillimore wrote: >>>>>>>> >>>>>>>> >>>>>>>> On 1/29/16 2:20 PM, Coleen Phillimore wrote: >>>>>>>>> >>>>>>>>> Thanks Chris, >>>>>>>>> >>>>>>>>> On 1/29/16 2:15 PM, Chris Plummer wrote: >>>>>>>>>> Hi Coleen, >>>>>>>>>> >>>>>>>>>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>>>>>>>>> >>>>>>>>>>> Hi Chris, >>>>>>>>>>> >>>>>>>>>>> I made a few extra changes because of your question that I >>>>>>>>>>> didn't >>>>>>>>>>> answer below, a few HeapWordSize became wordSize. I >>>>>>>>>>> apologize that >>>>>>>>>>> I don't know how to create incremental webrevs. See >>>>>>>>>>> discussion below. >>>>>>>>>>> >>>>>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>>>>>>>>> >>>>>>>>>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>>>>>>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> Thank you, Chris for looking at this change. >>>>>>>>>>>>> >>>>>>>>>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>>>>>>>>> Hi Coleen, >>>>>>>>>>>>>> >>>>>>>>>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>>>>>>>>> something other than 8? 
>>>>>>>>>>>>> >>>>>>>>>>>>> Okay, I can run one of the testsets with that. I verified >>>>>>>>>>>>> it in >>>>>>>>>>>>> the debugger mostly. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Someone from GC team should apply your patch, grep for >>>>>>>>>>>>>> align_object_size(), and confirm that the ones you didn't >>>>>>>>>>>>>> change >>>>>>>>>>>>>> are correct. I gave a quick look and they look right to >>>>>>>>>>>>>> me, but I >>>>>>>>>>>>>> wasn't always certain if object alignment was appropriate >>>>>>>>>>>>>> in all >>>>>>>>>>>>>> cases. >>>>>>>>>>>>> >>>>>>>>>>>>> thanks - this is why I'd changed the align_object_size to >>>>>>>>>>>>> align_heap_object_size before testing and changed it back, to >>>>>>>>>>>>> verify that I didn't miss any. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I see some remaining HeapWordSize references that are >>>>>>>>>>>>>> suspect, >>>>>>>>>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go >>>>>>>>>>>>>> through >>>>>>>>>>>>>> all of them since there are about 428. Do they need closer >>>>>>>>>>>>>> inspection? >>>>>>>>>>>> ??? Any comment? >>>>>>>>>>> >>>>>>>>>>> Actually, I tried to get a lot of HeapWordSize in the >>>>>>>>>>> metadata but >>>>>>>>>>> the primary focus of the change, despite the title, was to fix >>>>>>>>>>> align_object_size wasn't used on metadata. >>>>>>>>>> ok. >>>>>>>>>>> That said a quick look at the instances of HeapWordSize led >>>>>>>>>>> to some >>>>>>>>>>> that weren't in the heap. I didn't look in Array.java >>>>>>>>>>> because it's >>>>>>>>>>> in the SA which isn't maintainable anyway, but I changed a few. >>>>>>>>>>> There were very few that were not referring to objects in >>>>>>>>>>> the Java >>>>>>>>>>> heap. bytecodeTracer was one and there were a couple in >>>>>>>>>>> metaspace.cpp. >>>>>>>>>> Ok. If you think there may be more, or a more thorough >>>>>>>>>> analysis is >>>>>>>>>> needed, perhaps just file a bug to get the rest later. >>>>>>>>> >>>>>>>>> From my look yesterday, there aren't a lot of HeapWordSize >>>>>>>>> left. There >>>>>>>>> are probably still a lot of HeapWord* casts for things that >>>>>>>>> aren't in >>>>>>>>> the Java heap. This is a bigger cleanup that might not make >>>>>>>>> sense to >>>>>>>>> do in one change, but maybe in incremental changes to related >>>>>>>>> code. >>>>>>>>> >>>>>>>>>> >>>>>>>>>> As for reviewing your incremental changes, as long as it was >>>>>>>>>> just >>>>>>>>>> more changes of HeapWordSize to wordSize, I'm sure they are >>>>>>>>>> fine. >>>>>>>>>> (And yes, I did see that the removal of Symbol size alignment >>>>>>>>>> was >>>>>>>>>> also added). >>>>>>>>> >>>>>>>>> Good, thanks. >>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> The bad news is that's more code to review. See above webrev >>>>>>>>>>> link. >>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> align_metadata_offset() is not used. It can be removed. >>>>>>>>>>>>> >>>>>>>>>>>>> Okay, I'll remove it. That's a good idea. >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>>>>>>>>> align_object_size() did, and not align to word size? >>>>>>>>>>>>>> Isn't that >>>>>>>>>>>>>> what we agreed to? Have you tested CDS? David had >>>>>>>>>>>>>> concerns about >>>>>>>>>>>>>> the InstanceKlass::size() not returning the same aligned >>>>>>>>>>>>>> size as >>>>>>>>>>>>>> Metachunk::object_alignment(). >>>>>>>>>>>>> >>>>>>>>>>>>> I ran the CDS tests but I could test some more with CDS. 
>>>>>>>>>>>>> We don't >>>>>>>>>>>>> want to force the size of objects to be 64 bit (especially >>>>>>>>>>>>> Symbol) >>>>>>>>>>>>> because Metachunk::object_alignment() is 64 bits. >>>>>>>>>>>> Do you mean "just" because? I wasn't necessarily suggesting >>>>>>>>>>>> that >>>>>>>>>>>> all metadata be 64-bit aligned. However, the ones that have >>>>>>>>>>>> their >>>>>>>>>>>> allocation size 64-bit aligned should be. I think David's >>>>>>>>>>>> concern >>>>>>>>>>>> is that he wrote code that computes how much memory is >>>>>>>>>>>> needed for >>>>>>>>>>>> the archive, and it uses size() for that. If the Metachunk >>>>>>>>>>>> allocator allocates more than size() due to the 64-bit >>>>>>>>>>>> alignment of >>>>>>>>>>>> Metachunk::object_alignment(), then he will underestimate >>>>>>>>>>>> the size. >>>>>>>>>>>> You'll need to double check with David to see if I got this >>>>>>>>>>>> right. >>>>>>>>>>> >>>>>>>>>>> I don't know what code this is but yes, it would be wrong. >>>>>>>>>>> It also >>>>>>>>>>> would be wrong if there's any other alignment gaps or space in >>>>>>>>>>> metaspace chunks because chunks themselves have an allocation >>>>>>>>>>> granularity. >>>>>>>>>>> >>>>>>>>>>> It could be changed back by changing the function >>>>>>>>>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>>>>>>>>> >>>>>>>>>>> I fixed Symbol so that it didn't call align_metaspace_size >>>>>>>>>>> if this >>>>>>>>>>> change is needed in the future. >>>>>>>>>>> >>>>>>>>>>> I was trying to limit the size of this change to correct >>>>>>>>>>> align_object_size for metadata. >>>>>>>>>> Well, there a few issues being addressed by fixing >>>>>>>>>> align_object_size. >>>>>>>>>> Using align_object_size was incorrect from a code purity >>>>>>>>>> standpoint >>>>>>>>>> (it was used on values unrelated to java objects), and was also >>>>>>>>>> incorrect when ObjectAlignmentInBytes was not 8. This was the >>>>>>>>>> main >>>>>>>>>> motivation for making this change. >>>>>>>>> >>>>>>>>> Exactly. This was higher priority because it was wrong. >>>>>>>>>> >>>>>>>>>> The 3rd issue is that align_object_size by default was doing >>>>>>>>>> 8 byte >>>>>>>>>> alignment, and this wastes memory on 32-bit. However, as I >>>>>>>>>> mentioned >>>>>>>>>> there may be some dependencies on this 8 byte alignment due >>>>>>>>>> to the >>>>>>>>>> metaspace allocator doing 8 byte alignment. If you can get >>>>>>>>>> David to >>>>>>>>>> say he's ok with just 4-byte size alignment on 32-bit, then >>>>>>>>>> I'm ok >>>>>>>>>> with this change. Otherwise I think maybe you should stay >>>>>>>>>> with 8 byte >>>>>>>>>> alignment (including symbols), and file a bug to someday >>>>>>>>>> change it to >>>>>>>>>> word alignment, and have the metaspace allocator require that >>>>>>>>>> you >>>>>>>>>> pass in alignment requirements. >>>>>>>>> >>>>>>>>> Okay, I can see what David says but I wouldn't change Symbol >>>>>>>>> back. >>>>>>>>> That's mostly unrelated to metadata storage and I can get 32 bit >>>>>>>>> packing for symbols on 32 bit platforms. It probably saves >>>>>>>>> more space >>>>>>>>> than the other more invasive ideas that we've had. >>>>>>>> >>>>>>>> This is reviewed now. If David wants metadata sizing to change >>>>>>>> back to >>>>>>>> 64 bits on 32 bit platforms, it's a one line change. I'm going >>>>>>>> to push >>>>>>>> it to get the rest in. >>>>>>>> Thanks, >>>>>>>> Coleen >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Coleen >>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks for looking at this in detail. 
>>>>>>>>>> No problem. Thanks for cleaning this up. >>>>>>>>>> >>>>>>>>>> Chris >>>>>>>>>>> >>>>>>>>>>> Coleen >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>>> Unfortunately, with the latter, metadata is never aligned >>>>>>>>>>>>> on 32 >>>>>>>>>>>>> bit boundaries for 32 bit platforms, but to fix this, we >>>>>>>>>>>>> have to >>>>>>>>>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>>>>>>>>> because the alignment is not a function of the size of the >>>>>>>>>>>>> object >>>>>>>>>>>>> but what is required from its nonstatic data members. >>>>>>>>>>>> Correct. >>>>>>>>>>>>> I found MethodCounters, Klass (and subclasses) and >>>>>>>>>>>>> ConstantPool >>>>>>>>>>>>> has such alignment constraints. Not sizing metadata to 64 bit >>>>>>>>>>>>> sizes is a start for making this change. >>>>>>>>>>>> I agree with that, but just wanted to point out why David >>>>>>>>>>>> may be >>>>>>>>>>>> concerned with this change. >>>>>>>>>>>>>> >>>>>>>>>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>>>>>>>>> >>>>>>>>>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>>>>>>>>> Fixed, Thanks! >>>>>>>>>>>> thanks, >>>>>>>>>>>> >>>>>>>>>>>> Chris >>>>>>>>>>>>> >>>>>>>>>>>>> Coleen >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>> >>>>>>>>>>>>>> Chris >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>>>>>>>>> Summary: Use align_metadata_size, align_metadata_offset and >>>>>>>>>>>>>>> is_metadata_aligned for metadata rather >>>>>>>>>>>>>>> than align_object_size, etc. Use wordSize rather than >>>>>>>>>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>>>>>>>>> rather than align_pointer_up (all the related functions >>>>>>>>>>>>>>> are ptr). >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Ran RBT quick tests on all platforms along with Chris's >>>>>>>>>>>>>>> Plummers >>>>>>>>>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>>>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>>>>>>>>> changes. Reran subset of this after merging. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I have a script to update copyrights on commit. It's not >>>>>>>>>>>>>>> a big >>>>>>>>>>>>>>> change, just mostly boring. See the bug comments for more >>>>>>>>>>>>>>> details about the change. 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> open webrev at >>>>>>>>>>>>>>> http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>>> Coleen >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>> >>>>> >>>> >>> >> > From chris.plummer at oracle.com Mon Feb 1 21:35:52 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Mon, 1 Feb 2016 13:35:52 -0800 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AFCCF8.3010805@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> <56AEA467.6050801@oracle.com> <56AF9C3E.70407@oracle.com> <56AF9D8E.1070408@oracle.com> <56AFB7DD.8010500@oracle.com> <56AFB93B.2020007@oracle.com> <56AFC743.3030006@oracle.com> <56AFCC06.7010804@oracle.com> <56AFCCF8.3010805@oracle.com> Message-ID: <56AFCFB8.4040902@oracle.com> On 2/1/16 1:24 PM, Coleen Phillimore wrote: > > > On 2/1/16 4:20 PM, Chris Plummer wrote: >> On 2/1/16 12:59 PM, Coleen Phillimore wrote: >>> >>> >>> On 2/1/16 2:59 PM, Chris Plummer wrote: >>>> On 2/1/16 11:54 AM, Chris Plummer wrote: >>>>> On 2/1/16 10:01 AM, Coleen Phillimore wrote: >>>>>> >>>>>> >>>>>> On 2/1/16 12:56 PM, Chris Plummer wrote: >>>>>>> It seems the allocators always align the size up to at least a >>>>>>> 64-bit boundary, so doesn't that make it pointless to attempt to >>>>>>> save memory by keeping the allocation request size word aligned >>>>>>> instead of 64-bit aligned? >>>>>> >>>>>> Sort of, except you need a size as a multiple of 32 bit words to >>>>>> potentially fix this, so it's a step towards that (if wanted). >>>>> What you need is (1) don't automatically pad the size up to 64-bit >>>>> alignment in the allocator, (2) don't pad the size up to 64-bit in >>>>> the size computations, and (3) the ability for the allocator to >>>>> maintain an unaligned "top" pointer, and to fix the alignment if >>>>> necessary during the allocation. This last one implies knowing the >>>>> alignment requirements of the caller, so that means either passing >>>>> in the alignment requirement or having allocators configured to >>>>> the alignment requirements of its users. >>> >>> Yes, exactly. I think we need to add another parameter to >>> Metaspace::allocate() to allow the caller to specify alignment >>> requirements. >> And I was looking at Amalloc() also, which does: >> >> x = ARENA_ALIGN(x); >> >> And then the following defines: >> >> #define ARENA_ALIGN_M1 (((size_t)(ARENA_AMALLOC_ALIGNMENT)) - 1) >> #define ARENA_ALIGN_MASK (~((size_t)ARENA_ALIGN_M1)) >> #define ARENA_ALIGN(x) ((((size_t)(x)) + ARENA_ALIGN_M1) & >> ARENA_ALIGN_MASK) >> >> #define ARENA_AMALLOC_ALIGNMENT (2*BytesPerWord) >> >> I think this all adds up to Amalloc doing 64-bit size alignment on >> 32-bit systems and 128-bit alignment on 64-bit systems. So I'm not so >> sure you Symbol changes are having an impact, at least not for >> Symbols allocated out of Arenas. If I'm reading the code right, >> symbols created for the null ClassLoader are allocated out of an >> arena and all others out of the C heap. > > The Symbol arena version of operator 'new' calls Amalloc_4. 
Ah, I missed that Amalloc_4 does not do the size aligning. Interesting that Amalloc_4 requires the size to be 4 bytes aligned, but then Amalloc has no such requirement but will align to 2x the word size. Lastly Amalloc_D aligns to 32-bit except on 32-bit sparc, where it aligns to 64-bit. Sounds suspect. I thought 32-bit ARM VFP required doubles to be 64-bit aligned. > Not sure about 128 bit alignment for 64 bit systems, is that right? #ifdef _LP64 const int LogBytesPerWord = 3; #else const int LogBytesPerWord = 2; #endif So this means BytesPerWord is 8 on 64-bit, and 2*BytesPerWord is 16 (128-bit). Chris > > Coleen > >> >> Chris >>> >>>>> You need all 3 of these. Leave any one out and you don't recoup >>>>> any of the wasted memory. We were doing all 3. You eliminated at >>>>> least some of the cases of (2). >>>> Sorry, my wording near then end there was kind of backwards. I >>>> meant we were NOT doing any of the 3. You made is so in some cases >>>> we are now doing (2). >>> >>> True. Why I said "if wanted" above was that we'd need to file an >>> RFE to reclaim the wasted memory. >>> >>> Coleen >>> >>>> >>>> Chris >>>>> >>>>> Chris >>>>>> >>>>>> Coleen >>>>>> >>>>>>> >>>>>>> Chris >>>>>>> >>>>>>> On 1/31/16 4:18 PM, David Holmes wrote: >>>>>>>> Hi Coleen, >>>>>>>> >>>>>>>> I think what Chris was referring to was the CDS compaction work >>>>>>>> - which has since been abandoned. To be honest it has been so >>>>>>>> long since I was working on this that I can't recall the >>>>>>>> details. At one point Ioi commented how all MSO's were >>>>>>>> allocated with 8-byte alignment which was unnecessary, and that >>>>>>>> we could do better and account for it in the size() method. He >>>>>>>> also noted if we somehow messed up the alignment when doing >>>>>>>> this that it should be quickly detectable on sparc. >>>>>>>> >>>>>>>> These current changes will affect the apparent wasted space in >>>>>>>> the archive as the expected usage would be based on size() >>>>>>>> while the actual usage would be determined by the allocator. >>>>>>>> >>>>>>>> Ioi was really the best person to comment-on/review this. >>>>>>>> >>>>>>>> David >>>>>>>> ----- >>>>>>>> >>>>>>>> On 31/01/2016 12:27 AM, Coleen Phillimore wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> On 1/29/16 2:20 PM, Coleen Phillimore wrote: >>>>>>>>>> >>>>>>>>>> Thanks Chris, >>>>>>>>>> >>>>>>>>>> On 1/29/16 2:15 PM, Chris Plummer wrote: >>>>>>>>>>> Hi Coleen, >>>>>>>>>>> >>>>>>>>>>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>>>>>>>>>> >>>>>>>>>>>> Hi Chris, >>>>>>>>>>>> >>>>>>>>>>>> I made a few extra changes because of your question that I >>>>>>>>>>>> didn't >>>>>>>>>>>> answer below, a few HeapWordSize became wordSize. I >>>>>>>>>>>> apologize that >>>>>>>>>>>> I don't know how to create incremental webrevs. See >>>>>>>>>>>> discussion below. >>>>>>>>>>>> >>>>>>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>>>>>>>>>> >>>>>>>>>>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>>>>>>>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thank you, Chris for looking at this change. >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>>>>>>>>>> Hi Coleen, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>>>>>>>>>> something other than 8? >>>>>>>>>>>>>> >>>>>>>>>>>>>> Okay, I can run one of the testsets with that. I verified >>>>>>>>>>>>>> it in >>>>>>>>>>>>>> the debugger mostly. 
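A quick check of that arithmetic, assuming BytesPerWord is derived as 1 << LogBytesPerWord (which the conclusion above implies); this only restates the quoted constants.

#include <cstdio>

// BytesPerWord = 1 << LogBytesPerWord, so the arena's 2*BytesPerWord padding
// works out to 8 bytes on 32-bit and 16 bytes (128 bits) on 64-bit.
int main() {
  const int log_bytes_per_word[] = { 2, 3 };   // 32-bit, _LP64
  for (int i = 0; i < 2; i++) {
    int bytes_per_word = 1 << log_bytes_per_word[i];
    printf("LogBytesPerWord=%d -> BytesPerWord=%d -> ARENA_AMALLOC_ALIGNMENT=%d bytes\n",
           log_bytes_per_word[i], bytes_per_word, 2 * bytes_per_word);
  }
  return 0;
}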
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Someone from GC team should apply your patch, grep for >>>>>>>>>>>>>>> align_object_size(), and confirm that the ones you >>>>>>>>>>>>>>> didn't change >>>>>>>>>>>>>>> are correct. I gave a quick look and they look right to >>>>>>>>>>>>>>> me, but I >>>>>>>>>>>>>>> wasn't always certain if object alignment was >>>>>>>>>>>>>>> appropriate in all >>>>>>>>>>>>>>> cases. >>>>>>>>>>>>>> >>>>>>>>>>>>>> thanks - this is why I'd changed the align_object_size to >>>>>>>>>>>>>> align_heap_object_size before testing and changed it >>>>>>>>>>>>>> back, to >>>>>>>>>>>>>> verify that I didn't miss any. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I see some remaining HeapWordSize references that are >>>>>>>>>>>>>>> suspect, >>>>>>>>>>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go >>>>>>>>>>>>>>> through >>>>>>>>>>>>>>> all of them since there are about 428. Do they need closer >>>>>>>>>>>>>>> inspection? >>>>>>>>>>>>> ??? Any comment? >>>>>>>>>>>> >>>>>>>>>>>> Actually, I tried to get a lot of HeapWordSize in the >>>>>>>>>>>> metadata but >>>>>>>>>>>> the primary focus of the change, despite the title, was to fix >>>>>>>>>>>> align_object_size wasn't used on metadata. >>>>>>>>>>> ok. >>>>>>>>>>>> That said a quick look at the instances of HeapWordSize led >>>>>>>>>>>> to some >>>>>>>>>>>> that weren't in the heap. I didn't look in Array.java >>>>>>>>>>>> because it's >>>>>>>>>>>> in the SA which isn't maintainable anyway, but I changed a >>>>>>>>>>>> few. >>>>>>>>>>>> There were very few that were not referring to objects in >>>>>>>>>>>> the Java >>>>>>>>>>>> heap. bytecodeTracer was one and there were a couple in >>>>>>>>>>>> metaspace.cpp. >>>>>>>>>>> Ok. If you think there may be more, or a more thorough >>>>>>>>>>> analysis is >>>>>>>>>>> needed, perhaps just file a bug to get the rest later. >>>>>>>>>> >>>>>>>>>> From my look yesterday, there aren't a lot of HeapWordSize >>>>>>>>>> left. There >>>>>>>>>> are probably still a lot of HeapWord* casts for things that >>>>>>>>>> aren't in >>>>>>>>>> the Java heap. This is a bigger cleanup that might not make >>>>>>>>>> sense to >>>>>>>>>> do in one change, but maybe in incremental changes to related >>>>>>>>>> code. >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> As for reviewing your incremental changes, as long as it was >>>>>>>>>>> just >>>>>>>>>>> more changes of HeapWordSize to wordSize, I'm sure they are >>>>>>>>>>> fine. >>>>>>>>>>> (And yes, I did see that the removal of Symbol size >>>>>>>>>>> alignment was >>>>>>>>>>> also added). >>>>>>>>>> >>>>>>>>>> Good, thanks. >>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> The bad news is that's more code to review. See above >>>>>>>>>>>> webrev link. >>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> align_metadata_offset() is not used. It can be removed. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Okay, I'll remove it. That's a good idea. >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>>>>>>>>>> align_object_size() did, and not align to word size? >>>>>>>>>>>>>>> Isn't that >>>>>>>>>>>>>>> what we agreed to? Have you tested CDS? David had >>>>>>>>>>>>>>> concerns about >>>>>>>>>>>>>>> the InstanceKlass::size() not returning the same aligned >>>>>>>>>>>>>>> size as >>>>>>>>>>>>>>> Metachunk::object_alignment(). >>>>>>>>>>>>>> >>>>>>>>>>>>>> I ran the CDS tests but I could test some more with CDS. 
>>>>>>>>>>>>>> We don't >>>>>>>>>>>>>> want to force the size of objects to be 64 bit >>>>>>>>>>>>>> (especially Symbol) >>>>>>>>>>>>>> because Metachunk::object_alignment() is 64 bits. >>>>>>>>>>>>> Do you mean "just" because? I wasn't necessarily >>>>>>>>>>>>> suggesting that >>>>>>>>>>>>> all metadata be 64-bit aligned. However, the ones that >>>>>>>>>>>>> have their >>>>>>>>>>>>> allocation size 64-bit aligned should be. I think David's >>>>>>>>>>>>> concern >>>>>>>>>>>>> is that he wrote code that computes how much memory is >>>>>>>>>>>>> needed for >>>>>>>>>>>>> the archive, and it uses size() for that. If the Metachunk >>>>>>>>>>>>> allocator allocates more than size() due to the 64-bit >>>>>>>>>>>>> alignment of >>>>>>>>>>>>> Metachunk::object_alignment(), then he will underestimate >>>>>>>>>>>>> the size. >>>>>>>>>>>>> You'll need to double check with David to see if I got >>>>>>>>>>>>> this right. >>>>>>>>>>>> >>>>>>>>>>>> I don't know what code this is but yes, it would be wrong. >>>>>>>>>>>> It also >>>>>>>>>>>> would be wrong if there's any other alignment gaps or space in >>>>>>>>>>>> metaspace chunks because chunks themselves have an allocation >>>>>>>>>>>> granularity. >>>>>>>>>>>> >>>>>>>>>>>> It could be changed back by changing the function >>>>>>>>>>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>>>>>>>>>> >>>>>>>>>>>> I fixed Symbol so that it didn't call align_metaspace_size >>>>>>>>>>>> if this >>>>>>>>>>>> change is needed in the future. >>>>>>>>>>>> >>>>>>>>>>>> I was trying to limit the size of this change to correct >>>>>>>>>>>> align_object_size for metadata. >>>>>>>>>>> Well, there a few issues being addressed by fixing >>>>>>>>>>> align_object_size. >>>>>>>>>>> Using align_object_size was incorrect from a code purity >>>>>>>>>>> standpoint >>>>>>>>>>> (it was used on values unrelated to java objects), and was also >>>>>>>>>>> incorrect when ObjectAlignmentInBytes was not 8. This was >>>>>>>>>>> the main >>>>>>>>>>> motivation for making this change. >>>>>>>>>> >>>>>>>>>> Exactly. This was higher priority because it was wrong. >>>>>>>>>>> >>>>>>>>>>> The 3rd issue is that align_object_size by default was doing >>>>>>>>>>> 8 byte >>>>>>>>>>> alignment, and this wastes memory on 32-bit. However, as I >>>>>>>>>>> mentioned >>>>>>>>>>> there may be some dependencies on this 8 byte alignment due >>>>>>>>>>> to the >>>>>>>>>>> metaspace allocator doing 8 byte alignment. If you can get >>>>>>>>>>> David to >>>>>>>>>>> say he's ok with just 4-byte size alignment on 32-bit, then >>>>>>>>>>> I'm ok >>>>>>>>>>> with this change. Otherwise I think maybe you should stay >>>>>>>>>>> with 8 byte >>>>>>>>>>> alignment (including symbols), and file a bug to someday >>>>>>>>>>> change it to >>>>>>>>>>> word alignment, and have the metaspace allocator require >>>>>>>>>>> that you >>>>>>>>>>> pass in alignment requirements. >>>>>>>>>> >>>>>>>>>> Okay, I can see what David says but I wouldn't change Symbol >>>>>>>>>> back. >>>>>>>>>> That's mostly unrelated to metadata storage and I can get 32 bit >>>>>>>>>> packing for symbols on 32 bit platforms. It probably saves >>>>>>>>>> more space >>>>>>>>>> than the other more invasive ideas that we've had. >>>>>>>>> >>>>>>>>> This is reviewed now. If David wants metadata sizing to >>>>>>>>> change back to >>>>>>>>> 64 bits on 32 bit platforms, it's a one line change. I'm going >>>>>>>>> to push >>>>>>>>> it to get the rest in. 
>>>>>>>>> Thanks, >>>>>>>>> Coleen >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Coleen >>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Thanks for looking at this in detail. >>>>>>>>>>> No problem. Thanks for cleaning this up. >>>>>>>>>>> >>>>>>>>>>> Chris >>>>>>>>>>>> >>>>>>>>>>>> Coleen >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>>> Unfortunately, with the latter, metadata is never aligned >>>>>>>>>>>>>> on 32 >>>>>>>>>>>>>> bit boundaries for 32 bit platforms, but to fix this, we >>>>>>>>>>>>>> have to >>>>>>>>>>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>>>>>>>>>> because the alignment is not a function of the size of >>>>>>>>>>>>>> the object >>>>>>>>>>>>>> but what is required from its nonstatic data members. >>>>>>>>>>>>> Correct. >>>>>>>>>>>>>> I found MethodCounters, Klass (and subclasses) and >>>>>>>>>>>>>> ConstantPool >>>>>>>>>>>>>> has such alignment constraints. Not sizing metadata to 64 >>>>>>>>>>>>>> bit >>>>>>>>>>>>>> sizes is a start for making this change. >>>>>>>>>>>>> I agree with that, but just wanted to point out why David >>>>>>>>>>>>> may be >>>>>>>>>>>>> concerned with this change. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>>>>>>>>>> Fixed, Thanks! >>>>>>>>>>>>> thanks, >>>>>>>>>>>>> >>>>>>>>>>>>> Chris >>>>>>>>>>>>>> >>>>>>>>>>>>>> Coleen >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Chris >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>>>>>>>>>> Summary: Use align_metadata_size, align_metadata_offset >>>>>>>>>>>>>>>> and >>>>>>>>>>>>>>>> is_metadata_aligned for metadata rather >>>>>>>>>>>>>>>> than align_object_size, etc. Use wordSize rather than >>>>>>>>>>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>>>>>>>>>> rather than align_pointer_up (all the related functions >>>>>>>>>>>>>>>> are ptr). >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Ran RBT quick tests on all platforms along with Chris's >>>>>>>>>>>>>>>> Plummers >>>>>>>>>>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>>>>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>>>>>>>>>> changes. Reran subset of this after merging. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I have a script to update copyrights on commit. It's >>>>>>>>>>>>>>>> not a big >>>>>>>>>>>>>>>> change, just mostly boring. See the bug comments for more >>>>>>>>>>>>>>>> details about the change. 
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> open webrev at >>>>>>>>>>>>>>>> http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>>>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>>>> Coleen >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > From coleen.phillimore at oracle.com Mon Feb 1 22:50:08 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 1 Feb 2016 17:50:08 -0500 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AFCFB8.4040902@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> <56AEA467.6050801@oracle.com> <56AF9C3E.70407@oracle.com> <56AF9D8E.1070408@oracle.com> <56AFB7DD.8010500@oracle.com> <56AFB93B.2020007@oracle.com> <56AFC743.3030006@oracle.com> <56AFCC06.7010804@oracle.com> <56AFCCF8.3010805@oracle.com> <56AFCFB8.4040902@oracle.com> Message-ID: <56AFE120.4030502@oracle.com> On 2/1/16 4:35 PM, Chris Plummer wrote: > On 2/1/16 1:24 PM, Coleen Phillimore wrote: >> >> >> On 2/1/16 4:20 PM, Chris Plummer wrote: >>> On 2/1/16 12:59 PM, Coleen Phillimore wrote: >>>> >>>> >>>> On 2/1/16 2:59 PM, Chris Plummer wrote: >>>>> On 2/1/16 11:54 AM, Chris Plummer wrote: >>>>>> On 2/1/16 10:01 AM, Coleen Phillimore wrote: >>>>>>> >>>>>>> >>>>>>> On 2/1/16 12:56 PM, Chris Plummer wrote: >>>>>>>> It seems the allocators always align the size up to at least a >>>>>>>> 64-bit boundary, so doesn't that make it pointless to attempt >>>>>>>> to save memory by keeping the allocation request size word >>>>>>>> aligned instead of 64-bit aligned? >>>>>>> >>>>>>> Sort of, except you need a size as a multiple of 32 bit words to >>>>>>> potentially fix this, so it's a step towards that (if wanted). >>>>>> What you need is (1) don't automatically pad the size up to >>>>>> 64-bit alignment in the allocator, (2) don't pad the size up to >>>>>> 64-bit in the size computations, and (3) the ability for the >>>>>> allocator to maintain an unaligned "top" pointer, and to fix the >>>>>> alignment if necessary during the allocation. This last one >>>>>> implies knowing the alignment requirements of the caller, so that >>>>>> means either passing in the alignment requirement or having >>>>>> allocators configured to the alignment requirements of its users. >>>> >>>> Yes, exactly. I think we need to add another parameter to >>>> Metaspace::allocate() to allow the caller to specify alignment >>>> requirements. >>> And I was looking at Amalloc() also, which does: >>> >>> x = ARENA_ALIGN(x); >>> >>> And then the following defines: >>> >>> #define ARENA_ALIGN_M1 (((size_t)(ARENA_AMALLOC_ALIGNMENT)) - 1) >>> #define ARENA_ALIGN_MASK (~((size_t)ARENA_ALIGN_M1)) >>> #define ARENA_ALIGN(x) ((((size_t)(x)) + ARENA_ALIGN_M1) & >>> ARENA_ALIGN_MASK) >>> >>> #define ARENA_AMALLOC_ALIGNMENT (2*BytesPerWord) >>> >>> I think this all adds up to Amalloc doing 64-bit size alignment on >>> 32-bit systems and 128-bit alignment on 64-bit systems. So I'm not >>> so sure you Symbol changes are having an impact, at least not for >>> Symbols allocated out of Arenas. 
If I'm reading the code right, >>> symbols created for the null ClassLoader are allocated out of an >>> arena and all others out of the C heap. >> >> The Symbol arena version of operator 'new' calls Amalloc_4. > Ah, I missed that Amalloc_4 does not do the size aligning. Interesting > that Amalloc_4 requires the size to be 4 bytes aligned, but then > Amalloc has no such requirement but will align to 2x the word size. > Lastly Amalloc_D aligns to 32-bit except on 32-bit sparc, where it > aligns to 64-bit. Sounds suspect. I thought 32-bit ARM VFP required > doubles to be 64-bit aligned. >> Not sure about 128 bit alignment for 64 bit systems, is that right? > #ifdef _LP64 > const int LogBytesPerWord = 3; > #else > const int LogBytesPerWord = 2; > #endif > > So this means BytesPerWord is 8 on 64-bit, and 2*BytesPerWord is 16 > (128-bit). Wow, this is excessive and unexpected. We should file a bug. I can't see a good reason to pad out arena allocations to 16 bytes. Coleen > > Chris >> >> Coleen >> >>> >>> Chris >>>> >>>>>> You need all 3 of these. Leave any one out and you don't recoup >>>>>> any of the wasted memory. We were doing all 3. You eliminated at >>>>>> least some of the cases of (2). >>>>> Sorry, my wording near then end there was kind of backwards. I >>>>> meant we were NOT doing any of the 3. You made is so in some cases >>>>> we are now doing (2). >>>> >>>> True. Why I said "if wanted" above was that we'd need to file an >>>> RFE to reclaim the wasted memory. >>>> >>>> Coleen >>>> >>>>> >>>>> Chris >>>>>> >>>>>> Chris >>>>>>> >>>>>>> Coleen >>>>>>> >>>>>>>> >>>>>>>> Chris >>>>>>>> >>>>>>>> On 1/31/16 4:18 PM, David Holmes wrote: >>>>>>>>> Hi Coleen, >>>>>>>>> >>>>>>>>> I think what Chris was referring to was the CDS compaction >>>>>>>>> work - which has since been abandoned. To be honest it has >>>>>>>>> been so long since I was working on this that I can't recall >>>>>>>>> the details. At one point Ioi commented how all MSO's were >>>>>>>>> allocated with 8-byte alignment which was unnecessary, and >>>>>>>>> that we could do better and account for it in the size() >>>>>>>>> method. He also noted if we somehow messed up the alignment >>>>>>>>> when doing this that it should be quickly detectable on sparc. >>>>>>>>> >>>>>>>>> These current changes will affect the apparent wasted space in >>>>>>>>> the archive as the expected usage would be based on size() >>>>>>>>> while the actual usage would be determined by the allocator. >>>>>>>>> >>>>>>>>> Ioi was really the best person to comment-on/review this. >>>>>>>>> >>>>>>>>> David >>>>>>>>> ----- >>>>>>>>> >>>>>>>>> On 31/01/2016 12:27 AM, Coleen Phillimore wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 1/29/16 2:20 PM, Coleen Phillimore wrote: >>>>>>>>>>> >>>>>>>>>>> Thanks Chris, >>>>>>>>>>> >>>>>>>>>>> On 1/29/16 2:15 PM, Chris Plummer wrote: >>>>>>>>>>>> Hi Coleen, >>>>>>>>>>>> >>>>>>>>>>>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> Hi Chris, >>>>>>>>>>>>> >>>>>>>>>>>>> I made a few extra changes because of your question that I >>>>>>>>>>>>> didn't >>>>>>>>>>>>> answer below, a few HeapWordSize became wordSize. I >>>>>>>>>>>>> apologize that >>>>>>>>>>>>> I don't know how to create incremental webrevs. See >>>>>>>>>>>>> discussion below. 
>>>>>>>>>>>>> >>>>>>>>>>>>> open webrev at >>>>>>>>>>>>> http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>>>>>>>>>>> >>>>>>>>>>>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>>>>>>>>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thank you, Chris for looking at this change. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>>>>>>>>>>> Hi Coleen, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>>>>>>>>>>> something other than 8? >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Okay, I can run one of the testsets with that. I >>>>>>>>>>>>>>> verified it in >>>>>>>>>>>>>>> the debugger mostly. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Someone from GC team should apply your patch, grep for >>>>>>>>>>>>>>>> align_object_size(), and confirm that the ones you >>>>>>>>>>>>>>>> didn't change >>>>>>>>>>>>>>>> are correct. I gave a quick look and they look right to >>>>>>>>>>>>>>>> me, but I >>>>>>>>>>>>>>>> wasn't always certain if object alignment was >>>>>>>>>>>>>>>> appropriate in all >>>>>>>>>>>>>>>> cases. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> thanks - this is why I'd changed the align_object_size to >>>>>>>>>>>>>>> align_heap_object_size before testing and changed it >>>>>>>>>>>>>>> back, to >>>>>>>>>>>>>>> verify that I didn't miss any. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I see some remaining HeapWordSize references that are >>>>>>>>>>>>>>>> suspect, >>>>>>>>>>>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go >>>>>>>>>>>>>>>> through >>>>>>>>>>>>>>>> all of them since there are about 428. Do they need closer >>>>>>>>>>>>>>>> inspection? >>>>>>>>>>>>>> ??? Any comment? >>>>>>>>>>>>> >>>>>>>>>>>>> Actually, I tried to get a lot of HeapWordSize in the >>>>>>>>>>>>> metadata but >>>>>>>>>>>>> the primary focus of the change, despite the title, was to >>>>>>>>>>>>> fix >>>>>>>>>>>>> align_object_size wasn't used on metadata. >>>>>>>>>>>> ok. >>>>>>>>>>>>> That said a quick look at the instances of HeapWordSize >>>>>>>>>>>>> led to some >>>>>>>>>>>>> that weren't in the heap. I didn't look in Array.java >>>>>>>>>>>>> because it's >>>>>>>>>>>>> in the SA which isn't maintainable anyway, but I changed a >>>>>>>>>>>>> few. >>>>>>>>>>>>> There were very few that were not referring to objects in >>>>>>>>>>>>> the Java >>>>>>>>>>>>> heap. bytecodeTracer was one and there were a couple in >>>>>>>>>>>>> metaspace.cpp. >>>>>>>>>>>> Ok. If you think there may be more, or a more thorough >>>>>>>>>>>> analysis is >>>>>>>>>>>> needed, perhaps just file a bug to get the rest later. >>>>>>>>>>> >>>>>>>>>>> From my look yesterday, there aren't a lot of HeapWordSize >>>>>>>>>>> left. There >>>>>>>>>>> are probably still a lot of HeapWord* casts for things that >>>>>>>>>>> aren't in >>>>>>>>>>> the Java heap. This is a bigger cleanup that might not make >>>>>>>>>>> sense to >>>>>>>>>>> do in one change, but maybe in incremental changes to >>>>>>>>>>> related code. >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> As for reviewing your incremental changes, as long as it >>>>>>>>>>>> was just >>>>>>>>>>>> more changes of HeapWordSize to wordSize, I'm sure they are >>>>>>>>>>>> fine. >>>>>>>>>>>> (And yes, I did see that the removal of Symbol size >>>>>>>>>>>> alignment was >>>>>>>>>>>> also added). >>>>>>>>>>> >>>>>>>>>>> Good, thanks. >>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> The bad news is that's more code to review. See above >>>>>>>>>>>>> webrev link. >>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> align_metadata_offset() is not used. 
It can be removed. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Okay, I'll remove it. That's a good idea. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>>>>>>>>>>> align_object_size() did, and not align to word size? >>>>>>>>>>>>>>>> Isn't that >>>>>>>>>>>>>>>> what we agreed to? Have you tested CDS? David had >>>>>>>>>>>>>>>> concerns about >>>>>>>>>>>>>>>> the InstanceKlass::size() not returning the same >>>>>>>>>>>>>>>> aligned size as >>>>>>>>>>>>>>>> Metachunk::object_alignment(). >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I ran the CDS tests but I could test some more with CDS. >>>>>>>>>>>>>>> We don't >>>>>>>>>>>>>>> want to force the size of objects to be 64 bit >>>>>>>>>>>>>>> (especially Symbol) >>>>>>>>>>>>>>> because Metachunk::object_alignment() is 64 bits. >>>>>>>>>>>>>> Do you mean "just" because? I wasn't necessarily >>>>>>>>>>>>>> suggesting that >>>>>>>>>>>>>> all metadata be 64-bit aligned. However, the ones that >>>>>>>>>>>>>> have their >>>>>>>>>>>>>> allocation size 64-bit aligned should be. I think David's >>>>>>>>>>>>>> concern >>>>>>>>>>>>>> is that he wrote code that computes how much memory is >>>>>>>>>>>>>> needed for >>>>>>>>>>>>>> the archive, and it uses size() for that. If the Metachunk >>>>>>>>>>>>>> allocator allocates more than size() due to the 64-bit >>>>>>>>>>>>>> alignment of >>>>>>>>>>>>>> Metachunk::object_alignment(), then he will underestimate >>>>>>>>>>>>>> the size. >>>>>>>>>>>>>> You'll need to double check with David to see if I got >>>>>>>>>>>>>> this right. >>>>>>>>>>>>> >>>>>>>>>>>>> I don't know what code this is but yes, it would be >>>>>>>>>>>>> wrong. It also >>>>>>>>>>>>> would be wrong if there's any other alignment gaps or >>>>>>>>>>>>> space in >>>>>>>>>>>>> metaspace chunks because chunks themselves have an allocation >>>>>>>>>>>>> granularity. >>>>>>>>>>>>> >>>>>>>>>>>>> It could be changed back by changing the function >>>>>>>>>>>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>>>>>>>>>>> >>>>>>>>>>>>> I fixed Symbol so that it didn't call align_metaspace_size >>>>>>>>>>>>> if this >>>>>>>>>>>>> change is needed in the future. >>>>>>>>>>>>> >>>>>>>>>>>>> I was trying to limit the size of this change to correct >>>>>>>>>>>>> align_object_size for metadata. >>>>>>>>>>>> Well, there a few issues being addressed by fixing >>>>>>>>>>>> align_object_size. >>>>>>>>>>>> Using align_object_size was incorrect from a code purity >>>>>>>>>>>> standpoint >>>>>>>>>>>> (it was used on values unrelated to java objects), and was >>>>>>>>>>>> also >>>>>>>>>>>> incorrect when ObjectAlignmentInBytes was not 8. This was >>>>>>>>>>>> the main >>>>>>>>>>>> motivation for making this change. >>>>>>>>>>> >>>>>>>>>>> Exactly. This was higher priority because it was wrong. >>>>>>>>>>>> >>>>>>>>>>>> The 3rd issue is that align_object_size by default was >>>>>>>>>>>> doing 8 byte >>>>>>>>>>>> alignment, and this wastes memory on 32-bit. However, as I >>>>>>>>>>>> mentioned >>>>>>>>>>>> there may be some dependencies on this 8 byte alignment due >>>>>>>>>>>> to the >>>>>>>>>>>> metaspace allocator doing 8 byte alignment. If you can get >>>>>>>>>>>> David to >>>>>>>>>>>> say he's ok with just 4-byte size alignment on 32-bit, then >>>>>>>>>>>> I'm ok >>>>>>>>>>>> with this change. 
Otherwise I think maybe you should stay >>>>>>>>>>>> with 8 byte >>>>>>>>>>>> alignment (including symbols), and file a bug to someday >>>>>>>>>>>> change it to >>>>>>>>>>>> word alignment, and have the metaspace allocator require >>>>>>>>>>>> that you >>>>>>>>>>>> pass in alignment requirements. >>>>>>>>>>> >>>>>>>>>>> Okay, I can see what David says but I wouldn't change Symbol >>>>>>>>>>> back. >>>>>>>>>>> That's mostly unrelated to metadata storage and I can get 32 >>>>>>>>>>> bit >>>>>>>>>>> packing for symbols on 32 bit platforms. It probably saves >>>>>>>>>>> more space >>>>>>>>>>> than the other more invasive ideas that we've had. >>>>>>>>>> >>>>>>>>>> This is reviewed now. If David wants metadata sizing to >>>>>>>>>> change back to >>>>>>>>>> 64 bits on 32 bit platforms, it's a one line change. I'm >>>>>>>>>> going to push >>>>>>>>>> it to get the rest in. >>>>>>>>>> Thanks, >>>>>>>>>> Coleen >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Coleen >>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks for looking at this in detail. >>>>>>>>>>>> No problem. Thanks for cleaning this up. >>>>>>>>>>>> >>>>>>>>>>>> Chris >>>>>>>>>>>>> >>>>>>>>>>>>> Coleen >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>>> Unfortunately, with the latter, metadata is never >>>>>>>>>>>>>>> aligned on 32 >>>>>>>>>>>>>>> bit boundaries for 32 bit platforms, but to fix this, we >>>>>>>>>>>>>>> have to >>>>>>>>>>>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>>>>>>>>>>> because the alignment is not a function of the size of >>>>>>>>>>>>>>> the object >>>>>>>>>>>>>>> but what is required from its nonstatic data members. >>>>>>>>>>>>>> Correct. >>>>>>>>>>>>>>> I found MethodCounters, Klass (and subclasses) and >>>>>>>>>>>>>>> ConstantPool >>>>>>>>>>>>>>> has such alignment constraints. Not sizing metadata to >>>>>>>>>>>>>>> 64 bit >>>>>>>>>>>>>>> sizes is a start for making this change. >>>>>>>>>>>>>> I agree with that, but just wanted to point out why David >>>>>>>>>>>>>> may be >>>>>>>>>>>>>> concerned with this change. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>>>>>>>>>>> Fixed, Thanks! >>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>> >>>>>>>>>>>>>> Chris >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Coleen >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Chris >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>>>>>>>>>>> Summary: Use align_metadata_size, >>>>>>>>>>>>>>>>> align_metadata_offset and >>>>>>>>>>>>>>>>> is_metadata_aligned for metadata rather >>>>>>>>>>>>>>>>> than align_object_size, etc. Use wordSize rather than >>>>>>>>>>>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>>>>>>>>>>> rather than align_pointer_up (all the related >>>>>>>>>>>>>>>>> functions are ptr). >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Ran RBT quick tests on all platforms along with >>>>>>>>>>>>>>>>> Chris's Plummers >>>>>>>>>>>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>>>>>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>>>>>>>>>>> changes. Reran subset of this after merging. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I have a script to update copyrights on commit. It's >>>>>>>>>>>>>>>>> not a big >>>>>>>>>>>>>>>>> change, just mostly boring. See the bug comments for >>>>>>>>>>>>>>>>> more >>>>>>>>>>>>>>>>> details about the change. 
>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> open webrev at >>>>>>>>>>>>>>>>> http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>>>>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>>>>> Coleen >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > From david.holmes at oracle.com Tue Feb 2 00:51:58 2016 From: david.holmes at oracle.com (David Holmes) Date: Tue, 2 Feb 2016 10:51:58 +1000 Subject: (urgent) RFR: 8148771: os::active_processor_count() returns garbage which causes VM to crash Message-ID: <56AFFDAE.80600@oracle.com> This is a backout of the fix for JDK-8147906 which changes the GC to use active_processor_count() during initialization. Some systems with aggressive power management can report continually varying numbers of available processors, via sched_getaffinity, and it seems the GC code can not tolerate that. So we hg backout JDK-8147906 and revert to using processor_count() again. We will then look at how to address this going forward. Bug: https://bugs.openjdk.java.net/browse/JDK-8148771 webrev http://cr.openjdk.java.net/~dholmes/8148771/webrev/ Thanks, David From kim.barrett at oracle.com Tue Feb 2 01:14:56 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 1 Feb 2016 20:14:56 -0500 Subject: (urgent) RFR: 8148771: os::active_processor_count() returns garbage which causes VM to crash In-Reply-To: <56AFFDAE.80600@oracle.com> References: <56AFFDAE.80600@oracle.com> Message-ID: > On Feb 1, 2016, at 7:51 PM, David Holmes wrote: > > This is a backout of the fix for JDK-8147906 which changes the GC to use active_processor_count() during initialization. Some systems with aggressive power management can report continually varying numbers of available processors, via sched_getaffinity, and it seems the GC code can not tolerate that. So we hg backout JDK-8147906 and revert to using processor_count() again. We will then look at how to address this going forward. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8148771 > > webrev http://cr.openjdk.java.net/~dholmes/8148771/webrev/ > > Thanks, > David Looks good. From david.holmes at oracle.com Tue Feb 2 01:20:13 2016 From: david.holmes at oracle.com (David Holmes) Date: Tue, 2 Feb 2016 11:20:13 +1000 Subject: (urgent) RFR: 8148771: os::active_processor_count() returns garbage which causes VM to crash In-Reply-To: References: <56AFFDAE.80600@oracle.com> Message-ID: <56B0044D.7010005@oracle.com> Thanks Kim. I'll apply the trivial rule to this and push as soon as testing is complete. David On 2/02/2016 11:14 AM, Kim Barrett wrote: >> On Feb 1, 2016, at 7:51 PM, David Holmes wrote: >> >> This is a backout of the fix for JDK-8147906 which changes the GC to use active_processor_count() during initialization. Some systems with aggressive power management can report continually varying numbers of available processors, via sched_getaffinity, and it seems the GC code can not tolerate that. So we hg backout JDK-8147906 and revert to using processor_count() again. We will then look at how to address this going forward. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8148771 >> >> webrev http://cr.openjdk.java.net/~dholmes/8148771/webrev/ >> >> Thanks, >> David > > Looks good. 
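For context, a minimal Linux-only sketch of the two counts being discussed (illustrative helper names, not the actual os:: implementation): the "active" count comes from sched_getaffinity and can fluctuate under aggressive power management, while the configured count stays fixed for the life of the process.

#include <sched.h>      // sched_getaffinity, CPU_* macros (glibc)
#include <unistd.h>     // sysconf

// Count of processors this thread may currently be scheduled on; this is
// the value that can change from call to call on the affected systems.
static int active_cpu_count_sketch() {
  cpu_set_t cpus;
  CPU_ZERO(&cpus);
  if (sched_getaffinity(0, sizeof(cpus), &cpus) == 0) {
    return CPU_COUNT(&cpus);
  }
  return (int) sysconf(_SC_NPROCESSORS_ONLN);   // fallback if the query fails
}

// A stable count, as an illustrative stand-in for the processor_count()
// style of value the backout reverts to.
static int configured_cpu_count_sketch() {
  return (int) sysconf(_SC_NPROCESSORS_CONF);
}
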
> From kim.barrett at oracle.com Tue Feb 2 01:21:08 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 1 Feb 2016 20:21:08 -0500 Subject: (urgent) RFR: 8148771: os::active_processor_count() returns garbage which causes VM to crash In-Reply-To: <56B0044D.7010005@oracle.com> References: <56AFFDAE.80600@oracle.com> <56B0044D.7010005@oracle.com> Message-ID: <6A329DB9-52D9-478C-801F-D8D948F23169@oracle.com> > On Feb 1, 2016, at 8:20 PM, David Holmes wrote: > > Thanks Kim. > > I'll apply the trivial rule to this and push as soon as testing is complete. I was just about to suggest that. > > David > > On 2/02/2016 11:14 AM, Kim Barrett wrote: >>> On Feb 1, 2016, at 7:51 PM, David Holmes wrote: >>> >>> This is a backout of the fix for JDK-8147906 which changes the GC to use active_processor_count() during initialization. Some systems with aggressive power management can report continually varying numbers of available processors, via sched_getaffinity, and it seems the GC code can not tolerate that. So we hg backout JDK-8147906 and revert to using processor_count() again. We will then look at how to address this going forward. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8148771 >>> >>> webrev http://cr.openjdk.java.net/~dholmes/8148771/webrev/ >>> >>> Thanks, >>> David >> >> Looks good. From thomas.schatzl at oracle.com Tue Feb 2 09:23:02 2016 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 02 Feb 2016 10:23:02 +0100 Subject: (urgent) RFR: 8148771: os::active_processor_count() returns garbage which causes VM to crash In-Reply-To: <56B0044D.7010005@oracle.com> References: <56AFFDAE.80600@oracle.com> <56B0044D.7010005@oracle.com> Message-ID: <1454404982.2291.0.camel@oracle.com> Hi David, On Tue, 2016-02-02 at 11:20 +1000, David Holmes wrote: > Thanks Kim. > > I'll apply the trivial rule to this and push as soon as testing is > complete. > thanks. Thomas From robbin.ehn at oracle.com Tue Feb 2 10:39:45 2016 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 2 Feb 2016 11:39:45 +0100 Subject: RFR(xs): 8148141: Remove fixed level padding in UL Message-ID: <56B08771.8070405@oracle.com> Hi, please review, This removes fixed level padding in UL for level decorations. Bug: https://bugs.openjdk.java.net/browse/JDK-8148141 Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148141/ Thanks! /Robbin From staffan.larsen at oracle.com Tue Feb 2 10:42:08 2016 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 2 Feb 2016 11:42:08 +0100 Subject: RFR(xs): 8148141: Remove fixed level padding in UL In-Reply-To: <56B08771.8070405@oracle.com> References: <56B08771.8070405@oracle.com> Message-ID: Looks good! Thanks, /Staffan > On 2 feb. 2016, at 11:39, Robbin Ehn wrote: > > Hi, please review, > > This removes fixed level padding in UL for level decorations. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8148141 > Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148141/ > > Thanks! > > /Robbin From robbin.ehn at oracle.com Tue Feb 2 10:46:29 2016 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 2 Feb 2016 11:46:29 +0100 Subject: RFR(xs): 8148141: Remove fixed level padding in UL In-Reply-To: References: <56B08771.8070405@oracle.com> Message-ID: <56B08905.1080209@oracle.com> Thanks Staffan! /Robbin On 02/02/2016 11:42 AM, Staffan Larsen wrote: > Looks good! > > Thanks, > /Staffan > >> On 2 feb. 2016, at 11:39, Robbin Ehn wrote: >> >> Hi, please review, >> >> This removes fixed level padding in UL for level decorations. 
>> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8148141 >> Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148141/ >> >> Thanks! >> >> /Robbin > From marcus.larsson at oracle.com Tue Feb 2 11:02:01 2016 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Tue, 2 Feb 2016 12:02:01 +0100 Subject: RFR(xs): 8148141: Remove fixed level padding in UL In-Reply-To: <56B08771.8070405@oracle.com> References: <56B08771.8070405@oracle.com> Message-ID: <56B08CA9.2070803@oracle.com> Hi, On 02/02/2016 11:39 AM, Robbin Ehn wrote: > Hi, please review, > > This removes fixed level padding in UL for level decorations. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8148141 > Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148141/ Looks good, thanks for fixing! Marcus > > Thanks! > > /Robbin From robbin.ehn at oracle.com Tue Feb 2 13:40:27 2016 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 2 Feb 2016 14:40:27 +0100 Subject: RFR(xs): 8148141: Remove fixed level padding in UL In-Reply-To: <56B08CA9.2070803@oracle.com> References: <56B08771.8070405@oracle.com> <56B08CA9.2070803@oracle.com> Message-ID: <56B0B1CB.3020202@oracle.com> Thanks Marcus! /Robbin On 02/02/2016 12:02 PM, Marcus Larsson wrote: > Hi, > > On 02/02/2016 11:39 AM, Robbin Ehn wrote: >> Hi, please review, >> >> This removes fixed level padding in UL for level decorations. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8148141 >> Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148141/ > > Looks good, thanks for fixing! > > Marcus > >> >> Thanks! >> >> /Robbin > From volker.simonis at gmail.com Tue Feb 2 16:30:33 2016 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 2 Feb 2016 17:30:33 +0100 Subject: Do we really still need mutex_.inline.hpp? Message-ID: Hi, I've just realized, that the file mutex_.inline.hpp is empty on all OpenJDK supported operating systems. It actually includes some other include files, but in my opinion this doesn't justify its existance. So if you don't use this file for something more meaningful in your closed ports I'd suggest to remove it completely and add the few additional includes right into the files which include mutex_.inline.hpp until now. What do you think? Regards, Volker From coleen.phillimore at oracle.com Tue Feb 2 17:54:20 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 2 Feb 2016 12:54:20 -0500 Subject: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc In-Reply-To: <56AF9E4B.8050704@oracle.com> References: <56AB5F39.1060005@oracle.com> <56AB8D2E.7090902@oracle.com> <56AB9A43.1000905@oracle.com> <56ACCB06.7060900@oracle.com> <56AF9422.5010300@oracle.com> <56AF9E4B.8050704@oracle.com> Message-ID: <56B0ED4C.6000902@oracle.com> Goetz, Can you review this since it's using SafeFetchN? thanks, Coleen On 2/1/16 1:04 PM, Coleen Phillimore wrote: > > > Thanks Dan! > > On 2/1/16 12:21 PM, Daniel D. Daugherty wrote: >> On 1/30/16 7:39 AM, Coleen Phillimore wrote: >>> >>> I've moved the SafeFetch to has_method_vptr as suggested and retested. >>> >>> http://cr.openjdk.java.net/~coleenp/8146984.02/webrev/ >> >> src/share/vm/oops/method.cpp >> (old) L2114: return has_method_vptr((const void*)this); >> (new) L2120: return has_method_vptr(this); >> Just curious. I don't see anything that explains why the >> cast is no longer needed (no type changes). Was this >> simply cleaning up an unnecessary cast? > > The cast is unnecessary. I didn't add it back when I added the call > to has_method_vptr back. > > thanks, > Coleen > >> >> Thumbs up. 
>> >> Dan >> >> >>> >>> Thanks, >>> Coleen >>> >>> On 1/29/16 11:58 AM, Coleen Phillimore wrote: >>>> >>>> >>>> On 1/29/16 11:02 AM, Daniel D. Daugherty wrote: >>>>> On 1/29/16 5:46 AM, Coleen Phillimore wrote: >>>>>> Summary: Add address check and use SafeFetchN for Method* vptr >>>>>> access when Method* may be bad pointer. >>>>>> >>>>>> Tested with RBT and failing test case (reproduced 1 in 100 times) >>>>>> with fatal in the 'return's in the change to verify. >>>>>> >>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8146984/ >>>>> >>>>> This one caught my eye because it has to do with sampling... >>>> >>>> I should mention sampling in all my RFRs then! >>>>> >>>>> src/share/vm/oops/method.cpp >>>>> The old code checked "!is_metaspace_object()" and used >>>>> has_method_vptr((const void*)this). >>>>> >>>>> The new code skips the "!is_metaspace_object()" check even >>>>> after sanity >>>>> checking the pointer, but you don't really explain why that's OK. >>>> >>>> is_metaspace_object is a very expensive check. It has to traverse >>>> all the metaspace mmap chunks. The new code is more robust in >>>> that it sanity checks the pointer first but uses Safefetch to get >>>> the vptr. >>>> >>>> >>>>> >>>>> The new code also picks up parts of Method::has_method_vptr() >>>>> which >>>>> makes me wonder if that's the right place for the fix. Won't >>>>> other >>>>> callers to Method::has_method_vptr() be subject to the same >>>>> crashing >>>>> mode? Or was the crashing mode only due to the >>>>> "!is_metaspace_object()" >>>>> check... >>>> >>>> I should have moved the SafeFetch in to the has_method_vptr. I >>>> can't remember why I copied it now. It crashed because the pointer >>>> was in metaspace (is_metaspace_object returned true) but wasn't >>>> aligned, but the pointer could come from anywhere. >>>> >>>> Thanks, I'll test out this fix and resend it. >>>> Coleen >>>> >>>>> >>>>> Dan >>>>> >>>>> >>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8146984 >>>>>> >>>>>> Thanks, >>>>>> Coleen >>>>>> >>>>> >>>> >>> >> > From mikael.vidstedt at oracle.com Tue Feb 2 19:25:33 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Tue, 2 Feb 2016 11:25:33 -0800 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56A96B55.7050301@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> Message-ID: <56B102AD.7020800@oracle.com> Please review this change which introduces a Copy::conjoint_swap and an Unsafe.copySwapMemory method to call it from Java, along with the necessary changes to have java.nio.Bits call it instead of the Bits.c code. http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/hotspot/webrev/ http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/jdk/webrev/ On the jdk/ side I don't think there should be a lot of surprises. Bits.c is gone and that required a mapfile-vers to be changed accordingly. I also added a relatively extensive jdk/internal/misc/Unsafe/CopySwap.java test which exercises all the various copySwap configurations and verifies that the resulting data is correct. There are also a handful of negative tests in there. 
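As a rough illustration of the element-wise swapping copy this review is about (names and structure are illustrative only; the real Copy::conjoint_swap also specializes on alignment and picks a copy direction for overlapping ranges, as described below):

#include <cstring>
#include <cstddef>

// Reverse the bytes of one element.
template <typename T>
static T byte_swap_sketch(T x) {
  T r;
  unsigned char* s = reinterpret_cast<unsigned char*>(&x);
  unsigned char* d = reinterpret_cast<unsigned char*>(&r);
  for (size_t i = 0; i < sizeof(T); i++) {
    d[i] = s[sizeof(T) - 1 - i];
  }
  return r;
}

// Copy byte_count bytes from src to dst, byte-swapping each sizeof(T)
// element. memcpy is used for the element loads/stores so misaligned
// src/dst are handled without faulting; byte_count is assumed to be a
// multiple of sizeof(T), and overlap handling is omitted for brevity.
template <typename T>
static void conjoint_swap_sketch(const void* src, void* dst, size_t byte_count) {
  for (size_t off = 0; off < byte_count; off += sizeof(T)) {
    T v;
    memcpy(&v, static_cast<const char*>(src) + off, sizeof(T));
    v = byte_swap_sketch(v);
    memcpy(static_cast<char*>(dst) + off, &v, sizeof(T));
  }
}
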
On the hotspot/ side: * the copy logic in copy.cpp is leveraging templates to help the C++ compiler produce tight copy loops for the various configurations {element type, copy direction, src aligned, dst aligned}. * Unsafe_CopySwapMemory is a leaf to not stall safe points more than necessary. Only if needed (THROW, copy involves heap objects) will it enter VM using a new JVM_ENTRY_FROM_LEAF macro. * JVM_ENTRY_FROM_LEAF calls a new VM_ENTRY_BASE_FROM_LEAF helper macro, which mimics what VM_ENTRY_BASE does, but also does a debug_only(ResetNoHandleMark __rnhm;) - this is because JVM_LEAF/VM_LEAF_BASE does debug_only(NoHandleMark __hm;). I'm in the process of getting the last performance numbers, but from what I've seen so far this will outperform the earlier implementation. Cheers, Mikeal On 2016-01-27 17:13, Mikael Vidstedt wrote: > > Just an FYI: > > I'm working on moving all of this to the Hotspot Copy class and > bridging to it via jdk.internal.misc.Unsafe, removing Bits.c > altogether. The implementation is working, and the preliminary > performance numbers beat the pants off of any of the suggested Bits.c > implementations (yay!). > > I'm currently in the progress of getting some unit tests in place for > it all to make sure it covers all the corner cases and then I'll run > some real benchmarks to see if it actually lives up to the expectations. > > Cheers, > Mikael > > On 2016-01-26 11:13, John Rose wrote: >> On Jan 26, 2016, at 11:08 AM, Andrew Haley wrote: >>> On 01/26/2016 07:04 PM, John Rose wrote: >>>> Unsafe.copyMemory bottoms out to Copy::conjoint_memory_atomic. >>>> IMO that's a better starting point than memcpy. Perhaps it can be >>>> given an additional parameter (or overloading) to specify a swap size. >>> OK, but conjoint_memory_atomic doesn't guarantee that destination >>> words won't be torn if their source is misaligned: in fact it >>> guarantees that they will will be. >> That's a good point, and argues for a new function with the >> stronger guarantee. Actually, it would be perfectly reasonable >> to strengthen the guarantee on the existing function. I don't >> think anyone will care about the slight performance change, >> especially since it is probably favorable. Since it's Unsafe, >> they are not supposed to care, either. >> >> ? John > From rkennke at redhat.com Tue Feb 2 20:55:35 2016 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 02 Feb 2016 21:55:35 +0100 Subject: Atomic::add(jlong) broken? Message-ID: <1454446535.3676.17.camel@redhat.com> Hello, I believe Atomic::add(jlong) is broken. The comment above it says: ? // Atomically add to a location, return updated value Except in atomic.cpp, add(jlong) returns the old value. It causes quite some headscratching on my side :-) Fixing this seems easy. I am wonder if any code uses this though, maybe it should be removed altogether? On the other hand, the implementation there uses a CAS-based loop. I think an easier fix would be to cast to size_t or intptr_t and use the atomic impl of that. What do you think? Roman From david.holmes at oracle.com Wed Feb 3 06:14:50 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 3 Feb 2016 16:14:50 +1000 Subject: Atomic::add(jlong) broken? In-Reply-To: <1454446535.3676.17.camel@redhat.com> References: <1454446535.3676.17.camel@redhat.com> Message-ID: <56B19ADA.8030406@oracle.com> On 3/02/2016 6:55 AM, Roman Kennke wrote: > Hello, > > I believe Atomic::add(jlong) is broken. 
The comment above it says: > > // Atomically add to a location, return updated value > > Except in atomic.cpp, add(jlong) returns the old value. Yes that seems broken. > It causes quite some headscratching on my side :-) > > Fixing this seems easy. I am wonder if any code uses this though, maybe > it should be removed altogether? It was added here: http://hg.openjdk.java.net/jdk8/jdk8/hotspot//rev/2a241e764894 for the GC log file rotation code back in 2011. But then removed here: http://hg.openjdk.java.net/jdk8/jdk8/hotspot/rev/0598674c0056 because it was unnecessary and because of the missing Atomic:load(jlong) support it required. I think it can be deleted now and probably should be. > On the other hand, the implementation there uses a CAS-based loop. I > think an easier fix would be to cast to size_t or intptr_t and use the > atomic impl of that. > > What do you think? Delete it. Thanks, David > Roman > From aph at redhat.com Wed Feb 3 09:43:03 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 3 Feb 2016 09:43:03 +0000 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56B102AD.7020800@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> Message-ID: <56B1CBA7.4050902@redhat.com> On 02/02/16 19:25, Mikael Vidstedt wrote: > Please review this change which introduces a Copy::conjoint_swap and an > Unsafe.copySwapMemory method to call it from Java, along with the > necessary changes to have java.nio.Bits call it instead of the Bits.c code. There doesn't seem to be any way to use a byte-swap instruction in the swapping code. This will make it unnecessarily slow. Andrew. From david.holmes at oracle.com Wed Feb 3 11:02:23 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 3 Feb 2016 21:02:23 +1000 Subject: Do we really still need mutex_.inline.hpp? In-Reply-To: References: Message-ID: <56B1DE3F.60604@oracle.com> On 3/02/2016 2:30 AM, Volker Simonis wrote: > Hi, > > I've just realized, that the file mutex_.inline.hpp is empty on > all OpenJDK supported operating systems. It actually includes some > other include files, but in my opinion this doesn't justify its > existance. > > So if you don't use this file for something more meaningful in your > closed ports I'd suggest to remove it completely and add the few > additional includes right into the files which include > mutex_.inline.hpp until now. > > What do you think? I agree these files seem superfluous and should be removed. 
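(Back to the Atomic::add(jlong) point above: a minimal sketch of a CAS loop that returns the updated value, which is what the header comment promises. std::atomic is used purely for illustration here; HotSpot's Atomic class has its own cmpxchg primitives, and the helper name is made up.)

#include <atomic>
#include <cstdint>

static int64_t add_and_fetch_sketch(std::atomic<int64_t>* dest, int64_t add_value) {
  int64_t old_value = dest->load();
  // On failure compare_exchange_weak refreshes old_value with the current
  // contents of *dest, so the loop simply retries with the new snapshot.
  while (!dest->compare_exchange_weak(old_value, old_value + add_value)) {
    // retry
  }
  return old_value + add_value;   // the updated value, not the old one
}
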
David > Regards, > Volker > From david.holmes at oracle.com Wed Feb 3 11:10:07 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 3 Feb 2016 21:10:07 +1000 Subject: RFR 8145628: hotspot metadata classes shouldn't use HeapWordSize or heap related macros like align_object_size In-Reply-To: <56AFE120.4030502@oracle.com> References: <56A90C0B.2050100@oracle.com> <56AA8726.7030807@oracle.com> <56AA8AF7.8020606@oracle.com> <56AA8DB4.4050609@oracle.com> <56AADD07.4080604@oracle.com> <56ABBA3F.3030400@oracle.com> <56ABBB9A.50504@oracle.com> <56ACC86A.8080102@oracle.com> <56AEA467.6050801@oracle.com> <56AF9C3E.70407@oracle.com> <56AF9D8E.1070408@oracle.com> <56AFB7DD.8010500@oracle.com> <56AFB93B.2020007@oracle.com> <56AFC743.3030006@oracle.com> <56AFCC06.7010804@oracle.com> <56AFCCF8.3010805@oracle.com> <56AFCFB8.4040902@oracle.com> <56AFE120.4030502@oracle.com> Message-ID: <56B1E00F.5000901@oracle.com> On 2/02/2016 8:50 AM, Coleen Phillimore wrote: > > > On 2/1/16 4:35 PM, Chris Plummer wrote: >> On 2/1/16 1:24 PM, Coleen Phillimore wrote: >>> >>> >>> On 2/1/16 4:20 PM, Chris Plummer wrote: >>>> On 2/1/16 12:59 PM, Coleen Phillimore wrote: >>>>> >>>>> >>>>> On 2/1/16 2:59 PM, Chris Plummer wrote: >>>>>> On 2/1/16 11:54 AM, Chris Plummer wrote: >>>>>>> On 2/1/16 10:01 AM, Coleen Phillimore wrote: >>>>>>>> >>>>>>>> >>>>>>>> On 2/1/16 12:56 PM, Chris Plummer wrote: >>>>>>>>> It seems the allocators always align the size up to at least a >>>>>>>>> 64-bit boundary, so doesn't that make it pointless to attempt >>>>>>>>> to save memory by keeping the allocation request size word >>>>>>>>> aligned instead of 64-bit aligned? >>>>>>>> >>>>>>>> Sort of, except you need a size as a multiple of 32 bit words to >>>>>>>> potentially fix this, so it's a step towards that (if wanted). >>>>>>> What you need is (1) don't automatically pad the size up to >>>>>>> 64-bit alignment in the allocator, (2) don't pad the size up to >>>>>>> 64-bit in the size computations, and (3) the ability for the >>>>>>> allocator to maintain an unaligned "top" pointer, and to fix the >>>>>>> alignment if necessary during the allocation. This last one >>>>>>> implies knowing the alignment requirements of the caller, so that >>>>>>> means either passing in the alignment requirement or having >>>>>>> allocators configured to the alignment requirements of its users. >>>>> >>>>> Yes, exactly. I think we need to add another parameter to >>>>> Metaspace::allocate() to allow the caller to specify alignment >>>>> requirements. >>>> And I was looking at Amalloc() also, which does: >>>> >>>> x = ARENA_ALIGN(x); >>>> >>>> And then the following defines: >>>> >>>> #define ARENA_ALIGN_M1 (((size_t)(ARENA_AMALLOC_ALIGNMENT)) - 1) >>>> #define ARENA_ALIGN_MASK (~((size_t)ARENA_ALIGN_M1)) >>>> #define ARENA_ALIGN(x) ((((size_t)(x)) + ARENA_ALIGN_M1) & >>>> ARENA_ALIGN_MASK) >>>> >>>> #define ARENA_AMALLOC_ALIGNMENT (2*BytesPerWord) >>>> >>>> I think this all adds up to Amalloc doing 64-bit size alignment on >>>> 32-bit systems and 128-bit alignment on 64-bit systems. So I'm not >>>> so sure you Symbol changes are having an impact, at least not for >>>> Symbols allocated out of Arenas. If I'm reading the code right, >>>> symbols created for the null ClassLoader are allocated out of an >>>> arena and all others out of the C heap. >>> >>> The Symbol arena version of operator 'new' calls Amalloc_4. >> Ah, I missed that Amalloc_4 does not do the size aligning. 
Interesting >> that Amalloc_4 requires the size to be 4 bytes aligned, but then >> Amalloc has no such requirement but will align to 2x the word size. >> Lastly Amalloc_D aligns to 32-bit except on 32-bit sparc, where it >> aligns to 64-bit. Sounds suspect. I thought 32-bit ARM VFP required >> doubles to be 64-bit aligned. >>> Not sure about 128 bit alignment for 64 bit systems, is that right? >> #ifdef _LP64 >> const int LogBytesPerWord = 3; >> #else >> const int LogBytesPerWord = 2; >> #endif >> >> So this means BytesPerWord is 8 on 64-bit, and 2*BytesPerWord is 16 >> (128-bit). > > Wow, this is excessive and unexpected. We should file a bug. I can't > see a good reason to pad out arena allocations to 16 bytes. Allocated blocks are (potentially) on distinct cache lines ? David > Coleen > >> >> Chris >>> >>> Coleen >>> >>>> >>>> Chris >>>>> >>>>>>> You need all 3 of these. Leave any one out and you don't recoup >>>>>>> any of the wasted memory. We were doing all 3. You eliminated at >>>>>>> least some of the cases of (2). >>>>>> Sorry, my wording near then end there was kind of backwards. I >>>>>> meant we were NOT doing any of the 3. You made is so in some cases >>>>>> we are now doing (2). >>>>> >>>>> True. Why I said "if wanted" above was that we'd need to file an >>>>> RFE to reclaim the wasted memory. >>>>> >>>>> Coleen >>>>> >>>>>> >>>>>> Chris >>>>>>> >>>>>>> Chris >>>>>>>> >>>>>>>> Coleen >>>>>>>> >>>>>>>>> >>>>>>>>> Chris >>>>>>>>> >>>>>>>>> On 1/31/16 4:18 PM, David Holmes wrote: >>>>>>>>>> Hi Coleen, >>>>>>>>>> >>>>>>>>>> I think what Chris was referring to was the CDS compaction >>>>>>>>>> work - which has since been abandoned. To be honest it has >>>>>>>>>> been so long since I was working on this that I can't recall >>>>>>>>>> the details. At one point Ioi commented how all MSO's were >>>>>>>>>> allocated with 8-byte alignment which was unnecessary, and >>>>>>>>>> that we could do better and account for it in the size() >>>>>>>>>> method. He also noted if we somehow messed up the alignment >>>>>>>>>> when doing this that it should be quickly detectable on sparc. >>>>>>>>>> >>>>>>>>>> These current changes will affect the apparent wasted space in >>>>>>>>>> the archive as the expected usage would be based on size() >>>>>>>>>> while the actual usage would be determined by the allocator. >>>>>>>>>> >>>>>>>>>> Ioi was really the best person to comment-on/review this. >>>>>>>>>> >>>>>>>>>> David >>>>>>>>>> ----- >>>>>>>>>> >>>>>>>>>> On 31/01/2016 12:27 AM, Coleen Phillimore wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On 1/29/16 2:20 PM, Coleen Phillimore wrote: >>>>>>>>>>>> >>>>>>>>>>>> Thanks Chris, >>>>>>>>>>>> >>>>>>>>>>>> On 1/29/16 2:15 PM, Chris Plummer wrote: >>>>>>>>>>>>> Hi Coleen, >>>>>>>>>>>>> >>>>>>>>>>>>> On 1/28/16 7:31 PM, Coleen Phillimore wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> Hi Chris, >>>>>>>>>>>>>> >>>>>>>>>>>>>> I made a few extra changes because of your question that I >>>>>>>>>>>>>> didn't >>>>>>>>>>>>>> answer below, a few HeapWordSize became wordSize. I >>>>>>>>>>>>>> apologize that >>>>>>>>>>>>>> I don't know how to create incremental webrevs. See >>>>>>>>>>>>>> discussion below. >>>>>>>>>>>>>> >>>>>>>>>>>>>> open webrev at >>>>>>>>>>>>>> http://cr.openjdk.java.net/~coleenp/8145628.02/ >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 1/28/16 4:52 PM, Chris Plummer wrote: >>>>>>>>>>>>>>> On 1/28/16 1:41 PM, Coleen Phillimore wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thank you, Chris for looking at this change. 
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On 1/28/16 4:24 PM, Chris Plummer wrote: >>>>>>>>>>>>>>>>> Hi Coleen, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Can you do some testing with ObjectAlignmentInBytes set to >>>>>>>>>>>>>>>>> something other than 8? >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Okay, I can run one of the testsets with that. I >>>>>>>>>>>>>>>> verified it in >>>>>>>>>>>>>>>> the debugger mostly. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Someone from GC team should apply your patch, grep for >>>>>>>>>>>>>>>>> align_object_size(), and confirm that the ones you >>>>>>>>>>>>>>>>> didn't change >>>>>>>>>>>>>>>>> are correct. I gave a quick look and they look right to >>>>>>>>>>>>>>>>> me, but I >>>>>>>>>>>>>>>>> wasn't always certain if object alignment was >>>>>>>>>>>>>>>>> appropriate in all >>>>>>>>>>>>>>>>> cases. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> thanks - this is why I'd changed the align_object_size to >>>>>>>>>>>>>>>> align_heap_object_size before testing and changed it >>>>>>>>>>>>>>>> back, to >>>>>>>>>>>>>>>> verify that I didn't miss any. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I see some remaining HeapWordSize references that are >>>>>>>>>>>>>>>>> suspect, >>>>>>>>>>>>>>>>> like in Array.java and bytecodeTracer.cpp. I didn't go >>>>>>>>>>>>>>>>> through >>>>>>>>>>>>>>>>> all of them since there are about 428. Do they need closer >>>>>>>>>>>>>>>>> inspection? >>>>>>>>>>>>>>> ??? Any comment? >>>>>>>>>>>>>> >>>>>>>>>>>>>> Actually, I tried to get a lot of HeapWordSize in the >>>>>>>>>>>>>> metadata but >>>>>>>>>>>>>> the primary focus of the change, despite the title, was to >>>>>>>>>>>>>> fix >>>>>>>>>>>>>> align_object_size wasn't used on metadata. >>>>>>>>>>>>> ok. >>>>>>>>>>>>>> That said a quick look at the instances of HeapWordSize >>>>>>>>>>>>>> led to some >>>>>>>>>>>>>> that weren't in the heap. I didn't look in Array.java >>>>>>>>>>>>>> because it's >>>>>>>>>>>>>> in the SA which isn't maintainable anyway, but I changed a >>>>>>>>>>>>>> few. >>>>>>>>>>>>>> There were very few that were not referring to objects in >>>>>>>>>>>>>> the Java >>>>>>>>>>>>>> heap. bytecodeTracer was one and there were a couple in >>>>>>>>>>>>>> metaspace.cpp. >>>>>>>>>>>>> Ok. If you think there may be more, or a more thorough >>>>>>>>>>>>> analysis is >>>>>>>>>>>>> needed, perhaps just file a bug to get the rest later. >>>>>>>>>>>> >>>>>>>>>>>> From my look yesterday, there aren't a lot of HeapWordSize >>>>>>>>>>>> left. There >>>>>>>>>>>> are probably still a lot of HeapWord* casts for things that >>>>>>>>>>>> aren't in >>>>>>>>>>>> the Java heap. This is a bigger cleanup that might not make >>>>>>>>>>>> sense to >>>>>>>>>>>> do in one change, but maybe in incremental changes to >>>>>>>>>>>> related code. >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> As for reviewing your incremental changes, as long as it >>>>>>>>>>>>> was just >>>>>>>>>>>>> more changes of HeapWordSize to wordSize, I'm sure they are >>>>>>>>>>>>> fine. >>>>>>>>>>>>> (And yes, I did see that the removal of Symbol size >>>>>>>>>>>>> alignment was >>>>>>>>>>>>> also added). >>>>>>>>>>>> >>>>>>>>>>>> Good, thanks. >>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> The bad news is that's more code to review. See above >>>>>>>>>>>>>> webrev link. >>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> align_metadata_offset() is not used. It can be removed. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Okay, I'll remove it. That's a good idea. 
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Shouldn't align_metadata_size() align to 64-bit like >>>>>>>>>>>>>>>>> align_object_size() did, and not align to word size? >>>>>>>>>>>>>>>>> Isn't that >>>>>>>>>>>>>>>>> what we agreed to? Have you tested CDS? David had >>>>>>>>>>>>>>>>> concerns about >>>>>>>>>>>>>>>>> the InstanceKlass::size() not returning the same >>>>>>>>>>>>>>>>> aligned size as >>>>>>>>>>>>>>>>> Metachunk::object_alignment(). >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I ran the CDS tests but I could test some more with CDS. >>>>>>>>>>>>>>>> We don't >>>>>>>>>>>>>>>> want to force the size of objects to be 64 bit >>>>>>>>>>>>>>>> (especially Symbol) >>>>>>>>>>>>>>>> because Metachunk::object_alignment() is 64 bits. >>>>>>>>>>>>>>> Do you mean "just" because? I wasn't necessarily >>>>>>>>>>>>>>> suggesting that >>>>>>>>>>>>>>> all metadata be 64-bit aligned. However, the ones that >>>>>>>>>>>>>>> have their >>>>>>>>>>>>>>> allocation size 64-bit aligned should be. I think David's >>>>>>>>>>>>>>> concern >>>>>>>>>>>>>>> is that he wrote code that computes how much memory is >>>>>>>>>>>>>>> needed for >>>>>>>>>>>>>>> the archive, and it uses size() for that. If the Metachunk >>>>>>>>>>>>>>> allocator allocates more than size() due to the 64-bit >>>>>>>>>>>>>>> alignment of >>>>>>>>>>>>>>> Metachunk::object_alignment(), then he will underestimate >>>>>>>>>>>>>>> the size. >>>>>>>>>>>>>>> You'll need to double check with David to see if I got >>>>>>>>>>>>>>> this right. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I don't know what code this is but yes, it would be >>>>>>>>>>>>>> wrong. It also >>>>>>>>>>>>>> would be wrong if there's any other alignment gaps or >>>>>>>>>>>>>> space in >>>>>>>>>>>>>> metaspace chunks because chunks themselves have an allocation >>>>>>>>>>>>>> granularity. >>>>>>>>>>>>>> >>>>>>>>>>>>>> It could be changed back by changing the function >>>>>>>>>>>>>> align_metaspace_size from 1 to WordsPerLong if you wanted to. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I fixed Symbol so that it didn't call align_metaspace_size >>>>>>>>>>>>>> if this >>>>>>>>>>>>>> change is needed in the future. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I was trying to limit the size of this change to correct >>>>>>>>>>>>>> align_object_size for metadata. >>>>>>>>>>>>> Well, there a few issues being addressed by fixing >>>>>>>>>>>>> align_object_size. >>>>>>>>>>>>> Using align_object_size was incorrect from a code purity >>>>>>>>>>>>> standpoint >>>>>>>>>>>>> (it was used on values unrelated to java objects), and was >>>>>>>>>>>>> also >>>>>>>>>>>>> incorrect when ObjectAlignmentInBytes was not 8. This was >>>>>>>>>>>>> the main >>>>>>>>>>>>> motivation for making this change. >>>>>>>>>>>> >>>>>>>>>>>> Exactly. This was higher priority because it was wrong. >>>>>>>>>>>>> >>>>>>>>>>>>> The 3rd issue is that align_object_size by default was >>>>>>>>>>>>> doing 8 byte >>>>>>>>>>>>> alignment, and this wastes memory on 32-bit. However, as I >>>>>>>>>>>>> mentioned >>>>>>>>>>>>> there may be some dependencies on this 8 byte alignment due >>>>>>>>>>>>> to the >>>>>>>>>>>>> metaspace allocator doing 8 byte alignment. If you can get >>>>>>>>>>>>> David to >>>>>>>>>>>>> say he's ok with just 4-byte size alignment on 32-bit, then >>>>>>>>>>>>> I'm ok >>>>>>>>>>>>> with this change. 
Otherwise I think maybe you should stay >>>>>>>>>>>>> with 8 byte >>>>>>>>>>>>> alignment (including symbols), and file a bug to someday >>>>>>>>>>>>> change it to >>>>>>>>>>>>> word alignment, and have the metaspace allocator require >>>>>>>>>>>>> that you >>>>>>>>>>>>> pass in alignment requirements. >>>>>>>>>>>> >>>>>>>>>>>> Okay, I can see what David says but I wouldn't change Symbol >>>>>>>>>>>> back. >>>>>>>>>>>> That's mostly unrelated to metadata storage and I can get 32 >>>>>>>>>>>> bit >>>>>>>>>>>> packing for symbols on 32 bit platforms. It probably saves >>>>>>>>>>>> more space >>>>>>>>>>>> than the other more invasive ideas that we've had. >>>>>>>>>>> >>>>>>>>>>> This is reviewed now. If David wants metadata sizing to >>>>>>>>>>> change back to >>>>>>>>>>> 64 bits on 32 bit platforms, it's a one line change. I'm >>>>>>>>>>> going to push >>>>>>>>>>> it to get the rest in. >>>>>>>>>>> Thanks, >>>>>>>>>>> Coleen >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> Coleen >>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks for looking at this in detail. >>>>>>>>>>>>> No problem. Thanks for cleaning this up. >>>>>>>>>>>>> >>>>>>>>>>>>> Chris >>>>>>>>>>>>>> >>>>>>>>>>>>>> Coleen >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Unfortunately, with the latter, metadata is never >>>>>>>>>>>>>>>> aligned on 32 >>>>>>>>>>>>>>>> bit boundaries for 32 bit platforms, but to fix this, we >>>>>>>>>>>>>>>> have to >>>>>>>>>>>>>>>> pass a minimum_alignment parameter to Metaspace::allocate() >>>>>>>>>>>>>>>> because the alignment is not a function of the size of >>>>>>>>>>>>>>>> the object >>>>>>>>>>>>>>>> but what is required from its nonstatic data members. >>>>>>>>>>>>>>> Correct. >>>>>>>>>>>>>>>> I found MethodCounters, Klass (and subclasses) and >>>>>>>>>>>>>>>> ConstantPool >>>>>>>>>>>>>>>> has such alignment constraints. Not sizing metadata to >>>>>>>>>>>>>>>> 64 bit >>>>>>>>>>>>>>>> sizes is a start for making this change. >>>>>>>>>>>>>>> I agree with that, but just wanted to point out why David >>>>>>>>>>>>>>> may be >>>>>>>>>>>>>>> concerned with this change. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> instanceKlass.hpp: Need to fix the following comment: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> 97 // sizeof(OopMapBlock) in HeapWords. >>>>>>>>>>>>>>>> Fixed, Thanks! >>>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Chris >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Coleen >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Chris >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On 1/27/16 10:27 AM, Coleen Phillimore wrote: >>>>>>>>>>>>>>>>>> Summary: Use align_metadata_size, >>>>>>>>>>>>>>>>>> align_metadata_offset and >>>>>>>>>>>>>>>>>> is_metadata_aligned for metadata rather >>>>>>>>>>>>>>>>>> than align_object_size, etc. Use wordSize rather than >>>>>>>>>>>>>>>>>> HeapWordSize for metadata. Use align_ptr_up >>>>>>>>>>>>>>>>>> rather than align_pointer_up (all the related >>>>>>>>>>>>>>>>>> functions are ptr). >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Ran RBT quick tests on all platforms along with >>>>>>>>>>>>>>>>>> Chris's Plummers >>>>>>>>>>>>>>>>>> change for 8143608, ran jtreg hotspot tests and >>>>>>>>>>>>>>>>>> nsk.sajdi.testlist co-located tests because there are SA >>>>>>>>>>>>>>>>>> changes. Reran subset of this after merging. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> I have a script to update copyrights on commit. It's >>>>>>>>>>>>>>>>>> not a big >>>>>>>>>>>>>>>>>> change, just mostly boring. 
See the bug comments for >>>>>>>>>>>>>>>>>> more >>>>>>>>>>>>>>>>>> details about the change. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> open webrev at >>>>>>>>>>>>>>>>>> http://cr.openjdk.java.net/~coleenp/8145628.01/ >>>>>>>>>>>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8145628 >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> thanks, >>>>>>>>>>>>>>>>>> Coleen >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > From sgehwolf at redhat.com Wed Feb 3 11:31:35 2016 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Wed, 03 Feb 2016 12:31:35 +0100 Subject: RFR(s): 8148945: JDK-8148481: Devirtualize Klass::vtable breaks Zero build Message-ID: <1454499095.3703.6.camel@redhat.com> Hi, Could somebody please review and sponsor this Zero build fix? JDK-8148481 made?start_of_vtable() protected and this change accounts for that by now using the new?method_at_vtable() function. I think?src/share/vm/interpreter/bytecodeInterpreter.cpp is Zero-only these days even though it lives in shared code. Bug:?https://bugs.openjdk.java.net/browse/JDK-8148945 webrev:?http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8148945/webrev.01/ Testing done: Zero builds again in all variants. Thanks, Severin From mikael.gerdin at oracle.com Wed Feb 3 12:18:37 2016 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Wed, 3 Feb 2016 13:18:37 +0100 Subject: RFR(s): 8148945: JDK-8148481: Devirtualize Klass::vtable breaks Zero build In-Reply-To: <1454499095.3703.6.camel@redhat.com> References: <1454499095.3703.6.camel@redhat.com> Message-ID: <56B1F01D.80204@oracle.com> Hi Severin, On 2016-02-03 12:31, Severin Gehwolf wrote: > Hi, > > Could somebody please review and sponsor this Zero build fix? > > JDK-8148481 made start_of_vtable() protected and this change accounts > for that by now using the new method_at_vtable() function. I > think src/share/vm/interpreter/bytecodeInterpreter.cpp is Zero-only > these days even though it lives in shared code. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8148945 > webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8148945/webrev.01/ Sorry for breaking this, I completely forgot about zero :( The fix looks good to me. /Mikael > > Testing done: Zero builds again in all variants. > > Thanks, > Severin > From sgehwolf at redhat.com Wed Feb 3 12:35:14 2016 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Wed, 03 Feb 2016 13:35:14 +0100 Subject: RFR(s): 8148945: JDK-8148481: Devirtualize Klass::vtable breaks Zero build In-Reply-To: <56B1F01D.80204@oracle.com> References: <1454499095.3703.6.camel@redhat.com> <56B1F01D.80204@oracle.com> Message-ID: <1454502914.3703.12.camel@redhat.com> Hi, On Wed, 2016-02-03 at 13:18 +0100, Mikael Gerdin wrote: > Hi Severin, > > On 2016-02-03 12:31, Severin Gehwolf wrote: > > Hi, > > > > Could somebody please review and sponsor this Zero build fix? > > > > JDK-8148481 made start_of_vtable() protected and this change accounts > > for that by now using the new method_at_vtable() function. I > > think src/share/vm/interpreter/bytecodeInterpreter.cpp is Zero-only > > these days even though it lives in shared code. > > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8148945 > > webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8148945/webrev.01/ > > Sorry for breaking this, I completely forgot about zero :( No harm done :) > The fix looks good to me. Thanks! 
Is this enough as far as Reviewers are concerned for this trivial fix? I'm attaching the hg-exported changeset in any case :) Cheers, Severin > /Mikael > > > > > Testing done: Zero builds again in all variants. > > > > Thanks, > > Severin > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: JDK-8148945.export.patch Type: text/x-patch Size: 2398 bytes Desc: not available URL: From coleen.phillimore at oracle.com Wed Feb 3 13:10:42 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 3 Feb 2016 08:10:42 -0500 Subject: RFR(s): 8148945: JDK-8148481: Devirtualize Klass::vtable breaks Zero build In-Reply-To: <1454502914.3703.12.camel@redhat.com> References: <1454499095.3703.6.camel@redhat.com> <56B1F01D.80204@oracle.com> <1454502914.3703.12.camel@redhat.com> Message-ID: <56B1FC52.3090404@oracle.com> It looks good and is enough reviewers, so don't fix the export for me. Mikael you can just push this change directly since JPRT doesn't build Zero. Coleen On 2/3/16 7:35 AM, Severin Gehwolf wrote: > Hi, > > On Wed, 2016-02-03 at 13:18 +0100, Mikael Gerdin wrote: >> Hi Severin, >> >> On 2016-02-03 12:31, Severin Gehwolf wrote: >>> Hi, >>> >>> Could somebody please review and sponsor this Zero build fix? >>> >>> JDK-8148481 made start_of_vtable() protected and this change accounts >>> for that by now using the new method_at_vtable() function. I >>> think src/share/vm/interpreter/bytecodeInterpreter.cpp is Zero-only >>> these days even though it lives in shared code. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8148945 >>> webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8148945/webrev.01/ >> Sorry for breaking this, I completely forgot about zero :( > No harm done :) > >> The fix looks good to me. > Thanks! Is this enough as far as Reviewers are concerned for this > trivial fix? I'm attaching the hg-exported changeset in any case :) > > Cheers, > Severin > >> /Mikael >> >>> Testing done: Zero builds again in all variants. >>> >>> Thanks, >>> Severin >>> From markus.gronlund at oracle.com Wed Feb 3 15:10:09 2016 From: markus.gronlund at oracle.com (Markus Gronlund) Date: Wed, 3 Feb 2016 07:10:09 -0800 (PST) Subject: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc In-Reply-To: <56B0ED4C.6000902@oracle.com> References: <56AB5F39.1060005@oracle.com> <56AB8D2E.7090902@oracle.com> <56AB9A43.1000905@oracle.com> <56ACCB06.7060900@oracle.com> <56AF9422.5010300@oracle.com> <56AF9E4B.8050704@oracle.com> <56B0ED4C.6000902@oracle.com> Message-ID: Hi Coleen, Thanks for looking into this. Maybe it could be simplified like: bool Method::has_method_vptr(const void* ptr) { // Use SafeFetch to check if this is a valid pointer first // This assumes that the vtbl pointer is the first word of a C++ object. // This assumption is also in universe.cpp patch_klass_vtable if (1 == SafeFetchN((intptr_t*)ptr, intptr_t(1))) { return false; } const Method m; return dereference_vptr(&m) == ptr; } // Check that this pointer is valid by checking that the vtbl pointer matches bool Method::is_valid_method() const { if (this == NULL) { return false; } // Quick sanity check on pointer. if ((intptr_t(this) & (wordSize-1)) != 0) { return false; } return has_method_vptr(this); } I think you also need: #include "runtime/stubRoutines.hpp Can't really comment on the use of SafeFetchN() as I am not familiar with it? Do all platforms support this? Looks like a good improvement for stability. No need to see any updated reviews. 
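For readers unfamiliar with SafeFetchN: it loads a word from a possibly-invalid address and returns the supplied error value instead of crashing if the access faults. The real version is a small generated stub whose faults the VM's signal handler recognizes and fixes up; what follows is only a user-level illustration of that contract (not thread-safe and not the actual implementation).

#include <csetjmp>
#include <csignal>
#include <cstdint>

static sigjmp_buf g_fetch_env;

static void fetch_fault_handler(int) {
  siglongjmp(g_fetch_env, 1);
}

// Approximates SafeFetchN(adr, err_value): returns *adr, or err_value if
// reading *adr raises SIGSEGV or SIGBUS.
static intptr_t safe_fetch_n_sketch(const intptr_t* adr, intptr_t err_value) {
  struct sigaction sa, old_segv, old_bus;
  sa.sa_handler = fetch_fault_handler;
  sigemptyset(&sa.sa_mask);
  sa.sa_flags = 0;
  sigaction(SIGSEGV, &sa, &old_segv);
  sigaction(SIGBUS,  &sa, &old_bus);
  volatile intptr_t result = err_value;
  if (sigsetjmp(g_fetch_env, 1) == 0) {
    result = *adr;                        // may fault; the handler jumps back
  }
  sigaction(SIGSEGV, &old_segv, nullptr);
  sigaction(SIGBUS,  &old_bus,  nullptr);
  return result;
}
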
Thanks Markus -----Original Message----- From: Coleen Phillimore Sent: den 2 februari 2016 18:54 To: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz Subject: Re: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc Goetz, Can you review this since it's using SafeFetchN? thanks, Coleen On 2/1/16 1:04 PM, Coleen Phillimore wrote: > > > Thanks Dan! > > On 2/1/16 12:21 PM, Daniel D. Daugherty wrote: >> On 1/30/16 7:39 AM, Coleen Phillimore wrote: >>> >>> I've moved the SafeFetch to has_method_vptr as suggested and retested. >>> >>> http://cr.openjdk.java.net/~coleenp/8146984.02/webrev/ >> >> src/share/vm/oops/method.cpp >> (old) L2114: return has_method_vptr((const void*)this); >> (new) L2120: return has_method_vptr(this); >> Just curious. I don't see anything that explains why the >> cast is no longer needed (no type changes). Was this >> simply cleaning up an unnecessary cast? > > The cast is unnecessary. I didn't add it back when I added the call > to has_method_vptr back. > > thanks, > Coleen > >> >> Thumbs up. >> >> Dan >> >> >>> >>> Thanks, >>> Coleen >>> >>> On 1/29/16 11:58 AM, Coleen Phillimore wrote: >>>> >>>> >>>> On 1/29/16 11:02 AM, Daniel D. Daugherty wrote: >>>>> On 1/29/16 5:46 AM, Coleen Phillimore wrote: >>>>>> Summary: Add address check and use SafeFetchN for Method* vptr >>>>>> access when Method* may be bad pointer. >>>>>> >>>>>> Tested with RBT and failing test case (reproduced 1 in 100 times) >>>>>> with fatal in the 'return's in the change to verify. >>>>>> >>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8146984/ >>>>> >>>>> This one caught my eye because it has to do with sampling... >>>> >>>> I should mention sampling in all my RFRs then! >>>>> >>>>> src/share/vm/oops/method.cpp >>>>> The old code checked "!is_metaspace_object()" and used >>>>> has_method_vptr((const void*)this). >>>>> >>>>> The new code skips the "!is_metaspace_object()" check even >>>>> after sanity >>>>> checking the pointer, but you don't really explain why that's OK. >>>> >>>> is_metaspace_object is a very expensive check. It has to traverse >>>> all the metaspace mmap chunks. The new code is more robust in >>>> that it sanity checks the pointer first but uses Safefetch to get >>>> the vptr. >>>> >>>> >>>>> >>>>> The new code also picks up parts of Method::has_method_vptr() >>>>> which >>>>> makes me wonder if that's the right place for the fix. Won't >>>>> other >>>>> callers to Method::has_method_vptr() be subject to the same >>>>> crashing >>>>> mode? Or was the crashing mode only due to the >>>>> "!is_metaspace_object()" >>>>> check... >>>> >>>> I should have moved the SafeFetch in to the has_method_vptr. I >>>> can't remember why I copied it now. It crashed because the pointer >>>> was in metaspace (is_metaspace_object returned true) but wasn't >>>> aligned, but the pointer could come from anywhere. >>>> >>>> Thanks, I'll test out this fix and resend it. >>>> Coleen >>>> >>>>> >>>>> Dan >>>>> >>>>> >>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8146984 >>>>>> >>>>>> Thanks, >>>>>> Coleen >>>>>> >>>>> >>>> >>> >> > From erik.helin at oracle.com Wed Feb 3 15:13:41 2016 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 3 Feb 2016 16:13:41 +0100 Subject: RFR: 8148844: Update run_unit_test macro for InternalVMTests Message-ID: <20160203151341.GB8777@ehelin.jrpg.bea.com> Hi all, this patch updates the run_unit_test macro for InternalVMTests. The new macro both forward declares the test function and runs it. 
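A hedged sketch of what such a macro might look like (illustrative only, not the actual InternalVMTests code; the block-scope forward declaration and the run_test helper it relies on are described just below):

#include <cstdio>

typedef void (*UnitTestFunc)();

// Helper whose parameter type forces every test to be "void f()".
static void run_test(const char* name, UnitTestFunc f) {
  printf("Running test: %s\n", name);
  f();
}

// Forward-declare the test function at block scope, then run it.
#define run_unit_test(test_function)              \
  {                                               \
    void test_function();                         \
    run_test(#test_function, test_function);      \
  }

// Hypothetical test function, only to make the sketch self-contained.
void example_unit_test() { printf("  example assertion passed\n"); }

int main() {
  run_unit_test(example_unit_test);
  return 0;
}
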
C++ can (as opposed to C) forward declare a function inside another function. I also added a small helper function, run_test, that ensures that test functions must return void and take no parameters (by typing the test function as a function pointer). Webrev: http://cr.openjdk.java.net/~ehelin/8148844/00/ Enhancement: https://bugs.openjdk.java.net/browse/JDK-8148844 Testing: - JPRT - Running the tests locally Thanks, Erik From daniel.daugherty at oracle.com Wed Feb 3 15:14:56 2016 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 3 Feb 2016 08:14:56 -0700 Subject: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc In-Reply-To: References: <56AB5F39.1060005@oracle.com> <56AB8D2E.7090902@oracle.com> <56AB9A43.1000905@oracle.com> <56ACCB06.7060900@oracle.com> <56AF9422.5010300@oracle.com> <56AF9E4B.8050704@oracle.com> <56B0ED4C.6000902@oracle.com> Message-ID: <56B21970.4030508@oracle.com> On 2/3/16 8:10 AM, Markus Gronlund wrote: > Hi Coleen, > > Thanks for looking into this. Maybe it could be simplified like: Very nicely done... > > bool Method::has_method_vptr(const void* ptr) { > // Use SafeFetch to check if this is a valid pointer first > // This assumes that the vtbl pointer is the first word of a C++ object. > // This assumption is also in universe.cpp patch_klass_vtable > if (1 == SafeFetchN((intptr_t*)ptr, intptr_t(1))) { > return false; > } > const Method m; > return dereference_vptr(&m) == ptr; > } > > // Check that this pointer is valid by checking that the vtbl pointer matches > bool Method::is_valid_method() const { > if (this == NULL) { In keeping with our latest style migration: if (NULL == this) { Yes, I know I have said I don't like it... but we're going that way and it's starting to grow on me... :-) Dan > return false; > } > // Quick sanity check on pointer. > if ((intptr_t(this) & (wordSize-1)) != 0) { Spaces around the '-' please... Dan > return false; > } > > return has_method_vptr(this); > } > > I think you also need: > > #include "runtime/stubRoutines.hpp > > Can't really comment on the use of SafeFetchN() as I am not familiar with it? Do all platforms support this? > > Looks like a good improvement for stability. > > No need to see any updated reviews. > > Thanks > Markus > > > -----Original Message----- > From: Coleen Phillimore > Sent: den 2 februari 2016 18:54 > To: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > Subject: Re: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc > > > Goetz, Can you review this since it's using SafeFetchN? > > thanks, > Coleen > > On 2/1/16 1:04 PM, Coleen Phillimore wrote: >> >> Thanks Dan! >> >> On 2/1/16 12:21 PM, Daniel D. Daugherty wrote: >>> On 1/30/16 7:39 AM, Coleen Phillimore wrote: >>>> I've moved the SafeFetch to has_method_vptr as suggested and retested. >>>> >>>> http://cr.openjdk.java.net/~coleenp/8146984.02/webrev/ >>> src/share/vm/oops/method.cpp >>> (old) L2114: return has_method_vptr((const void*)this); >>> (new) L2120: return has_method_vptr(this); >>> Just curious. I don't see anything that explains why the >>> cast is no longer needed (no type changes). Was this >>> simply cleaning up an unnecessary cast? >> The cast is unnecessary. I didn't add it back when I added the call >> to has_method_vptr back. >> >> thanks, >> Coleen >> >>> Thumbs up. >>> >>> Dan >>> >>> >>>> Thanks, >>>> Coleen >>>> >>>> On 1/29/16 11:58 AM, Coleen Phillimore wrote: >>>>> >>>>> On 1/29/16 11:02 AM, Daniel D. 
Daugherty wrote: >>>>>> On 1/29/16 5:46 AM, Coleen Phillimore wrote: >>>>>>> Summary: Add address check and use SafeFetchN for Method* vptr >>>>>>> access when Method* may be bad pointer. >>>>>>> >>>>>>> Tested with RBT and failing test case (reproduced 1 in 100 times) >>>>>>> with fatal in the 'return's in the change to verify. >>>>>>> >>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8146984/ >>>>>> This one caught my eye because it has to do with sampling... >>>>> I should mention sampling in all my RFRs then! >>>>>> src/share/vm/oops/method.cpp >>>>>> The old code checked "!is_metaspace_object()" and used >>>>>> has_method_vptr((const void*)this). >>>>>> >>>>>> The new code skips the "!is_metaspace_object()" check even >>>>>> after sanity >>>>>> checking the pointer, but you don't really explain why that's OK. >>>>> is_metaspace_object is a very expensive check. It has to traverse >>>>> all the metaspace mmap chunks. The new code is more robust in >>>>> that it sanity checks the pointer first but uses Safefetch to get >>>>> the vptr. >>>>> >>>>> >>>>>> The new code also picks up parts of Method::has_method_vptr() >>>>>> which >>>>>> makes me wonder if that's the right place for the fix. Won't >>>>>> other >>>>>> callers to Method::has_method_vptr() be subject to the same >>>>>> crashing >>>>>> mode? Or was the crashing mode only due to the >>>>>> "!is_metaspace_object()" >>>>>> check... >>>>> I should have moved the SafeFetch in to the has_method_vptr. I >>>>> can't remember why I copied it now. It crashed because the pointer >>>>> was in metaspace (is_metaspace_object returned true) but wasn't >>>>> aligned, but the pointer could come from anywhere. >>>>> >>>>> Thanks, I'll test out this fix and resend it. >>>>> Coleen >>>>> >>>>>> Dan >>>>>> >>>>>> >>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8146984 >>>>>>> >>>>>>> Thanks, >>>>>>> Coleen >>>>>>> From jesper.wilhelmsson at oracle.com Wed Feb 3 15:33:16 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Wed, 3 Feb 2016 16:33:16 +0100 Subject: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc In-Reply-To: <56B21970.4030508@oracle.com> References: <56AB5F39.1060005@oracle.com> <56AB8D2E.7090902@oracle.com> <56AB9A43.1000905@oracle.com> <56ACCB06.7060900@oracle.com> <56AF9422.5010300@oracle.com> <56AF9E4B.8050704@oracle.com> <56B0ED4C.6000902@oracle.com> <56B21970.4030508@oracle.com> Message-ID: <56B21DBC.9050905@oracle.com> Den 3/2/16 kl. 16:14, skrev Daniel D. Daugherty: > On 2/3/16 8:10 AM, Markus Gronlund wrote: >> Hi Coleen, >> >> Thanks for looking into this. Maybe it could be simplified like: > > Very nicely done... > >> >> bool Method::has_method_vptr(const void* ptr) { >> // Use SafeFetch to check if this is a valid pointer first >> // This assumes that the vtbl pointer is the first word of a C++ object. >> // This assumption is also in universe.cpp patch_klass_vtable >> if (1 == SafeFetchN((intptr_t*)ptr, intptr_t(1))) { >> return false; >> } >> const Method m; >> return dereference_vptr(&m) == ptr; >> } >> >> // Check that this pointer is valid by checking that the vtbl pointer matches >> bool Method::is_valid_method() const { >> if (this == NULL) { > > In keeping with our latest style migration: > > if (NULL == this) { > > Yes, I know I have said I don't like it... but we're > going that way and it's starting to grow on me... :-) Don't give in Dan! I think this is truly awful and everyone I've asked around here dislikes this style.
Is there a reason to turn things around like this? (That compilers from the Jurassic age can't detect "if (this = NULL)" is not a valid argument.) /Jesper > > Dan > > >> return false; >> } >> // Quick sanity check on pointer. >> if ((intptr_t(this) & (wordSize-1)) != 0) { > > Spaces around the '-' please... > > Dan > >> return false; >> } >> >> return has_method_vptr(this); >> } >> >> I think you also need: >> >> #include "runtime/stubRoutines.hpp >> >> Can't really comment on the use of SafeFetchN() as I am not familiar with it? >> Do all platforms support this? >> >> Looks like a good improvement for stability. >> >> No need to see any updated reviews. >> >> Thanks >> Markus >> >> >> -----Original Message----- >> From: Coleen Phillimore >> Sent: den 2 februari 2016 18:54 >> To: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >> Subject: Re: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const >> void*)+0xc >> >> >> Goetz, Can you review this since it's using SafeFetchN? >> >> thanks, >> Coleen >> >> On 2/1/16 1:04 PM, Coleen Phillimore wrote: >>> >>> Thanks Dan! >>> >>> On 2/1/16 12:21 PM, Daniel D. Daugherty wrote: >>>> On 1/30/16 7:39 AM, Coleen Phillimore wrote: >>>>> I've moved the SafeFetch to has_method_vptr as suggested and retested. >>>>> >>>>> http://cr.openjdk.java.net/~coleenp/8146984.02/webrev/ >>>> src/share/vm/oops/method.cpp >>>> (old) L2114: return has_method_vptr((const void*)this); >>>> (new) L2120: return has_method_vptr(this); >>>> Just curious. I don't see anything that explains why the >>>> cast is no longer needed (no type changes). Was this >>>> simply cleaning up an unnecessary cast? >>> The cast is unnecessary. I didn't add it back when I added the call >>> to has_method_vptr back. >>> >>> thanks, >>> Coleen >>> >>>> Thumbs up. >>>> >>>> Dan >>>> >>>> >>>>> Thanks, >>>>> Coleen >>>>> >>>>> On 1/29/16 11:58 AM, Coleen Phillimore wrote: >>>>>> >>>>>> On 1/29/16 11:02 AM, Daniel D. Daugherty wrote: >>>>>>> On 1/29/16 5:46 AM, Coleen Phillimore wrote: >>>>>>>> Summary: Add address check and use SafeFetchN for Method* vptr >>>>>>>> access when Method* may be bad pointer. >>>>>>>> >>>>>>>> Tested with RBT and failing test case (reproduced 1 in 100 times) >>>>>>>> with fatal in the 'return's in the change to verify. >>>>>>>> >>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8146984/ >>>>>>> This one caught my eye because it has to do with sampling... >>>>>> I should mention sampling in all my RFRs then! >>>>>>> src/share/vm/oops/method.cpp >>>>>>> The old code checked "!is_metaspace_object()" and used >>>>>>> has_method_vptr((const void*)this). >>>>>>> >>>>>>> The new code skips the "!is_metaspace_object()" check even >>>>>>> after sanity >>>>>>> checking the pointer, but you don't really explain why that's OK. >>>>>> is_metaspace_object is a very expensive check. It has to traverse >>>>>> all the metaspace mmap chunks. The new code is more robust in >>>>>> that it sanity checks the pointer first but uses Safefetch to get >>>>>> the vptr. >>>>>> >>>>>> >>>>>>> The new code also picks up parts of Method::has_method_vptr() >>>>>>> which >>>>>>> makes me wonder if that's the right place for the fix. Won't >>>>>>> other >>>>>>> callers to Method::has_method_vptr() be subject to the same >>>>>>> crashing >>>>>>> mode? Or was the crashing mode only due to the >>>>>>> "!is_metaspace_object()" >>>>>>> check... >>>>>> I should have moved the SafeFetch in to the has_method_vptr. I >>>>>> can't remember why I copied it now.
It crashed because the pointer >>>>>> was in metaspace (is_metaspace_object returned true) but wasn't >>>>>> aligned, but the pointer could come from anywhere. >>>>>> >>>>>> Thanks, I'll test out this fix and resend it. >>>>>> Coleen >>>>>> >>>>>>> Dan >>>>>>> >>>>>>> >>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8146984 >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Coleen >>>>>>>> > From mikael.vidstedt at oracle.com Wed Feb 3 16:13:38 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Wed, 3 Feb 2016 08:13:38 -0800 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56B1CBA7.4050902@redhat.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> <56B1CBA7.4050902@redhat.com> Message-ID: <56B22732.6010609@oracle.com> On 2016-02-03 01:43, Andrew Haley wrote: > On 02/02/16 19:25, Mikael Vidstedt wrote: >> Please review this change which introduces a Copy::conjoint_swap and an >> Unsafe.copySwapMemory method to call it from Java, along with the >> necessary changes to have java.nio.Bits call it instead of the Bits.c code. > There doesn't seem to be any way to use a byte-swap instruction > in the swapping code. This will make it unnecessarily slow. To be clear, this isn't trying to provide the absolutely most optimal copy+swap implementation. It's trying to fix the Bits.c unaligned bug and pave the way for further improvements. Further performance improvements here are certainly possible, but at this point I'm happy as long as the performance is on par (or better) with the Bits.c implementation it's replacing. That said, at least gcc seems to recognize the byte swapping pattern and does emit a bswap on linux-x64. I'm not sure about the other platforms though. Cheers, Mikael From aph at redhat.com Wed Feb 3 16:15:44 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 3 Feb 2016 16:15:44 +0000 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56B22732.6010609@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> <56B1CBA7.4050902@redhat.com> <56B22732.6010609@oracle.com> Message-ID: <56B227B0.6080705@redhat.com> On 02/03/2016 04:13 PM, Mikael Vidstedt wrote: > > On 2016-02-03 01:43, Andrew Haley wrote: >> On 02/02/16 19:25, Mikael Vidstedt wrote: >>> Please review this change which introduces a Copy::conjoint_swap and an >>> Unsafe.copySwapMemory method to call it from Java, along with the >>> necessary changes to have java.nio.Bits call it instead of the Bits.c code. >> There doesn't seem to be any way to use a byte-swap instruction >> in the swapping code. This will make it unnecessarily slow. > > To be clear, this isn't trying to provide the absolutely most optimal > copy+swap implementation. It's trying to fix the Bits.c unaligned bug > and pave the way for further improvements. 
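For anyone not following the webrev, the per-element work under discussion has roughly this shape (an illustrative sketch only, not the actual Copy::conjoint_swap code):

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  // Copy 'count' 32-bit elements from src to dst, reversing the byte order
  // of each element. memcpy is used for the loads/stores so that unaligned
  // addresses are handled safely; the shift-and-or swap is the kind of
  // pattern a compiler may recognize and turn into a single bswap.
  static void copy_swap_u4(const void* src, void* dst, size_t count) {
    const uint8_t* s = (const uint8_t*) src;
    uint8_t*       d = (uint8_t*) dst;
    for (size_t i = 0; i < count; i++, s += 4, d += 4) {
      uint32_t x;
      memcpy(&x, s, sizeof(x));                    // unaligned-safe load
      x = (x >> 24) | ((x >> 8) & 0x0000FF00u) |
          ((x << 8) & 0x00FF0000u) | (x << 24);    // byte swap
      memcpy(d, &x, sizeof(x));                    // unaligned-safe store
    }
  }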
Further performance > improvements here are certainly possible, but at this point I'm happy as > long as the performance is on par (or better) with the Bits.c > implementation it's replacing. Got it, sure. It's just nice to be able to replace low-level routines with platform ones. > That said, at least gcc seems to recognize the byte swapping pattern and > does emit a bswap on linux-x64. I'm not sure about the other platforms > though. Oh, very nice. Right, I'll check that once your patch does in. Andrew. From coleen.phillimore at oracle.com Wed Feb 3 16:43:53 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 3 Feb 2016 11:43:53 -0500 Subject: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc In-Reply-To: <56B21DBC.9050905@oracle.com> References: <56AB5F39.1060005@oracle.com> <56AB8D2E.7090902@oracle.com> <56AB9A43.1000905@oracle.com> <56ACCB06.7060900@oracle.com> <56AF9422.5010300@oracle.com> <56AF9E4B.8050704@oracle.com> <56B0ED4C.6000902@oracle.com> <56B21970.4030508@oracle.com> <56B21DBC.9050905@oracle.com> Message-ID: <56B22E49.8040007@oracle.com> On 2/3/16 10:33 AM, Jesper Wilhelmsson wrote: > Den 3/2/16 kl. 16:14, skrev Daniel D. Daugherty: >> On 2/3/16 8:10 AM, Markus Gronlund wrote: >>> Hi Coleen, >>> >>> Thanks for looking into this. Maybe it could be simplified like: >> >> Very nicely done... >> >>> >>> bool Method::has_method_vptr(const void* ptr) { >>> // Use SafeFetch to check if this is a valid pointer first >>> // This assumes that the vtbl pointer is the first word of a C++ >>> object. >>> // This assumption is also in universe.cpp patch_klass_vtable >>> if (1 == SafeFetchN((intptr_t*)ptr, intptr_t(1))) { >>> return false; >>> } >>> const Method m; >>> return dereference_vptr(&m) == ptr; >>> } >>> >>> // Check that this pointer is valid by checking that the vtbl >>> pointer matches >>> bool Method::is_valid_method() const { >>> if (this == NULL) { >> >> In keeping with our latest style migration: >> >> if (NULL == this) { >> >> Yes, I know I have said I don't like it... but we're >> going that way and it's starting to grow on me... :-) > > Don't give in Dan! > I think this truly awful and everyone I've asked around here dislikes > this style. Is there a reason to turn things around like this? > (Compilers from the Jurassic age cant detect "if (this = NULL)" is not > a valid argument.) Oh good, I agree, I find this style visually distracting. I won't give in. Coleen > /Jesper > > >> >> Dan >> >> >>> return false; >>> } >>> // Quick sanity check on pointer. >>> if ((intptr_t(this) & (wordSize-1)) != 0) { >> >> Spaces around the '-' please... >> >> Dan >> >>> return false; >>> } >>> >>> return has_method_vptr(this); >>> } >>> >>> I think you also need: >>> >>> #include "runtime/stubRoutines.hpp >>> >>> Can't really comment on the use of SafeFetchN() as I am not familiar >>> with it? >>> Do all platforms support this? >>> >>> Looks like a good improvement for stability. >>> >>> No need to see any updated reviews. >>> >>> Thanks >>> Markus >>> >>> >>> -----Original Message----- >>> From: Coleen Phillimore >>> Sent: den 2 februari 2016 18:54 >>> To: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>> Subject: Re: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const >>> void*)+0xc >>> >>> >>> Goetz, Can you review this since it's using SafeFetchN? >>> >>> thanks, >>> Coleen >>> >>> On 2/1/16 1:04 PM, Coleen Phillimore wrote: >>>> >>>> Thanks Dan! >>>> >>>> On 2/1/16 12:21 PM, Daniel D. 
Daugherty wrote: >>>>> On 1/30/16 7:39 AM, Coleen Phillimore wrote: >>>>>> I've moved the SafeFetch to has_method_vptr as suggested and >>>>>> retested. >>>>>> >>>>>> http://cr.openjdk.java.net/~coleenp/8146984.02/webrev/ >>>>> src/share/vm/oops/method.cpp >>>>> (old) L2114: return has_method_vptr((const void*)this); >>>>> (new) L2120: return has_method_vptr(this); >>>>> Just curious. I don't see anything that explains why the >>>>> cast is no longer needed (no type changes). Was this >>>>> simply cleaning up an unnecessary cast? >>>> The cast is unnecessary. I didn't add it back when I added the call >>>> to has_method_vptr back. >>>> >>>> thanks, >>>> Coleen >>>> >>>>> Thumbs up. >>>>> >>>>> Dan >>>>> >>>>> >>>>>> Thanks, >>>>>> Coleen >>>>>> >>>>>> On 1/29/16 11:58 AM, Coleen Phillimore wrote: >>>>>>> >>>>>>> On 1/29/16 11:02 AM, Daniel D. Daugherty wrote: >>>>>>>> On 1/29/16 5:46 AM, Coleen Phillimore wrote: >>>>>>>>> Summary: Add address check and use SafeFetchN for Method* vptr >>>>>>>>> access when Method* may be bad pointer. >>>>>>>>> >>>>>>>>> Tested with RBT and failing test case (reproduced 1 in 100 times) >>>>>>>>> with fatal in the 'return's in the change to verify. >>>>>>>>> >>>>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8146984/ >>>>>>>> This one caught my eye because it has to do with sampling... >>>>>>> I should mention sampling in all my RFRs then! >>>>>>>> src/share/vm/oops/method.cpp >>>>>>>> The old code checked "!is_metaspace_object()" and used >>>>>>>> has_method_vptr((const void*)this). >>>>>>>> >>>>>>>> The new code skips the "!is_metaspace_object()" check even >>>>>>>> after sanity >>>>>>>> checking the pointer, but you don't really explain why >>>>>>>> that's OK. >>>>>>> is_metaspace_object is a very expensive check. It has to traverse >>>>>>> all the metaspace mmap chunks. The new code is more robust in >>>>>>> that it sanity checks the pointer first but uses Safefetch to get >>>>>>> the vptr. >>>>>>> >>>>>>> >>>>>>>> The new code also picks up parts of Method::has_method_vptr() >>>>>>>> which >>>>>>>> makes me wonder if that's the right place for the fix. Won't >>>>>>>> other >>>>>>>> callers to Method::has_method_vptr() be subject to the same >>>>>>>> crashing >>>>>>>> mode? Or was the crashing mode only due to the >>>>>>>> "!is_metaspace_object()" >>>>>>>> check... >>>>>>> I should have moved the SafeFetch in to the has_method_vptr. I >>>>>>> can't remember why I copied it now. It crashed because the pointer >>>>>>> was in metaspace (is_metaspace_object returned true) but wasn't >>>>>>> aligned, but the pointer could come from anywhere. >>>>>>> >>>>>>> Thanks, I'll test out this fix and resend it. >>>>>>> Coleen >>>>>>> >>>>>>>> Dan >>>>>>>> >>>>>>>> >>>>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8146984 >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Coleen >>>>>>>>> >> From coleen.phillimore at oracle.com Wed Feb 3 16:46:24 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 3 Feb 2016 11:46:24 -0500 Subject: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc In-Reply-To: References: <56AB5F39.1060005@oracle.com> <56AB8D2E.7090902@oracle.com> <56AB9A43.1000905@oracle.com> <56ACCB06.7060900@oracle.com> <56AF9422.5010300@oracle.com> <56AF9E4B.8050704@oracle.com> <56B0ED4C.6000902@oracle.com> Message-ID: <56B22EE0.7010501@oracle.com> Markus, Thank you for reviewing. On 2/3/16 10:10 AM, Markus Gronlund wrote: > Hi Coleen, > > Thanks for looking into this. 
Maybe it could be simplified like: > > bool Method::has_method_vptr(const void* ptr) { > // Use SafeFetch to check if this is a valid pointer first > // This assumes that the vtbl pointer is the first word of a C++ object. > // This assumption is also in universe.cpp patch_klass_vtable > if (1 == SafeFetchN((intptr_t*)ptr, intptr_t(1))) { > return false; > } > const Method m; > return dereference_vptr(&m) == ptr; > } > > // Check that this pointer is valid by checking that the vtbl pointer matches > bool Method::is_valid_method() const { > if (this == NULL) { > return false; > } > // Quick sanity check on pointer. > if ((intptr_t(this) & (wordSize-1)) != 0) { > return false; > } > > return has_method_vptr(this); > } I'll change as suggested, it looks more nicer and more compact. > > I think you also need: > > #include "runtime/stubRoutines.hpp Ok, I usually don't use precompiled headers but will include it because it's used. > > Can't really comment on the use of SafeFetchN() as I am not familiar with it? Do all platforms support this? Yes, I think so. I believe I sponsored the patch and the patch to make it work on Zero too. Thanks! Coleen > > Looks like a good improvement for stability. > > No need to see any updated reviews. > > Thanks > Markus > > > -----Original Message----- > From: Coleen Phillimore > Sent: den 2 februari 2016 18:54 > To: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > Subject: Re: RFR 8146984: SIGBUS: bool Method::has_method_vptr(const void*)+0xc > > > Goetz, Can you review this since it's using SafeFetchN? > > thanks, > Coleen > > On 2/1/16 1:04 PM, Coleen Phillimore wrote: >> >> Thanks Dan! >> >> On 2/1/16 12:21 PM, Daniel D. Daugherty wrote: >>> On 1/30/16 7:39 AM, Coleen Phillimore wrote: >>>> I've moved the SafeFetch to has_method_vptr as suggested and retested. >>>> >>>> http://cr.openjdk.java.net/~coleenp/8146984.02/webrev/ >>> src/share/vm/oops/method.cpp >>> (old) L2114: return has_method_vptr((const void*)this); >>> (new) L2120: return has_method_vptr(this); >>> Just curious. I don't see anything that explains why the >>> cast is no longer needed (no type changes). Was this >>> simply cleaning up an unnecessary cast? >> The cast is unnecessary. I didn't add it back when I added the call >> to has_method_vptr back. >> >> thanks, >> Coleen >> >>> Thumbs up. >>> >>> Dan >>> >>> >>>> Thanks, >>>> Coleen >>>> >>>> On 1/29/16 11:58 AM, Coleen Phillimore wrote: >>>>> >>>>> On 1/29/16 11:02 AM, Daniel D. Daugherty wrote: >>>>>> On 1/29/16 5:46 AM, Coleen Phillimore wrote: >>>>>>> Summary: Add address check and use SafeFetchN for Method* vptr >>>>>>> access when Method* may be bad pointer. >>>>>>> >>>>>>> Tested with RBT and failing test case (reproduced 1 in 100 times) >>>>>>> with fatal in the 'return's in the change to verify. >>>>>>> >>>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8146984/ >>>>>> This one caught my eye because it has to do with sampling... >>>>> I should mention sampling in all my RFRs then! >>>>>> src/share/vm/oops/method.cpp >>>>>> The old code checked "!is_metaspace_object()" and used >>>>>> has_method_vptr((const void*)this). >>>>>> >>>>>> The new code skips the "!is_metaspace_object()" check even >>>>>> after sanity >>>>>> checking the pointer, but you don't really explain why that's OK. >>>>> is_metaspace_object is a very expensive check. It has to traverse >>>>> all the metaspace mmap chunks. The new code is more robust in >>>>> that it sanity checks the pointer first but uses Safefetch to get >>>>> the vptr. 
>>>>> >>>>> >>>>>> The new code also picks up parts of Method::has_method_vptr() >>>>>> which >>>>>> makes me wonder if that's the right place for the fix. Won't >>>>>> other >>>>>> callers to Method::has_method_vptr() be subject to the same >>>>>> crashing >>>>>> mode? Or was the crashing mode only due to the >>>>>> "!is_metaspace_object()" >>>>>> check... >>>>> I should have moved the SafeFetch in to the has_method_vptr. I >>>>> can't remember why I copied it now. It crashed because the pointer >>>>> was in metaspace (is_metaspace_object returned true) but wasn't >>>>> aligned, but the pointer could come from anywhere. >>>>> >>>>> Thanks, I'll test out this fix and resend it. >>>>> Coleen >>>>> >>>>>> Dan >>>>>> >>>>>> >>>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8146984 >>>>>>> >>>>>>> Thanks, >>>>>>> Coleen >>>>>>> From zoltan.majo at oracle.com Wed Feb 3 17:02:39 2016 From: zoltan.majo at oracle.com (=?UTF-8?B?Wm9sdMOhbiBNYWrDsw==?=) Date: Wed, 3 Feb 2016 18:02:39 +0100 Subject: [9] RFR (XS): 8148970: Quarantine testlibrary_tests/whitebox/vm_flags/IntxTest.java Message-ID: <56B232AF.6010408@oracle.com> Hi, please review the patch to quarantine the testlibrary_tests/whitebox/vm_flags/IntxTest.java test. Webrev: http://cr.openjdk.java.net/~zmajo/8148970/webrev.00/ I intend to push this directly into 'hs'. For more details please see JDK-8148758, the parent issue of this issue. https://bugs.openjdk.java.net/browse/JDK-8148758 Thank you and best regards, Zoltan From vladimir.x.ivanov at oracle.com Wed Feb 3 17:06:20 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Wed, 3 Feb 2016 20:06:20 +0300 Subject: [9] RFR (XS): 8148970: Quarantine testlibrary_tests/whitebox/vm_flags/IntxTest.java In-Reply-To: <56B232AF.6010408@oracle.com> References: <56B232AF.6010408@oracle.com> Message-ID: <56B2338C.5080608@oracle.com> Reviewed. Best regards, Vladimir Ivanov On 2/3/16 8:02 PM, Zolt?n Maj? wrote: > Hi, > > > please review the patch to quarantine the > testlibrary_tests/whitebox/vm_flags/IntxTest.java test. > > Webrev: > http://cr.openjdk.java.net/~zmajo/8148970/webrev.00/ > > I intend to push this directly into 'hs'. For more details please see > JDK-8148758, the parent issue of this issue. > https://bugs.openjdk.java.net/browse/JDK-8148758 > > Thank you and best regards, > > > Zoltan > From daniel.daugherty at oracle.com Wed Feb 3 17:10:37 2016 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 3 Feb 2016 10:10:37 -0700 Subject: [9] RFR (XS): 8148970: Quarantine testlibrary_tests/whitebox/vm_flags/IntxTest.java In-Reply-To: <56B232AF.6010408@oracle.com> References: <56B232AF.6010408@oracle.com> Message-ID: <56B2348D.7040206@oracle.com> The @ignore should reference 8148758 which is the bug that needs to be fixed. Thumbs up if you fix that part. Dan On 2/3/16 10:02 AM, Zolt?n Maj? wrote: > Hi, > > > please review the patch to quarantine the > testlibrary_tests/whitebox/vm_flags/IntxTest.java test. > > Webrev: > http://cr.openjdk.java.net/~zmajo/8148970/webrev.00/ > > I intend to push this directly into 'hs'. For more details please see > JDK-8148758, the parent issue of this issue. 
> https://bugs.openjdk.java.net/browse/JDK-8148758 > > Thank you and best regards, > > > Zoltan > From sean.coffey at oracle.com Wed Feb 3 18:04:03 2016 From: sean.coffey at oracle.com (=?UTF-8?Q?Se=c3=a1n_Coffey?=) Date: Wed, 3 Feb 2016 18:04:03 +0000 Subject: [8u communication] - Removal of jdk8u/hs-dev forest In-Reply-To: <56AA3D6D.6060702@redhat.com> References: <56AA3200.3060205@oracle.com> <56AA3D6D.6060702@redhat.com> Message-ID: <56B24113.9040403@oracle.com> On 28/01/16 16:10, Andrew Haley wrote: > On 01/28/2016 03:21 PM, Se?n Coffey wrote: >> To help keep the mercurial server clean of old unnecessary development >> forests, I'd like to propose that the jdk8u/hs-dev forest be deleted. > All of us have, from time to time, done software archaeology in order > to discover when a change was made. By definition you cannot know > when this might be needed. In my opinion it would make sense to > archive this somewhere. Question around this Andrew. All of the edits and commits that were made in hs-dev are sync'ed to the master jdk8u forest. That's the standard team to master sync process that the JDK Updates Projects use. Nothing should be lost. Is the master forest sufficient for your archaeology needs or am I missing something ? regards, Sean. > > Andrew. > From john.r.rose at oracle.com Wed Feb 3 21:33:32 2016 From: john.r.rose at oracle.com (John Rose) Date: Wed, 3 Feb 2016 13:33:32 -0800 Subject: RFR(M) 8148481: Devirtualize Klass::vtable In-Reply-To: <56AF7D75.3020800@oracle.com> References: <56AA3E25.60409@oracle.com> <56AF5C08.10104@oracle.com> <56AF6C79.4070402@oracle.com> <56AF7C9F.3050206@oracle.com> <56AF7D75.3020800@oracle.com> Message-ID: <367F3419-80F6-42A7-9EAF-75580B423DA7@oracle.com> Thanks, Mikael. These are good cleanups in a vexed area of the code. We need them partly because we will be doing more tricks with v-tables in the future, with value types, specializations, and new arrays. In the next round of cleanups in the same area, I'd like to see us consider moving the i-table mechanism into Klass also, since all the new types (notably enhanced arrays) will use i-tables as much as they use v-tables. The first detail I looked for in this series of changes was the usage of Universe::base_vtable_size, which is a hack that communicates the v-table layout for Object methods to array types, and I'm not surprised to find that it is still surrounded by its own little cloud of darkness, one corner of which Kim pointed out. We will wish to get rid of Universe::base_vtable_size when we put more methods on arrays. Or we will have to allow i-tables to be indirectly accessed and shared, a more flexible and compact option IMO. ? John P.S. We have a practice to consider footprint costs, even tiny ones in the single bytes, with proposals such as moving these fields around. My take on that is, never let footprint stop a clean refactoring. There is a huge amount of slack in our metadata; if we need to throw a pinch of footprint dust over our shoulders to satisfy the powers that be, we should squeeze unrelated fields smaller to make a net reduction in footprint. How to find such fields? Just do a hex dump and look for (say) consecutive runs of five or more zeroes. Except for hot path data like v-tables and super-type displays, those are opportunities for introducing var-ints (see CompressedStream and read_int_mb), or optional full-sized fields enabled by a flag bit or escape code. 
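To make the var-int part concrete, here is a generic sketch (a LEB128-style encoding, seven payload bits per byte with the top bit as a continuation flag; it is not the actual CompressedStream encoding, which differs in detail):

  #include <stdint.h>

  // Decode an unsigned var-int: each byte contributes 7 bits and the high
  // bit says whether another byte follows, so values below 128 take a
  // single byte instead of a full 32-bit slot.
  static uint32_t read_varint_u32(const uint8_t** pos) {
    uint32_t value = 0;
    int      shift = 0;
    uint8_t  b;
    do {
      b = *(*pos)++;
      value |= (uint32_t)(b & 0x7F) << shift;
      shift += 7;
    } while (b & 0x80);
    return value;
  }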
The complexity of such a move is localized to one class, and so is more bearable than keeping the multi-class complexity associated with a blocked refactoring. On Feb 1, 2016, at 7:44 AM, Mikael Gerdin wrote: > Bug link: https://bugs.openjdk.java.net/browse/JDK-8148481 > Webrev: http://cr.openjdk.java.net/~mgerdin/8148481/webrev.0 From david.holmes at oracle.com Thu Feb 4 02:14:47 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 4 Feb 2016 12:14:47 +1000 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56B102AD.7020800@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> Message-ID: <56B2B417.3050803@oracle.com> Hi Mikael, Can't really comment on the bit-twiddling details. A couple of minor style nits: - don't put "return" on a line by itself, include the first part of the return expression - spaces after commas in template definitions/instantiation The JVM_ENTRY_FROM_LEAF etc was a little mind twisting but seems okay. Otherwise hotspot and JDK code appear okay. Thanks, David On 3/02/2016 5:25 AM, Mikael Vidstedt wrote: > > Please review this change which introduces a Copy::conjoint_swap and an > Unsafe.copySwapMemory method to call it from Java, along with the > necessary changes to have java.nio.Bits call it instead of the Bits.c code. > > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/hotspot/webrev/ > > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/jdk/webrev/ > > On the jdk/ side I don't think there should be a lot of surprises. > Bits.c is gone and that required a mapfile-vers to be changed > accordingly. I also added a relatively extensive > jdk/internal/misc/Unsafe/CopySwap.java test which exercises all the > various copySwap configurations and verifies that the resulting data is > correct. There are also a handful of negative tests in there. > > On the hotspot/ side: > > * the copy logic in copy.cpp is leveraging templates to help the C++ > compiler produce tight copy loops for the various configurations > {element type, copy direction, src aligned, dst aligned}. > * Unsafe_CopySwapMemory is a leaf to not stall safe points more than > necessary. Only if needed (THROW, copy involves heap objects) will it > enter VM using a new JVM_ENTRY_FROM_LEAF macro. > * JVM_ENTRY_FROM_LEAF calls a new VM_ENTRY_BASE_FROM_LEAF helper macro, > which mimics what VM_ENTRY_BASE does, but also does a > debug_only(ResetNoHandleMark __rnhm;) - this is because > JVM_LEAF/VM_LEAF_BASE does debug_only(NoHandleMark __hm;). > > I'm in the process of getting the last performance numbers, but from > what I've seen so far this will outperform the earlier implementation. > > Cheers, > Mikeal > > On 2016-01-27 17:13, Mikael Vidstedt wrote: >> >> Just an FYI: >> >> I'm working on moving all of this to the Hotspot Copy class and >> bridging to it via jdk.internal.misc.Unsafe, removing Bits.c >> altogether. The implementation is working, and the preliminary >> performance numbers beat the pants off of any of the suggested Bits.c >> implementations (yay!). 
>> >> I'm currently in the progress of getting some unit tests in place for >> it all to make sure it covers all the corner cases and then I'll run >> some real benchmarks to see if it actually lives up to the expectations. >> >> Cheers, >> Mikael >> >> On 2016-01-26 11:13, John Rose wrote: >>> On Jan 26, 2016, at 11:08 AM, Andrew Haley wrote: >>>> On 01/26/2016 07:04 PM, John Rose wrote: >>>>> Unsafe.copyMemory bottoms out to Copy::conjoint_memory_atomic. >>>>> IMO that's a better starting point than memcpy. Perhaps it can be >>>>> given an additional parameter (or overloading) to specify a swap size. >>>> OK, but conjoint_memory_atomic doesn't guarantee that destination >>>> words won't be torn if their source is misaligned: in fact it >>>> guarantees that they will will be. >>> That's a good point, and argues for a new function with the >>> stronger guarantee. Actually, it would be perfectly reasonable >>> to strengthen the guarantee on the existing function. I don't >>> think anyone will care about the slight performance change, >>> especially since it is probably favorable. Since it's Unsafe, >>> they are not supposed to care, either. >>> >>> ? John >> > From john.r.rose at oracle.com Thu Feb 4 06:27:06 2016 From: john.r.rose at oracle.com (John Rose) Date: Wed, 3 Feb 2016 22:27:06 -0800 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56B102AD.7020800@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> Message-ID: <79CA457A-E4E3-4D49-B22A-C959C16DAC49@oracle.com> On Feb 2, 2016, at 11:25 AM, Mikael Vidstedt wrote: > Please review this change which introduces a Copy::conjoint_swap and an Unsafe.copySwapMemory method to call it from Java, along with the necessary changes to have java.nio.Bits call it instead of the Bits.c code. > > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/hotspot/webrev/ > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/jdk/webrev/ This is very good. I have some nit-picks: These days, when we introduce a new intrinsic (@HSIntrCand), we write the argument checking code separately in a non-intrinsic bytecode method. In this case, we don't (yet) have an intrinsic binding for U.copy*, but we might in the future. (C intrinsifies memcpy, which is a precedent.) In any case, I would prefer if we could structure the argument checking code in a similar way, with Unsafe.java containing both copySwapMemory and a private copySwapMemory0. Then we can JIT-optimize the safety checks. You might as well extend the same treatment to the pre-existing copyMemory call. The most important check (and the only one in U.copyMemory) is to ensure that the size_t operand has not wrapped around from a Java negative value to a crazy-large size_t value. That's the low-hanging fruit. Checking the pointers (for null or oob) is more problematic, of course. Checking consistency around elemSize is cheap and easy, so I agree that the U.copySM should do that work also. Basically, Unsafe can do very basic checks if there is a tricky user model to enforce, but it mustn't "sign up" to guard the user against all errors. Rule of thumb: Unsafe calls don't throw NPEs, they just SEGV. 
And the rare bit that *does* throw (IAE usually) should be placed into Unsafe.java, not unsafe.cpp. (The best-practice rule for putting argument checking code outside of the intrinsic is a newer one, so Unsafe code might not always do this.) The comment "Generalizing it would be reasonable, but requires card marking" is bogus, since we never byte-swap managed pointers. The test logic will flow a little smoother if your GenericPointer guy, the onHeap version, stores the appropriate array base offset in his offset field. You won't have to mention p.isOnHeap nearly so much, and the code will set a slightly better example. The VM_ENTRY_BASE_FROM_LEAF macro is really cool. The C++ template code is cool also. It reminds me of the kind of work Gosling's "Ace" processor could do, but now it's mainstreamed for all to use in C++. We're going to get some of that goodness in Project Valhalla with specialization logic. I find it amazing that the right way to code this in C is to use memcpy for unaligned accesses and byte peek/poke into registers for byte-swapping operators. I'm glad we can write this code *once* for the JVM and JDK. Possible future work: If we can get a better handle on writing vectorizable loops from Java, including Unsafe-based ones, we can move some of the C code back up to Java. Perhaps U.copy* calls for very short lengths deserved to be broken out into small loops of U.get/put* (with alignment). I think you experimented with this, and there were problems with the JIT putting fail-safe memory barriers between U.get/put* calls. Paul's work on Array.mismatch ran into similar issues, with the right answer being to write manual vector code in assembly. Anyway, you can count me as a reviewer. Thanks, ? John From chris.plummer at oracle.com Thu Feb 4 07:20:20 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Wed, 3 Feb 2016 23:20:20 -0800 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 Message-ID: <56B2FBB4.70407@oracle.com> Hello, Please review the following for removing Method::_method_data when only supporting C1 (or more specifically, when not supporting C2 or JVMCI). This will help reduce dynamic footprint usage for the minimal VM. As part of this fix, ProfileInterperter is forced to false unless C2 or JVMCI is supported. This was mainly done to avoid crashes if it is turned on and Method::_method_data has been excluded, but also because it is not useful except to C2 or JVMCI. Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 Test with JPRT -testset hotspot. thanks, Chris From zoltan.majo at oracle.com Thu Feb 4 07:50:57 2016 From: zoltan.majo at oracle.com (=?UTF-8?B?Wm9sdMOhbiBNYWrDsw==?=) Date: Thu, 4 Feb 2016 08:50:57 +0100 Subject: [9] RFR (XS): 8148970: Quarantine testlibrary_tests/whitebox/vm_flags/IntxTest.java In-Reply-To: <56B2338C.5080608@oracle.com> References: <56B232AF.6010408@oracle.com> <56B2338C.5080608@oracle.com> Message-ID: <56B302E1.7000407@oracle.com> Hi Vladimir, thank you for the review! Best regards, Zoltan On 02/03/2016 06:06 PM, Vladimir Ivanov wrote: > Reviewed. > > Best regards, > Vladimir Ivanov > > On 2/3/16 8:02 PM, Zolt?n Maj? wrote: >> Hi, >> >> >> please review the patch to quarantine the >> testlibrary_tests/whitebox/vm_flags/IntxTest.java test. >> >> Webrev: >> http://cr.openjdk.java.net/~zmajo/8148970/webrev.00/ >> >> I intend to push this directly into 'hs'. For more details please see >> JDK-8148758, the parent issue of this issue. 
>> https://bugs.openjdk.java.net/browse/JDK-8148758 >> >> Thank you and best regards, >> >> >> Zoltan >> From zoltan.majo at oracle.com Thu Feb 4 07:52:09 2016 From: zoltan.majo at oracle.com (=?UTF-8?B?Wm9sdMOhbiBNYWrDsw==?=) Date: Thu, 4 Feb 2016 08:52:09 +0100 Subject: [9] RFR (XS): 8148970: Quarantine testlibrary_tests/whitebox/vm_flags/IntxTest.java In-Reply-To: <56B2348D.7040206@oracle.com> References: <56B232AF.6010408@oracle.com> <56B2348D.7040206@oracle.com> Message-ID: <56B30329.4070804@oracle.com> Hi Dan, On 02/03/2016 06:10 PM, Daniel D. Daugherty wrote: > The @ignore should reference 8148758 which is the bug that needs > to be fixed. thank you for pointing that out. Here is the updated webrev: http://cr.openjdk.java.net/~zmajo/8148970/webrev.01/ I'll push that right away. Best regards, Zoltan > > Thumbs up if you fix that part. > > Dan > > On 2/3/16 10:02 AM, Zolt?n Maj? wrote: >> Hi, >> >> >> please review the patch to quarantine the >> testlibrary_tests/whitebox/vm_flags/IntxTest.java test. >> >> Webrev: >> http://cr.openjdk.java.net/~zmajo/8148970/webrev.00/ >> >> I intend to push this directly into 'hs'. For more details please see >> JDK-8148758, the parent issue of this issue. >> https://bugs.openjdk.java.net/browse/JDK-8148758 >> >> Thank you and best regards, >> >> >> Zoltan >> > From volker.simonis at gmail.com Thu Feb 4 08:04:41 2016 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 4 Feb 2016 09:04:41 +0100 Subject: [8u communication] - Removal of jdk8u/hs-dev forest In-Reply-To: <56B24113.9040403@oracle.com> References: <56AA3200.3060205@oracle.com> <56AA3D6D.6060702@redhat.com> <56B24113.9040403@oracle.com> Message-ID: On Wed, Feb 3, 2016 at 7:04 PM, Se?n Coffey wrote: > On 28/01/16 16:10, Andrew Haley wrote: >> >> On 01/28/2016 03:21 PM, Se?n Coffey wrote: >>> >>> To help keep the mercurial server clean of old unnecessary development >>> forests, I'd like to propose that the jdk8u/hs-dev forest be deleted. >> >> All of us have, from time to time, done software archaeology in order >> to discover when a change was made. By definition you cannot know >> when this might be needed. In my opinion it would make sense to >> archive this somewhere. > > Question around this Andrew. All of the edits and commits that were made in > hs-dev are sync'ed to the master jdk8u forest. That's the standard team to > master sync process that the JDK Updates Projects use. > > Nothing should be lost. Is the master forest sufficient for your archaeology > needs or am I missing something ? > Andrew may refer to broken links from JBS which point to the jdk8u/hs-dev repository. I think this is mostly a "convenience problem" and unfortunately we already have this problem with numerous other deleted forests (e.g. hsx). The solution is relatively simple (just replace jdk8u/hs-dev by jdk8u/dev in the link because the change id remains stable) but people not so familiar with Mercurial/OpenJDK probably don't know this and get a "bad impression" because links from JBS point to nowhere. Maybe we can establish some http-redirection at hg.openjdk.java.net to automatically redirect request from deleted repositories to existing ones? Regards, Volker > regards, > Sean. >> >> >> Andrew. 
>> > From sgehwolf at redhat.com Thu Feb 4 10:23:13 2016 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Thu, 04 Feb 2016 11:23:13 +0100 Subject: [8u communication] - Removal of jdk8u/hs-dev forest In-Reply-To: References: <56AA3200.3060205@oracle.com> <56AA3D6D.6060702@redhat.com> <56B24113.9040403@oracle.com> Message-ID: <1454581393.3499.1.camel@redhat.com> On Thu, 2016-02-04 at 09:04 +0100, Volker Simonis wrote: > Andrew may refer to broken links from JBS which point to the > jdk8u/hs-dev repository. I think this is mostly a "convenience > problem" and unfortunately we already have this problem with numerous > other deleted forests (e.g. hsx). The solution is relatively simple > (just replace jdk8u/hs-dev by jdk8u/dev in the link because the change > id remains stable) but people not so familiar with Mercurial/OpenJDK > probably don't know this and get a "bad impression" because links from > JBS point to nowhere. > > Maybe we can establish some http-redirection at hg.openjdk.java.net to > automatically redirect request from deleted repositories to existing > ones? +1 Cheers, Severin From chris.hegarty at oracle.com Thu Feb 4 10:36:28 2016 From: chris.hegarty at oracle.com (Chris Hegarty) Date: Thu, 4 Feb 2016 10:36:28 +0000 Subject: [8u communication] - Removal of jdk8u/hs-dev forest In-Reply-To: <1454581393.3499.1.camel@redhat.com> References: <56AA3200.3060205@oracle.com> <56AA3D6D.6060702@redhat.com> <56B24113.9040403@oracle.com> <1454581393.3499.1.camel@redhat.com> Message-ID: <56B329AC.1050207@oracle.com> On 04/02/16 10:23, Severin Gehwolf wrote: > On Thu, 2016-02-04 at 09:04 +0100, Volker Simonis wrote: >> Andrew may refer to broken links from JBS which point to the >> jdk8u/hs-dev repository. I think this is mostly a "convenience >> problem" and unfortunately we already have this problem with numerous >> other deleted forests (e.g. hsx). The solution is relatively simple >> (just replace jdk8u/hs-dev by jdk8u/dev in the link because the change >> id remains stable) but people not so familiar with Mercurial/OpenJDK >> probably don't know this and get a "bad impression" because links from >> JBS point to nowhere. >> >> Maybe we can establish some http-redirection at hg.openjdk.java.net to >> automatically redirect request from deleted repositories to existing >> ones? > > +1 This would be convenient, but I'm not sure how necessary. All JIRA issues updated by hgupdater should contain a comment with a URL to the changeset in the master forest. Yes, there are previous comments with URLs to changesets in development forests which may no longer exist, but you should always be able to find the master. Unless I'm missing something related to the 8u development flow? -Chris. From aph at redhat.com Thu Feb 4 10:37:03 2016 From: aph at redhat.com (Andrew Haley) Date: Thu, 4 Feb 2016 10:37:03 +0000 Subject: [8u communication] - Removal of jdk8u/hs-dev forest In-Reply-To: <56B24113.9040403@oracle.com> References: <56AA3200.3060205@oracle.com> <56AA3D6D.6060702@redhat.com> <56B24113.9040403@oracle.com> Message-ID: <56B329CF.4020800@redhat.com> On 03/02/16 18:04, Se?n Coffey wrote: > On 28/01/16 16:10, Andrew Haley wrote: >> On 01/28/2016 03:21 PM, Se?n Coffey wrote: >>> To help keep the mercurial server clean of old unnecessary development >>> forests, I'd like to propose that the jdk8u/hs-dev forest be deleted. >> All of us have, from time to time, done software archaeology in order >> to discover when a change was made. By definition you cannot know >> when this might be needed. 
In my opinion it would make sense to >> archive this somewhere. > Question around this Andrew. All of the edits and commits that were made > in hs-dev are sync'ed to the master jdk8u forest. That's the standard > team to master sync process that the JDK Updates Projects use. > > Nothing should be lost. Is the master forest sufficient for your > archaeology needs or am I missing something ? I guess it would be sufficient, although there might be some dangling URLs. Deleting anything makes me nervous, but I suppose this is benign. Andrew. From david.lindholm at oracle.com Thu Feb 4 11:49:57 2016 From: david.lindholm at oracle.com (David Lindholm) Date: Thu, 4 Feb 2016 12:49:57 +0100 Subject: RFR: 8148844: Update run_unit_test macro for InternalVMTests In-Reply-To: <20160203151341.GB8777@ehelin.jrpg.bea.com> References: <20160203151341.GB8777@ehelin.jrpg.bea.com> Message-ID: <56B33AE5.8090605@oracle.com> Hi Erik, Looks good! This solution is a big improvement. Thanks, David On 2016-02-03 16:13, Erik Helin wrote: > Hi all, > > this patch updates the run_unit_test macro for InternalVMTests. > The new macro both forward declares the test function and runs it. C++ > can (as opposed to C) forward declare a function inside another > function. I also added a small helper function, run_test, that > ensures that test functions must return void and take no parameters (by > typing the test function as a function pointer). > > Webrev: > http://cr.openjdk.java.net/~ehelin/8148844/00/ > > Enhancement: > https://bugs.openjdk.java.net/browse/JDK-8148844 > > Testing: > - JPRT > - Running the tests locally > > Thanks, > Erik From aph at redhat.com Thu Feb 4 12:22:34 2016 From: aph at redhat.com (Andrew Haley) Date: Thu, 4 Feb 2016 12:22:34 +0000 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56B102AD.7020800@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> Message-ID: <56B3428A.3010704@redhat.com> On 02/02/2016 07:25 PM, Mikael Vidstedt wrote: > Please review this change which introduces a Copy::conjoint_swap and an > Unsafe.copySwapMemory method to call it from Java, along with the > necessary changes to have java.nio.Bits call it instead of the Bits.c code. > > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/hotspot/webrev/ > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/jdk/webrev/ One other little thing: why are the byte-swapping methods in class nio.Bits not called copySwapSomething? e.g.: 826 /** 827 * Copy and byte swap 16 bit elements from off-heap memory to a heap array 828 * 829 * @param srcAddr 830 * source address 831 * @param dst 832 * destination array, must be a 16-bit primitive array type 833 * @param dstPos 834 * byte offset within the destination array of the first element to write 835 * @param length 836 * number of bytes to copy 837 */ 838 static void copyToCharArray(long srcAddr, Object dst, long dstPos, long length) { 839 unsafe.copySwapMemory(null, srcAddr, dst, unsafe.arrayBaseOffset(dst.getClass()) + dstPos, length, 2); 840 } Andrew. 
From stefan.johansson at oracle.com Thu Feb 4 12:24:21 2016 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Thu, 4 Feb 2016 13:24:21 +0100 Subject: RFR: 8148844: Update run_unit_test macro for InternalVMTests In-Reply-To: <20160203151341.GB8777@ehelin.jrpg.bea.com> References: <20160203151341.GB8777@ehelin.jrpg.bea.com> Message-ID: <56B342F5.2020208@oracle.com> Looks good, Reviewed, Stefan On 2016-02-03 16:13, Erik Helin wrote: > Hi all, > > this patch updates the run_unit_test macro for InternalVMTests. > The new macro both forward declares the test function and runs it. C++ > can (as opposed to C) forward declare a function inside another > function. I also added a small helper function, run_test, that > ensures that test functions must return void and take no parameters (by > typing the test function as a function pointer). > > Webrev: > http://cr.openjdk.java.net/~ehelin/8148844/00/ > > Enhancement: > https://bugs.openjdk.java.net/browse/JDK-8148844 > > Testing: > - JPRT > - Running the tests locally > > Thanks, > Erik From christian.thalinger at oracle.com Thu Feb 4 17:31:07 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Thu, 4 Feb 2016 09:31:07 -0800 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56B2FBB4.70407@oracle.com> References: <56B2FBB4.70407@oracle.com> Message-ID: <38A68458-6D67-423D-93F0-95E40AE7E92D@oracle.com> src/share/vm/oops/method.hpp: I?d rather have the #if?s inside the method bodies. > On Feb 3, 2016, at 11:20 PM, Chris Plummer wrote: > > Hello, > > Please review the following for removing Method::_method_data when only supporting C1 (or more specifically, when not supporting C2 or JVMCI). This will help reduce dynamic footprint usage for the minimal VM. > > As part of this fix, ProfileInterperter is forced to false unless C2 or JVMCI is supported. This was mainly done to avoid crashes if it is turned on and Method::_method_data has been excluded, but also because it is not useful except to C2 or JVMCI. > > Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 > > Test with JPRT -testset hotspot. > > thanks, > > Chris From chris.plummer at oracle.com Thu Feb 4 17:45:40 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Thu, 4 Feb 2016 09:45:40 -0800 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <38A68458-6D67-423D-93F0-95E40AE7E92D@oracle.com> References: <56B2FBB4.70407@oracle.com> <38A68458-6D67-423D-93F0-95E40AE7E92D@oracle.com> Message-ID: <56B38E44.4010502@oracle.com> Ok. I can go either way on this particular example. However, when you start to get a lot of methods using the #ifdefs, it looks cleaner if you have just one #ifdef/#else/#endif for all of them. For example, see #if INCLUDE_NMT in memTracker.hpp. So do we want consistency in our approach to these #ifdefs, or do we want flexibility based on how many #ifdefs we'll end up with? thanks, Chris On 2/4/16 9:31 AM, Christian Thalinger wrote: > src/share/vm/oops/method.hpp: > > I?d rather have the #if?s inside the method bodies. > >> On Feb 3, 2016, at 11:20 PM, Chris Plummer wrote: >> >> Hello, >> >> Please review the following for removing Method::_method_data when only supporting C1 (or more specifically, when not supporting C2 or JVMCI). This will help reduce dynamic footprint usage for the minimal VM. >> >> As part of this fix, ProfileInterperter is forced to false unless C2 or JVMCI is supported. 
This was mainly done to avoid crashes if it is turned on and Method::_method_data has been excluded, but also because it is not useful except to C2 or JVMCI. >> >> Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ >> Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 >> >> Test with JPRT -testset hotspot. >> >> thanks, >> >> Chris From chris.plummer at oracle.com Thu Feb 4 18:36:51 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Thu, 4 Feb 2016 10:36:51 -0800 Subject: [9] RFR (S) 8146436: Add -XX:+UseAggressiveHeapShrink option Message-ID: <56B39A43.5070409@oracle.com> Hello, Please review the following for adding the -XX UseAggressiveHeapShrink option. When turned on, it tells the GC to reduce the heap size to the new target size immediately after a full GC rather than doing it progressively over 4 GCs. Webrev: http://cr.openjdk.java.net/~cjplummer/8146436/webrev.02/ Bug: https://bugs.openjdk.java.net/browse/JDK-8146436 Testing: -JPRT with '-testset hotspot' -JPRT with '-testset hotspot -vmflags "-XX:+UseAggressiveHeapShrink"' -added new TestMaxMinHeapFreeRatioFlags.java test thanks, Chris From coleen.phillimore at oracle.com Thu Feb 4 19:03:09 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 4 Feb 2016 14:03:09 -0500 Subject: RFR: 8148844: Update run_unit_test macro for InternalVMTests In-Reply-To: <56B33AE5.8090605@oracle.com> References: <20160203151341.GB8777@ehelin.jrpg.bea.com> <56B33AE5.8090605@oracle.com> Message-ID: <56B3A06D.9050601@oracle.com> Yes, this is really nice. Coleen On 2/4/16 6:49 AM, David Lindholm wrote: > Hi Erik, > > Looks good! This solution is a big improvement. > > > Thanks, > David > > On 2016-02-03 16:13, Erik Helin wrote: >> Hi all, >> >> this patch updates the run_unit_test macro for InternalVMTests. >> The new macro both forward declares the test function and runs it. C++ >> can (as opposed to C) forward declare a function inside another >> function. I also added a small helper function, run_test, that >> ensures that test functions must return void and take no parameters (by >> typing the test function as a function pointer). >> >> Webrev: >> http://cr.openjdk.java.net/~ehelin/8148844/00/ >> >> Enhancement: >> https://bugs.openjdk.java.net/browse/JDK-8148844 >> >> Testing: >> - JPRT >> - Running the tests locally >> >> Thanks, >> Erik > From coleen.phillimore at oracle.com Thu Feb 4 22:40:52 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 4 Feb 2016 17:40:52 -0500 Subject: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN Message-ID: <56B3D374.7030109@oracle.com> Summary: Backout change for 8146984 but add an alignment check which may have caught original bug. Will retest with new check once this isn't an integration blocker. Ran original tests that failed. open webrev at http://cr.openjdk.java.net/~coleenp/8149038.01/ bug link https://bugs.openjdk.java.net/browse/JDK-8149038 Thanks, Coleen From coleen.phillimore at oracle.com Thu Feb 4 22:43:50 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 4 Feb 2016 17:43:50 -0500 Subject: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN In-Reply-To: <56B3D374.7030109@oracle.com> References: <56B3D374.7030109@oracle.com> Message-ID: <56B3D426.5050002@oracle.com> On 2/4/16 5:40 PM, Coleen Phillimore wrote: > Summary: Backout change for 8146984 but add an alignment check which > may have caught original bug. 
> > Will retest with new check once this isn't an integration blocker. Ran > original tests that failed. > > open webrev at http://cr.openjdk.java.net/~coleenp/8149038.01/ > bug link https://bugs.openjdk.java.net/browse/JDK-8149038 The original bug is: https://bugs.openjdk.java.net/browse/JDK-8146984 Coleen > > Thanks, > Coleen From markus.gronlund at oracle.com Thu Feb 4 23:08:17 2016 From: markus.gronlund at oracle.com (Markus Gronlund) Date: Thu, 4 Feb 2016 15:08:17 -0800 (PST) Subject: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN In-Reply-To: <56B3D426.5050002@oracle.com> References: <56B3D374.7030109@oracle.com> <56B3D426.5050002@oracle.com> Message-ID: <145fbaa7-c6e9-4c39-873a-6b1adc6ed72f@default> Hi Coleen, Thanks for reverting, looks good. /Markus PS I don?t know if you want to go straight back to the previous version, but I still think this piece could be tightened a bit: bool Method::has_method_vptr(const void* ptr) { assert(ptr != NULL, "invariant"); // This assumes that the vtbl pointer is the first word of a C++ object. // This assumption is also in universe.cpp patch_klass_vtble const Method m; return dereference_vptr(&m) == dereference_vptr(ptr); } // Check that this pointer is valid by checking that the vtbl pointer matches bool Method::is_valid_method() const { if (this == NULL) { return false; } if ((intptr_t(this) & (wordSize - 1)) != 0) { return false; } if (!is_metaspace_object()) { return false; } return has_method_vptr(this); } -----Original Message----- From: Coleen Phillimore Sent: den 4 februari 2016 23:44 To: hotspot-dev at openjdk.java.net Subject: Re: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN On 2/4/16 5:40 PM, Coleen Phillimore wrote: > Summary: Backout change for 8146984 but add an alignment check which > may have caught original bug. > > Will retest with new check once this isn't an integration blocker. Ran > original tests that failed. > > open webrev at http://cr.openjdk.java.net/~coleenp/8149038.01/ > bug link https://bugs.openjdk.java.net/browse/JDK-8149038 The original bug is: https://bugs.openjdk.java.net/browse/JDK-8146984 Coleen > > Thanks, > Coleen From coleen.phillimore at oracle.com Thu Feb 4 23:18:59 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 4 Feb 2016 18:18:59 -0500 Subject: RFR(M) 8148481: Devirtualize Klass::vtable In-Reply-To: <367F3419-80F6-42A7-9EAF-75580B423DA7@oracle.com> References: <56AA3E25.60409@oracle.com> <56AF5C08.10104@oracle.com> <56AF6C79.4070402@oracle.com> <56AF7C9F.3050206@oracle.com> <56AF7D75.3020800@oracle.com> <367F3419-80F6-42A7-9EAF-75580B423DA7@oracle.com> Message-ID: <56B3DC63.10205@oracle.com> Hi John, I'm going to file an RFE for 10 with this idea. Cleanups like this that help with value types and the future directions of Java are worth doing before we start working on value types and the future directions of java. I've been looking through metaspace at all the zeroes recently. One cause is that Arrays have to have the same vtable offset as InstanceKlass. Arrays are thus 70 words long with 66 (ish) of them all zero. I would really like to see vtables and itables be pointed to by Klass. That way they can be expanded during class redefinition (think adding methods) without having to replace the InstanceKlass, which turns out to be not practical wrt performance. Also itables/vtables could be shared by different classes if that's part of the new directions. 
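To picture the idea (an illustrative sketch only, with made-up names - nothing like an actual patch): instead of the vtable being laid out inline after the fixed InstanceKlass fields, the Klass would just carry a pointer and a length, e.g.

  struct vtable_entry { void* method; };   // stand-in for HotSpot's vtableEntry

  class KlassSketch {
   public:
    vtable_entry* _vtable;          // separately allocated, so redefinition can
    int           _vtable_length;   // allocate a longer table and swap the pointer
    vtable_entry* _itable;          // same indirection for the itable, which two
    int           _itable_length;   // classes could then even share
  };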
Having a fixed size InstanceKlass would also improve the compaction of the class metaspace, which is still fixed size and doesn't grow. Still not sure what places we have no performance considerations in order to use CompressedStream. But yes, we have too many zeros in metaspace. Thank you for the ideas! Coleen On 2/3/16 4:33 PM, John Rose wrote: > Thanks, Mikael. These are good cleanups in a vexed area of the code. > > We need them partly because we will be doing more tricks with v-tables > in the future, with value types, specializations, and new arrays. > > In the next round of cleanups in the same area, I'd like to see us > consider > moving the i-table mechanism into Klass also, since all the new types > (notably enhanced arrays) will use i-tables as much as they use v-tables. > > The first detail I looked for in this series of changes was the usage of > Universe::base_vtable_size, which is a hack that communicates the > v-table layout for Object methods to array types, and I'm not surprised > to find that it is still surrounded by its own little cloud of darkness, > one corner of which Kim pointed out. > > We will wish to get rid of Universe::base_vtable_size when we > put more methods on arrays. Or we will have to allow i-tables > to be indirectly accessed and shared, a more flexible and > compact option IMO. > > ? John > > P.S. We have a practice to consider footprint costs, even tiny > ones in the single bytes, with proposals such as moving these fields > around. My take on that is, never let footprint stop a clean refactoring. > There is a huge amount of slack in our metadata; if we need to > throw a pinch of footprint dust over our shoulders to satisfy the > powers that be, we should squeeze unrelated fields smaller > to make a net reduction in footprint. How to find such fields? > Just do a hex dump and look for (say) consecutive runs of > five or more zeroes. Except for hot path data like v-tables and > super-type displays, those are opportunities for introducing > var-ints (see CompressedStream and read_int_mb), or > optional full-sized fields enabled by a flag bit or escape code. > The complexity of such a move is localized to one class, > and so is more bearable than keeping the multi-class > complexity associated with a blocked refactoring. > > On Feb 1, 2016, at 7:44 AM, Mikael Gerdin > wrote: > >> Bug link:https://bugs.openjdk.java.net/browse/JDK-8148481 >> Webrev:http://cr.openjdk.java.net/~mgerdin/8148481/webrev.0 >> > From coleen.phillimore at oracle.com Thu Feb 4 23:23:15 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 4 Feb 2016 18:23:15 -0500 Subject: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN In-Reply-To: <145fbaa7-c6e9-4c39-873a-6b1adc6ed72f@default> References: <56B3D374.7030109@oracle.com> <56B3D426.5050002@oracle.com> <145fbaa7-c6e9-4c39-873a-6b1adc6ed72f@default> Message-ID: <56B3DD63.10000@oracle.com> Hi Markus, Thank you for reviewing. I want to keep the backout as simple as possible so don't want to give myself the chance to mess up with a cleanup. On 2/4/16 6:08 PM, Markus Gronlund wrote: > Hi Coleen, > > Thanks for reverting, looks good. > > /Markus > > PS I don?t know if you want to go straight back to the previous version, but I still think this piece could be tightened a bit: > > bool Method::has_method_vptr(const void* ptr) { > assert(ptr != NULL, "invariant"); > > // This assumes that the vtbl pointer is the first word of a C++ object. 
> // This assumption is also in universe.cpp patch_klass_vtble > const Method m; > return dereference_vptr(&m) == dereference_vptr(ptr); Yes, this is nicer. > } > > // Check that this pointer is valid by checking that the vtbl pointer matches > bool Method::is_valid_method() const { > if (this == NULL) { > return false; > } > if ((intptr_t(this) & (wordSize - 1)) != 0) { > return false; > } > if (!is_metaspace_object()) { > return false; > } > return has_method_vptr(this); > } > I like 'else's. Are they less efficient? Thanks, Coleen > > -----Original Message----- > From: Coleen Phillimore > Sent: den 4 februari 2016 23:44 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN > > > > On 2/4/16 5:40 PM, Coleen Phillimore wrote: >> Summary: Backout change for 8146984 but add an alignment check which >> may have caught original bug. >> >> Will retest with new check once this isn't an integration blocker. Ran >> original tests that failed. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8149038.01/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8149038 > The original bug is: > > https://bugs.openjdk.java.net/browse/JDK-8146984 > > Coleen >> Thanks, >> Coleen From daniel.daugherty at oracle.com Thu Feb 4 23:26:01 2016 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Thu, 4 Feb 2016 16:26:01 -0700 Subject: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN In-Reply-To: <145fbaa7-c6e9-4c39-873a-6b1adc6ed72f@default> References: <56B3D374.7030109@oracle.com> <56B3D426.5050002@oracle.com> <145fbaa7-c6e9-4c39-873a-6b1adc6ed72f@default> Message-ID: <56B3DE09.8090506@oracle.com> On 2/4/16 4:08 PM, Markus Gronlund wrote: > Hi Coleen, > > Thanks for reverting, looks good. Agreed, but also agree with Markus that it would be a shame to lose the cleanups. Since it's an integration_blocker, I can see why you want to try to very reduced risk... Do you have plans to take another run at the cleaned up version? Dan > > /Markus > > PS I don?t know if you want to go straight back to the previous version, but I still think this piece could be tightened a bit: > > bool Method::has_method_vptr(const void* ptr) { > assert(ptr != NULL, "invariant"); > > // This assumes that the vtbl pointer is the first word of a C++ object. > // This assumption is also in universe.cpp patch_klass_vtble > const Method m; > return dereference_vptr(&m) == dereference_vptr(ptr); > } > > // Check that this pointer is valid by checking that the vtbl pointer matches > bool Method::is_valid_method() const { > if (this == NULL) { > return false; > } > if ((intptr_t(this) & (wordSize - 1)) != 0) { > return false; > } > if (!is_metaspace_object()) { > return false; > } > return has_method_vptr(this); > } > > > > -----Original Message----- > From: Coleen Phillimore > Sent: den 4 februari 2016 23:44 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN > > > > On 2/4/16 5:40 PM, Coleen Phillimore wrote: >> Summary: Backout change for 8146984 but add an alignment check which >> may have caught original bug. >> >> Will retest with new check once this isn't an integration blocker. Ran >> original tests that failed. 
>> >> open webrev at http://cr.openjdk.java.net/~coleenp/8149038.01/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8149038 > The original bug is: > > https://bugs.openjdk.java.net/browse/JDK-8146984 > > Coleen >> Thanks, >> Coleen From coleen.phillimore at oracle.com Thu Feb 4 23:34:29 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 4 Feb 2016 18:34:29 -0500 Subject: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN In-Reply-To: <56B3DE09.8090506@oracle.com> References: <56B3D374.7030109@oracle.com> <56B3D426.5050002@oracle.com> <145fbaa7-c6e9-4c39-873a-6b1adc6ed72f@default> <56B3DE09.8090506@oracle.com> Message-ID: <56B3E005.2070902@oracle.com> On 2/4/16 6:26 PM, Daniel D. Daugherty wrote: > On 2/4/16 4:08 PM, Markus Gronlund wrote: >> Hi Coleen, >> >> Thanks for reverting, looks good. > > Agreed, but also agree with Markus that it would be a shame to > lose the cleanups. > > Since it's an integration_blocker, I can see why you want to > try to very reduced risk... Do you have plans to take another > run at the cleaned up version? No, I don't really. It eliminates 2 lines in a function and I don't see how eliminating else's is cleaner or at least cleaner enough to file an RFE. Okay, the cleaner has_method_vptr() compiles on linux. It better compile on windows. open webrev at http://cr.openjdk.java.net/~coleenp/8149038.02/ thanks, Coleen > > Dan > > >> >> /Markus >> >> PS I don?t know if you want to go straight back to the previous >> version, but I still think this piece could be tightened a bit: >> >> bool Method::has_method_vptr(const void* ptr) { >> assert(ptr != NULL, "invariant"); >> >> // This assumes that the vtbl pointer is the first word of a C++ >> object. >> // This assumption is also in universe.cpp patch_klass_vtble >> const Method m; >> return dereference_vptr(&m) == dereference_vptr(ptr); >> } >> >> // Check that this pointer is valid by checking that the vtbl pointer >> matches >> bool Method::is_valid_method() const { >> if (this == NULL) { >> return false; >> } >> if ((intptr_t(this) & (wordSize - 1)) != 0) { >> return false; >> } >> if (!is_metaspace_object()) { >> return false; >> } >> return has_method_vptr(this); >> } >> >> >> >> -----Original Message----- >> From: Coleen Phillimore >> Sent: den 4 februari 2016 23:44 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR (S, URGENT) 8149038: SIGSEGV at >> frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN >> >> >> >> On 2/4/16 5:40 PM, Coleen Phillimore wrote: >>> Summary: Backout change for 8146984 but add an alignment check which >>> may have caught original bug. >>> >>> Will retest with new check once this isn't an integration blocker. Ran >>> original tests that failed. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8149038.01/ >>> bug link https://bugs.openjdk.java.net/browse/JDK-8149038 >> The original bug is: >> >> https://bugs.openjdk.java.net/browse/JDK-8146984 >> >> Coleen >>> Thanks, >>> Coleen > From daniel.daugherty at oracle.com Thu Feb 4 23:46:49 2016 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Thu, 4 Feb 2016 16:46:49 -0700 Subject: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN In-Reply-To: <56B3E005.2070902@oracle.com> References: <56B3D374.7030109@oracle.com> <56B3D426.5050002@oracle.com> <145fbaa7-c6e9-4c39-873a-6b1adc6ed72f@default> <56B3DE09.8090506@oracle.com> <56B3E005.2070902@oracle.com> Message-ID: <56B3E2E9.2090903@oracle.com> On 2/4/16 4:34 PM, Coleen Phillimore wrote: > > > On 2/4/16 6:26 PM, Daniel D. Daugherty wrote: >> On 2/4/16 4:08 PM, Markus Gronlund wrote: >>> Hi Coleen, >>> >>> Thanks for reverting, looks good. >> >> Agreed, but also agree with Markus that it would be a shame to >> lose the cleanups. >> >> Since it's an integration_blocker, I can see why you want to >> try to very reduced risk... Do you have plans to take another >> run at the cleaned up version? > > No, I don't really. It eliminates 2 lines in a function and I don't > see how eliminating else's is cleaner or at least cleaner enough to > file an RFE. > > Okay, the cleaner has_method_vptr() compiles on linux. It better > compile on windows. > > open webrev at http://cr.openjdk.java.net/~coleenp/8149038.02/ src/share/vm/oops/method.cpp L2103: // This assumption is also in universe.cpp patch_klass_vtble Typo: 'patch_klass_vtble' -> 'patch_klass_vtable' Don't need another webrev. Thumbs up. Dan > > thanks, > Coleen > >> >> Dan >> >> >>> >>> /Markus >>> >>> PS I don?t know if you want to go straight back to the previous >>> version, but I still think this piece could be tightened a bit: >>> >>> bool Method::has_method_vptr(const void* ptr) { >>> assert(ptr != NULL, "invariant"); >>> >>> // This assumes that the vtbl pointer is the first word of a C++ >>> object. >>> // This assumption is also in universe.cpp patch_klass_vtble >>> const Method m; >>> return dereference_vptr(&m) == dereference_vptr(ptr); >>> } >>> >>> // Check that this pointer is valid by checking that the vtbl >>> pointer matches >>> bool Method::is_valid_method() const { >>> if (this == NULL) { >>> return false; >>> } >>> if ((intptr_t(this) & (wordSize - 1)) != 0) { >>> return false; >>> } >>> if (!is_metaspace_object()) { >>> return false; >>> } >>> return has_method_vptr(this); >>> } >>> >>> >>> >>> -----Original Message----- >>> From: Coleen Phillimore >>> Sent: den 4 februari 2016 23:44 >>> To: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR (S, URGENT) 8149038: SIGSEGV at >>> frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN >>> >>> >>> >>> On 2/4/16 5:40 PM, Coleen Phillimore wrote: >>>> Summary: Backout change for 8146984 but add an alignment check which >>>> may have caught original bug. >>>> >>>> Will retest with new check once this isn't an integration blocker. Ran >>>> original tests that failed. 
>>>> >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8149038.01/ >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8149038 >>> The original bug is: >>> >>> https://bugs.openjdk.java.net/browse/JDK-8146984 >>> >>> Coleen >>>> Thanks, >>>> Coleen >> > From mikael.vidstedt at oracle.com Fri Feb 5 00:10:18 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Thu, 4 Feb 2016 16:10:18 -0800 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56B3428A.3010704@redhat.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> <56B3428A.3010704@redhat.com> Message-ID: <56B3E86A.40206@oracle.com> On 2016-02-04 04:22, Andrew Haley wrote: > On 02/02/2016 07:25 PM, Mikael Vidstedt wrote: >> Please review this change which introduces a Copy::conjoint_swap and an >> Unsafe.copySwapMemory method to call it from Java, along with the >> necessary changes to have java.nio.Bits call it instead of the Bits.c code. >> >> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/hotspot/webrev/ >> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/jdk/webrev/ > One other little thing: why are the byte-swapping methods in class nio.Bits > not called copySwapSomething? e.g.: That sure would be a better name, wouldn't it? I'm not going to be changing the Bits method names as part of this change, but it does seem like a very reasonable follow-up enhancement. Cheers, Mikael From david.holmes at oracle.com Fri Feb 5 01:43:54 2016 From: david.holmes at oracle.com (David Holmes) Date: Fri, 5 Feb 2016 11:43:54 +1000 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56B2FBB4.70407@oracle.com> References: <56B2FBB4.70407@oracle.com> Message-ID: <56B3FE5A.9010806@oracle.com> Hi Chris, On 4/02/2016 5:20 PM, Chris Plummer wrote: > Hello, > > Please review the following for removing Method::_method_data when only > supporting C1 (or more specifically, when not supporting C2 or JVMCI). Does JVMCI exist with C1 only? The COMPILER2_OR_JVMCI conjunction makes things a bit messy. Can we abstract that behind a single variable, INCLUDE_METHOD_DATA (or some such) to make it cleaner? > This will help reduce dynamic footprint usage for the minimal VM. > > As part of this fix, ProfileInterperter is forced to false unless C2 or > JVMCI is supported. This was mainly done to avoid crashes if it is > turned on and Method::_method_data has been excluded, but also because > it is not useful except to C2 or JVMCI. Are you saying that the information generated by ProfileInterpreter is only used by C2 and JVMCI? If that is case it should really have been a C2 only flag. If ProfileInterpreter is forced to false then shouldn't you also be checking TraceProfileInterpreter and PrintMethodData use as well Thanks, David > Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 > > Test with JPRT -testset hotspot. 
> > thanks, > > Chris From chris.plummer at oracle.com Fri Feb 5 02:10:08 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Thu, 4 Feb 2016 18:10:08 -0800 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56B3FE5A.9010806@oracle.com> References: <56B2FBB4.70407@oracle.com> <56B3FE5A.9010806@oracle.com> Message-ID: <56B40480.6060703@oracle.com> Hi David, On 2/4/16 5:43 PM, David Holmes wrote: > Hi Chris, > > On 4/02/2016 5:20 PM, Chris Plummer wrote: >> Hello, >> >> Please review the following for removing Method::_method_data when only >> supporting C1 (or more specifically, when not supporting C2 or JVMCI). > > Does JVMCI exist with C1 only? My understanding is it can exists with C2 or on its own, but currently is not included with C1 builds. > The COMPILER2_OR_JVMCI conjunction makes things a bit messy. Can we > abstract that behind a single variable, INCLUDE_METHOD_DATA (or some > such) to make it cleaner? I'll also be using COMPILER2_OR_JVMCI with another change to that removes some MethodCounter fields. So yes, I can add INCLUDE_METHOD_DATA, but then will need another INCLUDE_XXX for the MethodCounter fields I'll be conditionally removing. > >> This will help reduce dynamic footprint usage for the minimal VM. >> >> As part of this fix, ProfileInterperter is forced to false unless C2 or >> JVMCI is supported. This was mainly done to avoid crashes if it is >> turned on and Method::_method_data has been excluded, but also because >> it is not useful except to C2 or JVMCI. > > Are you saying that the information generated by ProfileInterpreter is > only used by C2 and JVMCI? If that is case it should really have been > a C2 only flag. > That is my understanding. Coleen confirmed it for me. I believe she got her info from the compiler team. BTW, we need a mechanism to make these conditionally unsupported flags a constant value when they are not supported. It would help deadstrip code. > If ProfileInterpreter is forced to false then shouldn't you also be > checking TraceProfileInterpreter and PrintMethodData use as well Yes, I can add those. thanks, Chris > > Thanks, > David > >> Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ >> Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 >> >> Test with JPRT -testset hotspot. >> >> thanks, >> >> Chris From vladimir.kozlov at oracle.com Fri Feb 5 02:35:00 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 4 Feb 2016 18:35:00 -0800 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56B40480.6060703@oracle.com> References: <56B2FBB4.70407@oracle.com> <56B3FE5A.9010806@oracle.com> <56B40480.6060703@oracle.com> Message-ID: <56B40A54.8000005@oracle.com> Yes, interpreter profiling is used only for C2 and JVMCI. We did experimented with profiling for C1 in embedded but dropped it as you remember. Vladimir On 2/4/16 6:10 PM, Chris Plummer wrote: > Hi David, > > On 2/4/16 5:43 PM, David Holmes wrote: >> Hi Chris, >> >> On 4/02/2016 5:20 PM, Chris Plummer wrote: >>> Hello, >>> >>> Please review the following for removing Method::_method_data when only >>> supporting C1 (or more specifically, when not supporting C2 or JVMCI). >> >> Does JVMCI exist with C1 only? > My understanding is it can exists with C2 or on its own, but currently > is not included with C1 builds. >> The COMPILER2_OR_JVMCI conjunction makes things a bit messy. Can we >> abstract that behind a single variable, INCLUDE_METHOD_DATA (or some >> such) to make it cleaner? 
> I'll also be using COMPILER2_OR_JVMCI with another change to that > removes some MethodCounter fields. So yes, I can add > INCLUDE_METHOD_DATA, but then will need another INCLUDE_XXX for the > MethodCounter fields I'll be conditionally removing. >> >>> This will help reduce dynamic footprint usage for the minimal VM. >>> >>> As part of this fix, ProfileInterperter is forced to false unless C2 or >>> JVMCI is supported. This was mainly done to avoid crashes if it is >>> turned on and Method::_method_data has been excluded, but also because >>> it is not useful except to C2 or JVMCI. >> >> Are you saying that the information generated by ProfileInterpreter is >> only used by C2 and JVMCI? If that is case it should really have been >> a C2 only flag. >> > That is my understanding. Coleen confirmed it for me. I believe she got > her info from the compiler team. BTW, we need a mechanism to make these > conditionally unsupported flags a constant value when they are not > supported. It would help deadstrip code. >> If ProfileInterpreter is forced to false then shouldn't you also be >> checking TraceProfileInterpreter and PrintMethodData use as well > Yes, I can add those. > > thanks, > > Chris >> >> Thanks, >> David >> >>> Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 >>> >>> Test with JPRT -testset hotspot. >>> >>> thanks, >>> >>> Chris > From david.holmes at oracle.com Fri Feb 5 03:10:19 2016 From: david.holmes at oracle.com (David Holmes) Date: Fri, 5 Feb 2016 13:10:19 +1000 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56B40480.6060703@oracle.com> References: <56B2FBB4.70407@oracle.com> <56B3FE5A.9010806@oracle.com> <56B40480.6060703@oracle.com> Message-ID: <56B4129B.3010506@oracle.com> On 5/02/2016 12:10 PM, Chris Plummer wrote: > Hi David, > > On 2/4/16 5:43 PM, David Holmes wrote: >> Hi Chris, >> >> On 4/02/2016 5:20 PM, Chris Plummer wrote: >>> Hello, >>> >>> Please review the following for removing Method::_method_data when only >>> supporting C1 (or more specifically, when not supporting C2 or JVMCI). >> >> Does JVMCI exist with C1 only? > My understanding is it can exists with C2 or on its own, but currently > is not included with C1 builds. Okay. >> The COMPILER2_OR_JVMCI conjunction makes things a bit messy. Can we >> abstract that behind a single variable, INCLUDE_METHOD_DATA (or some >> such) to make it cleaner? > I'll also be using COMPILER2_OR_JVMCI with another change to that > removes some MethodCounter fields. So yes, I can add > INCLUDE_METHOD_DATA, but then will need another INCLUDE_XXX for the > MethodCounter fields I'll be conditionally removing. Okay. It is ugly though :( >> >>> This will help reduce dynamic footprint usage for the minimal VM. >>> >>> As part of this fix, ProfileInterperter is forced to false unless C2 or >>> JVMCI is supported. This was mainly done to avoid crashes if it is >>> turned on and Method::_method_data has been excluded, but also because >>> it is not useful except to C2 or JVMCI. >> >> Are you saying that the information generated by ProfileInterpreter is >> only used by C2 and JVMCI? If that is case it should really have been >> a C2 only flag. >> > That is my understanding. Coleen confirmed it for me. I believe she got > her info from the compiler team. BTW, we need a mechanism to make these > conditionally unsupported flags a constant value when they are not > supported. It would help deadstrip code. 
Does it work to define it only in c2_globals.hpp and jvmci_globals.hpp, then in the shared globals.hpp define the flag as a constant "false" if not C2 or JVMCI? (I admit the multiple layers of macros makes it hard to see exactly how to make such a declaration.) >> If ProfileInterpreter is forced to false then shouldn't you also be >> checking TraceProfileInterpreter and PrintMethodData use as well > Yes, I can add those. Thinking more on this, forcing ProfileInterpreter off doesn't really change anything, so I don't think you need to validate these flags are also off. Thanks, David > thanks, > > Chris >> >> Thanks, >> David >> >>> Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 >>> >>> Test with JPRT -testset hotspot. >>> >>> thanks, >>> >>> Chris > From dmitry.samersoff at oracle.com Fri Feb 5 09:16:41 2016 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Fri, 5 Feb 2016 12:16:41 +0300 Subject: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN In-Reply-To: <56B3D374.7030109@oracle.com> References: <56B3D374.7030109@oracle.com> Message-ID: <56B46879.50008@oracle.com> Coleen, Looks good for me. -Dmitry On 2016-02-05 01:40, Coleen Phillimore wrote: > Summary: Backout change for 8146984 but add an alignment check which may > have caught original bug. > > Will retest with new check once this isn't an integration blocker. Ran > original tests that failed. > > open webrev at http://cr.openjdk.java.net/~coleenp/8149038.01/ > bug link https://bugs.openjdk.java.net/browse/JDK-8149038 > > Thanks, > Coleen -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From tobias.hartmann at oracle.com Fri Feb 5 11:33:15 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 5 Feb 2016 12:33:15 +0100 Subject: [9] RFR(XS): 8149109: [TESTBUG] TestRegisterRestoring.java fails with "VM option 'SafepointALot' is develop" Message-ID: <56B4887B.40808@oracle.com> Hi, please review the following fix that adds a missing -XX:+IgnoreUnrecognizedVMOptions to the test. https://bugs.openjdk.java.net/browse/JDK-8149109 http://cr.openjdk.java.net/~thartmann/8149109/webrev.00/ I intend to push this into main because the test already escaped hs-comp (we only execute with fastdebug builds). Thanks, Tobias From vladimir.x.ivanov at oracle.com Fri Feb 5 11:41:59 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Fri, 5 Feb 2016 14:41:59 +0300 Subject: [9] RFR(XS): 8149109: [TESTBUG] TestRegisterRestoring.java fails with "VM option 'SafepointALot' is develop" In-Reply-To: <56B4887B.40808@oracle.com> References: <56B4887B.40808@oracle.com> Message-ID: <56B48A87.9000009@oracle.com> Reviewed. Best regards, Vladimir Ivanov On 2/5/16 2:33 PM, Tobias Hartmann wrote: > Hi, > > please review the following fix that adds a missing -XX:+IgnoreUnrecognizedVMOptions to the test. > > https://bugs.openjdk.java.net/browse/JDK-8149109 > http://cr.openjdk.java.net/~thartmann/8149109/webrev.00/ > > I intend to push this into main because the test already escaped hs-comp (we only execute with fastdebug builds). 
> > Thanks, > Tobias > From tobias.hartmann at oracle.com Fri Feb 5 11:43:10 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 5 Feb 2016 12:43:10 +0100 Subject: [9] RFR(XS): 8149109: [TESTBUG] TestRegisterRestoring.java fails with "VM option 'SafepointALot' is develop" In-Reply-To: <56B48A87.9000009@oracle.com> References: <56B4887B.40808@oracle.com> <56B48A87.9000009@oracle.com> Message-ID: <56B48ACE.4090002@oracle.com> Thanks for the quick review, Vladimir! Best, Tobias On 05.02.2016 12:41, Vladimir Ivanov wrote: > Reviewed. > > Best regards, > Vladimir Ivanov > > On 2/5/16 2:33 PM, Tobias Hartmann wrote: >> Hi, >> >> please review the following fix that adds a missing -XX:+IgnoreUnrecognizedVMOptions to the test. >> >> https://bugs.openjdk.java.net/browse/JDK-8149109 >> http://cr.openjdk.java.net/~thartmann/8149109/webrev.00/ >> >> I intend to push this into main because the test already escaped hs-comp (we only execute with fastdebug builds). >> >> Thanks, >> Tobias >> From paul.sandoz at oracle.com Fri Feb 5 13:00:42 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Fri, 5 Feb 2016 13:00:42 +0000 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <79CA457A-E4E3-4D49-B22A-C959C16DAC49@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> <79CA457A-E4E3-4D49-B22A-C959C16DAC49@oracle.com> Message-ID: <5E9239AE-D8C9-4A98-9C46-9FE0F130A06C@oracle.com> Hi, Nice use of C++ templates :-) Overall looks good. I too would prefer if we could move the argument checking out, perhaps even to the point of requiring callers do that rather than providing another method, for example for Buffer i think the arguments are known to be valid? I think in either case it is important to improve the documentation on the method stating the constraints on arguments, atomicity guarantees etc. I have a hunch that for the particular case of copying-with-swap for buffers i could get this to work work efficiently using Unsafe (three separate methods for each unit type of 2, 4 and 8 bytes), since IIUC the range is bounded to be less than Integer.MAX_VALUE so an int loop rather than a long loop can be used and therefore safe points checks will not be placed within the loop. However, i think what you have done is more generally applicable and could be made intrinsic. It would be a nice at some future point if it could be made a pure Java implementation and intrinsified where appropriate. ? John, regarding array mismatch there were issues with the efficiency of the unrolled loops with Unsafe access. (Since the loops were int bases there were no issues with safe point checks.) Roland recently fixed that so now code is generated that is competitive with direct array accesses. We drop into the stub intrinsic and leverage 128bits or 256bits where supported. Interestingly it seems the unrolled loop using Unsafe is now slightly faster than the stub using 128bit registers. I don?t know if that is due to unluckly alignment, and/or the stub needs to do some manual unrolling. In terms of code-cache efficiency the intrinsic is better. Paul. 
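For reference, stripped of the templates, a copy-with-swap of 4-byte units boils down to roughly the following (a sketch only, not the code in the webrev; overlap handling and the 2- and 8-byte cases are omitted):

#include <stdint.h>
#include <string.h>
#include <stddef.h>

static inline uint32_t byte_swap_u4(uint32_t x) {
  return ((x & 0x000000ffu) << 24) |
         ((x & 0x0000ff00u) <<  8) |
         ((x & 0x00ff0000u) >>  8) |
         ((x & 0xff000000u) >> 24);
}

// Copy byte_count bytes (assumed to be a multiple of 4) from src to dst,
// byte-swapping each 4-byte unit.  memcpy is used for the loads and stores
// so no alignment is assumed; the swap itself happens in a register.
static void copy_swap_u4_sketch(const void* src, void* dst, size_t byte_count) {
  const char* s = (const char*) src;
  char* d = (char*) dst;
  for (size_t i = 0; i < byte_count; i += 4) {
    uint32_t v;
    memcpy(&v, s + i, sizeof(v));
    v = byte_swap_u4(v);
    memcpy(d + i, &v, sizeof(v));
  }
}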
> On 4 Feb 2016, at 06:27, John Rose wrote: > > On Feb 2, 2016, at 11:25 AM, Mikael Vidstedt wrote: >> Please review this change which introduces a Copy::conjoint_swap and an Unsafe.copySwapMemory method to call it from Java, along with the necessary changes to have java.nio.Bits call it instead of the Bits.c code. >> >> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/hotspot/webrev/ >> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/jdk/webrev/ > > This is very good. > > I have some nit-picks: > > These days, when we introduce a new intrinsic (@HSIntrCand), > we write the argument checking code separately in a non-intrinsic > bytecode method. In this case, we don't (yet) have an intrinsic > binding for U.copy*, but we might in the future. (C intrinsifies > memcpy, which is a precedent.) In any case, I would prefer > if we could structure the argument checking code in a similar > way, with Unsafe.java containing both copySwapMemory > and a private copySwapMemory0. Then we can JIT-optimize > the safety checks. > > You might as well extend the same treatment to the pre-existing > copyMemory call. The most important check (and the only one > in U.copyMemory) is to ensure that the size_t operand has not > wrapped around from a Java negative value to a crazy-large > size_t value. That's the low-hanging fruit. Checking the pointers > (for null or oob) is more problematic, of course. Checking consistency > around elemSize is cheap and easy, so I agree that the U.copySM > should do that work also. Basically, Unsafe can do very basic > checks if there is a tricky user model to enforce, but it mustn't > "sign up" to guard the user against all errors. > > Rule of thumb: Unsafe calls don't throw NPEs, they just SEGV. > And the rare bit that *does* throw (IAE usually) should be placed > into Unsafe.java, not unsafe.cpp. (The best-practice rule for putting > argument checking code outside of the intrinsic is a newer one, > so Unsafe code might not always do this.) > > The comment "Generalizing it would be reasonable, but requires > card marking" is bogus, since we never byte-swap managed pointers. > > The test logic will flow a little smoother if your GenericPointer guy, > the onHeap version, stores the appropriate array base offset in his offset field. > You won't have to mention p.isOnHeap nearly so much, and the code will > set a slightly better example. > > The VM_ENTRY_BASE_FROM_LEAF macro is really cool. > > The C++ template code is cool also. It reminds me of the kind > of work Gosling's "Ace" processor could do, but now it's mainstreamed > for all to use in C++. We're going to get some of that goodness > in Project Valhalla with specialization logic. > > I find it amazing that the right way to code this in C is to > use memcpy for unaligned accesses and byte peek/poke > into registers for byte-swapping operators. I'm glad we > can write this code *once* for the JVM and JDK. > > Possible future work: If we can get a better handle on > writing vectorizable loops from Java, including Unsafe-based > ones, we can move some of the C code back up to Java. > Perhaps U.copy* calls for very short lengths deserved to > be broken out into small loops of U.get/put* (with alignment). > I think you experimented with this, and there were problems > with the JIT putting fail-safe memory barriers between > U.get/put* calls. Paul's work on Array.mismatch ran into > similar issues, with the right answer being to write manual > vector code in assembly. 
> > Anyway, you can count me as a reviewer. > > Thanks, > > ? John From coleen.phillimore at oracle.com Fri Feb 5 13:22:01 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 5 Feb 2016 08:22:01 -0500 Subject: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN In-Reply-To: <56B3E2E9.2090903@oracle.com> References: <56B3D374.7030109@oracle.com> <56B3D426.5050002@oracle.com> <145fbaa7-c6e9-4c39-873a-6b1adc6ed72f@default> <56B3DE09.8090506@oracle.com> <56B3E005.2070902@oracle.com> <56B3E2E9.2090903@oracle.com> Message-ID: <56B4A1F9.9090701@oracle.com> I'd fixed that typo in my original fix, but not in the backout. Next time. Coleen On 2/4/16 6:46 PM, Daniel D. Daugherty wrote: > On 2/4/16 4:34 PM, Coleen Phillimore wrote: >> >> >> On 2/4/16 6:26 PM, Daniel D. Daugherty wrote: >>> On 2/4/16 4:08 PM, Markus Gronlund wrote: >>>> Hi Coleen, >>>> >>>> Thanks for reverting, looks good. >>> >>> Agreed, but also agree with Markus that it would be a shame to >>> lose the cleanups. >>> >>> Since it's an integration_blocker, I can see why you want to >>> try to very reduced risk... Do you have plans to take another >>> run at the cleaned up version? >> >> No, I don't really. It eliminates 2 lines in a function and I don't >> see how eliminating else's is cleaner or at least cleaner enough to >> file an RFE. >> >> Okay, the cleaner has_method_vptr() compiles on linux. It better >> compile on windows. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8149038.02/ > > src/share/vm/oops/method.cpp > L2103: // This assumption is also in universe.cpp patch_klass_vtble > Typo: 'patch_klass_vtble' -> 'patch_klass_vtable' > > Don't need another webrev. Thumbs up. > > > Dan > > >> >> thanks, >> Coleen >> >>> >>> Dan >>> >>> >>>> >>>> /Markus >>>> >>>> PS I don?t know if you want to go straight back to the previous >>>> version, but I still think this piece could be tightened a bit: >>>> >>>> bool Method::has_method_vptr(const void* ptr) { >>>> assert(ptr != NULL, "invariant"); >>>> >>>> // This assumes that the vtbl pointer is the first word of a C++ >>>> object. >>>> // This assumption is also in universe.cpp patch_klass_vtble >>>> const Method m; >>>> return dereference_vptr(&m) == dereference_vptr(ptr); >>>> } >>>> >>>> // Check that this pointer is valid by checking that the vtbl >>>> pointer matches >>>> bool Method::is_valid_method() const { >>>> if (this == NULL) { >>>> return false; >>>> } >>>> if ((intptr_t(this) & (wordSize - 1)) != 0) { >>>> return false; >>>> } >>>> if (!is_metaspace_object()) { >>>> return false; >>>> } >>>> return has_method_vptr(this); >>>> } >>>> >>>> >>>> >>>> -----Original Message----- >>>> From: Coleen Phillimore >>>> Sent: den 4 februari 2016 23:44 >>>> To: hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR (S, URGENT) 8149038: SIGSEGV at >>>> frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN >>>> >>>> >>>> >>>> On 2/4/16 5:40 PM, Coleen Phillimore wrote: >>>>> Summary: Backout change for 8146984 but add an alignment check which >>>>> may have caught original bug. >>>>> >>>>> Will retest with new check once this isn't an integration blocker. >>>>> Ran >>>>> original tests that failed. 
>>>>> >>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8149038.01/ >>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8149038 >>>> The original bug is: >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8146984 >>>> >>>> Coleen >>>>> Thanks, >>>>> Coleen >>> >> > From coleen.phillimore at oracle.com Fri Feb 5 13:22:16 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 5 Feb 2016 08:22:16 -0500 Subject: RFR (S, URGENT) 8149038: SIGSEGV at frame::is_interpreted_frame_valid -> StubRoutines::SafeFetchN In-Reply-To: <56B46879.50008@oracle.com> References: <56B3D374.7030109@oracle.com> <56B46879.50008@oracle.com> Message-ID: <56B4A208.9000508@oracle.com> Thanks, Dmitry. Coleen On 2/5/16 4:16 AM, Dmitry Samersoff wrote: > Coleen, > > Looks good for me. > > -Dmitry > > On 2016-02-05 01:40, Coleen Phillimore wrote: >> Summary: Backout change for 8146984 but add an alignment check which may >> have caught original bug. >> >> Will retest with new check once this isn't an integration blocker. Ran >> original tests that failed. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8149038.01/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8149038 >> >> Thanks, >> Coleen > From vladimir.kozlov at oracle.com Fri Feb 5 17:32:29 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 5 Feb 2016 09:32:29 -0800 Subject: [9] RFR(XS): 8149109: [TESTBUG] TestRegisterRestoring.java fails with "VM option 'SafepointALot' is develop" In-Reply-To: <56B4887B.40808@oracle.com> References: <56B4887B.40808@oracle.com> Message-ID: <56B4DCAD.1020901@oracle.com> On 2/5/16 3:33 AM, Tobias Hartmann wrote: > Hi, > > please review the following fix that adds a missing -XX:+IgnoreUnrecognizedVMOptions to the test. > > https://bugs.openjdk.java.net/browse/JDK-8149109 > http://cr.openjdk.java.net/~thartmann/8149109/webrev.00/ Good. > > I intend to push this into main because the test already escaped hs-comp (we only execute with fastdebug builds). Agree. How it passed JPRT? Is this test is not included in out set of tests for JPRT runs? Thanks, Vladimir > > Thanks, > Tobias > From vladimir.x.ivanov at oracle.com Fri Feb 5 17:37:30 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Fri, 5 Feb 2016 20:37:30 +0300 Subject: [9] RFR (S): 8149141: Optimized build is broken Message-ID: <56B4DDDA.6000201@oracle.com> http://cr.openjdk.java.net/~vlivanov/8149141/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8149141 !defined(PRODUCT) & ASSERT confusion. Best regards, Vladimir Ivanov PS: I'm surprised that unit tests are full of asserts, but guarded by !defined(PRODUCT). Any reason to keep the tests in optimized build? From vladimir.kozlov at oracle.com Fri Feb 5 19:42:07 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 5 Feb 2016 11:42:07 -0800 Subject: [9] RFR (S): 8149141: Optimized build is broken In-Reply-To: <56B4DDDA.6000201@oracle.com> References: <56B4DDDA.6000201@oracle.com> Message-ID: <56B4FB0F.8030006@oracle.com> Looks good. I think tests should be in debug build. Thanks, Vladimir K On 2/5/16 9:37 AM, Vladimir Ivanov wrote: > http://cr.openjdk.java.net/~vlivanov/8149141/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8149141 > > !defined(PRODUCT) & ASSERT confusion. > > Best regards, > Vladimir Ivanov > > PS: I'm surprised that unit tests are full of asserts, but guarded by > !defined(PRODUCT). Any reason to keep the tests in optimized build? 
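The underlying mismatch is that optimized builds compile the !defined(PRODUCT) code but leave ASSERT undefined, so the checks silently disappear. A tiny illustration (sketch only, not the code from the webrev):

#ifndef PRODUCT
// This block is compiled into optimized binaries (PRODUCT is not defined
// there)...
void test_example() {
  int result = 2 + 2;
  // ...but assert() expands to nothing without ASSERT, so an optimized
  // build runs this "test" without actually checking anything.
  assert(result == 4, "must be");
  // guarantee(result == 4, "must be");  // checked in every build flavor
}
#endif // PRODUCT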
From mikael.vidstedt at oracle.com Fri Feb 5 22:21:47 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Fri, 5 Feb 2016 14:21:47 -0800 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <5E9239AE-D8C9-4A98-9C46-9FE0F130A06C@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> <79CA457A-E4E3-4D49-B22A-C959C16DAC49@oracle.com> <5E9239AE-D8C9-4A98-9C46-9FE0F130A06C@oracle.com> Message-ID: <56B5207B.8010107@oracle.com> I fully agree that moving the arguments checking up to Java makes more sense, and I've prepared new webrevs which do exactly that, including changes to address the other feedback from David, John and others: hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04/hotspot/webrev/ jdk: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04/jdk/webrev/ Incremental webrevs for your convenience: hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04.incr/hotspot/webrev/ jdk: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04.incr/jdk/webrev/ I have done some benchmarking of this code and for large copies (16MB+) this outperforms the old Bits.c implementation by *30-100%* depending on platform and exact element sizes! For smaller copies the additional checks which are now performed hurt performance on client VMs (80-90% of old impl), but with the server VMs I see performance on par with, or in most cases 5-10% better than the old implementation. There's a potentially statistically significant regression of ~3-4% for elemSize=2, but for now I'm going to declare success. There's certainly room for further improvements here, but this should at least do for addressing the original problem. I filed https://bugs.openjdk.java.net/browse/JDK-8149159 for moving the checks for Unsafe.copyMemory to Java, and will work on that next. I also filed https://bugs.openjdk.java.net/browse/JDK-8149162 to cover the potential renaming of the Bits methods to have more informative names. Finally, I filed https://bugs.openjdk.java.net/browse/JDK-8149163 to look at improving the behavior of Unsafe.addressSize(), after having spent too much time trying to understand why the performance of the new U.copySwapMemory Java checks wasn't quite living up to my expectations (spoiler alert: Unsafe.addressSize() is not intrinsified, so will always result in a call into the VM/unsafe.cpp). Finally, I - too - would like to see the copy-swap logic moved into Java, and as I mentioned I played around with that first before I decided to do the native implementation to address the immediate problem. Looking forward to what you find Paul! Cheers, Mikael On 2016-02-05 05:00, Paul Sandoz wrote: > Hi, > > Nice use of C++ templates :-) > > Overall looks good. > > I too would prefer if we could move the argument checking out, perhaps even to the point of requiring callers do that rather than providing another method, for example for Buffer i think the arguments are known to be valid? I think in either case it is important to improve the documentation on the method stating the constraints on arguments, atomicity guarantees etc. 
> > I have a hunch that for the particular case of copying-with-swap for buffers i could get this to work work efficiently using Unsafe (three separate methods for each unit type of 2, 4 and 8 bytes), since IIUC the range is bounded to be less than Integer.MAX_VALUE so an int loop rather than a long loop can be used and therefore safe points checks will not be placed within the loop. > > However, i think what you have done is more generally applicable and could be made intrinsic. It would be a nice at some future point if it could be made a pure Java implementation and intrinsified where appropriate. > > ? > > John, regarding array mismatch there were issues with the efficiency of the unrolled loops with Unsafe access. (Since the loops were int bases there were no issues with safe point checks.) Roland recently fixed that so now code is generated that is competitive with direct array accesses. We drop into the stub intrinsic and leverage 128bits or 256bits where supported. Interestingly it seems the unrolled loop using Unsafe is now slightly faster than the stub using 128bit registers. I don?t know if that is due to unluckly alignment, and/or the stub needs to do some manual unrolling. In terms of code-cache efficiency the intrinsic is better. > > Paul. > > > > > >> On 4 Feb 2016, at 06:27, John Rose wrote: >> >> On Feb 2, 2016, at 11:25 AM, Mikael Vidstedt wrote: >>> Please review this change which introduces a Copy::conjoint_swap and an Unsafe.copySwapMemory method to call it from Java, along with the necessary changes to have java.nio.Bits call it instead of the Bits.c code. >>> >>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/hotspot/webrev/ >>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/jdk/webrev/ >> This is very good. >> >> I have some nit-picks: >> >> These days, when we introduce a new intrinsic (@HSIntrCand), >> we write the argument checking code separately in a non-intrinsic >> bytecode method. In this case, we don't (yet) have an intrinsic >> binding for U.copy*, but we might in the future. (C intrinsifies >> memcpy, which is a precedent.) In any case, I would prefer >> if we could structure the argument checking code in a similar >> way, with Unsafe.java containing both copySwapMemory >> and a private copySwapMemory0. Then we can JIT-optimize >> the safety checks. >> >> You might as well extend the same treatment to the pre-existing >> copyMemory call. The most important check (and the only one >> in U.copyMemory) is to ensure that the size_t operand has not >> wrapped around from a Java negative value to a crazy-large >> size_t value. That's the low-hanging fruit. Checking the pointers >> (for null or oob) is more problematic, of course. Checking consistency >> around elemSize is cheap and easy, so I agree that the U.copySM >> should do that work also. Basically, Unsafe can do very basic >> checks if there is a tricky user model to enforce, but it mustn't >> "sign up" to guard the user against all errors. >> >> Rule of thumb: Unsafe calls don't throw NPEs, they just SEGV. >> And the rare bit that *does* throw (IAE usually) should be placed >> into Unsafe.java, not unsafe.cpp. (The best-practice rule for putting >> argument checking code outside of the intrinsic is a newer one, >> so Unsafe code might not always do this.) >> >> The comment "Generalizing it would be reasonable, but requires >> card marking" is bogus, since we never byte-swap managed pointers. 
>> >> The test logic will flow a little smoother if your GenericPointer guy, >> the onHeap version, stores the appropriate array base offset in his offset field. >> You won't have to mention p.isOnHeap nearly so much, and the code will >> set a slightly better example. >> >> The VM_ENTRY_BASE_FROM_LEAF macro is really cool. >> >> The C++ template code is cool also. It reminds me of the kind >> of work Gosling's "Ace" processor could do, but now it's mainstreamed >> for all to use in C++. We're going to get some of that goodness >> in Project Valhalla with specialization logic. >> >> I find it amazing that the right way to code this in C is to >> use memcpy for unaligned accesses and byte peek/poke >> into registers for byte-swapping operators. I'm glad we >> can write this code *once* for the JVM and JDK. >> >> Possible future work: If we can get a better handle on >> writing vectorizable loops from Java, including Unsafe-based >> ones, we can move some of the C code back up to Java. >> Perhaps U.copy* calls for very short lengths deserved to >> be broken out into small loops of U.get/put* (with alignment). >> I think you experimented with this, and there were problems >> with the JIT putting fail-safe memory barriers between >> U.get/put* calls. Paul's work on Array.mismatch ran into >> similar issues, with the right answer being to write manual >> vector code in assembly. >> >> Anyway, you can count me as a reviewer. >> >> Thanks, >> >> ? John From kim.barrett at oracle.com Fri Feb 5 23:22:25 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 5 Feb 2016 18:22:25 -0500 Subject: [9] RFR (S): 8149141: Optimized build is broken In-Reply-To: <56B4DDDA.6000201@oracle.com> References: <56B4DDDA.6000201@oracle.com> Message-ID: <9554AD61-2C81-426D-A5A6-14B9C025D3BA@oracle.com> > On Feb 5, 2016, at 12:37 PM, Vladimir Ivanov wrote: > > http://cr.openjdk.java.net/~vlivanov/8149141/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8149141 > > !defined(PRODUCT) & ASSERT confusion. > > Best regards, > Vladimir Ivanov > > PS: I'm surprised that unit tests are full of asserts, but guarded by !defined(PRODUCT). Any reason to keep the tests in optimized build? Some of the tests I?ve written use guarantee for that very reason. But apparently not all, as one of the ones being fixed by this change is one of mine. Oops. From kim.barrett at oracle.com Sat Feb 6 07:03:15 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Sat, 6 Feb 2016 02:03:15 -0500 Subject: [9] RFR (S): 8149141: Optimized build is broken In-Reply-To: <56B4DDDA.6000201@oracle.com> References: <56B4DDDA.6000201@oracle.com> Message-ID: > On Feb 5, 2016, at 12:37 PM, Vladimir Ivanov wrote: > > http://cr.openjdk.java.net/~vlivanov/8149141/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8149141 > > !defined(PRODUCT) & ASSERT confusion. > > Best regards, > Vladimir Ivanov > > PS: I'm surprised that unit tests are full of asserts, but guarded by !defined(PRODUCT). Any reason to keep the tests in optimized build? Change looks good. Not sure it?s worth cleaning up the confusion in the internal VM tests around those flags, since we?re hoping for a better unit test framework soonish (JDK-8047975). I?m pretty sure there are other places where there is confusion around !defined(PRODUCT) && !defined(ASSERT), though it would help if I could find a clear description of what ?optimized? builds are for. That variant seems to be the one nobody talks about? 
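For reference, the guarantee-based pattern mentioned above looks like this (sketch only), and is what keeps a test meaningful in optimized builds:

// Sketch: an internal VM test written with guarantee() instead of assert(),
// so the check still fires in optimized builds where assert() is a no-op.
void test_example_with_guarantee() {
  int x = 1 << 3;
  guarantee(x == 8, "shift is broken");
}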
From igor.ignatyev at oracle.com Sun Feb 7 21:22:15 2016 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Mon, 8 Feb 2016 00:22:15 +0300 Subject: RFR(XS) : 8144695 : --disable-warnings-as-errors does not work for HotSpot build In-Reply-To: <100FFD7C-1F76-4C17-BA8F-293C99F1B6C1@oracle.com> References: <5BE45063-15F7-446E-9001-3AFA3B00578C@oracle.com> <5FB23711-1ED5-4ABB-8F1C-D6EEA551ED22@oracle.com> <100FFD7C-1F76-4C17-BA8F-293C99F1B6C1@oracle.com> Message-ID: <7B6E32C6-67B3-44D8-8A81-0AAC87DE206C@oracle.com> Hi Kim, could you please take a look at the updated webrev: http://cr.openjdk.java.net/~iignatyev/8144695/webrev.03 I agree that ?+w? isn?t related to WARNINGS_ARE_ERRORS, so it was moved to CFLAGS_WARN. Regarding compiler version based conditions, I think it?d be better for build team to decide how to deal w/ them. PS I?ve checked that w/ the patch applied warnings, which normally cause a build error, don?t cause any build errors w/ --disable-warnings-as-errors. Thanks, Igor > On Dec 17, 2015, at 11:30 PM, Kim Barrett wrote: > > On Dec 17, 2015, at 8:22 AM, Igor Ignatyev wrote: >> >> >>> On Dec 17, 2015, at 2:10 AM, Kim Barrett wrote: >>> make/solaris/makefiles/adlc.make >>> 77 WARNINGS_ARE_ERRORS ?= -w -xwe >>> >>> I'm pretty sure "-w" is wrong here, and should be removed. >> you are right, I made a typo, it was ?+w? before. the new webrev : http://cr.openjdk.java.net/~iignatyev/8144695/webrev.02/ >> >>> And it's >>> not clear why this assignment should be conditional on the compiler >>> version. >> it was added as a fix for https://bugs.openjdk.java.net/browse/JDK-6851829, excerpt from Chris?s evaluation: >> >>> Since some of the errors are in system headers we can only disable the "+w -errwarn" on SS11 and below. > > "+w" has nothing to do with warnings being errors; it just turns on > more warnings. So it shouldn't be in WARNINGS_ARE_ERRORS. > > CFLAGS_WARN is (according to various comments) supposed to hold > options to enable/disable warnings, so "+w" there was reasonable, > while -errwarn should not have been there by that definition. > > The conditionalization disables additional warnings and "warnings are > errors" for older compilers that I think we're no longer using for > jdk9. Are we allowed to retire support for such? > > The conditionalization may only be needed for "+w", though without > testing on a no longer officially supported version of the compiler > that would be hard to prove. 
From david.holmes at oracle.com Mon Feb 8 00:14:16 2016 From: david.holmes at oracle.com (David Holmes) Date: Mon, 8 Feb 2016 10:14:16 +1000 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56B5207B.8010107@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> <79CA457A-E4E3-4D49-B22A-C959C16DAC49@oracle.com> <5E9239AE-D8C9-4A98-9C46-9FE0F130A06C@oracle.com> <56B5207B.8010107@oracle.com> Message-ID: <56B7DDD8.2050701@oracle.com> On 6/02/2016 8:21 AM, Mikael Vidstedt wrote: > > I fully agree that moving the arguments checking up to Java makes more > sense, and I've prepared new webrevs which do exactly that, including > changes to address the other feedback from David, John and others: Shouldn't the lowest-level do_conjoint_swap routines at least check preconditions with asserts to catch the cases where the calling Java code has failed to do the right thing? The other Copy methods seem to do this. David ----- > hotspot: > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04/hotspot/webrev/ > > jdk: > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04/jdk/webrev/ > > Incremental webrevs for your convenience: > > hotspot: > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04.incr/hotspot/webrev/ > > jdk: > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04.incr/jdk/webrev/ > > > > I have done some benchmarking of this code and for large copies (16MB+) > this outperforms the old Bits.c implementation by *30-100%* depending on > platform and exact element sizes! For smaller copies the additional > checks which are now performed hurt performance on client VMs (80-90% of > old impl), but with the server VMs I see performance on par with, or in > most cases 5-10% better than the old implementation. There's a > potentially statistically significant regression of ~3-4% for > elemSize=2, but for now I'm going to declare success. There's certainly > room for further improvements here, but this should at least do for > addressing the original problem. > > > I filed https://bugs.openjdk.java.net/browse/JDK-8149159 for moving the > checks for Unsafe.copyMemory to Java, and will work on that next. I also > filed https://bugs.openjdk.java.net/browse/JDK-8149162 to cover the > potential renaming of the Bits methods to have more informative names. > Finally, I filed https://bugs.openjdk.java.net/browse/JDK-8149163 to > look at improving the behavior of Unsafe.addressSize(), after having > spent too much time trying to understand why the performance of the new > U.copySwapMemory Java checks wasn't quite living up to my expectations > (spoiler alert: Unsafe.addressSize() is not intrinsified, so will always > result in a call into the VM/unsafe.cpp). > > > Finally, I - too - would like to see the copy-swap logic moved into > Java, and as I mentioned I played around with that first before I > decided to do the native implementation to address the immediate > problem. Looking forward to what you find Paul! > > Cheers, > Mikael > > On 2016-02-05 05:00, Paul Sandoz wrote: >> Hi, >> >> Nice use of C++ templates :-) >> >> Overall looks good. 
>> >> I too would prefer if we could move the argument checking out, perhaps >> even to the point of requiring callers do that rather than providing >> another method, for example for Buffer i think the arguments are known >> to be valid? I think in either case it is important to improve the >> documentation on the method stating the constraints on arguments, >> atomicity guarantees etc. >> >> I have a hunch that for the particular case of copying-with-swap for >> buffers i could get this to work work efficiently using Unsafe (three >> separate methods for each unit type of 2, 4 and 8 bytes), since IIUC >> the range is bounded to be less than Integer.MAX_VALUE so an int loop >> rather than a long loop can be used and therefore safe points checks >> will not be placed within the loop. >> >> However, i think what you have done is more generally applicable and >> could be made intrinsic. It would be a nice at some future point if it >> could be made a pure Java implementation and intrinsified where >> appropriate. >> >> ? >> >> John, regarding array mismatch there were issues with the efficiency >> of the unrolled loops with Unsafe access. (Since the loops were int >> bases there were no issues with safe point checks.) Roland recently >> fixed that so now code is generated that is competitive with direct >> array accesses. We drop into the stub intrinsic and leverage 128bits >> or 256bits where supported. Interestingly it seems the unrolled loop >> using Unsafe is now slightly faster than the stub using 128bit >> registers. I don?t know if that is due to unluckly alignment, and/or >> the stub needs to do some manual unrolling. In terms of code-cache >> efficiency the intrinsic is better. >> >> Paul. >> >> >> >> >> >>> On 4 Feb 2016, at 06:27, John Rose wrote: >>> >>> On Feb 2, 2016, at 11:25 AM, Mikael Vidstedt >>> wrote: >>>> Please review this change which introduces a Copy::conjoint_swap and >>>> an Unsafe.copySwapMemory method to call it from Java, along with the >>>> necessary changes to have java.nio.Bits call it instead of the >>>> Bits.c code. >>>> >>>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/hotspot/webrev/ >>>> >>>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/jdk/webrev/ >>>> >>> This is very good. >>> >>> I have some nit-picks: >>> >>> These days, when we introduce a new intrinsic (@HSIntrCand), >>> we write the argument checking code separately in a non-intrinsic >>> bytecode method. In this case, we don't (yet) have an intrinsic >>> binding for U.copy*, but we might in the future. (C intrinsifies >>> memcpy, which is a precedent.) In any case, I would prefer >>> if we could structure the argument checking code in a similar >>> way, with Unsafe.java containing both copySwapMemory >>> and a private copySwapMemory0. Then we can JIT-optimize >>> the safety checks. >>> >>> You might as well extend the same treatment to the pre-existing >>> copyMemory call. The most important check (and the only one >>> in U.copyMemory) is to ensure that the size_t operand has not >>> wrapped around from a Java negative value to a crazy-large >>> size_t value. That's the low-hanging fruit. Checking the pointers >>> (for null or oob) is more problematic, of course. Checking consistency >>> around elemSize is cheap and easy, so I agree that the U.copySM >>> should do that work also. Basically, Unsafe can do very basic >>> checks if there is a tricky user model to enforce, but it mustn't >>> "sign up" to guard the user against all errors. 
>>> >>> Rule of thumb: Unsafe calls don't throw NPEs, they just SEGV. >>> And the rare bit that *does* throw (IAE usually) should be placed >>> into Unsafe.java, not unsafe.cpp. (The best-practice rule for putting >>> argument checking code outside of the intrinsic is a newer one, >>> so Unsafe code might not always do this.) >>> >>> The comment "Generalizing it would be reasonable, but requires >>> card marking" is bogus, since we never byte-swap managed pointers. >>> >>> The test logic will flow a little smoother if your GenericPointer guy, >>> the onHeap version, stores the appropriate array base offset in his >>> offset field. >>> You won't have to mention p.isOnHeap nearly so much, and the code will >>> set a slightly better example. >>> >>> The VM_ENTRY_BASE_FROM_LEAF macro is really cool. >>> >>> The C++ template code is cool also. It reminds me of the kind >>> of work Gosling's "Ace" processor could do, but now it's mainstreamed >>> for all to use in C++. We're going to get some of that goodness >>> in Project Valhalla with specialization logic. >>> >>> I find it amazing that the right way to code this in C is to >>> use memcpy for unaligned accesses and byte peek/poke >>> into registers for byte-swapping operators. I'm glad we >>> can write this code *once* for the JVM and JDK. >>> >>> Possible future work: If we can get a better handle on >>> writing vectorizable loops from Java, including Unsafe-based >>> ones, we can move some of the C code back up to Java. >>> Perhaps U.copy* calls for very short lengths deserved to >>> be broken out into small loops of U.get/put* (with alignment). >>> I think you experimented with this, and there were problems >>> with the JIT putting fail-safe memory barriers between >>> U.get/put* calls. Paul's work on Array.mismatch ran into >>> similar issues, with the right answer being to write manual >>> vector code in assembly. >>> >>> Anyway, you can count me as a reviewer. >>> >>> Thanks, >>> >>> ? John > From tobias.hartmann at oracle.com Mon Feb 8 08:20:47 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Mon, 8 Feb 2016 09:20:47 +0100 Subject: [9] RFR(XS): 8149109: [TESTBUG] TestRegisterRestoring.java fails with "VM option 'SafepointALot' is develop" In-Reply-To: <56B4DCAD.1020901@oracle.com> References: <56B4887B.40808@oracle.com> <56B4DCAD.1020901@oracle.com> Message-ID: <56B84FDF.8030305@oracle.com> Hi Vladimir, On 05.02.2016 18:32, Vladimir Kozlov wrote: > On 2/5/16 3:33 AM, Tobias Hartmann wrote: >> Hi, >> >> please review the following fix that adds a missing -XX:+IgnoreUnrecognizedVMOptions to the test. >> >> https://bugs.openjdk.java.net/browse/JDK-8149109 >> http://cr.openjdk.java.net/~thartmann/8149109/webrev.00/ > > Good. Thanks, for the review. >> I intend to push this into main because the test already escaped hs-comp (we only execute with fastdebug builds). > > Agree. How it passed JPRT? Is this test is not included in out set of tests for JPRT runs? We don't execute the jtreg tests on JPRT with a product build (only fastdebug). I've run RBT on all platforms but forgot to enable testing with product builds. Thanks, Tobias From vladimir.x.ivanov at oracle.com Mon Feb 8 13:06:29 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Mon, 8 Feb 2016 16:06:29 +0300 Subject: [9] RFR (S): 8149141: Optimized build is broken In-Reply-To: References: <56B4DDDA.6000201@oracle.com> Message-ID: <56B892D5.1020309@oracle.com> Vladimir, Kim, thanks for the reviews. 
>> PS: I'm surprised that unit tests are full of asserts, but guarded by !defined(PRODUCT). Any reason to keep the tests in optimized build? > > Change looks good. > > Not sure it?s worth cleaning up the confusion in the internal VM tests around those flags, since we?re > hoping for a better unit test framework soonish (JDK-8047975). I?m pretty sure there are other places > where there is confusion around !defined(PRODUCT) && !defined(ASSERT), though it would help if > I could find a clear description of what ?optimized? builds are for. That variant seems to be the one > nobody talks about? I'm fine with leaving it as is for now. Optimized binaries are essentially product binaries with additional diagnostic functionality: counters, tracing, internal structures dumping, verification. Just look through *globals.hpp for notproduct flags. Usually, optimized binaries behave like product, but provide more tools to peek into the JVM. So, it is much easier to diagnose & debug problems with optimized binaries. But with gradual move of diagnostic functionality into product, optimized binaries become less relevant. Best regards, Vladimir Ivanov From kim.barrett at oracle.com Mon Feb 8 19:37:34 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 8 Feb 2016 14:37:34 -0500 Subject: RFR(XS) : 8144695 : --disable-warnings-as-errors does not work for HotSpot build In-Reply-To: <7B6E32C6-67B3-44D8-8A81-0AAC87DE206C@oracle.com> References: <5BE45063-15F7-446E-9001-3AFA3B00578C@oracle.com> <5FB23711-1ED5-4ABB-8F1C-D6EEA551ED22@oracle.com> <100FFD7C-1F76-4C17-BA8F-293C99F1B6C1@oracle.com> <7B6E32C6-67B3-44D8-8A81-0AAC87DE206C@oracle.com> Message-ID: <97585ABE-82C2-42C6-AC3A-2418C8F9AC85@oracle.com> > On Feb 7, 2016, at 4:22 PM, Igor Ignatyev wrote: > > Hi Kim, > > could you please take a look at the updated webrev: http://cr.openjdk.java.net/~iignatyev/8144695/webrev.03 > > I agree that ?+w? isn?t related to WARNINGS_ARE_ERRORS, so it was moved to CFLAGS_WARN. > > Regarding compiler version based conditions, I think it?d be better for build team to decide how to deal w/ them. > > PS I?ve checked that w/ the patch applied warnings, which normally cause a build error, don?t cause any build errors w/ --disable-warnings-as-errors. Looks good. From david.holmes at oracle.com Tue Feb 9 05:48:39 2016 From: david.holmes at oracle.com (David Holmes) Date: Tue, 9 Feb 2016 15:48:39 +1000 Subject: (XS) RFR: 8149427: Remove .class files from the hotspot repo .hgignore file Message-ID: <56B97DB7.1080402@oracle.com> Bug: https://bugs.openjdk.java.net/browse/JDK-8149427 webrev: http://cr.openjdk.java.net/~dholmes/8149427/webrev/ JDK-6900757 added the following to .hgignore: +\.class$ but it is unclear why this was done. This setting can cause problems when jtreg testing leaves class files in unexpected places, and -stree JPRT submissions then fail testing in strange ways. "hg status" doesn't show these errant files because of the entry in .hgignore. I propose to remove the entry from the .hgignore file. 
Thanks, David patch: --- old/./.hgignore 2016-02-09 00:43:28.786882859 -0500 +++ new/./.hgignore 2016-02-09 00:43:27.254796576 -0500 @@ -10,7 +10,6 @@ .igv.log ^.hgtip .DS_Store -\.class$ ^\.mx.jvmci/env ^\.mx.jvmci/.*\.pyc ^\.mx.jvmci/eclipse-launches/.* --- From mikael.vidstedt at oracle.com Tue Feb 9 07:04:40 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Mon, 8 Feb 2016 23:04:40 -0800 Subject: (XS) RFR: 8149427: Remove .class files from the hotspot repo .hgignore file In-Reply-To: <56B97DB7.1080402@oracle.com> References: <56B97DB7.1080402@oracle.com> Message-ID: <56B98F88.1080401@oracle.com> Looks good.It would of course be great to understand why it was added to start with, but even so I don't think it should be there (and we should instead fix whatever caused it to be added). Cheers, Mikael On 2016-02-08 21:48, David Holmes wrote: > Bug: https://bugs.openjdk.java.net/browse/JDK-8149427 > > webrev: http://cr.openjdk.java.net/~dholmes/8149427/webrev/ > > JDK-6900757 added the following to .hgignore: > > +\.class$ > > but it is unclear why this was done. This setting can cause problems > when jtreg testing leaves class files in unexpected places, and -stree > JPRT submissions then fail testing in strange ways. "hg status" > doesn't show these errant files because of the entry in .hgignore. > > I propose to remove the entry from the .hgignore file. > > Thanks, > David > > > patch: > > --- old/./.hgignore 2016-02-09 00:43:28.786882859 -0500 > +++ new/./.hgignore 2016-02-09 00:43:27.254796576 -0500 > @@ -10,7 +10,6 @@ > .igv.log > ^.hgtip > .DS_Store > -\.class$ > ^\.mx.jvmci/env > ^\.mx.jvmci/.*\.pyc > ^\.mx.jvmci/eclipse-launches/.* > > --- > From magnus.ihse.bursie at oracle.com Tue Feb 9 09:54:13 2016 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Tue, 9 Feb 2016 10:54:13 +0100 Subject: RFR(XS) : 8144695 : --disable-warnings-as-errors does not work for HotSpot build In-Reply-To: <97585ABE-82C2-42C6-AC3A-2418C8F9AC85@oracle.com> References: <5BE45063-15F7-446E-9001-3AFA3B00578C@oracle.com> <5FB23711-1ED5-4ABB-8F1C-D6EEA551ED22@oracle.com> <100FFD7C-1F76-4C17-BA8F-293C99F1B6C1@oracle.com> <7B6E32C6-67B3-44D8-8A81-0AAC87DE206C@oracle.com> <97585ABE-82C2-42C6-AC3A-2418C8F9AC85@oracle.com> Message-ID: <56B9B745.20504@oracle.com> On 2016-02-08 20:37, Kim Barrett wrote: >> On Feb 7, 2016, at 4:22 PM, Igor Ignatyev wrote: >> >> Hi Kim, >> >> could you please take a look at the updated webrev: http://cr.openjdk.java.net/~iignatyev/8144695/webrev.03 >> >> I agree that ?+w? isn?t related to WARNINGS_ARE_ERRORS, so it was moved to CFLAGS_WARN. >> >> Regarding compiler version based conditions, I think it?d be better for build team to decide how to deal w/ them. >> >> PS I?ve checked that w/ the patch applied warnings, which normally cause a build error, don?t cause any build errors w/ --disable-warnings-as-errors. > Looks good. > Looks good to me to, now. 
/Magnus From igor.ignatyev at oracle.com Tue Feb 9 10:14:54 2016 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Tue, 9 Feb 2016 13:14:54 +0300 Subject: RFR(XS) : 8144695 : --disable-warnings-as-errors does not work for HotSpot build In-Reply-To: <56B9B745.20504@oracle.com> References: <5BE45063-15F7-446E-9001-3AFA3B00578C@oracle.com> <5FB23711-1ED5-4ABB-8F1C-D6EEA551ED22@oracle.com> <100FFD7C-1F76-4C17-BA8F-293C99F1B6C1@oracle.com> <7B6E32C6-67B3-44D8-8A81-0AAC87DE206C@oracle.com> <97585ABE-82C2-42C6-AC3A-2418C8F9AC85@oracle.com> <56B9B745.20504@oracle.com> Message-ID: <12EB23A0-5446-437E-AB09-B79B47EB0B00@oracle.com> Kim, Magnus, Thank you for review. ? Igor > On Feb 9, 2016, at 12:54 PM, Magnus Ihse Bursie wrote: > > On 2016-02-08 20:37, Kim Barrett wrote: >>> On Feb 7, 2016, at 4:22 PM, Igor Ignatyev wrote: >>> >>> Hi Kim, >>> >>> could you please take a look at the updated webrev: http://cr.openjdk.java.net/~iignatyev/8144695/webrev.03 >>> >>> I agree that ?+w? isn?t related to WARNINGS_ARE_ERRORS, so it was moved to CFLAGS_WARN. >>> >>> Regarding compiler version based conditions, I think it?d be better for build team to decide how to deal w/ them. >>> >>> PS I?ve checked that w/ the patch applied warnings, which normally cause a build error, don?t cause any build errors w/ --disable-warnings-as-errors. >> Looks good. >> > Looks good to me to, now. > > /Magnus > From goetz.lindenmaier at sap.com Tue Feb 9 20:58:52 2016 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 9 Feb 2016 20:58:52 +0000 Subject: Version special case '9' Message-ID: Hi, could somebody please comment on this issue? Thanks, Goetz From: Lindenmaier, Goetz Sent: Dienstag, 26. Januar 2016 17:13 To: hotspot-dev at openjdk.java.net; 'verona-dev at openjdk.java.net' Subject: Version special case '9' Hi, We appreciate the new version scheme introduced with JEP 223. It simplifies the versioning and we agree that it is to it's best downward compatible. Unfortunately there is one exception to this. The initial version of a new major release skips the minor and security digits. This makes parsing the string unnecessarily complicated, because the format is not predictable. In this, the JEP also is inconsistent. In chapter "Dropping the initial 1 element from version numbers" it is argued that comparing "9.0.0" to "1.8.0" by digit will show that 9 is newer than 8. But there is no such version as 9.0.0. This is stated in chapter "Version numbers": "The version number does not include trailing zero elements; i.e., $SECURITY is omitted if it has the value zero, and $MINOR is omitted if both $MINOR and $SECURITY have the value zero." Our BussinessObjects applications parse the option string by Float.parseFloat(System.getProperty("java.version").substring(0,3)) This delivers 1.7 for Java 7, but currently crashes for jdk9, as it tries to parse "9-i" from 9-internal. With trailing .0.0, this would see 9.0 and could parse the Java version until Java 99. As a workaround for this, we are now configuring with --with-version-patch=1 which results in version string 9.0.0.1-internal. At another place, we use split() to analyse the version. if (Integer.parseInt(System.getProperty("java.version").split("\\.")[1]) > 7)) { ... If there are always 3 numbers, this could be fixed to: if (Integer.parseInt(System.getProperty("java.version").split("\\.")[0]) > 7) || Integer.parseInt(System.getProperty("java.version").split("\\.")[1]) > 7)) { ... 
which was probably in mind when the But with omitting the .0.0, we must also split at '-' and '+': String[] version_elements = System.getProperty("java.version").split("\\.|-|+"); if (Integer.parseInt(version_elements[0]) > 7) || Integer.parseInt(version_elements[1]) > 7)) { ... If you want to check for a version > 9.*.22 it's even more complicated: String[] version_elements = System.getProperty("java.version").split("+|-")[0].split("\\."); if (Integer.parseInt(version_elements[0]) > 9 || (Integer.parseInt(version_elements[0]) == 9 && version_elements.length >= 3 && Integer.parseInt(version_elements[2]) > 22)) { ... So we would appreciate if the JEP was enhanced to always guarantee three version numbers 'x.y.z'. Further, version number 9.0.0.1 breaks the jck test api/java_lang/System/index.html#GetProperty. It fails with: "getJavaSpecVersion0001: Failed. System property 'java.specification.version' does not corresponds to the format 'major.minor.micro'". Maybe a fix of this test is already worked on, we are using jck suite from 9.9.15. Best regards, Goetz. From christian.thalinger at oracle.com Tue Feb 9 22:54:28 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Tue, 9 Feb 2016 12:54:28 -1000 Subject: (XS) RFR: 8149427: Remove .class files from the hotspot repo .hgignore file In-Reply-To: <56B98F88.1080401@oracle.com> References: <56B97DB7.1080402@oracle.com> <56B98F88.1080401@oracle.com> Message-ID: <9E7B5E46-4FF0-4BB7-AC4A-E0F199B4061F@oracle.com> Ask Michael. > On Feb 8, 2016, at 9:04 PM, Mikael Vidstedt wrote: > > > Looks good.It would of course be great to understand why it was added to start with, but even so I don't think it should be there (and we should instead fix whatever caused it to be added). > > Cheers, > Mikael > > On 2016-02-08 21:48, David Holmes wrote: >> Bug: https://bugs.openjdk.java.net/browse/JDK-8149427 >> >> webrev: http://cr.openjdk.java.net/~dholmes/8149427/webrev/ >> >> JDK-6900757 added the following to .hgignore: >> >> +\.class$ >> >> but it is unclear why this was done. This setting can cause problems when jtreg testing leaves class files in unexpected places, and -stree JPRT submissions then fail testing in strange ways. "hg status" doesn't show these errant files because of the entry in .hgignore. >> >> I propose to remove the entry from the .hgignore file. 
>> >> Thanks, >> David >> >> >> patch: >> >> --- old/./.hgignore 2016-02-09 00:43:28.786882859 -0500 >> +++ new/./.hgignore 2016-02-09 00:43:27.254796576 -0500 >> @@ -10,7 +10,6 @@ >> .igv.log >> ^.hgtip >> .DS_Store >> -\.class$ >> ^\.mx.jvmci/env >> ^\.mx.jvmci/.*\.pyc >> ^\.mx.jvmci/eclipse-launches/.* >> >> --- >> > From mikael.vidstedt at oracle.com Wed Feb 10 03:42:03 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Tue, 9 Feb 2016 19:42:03 -0800 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56B7DDD8.2050701@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> <79CA457A-E4E3-4D49-B22A-C959C16DAC49@oracle.com> <5E9239AE-D8C9-4A98-9C46-9FE0F130A06C@oracle.com> <56B5207B.8010107@oracle.com> <56B7DDD8.2050701@oracle.com> Message-ID: <56BAB18B.3070707@oracle.com> Can I please get a quick review of these updated webrevs: hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05/hotspot/webrev/ jdk: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05/jdk/webrev/ incremental webrevs: hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05.incr/hotspot/webrev/ jdk: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05.incr/jdk/webrev/ Changes: * Added asserts in copy.cpp/conjoint_swap * Correctness: Moved offset sign checks to only be performed if corresponding base object is null, and added corresponding tests I'm about to make additional changes in this same area, so unless this last change is horribly broken I'm planning on pushing this and doing any additional cleanup in the upcoming change(s). Cheers, Mikael On 2016-02-07 16:14, David Holmes wrote: > On 6/02/2016 8:21 AM, Mikael Vidstedt wrote: >> >> I fully agree that moving the arguments checking up to Java makes more >> sense, and I've prepared new webrevs which do exactly that, including >> changes to address the other feedback from David, John and others: > > Shouldn't the lowest-level do_conjoint_swap routines at least check > preconditions with asserts to catch the cases where the calling Java > code has failed to do the right thing? The other Copy methods seem to > do this. > > David > ----- > >> hotspot: >> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04/hotspot/webrev/ >> >> >> jdk: >> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04/jdk/webrev/ >> >> Incremental webrevs for your convenience: >> >> hotspot: >> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04.incr/hotspot/webrev/ >> >> >> jdk: >> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04.incr/jdk/webrev/ >> >> >> >> >> I have done some benchmarking of this code and for large copies (16MB+) >> this outperforms the old Bits.c implementation by *30-100%* depending on >> platform and exact element sizes! For smaller copies the additional >> checks which are now performed hurt performance on client VMs (80-90% of >> old impl), but with the server VMs I see performance on par with, or in >> most cases 5-10% better than the old implementation. There's a >> potentially statistically significant regression of ~3-4% for >> elemSize=2, but for now I'm going to declare success. 
There's certainly >> room for further improvements here, but this should at least do for >> addressing the original problem. >> >> >> I filed https://bugs.openjdk.java.net/browse/JDK-8149159 for moving the >> checks for Unsafe.copyMemory to Java, and will work on that next. I also >> filed https://bugs.openjdk.java.net/browse/JDK-8149162 to cover the >> potential renaming of the Bits methods to have more informative names. >> Finally, I filed https://bugs.openjdk.java.net/browse/JDK-8149163 to >> look at improving the behavior of Unsafe.addressSize(), after having >> spent too much time trying to understand why the performance of the new >> U.copySwapMemory Java checks wasn't quite living up to my expectations >> (spoiler alert: Unsafe.addressSize() is not intrinsified, so will always >> result in a call into the VM/unsafe.cpp). >> >> >> Finally, I - too - would like to see the copy-swap logic moved into >> Java, and as I mentioned I played around with that first before I >> decided to do the native implementation to address the immediate >> problem. Looking forward to what you find Paul! >> >> Cheers, >> Mikael >> >> On 2016-02-05 05:00, Paul Sandoz wrote: >>> Hi, >>> >>> Nice use of C++ templates :-) >>> >>> Overall looks good. >>> >>> I too would prefer if we could move the argument checking out, perhaps >>> even to the point of requiring callers do that rather than providing >>> another method, for example for Buffer i think the arguments are known >>> to be valid? I think in either case it is important to improve the >>> documentation on the method stating the constraints on arguments, >>> atomicity guarantees etc. >>> >>> I have a hunch that for the particular case of copying-with-swap for >>> buffers i could get this to work work efficiently using Unsafe (three >>> separate methods for each unit type of 2, 4 and 8 bytes), since IIUC >>> the range is bounded to be less than Integer.MAX_VALUE so an int loop >>> rather than a long loop can be used and therefore safe points checks >>> will not be placed within the loop. >>> >>> However, i think what you have done is more generally applicable and >>> could be made intrinsic. It would be a nice at some future point if it >>> could be made a pure Java implementation and intrinsified where >>> appropriate. >>> >>> ? >>> >>> John, regarding array mismatch there were issues with the efficiency >>> of the unrolled loops with Unsafe access. (Since the loops were int >>> bases there were no issues with safe point checks.) Roland recently >>> fixed that so now code is generated that is competitive with direct >>> array accesses. We drop into the stub intrinsic and leverage 128bits >>> or 256bits where supported. Interestingly it seems the unrolled loop >>> using Unsafe is now slightly faster than the stub using 128bit >>> registers. I don?t know if that is due to unluckly alignment, and/or >>> the stub needs to do some manual unrolling. In terms of code-cache >>> efficiency the intrinsic is better. >>> >>> Paul. >>> >>> >>> >>> >>> >>>> On 4 Feb 2016, at 06:27, John Rose wrote: >>>> >>>> On Feb 2, 2016, at 11:25 AM, Mikael Vidstedt >>>> wrote: >>>>> Please review this change which introduces a Copy::conjoint_swap and >>>>> an Unsafe.copySwapMemory method to call it from Java, along with the >>>>> necessary changes to have java.nio.Bits call it instead of the >>>>> Bits.c code. 
>>>>> >>>>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/hotspot/webrev/ >>>>> >>>>> >>>>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/jdk/webrev/ >>>>> >>>>> >>>> This is very good. >>>> >>>> I have some nit-picks: >>>> >>>> These days, when we introduce a new intrinsic (@HSIntrCand), >>>> we write the argument checking code separately in a non-intrinsic >>>> bytecode method. In this case, we don't (yet) have an intrinsic >>>> binding for U.copy*, but we might in the future. (C intrinsifies >>>> memcpy, which is a precedent.) In any case, I would prefer >>>> if we could structure the argument checking code in a similar >>>> way, with Unsafe.java containing both copySwapMemory >>>> and a private copySwapMemory0. Then we can JIT-optimize >>>> the safety checks. >>>> >>>> You might as well extend the same treatment to the pre-existing >>>> copyMemory call. The most important check (and the only one >>>> in U.copyMemory) is to ensure that the size_t operand has not >>>> wrapped around from a Java negative value to a crazy-large >>>> size_t value. That's the low-hanging fruit. Checking the pointers >>>> (for null or oob) is more problematic, of course. Checking >>>> consistency >>>> around elemSize is cheap and easy, so I agree that the U.copySM >>>> should do that work also. Basically, Unsafe can do very basic >>>> checks if there is a tricky user model to enforce, but it mustn't >>>> "sign up" to guard the user against all errors. >>>> >>>> Rule of thumb: Unsafe calls don't throw NPEs, they just SEGV. >>>> And the rare bit that *does* throw (IAE usually) should be placed >>>> into Unsafe.java, not unsafe.cpp. (The best-practice rule for putting >>>> argument checking code outside of the intrinsic is a newer one, >>>> so Unsafe code might not always do this.) >>>> >>>> The comment "Generalizing it would be reasonable, but requires >>>> card marking" is bogus, since we never byte-swap managed pointers. >>>> >>>> The test logic will flow a little smoother if your GenericPointer guy, >>>> the onHeap version, stores the appropriate array base offset in his >>>> offset field. >>>> You won't have to mention p.isOnHeap nearly so much, and the code will >>>> set a slightly better example. >>>> >>>> The VM_ENTRY_BASE_FROM_LEAF macro is really cool. >>>> >>>> The C++ template code is cool also. It reminds me of the kind >>>> of work Gosling's "Ace" processor could do, but now it's mainstreamed >>>> for all to use in C++. We're going to get some of that goodness >>>> in Project Valhalla with specialization logic. >>>> >>>> I find it amazing that the right way to code this in C is to >>>> use memcpy for unaligned accesses and byte peek/poke >>>> into registers for byte-swapping operators. I'm glad we >>>> can write this code *once* for the JVM and JDK. >>>> >>>> Possible future work: If we can get a better handle on >>>> writing vectorizable loops from Java, including Unsafe-based >>>> ones, we can move some of the C code back up to Java. >>>> Perhaps U.copy* calls for very short lengths deserved to >>>> be broken out into small loops of U.get/put* (with alignment). >>>> I think you experimented with this, and there were problems >>>> with the JIT putting fail-safe memory barriers between >>>> U.get/put* calls. Paul's work on Array.mismatch ran into >>>> similar issues, with the right answer being to write manual >>>> vector code in assembly. >>>> >>>> Anyway, you can count me as a reviewer. >>>> >>>> Thanks, >>>> >>>> ? 
John >> From david.holmes at oracle.com Wed Feb 10 04:38:39 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 10 Feb 2016 14:38:39 +1000 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56BAB18B.3070707@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> <79CA457A-E4E3-4D49-B22A-C959C16DAC49@oracle.com> <5E9239AE-D8C9-4A98-9C46-9FE0F130A06C@oracle.com> <56B5207B.8010107@oracle.com> <56B7DDD8.2050701@oracle.com> <56BAB18B.3070707@oracle.com> Message-ID: <56BABECF.4050505@oracle.com> On 10/02/2016 1:42 PM, Mikael Vidstedt wrote: > > Can I please get a quick review of these updated webrevs: In terms of the incremental changes this looks fine. If you consider it all reviewed then nothing in the increments should change that. But looking at the JDK code I have some follow up suggestions for copySwapMemory: - document all parameters with @param and describe constraints on values/relationships - specify @throws for all IllegalArgumentException and NullPointerException conditions - add a descriptive error message when throwing IllegalArgumentException - not sure NullPointerException is correct, rather than IllegalArgumentException. for null base with zero offset cases Thanks, David > hotspot: > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05/hotspot/webrev/ > > jdk: > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05/jdk/webrev/ > > incremental webrevs: > > hotspot: > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05.incr/hotspot/webrev/ > > jdk: > http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05.incr/jdk/webrev/ > > > Changes: > > * Added asserts in copy.cpp/conjoint_swap > * Correctness: Moved offset sign checks to only be performed if > corresponding base object is null, and added corresponding tests > > I'm about to make additional changes in this same area, so unless this > last change is horribly broken I'm planning on pushing this and doing > any additional cleanup in the upcoming change(s). > > Cheers, > Mikael > > > On 2016-02-07 16:14, David Holmes wrote: >> On 6/02/2016 8:21 AM, Mikael Vidstedt wrote: >>> >>> I fully agree that moving the arguments checking up to Java makes more >>> sense, and I've prepared new webrevs which do exactly that, including >>> changes to address the other feedback from David, John and others: >> >> Shouldn't the lowest-level do_conjoint_swap routines at least check >> preconditions with asserts to catch the cases where the calling Java >> code has failed to do the right thing? The other Copy methods seem to >> do this. 
>> >> David >> ----- >> >>> hotspot: >>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04/hotspot/webrev/ >>> >>> >>> jdk: >>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04/jdk/webrev/ >>> >>> Incremental webrevs for your convenience: >>> >>> hotspot: >>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04.incr/hotspot/webrev/ >>> >>> >>> jdk: >>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.04.incr/jdk/webrev/ >>> >>> >>> >>> >>> I have done some benchmarking of this code and for large copies (16MB+) >>> this outperforms the old Bits.c implementation by *30-100%* depending on >>> platform and exact element sizes! For smaller copies the additional >>> checks which are now performed hurt performance on client VMs (80-90% of >>> old impl), but with the server VMs I see performance on par with, or in >>> most cases 5-10% better than the old implementation. There's a >>> potentially statistically significant regression of ~3-4% for >>> elemSize=2, but for now I'm going to declare success. There's certainly >>> room for further improvements here, but this should at least do for >>> addressing the original problem. >>> >>> >>> I filed https://bugs.openjdk.java.net/browse/JDK-8149159 for moving the >>> checks for Unsafe.copyMemory to Java, and will work on that next. I also >>> filed https://bugs.openjdk.java.net/browse/JDK-8149162 to cover the >>> potential renaming of the Bits methods to have more informative names. >>> Finally, I filed https://bugs.openjdk.java.net/browse/JDK-8149163 to >>> look at improving the behavior of Unsafe.addressSize(), after having >>> spent too much time trying to understand why the performance of the new >>> U.copySwapMemory Java checks wasn't quite living up to my expectations >>> (spoiler alert: Unsafe.addressSize() is not intrinsified, so will always >>> result in a call into the VM/unsafe.cpp). >>> >>> >>> Finally, I - too - would like to see the copy-swap logic moved into >>> Java, and as I mentioned I played around with that first before I >>> decided to do the native implementation to address the immediate >>> problem. Looking forward to what you find Paul! >>> >>> Cheers, >>> Mikael >>> >>> On 2016-02-05 05:00, Paul Sandoz wrote: >>>> Hi, >>>> >>>> Nice use of C++ templates :-) >>>> >>>> Overall looks good. >>>> >>>> I too would prefer if we could move the argument checking out, perhaps >>>> even to the point of requiring callers do that rather than providing >>>> another method, for example for Buffer i think the arguments are known >>>> to be valid? I think in either case it is important to improve the >>>> documentation on the method stating the constraints on arguments, >>>> atomicity guarantees etc. >>>> >>>> I have a hunch that for the particular case of copying-with-swap for >>>> buffers i could get this to work work efficiently using Unsafe (three >>>> separate methods for each unit type of 2, 4 and 8 bytes), since IIUC >>>> the range is bounded to be less than Integer.MAX_VALUE so an int loop >>>> rather than a long loop can be used and therefore safe points checks >>>> will not be placed within the loop. >>>> >>>> However, i think what you have done is more generally applicable and >>>> could be made intrinsic. It would be a nice at some future point if it >>>> could be made a pure Java implementation and intrinsified where >>>> appropriate. >>>> >>>> ? >>>> >>>> John, regarding array mismatch there were issues with the efficiency >>>> of the unrolled loops with Unsafe access. 
(Since the loops were int >>>> bases there were no issues with safe point checks.) Roland recently >>>> fixed that so now code is generated that is competitive with direct >>>> array accesses. We drop into the stub intrinsic and leverage 128bits >>>> or 256bits where supported. Interestingly it seems the unrolled loop >>>> using Unsafe is now slightly faster than the stub using 128bit >>>> registers. I don?t know if that is due to unluckly alignment, and/or >>>> the stub needs to do some manual unrolling. In terms of code-cache >>>> efficiency the intrinsic is better. >>>> >>>> Paul. >>>> >>>> >>>> >>>> >>>> >>>>> On 4 Feb 2016, at 06:27, John Rose wrote: >>>>> >>>>> On Feb 2, 2016, at 11:25 AM, Mikael Vidstedt >>>>> wrote: >>>>>> Please review this change which introduces a Copy::conjoint_swap and >>>>>> an Unsafe.copySwapMemory method to call it from Java, along with the >>>>>> necessary changes to have java.nio.Bits call it instead of the >>>>>> Bits.c code. >>>>>> >>>>>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/hotspot/webrev/ >>>>>> >>>>>> >>>>>> http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.03/jdk/webrev/ >>>>>> >>>>>> >>>>> This is very good. >>>>> >>>>> I have some nit-picks: >>>>> >>>>> These days, when we introduce a new intrinsic (@HSIntrCand), >>>>> we write the argument checking code separately in a non-intrinsic >>>>> bytecode method. In this case, we don't (yet) have an intrinsic >>>>> binding for U.copy*, but we might in the future. (C intrinsifies >>>>> memcpy, which is a precedent.) In any case, I would prefer >>>>> if we could structure the argument checking code in a similar >>>>> way, with Unsafe.java containing both copySwapMemory >>>>> and a private copySwapMemory0. Then we can JIT-optimize >>>>> the safety checks. >>>>> >>>>> You might as well extend the same treatment to the pre-existing >>>>> copyMemory call. The most important check (and the only one >>>>> in U.copyMemory) is to ensure that the size_t operand has not >>>>> wrapped around from a Java negative value to a crazy-large >>>>> size_t value. That's the low-hanging fruit. Checking the pointers >>>>> (for null or oob) is more problematic, of course. Checking >>>>> consistency >>>>> around elemSize is cheap and easy, so I agree that the U.copySM >>>>> should do that work also. Basically, Unsafe can do very basic >>>>> checks if there is a tricky user model to enforce, but it mustn't >>>>> "sign up" to guard the user against all errors. >>>>> >>>>> Rule of thumb: Unsafe calls don't throw NPEs, they just SEGV. >>>>> And the rare bit that *does* throw (IAE usually) should be placed >>>>> into Unsafe.java, not unsafe.cpp. (The best-practice rule for putting >>>>> argument checking code outside of the intrinsic is a newer one, >>>>> so Unsafe code might not always do this.) >>>>> >>>>> The comment "Generalizing it would be reasonable, but requires >>>>> card marking" is bogus, since we never byte-swap managed pointers. >>>>> >>>>> The test logic will flow a little smoother if your GenericPointer guy, >>>>> the onHeap version, stores the appropriate array base offset in his >>>>> offset field. >>>>> You won't have to mention p.isOnHeap nearly so much, and the code will >>>>> set a slightly better example. >>>>> >>>>> The VM_ENTRY_BASE_FROM_LEAF macro is really cool. >>>>> >>>>> The C++ template code is cool also. It reminds me of the kind >>>>> of work Gosling's "Ace" processor could do, but now it's mainstreamed >>>>> for all to use in C++. 
We're going to get some of that goodness >>>>> in Project Valhalla with specialization logic. >>>>> >>>>> I find it amazing that the right way to code this in C is to >>>>> use memcpy for unaligned accesses and byte peek/poke >>>>> into registers for byte-swapping operators. I'm glad we >>>>> can write this code *once* for the JVM and JDK. >>>>> >>>>> Possible future work: If we can get a better handle on >>>>> writing vectorizable loops from Java, including Unsafe-based >>>>> ones, we can move some of the C code back up to Java. >>>>> Perhaps U.copy* calls for very short lengths deserved to >>>>> be broken out into small loops of U.get/put* (with alignment). >>>>> I think you experimented with this, and there were problems >>>>> with the JIT putting fail-safe memory barriers between >>>>> U.get/put* calls. Paul's work on Array.mismatch ran into >>>>> similar issues, with the right answer being to write manual >>>>> vector code in assembly. >>>>> >>>>> Anyway, you can count me as a reviewer. >>>>> >>>>> Thanks, >>>>> >>>>> ? John >>> > From iris.clark at oracle.com Wed Feb 10 05:27:55 2016 From: iris.clark at oracle.com (Iris Clark) Date: Tue, 9 Feb 2016 21:27:55 -0800 (PST) Subject: Version special case '9' In-Reply-To: <4295855A5C1DE049A61835A1887419CC41F20DD2@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC41F20DD2@DEWDFEMB12A.global.corp.sap> Message-ID: Hi, Goetz. Sorry for the delayed response. I've been thinking about your situation. I don't think that we can add back the trailing zeros. The JEP inconsistency needs to be addressed. The intent for the "Dropping the initial 1 element..." section is intended to describe a comparison algorithm, not guarantee the presence of trailing zeros. This comparison has been implemented in jdk.Version.compareTo() which was pushed in this changeset (expected in jdk=9+105): http://hg.openjdk.java.net/jdk9/dev/jdk/rev/7adef1c3afd5 I haven't determined the feasibility of this quite yet, but I suspect that a reasonable solution to your version comparison problem for earlier releases such as 8u and 7u would be to add a similar implementation of jdk.Version.compareTo() to those releases. Given the format of our previous release strings, this wouldn't be a straight backport of the JDK 9 jdk.Version, it would have to be a subset. This would allow you to use the same methods on jdk.Version across a wider range of releases. Would that be helpful? I haven't looked closely at the code, but I wonder whether the jck test has found a bug in the java.specification.version implementation? Historically, the only time we increment that system property is as part of a JSR or a MR. I've filed the following bug to investigate: 8149519: Investigate implementation of java.specification.version https://bugs.openjdk.java.net/browse/JDK-8149519 Thanks, iris -----Original Message----- From: Lindenmaier, Goetz [mailto:goetz.lindenmaier at sap.com] Sent: Tuesday, January 26, 2016 8:13 AM To: hotspot-dev at openjdk.java.net; verona-dev at openjdk.java.net Subject: Version special case '9' Hi, We appreciate the new version scheme introduced with JEP 223. It simplifies the versioning and we agree that it is to it's best downward compatible. Unfortunately there is one exception to this. The initial version of a new major release skips the minor and security digits. This makes parsing the string unnecessarily complicated, because the format is not predictable. In this, the JEP also is inconsistent. 
In chapter "Dropping the initial 1 element from version numbers" it is argued that comparing "9.0.0" to "1.8.0" by digit will show that 9 is newer than 8. But there is no such version as 9.0.0. This is stated in chapter "Version numbers": "The version number does not include trailing zero elements; i.e., $SECURITY is omitted if it has the value zero, and $MINOR is omitted if both $MINOR and $SECURITY have the value zero." Our BussinessObjects applications parse the option string by Float.parseFloat(System.getProperty("java.version").substring(0,3)) This delivers 1.7 for Java 7, but currently crashes for jdk9, as it tries to parse "9-i" from 9-internal. With trailing .0.0, this would see 9.0 and could parse the Java version until Java 99. As a workaround for this, we are now configuring with --with-version-patch=1 which results in version string 9.0.0.1-internal. At another place, we use split() to analyse the version. if (Integer.parseInt(System.getProperty("java.version").split("\\.")[1]) > 7)) { ... If there are always 3 numbers, this could be fixed to: if (Integer.parseInt(System.getProperty("java.version").split("\\.")[0]) > 7) || Integer.parseInt(System.getProperty("java.version").split("\\.")[1]) > 7)) { ... which was probably in mind when the But with omitting the .0.0, we must also split at '-' and '+': String[] version_elements = System.getProperty("java.version").split("\\.|-|+"); if (Integer.parseInt(version_elements[0]) > 7) || Integer.parseInt(version_elements[1]) > 7)) { ... If you want to check for a version > 9.*.22 it's even more complicated: String[] version_elements = System.getProperty("java.version").split("+|-")[0].split("\\."); if (Integer.parseInt(version_elements[0]) > 9 || (Integer.parseInt(version_elements[0]) == 9 && version_elements.length >= 3 && Integer.parseInt(version_elements[2]) > 22)) { ... So we would appreciate if the JEP was enhanced to always guarantee three version numbers 'x.y.z'. Further, version number 9.0.0.1 breaks the jck test api/java_lang/System/index.html#GetProperty. It fails with: "getJavaSpecVersion0001: Failed. System property 'java.specification.version' does not corresponds to the format 'major.minor.micro'". Maybe a fix of this test is already worked on, we are using jck suite from 9.9.15. Best regards, Goetz. From goetz.lindenmaier at sap.com Wed Feb 10 06:10:21 2016 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Wed, 10 Feb 2016 06:10:21 +0000 Subject: Version special case '9' In-Reply-To: References: <4295855A5C1DE049A61835A1887419CC41F20DD2@DEWDFEMB12A.global.corp.sap> Message-ID: <57a697ee95a243d1aed272c540131600@DEWDFE13DE09.global.corp.sap> Hi Iris, thanks for your response! > such as 8u and 7u would be to add a similar implementation of > jdk.Version.compareTo() to those releases. Given the format of our Hmm, it would be great if old software would run out-of-the box with Jdk 9. If you add some utility classes, one still has to change Java code to use these new classes. It's a bit more convenient, but the code still has to be changed. Skipping the .0.0 in the version really breaks _all_ old code. So fixing that would be most helpful. And it would fix the inconsistency in the JEP text. > I don't think that we can add back the trailing zeros. Why not? Java 9 is not yet released, is it? When else would you fix something like that? Anyways, what is the reason for skipping the two zeroes? Thanks and best regards, Goetz. 
> -----Original Message----- > From: Iris Clark [mailto:iris.clark at oracle.com] > Sent: Mittwoch, 10. Februar 2016 06:28 > To: Lindenmaier, Goetz ; hotspot- > dev at openjdk.java.net; verona-dev at openjdk.java.net > Subject: RE: Version special case '9' > > Hi, Goetz. > > Sorry for the delayed response. I've been thinking about your situation. > > I don't think that we can add back the trailing zeros. The JEP inconsistency > needs to be addressed. The intent for the "Dropping the initial 1 element..." > section is intended to describe a comparison algorithm, not guarantee the > presence of trailing zeros. This comparison has been implemented in > jdk.Version.compareTo() which was pushed in this changeset (expected in > jdk=9+105): > > http://hg.openjdk.java.net/jdk9/dev/jdk/rev/7adef1c3afd5 > > I haven't determined the feasibility of this quite yet, but I suspect that a > reasonable solution to your version comparison problem for earlier releases > such as 8u and 7u would be to add a similar implementation of > jdk.Version.compareTo() to those releases. Given the format of our > previous release strings, this wouldn't be a straight backport of the JDK 9 > jdk.Version, it would have to be a subset. This would allow you to use the > same methods on jdk.Version across a wider range of releases. > > Would that be helpful? > > I haven't looked closely at the code, but I wonder whether the jck test has > found a bug in the java.specification.version implementation? Historically, > the only time we increment that system property is as part of a JSR or a MR. > I've filed the following bug to investigate: > > 8149519: Investigate implementation of java.specification.version > https://bugs.openjdk.java.net/browse/JDK-8149519 > > Thanks, > iris > > -----Original Message----- > From: Lindenmaier, Goetz [mailto:goetz.lindenmaier at sap.com] > Sent: Tuesday, January 26, 2016 8:13 AM > To: hotspot-dev at openjdk.java.net; verona-dev at openjdk.java.net > Subject: Version special case '9' > > Hi, > > We appreciate the new version scheme introduced with JEP 223. > It simplifies the versioning and we agree that it is to it's best downward > compatible. > > Unfortunately there is one exception to this. > The initial version of a new major release skips the minor and security digits. > This makes parsing the string unnecessarily complicated, because the format > is not predictable. > > In this, the JEP also is inconsistent. > In chapter "Dropping the initial 1 element from version numbers" > it is argued that comparing "9.0.0" to "1.8.0" by digit will show that > 9 is newer than 8. But there is no such version as 9.0.0. > > This is stated in chapter "Version numbers": "The version number does not > include trailing zero elements; i.e., $SECURITY is omitted if it has the value > zero, and $MINOR is omitted if both $MINOR and $SECURITY have the value > zero." > > Our BussinessObjects applications parse the option string by > Float.parseFloat(System.getProperty("java.version").substring(0,3)) > This delivers 1.7 for Java 7, but currently crashes for jdk9, as it tries to parse > "9-i" from 9-internal. With trailing .0.0, this would see 9.0 and could parse the > Java version until Java 99. > > As a workaround for this, we are now configuring with --with-version- > patch=1 which results in version string 9.0.0.1-internal. > > At another place, we use split() to analyse the version. > > if (Integer.parseInt(System.getProperty("java.version").split("\\.")[1]) > 7)) { > ... 
> > If there are always 3 numbers, this could be fixed to: > > if (Integer.parseInt(System.getProperty("java.version").split("\\.")[0]) > 7) > || > Integer.parseInt(System.getProperty("java.version").split("\\.")[1]) > 7)) { > ... > > which was probably in mind when the > > But with omitting the .0.0, we must also split at '-' and '+': > > String[] version_elements = System.getProperty("java.version").split("\\.|- > |+"); > if (Integer.parseInt(version_elements[0]) > 7) || > Integer.parseInt(version_elements[1]) > 7)) { ... > > If you want to check for a version > 9.*.22 it's even more complicated: > > String[] version_elements = System.getProperty("java.version").split("+|- > ")[0].split("\\."); > if (Integer.parseInt(version_elements[0]) > 9 || > (Integer.parseInt(version_elements[0]) == 9 && > version_elements.length >= 3 && > Integer.parseInt(version_elements[2]) > 22)) { ... > > So we would appreciate if the JEP was enhanced to always guarantee three > version numbers 'x.y.z'. > > Further, version number 9.0.0.1 breaks the jck test > api/java_lang/System/index.html#GetProperty. > It fails with: "getJavaSpecVersion0001: Failed. System property > 'java.specification.version' does not corresponds to the format > 'major.minor.micro'". > Maybe a fix of this test is already worked on, we are using jck suite from > 9.9.15. > > Best regards, > Goetz. From ktruong.nguyen at gmail.com Wed Feb 10 09:19:17 2016 From: ktruong.nguyen at gmail.com (Khanh Nguyen) Date: Wed, 10 Feb 2016 01:19:17 -0800 Subject: Add instrumentation in the TemplateInterpreter Message-ID: Hello, I want to add instrumentation to monitor all reads and writes in the TemplateInterpreter, I think I got the correct place for it in /cpu/x86/vm/templateTable_x86_64.cpp. Can someone please tell me if I'm doing it right? For writes: static void do_oop_store(InterpreterMacroAssembler* _masm, Address obj, Register val, BarrierSet::Name barrier, bool precise) { [...] case BarrierSet::CardTableModRef: case BarrierSet::CardTableExtension: { if (val == noreg) { __ store_heap_oop_null(obj); } else { __ store_heap_oop(obj, val); /*mycodeA*/ __ movptr(c_rarg1, obj.base()); // save this value otherwise it will be changed? // flatten object address if needed if (!precise || (obj.index() == noreg && obj.disp() == 0)) { __ store_check(obj.base()); /*mycodeB*/ __ call_VM(noreg, //void CAST_FROM_FN_PTR(address, InterpreterRuntime::write_helper), c_rarg1, // obj c_rarg1, // field address because store check is called on field address val); } else { __ leaq(rdx, obj); __ store_check(rdx); /*mycodeC*/ __ call_VM(noreg, //void CAST_FROM_FN_PTR(address, InterpreterRuntime::write_helper), c_rarg1, // obj rdx, // field address, because store check is called on field address val); } } break; For reads: case Bytecodes::_fast_agetfield: __ load_heap_oop(rax, field); /*mycodeD*/ __ call_VM(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::read_barrier_helper), rax); __ verify_oop(rax); break; My questions are: 1) I thought this represents a putfield a.f=b where a.f is represented by the parameter obj of type Address. b is obvious the parameter val of type Register. Especially in obj there are fields: base, index and disp. But as I run this, looks like obj is actually the field address. 
(the case mycodeB) I haven't found a test case that can trigger the case mycodeC to see the behavior (i.e., rdx might get destroyed and I got random value back or c_rarg1 is the obj address and rdx is field address) 2) Before this, I tried to insert the same __ call_VM in fast_aputfield before do_oop_store but it results in JVM crash. I don't understand the reason why. What I did in the call is just print the parameters. I did see the values printed (only the 1st time it goes to the method) but then the VM crashed. I thought __ call_VM will preserve all registers's value and restore properly when comes back. My instrumentation has no side effect, I just observe and record the values (actually just printing the values to test). 3) Is it strictly required to have the line /*mycodeA*/ I tried to, in mycodeB line, pass obj.base() twice and it got build errors for "smashed args"? I greatly appreciate your time, Best, Khanh Nguyen From paul.sandoz at oracle.com Wed Feb 10 10:03:40 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Wed, 10 Feb 2016 11:03:40 +0100 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: <56BAB18B.3070707@oracle.com> References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> <79CA457A-E4E3-4D49-B22A-C959C16DAC49@oracle.com> <5E9239AE-D8C9-4A98-9C46-9FE0F130A06C@oracle.com> <56B5207B.8010107@oracle.com> <56B7DDD8.2050701@oracle.com> <56BAB18B.3070707@oracle.com> Message-ID: > On 10 Feb 2016, at 04:42, Mikael Vidstedt wrote: > > > Can I please get a quick review of these updated webrevs: > > hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05/hotspot/webrev/ > jdk: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05/jdk/webrev/ > > incremental webrevs: > > hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05.incr/hotspot/webrev/ > jdk: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05.incr/jdk/webrev/ > +1 I agree with David on the JavaDoc, but that could be followed up with any future changes, including potentially the removal of wrapping methods in Bits.java, since buffers any way use Unsafe the wrappers now appear to offer little value. Paul. > Changes: > > * Added asserts in copy.cpp/conjoint_swap > * Correctness: Moved offset sign checks to only be performed if corresponding base object is null, and added corresponding tests > > I'm about to make additional changes in this same area, so unless this last change is horribly broken I'm planning on pushing this and doing any additional cleanup in the upcoming change(s). > > Cheers, > Mikael > From marcus.larsson at oracle.com Wed Feb 10 13:49:04 2016 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Wed, 10 Feb 2016 14:49:04 +0100 Subject: RFR: 8145934: Make ttyLocker equivalent for Unified Logging framework Message-ID: <56BB3FD0.5000104@oracle.com> Hi, Please review the following patch adding support for non-interleavable multi-line log messages in UL. Summary: This patch adds a LogMessage class that represents a multiline log message, buffering lines that belong to the same message. The class has a similar interface to the Log class, with printf-like methods for each log level. 
These methods will append the log message with additional lines. Once all filled in, the log message should be sent to the the appropriate log(s) using Log<>::write(). All lines in the LogMessage are written in a way that prevents interleaving by other messages. Lines are printed in the same order they were added to the message (regardless of level). Apart from the level, decorators will be identical for lines in the same LogMessage, and all lines will be decorated. Webrev: http://cr.openjdk.java.net/~mlarsson/8145934/webrev.00/ Issue: https://bugs.openjdk.java.net/browse/JDK-8145934 Testing: Included tests through JPRT Thanks, Marcus From vladimir.x.ivanov at oracle.com Wed Feb 10 18:13:53 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Wed, 10 Feb 2016 21:13:53 +0300 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list Message-ID: <56BB7DE1.4020002@oracle.com> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.00 https://bugs.openjdk.java.net/browse/JDK-8138922 StubCodeDesc keeps a list of all descriptors rooted at StubCodeDesc::_list by placing newly instantiated objects there at the end of the constructor. Unfortunately, it doesn't guarantee that only fully-constructed objects are visible, because compiler (or HW) can reorder the stores. Since method handle adapters are generated on demand when j.l.i framework is initialized, it's possible there are readers iterating over the list at the moment. It's not a problem per se until everybody sees a consistent view of the list. The fix is to insert a StoreStore barrier before registering an object on the list. (I also considered moving MH adapter allocation to VM initialization phase before anybody reads the list, but it's non-trivial since MethodHandles::generate_adapters() has a number of implicit dependencies.) Testing: manual (verified StubCodeMark assembly), JPRT Thanks! Best regards, Vladimir Ivanov From rachel.protacio at oracle.com Wed Feb 10 18:25:49 2016 From: rachel.protacio at oracle.com (Rachel Protacio) Date: Wed, 10 Feb 2016 13:25:49 -0500 Subject: RFR: 8145934: Make ttyLocker equivalent for Unified Logging framework In-Reply-To: <56BB3FD0.5000104@oracle.com> References: <56BB3FD0.5000104@oracle.com> Message-ID: <56BB80AD.9030701@oracle.com> Hi, Thank you for implementing this - it will be very useful for some of our logging. The code looks good to me! Thanks also for the file_contains_substring() update in log.cpp :) Rachel On 2/10/2016 8:49 AM, Marcus Larsson wrote: > Hi, > > Please review the following patch adding support for non-interleavable > multi-line log messages in UL. > > Summary: > This patch adds a LogMessage class that represents a multiline log > message, buffering lines that belong to the same message. The class > has a similar interface to the Log class, with printf-like methods for > each log level. These methods will append the log message with > additional lines. Once all filled in, the log message should be sent > to the the appropriate log(s) using Log<>::write(). All lines in the > LogMessage are written in a way that prevents interleaving by other > messages. Lines are printed in the same order they were added to the > message (regardless of level). Apart from the level, decorators will > be identical for lines in the same LogMessage, and all lines will be > decorated. 
> > Webrev: > http://cr.openjdk.java.net/~mlarsson/8145934/webrev.00/ > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8145934 > > Testing: > Included tests through JPRT > > Thanks, > Marcus From vladimir.kozlov at oracle.com Wed Feb 10 18:31:06 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 10 Feb 2016 10:31:06 -0800 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BB7DE1.4020002@oracle.com> References: <56BB7DE1.4020002@oracle.com> Message-ID: <56BB81EA.70209@oracle.com> Static ++_count could be also problem since you incremented it before adding 'this' to list. Can you look? Should we go though all our static fields and see if they have the same concurrent access problem? Thanks, Vladimir K On 2/10/16 10:13 AM, Vladimir Ivanov wrote: > http://cr.openjdk.java.net/~vlivanov/8138922/webrev.00 > https://bugs.openjdk.java.net/browse/JDK-8138922 > > StubCodeDesc keeps a list of all descriptors rooted at StubCodeDesc::_list by placing newly instantiated objects there > at the end of the constructor. Unfortunately, it doesn't guarantee that only fully-constructed objects are visible, > because compiler (or HW) can reorder the stores. > > Since method handle adapters are generated on demand when j.l.i framework is initialized, it's possible there are > readers iterating over the list at the moment. It's not a problem per se until everybody sees a consistent view of the > list. > > The fix is to insert a StoreStore barrier before registering an object on the list. > > (I also considered moving MH adapter allocation to VM initialization phase before anybody reads the list, but it's > non-trivial since MethodHandles::generate_adapters() has a number of implicit dependencies.) > > Testing: manual (verified StubCodeMark assembly), JPRT > > Thanks! 
> > Best regards, > Vladimir Ivanov From mikael.vidstedt at oracle.com Wed Feb 10 19:00:26 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Wed, 10 Feb 2016 11:00:26 -0800 Subject: RFR JDK-8141491: Unaligned memory access in Bits.c In-Reply-To: References: <56560F32.6040300@oracle.com> <565628F5.2080801@oracle.com> <56A6FCA5.7040808@oracle.com> <56A736B7.4020601@redhat.com> <8D4B5D84-FF89-461E-93B2-7FB2E56F9741@oracle.com> <56A7BF7B.5020006@redhat.com> <17C15340-167F-4B33-A6CF-8510EAC2491C@oracle.com> <56A7C410.4040301@redhat.com> <713CDD14-7C04-4B33-AC48-6A5474351C97@oracle.com> <56A96B55.7050301@oracle.com> <56B102AD.7020800@oracle.com> <79CA457A-E4E3-4D49-B22A-C959C16DAC49@oracle.com> <5E9239AE-D8C9-4A98-9C46-9FE0F130A06C@oracle.com> <56B5207B.8010107@oracle.com> <56B7DDD8.2050701@oracle.com> <56BAB18B.3070707@oracle.com> Message-ID: <56BB88C9.5010502@oracle.com> On 2016-02-10 02:03, Paul Sandoz wrote: >> On 10 Feb 2016, at 04:42, Mikael Vidstedt wrote: >> >> >> Can I please get a quick review of these updated webrevs: >> >> hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05/hotspot/webrev/ >> jdk: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05/jdk/webrev/ >> >> incremental webrevs: >> >> hotspot: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05.incr/hotspot/webrev/ >> jdk: http://cr.openjdk.java.net/~mikael/webrevs/8141491/webrev.05.incr/jdk/webrev/ >> > +1 > > I agree with David on the JavaDoc, but that could be followed up with any future changes, including potentially the removal of wrapping methods in Bits.java, since buffers any way use Unsafe the wrappers now appear to offer little value. I'm planning on cleaning up Unsafe.java pretty significantly in a separate change, so I will add the relevant javadocs as part of that. I'll also file a separate enhancement to remove the Bits wrappers. Thanks, Mikael > > Paul. > > >> Changes: >> >> * Added asserts in copy.cpp/conjoint_swap >> * Correctness: Moved offset sign checks to only be performed if corresponding base object is null, and added corresponding tests >> >> I'm about to make additional changes in this same area, so unless this last change is horribly broken I'm planning on pushing this and doing any additional cleanup in the upcoming change(s). >> >> Cheers, >> Mikael >> From tom.benson at oracle.com Wed Feb 10 19:27:57 2016 From: tom.benson at oracle.com (Tom Benson) Date: Wed, 10 Feb 2016 14:27:57 -0500 Subject: [9] RFR (S) 8146436: Add -XX:+UseAggressiveHeapShrink option In-Reply-To: <56B39A43.5070409@oracle.com> References: <56B39A43.5070409@oracle.com> Message-ID: <56BB8F3D.3070502@oracle.com> Hi Chris, My apologies if I missed the discussion somewhere, but is there a specific rationale for adding this that can be mentioned in the bug report? I can imagine scenarios where it would be useful, but maybe the real need can be called out. I think it might be clearer if the new code in cardGeneration was moved down to where the values are used. IE, I would leave the inits of current_shrink_factor and _shrink_factor as they were at lines 190/191. Then down at 270, just don't divide by the shrink factor if UseAggressiveHeapShrink is set, and the updates to shrink factor can be in the same conditional. This has the advantage that you can fix the comment just above it to match this special case. Do you think that would work? It looks like the ending "\" at line 3330 in globals.hpp isn't aligned, and the copyright in cardGeneration.cpp needs to be updated. 
One other nit, which you can ignore unless someone comes forward to agree with me 8^) , is that I'd prefer the name ShrinkHeapAggressively instead. Maybe this was already debated elsewhere.... Tom On 2/4/2016 1:36 PM, Chris Plummer wrote: > Hello, > > Please review the following for adding the -XX UseAggressiveHeapShrink > option. When turned on, it tells the GC to reduce the heap size to the > new target size immediately after a full GC rather than doing it > progressively over 4 GCs. > > Webrev: http://cr.openjdk.java.net/~cjplummer/8146436/webrev.02/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8146436 > > Testing: > -JPRT with '-testset hotspot' > -JPRT with '-testset hotspot -vmflags "-XX:+UseAggressiveHeapShrink"' > -added new TestMaxMinHeapFreeRatioFlags.java test > > thanks, > > Chris From jesper.wilhelmsson at oracle.com Wed Feb 10 19:47:34 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Wed, 10 Feb 2016 20:47:34 +0100 Subject: RFR: 8149591 - Prepare hotspot for GTest Message-ID: <56BB93D6.3000905@oracle.com> Hi, Please review this change to prepare the Hotspot code for the Google unit test framework. From the RFE: A few changes are needed in the hotspot code to start using the Google Test framework. 1. The new() operator as defined in allocation.cpp can not be used together with GTest. This needs to be moved to a separate file so that we can avoid compiling it when building the GTest enabled JVM. 2. In management.cpp there is a local variable called err_msg. This variable is shadowing a global variable in debug.hpp. In the GTest work the global err_msg variable is used in the vmassert macro and this creates a conflict with the local variable in management.cpp. 3. If SuppressFatalErrorMessage is set ALL error messages should be suppressed, even the ones in error_is_suppressed() in debug.cpp. This is what is done by this change. RFE: https://bugs.openjdk.java.net/browse/JDK-8149591 Webrev: http://cr.openjdk.java.net/~jwilhelm/8149591/webrev.00/index.html Thanks, /Jesper From tom.benson at oracle.com Wed Feb 10 20:16:26 2016 From: tom.benson at oracle.com (Tom Benson) Date: Wed, 10 Feb 2016 15:16:26 -0500 Subject: [9] RFR (S) 8146436: Add -XX:+UseAggressiveHeapShrink option In-Reply-To: <56BB9831.5030504@oracle.com> References: <56B39A43.5070409@oracle.com> <56BB8F3D.3070502@oracle.com> <56BB9831.5030504@oracle.com> Message-ID: <56BB9A9A.7060901@oracle.com> Hi Chris, OK, that all sounds good. >> I can change it, although that will mean filing a new CCC. Ah, I'd forgotten about that. Not worth it, unless there's a landslide of support for a different name. Tnx, Tom On 2/10/2016 3:06 PM, Chris Plummer wrote: > Hi Tom, > > Thanks for having a look. Comments inline below: > > On 2/10/16 11:27 AM, Tom Benson wrote: >> Hi Chris, >> My apologies if I missed the discussion somewhere, but is there a >> specific rationale for adding this that can be mentioned in the bug >> report? I can imagine scenarios where it would be useful, but maybe >> the real need can be called out. > In general, it is for customers that want to minimize the amount of > memory used by the java heap, and are willing to sacrifice some > performance (induce more frequent GCs) to save that memory. When heap > usage fluctuates greatly, the GC will tend to hold on to that memory > longer than needed due to the the current algorithm which requires 4 > full GCs before MaxHeapFreeRatio is fully honored. If this is what you > are looking for, I can add it to the CR. 
>> >> I think it might be clearer if the new code in cardGeneration was >> moved down to where the values are used. IE, I would leave the inits >> of current_shrink_factor and _shrink_factor as they were at lines >> 190/191. Then down at 270, just don't divide by the shrink factor >> if UseAggressiveHeapShrink is set, and the updates to shrink factor >> can be in the same conditional. This has the advantage that you can >> fix the comment just above it to match this special case. Do you >> think that would work? > Yes, that makes sense. I'll get started on it. I have a vacation > coming up shortly, so what I'll get a new webrev out soon, but > probably will need to wait until after my trip to do more thorough > testing and push the changes. >> >> It looks like the ending "\" at line 3330 in globals.hpp isn't >> aligned, and the copyright in cardGeneration.cpp needs to be updated. > Ok. >> >> One other nit, which you can ignore unless someone comes forward to >> agree with me 8^) , is that I'd prefer the name >> ShrinkHeapAggressively instead. Maybe this was already debated >> elsewhere.... > The name choice hasn't really been discussed or questioned. It was > what was suggested to me, so I stuck with it (The initial work was > done by someone else. I'm just getting it integrated into 9). I can > change it, although that will mean filing a new CCC. > > thanks, > > Chris >> Tom >> >> On 2/4/2016 1:36 PM, Chris Plummer wrote: >>> Hello, >>> >>> Please review the following for adding the -XX >>> UseAggressiveHeapShrink option. When turned on, it tells the GC to >>> reduce the heap size to the new target size immediately after a full >>> GC rather than doing it progressively over 4 GCs. >>> >>> Webrev: http://cr.openjdk.java.net/~cjplummer/8146436/webrev.02/ >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8146436 >>> >>> Testing: >>> -JPRT with '-testset hotspot' >>> -JPRT with '-testset hotspot -vmflags "-XX:+UseAggressiveHeapShrink"' >>> -added new TestMaxMinHeapFreeRatioFlags.java test >>> >>> thanks, >>> >>> Chris >> > From george.triantafillou at oracle.com Wed Feb 10 20:23:42 2016 From: george.triantafillou at oracle.com (George Triantafillou) Date: Wed, 10 Feb 2016 15:23:42 -0500 Subject: RFR: 8149591 - Prepare hotspot for GTest In-Reply-To: <56BB93D6.3000905@oracle.com> References: <56BB93D6.3000905@oracle.com> Message-ID: <56BB9C4E.8020608@oracle.com> Hi Jesper, Your changes look good. -George On 2/10/2016 2:47 PM, Jesper Wilhelmsson wrote: > Hi, > > Please review this change to prepare the Hotspot code for the Google > unit test framework. From the RFE: > > A few changes are needed in the hotspot code to start using the Google > Test framework. > > 1. The new() operator as defined in allocation.cpp can not be used > together with GTest. This needs to be moved to a separate file so that > we can avoid compiling it when building the GTest enabled JVM. > > 2. In management.cpp there is a local variable called err_msg. This > variable is shadowing a global variable in debug.hpp. In the GTest > work the global err_msg variable is used in the vmassert macro and > this creates a conflict with the local variable in management.cpp. > > 3. If SuppressFatalErrorMessage is set ALL error messages should be > suppressed, even the ones in error_is_suppressed() in debug.cpp. > > This is what is done by this change. 
> > RFE: https://bugs.openjdk.java.net/browse/JDK-8149591 > Webrev: http://cr.openjdk.java.net/~jwilhelm/8149591/webrev.00/index.html > > Thanks, > /Jesper From jesper.wilhelmsson at oracle.com Wed Feb 10 20:31:03 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Wed, 10 Feb 2016 21:31:03 +0100 Subject: RFR: 8149594 - Clean up Hotspot makefiles Message-ID: <56BB9E07.7030208@oracle.com> Hi, Please review this cleanup of the Hotspot makefiles. Since I have been spending some time in the makefiles lately there were a few random cleanups that I couldn't stop myself from doing. Most of these are made to make the linux and bsd makefiles more alike. This has helped a lot when porting the framework to the different platforms. There are a couple of preparing alignment changes that I included in this cleanup to make the Google test patch easier to review later. There are also a couple of "real" changes: * In make/bsd/makefiles/buildtree.make we set up OS_VENDOR with the motivation that we don't include defs.make. Three lines below we include defs.make. * In make/bsd/makefiles/buildtree.make the 'install' target depends on 'install_jsigs'. There is no rule called 'install_jsigs', it is called 'install_jsig'. Another difference that I find interesting but that I have not changed in this patch (I can do that if requested) is that in the bsd version of fastdebug.make VERSION is set to "fastdebug" but in the linux version it is set to "optimized". Given the name of the makefile fastdebug seems more correct, but whichever is the correct value, shouldn't they be the same on linux and bsd? https://bugs.openjdk.java.net/browse/JDK-8149594 http://cr.openjdk.java.net/~jwilhelm/8149594/webrev.00/ Thanks, /Jesper From jesper.wilhelmsson at oracle.com Wed Feb 10 20:40:38 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Wed, 10 Feb 2016 21:40:38 +0100 Subject: RFR: 8149591 - Prepare hotspot for GTest In-Reply-To: <56BB9C4E.8020608@oracle.com> References: <56BB93D6.3000905@oracle.com> <56BB9C4E.8020608@oracle.com> Message-ID: <56BBA046.1080105@oracle.com> Thanks George! /Jesper Den 10/2/16 kl. 21:23, skrev George Triantafillou: > Hi Jesper, > > Your changes look good. > > -George > > On 2/10/2016 2:47 PM, Jesper Wilhelmsson wrote: >> Hi, >> >> Please review this change to prepare the Hotspot code for the Google unit test >> framework. From the RFE: >> >> A few changes are needed in the hotspot code to start using the Google Test >> framework. >> >> 1. The new() operator as defined in allocation.cpp can not be used together >> with GTest. This needs to be moved to a separate file so that we can avoid >> compiling it when building the GTest enabled JVM. >> >> 2. In management.cpp there is a local variable called err_msg. This variable >> is shadowing a global variable in debug.hpp. In the GTest work the global >> err_msg variable is used in the vmassert macro and this creates a conflict >> with the local variable in management.cpp. >> >> 3. If SuppressFatalErrorMessage is set ALL error messages should be >> suppressed, even the ones in error_is_suppressed() in debug.cpp. >> >> This is what is done by this change. 
>> >> RFE: https://bugs.openjdk.java.net/browse/JDK-8149591 >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8149591/webrev.00/index.html >> >> Thanks, >> /Jesper > From tom.benson at oracle.com Wed Feb 10 20:50:04 2016 From: tom.benson at oracle.com (Tom Benson) Date: Wed, 10 Feb 2016 15:50:04 -0500 Subject: [9] RFR (S) 8146436: Add -XX:+UseAggressiveHeapShrink option In-Reply-To: <56BB9FEA.5070506@oracle.com> References: <56B39A43.5070409@oracle.com> <56BB8F3D.3070502@oracle.com> <56BB9831.5030504@oracle.com> <56BB9A9A.7060901@oracle.com> <56BB9FEA.5070506@oracle.com> Message-ID: <56BBA27C.2080106@oracle.com> Hi Chris, I agree, it makes sense to move the _shrink_factor adjustments inside the conditional. You may as well also add the missing word (...we don't want TO shrink...) in the comment while you're there. I've heard from another GC team person that there might be more feedback on the name coming, after some discussion. Not sure if it will constitute the 'landslide' I mentioned. 8^) Tom On 2/10/2016 3:39 PM, Chris Plummer wrote: > Hi Tom, > > if (!UseAggressiveHeapShrink) { > // If UseAggressiveHeapShrink is false (the default), > // we don't want shrink all the way back to initSize if people > call > // System.gc(), because some programs do that between "phases" > and then > // we'd just have to grow the heap up again for the next > phase. So we > // damp the shrinking: 0% on the first call, 10% on the second > call, 40% > // on the third call, and 100% by the fourth call. But if we > recompute > // size without shrinking, it goes back to 0%. > shrink_bytes = shrink_bytes / 100 * current_shrink_factor; > } > assert(shrink_bytes <= max_shrink_bytes, "invalid shrink size"); > if (current_shrink_factor == 0) { > _shrink_factor = 10; > } else { > _shrink_factor = MIN2(current_shrink_factor * 4, (size_t) 100); > } > > I got rid of the changes at the start of the method, and added the > !UseAggressiveHeapShrink check and the comment, so the first 2 lines > above and the closing right brace are now the only change in the file, > other than the copyright date. If you want I could also move the > _shrink_factor adjustment into this block since the value of > _shrink_factor becomes irrelevant if UseAggressiveHeapShrink is true. > The assert should remain outside the block. > > cheers, > > Chris > > On 2/10/16 12:16 PM, Tom Benson wrote: >> Hi Chris, >> OK, that all sounds good. >> >> >> I can change it, although that will mean filing a new CCC. >> Ah, I'd forgotten about that. Not worth it, unless there's a >> landslide of support for a different name. >> >> Tnx, >> Tom >> >> On 2/10/2016 3:06 PM, Chris Plummer wrote: >>> Hi Tom, >>> >>> Thanks for having a look. Comments inline below: >>> >>> On 2/10/16 11:27 AM, Tom Benson wrote: >>>> Hi Chris, >>>> My apologies if I missed the discussion somewhere, but is there a >>>> specific rationale for adding this that can be mentioned in the bug >>>> report? I can imagine scenarios where it would be useful, but >>>> maybe the real need can be called out. >>> In general, it is for customers that want to minimize the amount of >>> memory used by the java heap, and are willing to sacrifice some >>> performance (induce more frequent GCs) to save that memory. When >>> heap usage fluctuates greatly, the GC will tend to hold on to that >>> memory longer than needed due to the the current algorithm which >>> requires 4 full GCs before MaxHeapFreeRatio is fully honored. If >>> this is what you are looking for, I can add it to the CR. 
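To put numbers on the damping in the comment quoted above (0% on the first
call, 10% on the second, 40% on the third, 100% by the fourth): if a full GC
computes, say, 400 MB of shrinkable space, the default policy gives back
nothing after the first System.gc(), roughly 40 MB after the second, 160 MB
after the third, and the full 400 MB only on the fourth consecutive shrink
request. With the flag under review the whole 400 MB would be released
immediately after the first full GC; the 400 MB figure is purely
illustrative.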
>>>> >>>> I think it might be clearer if the new code in cardGeneration was >>>> moved down to where the values are used. IE, I would leave the >>>> inits of current_shrink_factor and _shrink_factor as they were at >>>> lines 190/191. Then down at 270, just don't divide by the shrink >>>> factor if UseAggressiveHeapShrink is set, and the updates to shrink >>>> factor can be in the same conditional. This has the advantage that >>>> you can fix the comment just above it to match this special case. >>>> Do you think that would work? >>> Yes, that makes sense. I'll get started on it. I have a vacation >>> coming up shortly, so what I'll get a new webrev out soon, but >>> probably will need to wait until after my trip to do more thorough >>> testing and push the changes. >>>> >>>> It looks like the ending "\" at line 3330 in globals.hpp isn't >>>> aligned, and the copyright in cardGeneration.cpp needs to be updated. >>> Ok. >>>> >>>> One other nit, which you can ignore unless someone comes forward to >>>> agree with me 8^) , is that I'd prefer the name >>>> ShrinkHeapAggressively instead. Maybe this was already debated >>>> elsewhere.... >>> The name choice hasn't really been discussed or questioned. It was >>> what was suggested to me, so I stuck with it (The initial work was >>> done by someone else. I'm just getting it integrated into 9). I can >>> change it, although that will mean filing a new CCC. >>> >>> thanks, >>> >>> Chris >>>> Tom >>>> >>>> On 2/4/2016 1:36 PM, Chris Plummer wrote: >>>>> Hello, >>>>> >>>>> Please review the following for adding the -XX >>>>> UseAggressiveHeapShrink option. When turned on, it tells the GC to >>>>> reduce the heap size to the new target size immediately after a >>>>> full GC rather than doing it progressively over 4 GCs. >>>>> >>>>> Webrev: http://cr.openjdk.java.net/~cjplummer/8146436/webrev.02/ >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8146436 >>>>> >>>>> Testing: >>>>> -JPRT with '-testset hotspot' >>>>> -JPRT with '-testset hotspot -vmflags >>>>> "-XX:+UseAggressiveHeapShrink"' >>>>> -added new TestMaxMinHeapFreeRatioFlags.java test >>>>> >>>>> thanks, >>>>> >>>>> Chris >>>> >>> >> > From jesper.wilhelmsson at oracle.com Wed Feb 10 22:10:51 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Wed, 10 Feb 2016 23:10:51 +0100 Subject: RFR: 8149594 - Clean up Hotspot makefiles In-Reply-To: <56BB9E07.7030208@oracle.com> References: <56BB9E07.7030208@oracle.com> Message-ID: <56BBB56B.5020506@oracle.com> Sending again to include the build-dev list. /Jesper Den 10/2/16 kl. 21:31, skrev Jesper Wilhelmsson: > Hi, > > Please review this cleanup of the Hotspot makefiles. > > Since I have been spending some time in the makefiles lately there were a few > random cleanups that I couldn't stop myself from doing. Most of these are made > to make the linux and bsd makefiles more alike. This has helped a lot when > porting the framework to the different platforms. > > There are a couple of preparing alignment changes that I included in this > cleanup to make the Google test patch easier to review later. > > There are also a couple of "real" changes: > > * In make/bsd/makefiles/buildtree.make we set up OS_VENDOR with the motivation > that we don't include defs.make. Three lines below we include defs.make. > > * In make/bsd/makefiles/buildtree.make the 'install' target depends on > 'install_jsigs'. There is no rule called 'install_jsigs', it is called > 'install_jsig'. 
> > > Another difference that I find interesting but that I have not changed in this > patch (I can do that if requested) is that in the bsd version of fastdebug.make > VERSION is set to "fastdebug" but in the linux version it is set to "optimized". > Given the name of the makefile fastdebug seems more correct, but whichever is > the correct value, shouldn't they be the same on linux and bsd? > > > https://bugs.openjdk.java.net/browse/JDK-8149594 > http://cr.openjdk.java.net/~jwilhelm/8149594/webrev.00/ > > Thanks, > /Jesper From kim.barrett at oracle.com Wed Feb 10 22:34:56 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 10 Feb 2016 17:34:56 -0500 Subject: RFR: 8149594 - Clean up Hotspot makefiles In-Reply-To: <56BBB56B.5020506@oracle.com> References: <56BB9E07.7030208@oracle.com> <56BBB56B.5020506@oracle.com> Message-ID: <16426566-DBB7-45E7-B934-2F68FF136745@oracle.com> Den 10/2/16 kl. 21:31, skrev Jesper Wilhelmsson: > https://bugs.openjdk.java.net/browse/JDK-8149594 > http://cr.openjdk.java.net/~jwilhelm/8149594/webrev.00/ ------------------------------------------------------------------------------ I might have preferred two webrevs, one of only whitespace changes and one of other changes. ------------------------------------------------------------------------------ make/bsd/Makefile make/linux/Makefile 172 TARGETS = debug fastdebug optimized product ... 200 BUILDTREE = $(MAKE) -f $(BUILDTREE_MAKE) $(BUILDTREE_VARS) If there's a non-whitespace change (increasing the separation between the variables and the "=" or "+="), I couldn't find it. I'm guessing this is being done because the planned later changes introduce something in here that leads to that reformatting? ------------------------------------------------------------------------------ make/bsd/makefiles/gcc.make 278 WARNING_FLAGS += -Wconversion Oh, cool! So we haven't been using that option after all! Note: This is a "real change" that wasn't mentioned in the RFR. I've been meaning to file a bug report against this for a while. The pre-gcc4.3 version of -Wconversion probably ought not be used in a production context anyway. https://gcc.gnu.org/wiki/NewWconversion The old behavior for -Wconversion was intended to aid translation of old C code to modern C standards by identifying places where adding function prototypes may result in different behavior. That's just not an issue for C++, nor for our code in general. And we're not prepared to use the new -Wconversion; see JDK-8135181. So rather than changing our builds to actually use this option with old compilers that Oracle doesn't support (so we can't locally test this change), I suggest removing the option entirely, since it hasn't actually been used anyway. ------------------------------------------------------------------------------ make/bsd/makefiles/jvmti.make make/linux/makefiles/trace.make The only non-copyright change in these files seem to be the addition of a blank line to the end of the file. ------------------------------------------------------------------------------ make/bsd/makefiles/top.make 88 vm_build_preliminaries: checks $(Cached_plat) $(AD_Files_If_Required) trace_stuff jvmti_stuff dtrace_stuff What is the point of re-ordering trace_stuff and jvmti_stuff? Also, elsewhere the whitespace after the target's ":" is minimized, but not here. 
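As a footnote to the -Wconversion point above: the post-gcc-4.3 form of the
option warns about implicit conversions that may change a value, which is the
behavior JDK-8135181 says HotSpot is not yet ready to build with. A minimal,
purely hypothetical example of what it flags (not code from the JDK):

  // g++ -Wconversion -c example.cpp
  #include <cstddef>

  int remaining(std::size_t capacity, std::size_t used) {
    return capacity - used;   // warned: value-changing conversion to 'int'
  }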
------------------------------------------------------------------------------

From john.r.rose at oracle.com Wed Feb 10 22:43:58 2016
From: john.r.rose at oracle.com (John Rose)
Date: Wed, 10 Feb 2016 14:43:58 -0800
Subject: RFR: 8145934: Make ttyLocker equivalent for Unified Logging framework
In-Reply-To: <56BB3FD0.5000104@oracle.com>
References: <56BB3FD0.5000104@oracle.com>
Message-ID: <3910DA9B-43C9-4C1A-8FD0-993A54225550@oracle.com>

Thanks for taking this on. To be an adequate substitute for ttyLocker it
needs to support block structure via the RAII pattern. Otherwise the use
cases are verbose enough to be a burden on programmers.

This is easy, I think: Give LogMessage a constructor which takes a reference
to the corresponding LogHandle. Have the LogMessage destructor call
log.write(*this). (BTW, as written it allows accidentally dropped writes,
which is bad: We'll never find all those bugs. That's the burden of a
rough-edged API, especially when it is turned off most of the time.)

If necessary or for flexibility, allow the LogMessage constructor an optional
boolean to say "don't write automatically". Also, allow a "reset" method to
cancel any buffered writing. So the default is to perform the write at the
end of the block (if there is anything to write), but it can be turned off
explicitly.

Giving the LogMessage a clear linkage to a LogHandle allows the LogMessage to
be a simple delegate for the LogHandle itself. This allows the user to ignore
the LogHandle and work with the LogMessage as if it were the LogHandle. That
seems preferable to requiring split attention to both objects.

Given this simplification, the name LogMessage could be changed to
BufferedLogHandle, LogBuffer, ScopedLog, etc., to emphasize that the thing is
really a channel to some log, but with an extra bit of buffering to control.

To amend your example use case:

  // example buffered log messages (proposed)
  LogHandle(logging) log;
  if (log.is_debug()) {
    ResourceMark rm;
    LogMessage msg;
    msg.debug("debug message");
    msg.trace("additional trace information");
    log.write(msg);
  }

Either this:

  // example buffered log messages (amended #1)
  LogHandle(logging) log;
  if (log.is_debug()) {
    ResourceMark rm;
    LogBuffer buf(log);
    buf.debug("debug message");
    buf.trace("additional trace information");
  }

Or this:

  // example buffered log messages (amended #2)
  {
    LogBuffer(logging) log;
    if (log.is_debug()) {
      ResourceMark rm;
      log.debug("debug message");
      log.trace("additional trace information");
    }
  }

The second is probably preferable, since it encourages the logging logic to
be modularized into a single block, and because it reduces the chances for
error that might occur from having two similar names (log/msg or log/buf).

The second usage requires the LogBuffer constructor to be lazy: It must delay
internal memory allocation until the first output operation.

- John

From kim.barrett at oracle.com Wed Feb 10 23:40:11 2016
From: kim.barrett at oracle.com (Kim Barrett)
Date: Wed, 10 Feb 2016 18:40:11 -0500
Subject: RFR: 8149591 - Prepare hotspot for GTest
In-Reply-To: <56BB93D6.3000905@oracle.com>
References: <56BB93D6.3000905@oracle.com>
Message-ID: <924A3643-B8B6-427A-A504-F61CAB0ED295@oracle.com>

> On Feb 10, 2016, at 2:47 PM, Jesper Wilhelmsson wrote:
>
> Hi,
>
> Please review this change to prepare the Hotspot code for the Google unit test framework. From the RFE:
>
> A few changes are needed in the hotspot code to start using the Google Test framework.
>
> 1.
The new() operator as defined in allocation.cpp can not be used together with GTest. This needs to be moved to a separate file so that we can avoid compiling it when building the GTest enabled JVM. > > 2. In management.cpp there is a local variable called err_msg. This variable is shadowing a global variable in debug.hpp. In the GTest work the global err_msg variable is used in the vmassert macro and this creates a conflict with the local variable in management.cpp. Where does this happen? The current vmassert macro doesn't use err_msg. A better way to address this might be to fix the problematic macro. For safety, macros whose expansions refer to some namespace-scoped name should qualify the reference. > > 3. If SuppressFatalErrorMessage is set ALL error messages should be suppressed, even the ones in error_is_suppressed() in debug.cpp. > > This is what is done by this change. > > RFE: https://bugs.openjdk.java.net/browse/JDK-8149591 > Webrev: http://cr.openjdk.java.net/~jwilhelm/8149591/webrev.00/index.html > > Thanks, > /Jesper From jesper.wilhelmsson at oracle.com Thu Feb 11 00:34:13 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Thu, 11 Feb 2016 01:34:13 +0100 Subject: RFR: 8149591 - Prepare hotspot for GTest In-Reply-To: <924A3643-B8B6-427A-A504-F61CAB0ED295@oracle.com> References: <56BB93D6.3000905@oracle.com> <924A3643-B8B6-427A-A504-F61CAB0ED295@oracle.com> Message-ID: <56BBD705.5060206@oracle.com> Den 11/2/16 kl. 00:40, skrev Kim Barrett: >> On Feb 10, 2016, at 2:47 PM, Jesper Wilhelmsson wrote: >> >> Hi, >> >> Please review this change to prepare the Hotspot code for the Google unit test framework. From the RFE: >> >> A few changes are needed in the hotspot code to start using the Google Test framework. >> >> 1. The new() operator as defined in allocation.cpp can not be used together with GTest. This needs to be moved to a separate file so that we can avoid compiling it when building the GTest enabled JVM. >> >> 2. In management.cpp there is a local variable called err_msg. This variable is shadowing a global variable in debug.hpp. In the GTest work the global err_msg variable is used in the vmassert macro and this creates a conflict with the local variable in management.cpp. > > Where does this happen? The current vmassert macro doesn't use err_msg. > > A better way to address this might be to fix the problematic macro. > For safety, macros whose expansions refer to some namespace-scoped > name should qualify the reference. The vmassert macro looks like this with the GTest changes applied: #define vmassert(p, ...) \ do { \ if (!(p)) { \ if (is_executing_unit_tests()) { \ report_assert_msg(err_msg(__VA_ARGS__).buffer()); \ } \ report_vm_error(__FILE__, __LINE__, "assert(" #p ") failed", __VA_ARGS__); \ BREAKPOINT; \ } \ } while (0) This is done so that the framework can pick up the assertion when running "death tests" - tests that are supposed to trigger an assertion. If there is a better way to implement this I'm open to any suggestions. /Jesper > >> >> 3. If SuppressFatalErrorMessage is set ALL error messages should be suppressed, even the ones in error_is_suppressed() in debug.cpp. >> >> This is what is done by this change. 
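The shadowing conflict behind point 2 can be reproduced in isolation. The
sketch below only mirrors the shape of the debug.hpp formatter and the
vmassert expansion quoted above; the names and the simplified macro are
stand-ins, not the real HotSpot code:

  #include <cstdarg>
  #include <cstdio>

  // Stand-in for the global err_msg formatter the assert macro expands to.
  struct err_msg {
    char _buf[256];
    err_msg(const char* fmt, ...) {
      va_list ap;
      va_start(ap, fmt);
      vsnprintf(_buf, sizeof(_buf), fmt, ap);
      va_end(ap);
    }
    const char* buffer() const { return _buf; }
  };

  // Simplified stand-in for the GTest-aware vmassert shown above.
  #define vmassert_like(p, ...) \
    do { if (!(p)) std::puts(err_msg(__VA_ARGS__).buffer()); } while (0)

  void record_error(int status) {
    // If this local were still named 'err_msg', the macro expansion would
    // resolve err_msg(...) to the local char* and fail to compile; renaming
    // the local, as the patch does in management.cpp, avoids the clash.
    const char* error_text = "operation failed";
    vmassert_like(status == 0, "status=%d (%s)", status, error_text);
  }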
>> >> RFE: https://bugs.openjdk.java.net/browse/JDK-8149591 >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8149591/webrev.00/index.html >> >> Thanks, >> /Jesper > > From jesper.wilhelmsson at oracle.com Thu Feb 11 00:51:51 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Thu, 11 Feb 2016 01:51:51 +0100 Subject: RFR: 8149594 - Clean up Hotspot makefiles In-Reply-To: <16426566-DBB7-45E7-B934-2F68FF136745@oracle.com> References: <56BB9E07.7030208@oracle.com> <56BBB56B.5020506@oracle.com> <16426566-DBB7-45E7-B934-2F68FF136745@oracle.com> Message-ID: <56BBDB27.4040107@oracle.com> Hi Kim, Thanks for looking at this! Den 10/2/16 kl. 23:34, skrev Kim Barrett: > Den 10/2/16 kl. 21:31, skrev Jesper Wilhelmsson: >> https://bugs.openjdk.java.net/browse/JDK-8149594 >> http://cr.openjdk.java.net/~jwilhelm/8149594/webrev.00/ > > > ------------------------------------------------------------------------------ > I might have preferred two webrevs, one of only whitespace changes and > one of other changes. Yes, I'll split it up in the next version if there is a need for it. > ------------------------------------------------------------------------------ > make/bsd/Makefile > make/linux/Makefile > 172 TARGETS = debug fastdebug optimized product > ... > 200 BUILDTREE = $(MAKE) -f $(BUILDTREE_MAKE) $(BUILDTREE_VARS) > > If there's a non-whitespace change (increasing the separation between > the variables and the "=" or "+="), I couldn't find it. I'm guessing > this is being done because the planned later changes introduce > something in here that leads to that reformatting? Yes, this is one of those changes that is motivated by future changes. I wanted to separate this whitespace change from the actual change to make that one easier to review. > > ------------------------------------------------------------------------------ > make/bsd/makefiles/gcc.make > 278 WARNING_FLAGS += -Wconversion > > Oh, cool! So we haven't been using that option after all! > > Note: This is a "real change" that wasn't mentioned in the RFR. > > I've been meaning to file a bug report against this for a while. The > pre-gcc4.3 version of -Wconversion probably ought not be used in a > production context anyway. > > https://gcc.gnu.org/wiki/NewWconversion > The old behavior for -Wconversion was intended to aid translation of > old C code to modern C standards by identifying places where adding > function prototypes may result in different behavior. That's just not > an issue for C++, nor for our code in general. > > And we're not prepared to use the new -Wconversion; see JDK-8135181. > > So rather than changing our builds to actually use this option with > old compilers that Oracle doesn't support (so we can't locally test > this change), I suggest removing the option entirely, since it hasn't > actually been used anyway. This typo was only present on bsd. Are you suggesting to remove it only on bsd, or on linux as well? > > ------------------------------------------------------------------------------ > make/bsd/makefiles/jvmti.make > make/linux/makefiles/trace.make > > The only non-copyright change in these files seem to be the addition > of a blank line to the end of the file. Yes, this is to make the bsd and linux versions of the files the same. It makes it easier to apply patches from one platform to the other when porting stuff. 
> > ------------------------------------------------------------------------------ > make/bsd/makefiles/top.make > 88 vm_build_preliminaries: checks $(Cached_plat) $(AD_Files_If_Required) trace_stuff jvmti_stuff dtrace_stuff > > What is the point of re-ordering trace_stuff and jvmti_stuff? To make the bsd and linux versions the same. > > Also, elsewhere the whitespace after the target's ":" is minimized, > but not here. Oops. I'll fix that. /Jesper > > ------------------------------------------------------------------------------ > From kim.barrett at oracle.com Thu Feb 11 01:01:26 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 10 Feb 2016 20:01:26 -0500 Subject: RFR: 8149591 - Prepare hotspot for GTest In-Reply-To: <56BBD705.5060206@oracle.com> References: <56BB93D6.3000905@oracle.com> <924A3643-B8B6-427A-A504-F61CAB0ED295@oracle.com> <56BBD705.5060206@oracle.com> Message-ID: <79FBDF1B-0978-4B1C-8F33-5DB6824F3FE3@oracle.com> > On Feb 10, 2016, at 7:34 PM, Jesper Wilhelmsson wrote: > > Den 11/2/16 kl. 00:40, skrev Kim Barrett: >>> 2. In management.cpp there is a local variable called err_msg. This variable is shadowing a global variable in debug.hpp. In the GTest work the global err_msg variable is used in the vmassert macro and this creates a conflict with the local variable in management.cpp. >> >> Where does this happen? The current vmassert macro doesn't use err_msg. >> >> A better way to address this might be to fix the problematic macro. >> For safety, macros whose expansions refer to some namespace-scoped >> name should qualify the reference. > > The vmassert macro looks like this with the GTest changes applied: > > #define vmassert(p, ...) \ > do { \ > if (!(p)) { \ > if (is_executing_unit_tests()) { \ > report_assert_msg(err_msg(__VA_ARGS__).buffer()); \ > } \ > report_vm_error(__FILE__, __LINE__, "assert(" #p ") failed", __VA_ARGS__); \ > BREAKPOINT; \ > } \ > } while (0) > > This is done so that the framework can pick up the assertion when running "death tests" - tests that are supposed to trigger an assertion. > > If there is a better way to implement this I'm open to any suggestions. Why not replace report_assert_msg(err_msg(__VA_ARGS__).buffer()) with report_assert_msg(__VA_ARGS__) Unless report_assert_msg is not provided by us, but is instead part of the gtest framework. In that case, we provide a variadic wrapper around report_assert_msg and call that wrapper in the vmassert expansion. Note that I think it might be good to split up debug.hpp, moving all the FormatBuffer stuff to a separate header. (I've been intending to submit an RFE for this.) With David's changes to use variadic macros, the macros no longer need to refer to FormatBuffer stuff in their expansions. Such a split would make debug.hpp standalone and not need to include, for example, globalDefinitions.hpp. One benefit of that would be that globalDefinitions.hpp could then include debug.hpp in order to use vmassert, rather than using kludges. From kim.barrett at oracle.com Thu Feb 11 01:15:34 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 10 Feb 2016 20:15:34 -0500 Subject: RFR: 8149594 - Clean up Hotspot makefiles In-Reply-To: <56BBDB27.4040107@oracle.com> References: <56BB9E07.7030208@oracle.com> <56BBB56B.5020506@oracle.com> <16426566-DBB7-45E7-B934-2F68FF136745@oracle.com> <56BBDB27.4040107@oracle.com> Message-ID: > On Feb 10, 2016, at 7:51 PM, Jesper Wilhelmsson wrote: > Den 10/2/16 kl. 23:34, skrev Kim Barrett: >> Den 10/2/16 kl. 
21:31, skrev Jesper Wilhelmsson: >>> https://bugs.openjdk.java.net/browse/JDK-8149594 >>> http://cr.openjdk.java.net/~jwilhelm/8149594/webrev.00/ >> >> >> ------------------------------------------------------------------------------ >> I might have preferred two webrevs, one of only whitespace changes and >> one of other changes. > > Yes, I'll split it up in the next version if there is a need for it. Thanks. >> ------------------------------------------------------------------------------ >> make/bsd/makefiles/gcc.make >> 278 WARNING_FLAGS += -Wconversion >> >> Oh, cool! So we haven't been using that option after all! >> >> Note: This is a "real change" that wasn't mentioned in the RFR. >> >> I've been meaning to file a bug report against this for a while. The >> pre-gcc4.3 version of -Wconversion probably ought not be used in a >> production context anyway. >> >> https://gcc.gnu.org/wiki/NewWconversion >> The old behavior for -Wconversion was intended to aid translation of >> old C code to modern C standards by identifying places where adding >> function prototypes may result in different behavior. That's just not >> an issue for C++, nor for our code in general. >> >> And we're not prepared to use the new -Wconversion; see JDK-8135181. >> >> So rather than changing our builds to actually use this option with >> old compilers that Oracle doesn't support (so we can't locally test >> this change), I suggest removing the option entirely, since it hasn't >> actually been used anyway. > > This typo was only present on bsd. Are you suggesting to remove it only on bsd, or on linux as well? Oh, ick! I forgot there are two of these. *I* think it should be removed in both. But maybe doing anything either way should be done as a separate thing? And whatever is done here should be checked with folks like SAP who actually build with old versions of gcc. From kim.barrett at oracle.com Thu Feb 11 01:17:14 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 10 Feb 2016 20:17:14 -0500 Subject: RFR: 8149591 - Prepare hotspot for GTest In-Reply-To: <79FBDF1B-0978-4B1C-8F33-5DB6824F3FE3@oracle.com> References: <56BB93D6.3000905@oracle.com> <924A3643-B8B6-427A-A504-F61CAB0ED295@oracle.com> <56BBD705.5060206@oracle.com> <79FBDF1B-0978-4B1C-8F33-5DB6824F3FE3@oracle.com> Message-ID: <987651C5-C294-42CF-8FAB-80C8E3A7B0B8@oracle.com> > On Feb 10, 2016, at 8:01 PM, Kim Barrett wrote: > Why not replace > > report_assert_msg(err_msg(__VA_ARGS__).buffer()) > > with > > report_assert_msg(__VA_ARGS__) > > Unless report_assert_msg is not provided by us, but is instead part of > the gtest framework. In that case, we provide a variadic wrapper > around report_assert_msg and call that wrapper in the vmassert > expansion. And remember to give the variadic function the appropriate ATTRIBUTE_PRINTF. From david.holmes at oracle.com Thu Feb 11 01:47:11 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Feb 2016 11:47:11 +1000 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BB7DE1.4020002@oracle.com> References: <56BB7DE1.4020002@oracle.com> Message-ID: <56BBE81F.6040403@oracle.com> Hi Vladimir, On 11/02/2016 4:13 AM, Vladimir Ivanov wrote: > http://cr.openjdk.java.net/~vlivanov/8138922/webrev.00 > https://bugs.openjdk.java.net/browse/JDK-8138922 > > StubCodeDesc keeps a list of all descriptors rooted at > StubCodeDesc::_list by placing newly instantiated objects there at the > end of the constructor. 
Unfortunately, it doesn't guarantee that only > fully-constructed objects are visible, because compiler (or HW) can > reorder the stores. > > Since method handle adapters are generated on demand when j.l.i > framework is initialized, it's possible there are readers iterating over > the list at the moment. It's not a problem per se until everybody sees a > consistent view of the list. > > The fix is to insert a StoreStore barrier before registering an object > on the list. Are entries ever removed from the list? The multi-threading aspects of this code are unclear. The on demand nature of method handle adapters may be exposing this code to concurrency issues that the code doesn't expect. ?? Thanks, David > > (I also considered moving MH adapter allocation to VM initialization > phase before anybody reads the list, but it's non-trivial since > MethodHandles::generate_adapters() has a number of implicit dependencies.) > > Testing: manual (verified StubCodeMark assembly), JPRT > > Thanks! > > Best regards, > Vladimir Ivanov From david.holmes at oracle.com Thu Feb 11 02:07:49 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Feb 2016 12:07:49 +1000 Subject: RFR: 8149594 - Clean up Hotspot makefiles In-Reply-To: <56BBB56B.5020506@oracle.com> References: <56BB9E07.7030208@oracle.com> <56BBB56B.5020506@oracle.com> Message-ID: <56BBECF5.6050708@oracle.com> Jesper, Magnus is rewriting all of the hotspot build system. Are these cleanups really worthwhile at this stage? David On 11/02/2016 8:10 AM, Jesper Wilhelmsson wrote: > Sending again to include the build-dev list. > /Jesper > > Den 10/2/16 kl. 21:31, skrev Jesper Wilhelmsson: >> Hi, >> >> Please review this cleanup of the Hotspot makefiles. >> >> Since I have been spending some time in the makefiles lately there >> were a few >> random cleanups that I couldn't stop myself from doing. Most of these >> are made >> to make the linux and bsd makefiles more alike. This has helped a lot >> when >> porting the framework to the different platforms. >> >> There are a couple of preparing alignment changes that I included in this >> cleanup to make the Google test patch easier to review later. >> >> There are also a couple of "real" changes: >> >> * In make/bsd/makefiles/buildtree.make we set up OS_VENDOR with the >> motivation >> that we don't include defs.make. Three lines below we include defs.make. >> >> * In make/bsd/makefiles/buildtree.make the 'install' target depends on >> 'install_jsigs'. There is no rule called 'install_jsigs', it is called >> 'install_jsig'. >> >> >> Another difference that I find interesting but that I have not changed >> in this >> patch (I can do that if requested) is that in the bsd version of >> fastdebug.make >> VERSION is set to "fastdebug" but in the linux version it is set to >> "optimized". >> Given the name of the makefile fastdebug seems more correct, but >> whichever is >> the correct value, shouldn't they be the same on linux and bsd? >> >> >> https://bugs.openjdk.java.net/browse/JDK-8149594 >> http://cr.openjdk.java.net/~jwilhelm/8149594/webrev.00/ >> >> Thanks, >> /Jesper From mikael.vidstedt at oracle.com Thu Feb 11 03:34:05 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Wed, 10 Feb 2016 19:34:05 -0800 Subject: RFR (XS): 8149611: Add tests for Unsafe.copySwapMemory Message-ID: <56BC012D.6030008@oracle.com> When I prepared the change for JDK-8141491 [1] I accidentally forgot to hg add the new jtreg tests for the new Unsafe.copySwapMemory method. 
This change adds the tests: Bug: https://bugs.openjdk.java.net/browse/JDK-8149611 Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8149611/webrev.00/ Cheers, Mikael [1] https://bugs.openjdk.java.net/browse/JDK-8141491 From david.holmes at oracle.com Thu Feb 11 03:49:43 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Feb 2016 13:49:43 +1000 Subject: RFR (XS): 8149611: Add tests for Unsafe.copySwapMemory In-Reply-To: <56BC012D.6030008@oracle.com> References: <56BC012D.6030008@oracle.com> Message-ID: <56BC04D7.8010104@oracle.com> Ship it! This has been reviewed since webrev.03 for 8149611. Thanks, David On 11/02/2016 1:34 PM, Mikael Vidstedt wrote: > > When I prepared the change for JDK-8141491 [1] I accidentally forgot to > hg add the new jtreg tests for the new Unsafe.copySwapMemory method. > This change adds the tests: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149611 > Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8149611/webrev.00/ > > Cheers, > Mikael > > [1] https://bugs.openjdk.java.net/browse/JDK-8141491 > From david.holmes at oracle.com Thu Feb 11 04:20:01 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Feb 2016 14:20:01 +1000 Subject: RFR: 8149591 - Prepare hotspot for GTest In-Reply-To: <56BB93D6.3000905@oracle.com> References: <56BB93D6.3000905@oracle.com> Message-ID: <56BC0BF1.7070506@oracle.com> Hi Jesper, On 11/02/2016 5:47 AM, Jesper Wilhelmsson wrote: > Hi, > > Please review this change to prepare the Hotspot code for the Google > unit test framework. From the RFE: > > A few changes are needed in the hotspot code to start using the Google > Test framework. > > 1. The new() operator as defined in allocation.cpp can not be used > together with GTest. This needs to be moved to a separate file so that > we can avoid compiling it when building the GTest enabled JVM. I presume that is because GTest will use the real global operator new? The name of the new file, given it contains new and delete, seems one-sided. But I can't think of anything better. :) > 2. In management.cpp there is a local variable called err_msg. This > variable is shadowing a global variable in debug.hpp. In the GTest work > the global err_msg variable is used in the vmassert macro and this > creates a conflict with the local variable in management.cpp. Renaming seems trivially fine. > 3. If SuppressFatalErrorMessage is set ALL error messages should be > suppressed, even the ones in error_is_suppressed() in debug.cpp. Took me a while to think this one through. Not sure what purpose SuppressFatalErrorMessages is intended to serve. The idea that the VM can just vanish without any kind of message to the user just seems like a bad idea. I wonder if there is a SuppressFatalErrorMessages test somewhere that actually relies on the output from error_is_suppressed to determine that a crash really did happen? ;-) Cheers, David > This is what is done by this change. > > RFE: https://bugs.openjdk.java.net/browse/JDK-8149591 > Webrev: http://cr.openjdk.java.net/~jwilhelm/8149591/webrev.00/index.html > > Thanks, > /Jesper From jesper.wilhelmsson at oracle.com Thu Feb 11 07:35:25 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Thu, 11 Feb 2016 08:35:25 +0100 Subject: RFR: 8149594 - Clean up Hotspot makefiles In-Reply-To: <56BBECF5.6050708@oracle.com> References: <56BB9E07.7030208@oracle.com> <56BBB56B.5020506@oracle.com> <56BBECF5.6050708@oracle.com> Message-ID: <56BC39BD.6010303@oracle.com> Den 11/2/16 kl. 
03:07, skrev David Holmes: > Jesper, > > Magnus is rewriting all of the hotspot build system. Are these cleanups really > worthwhile at this stage? The cleanups are worthwhile to me since I work on mostly Mac and port all my changes over to the linux makefiles in bulk, and without these cleanups my patches won't apply cleanly. The reason I want to push them now even though it is a while left until the GTest stuff is done, is that every time anyone makes a change in the makefiles (which happens more often than I had expected) I get merge conflicts everywhere. /Jesper > > David > > On 11/02/2016 8:10 AM, Jesper Wilhelmsson wrote: >> Sending again to include the build-dev list. >> /Jesper >> >> Den 10/2/16 kl. 21:31, skrev Jesper Wilhelmsson: >>> Hi, >>> >>> Please review this cleanup of the Hotspot makefiles. >>> >>> Since I have been spending some time in the makefiles lately there >>> were a few >>> random cleanups that I couldn't stop myself from doing. Most of these >>> are made >>> to make the linux and bsd makefiles more alike. This has helped a lot >>> when >>> porting the framework to the different platforms. >>> >>> There are a couple of preparing alignment changes that I included in this >>> cleanup to make the Google test patch easier to review later. >>> >>> There are also a couple of "real" changes: >>> >>> * In make/bsd/makefiles/buildtree.make we set up OS_VENDOR with the >>> motivation >>> that we don't include defs.make. Three lines below we include defs.make. >>> >>> * In make/bsd/makefiles/buildtree.make the 'install' target depends on >>> 'install_jsigs'. There is no rule called 'install_jsigs', it is called >>> 'install_jsig'. >>> >>> >>> Another difference that I find interesting but that I have not changed >>> in this >>> patch (I can do that if requested) is that in the bsd version of >>> fastdebug.make >>> VERSION is set to "fastdebug" but in the linux version it is set to >>> "optimized". >>> Given the name of the makefile fastdebug seems more correct, but >>> whichever is >>> the correct value, shouldn't they be the same on linux and bsd? >>> >>> >>> https://bugs.openjdk.java.net/browse/JDK-8149594 >>> http://cr.openjdk.java.net/~jwilhelm/8149594/webrev.00/ >>> >>> Thanks, >>> /Jesper From thomas.stuefe at gmail.com Thu Feb 11 08:27:13 2016 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 11 Feb 2016 09:27:13 +0100 Subject: RFR: 8145934: Make ttyLocker equivalent for Unified Logging framework In-Reply-To: <56BB3FD0.5000104@oracle.com> References: <56BB3FD0.5000104@oracle.com> Message-ID: Hi Marcus, thank you for implementing this! I am happy that you implement this with a temporary buffer rather than implementing a lock. I did not yet look closely at all changes, this is quite a large change. But I have some questions about the memory you use: You use either resource area or NMT-tracked C heap. In both cases this is quite "high level memory" in the sense that it uses a number of components. These components then cannot use logging (or at least the LogMessage) without introducing circular references. Worse, this may often work and only crash if the LogMessage::grow() function grows the buffer, so it is difficult to test. Would it not be better to use plain low-level raw malloc in this case? Or, alternativly, factor memory allocation out into an own interface to be used as template parameter, so that most people get a reasonable default: LogMessage myMessage; .. 
but low level components like NMT, for example, could do: LogMessage myMessageUsingRawMalloc; ? A second concern with using ResourceArea: One has to be careful with handing down LogMessage objects to subroutines, to fill the LogMessage object, and further down the callstack happens another ResourceMark destroying the LogMessage buffer. Could that be a problem, and if yes, could we have an assert for this case? Kind Regards, Thomas On Wed, Feb 10, 2016 at 2:49 PM, Marcus Larsson wrote: > Hi, > > Please review the following patch adding support for non-interleavable > multi-line log messages in UL. > > Summary: > This patch adds a LogMessage class that represents a multiline log > message, buffering lines that belong to the same message. The class has a > similar interface to the Log class, with printf-like methods for each log > level. These methods will append the log message with additional lines. > Once all filled in, the log message should be sent to the the appropriate > log(s) using Log<>::write(). All lines in the LogMessage are written in a > way that prevents interleaving by other messages. Lines are printed in the > same order they were added to the message (regardless of level). Apart from > the level, decorators will be identical for lines in the same LogMessage, > and all lines will be decorated. > > Webrev: > http://cr.openjdk.java.net/~mlarsson/8145934/webrev.00/ > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8145934 > > Testing: > Included tests through JPRT > > Thanks, > Marcus > From vladimir.x.ivanov at oracle.com Thu Feb 11 09:26:42 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Thu, 11 Feb 2016 12:26:42 +0300 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BBE81F.6040403@oracle.com> References: <56BB7DE1.4020002@oracle.com> <56BBE81F.6040403@oracle.com> Message-ID: <56BC53D2.3040701@oracle.com> Vladimir, David, thanks for the feedback! > Static ++_count could be also problem since you incremented it before > adding 'this' to list. Can you look? It's not a problem, because there's always a single writer (at least, right now). > Should we go though all our static fields and see if they have the same > concurrent access problem? Ideally, yes :-) But it's tedious, and not reliable. I'll do a shallow inspecion for other occurences. On 2/11/16 4:47 AM, David Holmes wrote: > Are entries ever removed from the list? No. They are only added (in StubCodeDesc ctor). > The multi-threading aspects of this code are unclear. The on demand > nature of method handle adapters may be exposing this code to > concurrency issues that the code doesn't expect. ?? It definitely stretches the code in unexpected ways. Proposed fix is enough for now, but it doesn't protect from possible modifications. I see 2 alternatives: (1) guard all accesses by a lock (or do modifications in a lock-free manner); (2) change the way how method handle adapters are generated: do it early enough, so compilers always see the same list. #2 looks more attractive. I don't see a compelling reason to generate MH adapters on-demand anymore, they can be part of the regular start-up procedure. Also, it would allow to add additional verification logic to ensure StubCodeDesc::_list isn't modified after it is read. What do you think? 
Best regards, Vladimir Ivanov From david.holmes at oracle.com Thu Feb 11 10:27:34 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Feb 2016 20:27:34 +1000 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BC53D2.3040701@oracle.com> References: <56BB7DE1.4020002@oracle.com> <56BBE81F.6040403@oracle.com> <56BC53D2.3040701@oracle.com> Message-ID: <56BC6216.6040709@oracle.com> On 11/02/2016 7:26 PM, Vladimir Ivanov wrote: > Vladimir, David, thanks for the feedback! > >> Static ++_count could be also problem since you incremented it before >> adding 'this' to list. Can you look? > It's not a problem, because there's always a single writer (at least, > right now). > >> Should we go though all our static fields and see if they have the same >> concurrent access problem? > Ideally, yes :-) But it's tedious, and not reliable. I'll do a shallow > inspecion for other occurences. > > On 2/11/16 4:47 AM, David Holmes wrote: >> Are entries ever removed from the list? > > No. They are only added (in StubCodeDesc ctor). > >> The multi-threading aspects of this code are unclear. The on demand >> nature of method handle adapters may be exposing this code to >> concurrency issues that the code doesn't expect. ?? > It definitely stretches the code in unexpected ways. Proposed fix is > enough for now, but it doesn't protect from possible modifications. > > I see 2 alternatives: > (1) guard all accesses by a lock (or do modifications in a lock-free > manner); > > (2) change the way how method handle adapters are generated: do it > early enough, so compilers always see the same list. > > #2 looks more attractive. I don't see a compelling reason to generate MH > adapters on-demand anymore, they can be part of the regular start-up > procedure. Also, it would allow to add additional verification logic to > ensure StubCodeDesc::_list isn't modified after it is read. > > What do you think? Immutability is always preferable to having to synchronize code to deal with concurrent modification. :) David > Best regards, > Vladimir Ivanov From aph at redhat.com Thu Feb 11 10:56:47 2016 From: aph at redhat.com (Andrew Haley) Date: Thu, 11 Feb 2016 10:56:47 +0000 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BB7DE1.4020002@oracle.com> References: <56BB7DE1.4020002@oracle.com> Message-ID: <56BC68EF.4080903@redhat.com> On 10/02/16 18:13, Vladimir Ivanov wrote: > The fix is to insert a StoreStore barrier before registering an object > on the list. 1. There's not usually any point having a StoreStore without a corresponding load barrier on the reader side: there's nothing for the StoreStore to synchronize with. Having said that, you could argue that because there's an address dependency from the list pointer to all the instances of StubCodeDesc we should be safe. 2. StoreStore is arguably wrong for everything except an object initialized with pure constants. However, given that a StubCodeDesc is immutable I guess it's safe, as Hans observes in [1]. Acquire/release is much easier to reason about because you don't have to make these fine judgements, so I'd just use acquire/release everywhere in order to keep my sanity. Andrew. 
[1] http://www.hboehm.info/c++mm/no_write_fences.html From vladimir.x.ivanov at oracle.com Thu Feb 11 10:58:34 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Thu, 11 Feb 2016 13:58:34 +0300 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BC6216.6040709@oracle.com> References: <56BB7DE1.4020002@oracle.com> <56BBE81F.6040403@oracle.com> <56BC53D2.3040701@oracle.com> <56BC6216.6040709@oracle.com> Message-ID: <1F1EF16A-ED2E-4578-8BF4-AB99935AA4C2@oracle.com> I assume you are not satisfied with the proposed fix :-) The downside of #2 is more complicated start-up. I'll experiment to see how it shapes out. Best regards, Vladimir Ivanov > On 11 ????. 2016 ?., at 13:27, David Holmes wrote: > >> On 11/02/2016 7:26 PM, Vladimir Ivanov wrote: >> Vladimir, David, thanks for the feedback! >> >>> Static ++_count could be also problem since you incremented it before >>> adding 'this' to list. Can you look? >> It's not a problem, because there's always a single writer (at least, >> right now). >> >>> Should we go though all our static fields and see if they have the same >>> concurrent access problem? >> Ideally, yes :-) But it's tedious, and not reliable. I'll do a shallow >> inspecion for other occurences. >> >>> On 2/11/16 4:47 AM, David Holmes wrote: >>> Are entries ever removed from the list? >> >> No. They are only added (in StubCodeDesc ctor). >> >>> The multi-threading aspects of this code are unclear. The on demand >>> nature of method handle adapters may be exposing this code to >>> concurrency issues that the code doesn't expect. ?? >> It definitely stretches the code in unexpected ways. Proposed fix is >> enough for now, but it doesn't protect from possible modifications. >> >> I see 2 alternatives: >> (1) guard all accesses by a lock (or do modifications in a lock-free >> manner); >> >> (2) change the way how method handle adapters are generated: do it >> early enough, so compilers always see the same list. >> >> #2 looks more attractive. I don't see a compelling reason to generate MH >> adapters on-demand anymore, they can be part of the regular start-up >> procedure. Also, it would allow to add additional verification logic to >> ensure StubCodeDesc::_list isn't modified after it is read. >> >> What do you think? > > Immutability is always preferable to having to synchronize code to deal with concurrent modification. :) > > David > >> Best regards, >> Vladimir Ivanov From david.holmes at oracle.com Thu Feb 11 12:02:15 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Feb 2016 22:02:15 +1000 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BC68EF.4080903@redhat.com> References: <56BB7DE1.4020002@oracle.com> <56BC68EF.4080903@redhat.com> Message-ID: <56BC7847.2090703@oracle.com> On 11/02/2016 8:56 PM, Andrew Haley wrote: > On 10/02/16 18:13, Vladimir Ivanov wrote: >> The fix is to insert a StoreStore barrier before registering an object >> on the list. > > 1. There's not usually any point having a StoreStore without a > corresponding load barrier on the reader side: there's nothing for the > StoreStore to synchronize with. Having said that, you could argue > that because there's an address dependency from the list pointer to > all the instances of StubCodeDesc we should be safe. That was my take as well. I could not see where the iteration code actually exists. 
If there were a list() accessor then we could add a loadload into that (or convert to load_acquire with store_release). David ----- > 2. StoreStore is arguably wrong for everything except an object > initialized with pure constants. However, given that a StubCodeDesc > is immutable I guess it's safe, as Hans observes in [1]. > > Acquire/release is much easier to reason about because you don't have > to make these fine judgements, so I'd just use acquire/release > everywhere in order to keep my sanity. > > Andrew. > > > [1] http://www.hboehm.info/c++mm/no_write_fences.html > From aph at redhat.com Thu Feb 11 12:34:23 2016 From: aph at redhat.com (Andrew Haley) Date: Thu, 11 Feb 2016 12:34:23 +0000 Subject: Cleaning up undefined behaviour in HotSpot Message-ID: <56BC7FCF.3050903@redhat.com> We're having problems with GCC 6 failing to build a working HotSpot in jdk8. I think this may be due to HotSpot's rather extensive use of undefined behaviour. This includes, but is not limited to integer overflows, null pointer dereferences, and type aliasing violations. It's a big job to fix it all, but I could certainly create a patch. However, the other problem is that all versions are affected and will need to be patched in order to run with GCC 6. >From the point of view of proprietary products based on OpenJDK this perhaps isn't an issue because people can build and test with a "frozen" compiler, but of course it's a big problem for distributions who build with the system compiler. (Mind you, it's quite possible that the proprietary JDK is broken but no-one noticed. And this is a potential security nightmare.) So, not only must the current development sources be patched, but also JKD 8. (And, for me, 7 and maybe 6.) I think we need to have a policy that all UB, with the possible exception of a couple of things which can be worked around with compiler switched, gets fixed. Comments, please... Andrew. From dmitry.dmitriev at oracle.com Thu Feb 11 13:54:29 2016 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Thu, 11 Feb 2016 16:54:29 +0300 Subject: RFR: 8144578: TestOptionsWithRanges test only ever uses the default collector In-Reply-To: <5697A9D3.1000700@oracle.com> References: <5696854D.1000604@oracle.com> <56969C91.9050003@oracle.com> <5697A9D3.1000700@oracle.com> Message-ID: <56BC9295.8050806@oracle.com> Hello, Please, need a Reviewer for that change. I uploaded updated webrev.02: http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.02/ Difference from webrev.01: I removed excluding of MinTLABSize and MarkSweepAlwaysCompactCount options from testing because underlying problems were fixed. Thanks, Dmitry On 14.01.2016 16:59, Dmitry Dmitriev wrote: > Hi Sangheon, > > Thank you for the review! Updated webrev: > http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.01/ > > Comments inline. > > On 13.01.2016 21:50, sangheon wrote: >> Hi Dmitry, >> >> Thank you for fixing this. >> Overall seems good. >> >> -------------------------------------------------------------------- >> test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >> >> 87 /* >> 88 * JDK-8144578 >> 89 * Temporarily remove testing of max range for ParGCArrayScanChunk >> because >> 90 * JVM can hang when ParGCArrayScanChunk=4294967296 and ParallelGC >> is used >> 91 */ >> 92 excludeTestMaxRange("ParGCArrayScanChunk"); >> >> issue number should be 8145204. > Fixed. 
>> >> -------------------------------------------------------------------- >> test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java >> >> line 181 >> >> - if (name.startsWith("G1")) { >> - option.addPrepend("-XX:+UseG1GC"); >> - } >> - >> - if (name.startsWith("CMS")) { >> - option.addPrepend("-XX:+UseConcMarkSweepGC"); >> - } >> - >> >> Is this change really needed for dedicated gc flags(starting with >> "G1" or "CMS")? >> I thought this CR is targeted for non-dedicated gc flags such as >> TLABWasteIncrement. > I return deleted lines. > > Thanks, > Dmitry >> >> And if you still think that above lines should be removed, please >> remove line 224 as well. >> >> 224 case "NewSizeThreadIncrease": >> 225 option.addPrepend("-XX:+UseSerialGC"); >> >> >> Thanks, >> Sangheon >> >> >> On 01/13/2016 09:11 AM, Dmitry Dmitriev wrote: >>> Hello, >>> >>> Please review small enhancement to the command line option >>> validation test framework which allow to run test with different GCs. >>> Few comments: >>> 1) Code which executed for testing was moved from >>> JVMOptionsUtils.java to separate class(JVMStartup.java) to avoid >>> overhead at java start-up for determining vm and gc type. >>> 2) runJavaWithParam method in JVMOption.java was refactored to avoid >>> code duplication. >>> >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8144578 >>> webrev.00: http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.00/ >>> >>> Testing: tested on all platforms with different gc by RBT, failed >>> flags were temporary removed from testing in TestOptionsWithRanges.java >>> >>> Thanks, >>> Dmitry >> > From marcus.larsson at oracle.com Thu Feb 11 15:29:13 2016 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Thu, 11 Feb 2016 16:29:13 +0100 Subject: RFR: 8145934: Make ttyLocker equivalent for Unified Logging framework In-Reply-To: <3910DA9B-43C9-4C1A-8FD0-993A54225550@oracle.com> References: <56BB3FD0.5000104@oracle.com> <3910DA9B-43C9-4C1A-8FD0-993A54225550@oracle.com> Message-ID: <56BCA8C9.102@oracle.com> Hi, On 02/10/2016 11:43 PM, John Rose wrote: > Thanks for taking this on. Thanks for looking at it! > > To be an adequate substitute for ttyLocker it needs to > support block structure via the RAII pattern. Otherwise > the use cases are verbose enough to be a burden on > programmers. > > This is easy, I think: Give LogMessage a constructor > which takes a reference to the corresponding LogHandle. > Have the LogMessage destructor call log.write(*this). Having automatic writing of the messages when they go out of scope would be convenient I guess. The idea with the current and more verbose API is to make it clear that the log message is just some in-memory buffer that can be used to prepare a multi-part message, which is sent explicitly to the intended log when ready. It makes it very obvious how the different components interact, at the cost of perhaps unnecessary verbosity. Having it automatically written makes it less verbose, but also a bit cryptic, IMHO. > > (BTW, as written it allows accidentally dropped writes, > which is bad: We'll never find all those bugs. That's > the burden of a rough-edged API, especially when it is > turned off most of the time.) We could add an assert/guarantee in an attempt to prevent this. > > If necessary or for flexibility, allow the LogMessage > constructor an optional boolean to say "don't write > automatically". Also, allow a "reset" method to > cancel any buffered writing. 
So the default is to > perform the write at the end of the block (if there > is anything to write), but it can be turned off > explicitly. > > Giving the LogMessage a clear linkage to a LogHandle > allows the LogMessage to be a simple delegate for > the LogHandle itself. This allows the user to ignore > the LogHandle and work with the LogMessage as > if it were the LogHandle. That seems preferable > to requiring split attention to both objects. > > Given this simplification, the name LogMessage > could be changed to BufferedLogHandle, LogBuffer, > ScopedLog, etc., to emphasize that the thing is > really a channel to some log, but with an extra > bit of buffering to control. I still think the LogMessage name makes sense. BufferedLogHandle and the likes give the impression that it's a LogHandle with some internal buffering for the sake of performance, which actually the opposite of it's intention. This class should only be used when it is important that the multi-line message isn't interleaved by other messages. I still expect the majority of the logging throughout the VM to still use the regular (and faster) LogHandle and/or log macros. > > To amend your example use case: > > // example buffered log messages (proposed) > LogHandle(logging) log; > if (log.is_debug()) { > ResourceMark rm; > LogMessage msg; > msg.debug("debug message"); > msg.trace("additional trace information"); > log.write(msg); > } > > Either this: > > // example buffered log messages (amended #1) > LogHandle(logging) log; > if (log.is_debug()) { > ResourceMark rm; > LogBuffer buf(log); > buf.debug("debug message"); > buf.trace("additional trace information"); > } > > Or this: > > // example buffered log messages (amended #2) > { LogBuffer(logging) log; > if (log.is_debug()) { > ResourceMark rm; > log.debug("debug message"); > log.trace("additional trace information"); > } > } > > The second is probably preferable, since it encourages the > logging logic to be modularized into a single block, and > because it reduces the changes for error that might occur > from having two similar names (log/msg or log/buf). The second case is more compact, which is always a good thing when it comes to logging. For the more involved scenarios where there are multiple messages being sent, I usually assume (perhaps incorrectly) that a LogHandle is used throughout the scope of such scenarios/functions, for the sake of compactness and consistency (not having to specify log tags in more than one place). In those cases there would already be a LogHandle that could be used for testing levels and such. With messages tied to a particular output like you suggest, it does however make sense to allow level testing functions on the message instances as well. I'll prepare another patch with your suggestions and we'll see how it turns out. Thanks, Marcus > > The second usage requires the LogBuffer constructor > to be lazy: It must delay internal memory allocation > until the first output operation. > > ? John From marcus.larsson at oracle.com Thu Feb 11 15:30:01 2016 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Thu, 11 Feb 2016 16:30:01 +0100 Subject: RFR: 8145934: Make ttyLocker equivalent for Unified Logging framework In-Reply-To: <56BB80AD.9030701@oracle.com> References: <56BB3FD0.5000104@oracle.com> <56BB80AD.9030701@oracle.com> Message-ID: <56BCA8F9.2020402@oracle.com> Hi, On 02/10/2016 07:25 PM, Rachel Protacio wrote: > Hi, > > Thank you for implementing this - it will be very useful for some of > our logging. 
The code looks good to me! Thanks also for the > file_contains_substring() update in log.cpp :) Thank you for reviewing, Rachel! Marcus > > Rachel > > On 2/10/2016 8:49 AM, Marcus Larsson wrote: >> Hi, >> >> Please review the following patch adding support for >> non-interleavable multi-line log messages in UL. >> >> Summary: >> This patch adds a LogMessage class that represents a multiline log >> message, buffering lines that belong to the same message. The class >> has a similar interface to the Log class, with printf-like methods >> for each log level. These methods will append the log message with >> additional lines. Once all filled in, the log message should be sent >> to the the appropriate log(s) using Log<>::write(). All lines in the >> LogMessage are written in a way that prevents interleaving by other >> messages. Lines are printed in the same order they were added to the >> message (regardless of level). Apart from the level, decorators will >> be identical for lines in the same LogMessage, and all lines will be >> decorated. >> >> Webrev: >> http://cr.openjdk.java.net/~mlarsson/8145934/webrev.00/ >> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8145934 >> >> Testing: >> Included tests through JPRT >> >> Thanks, >> Marcus > From paul.sandoz at oracle.com Thu Feb 11 15:39:18 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 11 Feb 2016 16:39:18 +0100 Subject: RFR JDK-8149644 Integrate VarHandles Message-ID: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> Hi, This is the implementation review request for VarHandles. Langtools: http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/langtools/webrev/index.html Hotspot: http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/hotspot/webrev/index.html JDK: http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html The spec/API review is proceeding here [1]. The patches depend on Unsafe changes [2] and ByteBuffer changes [3]. Recent (as of today) JPRT runs for core and hotspot tests pass without failure. Many parts of the code have been soaking in the Valhalla repo for over a year, and it?s been soaking in the sandbox for quite and many a JPRT run was performed. It is planned to push through hs-comp as is the case for the dependent patches, and thus minimise any delays due to integration between forests. The Langtools changes are small. Tweaks were made to support updates to signature polymorphic methods and where may be located, in addition to supporting compilation of calls to MethodHandle.link*. The Hotspot changes are not very large. It?s mostly a matter of augmenting checks for MethodHandle to include that for VarHandle. It?s tempting to generalise the ?invokehandle" invocation as i believe there are other use-cases where it might be useful, but i resisted temptation here. I wanted to focus on the minimal changes required. The JDK changes are more substantial, but a large proportion are new tests. The source compilation approach taken is to use templates, the same approach as for code in the nio package, to generate both implementation and test source code. The implementations are generated by the build, the tests are pre-generated. I believe the tests should have good coverage but we have yet to run any code coverage tool. The approach to invocation of VarHandle signature polymoprhic methods is slightly different to that of MethodHandles. 
I wanted to ensure that linking for the common cases avoids lambda form creation, compilation and therefore class spinning. That reduces start up costs and also potential circular dependencies that might be induced in the VM boot process if VarHandles are employed early on. For common basic (i.e. erased ref and widened primitive) method signatures, namely all those that matter for the efficient atomic operations there are pre-generated methods that would otherwise be generated from creating and compiling invoker lambda forms. Those methods reside on the VarHandleGuards class. When the VM makes an up call to MethodHandleNatives.linkMethod to link a call site then this up-called method will first check if an appropriate pre-generated method exists on VarHandleGuards and if so it links to that, otherwise it falls back to a method on a class generated from compiling a lambda form. For testing purposes there is a system property available to switch off this optimisation when linking [*]. Each VarHandle instance of the same variable type produced from the same factory will share an underlying immutable instance of a VarForm that contains a set of MemberName instances, one for each implementation of a signature polymorphic method (a value of null means unsupported). The invoke methods (on VarHandleGuards or on lambda forms) will statically link to such MemberName instances using a call to MethodHandle.linkToStatic. There are a couple of TODOs in comments, those are all on non-critical code paths and i plan to chase them up afterwards. C1 does not support constant folding for @Stable arrays hence why in certain cases we have exploded stuff into fields that are operated on using if/else loops. We can simplify such code if/when C1 support is added. Thanks, Paul. [1] http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038150.html [2] http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2016-January/020953.html http://mail.openjdk.java.net/pipermail/hotspot-dev/2016-January/021514.html [3] http://mail.openjdk.java.net/pipermail/nio-dev/2016-February/003535.html [*] This technique might be useful for common signatures of MH invokers to reduce associated costs of lambda form creation and compilation in the interim of something better. From marcus.larsson at oracle.com Thu Feb 11 15:46:27 2016 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Thu, 11 Feb 2016 16:46:27 +0100 Subject: RFR: 8145934: Make ttyLocker equivalent for Unified Logging framework In-Reply-To: References: <56BB3FD0.5000104@oracle.com> Message-ID: <56BCACD3.6010701@oracle.com> Hi Thomas, On 02/11/2016 09:27 AM, Thomas St?fe wrote: > Hi Marcus, > > thank you for implementing this! Thanks for looking at it! > > I am happy that you implement this with a temporary buffer rather than > implementing a lock. > > I did not yet look closely at all changes, this is quite a large > change. But I have some questions about the memory you use: > > You use either resource area or NMT-tracked C heap. In both cases this > is quite "high level memory" in the sense that it uses a number of > components. These components then cannot use logging (or at least the > LogMessage) without introducing circular references. Worse, this may > often work and only crash if the LogMessage::grow() function grows the > buffer, so it is difficult to test. > > Would it not be better to use plain low-level raw malloc in this case? 
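As a rough idea of what a raw-malloc-backed buffer behind such a message class could look like, and of the allocator-template suggestion the quote goes on to make, here is a hypothetical sketch; none of these names come from the actual LogMessage patch, and failed-allocation handling is left out for brevity.

#include <cstddef>
#include <cstdlib>
#include <cstring>

// Default allocator: plain libc malloc/realloc/free, usable even from
// low-level components that must not re-enter the VM's own allocators.
struct RawMallocAllocator {
  static void* allocate(size_t n)            { return ::malloc(n); }
  static void* reallocate(void* p, size_t n) { return ::realloc(p, n); }
  static void  deallocate(void* p)           { ::free(p); }
};

// Hypothetical multi-line message buffer, not the real LogMessage: lines
// are appended into one contiguous buffer so they can later be handed to
// the log framework as a single unit.
template <typename Allocator = RawMallocAllocator>
class BufferedMessage {
 public:
  BufferedMessage() : _buf(NULL), _used(0), _cap(0) {}
  ~BufferedMessage() { Allocator::deallocate(_buf); }

  void append(const char* line) {
    size_t len = ::strlen(line);
    grow(_used + len + 2);                 // room for '\n' and the NUL
    ::memcpy(_buf + _used, line, len);
    _used += len;
    _buf[_used++] = '\n';
    _buf[_used] = '\0';
  }

  const char* contents() const { return _buf == NULL ? "" : _buf; }

 private:
  void grow(size_t needed) {
    if (needed <= _cap) return;
    size_t new_cap = (_cap == 0) ? 256 : _cap * 2;
    if (new_cap < needed) new_cap = needed;
    _buf = static_cast<char*>(Allocator::reallocate(_buf, new_cap));
    _cap = new_cap;
  }

  char*  _buf;
  size_t _used;
  size_t _cap;
};

// Usage: BufferedMessage<> msg; msg.append("line one"); msg.append("line two");
// A component with special constraints could instantiate
// BufferedMessage<SomeOtherAllocator> instead of the default.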
> > Or, alternativly, factor memory allocation out into an own interface > to be used as template parameter, so that most people get a reasonable > default: > > LogMessage myMessage; > .. > > but low level components like NMT, for example, could do: > > LogMessage myMessageUsingRawMalloc; > > ? Good point. This is actually already an issue with the current implementation for regular log messages. If they are longer than what the stack buffer in Log<>::vwrite() can hold, a buffer will be allocated on the heap instead (NMT tracked). I'm considering to approach this as a follow-up issue though, given that it's a pre-existing issue and that it won't be a problem until logging for NMT or similar is added. > > A second concern with using ResourceArea: One has to be careful with > handing down LogMessage objects to subroutines, to fill the LogMessage > object, and further down the callstack happens another ResourceMark > destroying the LogMessage buffer. Could that be a problem, and if yes, > could we have an assert for this case? Yes, there is a problem here if the LogMessage buffer grows under a different ResourceMark. I think it's possible to add an assert here, but I wonder if we don't actually want something more robust than that. Perhaps skipping the resource allocations altogether is a better idea. Thanks, Marcus > > Kind Regards, Thomas > > > > > > > On Wed, Feb 10, 2016 at 2:49 PM, Marcus Larsson > > wrote: > > Hi, > > Please review the following patch adding support for > non-interleavable multi-line log messages in UL. > > Summary: > This patch adds a LogMessage class that represents a multiline log > message, buffering lines that belong to the same message. The > class has a similar interface to the Log class, with printf-like > methods for each log level. These methods will append the log > message with additional lines. Once all filled in, the log message > should be sent to the the appropriate log(s) using Log<>::write(). > All lines in the LogMessage are written in a way that prevents > interleaving by other messages. Lines are printed in the same > order they were added to the message (regardless of level). Apart > from the level, decorators will be identical for lines in the same > LogMessage, and all lines will be decorated. > > Webrev: > http://cr.openjdk.java.net/~mlarsson/8145934/webrev.00/ > > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8145934 > > Testing: > Included tests through JPRT > > Thanks, > Marcus > > From vladimir.kozlov at oracle.com Fri Feb 12 00:18:22 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 11 Feb 2016 16:18:22 -0800 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> Message-ID: <56BD24CE.8020405@oracle.com> I looked on Hotspot changes and changes are fine. My only complain is missing {} in if() statements. It was source for bugs for us before so we require to always have {}: in rewriter.cpp, method.cpp, methodHandles.cpp. Thanks, Vladimir On 2/11/16 7:39 AM, Paul Sandoz wrote: > Hi, > > This is the implementation review request for VarHandles. 
> > Langtools: > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/langtools/webrev/index.html > > Hotspot: > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/hotspot/webrev/index.html > > JDK: > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html > > The spec/API review is proceeding here [1]. > > The patches depend on Unsafe changes [2] and ByteBuffer changes [3]. > > Recent (as of today) JPRT runs for core and hotspot tests pass without failure. Many parts of the code have been soaking in the Valhalla repo for over a year, and it?s been soaking in the sandbox for quite and many a JPRT run was performed. > > It is planned to push through hs-comp as is the case for the dependent patches, and thus minimise any delays due to integration between forests. > > > The Langtools changes are small. Tweaks were made to support updates to signature polymorphic methods and where may be located, in addition to supporting compilation of calls to MethodHandle.link*. > > > The Hotspot changes are not very large. It?s mostly a matter of augmenting checks for MethodHandle to include that for VarHandle. It?s tempting to generalise the ?invokehandle" invocation as i believe there are other use-cases where it might be useful, but i resisted temptation here. I wanted to focus on the minimal changes required. > > > The JDK changes are more substantial, but a large proportion are new tests. The source compilation approach taken is to use templates, the same approach as for code in the nio package, to generate both implementation and test source code. The implementations are generated by the build, the tests are pre-generated. I believe the tests should have good coverage but we have yet to run any code coverage tool. > > The approach to invocation of VarHandle signature polymoprhic methods is slightly different to that of MethodHandles. I wanted to ensure that linking for the common cases avoids lambda form creation, compilation and therefore class spinning. That reduces start up costs and also potential circular dependencies that might be induced in the VM boot process if VarHandles are employed early on. > > For common basic (i.e. erased ref and widened primitive) method signatures, namely all those that matter for the efficient atomic operations there are pre-generated methods that would otherwise be generated from creating and compiling invoker lambda forms. Those methods reside on the VarHandleGuards class. When the VM makes an up call to MethodHandleNatives.linkMethod to link a call site then this up-called method will first check if an appropriate pre-generated method exists on VarHandleGuards and if so it links to that, otherwise it falls back to a method on a class generated from compiling a lambda form. For testing purposes there is a system property available to switch off this optimisation when linking [*]. > > Each VarHandle instance of the same variable type produced from the same factory will share an underlying immutable instance of a VarForm that contains a set of MemberName instances, one for each implementation of a signature polymorphic method (a value of null means unsupported). The invoke methods (on VarHandleGuards or on lambda forms) will statically link to such MemberName instances using a call to MethodHandle.linkToStatic. > > There are a couple of TODOs in comments, those are all on non-critical code paths and i plan to chase them up afterwards. 
> > C1 does not support constant folding for @Stable arrays hence why in certain cases we have exploded stuff into fields that are operated on using if/else loops. We can simplify such code if/when C1 support is added. > > > Thanks, > Paul. > > [1] http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038150.html > [2] http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2016-January/020953.html > http://mail.openjdk.java.net/pipermail/hotspot-dev/2016-January/021514.html > [3] http://mail.openjdk.java.net/pipermail/nio-dev/2016-February/003535.html > > [*] This technique might be useful for common signatures of MH invokers to reduce associated costs of lambda form creation and compilation in the interim of something better. > From vladimir.kozlov at oracle.com Fri Feb 12 00:36:01 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 11 Feb 2016 16:36:01 -0800 Subject: Cleaning up undefined behaviour in HotSpot In-Reply-To: <56BC7FCF.3050903@redhat.com> References: <56BC7FCF.3050903@redhat.com> Message-ID: <56BD28F1.3000602@oracle.com> > I think we need to have a policy that all UB, with the possible > exception of a couple of things which can be worked around with > compiler switched, gets fixed. I agree with that (assuming it is not false positive or compiler's bug). We (Hotspot team) welcome all fixes which remove UB in jdk 9 (current) sources. But it is up to jdk 8 update release and support teams to take those changes into 8u. In Oracle we use only supported build configuration, as you called 'frozen': https://wiki.openjdk.java.net/display/Build/Supported+Build+Platforms If you know which flags we can use with gcc 4 to trigger the same warnings/errors, please, let us know so we can verify fixes. Regards, Vladimir On 2/11/16 4:34 AM, Andrew Haley wrote: > We're having problems with GCC 6 failing to build a working HotSpot > in jdk8. > > I think this may be due to HotSpot's rather extensive use of undefined > behaviour. This includes, but is not limited to integer overflows, > null pointer dereferences, and type aliasing violations. > > It's a big job to fix it all, but I could certainly create a patch. > However, the other problem is that all versions are affected and will > need to be patched in order to run with GCC 6. > > From the point of view of proprietary products based on OpenJDK this > perhaps isn't an issue because people can build and test with a > "frozen" compiler, but of course it's a big problem for distributions > who build with the system compiler. (Mind you, it's quite possible > that the proprietary JDK is broken but no-one noticed. And this is a > potential security nightmare.) > > So, not only must the current development sources be patched, but also > JKD 8. (And, for me, 7 and maybe 6.) > > I think we need to have a policy that all UB, with the possible > exception of a couple of things which can be worked around with > compiler switched, gets fixed. > > Comments, please... > > Andrew. 
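For readers unfamiliar with the categories named above, the rewrites involved tend to be small and mechanical. Two representative examples follow; they are written from scratch to illustrate the idioms, not quoted from HotSpot source.

#include <climits>
#include <cstdint>
#include <cstring>

// Type punning through a pointer cast (e.g. *(uint32_t*)&f) violates the
// strict-aliasing rule; memcpy expresses the same bit copy without UB and
// modern compilers reduce it to a single move. Assumes a 32-bit float.
inline uint32_t float_bits(float f) {
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));
  return bits;
}

// Checking signed overflow with "a + b < a" is itself UB once the addition
// overflows; test before the operation instead, using only defined math.
inline bool add_would_overflow(int a, int b) {
  if (b > 0) {
    return a > INT_MAX - b;
  }
  return a < INT_MIN - b;
}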
> From john.r.rose at oracle.com Fri Feb 12 03:32:09 2016 From: john.r.rose at oracle.com (John Rose) Date: Thu, 11 Feb 2016 19:32:09 -0800 Subject: Cleaning up undefined behaviour in HotSpot In-Reply-To: <56BC7FCF.3050903@redhat.com> References: <56BC7FCF.3050903@redhat.com> Message-ID: <321C74AA-D5C7-4C87-A909-7A6FDB4CAE07@oracle.com> On Feb 11, 2016, at 4:34 AM, Andrew Haley wrote: > > I think we need to have a policy that all UB, with the possible > exception of a couple of things which can be worked around with > compiler switched, gets fixed. > > Comments, please... It's worth study to see how bad things are and what the various remedial tactics will cost. One remedial tactic might be to not use compilers that aggressively DTWT for by-the-book UB. To me it seems possible that this new wave of UB enforcement will ultimately be rejected by the C programming community. (Of course, I was wrong about XML getting rejected so what do I know.) That said, I agree with Vladimir, that an anti-UB policy is all to the good, and assuming it does not become our new job. I have one positive contribution: anti-UB workarounds should be encapsulated in macros or inline functions and defined in globalDefinitions.hpp (or equivalent). The HotSpot way is to take trick and complex C expression patterns and write them up once, correctly, in a header file. Neither HotSpot maintainers nor HotSpot platform compilers have a full grasp of the subtleties of the C expression language. Best practices for C expressions need to be communicated in header files, not in admiring comments near virtuoso cadenzas of C operators. https://wiki.openjdk.java.net/display/HotSpot/StyleGuide#StyleGuide-Miscellaneous > ? Use functions from globalDefinitions.hpp when performing bitwise operations on integers. > Do not code directly as C operators, unless they are extremely simple. > (Examples: round_to, is_power_of_2, exact_log2.) ? John From thomas.stuefe at gmail.com Fri Feb 12 07:22:31 2016 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 12 Feb 2016 08:22:31 +0100 Subject: RFR: 8145934: Make ttyLocker equivalent for Unified Logging framework In-Reply-To: <56BCACD3.6010701@oracle.com> References: <56BB3FD0.5000104@oracle.com> <56BCACD3.6010701@oracle.com> Message-ID: Hi Marcus, On Thu, Feb 11, 2016 at 4:46 PM, Marcus Larsson wrote: > Hi Thomas, > > On 02/11/2016 09:27 AM, Thomas St?fe wrote: > > Hi Marcus, > > thank you for implementing this! > > > Thanks for looking at it! > > > I am happy that you implement this with a temporary buffer rather than > implementing a lock. > > I did not yet look closely at all changes, this is quite a large change. > But I have some questions about the memory you use: > > You use either resource area or NMT-tracked C heap. In both cases this is > quite "high level memory" in the sense that it uses a number of components. > These components then cannot use logging (or at least the LogMessage) > without introducing circular references. Worse, this may often work and > only crash if the LogMessage::grow() function grows the buffer, so it is > difficult to test. > > Would it not be better to use plain low-level raw malloc in this case? > > Or, alternativly, factor memory allocation out into an own interface to be > used as template parameter, so that most people get a reasonable default: > > LogMessage myMessage; > .. > > but low level components like NMT, for example, could do: > > LogMessage myMessageUsingRawMalloc; > > ? > > > Good point. 
This is actually already an issue with the current > implementation for regular log messages. If they are longer than what the > stack buffer in Log<>::vwrite() can hold, a buffer will be allocated on the > heap instead (NMT tracked). I'm considering to approach this as a follow-up > issue though, given that it's a pre-existing issue and that it won't be a > problem until logging for NMT or similar is added. > > This sounds reasonable. > > A second concern with using ResourceArea: One has to be careful with > handing down LogMessage objects to subroutines, to fill the LogMessage > object, and further down the callstack happens another ResourceMark > destroying the LogMessage buffer. Could that be a problem, and if yes, > could we have an assert for this case? > > > Yes, there is a problem here if the LogMessage buffer grows under a > different ResourceMark. I think it's possible to add an assert here, but I > wonder if we don't actually want something more robust than that. Perhaps > skipping the resource allocations altogether is a better idea. > > Yes, maybe this is not necessary. In our VM we have something very similar to LogMessage for our tracing system, but we just use raw malloc. So, we traded in performance, but the code got simpler and more robust. Kind Regards, Thomas > Thanks, > Marcus > > > Kind Regards, Thomas > > > > > > > On Wed, Feb 10, 2016 at 2:49 PM, Marcus Larsson > wrote: > >> Hi, >> >> Please review the following patch adding support for non-interleavable >> multi-line log messages in UL. >> >> Summary: >> This patch adds a LogMessage class that represents a multiline log >> message, buffering lines that belong to the same message. The class has a >> similar interface to the Log class, with printf-like methods for each log >> level. These methods will append the log message with additional lines. >> Once all filled in, the log message should be sent to the the appropriate >> log(s) using Log<>::write(). All lines in the LogMessage are written in a >> way that prevents interleaving by other messages. Lines are printed in the >> same order they were added to the message (regardless of level). Apart from >> the level, decorators will be identical for lines in the same LogMessage, >> and all lines will be decorated. >> >> Webrev: >> http://cr.openjdk.java.net/~mlarsson/8145934/webrev.00/ >> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8145934 >> >> Testing: >> Included tests through JPRT >> >> Thanks, >> Marcus >> > > > From magnus.ihse.bursie at oracle.com Fri Feb 12 08:17:12 2016 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 12 Feb 2016 09:17:12 +0100 Subject: RFR: 8149594 - Clean up Hotspot makefiles In-Reply-To: <56BC39BD.6010303@oracle.com> References: <56BB9E07.7030208@oracle.com> <56BBB56B.5020506@oracle.com> <56BBECF5.6050708@oracle.com> <56BC39BD.6010303@oracle.com> Message-ID: <56BD9508.6060406@oracle.com> On 2016-02-11 08:35, Jesper Wilhelmsson wrote: > Den 11/2/16 kl. 03:07, skrev David Holmes: >> Jesper, >> >> Magnus is rewriting all of the hotspot build system. Are these >> cleanups really >> worthwhile at this stage? > > The cleanups are worthwhile to me since I work on mostly Mac and port > all my changes over to the linux makefiles in bulk, and without these > cleanups my patches won't apply cleanly. 
The reason I want to push > them now even though it is a while left until the GTest stuff is done, > is that every time anyone makes a change in the makefiles (which > happens more often than I had expected) I get merge conflicts everywhere. If you want to make whitespace change, and/or structural changes that do not affect the behavior (and you can swear honest-to-god that it does not do any such thing), I have no objection. It is perhaps not the best spent time to clean up the old makefiles, but if you've already done it, what the heck. However, I'm wary of *actual* changes. First, I'm afraid that any real change may have unintended consequences. I saw your and Kim's discussion about -Wconversion. I didn't follow it entirely, but I know that we have previously actively *disabled* -Wconversion since it lead to problems. Just haphazardly enabling it again sounds like a bad idea. Second, any real changes pushed to the old hotspot make system must be re-implemented by me in the new hotspot build (until the point comes where it is merged into mainline). If they are hidden in the midst of a sea of whitespace changes (and even worse, if they are done unintentionally), that's a hopelessly time-consuming and in essence unneccessary work for me. So, I will not reject your patch, but I do require that you at least separate whitespace (and other structural but non-functional) changes into a separate fix. If you have any *real* changes, these must be tested thoroughly on all relevant platforms and compilers. /Magnus > /Jesper > >> >> David >> >> On 11/02/2016 8:10 AM, Jesper Wilhelmsson wrote: >>> Sending again to include the build-dev list. >>> /Jesper >>> >>> Den 10/2/16 kl. 21:31, skrev Jesper Wilhelmsson: >>>> Hi, >>>> >>>> Please review this cleanup of the Hotspot makefiles. >>>> >>>> Since I have been spending some time in the makefiles lately there >>>> were a few >>>> random cleanups that I couldn't stop myself from doing. Most of these >>>> are made >>>> to make the linux and bsd makefiles more alike. This has helped a lot >>>> when >>>> porting the framework to the different platforms. >>>> >>>> There are a couple of preparing alignment changes that I included >>>> in this >>>> cleanup to make the Google test patch easier to review later. >>>> >>>> There are also a couple of "real" changes: >>>> >>>> * In make/bsd/makefiles/buildtree.make we set up OS_VENDOR with the >>>> motivation >>>> that we don't include defs.make. Three lines below we include >>>> defs.make. >>>> >>>> * In make/bsd/makefiles/buildtree.make the 'install' target depends on >>>> 'install_jsigs'. There is no rule called 'install_jsigs', it is called >>>> 'install_jsig'. >>>> >>>> >>>> Another difference that I find interesting but that I have not changed >>>> in this >>>> patch (I can do that if requested) is that in the bsd version of >>>> fastdebug.make >>>> VERSION is set to "fastdebug" but in the linux version it is set to >>>> "optimized". >>>> Given the name of the makefile fastdebug seems more correct, but >>>> whichever is >>>> the correct value, shouldn't they be the same on linux and bsd? 
>>>> >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8149594 >>>> http://cr.openjdk.java.net/~jwilhelm/8149594/webrev.00/ >>>> >>>> Thanks, >>>> /Jesper From magnus.ihse.bursie at oracle.com Fri Feb 12 08:22:19 2016 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 12 Feb 2016 09:22:19 +0100 Subject: RFR: 8149594 - Clean up Hotspot makefiles In-Reply-To: <56BBB56B.5020506@oracle.com> References: <56BB9E07.7030208@oracle.com> <56BBB56B.5020506@oracle.com> Message-ID: <56BD963B.6070904@oracle.com> On 2016-02-10 23:10, Jesper Wilhelmsson wrote: > Sending again to include the build-dev list. > /Jesper > > Den 10/2/16 kl. 21:31, skrev Jesper Wilhelmsson: >> Hi, >> >> Please review this cleanup of the Hotspot makefiles. >> >> Since I have been spending some time in the makefiles lately there >> were a few >> random cleanups that I couldn't stop myself from doing. Most of these >> are made >> to make the linux and bsd makefiles more alike. This has helped a lot >> when >> porting the framework to the different platforms. >> >> There are a couple of preparing alignment changes that I included in >> this >> cleanup to make the Google test patch easier to review later. >> >> There are also a couple of "real" changes: >> >> * In make/bsd/makefiles/buildtree.make we set up OS_VENDOR with the >> motivation >> that we don't include defs.make. Three lines below we include defs.make. ... and the very first thing we do with OS_VENDOR in defs.make is: ifeq ($(OS_VENDOR), Darwin) ifneq ($(MACOSX_UNIVERSAL), true) EXPORT_LIB_ARCH_DIR = $(EXPORT_LIB_DIR) endif endif which will not be done if you change buildtree.make. The old build system is a real mess. All changes can have unintended consequences unless carefully followed all the way. Now, there's a separate patch on the way to remove the universal build (don't know the status of that one) so maybe this doesn't matter. But still. Any "real" changes to the hotspot makefiles seems like an unneccessary risk at this point, at least if mixed with tons of whitespace changes. >> * In make/bsd/makefiles/buildtree.make the 'install' target depends on >> 'install_jsigs'. There is no rule called 'install_jsigs', it is called >> 'install_jsig'. So is this a bug fix? Apparently the libjsig is build properly on macosx, so what would this change achieve? /Magnus >> >> >> Another difference that I find interesting but that I have not >> changed in this >> patch (I can do that if requested) is that in the bsd version of >> fastdebug.make >> VERSION is set to "fastdebug" but in the linux version it is set to >> "optimized". >> Given the name of the makefile fastdebug seems more correct, but >> whichever is >> the correct value, shouldn't they be the same on linux and bsd? >> >> >> https://bugs.openjdk.java.net/browse/JDK-8149594 >> http://cr.openjdk.java.net/~jwilhelm/8149594/webrev.00/ >> >> Thanks, >> /Jesper From aph at redhat.com Fri Feb 12 09:46:16 2016 From: aph at redhat.com (Andrew Haley) Date: Fri, 12 Feb 2016 09:46:16 +0000 Subject: Cleaning up undefined behaviour in HotSpot In-Reply-To: <321C74AA-D5C7-4C87-A909-7A6FDB4CAE07@oracle.com> References: <56BC7FCF.3050903@redhat.com> <321C74AA-D5C7-4C87-A909-7A6FDB4CAE07@oracle.com> Message-ID: <56BDA9E8.2020805@redhat.com> On 12/02/16 03:32, John Rose wrote: > On Feb 11, 2016, at 4:34 AM, Andrew Haley wrote: >> >> I think we need to have a policy that all UB, with the possible >> exception of a couple of things which can be worked around with >> compiler switched, gets fixed. 
>> >> Comments, please... > > It's worth study to see how bad things are and what the various > remedial tactics will cost. One remedial tactic might be to not > use compilers that aggressively DTWT for by-the-book UB. None of the Linux distros can really have that choice: it's something that's for proprietary binary versions only. All of us in the OpenJDK community, proprietary and Open Source, have to use the same sources. And besides, every C++ compiler is going to bite us all eventually: using a magic C++ compiler only delays the evil day when we have to fix this stuff. > To me it seems possible that this new wave of UB enforcement > will ultimately be rejected by the C programming community. > (Of course, I was wrong about XML getting rejected so what > do I know.) > > That said, I agree with Vladimir, that an anti-UB policy is all > to the good, and assuming it does not become our new job. I understand that. I am thinking of putting together a hit squad to fix all this stuff. But I can't do that if I have to fight people at every turn. I've seen this in other communities: "Well, I know that my code is *technically* UB, but I don't want to change it..." > I have one positive contribution: anti-UB workarounds should be > encapsulated in macros or inline functions and defined in > globalDefinitions.hpp (or equivalent). The HotSpot way is to take > trick and complex C expression patterns and write them up once, > correctly, in a header file. Neither HotSpot maintainers nor > HotSpot platform compilers have a full grasp of the subtleties of > the C expression language. Best practices for C expressions need to > be communicated in header files, not in admiring comments near > virtuoso cadenzas of C operators. I agree up to a point, but it's not always like that, and not always something you can just wrap up in a macro. Some HotSpot idioms aren't really any better or more convenient than non-UB code, they're just old-fashioned. > https://wiki.openjdk.java.net/display/HotSpot/StyleGuide#StyleGuide-Miscellaneous > >> ? Use functions from globalDefinitions.hpp when performing bitwise operations on integers. >> Do not code directly as C operators, unless they are extremely simple. >> (Examples: round_to, is_power_of_2, exact_log2.) OK. Andrew. From magnus.ihse.bursie at oracle.com Fri Feb 12 11:42:32 2016 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 12 Feb 2016 12:42:32 +0100 Subject: WE'RE HIRING: Beta testers for the new hotspot build system! Message-ID: <3B88FEE2-FEC5-47BA-91C6-C200C6342C8F@oracle.com> ... or, well, more to the truth, we're *not* hiring. At least not in the sense that we'll pay you money. But we do want beta testers anyway! :-) Your reward will be a new build system that will work on the systems and configurations you care about, right out of the box, and a fuzzy warm feeling. A document that describes how to try out the new build system, and that answers many questions that you might have, is here: http://cr.openjdk.java.net/~ihse/docs/new-hotspot-build.html The TL;DR: Clone the project forest. hg clone http://hg.openjdk.java.net/build-infra/jdk9 build-infra-jdk9 cd build-infra-jdk9 && bash get_source.sh [additional closed url] The build infra project is constantly on the move. The safest way to get to a working state is by using a tag, e.g. build-infra-beta-01. bash common/bin/hgforest.sh update -r build-infra-beta-01 Build it. This works just as the with old build. 
bash configure && make If you have questions or want to report bugs or enhancement requests, please direct them to the build-infra project list build-infra-dev at openjdk.java.net. Note that this is different from the Build Group mailing list build-dev at openjdk.java.net. Before asking questions, please check http://cr.openjdk.java.net/~ihse/docs/new-hotspot-build.html to see if they have been answered there first. /Magnus From robbin.ehn at oracle.com Fri Feb 12 12:16:08 2016 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Fri, 12 Feb 2016 13:16:08 +0100 Subject: RFR(s): 8148219: Add decorator hostname to UL Message-ID: <56BDCD08.2080202@oracle.com> Hi, please review. This adds a new decorator for hostname to UL, with minor changes to os::get_host_name and UL init. JBS: https://bugs.openjdk.java.net/browse/JDK-8148219 Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219/ Manual tested and verified no change to hs_err_pid (uses os::get_host_name when fastdebug build) and that UL prints hostname. Thanks! /Robbin From jesper.wilhelmsson at oracle.com Fri Feb 12 13:06:54 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Fri, 12 Feb 2016 14:06:54 +0100 Subject: RFR: 8149594 - Clean up Hotspot makefiles In-Reply-To: <56BB9E07.7030208@oracle.com> References: <56BB9E07.7030208@oracle.com> Message-ID: <56BDD8EE.2020104@oracle.com> Since the new Hotspot build system is closer to being finished than I expected I've decided to drop this patch and integrate GTest into the new build system instead. Thanks, /Jesper Den 10/2/16 kl. 21:31, skrev Jesper Wilhelmsson: > Hi, > > Please review this cleanup of the Hotspot makefiles. > > Since I have been spending some time in the makefiles lately there were a few > random cleanups that I couldn't stop myself from doing. Most of these are made > to make the linux and bsd makefiles more alike. This has helped a lot when > porting the framework to the different platforms. > > There are a couple of preparing alignment changes that I included in this > cleanup to make the Google test patch easier to review later. > > There are also a couple of "real" changes: > > * In make/bsd/makefiles/buildtree.make we set up OS_VENDOR with the motivation > that we don't include defs.make. Three lines below we include defs.make. > > * In make/bsd/makefiles/buildtree.make the 'install' target depends on > 'install_jsigs'. There is no rule called 'install_jsigs', it is called > 'install_jsig'. > > > Another difference that I find interesting but that I have not changed in this > patch (I can do that if requested) is that in the bsd version of fastdebug.make > VERSION is set to "fastdebug" but in the linux version it is set to "optimized". > Given the name of the makefile fastdebug seems more correct, but whichever is > the correct value, shouldn't they be the same on linux and bsd? > > > https://bugs.openjdk.java.net/browse/JDK-8149594 > http://cr.openjdk.java.net/~jwilhelm/8149594/webrev.00/ > > Thanks, > /Jesper From vladimir.x.ivanov at oracle.com Fri Feb 12 13:13:02 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Fri, 12 Feb 2016 16:13:02 +0300 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BB7DE1.4020002@oracle.com> References: <56BB7DE1.4020002@oracle.com> Message-ID: <56BDDA5E.4080808@oracle.com> Vladimir, David, Andrew, thanks again for the feedback. 
Updated version: http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01 I moved method handle adapters generation to VM init phase and added verification logic to ensure there are no modifications to the StubCodeDesc::_list after that. Also, slightly refactored java.lang.invoke initialization logic. Best regards, Vladimir Ivanov On 2/10/16 9:13 PM, Vladimir Ivanov wrote: > http://cr.openjdk.java.net/~vlivanov/8138922/webrev.00 > https://bugs.openjdk.java.net/browse/JDK-8138922 > > StubCodeDesc keeps a list of all descriptors rooted at > StubCodeDesc::_list by placing newly instantiated objects there at the > end of the constructor. Unfortunately, it doesn't guarantee that only > fully-constructed objects are visible, because compiler (or HW) can > reorder the stores. > > Since method handle adapters are generated on demand when j.l.i > framework is initialized, it's possible there are readers iterating over > the list at the moment. It's not a problem per se until everybody sees a > consistent view of the list. > > The fix is to insert a StoreStore barrier before registering an object > on the list. > > (I also considered moving MH adapter allocation to VM initialization > phase before anybody reads the list, but it's non-trivial since > MethodHandles::generate_adapters() has a number of implicit dependencies.) > > Testing: manual (verified StubCodeMark assembly), JPRT > > Thanks! > > Best regards, > Vladimir Ivanov From coleen.phillimore at oracle.com Fri Feb 12 13:28:55 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 12 Feb 2016 08:28:55 -0500 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BDDA5E.4080808@oracle.com> References: <56BB7DE1.4020002@oracle.com> <56BDDA5E.4080808@oracle.com> Message-ID: <56BDDE17.5090100@oracle.com> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01/src/share/vm/runtime/thread.cpp.udiff.html This has a collision with RFR: 8148630: Convert TraceStartupTime to Unified Logging http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01/src/share/vm/code/codeBlob.cpp.udiff.html The 'new' operators already call vm_exit_out_of_memory rather than returning null. MethodHandlesAdapterBlob may already do this. Coleen On 2/12/16 8:13 AM, Vladimir Ivanov wrote: > Vladimir, David, Andrew, thanks again for the feedback. > > Updated version: > http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01 > > I moved method handle adapters generation to VM init phase and added > verification logic to ensure there are no modifications to the > StubCodeDesc::_list after that. > > Also, slightly refactored java.lang.invoke initialization logic. > > Best regards, > Vladimir Ivanov > > On 2/10/16 9:13 PM, Vladimir Ivanov wrote: >> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.00 >> https://bugs.openjdk.java.net/browse/JDK-8138922 >> >> StubCodeDesc keeps a list of all descriptors rooted at >> StubCodeDesc::_list by placing newly instantiated objects there at the >> end of the constructor. Unfortunately, it doesn't guarantee that only >> fully-constructed objects are visible, because compiler (or HW) can >> reorder the stores. >> >> Since method handle adapters are generated on demand when j.l.i >> framework is initialized, it's possible there are readers iterating over >> the list at the moment. It's not a problem per se until everybody sees a >> consistent view of the list. >> >> The fix is to insert a StoreStore barrier before registering an object >> on the list. 
>> >> (I also considered moving MH adapter allocation to VM initialization >> phase before anybody reads the list, but it's non-trivial since >> MethodHandles::generate_adapters() has a number of implicit >> dependencies.) >> >> Testing: manual (verified StubCodeMark assembly), JPRT >> >> Thanks! >> >> Best regards, >> Vladimir Ivanov From paul.sandoz at oracle.com Fri Feb 12 13:36:35 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Fri, 12 Feb 2016 14:36:35 +0100 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <56BD24CE.8020405@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56BD24CE.8020405@oracle.com> Message-ID: <76FE8EAF-0359-464F-B1BE-D048B53FE8A7@oracle.com> > On 12 Feb 2016, at 01:18, Vladimir Kozlov wrote: > > I looked on Hotspot changes and changes are fine. My only complain is missing {} in if() statements. It was source for bugs for us before so we require to always have {}: in rewriter.cpp, method.cpp, methodHandles.cpp. > Thanks! updated in place. Paul. > Thanks, > Vladimir > > On 2/11/16 7:39 AM, Paul Sandoz wrote: >> Hi, >> >> This is the implementation review request for VarHandles. >> >> Langtools: >> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/langtools/webrev/index.html >> >> Hotspot: >> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/hotspot/webrev/index.html >> >> JDK: >> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html >> From paul.sandoz at oracle.com Fri Feb 12 13:39:17 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Fri, 12 Feb 2016 14:39:17 +0100 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> Message-ID: > On 11 Feb 2016, at 16:39, Paul Sandoz wrote: > > JDK: > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html > In case anyone is currently reviewing i inadvertently included some updates to classes in j.u.c.atomic. Those updates are now removed from the webrev. Martin and Doug will get to this area later on. Paul. From martin.doerr at sap.com Fri Feb 12 14:52:54 2016 From: martin.doerr at sap.com (Doerr, Martin) Date: Fri, 12 Feb 2016 14:52:54 +0000 Subject: RTM disabled for Linux on PPC64 LE In-Reply-To: <56BDE1EF.1020305@linux.vnet.ibm.com> References: <56BDE1EF.1020305@linux.vnet.ibm.com> Message-ID: Hi Gustavo, the reason why we disabled RTM for linux on PPC64 (big or little endian) was the problematic behavior of syscalls. The old version of the document www.kernel.org/doc/Documentation/powerpc/transactional_memory.txt said: ?Performing syscalls from within transaction is not recommended, and can lead to unpredictable results.? Transactions need to either pass completely or roll back completely without disturbing side effects of partially executed syscalls. We rely on the kernel to abort transactions if necessary. The document has changed and it may possibly work with a new linux kernel. However, we don't have such a new kernel, yet. So we can't test it at the moment. I don't know which kernel version exactly contains the change. I guess this exact version number (major + minor) should be used for enabling RTM. I haven't looked into the tests, yet. There may be a need for additional adaptations and fixes. We appreciate if you make experiments and/or contributions. 
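A major-plus-minor gate of the kind suggested above could be parsed from uname(2) roughly as follows. The threshold in the example is a placeholder only, since the exact kernel release containing the fix still has to be identified.

#include <sys/utsname.h>
#include <cstdio>

// Sketch: parse "major.minor" out of uname's release string and compare it
// against a threshold.
static bool kernel_at_least(int req_major, int req_minor) {
  struct utsname un;
  if (uname(&un) != 0) {
    return false;                       // be conservative if uname() fails
  }
  int major = 0;
  int minor = 0;
  if (std::sscanf(un.release, "%d.%d", &major, &minor) < 2) {
    return false;
  }
  return (major > req_major) || (major == req_major && minor >= req_minor);
}

// Example use when deciding whether RTM may be enabled on linux/ppc64le:
//   bool os_too_old = !kernel_at_least(4, 2);   // placeholder threshold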
Thanks and best regards, Martin -----Original Message----- From: ppc-aix-port-dev [mailto:ppc-aix-port-dev-bounces at openjdk.java.net] On Behalf Of Gustavo Romero Sent: Freitag, 12. Februar 2016 14:45 To: hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net Subject: RTM disabled for Linux on PPC64 LE Importance: High Hi, As of now (tip 1922:be58b02c11f9, jdk9/jdk9 repo) the Hotspot build for Linux on ppc64le fails due to a simple uninitialized variable error: hotspot/src/share/vm/ci/ciMethodData.hpp:585:100: error: 'data' may be used uninitialized in this function hotspot/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp:2408:78: error: 'md' may be used uninitialized in this function So this straightforward patch solves the issue: diff -r 534c50395957 src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp --- a/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Thu Jan 28 15:42:23 2016 -0800 +++ b/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Mon Feb 08 17:13:14 2016 -0200 @@ -2321,8 +2321,8 @@ if (reg_conflict) { obj = dst; } } - ciMethodData* md; - ciProfileData* data; + ciMethodData* md = NULL; + ciProfileData* data = NULL; int mdo_offset_bias = 0; if (should_profile) { ciMethod* method = op->profiled_method(); However, after the build, I realized that RTM is still disabled for Linux on ppc64le, failing 25 tests in the compiler/rtm suite: http://hastebin.com/raw/ohoxiwaqih Hence after applying the following patches that enable RTM for Linux on ppc64le: diff -r 266fa9bb5297 src/cpu/ppc/vm/vm_version_ppc.cpp --- a/src/cpu/ppc/vm/vm_version_ppc.cpp Thu Feb 04 16:48:39 2016 -0800 +++ b/src/cpu/ppc/vm/vm_version_ppc.cpp Fri Feb 12 10:55:46 2016 -0200 @@ -255,7 +255,9 @@ } #endif #ifdef linux - // TODO: check kernel version (we currently have too old versions only) + if (os::Linux::os_version() >= 4) { // at least Linux kernel version 4 + os_too_old = false; + } #endif if (os_too_old) { vm_exit_during_initialization("RTM is not supported on this OS version."); diff -r 266fa9bb5297 src/os/linux/vm/os_linux.cpp --- a/src/os/linux/vm/os_linux.cpp Thu Feb 04 16:48:39 2016 -0800 +++ b/src/os/linux/vm/os_linux.cpp Fri Feb 12 10:58:10 2016 -0200 @@ -135,6 +135,7 @@ int os::Linux::_page_size = -1; const int os::Linux::_vm_default_page_size = (8 * K); bool os::Linux::_supports_fast_thread_cpu_time = false; +uint32_t os::Linux::_os_version = 0; const char * os::Linux::_glibc_version = NULL; const char * os::Linux::_libpthread_version = NULL; pthread_condattr_t os::Linux::_condattr[1]; @@ -4332,6 +4333,21 @@ return (tp.tv_sec * NANOSECS_PER_SEC) + tp.tv_nsec; } +void os::Linux::initialize_os_info() { + assert(_os_version == 0, "OS info already initialized"); + + struct utsname _uname; + + uname(&_uname); // Not sure yet how to deal with ret == -1 + _os_version = atoi(_uname.release); +} + +uint32_t os::Linux::os_version() { + assert(_os_version != 0, "not initialized"); + return _os_version; +} + + ///// // glibc on Linux platform uses non-documented flag // to indicate, that some special sort of signal @@ -4553,6 +4569,7 @@ init_page_sizes((size_t) Linux::page_size()); Linux::initialize_system_info(); + Linux::initialize_os_info(); // main_thread points to the aboriginal thread Linux::_main_thread = pthread_self(); diff -r 266fa9bb5297 src/os/linux/vm/os_linux.hpp --- a/src/os/linux/vm/os_linux.hpp Thu Feb 04 16:48:39 2016 -0800 +++ b/src/os/linux/vm/os_linux.hpp Fri Feb 12 10:59:01 2016 -0200 @@ -55,7 +55,7 @@ static bool _supports_fast_thread_cpu_time; static GrowableArray* _cpu_to_node; - + static uint32_t _os_version;
protected: static julong _physical_memory; @@ -198,6 +198,9 @@ static jlong fast_thread_cpu_time(clockid_t clockid); + static void initialize_os_info(); + static uint32_t os_version(); + // pthread_cond clock suppport private: static pthread_condattr_t _condattr[1]; 23 tests are now passing: http://hastebin.com/raw/oyicagusod Is there a reason to let RTM disabled for Linux on ppc64le by now? Could somebody explain what is currently missing on PPC64 LE RTM implementation in order to make all RTM tests pass? Thank you. Regards, -- Gustavo Romero From vladimir.x.ivanov at oracle.com Fri Feb 12 17:09:18 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Fri, 12 Feb 2016 20:09:18 +0300 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BDDE17.5090100@oracle.com> References: <56BB7DE1.4020002@oracle.com> <56BDDA5E.4080808@oracle.com> <56BDDE17.5090100@oracle.com> Message-ID: <56BE11BE.3090808@oracle.com> Coleen, On 2/12/16 4:28 PM, Coleen Phillimore wrote: > > http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01/src/share/vm/runtime/thread.cpp.udiff.html > > > This has a collision with > > RFR: 8148630: Convert TraceStartupTime to Unified Logging Removed. I asked Rachel to cover java.lang.invoke case. > > http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01/src/share/vm/code/codeBlob.cpp.udiff.html > > > The 'new' operators already call vm_exit_out_of_memory rather than > returning null. MethodHandlesAdapterBlob may already do this. I don't see that happening in BufferBlob::operator new (which is called in ). It allocates right in the code cache and returns NULL if allocation fails. I replicate the code from other stub generators, e.g.: void StubRoutines::initialize2() { ... _code2 = BufferBlob::create("StubRoutines (2)", code_size2); if (_code2 == NULL) { vm_exit_out_of_memory(code_size2, OOM_MALLOC_ERROR, "CodeCache: no room for StubRoutines (2)"); } Or do you suggest to add MethodHandlesAdapterBlob::operator new and move the check there? Updated webrev: http://cr.openjdk.java.net/~vlivanov/8138922/webrev.02 Had to move StubCodeDesc::freeze() call later in the init sequence: JFR also allocates some stubs. Best regards, Vladimir Ivanov > > Coleen > > On 2/12/16 8:13 AM, Vladimir Ivanov wrote: >> Vladimir, David, Andrew, thanks again for the feedback. >> >> Updated version: >> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01 >> >> I moved method handle adapters generation to VM init phase and added >> verification logic to ensure there are no modifications to the >> StubCodeDesc::_list after that. >> >> Also, slightly refactored java.lang.invoke initialization logic. >> >> Best regards, >> Vladimir Ivanov >> >> On 2/10/16 9:13 PM, Vladimir Ivanov wrote: >>> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.00 >>> https://bugs.openjdk.java.net/browse/JDK-8138922 >>> >>> StubCodeDesc keeps a list of all descriptors rooted at >>> StubCodeDesc::_list by placing newly instantiated objects there at the >>> end of the constructor. Unfortunately, it doesn't guarantee that only >>> fully-constructed objects are visible, because compiler (or HW) can >>> reorder the stores. >>> >>> Since method handle adapters are generated on demand when j.l.i >>> framework is initialized, it's possible there are readers iterating over >>> the list at the moment. It's not a problem per se until everybody sees a >>> consistent view of the list. 
>>> >>> The fix is to insert a StoreStore barrier before registering an object >>> on the list. >>> >>> (I also considered moving MH adapter allocation to VM initialization >>> phase before anybody reads the list, but it's non-trivial since >>> MethodHandles::generate_adapters() has a number of implicit >>> dependencies.) >>> >>> Testing: manual (verified StubCodeMark assembly), JPRT >>> >>> Thanks! >>> >>> Best regards, >>> Vladimir Ivanov > From coleen.phillimore at oracle.com Fri Feb 12 19:25:11 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 12 Feb 2016 14:25:11 -0500 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BE11BE.3090808@oracle.com> References: <56BB7DE1.4020002@oracle.com> <56BDDA5E.4080808@oracle.com> <56BDDE17.5090100@oracle.com> <56BE11BE.3090808@oracle.com> Message-ID: <56BE3197.6050100@oracle.com> On 2/12/16 12:09 PM, Vladimir Ivanov wrote: > Coleen, > > > > > On 2/12/16 4:28 PM, Coleen Phillimore wrote: >> >> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01/src/share/vm/runtime/thread.cpp.udiff.html >> >> >> >> This has a collision with >> >> RFR: 8148630: Convert TraceStartupTime to Unified Logging > Removed. I asked Rachel to cover java.lang.invoke case. Good, thank you. That makes it easy since your change and hers are going in different repositories. > >> >> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01/src/share/vm/code/codeBlob.cpp.udiff.html >> >> >> >> The 'new' operators already call vm_exit_out_of_memory rather than >> returning null. MethodHandlesAdapterBlob may already do this. > I don't see that happening in BufferBlob::operator new (which is > called in ). It allocates right in the code cache and returns NULL if > allocation fails. > > I replicate the code from other stub generators, e.g.: > void StubRoutines::initialize2() { > ... > _code2 = BufferBlob::create("StubRoutines (2)", code_size2); > if (_code2 == NULL) { > vm_exit_out_of_memory(code_size2, OOM_MALLOC_ERROR, "CodeCache: > no room for StubRoutines (2)"); > } > > Or do you suggest to add MethodHandlesAdapterBlob::operator new and > move the check there? > No. If the 'new' this ultimately calls doesn't already call vm_exit_out_of_memory, then you've done the right thing. I don't see any other issues, but I don't really know this code so I'm a provisional review. Coleen > Updated webrev: > http://cr.openjdk.java.net/~vlivanov/8138922/webrev.02 > > Had to move StubCodeDesc::freeze() call later in the init sequence: > JFR also allocates some stubs. > > Best regards, > Vladimir Ivanov > >> >> Coleen >> >> On 2/12/16 8:13 AM, Vladimir Ivanov wrote: >>> Vladimir, David, Andrew, thanks again for the feedback. >>> >>> Updated version: >>> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01 >>> >>> I moved method handle adapters generation to VM init phase and added >>> verification logic to ensure there are no modifications to the >>> StubCodeDesc::_list after that. >>> >>> Also, slightly refactored java.lang.invoke initialization logic. >>> >>> Best regards, >>> Vladimir Ivanov >>> >>> On 2/10/16 9:13 PM, Vladimir Ivanov wrote: >>>> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.00 >>>> https://bugs.openjdk.java.net/browse/JDK-8138922 >>>> >>>> StubCodeDesc keeps a list of all descriptors rooted at >>>> StubCodeDesc::_list by placing newly instantiated objects there at the >>>> end of the constructor. 
Unfortunately, it doesn't guarantee that only >>>> fully-constructed objects are visible, because compiler (or HW) can >>>> reorder the stores. >>>> >>>> Since method handle adapters are generated on demand when j.l.i >>>> framework is initialized, it's possible there are readers iterating >>>> over >>>> the list at the moment. It's not a problem per se until everybody >>>> sees a >>>> consistent view of the list. >>>> >>>> The fix is to insert a StoreStore barrier before registering an object >>>> on the list. >>>> >>>> (I also considered moving MH adapter allocation to VM initialization >>>> phase before anybody reads the list, but it's non-trivial since >>>> MethodHandles::generate_adapters() has a number of implicit >>>> dependencies.) >>>> >>>> Testing: manual (verified StubCodeMark assembly), JPRT >>>> >>>> Thanks! >>>> >>>> Best regards, >>>> Vladimir Ivanov >> From vladimir.kozlov at oracle.com Fri Feb 12 19:28:19 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 12 Feb 2016 11:28:19 -0800 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BE11BE.3090808@oracle.com> References: <56BB7DE1.4020002@oracle.com> <56BDDA5E.4080808@oracle.com> <56BDDE17.5090100@oracle.com> <56BE11BE.3090808@oracle.com> Message-ID: <56BE3253.8030804@oracle.com> webrev.02 is fine for me. Thanks, Vladimir On 2/12/16 9:09 AM, Vladimir Ivanov wrote: > Coleen, > > > > > On 2/12/16 4:28 PM, Coleen Phillimore wrote: >> >> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01/src/share/vm/runtime/thread.cpp.udiff.html >> >> >> This has a collision with >> >> RFR: 8148630: Convert TraceStartupTime to Unified Logging > Removed. I asked Rachel to cover java.lang.invoke case. > >> >> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01/src/share/vm/code/codeBlob.cpp.udiff.html >> >> >> The 'new' operators already call vm_exit_out_of_memory rather than >> returning null. MethodHandlesAdapterBlob may already do this. > I don't see that happening in BufferBlob::operator new (which is called in ). It allocates right in the code cache and > returns NULL if allocation fails. > > I replicate the code from other stub generators, e.g.: > void StubRoutines::initialize2() { > ... > _code2 = BufferBlob::create("StubRoutines (2)", code_size2); > if (_code2 == NULL) { > vm_exit_out_of_memory(code_size2, OOM_MALLOC_ERROR, "CodeCache: no room for StubRoutines (2)"); > } > > Or do you suggest to add MethodHandlesAdapterBlob::operator new and move the check there? > > Updated webrev: > http://cr.openjdk.java.net/~vlivanov/8138922/webrev.02 > > Had to move StubCodeDesc::freeze() call later in the init sequence: JFR also allocates some stubs. > > Best regards, > Vladimir Ivanov > >> >> Coleen >> >> On 2/12/16 8:13 AM, Vladimir Ivanov wrote: >>> Vladimir, David, Andrew, thanks again for the feedback. >>> >>> Updated version: >>> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01 >>> >>> I moved method handle adapters generation to VM init phase and added >>> verification logic to ensure there are no modifications to the >>> StubCodeDesc::_list after that. >>> >>> Also, slightly refactored java.lang.invoke initialization logic. 
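(Roughly what such verification could look like -- a sketch assuming a simple static flag; apart from StubCodeDesc::freeze(), which is referred to elsewhere in this thread, the names here are invented for illustration and not taken from the webrev:

    // set once the VM init sequence has generated all stubs
    static bool _frozen;
    static void freeze() { _frozen = true; }

    StubCodeDesc(/* ... */) {
      assert(!_frozen, "no stub code descriptors may be created after freeze()");
      // ... field initialization ...
      _next = _list;
      _list = this;   // safe now: all registration happens during VM init
    }

Once freeze() has been called at the end of init, any late registration trips the assert instead of racing with readers of the list.)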
>>> >>> Best regards, >>> Vladimir Ivanov >>> >>> On 2/10/16 9:13 PM, Vladimir Ivanov wrote: >>>> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.00 >>>> https://bugs.openjdk.java.net/browse/JDK-8138922 >>>> >>>> StubCodeDesc keeps a list of all descriptors rooted at >>>> StubCodeDesc::_list by placing newly instantiated objects there at the >>>> end of the constructor. Unfortunately, it doesn't guarantee that only >>>> fully-constructed objects are visible, because compiler (or HW) can >>>> reorder the stores. >>>> >>>> Since method handle adapters are generated on demand when j.l.i >>>> framework is initialized, it's possible there are readers iterating over >>>> the list at the moment. It's not a problem per se until everybody sees a >>>> consistent view of the list. >>>> >>>> The fix is to insert a StoreStore barrier before registering an object >>>> on the list. >>>> >>>> (I also considered moving MH adapter allocation to VM initialization >>>> phase before anybody reads the list, but it's non-trivial since >>>> MethodHandles::generate_adapters() has a number of implicit >>>> dependencies.) >>>> >>>> Testing: manual (verified StubCodeMark assembly), JPRT >>>> >>>> Thanks! >>>> >>>> Best regards, >>>> Vladimir Ivanov >> From vladimir.kozlov at oracle.com Fri Feb 12 19:34:02 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 12 Feb 2016 11:34:02 -0800 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <76FE8EAF-0359-464F-B1BE-D048B53FE8A7@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56BD24CE.8020405@oracle.com> <76FE8EAF-0359-464F-B1BE-D048B53FE8A7@oracle.com> Message-ID: <56BE33AA.7090800@oracle.com> On 2/12/16 5:36 AM, Paul Sandoz wrote: > >> On 12 Feb 2016, at 01:18, Vladimir Kozlov wrote: >> >> I looked on Hotspot changes and changes are fine. My only complain is missing {} in if() statements. It was source for bugs for us before so we require to always have {}: in rewriter.cpp, method.cpp, methodHandles.cpp. >> > > Thanks! updated in place. > Paul. Good. Vladimir > >> Thanks, >> Vladimir >> >> On 2/11/16 7:39 AM, Paul Sandoz wrote: >>> Hi, >>> >>> This is the implementation review request for VarHandles. >>> >>> Langtools: >>> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/langtools/webrev/index.html >>> >>> Hotspot: >>> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/hotspot/webrev/index.html >>> >>> JDK: >>> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html >>> > From kirk at kodewerk.com Sun Feb 14 18:58:34 2016 From: kirk at kodewerk.com (kirk at kodewerk.com) Date: Sun, 14 Feb 2016 19:58:34 +0100 Subject: WE'RE HIRING: Beta testers for the new hotspot build system! In-Reply-To: <3B88FEE2-FEC5-47BA-91C6-C200C6342C8F@oracle.com> References: <3B88FEE2-FEC5-47BA-91C6-C200C6342C8F@oracle.com> Message-ID: <070826EE-D602-4E31-B2C6-0F9D22BAE959@kodewerk.com> Hi Magnus, I?ve tried this new build process on my Mac (OSX 10.11.3 with all latest patches, xcode 7.2.1 with all latest patches) and I get this error. configure: error: Could not find freetype! /Users/kirk/Projects/OpenJDK/build-infra-jdk9/common/autoconf/generated-configure.sh: line 82: 5: Bad file descriptor configure exiting with result code 1 freetype is on my machine. I also went and rebuilt freetype version 6.3.2 (configure, make, sudo make install). It all seemed to end well but the build still failed with the same message. I?ve not dug deeper as of yet. 
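(One thing that may be worth checking while digging: the top-level configure normally accepts explicit freetype overrides, e.g. something along the lines of

    bash configure --with-freetype-include=/usr/local/include/freetype2 --with-freetype-lib=/usr/local/lib

-- note that the exact option names in this build-infra forest and the /usr/local paths for the rebuilt freetype are assumptions here; "bash configure --help" will show what the script actually supports.)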
I have logs from the freetype build and I've turned on tracing in the configure script (set -xv). It's a lot of output so I'll only send it if you're interested in taking a peek. Kind regards, Kirk Pepperdine > On Feb 12, 2016, at 12:42 PM, Magnus Ihse Bursie wrote: > > ... or, well, more to the truth, we're *not* hiring. At least not in the sense that we'll pay you money. But we do want beta testers anyway! :-) Your reward will be a new build system that will work on the systems and configurations you care about, right out of the box, and a fuzzy warm feeling. > > A document that describes how to try out the new build system, and that answers many questions that you might have, is here: > > http://cr.openjdk.java.net/~ihse/docs/new-hotspot-build.html > > The TL;DR: > Clone the project forest. > > hg clone http://hg.openjdk.java.net/build-infra/jdk9 build-infra-jdk9 > cd build-infra-jdk9 && bash get_source.sh [additional closed url] > The build infra project is constantly on the move. The safest way to get to a working state is by using a tag, e.g. build-infra-beta-01. > > bash common/bin/hgforest.sh update -r build-infra-beta-01 > Build it. This works just as with the old build. > > bash configure && make > If you have questions or want to report bugs or enhancement requests, please direct them to the build-infra project list build-infra-dev at openjdk.java.net . Note that this is different from the Build Group mailing list build-dev at openjdk.java.net . > > Before asking questions, please check http://cr.openjdk.java.net/~ihse/docs/new-hotspot-build.html to see if they have been answered there first. > > /Magnus > From david.holmes at oracle.com Mon Feb 15 03:13:46 2016 From: david.holmes at oracle.com (David Holmes) Date: Mon, 15 Feb 2016 13:13:46 +1000 Subject: (XS) RFR: 8149427: Remove .class files from the hotspot repo .hgignore file In-Reply-To: <9E7B5E46-4FF0-4BB7-AC4A-E0F199B4061F@oracle.com> References: <56B97DB7.1080402@oracle.com> <56B98F88.1080401@oracle.com> <9E7B5E46-4FF0-4BB7-AC4A-E0F199B4061F@oracle.com> Message-ID: <56C1426A.8050709@oracle.com> Is Michael away? If I don't hear anything to the contrary in the next 24 hours I will push this change. Thanks, David On 10/02/2016 8:54 AM, Christian Thalinger wrote: > Ask Michael. > >> On Feb 8, 2016, at 9:04 PM, Mikael Vidstedt wrote: >> >> >> Looks good. It would of course be great to understand why it was added to start with, but even so I don't think it should be there (and we should instead fix whatever caused it to be added). >> >> Cheers, >> Mikael >> >> On 2016-02-08 21:48, David Holmes wrote: >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8149427 >>> >>> webrev: http://cr.openjdk.java.net/~dholmes/8149427/webrev/ >>> >>> JDK-6900757 added the following to .hgignore: >>> >>> +\.class$ >>> >>> but it is unclear why this was done. This setting can cause problems when jtreg testing leaves class files in unexpected places, and -stree JPRT submissions then fail testing in strange ways. "hg status" doesn't show these errant files because of the entry in .hgignore. >>> >>> I propose to remove the entry from the .hgignore file.
>>> >>> Thanks, >>> David >>> >>> >>> patch: >>> >>> --- old/./.hgignore 2016-02-09 00:43:28.786882859 -0500 >>> +++ new/./.hgignore 2016-02-09 00:43:27.254796576 -0500 >>> @@ -10,7 +10,6 @@ >>> .igv.log >>> ^.hgtip >>> .DS_Store >>> -\.class$ >>> ^\.mx.jvmci/env >>> ^\.mx.jvmci/.*\.pyc >>> ^\.mx.jvmci/eclipse-launches/.* >>> >>> --- >>> >> > From david.holmes at oracle.com Mon Feb 15 03:35:33 2016 From: david.holmes at oracle.com (David Holmes) Date: Mon, 15 Feb 2016 13:35:33 +1000 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BDDA5E.4080808@oracle.com> References: <56BB7DE1.4020002@oracle.com> <56BDDA5E.4080808@oracle.com> Message-ID: <56C14785.3030903@oracle.com> Hi Vladimir, On 12/02/2016 11:13 PM, Vladimir Ivanov wrote: > Vladimir, David, Andrew, thanks again for the feedback. > > Updated version: > http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01 > > I moved method handle adapters generation to VM init phase and added > verification logic to ensure there are no modifications to the > StubCodeDesc::_list after that. That sounds good in principle, but this is not an area of the code I am familiar with. Thanks, David > Also, slightly refactored java.lang.invoke initialization logic. > > Best regards, > Vladimir Ivanov > > On 2/10/16 9:13 PM, Vladimir Ivanov wrote: >> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.00 >> https://bugs.openjdk.java.net/browse/JDK-8138922 >> >> StubCodeDesc keeps a list of all descriptors rooted at >> StubCodeDesc::_list by placing newly instantiated objects there at the >> end of the constructor. Unfortunately, it doesn't guarantee that only >> fully-constructed objects are visible, because compiler (or HW) can >> reorder the stores. >> >> Since method handle adapters are generated on demand when j.l.i >> framework is initialized, it's possible there are readers iterating over >> the list at the moment. It's not a problem per se until everybody sees a >> consistent view of the list. >> >> The fix is to insert a StoreStore barrier before registering an object >> on the list. >> >> (I also considered moving MH adapter allocation to VM initialization >> phase before anybody reads the list, but it's non-trivial since >> MethodHandles::generate_adapters() has a number of implicit >> dependencies.) >> >> Testing: manual (verified StubCodeMark assembly), JPRT >> >> Thanks! >> >> Best regards, >> Vladimir Ivanov From david.holmes at oracle.com Mon Feb 15 05:16:26 2016 From: david.holmes at oracle.com (David Holmes) Date: Mon, 15 Feb 2016 15:16:26 +1000 Subject: RFR(s): 8148219: Add decorator hostname to UL In-Reply-To: <56BDCD08.2080202@oracle.com> References: <56BDCD08.2080202@oracle.com> Message-ID: <56C15F2A.7020305@oracle.com> Hi Robbin, A couple of minor comments ... On 12/02/2016 10:16 PM, Robbin Ehn wrote: > Hi, please review. > > This adds a new decorator for hostname to UL, with minor changes to > os::get_host_name and UL init. > > JBS: https://bugs.openjdk.java.net/browse/JDK-8148219 > Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219/ src/os/windows/vm/os_windows.cpp #ifdef ASSERT char buffer[1024]; + const char* hostname = "N/A"; st->print("HostName: "); + #ifndef PRODUCT If ASSERT is true then PRODUCT should never be true - we don't do PRODUCT builds with asserts enabled. That aside I don't see the point of the changes you made in this function. ?? 
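(For anyone reading along, the build-flavor relationship being relied on here is roughly the following -- a sketch, not actual source:

    // product builds:           PRODUCT defined, ASSERT not defined
    // debug/fastdebug builds:   ASSERT defined,  PRODUCT not defined
    #ifdef ASSERT
      // code here is already compiled only when PRODUCT is not defined,
      // so a nested "#ifndef PRODUCT" can never exclude anything extra
    #endif

)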
--- src/share/vm/runtime/os.hpp There's no need to move the get_host_name declaration or note that it is used by UL. Just get rid of the PRODUCT_RETURN in the existing code. --- Please update all copyright years as needed. Where there is a single year like "2015, " it becomes "2015, 2016, ". Thanks, David ----- > Manual tested and verified no change to hs_err_pid (uses > os::get_host_name when fastdebug build) and that UL prints hostname. > > Thanks! > > /Robbin From david.holmes at oracle.com Mon Feb 15 05:32:55 2016 From: david.holmes at oracle.com (David Holmes) Date: Mon, 15 Feb 2016 15:32:55 +1000 Subject: RFR: 8144578: TestOptionsWithRanges test only ever uses the default collector In-Reply-To: <56BC9295.8050806@oracle.com> References: <5696854D.1000604@oracle.com> <56969C91.9050003@oracle.com> <5697A9D3.1000700@oracle.com> <56BC9295.8050806@oracle.com> Message-ID: <56C16307.7030108@oracle.com> Hi Dmitry, On 11/02/2016 11:54 PM, Dmitry Dmitriev wrote: > Hello, > > Please, need a Reviewer for that change. > I uploaded updated webrev.02: > http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.02/ > > Difference from webrev.01: I removed excluding of MinTLABSize and > MarkSweepAlwaysCompactCount options from testing because underlying > problems were fixed. JVMOption.java: The refactoring tended to obscure the primary change. But as you have refactored could you also remove the duplicated call to getErrorMessageCommandLine(value) please :) --- JVMOptionsUtils.java Is there not a more direct way to get the current GC argument from jtreg ? --- JVMStartup.java Not sure what relevance the WeakRef usage and System.gc really has here. Not sure why weakRef is volatile, nor createWeakRef is synchronized ?? Thanks, David ----- > Thanks, > Dmitry > > On 14.01.2016 16:59, Dmitry Dmitriev wrote: >> Hi Sangheon, >> >> Thank you for the review! Updated webrev: >> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.01/ >> >> Comments inline. >> >> On 13.01.2016 21:50, sangheon wrote: >>> Hi Dmitry, >>> >>> Thank you for fixing this. >>> Overall seems good. >>> >>> -------------------------------------------------------------------- >>> test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>> >>> 87 /* >>> 88 * JDK-8144578 >>> 89 * Temporarily remove testing of max range for ParGCArrayScanChunk >>> because >>> 90 * JVM can hang when ParGCArrayScanChunk=4294967296 and ParallelGC >>> is used >>> 91 */ >>> 92 excludeTestMaxRange("ParGCArrayScanChunk"); >>> >>> issue number should be 8145204. >> Fixed. >>> >>> -------------------------------------------------------------------- >>> test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java >>> >>> line 181 >>> >>> - if (name.startsWith("G1")) { >>> - option.addPrepend("-XX:+UseG1GC"); >>> - } >>> - >>> - if (name.startsWith("CMS")) { >>> - option.addPrepend("-XX:+UseConcMarkSweepGC"); >>> - } >>> - >>> >>> Is this change really needed for dedicated gc flags(starting with >>> "G1" or "CMS")? >>> I thought this CR is targeted for non-dedicated gc flags such as >>> TLABWasteIncrement. >> I return deleted lines. >> >> Thanks, >> Dmitry >>> >>> And if you still think that above lines should be removed, please >>> remove line 224 as well. 
>>> >>> 224 case "NewSizeThreadIncrease": >>> 225 option.addPrepend("-XX:+UseSerialGC"); >>> >>> >>> Thanks, >>> Sangheon >>> >>> >>> On 01/13/2016 09:11 AM, Dmitry Dmitriev wrote: >>>> Hello, >>>> >>>> Please review small enhancement to the command line option >>>> validation test framework which allow to run test with different GCs. >>>> Few comments: >>>> 1) Code which executed for testing was moved from >>>> JVMOptionsUtils.java to separate class(JVMStartup.java) to avoid >>>> overhead at java start-up for determining vm and gc type. >>>> 2) runJavaWithParam method in JVMOption.java was refactored to avoid >>>> code duplication. >>>> >>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8144578 >>>> webrev.00: http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.00/ >>>> >>>> Testing: tested on all platforms with different gc by RBT, failed >>>> flags were temporary removed from testing in TestOptionsWithRanges.java >>>> >>>> Thanks, >>>> Dmitry >>> >> > From robbin.ehn at oracle.com Mon Feb 15 08:01:15 2016 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 15 Feb 2016 09:01:15 +0100 Subject: RFR(s): 8148219: Add decorator hostname to UL In-Reply-To: <56C15F2A.7020305@oracle.com> References: <56BDCD08.2080202@oracle.com> <56C15F2A.7020305@oracle.com> Message-ID: <56C185CB.7000109@oracle.com> Thanks David for looking at this! On 02/15/2016 06:16 AM, David Holmes wrote: > Hi Robbin, > > A couple of minor comments ... > > On 12/02/2016 10:16 PM, Robbin Ehn wrote: >> Hi, please review. >> >> This adds a new decorator for hostname to UL, with minor changes to >> os::get_host_name and UL init. >> >> JBS: https://bugs.openjdk.java.net/browse/JDK-8148219 >> Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219/ > > src/os/windows/vm/os_windows.cpp > > #ifdef ASSERT > char buffer[1024]; > + const char* hostname = "N/A"; > st->print("HostName: "); > + #ifndef PRODUCT > > If ASSERT is true then PRODUCT should never be true - we don't do > PRODUCT builds with asserts enabled. > > That aside I don't see the point of the changes you made in this > function. ?? > Yes > --- > > src/share/vm/runtime/os.hpp > > There's no need to move the get_host_name declaration or note that it is > used by UL. Just get rid of the PRODUCT_RETURN in the existing code. > Ok > --- > > Please update all copyright years as needed. Where there is a single > year like "2015, " it becomes "2015, 2016, ". Sure Thanks, I'll update! /Robbin > > Thanks, > David > ----- > > >> Manual tested and verified no change to hs_err_pid (uses >> os::get_host_name when fastdebug build) and that UL prints hostname. >> >> Thanks! >> >> /Robbin From dmitry.dmitriev at oracle.com Mon Feb 15 09:41:30 2016 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Mon, 15 Feb 2016 12:41:30 +0300 Subject: RFR: 8144578: TestOptionsWithRanges test only ever uses the default collector In-Reply-To: <56C16307.7030108@oracle.com> References: <5696854D.1000604@oracle.com> <56969C91.9050003@oracle.com> <5697A9D3.1000700@oracle.com> <56BC9295.8050806@oracle.com> <56C16307.7030108@oracle.com> Message-ID: <56C19D4A.9080705@oracle.com> Hello David, Thank you for looking into this. On 15.02.2016 8:32, David Holmes wrote: > Hi Dmitry, > > On 11/02/2016 11:54 PM, Dmitry Dmitriev wrote: >> Hello, >> >> Please, need a Reviewer for that change. 
>> I uploaded updated webrev.02: >> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.02/ >> >> Difference from webrev.01: I removed excluding of MinTLABSize and >> MarkSweepAlwaysCompactCount options from testing because underlying >> problems were fixed. > > JVMOption.java: > > The refactoring tended to obscure the primary change. But as you have > refactored could you also remove the duplicated call to > getErrorMessageCommandLine(value) please :) Yes, thanks. Will do! > > --- > > JVMOptionsUtils.java > > Is there not a more direct way to get the current GC argument from > jtreg ? The one of alternatives was to parse jtreg properties with command line options, but was decided not to depend on command line options in this case. > > --- > > JVMStartup.java > > Not sure what relevance the WeakRef usage and System.gc really has > here. Not sure why weakRef is volatile, nor createWeakRef is > synchronized ?? The idea was to make this class slightly more complex(instead of using simple HelloWorld before) and therefore I use volatile and synchronized. Also I used WeakRef and System.gc() to ensure that gc was happened. Thank you, Dmitry > > Thanks, > David > ----- > >> Thanks, >> Dmitry >> >> On 14.01.2016 16:59, Dmitry Dmitriev wrote: >>> Hi Sangheon, >>> >>> Thank you for the review! Updated webrev: >>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.01/ >>> >>> Comments inline. >>> >>> On 13.01.2016 21:50, sangheon wrote: >>>> Hi Dmitry, >>>> >>>> Thank you for fixing this. >>>> Overall seems good. >>>> >>>> -------------------------------------------------------------------- >>>> test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>>> >>>> 87 /* >>>> 88 * JDK-8144578 >>>> 89 * Temporarily remove testing of max range for ParGCArrayScanChunk >>>> because >>>> 90 * JVM can hang when ParGCArrayScanChunk=4294967296 and ParallelGC >>>> is used >>>> 91 */ >>>> 92 excludeTestMaxRange("ParGCArrayScanChunk"); >>>> >>>> issue number should be 8145204. >>> Fixed. >>>> >>>> -------------------------------------------------------------------- >>>> test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java >>>> >>>> >>>> line 181 >>>> >>>> - if (name.startsWith("G1")) { >>>> - option.addPrepend("-XX:+UseG1GC"); >>>> - } >>>> - >>>> - if (name.startsWith("CMS")) { >>>> - option.addPrepend("-XX:+UseConcMarkSweepGC"); >>>> - } >>>> - >>>> >>>> Is this change really needed for dedicated gc flags(starting with >>>> "G1" or "CMS")? >>>> I thought this CR is targeted for non-dedicated gc flags such as >>>> TLABWasteIncrement. >>> I return deleted lines. >>> >>> Thanks, >>> Dmitry >>>> >>>> And if you still think that above lines should be removed, please >>>> remove line 224 as well. >>>> >>>> 224 case "NewSizeThreadIncrease": >>>> 225 option.addPrepend("-XX:+UseSerialGC"); >>>> >>>> >>>> Thanks, >>>> Sangheon >>>> >>>> >>>> On 01/13/2016 09:11 AM, Dmitry Dmitriev wrote: >>>>> Hello, >>>>> >>>>> Please review small enhancement to the command line option >>>>> validation test framework which allow to run test with different GCs. >>>>> Few comments: >>>>> 1) Code which executed for testing was moved from >>>>> JVMOptionsUtils.java to separate class(JVMStartup.java) to avoid >>>>> overhead at java start-up for determining vm and gc type. >>>>> 2) runJavaWithParam method in JVMOption.java was refactored to avoid >>>>> code duplication. 
>>>>> >>>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8144578 >>>>> webrev.00: http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.00/ >>>>> >>>>> Testing: tested on all platforms with different gc by RBT, failed >>>>> flags were temporary removed from testing in >>>>> TestOptionsWithRanges.java >>>>> >>>>> Thanks, >>>>> Dmitry >>>> >>> >> From david.holmes at oracle.com Mon Feb 15 09:59:40 2016 From: david.holmes at oracle.com (David Holmes) Date: Mon, 15 Feb 2016 19:59:40 +1000 Subject: RFR: 8144578: TestOptionsWithRanges test only ever uses the default collector In-Reply-To: <56C19D4A.9080705@oracle.com> References: <5696854D.1000604@oracle.com> <56969C91.9050003@oracle.com> <5697A9D3.1000700@oracle.com> <56BC9295.8050806@oracle.com> <56C16307.7030108@oracle.com> <56C19D4A.9080705@oracle.com> Message-ID: <56C1A18C.6040400@oracle.com> On 15/02/2016 7:41 PM, Dmitry Dmitriev wrote: > Hello David, > > Thank you for looking into this. > > On 15.02.2016 8:32, David Holmes wrote: >> Hi Dmitry, >> >> On 11/02/2016 11:54 PM, Dmitry Dmitriev wrote: >>> Hello, >>> >>> Please, need a Reviewer for that change. >>> I uploaded updated webrev.02: >>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.02/ >>> >>> Difference from webrev.01: I removed excluding of MinTLABSize and >>> MarkSweepAlwaysCompactCount options from testing because underlying >>> problems were fixed. >> >> JVMOption.java: >> >> The refactoring tended to obscure the primary change. But as you have >> refactored could you also remove the duplicated call to >> getErrorMessageCommandLine(value) please :) > Yes, thanks. Will do! >> >> --- >> >> JVMOptionsUtils.java >> >> Is there not a more direct way to get the current GC argument from >> jtreg ? > The one of alternatives was to parse jtreg properties with command line > options, but was decided not to depend on command line options in this > case. Okay. Seems complicated but it is what it is. >> >> --- >> >> JVMStartup.java >> >> Not sure what relevance the WeakRef usage and System.gc really has >> here. Not sure why weakRef is volatile, nor createWeakRef is >> synchronized ?? > The idea was to make this class slightly more complex(instead of using > simple HelloWorld before) and therefore I use volatile and synchronized. > Also I used WeakRef and System.gc() to ensure that gc was happened. Okay. The details of the "test" don't really mastter. Thanks, David > Thank you, > Dmitry >> >> Thanks, >> David >> ----- >> >>> Thanks, >>> Dmitry >>> >>> On 14.01.2016 16:59, Dmitry Dmitriev wrote: >>>> Hi Sangheon, >>>> >>>> Thank you for the review! Updated webrev: >>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.01/ >>>> >>>> Comments inline. >>>> >>>> On 13.01.2016 21:50, sangheon wrote: >>>>> Hi Dmitry, >>>>> >>>>> Thank you for fixing this. >>>>> Overall seems good. >>>>> >>>>> -------------------------------------------------------------------- >>>>> test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>>>> >>>>> 87 /* >>>>> 88 * JDK-8144578 >>>>> 89 * Temporarily remove testing of max range for ParGCArrayScanChunk >>>>> because >>>>> 90 * JVM can hang when ParGCArrayScanChunk=4294967296 and ParallelGC >>>>> is used >>>>> 91 */ >>>>> 92 excludeTestMaxRange("ParGCArrayScanChunk"); >>>>> >>>>> issue number should be 8145204. >>>> Fixed. 
>>>>> >>>>> -------------------------------------------------------------------- >>>>> test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java >>>>> >>>>> >>>>> line 181 >>>>> >>>>> - if (name.startsWith("G1")) { >>>>> - option.addPrepend("-XX:+UseG1GC"); >>>>> - } >>>>> - >>>>> - if (name.startsWith("CMS")) { >>>>> - option.addPrepend("-XX:+UseConcMarkSweepGC"); >>>>> - } >>>>> - >>>>> >>>>> Is this change really needed for dedicated gc flags(starting with >>>>> "G1" or "CMS")? >>>>> I thought this CR is targeted for non-dedicated gc flags such as >>>>> TLABWasteIncrement. >>>> I return deleted lines. >>>> >>>> Thanks, >>>> Dmitry >>>>> >>>>> And if you still think that above lines should be removed, please >>>>> remove line 224 as well. >>>>> >>>>> 224 case "NewSizeThreadIncrease": >>>>> 225 option.addPrepend("-XX:+UseSerialGC"); >>>>> >>>>> >>>>> Thanks, >>>>> Sangheon >>>>> >>>>> >>>>> On 01/13/2016 09:11 AM, Dmitry Dmitriev wrote: >>>>>> Hello, >>>>>> >>>>>> Please review small enhancement to the command line option >>>>>> validation test framework which allow to run test with different GCs. >>>>>> Few comments: >>>>>> 1) Code which executed for testing was moved from >>>>>> JVMOptionsUtils.java to separate class(JVMStartup.java) to avoid >>>>>> overhead at java start-up for determining vm and gc type. >>>>>> 2) runJavaWithParam method in JVMOption.java was refactored to avoid >>>>>> code duplication. >>>>>> >>>>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8144578 >>>>>> webrev.00: http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.00/ >>>>>> >>>>>> Testing: tested on all platforms with different gc by RBT, failed >>>>>> flags were temporary removed from testing in >>>>>> TestOptionsWithRanges.java >>>>>> >>>>>> Thanks, >>>>>> Dmitry >>>>> >>>> >>> > From robbin.ehn at oracle.com Mon Feb 15 10:04:27 2016 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 15 Feb 2016 11:04:27 +0100 Subject: RFR(s): 8148219: Add decorator hostname to UL In-Reply-To: <56BDCD08.2080202@oracle.com> References: <56BDCD08.2080202@oracle.com> Message-ID: <56C1A2AB.6070702@oracle.com> Hi, please review this v2. Update according David's comments, except os::get_host_name which needs to be moved from private scope. New webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219.v2/ And manually re-tested. Thanks! /Robbin On 02/12/2016 01:16 PM, Robbin Ehn wrote: > Hi, please review. > > This adds a new decorator for hostname to UL, with minor changes to > os::get_host_name and UL init. > > JBS: https://bugs.openjdk.java.net/browse/JDK-8148219 > Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219/ > > Manual tested and verified no change to hs_err_pid (uses > os::get_host_name when fastdebug build) and that UL prints hostname. > > Thanks! 
> > /Robbin From dmitry.dmitriev at oracle.com Mon Feb 15 10:06:31 2016 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Mon, 15 Feb 2016 13:06:31 +0300 Subject: RFR: 8144578: TestOptionsWithRanges test only ever uses the default collector In-Reply-To: <56C1A18C.6040400@oracle.com> References: <5696854D.1000604@oracle.com> <56969C91.9050003@oracle.com> <5697A9D3.1000700@oracle.com> <56BC9295.8050806@oracle.com> <56C16307.7030108@oracle.com> <56C19D4A.9080705@oracle.com> <56C1A18C.6040400@oracle.com> Message-ID: <56C1A327.1010907@oracle.com> David, I removed duplicated getErrorMessageCommandLine(value):http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.03/ Thanks, Dmitry On 15.02.2016 12:59, David Holmes wrote: > On 15/02/2016 7:41 PM, Dmitry Dmitriev wrote: >> Hello David, >> >> Thank you for looking into this. >> >> On 15.02.2016 8:32, David Holmes wrote: >>> Hi Dmitry, >>> >>> On 11/02/2016 11:54 PM, Dmitry Dmitriev wrote: >>>> Hello, >>>> >>>> Please, need a Reviewer for that change. >>>> I uploaded updated webrev.02: >>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.02/ >>>> >>>> Difference from webrev.01: I removed excluding of MinTLABSize and >>>> MarkSweepAlwaysCompactCount options from testing because underlying >>>> problems were fixed. >>> >>> JVMOption.java: >>> >>> The refactoring tended to obscure the primary change. But as you have >>> refactored could you also remove the duplicated call to >>> getErrorMessageCommandLine(value) please :) >> Yes, thanks. Will do! >>> >>> --- >>> >>> JVMOptionsUtils.java >>> >>> Is there not a more direct way to get the current GC argument from >>> jtreg ? >> The one of alternatives was to parse jtreg properties with command line >> options, but was decided not to depend on command line options in this >> case. > > Okay. Seems complicated but it is what it is. > >>> >>> --- >>> >>> JVMStartup.java >>> >>> Not sure what relevance the WeakRef usage and System.gc really has >>> here. Not sure why weakRef is volatile, nor createWeakRef is >>> synchronized ?? >> The idea was to make this class slightly more complex(instead of using >> simple HelloWorld before) and therefore I use volatile and synchronized. >> Also I used WeakRef and System.gc() to ensure that gc was happened. > > Okay. The details of the "test" don't really mastter. > > Thanks, > David > >> Thank you, >> Dmitry >>> >>> Thanks, >>> David >>> ----- >>> >>>> Thanks, >>>> Dmitry >>>> >>>> On 14.01.2016 16:59, Dmitry Dmitriev wrote: >>>>> Hi Sangheon, >>>>> >>>>> Thank you for the review! Updated webrev: >>>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.01/ >>>>> >>>>> Comments inline. >>>>> >>>>> On 13.01.2016 21:50, sangheon wrote: >>>>>> Hi Dmitry, >>>>>> >>>>>> Thank you for fixing this. >>>>>> Overall seems good. >>>>>> >>>>>> -------------------------------------------------------------------- >>>>>> test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>>>>> >>>>>> >>>>>> 87 /* >>>>>> 88 * JDK-8144578 >>>>>> 89 * Temporarily remove testing of max range for ParGCArrayScanChunk >>>>>> because >>>>>> 90 * JVM can hang when ParGCArrayScanChunk=4294967296 and ParallelGC >>>>>> is used >>>>>> 91 */ >>>>>> 92 excludeTestMaxRange("ParGCArrayScanChunk"); >>>>>> >>>>>> issue number should be 8145204. >>>>> Fixed. 
>>>>>> >>>>>> -------------------------------------------------------------------- >>>>>> test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java >>>>>> >>>>>> >>>>>> >>>>>> line 181 >>>>>> >>>>>> - if (name.startsWith("G1")) { >>>>>> - option.addPrepend("-XX:+UseG1GC"); >>>>>> - } >>>>>> - >>>>>> - if (name.startsWith("CMS")) { >>>>>> - option.addPrepend("-XX:+UseConcMarkSweepGC"); >>>>>> - } >>>>>> - >>>>>> >>>>>> Is this change really needed for dedicated gc flags(starting with >>>>>> "G1" or "CMS")? >>>>>> I thought this CR is targeted for non-dedicated gc flags such as >>>>>> TLABWasteIncrement. >>>>> I return deleted lines. >>>>> >>>>> Thanks, >>>>> Dmitry >>>>>> >>>>>> And if you still think that above lines should be removed, please >>>>>> remove line 224 as well. >>>>>> >>>>>> 224 case "NewSizeThreadIncrease": >>>>>> 225 option.addPrepend("-XX:+UseSerialGC"); >>>>>> >>>>>> >>>>>> Thanks, >>>>>> Sangheon >>>>>> >>>>>> >>>>>> On 01/13/2016 09:11 AM, Dmitry Dmitriev wrote: >>>>>>> Hello, >>>>>>> >>>>>>> Please review small enhancement to the command line option >>>>>>> validation test framework which allow to run test with different >>>>>>> GCs. >>>>>>> Few comments: >>>>>>> 1) Code which executed for testing was moved from >>>>>>> JVMOptionsUtils.java to separate class(JVMStartup.java) to avoid >>>>>>> overhead at java start-up for determining vm and gc type. >>>>>>> 2) runJavaWithParam method in JVMOption.java was refactored to >>>>>>> avoid >>>>>>> code duplication. >>>>>>> >>>>>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8144578 >>>>>>> webrev.00: http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.00/ >>>>>>> >>>>>>> Testing: tested on all platforms with different gc by RBT, failed >>>>>>> flags were temporary removed from testing in >>>>>>> TestOptionsWithRanges.java >>>>>>> >>>>>>> Thanks, >>>>>>> Dmitry >>>>>> >>>>> >>>> >> From david.holmes at oracle.com Mon Feb 15 10:42:34 2016 From: david.holmes at oracle.com (David Holmes) Date: Mon, 15 Feb 2016 20:42:34 +1000 Subject: RFR(s): 8148219: Add decorator hostname to UL In-Reply-To: <56C1A2AB.6070702@oracle.com> References: <56BDCD08.2080202@oracle.com> <56C1A2AB.6070702@oracle.com> Message-ID: <56C1AB9A.7000302@oracle.com> On 15/02/2016 8:04 PM, Robbin Ehn wrote: > Hi, please review this v2. > > Update according David's comments, except os::get_host_name which needs > to be moved from private scope. > > New webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219.v2/ All looks good. > And manually re-tested. Is there a test for UL that can be enhanced to test this new decorator? Thanks, David > Thanks! > > /Robbin > > On 02/12/2016 01:16 PM, Robbin Ehn wrote: >> Hi, please review. >> >> This adds a new decorator for hostname to UL, with minor changes to >> os::get_host_name and UL init. >> >> JBS: https://bugs.openjdk.java.net/browse/JDK-8148219 >> Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219/ >> >> Manual tested and verified no change to hs_err_pid (uses >> os::get_host_name when fastdebug build) and that UL prints hostname. >> >> Thanks! 
>> >> /Robbin From david.holmes at oracle.com Mon Feb 15 10:44:28 2016 From: david.holmes at oracle.com (David Holmes) Date: Mon, 15 Feb 2016 20:44:28 +1000 Subject: RFR: 8144578: TestOptionsWithRanges test only ever uses the default collector In-Reply-To: <56C1A327.1010907@oracle.com> References: <5696854D.1000604@oracle.com> <56969C91.9050003@oracle.com> <5697A9D3.1000700@oracle.com> <56BC9295.8050806@oracle.com> <56C16307.7030108@oracle.com> <56C19D4A.9080705@oracle.com> <56C1A18C.6040400@oracle.com> <56C1A327.1010907@oracle.com> Message-ID: <56C1AC0C.7080904@oracle.com> On 15/02/2016 8:06 PM, Dmitry Dmitriev wrote: > David, > I removed duplicated > getErrorMessageCommandLine(value):http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.03/ Thanks :) Shorter local variable names are okay. Cheers, David > Thanks, > Dmitry > > On 15.02.2016 12:59, David Holmes wrote: >> On 15/02/2016 7:41 PM, Dmitry Dmitriev wrote: >>> Hello David, >>> >>> Thank you for looking into this. >>> >>> On 15.02.2016 8:32, David Holmes wrote: >>>> Hi Dmitry, >>>> >>>> On 11/02/2016 11:54 PM, Dmitry Dmitriev wrote: >>>>> Hello, >>>>> >>>>> Please, need a Reviewer for that change. >>>>> I uploaded updated webrev.02: >>>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.02/ >>>>> >>>>> Difference from webrev.01: I removed excluding of MinTLABSize and >>>>> MarkSweepAlwaysCompactCount options from testing because underlying >>>>> problems were fixed. >>>> >>>> JVMOption.java: >>>> >>>> The refactoring tended to obscure the primary change. But as you have >>>> refactored could you also remove the duplicated call to >>>> getErrorMessageCommandLine(value) please :) >>> Yes, thanks. Will do! >>>> >>>> --- >>>> >>>> JVMOptionsUtils.java >>>> >>>> Is there not a more direct way to get the current GC argument from >>>> jtreg ? >>> The one of alternatives was to parse jtreg properties with command line >>> options, but was decided not to depend on command line options in this >>> case. >> >> Okay. Seems complicated but it is what it is. >> >>>> >>>> --- >>>> >>>> JVMStartup.java >>>> >>>> Not sure what relevance the WeakRef usage and System.gc really has >>>> here. Not sure why weakRef is volatile, nor createWeakRef is >>>> synchronized ?? >>> The idea was to make this class slightly more complex(instead of using >>> simple HelloWorld before) and therefore I use volatile and synchronized. >>> Also I used WeakRef and System.gc() to ensure that gc was happened. >> >> Okay. The details of the "test" don't really mastter. >> >> Thanks, >> David >> >>> Thank you, >>> Dmitry >>>> >>>> Thanks, >>>> David >>>> ----- >>>> >>>>> Thanks, >>>>> Dmitry >>>>> >>>>> On 14.01.2016 16:59, Dmitry Dmitriev wrote: >>>>>> Hi Sangheon, >>>>>> >>>>>> Thank you for the review! Updated webrev: >>>>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.01/ >>>>>> >>>>>> Comments inline. >>>>>> >>>>>> On 13.01.2016 21:50, sangheon wrote: >>>>>>> Hi Dmitry, >>>>>>> >>>>>>> Thank you for fixing this. >>>>>>> Overall seems good. >>>>>>> >>>>>>> -------------------------------------------------------------------- >>>>>>> test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>>>>>> >>>>>>> >>>>>>> 87 /* >>>>>>> 88 * JDK-8144578 >>>>>>> 89 * Temporarily remove testing of max range for ParGCArrayScanChunk >>>>>>> because >>>>>>> 90 * JVM can hang when ParGCArrayScanChunk=4294967296 and ParallelGC >>>>>>> is used >>>>>>> 91 */ >>>>>>> 92 excludeTestMaxRange("ParGCArrayScanChunk"); >>>>>>> >>>>>>> issue number should be 8145204. 
>>>>>> Fixed. >>>>>>> >>>>>>> -------------------------------------------------------------------- >>>>>>> test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java >>>>>>> >>>>>>> >>>>>>> >>>>>>> line 181 >>>>>>> >>>>>>> - if (name.startsWith("G1")) { >>>>>>> - option.addPrepend("-XX:+UseG1GC"); >>>>>>> - } >>>>>>> - >>>>>>> - if (name.startsWith("CMS")) { >>>>>>> - option.addPrepend("-XX:+UseConcMarkSweepGC"); >>>>>>> - } >>>>>>> - >>>>>>> >>>>>>> Is this change really needed for dedicated gc flags(starting with >>>>>>> "G1" or "CMS")? >>>>>>> I thought this CR is targeted for non-dedicated gc flags such as >>>>>>> TLABWasteIncrement. >>>>>> I return deleted lines. >>>>>> >>>>>> Thanks, >>>>>> Dmitry >>>>>>> >>>>>>> And if you still think that above lines should be removed, please >>>>>>> remove line 224 as well. >>>>>>> >>>>>>> 224 case "NewSizeThreadIncrease": >>>>>>> 225 option.addPrepend("-XX:+UseSerialGC"); >>>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Sangheon >>>>>>> >>>>>>> >>>>>>> On 01/13/2016 09:11 AM, Dmitry Dmitriev wrote: >>>>>>>> Hello, >>>>>>>> >>>>>>>> Please review small enhancement to the command line option >>>>>>>> validation test framework which allow to run test with different >>>>>>>> GCs. >>>>>>>> Few comments: >>>>>>>> 1) Code which executed for testing was moved from >>>>>>>> JVMOptionsUtils.java to separate class(JVMStartup.java) to avoid >>>>>>>> overhead at java start-up for determining vm and gc type. >>>>>>>> 2) runJavaWithParam method in JVMOption.java was refactored to >>>>>>>> avoid >>>>>>>> code duplication. >>>>>>>> >>>>>>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8144578 >>>>>>>> webrev.00: http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.00/ >>>>>>>> >>>>>>>> Testing: tested on all platforms with different gc by RBT, failed >>>>>>>> flags were temporary removed from testing in >>>>>>>> TestOptionsWithRanges.java >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Dmitry >>>>>>> >>>>>> >>>>> >>> > From dmitry.dmitriev at oracle.com Mon Feb 15 10:54:47 2016 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Mon, 15 Feb 2016 13:54:47 +0300 Subject: RFR: 8144578: TestOptionsWithRanges test only ever uses the default collector In-Reply-To: <56C1AC0C.7080904@oracle.com> References: <5696854D.1000604@oracle.com> <56969C91.9050003@oracle.com> <5697A9D3.1000700@oracle.com> <56BC9295.8050806@oracle.com> <56C16307.7030108@oracle.com> <56C19D4A.9080705@oracle.com> <56C1A18C.6040400@oracle.com> <56C1A327.1010907@oracle.com> <56C1AC0C.7080904@oracle.com> Message-ID: <56C1AE77.9050705@oracle.com> David, thank you for the review! Dmitry On 15.02.2016 13:44, David Holmes wrote: > On 15/02/2016 8:06 PM, Dmitry Dmitriev wrote: >> David, >> I removed duplicated >> getErrorMessageCommandLine(value):http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.03/ >> > > Thanks :) Shorter local variable names are okay. > > Cheers, > David > >> Thanks, >> Dmitry >> >> On 15.02.2016 12:59, David Holmes wrote: >>> On 15/02/2016 7:41 PM, Dmitry Dmitriev wrote: >>>> Hello David, >>>> >>>> Thank you for looking into this. >>>> >>>> On 15.02.2016 8:32, David Holmes wrote: >>>>> Hi Dmitry, >>>>> >>>>> On 11/02/2016 11:54 PM, Dmitry Dmitriev wrote: >>>>>> Hello, >>>>>> >>>>>> Please, need a Reviewer for that change. 
>>>>>> I uploaded updated webrev.02: >>>>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.02/ >>>>>> >>>>>> Difference from webrev.01: I removed excluding of MinTLABSize and >>>>>> MarkSweepAlwaysCompactCount options from testing because underlying >>>>>> problems were fixed. >>>>> >>>>> JVMOption.java: >>>>> >>>>> The refactoring tended to obscure the primary change. But as you have >>>>> refactored could you also remove the duplicated call to >>>>> getErrorMessageCommandLine(value) please :) >>>> Yes, thanks. Will do! >>>>> >>>>> --- >>>>> >>>>> JVMOptionsUtils.java >>>>> >>>>> Is there not a more direct way to get the current GC argument from >>>>> jtreg ? >>>> The one of alternatives was to parse jtreg properties with command >>>> line >>>> options, but was decided not to depend on command line options in this >>>> case. >>> >>> Okay. Seems complicated but it is what it is. >>> >>>>> >>>>> --- >>>>> >>>>> JVMStartup.java >>>>> >>>>> Not sure what relevance the WeakRef usage and System.gc really has >>>>> here. Not sure why weakRef is volatile, nor createWeakRef is >>>>> synchronized ?? >>>> The idea was to make this class slightly more complex(instead of using >>>> simple HelloWorld before) and therefore I use volatile and >>>> synchronized. >>>> Also I used WeakRef and System.gc() to ensure that gc was happened. >>> >>> Okay. The details of the "test" don't really mastter. >>> >>> Thanks, >>> David >>> >>>> Thank you, >>>> Dmitry >>>>> >>>>> Thanks, >>>>> David >>>>> ----- >>>>> >>>>>> Thanks, >>>>>> Dmitry >>>>>> >>>>>> On 14.01.2016 16:59, Dmitry Dmitriev wrote: >>>>>>> Hi Sangheon, >>>>>>> >>>>>>> Thank you for the review! Updated webrev: >>>>>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.01/ >>>>>>> >>>>>>> Comments inline. >>>>>>> >>>>>>> On 13.01.2016 21:50, sangheon wrote: >>>>>>>> Hi Dmitry, >>>>>>>> >>>>>>>> Thank you for fixing this. >>>>>>>> Overall seems good. >>>>>>>> >>>>>>>> -------------------------------------------------------------------- >>>>>>>> >>>>>>>> test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> 87 /* >>>>>>>> 88 * JDK-8144578 >>>>>>>> 89 * Temporarily remove testing of max range for >>>>>>>> ParGCArrayScanChunk >>>>>>>> because >>>>>>>> 90 * JVM can hang when ParGCArrayScanChunk=4294967296 and >>>>>>>> ParallelGC >>>>>>>> is used >>>>>>>> 91 */ >>>>>>>> 92 excludeTestMaxRange("ParGCArrayScanChunk"); >>>>>>>> >>>>>>>> issue number should be 8145204. >>>>>>> Fixed. >>>>>>>> >>>>>>>> -------------------------------------------------------------------- >>>>>>>> >>>>>>>> test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> line 181 >>>>>>>> >>>>>>>> - if (name.startsWith("G1")) { >>>>>>>> - option.addPrepend("-XX:+UseG1GC"); >>>>>>>> - } >>>>>>>> - >>>>>>>> - if (name.startsWith("CMS")) { >>>>>>>> - option.addPrepend("-XX:+UseConcMarkSweepGC"); >>>>>>>> - } >>>>>>>> - >>>>>>>> >>>>>>>> Is this change really needed for dedicated gc flags(starting with >>>>>>>> "G1" or "CMS")? >>>>>>>> I thought this CR is targeted for non-dedicated gc flags such as >>>>>>>> TLABWasteIncrement. >>>>>>> I return deleted lines. >>>>>>> >>>>>>> Thanks, >>>>>>> Dmitry >>>>>>>> >>>>>>>> And if you still think that above lines should be removed, please >>>>>>>> remove line 224 as well. 
>>>>>>>> >>>>>>>> 224 case "NewSizeThreadIncrease": >>>>>>>> 225 option.addPrepend("-XX:+UseSerialGC"); >>>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Sangheon >>>>>>>> >>>>>>>> >>>>>>>> On 01/13/2016 09:11 AM, Dmitry Dmitriev wrote: >>>>>>>>> Hello, >>>>>>>>> >>>>>>>>> Please review small enhancement to the command line option >>>>>>>>> validation test framework which allow to run test with different >>>>>>>>> GCs. >>>>>>>>> Few comments: >>>>>>>>> 1) Code which executed for testing was moved from >>>>>>>>> JVMOptionsUtils.java to separate class(JVMStartup.java) to avoid >>>>>>>>> overhead at java start-up for determining vm and gc type. >>>>>>>>> 2) runJavaWithParam method in JVMOption.java was refactored to >>>>>>>>> avoid >>>>>>>>> code duplication. >>>>>>>>> >>>>>>>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8144578 >>>>>>>>> webrev.00: >>>>>>>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.00/ >>>>>>>>> >>>>>>>>> Testing: tested on all platforms with different gc by RBT, failed >>>>>>>>> flags were temporary removed from testing in >>>>>>>>> TestOptionsWithRanges.java >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Dmitry >>>>>>>> >>>>>>> >>>>>> >>>> >> From marcus.larsson at oracle.com Mon Feb 15 11:46:09 2016 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Mon, 15 Feb 2016 12:46:09 +0100 Subject: RFR(s): 8148219: Add decorator hostname to UL In-Reply-To: <56C1AB9A.7000302@oracle.com> References: <56BDCD08.2080202@oracle.com> <56C1A2AB.6070702@oracle.com> <56C1AB9A.7000302@oracle.com> Message-ID: <56C1BA81.6030702@oracle.com> Hi, On 02/15/2016 11:42 AM, David Holmes wrote: > On 15/02/2016 8:04 PM, Robbin Ehn wrote: >> Hi, please review this v2. >> >> Update according David's comments, except os::get_host_name which needs >> to be moved from private scope. >> >> New webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219.v2/ > > All looks good. Looks good to me too. > >> And manually re-tested. > > Is there a test for UL that can be enhanced to test this new decorator? There are unit tests for all current decorators in UL, but they are written in gtest and haven't been checked in yet. I suggest that we add a test case for this decorator when we integrate the rest of the unit tests. Thanks, Marcus > > Thanks, > David > >> Thanks! >> >> /Robbin >> >> On 02/12/2016 01:16 PM, Robbin Ehn wrote: >>> Hi, please review. >>> >>> This adds a new decorator for hostname to UL, with minor changes to >>> os::get_host_name and UL init. >>> >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8148219 >>> Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219/ >>> >>> Manual tested and verified no change to hs_err_pid (uses >>> os::get_host_name when fastdebug build) and that UL prints hostname. >>> >>> Thanks! >>> >>> /Robbin From robbin.ehn at oracle.com Mon Feb 15 12:06:06 2016 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 15 Feb 2016 13:06:06 +0100 Subject: RFR(s): 8148219: Add decorator hostname to UL In-Reply-To: <56C1AB9A.7000302@oracle.com> References: <56BDCD08.2080202@oracle.com> <56C1A2AB.6070702@oracle.com> <56C1AB9A.7000302@oracle.com> Message-ID: <56C1BF2E.3010601@oracle.com> Thanks! On 02/15/2016 11:42 AM, David Holmes wrote: > On 15/02/2016 8:04 PM, Robbin Ehn wrote: >> Hi, please review this v2. >> >> Update according David's comments, except os::get_host_name which needs >> to be moved from private scope. >> >> New webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219.v2/ > > All looks good. > >> And manually re-tested. 
> > Is there a test for UL that can be enhanced to test this new decorator? See Marcus mail. /Robbin > > Thanks, > David > >> Thanks! >> >> /Robbin >> >> On 02/12/2016 01:16 PM, Robbin Ehn wrote: >>> Hi, please review. >>> >>> This adds a new decorator for hostname to UL, with minor changes to >>> os::get_host_name and UL init. >>> >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8148219 >>> Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219/ >>> >>> Manual tested and verified no change to hs_err_pid (uses >>> os::get_host_name when fastdebug build) and that UL prints hostname. >>> >>> Thanks! >>> >>> /Robbin From robbin.ehn at oracle.com Mon Feb 15 12:06:53 2016 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 15 Feb 2016 13:06:53 +0100 Subject: RFR(s): 8148219: Add decorator hostname to UL In-Reply-To: <56C1BA81.6030702@oracle.com> References: <56BDCD08.2080202@oracle.com> <56C1A2AB.6070702@oracle.com> <56C1AB9A.7000302@oracle.com> <56C1BA81.6030702@oracle.com> Message-ID: <56C1BF5D.7010006@oracle.com> Hi On 02/15/2016 12:46 PM, Marcus Larsson wrote: > Hi, > > On 02/15/2016 11:42 AM, David Holmes wrote: >> On 15/02/2016 8:04 PM, Robbin Ehn wrote: >>> Hi, please review this v2. >>> >>> Update according David's comments, except os::get_host_name which needs >>> to be moved from private scope. >>> >>> New webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219.v2/ >> >> All looks good. > > Looks good to me too. Thanks! /Robbin > >> >>> And manually re-tested. >> >> Is there a test for UL that can be enhanced to test this new decorator? > > There are unit tests for all current decorators in UL, but they are > written in gtest and haven't been checked in yet. I suggest that we add > a test case for this decorator when we integrate the rest of the unit > tests. > > Thanks, > Marcus > >> >> Thanks, >> David >> >>> Thanks! >>> >>> /Robbin >>> >>> On 02/12/2016 01:16 PM, Robbin Ehn wrote: >>>> Hi, please review. >>>> >>>> This adds a new decorator for hostname to UL, with minor changes to >>>> os::get_host_name and UL init. >>>> >>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8148219 >>>> Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219/ >>>> >>>> Manual tested and verified no change to hs_err_pid (uses >>>> os::get_host_name when fastdebug build) and that UL prints hostname. >>>> >>>> Thanks! >>>> >>>> /Robbin > From aleksey.shipilev at oracle.com Mon Feb 15 12:22:05 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Mon, 15 Feb 2016 15:22:05 +0300 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles Message-ID: <56C1C2ED.10702@oracle.com> Hi, I would like to solicit reviews for the slab of VM changes to support JEP 193 (VarHandles). This portion covers new Unsafe methods. The last review got fallen through the mailing list trenches with reposts, hopefully we will have more reviews now that we include hotspot-dev and jdk9-dev. Webrev: http://cr.openjdk.java.net/~shade/8148146/webrev.jdk.01/ http://cr.openjdk.java.net/~shade/8148146/webrev.hs.01/ These changes successfully pass JPRT -testset hotspot on all platforms. Eyeballing the generated code on x86 yields no obvious problems. Sanity microbenchmark runs do not show performance regressions on old methods, and show the expected performance on new methods: http://cr.openjdk.java.net/~shade/8148146/notes.txt A brief summary of changes: a) jdk.internal.misc.Unsafe has new methods. 
Since we now have split s.m.Unsafe and j.i.m.Unsafe, this change "safely" extends the private Unsafe, leaving the other one untouched. b) hotspot/test/compiler/unsafe tests are extended for newly added methods. c) unsafe.cpp gets the basic native method implementations. Most new operations are folded to their volatile (the strongest) counterparts, hoping that compilers would intrinsify them into more performant versions. d) C1 intrinsics are not present in this patch: we have some prototypes in VarHandles forest, but they are not ready to be pushed; e) C2 intrinsics for x86: * Most intrinsics code is covered by platform-independent LibraryCallKit changes, which means non-x86 architectures are also partially covered. * There are two classes of ops left for platform-dependent code: WeakCAS and CompareAndExchange nodes. Both seem simple enough to do, but there are details to be sorted out on each platform -- let's do those separately. * Both LibraryCallKit::inline_unsafe_access and LCK::inline_unsafe_load_store were modified to accept new access modes, and generally brushed up to accept the changes. * putOrdered intrinsic methods are purged in favor of put*Release operations. We still keep Unsafe.putOrdered for testability and compatibility reasons. Cheers, -Aleksey From kirk.pepperdine at gmail.com Mon Feb 15 13:17:59 2016 From: kirk.pepperdine at gmail.com (kirk.pepperdine at gmail.com) Date: Mon, 15 Feb 2016 14:17:59 +0100 Subject: RFR(s): 8148219: Add decorator hostname to UL In-Reply-To: <56C1BA81.6030702@oracle.com> References: <56BDCD08.2080202@oracle.com> <56C1A2AB.6070702@oracle.com> <56C1AB9A.7000302@oracle.com> <56C1BA81.6030702@oracle.com> Message-ID: Hi, I have clients that I can currently get them to ship me logs without and NDA or security concerns. I am concerned that if the hostname is included in the logs they will no longer be able to send me these logs. In fact I just checked with one customer and they indicated that they would not be able to ship these logs even with an NDA. Kind regards, Kirk > On Feb 15, 2016, at 12:46 PM, Marcus Larsson wrote: > > Hi, > > On 02/15/2016 11:42 AM, David Holmes wrote: >> On 15/02/2016 8:04 PM, Robbin Ehn wrote: >>> Hi, please review this v2. >>> >>> Update according David's comments, except os::get_host_name which needs >>> to be moved from private scope. >>> >>> New webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219.v2/ >> >> All looks good. > > Looks good to me too. > >> >>> And manually re-tested. >> >> Is there a test for UL that can be enhanced to test this new decorator? > > There are unit tests for all current decorators in UL, but they are written in gtest and haven't been checked in yet. I suggest that we add a test case for this decorator when we integrate the rest of the unit tests. > > Thanks, > Marcus > >> >> Thanks, >> David >> >>> Thanks! >>> >>> /Robbin >>> >>> On 02/12/2016 01:16 PM, Robbin Ehn wrote: >>>> Hi, please review. >>>> >>>> This adds a new decorator for hostname to UL, with minor changes to >>>> os::get_host_name and UL init. >>>> >>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8148219 >>>> Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219/ >>>> >>>> Manual tested and verified no change to hs_err_pid (uses >>>> os::get_host_name when fastdebug build) and that UL prints hostname. >>>> >>>> Thanks! 
>>>> >>>> /Robbin > From adinn at redhat.com Mon Feb 15 13:23:03 2016 From: adinn at redhat.com (Andrew Dinn) Date: Mon, 15 Feb 2016 13:23:03 +0000 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C1C2ED.10702@oracle.com> References: <56C1C2ED.10702@oracle.com> Message-ID: <56C1D137.5020607@redhat.com> Hi Alexsey, On 15/02/16 12:22, Aleksey Shipilev wrote: > I would like to solicit reviews for the slab of VM changes to support > JEP 193 (VarHandles). This portion covers new Unsafe methods. The last > review got fallen through the mailing list trenches with reposts, > hopefully we will have more reviews now that we include hotspot-dev and > jdk9-dev. > > Webrev: > http://cr.openjdk.java.net/~shade/8148146/webrev.jdk.01/ > http://cr.openjdk.java.net/~shade/8148146/webrev.hs.01/ > > These changes successfully pass JPRT -testset hotspot on all platforms. Which platforms does that include? Specifically, does this include/exclude non-closed AArch64? > e) C2 intrinsics for x86: > > * Most intrinsics code is covered by platform-independent > LibraryCallKit changes, which means non-x86 architectures are also > partially covered. The volatile stuff looks ok for Aarch64 after a quick eyeball scan. However, I would like to check the code generated by the back end. I'm especially interested in any potential interaction with Roland's patch for 8087341 (which required associated AArch64 back end changes). I think the two patches should combine correctly (and probably commutatively) but it is worth checking. > * There are two classes of ops left for platform-dependent code: > WeakCAS and CompareAndExchange nodes. Both seem simple enough to do, but > there are details to be sorted out on each platform -- let's do those > separately. Agreed, this is probably best left as a separate step for AArch64. I think we will still be able to optimize the new CAS variants effectively in the back end using the same technique as is employed for volatile stores and standard CAS. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Paul Argiry (US) From robbin.ehn at oracle.com Mon Feb 15 13:38:31 2016 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 15 Feb 2016 14:38:31 +0100 Subject: RFR(s): 8148219: Add decorator hostname to UL In-Reply-To: References: <56BDCD08.2080202@oracle.com> <56C1A2AB.6070702@oracle.com> <56C1AB9A.7000302@oracle.com> <56C1BA81.6030702@oracle.com> Message-ID: <56C1D4D7.7040504@oracle.com> Hi Kirk, Hostname will not be in any logs by default. Your clients must actively set it enabled. /Robbin On 02/15/2016 02:17 PM, kirk.pepperdine at gmail.com wrote: > Hi, > > I have clients that I can currently get them to ship me logs without and NDA or security concerns. I am concerned that if the hostname is included in the logs they will no longer be able to send me these logs. In fact I just checked with one customer and they indicated that they would not be able to ship these logs even with an NDA. > > Kind regards, > Kirk > >> On Feb 15, 2016, at 12:46 PM, Marcus Larsson wrote: >> >> Hi, >> >> On 02/15/2016 11:42 AM, David Holmes wrote: >>> On 15/02/2016 8:04 PM, Robbin Ehn wrote: >>>> Hi, please review this v2. >>>> >>>> Update according David's comments, except os::get_host_name which needs >>>> to be moved from private scope. 
>>>> >>>> New webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219.v2/ >>> >>> All looks good. >> >> Looks good to me too. >> >>> >>>> And manually re-tested. >>> >>> Is there a test for UL that can be enhanced to test this new decorator? >> >> There are unit tests for all current decorators in UL, but they are written in gtest and haven't been checked in yet. I suggest that we add a test case for this decorator when we integrate the rest of the unit tests. >> >> Thanks, >> Marcus >> >>> >>> Thanks, >>> David >>> >>>> Thanks! >>>> >>>> /Robbin >>>> >>>> On 02/12/2016 01:16 PM, Robbin Ehn wrote: >>>>> Hi, please review. >>>>> >>>>> This adds a new decorator for hostname to UL, with minor changes to >>>>> os::get_host_name and UL init. >>>>> >>>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8148219 >>>>> Webrev: http://cr.openjdk.java.net/~mlarsson/rehn/8148219/ >>>>> >>>>> Manual tested and verified no change to hs_err_pid (uses >>>>> os::get_host_name when fastdebug build) and that UL prints hostname. >>>>> >>>>> Thanks! >>>>> >>>>> /Robbin >> > From aleksey.shipilev at oracle.com Mon Feb 15 13:49:24 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Mon, 15 Feb 2016 16:49:24 +0300 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C1D137.5020607@redhat.com> References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> Message-ID: <56C1D764.6020100@oracle.com> Hi Andrew, On 02/15/2016 04:23 PM, Andrew Dinn wrote: > On 15/02/16 12:22, Aleksey Shipilev wrote: >> I would like to solicit reviews for the slab of VM changes to support >> JEP 193 (VarHandles). This portion covers new Unsafe methods. The last >> review got fallen through the mailing list trenches with reposts, >> hopefully we will have more reviews now that we include hotspot-dev and >> jdk9-dev. >> >> Webrev: >> http://cr.openjdk.java.net/~shade/8148146/webrev.jdk.01/ >> http://cr.openjdk.java.net/~shade/8148146/webrev.hs.01/ >> >> These changes successfully pass JPRT -testset hotspot on all platforms. > > Which platforms does that include? Specifically, does this > include/exclude non-closed AArch64? Yes, open AArch64 is included there. But now I realize new Unsafe tests have not been run there, let me manhandle our infra into doing that. If that is easy for you, can you check if AArch64 works with this patch in your scenarios? >> e) C2 intrinsics for x86: >> >> * Most intrinsics code is covered by platform-independent >> LibraryCallKit changes, which means non-x86 architectures are also >> partially covered. > > The volatile stuff looks ok for Aarch64 after a quick eyeball scan. > However, I would like to check the code generated by the back end. I'm > especially interested in any potential interaction with Roland's patch > for 8087341 (which required associated AArch64 back end changes). I > think the two patches should combine correctly (and probably > commutatively) but it is worth checking. The changes are supposed to generate the same code for old Unsafe methods -- the refactoring shuffles the compiler code around, but the sequence of accesses/barriers should stay the same. Eyeballing x86_64 assembly indeed shows it is the same, but I haven't looked beyond x86. So Roland's patch and those super-(awe|grue)some ARM64 .ad matchers should be unaffected. If they are affected, then I screwed up somewhere during refactoring. 
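To make the intended usage concrete, here is a minimal sketch of the release/acquire publication idiom these entry points serve. It is written against the existing sun.misc.Unsafe calls, which exist today; the putIntRelease/getIntAcquire names in the comments follow the naming scheme of the patch under review and are assumptions here, not settled API.

  import java.lang.reflect.Field;
  import sun.misc.Unsafe;

  class Publish {
      static final Unsafe U;
      static final long READY;
      static {
          try {
              Field f = Unsafe.class.getDeclaredField("theUnsafe");
              f.setAccessible(true);
              U = (Unsafe) f.get(null);
              READY = U.objectFieldOffset(Publish.class.getDeclaredField("ready"));
          } catch (ReflectiveOperationException e) {
              throw new ExceptionInInitializerError(e);
          }
      }

      int payload;        // published by the release store below
      volatile int ready; // accessed through Unsafe with explicit ordering

      void publish(int v) {
          payload = v;                      // plain store
          U.putOrderedInt(this, READY, 1);  // store-release today; putIntRelease in the new scheme
      }

      int consume() {
          // volatile read (at least as strong as acquire); getIntAcquire in the new scheme
          if (U.getIntVolatile(this, READY) == 1) {
              return payload;               // guaranteed to observe the payload store
          }
          return -1;
      }
  }

Since the native fallbacks fold the relaxed modes to their volatile counterparts, code written this way stays correct on ports that have not yet intrinsified the new methods; it just runs with stronger fences than strictly required.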
I'll wait for Roland's patch to land before pushing these Unsafe changes anyway, and beef up on testing. Thanks, -Aleksey From maurizio.cimadamore at oracle.com Mon Feb 15 13:58:11 2016 From: maurizio.cimadamore at oracle.com (Maurizio Cimadamore) Date: Mon, 15 Feb 2016 13:58:11 +0000 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> Message-ID: <56C1D973.3060802@oracle.com> Langtools changes look ok to me. Would it make sense to file a follow up issue to add some tests in this area (i.e. bytecode tests using the classfile API) ? Cheers Maurizio On 11/02/16 15:39, Paul Sandoz wrote: > Hi, > > This is the implementation review request for VarHandles. > > Langtools: > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/langtools/webrev/index.html > > Hotspot: > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/hotspot/webrev/index.html > > JDK: > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html > > The spec/API review is proceeding here [1]. > > The patches depend on Unsafe changes [2] and ByteBuffer changes [3]. > > Recent (as of today) JPRT runs for core and hotspot tests pass without failure. Many parts of the code have been soaking in the Valhalla repo for over a year, and it?s been soaking in the sandbox for quite and many a JPRT run was performed. > > It is planned to push through hs-comp as is the case for the dependent patches, and thus minimise any delays due to integration between forests. > > > The Langtools changes are small. Tweaks were made to support updates to signature polymorphic methods and where may be located, in addition to supporting compilation of calls to MethodHandle.link*. > > > The Hotspot changes are not very large. It?s mostly a matter of augmenting checks for MethodHandle to include that for VarHandle. It?s tempting to generalise the ?invokehandle" invocation as i believe there are other use-cases where it might be useful, but i resisted temptation here. I wanted to focus on the minimal changes required. > > > The JDK changes are more substantial, but a large proportion are new tests. The source compilation approach taken is to use templates, the same approach as for code in the nio package, to generate both implementation and test source code. The implementations are generated by the build, the tests are pre-generated. I believe the tests should have good coverage but we have yet to run any code coverage tool. > > The approach to invocation of VarHandle signature polymoprhic methods is slightly different to that of MethodHandles. I wanted to ensure that linking for the common cases avoids lambda form creation, compilation and therefore class spinning. That reduces start up costs and also potential circular dependencies that might be induced in the VM boot process if VarHandles are employed early on. > > For common basic (i.e. erased ref and widened primitive) method signatures, namely all those that matter for the efficient atomic operations there are pre-generated methods that would otherwise be generated from creating and compiling invoker lambda forms. Those methods reside on the VarHandleGuards class. 
When the VM makes an up call to MethodHandleNatives.linkMethod to link a call site then this up-called method will first check if an appropriate pre-generated method exists on VarHandleGuards and if so it links to that, otherwise it falls back to a method on a class generated from compiling a lambda form. For testing purposes there is a system property available to switch off this optimisation when linking [*]. > > Each VarHandle instance of the same variable type produced from the same factory will share an underlying immutable instance of a VarForm that contains a set of MemberName instances, one for each implementation of a signature polymorphic method (a value of null means unsupported). The invoke methods (on VarHandleGuards or on lambda forms) will statically link to such MemberName instances using a call to MethodHandle.linkToStatic. > > There are a couple of TODOs in comments, those are all on non-critical code paths and i plan to chase them up afterwards. > > C1 does not support constant folding for @Stable arrays hence why in certain cases we have exploded stuff into fields that are operated on using if/else loops. We can simplify such code if/when C1 support is added. > > > Thanks, > Paul. > > [1] http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038150.html > [2] http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2016-January/020953.html > http://mail.openjdk.java.net/pipermail/hotspot-dev/2016-January/021514.html > [3] http://mail.openjdk.java.net/pipermail/nio-dev/2016-February/003535.html > > [*] This technique might be useful for common signatures of MH invokers to reduce associated costs of lambda form creation and compilation in the interim of something better. From adinn at redhat.com Mon Feb 15 14:13:56 2016 From: adinn at redhat.com (Andrew Dinn) Date: Mon, 15 Feb 2016 14:13:56 +0000 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C1D764.6020100@oracle.com> References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> <56C1D764.6020100@oracle.com> Message-ID: <56C1DD24.2060503@redhat.com> Hi Aleksey, On 15/02/16 13:49, Aleksey Shipilev wrote: > Yes, open AArch64 is included there. But now I realize new Unsafe > tests have not been run there, let me manhandle our infra into > doing that. If that is easy for you, can you check if AArch64 works > with this patch in your scenarios? It would be very good to run your Unsafe tests. However, they may still succeed but minus the desired optimizations. So, I'll apply the patch to my tree and check the generated code by eyeball. Can you detail any special magic to perform to run your tests? or is there a grimoire I can consult? > The changes are supposed to generate the same code for old Unsafe > methods -- the refactoring shuffles the compiler code around, but > the sequence of accesses/barriers should stay the same. Eyeballing > x86_64 assembly indeed shows it is the same, but I haven't looked > beyond x86. Ok, good. That was what I thought had been implemented last time I studied a posted change set. > So Roland's patch and those super-(awe|grue)some ARM64 .ad > matchers should be unaffected. If they are affected, then I screwed > up somewhere during refactoring. I'll wait for Roland's patch to > land before pushing these Unsafe changes anyway, and beef up on > testing. That's probably the better way to do it. 
Roland's change led to a significant lowering of the grue to awe ratio (/his/ awe allowed me to remove much of /my/ grue). regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Paul Argiry (US) From gromero at linux.vnet.ibm.com Mon Feb 15 14:22:38 2016 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Mon, 15 Feb 2016 12:22:38 -0200 Subject: RTM disabled for Linux on PPC64 LE In-Reply-To: References: <56BDE1EF.1020305@linux.vnet.ibm.com> Message-ID: <56C1DF2E.8070603@linux.vnet.ibm.com> Hello Martin, Thank you for your reply. The problematic behavior of syscalls has been addressed since kernel 4.2 (already present in, por instance, Ubuntu 15.10 and 16.04): https://goo.gl/d80xAJ I'm taking a closer look at the RTM tests and I'll make additional experiments as you suggested. So far I enabled RTM for Linux on ppc64le and there is no regression in the RTM test suite. I'm using kernel 4.2.0. The following patch was applied to http://hg.openjdk.java.net/jdk9/jdk9/hotspot, 5d17092b6917+ tip, and I used the (major + minor) version to enable RTM as you said: # HG changeset patch # User gromero # Date 1455540780 7200 # Mon Feb 15 10:53:00 2016 -0200 # Node ID 0e9540f2156c4c4d7d8215eb89109ff81be82f58 # Parent 5d17092b691720d71f06360fb0cc183fe2877faa Enable RTM for Linux on PPC64 LE Enable RTM for Linux kernel version equal or above 4.2, since the problematic behavior of performing a syscall from within transaction which could lead to unpredictable results has been addressed. Please, refer to https://goo.gl/fi4tjC diff --git a/src/cpu/ppc/vm/globalDefinitions_ppc.hpp b/src/cpu/ppc/vm/globalDefinitions_ppc.hpp --- a/src/cpu/ppc/vm/globalDefinitions_ppc.hpp +++ b/src/cpu/ppc/vm/globalDefinitions_ppc.hpp @@ -52,4 +52,9 @@ #define INCLUDE_RTM_OPT 1 #endif +// Enable RTM experimental support for Linux. +#if defined(COMPILER2) && defined(linux) +#define INCLUDE_RTM_OPT 1 +#endif + #endif // CPU_PPC_VM_GLOBALDEFINITIONS_PPC_HPP diff --git a/src/cpu/ppc/vm/vm_version_ppc.cpp b/src/cpu/ppc/vm/vm_version_ppc.cpp --- a/src/cpu/ppc/vm/vm_version_ppc.cpp +++ b/src/cpu/ppc/vm/vm_version_ppc.cpp @@ -255,7 +255,12 @@ } #endif #ifdef linux - // TODO: check kernel version (we currently have too old versions only) + // At least Linux kernel 4.2, as the problematic behavior of syscalls + // being called from within a transaction has been addressed. 
+ // Please, refer to commit 4b4fadba057c1af7689fc8fa182b13baL7 + if (os::Linux::os_version() >= 0x040200) { + os_too_old = false; + } #endif if (os_too_old) { vm_exit_during_initialization("RTM is not supported on this OS version."); diff --git a/src/os/linux/vm/os_linux.cpp b/src/os/linux/vm/os_linux.cpp --- a/src/os/linux/vm/os_linux.cpp +++ b/src/os/linux/vm/os_linux.cpp @@ -135,6 +135,7 @@ int os::Linux::_page_size = -1; const int os::Linux::_vm_default_page_size = (8 * K); bool os::Linux::_supports_fast_thread_cpu_time = false; +uint32_t os::Linux::_os_version = 0; const char * os::Linux::_glibc_version = NULL; const char * os::Linux::_libpthread_version = NULL; pthread_condattr_t os::Linux::_condattr[1]; @@ -4332,6 +4333,31 @@ return (tp.tv_sec * NANOSECS_PER_SEC) + tp.tv_nsec; } +void os::Linux::initialize_os_info() { + assert(_os_version == 0, "OS info already initialized"); + + struct utsname _uname; + + uint32_t major; + uint32_t minor; + uint32_t fix; + + uname(&_uname); // Not sure yet how to bail out if ret == -1 + sscanf(_uname.release,"%d.%d.%d", &major, + &minor, + &fix ); + + _os_version = (major << 16) | + (minor << 8 ) | + (fix << 0 ) ; +} + +uint32_t os::Linux::os_version() { + assert(_os_version != 0, "not initialized"); + return _os_version; +} + + ///// // glibc on Linux platform uses non-documented flag // to indicate, that some special sort of signal @@ -4552,6 +4578,8 @@ } init_page_sizes((size_t) Linux::page_size()); + Linux::initialize_os_info(); + Linux::initialize_system_info(); // main_thread points to the aboriginal thread diff --git a/src/os/linux/vm/os_linux.hpp b/src/os/linux/vm/os_linux.hpp --- a/src/os/linux/vm/os_linux.hpp +++ b/src/os/linux/vm/os_linux.hpp @@ -56,6 +56,12 @@ static GrowableArray* _cpu_to_node; + // Ox00AABBCC + // AA, Major Version + // BB, Minor Version + // CC, Fix Version + static uint32_t _os_version; + protected: static julong _physical_memory; @@ -198,6 +204,9 @@ static jlong fast_thread_cpu_time(clockid_t clockid); + static void initialize_os_info(); + static uint32_t os_version(); + // pthread_cond clock suppport private: static pthread_condattr_t _condattr[1]; Should I use any test suite besides the jtreg suite already present in the Hotspot forest? Best Regards, Gustavo On 12-02-2016 12:52, Doerr, Martin wrote: > Hi Gustavo, > > the reason why we disabled RTM for linux on PPC64 (big or little endian) was the problematic behavior of syscalls. > The old version of the document > www.kernel.org/doc/Documentation/powerpc/transactional_memory.txt > said: > ?Performing syscalls from within transaction is not recommended, and can lead to unpredictable results.? > > Transactions need to either pass completely or roll back completely without disturbing side effects of partially executed syscalls. > We rely on the kernel to abort transactions if necessary. > > The document has changed and it may possibly work with a new linux kernel. > However, we don't have such a new kernel, yet. So we can't test it at the moment. > I don't know which kernel version exactly contains the change. I guess this exact version number (major + minor) should be used for enabling RTM. > > I haven't looked into the tests, yet. There may be a need for additional adaptations and fixes. > > We appreciate if you make experiments and/or contributions. > > Thanks and best regards, > Martin > > > -----Original Message----- > From: ppc-aix-port-dev [mailto:ppc-aix-port-dev-bounces at openjdk.java.net] On Behalf Of Gustavo Romero > Sent: Freitag, 12. 
Februar 2016 14:45 > To: hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net > Subject: RTM disabled for Linux on PPC64 LE > Importance: High > > Hi, > As of now (tip 1922:be58b02c11f9, jdk9/jdk9 repo) Hotspot build for Linux on ppc64le of fails due to a simple uninitialized variable error: > > hotspot/src/share/vm/ci/ciMethodData.hpp:585:100: error: ?data? may be used uninitialized in this function > hotspot/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp:2408:78: error: ?md? may be used uninitialized in this function > > So this straightforward patch solves the issue: > diff -r 534c50395957 src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp > --- a/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Thu Jan 28 15:42:23 2016 -0800 > +++ b/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Mon Feb 08 17:13:14 2016 -0200 > @@ -2321,8 +2321,8 @@ > if (reg_conflict) { obj = dst; } > } > - ciMethodData* md; > - ciProfileData* data; > + ciMethodData* md = NULL; > + ciProfileData* data = NULL; > int mdo_offset_bias = 0; compiler/rtm > if (should_profile) { > ciMethod* method = op->profiled_method(); > > However, after the build, I realized that RTM is still disabled for Linux on ppc64le, failing 25 tests on compiler/rtm suite: > > http://hastebin.com/raw/ohoxiwaqih > > Hence after applying the following patches that enable RTM for Linux on ppc64le: > > diff -r 266fa9bb5297 src/cpu/ppc/vm/vm_version_ppc.cpp > --- a/src/cpu/ppc/vm/vm_version_ppc.cpp Thu Feb 04 16:48:39 2016 -0800 > +++ b/src/cpu/ppc/vm/vm_version_ppc.cpp Fri Feb 12 10:55:46 2016 -0200 > @@ -255,7 +255,9 @@ > } > #endif > #ifdef linux > - // TODO: check kernel version (we currently have too old versions only) > + if (os::Linux::os_version() >= 4) { // at least Linux kernel version 4 > + os_too_old = false; > + } > #endif > if (os_too_old) { > vm_exit_during_initialization("RTM is not supported on this OS version."); > > > diff -r 266fa9bb5297 src/os/linux/vm/os_linux.cpp > --- a/src/os/linux/vm/os_linux.cpp Thu Feb 04 16:48:39 2016 -0800 > +++ b/src/os/linux/vm/os_linux.cpp Fri Feb 12 10:58:10 2016 -0200 > @@ -135,6 +135,7 @@ > int os::Linux::_page_size = -1; > const int os::Linux::_vm_default_page_size = (8 * K); > bool os::Linux::_supports_fast_thread_cpu_time = false; > +uint32_t os::Linux::_os_version = 0; > const char * os::Linux::_glibc_version = NULL; > const char * os::Linux::_libpthread_version = NULL; > pthread_condattr_t os::Linux::_condattr[1]; > @@ -4332,6 +4333,21 @@ > return (tp.tv_sec * NANOSECS_PER_SEC) + tp.tv_nsec; > } > +void os::Linux::initialize_os_info() { > + assert(_os_version == 0, "OS info already initialized"); > + > + struct utsname _uname; > + + uname(&_uname); // Not sure yet how deal if ret == -1 > + _os_version = atoi(_uname.release); > +} > + > +uint32_t os::Linux::os_version() { > + assert(_os_version != 0, "not initialized"); > + return _os_version; > +} > + > + > ///// > // glibc on Linux platform uses non-documented flag > // to indicate, that some special sort of signal > @@ -4553,6 +4569,7 @@ > init_page_sizes((size_t) Linux::page_size()); > Linux::initialize_system_info(); > + Linux::initialize_os_info(); > // main_thread points to the aboriginal thread > Linux::_main_thread = pthread_self(); > > > diff -r 266fa9bb5297 src/os/linux/vm/os_linux.hpp > --- a/src/os/linux/vm/os_linux.hpp Thu Feb 04 16:48:39 2016 -0800 > +++ b/src/os/linux/vm/os_linux.hpp Fri Feb 12 10:59:01 2016 -0200 > @@ -55,7 +55,7 @@ > static bool _supports_fast_thread_cpu_time; > static GrowableArray* _cpu_to_node; > - > + static uint32_t _os_version; 
protected: > static julong _physical_memory; > @@ -198,6 +198,9 @@ > static jlong fast_thread_cpu_time(clockid_t clockid); > + static void initialize_os_info(); > + static uint32_t os_version(); + > // pthread_cond clock suppport > private: > static pthread_condattr_t _condattr[1]; > > > 23 tests are now passing: http://hastebin.com/raw/oyicagusod > > Is there a reason to let RTM disabled for Linux on ppc64le by now? Could somebody explain what is currently missing on PPC64 LE RTM implementation in order to make all RTM tests pass? > > Thank you. > > Regards, > -- > Gustavo Romero > From paul.sandoz at oracle.com Mon Feb 15 14:52:31 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 15 Feb 2016 15:52:31 +0100 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <56C1D973.3060802@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56C1D973.3060802@oracle.com> Message-ID: <7E6EB6FE-2422-4544-80F9-9939923B4711@oracle.com> > On 15 Feb 2016, at 14:58, Maurizio Cimadamore wrote: > > Langtools changes look ok to me. Thanks! > Would it make sense to file a follow up issue to add some tests in this area (i.e. bytecode tests using the classfile API) ? > Yes, created https://bugs.openjdk.java.net/browse/JDK-8149821 . Paul. From forax at univ-mlv.fr Mon Feb 15 14:58:42 2016 From: forax at univ-mlv.fr (Remi Forax) Date: Mon, 15 Feb 2016 15:58:42 +0100 (CET) Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <56C1D973.3060802@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56C1D973.3060802@oracle.com> Message-ID: <1533888573.323371.1455548322267.JavaMail.zimbra@u-pem.fr> Hi all, The comment in Infer "//The return type for a polymorphic signature call" should be updated to reflect the new implementation. and this change in the way to do the inference (if the return type is not Object use the declared return type) is too ad hoc for me, we will need to do the same special case for the parameter types, soon, no ? R?mi ----- Mail original ----- > De: "Maurizio Cimadamore" > ?: "Paul Sandoz" , "hotspot-dev developers" , "jdk9-dev" > > Envoy?: Lundi 15 F?vrier 2016 14:58:11 > Objet: Re: RFR JDK-8149644 Integrate VarHandles > > Langtools changes look ok to me. Would it make sense to file a follow up > issue to add some tests in this area (i.e. bytecode tests using the > classfile API) ? > > Cheers > Maurizio > > On 11/02/16 15:39, Paul Sandoz wrote: > > Hi, > > > > This is the implementation review request for VarHandles. > > > > Langtools: > > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/langtools/webrev/index.html > > > > Hotspot: > > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/hotspot/webrev/index.html > > > > JDK: > > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html > > > > The spec/API review is proceeding here [1]. > > > > The patches depend on Unsafe changes [2] and ByteBuffer changes [3]. > > > > Recent (as of today) JPRT runs for core and hotspot tests pass without > > failure. Many parts of the code have been soaking in the Valhalla repo for > > over a year, and it?s been soaking in the sandbox for quite and many a > > JPRT run was performed. > > > > It is planned to push through hs-comp as is the case for the dependent > > patches, and thus minimise any delays due to integration between forests. > > > > > > The Langtools changes are small. 
Tweaks were made to support updates to > > signature polymorphic methods and where may be located, in addition to > > supporting compilation of calls to MethodHandle.link*. > > > > > > The Hotspot changes are not very large. It?s mostly a matter of augmenting > > checks for MethodHandle to include that for VarHandle. It?s tempting to > > generalise the ?invokehandle" invocation as i believe there are other > > use-cases where it might be useful, but i resisted temptation here. I > > wanted to focus on the minimal changes required. > > > > > > The JDK changes are more substantial, but a large proportion are new tests. > > The source compilation approach taken is to use templates, the same > > approach as for code in the nio package, to generate both implementation > > and test source code. The implementations are generated by the build, the > > tests are pre-generated. I believe the tests should have good coverage but > > we have yet to run any code coverage tool. > > > > The approach to invocation of VarHandle signature polymoprhic methods is > > slightly different to that of MethodHandles. I wanted to ensure that > > linking for the common cases avoids lambda form creation, compilation and > > therefore class spinning. That reduces start up costs and also potential > > circular dependencies that might be induced in the VM boot process if > > VarHandles are employed early on. > > > > For common basic (i.e. erased ref and widened primitive) method signatures, > > namely all those that matter for the efficient atomic operations there are > > pre-generated methods that would otherwise be generated from creating and > > compiling invoker lambda forms. Those methods reside on the > > VarHandleGuards class. When the VM makes an up call to > > MethodHandleNatives.linkMethod to link a call site then this up-called > > method will first check if an appropriate pre-generated method exists on > > VarHandleGuards and if so it links to that, otherwise it falls back to a > > method on a class generated from compiling a lambda form. For testing > > purposes there is a system property available to switch off this > > optimisation when linking [*]. > > > > Each VarHandle instance of the same variable type produced from the same > > factory will share an underlying immutable instance of a VarForm that > > contains a set of MemberName instances, one for each implementation of a > > signature polymorphic method (a value of null means unsupported). The > > invoke methods (on VarHandleGuards or on lambda forms) will statically > > link to such MemberName instances using a call to > > MethodHandle.linkToStatic. > > > > There are a couple of TODOs in comments, those are all on non-critical code > > paths and i plan to chase them up afterwards. > > > > C1 does not support constant folding for @Stable arrays hence why in > > certain cases we have exploded stuff into fields that are operated on > > using if/else loops. We can simplify such code if/when C1 support is > > added. > > > > > > Thanks, > > Paul. 
> > > > [1] > > http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038150.html > > [2] > > http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2016-January/020953.html > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2016-January/021514.html > > [3] > > http://mail.openjdk.java.net/pipermail/nio-dev/2016-February/003535.html > > > > [*] This technique might be useful for common signatures of MH invokers to > > reduce associated costs of lambda form creation and compilation in the > > interim of something better. > > From vladimir.x.ivanov at oracle.com Mon Feb 15 15:49:50 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Mon, 15 Feb 2016 18:49:50 +0300 Subject: [9] RFR (XS): 8138922: StubCodeDesc constructor publishes partially-constructed objects on StubCodeDesc::_list In-Reply-To: <56BE3253.8030804@oracle.com> References: <56BB7DE1.4020002@oracle.com> <56BDDA5E.4080808@oracle.com> <56BDDE17.5090100@oracle.com> <56BE11BE.3090808@oracle.com> <56BE3253.8030804@oracle.com> Message-ID: <56C1F39E.3060302@oracle.com> Vladimir, Coleen, David, thanks for reviews. Best regards, Vladimir Ivanov On 2/12/16 10:28 PM, Vladimir Kozlov wrote: > webrev.02 is fine for me. > > Thanks, > Vladimir > > On 2/12/16 9:09 AM, Vladimir Ivanov wrote: >> Coleen, >> >> >> >> >> On 2/12/16 4:28 PM, Coleen Phillimore wrote: >>> >>> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01/src/share/vm/runtime/thread.cpp.udiff.html >>> >>> >>> >>> This has a collision with >>> >>> RFR: 8148630: Convert TraceStartupTime to Unified Logging >> Removed. I asked Rachel to cover java.lang.invoke case. >> >>> >>> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01/src/share/vm/code/codeBlob.cpp.udiff.html >>> >>> >>> >>> The 'new' operators already call vm_exit_out_of_memory rather than >>> returning null. MethodHandlesAdapterBlob may already do this. >> I don't see that happening in BufferBlob::operator new (which is >> called in ). It allocates right in the code cache and >> returns NULL if allocation fails. >> >> I replicate the code from other stub generators, e.g.: >> void StubRoutines::initialize2() { >> ... >> _code2 = BufferBlob::create("StubRoutines (2)", code_size2); >> if (_code2 == NULL) { >> vm_exit_out_of_memory(code_size2, OOM_MALLOC_ERROR, "CodeCache: >> no room for StubRoutines (2)"); >> } >> >> Or do you suggest to add MethodHandlesAdapterBlob::operator new and >> move the check there? >> >> Updated webrev: >> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.02 >> >> Had to move StubCodeDesc::freeze() call later in the init sequence: >> JFR also allocates some stubs. >> >> Best regards, >> Vladimir Ivanov >> >>> >>> Coleen >>> >>> On 2/12/16 8:13 AM, Vladimir Ivanov wrote: >>>> Vladimir, David, Andrew, thanks again for the feedback. >>>> >>>> Updated version: >>>> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.01 >>>> >>>> I moved method handle adapters generation to VM init phase and added >>>> verification logic to ensure there are no modifications to the >>>> StubCodeDesc::_list after that. >>>> >>>> Also, slightly refactored java.lang.invoke initialization logic. >>>> >>>> Best regards, >>>> Vladimir Ivanov >>>> >>>> On 2/10/16 9:13 PM, Vladimir Ivanov wrote: >>>>> http://cr.openjdk.java.net/~vlivanov/8138922/webrev.00 >>>>> https://bugs.openjdk.java.net/browse/JDK-8138922 >>>>> >>>>> StubCodeDesc keeps a list of all descriptors rooted at >>>>> StubCodeDesc::_list by placing newly instantiated objects there at the >>>>> end of the constructor. 
Unfortunately, it doesn't guarantee that only >>>>> fully-constructed objects are visible, because compiler (or HW) can >>>>> reorder the stores. >>>>> >>>>> Since method handle adapters are generated on demand when j.l.i >>>>> framework is initialized, it's possible there are readers iterating >>>>> over >>>>> the list at the moment. It's not a problem per se until everybody >>>>> sees a >>>>> consistent view of the list. >>>>> >>>>> The fix is to insert a StoreStore barrier before registering an object >>>>> on the list. >>>>> >>>>> (I also considered moving MH adapter allocation to VM initialization >>>>> phase before anybody reads the list, but it's non-trivial since >>>>> MethodHandles::generate_adapters() has a number of implicit >>>>> dependencies.) >>>>> >>>>> Testing: manual (verified StubCodeMark assembly), JPRT >>>>> >>>>> Thanks! >>>>> >>>>> Best regards, >>>>> Vladimir Ivanov >>> From paul.sandoz at oracle.com Mon Feb 15 16:37:32 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 15 Feb 2016 17:37:32 +0100 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <1533888573.323371.1455548322267.JavaMail.zimbra@u-pem.fr> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56C1D973.3060802@oracle.com> <1533888573.323371.1455548322267.JavaMail.zimbra@u-pem.fr> Message-ID: Hi Remi, > On 15 Feb 2016, at 15:58, Remi Forax wrote: > > Hi all, > > The comment in Infer > "//The return type for a polymorphic signature call" > should be updated to reflect the new implementation. > That comment should really be folded into the first if block. I could do that as follows: // The return type of the polymorphic signature is polymorphic, // and is computed from the ... And then in the else block // The return type of the polymorphic signature is fixed (not polymorphic) ? > and this change in the way to do the inference (if the return type is not Object use the declared return type) is too ad hoc for me, > we will need to do the same special case for the parameter types, soon, no ? > Do you have any use-cases in mind? Rather than ad-hoc i would ague instead the enhancement of signature-polymorphic methods is limited to that required by the current use-cases. IIRC I did pull on that more significantly at one point when i had sub-types for array handles since the index need not be polymorphic. But we dialled back from that approach. Paul. From maurizio.cimadamore at oracle.com Mon Feb 15 17:10:35 2016 From: maurizio.cimadamore at oracle.com (Maurizio Cimadamore) Date: Mon, 15 Feb 2016 17:10:35 +0000 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <7E6EB6FE-2422-4544-80F9-9939923B4711@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56C1D973.3060802@oracle.com> <7E6EB6FE-2422-4544-80F9-9939923B4711@oracle.com> Message-ID: <56C2068B.5080908@oracle.com> On 15/02/16 14:52, Paul Sandoz wrote: >> On 15 Feb 2016, at 14:58, Maurizio Cimadamore wrote: >> >> Langtools changes look ok to me. > Thanks! > > >> Would it make sense to file a follow up issue to add some tests in this area (i.e. bytecode tests using the classfile API) ? >> > Yes, created https://bugs.openjdk.java.net/browse/JDK-8149821 . > > Paul. Thanks! 
Maurizio From sangheon.kim at oracle.com Mon Feb 15 17:13:51 2016 From: sangheon.kim at oracle.com (sangheon) Date: Mon, 15 Feb 2016 09:13:51 -0800 Subject: RFR: 8144578: TestOptionsWithRanges test only ever uses the default collector In-Reply-To: <56C1A327.1010907@oracle.com> References: <5696854D.1000604@oracle.com> <56969C91.9050003@oracle.com> <5697A9D3.1000700@oracle.com> <56BC9295.8050806@oracle.com> <56C16307.7030108@oracle.com> <56C19D4A.9080705@oracle.com> <56C1A18C.6040400@oracle.com> <56C1A327.1010907@oracle.com> Message-ID: <56C2074F.9020204@oracle.com> Hi Dmitry, webrev.03 looks good to me. Thanks, Sangheon On 02/15/2016 02:06 AM, Dmitry Dmitriev wrote: > David, > I removed duplicated > getErrorMessageCommandLine(value):http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.03/ > > Thanks, > Dmitry > > On 15.02.2016 12:59, David Holmes wrote: >> On 15/02/2016 7:41 PM, Dmitry Dmitriev wrote: >>> Hello David, >>> >>> Thank you for looking into this. >>> >>> On 15.02.2016 8:32, David Holmes wrote: >>>> Hi Dmitry, >>>> >>>> On 11/02/2016 11:54 PM, Dmitry Dmitriev wrote: >>>>> Hello, >>>>> >>>>> Please, need a Reviewer for that change. >>>>> I uploaded updated webrev.02: >>>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.02/ >>>>> >>>>> Difference from webrev.01: I removed excluding of MinTLABSize and >>>>> MarkSweepAlwaysCompactCount options from testing because underlying >>>>> problems were fixed. >>>> >>>> JVMOption.java: >>>> >>>> The refactoring tended to obscure the primary change. But as you have >>>> refactored could you also remove the duplicated call to >>>> getErrorMessageCommandLine(value) please :) >>> Yes, thanks. Will do! >>>> >>>> --- >>>> >>>> JVMOptionsUtils.java >>>> >>>> Is there not a more direct way to get the current GC argument from >>>> jtreg ? >>> The one of alternatives was to parse jtreg properties with command line >>> options, but was decided not to depend on command line options in this >>> case. >> >> Okay. Seems complicated but it is what it is. >> >>>> >>>> --- >>>> >>>> JVMStartup.java >>>> >>>> Not sure what relevance the WeakRef usage and System.gc really has >>>> here. Not sure why weakRef is volatile, nor createWeakRef is >>>> synchronized ?? >>> The idea was to make this class slightly more complex(instead of using >>> simple HelloWorld before) and therefore I use volatile and >>> synchronized. >>> Also I used WeakRef and System.gc() to ensure that gc was happened. >> >> Okay. The details of the "test" don't really mastter. >> >> Thanks, >> David >> >>> Thank you, >>> Dmitry >>>> >>>> Thanks, >>>> David >>>> ----- >>>> >>>>> Thanks, >>>>> Dmitry >>>>> >>>>> On 14.01.2016 16:59, Dmitry Dmitriev wrote: >>>>>> Hi Sangheon, >>>>>> >>>>>> Thank you for the review! Updated webrev: >>>>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.01/ >>>>>> >>>>>> Comments inline. >>>>>> >>>>>> On 13.01.2016 21:50, sangheon wrote: >>>>>>> Hi Dmitry, >>>>>>> >>>>>>> Thank you for fixing this. >>>>>>> Overall seems good. 
>>>>>>> >>>>>>> -------------------------------------------------------------------- >>>>>>> >>>>>>> test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>>>>>> >>>>>>> >>>>>>> 87 /* >>>>>>> 88 * JDK-8144578 >>>>>>> 89 * Temporarily remove testing of max range for >>>>>>> ParGCArrayScanChunk >>>>>>> because >>>>>>> 90 * JVM can hang when ParGCArrayScanChunk=4294967296 and >>>>>>> ParallelGC >>>>>>> is used >>>>>>> 91 */ >>>>>>> 92 excludeTestMaxRange("ParGCArrayScanChunk"); >>>>>>> >>>>>>> issue number should be 8145204. >>>>>> Fixed. >>>>>>> >>>>>>> -------------------------------------------------------------------- >>>>>>> >>>>>>> test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java >>>>>>> >>>>>>> >>>>>>> >>>>>>> line 181 >>>>>>> >>>>>>> - if (name.startsWith("G1")) { >>>>>>> - option.addPrepend("-XX:+UseG1GC"); >>>>>>> - } >>>>>>> - >>>>>>> - if (name.startsWith("CMS")) { >>>>>>> - option.addPrepend("-XX:+UseConcMarkSweepGC"); >>>>>>> - } >>>>>>> - >>>>>>> >>>>>>> Is this change really needed for dedicated gc flags(starting with >>>>>>> "G1" or "CMS")? >>>>>>> I thought this CR is targeted for non-dedicated gc flags such as >>>>>>> TLABWasteIncrement. >>>>>> I return deleted lines. >>>>>> >>>>>> Thanks, >>>>>> Dmitry >>>>>>> >>>>>>> And if you still think that above lines should be removed, please >>>>>>> remove line 224 as well. >>>>>>> >>>>>>> 224 case "NewSizeThreadIncrease": >>>>>>> 225 option.addPrepend("-XX:+UseSerialGC"); >>>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Sangheon >>>>>>> >>>>>>> >>>>>>> On 01/13/2016 09:11 AM, Dmitry Dmitriev wrote: >>>>>>>> Hello, >>>>>>>> >>>>>>>> Please review small enhancement to the command line option >>>>>>>> validation test framework which allow to run test with >>>>>>>> different GCs. >>>>>>>> Few comments: >>>>>>>> 1) Code which executed for testing was moved from >>>>>>>> JVMOptionsUtils.java to separate class(JVMStartup.java) to avoid >>>>>>>> overhead at java start-up for determining vm and gc type. >>>>>>>> 2) runJavaWithParam method in JVMOption.java was refactored to >>>>>>>> avoid >>>>>>>> code duplication. >>>>>>>> >>>>>>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8144578 >>>>>>>> webrev.00: >>>>>>>> http://cr.openjdk.java.net/~ddmitriev/8144578/webrev.00/ >>>>>>>> >>>>>>>> Testing: tested on all platforms with different gc by RBT, failed >>>>>>>> flags were temporary removed from testing in >>>>>>>> TestOptionsWithRanges.java >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Dmitry >>>>>>> >>>>>> >>>>> >>> > From forax at univ-mlv.fr Mon Feb 15 22:45:28 2016 From: forax at univ-mlv.fr (Remi Forax) Date: Mon, 15 Feb 2016 23:45:28 +0100 (CET) Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56C1D973.3060802@oracle.com> <1533888573.323371.1455548322267.JavaMail.zimbra@u-pem.fr> Message-ID: <531723180.551702.1455576328083.JavaMail.zimbra@u-pem.fr> ----- Mail original ----- > De: "Paul Sandoz" > Cc: "jdk9-dev" , "hotspot-dev developers" > Envoy?: Lundi 15 F?vrier 2016 17:37:32 > Objet: Re: RFR JDK-8149644 Integrate VarHandles > > Hi Remi, > > > On 15 Feb 2016, at 15:58, Remi Forax wrote: > > > > Hi all, > > > > The comment in Infer > > "//The return type for a polymorphic signature call" > > should be updated to reflect the new implementation. > > > > That comment should really be folded into the first if block. 
> > I could do that as follows: > > // The return type of the polymorphic signature is polymorphic, > // and is computed from the ... > > And then in the else block > > // The return type of the polymorphic signature is fixed (not polymorphic) > > ? yes, good idea. > > > > and this change in the way to do the inference (if the return type is not > > Object use the declared return type) is too ad hoc for me, > > we will need to do the same special case for the parameter types, soon, no > > ? > > > > Do you have any use-cases in mind? > > Rather than ad-hoc i would argue instead the enhancement of > signature-polymorphic methods is limited to that required by the current > use-cases. > > IIRC I did pull on that more significantly at one point when i had sub-types > for array handles since the index need not be polymorphic. But we dialled > back from that approach. as you said one use case is to be able to fix an index, but perhaps a more interesting case is to be able to bound the number of parameters, by example for compareAndSet boolean compareAndSet(Object expected, Object value) is better than boolean compareAndSet(Object... args); > > Paul. > R?mi From paul.sandoz at oracle.com Tue Feb 16 09:33:35 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Tue, 16 Feb 2016 10:33:35 +0100 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <531723180.551702.1455576328083.JavaMail.zimbra@u-pem.fr> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56C1D973.3060802@oracle.com> <1533888573.323371.1455548322267.JavaMail.zimbra@u-pem.fr> <531723180.551702.1455576328083.JavaMail.zimbra@u-pem.fr> Message-ID: <5ABD11CF-407C-4BAD-95A7-E267C5F1682D@oracle.com> > On 15 Feb 2016, at 23:45, Remi Forax wrote: > >>> The comment in Infer >>> "//The return type for a polymorphic signature call" >>> should be updated to reflect the new implementation. >>> >> >> That comment should really be folded into the first if block. >> >> I could do that as follows: >> >> // The return type of the polymorphic signature is polymorphic, >> // and is computed from the ... >> >> And then in the else block >> >> // The return type of the polymorphic signature is fixed (not polymorphic) >> >> ? > > yes, good idea. > Updated in place. >> >> >>> and this change in the way to do the inference (if the return type is not >>> Object use the declared return type) is too ad hoc for me, >>> we will need to do the same special case for the parameter types, soon, no >>> ? >>> >> >> Do you have any use-cases in mind? >> >> Rather than ad-hoc i would argue instead the enhancement of >> signature-polymorphic methods is limited to that required by the current >> use-cases. >> >> IIRC I did pull on that more significantly at one point when i had sub-types >> for array handles since the index need not be polymorphic. But we dialled >> back from that approach. > > as you said one use case is to be able to fix an index, but perhaps a more interesting case is to be able to bound the number of parameters, > by example for compareAndSet > boolean compareAndSet(Object expected, Object value) > is better than > boolean compareAndSet(Object... args); > That ain?t gonna work because the shape is defined by the factory method producing the var handle, there could be zero or more coordinate arguments preceding zero or more explicit value arguments. We cannot declare a varargs parameter preceding other parameters and declaring Object[] is an awkward fit. 
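To make the shape variance concrete, here is a minimal sketch of the three common cases, using the factory and access-mode names from the API under review (treat the exact names as provisional):

  import java.lang.invoke.MethodHandles;
  import java.lang.invoke.VarHandle;

  class Shapes {
      static int sCount;  // static field: zero coordinates
      int iCount;         // instance field: one coordinate (the receiver)

      public static void main(String[] args) throws ReflectiveOperationException {
          MethodHandles.Lookup l = MethodHandles.lookup();

          VarHandle svh = l.findStaticVarHandle(Shapes.class, "sCount", int.class);
          VarHandle ivh = l.findVarHandle(Shapes.class, "iCount", int.class);
          VarHandle avh = MethodHandles.arrayElementVarHandle(int[].class);

          Shapes s = new Shapes();
          int[] a = new int[8];

          // Same signature-polymorphic name, different arity per handle:
          svh.compareAndSet(0, 1);        // (expected, new)
          ivh.compareAndSet(s, 0, 1);     // (receiver, expected, new)
          avh.compareAndSet(a, 3, 0, 1);  // (array, index, expected, new)
      }
  }

A single declared arity such as compareAndSet(Object expected, Object value) only fits the zero-coordinate case, which is why the shape has to come from the producing factory rather than from the method declaration.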
It?s more that i would care to bite off in terms of tweaking the definition of signature-polymorphism. Paul.. From martin.doerr at sap.com Tue Feb 16 13:33:31 2016 From: martin.doerr at sap.com (Doerr, Martin) Date: Tue, 16 Feb 2016 13:33:31 +0000 Subject: RTM disabled for Linux on PPC64 LE In-Reply-To: <56C1DF2E.8070603@linux.vnet.ibm.com> References: <56BDE1EF.1020305@linux.vnet.ibm.com> <56C1DF2E.8070603@linux.vnet.ibm.com> Message-ID: <82585848434d4624ae08ccacac542a17@DEWDFE13DE14.global.corp.sap> Hi Gustavo, thanks for the information and for working on this topic. I have used SPEC jbb2005 to test and benchmark RTM on PPC64. It has worked even with the old linux kernel to some extent. There are currently the following problems: The C2's scratch buffer seems to be too small if you enable all options: -XX:+UnlockExperimentalVMOptions -XX:+UseRTMLocking -XX:+UseRTMForStackLocks -XX:+UseRTMDeopt I guess we need to increase MAX_inst_size in ScratchBufferBlob (compile.hpp). I didn't have the time to try, yet. The following issue is important for performance work: RTM does not work with BiasedLocking. The latter gets switched off if RTM is activated which has a large performance impact (especially in jbb2005). I would disable it for a reference measurement: -XX:-UseBiasedLocking Unfortunately, RTM was slower than BiasedLocking but faster than the reference (without both) which tells me that there's room for improvement. There are basically 3 classes of locks: 1. no contention 2. contention on lock, low contention on data 3. high contention on data I believe the optimal treatment for the cases would be: 1. Biased Locking 2. Transactional Memory 3. classical locking with lock inflating I think it would be good if the JVM could optimize for all these cases in the future. But that would add additional complexity and code size. Best regards, Martin -----Original Message----- From: Gustavo Romero [mailto:gromero at linux.vnet.ibm.com] Sent: Montag, 15. Februar 2016 15:23 To: Doerr, Martin ; hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net Cc: Breno Leitao Subject: Re: RTM disabled for Linux on PPC64 LE Hello Martin, Thank you for your reply. The problematic behavior of syscalls has been addressed since kernel 4.2 (already present in, por instance, Ubuntu 15.10 and 16.04): https://goo.gl/d80xAJ I'm taking a closer look at the RTM tests and I'll make additional experiments as you suggested. So far I enabled RTM for Linux on ppc64le and there is no regression in the RTM test suite. I'm using kernel 4.2.0. The following patch was applied to http://hg.openjdk.java.net/jdk9/jdk9/hotspot, 5d17092b6917+ tip, and I used the (major + minor) version to enable RTM as you said: # HG changeset patch # User gromero # Date 1455540780 7200 # Mon Feb 15 10:53:00 2016 -0200 # Node ID 0e9540f2156c4c4d7d8215eb89109ff81be82f58 # Parent 5d17092b691720d71f06360fb0cc183fe2877faa Enable RTM for Linux on PPC64 LE Enable RTM for Linux kernel version equal or above 4.2, since the problematic behavior of performing a syscall from within transaction which could lead to unpredictable results has been addressed. Please, refer to https://goo.gl/fi4tjC diff --git a/src/cpu/ppc/vm/globalDefinitions_ppc.hpp b/src/cpu/ppc/vm/globalDefinitions_ppc.hpp --- a/src/cpu/ppc/vm/globalDefinitions_ppc.hpp +++ b/src/cpu/ppc/vm/globalDefinitions_ppc.hpp @@ -52,4 +52,9 @@ #define INCLUDE_RTM_OPT 1 #endif +// Enable RTM experimental support for Linux. 
+#if defined(COMPILER2) && defined(linux) +#define INCLUDE_RTM_OPT 1 +#endif + #endif // CPU_PPC_VM_GLOBALDEFINITIONS_PPC_HPP diff --git a/src/cpu/ppc/vm/vm_version_ppc.cpp b/src/cpu/ppc/vm/vm_version_ppc.cpp --- a/src/cpu/ppc/vm/vm_version_ppc.cpp +++ b/src/cpu/ppc/vm/vm_version_ppc.cpp @@ -255,7 +255,12 @@ } #endif #ifdef linux - // TODO: check kernel version (we currently have too old versions only) + // At least Linux kernel 4.2, as the problematic behavior of syscalls + // being called from within a transaction has been addressed. + // Please, refer to commit 4b4fadba057c1af7689fc8fa182b13baL7 + if (os::Linux::os_version() >= 0x040200) { + os_too_old = false; + } #endif if (os_too_old) { vm_exit_during_initialization("RTM is not supported on this OS version."); diff --git a/src/os/linux/vm/os_linux.cpp b/src/os/linux/vm/os_linux.cpp --- a/src/os/linux/vm/os_linux.cpp +++ b/src/os/linux/vm/os_linux.cpp @@ -135,6 +135,7 @@ int os::Linux::_page_size = -1; const int os::Linux::_vm_default_page_size = (8 * K); bool os::Linux::_supports_fast_thread_cpu_time = false; +uint32_t os::Linux::_os_version = 0; const char * os::Linux::_glibc_version = NULL; const char * os::Linux::_libpthread_version = NULL; pthread_condattr_t os::Linux::_condattr[1]; @@ -4332,6 +4333,31 @@ return (tp.tv_sec * NANOSECS_PER_SEC) + tp.tv_nsec; } +void os::Linux::initialize_os_info() { + assert(_os_version == 0, "OS info already initialized"); + + struct utsname _uname; + + uint32_t major; + uint32_t minor; + uint32_t fix; + + uname(&_uname); // Not sure yet how to bail out if ret == -1 + sscanf(_uname.release,"%d.%d.%d", &major, + &minor, + &fix ); + + _os_version = (major << 16) | + (minor << 8 ) | + (fix << 0 ) ; +} + +uint32_t os::Linux::os_version() { + assert(_os_version != 0, "not initialized"); + return _os_version; +} + + ///// // glibc on Linux platform uses non-documented flag // to indicate, that some special sort of signal @@ -4552,6 +4578,8 @@ } init_page_sizes((size_t) Linux::page_size()); + Linux::initialize_os_info(); + Linux::initialize_system_info(); // main_thread points to the aboriginal thread diff --git a/src/os/linux/vm/os_linux.hpp b/src/os/linux/vm/os_linux.hpp --- a/src/os/linux/vm/os_linux.hpp +++ b/src/os/linux/vm/os_linux.hpp @@ -56,6 +56,12 @@ static GrowableArray* _cpu_to_node; + // Ox00AABBCC + // AA, Major Version + // BB, Minor Version + // CC, Fix Version + static uint32_t _os_version; + protected: static julong _physical_memory; @@ -198,6 +204,9 @@ static jlong fast_thread_cpu_time(clockid_t clockid); + static void initialize_os_info(); + static uint32_t os_version(); + // pthread_cond clock suppport private: static pthread_condattr_t _condattr[1]; Should I use any test suite besides the jtreg suite already present in the Hotspot forest? Best Regards, Gustavo On 12-02-2016 12:52, Doerr, Martin wrote: > Hi Gustavo, > > the reason why we disabled RTM for linux on PPC64 (big or little endian) was the problematic behavior of syscalls. > The old version of the document > www.kernel.org/doc/Documentation/powerpc/transactional_memory.txt > said: > ?Performing syscalls from within transaction is not recommended, and can lead to unpredictable results.? > > Transactions need to either pass completely or roll back completely without disturbing side effects of partially executed syscalls. > We rely on the kernel to abort transactions if necessary. > > The document has changed and it may possibly work with a new linux kernel. > However, we don't have such a new kernel, yet. 
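(As a side note on the version check above: the packed 0x00AABBCC encoding and the 0x040200 threshold can be exercised outside of HotSpot. The following is a minimal standalone sketch, not part of the patch; the main() harness and the handling of a failed uname() call are illustrative assumptions only, since the patch itself still leaves that case open.)

  // Standalone sketch of the kernel-version packing used in the patch above.
  #include <cstdint>
  #include <cstdio>
  #include <sys/utsname.h>

  static uint32_t packed_kernel_version() {
    struct utsname un;
    if (uname(&un) != 0) {
      return 0;                       // assumption: treat failure as "too old"
    }
    unsigned int major = 0, minor = 0, fix = 0;
    // "4.2.0-35-generic" parses as major=4, minor=2, fix=0; anything after
    // the third number is ignored by sscanf.
    sscanf(un.release, "%u.%u.%u", &major, &minor, &fix);
    return ((uint32_t)major << 16) | ((uint32_t)minor << 8) | (uint32_t)fix;
  }

  int main() {
    uint32_t v = packed_kernel_version();
    // 4.2.0 packs to 0x040200, so the patch's os_version() >= 0x040200 test
    // accepts 4.2 and anything newer.
    printf("kernel = 0x%06x, RTM-capable kernel: %s\n",
           (unsigned)v, (v >= 0x040200) ? "yes" : "no");
    return 0;
  }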
So we can't test it at the moment. > I don't know which kernel version exactly contains the change. I guess this exact version number (major + minor) should be used for enabling RTM. > > I haven't looked into the tests, yet. There may be a need for additional adaptations and fixes. > > We appreciate if you make experiments and/or contributions. > > Thanks and best regards, > Martin > > > -----Original Message----- > From: ppc-aix-port-dev [mailto:ppc-aix-port-dev-bounces at openjdk.java.net] On Behalf Of Gustavo Romero > Sent: Freitag, 12. Februar 2016 14:45 > To: hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net > Subject: RTM disabled for Linux on PPC64 LE > Importance: High > > Hi, > As of now (tip 1922:be58b02c11f9, jdk9/jdk9 repo) Hotspot build for Linux on ppc64le of fails due to a simple uninitialized variable error: > > hotspot/src/share/vm/ci/ciMethodData.hpp:585:100: error: ?data? may be used uninitialized in this function > hotspot/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp:2408:78: error: ?md? may be used uninitialized in this function > > So this straightforward patch solves the issue: > diff -r 534c50395957 src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp > --- a/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Thu Jan 28 15:42:23 2016 -0800 > +++ b/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Mon Feb 08 17:13:14 2016 -0200 > @@ -2321,8 +2321,8 @@ > if (reg_conflict) { obj = dst; } > } > - ciMethodData* md; > - ciProfileData* data; > + ciMethodData* md = NULL; > + ciProfileData* data = NULL; > int mdo_offset_bias = 0; compiler/rtm > if (should_profile) { > ciMethod* method = op->profiled_method(); > > However, after the build, I realized that RTM is still disabled for Linux on ppc64le, failing 25 tests on compiler/rtm suite: > > http://hastebin.com/raw/ohoxiwaqih > > Hence after applying the following patches that enable RTM for Linux on ppc64le: > > diff -r 266fa9bb5297 src/cpu/ppc/vm/vm_version_ppc.cpp > --- a/src/cpu/ppc/vm/vm_version_ppc.cpp Thu Feb 04 16:48:39 2016 -0800 > +++ b/src/cpu/ppc/vm/vm_version_ppc.cpp Fri Feb 12 10:55:46 2016 -0200 > @@ -255,7 +255,9 @@ > } > #endif > #ifdef linux > - // TODO: check kernel version (we currently have too old versions only) > + if (os::Linux::os_version() >= 4) { // at least Linux kernel version 4 > + os_too_old = false; > + } > #endif > if (os_too_old) { > vm_exit_during_initialization("RTM is not supported on this OS version."); > > > diff -r 266fa9bb5297 src/os/linux/vm/os_linux.cpp > --- a/src/os/linux/vm/os_linux.cpp Thu Feb 04 16:48:39 2016 -0800 > +++ b/src/os/linux/vm/os_linux.cpp Fri Feb 12 10:58:10 2016 -0200 > @@ -135,6 +135,7 @@ > int os::Linux::_page_size = -1; > const int os::Linux::_vm_default_page_size = (8 * K); > bool os::Linux::_supports_fast_thread_cpu_time = false; > +uint32_t os::Linux::_os_version = 0; > const char * os::Linux::_glibc_version = NULL; > const char * os::Linux::_libpthread_version = NULL; > pthread_condattr_t os::Linux::_condattr[1]; > @@ -4332,6 +4333,21 @@ > return (tp.tv_sec * NANOSECS_PER_SEC) + tp.tv_nsec; > } > +void os::Linux::initialize_os_info() { > + assert(_os_version == 0, "OS info already initialized"); > + > + struct utsname _uname; > + + uname(&_uname); // Not sure yet how deal if ret == -1 > + _os_version = atoi(_uname.release); > +} > + > +uint32_t os::Linux::os_version() { > + assert(_os_version != 0, "not initialized"); > + return _os_version; > +} > + > + > ///// > // glibc on Linux platform uses non-documented flag > // to indicate, that some special sort of signal > @@ -4553,6 
+4569,7 @@ > init_page_sizes((size_t) Linux::page_size()); > Linux::initialize_system_info(); > + Linux::initialize_os_info(); > // main_thread points to the aboriginal thread > Linux::_main_thread = pthread_self(); > > > diff -r 266fa9bb5297 src/os/linux/vm/os_linux.hpp > --- a/src/os/linux/vm/os_linux.hpp Thu Feb 04 16:48:39 2016 -0800 > +++ b/src/os/linux/vm/os_linux.hpp Fri Feb 12 10:59:01 2016 -0200 > @@ -55,7 +55,7 @@ > static bool _supports_fast_thread_cpu_time; > static GrowableArray* _cpu_to_node; > - > + static uint32_t _os_version; protected: > static julong _physical_memory; > @@ -198,6 +198,9 @@ > static jlong fast_thread_cpu_time(clockid_t clockid); > + static void initialize_os_info(); > + static uint32_t os_version(); + > // pthread_cond clock suppport > private: > static pthread_condattr_t _condattr[1]; > > > 23 tests are now passing: http://hastebin.com/raw/oyicagusod > > Is there a reason to let RTM disabled for Linux on ppc64le by now? Could somebody explain what is currently missing on PPC64 LE RTM implementation in order to make all RTM tests pass? > > Thank you. > > Regards, > -- > Gustavo Romero > From kirill.zhaldybin at oracle.com Tue Feb 16 15:35:04 2016 From: kirill.zhaldybin at oracle.com (Kirill Zhaldybin) Date: Tue, 16 Feb 2016 18:35:04 +0300 Subject: RFR(XS): 8149780: GatherProcessInfoTimeoutHandler shouldn't call getWin32Pid if the lib isn't load Message-ID: <56C341A8.2050903@oracle.com> Dear all, Could you please review this small fix for 8149780 which adds correct handling of situation when the lib is not loaded? WebRev: http://cr.openjdk.java.net/~kzhaldyb/webrevs/JDK-8149780/webrev.00/ CR: https://bugs.openjdk.java.net/browse/JDK-8149780 Thank you. Regards, Kirill From kirill.zhaldybin at oracle.com Tue Feb 16 15:35:16 2016 From: kirill.zhaldybin at oracle.com (Kirill Zhaldybin) Date: Tue, 16 Feb 2016 18:35:16 +0300 Subject: RFR(XS): 8146287: typos in /test/failure_handler Message-ID: <56C341B4.2090801@oracle.com> Dear all, Could you please review this small fix for 8146287 which fixes the typos? WebRev: http://cr.openjdk.java.net/~kzhaldyb/webrevs/JDK-8146287/webrev.01/ CR: https://bugs.openjdk.java.net/browse/JDK-8146287 Thank you. Regards, Kirill From stanislav.smirnov at oracle.com Tue Feb 16 16:12:27 2016 From: stanislav.smirnov at oracle.com (Stas Smirnov) Date: Tue, 16 Feb 2016 19:12:27 +0300 Subject: RFR(XS): 8149780: GatherProcessInfoTimeoutHandler shouldn't call getWin32Pid if the lib isn't load In-Reply-To: <56C341A8.2050903@oracle.com> References: <56C341A8.2050903@oracle.com> Message-ID: <56C34A6B.3070905@oracle.com> Hi Kirill, looks good On 16/02/16 18:35, Kirill Zhaldybin wrote: > Dear all, > > Could you please review this small fix for 8149780 which adds correct > handling of situation when the lib is not loaded? > > WebRev: > http://cr.openjdk.java.net/~kzhaldyb/webrevs/JDK-8149780/webrev.00/ > > CR: https://bugs.openjdk.java.net/browse/JDK-8149780 > > Thank you. > > Regards, Kirill -- Best regards, Stanislav From stanislav.smirnov at oracle.com Tue Feb 16 16:17:57 2016 From: stanislav.smirnov at oracle.com (Stas Smirnov) Date: Tue, 16 Feb 2016 19:17:57 +0300 Subject: RFR(XS): 8146287: typos in /test/failure_handler In-Reply-To: <56C341B4.2090801@oracle.com> References: <56C341B4.2090801@oracle.com> Message-ID: <56C34BB5.4010805@oracle.com> Hi Kirill, looks good On 16/02/16 18:35, Kirill Zhaldybin wrote: > Dear all, > > Could you please review this small fix for 8146287 which fixes the typos? 
> > WebRev: > http://cr.openjdk.java.net/~kzhaldyb/webrevs/JDK-8146287/webrev.01/ > > CR: https://bugs.openjdk.java.net/browse/JDK-8146287 > > Thank you. > > Regards, Kirill -- Best regards, Stanislav From marcus.larsson at oracle.com Tue Feb 16 16:32:14 2016 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Tue, 16 Feb 2016 17:32:14 +0100 Subject: RFR: 8145934: Make ttyLocker equivalent for Unified Logging framework In-Reply-To: <56BCA8C9.102@oracle.com> References: <56BB3FD0.5000104@oracle.com> <3910DA9B-43C9-4C1A-8FD0-993A54225550@oracle.com> <56BCA8C9.102@oracle.com> Message-ID: <56C34F0E.4090803@oracle.com> Hi again, Alternative version where a LogMessage automatically writes its messages when it goes out of scope: http://cr.openjdk.java.net/~mlarsson/8145934/webrev.alt/ I still prefer the first patch though, where messages are neither tied to a particular log, nor automatically written when they go out of scope. Like I've said, the explicit write line makes it easier to read the code. For comparison I've updated the first suggestion with the guarantee for unwritten messages, as well as cleaning it up a bit by moving the implementation to the .cpp rather than the .hpp. Full webrev: http://cr.openjdk.java.net/~mlarsson/8145934/webrev.01/ Incremental: http://cr.openjdk.java.net/~mlarsson/8145934/webrev.00-01/ Let me know what you think. Thanks, Marcus On 02/11/2016 04:29 PM, Marcus Larsson wrote: > Hi, > > On 02/10/2016 11:43 PM, John Rose wrote: >> Thanks for taking this on. > > Thanks for looking at it! > >> >> To be an adequate substitute for ttyLocker it needs to >> support block structure via the RAII pattern. Otherwise >> the use cases are verbose enough to be a burden on >> programmers. >> >> This is easy, I think: Give LogMessage a constructor >> which takes a reference to the corresponding LogHandle. >> Have the LogMessage destructor call log.write(*this). > > Having automatic writing of the messages when they go out of scope > would be convenient I guess. The idea with the current and more > verbose API is to make it clear that the log message is just some > in-memory buffer that can be used to prepare a multi-part message, > which is sent explicitly to the intended log when ready. It makes it > very obvious how the different components interact, at the cost of > perhaps unnecessary verbosity. Having it automatically written makes > it less verbose, but also a bit cryptic, IMHO. > >> >> (BTW, as written it allows accidentally dropped writes, >> which is bad: We'll never find all those bugs. That's >> the burden of a rough-edged API, especially when it is >> turned off most of the time.) > > We could add an assert/guarantee in an attempt to prevent this. > >> >> If necessary or for flexibility, allow the LogMessage >> constructor an optional boolean to say "don't write >> automatically". Also, allow a "reset" method to >> cancel any buffered writing. So the default is to >> perform the write at the end of the block (if there >> is anything to write), but it can be turned off >> explicitly. >> >> Giving the LogMessage a clear linkage to a LogHandle >> allows the LogMessage to be a simple delegate for >> the LogHandle itself. This allows the user to ignore >> the LogHandle and work with the LogMessage as >> if it were the LogHandle. That seems preferable >> to requiring split attention to both objects. 
>> >> Given this simplification, the name LogMessage >> could be changed to BufferedLogHandle, LogBuffer, >> ScopedLog, etc., to emphasize that the thing is >> really a channel to some log, but with an extra >> bit of buffering to control. > > I still think the LogMessage name makes sense. BufferedLogHandle and > the likes give the impression that it's a LogHandle with some internal > buffering for the sake of performance, which actually the opposite of > it's intention. This class should only be used when it is important > that the multi-line message isn't interleaved by other messages. I > still expect the majority of the logging throughout the VM to still > use the regular (and faster) LogHandle and/or log macros. > >> >> To amend your example use case: >> >> // example buffered log messages (proposed) >> LogHandle(logging) log; >> if (log.is_debug()) { >> ResourceMark rm; >> LogMessage msg; >> msg.debug("debug message"); >> msg.trace("additional trace information"); >> log.write(msg); >> } >> >> Either this: >> >> // example buffered log messages (amended #1) >> LogHandle(logging) log; >> if (log.is_debug()) { >> ResourceMark rm; >> LogBuffer buf(log); >> buf.debug("debug message"); >> buf.trace("additional trace information"); >> } >> >> Or this: >> >> // example buffered log messages (amended #2) >> { LogBuffer(logging) log; >> if (log.is_debug()) { >> ResourceMark rm; >> log.debug("debug message"); >> log.trace("additional trace information"); >> } >> } >> >> The second is probably preferable, since it encourages the >> logging logic to be modularized into a single block, and >> because it reduces the changes for error that might occur >> from having two similar names (log/msg or log/buf). > > The second case is more compact, which is always a good thing when it > comes to logging. For the more involved scenarios where there are > multiple messages being sent, I usually assume (perhaps incorrectly) > that a LogHandle is used throughout the scope of such > scenarios/functions, for the sake of compactness and consistency (not > having to specify log tags in more than one place). In those cases > there would already be a LogHandle that could be used for testing > levels and such. With messages tied to a particular output like you > suggest, it does however make sense to allow level testing functions > on the message instances as well. > > I'll prepare another patch with your suggestions and we'll see how it > turns out. > > Thanks, > Marcus > >> >> The second usage requires the LogBuffer constructor >> to be lazy: It must delay internal memory allocation >> until the first output operation. >> >> ? John > From sgehwolf at redhat.com Tue Feb 16 17:47:07 2016 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Tue, 16 Feb 2016 18:47:07 +0100 Subject: RFR(S): 8143245: Zero build requires disabled warnings Message-ID: <1455644827.4680.17.camel@redhat.com> Hi, Could somebody please review and sponsor this Zero-only change. The hotspot build for Zero had some compiler warnings disabled for no good reason. I've fixed the code so the silencing isn't necessary any more. Bug:?https://bugs.openjdk.java.net/browse/JDK-8143245 webrev:?http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8143245/webrev.01/ Thoughts? 
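(For readers not following the webrev: the idea is to repair the code so the warnings can stay enabled, rather than building with -Wno-... flags. A purely hypothetical illustration of that pattern, not taken from the actual change, using -Wswitch:)

  // Hypothetical example only: covering every enum value lets -Wswitch stay
  // enabled instead of being silenced for the whole file.
  enum BarrierKind { BARRIER_NONE, BARRIER_ACQUIRE, BARRIER_RELEASE };

  const char* barrier_name(BarrierKind k) {
    switch (k) {
    case BARRIER_NONE:    return "none";
    case BARRIER_ACQUIRE: return "acquire";
    case BARRIER_RELEASE: return "release";
    }
    return "unknown";  // unreachable, but keeps -Wreturn-type quiet as well
  }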
Thanks, Severin From gromero at linux.vnet.ibm.com Fri Feb 12 13:45:19 2016 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Fri, 12 Feb 2016 11:45:19 -0200 Subject: RTM disabled for Linux on PPC64 LE Message-ID: <56BDE1EF.1020305@linux.vnet.ibm.com> Hi, As of now (tip 1922:be58b02c11f9, jdk9/jdk9 repo) Hotspot build for Linux on ppc64le of fails due to a simple uninitialized variable error: hotspot/src/share/vm/ci/ciMethodData.hpp:585:100: error: ?data? may be used uninitialized in this function hotspot/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp:2408:78: error: ?md? may be used uninitialized in this function So this straightforward patch solves the issue: diff -r 534c50395957 src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp --- a/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Thu Jan 28 15:42:23 2016 -0800 +++ b/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Mon Feb 08 17:13:14 2016 -0200 @@ -2321,8 +2321,8 @@ if (reg_conflict) { obj = dst; } } - ciMethodData* md; - ciProfileData* data; + ciMethodData* md = NULL; + ciProfileData* data = NULL; int mdo_offset_bias = 0; compiler/rtm if (should_profile) { ciMethod* method = op->profiled_method(); However, after the build, I realized that RTM is still disabled for Linux on ppc64le, failing 25 tests on compiler/rtm suite: http://hastebin.com/raw/ohoxiwaqih Hence after applying the following patches that enable RTM for Linux on ppc64le: diff -r 266fa9bb5297 src/cpu/ppc/vm/vm_version_ppc.cpp --- a/src/cpu/ppc/vm/vm_version_ppc.cpp Thu Feb 04 16:48:39 2016 -0800 +++ b/src/cpu/ppc/vm/vm_version_ppc.cpp Fri Feb 12 10:55:46 2016 -0200 @@ -255,7 +255,9 @@ } #endif #ifdef linux - // TODO: check kernel version (we currently have too old versions only) + if (os::Linux::os_version() >= 4) { // at least Linux kernel version 4 + os_too_old = false; + } #endif if (os_too_old) { vm_exit_during_initialization("RTM is not supported on this OS version."); diff -r 266fa9bb5297 src/os/linux/vm/os_linux.cpp --- a/src/os/linux/vm/os_linux.cpp Thu Feb 04 16:48:39 2016 -0800 +++ b/src/os/linux/vm/os_linux.cpp Fri Feb 12 10:58:10 2016 -0200 @@ -135,6 +135,7 @@ int os::Linux::_page_size = -1; const int os::Linux::_vm_default_page_size = (8 * K); bool os::Linux::_supports_fast_thread_cpu_time = false; +uint32_t os::Linux::_os_version = 0; const char * os::Linux::_glibc_version = NULL; const char * os::Linux::_libpthread_version = NULL; pthread_condattr_t os::Linux::_condattr[1]; @@ -4332,6 +4333,21 @@ return (tp.tv_sec * NANOSECS_PER_SEC) + tp.tv_nsec; } +void os::Linux::initialize_os_info() { + assert(_os_version == 0, "OS info already initialized"); + + struct utsname _uname; + + uname(&_uname); // Not sure yet how deal if ret == -1 + _os_version = atoi(_uname.release); +} + +uint32_t os::Linux::os_version() { + assert(_os_version != 0, "not initialized"); + return _os_version; +} + + ///// // glibc on Linux platform uses non-documented flag // to indicate, that some special sort of signal @@ -4553,6 +4569,7 @@ init_page_sizes((size_t) Linux::page_size()); Linux::initialize_system_info(); + Linux::initialize_os_info(); // main_thread points to the aboriginal thread Linux::_main_thread = pthread_self(); diff -r 266fa9bb5297 src/os/linux/vm/os_linux.hpp --- a/src/os/linux/vm/os_linux.hpp Thu Feb 04 16:48:39 2016 -0800 +++ b/src/os/linux/vm/os_linux.hpp Fri Feb 12 10:59:01 2016 -0200 @@ -55,7 +55,7 @@ static bool _supports_fast_thread_cpu_time; static GrowableArray* _cpu_to_node; - + static uint32_t _os_version; protected: static julong _physical_memory; @@ -198,6 +198,9 @@ static jlong 
fast_thread_cpu_time(clockid_t clockid); + static void initialize_os_info(); + static uint32_t os_version(); + // pthread_cond clock suppport private: static pthread_condattr_t _condattr[1]; 23 tests are now passing: http://hastebin.com/raw/oyicagusod Is there a reason to let RTM disabled for Linux on ppc64le by now? Could somebody explain what is currently missing on PPC64 LE RTM implementation in order to make all RTM tests pass? Thank you. Regards, -- Gustavo Romero From adinn at redhat.com Tue Feb 16 21:29:04 2016 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 16 Feb 2016 21:29:04 +0000 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C1DD24.2060503@redhat.com> References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> <56C1D764.6020100@oracle.com> <56C1DD24.2060503@redhat.com> Message-ID: <56C394A0.2010606@redhat.com> Hi Aleksey, On 15/02/16 14:13, Andrew Dinn wrote: > It would be very good to run your Unsafe tests. However, they may > still succeed but minus the desired optimizations. So, I'll apply the > patch to my tree and check the generated code by eyeball. I ran the tests with Roland's patch and yours combined and they all passed but I am afraid that's not the full story. The AArch64 optimizations to use stlr for a volatile put and ldar for a volatile get were not performed (the CAS optimzation is still working). What is more the change you have made is actually causing invalid code to be generated for the get operation. This does not relate to Roland's patch. The same result will happen with the code as it was prior to Roland's patch. >> The changes are supposed to generate the same code for old Unsafe >> methods -- the refactoring shuffles the compiler code around, but >> the sequence of accesses/barriers should stay the same. Eyeballing >> x86_64 assembly indeed shows it is the same, but I haven't looked >> beyond x86. Unfortunately, the graphs are not quite the same and that affects the generated code on AArch64 even though it has no visible effect on x86. The critical difference is that for volatile puts and gets you have omitted to insert the MemBarRelease and MemBarAcquire nodes. Note that, unlike x86, AArch64 relies on these barriers to control whether or not memory barrier instructions are inserted. Thats true whether or not the optimization rules are used (by setting -X::+/-UseBarriersForVolatile). If the optimization to use stlr and ldar is switched off (UseBarriersForVolatile=true) then MemBarRelease and MemBarAcquire are translated to the dmb instructions which enforce memory synchronzation. If the optimization is switched on then they are must still be present in order to detect cases where the dmb can be elided and an stlr or ldar used instead. So, unfortunately, your change breaks AArch64 with or without the ldar/stlr optimization scheme. Details of what is wrong and a potential fix are included below. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in UK and Wales under Company Registration No. 
3798903 Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Paul Argiry (US) Your changes to the graph layout are as follows: Ignoring GC barriers, the old code generated the following subgraph for an inlined Unsafe.putObjectVolatile A) MemBarRelease MemBarCPUOrder StoreX[mo_release] MemBarVolatile Your new code generates this subgraph B) MemBarCPUOrder StoreX[mo_release] MemBarVolatile Subgraph A is consistent with the following subgraph generated from a Java source level assignment of a volatile field C) MemBarRelease StoreX[mo_release] MemBarVolatile whereas subgraph B is not consistent with C Similarly, for an inlined volatile get the old code generated this subgraph A) MemBarCPUOrder || \\ MemBarAcquire LoadX[mo_acquire] MemBarCPUOrder (the double bars represent control and memory links) Your new code generates the following subgraph B) MemBarCPUOrder || \\ MemBarCPUOrder LoadX[mo_acquire] By contrast the subgraph generated from a Java source level read of a volatile field is C) LoadX[mo_acquire] MemBarAcquire or the minor variant D) LoadX[mo_acquire] DecodeN MemBarAcquire The first change explains why the put tests pass but the optimization fails. The AArch64 predicates which optimize volatile puts look for a StoreX bracketed by a MemBarRelease and MemBarVolatile (they ignore any intervening MemBarCPUOrder). Without the optimization this gets translated to dmb ish {Release} str dmb ish {Volatile} With the optimization generation of both dmbs is inhibited and it gets translated to stlr With your patch the optimizaton fails to apply and the result is str dmb ish {Volatile} which just happens to work but only because the non-optimized code is less than optimal -- in the absence of knowledge as to where the Release or Volatile membars have come from it has to generate two dmb instructions for a volatile put. Now, I could probably tweak the put optimization to recognize your put subgraph B. However, I'm not at all sure that would be valid. I believe that the presence of the MemBarRelease or MemBarAcquire is important because there may potentially be other circumstances where a subgraph of type B might arise. One might simply assume that put subgraph B is fine i.e. the presence of a MemBarCPUOrder and MemBarVolatile bracketing a releasing store is all that is needed to say "yes this is a volatile put". Well, the first gotcha is that the is_release() test will actually return true for certain stores to non-volatile fields. Puts in constructors can be marked as releasing. The second gotcha is that a CPUOrder membar might just happen to turn up as the tail of some other subgraph and a Volatile membar might also appear as head of a third subgraph. So, if they both happened to appear either side of the store you have a false positive for a volatile put. With the get subgraph the problem is more serious. I'm not sure why the tests pass since a required dmb instruction is missing. If my get optimization is switched off both graphs A and C (also variant D) get translated to ldr dmb ish The load is generated as a normal ldr and the Acquire membar is converted to a dmb ish without attempting to see if it can be elided. If the get optimization is enabled the predicate for the LoadX rules detects subgraph A or C (or D) and generates ldar. Similarly, the predicate for the MemBarAcquire rule detects these same 3 subgraphs and elides the dmb. The problem with your subgraph B is that it contains no MemBarAcquire. 
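To make the hazard concrete outside of the IR, here is the same ordering requirement written with plain C++11 atomics (a standalone illustration, not HotSpot code; the instruction mapping noted in the comments is the usual AArch64 one):

  #include <atomic>

  std::atomic<int> flag(0);
  int data = 0;

  void writer() {
    data = 42;                                  // ordinary store
    flag.store(1, std::memory_order_release);   // needs release: stlr, or dmb ish + str
  }

  int reader() {
    if (flag.load(std::memory_order_acquire)) { // needs acquire: ldar, or ldr + dmb ish
      return data;                              // acquire guarantees 42 is visible here
    }
    return -1;
  }

  // If the acquire load degrades to a plain ldr with no trailing dmb -- which is
  // exactly what happens when the MemBarAcquire node is missing -- the read of
  // 'data' may be satisfied before the flag check and return stale contents.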
So the LoadX translates to ldr but nothing triggers the MemBarAcquire rule and no dmb ish is emitted. We end up with a plain ldr which does not enforce the required memory ordering. If you think there is a good reason to change the graph shape the way you have done then I'd be interested to understand your reasoning. It seems reasonable to me for each of the missing memory barriers to be present. I would have said that a volatile put should be releasing and a volatile get should be acquiring so the generated subgraphs should include a MemBarRelease and MemBarAcquire, respectively. If this was just an oversight then the small patch below, when applied over your posted patch, restores both missing barriers (I added 16 lines of context to locate it for you) and also restores the AArch64 optimization. n.b. luckily (or so it would seem given my current possibly misguided understanding of your plans) it turns out that the leading MemBarRelease is not critical as an element of the CAS signature. That's because a CAS is only ever generated by inlining. So, we can be sure that any leading CPUOrder membar (or indeed Release membar if that occurs instead) is only there because it was put there by the inliner. I think that's important for implementing the various flavours of CAS. If we want a CAS which omits either the Release or Acquire semantics then we should be able to omit the leading Release MemBar or tariling Acquire membar (so long as we retain the CPUOrder membars). I'm hoping this fact is going to allow us to implement the optimized CAS variants on AArch64 at least. ----- 8< -------- 8< -------- 8< -------- 8< -------- 8< -------- 8< --- diff -r e51ab854285b src/share/vm/opto/library_call.cpp --- a/src/share/vm/opto/library_call.cpp Tue Feb 16 06:47:18 2016 -0500 +++ b/src/share/vm/opto/library_call.cpp Tue Feb 16 12:46:37 2016 -0500 @@ -2498,32 +2498,35 @@ // We need to emit leading and trailing CPU membars (see below) in // addition to memory membars for special access modes. This is a little // too strong, but avoids the need to insert per-alias-type // volatile membars (for stores; compare Parse::do_put_xxx), which // we cannot do effectively here because we probably only have a // rough approximation of type. switch(kind) { case Relaxed: case Opaque: case Acquire: break; case Release: insert_mem_bar(Op_MemBarRelease); break; case Volatile: + if (is_store) { + insert_mem_bar(Op_MemBarRelease); + } if (!is_store && support_IRIW_for_not_multiple_copy_atomic_cpu) { insert_mem_bar(Op_MemBarVolatile); } break; default: ShouldNotReachHere(); } // Memory barrier to prevent normal and 'unsafe' accesses from // bypassing each other. Happens after null checks, so the // exception paths do not take memory state from the memory barrier, // so there's no problems making a strong assert about mixing users // of safe & unsafe memory. if (need_mem_bar) insert_mem_bar(Op_MemBarCPUOrder); assert(alias_type->adr_type() == TypeRawPtr::BOTTOM || alias_type->adr_type() == TypeOopPtr::BOTTOM || @@ -2636,32 +2639,35 @@ // Final sync IdealKit and GraphKit. 
final_sync(ideal); #undef __ } } } switch(kind) { case Relaxed: case Opaque: case Release: break; case Acquire: insert_mem_bar(Op_MemBarAcquire); break; case Volatile: + if (!is_store) { + insert_mem_bar(Op_MemBarAcquire); + } if (is_store && !support_IRIW_for_not_multiple_copy_atomic_cpu) { insert_mem_bar(Op_MemBarVolatile); } break; default: ShouldNotReachHere(); } if (need_mem_bar) insert_mem_bar(Op_MemBarCPUOrder); return true; } //----------------------------inline_unsafe_load_store---------------------------- // This method serves a couple of different customers (depending on LoadStoreKind): // From john.r.rose at oracle.com Tue Feb 16 23:19:58 2016 From: john.r.rose at oracle.com (John Rose) Date: Tue, 16 Feb 2016 15:19:58 -0800 Subject: RFR: 8145934: Make ttyLocker equivalent for Unified Logging framework In-Reply-To: <56C34F0E.4090803@oracle.com> References: <56BB3FD0.5000104@oracle.com> <3910DA9B-43C9-4C1A-8FD0-993A54225550@oracle.com> <56BCA8C9.102@oracle.com> <56C34F0E.4090803@oracle.com> Message-ID: <90DC33E3-F597-40E4-A317-6C92F4969575@oracle.com> On Feb 16, 2016, at 8:32 AM, Marcus Larsson wrote: > > Alternative version where a LogMessage automatically writes its messages when it goes out of scope: > http://cr.openjdk.java.net/~mlarsson/8145934/webrev.alt/ I like this, with the LogMessageBuffer that does the heavy work, and the [Scoped]LogMessage which is the simplest way to use it. The LogMessageBuffer should have a neutral unallocated state, for use through the LogMessage macro. I.e., is_c_allocated should be a three-state flag, including 'not allocated at all'. That way, if you create the thing only to ask 'is_debug' and get a false answer, you won't have done more than a few cycles of work. Probably the set_prefix operation should be lazy in the same way. I think the destructor should call a user-callable flush function, something like this: ~ScopedLogMessage() { flush(); } // in LogMessageBuffer: void flush() { if (_line_count > 0) { _log.write(*this); reset(); } } void reset() { _line_count = 0; _message_buffer_size = 0; } It will be rare for user code to want to either flush early or cancel pending output, but when you need it, it should be there. > I still prefer the first patch though, where messages are neither tied to a particular log, nor automatically written when they go out of scope. Like I've said, the explicit write line makes it easier to read the code. There's a tradeoff here: It's easier to read the *logging* code if all the *logging* operations are explicit. But the point of logging code is to add logging to code that is busy doing *other* operations besides logging. That's why (I assume) people have been noting that some uses of logging are "intrusive": The logging logic calls too much attention to itself, and with attention being a limited resource, it takes away attention from the actual algorithm that's being logged about. The scoped (RAII) log buffer, with automatic write, is the best way I know to reduce the intrusiveness of this auxiliary mechanism. Of course, I'm interested in finding out what your everyday customers think about it. (Rachel, Coleen, David, Dan?) > For comparison I've updated the first suggestion with the guarantee for unwritten messages, as well as cleaning it up a bit by moving the implementation to the .cpp rather than the .hpp. > Full webrev: http://cr.openjdk.java.net/~mlarsson/8145934/webrev.01/ > Incremental: http://cr.openjdk.java.net/~mlarsson/8145934/webrev.00-01/ > > Let me know what you think. 
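To make the scoped variant easier to compare against, here is a minimal standalone outline of the RAII idea sketched above -- hypothetical names only, with std::vector and fprintf standing in for the real LogMessageBuffer storage and Log::write(); it is not the code from either webrev:

  #include <cstdarg>
  #include <cstdio>
  #include <string>
  #include <vector>

  class ScopedLogBufferSketch {
    std::vector<std::string> _lines;              // nothing emitted while buffering
   public:
    void append(const char* fmt, ...) {
      char buf[512];
      va_list ap;
      va_start(ap, fmt);
      vsnprintf(buf, sizeof(buf), fmt, ap);
      va_end(ap);
      _lines.push_back(buf);                      // buffered only; not written yet
    }
    void flush() {                                // user-callable early write
      for (size_t i = 0; i < _lines.size(); i++) {
        fprintf(stderr, "%s\n", _lines[i].c_str()); // stands in for log.write(*this)
      }
      reset();
    }
    void reset() { _lines.clear(); }              // cancel any pending output
    ~ScopedLogBufferSketch() { flush(); }         // RAII: whatever is left goes out here
  };

  void example() {
    ScopedLogBufferSketch msg;
    msg.append("debug message");
    msg.append("additional trace information");
  }                                               // both lines written together at scope exit

The paragraphs that follow weigh this scoped form against the explicit-write variant from webrev.01 quoted just above.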
That option is more intrusive than the RAII buffered log alias. Separately, the review thread on JDK-8149383 shows a use for LogMessageBuffer to collect a complex log message. The log message can then be sent down one of two log streams. Something like: if (need_to_log) { ResourceMark rm; LogMessageBuffer buf; buf.write("Revoking bias of object " INTPTR_FORMAT " , mark " INTPTR_FORMAT " , type %s , prototype header " INTPTR_FORMAT " , allow rebias %d , requesting thread " INTPTR_FORMAT, p2i((void *)obj), (intptr_t) mark, obj->klass()->external_name(), (intptr_t) obj->klass()->prototype_header(), (allow_rebias ? 1 : 0), (intptr_t) requesting_thread); if (!is_bulk) log_info(biasedlocking).write(buf); else log_trace(biasedlocking).write(buf); } It is important here (like you pointed out) that the LogMessageBuffer is decoupled from log levels and streams, so that it can be used as a flexible component of logic like this. But the commonest usage should (IMO) be supported by a scoped auto-writing log alias. ? John From john.r.rose at oracle.com Wed Feb 17 03:30:47 2016 From: john.r.rose at oracle.com (John Rose) Date: Tue, 16 Feb 2016 19:30:47 -0800 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C1C2ED.10702@oracle.com> References: <56C1C2ED.10702@oracle.com> Message-ID: <3175C589-834C-473B-89BD-8AAD9CBC094B@oracle.com> On Feb 15, 2016, at 4:22 AM, Aleksey Shipilev wrote: > > c) unsafe.cpp gets the basic native method implementations. Most new > operations are folded to their volatile (the strongest) counterparts, > hoping that compilers would intrinsify them into more performant versions. A simpler way to accomplish this would be to give the folded API points non-native method bodies, redirecting to whatever native they are folded to. This will move most of the folding choices up to Java code. A fair amount of movement in unsafe.cpp will disappear. The non-native folded methods would still be marked @HSIC and be optimized accordingly. ? John From david.holmes at oracle.com Wed Feb 17 08:38:36 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 17 Feb 2016 18:38:36 +1000 Subject: RFR(S): 8143245: Zero build requires disabled warnings In-Reply-To: <1455644827.4680.17.camel@redhat.com> References: <1455644827.4680.17.camel@redhat.com> Message-ID: <56C4318C.7050309@oracle.com> Hi Severin, On 17/02/2016 3:47 AM, Severin Gehwolf wrote: > Hi, > > Could somebody please review and sponsor this Zero-only change. The > hotspot build for Zero had some compiler warnings disabled for no good > reason. I've fixed the code so the silencing isn't necessary any more. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8143245 > webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8143245/webrev.01/ > > Thoughts? This seems okay to me. One minor nit in os_linux_zero.cpp, SpinPause has an indent of 4 instead of 2. :) Is there a specific Zero reviewer you want to approve this? 
David ----- > Thanks, > Severin > From aleksey.shipilev at oracle.com Wed Feb 17 11:24:58 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 17 Feb 2016 14:24:58 +0300 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C394A0.2010606@redhat.com> References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> <56C1D764.6020100@oracle.com> <56C1DD24.2060503@redhat.com> <56C394A0.2010606@redhat.com> Message-ID: <56C4588A.4070505@oracle.com> On 02/17/2016 12:29 AM, Andrew Dinn wrote: > On 15/02/16 14:13, Andrew Dinn wrote: >>> The changes are supposed to generate the same code for old Unsafe >>> methods -- the refactoring shuffles the compiler code around, but >>> the sequence of accesses/barriers should stay the same. Eyeballing >>> x86_64 assembly indeed shows it is the same, but I haven't looked >>> beyond x86. > > Unfortunately, the graphs are not quite the same and that affects the > generated code on AArch64 even though it has no visible effect on x86. > The critical difference is that for volatile puts and gets you have > omitted to insert the MemBarRelease and MemBarAcquire nodes. Dang. You are right, I have mistranslated the original code. Thanks for catching this one! New version that includes a variant of your fix, and also trims down on Unsafe changes, as John suggested in a separate thread: http://cr.openjdk.java.net/~shade/8148146/webrev.hs.02/ http://cr.openjdk.java.net/~shade/8148146/webrev.jdk.02/ This version still passes JPRT, microbenchmark results are fine. I am respinning other tests to see if anything is broken. Cheers, -Aleksey P.S. Andrew, if you have before/after builds for AArch64 and a suitable physical rig, you might be interested to run Unsafe benchmarks (this is a JMH runnable JAR): http://cr.openjdk.java.net/~shade/varhandles/unsafe-bench.jar From adinn at redhat.com Wed Feb 17 11:30:02 2016 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 17 Feb 2016 11:30:02 +0000 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C4588A.4070505@oracle.com> References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> <56C1D764.6020100@oracle.com> <56C1DD24.2060503@redhat.com> <56C394A0.2010606@redhat.com> <56C4588A.4070505@oracle.com> Message-ID: <56C459BA.20509@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 17/02/16 11:24, Aleksey Shipilev wrote: > On 02/17/2016 12:29 AM, Andrew Dinn wrote: >> Unfortunately, the graphs are not quite the same and that >> affects the generated code on AArch64 even though it has no >> visible effect on x86. The critical difference is that for >> volatile puts and gets you have omitted to insert the >> MemBarRelease and MemBarAcquire nodes. > > Dang. You are right, I have mistranslated the original code. > Thanks for catching this one! > > New version that includes a variant of your fix, and also trims > down on Unsafe changes, as John suggested in a separate thread: > http://cr.openjdk.java.net/~shade/8148146/webrev.hs.02/ > http://cr.openjdk.java.net/~shade/8148146/webrev.jdk.02/ > > This version still passes JPRT, microbenchmark results are fine. I > am respinning other tests to see if anything is broken. Thanks, Aleksey. I'll rerun with this new patch and report back. > P.S. 
Andrew, if you have before/after builds for AArch64 and a > suitable physical rig, you might be interested to run Unsafe > benchmarks (this is a JMH runnable JAR): > http://cr.openjdk.java.net/~shade/varhandles/unsafe-bench.jar Sure, I'll do that too. regards, Andrew Dinn - ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Paul Argiry (US) -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBCAAGBQJWxFmwAAoJEGnaNq4xxcSzJgwH/1lGXyxT922uAybjNkv7nhox FbjH/yZndlTlEEq7LkWVISolHg2nudvD65d0PBPMuBU90jRayhLrF3rHCjJy7k6M aD5Elk/rje/p88ZD54sPwfDGJdVGFXQCAx0u/eV6jOhbvGNARlH0EnVtnsV1ADpC BVOsfKOMb55PJWWslaZ9p0vL4zUHDanbcQgUWBpFMg44CDyln4UmyY/xRtPNbWKQ lG7wYc0XAwmG4EKLFDRa2N6PmKSfwubXDOToXR03R/tU5orhWVm7wq+C0FZewodN +91GQvIPlQS4iQ/TZpjX8jAgowFBNxHHAerv1LJcZC7Ji7zGmJnG/Dn5NDKutOw= =oLN2 -----END PGP SIGNATURE----- From adinn at redhat.com Wed Feb 17 12:50:26 2016 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 17 Feb 2016 12:50:26 +0000 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C459BA.20509@redhat.com> References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> <56C1D764.6020100@oracle.com> <56C1DD24.2060503@redhat.com> <56C394A0.2010606@redhat.com> <56C4588A.4070505@oracle.com> <56C459BA.20509@redhat.com> Message-ID: <56C46C92.9050609@redhat.com> On 17/02/16 11:30, Andrew Dinn wrote: >> This version still passes JPRT, microbenchmark results are fine. >> I am respinning other tests to see if anything is broken. > > Thanks, Aleksey. I'll rerun with this new patch and report back. This does indeed now generate the correct code on AArch64. As far as I am concerned the patches is fine to ship (with regard to the AArch64 port, that is -- not an official JDK9 review). regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Paul Argiry (US) From kirill.zhaldybin at oracle.com Wed Feb 17 14:01:58 2016 From: kirill.zhaldybin at oracle.com (Kirill Zhaldybin) Date: Wed, 17 Feb 2016 17:01:58 +0300 Subject: RFR(XS): 8146287: typos in /test/failure_handler In-Reply-To: <56C34BB5.4010805@oracle.com> References: <56C341B4.2090801@oracle.com> <56C34BB5.4010805@oracle.com> Message-ID: <56C47D56.4070808@oracle.com> Stanislav, Thank you. Regards, Kirill On 16.02.2016 19:17, Stas Smirnov wrote: > Hi Kirill, > > looks good > > On 16/02/16 18:35, Kirill Zhaldybin wrote: >> Dear all, >> >> Could you please review this small fix for 8146287 which fixes the typos? >> >> WebRev: >> http://cr.openjdk.java.net/~kzhaldyb/webrevs/JDK-8146287/webrev.01/ >> >> CR: https://bugs.openjdk.java.net/browse/JDK-8146287 >> >> Thank you. >> >> Regards, Kirill > From kirill.zhaldybin at oracle.com Wed Feb 17 14:02:21 2016 From: kirill.zhaldybin at oracle.com (Kirill Zhaldybin) Date: Wed, 17 Feb 2016 17:02:21 +0300 Subject: RFR(XS): 8149780: GatherProcessInfoTimeoutHandler shouldn't call getWin32Pid if the lib isn't load In-Reply-To: <56C34A6B.3070905@oracle.com> References: <56C341A8.2050903@oracle.com> <56C34A6B.3070905@oracle.com> Message-ID: <56C47D6D.60608@oracle.com> Stanislav, Thank you. 
Regards, Kirill On 16.02.2016 19:12, Stas Smirnov wrote: > Hi Kirill, > > looks good > > On 16/02/16 18:35, Kirill Zhaldybin wrote: >> Dear all, >> >> Could you please review this small fix for 8149780 which adds correct >> handling of situation when the lib is not loaded? >> >> WebRev: >> http://cr.openjdk.java.net/~kzhaldyb/webrevs/JDK-8149780/webrev.00/ >> >> CR: https://bugs.openjdk.java.net/browse/JDK-8149780 >> >> Thank you. >> >> Regards, Kirill > From marcelino.rodriguez-cancio at irisa.fr Wed Feb 17 14:21:52 2016 From: marcelino.rodriguez-cancio at irisa.fr (Marcelino Rodriguez cancio) Date: Wed, 17 Feb 2016 15:21:52 +0100 (CET) Subject: Debuggin Hotspot in Linux Ubuntu 14 using Netbeans In-Reply-To: <1973406027.4969153.1455717592978.JavaMail.zimbra@irisa.fr> Message-ID: <1226902439.4976132.1455718912918.JavaMail.zimbra@irisa.fr> Hello all, I'm trying to debug the with OpenJDK9 Hotspot in Linux Ubuntu 14 using Netbeans. I want to observe the behavior of the C++ code for some loop optimizations to learn how they work so then I can go ahead and implement some experimental ideas we have in the team. I'm not debugging generated bytecode. My problem is that the gdb debugger, which I launch from the IDE, don't stop in certain breakpoints, specifically in : /9dev/hotspot/src/share/vm/opto/loopPredicate.cpp/loopPredicate.cpp:909 I'm able to launch the compiler and stop in: /9dev/jdk/src/java.base/share/native/libjli/java.cjava.c:1138, for example. I believe this happens because gdb does not attach to the VM thread, which seems to me is a child thread, only the parent launched from the Java executable (images/jdk/java). I'm total newby to gdb and debugging multiththread C++ applications, so I would like to debug from the IDE if possible. Any hints (books, articles, talks, etc), leads or advice on this? My SETUP is as Follows: I can see that the code containing the breakpoint is executed looking at the trace of the Hotspot (I'm using -XX:+TraceLoopPredicate ): rc_predicate init This is a bug due to the abuse of default arguments in C++. I, ah, forgot to pass dest_uninitialized to the OOP arraycopy routines, so we always scan the destination array, even though it contains garbage. I also took the opportunity to do a little tidying-up. http://cr.openjdk.java.net/~aph/8150045/ Andrew. From aph at redhat.com Wed Feb 17 14:25:24 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 17 Feb 2016 14:25:24 +0000 Subject: 8150045: AArch64: arraycopy causes segfaults in SATB during garbage collection In-Reply-To: <56C48215.3040106@redhat.com> References: <56C48215.3040106@redhat.com> Message-ID: <56C482D4.2010101@redhat.com> Sorry, I forgot to say this is AArch64-specific. On 02/17/2016 02:22 PM, Andrew Haley wrote: > This is a bug due to the abuse of default arguments in C++. I, ah, > forgot to pass dest_uninitialized to the OOP arraycopy routines, so we > always scan the destination array, even though it contains garbage. > > I also took the opportunity to do a little tidying-up. > > http://cr.openjdk.java.net/~aph/8150045/ > > Andrew. > From tobias.hartmann at oracle.com Wed Feb 17 15:09:18 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 17 Feb 2016 16:09:18 +0100 Subject: [9] RFR(XS): 8150063: Optimized build fails with "undefined reference to 'test_memset_with_concurrent_readers()'" Message-ID: <56C48D1E.9070506@oracle.com> Hi, please review the following patch. 
https://bugs.openjdk.java.net/browse/JDK-8150063 http://cr.openjdk.java.net/~thartmann/8150063/webrev.00/ JDK-8131330 added 'test_memset_with_concurrent_readers()' which is guarded by #ifdef ASSERT and therefore not available in the optimized build. Since the optimized build executes unit tests, it should be guarded by #ifndef PRODUCT instead. Thanks, Tobias From tobias.hartmann at oracle.com Wed Feb 17 15:14:44 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 17 Feb 2016 16:14:44 +0100 Subject: [9] RFR(XS): 8150063: Optimized build fails with "undefined reference to 'test_memset_with_concurrent_readers()'" In-Reply-To: <56C48D1E.9070506@oracle.com> References: <56C48D1E.9070506@oracle.com> Message-ID: <56C48E64.5030902@oracle.com> Minor correction: JDK-8131330 did not add the test but removed the NOT_DEBUG_RETURN. For consistency, we should execute the test with the optimized build as we do with other unit tests. Tobias On 17.02.2016 16:09, Tobias Hartmann wrote: > Hi, > > please review the following patch. > > https://bugs.openjdk.java.net/browse/JDK-8150063 > http://cr.openjdk.java.net/~thartmann/8150063/webrev.00/ > > JDK-8131330 added 'test_memset_with_concurrent_readers()' which is guarded by #ifdef ASSERT and therefore not available in the optimized build. Since the optimized build executes unit tests, it should be guarded by #ifndef PRODUCT instead. > > Thanks, > Tobias > From vladimir.kozlov at oracle.com Wed Feb 17 19:16:06 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 17 Feb 2016 11:16:06 -0800 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C4588A.4070505@oracle.com> References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> <56C1D764.6020100@oracle.com> <56C1DD24.2060503@redhat.com> <56C394A0.2010606@redhat.com> <56C4588A.4070505@oracle.com> Message-ID: <56C4C6F6.8040709@oracle.com> In general it looks good to me. My main question is about implementation of new functionality on other platforms. When it will be done? Yes, it works now because you have guard match_rule_supported(). But we usually do implementation on platforms at least as separate RFE. What is your plan? SAP guys should also test it on PPC64. What test/compiler/unsafe/generate-unsafe-tests.sh is for? It is not used by regression testing as far as I see. And please, push it into hs-comp for nightly testing. Thanks, Vladimir On 2/17/16 3:24 AM, Aleksey Shipilev wrote: > On 02/17/2016 12:29 AM, Andrew Dinn wrote: >> On 15/02/16 14:13, Andrew Dinn wrote: >>>> The changes are supposed to generate the same code for old Unsafe >>>> methods -- the refactoring shuffles the compiler code around, but >>>> the sequence of accesses/barriers should stay the same. Eyeballing >>>> x86_64 assembly indeed shows it is the same, but I haven't looked >>>> beyond x86. >> >> Unfortunately, the graphs are not quite the same and that affects the >> generated code on AArch64 even though it has no visible effect on x86. >> The critical difference is that for volatile puts and gets you have >> omitted to insert the MemBarRelease and MemBarAcquire nodes. > > Dang. You are right, I have mistranslated the original code. Thanks for > catching this one! 
> > New version that includes a variant of your fix, and also trims down on > Unsafe changes, as John suggested in a separate thread: > http://cr.openjdk.java.net/~shade/8148146/webrev.hs.02/ > http://cr.openjdk.java.net/~shade/8148146/webrev.jdk.02/ > > This version still passes JPRT, microbenchmark results are fine. I am > respinning other tests to see if anything is broken. > > Cheers, > -Aleksey > > P.S. Andrew, if you have before/after builds for AArch64 and a suitable > physical rig, you might be interested to run Unsafe benchmarks (this is > a JMH runnable JAR): > http://cr.openjdk.java.net/~shade/varhandles/unsafe-bench.jar > > From coleen.phillimore at oracle.com Wed Feb 17 19:39:12 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 17 Feb 2016 14:39:12 -0500 Subject: RFR(S): 8143245: Zero build requires disabled warnings In-Reply-To: <56C4318C.7050309@oracle.com> References: <1455644827.4680.17.camel@redhat.com> <56C4318C.7050309@oracle.com> Message-ID: <56C4CC60.5010206@oracle.com> Hi, this looks good. I'll test it out and sponsor it. Thanks Severin. Coleen On 2/17/16 3:38 AM, David Holmes wrote: > Hi Severin, > > On 17/02/2016 3:47 AM, Severin Gehwolf wrote: >> Hi, >> >> Could somebody please review and sponsor this Zero-only change. The >> hotspot build for Zero had some compiler warnings disabled for no good >> reason. I've fixed the code so the silencing isn't necessary any more. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8143245 >> webrev: >> http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8143245/webrev.01/ >> >> Thoughts? > > This seems okay to me. One minor nit in os_linux_zero.cpp, SpinPause > has an indent of 4 instead of 2. :) > > Is there a specific Zero reviewer you want to approve this? > > David > ----- > >> Thanks, >> Severin >> From john.r.rose at oracle.com Wed Feb 17 19:42:02 2016 From: john.r.rose at oracle.com (John Rose) Date: Wed, 17 Feb 2016 11:42:02 -0800 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <3175C589-834C-473B-89BD-8AAD9CBC094B@oracle.com> References: <56C1C2ED.10702@oracle.com> <3175C589-834C-473B-89BD-8AAD9CBC094B@oracle.com> Message-ID: <2A54ABA8-83ED-46F0-B95D-FD48F1DAB771@oracle.com> One other point: As long as you are massaging the library_call code, you could flush the *_raw versions of all the intrinsics: All we need are the normal intrinsics (GETPUTOOP), not the one-address ones (GETPUTNATIVE), as long as Unsafe class (both of them) redirects one call to the other. The *_raw (NATIVE) intrinsics should have been GC-ed a long time ago, given that the two-address version (OOP) is all anybody needs. To avoid mishaps, a @ForceInline is needed on that kind of wrapper. Mikael's forthcoming Unsafe changes are another place we could handle this cleanup. ? John > On Feb 16, 2016, at 7:30 PM, John Rose wrote: > > On Feb 15, 2016, at 4:22 AM, Aleksey Shipilev wrote: >> >> c) unsafe.cpp gets the basic native method implementations. Most new >> operations are folded to their volatile (the strongest) counterparts, >> hoping that compilers would intrinsify them into more performant versions. > > A simpler way to accomplish this would be to give the folded API points non-native method bodies, redirecting to whatever native they are folded to. > > This will move most of the folding choices up to Java code. A fair amount of movement in unsafe.cpp will disappear. 
> > The non-native folded methods would still be marked @HSIC and be optimized accordingly. > > ? John From aleksey.shipilev at oracle.com Wed Feb 17 19:42:34 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 17 Feb 2016 22:42:34 +0300 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C4C6F6.8040709@oracle.com> References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> <56C1D764.6020100@oracle.com> <56C1DD24.2060503@redhat.com> <56C394A0.2010606@redhat.com> <56C4588A.4070505@oracle.com> <56C4C6F6.8040709@oracle.com> Message-ID: <56C4CD2A.1000701@oracle.com> On 02/17/2016 10:16 PM, Vladimir Kozlov wrote: > In general it looks good to me. Thanks Vladimir! > My main question is about implementation of new functionality on > other platforms. When it will be done? Yes, it works now because you > have guard match_rule_supported(). But we usually do implementation > on platforms at least as separate RFE. What is your plan? We have multiple subtasks for AArch64, SPARC and Power under VarHandles umbrella: https://bugs.openjdk.java.net/browse/JDK-8080588 Hopefully we will address them after/concurrently-with the bulk of VarHandles changes settle into mainline. But we need to get some basic code in mainline to build on. > SAP guys should also test it on PPC64. Volker, Goetz, I would appreciate if you can give it a spin! > What test/compiler/unsafe/generate-unsafe-tests.sh is for? It is not > used by regression testing as far as I see. The script (re)generates the tests from the template, and is supposed to be run manually when test template had changed. The test/compiler/unsafe/ tests you see in the webrev were generated by that script. > And please, push it into hs-comp for nightly testing. That's was the plan, I should have said that from the beginning. Cheers, -Aleksey From vladimir.kozlov at oracle.com Wed Feb 17 19:44:18 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 17 Feb 2016 11:44:18 -0800 Subject: [9] RFR(XS): 8150063: Optimized build fails with "undefined reference to 'test_memset_with_concurrent_readers()'" In-Reply-To: <56C48D1E.9070506@oracle.com> References: <56C48D1E.9070506@oracle.com> Message-ID: <56C4CD92.10203@oracle.com> Good. thanks, Vladimir On 2/17/16 7:09 AM, Tobias Hartmann wrote: > Hi, > > please review the following patch. > > https://bugs.openjdk.java.net/browse/JDK-8150063 > http://cr.openjdk.java.net/~thartmann/8150063/webrev.00/ > > JDK-8131330 added 'test_memset_with_concurrent_readers()' which is guarded by #ifdef ASSERT and therefore not available in the optimized build. Since the optimized build executes unit tests, it should be guarded by #ifndef PRODUCT instead. > > Thanks, > Tobias > From aleksey.shipilev at oracle.com Wed Feb 17 19:46:09 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 17 Feb 2016 22:46:09 +0300 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <2A54ABA8-83ED-46F0-B95D-FD48F1DAB771@oracle.com> References: <56C1C2ED.10702@oracle.com> <3175C589-834C-473B-89BD-8AAD9CBC094B@oracle.com> <2A54ABA8-83ED-46F0-B95D-FD48F1DAB771@oracle.com> Message-ID: <56C4CE01.7060805@oracle.com> Our hands were itching to remove these "duplicate" Unsafe methods for a while now. But I think these cleanups should really be done separately to catch mistakes. I'd like to keep this particular RFR for VarHandle-specific entries only. 
Cheers, -Aleksey On 02/17/2016 10:42 PM, John Rose wrote: > One other point: As long as you are massaging the library_call code, > you could flush the *_raw versions of all the intrinsics: All we need > are the normal intrinsics (GETPUTOOP), not the one-address ones > (GETPUTNATIVE), as long as Unsafe class (both of them) redirects > one call to the other. > > The *_raw (NATIVE) intrinsics should have been GC-ed a long time ago, > given that the two-address version (OOP) is all anybody needs. > > To avoid mishaps, a @ForceInline is needed on that kind of wrapper. > > Mikael's forthcoming Unsafe changes are another place we could > handle this cleanup. > > - John > >> On Feb 16, 2016, at 7:30 PM, John Rose wrote: >> >> On Feb 15, 2016, at 4:22 AM, Aleksey Shipilev wrote: >>> >>> c) unsafe.cpp gets the basic native method implementations. Most new >>> operations are folded to their volatile (the strongest) counterparts, >>> hoping that compilers would intrinsify them into more performant versions. >> >> A simpler way to accomplish this would be to give the folded API points non-native method bodies, redirecting to whatever native they are folded to. >> >> This will move most of the folding choices up to Java code. A fair amount of movement in unsafe.cpp will disappear. >> >> The non-native folded methods would still be marked @HSIC and be optimized accordingly. >> >> - John > From vladimir.kozlov at oracle.com Wed Feb 17 19:48:00 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 17 Feb 2016 11:48:00 -0800 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C4CE01.7060805@oracle.com> References: <56C1C2ED.10702@oracle.com> <3175C589-834C-473B-89BD-8AAD9CBC094B@oracle.com> <2A54ABA8-83ED-46F0-B95D-FD48F1DAB771@oracle.com> <56C4CE01.7060805@oracle.com> Message-ID: <56C4CE70.7030203@oracle.com> I agree with Aleksey. Cleanup should be done separately; please file an RFE. These changes are already big. Thanks, Vladimir On 2/17/16 11:46 AM, Aleksey Shipilev wrote: > Our hands were itching to remove these "duplicate" Unsafe methods for a > while now. But I think these cleanups should really be done separately > to catch mistakes. I'd like to keep this particular RFR for > VarHandle-specific entries only. > > Cheers, > -Aleksey > > On 02/17/2016 10:42 PM, John Rose wrote: >> One other point: As long as you are massaging the library_call code, >> you could flush the *_raw versions of all the intrinsics: All we need >> are the normal intrinsics (GETPUTOOP), not the one-address ones >> (GETPUTNATIVE), as long as Unsafe class (both of them) redirects >> one call to the other. >> >> The *_raw (NATIVE) intrinsics should have been GC-ed a long time ago, >> given that the two-address version (OOP) is all anybody needs. >> >> To avoid mishaps, a @ForceInline is needed on that kind of wrapper. >> >> Mikael's forthcoming Unsafe changes are another place we could >> handle this cleanup. >> >> - John >> >>> On Feb 16, 2016, at 7:30 PM, John Rose wrote: >>> >>> On Feb 15, 2016, at 4:22 AM, Aleksey Shipilev wrote: >>>> >>>> c) unsafe.cpp gets the basic native method implementations. Most new >>>> operations are folded to their volatile (the strongest) counterparts, >>>> hoping that compilers would intrinsify them into more performant versions. >>> >>> A simpler way to accomplish this would be to give the folded API points non-native method bodies, redirecting to whatever native they are folded to. 
>>> >>> This will move most of the folding choices up to Java code. A fair amount of movement in unsafe.cpp will disappear. >>> >>> The non-native folded methods would still be marked @HSIC and be optimized accordingly. >>> >>> ? John >> > > From kim.barrett at oracle.com Wed Feb 17 22:44:39 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 17 Feb 2016 17:44:39 -0500 Subject: [9] RFR(XS): 8150063: Optimized build fails with "undefined reference to 'test_memset_with_concurrent_readers()'" In-Reply-To: <56C48D1E.9070506@oracle.com> References: <56C48D1E.9070506@oracle.com> Message-ID: <2EE67F16-9123-40F0-BC01-7AD970B09E67@oracle.com> > On Feb 17, 2016, at 10:09 AM, Tobias Hartmann wrote: > > Hi, > > please review the following patch. > > https://bugs.openjdk.java.net/browse/JDK-8150063 > http://cr.openjdk.java.net/~thartmann/8150063/webrev.00/ > > JDK-8131330 added 'test_memset_with_concurrent_readers()' which is guarded by #ifdef ASSERT and therefore not available in the optimized build. Since the optimized build executes unit tests, it should be guarded by #ifndef PRODUCT instead. > > Thanks, > Tobias Looks good. From volker.simonis at gmail.com Thu Feb 18 07:47:36 2016 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 18 Feb 2016 08:47:36 +0100 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: <56C4CD2A.1000701@oracle.com> References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> <56C1D764.6020100@oracle.com> <56C1DD24.2060503@redhat.com> <56C394A0.2010606@redhat.com> <56C4588A.4070505@oracle.com> <56C4C6F6.8040709@oracle.com> <56C4CD2A.1000701@oracle.com> Message-ID: On Wed, Feb 17, 2016 at 8:42 PM, Aleksey Shipilev wrote: > On 02/17/2016 10:16 PM, Vladimir Kozlov wrote: >> In general it looks good to me. > > Thanks Vladimir! > >> My main question is about implementation of new functionality on >> other platforms. When it will be done? Yes, it works now because you >> have guard match_rule_supported(). But we usually do implementation >> on platforms at least as separate RFE. What is your plan? > > We have multiple subtasks for AArch64, SPARC and Power under VarHandles > umbrella: > https://bugs.openjdk.java.net/browse/JDK-8080588 > > Hopefully we will address them after/concurrently-with the bulk of > VarHandles changes settle into mainline. But we need to get some basic > code in mainline to build on. > > >> SAP guys should also test it on PPC64. > > Volker, Goetz, I would appreciate if you can give it a spin! > Sorry, I only saw this thread yesterday. I'll start now right away with looking into it and testing it on ppc64. Regards, Volker > >> What test/compiler/unsafe/generate-unsafe-tests.sh is for? It is not >> used by regression testing as far as I see. > > The script (re)generates the tests from the template, and is supposed to > be run manually when test template had changed. The > test/compiler/unsafe/ tests you see in the webrev were generated by that > script. > >> And please, push it into hs-comp for nightly testing. > > That's was the plan, I should have said that from the beginning. 
> > Cheers, > -Aleksey > From tobias.hartmann at oracle.com Thu Feb 18 08:03:15 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 18 Feb 2016 09:03:15 +0100 Subject: [9] RFR(XS): 8150063: Optimized build fails with "undefined reference to 'test_memset_with_concurrent_readers()'" In-Reply-To: <56C4CD92.10203@oracle.com> References: <56C48D1E.9070506@oracle.com> <56C4CD92.10203@oracle.com> Message-ID: <56C57AC3.1040509@oracle.com> Thanks, Vladimir. Tobias On 17.02.2016 20:44, Vladimir Kozlov wrote: > Good. > > thanks, > Vladimir > > On 2/17/16 7:09 AM, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch. >> >> https://bugs.openjdk.java.net/browse/JDK-8150063 >> http://cr.openjdk.java.net/~thartmann/8150063/webrev.00/ >> >> JDK-8131330 added 'test_memset_with_concurrent_readers()' which is guarded by #ifdef ASSERT and therefore not available in the optimized build. Since the optimized build executes unit tests, it should be guarded by #ifndef PRODUCT instead. >> >> Thanks, >> Tobias >> From tobias.hartmann at oracle.com Thu Feb 18 08:03:24 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 18 Feb 2016 09:03:24 +0100 Subject: [9] RFR(XS): 8150063: Optimized build fails with "undefined reference to 'test_memset_with_concurrent_readers()'" In-Reply-To: <2EE67F16-9123-40F0-BC01-7AD970B09E67@oracle.com> References: <56C48D1E.9070506@oracle.com> <2EE67F16-9123-40F0-BC01-7AD970B09E67@oracle.com> Message-ID: <56C57ACC.4050600@oracle.com> Thanks, Kim. Tobias On 17.02.2016 23:44, Kim Barrett wrote: >> On Feb 17, 2016, at 10:09 AM, Tobias Hartmann wrote: >> >> Hi, >> >> please review the following patch. >> >> https://bugs.openjdk.java.net/browse/JDK-8150063 >> http://cr.openjdk.java.net/~thartmann/8150063/webrev.00/ >> >> JDK-8131330 added 'test_memset_with_concurrent_readers()' which is guarded by #ifdef ASSERT and therefore not available in the optimized build. Since the optimized build executes unit tests, it should be guarded by #ifndef PRODUCT instead. >> >> Thanks, >> Tobias > > Looks good. > From tobias.hartmann at oracle.com Thu Feb 18 08:47:46 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 18 Feb 2016 09:47:46 +0100 Subject: [9] RFR(XS): 8150063: Optimized build fails with "undefined reference to 'test_memset_with_concurrent_readers()'" In-Reply-To: <56C48D1E.9070506@oracle.com> References: <56C48D1E.9070506@oracle.com> Message-ID: <56C58532.5020608@oracle.com> Just noticed that this issue was fixed by JDK-8149141 as well. Closing this as a duplicate. Best regards, Tobias On 17.02.2016 16:09, Tobias Hartmann wrote: > Hi, > > please review the following patch. > > https://bugs.openjdk.java.net/browse/JDK-8150063 > http://cr.openjdk.java.net/~thartmann/8150063/webrev.00/ > > JDK-8131330 added 'test_memset_with_concurrent_readers()' which is guarded by #ifdef ASSERT and therefore not available in the optimized build. Since the optimized build executes unit tests, it should be guarded by #ifndef PRODUCT instead. 
> > Thanks, > Tobias > From sgehwolf at redhat.com Thu Feb 18 09:01:16 2016 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Thu, 18 Feb 2016 10:01:16 +0100 Subject: RFR(S): 8143245: Zero build requires disabled warnings In-Reply-To: <56C4CC60.5010206@oracle.com> References: <1455644827.4680.17.camel@redhat.com> <56C4318C.7050309@oracle.com> <56C4CC60.5010206@oracle.com> Message-ID: <1455786076.3626.2.camel@redhat.com> On Wed, 2016-02-17 at 14:39 -0500, Coleen Phillimore wrote: > Hi, this looks good.??I'll test it out and sponsor it. > Thanks Severin. > Coleen Thanks David and Coleen! Cheers, Severin > On 2/17/16 3:38 AM, David Holmes wrote: > > Hi Severin, > > > > On 17/02/2016 3:47 AM, Severin Gehwolf wrote: > > > Hi, > > > > > > Could somebody please review and sponsor this Zero-only change. The > > > hotspot build for Zero had some compiler warnings disabled for no good > > > reason. I've fixed the code so the silencing isn't necessary any more. > > > > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8143245 > > > webrev:? > > > http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8143245/webrev.01/ > > > > > > Thoughts? > > > > This seems okay to me. One minor nit in os_linux_zero.cpp, SpinPause? > > has an indent of 4 instead of 2. :) > > > > Is there a specific Zero reviewer you want to approve this? > > > > David > > ----- > > > > > Thanks, > > > Severin > > > > From igor.ignatyev at oracle.com Thu Feb 18 09:34:59 2016 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 18 Feb 2016 10:34:59 +0100 Subject: RFR(XS): 8149780: GatherProcessInfoTimeoutHandler shouldn't call getWin32Pid if the lib isn't load In-Reply-To: <56C341A8.2050903@oracle.com> References: <56C341A8.2050903@oracle.com> Message-ID: <7B5BAEDC-194C-4EE9-BC20-EF263A2B76C7@oracle.com> Hi Kirill, looks good to me, reviewed. Thanks, Igor > On Feb 16, 2016, at 4:35 PM, Kirill Zhaldybin wrote: > > Dear all, > > Could you please review this small fix for 8149780 which adds correct handling of situation when the lib is not loaded? > > WebRev: http://cr.openjdk.java.net/~kzhaldyb/webrevs/JDK-8149780/webrev.00/ > > CR: https://bugs.openjdk.java.net/browse/JDK-8149780 > > Thank you. > > Regards, Kirill From igor.ignatyev at oracle.com Thu Feb 18 09:35:12 2016 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 18 Feb 2016 10:35:12 +0100 Subject: RFR(XS): 8146287: typos in /test/failure_handler In-Reply-To: <56C341B4.2090801@oracle.com> References: <56C341B4.2090801@oracle.com> Message-ID: <3683D441-7F17-4661-8D11-4251C7F72950@oracle.com> Hi Kirill, looks good to me, reviewed. Thanks, ? Igor > On Feb 16, 2016, at 4:35 PM, Kirill Zhaldybin wrote: > > Dear all, > > Could you please review this small fix for 8146287 which fixes the typos? > > WebRev: http://cr.openjdk.java.net/~kzhaldyb/webrevs/JDK-8146287/webrev.01/ > > CR: https://bugs.openjdk.java.net/browse/JDK-8146287 > > Thank you. > > Regards, Kirill From kirill.zhaldybin at oracle.com Thu Feb 18 11:58:43 2016 From: kirill.zhaldybin at oracle.com (Kirill Zhaldybin) Date: Thu, 18 Feb 2016 14:58:43 +0300 Subject: RFR(XS): 8146287: typos in /test/failure_handler In-Reply-To: <3683D441-7F17-4661-8D11-4251C7F72950@oracle.com> References: <56C341B4.2090801@oracle.com> <3683D441-7F17-4661-8D11-4251C7F72950@oracle.com> Message-ID: <56C5B1F3.4040005@oracle.com> Igor, Thank you for review! Regards, Kirill On 18.02.2016 12:35, Igor Ignatyev wrote: > Hi Kirill, > > looks good to me, reviewed. > > Thanks, > ? 
Igor > >> On Feb 16, 2016, at 4:35 PM, Kirill Zhaldybin wrote: >> >> Dear all, >> >> Could you please review this small fix for 8146287 which fixes the typos? >> >> WebRev: http://cr.openjdk.java.net/~kzhaldyb/webrevs/JDK-8146287/webrev.01/ >> >> CR: https://bugs.openjdk.java.net/browse/JDK-8146287 >> >> Thank you. >> >> Regards, Kirill > From kirill.zhaldybin at oracle.com Thu Feb 18 11:58:55 2016 From: kirill.zhaldybin at oracle.com (Kirill Zhaldybin) Date: Thu, 18 Feb 2016 14:58:55 +0300 Subject: RFR(XS): 8149780: GatherProcessInfoTimeoutHandler shouldn't call getWin32Pid if the lib isn't load In-Reply-To: <7B5BAEDC-194C-4EE9-BC20-EF263A2B76C7@oracle.com> References: <56C341A8.2050903@oracle.com> <7B5BAEDC-194C-4EE9-BC20-EF263A2B76C7@oracle.com> Message-ID: <56C5B1FF.2030101@oracle.com> Igor, Thank you for review! Regards, Kirill On 18.02.2016 12:34, Igor Ignatyev wrote: > Hi Kirill, > > looks good to me, reviewed. > > Thanks, > Igor > >> On Feb 16, 2016, at 4:35 PM, Kirill Zhaldybin wrote: >> >> Dear all, >> >> Could you please review this small fix for 8149780 which adds correct handling of situation when the lib is not loaded? >> >> WebRev: http://cr.openjdk.java.net/~kzhaldyb/webrevs/JDK-8149780/webrev.00/ >> >> CR: https://bugs.openjdk.java.net/browse/JDK-8149780 >> >> Thank you. >> >> Regards, Kirill > From coleen.phillimore at oracle.com Thu Feb 18 15:12:40 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 18 Feb 2016 10:12:40 -0500 Subject: RFR(S): 8143245: Zero build requires disabled warnings In-Reply-To: <1455786076.3626.2.camel@redhat.com> References: <1455644827.4680.17.camel@redhat.com> <56C4318C.7050309@oracle.com> <56C4CC60.5010206@oracle.com> <1455786076.3626.2.camel@redhat.com> Message-ID: <56C5DF68.9090307@oracle.com> Severin, Thank you for the contribution. Coleen On 2/18/16 4:01 AM, Severin Gehwolf wrote: > On Wed, 2016-02-17 at 14:39 -0500, Coleen Phillimore wrote: >> Hi, this looks good. I'll test it out and sponsor it. >> Thanks Severin. >> Coleen > Thanks David and Coleen! > > Cheers, > Severin > >> On 2/17/16 3:38 AM, David Holmes wrote: >>> Hi Severin, >>> >>> On 17/02/2016 3:47 AM, Severin Gehwolf wrote: >>>> Hi, >>>> >>>> Could somebody please review and sponsor this Zero-only change. The >>>> hotspot build for Zero had some compiler warnings disabled for no good >>>> reason. I've fixed the code so the silencing isn't necessary any more. >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8143245 >>>> webrev: >>>> http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8143245/webrev.01/ >>>> >>>> Thoughts? >>> This seems okay to me. One minor nit in os_linux_zero.cpp, SpinPause >>> has an indent of 4 instead of 2. :) >>> >>> Is there a specific Zero reviewer you want to approve this? 
>>> >>> David >>> ----- >>> >>>> Thanks, >>>> Severin >>>> From tom.benson at oracle.com Thu Feb 18 15:21:27 2016 From: tom.benson at oracle.com (Tom Benson) Date: Thu, 18 Feb 2016 10:21:27 -0500 Subject: [9] RFR (S) 8146436: Add -XX:+UseAggressiveHeapShrink option In-Reply-To: <56BBA353.2020805@oracle.com> References: <56B39A43.5070409@oracle.com> <56BB8F3D.3070502@oracle.com> <56BB9831.5030504@oracle.com> <56BB9A9A.7060901@oracle.com> <56BB9FEA.5070506@oracle.com> <56BBA27C.2080106@oracle.com> <56BBA353.2020805@oracle.com> Message-ID: <56C5E177.9030104@oracle.com> Hi Chris, On 2/10/2016 3:53 PM, Chris Plummer wrote: > On 2/10/16 12:50 PM, Tom Benson wrote: >> >> I've heard from another GC team person that there might be more >> feedback on the name coming, after some discussion. Not sure if it >> will constitute the 'landslide' I mentioned. 8^) > No problem. I'll wait for that to settle before sending out a final > webrev. "Landslide" may not be the right term, but there was mild consensus among the GC team that it's worth going through CCC again to replace UseAggressiveHeapShrink. Our suggestion is "ShrinkHeapInSteps", which would be on by default, and hopefully describes what's happening without any other implications. So you'd disable it to get what you want. Tom > > thanks, > > Chris >> Tom >> >> On 2/10/2016 3:39 PM, Chris Plummer wrote: >>> Hi Tom, >>> >>> if (!UseAggressiveHeapShrink) { >>> // If UseAggressiveHeapShrink is false (the default), >>> // we don't want shrink all the way back to initSize if >>> people call >>> // System.gc(), because some programs do that between >>> "phases" and then >>> // we'd just have to grow the heap up again for the next >>> phase. So we >>> // damp the shrinking: 0% on the first call, 10% on the >>> second call, 40% >>> // on the third call, and 100% by the fourth call. But if we >>> recompute >>> // size without shrinking, it goes back to 0%. >>> shrink_bytes = shrink_bytes / 100 * current_shrink_factor; >>> } >>> assert(shrink_bytes <= max_shrink_bytes, "invalid shrink size"); >>> if (current_shrink_factor == 0) { >>> _shrink_factor = 10; >>> } else { >>> _shrink_factor = MIN2(current_shrink_factor * 4, (size_t) 100); >>> } >>> >>> I got rid of the changes at the start of the method, and added the >>> !UseAggressiveHeapShrink check and the comment, so the first 2 lines >>> above and the closing right brace are now the only change in the >>> file, other than the copyright date. If you want I could also move >>> the _shrink_factor adjustment into this block since the value of >>> _shrink_factor becomes irrelevant if UseAggressiveHeapShrink is >>> true. The assert should remain outside the block. >>> >>> cheers, >>> >>> Chris >>> >>> On 2/10/16 12:16 PM, Tom Benson wrote: >>>> Hi Chris, >>>> OK, that all sounds good. >>>> >>>> >> I can change it, although that will mean filing a new CCC. >>>> Ah, I'd forgotten about that. Not worth it, unless there's a >>>> landslide of support for a different name. >>>> >>>> Tnx, >>>> Tom >>>> >>>> On 2/10/2016 3:06 PM, Chris Plummer wrote: >>>>> Hi Tom, >>>>> >>>>> Thanks for having a look. Comments inline below: >>>>> >>>>> On 2/10/16 11:27 AM, Tom Benson wrote: >>>>>> Hi Chris, >>>>>> My apologies if I missed the discussion somewhere, but is there a >>>>>> specific rationale for adding this that can be mentioned in the >>>>>> bug report? I can imagine scenarios where it would be useful, >>>>>> but maybe the real need can be called out. 
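To make the quoted damping concrete, here is a rough worked example in plain Java (toy numbers only; it illustrates the 0/10/40/100 percent stepping from the comment above and is not VM code):

    // Toy illustration of the stepped heap shrinking described in the quoted
    // comment. The 800 MB "requested" shrink is a made-up number.
    public class ShrinkSteps {
        public static void main(String[] args) {
            long requested = 800;          // MB the heap could give back
            long factor = 0;               // mirrors _shrink_factor, starts at 0
            boolean shrinkInSteps = true;  // false = shrink all the way right away
            for (int fullGc = 1; fullGc <= 4; fullGc++) {
                long applied = shrinkInSteps ? requested / 100 * factor : requested;
                System.out.printf("full GC %d: shrink %d of %d MB (%d%%)%n",
                                  fullGc, applied, requested, factor);
                factor = (factor == 0) ? 10 : Math.min(factor * 4, 100);
            }
        }
    }

With the stepping disabled, the full requested amount would be given back on the first full GC instead, which is the behaviour the proposed flag is meant to allow.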
>>>>> In general, it is for customers that want to minimize the amount >>>>> of memory used by the java heap, and are willing to sacrifice some >>>>> performance (induce more frequent GCs) to save that memory. When >>>>> heap usage fluctuates greatly, the GC will tend to hold on to that >>>>> memory longer than needed due to the the current algorithm which >>>>> requires 4 full GCs before MaxHeapFreeRatio is fully honored. If >>>>> this is what you are looking for, I can add it to the CR. >>>>>> >>>>>> I think it might be clearer if the new code in cardGeneration was >>>>>> moved down to where the values are used. IE, I would leave the >>>>>> inits of current_shrink_factor and _shrink_factor as they were at >>>>>> lines 190/191. Then down at 270, just don't divide by the >>>>>> shrink factor if UseAggressiveHeapShrink is set, and the updates >>>>>> to shrink factor can be in the same conditional. This has the >>>>>> advantage that you can fix the comment just above it to match >>>>>> this special case. Do you think that would work? >>>>> Yes, that makes sense. I'll get started on it. I have a vacation >>>>> coming up shortly, so what I'll get a new webrev out soon, but >>>>> probably will need to wait until after my trip to do more thorough >>>>> testing and push the changes. >>>>>> >>>>>> It looks like the ending "\" at line 3330 in globals.hpp isn't >>>>>> aligned, and the copyright in cardGeneration.cpp needs to be >>>>>> updated. >>>>> Ok. >>>>>> >>>>>> One other nit, which you can ignore unless someone comes forward >>>>>> to agree with me 8^) , is that I'd prefer the name >>>>>> ShrinkHeapAggressively instead. Maybe this was already debated >>>>>> elsewhere.... >>>>> The name choice hasn't really been discussed or questioned. It was >>>>> what was suggested to me, so I stuck with it (The initial work was >>>>> done by someone else. I'm just getting it integrated into 9). I >>>>> can change it, although that will mean filing a new CCC. >>>>> >>>>> thanks, >>>>> >>>>> Chris >>>>>> Tom >>>>>> >>>>>> On 2/4/2016 1:36 PM, Chris Plummer wrote: >>>>>>> Hello, >>>>>>> >>>>>>> Please review the following for adding the -XX >>>>>>> UseAggressiveHeapShrink option. When turned on, it tells the GC >>>>>>> to reduce the heap size to the new target size immediately after >>>>>>> a full GC rather than doing it progressively over 4 GCs. >>>>>>> >>>>>>> Webrev: http://cr.openjdk.java.net/~cjplummer/8146436/webrev.02/ >>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8146436 >>>>>>> >>>>>>> Testing: >>>>>>> -JPRT with '-testset hotspot' >>>>>>> -JPRT with '-testset hotspot -vmflags >>>>>>> "-XX:+UseAggressiveHeapShrink"' >>>>>>> -added new TestMaxMinHeapFreeRatioFlags.java test >>>>>>> >>>>>>> thanks, >>>>>>> >>>>>>> Chris >>>>>> >>>>> >>>> >>> >> > From adinn at redhat.com Thu Feb 18 15:47:54 2016 From: adinn at redhat.com (Andrew Dinn) Date: Thu, 18 Feb 2016 15:47:54 +0000 Subject: 8150045: AArch64: arraycopy causes segfaults in SATB during garbage collection In-Reply-To: <56C482D4.2010101@redhat.com> References: <56C48215.3040106@redhat.com> <56C482D4.2010101@redhat.com> Message-ID: <56C5E7AA.4050209@redhat.com> On 17/02/16 14:25, Andrew Haley wrote: > Sorry, I forgot to say this is AArch64-specific. > > On 02/17/2016 02:22 PM, Andrew Haley wrote: >> This is a bug due to the abuse of default arguments in C++. I, ah, >> forgot to pass dest_uninitialized to the OOP arraycopy routines, so we >> always scan the destination array, even though it contains garbage. 
>> >> I also took the opportunity to do a little tidying-up. >> >> http://cr.openjdk.java.net/~aph/8150045/ This looks fine, including the tidying up. Reviewed as AArch64-only change. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Paul Argiry (US) From christian.thalinger at oracle.com Thu Feb 18 17:27:52 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Thu, 18 Feb 2016 07:27:52 -1000 Subject: Add instrumentation in the TemplateInterpreter In-Reply-To: References: Message-ID: Does it have to be the template interpreter or could you do your work with Zero as well? > On Feb 9, 2016, at 11:19 PM, Khanh Nguyen wrote: > > Hello, > > I want to add instrumentation to monitor all reads and writes in the > TemplateInterpreter, I think I got the correct place for it in > /cpu/x86/vm/templateTable_x86_64.cpp. Can someone please tell me if I'm > doing it right? > > For writes: > static void do_oop_store(InterpreterMacroAssembler* _masm, > Address obj, > Register val, > BarrierSet::Name barrier, > bool precise) { > [...] > case BarrierSet::CardTableModRef: > case BarrierSet::CardTableExtension: > { > if (val == noreg) { > __ store_heap_oop_null(obj); > } else { > __ store_heap_oop(obj, val); > > /*mycodeA*/ __ movptr(c_rarg1, obj.base()); // save this value otherwise > it will be changed? > > // flatten object address if needed > if (!precise || (obj.index() == noreg && obj.disp() == 0)) { > __ store_check(obj.base()); > /*mycodeB*/ __ call_VM(noreg, //void > CAST_FROM_FN_PTR(address, > InterpreterRuntime::write_helper), > c_rarg1, // obj > c_rarg1, // field address because store check is > called on field address > val); > } else { > __ leaq(rdx, obj); > __ store_check(rdx); > /*mycodeC*/ __ call_VM(noreg, //void > CAST_FROM_FN_PTR(address, > InterpreterRuntime::write_helper), > c_rarg1, // obj > rdx, // field address, because store check is > called on field address > val); > } > } > break; > > For reads: > case Bytecodes::_fast_agetfield: > __ load_heap_oop(rax, field); > > /*mycodeD*/ __ call_VM(noreg, > CAST_FROM_FN_PTR(address, > InterpreterRuntime::read_barrier_helper), > rax); > > __ verify_oop(rax); > break; > > My questions are: > > 1) I thought this represents a putfield a.f=b where a.f is represented by > the parameter obj of type Address. b is obvious the parameter val of type > Register. Especially in obj there are fields: base, index and disp. But as > I run this, looks like obj is actually the field address. (the case mycodeB) > I haven't found a test case that can trigger the case mycodeC to see the > behavior (i.e., rdx might get destroyed and I got random value back or > c_rarg1 is the obj address and rdx is field address) > > 2) Before this, I tried to insert the same __ call_VM in fast_aputfield > before do_oop_store but it results in JVM crash. I don't understand the > reason why. What I did in the call is just print the parameters. I did see > the values printed (only the 1st time it goes to the method) but then the > VM crashed. I thought __ call_VM will preserve all registers's value and > restore properly when comes back. My instrumentation has no side effect, I > just observe and record the values (actually just printing the values to > test). 
> > 3) Is it strictly required to have the line /*mycodeA*/ I tried to, in > mycodeB line, pass obj.base() twice and it got build errors for "smashed > args"? > > I greatly appreciate your time, > > Best, > > Khanh Nguyen From christian.thalinger at oracle.com Thu Feb 18 19:14:19 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Thu, 18 Feb 2016 09:14:19 -1000 Subject: Add instrumentation in the TemplateInterpreter In-Reply-To: References: Message-ID: Can you share the reason? > On Feb 18, 2016, at 8:01 AM, Khanh Nguyen wrote: > > Unfortunately it has to be the Template Interpreter. > > On Feb 18, 2016 9:27 AM, "Christian Thalinger" > wrote: > Does it have to be the template interpreter or could you do your work with Zero as well? > > > On Feb 9, 2016, at 11:19 PM, Khanh Nguyen > wrote: > > > > Hello, > > > > I want to add instrumentation to monitor all reads and writes in the > > TemplateInterpreter, I think I got the correct place for it in > > /cpu/x86/vm/templateTable_x86_64.cpp. Can someone please tell me if I'm > > doing it right? > > > > For writes: > > static void do_oop_store(InterpreterMacroAssembler* _masm, > > Address obj, > > Register val, > > BarrierSet::Name barrier, > > bool precise) { > > [...] > > case BarrierSet::CardTableModRef: > > case BarrierSet::CardTableExtension: > > { > > if (val == noreg) { > > __ store_heap_oop_null(obj); > > } else { > > __ store_heap_oop(obj, val); > > > > /*mycodeA*/ __ movptr(c_rarg1, obj.base()); // save this value otherwise > > it will be changed? > > > > // flatten object address if needed > > if (!precise || (obj.index() == noreg && obj.disp() == 0)) { > > __ store_check(obj.base()); > > /*mycodeB*/ __ call_VM(noreg, //void > > CAST_FROM_FN_PTR(address, > > InterpreterRuntime::write_helper), > > c_rarg1, // obj > > c_rarg1, // field address because store check is > > called on field address > > val); > > } else { > > __ leaq(rdx, obj); > > __ store_check(rdx); > > /*mycodeC*/ __ call_VM(noreg, //void > > CAST_FROM_FN_PTR(address, > > InterpreterRuntime::write_helper), > > c_rarg1, // obj > > rdx, // field address, because store check is > > called on field address > > val); > > } > > } > > break; > > > > For reads: > > case Bytecodes::_fast_agetfield: > > __ load_heap_oop(rax, field); > > > > /*mycodeD*/ __ call_VM(noreg, > > CAST_FROM_FN_PTR(address, > > InterpreterRuntime::read_barrier_helper), > > rax); > > > > __ verify_oop(rax); > > break; > > > > My questions are: > > > > 1) I thought this represents a putfield a.f=b where a.f is represented by > > the parameter obj of type Address. b is obvious the parameter val of type > > Register. Especially in obj there are fields: base, index and disp. But as > > I run this, looks like obj is actually the field address. (the case mycodeB) > > I haven't found a test case that can trigger the case mycodeC to see the > > behavior (i.e., rdx might get destroyed and I got random value back or > > c_rarg1 is the obj address and rdx is field address) > > > > 2) Before this, I tried to insert the same __ call_VM in fast_aputfield > > before do_oop_store but it results in JVM crash. I don't understand the > > reason why. What I did in the call is just print the parameters. I did see > > the values printed (only the 1st time it goes to the method) but then the > > VM crashed. I thought __ call_VM will preserve all registers's value and > > restore properly when comes back. 
My instrumentation has no side effect, I > > just observe and record the values (actually just printing the values to > > test). > > > > 3) Is it strictly required to have the line /*mycodeA*/ I tried to, in > > mycodeB line, pass obj.base() twice and it got build errors for "smashed > > args"? > > > > I greatly appreciate your time, > > > > Best, > > > > Khanh Nguyen > From mikael.vidstedt at oracle.com Thu Feb 18 19:22:47 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Thu, 18 Feb 2016 11:22:47 -0800 Subject: RFR (M): 8149159: Clean up Unsafe Message-ID: <56C61A07.4010604@oracle.com> Please review the following change which does some relatively significant cleaning up of the Unsafe implementation. Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 Webrev (hotspot): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ Webrev (jdk): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ Summary: * To avoid code duplication sun.misc.Unsafe now delegates all work to jdk.internal.misc.Unsafe. This also means that the VM - and unsafe.cpp specifically - no longer needs to know or care about s.m.Unsafe. * The s.m.Unsafe delegation methods have all been decorated with @ForceInline to minimize the risk of performance regressions, though it is highly likely that they will be inlined even without the annotations. * The documentation has been updated to reflect that it is the responsibility of the user of Unsafe to make sure arguments are valid. * The argument checking has, to the extent possible, been moved from unsafe.cpp up to Java to simplify the native code and allow the JIT to optimize it. * Some of the argument checks have been relaxed. For example, the recently introduced U.copySwapMemory does not check for null pointers anymore. See docs for j.i.m.U.checkPointer for the complete reasoning behind this. Note that the Unsafe methods today, apart from U.copySwapMemory, do not perform the NULL related check(s). * A test was added for j.i.m.U.copyMemory, based on U.copySwapMemory. Feel free to point out that I should merge them (because I should). Also, unsafe.cpp was cleaned up rather dramatically. Some specific highlights: * Unsafe_ functions are now declared static, as are the other unsafe.cpp local functions. * Created unsafe.hpp and moved some functions used in other parts of the VM to that. Removed some "extern" function declarations (extern is bad, kittens die when extern is (over-)used). * Remove scary looking comment about UNSAFE_LEAF not being possible to use - there's nothing special about it, it's just a JVM_LEAF. * Used UNSAFE_LEAF for a few simple leaf methods * Added helpful braces around UNSAFE_ENTRY/UNSAFE_END to help auto-indent * Removed unused Unsafe_<...>##140 functions/macros * Updated macro argument names to be consistent throughout unsafe.cpp macro definitions * Replaced some checks with asserts - as per above the checks are now performed in j.i.m.Unsafe instead. * Removed all the s.m.Unsafe related code Testing: * jtreg: hotspot_jprt group, jdk/internal * JPRT: hotspot testset * Perf: JMH unsafe-bench.jar (no significant changes) I'm taking suggestions on additional things to test. 
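For reviewers who want the delegation pattern at a glance, it boils down to roughly the following (a heavily simplified sketch, not a copy of the webrev; the import shown for @ForceInline is an assumption):

    // Simplified sketch of the sun.misc.Unsafe -> jdk.internal.misc.Unsafe
    // delegation described above; see the jdk webrev for the real code.
    package sun.misc;

    import jdk.internal.vm.annotation.ForceInline;   // assumed location of @ForceInline

    public final class Unsafe {
        private static final jdk.internal.misc.Unsafe theInternalUnsafe =
                jdk.internal.misc.Unsafe.getUnsafe();

        @ForceInline
        public int getInt(Object o, long offset) {
            return theInternalUnsafe.getInt(o, offset);
        }

        @ForceInline
        public void putInt(Object o, long offset, int x) {
            theInternalUnsafe.putInt(o, offset, x);
        }

        // ... every other public method forwards in the same way, which is why
        // unsafe.cpp no longer needs any sun.misc.Unsafe specific entry points.
    }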
Cheers, Mikael From ktruong.nguyen at gmail.com Thu Feb 18 19:30:53 2016 From: ktruong.nguyen at gmail.com (Khanh Nguyen) Date: Thu, 18 Feb 2016 11:30:53 -0800 Subject: Add instrumentation in the TemplateInterpreter In-Reply-To: References: Message-ID: The main reason is the performance difference between the TemplateInterpreter and the BytecodeInterpreter in Zero. I did not verify the difference but I found from this mailing list that the difference is 10x. And since we are talking about Zero. How much is the performance difference between ZeroShark and the standard Hotspot, do you by any chance know? Thanks On Feb 18, 2016 11:14 AM, "Christian Thalinger" < christian.thalinger at oracle.com> wrote: > Can you share the reason? > > On Feb 18, 2016, at 8:01 AM, Khanh Nguyen > wrote: > > Unfortunately it has to be the Template Interpreter. > On Feb 18, 2016 9:27 AM, "Christian Thalinger" < > christian.thalinger at oracle.com> wrote: > >> Does it have to be the template interpreter or could you do your work >> with Zero as well? >> >> > On Feb 9, 2016, at 11:19 PM, Khanh Nguyen >> wrote: >> > >> > Hello, >> > >> > I want to add instrumentation to monitor all reads and writes in the >> > TemplateInterpreter, I think I got the correct place for it in >> > /cpu/x86/vm/templateTable_x86_64.cpp. Can someone please tell me if I'm >> > doing it right? >> > >> > For writes: >> > static void do_oop_store(InterpreterMacroAssembler* _masm, >> > Address obj, >> > Register val, >> > BarrierSet::Name barrier, >> > bool precise) { >> > [...] >> > case BarrierSet::CardTableModRef: >> > case BarrierSet::CardTableExtension: >> > { >> > if (val == noreg) { >> > __ store_heap_oop_null(obj); >> > } else { >> > __ store_heap_oop(obj, val); >> > >> > /*mycodeA*/ __ movptr(c_rarg1, obj.base()); // save this value >> otherwise >> > it will be changed? >> > >> > // flatten object address if needed >> > if (!precise || (obj.index() == noreg && obj.disp() == 0)) { >> > __ store_check(obj.base()); >> > /*mycodeB*/ __ call_VM(noreg, //void >> > CAST_FROM_FN_PTR(address, >> > >> InterpreterRuntime::write_helper), >> > c_rarg1, // obj >> > c_rarg1, // field address because store check is >> > called on field address >> > val); >> > } else { >> > __ leaq(rdx, obj); >> > __ store_check(rdx); >> > /*mycodeC*/ __ call_VM(noreg, //void >> > CAST_FROM_FN_PTR(address, >> > >> InterpreterRuntime::write_helper), >> > c_rarg1, // obj >> > rdx, // field address, because store check is >> > called on field address >> > val); >> > } >> > } >> > break; >> > >> > For reads: >> > case Bytecodes::_fast_agetfield: >> > __ load_heap_oop(rax, field); >> > >> > /*mycodeD*/ __ call_VM(noreg, >> > CAST_FROM_FN_PTR(address, >> > InterpreterRuntime::read_barrier_helper), >> > rax); >> > >> > __ verify_oop(rax); >> > break; >> > >> > My questions are: >> > >> > 1) I thought this represents a putfield a.f=b where a.f is represented >> by >> > the parameter obj of type Address. b is obvious the parameter val of >> type >> > Register. Especially in obj there are fields: base, index and disp. But >> as >> > I run this, looks like obj is actually the field address. (the case >> mycodeB) >> > I haven't found a test case that can trigger the case mycodeC to see the >> > behavior (i.e., rdx might get destroyed and I got random value back or >> > c_rarg1 is the obj address and rdx is field address) >> > >> > 2) Before this, I tried to insert the same __ call_VM in fast_aputfield >> > before do_oop_store but it results in JVM crash. 
I don't understand the >> > reason why. What I did in the call is just print the parameters. I did >> see >> > the values printed (only the 1st time it goes to the method) but then >> the >> > VM crashed. I thought __ call_VM will preserve all registers's value and >> > restore properly when comes back. My instrumentation has no side >> effect, I >> > just observe and record the values (actually just printing the values to >> > test). >> > >> > 3) Is it strictly required to have the line /*mycodeA*/ I tried to, in >> > mycodeB line, pass obj.base() twice and it got build errors for "smashed >> > args"? >> > >> > I greatly appreciate your time, >> > >> > Best, >> > >> > Khanh Nguyen >> >> > From vladimir.kozlov at oracle.com Thu Feb 18 21:52:33 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 18 Feb 2016 13:52:33 -0800 Subject: RFR (M): 8149159: Clean up Unsafe In-Reply-To: <56C61A07.4010604@oracle.com> References: <56C61A07.4010604@oracle.com> Message-ID: <56C63D21.8000403@oracle.com> Hotspot changes looks fine. Nice cleanup! What was changed in interfaceSupport.hpp (it is empty in webrev)? Thanks, Vladimir On 2/18/16 11:22 AM, Mikael Vidstedt wrote: > > Please review the following change which does some relatively significant cleaning up of the Unsafe implementation. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 > Webrev (hotspot): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ > Webrev (jdk): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ > > Summary: > > * To avoid code duplication sun.misc.Unsafe now delegates all work to jdk.internal.misc.Unsafe. This also means that the > VM - and unsafe.cpp specifically - no longer needs to know or care about s.m.Unsafe. > * The s.m.Unsafe delegation methods have all been decorated with @ForceInline to minimize the risk of performance > regressions, though it is highly likely that they will be inlined even without the annotations. > * The documentation has been updated to reflect that it is the responsibility of the user of Unsafe to make sure > arguments are valid. > * The argument checking has, to the extent possible, been moved from unsafe.cpp up to Java to simplify the native code > and allow the JIT to optimize it. > * Some of the argument checks have been relaxed. For example, the recently introduced U.copySwapMemory does not check > for null pointers anymore. See docs for j.i.m.U.checkPointer for the complete reasoning behind this. Note that the > Unsafe methods today, apart from U.copySwapMemory, do not perform the NULL related check(s). > * A test was added for j.i.m.U.copyMemory, based on U.copySwapMemory. Feel free to point out that I should merge them > (because I should). > > Also, unsafe.cpp was cleaned up rather dramatically. Some specific highlights: > > * Unsafe_ functions are now declared static, as are the other unsafe.cpp local functions. > * Created unsafe.hpp and moved some functions used in other parts of the VM to that. Removed some "extern" function > declarations (extern is bad, kittens die when extern is (over-)used). > * Remove scary looking comment about UNSAFE_LEAF not being possible to use - there's nothing special about it, it's just > a JVM_LEAF. 
> * Used UNSAFE_LEAF for a few simple leaf methods > * Added helpful braces around UNSAFE_ENTRY/UNSAFE_END to help auto-indent > * Removed unused Unsafe_<...>##140 functions/macros > * Updated macro argument names to be consistent throughout unsafe.cpp macro definitions > * Replaced some checks with asserts - as per above the checks are now performed in j.i.m.Unsafe instead. > * Removed all the s.m.Unsafe related code > > > Testing: > > * jtreg: hotspot_jprt group, jdk/internal > * JPRT: hotspot testset > * Perf: JMH unsafe-bench.jar (no significant changes) > > I'm taking suggestions on additional things to test. > > Cheers, > Mikael > From mikael.vidstedt at oracle.com Thu Feb 18 22:13:52 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Thu, 18 Feb 2016 14:13:52 -0800 Subject: RFR (M): 8149159: Clean up Unsafe In-Reply-To: <56C63D21.8000403@oracle.com> References: <56C61A07.4010604@oracle.com> <56C63D21.8000403@oracle.com> Message-ID: <56C64220.3070209@oracle.com> There's are indentation changes of two of the backslashes for the JNI_ENTRY_NO_PRESERVE macro definition to align them with the other backslashes. I noticed that a similar change is needed for JNI_QUICK_ENTRY and JNI_LEAF. Arguably it should be done as a separate change, but I'm not sure it's worth the overhead... Cheers, Mikael On 2016-02-18 13:52, Vladimir Kozlov wrote: > Hotspot changes looks fine. Nice cleanup! What was changed in > interfaceSupport.hpp (it is empty in webrev)? > > Thanks, > Vladimir > > On 2/18/16 11:22 AM, Mikael Vidstedt wrote: >> >> Please review the following change which does some relatively >> significant cleaning up of the Unsafe implementation. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 >> Webrev (hotspot): >> http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ >> Webrev (jdk): >> http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ >> >> Summary: >> >> * To avoid code duplication sun.misc.Unsafe now delegates all work to >> jdk.internal.misc.Unsafe. This also means that the >> VM - and unsafe.cpp specifically - no longer needs to know or care >> about s.m.Unsafe. >> * The s.m.Unsafe delegation methods have all been decorated with >> @ForceInline to minimize the risk of performance >> regressions, though it is highly likely that they will be inlined >> even without the annotations. >> * The documentation has been updated to reflect that it is the >> responsibility of the user of Unsafe to make sure >> arguments are valid. >> * The argument checking has, to the extent possible, been moved from >> unsafe.cpp up to Java to simplify the native code >> and allow the JIT to optimize it. >> * Some of the argument checks have been relaxed. For example, the >> recently introduced U.copySwapMemory does not check >> for null pointers anymore. See docs for j.i.m.U.checkPointer for the >> complete reasoning behind this. Note that the >> Unsafe methods today, apart from U.copySwapMemory, do not perform the >> NULL related check(s). >> * A test was added for j.i.m.U.copyMemory, based on U.copySwapMemory. >> Feel free to point out that I should merge them >> (because I should). >> >> Also, unsafe.cpp was cleaned up rather dramatically. Some specific >> highlights: >> >> * Unsafe_ functions are now declared static, as are the other >> unsafe.cpp local functions. >> * Created unsafe.hpp and moved some functions used in other parts of >> the VM to that. 
Removed some "extern" function >> declarations (extern is bad, kittens die when extern is (over-)used). >> * Remove scary looking comment about UNSAFE_LEAF not being possible >> to use - there's nothing special about it, it's just >> a JVM_LEAF. >> * Used UNSAFE_LEAF for a few simple leaf methods >> * Added helpful braces around UNSAFE_ENTRY/UNSAFE_END to help >> auto-indent >> * Removed unused Unsafe_<...>##140 functions/macros >> * Updated macro argument names to be consistent throughout unsafe.cpp >> macro definitions >> * Replaced some checks with asserts - as per above the checks are now >> performed in j.i.m.Unsafe instead. >> * Removed all the s.m.Unsafe related code >> >> >> Testing: >> >> * jtreg: hotspot_jprt group, jdk/internal >> * JPRT: hotspot testset >> * Perf: JMH unsafe-bench.jar (no significant changes) >> >> I'm taking suggestions on additional things to test. >> >> Cheers, >> Mikael >> From christian.thalinger at oracle.com Thu Feb 18 22:50:42 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Thu, 18 Feb 2016 12:50:42 -1000 Subject: Add instrumentation in the TemplateInterpreter In-Reply-To: References: Message-ID: <746B7FA9-950F-4F65-88F3-C7C39894F7E9@oracle.com> > On Feb 18, 2016, at 9:30 AM, Khanh Nguyen wrote: > > The main reason is the performance difference between the TemplateInterpreter and the BytecodeInterpreter in Zero. > I did not verify the difference but I found from this mailing list that the difference is 10x. > > But the instrumentation you are adding is quite expensive and I?m assuming this is for research or academia? > And since we are talking about Zero. How much is the performance difference between ZeroShark and the standard Hotspot, do you by any chance know? > > Thanks > > On Feb 18, 2016 11:14 AM, "Christian Thalinger" > wrote: > Can you share the reason? > >> On Feb 18, 2016, at 8:01 AM, Khanh Nguyen > wrote: >> >> Unfortunately it has to be the Template Interpreter. >> >> On Feb 18, 2016 9:27 AM, "Christian Thalinger" > wrote: >> Does it have to be the template interpreter or could you do your work with Zero as well? >> >> > On Feb 9, 2016, at 11:19 PM, Khanh Nguyen > wrote: >> > >> > Hello, >> > >> > I want to add instrumentation to monitor all reads and writes in the >> > TemplateInterpreter, I think I got the correct place for it in >> > /cpu/x86/vm/templateTable_x86_64.cpp. Can someone please tell me if I'm >> > doing it right? >> > >> > For writes: >> > static void do_oop_store(InterpreterMacroAssembler* _masm, >> > Address obj, >> > Register val, >> > BarrierSet::Name barrier, >> > bool precise) { >> > [...] >> > case BarrierSet::CardTableModRef: >> > case BarrierSet::CardTableExtension: >> > { >> > if (val == noreg) { >> > __ store_heap_oop_null(obj); >> > } else { >> > __ store_heap_oop(obj, val); >> > >> > /*mycodeA*/ __ movptr(c_rarg1, obj.base()); // save this value otherwise >> > it will be changed? 
>> > >> > // flatten object address if needed >> > if (!precise || (obj.index() == noreg && obj.disp() == 0)) { >> > __ store_check(obj.base()); >> > /*mycodeB*/ __ call_VM(noreg, //void >> > CAST_FROM_FN_PTR(address, >> > InterpreterRuntime::write_helper), >> > c_rarg1, // obj >> > c_rarg1, // field address because store check is >> > called on field address >> > val); >> > } else { >> > __ leaq(rdx, obj); >> > __ store_check(rdx); >> > /*mycodeC*/ __ call_VM(noreg, //void >> > CAST_FROM_FN_PTR(address, >> > InterpreterRuntime::write_helper), >> > c_rarg1, // obj >> > rdx, // field address, because store check is >> > called on field address >> > val); >> > } >> > } >> > break; >> > >> > For reads: >> > case Bytecodes::_fast_agetfield: >> > __ load_heap_oop(rax, field); >> > >> > /*mycodeD*/ __ call_VM(noreg, >> > CAST_FROM_FN_PTR(address, >> > InterpreterRuntime::read_barrier_helper), >> > rax); >> > >> > __ verify_oop(rax); >> > break; >> > >> > My questions are: >> > >> > 1) I thought this represents a putfield a.f=b where a.f is represented by >> > the parameter obj of type Address. b is obvious the parameter val of type >> > Register. Especially in obj there are fields: base, index and disp. But as >> > I run this, looks like obj is actually the field address. (the case mycodeB) >> > I haven't found a test case that can trigger the case mycodeC to see the >> > behavior (i.e., rdx might get destroyed and I got random value back or >> > c_rarg1 is the obj address and rdx is field address) >> > >> > 2) Before this, I tried to insert the same __ call_VM in fast_aputfield >> > before do_oop_store but it results in JVM crash. I don't understand the >> > reason why. What I did in the call is just print the parameters. I did see >> > the values printed (only the 1st time it goes to the method) but then the >> > VM crashed. I thought __ call_VM will preserve all registers's value and >> > restore properly when comes back. My instrumentation has no side effect, I >> > just observe and record the values (actually just printing the values to >> > test). >> > >> > 3) Is it strictly required to have the line /*mycodeA*/ I tried to, in >> > mycodeB line, pass obj.base() twice and it got build errors for "smashed >> > args"? >> > >> > I greatly appreciate your time, >> > >> > Best, >> > >> > Khanh Nguyen From ktruong.nguyen at gmail.com Thu Feb 18 23:43:03 2016 From: ktruong.nguyen at gmail.com (Khanh Nguyen) Date: Thu, 18 Feb 2016 15:43:03 -0800 Subject: Add instrumentation in the TemplateInterpreter In-Reply-To: <746B7FA9-950F-4F65-88F3-C7C39894F7E9@oracle.com> References: <746B7FA9-950F-4F65-88F3-C7C39894F7E9@oracle.com> Message-ID: Yes, you are correct. This is for a research project. We try to remember those references so that later we can do something similar to a GC to update the references. The base assumption is that we only have a small number of these kind of object so the cost should be acceptable. A side reason for staying with the TemplateInterpreter is that I can't convince my team to switch to BytecodeInterpreter in ZeroShark. The unknown performance of ZeroShark is partially responsible for my unability to convince them. On Feb 18, 2016 2:50 PM, "Christian Thalinger" < christian.thalinger at oracle.com> wrote: > > On Feb 18, 2016, at 9:30 AM, Khanh Nguyen > wrote: > > The main reason is the performance difference between the > TemplateInterpreter and the BytecodeInterpreter in Zero. > I did not verify the difference but I found from this mailing list that > the difference is 10x. 
> > > But the instrumentation you are adding is quite expensive and I?m assuming > this is for research or academia? > > And since we are talking about Zero. How much is the performance > difference between ZeroShark and the standard Hotspot, do you by any chance > know? > > Thanks > On Feb 18, 2016 11:14 AM, "Christian Thalinger" < > christian.thalinger at oracle.com> wrote: > >> Can you share the reason? >> >> On Feb 18, 2016, at 8:01 AM, Khanh Nguyen >> wrote: >> >> Unfortunately it has to be the Template Interpreter. >> On Feb 18, 2016 9:27 AM, "Christian Thalinger" < >> christian.thalinger at oracle.com> wrote: >> >>> Does it have to be the template interpreter or could you do your work >>> with Zero as well? >>> >>> > On Feb 9, 2016, at 11:19 PM, Khanh Nguyen >>> wrote: >>> > >>> > Hello, >>> > >>> > I want to add instrumentation to monitor all reads and writes in the >>> > TemplateInterpreter, I think I got the correct place for it in >>> > /cpu/x86/vm/templateTable_x86_64.cpp. Can someone please tell me if I'm >>> > doing it right? >>> > >>> > For writes: >>> > static void do_oop_store(InterpreterMacroAssembler* _masm, >>> > Address obj, >>> > Register val, >>> > BarrierSet::Name barrier, >>> > bool precise) { >>> > [...] >>> > case BarrierSet::CardTableModRef: >>> > case BarrierSet::CardTableExtension: >>> > { >>> > if (val == noreg) { >>> > __ store_heap_oop_null(obj); >>> > } else { >>> > __ store_heap_oop(obj, val); >>> > >>> > /*mycodeA*/ __ movptr(c_rarg1, obj.base()); // save this value >>> otherwise >>> > it will be changed? >>> > >>> > // flatten object address if needed >>> > if (!precise || (obj.index() == noreg && obj.disp() == 0)) { >>> > __ store_check(obj.base()); >>> > /*mycodeB*/ __ call_VM(noreg, //void >>> > CAST_FROM_FN_PTR(address, >>> > >>> InterpreterRuntime::write_helper), >>> > c_rarg1, // obj >>> > c_rarg1, // field address because store check is >>> > called on field address >>> > val); >>> > } else { >>> > __ leaq(rdx, obj); >>> > __ store_check(rdx); >>> > /*mycodeC*/ __ call_VM(noreg, //void >>> > CAST_FROM_FN_PTR(address, >>> > >>> InterpreterRuntime::write_helper), >>> > c_rarg1, // obj >>> > rdx, // field address, because store check is >>> > called on field address >>> > val); >>> > } >>> > } >>> > break; >>> > >>> > For reads: >>> > case Bytecodes::_fast_agetfield: >>> > __ load_heap_oop(rax, field); >>> > >>> > /*mycodeD*/ __ call_VM(noreg, >>> > CAST_FROM_FN_PTR(address, >>> > >>> InterpreterRuntime::read_barrier_helper), >>> > rax); >>> > >>> > __ verify_oop(rax); >>> > break; >>> > >>> > My questions are: >>> > >>> > 1) I thought this represents a putfield a.f=b where a.f is represented >>> by >>> > the parameter obj of type Address. b is obvious the parameter val of >>> type >>> > Register. Especially in obj there are fields: base, index and disp. >>> But as >>> > I run this, looks like obj is actually the field address. (the case >>> mycodeB) >>> > I haven't found a test case that can trigger the case mycodeC to see >>> the >>> > behavior (i.e., rdx might get destroyed and I got random value back or >>> > c_rarg1 is the obj address and rdx is field address) >>> > >>> > 2) Before this, I tried to insert the same __ call_VM in fast_aputfield >>> > before do_oop_store but it results in JVM crash. I don't understand the >>> > reason why. What I did in the call is just print the parameters. I did >>> see >>> > the values printed (only the 1st time it goes to the method) but then >>> the >>> > VM crashed. 
I thought __ call_VM will preserve all registers's value >>> and >>> > restore properly when comes back. My instrumentation has no side >>> effect, I >>> > just observe and record the values (actually just printing the values >>> to >>> > test). >>> > >>> > 3) Is it strictly required to have the line /*mycodeA*/ I tried to, in >>> > mycodeB line, pass obj.base() twice and it got build errors for >>> "smashed >>> > args"? >>> > >>> > I greatly appreciate your time, >>> > >>> > Best, >>> > >>> > Khanh Nguyen >> >> > From john.r.rose at oracle.com Fri Feb 19 03:09:22 2016 From: john.r.rose at oracle.com (John Rose) Date: Thu, 18 Feb 2016 19:09:22 -0800 Subject: RFR (M): 8149159: Clean up Unsafe In-Reply-To: <56C61A07.4010604@oracle.com> References: <56C61A07.4010604@oracle.com> Message-ID: <1E1E7B1F-516C-413E-95CC-7EFCAF3B546B@oracle.com> This is good. Reviewed, except for any further discussion (if needed) on extra checks in copyMemory. The copyMemory intrinsic gets a little extra error checking, since copyMemory0 is the intrinsic, but cannot be accessed without the copyMemoryChecks. (I see it's tested by CopyMemory.testNegative.) It should not be a performance problem, except perhaps for very short arrays. If there's a problem we can consider putting the @HSIC annotation on the wrapper (in which case the testNegative will also have to go away). I'm *not* in favor of any systematic upgrade of argument testing on the Unsafe API. If you sign up for Unsafe coding, you check your arguments yourself, or take the consequences, as your new Javadoc so eloquently states. This could be a separate bug, but these one-address access methods are totally obsolete: @ForceInline public float getFloat(long address) { return theInternalUnsafe.getFloat(address); } They should be recoded in terms of their more "modern" two-address equivalents: @ForceInline public float getFloat(long address) { return theInternalUnsafe.getFloat(null, address); } And then (soon please?) we can remove them from the "real" Unsafe and from the vmIntrinsics list, and nuke all the DEFINE_GETSETNATIVE entry points in unsafe.cpp. (?And their place shall know them no more.) BTW, for Panama we want the two-address versions for loading and storing machine words: getAddress(oop,long) and putAddress(oop,long,long), but not the old one-address versions. (This will allow us to work with native data structures both on-heap and off-heap.) ? John On Feb 18, 2016, at 11:22 AM, Mikael Vidstedt wrote: > > > Please review the following change which does some relatively significant cleaning up of the Unsafe implementation. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 > Webrev (hotspot): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ > Webrev (jdk): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ > > Summary: > > * To avoid code duplication sun.misc.Unsafe now delegates all work to jdk.internal.misc.Unsafe. This also means that the VM - and unsafe.cpp specifically - no longer needs to know or care about s.m.Unsafe. > * The s.m.Unsafe delegation methods have all been decorated with @ForceInline to minimize the risk of performance regressions, though it is highly likely that they will be inlined even without the annotations. > * The documentation has been updated to reflect that it is the responsibility of the user of Unsafe to make sure arguments are valid. 
> * The argument checking has, to the extent possible, been moved from unsafe.cpp up to Java to simplify the native code and allow the JIT to optimize it. > * Some of the argument checks have been relaxed. For example, the recently introduced U.copySwapMemory does not check for null pointers anymore. See docs for j.i.m.U.checkPointer for the complete reasoning behind this. Note that the Unsafe methods today, apart from U.copySwapMemory, do not perform the NULL related check(s). > * A test was added for j.i.m.U.copyMemory, based on U.copySwapMemory. Feel free to point out that I should merge them (because I should). > > Also, unsafe.cpp was cleaned up rather dramatically. Some specific highlights: > > * Unsafe_ functions are now declared static, as are the other unsafe.cpp local functions. > * Created unsafe.hpp and moved some functions used in other parts of the VM to that. Removed some "extern" function declarations (extern is bad, kittens die when extern is (over-)used). > * Remove scary looking comment about UNSAFE_LEAF not being possible to use - there's nothing special about it, it's just a JVM_LEAF. > * Used UNSAFE_LEAF for a few simple leaf methods > * Added helpful braces around UNSAFE_ENTRY/UNSAFE_END to help auto-indent > * Removed unused Unsafe_<...>##140 functions/macros > * Updated macro argument names to be consistent throughout unsafe.cpp macro definitions > * Replaced some checks with asserts - as per above the checks are now performed in j.i.m.Unsafe instead. > * Removed all the s.m.Unsafe related code > > > Testing: > > * jtreg: hotspot_jprt group, jdk/internal > * JPRT: hotspot testset > * Perf: JMH unsafe-bench.jar (no significant changes) > > I'm taking suggestions on additional things to test. > > Cheers, > Mikael > From stanislav.smirnov at oracle.com Fri Feb 19 07:16:49 2016 From: stanislav.smirnov at oracle.com (Stas Smirnov) Date: Fri, 19 Feb 2016 10:16:49 +0300 Subject: RFR (M): 8149159: Clean up Unsafe In-Reply-To: <56C61A07.4010604@oracle.com> References: <56C61A07.4010604@oracle.com> Message-ID: <56C6C161.9030901@oracle.com> Hi Mikael, overall changes look good, the only thing I did not quite get is the renaming of methods in hotspot, like Unsafe_CopyMemory -> Unsafe_CopyMemory0 with all the following, I counted three, changes, do we really need this, cause from what I see, you have only changed the implementation of this method, but left its signature and usage unchanged, though maybe I just missed something. On 18/02/16 22:22, Mikael Vidstedt wrote: > > Please review the following change which does some relatively > significant cleaning up of the Unsafe implementation. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 > Webrev (hotspot): > http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ > Webrev (jdk): > http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ > > Summary: > > * To avoid code duplication sun.misc.Unsafe now delegates all work to > jdk.internal.misc.Unsafe. This also means that the VM - and unsafe.cpp > specifically - no longer needs to know or care about s.m.Unsafe. > * The s.m.Unsafe delegation methods have all been decorated with > @ForceInline to minimize the risk of performance regressions, though > it is highly likely that they will be inlined even without the > annotations. > * The documentation has been updated to reflect that it is the > responsibility of the user of Unsafe to make sure arguments are valid. 
> * The argument checking has, to the extent possible, been moved from > unsafe.cpp up to Java to simplify the native code and allow the JIT to > optimize it. > * Some of the argument checks have been relaxed. For example, the > recently introduced U.copySwapMemory does not check for null pointers > anymore. See docs for j.i.m.U.checkPointer for the complete reasoning > behind this. Note that the Unsafe methods today, apart from > U.copySwapMemory, do not perform the NULL related check(s). > * A test was added for j.i.m.U.copyMemory, based on U.copySwapMemory. > Feel free to point out that I should merge them (because I should). > > Also, unsafe.cpp was cleaned up rather dramatically. Some specific > highlights: > > * Unsafe_ functions are now declared static, as are the other > unsafe.cpp local functions. > * Created unsafe.hpp and moved some functions used in other parts of > the VM to that. Removed some "extern" function declarations (extern is > bad, kittens die when extern is (over-)used). > * Remove scary looking comment about UNSAFE_LEAF not being possible to > use - there's nothing special about it, it's just a JVM_LEAF. > * Used UNSAFE_LEAF for a few simple leaf methods > * Added helpful braces around UNSAFE_ENTRY/UNSAFE_END to help auto-indent > * Removed unused Unsafe_<...>##140 functions/macros > * Updated macro argument names to be consistent throughout unsafe.cpp > macro definitions > * Replaced some checks with asserts - as per above the checks are now > performed in j.i.m.Unsafe instead. > * Removed all the s.m.Unsafe related code > > > Testing: > > * jtreg: hotspot_jprt group, jdk/internal > * JPRT: hotspot testset > * Perf: JMH unsafe-bench.jar (no significant changes) > > I'm taking suggestions on additional things to test. > > Cheers, > Mikael > -- Best regards, Stanislav From chris.hegarty at oracle.com Fri Feb 19 07:20:40 2016 From: chris.hegarty at oracle.com (Chris Hegarty) Date: Fri, 19 Feb 2016 07:20:40 +0000 Subject: RFR (M): 8149159: Clean up Unsafe In-Reply-To: <56C61A07.4010604@oracle.com> References: <56C61A07.4010604@oracle.com> Message-ID: Thanks for doing this Mikael. The removal of explicit knowledge of sun.misc.Unsafe from the VM, and the delegation to the ?real? Unsafe is a nice cleanup ( arguably could have been done this way originally ;-) ). Making it clear that argument checking is the callers responsibility seems like the right thing to do. -Chris. On 18 Feb 2016, at 19:22, Mikael Vidstedt wrote: > > Please review the following change which does some relatively significant cleaning up of the Unsafe implementation. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 > Webrev (hotspot): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ > Webrev (jdk): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ > > Summary: > > * To avoid code duplication sun.misc.Unsafe now delegates all work to jdk.internal.misc.Unsafe. This also means that the VM - and unsafe.cpp specifically - no longer needs to know or care about s.m.Unsafe. > * The s.m.Unsafe delegation methods have all been decorated with @ForceInline to minimize the risk of performance regressions, though it is highly likely that they will be inlined even without the annotations. > * The documentation has been updated to reflect that it is the responsibility of the user of Unsafe to make sure arguments are valid. 
> * The argument checking has, to the extent possible, been moved from unsafe.cpp up to Java to simplify the native code and allow the JIT to optimize it. > * Some of the argument checks have been relaxed. For example, the recently introduced U.copySwapMemory does not check for null pointers anymore. See docs for j.i.m.U.checkPointer for the complete reasoning behind this. Note that the Unsafe methods today, apart from U.copySwapMemory, do not perform the NULL related check(s). > * A test was added for j.i.m.U.copyMemory, based on U.copySwapMemory. Feel free to point out that I should merge them (because I should). > > Also, unsafe.cpp was cleaned up rather dramatically. Some specific highlights: > > * Unsafe_ functions are now declared static, as are the other unsafe.cpp local functions. > * Created unsafe.hpp and moved some functions used in other parts of the VM to that. Removed some "extern" function declarations (extern is bad, kittens die when extern is (over-)used). > * Remove scary looking comment about UNSAFE_LEAF not being possible to use - there's nothing special about it, it's just a JVM_LEAF. > * Used UNSAFE_LEAF for a few simple leaf methods > * Added helpful braces around UNSAFE_ENTRY/UNSAFE_END to help auto-indent > * Removed unused Unsafe_<...>##140 functions/macros > * Updated macro argument names to be consistent throughout unsafe.cpp macro definitions > * Replaced some checks with asserts - as per above the checks are now performed in j.i.m.Unsafe instead. > * Removed all the s.m.Unsafe related code > > > Testing: > > * jtreg: hotspot_jprt group, jdk/internal > * JPRT: hotspot testset > * Perf: JMH unsafe-bench.jar (no significant changes) > > I'm taking suggestions on additional things to test. > > Cheers, > Mikael > From stanislav.smirnov at oracle.com Fri Feb 19 07:21:54 2016 From: stanislav.smirnov at oracle.com (Stas Smirnov) Date: Fri, 19 Feb 2016 10:21:54 +0300 Subject: RFR (M): 8149159: Clean up Unsafe In-Reply-To: <56C61A07.4010604@oracle.com> References: <56C61A07.4010604@oracle.com> Message-ID: <56C6C292.7080007@oracle.com> Hi Mikael, the changes look good, the only thing I did not quite get in hotspot is the renaming of methods like Unsafe_CopyMemory -> Unsafe_CopyMemory0 with all the following, I counted three, changes, is it really required, because from what I noticed you just changed the methods implementation, but left the signature and usage unchanged, though I might missed something. On 18/02/16 22:22, Mikael Vidstedt wrote: > > Please review the following change which does some relatively > significant cleaning up of the Unsafe implementation. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 > Webrev (hotspot): > http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ > Webrev (jdk): > http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ > > Summary: > > * To avoid code duplication sun.misc.Unsafe now delegates all work to > jdk.internal.misc.Unsafe. This also means that the VM - and unsafe.cpp > specifically - no longer needs to know or care about s.m.Unsafe. > * The s.m.Unsafe delegation methods have all been decorated with > @ForceInline to minimize the risk of performance regressions, though > it is highly likely that they will be inlined even without the > annotations. > * The documentation has been updated to reflect that it is the > responsibility of the user of Unsafe to make sure arguments are valid. 
> * The argument checking has, to the extent possible, been moved from > unsafe.cpp up to Java to simplify the native code and allow the JIT to > optimize it. > * Some of the argument checks have been relaxed. For example, the > recently introduced U.copySwapMemory does not check for null pointers > anymore. See docs for j.i.m.U.checkPointer for the complete reasoning > behind this. Note that the Unsafe methods today, apart from > U.copySwapMemory, do not perform the NULL related check(s). > * A test was added for j.i.m.U.copyMemory, based on U.copySwapMemory. > Feel free to point out that I should merge them (because I should). > > Also, unsafe.cpp was cleaned up rather dramatically. Some specific > highlights: > > * Unsafe_ functions are now declared static, as are the other > unsafe.cpp local functions. > * Created unsafe.hpp and moved some functions used in other parts of > the VM to that. Removed some "extern" function declarations (extern is > bad, kittens die when extern is (over-)used). > * Remove scary looking comment about UNSAFE_LEAF not being possible to > use - there's nothing special about it, it's just a JVM_LEAF. > * Used UNSAFE_LEAF for a few simple leaf methods > * Added helpful braces around UNSAFE_ENTRY/UNSAFE_END to help auto-indent > * Removed unused Unsafe_<...>##140 functions/macros > * Updated macro argument names to be consistent throughout unsafe.cpp > macro definitions > * Replaced some checks with asserts - as per above the checks are now > performed in j.i.m.Unsafe instead. > * Removed all the s.m.Unsafe related code > > > Testing: > > * jtreg: hotspot_jprt group, jdk/internal > * JPRT: hotspot testset > * Perf: JMH unsafe-bench.jar (no significant changes) > > I'm taking suggestions on additional things to test. > > Cheers, > Mikael > -- Best regards, Stanislav From aph at redhat.com Fri Feb 19 09:41:58 2016 From: aph at redhat.com (Andrew Haley) Date: Fri, 19 Feb 2016 09:41:58 +0000 Subject: RFR (M): 8149159: Clean up Unsafe In-Reply-To: <56C61A07.4010604@oracle.com> References: <56C61A07.4010604@oracle.com> Message-ID: <56C6E366.4020506@redhat.com> On 18/02/16 19:22, Mikael Vidstedt wrote: > Please review the following change which does some relatively > significant cleaning up of the Unsafe implementation. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 > Webrev (hotspot): > http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ > Webrev (jdk): > http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ That really is a nice cleanup. Unsafe really needed it. Thanks. Andrew. From paul.sandoz at oracle.com Fri Feb 19 10:00:28 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Fri, 19 Feb 2016 11:00:28 +0100 Subject: RFR (M): 8149159: Clean up Unsafe In-Reply-To: <56C61A07.4010604@oracle.com> References: <56C61A07.4010604@oracle.com> Message-ID: <00D2DF3F-0040-439E-962F-E375DAFD451C@oracle.com> Hi Mikael, Very nice cleanup. That will help immensly when creating the ?unsupported? module. +1 to a follow up nuking the single addressing accessor intrinsics. That may also clear up some confusion with have in the docs, and much can follow from unifying on/off heap access with the double-addressing mode [*]. If you are paranoid about test failures then I suggest you do another JPRT run using the core testset. Paul. 
[*] I may camp outside the nio developers offices :-) > On 18 Feb 2016, at 20:22, Mikael Vidstedt wrote: > > > Please review the following change which does some relatively significant cleaning up of the Unsafe implementation. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 > Webrev (hotspot): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ > Webrev (jdk): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ > > Summary: > > * To avoid code duplication sun.misc.Unsafe now delegates all work to jdk.internal.misc.Unsafe. This also means that the VM - and unsafe.cpp specifically - no longer needs to know or care about s.m.Unsafe. > * The s.m.Unsafe delegation methods have all been decorated with @ForceInline to minimize the risk of performance regressions, though it is highly likely that they will be inlined even without the annotations. > * The documentation has been updated to reflect that it is the responsibility of the user of Unsafe to make sure arguments are valid. > * The argument checking has, to the extent possible, been moved from unsafe.cpp up to Java to simplify the native code and allow the JIT to optimize it. > * Some of the argument checks have been relaxed. For example, the recently introduced U.copySwapMemory does not check for null pointers anymore. See docs for j.i.m.U.checkPointer for the complete reasoning behind this. Note that the Unsafe methods today, apart from U.copySwapMemory, do not perform the NULL related check(s). > * A test was added for j.i.m.U.copyMemory, based on U.copySwapMemory. Feel free to point out that I should merge them (because I should). > > Also, unsafe.cpp was cleaned up rather dramatically. Some specific highlights: > > * Unsafe_ functions are now declared static, as are the other unsafe.cpp local functions. > * Created unsafe.hpp and moved some functions used in other parts of the VM to that. Removed some "extern" function declarations (extern is bad, kittens die when extern is (over-)used). > * Remove scary looking comment about UNSAFE_LEAF not being possible to use - there's nothing special about it, it's just a JVM_LEAF. > * Used UNSAFE_LEAF for a few simple leaf methods > * Added helpful braces around UNSAFE_ENTRY/UNSAFE_END to help auto-indent > * Removed unused Unsafe_<...>##140 functions/macros > * Updated macro argument names to be consistent throughout unsafe.cpp macro definitions > * Replaced some checks with asserts - as per above the checks are now performed in j.i.m.Unsafe instead. > * Removed all the s.m.Unsafe related code > > > Testing: > > * jtreg: hotspot_jprt group, jdk/internal > * JPRT: hotspot testset > * Perf: JMH unsafe-bench.jar (no significant changes) > > I'm taking suggestions on additional things to test. > > Cheers, > Mikael > From christian.thalinger at oracle.com Fri Feb 19 18:18:44 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Fri, 19 Feb 2016 08:18:44 -1000 Subject: Add instrumentation in the TemplateInterpreter In-Reply-To: References: <746B7FA9-950F-4F65-88F3-C7C39894F7E9@oracle.com> Message-ID: <305FB2D6-B0E4-4FC6-9A76-6F0B24A5725D@oracle.com> > On Feb 18, 2016, at 1:43 PM, Khanh Nguyen wrote: > > Yes, you are correct. This is for a research project. We try to remember those references so that later we can do something similar to a GC to update the references. The base assumption is that we only have a small number of these kind of object so the cost should be acceptable. 
> > A side reason for staying with the TemplateInterpreter is that I can't convince my team to switch to BytecodeInterpreter in ZeroShark. The unknown performance of ZeroShark is partially responsible for my unability to convince them. > > Are you running your experiments interpreted only or in a tiered environment with C1/C2? > On Feb 18, 2016 2:50 PM, "Christian Thalinger" > wrote: > >> On Feb 18, 2016, at 9:30 AM, Khanh Nguyen > wrote: >> >> The main reason is the performance difference between the TemplateInterpreter and the BytecodeInterpreter in Zero. >> I did not verify the difference but I found from this mailing list that the difference is 10x. >> >> > > But the instrumentation you are adding is quite expensive and I?m assuming this is for research or academia? > >> And since we are talking about Zero. How much is the performance difference between ZeroShark and the standard Hotspot, do you by any chance know? >> >> Thanks >> >> On Feb 18, 2016 11:14 AM, "Christian Thalinger" > wrote: >> Can you share the reason? >> >>> On Feb 18, 2016, at 8:01 AM, Khanh Nguyen > wrote: >>> >>> Unfortunately it has to be the Template Interpreter. >>> >>> On Feb 18, 2016 9:27 AM, "Christian Thalinger" > wrote: >>> Does it have to be the template interpreter or could you do your work with Zero as well? >>> >>> > On Feb 9, 2016, at 11:19 PM, Khanh Nguyen > wrote: >>> > >>> > Hello, >>> > >>> > I want to add instrumentation to monitor all reads and writes in the >>> > TemplateInterpreter, I think I got the correct place for it in >>> > /cpu/x86/vm/templateTable_x86_64.cpp. Can someone please tell me if I'm >>> > doing it right? >>> > >>> > For writes: >>> > static void do_oop_store(InterpreterMacroAssembler* _masm, >>> > Address obj, >>> > Register val, >>> > BarrierSet::Name barrier, >>> > bool precise) { >>> > [...] >>> > case BarrierSet::CardTableModRef: >>> > case BarrierSet::CardTableExtension: >>> > { >>> > if (val == noreg) { >>> > __ store_heap_oop_null(obj); >>> > } else { >>> > __ store_heap_oop(obj, val); >>> > >>> > /*mycodeA*/ __ movptr(c_rarg1, obj.base()); // save this value otherwise >>> > it will be changed? >>> > >>> > // flatten object address if needed >>> > if (!precise || (obj.index() == noreg && obj.disp() == 0)) { >>> > __ store_check(obj.base()); >>> > /*mycodeB*/ __ call_VM(noreg, //void >>> > CAST_FROM_FN_PTR(address, >>> > InterpreterRuntime::write_helper), >>> > c_rarg1, // obj >>> > c_rarg1, // field address because store check is >>> > called on field address >>> > val); >>> > } else { >>> > __ leaq(rdx, obj); >>> > __ store_check(rdx); >>> > /*mycodeC*/ __ call_VM(noreg, //void >>> > CAST_FROM_FN_PTR(address, >>> > InterpreterRuntime::write_helper), >>> > c_rarg1, // obj >>> > rdx, // field address, because store check is >>> > called on field address >>> > val); >>> > } >>> > } >>> > break; >>> > >>> > For reads: >>> > case Bytecodes::_fast_agetfield: >>> > __ load_heap_oop(rax, field); >>> > >>> > /*mycodeD*/ __ call_VM(noreg, >>> > CAST_FROM_FN_PTR(address, >>> > InterpreterRuntime::read_barrier_helper), >>> > rax); >>> > >>> > __ verify_oop(rax); >>> > break; >>> > >>> > My questions are: >>> > >>> > 1) I thought this represents a putfield a.f=b where a.f is represented by >>> > the parameter obj of type Address. b is obvious the parameter val of type >>> > Register. Especially in obj there are fields: base, index and disp. But as >>> > I run this, looks like obj is actually the field address. 
(the case mycodeB) >>> > I haven't found a test case that can trigger the case mycodeC to see the >>> > behavior (i.e., rdx might get destroyed and I got random value back or >>> > c_rarg1 is the obj address and rdx is field address) >>> > >>> > 2) Before this, I tried to insert the same __ call_VM in fast_aputfield >>> > before do_oop_store but it results in JVM crash. I don't understand the >>> > reason why. What I did in the call is just print the parameters. I did see >>> > the values printed (only the 1st time it goes to the method) but then the >>> > VM crashed. I thought __ call_VM will preserve all registers's value and >>> > restore properly when comes back. My instrumentation has no side effect, I >>> > just observe and record the values (actually just printing the values to >>> > test). >>> > >>> > 3) Is it strictly required to have the line /*mycodeA*/ I tried to, in >>> > mycodeB line, pass obj.base() twice and it got build errors for "smashed >>> > args"? >>> > >>> > I greatly appreciate your time, >>> > >>> > Best, >>> > >>> > Khanh Nguyen From christian.thalinger at oracle.com Fri Feb 19 18:34:22 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Fri, 19 Feb 2016 08:34:22 -1000 Subject: RFR (M): 8149159: Clean up Unsafe In-Reply-To: <56C61A07.4010604@oracle.com> References: <56C61A07.4010604@oracle.com> Message-ID: <920A8C1F-2758-4DF2-A337-AFD47F5A4B35@oracle.com> ! UNSAFE_ENTRY(jobject, Unsafe_GetObject(JNIEnv *env, jobject unsafe, jobject obj, jlong offset)) { UnsafeWrapper("Unsafe_GetObject?); Could UnsafeWrapper be part of the UNSAFE_ENTRY? I mean, it?s empty anyway: #define UnsafeWrapper(arg) /*nothing, for the present*/ > On Feb 18, 2016, at 9:22 AM, Mikael Vidstedt wrote: > > > Please review the following change which does some relatively significant cleaning up of the Unsafe implementation. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 > Webrev (hotspot): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ > Webrev (jdk): http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ > > Summary: > > * To avoid code duplication sun.misc.Unsafe now delegates all work to jdk.internal.misc.Unsafe. This also means that the VM - and unsafe.cpp specifically - no longer needs to know or care about s.m.Unsafe. > * The s.m.Unsafe delegation methods have all been decorated with @ForceInline to minimize the risk of performance regressions, though it is highly likely that they will be inlined even without the annotations. > * The documentation has been updated to reflect that it is the responsibility of the user of Unsafe to make sure arguments are valid. > * The argument checking has, to the extent possible, been moved from unsafe.cpp up to Java to simplify the native code and allow the JIT to optimize it. > * Some of the argument checks have been relaxed. For example, the recently introduced U.copySwapMemory does not check for null pointers anymore. See docs for j.i.m.U.checkPointer for the complete reasoning behind this. Note that the Unsafe methods today, apart from U.copySwapMemory, do not perform the NULL related check(s). > * A test was added for j.i.m.U.copyMemory, based on U.copySwapMemory. Feel free to point out that I should merge them (because I should). > > Also, unsafe.cpp was cleaned up rather dramatically. Some specific highlights: > > * Unsafe_ functions are now declared static, as are the other unsafe.cpp local functions. 
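A toy illustration, not the real unsafe.cpp macros, of the folding Christian suggests above: let the entry macro emit the per-function hook itself so the individual entry points stop repeating it. MY_ENTRY, MY_END and TRACE_HOOK are invented names for this sketch; in unsafe.cpp the hook (UnsafeWrapper) is currently empty.

#include <cstdio>

// Stand-in for UnsafeWrapper; it prints here only so the expansion is visible.
#define TRACE_HOOK(name) std::printf("enter %s\n", name);

// The entry macro opens the function body and emits the hook in one place.
#define MY_ENTRY(result_type, name, params) \
  static result_type name params {          \
    TRACE_HOOK(#name)

#define MY_END }

MY_ENTRY(int, add_one, (int x))
  return x + 1;
MY_END

int main() {
  std::printf("%d\n", add_one(41));   // prints "enter add_one" then "42"
  return 0;
}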
> * Created unsafe.hpp and moved some functions used in other parts of the VM to that. Removed some "extern" function declarations (extern is bad, kittens die when extern is (over-)used). > * Remove scary looking comment about UNSAFE_LEAF not being possible to use - there's nothing special about it, it's just a JVM_LEAF. > * Used UNSAFE_LEAF for a few simple leaf methods > * Added helpful braces around UNSAFE_ENTRY/UNSAFE_END to help auto-indent > * Removed unused Unsafe_<...>##140 functions/macros > * Updated macro argument names to be consistent throughout unsafe.cpp macro definitions > * Replaced some checks with asserts - as per above the checks are now performed in j.i.m.Unsafe instead. > * Removed all the s.m.Unsafe related code > > > Testing: > > * jtreg: hotspot_jprt group, jdk/internal > * JPRT: hotspot testset > * Perf: JMH unsafe-bench.jar (no significant changes) > > I'm taking suggestions on additional things to test. > > Cheers, > Mikael > From varming at gmail.com Fri Feb 19 18:52:10 2016 From: varming at gmail.com (Carsten Varming) Date: Fri, 19 Feb 2016 13:52:10 -0500 Subject: Fwd: RFR 8150013: ParNew: Prune nmethods scavengable list In-Reply-To: References: Message-ID: Dear Hotspot developers, I would like to contribute a patch for JDK-8150013 . The current webrev can be found here: http://cr.openjdk.java.net/~cvarming/scavenge_nmethods_auto_prune/2/. Suggestions for improvements are very welcome. Carsten From coleen.phillimore at oracle.com Fri Feb 19 18:55:25 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 19 Feb 2016 13:55:25 -0500 Subject: Add instrumentation in the TemplateInterpreter In-Reply-To: References: Message-ID: <56C7651D.7020404@oracle.com> Hi, I think what you want to do is look for the calls to InterpreterRuntime::post_field_access() and add your own call at these places (and change the conditionals from checking if jvmti is on). These are called when it's safe to call into the jvm. Hacking do_oop_store() is pretty ugly, I'd suggest also piggybacking on post_field_modification and add some call to aastore(). The Zero interpreter is slower but works. Configure with --with-jvm-variants=zero --with-target-bits=64 --disable-warnings-as-errors. Maybe you don't need the last one anymore, and install libffi where it wants to be installed. Nobody knows if Shark compiler can be built anymore. Thanks, Coleen On 2/10/16 4:19 AM, Khanh Nguyen wrote: > Hello, > > I want to add instrumentation to monitor all reads and writes in the > TemplateInterpreter, I think I got the correct place for it in > /cpu/x86/vm/templateTable_x86_64.cpp. Can someone please tell me if I'm > doing it right? > > For writes: > static void do_oop_store(InterpreterMacroAssembler* _masm, > Address obj, > Register val, > BarrierSet::Name barrier, > bool precise) { > [...] > case BarrierSet::CardTableModRef: > case BarrierSet::CardTableExtension: > { > if (val == noreg) { > __ store_heap_oop_null(obj); > } else { > __ store_heap_oop(obj, val); > > /*mycodeA*/ __ movptr(c_rarg1, obj.base()); // save this value otherwise > it will be changed? 
> > // flatten object address if needed > if (!precise || (obj.index() == noreg && obj.disp() == 0)) { > __ store_check(obj.base()); > /*mycodeB*/ __ call_VM(noreg, //void > CAST_FROM_FN_PTR(address, > InterpreterRuntime::write_helper), > c_rarg1, // obj > c_rarg1, // field address because store check is > called on field address > val); > } else { > __ leaq(rdx, obj); > __ store_check(rdx); > /*mycodeC*/ __ call_VM(noreg, //void > CAST_FROM_FN_PTR(address, > InterpreterRuntime::write_helper), > c_rarg1, // obj > rdx, // field address, because store check is > called on field address > val); > } > } > break; > > For reads: > case Bytecodes::_fast_agetfield: > __ load_heap_oop(rax, field); > > /*mycodeD*/ __ call_VM(noreg, > CAST_FROM_FN_PTR(address, > InterpreterRuntime::read_barrier_helper), > rax); > > __ verify_oop(rax); > break; > > My questions are: > > 1) I thought this represents a putfield a.f=b where a.f is represented by > the parameter obj of type Address. b is obvious the parameter val of type > Register. Especially in obj there are fields: base, index and disp. But as > I run this, looks like obj is actually the field address. (the case mycodeB) > I haven't found a test case that can trigger the case mycodeC to see the > behavior (i.e., rdx might get destroyed and I got random value back or > c_rarg1 is the obj address and rdx is field address) > > 2) Before this, I tried to insert the same __ call_VM in fast_aputfield > before do_oop_store but it results in JVM crash. I don't understand the > reason why. What I did in the call is just print the parameters. I did see > the values printed (only the 1st time it goes to the method) but then the > VM crashed. I thought __ call_VM will preserve all registers's value and > restore properly when comes back. My instrumentation has no side effect, I > just observe and record the values (actually just printing the values to > test). > > 3) Is it strictly required to have the line /*mycodeA*/ I tried to, in > mycodeB line, pass obj.base() twice and it got build errors for "smashed > args"? > > I greatly appreciate your time, > > Best, > > Khanh Nguyen From claes.redestad at oracle.com Fri Feb 19 20:05:18 2016 From: claes.redestad at oracle.com (Claes Redestad) Date: Fri, 19 Feb 2016 21:05:18 +0100 Subject: RFR (M): 8149159: Clean up Unsafe In-Reply-To: <56C61A07.4010604@oracle.com> References: <56C61A07.4010604@oracle.com> Message-ID: <56C7757E.4070806@oracle.com> Good stuff! But quite a few delegating methods in sun.misc.Unsafe did not get the @ForceInline treatment, which seems like an oversight? Thanks! /Claes On 2016-02-18 20:22, Mikael Vidstedt wrote: > > Please review the following change which does some relatively > significant cleaning up of the Unsafe implementation. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149159 > Webrev (hotspot): > http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/hotspot/webrev.00/webrev/ > Webrev (jdk): > http://cr.openjdk.java.net/~mikael/webrevs/8149159_unsafecleanup/jdk/webrev.00/webrev/ > > Summary: > > * To avoid code duplication sun.misc.Unsafe now delegates all work to > jdk.internal.misc.Unsafe. This also means that the VM - and unsafe.cpp > specifically - no longer needs to know or care about s.m.Unsafe. > * The s.m.Unsafe delegation methods have all been decorated with > @ForceInline to minimize the risk of performance regressions, though > it is highly likely that they will be inlined even without the > annotations. 
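A minimal sketch of the delegation pattern that bullet describes: sun.misc.Unsafe methods annotated @ForceInline and forwarding straight to jdk.internal.misc.Unsafe, with no argument checks of their own. This only shows the shape, not the webrev code; the annotation import is assumed from later JDK 9 builds, and compiling it outside java.base needs the internal packages exported.

import jdk.internal.vm.annotation.ForceInline;

public final class Unsafe {   // stand-in for sun.misc.Unsafe
    private static final jdk.internal.misc.Unsafe theInternalUnsafe =
            jdk.internal.misc.Unsafe.getUnsafe();

    @ForceInline
    public int getInt(Object o, long offset) {
        return theInternalUnsafe.getInt(o, offset);   // pure delegation, no checks
    }

    @ForceInline
    public void putInt(Object o, long offset, int x) {
        theInternalUnsafe.putInt(o, offset, x);
    }
}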
> * The documentation has been updated to reflect that it is the > responsibility of the user of Unsafe to make sure arguments are valid. > * The argument checking has, to the extent possible, been moved from > unsafe.cpp up to Java to simplify the native code and allow the JIT to > optimize it. > * Some of the argument checks have been relaxed. For example, the > recently introduced U.copySwapMemory does not check for null pointers > anymore. See docs for j.i.m.U.checkPointer for the complete reasoning > behind this. Note that the Unsafe methods today, apart from > U.copySwapMemory, do not perform the NULL related check(s). > * A test was added for j.i.m.U.copyMemory, based on U.copySwapMemory. > Feel free to point out that I should merge them (because I should). > > Also, unsafe.cpp was cleaned up rather dramatically. Some specific > highlights: > > * Unsafe_ functions are now declared static, as are the other > unsafe.cpp local functions. > * Created unsafe.hpp and moved some functions used in other parts of > the VM to that. Removed some "extern" function declarations (extern is > bad, kittens die when extern is (over-)used). > * Remove scary looking comment about UNSAFE_LEAF not being possible to > use - there's nothing special about it, it's just a JVM_LEAF. > * Used UNSAFE_LEAF for a few simple leaf methods > * Added helpful braces around UNSAFE_ENTRY/UNSAFE_END to help auto-indent > * Removed unused Unsafe_<...>##140 functions/macros > * Updated macro argument names to be consistent throughout unsafe.cpp > macro definitions > * Replaced some checks with asserts - as per above the checks are now > performed in j.i.m.Unsafe instead. > * Removed all the s.m.Unsafe related code > > > Testing: > > * jtreg: hotspot_jprt group, jdk/internal > * JPRT: hotspot testset > * Perf: JMH unsafe-bench.jar (no significant changes) > > I'm taking suggestions on additional things to test. > > Cheers, > Mikael > From ktruong.nguyen at gmail.com Fri Feb 19 20:13:53 2016 From: ktruong.nguyen at gmail.com (Khanh Nguyen) Date: Fri, 19 Feb 2016 12:13:53 -0800 Subject: Add instrumentation in the TemplateInterpreter In-Reply-To: <305FB2D6-B0E4-4FC6-9A76-6F0B24A5725D@oracle.com> References: <746B7FA9-950F-4F65-88F3-C7C39894F7E9@oracle.com> <305FB2D6-B0E4-4FC6-9A76-6F0B24A5725D@oracle.com> Message-ID: I am doing interpreter-only mode currently. Instrumentation for C1/C2, I think can be done by modifying the file oops/oop.inline.hpp::oop_store() On Feb 19, 2016 10:18 AM, "Christian Thalinger" < christian.thalinger at oracle.com> wrote: > > On Feb 18, 2016, at 1:43 PM, Khanh Nguyen > wrote: > > Yes, you are correct. This is for a research project. We try to remember > those references so that later we can do something similar to a GC to > update the references. The base assumption is that we only have a small > number of these kind of object so the cost should be acceptable. > > A side reason for staying with the TemplateInterpreter is that I can't > convince my team to switch to BytecodeInterpreter in ZeroShark. The unknown > performance of ZeroShark is partially responsible for my unability to > convince them. > > > Are you running your experiments interpreted only or in a tiered > environment with C1/C2? > > On Feb 18, 2016 2:50 PM, "Christian Thalinger" < > christian.thalinger at oracle.com> wrote: > >> >> On Feb 18, 2016, at 9:30 AM, Khanh Nguyen >> wrote: >> >> The main reason is the performance difference between the >> TemplateInterpreter and the BytecodeInterpreter in Zero. 
>> I did not verify the difference but I found from this mailing list that >> the difference is 10x. >> >> >> But the instrumentation you are adding is quite expensive and I?m >> assuming this is for research or academia? >> >> And since we are talking about Zero. How much is the performance >> difference between ZeroShark and the standard Hotspot, do you by any chance >> know? >> >> Thanks >> On Feb 18, 2016 11:14 AM, "Christian Thalinger" < >> christian.thalinger at oracle.com> wrote: >> >>> Can you share the reason? >>> >>> On Feb 18, 2016, at 8:01 AM, Khanh Nguyen >>> wrote: >>> >>> Unfortunately it has to be the Template Interpreter. >>> On Feb 18, 2016 9:27 AM, "Christian Thalinger" < >>> christian.thalinger at oracle.com> wrote: >>> >>>> Does it have to be the template interpreter or could you do your work >>>> with Zero as well? >>>> >>>> > On Feb 9, 2016, at 11:19 PM, Khanh Nguyen >>>> wrote: >>>> > >>>> > Hello, >>>> > >>>> > I want to add instrumentation to monitor all reads and writes in the >>>> > TemplateInterpreter, I think I got the correct place for it in >>>> > /cpu/x86/vm/templateTable_x86_64.cpp. Can someone please tell me if >>>> I'm >>>> > doing it right? >>>> > >>>> > For writes: >>>> > static void do_oop_store(InterpreterMacroAssembler* _masm, >>>> > Address obj, >>>> > Register val, >>>> > BarrierSet::Name barrier, >>>> > bool precise) { >>>> > [...] >>>> > case BarrierSet::CardTableModRef: >>>> > case BarrierSet::CardTableExtension: >>>> > { >>>> > if (val == noreg) { >>>> > __ store_heap_oop_null(obj); >>>> > } else { >>>> > __ store_heap_oop(obj, val); >>>> > >>>> > /*mycodeA*/ __ movptr(c_rarg1, obj.base()); // save this value >>>> otherwise >>>> > it will be changed? >>>> > >>>> > // flatten object address if needed >>>> > if (!precise || (obj.index() == noreg && obj.disp() == 0)) { >>>> > __ store_check(obj.base()); >>>> > /*mycodeB*/ __ call_VM(noreg, //void >>>> > CAST_FROM_FN_PTR(address, >>>> > >>>> InterpreterRuntime::write_helper), >>>> > c_rarg1, // obj >>>> > c_rarg1, // field address because store check is >>>> > called on field address >>>> > val); >>>> > } else { >>>> > __ leaq(rdx, obj); >>>> > __ store_check(rdx); >>>> > /*mycodeC*/ __ call_VM(noreg, //void >>>> > CAST_FROM_FN_PTR(address, >>>> > >>>> InterpreterRuntime::write_helper), >>>> > c_rarg1, // obj >>>> > rdx, // field address, because store check is >>>> > called on field address >>>> > val); >>>> > } >>>> > } >>>> > break; >>>> > >>>> > For reads: >>>> > case Bytecodes::_fast_agetfield: >>>> > __ load_heap_oop(rax, field); >>>> > >>>> > /*mycodeD*/ __ call_VM(noreg, >>>> > CAST_FROM_FN_PTR(address, >>>> > >>>> InterpreterRuntime::read_barrier_helper), >>>> > rax); >>>> > >>>> > __ verify_oop(rax); >>>> > break; >>>> > >>>> > My questions are: >>>> > >>>> > 1) I thought this represents a putfield a.f=b where a.f is >>>> represented by >>>> > the parameter obj of type Address. b is obvious the parameter val of >>>> type >>>> > Register. Especially in obj there are fields: base, index and disp. >>>> But as >>>> > I run this, looks like obj is actually the field address. (the case >>>> mycodeB) >>>> > I haven't found a test case that can trigger the case mycodeC to see >>>> the >>>> > behavior (i.e., rdx might get destroyed and I got random value back or >>>> > c_rarg1 is the obj address and rdx is field address) >>>> > >>>> > 2) Before this, I tried to insert the same __ call_VM in >>>> fast_aputfield >>>> > before do_oop_store but it results in JVM crash. 
I don't understand >>>> the >>>> > reason why. What I did in the call is just print the parameters. I >>>> did see >>>> > the values printed (only the 1st time it goes to the method) but then >>>> the >>>> > VM crashed. I thought __ call_VM will preserve all registers's value >>>> and >>>> > restore properly when comes back. My instrumentation has no side >>>> effect, I >>>> > just observe and record the values (actually just printing the values >>>> to >>>> > test). >>>> > >>>> > 3) Is it strictly required to have the line /*mycodeA*/ I tried to, in >>>> > mycodeB line, pass obj.base() twice and it got build errors for >>>> "smashed >>>> > args"? >>>> > >>>> > I greatly appreciate your time, >>>> > >>>> > Best, >>>> > >>>> > Khanh Nguyen >>> >>> > From gromero at linux.vnet.ibm.com Fri Feb 19 21:35:19 2016 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Fri, 19 Feb 2016 19:35:19 -0200 Subject: RTM disabled for Linux on PPC64 LE In-Reply-To: <82585848434d4624ae08ccacac542a17@DEWDFE13DE14.global.corp.sap> References: <56BDE1EF.1020305@linux.vnet.ibm.com> <56C1DF2E.8070603@linux.vnet.ibm.com> <82585848434d4624ae08ccacac542a17@DEWDFE13DE14.global.corp.sap> Message-ID: <56C78A97.8080908@linux.vnet.ibm.com> Hi Martin, I can't afford the PECjbb2005 by now, since it's paid. Instead I'm using the SPECjvm2008 suite. Thanks for bringing up the problem on C2's scratch buffer. Indeed, I've got a core dump when I combined +UseRTMLocking, +UseRTMForStackLocks, and +UseRTMDeopt (http://goo.gl/Sc5Ekp). I've experimented a little with the MAX_inst_size value and found that at least doubling it is sufficient to solve the problem: # HG changeset patch # User gromero # Date 1455916590 7200 # Fri Feb 19 19:16:30 2016 -0200 # Node ID 721c2e526fa7ee5e46b0ab7219e2acac90c4239b # Parent a83242700c91e294886d23c89061c1916682836c Fix C2 scratch buffer too small diff --git a/src/share/vm/opto/compile.hpp b/src/share/vm/opto/compile.hpp --- a/src/share/vm/opto/compile.hpp +++ b/src/share/vm/opto/compile.hpp @@ -1118,7 +1118,7 @@ bool in_scratch_emit_size() const { return _in_scratch_emit_size; } enum ScratchBufferBlob { - MAX_inst_size = 1024, + MAX_inst_size = 2048, MAX_locs_size = 128, // number of relocInfo elements MAX_const_size = 128, MAX_stubs_size = 128 Do you think we can fix it upstream and enable the RTM for Linux on ppc64le? Any guidelines on it? BTW, I'm still taking a deeper reflection on your comments about biased, RTM and classic locking. Best regards, -- Gustavo Romero On 16-02-2016 11:33, Doerr, Martin wrote: > Hi Gustavo, > > thanks for the information and for working on this topic. > > I have used SPEC jbb2005 to test and benchmark RTM on PPC64. It has worked even with the old linux kernel to some extent. > > There are currently the following problems: > The C2's scratch buffer seems to be too small if you enable all options: > -XX:+UnlockExperimentalVMOptions -XX:+UseRTMLocking -XX:+UseRTMForStackLocks -XX:+UseRTMDeopt > I guess we need to increase MAX_inst_size in ScratchBufferBlob (compile.hpp). I didn't have the time to try, yet. > > The following issue is important for performance work: > RTM does not work with BiasedLocking. The latter gets switched off if RTM is activated which has a large performance impact (especially in jbb2005). > I would disable it for a reference measurement: > -XX:-UseBiasedLocking > > Unfortunately, RTM was slower than BiasedLocking but faster than the reference (without both) which tells me that there's room for improvement. 
> There are basically 3 classes of locks: > 1. no contention > 2. contention on lock, low contention on data > 3. high contention on data > > I believe the optimal treatment for the cases would be: > 1. Biased Locking > 2. Transactional Memory > 3. classical locking with lock inflating > > I think it would be good if the JVM could optimize for all these cases in the future. But that would add additional complexity and code size. > > Best regards, > Martin > > > -----Original Message----- > From: Gustavo Romero [mailto:gromero at linux.vnet.ibm.com] > Sent: Montag, 15. Februar 2016 15:23 > To: Doerr, Martin ; hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net > Cc: Breno Leitao > Subject: Re: RTM disabled for Linux on PPC64 LE > > Hello Martin, > > Thank you for your reply. > > The problematic behavior of syscalls has been addressed since kernel 4.2 > (already present in, por instance, Ubuntu 15.10 and 16.04): > https://goo.gl/d80xAJ > > I'm taking a closer look at the RTM tests and I'll make additional > experiments as you suggested. > > So far I enabled RTM for Linux on ppc64le and there is no regression in > the RTM test suite. I'm using kernel 4.2.0. > > The following patch was applied to > http://hg.openjdk.java.net/jdk9/jdk9/hotspot, 5d17092b6917+ tip, and I > used the (major + minor) version to enable RTM as you said: > > # HG changeset patch > # User gromero > # Date 1455540780 7200 > # Mon Feb 15 10:53:00 2016 -0200 > # Node ID 0e9540f2156c4c4d7d8215eb89109ff81be82f58 > # Parent 5d17092b691720d71f06360fb0cc183fe2877faa > Enable RTM for Linux on PPC64 LE > > Enable RTM for Linux kernel version equal or above 4.2, since the > problematic behavior of performing a syscall from within transaction > which could lead to unpredictable results has been addressed. Please, > refer to https://goo.gl/fi4tjC > > diff --git a/src/cpu/ppc/vm/globalDefinitions_ppc.hpp b/src/cpu/ppc/vm/globalDefinitions_ppc.hpp > --- a/src/cpu/ppc/vm/globalDefinitions_ppc.hpp > +++ b/src/cpu/ppc/vm/globalDefinitions_ppc.hpp > @@ -52,4 +52,9 @@ > #define INCLUDE_RTM_OPT 1 > #endif > > +// Enable RTM experimental support for Linux. > +#if defined(COMPILER2) && defined(linux) > +#define INCLUDE_RTM_OPT 1 > +#endif > + > #endif // CPU_PPC_VM_GLOBALDEFINITIONS_PPC_HPP > diff --git a/src/cpu/ppc/vm/vm_version_ppc.cpp b/src/cpu/ppc/vm/vm_version_ppc.cpp > --- a/src/cpu/ppc/vm/vm_version_ppc.cpp > +++ b/src/cpu/ppc/vm/vm_version_ppc.cpp > @@ -255,7 +255,12 @@ > } > #endif > #ifdef linux > - // TODO: check kernel version (we currently have too old versions only) > + // At least Linux kernel 4.2, as the problematic behavior of syscalls > + // being called from within a transaction has been addressed. 
> + // Please, refer to commit 4b4fadba057c1af7689fc8fa182b13baL7 > + if (os::Linux::os_version() >= 0x040200) { > + os_too_old = false; > + } > #endif > if (os_too_old) { > vm_exit_during_initialization("RTM is not supported on this OS version."); > diff --git a/src/os/linux/vm/os_linux.cpp b/src/os/linux/vm/os_linux.cpp > --- a/src/os/linux/vm/os_linux.cpp > +++ b/src/os/linux/vm/os_linux.cpp > @@ -135,6 +135,7 @@ > int os::Linux::_page_size = -1; > const int os::Linux::_vm_default_page_size = (8 * K); > bool os::Linux::_supports_fast_thread_cpu_time = false; > +uint32_t os::Linux::_os_version = 0; > const char * os::Linux::_glibc_version = NULL; > const char * os::Linux::_libpthread_version = NULL; > pthread_condattr_t os::Linux::_condattr[1]; > @@ -4332,6 +4333,31 @@ > return (tp.tv_sec * NANOSECS_PER_SEC) + tp.tv_nsec; > } > > +void os::Linux::initialize_os_info() { > + assert(_os_version == 0, "OS info already initialized"); > + > + struct utsname _uname; > + > + uint32_t major; > + uint32_t minor; > + uint32_t fix; > + > + uname(&_uname); // Not sure yet how to bail out if ret == -1 > + sscanf(_uname.release,"%d.%d.%d", &major, > + &minor, > + &fix ); > + > + _os_version = (major << 16) | > + (minor << 8 ) | > + (fix << 0 ) ; > +} > + > +uint32_t os::Linux::os_version() { > + assert(_os_version != 0, "not initialized"); > + return _os_version; > +} > + > + > ///// > // glibc on Linux platform uses non-documented flag > // to indicate, that some special sort of signal > @@ -4552,6 +4578,8 @@ > } > init_page_sizes((size_t) Linux::page_size()); > > + Linux::initialize_os_info(); > + > Linux::initialize_system_info(); > > // main_thread points to the aboriginal thread > diff --git a/src/os/linux/vm/os_linux.hpp b/src/os/linux/vm/os_linux.hpp > --- a/src/os/linux/vm/os_linux.hpp > +++ b/src/os/linux/vm/os_linux.hpp > @@ -56,6 +56,12 @@ > > static GrowableArray* _cpu_to_node; > > + // Ox00AABBCC > + // AA, Major Version > + // BB, Minor Version > + // CC, Fix Version > + static uint32_t _os_version; > + > protected: > > static julong _physical_memory; > @@ -198,6 +204,9 @@ > > static jlong fast_thread_cpu_time(clockid_t clockid); > > + static void initialize_os_info(); > + static uint32_t os_version(); > + > // pthread_cond clock suppport > private: > static pthread_condattr_t _condattr[1]; > > Should I use any test suite besides the jtreg suite already present > in the Hotspot forest? > > > Best Regards, > Gustavo > > On 12-02-2016 12:52, Doerr, Martin wrote: >> Hi Gustavo, >> >> the reason why we disabled RTM for linux on PPC64 (big or little endian) was the problematic behavior of syscalls. >> The old version of the document >> www.kernel.org/doc/Documentation/powerpc/transactional_memory.txt >> said: >> ?Performing syscalls from within transaction is not recommended, and can lead to unpredictable results.? >> >> Transactions need to either pass completely or roll back completely without disturbing side effects of partially executed syscalls. >> We rely on the kernel to abort transactions if necessary. >> >> The document has changed and it may possibly work with a new linux kernel. >> However, we don't have such a new kernel, yet. So we can't test it at the moment. >> I don't know which kernel version exactly contains the change. I guess this exact version number (major + minor) should be used for enabling RTM. >> >> I haven't looked into the tests, yet. There may be a need for additional adaptations and fixes. >> >> We appreciate if you make experiments and/or contributions. 
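A standalone sketch of how the version packing in the patch above behaves, assuming a uname-style release string: the three numbers are packed as 0x00MMmmff, so "4.2.0" becomes 0x040200 and a plain integer compare expresses "at least kernel 4.2".

#include <cstdio>

static unsigned pack_release(const char* release) {
  unsigned major = 0, minor = 0, fix = 0;
  std::sscanf(release, "%u.%u.%u", &major, &minor, &fix);
  return (major << 16) | (minor << 8) | fix;
}

int main() {
  const char* samples[] = { "3.13.0-generic", "4.2.0", "4.4.0-31-generic" };
  for (int i = 0; i < 3; i++) {
    unsigned v = pack_release(samples[i]);
    std::printf("%-18s -> 0x%06x  RTM %s\n", samples[i], v,
                v >= 0x040200 ? "possible" : "kernel too old");
  }
  return 0;
}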
>> >> Thanks and best regards, >> Martin >> >> >> -----Original Message----- >> From: ppc-aix-port-dev [mailto:ppc-aix-port-dev-bounces at openjdk.java.net] On Behalf Of Gustavo Romero >> Sent: Freitag, 12. Februar 2016 14:45 >> To: hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net >> Subject: RTM disabled for Linux on PPC64 LE >> Importance: High >> >> Hi, >> As of now (tip 1922:be58b02c11f9, jdk9/jdk9 repo) Hotspot build for Linux on ppc64le of fails due to a simple uninitialized variable error: >> >> hotspot/src/share/vm/ci/ciMethodData.hpp:585:100: error: ?data? may be used uninitialized in this function >> hotspot/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp:2408:78: error: ?md? may be used uninitialized in this function >> >> So this straightforward patch solves the issue: >> diff -r 534c50395957 src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp >> --- a/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Thu Jan 28 15:42:23 2016 -0800 >> +++ b/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Mon Feb 08 17:13:14 2016 -0200 >> @@ -2321,8 +2321,8 @@ >> if (reg_conflict) { obj = dst; } >> } >> - ciMethodData* md; >> - ciProfileData* data; >> + ciMethodData* md = NULL; >> + ciProfileData* data = NULL; >> int mdo_offset_bias = 0; compiler/rtm >> if (should_profile) { >> ciMethod* method = op->profiled_method(); >> >> However, after the build, I realized that RTM is still disabled for Linux on ppc64le, failing 25 tests on compiler/rtm suite: >> >> http://hastebin.com/raw/ohoxiwaqih >> >> Hence after applying the following patches that enable RTM for Linux on ppc64le: >> >> diff -r 266fa9bb5297 src/cpu/ppc/vm/vm_version_ppc.cpp >> --- a/src/cpu/ppc/vm/vm_version_ppc.cpp Thu Feb 04 16:48:39 2016 -0800 >> +++ b/src/cpu/ppc/vm/vm_version_ppc.cpp Fri Feb 12 10:55:46 2016 -0200 >> @@ -255,7 +255,9 @@ >> } >> #endif >> #ifdef linux >> - // TODO: check kernel version (we currently have too old versions only) >> + if (os::Linux::os_version() >= 4) { // at least Linux kernel version 4 >> + os_too_old = false; >> + } >> #endif >> if (os_too_old) { >> vm_exit_during_initialization("RTM is not supported on this OS version."); >> >> >> diff -r 266fa9bb5297 src/os/linux/vm/os_linux.cpp >> --- a/src/os/linux/vm/os_linux.cpp Thu Feb 04 16:48:39 2016 -0800 >> +++ b/src/os/linux/vm/os_linux.cpp Fri Feb 12 10:58:10 2016 -0200 >> @@ -135,6 +135,7 @@ >> int os::Linux::_page_size = -1; >> const int os::Linux::_vm_default_page_size = (8 * K); >> bool os::Linux::_supports_fast_thread_cpu_time = false; >> +uint32_t os::Linux::_os_version = 0; >> const char * os::Linux::_glibc_version = NULL; >> const char * os::Linux::_libpthread_version = NULL; >> pthread_condattr_t os::Linux::_condattr[1]; >> @@ -4332,6 +4333,21 @@ >> return (tp.tv_sec * NANOSECS_PER_SEC) + tp.tv_nsec; >> } >> +void os::Linux::initialize_os_info() { >> + assert(_os_version == 0, "OS info already initialized"); >> + >> + struct utsname _uname; >> + + uname(&_uname); // Not sure yet how deal if ret == -1 >> + _os_version = atoi(_uname.release); >> +} >> + >> +uint32_t os::Linux::os_version() { >> + assert(_os_version != 0, "not initialized"); >> + return _os_version; >> +} >> + >> + >> ///// >> // glibc on Linux platform uses non-documented flag >> // to indicate, that some special sort of signal >> @@ -4553,6 +4569,7 @@ >> init_page_sizes((size_t) Linux::page_size()); >> Linux::initialize_system_info(); >> + Linux::initialize_os_info(); >> // main_thread points to the aboriginal thread >> Linux::_main_thread = pthread_self(); >> >> >> diff -r 266fa9bb5297 
src/os/linux/vm/os_linux.hpp >> --- a/src/os/linux/vm/os_linux.hpp Thu Feb 04 16:48:39 2016 -0800 >> +++ b/src/os/linux/vm/os_linux.hpp Fri Feb 12 10:59:01 2016 -0200 >> @@ -55,7 +55,7 @@ >> static bool _supports_fast_thread_cpu_time; >> static GrowableArray* _cpu_to_node; >> - >> + static uint32_t _os_version; protected: >> static julong _physical_memory; >> @@ -198,6 +198,9 @@ >> static jlong fast_thread_cpu_time(clockid_t clockid); >> + static void initialize_os_info(); >> + static uint32_t os_version(); + >> // pthread_cond clock suppport >> private: >> static pthread_condattr_t _condattr[1]; >> >> >> 23 tests are now passing: http://hastebin.com/raw/oyicagusod >> >> Is there a reason to let RTM disabled for Linux on ppc64le by now? Could somebody explain what is currently missing on PPC64 LE RTM implementation in order to make all RTM tests pass? >> >> Thank you. >> >> Regards, >> -- >> Gustavo Romero >> > From michael.haupt at oracle.com Mon Feb 22 09:31:30 2016 From: michael.haupt at oracle.com (Michael Haupt) Date: Mon, 22 Feb 2016 10:31:30 +0100 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> Message-ID: <72838227-273B-4146-8A07-4548D31E8C00@oracle.com> Hi Paul, I've reviewed the JDK changes - looks good! Note that this is a lower-case review. Best, Michael > Am 11.02.2016 um 16:39 schrieb Paul Sandoz : > JDK: > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html -- Dr. Michael Haupt | Principal Member of Technical Staff Phone: +49 331 200 7277 | Fax: +49 331 200 7561 Oracle Java Platform Group | LangTools Team | Nashorn Oracle Deutschland B.V. & Co. KG | Schiffbauergasse 14 | 14467 Potsdam, Germany ORACLE Deutschland B.V. & Co. KG | Hauptverwaltung: Riesstra?e 25, D-80992 M?nchen Registergericht: Amtsgericht M?nchen, HRA 95603 Komplement?rin: ORACLE Deutschland Verwaltung B.V. | Hertogswetering 163/167, 3543 AS Utrecht, Niederlande Handelsregister der Handelskammer Midden-Nederland, Nr. 30143697 Gesch?ftsf?hrer: Alexander van der Ven, Jan Schultheiss, Val Maher Oracle is committed to developing practices and products that help protect the environment From martin.doerr at sap.com Mon Feb 22 10:35:10 2016 From: martin.doerr at sap.com (Doerr, Martin) Date: Mon, 22 Feb 2016 10:35:10 +0000 Subject: RTM disabled for Linux on PPC64 LE In-Reply-To: <56C78A97.8080908@linux.vnet.ibm.com> References: <56BDE1EF.1020305@linux.vnet.ibm.com> <56C1DF2E.8070603@linux.vnet.ibm.com> <82585848434d4624ae08ccacac542a17@DEWDFE13DE14.global.corp.sap> <56C78A97.8080908@linux.vnet.ibm.com> Message-ID: <9588015c37c247ebb4282f75d12f4f32@DEWDFE13DE14.global.corp.sap> Hi Gustavo, I think the change should get contributed. I have opened a bug for it which is the first thing we need: JDK-8150353 Can you create and upload a webrev, please? The hg change comment should be: 8150353: PPC64LE: Support RTM on linux Reviewed-by: mdoerr When the webrev is there, please send out a request for review with the headline: RFR(M) 8150353: PPC64LE: Support RTM on linux Information about how to do this and about the review process can be found here: http://openjdk.java.net/guide/webrevHelp.html http://openjdk.java.net/guide/ http://openjdk.java.net/guide/codeReview.html If you have questions or problems feel free to contact us. Btw., do you think the big endian linux kernel will also contain the syscall change? 
If not, I suggest to only set INCLUDE_RTM_OPT to 1 on AIX and PPC64LE in globalDefinitions_ppc.hpp. #if defined(COMPILER2) && (defined(AIX) || defined(VM_LITTLE_ENDIAN) Best regards, Martin -----Original Message----- From: Gustavo Romero [mailto:gromero at linux.vnet.ibm.com] Sent: Freitag, 19. Februar 2016 22:35 To: Doerr, Martin ; hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net Cc: Breno Leitao Subject: Re: RTM disabled for Linux on PPC64 LE Hi Martin, I can't afford the PECjbb2005 by now, since it's paid. Instead I'm using the SPECjvm2008 suite. Thanks for bringing up the problem on C2's scratch buffer. Indeed, I've got a core dump when I combined +UseRTMLocking, +UseRTMForStackLocks, and +UseRTMDeopt (http://goo.gl/Sc5Ekp). I've experimented a little with the MAX_inst_size value and found that at least doubling it is sufficient to solve the problem: # HG changeset patch # User gromero # Date 1455916590 7200 # Fri Feb 19 19:16:30 2016 -0200 # Node ID 721c2e526fa7ee5e46b0ab7219e2acac90c4239b # Parent a83242700c91e294886d23c89061c1916682836c Fix C2 scratch buffer too small diff --git a/src/share/vm/opto/compile.hpp b/src/share/vm/opto/compile.hpp --- a/src/share/vm/opto/compile.hpp +++ b/src/share/vm/opto/compile.hpp @@ -1118,7 +1118,7 @@ bool in_scratch_emit_size() const { return _in_scratch_emit_size; } enum ScratchBufferBlob { - MAX_inst_size = 1024, + MAX_inst_size = 2048, MAX_locs_size = 128, // number of relocInfo elements MAX_const_size = 128, MAX_stubs_size = 128 Do you think we can fix it upstream and enable the RTM for Linux on ppc64le? Any guidelines on it? BTW, I'm still taking a deeper reflection on your comments about biased, RTM and classic locking. Best regards, -- Gustavo Romero On 16-02-2016 11:33, Doerr, Martin wrote: > Hi Gustavo, > > thanks for the information and for working on this topic. > > I have used SPEC jbb2005 to test and benchmark RTM on PPC64. It has worked even with the old linux kernel to some extent. > > There are currently the following problems: > The C2's scratch buffer seems to be too small if you enable all options: > -XX:+UnlockExperimentalVMOptions -XX:+UseRTMLocking -XX:+UseRTMForStackLocks -XX:+UseRTMDeopt > I guess we need to increase MAX_inst_size in ScratchBufferBlob (compile.hpp). I didn't have the time to try, yet. > > The following issue is important for performance work: > RTM does not work with BiasedLocking. The latter gets switched off if RTM is activated which has a large performance impact (especially in jbb2005). > I would disable it for a reference measurement: > -XX:-UseBiasedLocking > > Unfortunately, RTM was slower than BiasedLocking but faster than the reference (without both) which tells me that there's room for improvement. > There are basically 3 classes of locks: > 1. no contention > 2. contention on lock, low contention on data > 3. high contention on data > > I believe the optimal treatment for the cases would be: > 1. Biased Locking > 2. Transactional Memory > 3. classical locking with lock inflating > > I think it would be good if the JVM could optimize for all these cases in the future. But that would add additional complexity and code size. > > Best regards, > Martin > > > -----Original Message----- > From: Gustavo Romero [mailto:gromero at linux.vnet.ibm.com] > Sent: Montag, 15. 
Februar 2016 15:23 > To: Doerr, Martin ; hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net > Cc: Breno Leitao > Subject: Re: RTM disabled for Linux on PPC64 LE > > Hello Martin, > > Thank you for your reply. > > The problematic behavior of syscalls has been addressed since kernel 4.2 > (already present in, por instance, Ubuntu 15.10 and 16.04): > https://goo.gl/d80xAJ > > I'm taking a closer look at the RTM tests and I'll make additional > experiments as you suggested. > > So far I enabled RTM for Linux on ppc64le and there is no regression in > the RTM test suite. I'm using kernel 4.2.0. > > The following patch was applied to > http://hg.openjdk.java.net/jdk9/jdk9/hotspot, 5d17092b6917+ tip, and I > used the (major + minor) version to enable RTM as you said: > > # HG changeset patch > # User gromero > # Date 1455540780 7200 > # Mon Feb 15 10:53:00 2016 -0200 > # Node ID 0e9540f2156c4c4d7d8215eb89109ff81be82f58 > # Parent 5d17092b691720d71f06360fb0cc183fe2877faa > Enable RTM for Linux on PPC64 LE > > Enable RTM for Linux kernel version equal or above 4.2, since the > problematic behavior of performing a syscall from within transaction > which could lead to unpredictable results has been addressed. Please, > refer to https://goo.gl/fi4tjC > > diff --git a/src/cpu/ppc/vm/globalDefinitions_ppc.hpp b/src/cpu/ppc/vm/globalDefinitions_ppc.hpp > --- a/src/cpu/ppc/vm/globalDefinitions_ppc.hpp > +++ b/src/cpu/ppc/vm/globalDefinitions_ppc.hpp > @@ -52,4 +52,9 @@ > #define INCLUDE_RTM_OPT 1 > #endif > > +// Enable RTM experimental support for Linux. > +#if defined(COMPILER2) && defined(linux) > +#define INCLUDE_RTM_OPT 1 > +#endif > + > #endif // CPU_PPC_VM_GLOBALDEFINITIONS_PPC_HPP > diff --git a/src/cpu/ppc/vm/vm_version_ppc.cpp b/src/cpu/ppc/vm/vm_version_ppc.cpp > --- a/src/cpu/ppc/vm/vm_version_ppc.cpp > +++ b/src/cpu/ppc/vm/vm_version_ppc.cpp > @@ -255,7 +255,12 @@ > } > #endif > #ifdef linux > - // TODO: check kernel version (we currently have too old versions only) > + // At least Linux kernel 4.2, as the problematic behavior of syscalls > + // being called from within a transaction has been addressed. 
> + // Please, refer to commit 4b4fadba057c1af7689fc8fa182b13baL7 > + if (os::Linux::os_version() >= 0x040200) { > + os_too_old = false; > + } > #endif > if (os_too_old) { > vm_exit_during_initialization("RTM is not supported on this OS version."); > diff --git a/src/os/linux/vm/os_linux.cpp b/src/os/linux/vm/os_linux.cpp > --- a/src/os/linux/vm/os_linux.cpp > +++ b/src/os/linux/vm/os_linux.cpp > @@ -135,6 +135,7 @@ > int os::Linux::_page_size = -1; > const int os::Linux::_vm_default_page_size = (8 * K); > bool os::Linux::_supports_fast_thread_cpu_time = false; > +uint32_t os::Linux::_os_version = 0; > const char * os::Linux::_glibc_version = NULL; > const char * os::Linux::_libpthread_version = NULL; > pthread_condattr_t os::Linux::_condattr[1]; > @@ -4332,6 +4333,31 @@ > return (tp.tv_sec * NANOSECS_PER_SEC) + tp.tv_nsec; > } > > +void os::Linux::initialize_os_info() { > + assert(_os_version == 0, "OS info already initialized"); > + > + struct utsname _uname; > + > + uint32_t major; > + uint32_t minor; > + uint32_t fix; > + > + uname(&_uname); // Not sure yet how to bail out if ret == -1 > + sscanf(_uname.release,"%d.%d.%d", &major, > + &minor, > + &fix ); > + > + _os_version = (major << 16) | > + (minor << 8 ) | > + (fix << 0 ) ; > +} > + > +uint32_t os::Linux::os_version() { > + assert(_os_version != 0, "not initialized"); > + return _os_version; > +} > + > + > ///// > // glibc on Linux platform uses non-documented flag > // to indicate, that some special sort of signal > @@ -4552,6 +4578,8 @@ > } > init_page_sizes((size_t) Linux::page_size()); > > + Linux::initialize_os_info(); > + > Linux::initialize_system_info(); > > // main_thread points to the aboriginal thread > diff --git a/src/os/linux/vm/os_linux.hpp b/src/os/linux/vm/os_linux.hpp > --- a/src/os/linux/vm/os_linux.hpp > +++ b/src/os/linux/vm/os_linux.hpp > @@ -56,6 +56,12 @@ > > static GrowableArray* _cpu_to_node; > > + // Ox00AABBCC > + // AA, Major Version > + // BB, Minor Version > + // CC, Fix Version > + static uint32_t _os_version; > + > protected: > > static julong _physical_memory; > @@ -198,6 +204,9 @@ > > static jlong fast_thread_cpu_time(clockid_t clockid); > > + static void initialize_os_info(); > + static uint32_t os_version(); > + > // pthread_cond clock suppport > private: > static pthread_condattr_t _condattr[1]; > > Should I use any test suite besides the jtreg suite already present > in the Hotspot forest? > > > Best Regards, > Gustavo > > On 12-02-2016 12:52, Doerr, Martin wrote: >> Hi Gustavo, >> >> the reason why we disabled RTM for linux on PPC64 (big or little endian) was the problematic behavior of syscalls. >> The old version of the document >> www.kernel.org/doc/Documentation/powerpc/transactional_memory.txt >> said: >> ?Performing syscalls from within transaction is not recommended, and can lead to unpredictable results.? >> >> Transactions need to either pass completely or roll back completely without disturbing side effects of partially executed syscalls. >> We rely on the kernel to abort transactions if necessary. >> >> The document has changed and it may possibly work with a new linux kernel. >> However, we don't have such a new kernel, yet. So we can't test it at the moment. >> I don't know which kernel version exactly contains the change. I guess this exact version number (major + minor) should be used for enabling RTM. >> >> I haven't looked into the tests, yet. There may be a need for additional adaptations and fixes. >> >> We appreciate if you make experiments and/or contributions. 
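(As an aside on the version encoding used in the patch above: the 0x00AABBCC layout makes the kernel check a single integer comparison. The following stand-alone sketch is only an illustration of that idea, not the HotSpot code itself; it uses plain int for the %d conversions and treats a failed uname() as "unknown".)

  #include <cstdio>
  #include <cstdint>
  #include <sys/utsname.h>

  // Pack "major.minor.fix" from uname() into 0x00AABBCC, as in the patch above.
  static uint32_t packed_kernel_version() {
    struct utsname un;
    if (uname(&un) != 0) {
      return 0;                           // caller must treat 0 as "unknown"
    }
    int major = 0, minor = 0, fix = 0;    // %d expects int
    sscanf(un.release, "%d.%d.%d", &major, &minor, &fix);
    return ((uint32_t)major << 16) | ((uint32_t)minor << 8) | (uint32_t)fix;
  }

  int main() {
    uint32_t v = packed_kernel_version();
    // Kernel 4.2.0 packs to 0x00040200, so the RTM gate is a plain compare:
    printf("kernel 0x%06x, RTM-capable: %s\n",
           (unsigned)v, (v >= 0x040200) ? "yes" : "no");
    return 0;
  }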
>> >> Thanks and best regards, >> Martin >> >> >> -----Original Message----- >> From: ppc-aix-port-dev [mailto:ppc-aix-port-dev-bounces at openjdk.java.net] On Behalf Of Gustavo Romero >> Sent: Freitag, 12. Februar 2016 14:45 >> To: hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net >> Subject: RTM disabled for Linux on PPC64 LE >> Importance: High >> >> Hi, >> As of now (tip 1922:be58b02c11f9, jdk9/jdk9 repo) Hotspot build for Linux on ppc64le of fails due to a simple uninitialized variable error: >> >> hotspot/src/share/vm/ci/ciMethodData.hpp:585:100: error: ?data? may be used uninitialized in this function >> hotspot/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp:2408:78: error: ?md? may be used uninitialized in this function >> >> So this straightforward patch solves the issue: >> diff -r 534c50395957 src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp >> --- a/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Thu Jan 28 15:42:23 2016 -0800 >> +++ b/src/cpu/ppc/vm/c1_LIRAssembler_ppc.cpp Mon Feb 08 17:13:14 2016 -0200 >> @@ -2321,8 +2321,8 @@ >> if (reg_conflict) { obj = dst; } >> } >> - ciMethodData* md; >> - ciProfileData* data; >> + ciMethodData* md = NULL; >> + ciProfileData* data = NULL; >> int mdo_offset_bias = 0; compiler/rtm >> if (should_profile) { >> ciMethod* method = op->profiled_method(); >> >> However, after the build, I realized that RTM is still disabled for Linux on ppc64le, failing 25 tests on compiler/rtm suite: >> >> http://hastebin.com/raw/ohoxiwaqih >> >> Hence after applying the following patches that enable RTM for Linux on ppc64le: >> >> diff -r 266fa9bb5297 src/cpu/ppc/vm/vm_version_ppc.cpp >> --- a/src/cpu/ppc/vm/vm_version_ppc.cpp Thu Feb 04 16:48:39 2016 -0800 >> +++ b/src/cpu/ppc/vm/vm_version_ppc.cpp Fri Feb 12 10:55:46 2016 -0200 >> @@ -255,7 +255,9 @@ >> } >> #endif >> #ifdef linux >> - // TODO: check kernel version (we currently have too old versions only) >> + if (os::Linux::os_version() >= 4) { // at least Linux kernel version 4 >> + os_too_old = false; >> + } >> #endif >> if (os_too_old) { >> vm_exit_during_initialization("RTM is not supported on this OS version."); >> >> >> diff -r 266fa9bb5297 src/os/linux/vm/os_linux.cpp >> --- a/src/os/linux/vm/os_linux.cpp Thu Feb 04 16:48:39 2016 -0800 >> +++ b/src/os/linux/vm/os_linux.cpp Fri Feb 12 10:58:10 2016 -0200 >> @@ -135,6 +135,7 @@ >> int os::Linux::_page_size = -1; >> const int os::Linux::_vm_default_page_size = (8 * K); >> bool os::Linux::_supports_fast_thread_cpu_time = false; >> +uint32_t os::Linux::_os_version = 0; >> const char * os::Linux::_glibc_version = NULL; >> const char * os::Linux::_libpthread_version = NULL; >> pthread_condattr_t os::Linux::_condattr[1]; >> @@ -4332,6 +4333,21 @@ >> return (tp.tv_sec * NANOSECS_PER_SEC) + tp.tv_nsec; >> } >> +void os::Linux::initialize_os_info() { >> + assert(_os_version == 0, "OS info already initialized"); >> + >> + struct utsname _uname; >> + + uname(&_uname); // Not sure yet how deal if ret == -1 >> + _os_version = atoi(_uname.release); >> +} >> + >> +uint32_t os::Linux::os_version() { >> + assert(_os_version != 0, "not initialized"); >> + return _os_version; >> +} >> + >> + >> ///// >> // glibc on Linux platform uses non-documented flag >> // to indicate, that some special sort of signal >> @@ -4553,6 +4569,7 @@ >> init_page_sizes((size_t) Linux::page_size()); >> Linux::initialize_system_info(); >> + Linux::initialize_os_info(); >> // main_thread points to the aboriginal thread >> Linux::_main_thread = pthread_self(); >> >> >> diff -r 266fa9bb5297 
src/os/linux/vm/os_linux.hpp >> --- a/src/os/linux/vm/os_linux.hpp Thu Feb 04 16:48:39 2016 -0800 >> +++ b/src/os/linux/vm/os_linux.hpp Fri Feb 12 10:59:01 2016 -0200 >> @@ -55,7 +55,7 @@ >> static bool _supports_fast_thread_cpu_time; >> static GrowableArray* _cpu_to_node; >> - >> + static uint32_t _os_version; protected: >> static julong _physical_memory; >> @@ -198,6 +198,9 @@ >> static jlong fast_thread_cpu_time(clockid_t clockid); >> + static void initialize_os_info(); >> + static uint32_t os_version(); + >> // pthread_cond clock suppport >> private: >> static pthread_condattr_t _condattr[1]; >> >> >> 23 tests are now passing: http://hastebin.com/raw/oyicagusod >> >> Is there a reason to let RTM disabled for Linux on ppc64le by now? Could somebody explain what is currently missing on PPC64 LE RTM implementation in order to make all RTM tests pass? >> >> Thank you. >> >> Regards, >> -- >> Gustavo Romero >> > From paul.sandoz at oracle.com Mon Feb 22 20:09:47 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 22 Feb 2016 21:09:47 +0100 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <72838227-273B-4146-8A07-4548D31E8C00@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <72838227-273B-4146-8A07-4548D31E8C00@oracle.com> Message-ID: <3ADCA13E-AB1E-4EE1-A7D5-7F25EBAC103F@oracle.com> > On 22 Feb 2016, at 10:31, Michael Haupt wrote: > > Hi Paul, > > I've reviewed the JDK changes - looks good! Note that this is a lower-case review. > Very much appreciated, thanks, Paul. > Best, > > Michael > >> Am 11.02.2016 um 16:39 schrieb Paul Sandoz >: >> JDK: >> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html From tobias.hartmann at oracle.com Tue Feb 23 10:19:11 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 23 Feb 2016 11:19:11 +0100 Subject: [9] RFR(S): 8150441: CompileTask::print_impl() is broken after JDK-8146905 Message-ID: <56CC321F.8050209@oracle.com> Hi, please review the following patch. https://bugs.openjdk.java.net/browse/JDK-8150441 http://cr.openjdk.java.net/~thartmann/8150441/webrev.00/ The fix for JDK-8146905 [1] removed staticBufferStream and instead passes VMError::out/log to CompileTask::print_line_on_error() to print the current compile task if an error occurs. The problem is that fdStream VMError::out/log does not initialize the TimeStamp outputStream::_stamp and we hit the "must not be clear" assert in TimeStamp::milliseconds() which is called from CompileTask::print_impl(). Before, the TimeStamp was initialized in the staticBufferStream constructor [2]. The time stamps should be explicitly initialized like we do in ostream_init(). I verified that my patch solves the problem. Thanks, Tobias [1] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b [2] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b#l1.102 From aph at redhat.com Tue Feb 23 10:38:45 2016 From: aph at redhat.com (Andrew Haley) Date: Tue, 23 Feb 2016 10:38:45 +0000 Subject: PING: 8150045: AArch64: arraycopy causes segfaults in SATB during garbage collection In-Reply-To: <56C5E7AA.4050209@redhat.com> References: <56C48215.3040106@redhat.com> <56C482D4.2010101@redhat.com> <56C5E7AA.4050209@redhat.com> Message-ID: <56CC36B5.3090209@redhat.com> Can I have an official review of this, please? Vladimir or Roland, please have a quick look. Maybe it's time to make Andrew Dinn a JDK 9 reviewer. Andrew. 
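(Returning to the 8150441 description above, a simplified stand-in for the failure mode -- not the real TimeStamp/outputStream code: a time stamp whose counter was never set trips a "must not be clear" style assert the first time a stream asks it for a value, which is why the stamp has to be updated explicitly somewhere before printing.)

  #include <cassert>
  #include <cstdio>
  #include <ctime>

  class TimeStampSketch {
    long _counter = 0;                        // 0 means "never updated"
  public:
    void update()           { _counter = (long)time(nullptr); }
    bool is_updated() const { return _counter != 0; }
    long seconds() const {
      assert(is_updated() && "must not be clear");   // the assert that fires
      return _counter;
    }
  };

  int main() {
    TimeStampSketch stamp;     // like a stream whose stamp was never updated
    // stamp.seconds();        // would hit the "must not be clear" assert
    stamp.update();            // what an explicit initialization provides
    printf("%ld s\n", stamp.seconds());
    return 0;
  }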
On 02/18/2016 03:47 PM, Andrew Dinn wrote: > On 17/02/16 14:25, Andrew Haley wrote: >> Sorry, I forgot to say this is AArch64-specific. >> >> On 02/17/2016 02:22 PM, Andrew Haley wrote: >>> This is a bug due to the abuse of default arguments in C++. I, ah, >>> forgot to pass dest_uninitialized to the OOP arraycopy routines, so we >>> always scan the destination array, even though it contains garbage. >>> >>> I also took the opportunity to do a little tidying-up. >>> >>> http://cr.openjdk.java.net/~aph/8150045/ > > This looks fine, including the tidying up. Reviewed as AArch64-only change. > > regards, > > > Andrew Dinn > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in UK and Wales under Company Registration No. 3798903 > Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Paul > Argiry (US) > From roland.westrelin at oracle.com Tue Feb 23 11:14:11 2016 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Tue, 23 Feb 2016 12:14:11 +0100 Subject: PING: 8150045: AArch64: arraycopy causes segfaults in SATB during garbage collection In-Reply-To: <56CC36B5.3090209@redhat.com> References: <56C48215.3040106@redhat.com> <56C482D4.2010101@redhat.com> <56C5E7AA.4050209@redhat.com> <56CC36B5.3090209@redhat.com> Message-ID: > Can I have an official review of this, please? Vladimir or Roland, please > have a quick look. That looks good. > Maybe it's time to make Andrew Dinn a JDK 9 reviewer. I only see 10 or so changes that he contributed to jdk 9 but I understand he contributed a lot more changes to aarch64 that don?t show up under his name? Nominating him sounds good to me. Roland. From aph at redhat.com Tue Feb 23 11:18:16 2016 From: aph at redhat.com (Andrew Haley) Date: Tue, 23 Feb 2016 11:18:16 +0000 Subject: PING: 8150045: AArch64: arraycopy causes segfaults in SATB during garbage collection In-Reply-To: References: <56C48215.3040106@redhat.com> <56C482D4.2010101@redhat.com> <56C5E7AA.4050209@redhat.com> <56CC36B5.3090209@redhat.com> Message-ID: <56CC3FF8.3040304@redhat.com> On 02/23/2016 11:14 AM, Roland Westrelin wrote: > I only see 10 or so changes that he contributed to jdk 9 but I > understand he contributed a lot more changes to aarch64 that don?t > show up under his name? About half of the main commits are his. It's a shame that sanitizing the port for committing to the main tree erased this. Andrew. From adinn at redhat.com Tue Feb 23 11:59:40 2016 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 23 Feb 2016 11:59:40 +0000 Subject: PING: 8150045: AArch64: arraycopy causes segfaults in SATB during garbage collection In-Reply-To: <56CC3FF8.3040304@redhat.com> References: <56C48215.3040106@redhat.com> <56C482D4.2010101@redhat.com> <56C5E7AA.4050209@redhat.com> <56CC36B5.3090209@redhat.com> <56CC3FF8.3040304@redhat.com> Message-ID: <56CC49AC.4010200@redhat.com> On 23/02/16 11:18, Andrew Haley wrote: > On 02/23/2016 11:14 AM, Roland Westrelin wrote: > >> I only see 10 or so changes that he contributed to jdk 9 but I >> understand he contributed a lot more changes to aarch64 that don?t >> show up under his name? > > About half of the main commits are his. It's a shame that sanitizing > the port for committing to the main tree erased this. As ever Ed Nevill has the facts at his fingertips. When he nominated me as a committer he tallied up my contributions to aarch64 JDK 8 and apparently it came to 337. 
http://mail.openjdk.java.net/pipermail/jdk9-dev/2015-September/002790.html I didn't check the figure but that sounds about right :-) regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Paul Argiry (US) From david.holmes at oracle.com Tue Feb 23 12:37:21 2016 From: david.holmes at oracle.com (David Holmes) Date: Tue, 23 Feb 2016 22:37:21 +1000 Subject: [9] RFR(S): 8150441: CompileTask::print_impl() is broken after JDK-8146905 In-Reply-To: <56CC321F.8050209@oracle.com> References: <56CC321F.8050209@oracle.com> Message-ID: <56CC5281.8010109@oracle.com> Hi Tobias, On 23/02/2016 8:19 PM, Tobias Hartmann wrote: > Hi, > > please review the following patch. > > https://bugs.openjdk.java.net/browse/JDK-8150441 > http://cr.openjdk.java.net/~thartmann/8150441/webrev.00/ > > The fix for JDK-8146905 [1] removed staticBufferStream and instead passes VMError::out/log to CompileTask::print_line_on_error() to print the current compile task if an error occurs. The problem is that fdStream VMError::out/log does not initialize the TimeStamp outputStream::_stamp and we hit the "must not be clear" assert in TimeStamp::milliseconds() which is called from CompileTask::print_impl(). Before, the TimeStamp was initialized in the staticBufferStream constructor [2]. I find the original code difficult to follow. The staticBufferStream did initialize its own stamp, but AFAICS that stamp is never used in relation to the wrapped outer-stream, so somehow that stamp must have been exposed through one of the methods that didn't delegate to the outer stream ?? > The time stamps should be explicitly initialized like we do in ostream_init(). I verified that my patch solves the problem. Why is the timestamp of a stream not initialized when the stream is constructed? Why are we deferring it until report_and_die() ? Thanks, David > Thanks, > Tobias > > [1] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b > [2] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b#l1.102 > From tobias.hartmann at oracle.com Tue Feb 23 12:56:08 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 23 Feb 2016 13:56:08 +0100 Subject: [9] RFR(S): 8150441: CompileTask::print_impl() is broken after JDK-8146905 In-Reply-To: <56CC5281.8010109@oracle.com> References: <56CC321F.8050209@oracle.com> <56CC5281.8010109@oracle.com> Message-ID: <56CC56E8.1010808@oracle.com> Hi David, thanks for having a look! On 23.02.2016 13:37, David Holmes wrote: > Hi Tobias, > > On 23/02/2016 8:19 PM, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch. >> >> https://bugs.openjdk.java.net/browse/JDK-8150441 >> http://cr.openjdk.java.net/~thartmann/8150441/webrev.00/ >> >> The fix for JDK-8146905 [1] removed staticBufferStream and instead passes VMError::out/log to CompileTask::print_line_on_error() to print the current compile task if an error occurs. The problem is that fdStream VMError::out/log does not initialize the TimeStamp outputStream::_stamp and we hit the "must not be clear" assert in TimeStamp::milliseconds() which is called from CompileTask::print_impl(). Before, the TimeStamp was initialized in the staticBufferStream constructor [2]. > > I find the original code difficult to follow. In the original code, the staticBufferStream constructor initializes "_stamp" which is inherited from outputStream [2]. 
staticBufferStream sbs(buffer, sizeof(buffer), &out); report(&sbs, false); The staticBufferStream is passed on to -> VMError::report(outputStream* st, ..) -> CompileTask::print_line_on_error(..) -> CompileTask::print(..) -> CompileTask::print_impl(..) which then invokes st->time_stamp().milliseconds() with st->time_stamp()._counter == 0. > The staticBufferStream did initialize its own stamp, but AFAICS that stamp is never used in relation to the wrapped outer-stream, so somehow that stamp must have been exposed through one of the methods that didn't delegate to the outer stream ?? Yes, the stamp initialized by staticBufferStream is exposed through st->time_stamp() which does not delegate to the outer stream. >> The time stamps should be explicitly initialized like we do in ostream_init(). I verified that my patch solves the problem. > > Why is the timestamp of a stream not initialized when the stream is constructed? Why are we deferring it until report_and_die() ? I'm not sure about this but I think we could eagerly initialize the timestamp in fdStream::fdStream() or outputStream::outputStream(). Thanks, Tobias > > Thanks, > David > >> Thanks, >> Tobias >> >> [1] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b >> [2] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b#l1.102 >> From roland.westrelin at oracle.com Tue Feb 23 13:58:32 2016 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Tue, 23 Feb 2016 14:58:32 +0100 Subject: [8u] backport of 8149543: range check CastII nodes should not be split through Phi Message-ID: <91F11E92-E63F-4092-862A-8849DD7F1474@oracle.com> Hi, Please approve and review the following backport to 8u. 8149543 was pushed to jdk9 a week ago and it hasn?t caused any new failures during nightly testing. The change applies cleanly to 8u. https://bugs.openjdk.java.net/browse/JDK-8149543 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a63cf6a69972 Roland. From volker.simonis at gmail.com Tue Feb 23 14:19:16 2016 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 23 Feb 2016 15:19:16 +0100 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> <56C1D764.6020100@oracle.com> <56C1DD24.2060503@redhat.com> <56C394A0.2010606@redhat.com> <56C4588A.4070505@oracle.com> <56C4C6F6.8040709@oracle.com> <56C4CD2A.1000701@oracle.com> Message-ID: Hi Aleksey, sorry for the delay. Your change builds and runs fine on ppc64. Still haven't had time to look at the possible optimizations and intrinsics on ppc64 but I think that can and should be done in a follow up change. So from my side you can go ahead and push. Regards, Volker On Thu, Feb 18, 2016 at 8:47 AM, Volker Simonis wrote: > On Wed, Feb 17, 2016 at 8:42 PM, Aleksey Shipilev > wrote: >> On 02/17/2016 10:16 PM, Vladimir Kozlov wrote: >>> In general it looks good to me. >> >> Thanks Vladimir! >> >>> My main question is about implementation of new functionality on >>> other platforms. When it will be done? Yes, it works now because you >>> have guard match_rule_supported(). But we usually do implementation >>> on platforms at least as separate RFE. What is your plan? >> >> We have multiple subtasks for AArch64, SPARC and Power under VarHandles >> umbrella: >> https://bugs.openjdk.java.net/browse/JDK-8080588 >> >> Hopefully we will address them after/concurrently-with the bulk of >> VarHandles changes settle into mainline. 
But we need to get some basic >> code in mainline to build on. >> >> >>> SAP guys should also test it on PPC64. >> >> Volker, Goetz, I would appreciate if you can give it a spin! >> > > Sorry, I only saw this thread yesterday. I'll start now right away > with looking into it and testing it on ppc64. > > Regards, > Volker > >> >>> What test/compiler/unsafe/generate-unsafe-tests.sh is for? It is not >>> used by regression testing as far as I see. >> >> The script (re)generates the tests from the template, and is supposed to >> be run manually when test template had changed. The >> test/compiler/unsafe/ tests you see in the webrev were generated by that >> script. >> >>> And please, push it into hs-comp for nightly testing. >> >> That's was the plan, I should have said that from the beginning. >> >> Cheers, >> -Aleksey >> From aleksey.shipilev at oracle.com Tue Feb 23 14:39:17 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Tue, 23 Feb 2016 17:39:17 +0300 Subject: RFR #2 (M) 8148146: Integrate new internal Unsafe entry points, and basic intrinsic support for VarHandles In-Reply-To: References: <56C1C2ED.10702@oracle.com> <56C1D137.5020607@redhat.com> <56C1D764.6020100@oracle.com> <56C1DD24.2060503@redhat.com> <56C394A0.2010606@redhat.com> <56C4588A.4070505@oracle.com> <56C4C6F6.8040709@oracle.com> <56C4CD2A.1000701@oracle.com> Message-ID: <56CC6F15.9060207@oracle.com> On 02/23/2016 05:19 PM, Volker Simonis wrote: > So from my side you can go ahead and push. Thanks Volker! Now that we have general blessings from Vladimir Kozlov and John Rose, Andrew Dinn on AArch64, and Volker Simonis on PPC64. There are also pending Unsafe cleanup changes from Mikael Vidstedt, but we negotiated that VarHandles change should go first. Therefore, I'd like to push these today: http://cr.openjdk.java.net/~shade/8148146/webrev.hs.03/ http://cr.openjdk.java.net/~shade/8148146/webrev.jdk.03/ Cheers, -Aleksey From roland.westrelin at oracle.com Tue Feb 23 17:05:27 2016 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Tue, 23 Feb 2016 18:05:27 +0100 Subject: [8u] backport of 8149543: range check CastII nodes should not be split through Phi In-Reply-To: <56CC6746.8020307@oracle.com> References: <91F11E92-E63F-4092-862A-8849DD7F1474@oracle.com> <56CC6746.8020307@oracle.com> Message-ID: <8C7EA09A-A06D-4BD0-B10F-DEF98431D448@oracle.com> > This request appears to be for a codereview along with push approval but I don't see a webrev for any changes that occurred in 8u. Does the patch apply cleanly? If so you likely don't need a codereview. > > If the patch doesn't apply cleanly please provide an updated webrev. > > Approved assuming the fix applies cleanly / you get a code review. The patch does apply cleanly. Sorry for the confusion. Roland. > > -Rob > > On 23/02/16 13:58, Roland Westrelin wrote: >> Hi, >> >> Please approve and review the following backport to 8u. >> >> 8149543 was pushed to jdk9 a week ago and it hasn?t caused any new failures during nightly testing. The change applies cleanly to 8u. >> >> https://bugs.openjdk.java.net/browse/JDK-8149543 >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a63cf6a69972 >> >> Roland. 
>> From rob.mckenna at oracle.com Tue Feb 23 14:05:58 2016 From: rob.mckenna at oracle.com (Rob McKenna) Date: Tue, 23 Feb 2016 14:05:58 +0000 Subject: [8u] backport of 8149543: range check CastII nodes should not be split through Phi In-Reply-To: <91F11E92-E63F-4092-862A-8849DD7F1474@oracle.com> References: <91F11E92-E63F-4092-862A-8849DD7F1474@oracle.com> Message-ID: <56CC6746.8020307@oracle.com> This request appears to be for a codereview along with push approval but I don't see a webrev for any changes that occurred in 8u. Does the patch apply cleanly? If so you likely don't need a codereview. If the patch doesn't apply cleanly please provide an updated webrev. Approved assuming the fix applies cleanly / you get a codereview. -Rob On 23/02/16 13:58, Roland Westrelin wrote: > Hi, > > Please approve and review the following backport to 8u. > > 8149543 was pushed to jdk9 a week ago and it hasn?t caused any new failures during nightly testing. The change applies cleanly to 8u. > > https://bugs.openjdk.java.net/browse/JDK-8149543 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a63cf6a69972 > > Roland. > From forax at univ-mlv.fr Tue Feb 23 20:10:42 2016 From: forax at univ-mlv.fr (Remi Forax) Date: Tue, 23 Feb 2016 21:10:42 +0100 (CET) Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <5ABD11CF-407C-4BAD-95A7-E267C5F1682D@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56C1D973.3060802@oracle.com> <1533888573.323371.1455548322267.JavaMail.zimbra@u-pem.fr> <531723180.551702.1455576328083.JavaMail.zimbra@u-pem.fr> <5ABD11CF-407C-4BAD-95A7-E267C5F1682D@oracle.com> Message-ID: <647685396.1040286.1456258242492.JavaMail.zimbra@u-pem.fr> ----- Mail original ----- > De: "Paul Sandoz" > Cc: "jdk9-dev" , "hotspot-dev developers" > Envoy?: Mardi 16 F?vrier 2016 10:33:35 > Objet: Re: RFR JDK-8149644 Integrate VarHandles > > > > On 15 Feb 2016, at 23:45, Remi Forax wrote: > > > >>> The comment in Infer > >>> "//The return type for a polymorphic signature call" > >>> should be updated to reflect the new implementation. > >>> > >> > >> That comment should really be folded into the first if block. > >> > >> I could do that as follows: > >> > >> // The return type of the polymorphic signature is polymorphic, > >> // and is computed from the ... > >> > >> And then in the else block > >> > >> // The return type of the polymorphic signature is fixed (not > >> polymorphic) > >> > >> ? > > > > yes, good idea. > > > > Updated in place. > > > >> > >> > >>> and this change in the way to do the inference (if the return type is not > >>> Object use the declared return type) is too ad hoc for me, > >>> we will need to do the same special case for the parameter types, soon, > >>> no > >>> ? > >>> > >> > >> Do you have any use-cases in mind? > >> > >> Rather than ad-hoc i would argue instead the enhancement of > >> signature-polymorphic methods is limited to that required by the current > >> use-cases. > >> > >> IIRC I did pull on that more significantly at one point when i had > >> sub-types > >> for array handles since the index need not be polymorphic. But we dialled > >> back from that approach. > > > > as you said one use case is to be able to fix an index, but perhaps a more > > interesting case is to be able to bound the number of parameters, > > by example for compareAndSet > > boolean compareAndSet(Object expected, Object value) > > is better than > > boolean compareAndSet(Object... 
args); > > > > That ain?t gonna work because the shape is defined by the factory method > producing the var handle, there could be zero or more coordinate arguments > preceding zero or more explicit value arguments. We cannot declare a varargs > parameter preceding other parameters and declaring Object[] is an awkward > fit. It?s more that i would care to bite off in terms of tweaking the > definition of signature-polymorphism. ok, the actual meta-protocol used to create a varhandle doesn't allow anything other than a varargs as 'real' arguments, but it's the tail waging the dog, this meta-protocol is an implementation artifact of the current way a varhandle is created. anyway, i think it's too late to change this kind of things now, given that the spec of what a polymorphic signature is has already changed, a future release may change this definition again. > > Paul.. > R?mi From vladimir.kozlov at oracle.com Tue Feb 23 22:16:29 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 23 Feb 2016 14:16:29 -0800 Subject: [8u] backport of 8149543: range check CastII nodes should not be split through Phi In-Reply-To: <8C7EA09A-A06D-4BD0-B10F-DEF98431D448@oracle.com> References: <91F11E92-E63F-4092-862A-8849DD7F1474@oracle.com> <56CC6746.8020307@oracle.com> <8C7EA09A-A06D-4BD0-B10F-DEF98431D448@oracle.com> Message-ID: <56CCDA3D.4030706@oracle.com> Here is review thread for jdk9: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2016-February/021222.html http://cr.openjdk.java.net/~roland/8149543/webrev.01/ Thanks, Vladimir On 2/23/16 9:05 AM, Roland Westrelin wrote: >> This request appears to be for a codereview along with push approval but I don't see a webrev for any changes that occurred in 8u. Does the patch apply cleanly? If so you likely don't need a codereview. >> >> If the patch doesn't apply cleanly please provide an updated webrev. >> >> Approved assuming the fix applies cleanly / you get a code review. > > The patch does apply cleanly. Sorry for the confusion. > > Roland. > >> >> -Rob >> >> On 23/02/16 13:58, Roland Westrelin wrote: >>> Hi, >>> >>> Please approve and review the following backport to 8u. >>> >>> 8149543 was pushed to jdk9 a week ago and it hasn?t caused any new failures during nightly testing. The change applies cleanly to 8u. >>> >>> https://bugs.openjdk.java.net/browse/JDK-8149543 >>> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a63cf6a69972 >>> >>> Roland. >>> > From david.holmes at oracle.com Wed Feb 24 00:40:33 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 24 Feb 2016 10:40:33 +1000 Subject: [9] RFR(S): 8150441: CompileTask::print_impl() is broken after JDK-8146905 In-Reply-To: <56CC56E8.1010808@oracle.com> References: <56CC321F.8050209@oracle.com> <56CC5281.8010109@oracle.com> <56CC56E8.1010808@oracle.com> Message-ID: <56CCFC01.6030603@oracle.com> Hi Tobias, On 23/02/2016 10:56 PM, Tobias Hartmann wrote: > Hi David, > > thanks for having a look! > > On 23.02.2016 13:37, David Holmes wrote: >> Hi Tobias, >> >> On 23/02/2016 8:19 PM, Tobias Hartmann wrote: >>> Hi, >>> >>> please review the following patch. >>> >>> https://bugs.openjdk.java.net/browse/JDK-8150441 >>> http://cr.openjdk.java.net/~thartmann/8150441/webrev.00/ >>> >>> The fix for JDK-8146905 [1] removed staticBufferStream and instead passes VMError::out/log to CompileTask::print_line_on_error() to print the current compile task if an error occurs. 
The problem is that fdStream VMError::out/log does not initialize the TimeStamp outputStream::_stamp and we hit the "must not be clear" assert in TimeStamp::milliseconds() which is called from CompileTask::print_impl(). Before, the TimeStamp was initialized in the staticBufferStream constructor [2]. >> >> I find the original code difficult to follow. > > In the original code, the staticBufferStream constructor initializes "_stamp" which is inherited from outputStream [2]. > > staticBufferStream sbs(buffer, sizeof(buffer), &out); > report(&sbs, false); > > The staticBufferStream is passed on to > -> VMError::report(outputStream* st, ..) > -> CompileTask::print_line_on_error(..) > -> CompileTask::print(..) > -> CompileTask::print_impl(..) > which then invokes st->time_stamp().milliseconds() with st->time_stamp()._counter == 0. > >> The staticBufferStream did initialize its own stamp, but AFAICS that stamp is never used in relation to the wrapped outer-stream, so somehow that stamp must have been exposed through one of the methods that didn't delegate to the outer stream ?? > > Yes, the stamp initialized by staticBufferStream is exposed through st->time_stamp() which does not delegate to the outer stream. Got it - the timestamp was obtained directly from the stream instance, not the wrapped stream, and not used internally as I had assumed. Thanks. >>> The time stamps should be explicitly initialized like we do in ostream_init(). I verified that my patch solves the problem. >> >> Why is the timestamp of a stream not initialized when the stream is constructed? Why are we deferring it until report_and_die() ? > > I'm not sure about this but I think we could eagerly initialize the timestamp in fdStream::fdStream() or outputStream::outputStream(). I was thinking more about when we construct and assign out/log rather than the actual constructors, but that may work too. I think there is a potential bug with the proposed fix. We now have this "initialization" code at the start of report_and_die: 1102 { 1103 // Don't allocate large buffer on stack 1104 static char buffer[O_BUFLEN]; 1105 out.set_scratch_buffer(buffer, sizeof(buffer)); 1106 log.set_scratch_buffer(buffer, sizeof(buffer)); 1107 1108 // Initialize time stamps to use the same base. 1109 out.time_stamp().update_to(1); 1110 log.time_stamp().update_to(1); but this can be executed by more than one thread. The setting of the scratch buffer is idempotent so it doesn't matter if it is executed multiple times (though that makes it fragile as idempotency is not an obvious requirement). But the timestamp.update_to(1) will cause the timestamp to be reset multiple times. This may be benign but is probably not what is expected. 
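To make the interleaving concrete, here is a minimal stand-alone sketch (using std::atomic instead of the VM's Atomic::cmpxchg_ptr, and not the actual VMError code) of the "first error wins" guard referred to below: only the thread that claims first_error_tid performs the one-time setup, so the time-stamp base cannot be reset twice.

  #include <atomic>
  #include <cstdio>

  static std::atomic<long> first_error_tid{-1};

  static void report_and_die_sketch(long mytid) {
    long expected = -1;
    if (first_error_tid.compare_exchange_strong(expected, mytid)) {
      // Only the first reporting thread gets here, so the time-stamp base
      // (and any other one-time setup) is initialized exactly once.
      printf("thread %ld performs the one-time initialization\n", mytid);
    } else {
      // Later or recursive errors skip re-initialization.
      printf("thread %ld skips it; first was %ld\n", mytid, first_error_tid.load());
    }
  }

  int main() {
    report_and_die_sketch(11);   // initializes
    report_and_die_sketch(22);   // does not re-initialize
    return 0;
  }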
I think this initialization either needs to be moved to when we initialize log/out or it needs to be moved into the branch were we only go for the first error: 1125 if (first_error_tid == -1 && 1126 Atomic::cmpxchg_ptr(mytid, &first_error_tid, -1) == -1) { 1127 // init time-stamp here Thanks, David > Thanks, > Tobias > >> >> Thanks, >> David >> >>> Thanks, >>> Tobias >>> >>> [1] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b >>> [2] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b#l1.102 >>> From david.holmes at oracle.com Wed Feb 24 07:03:14 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 24 Feb 2016 17:03:14 +1000 Subject: (S) RFR: 8150506: Remove unused locks Message-ID: <56CD55B2.3040300@oracle.com> I stumbled across the fact that the following locks are no longer being used in the VM: Runtime: - Interrupt_lock - ProfileVM_lock - ObjAllocPost_lock Serviceability: -JvmtiPendingEvent_lock GC: - CMark_lock - CMRegionStack_lock so unless there are objections I will remove them. A reviewer from each area would be appreciated. bug: https://bugs.openjdk.java.net/browse/JDK-8150506 webrev: http://cr.openjdk.java.net/~dholmes/8150506/webrev/ Thanks, David From markus.gronlund at oracle.com Wed Feb 24 07:15:04 2016 From: markus.gronlund at oracle.com (Markus Gronlund) Date: Tue, 23 Feb 2016 23:15:04 -0800 (PST) Subject: (S) RFR: 8150506: Remove unused locks In-Reply-To: <56CD55B2.3040300@oracle.com> References: <56CD55B2.3040300@oracle.com> Message-ID: <930bfa18-16f9-4ea8-95b5-8436f46967d9@default> Looks good! /Markus -----Original Message----- From: David Holmes Sent: den 24 februari 2016 08:03 To: hotspot-dev developers; serviceability-dev Subject: (S) RFR: 8150506: Remove unused locks I stumbled across the fact that the following locks are no longer being used in the VM: Runtime: - Interrupt_lock - ProfileVM_lock - ObjAllocPost_lock Serviceability: -JvmtiPendingEvent_lock GC: - CMark_lock - CMRegionStack_lock so unless there are objections I will remove them. A reviewer from each area would be appreciated. bug: https://bugs.openjdk.java.net/browse/JDK-8150506 webrev: http://cr.openjdk.java.net/~dholmes/8150506/webrev/ Thanks, David From david.holmes at oracle.com Wed Feb 24 07:21:43 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 24 Feb 2016 17:21:43 +1000 Subject: (S) RFR: 8150506: Remove unused locks In-Reply-To: <930bfa18-16f9-4ea8-95b5-8436f46967d9@default> References: <56CD55B2.3040300@oracle.com> <930bfa18-16f9-4ea8-95b5-8436f46967d9@default> Message-ID: <56CD5A07.3020002@oracle.com> On 24/02/2016 5:15 PM, Markus Gronlund wrote: > Looks good! Thanks Markus! David > /Markus > > -----Original Message----- > From: David Holmes > Sent: den 24 februari 2016 08:03 > To: hotspot-dev developers; serviceability-dev > Subject: (S) RFR: 8150506: Remove unused locks > > I stumbled across the fact that the following locks are no longer being used in the VM: > > Runtime: > - Interrupt_lock > - ProfileVM_lock > - ObjAllocPost_lock > > Serviceability: > -JvmtiPendingEvent_lock > > GC: > - CMark_lock > - CMRegionStack_lock > > so unless there are objections I will remove them. A reviewer from each area would be appreciated. 
> > bug: https://bugs.openjdk.java.net/browse/JDK-8150506 > > webrev: http://cr.openjdk.java.net/~dholmes/8150506/webrev/ > > Thanks, > David > From tobias.hartmann at oracle.com Wed Feb 24 07:34:59 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 24 Feb 2016 08:34:59 +0100 Subject: [9] RFR(S): 8150441: CompileTask::print_impl() is broken after JDK-8146905 In-Reply-To: <56CCFC01.6030603@oracle.com> References: <56CC321F.8050209@oracle.com> <56CC5281.8010109@oracle.com> <56CC56E8.1010808@oracle.com> <56CCFC01.6030603@oracle.com> Message-ID: <56CD5D23.4050206@oracle.com> Hi David, On 24.02.2016 01:40, David Holmes wrote: > Hi Tobias, > > On 23/02/2016 10:56 PM, Tobias Hartmann wrote: >> Hi David, >> >> thanks for having a look! >> >> On 23.02.2016 13:37, David Holmes wrote: >>> Hi Tobias, >>> >>> On 23/02/2016 8:19 PM, Tobias Hartmann wrote: >>>> Hi, >>>> >>>> please review the following patch. >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8150441 >>>> http://cr.openjdk.java.net/~thartmann/8150441/webrev.00/ >>>> >>>> The fix for JDK-8146905 [1] removed staticBufferStream and instead passes VMError::out/log to CompileTask::print_line_on_error() to print the current compile task if an error occurs. The problem is that fdStream VMError::out/log does not initialize the TimeStamp outputStream::_stamp and we hit the "must not be clear" assert in TimeStamp::milliseconds() which is called from CompileTask::print_impl(). Before, the TimeStamp was initialized in the staticBufferStream constructor [2]. >>> >>> I find the original code difficult to follow. >> >> In the original code, the staticBufferStream constructor initializes "_stamp" which is inherited from outputStream [2]. >> >> staticBufferStream sbs(buffer, sizeof(buffer), &out); >> report(&sbs, false); >> >> The staticBufferStream is passed on to >> -> VMError::report(outputStream* st, ..) >> -> CompileTask::print_line_on_error(..) >> -> CompileTask::print(..) >> -> CompileTask::print_impl(..) >> which then invokes st->time_stamp().milliseconds() with st->time_stamp()._counter == 0. >> >>> The staticBufferStream did initialize its own stamp, but AFAICS that stamp is never used in relation to the wrapped outer-stream, so somehow that stamp must have been exposed through one of the methods that didn't delegate to the outer stream ?? >> >> Yes, the stamp initialized by staticBufferStream is exposed through st->time_stamp() which does not delegate to the outer stream. > > Got it - the timestamp was obtained directly from the stream instance, not the wrapped stream, and not used internally as I had assumed. Thanks. > >>>> The time stamps should be explicitly initialized like we do in ostream_init(). I verified that my patch solves the problem. >>> >>> Why is the timestamp of a stream not initialized when the stream is constructed? Why are we deferring it until report_and_die() ? >> >> I'm not sure about this but I think we could eagerly initialize the timestamp in fdStream::fdStream() or outputStream::outputStream(). > > I was thinking more about when we construct and assign out/log rather than the actual constructors, but that may work too. I think there is a potential bug with the proposed fix. 
We now have this "initialization" code at the start of report_and_die: > > 1102 { > 1103 // Don't allocate large buffer on stack > 1104 static char buffer[O_BUFLEN]; > 1105 out.set_scratch_buffer(buffer, sizeof(buffer)); > 1106 log.set_scratch_buffer(buffer, sizeof(buffer)); > 1107 > 1108 // Initialize time stamps to use the same base. > 1109 out.time_stamp().update_to(1); > 1110 log.time_stamp().update_to(1); > > but this can be executed by more than one thread. The setting of the scratch buffer is idempotent so it doesn't matter if it is executed multiple times (though that makes it fragile as idempotency is not an obvious requirement). But the timestamp.update_to(1) will cause the timestamp to be reset multiple times. This may be benign but is probably not what is expected. I think this initialization either needs to be moved to when we initialize log/out or it needs to be moved into the branch were we only go for the first error: > > 1125 if (first_error_tid == -1 && > 1126 Atomic::cmpxchg_ptr(mytid, &first_error_tid, -1) == -1) { > 1127 > // init time-stamp here Right, the timestamps should only be initialized once. Since out/log are static fields, we could only initialize the timestamp in their constructors. Therefore, I would like to go with the solution you suggested and move the initialization to the "first error" branch: http://cr.openjdk.java.net/~thartmann/8150441/webrev.01/ Thanks, Tobias > Thanks, > David > >> Thanks, >> Tobias >> >>> >>> Thanks, >>> David >>> >>>> Thanks, >>>> Tobias >>>> >>>> [1] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b >>>> [2] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b#l1.102 >>>> From thomas.schatzl at oracle.com Wed Feb 24 08:08:15 2016 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 24 Feb 2016 09:08:15 +0100 Subject: (S) RFR: 8150506: Remove unused locks In-Reply-To: <56CD55B2.3040300@oracle.com> References: <56CD55B2.3040300@oracle.com> Message-ID: <1456301295.2212.0.camel@oracle.com> Hi David, On Wed, 2016-02-24 at 17:03 +1000, David Holmes wrote: > I stumbled across the fact that the following locks are no longer > being > used in the VM: > > Runtime: > - Interrupt_lock > - ProfileVM_lock > - ObjAllocPost_lock > > Serviceability: > -JvmtiPendingEvent_lock > > GC: > - CMark_lock > - CMRegionStack_lock looks good. Please update the copyright for fprofiler.cpp too. No need for re -review. Thanks, Thomas From mikael.gerdin at oracle.com Wed Feb 24 08:38:26 2016 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Wed, 24 Feb 2016 09:38:26 +0100 Subject: (S) RFR: 8150506: Remove unused locks In-Reply-To: <56CD55B2.3040300@oracle.com> References: <56CD55B2.3040300@oracle.com> Message-ID: <56CD6C02.2080002@oracle.com> Hi David, On 2016-02-24 08:03, David Holmes wrote: > I stumbled across the fact that the following locks are no longer being > used in the VM: > > Runtime: > - Interrupt_lock > - ProfileVM_lock > - ObjAllocPost_lock > > Serviceability: > -JvmtiPendingEvent_lock > > GC: > - CMark_lock > - CMRegionStack_lock GC Lock removal looks good. /Mikael > > so unless there are objections I will remove them. A reviewer from each > area would be appreciated. 
> > bug: https://bugs.openjdk.java.net/browse/JDK-8150506 > > webrev: http://cr.openjdk.java.net/~dholmes/8150506/webrev/ > > Thanks, > David From david.holmes at oracle.com Wed Feb 24 08:50:37 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 24 Feb 2016 18:50:37 +1000 Subject: [9] RFR(S): 8150441: CompileTask::print_impl() is broken after JDK-8146905 In-Reply-To: <56CD5D23.4050206@oracle.com> References: <56CC321F.8050209@oracle.com> <56CC5281.8010109@oracle.com> <56CC56E8.1010808@oracle.com> <56CCFC01.6030603@oracle.com> <56CD5D23.4050206@oracle.com> Message-ID: <56CD6EDD.6070307@oracle.com> Hi Tobias, On 24/02/2016 5:34 PM, Tobias Hartmann wrote: > Hi David, > > On 24.02.2016 01:40, David Holmes wrote: >> Hi Tobias, >> >> On 23/02/2016 10:56 PM, Tobias Hartmann wrote: >>> Hi David, >>> >>> thanks for having a look! >>> >>> On 23.02.2016 13:37, David Holmes wrote: >>>> Hi Tobias, >>>> >>>> On 23/02/2016 8:19 PM, Tobias Hartmann wrote: >>>>> Hi, >>>>> >>>>> please review the following patch. >>>>> >>>>> https://bugs.openjdk.java.net/browse/JDK-8150441 >>>>> http://cr.openjdk.java.net/~thartmann/8150441/webrev.00/ >>>>> >>>>> The fix for JDK-8146905 [1] removed staticBufferStream and instead passes VMError::out/log to CompileTask::print_line_on_error() to print the current compile task if an error occurs. The problem is that fdStream VMError::out/log does not initialize the TimeStamp outputStream::_stamp and we hit the "must not be clear" assert in TimeStamp::milliseconds() which is called from CompileTask::print_impl(). Before, the TimeStamp was initialized in the staticBufferStream constructor [2]. >>>> >>>> I find the original code difficult to follow. >>> >>> In the original code, the staticBufferStream constructor initializes "_stamp" which is inherited from outputStream [2]. >>> >>> staticBufferStream sbs(buffer, sizeof(buffer), &out); >>> report(&sbs, false); >>> >>> The staticBufferStream is passed on to >>> -> VMError::report(outputStream* st, ..) >>> -> CompileTask::print_line_on_error(..) >>> -> CompileTask::print(..) >>> -> CompileTask::print_impl(..) >>> which then invokes st->time_stamp().milliseconds() with st->time_stamp()._counter == 0. >>> >>>> The staticBufferStream did initialize its own stamp, but AFAICS that stamp is never used in relation to the wrapped outer-stream, so somehow that stamp must have been exposed through one of the methods that didn't delegate to the outer stream ?? >>> >>> Yes, the stamp initialized by staticBufferStream is exposed through st->time_stamp() which does not delegate to the outer stream. >> >> Got it - the timestamp was obtained directly from the stream instance, not the wrapped stream, and not used internally as I had assumed. Thanks. >> >>>>> The time stamps should be explicitly initialized like we do in ostream_init(). I verified that my patch solves the problem. >>>> >>>> Why is the timestamp of a stream not initialized when the stream is constructed? Why are we deferring it until report_and_die() ? >>> >>> I'm not sure about this but I think we could eagerly initialize the timestamp in fdStream::fdStream() or outputStream::outputStream(). >> >> I was thinking more about when we construct and assign out/log rather than the actual constructors, but that may work too. I think there is a potential bug with the proposed fix. 
We now have this "initialization" code at the start of report_and_die: >> >> 1102 { >> 1103 // Don't allocate large buffer on stack >> 1104 static char buffer[O_BUFLEN]; >> 1105 out.set_scratch_buffer(buffer, sizeof(buffer)); >> 1106 log.set_scratch_buffer(buffer, sizeof(buffer)); >> 1107 >> 1108 // Initialize time stamps to use the same base. >> 1109 out.time_stamp().update_to(1); >> 1110 log.time_stamp().update_to(1); >> >> but this can be executed by more than one thread. The setting of the scratch buffer is idempotent so it doesn't matter if it is executed multiple times (though that makes it fragile as idempotency is not an obvious requirement). But the timestamp.update_to(1) will cause the timestamp to be reset multiple times. This may be benign but is probably not what is expected. I think this initialization either needs to be moved to when we initialize log/out or it needs to be moved into the branch were we only go for the first error: >> >> 1125 if (first_error_tid == -1 && >> 1126 Atomic::cmpxchg_ptr(mytid, &first_error_tid, -1) == -1) { >> 1127 >> // init time-stamp here > > Right, the timestamps should only be initialized once. Since out/log are static fields, we could only initialize the timestamp in their constructors. Therefore, I would like to go with the solution you suggested and move the initialization to the "first error" branch: > > http://cr.openjdk.java.net/~thartmann/8150441/webrev.01/ Okay this seems fine. Thanks, David > Thanks, > Tobias > >> Thanks, >> David >> >>> Thanks, >>> Tobias >>> >>>> >>>> Thanks, >>>> David >>>> >>>>> Thanks, >>>>> Tobias >>>>> >>>>> [1] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b >>>>> [2] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b#l1.102 >>>>> From david.holmes at oracle.com Wed Feb 24 08:52:49 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 24 Feb 2016 18:52:49 +1000 Subject: (S) RFR: 8150506: Remove unused locks In-Reply-To: <1456301295.2212.0.camel@oracle.com> References: <56CD55B2.3040300@oracle.com> <1456301295.2212.0.camel@oracle.com> Message-ID: <56CD6F61.6060908@oracle.com> Thanks Thomas! David On 24/02/2016 6:08 PM, Thomas Schatzl wrote: > Hi David, > > On Wed, 2016-02-24 at 17:03 +1000, David Holmes wrote: >> I stumbled across the fact that the following locks are no longer >> being >> used in the VM: >> >> Runtime: >> - Interrupt_lock >> - ProfileVM_lock >> - ObjAllocPost_lock >> >> Serviceability: >> -JvmtiPendingEvent_lock >> >> GC: >> - CMark_lock >> - CMRegionStack_lock > > looks good. > > Please update the copyright for fprofiler.cpp too. No need for re > -review. > > Thanks, > Thomas > From david.holmes at oracle.com Wed Feb 24 08:56:30 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 24 Feb 2016 18:56:30 +1000 Subject: (S) RFR: 8150506: Remove unused locks In-Reply-To: <56CD6C02.2080002@oracle.com> References: <56CD55B2.3040300@oracle.com> <56CD6C02.2080002@oracle.com> Message-ID: <56CD703E.7060907@oracle.com> Thanks Mikael! David On 24/02/2016 6:38 PM, Mikael Gerdin wrote: > Hi David, > > On 2016-02-24 08:03, David Holmes wrote: >> I stumbled across the fact that the following locks are no longer being >> used in the VM: >> >> Runtime: >> - Interrupt_lock >> - ProfileVM_lock >> - ObjAllocPost_lock >> >> Serviceability: >> -JvmtiPendingEvent_lock >> >> GC: >> - CMark_lock >> - CMRegionStack_lock > > GC Lock removal looks good. > /Mikael > >> >> so unless there are objections I will remove them. A reviewer from each >> area would be appreciated. 
>> >> bug: https://bugs.openjdk.java.net/browse/JDK-8150506 >> >> webrev: http://cr.openjdk.java.net/~dholmes/8150506/webrev/ >> >> Thanks, >> David From tobias.hartmann at oracle.com Wed Feb 24 09:22:24 2016 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 24 Feb 2016 10:22:24 +0100 Subject: [9] RFR(S): 8150441: CompileTask::print_impl() is broken after JDK-8146905 In-Reply-To: <56CD6EDD.6070307@oracle.com> References: <56CC321F.8050209@oracle.com> <56CC5281.8010109@oracle.com> <56CC56E8.1010808@oracle.com> <56CCFC01.6030603@oracle.com> <56CD5D23.4050206@oracle.com> <56CD6EDD.6070307@oracle.com> Message-ID: <56CD7650.7080901@oracle.com> Hi David, thanks for the review! Best regards, Tobias On 24.02.2016 09:50, David Holmes wrote: > Hi Tobias, > > On 24/02/2016 5:34 PM, Tobias Hartmann wrote: >> Hi David, >> >> On 24.02.2016 01:40, David Holmes wrote: >>> Hi Tobias, >>> >>> On 23/02/2016 10:56 PM, Tobias Hartmann wrote: >>>> Hi David, >>>> >>>> thanks for having a look! >>>> >>>> On 23.02.2016 13:37, David Holmes wrote: >>>>> Hi Tobias, >>>>> >>>>> On 23/02/2016 8:19 PM, Tobias Hartmann wrote: >>>>>> Hi, >>>>>> >>>>>> please review the following patch. >>>>>> >>>>>> https://bugs.openjdk.java.net/browse/JDK-8150441 >>>>>> http://cr.openjdk.java.net/~thartmann/8150441/webrev.00/ >>>>>> >>>>>> The fix for JDK-8146905 [1] removed staticBufferStream and instead passes VMError::out/log to CompileTask::print_line_on_error() to print the current compile task if an error occurs. The problem is that fdStream VMError::out/log does not initialize the TimeStamp outputStream::_stamp and we hit the "must not be clear" assert in TimeStamp::milliseconds() which is called from CompileTask::print_impl(). Before, the TimeStamp was initialized in the staticBufferStream constructor [2]. >>>>> >>>>> I find the original code difficult to follow. >>>> >>>> In the original code, the staticBufferStream constructor initializes "_stamp" which is inherited from outputStream [2]. >>>> >>>> staticBufferStream sbs(buffer, sizeof(buffer), &out); >>>> report(&sbs, false); >>>> >>>> The staticBufferStream is passed on to >>>> -> VMError::report(outputStream* st, ..) >>>> -> CompileTask::print_line_on_error(..) >>>> -> CompileTask::print(..) >>>> -> CompileTask::print_impl(..) >>>> which then invokes st->time_stamp().milliseconds() with st->time_stamp()._counter == 0. >>>> >>>>> The staticBufferStream did initialize its own stamp, but AFAICS that stamp is never used in relation to the wrapped outer-stream, so somehow that stamp must have been exposed through one of the methods that didn't delegate to the outer stream ?? >>>> >>>> Yes, the stamp initialized by staticBufferStream is exposed through st->time_stamp() which does not delegate to the outer stream. >>> >>> Got it - the timestamp was obtained directly from the stream instance, not the wrapped stream, and not used internally as I had assumed. Thanks. >>> >>>>>> The time stamps should be explicitly initialized like we do in ostream_init(). I verified that my patch solves the problem. >>>>> >>>>> Why is the timestamp of a stream not initialized when the stream is constructed? Why are we deferring it until report_and_die() ? >>>> >>>> I'm not sure about this but I think we could eagerly initialize the timestamp in fdStream::fdStream() or outputStream::outputStream(). >>> >>> I was thinking more about when we construct and assign out/log rather than the actual constructors, but that may work too. I think there is a potential bug with the proposed fix. 
We now have this "initialization" code at the start of report_and_die: >>> >>> 1102 { >>> 1103 // Don't allocate large buffer on stack >>> 1104 static char buffer[O_BUFLEN]; >>> 1105 out.set_scratch_buffer(buffer, sizeof(buffer)); >>> 1106 log.set_scratch_buffer(buffer, sizeof(buffer)); >>> 1107 >>> 1108 // Initialize time stamps to use the same base. >>> 1109 out.time_stamp().update_to(1); >>> 1110 log.time_stamp().update_to(1); >>> >>> but this can be executed by more than one thread. The setting of the scratch buffer is idempotent so it doesn't matter if it is executed multiple times (though that makes it fragile as idempotency is not an obvious requirement). But the timestamp.update_to(1) will cause the timestamp to be reset multiple times. This may be benign but is probably not what is expected. I think this initialization either needs to be moved to when we initialize log/out or it needs to be moved into the branch were we only go for the first error: >>> >>> 1125 if (first_error_tid == -1 && >>> 1126 Atomic::cmpxchg_ptr(mytid, &first_error_tid, -1) == -1) { >>> 1127 >>> // init time-stamp here >> >> Right, the timestamps should only be initialized once. Since out/log are static fields, we could only initialize the timestamp in their constructors. Therefore, I would like to go with the solution you suggested and move the initialization to the "first error" branch: >> >> http://cr.openjdk.java.net/~thartmann/8150441/webrev.01/ > > Okay this seems fine. > > Thanks, > David > >> Thanks, >> Tobias >> >>> Thanks, >>> David >>> >>>> Thanks, >>>> Tobias >>>> >>>>> >>>>> Thanks, >>>>> David >>>>> >>>>>> Thanks, >>>>>> Tobias >>>>>> >>>>>> [1] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b >>>>>> [2] http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/64ba9950558b#l1.102 >>>>>> From dmitry.dmitriev at oracle.com Wed Feb 24 10:42:51 2016 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Wed, 24 Feb 2016 13:42:51 +0300 Subject: RFR(XS): 8149973: Optimize object alignment check in debug builds. Message-ID: <56CD892B.20409@oracle.com> Hello, Please, review small optimization to the object alignment check in the debug builds. In this fix I replace division by MinObjAlignmentInBytes to bitwise AND operation with MinObjAlignmentInBytesMask, because MinObjAlignmentInBytes is a power of two. Suggested construction already used in MacroAssembler, e.g. hotspot/src/cpu/x86/vm/c1_MacroAssembler_x86.cpp). JBS: https://bugs.openjdk.java.net/browse/JDK-8149973 webrev.00: http://cr.openjdk.java.net/~ddmitriev/8149973/webrev.00/ Testing: jprt, hotspot_all Thanks, Dmitry From david.holmes at oracle.com Wed Feb 24 11:28:17 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 24 Feb 2016 21:28:17 +1000 Subject: RFR(XS): 8149973: Optimize object alignment check in debug builds. In-Reply-To: <56CD892B.20409@oracle.com> References: <56CD892B.20409@oracle.com> Message-ID: <56CD93D1.6020606@oracle.com> Hi Dmitry, On 24/02/2016 8:42 PM, Dmitry Dmitriev wrote: > Hello, > > Please, review small optimization to the object alignment check in the > debug builds. In this fix I replace division by MinObjAlignmentInBytes > to bitwise AND operation with MinObjAlignmentInBytesMask, because > MinObjAlignmentInBytes is a power of two. Suggested construction already > used in MacroAssembler, e.g. > hotspot/src/cpu/x86/vm/c1_MacroAssembler_x86.cpp). 
> > JBS: https://bugs.openjdk.java.net/browse/JDK-8149973 > webrev.00: http://cr.openjdk.java.net/~ddmitriev/8149973/webrev.00/ > > Testing: jprt, hotspot_all Seems functionally correct - and as you say mirrors existing code. Cheers, David > Thanks, > Dmitry From dmitry.dmitriev at oracle.com Wed Feb 24 11:37:47 2016 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Wed, 24 Feb 2016 14:37:47 +0300 Subject: RFR(XS): 8149973: Optimize object alignment check in debug builds. In-Reply-To: <56CD93D1.6020606@oracle.com> References: <56CD892B.20409@oracle.com> <56CD93D1.6020606@oracle.com> Message-ID: <56CD960B.3040608@oracle.com> Hi David, Thank you for the review! Dmitry On 24.02.2016 14:28, David Holmes wrote: > Hi Dmitry, > > On 24/02/2016 8:42 PM, Dmitry Dmitriev wrote: >> Hello, >> >> Please, review small optimization to the object alignment check in the >> debug builds. In this fix I replace division by MinObjAlignmentInBytes >> to bitwise AND operation with MinObjAlignmentInBytesMask, because >> MinObjAlignmentInBytes is a power of two. Suggested construction already >> used in MacroAssembler, e.g. >> hotspot/src/cpu/x86/vm/c1_MacroAssembler_x86.cpp). >> >> JBS: https://bugs.openjdk.java.net/browse/JDK-8149973 >> webrev.00: http://cr.openjdk.java.net/~ddmitriev/8149973/webrev.00/ >> >> Testing: jprt, hotspot_all > > Seems functionally correct - and as you say mirrors existing code. > > Cheers, > David > >> Thanks, >> Dmitry From coleen.phillimore at oracle.com Wed Feb 24 12:17:50 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 24 Feb 2016 07:17:50 -0500 Subject: (S) RFR: 8150506: Remove unused locks In-Reply-To: <56CD55B2.3040300@oracle.com> References: <56CD55B2.3040300@oracle.com> Message-ID: <56CD9F6E.7090806@oracle.com> Nice! Coleen On 2/24/16 2:03 AM, David Holmes wrote: > I stumbled across the fact that the following locks are no longer > being used in the VM: > > Runtime: > - Interrupt_lock > - ProfileVM_lock > - ObjAllocPost_lock > > Serviceability: > -JvmtiPendingEvent_lock > > GC: > - CMark_lock > - CMRegionStack_lock > > so unless there are objections I will remove them. A reviewer from > each area would be appreciated. > > bug: https://bugs.openjdk.java.net/browse/JDK-8150506 > > webrev: http://cr.openjdk.java.net/~dholmes/8150506/webrev/ > > Thanks, > David From david.holmes at oracle.com Wed Feb 24 12:21:10 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 24 Feb 2016 22:21:10 +1000 Subject: (S) RFR: 8150506: Remove unused locks In-Reply-To: <56CD9F6E.7090806@oracle.com> References: <56CD55B2.3040300@oracle.com> <56CD9F6E.7090806@oracle.com> Message-ID: <56CDA036.9090304@oracle.com> On 24/02/2016 10:17 PM, Coleen Phillimore wrote: > > Nice! > Coleen Thanks Coleen! I learned my CDE skills from the best! ;-) David > On 2/24/16 2:03 AM, David Holmes wrote: >> I stumbled across the fact that the following locks are no longer >> being used in the VM: >> >> Runtime: >> - Interrupt_lock >> - ProfileVM_lock >> - ObjAllocPost_lock >> >> Serviceability: >> -JvmtiPendingEvent_lock >> >> GC: >> - CMark_lock >> - CMRegionStack_lock >> >> so unless there are objections I will remove them. A reviewer from >> each area would be appreciated. 
>> >> bug: https://bugs.openjdk.java.net/browse/JDK-8150506 >> >> webrev: http://cr.openjdk.java.net/~dholmes/8150506/webrev/ >> >> Thanks, >> David > From coleen.phillimore at oracle.com Wed Feb 24 12:35:01 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 24 Feb 2016 07:35:01 -0500 Subject: RFR(XS): 8149973: Optimize object alignment check in debug builds. In-Reply-To: <56CD892B.20409@oracle.com> References: <56CD892B.20409@oracle.com> Message-ID: <56CDA375.4090501@oracle.com> From http://cr.openjdk.java.net/~ddmitriev/8149973/webrev.00/src/share/vm/gc/g1/g1OopClosures.inline.hpp.udiff.html http://cr.openjdk.java.net/~ddmitriev/8149973/webrev.00/src/share/vm/gc/g1/g1RemSet.inline.hpp.udiff.html Can you just call assert(check_obj_alignment(o), "not oop aligned"); rather than repeating the & expression? The check_obj_alignment is inlined. Coleen On 2/24/16 5:42 AM, Dmitry Dmitriev wrote: > Hello, > > Please, review small optimization to the object alignment check in the > debug builds. In this fix I replace division by MinObjAlignmentInBytes > to bitwise AND operation with MinObjAlignmentInBytesMask, because > MinObjAlignmentInBytes is a power of two. Suggested construction > already used in MacroAssembler, e.g. > hotspot/src/cpu/x86/vm/c1_MacroAssembler_x86.cpp). > > JBS: https://bugs.openjdk.java.net/browse/JDK-8149973 > webrev.00: http://cr.openjdk.java.net/~ddmitriev/8149973/webrev.00/ > > Testing: jprt, hotspot_all > > Thanks, > Dmitry From aph at redhat.com Wed Feb 24 17:29:23 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 24 Feb 2016 17:29:23 +0000 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject Message-ID: <56CDE873.2060701@redhat.com> What are the semantics of Unsafe.weakCompareAndSwapObject? These methods seem to be undocumented. Here's my guess: compareAndSwapObject : acquire, release weakCompareAndSwapObject: nothing weakCompareAndSwapObjectAcquire: acquire weakCompareAndSwapObjectRelease: release Andrew. From aph at redhat.com Wed Feb 24 17:33:40 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 24 Feb 2016 17:33:40 +0000 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CDE873.2060701@redhat.com> References: <56CDE873.2060701@redhat.com> Message-ID: <56CDE974.7070002@redhat.com> On 02/24/2016 05:29 PM, Andrew Haley wrote: > What are the semantics of Unsafe.weakCompareAndSwapObject? > These methods seem to be undocumented. > > Here's my guess: > > compareAndSwapObject : acquire, release > weakCompareAndSwapObject: nothing > weakCompareAndSwapObjectAcquire: acquire > weakCompareAndSwapObjectRelease: release ...but not all of these seem to have C2 graph nodes. I can only see WeakCompareAndSwapX and CompareAndExchangeX. I'm guessing that WeakCompareAndSwapX corresponds to no acquire and no release; CompareAndExchangeX is an acquire and a release. Andrew. From aleksey.shipilev at oracle.com Wed Feb 24 17:44:47 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 24 Feb 2016 20:44:47 +0300 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CDE873.2060701@redhat.com> References: <56CDE873.2060701@redhat.com> Message-ID: <56CDEC0F.5020708@oracle.com> On 02/24/2016 08:29 PM, Andrew Haley wrote: > compareAndSwapObject : acquire, release > weakCompareAndSwapObject: nothing > weakCompareAndSwapObjectAcquire: acquire > weakCompareAndSwapObjectRelease: release Yes, should be like that. 
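For readers following the thread, the mapping guessed above has a close analogy in C++11 memory orders. The sketch below is only that analogy; it is not the HotSpot intrinsic or VarHandles code, the function names are invented, and the Java method each one mimics is named in the comments.

  #include <atomic>
  #include <cstdio>

  std::atomic<int> cell{0};

  // compareAndSwapObject (strong): full acquire+release semantics.
  bool cas_strong(int expected, int desired) {
    return cell.compare_exchange_strong(expected, desired,
                                        std::memory_order_acq_rel,
                                        std::memory_order_acquire);
  }

  // weakCompareAndSwapObject: may fail spuriously, promises no ordering.
  bool cas_weak_plain(int expected, int desired) {
    return cell.compare_exchange_weak(expected, desired,
                                      std::memory_order_relaxed,
                                      std::memory_order_relaxed);
  }

  // weakCompareAndSwapObjectAcquire: acquire only.
  bool cas_weak_acquire(int expected, int desired) {
    return cell.compare_exchange_weak(expected, desired,
                                      std::memory_order_acquire,
                                      std::memory_order_acquire);
  }

  // weakCompareAndSwapObjectRelease: release only.
  bool cas_weak_release(int expected, int desired) {
    return cell.compare_exchange_weak(expected, desired,
                                      std::memory_order_release,
                                      std::memory_order_relaxed);
  }

  int main() {
    printf("strong CAS 0->1: %d\n", (int)cas_strong(0, 1));
    printf("weak CAS 1->2:   %d\n", (int)cas_weak_plain(1, 2));  // may print 0: a weak CAS can fail spuriously
    return 0;
  }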
In retrospect, we should have documented them straight in Unsafe.java, not in VarHandles, for which these methods are destined. -Aleksey P.S. Well, you have said that to myself half a year ago: http://mail.openjdk.java.net/pipermail/jmm-dev/2015-August/000192.html From aph at redhat.com Wed Feb 24 17:53:05 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 24 Feb 2016 17:53:05 +0000 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CDEC0F.5020708@oracle.com> References: <56CDE873.2060701@redhat.com> <56CDEC0F.5020708@oracle.com> Message-ID: <56CDEE01.30709@redhat.com> On 02/24/2016 05:44 PM, Aleksey Shipilev wrote: > Yes, should be like that. In retrospect, we should have documented them > straight in Unsafe.java, not in VarHandles, for which these methods are > destined. Yea. This stuff is far too important to be uncommented. I'm particularly baffled that only some of these seem to have graph nodes. Andrew. From adinn at redhat.com Wed Feb 24 18:04:23 2016 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 24 Feb 2016 18:04:23 +0000 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CDE974.7070002@redhat.com> References: <56CDE873.2060701@redhat.com> <56CDE974.7070002@redhat.com> Message-ID: <56CDF0A7.9050700@redhat.com> On 24/02/16 17:33, Andrew Haley wrote: > On 02/24/2016 05:29 PM, Andrew Haley wrote: >> What are the semantics of Unsafe.weakCompareAndSwapObject? >> These methods seem to be undocumented. >> >> Here's my guess: >> >> compareAndSwapObject : acquire, release >> weakCompareAndSwapObject: nothing >> weakCompareAndSwapObjectAcquire: acquire >> weakCompareAndSwapObjectRelease: release > > ...but not all of these seem to have C2 graph nodes. I can only see > WeakCompareAndSwapX and CompareAndExchangeX. > > I'm guessing that WeakCompareAndSwapX corresponds to no acquire > and no release; CompareAndExchangeX is an acquire and a release. Are you making this change for AArch64? If so then the important question here is whether these are generated in graph shapes which include a MemBarAcquire or MemBarRelease. The AArch64 predicates which elide the barriers for these nodes are tuned only to the presence of CompareAndSwapX and expect to see a MemBarAcquire or MemBarRelease wrapped around it. They will not currently match subgraphs which contain these other nodes or which omit the membars. regards, Andrew Dinn ----------- From aph at redhat.com Wed Feb 24 18:05:45 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 24 Feb 2016 18:05:45 +0000 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CDF0A7.9050700@redhat.com> References: <56CDE873.2060701@redhat.com> <56CDE974.7070002@redhat.com> <56CDF0A7.9050700@redhat.com> Message-ID: <56CDF0F9.2070605@redhat.com> On 02/24/2016 06:04 PM, Andrew Dinn wrote: > On 24/02/16 17:33, Andrew Haley wrote: >> On 02/24/2016 05:29 PM, Andrew Haley wrote: >>> What are the semantics of Unsafe.weakCompareAndSwapObject? >>> These methods seem to be undocumented. >>> >>> Here's my guess: >>> >>> compareAndSwapObject : acquire, release >>> weakCompareAndSwapObject: nothing >>> weakCompareAndSwapObjectAcquire: acquire >>> weakCompareAndSwapObjectRelease: release >> >> ...but not all of these seem to have C2 graph nodes. I can only see >> WeakCompareAndSwapX and CompareAndExchangeX. >> >> I'm guessing that WeakCompareAndSwapX corresponds to no acquire >> and no release; CompareAndExchangeX is an acquire and a release. > > Are you making this change for AArch64? 
If so then the important > question here is whether these are generated in graph shapes which > include a MemBarAcquire or MemBarRelease. The AArch64 predicates which > elide the barriers for these nodes are tuned only to the presence of > CompareAndSwapX and expect to see a MemBarAcquire or MemBarRelease > wrapped around it. They will not currently match subgraphs which contain > these other nodes or which omit the membars. We've now got WeakCompareAndSwapX and compareAndExchangeX. The weak variants shouldn't affect the barrier removal code because there won't be any barriers to remove. However, the strong variants have new names. I'm guessing that we'll need to change this: bool is_CAS(int opcode) { return (opcode == Op_CompareAndSwapI || opcode == Op_CompareAndSwapL || opcode == Op_CompareAndSwapN || opcode == Op_CompareAndSwapP); } to add in the strong (but not the weak) variants of CAS. Andrew. From aleksey.shipilev at oracle.com Wed Feb 24 18:11:23 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 24 Feb 2016 21:11:23 +0300 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CDEE01.30709@redhat.com> References: <56CDE873.2060701@redhat.com> <56CDEC0F.5020708@oracle.com> <56CDEE01.30709@redhat.com> Message-ID: <56CDF24B.2090600@oracle.com> On 02/24/2016 08:53 PM, Andrew Haley wrote: > On 02/24/2016 05:44 PM, Aleksey Shipilev wrote: >> Yes, should be like that. In retrospect, we should have documented them >> straight in Unsafe.java, not in VarHandles, for which these methods are >> destined. > > Yea. This stuff is far too important to be uncommented. Yes, that was an overlook, basically stemming from our original plan of pushing VarHandles in one large blob. Let me document those Unsafe entries. > I'm particularly baffled that only some of these seem to have graph nodes. Current simplistic implementation does only emit surrounding barriers around CAS/CAE nodes. Even that change was enough of a headache to get right. We were optimistically hoping that you can match the surrounding barriers into the weaker CAS/CAE form, when weaker barriers are around the CAS/CAE node. But either way, we could also split CAS/CAE into multiple {none, acq, rel, acqrel} nodes and match them directly -- which might both be the conceptually cleaner, and much more intrusive change. Thanks, -Aleksey From aph at redhat.com Wed Feb 24 18:31:22 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 24 Feb 2016 18:31:22 +0000 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CDF24B.2090600@oracle.com> References: <56CDE873.2060701@redhat.com> <56CDEC0F.5020708@oracle.com> <56CDEE01.30709@redhat.com> <56CDF24B.2090600@oracle.com> Message-ID: <56CDF6FA.9070101@redhat.com> On 02/24/2016 06:11 PM, Aleksey Shipilev wrote: > Current simplistic implementation does only emit surrounding barriers > around CAS/CAE nodes. Even that change was enough of a headache to get > right. > > We were optimistically hoping that you can match the surrounding > barriers into the weaker CAS/CAE form, when weaker barriers are around > the CAS/CAE node. But either way, we could also split CAS/CAE into > multiple {none, acq, rel, acqrel} nodes and match them directly -- which > might both be the conceptually cleaner, and much more intrusive change. Aha! OK, I understand it now. Well, we certainly have all of the machinery to do that. 
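The predicate change sketched above can be modelled in isolation as follows. This is a self-contained toy, not the opto sources: the opcode enum is faked locally, and the CompareAndExchange/WeakCompareAndSwap names are assumed to follow the existing naming pattern.

  #include <cstdio>

  enum Opcode {
    Op_CompareAndSwapI, Op_CompareAndSwapL, Op_CompareAndSwapN, Op_CompareAndSwapP,
    Op_CompareAndExchangeI, Op_CompareAndExchangeL, Op_CompareAndExchangeN, Op_CompareAndExchangeP,
    Op_WeakCompareAndSwapI, Op_WeakCompareAndSwapL, Op_WeakCompareAndSwapN, Op_WeakCompareAndSwapP
  };

  static bool is_CAS(int opcode) {
    switch (opcode) {
      // Strong flavors carry acquire+release semantics and are treated like
      // the existing CompareAndSwap nodes for barrier elision.
      case Op_CompareAndSwapI:     case Op_CompareAndSwapL:
      case Op_CompareAndSwapN:     case Op_CompareAndSwapP:
      case Op_CompareAndExchangeI: case Op_CompareAndExchangeL:
      case Op_CompareAndExchangeN: case Op_CompareAndExchangeP:
        return true;
      // Weak flavors are deliberately excluded: there are no surrounding
      // membars for the matcher to elide.
      default:
        return false;
    }
  }

  int main() {
    printf("CompareAndExchangeP  treated as CAS: %d\n", (int)is_CAS(Op_CompareAndExchangeP));
    printf("WeakCompareAndSwapP  treated as CAS: %d\n", (int)is_CAS(Op_WeakCompareAndSwapP));
    return 0;
  }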
However, I was rather hoping that in the long run we could do it the other way around: get rid of most of these memory barriers in the compiler. Instead we could have LoadAcquire, StoreRelease, and so on. The back ends could then expand them into whatever instructions were necessary. Of course, the optimizers would have to respect the barrier effects of these nodes. That would be much cleaner and simpler -- not to mention easier to get right -- than what we have today. Andrew. From aleksey.shipilev at oracle.com Wed Feb 24 18:42:44 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 24 Feb 2016 21:42:44 +0300 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CDF6FA.9070101@redhat.com> References: <56CDE873.2060701@redhat.com> <56CDEC0F.5020708@oracle.com> <56CDEE01.30709@redhat.com> <56CDF24B.2090600@oracle.com> <56CDF6FA.9070101@redhat.com> Message-ID: <56CDF9A4.9010609@oracle.com> On 02/24/2016 09:31 PM, Andrew Haley wrote: > Well, we certainly have all of the machinery to do that. However, I > was rather hoping that in the long run we could do it the other way > around: get rid of most of these memory barriers in the compiler. > Instead we could have LoadAcquire, StoreRelease, and so on. The back > ends could then expand them into whatever instructions were necessary. > Of course, the optimizers would have to respect the barrier effects of > these nodes. > > That would be much cleaner and simpler -- not to mention easier to get > right -- than what we have today. I agree. But I also shudder at a thought about the size of such a change at this point. BTW, answering your original question. Even without Unsafe docs, these are the clear hints on targeted semantics: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/file/8b9fdaeb8c57/src/share/vm/opto/library_call.cpp#l671 -Aleksey From aph at redhat.com Wed Feb 24 18:50:06 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 24 Feb 2016 18:50:06 +0000 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CDF9A4.9010609@oracle.com> References: <56CDE873.2060701@redhat.com> <56CDEC0F.5020708@oracle.com> <56CDEE01.30709@redhat.com> <56CDF24B.2090600@oracle.com> <56CDF6FA.9070101@redhat.com> <56CDF9A4.9010609@oracle.com> Message-ID: <56CDFB5E.6020903@redhat.com> On 02/24/2016 06:42 PM, Aleksey Shipilev wrote: > BTW, answering your original question. Even without Unsafe docs, these > are the clear hints on targeted semantics: > > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/file/8b9fdaeb8c57/src/share/vm/opto/library_call.cpp#l671 Ah yea, I should have guessed it would be there! :-) Thanks, Andrew. From gromero at linux.vnet.ibm.com Wed Feb 24 19:50:13 2016 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Wed, 24 Feb 2016 16:50:13 -0300 Subject: RFR(M) 8150353: PPC64LE: Support RTM on linux Message-ID: <56CE0975.8060807@linux.vnet.ibm.com> Hi Martin, Both little and big endian Linux kernel contain the syscall change, so I did not include: #if defined(COMPILER2) && (defined(AIX) || defined(VM_LITTLE_ENDIAN) in globalDefinitions_ppc.hpp. Please, could you review the following change? Bug: https://bugs.openjdk.java.net/browse/JDK-8150353 Webrev (hotspot): http://81.de.7a9f.ip4.static.sl-reverse.com/webrev/ Summary: * Enable RTM support for Linux on PPC64 (LE and BE). * Fix C2 compiler buffer size issue. Thank you. 
Regards, Gustavo From aleksey.shipilev at oracle.com Wed Feb 24 22:43:46 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Thu, 25 Feb 2016 01:43:46 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays Message-ID: <56CE3222.6040207@oracle.com> Hi, When instantiating arrays from Java, we have to zero the backing storage to match JLS requirements. In some cases, like with the subsequent arraycopy, compilers are able to remove zeroing. However, in a generic case where a complicated processing is done after the allocation, compilers are unable to reliably figure out the array is covered completely. JDK-8150463 is a motivational example of this: Java level String concat loses to C2's OptimizeStringConcat because C2 can skip zeroing for its own allocations. It might make sense to allow new Unsafe method that will return uninitialized arrays to trusted Java code: https://bugs.openjdk.java.net/browse/JDK-8150465 Webrevs: http://cr.openjdk.java.net/~shade/8150465/webrev.hs.01/ http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.01/ It helps that we already have java.lang.reflect.Array.newArray intrinsic, so we can reuse a lot of code. The intrinsic code nukes the array allocation in the same way PhaseStringOpts::allocate_byte_array does it in OptimizeStringConcat. Alas, no such luck in C1, and so it stays untouched, falling back to normal Java allocations. Performance data shows the promising improvements: http://cr.openjdk.java.net/~shade/8150465/notes.txt Also, using this new method brings the best Java-level-only concatenation strategy to OptimizeStringConcat levels, and beyond. Testing: new test; targeted microbenchmarks; JPRT (in progress) Thanks, -Aleksey From vladimir.kozlov at oracle.com Wed Feb 24 23:21:55 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 24 Feb 2016 15:21:55 -0800 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CE3222.6040207@oracle.com> References: <56CE3222.6040207@oracle.com> Message-ID: <56CE3B13.3090700@oracle.com> What is your story for GC? When an array become visible and GC happens, it will expect only initialized arrays. Thanks, Vladimir On 2/24/16 2:43 PM, Aleksey Shipilev wrote: > Hi, > > When instantiating arrays from Java, we have to zero the backing storage > to match JLS requirements. In some cases, like with the subsequent > arraycopy, compilers are able to remove zeroing. However, in a generic > case where a complicated processing is done after the allocation, > compilers are unable to reliably figure out the array is covered completely. > > JDK-8150463 is a motivational example of this: Java level String concat > loses to C2's OptimizeStringConcat because C2 can skip zeroing for its > own allocations. > > It might make sense to allow new Unsafe method that will return > uninitialized arrays to trusted Java code: > https://bugs.openjdk.java.net/browse/JDK-8150465 > > Webrevs: > http://cr.openjdk.java.net/~shade/8150465/webrev.hs.01/ > http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.01/ > > It helps that we already have java.lang.reflect.Array.newArray > intrinsic, so we can reuse a lot of code. The intrinsic code nukes the > array allocation in the same way PhaseStringOpts::allocate_byte_array > does it in OptimizeStringConcat. Alas, no such luck in C1, and so it > stays untouched, falling back to normal Java allocations. 
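As an aside for readers, the distinction being exploited here has a direct analogy in C++, where value-initialized and default-initialized arrays differ in exactly the same way. The snippet below is only that analogy, not the proposed Unsafe API.

  #include <cstdio>

  int main() {
    const int n = 16;

    int* zeroed = new int[n]();   // value-initialized: every element is 0,
                                  // the moral equivalent of a normal Java "new int[n]"
    int* raw    = new int[n];     // default-initialized: contents indeterminate,
                                  // the moral equivalent of the proposed uninitialized allocation

    // The only safe way to use 'raw' is to store to every element before any load.
    for (int i = 0; i < n; i++) {
      raw[i] = i;
    }

    printf("zeroed[0]=%d raw[0]=%d\n", zeroed[0], raw[0]);

    delete[] zeroed;
    delete[] raw;
    return 0;
  }

The discipline for the second array is the same one a trusted caller of the new method must follow: cover every element before anything reads the array.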
> > Performance data shows the promising improvements: > http://cr.openjdk.java.net/~shade/8150465/notes.txt > > Also, using this new method brings the best Java-level-only > concatenation strategy to OptimizeStringConcat levels, and beyond. > > Testing: new test; targeted microbenchmarks; JPRT (in progress) > > Thanks, > -Aleksey > From aleksey.shipilev at oracle.com Wed Feb 24 23:51:26 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Thu, 25 Feb 2016 02:51:26 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CE3B13.3090700@oracle.com> References: <56CE3222.6040207@oracle.com> <56CE3B13.3090700@oracle.com> Message-ID: <56CE41FE.6070103@oracle.com> On 02/25/2016 02:21 AM, Vladimir Kozlov wrote: > What is your story for GC? When an array become visible and GC happens, > it will expect only initialized arrays. New method allows primitive arrays only, and its headers should be intact. This is corroborated by the new jtreg test (and benchmarks!) that allocate lots of uninitialized arrays, and obviously they get GCed. Are there specific concerns about GC seeing an uninitialized primitive array? Thanks, -Aleksey From vladimir.kozlov at oracle.com Wed Feb 24 23:54:22 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 24 Feb 2016 15:54:22 -0800 Subject: RFR(M) 8150353: PPC64LE: Support RTM on linux In-Reply-To: <56CE0975.8060807@linux.vnet.ibm.com> References: <56CE0975.8060807@linux.vnet.ibm.com> Message-ID: <56CE42AE.7080605@oracle.com> My concern (but I am not export) is Linux version encoding. Is it true that each value in x.y.z is less then 256? Why not keep them as separate int values? I also thought we have OS versions in make files but we check only gcc version there. Do you have problem with ScratchBufferBlob only on PPC or on some other platforms too? May be we should make MAX_inst_size as platform specific value. Thanks, Vladimir On 2/24/16 11:50 AM, Gustavo Romero wrote: > Hi Martin, > > Both little and big endian Linux kernel contain the syscall change, so > I did not include: > > #if defined(COMPILER2) && (defined(AIX) || defined(VM_LITTLE_ENDIAN) > > in globalDefinitions_ppc.hpp. > > Please, could you review the following change? > > Bug: https://bugs.openjdk.java.net/browse/JDK-8150353 > Webrev (hotspot): http://81.de.7a9f.ip4.static.sl-reverse.com/webrev/ > > Summary: > > * Enable RTM support for Linux on PPC64 (LE and BE). > * Fix C2 compiler buffer size issue. > > Thank you. > > Regards, > Gustavo > From vladimir.kozlov at oracle.com Thu Feb 25 00:20:24 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 24 Feb 2016 16:20:24 -0800 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CE41FE.6070103@oracle.com> References: <56CE3222.6040207@oracle.com> <56CE3B13.3090700@oracle.com> <56CE41FE.6070103@oracle.com> Message-ID: <56CE48C8.2050707@oracle.com> On 2/24/16 3:51 PM, Aleksey Shipilev wrote: > On 02/25/2016 02:21 AM, Vladimir Kozlov wrote: >> What is your story for GC? When an array become visible and GC happens, >> it will expect only initialized arrays. > > New method allows primitive arrays only, and its headers should be > intact. This is corroborated by the new jtreg test (and benchmarks!) > that allocate lots of uninitialized arrays, and obviously they get GCed. Yes, primitive arrays are fine if the header is correct. 
In this case changes are fine but you may need to add a check in inline_unsafe_newArray() that it is only primitive types. testIAE() should throw exception if IllegalArgumentException is not thrown. Thanks, Vladimir > > Are there specific concerns about GC seeing an uninitialized primitive > array? > > Thanks, > -Aleksey > From chris.plummer at oracle.com Thu Feb 25 00:57:46 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Wed, 24 Feb 2016 16:57:46 -0800 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56B40480.6060703@oracle.com> References: <56B2FBB4.70407@oracle.com> <56B3FE5A.9010806@oracle.com> <56B40480.6060703@oracle.com> Message-ID: <56CE518A.2050706@oracle.com> Hello, I still need to finish up review of this change. I added the change that David suggested. Since it's minor, I'll just post the code from arguments.cpp here: #if !defined(COMPILER2) && !INCLUDE_JVMCI UNSUPPORTED_OPTION(ProfileInterpreter, "ProfileInterpreter"); UNSUPPORTED_OPTION(TraceProfileInterpreter, "TraceProfileInterpreter"); UNSUPPORTED_OPTION(PrintMethodData, "PrintMethodData"); #endif The ProfileInterpreter related code was in the original code review. The other two flag checks I just added. thanks, Chris On 2/4/16 6:10 PM, Chris Plummer wrote: > Hi David, > > On 2/4/16 5:43 PM, David Holmes wrote: >> Hi Chris, >> >> On 4/02/2016 5:20 PM, Chris Plummer wrote: >>> Hello, >>> >>> Please review the following for removing Method::_method_data when only >>> supporting C1 (or more specifically, when not supporting C2 or JVMCI). >> >> Does JVMCI exist with C1 only? > My understanding is it can exists with C2 or on its own, but currently > is not included with C1 builds. >> The COMPILER2_OR_JVMCI conjunction makes things a bit messy. Can we >> abstract that behind a single variable, INCLUDE_METHOD_DATA (or some >> such) to make it cleaner? > I'll also be using COMPILER2_OR_JVMCI with another change to that > removes some MethodCounter fields. So yes, I can add > INCLUDE_METHOD_DATA, but then will need another INCLUDE_XXX for the > MethodCounter fields I'll be conditionally removing. >> >>> This will help reduce dynamic footprint usage for the minimal VM. >>> >>> As part of this fix, ProfileInterperter is forced to false unless C2 or >>> JVMCI is supported. This was mainly done to avoid crashes if it is >>> turned on and Method::_method_data has been excluded, but also because >>> it is not useful except to C2 or JVMCI. >> >> Are you saying that the information generated by ProfileInterpreter >> is only used by C2 and JVMCI? If that is case it should really have >> been a C2 only flag. >> > That is my understanding. Coleen confirmed it for me. I believe she > got her info from the compiler team. BTW, we need a mechanism to make > these conditionally unsupported flags a constant value when they are > not supported. It would help deadstrip code. >> If ProfileInterpreter is forced to false then shouldn't you also be >> checking TraceProfileInterpreter and PrintMethodData use as well > Yes, I can add those. > > thanks, > > Chris >> >> Thanks, >> David >> >>> Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 >>> >>> Test with JPRT -testset hotspot. 
>>> >>> thanks, >>> >>> Chris > From serguei.spitsyn at oracle.com Thu Feb 25 02:16:00 2016 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Wed, 24 Feb 2016 18:16:00 -0800 Subject: (S) RFR: 8150506: Remove unused locks In-Reply-To: <56CD55B2.3040300@oracle.com> References: <56CD55B2.3040300@oracle.com> Message-ID: <56CE63E0.7030103@oracle.com> On 2/23/16 23:03, David Holmes wrote: > I stumbled across the fact that the following locks are no longer > being used in the VM: > > Runtime: > - Interrupt_lock > - ProfileVM_lock > - ObjAllocPost_lock > > Serviceability: > -JvmtiPendingEvent_lock This looks good too. Thanks, Serguei > > GC: > - CMark_lock > - CMRegionStack_lock > > so unless there are objections I will remove them. A reviewer from > each area would be appreciated. > > bug: https://bugs.openjdk.java.net/browse/JDK-8150506 > > webrev: http://cr.openjdk.java.net/~dholmes/8150506/webrev/ > > Thanks, > David From david.holmes at oracle.com Thu Feb 25 03:23:31 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 25 Feb 2016 13:23:31 +1000 Subject: (S) RFR: 8150506: Remove unused locks In-Reply-To: <56CE63E0.7030103@oracle.com> References: <56CD55B2.3040300@oracle.com> <56CE63E0.7030103@oracle.com> Message-ID: <56CE73B3.2060008@oracle.com> Thanks Serguei - but already pushed. :) David On 25/02/2016 12:16 PM, serguei.spitsyn at oracle.com wrote: > On 2/23/16 23:03, David Holmes wrote: >> I stumbled across the fact that the following locks are no longer >> being used in the VM: >> >> Runtime: >> - Interrupt_lock >> - ProfileVM_lock >> - ObjAllocPost_lock >> >> Serviceability: >> -JvmtiPendingEvent_lock > > This looks good too. > > Thanks, > Serguei > >> >> GC: >> - CMark_lock >> - CMRegionStack_lock >> >> so unless there are objections I will remove them. A reviewer from >> each area would be appreciated. >> >> bug: https://bugs.openjdk.java.net/browse/JDK-8150506 >> >> webrev: http://cr.openjdk.java.net/~dholmes/8150506/webrev/ >> >> Thanks, >> David > From david.holmes at oracle.com Thu Feb 25 07:46:17 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 25 Feb 2016 17:46:17 +1000 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56CE518A.2050706@oracle.com> References: <56B2FBB4.70407@oracle.com> <56B3FE5A.9010806@oracle.com> <56B40480.6060703@oracle.com> <56CE518A.2050706@oracle.com> Message-ID: <56CEB149.6050208@oracle.com> On 25/02/2016 10:57 AM, Chris Plummer wrote: > Hello, > > I still need to finish up review of this change. I added the change that > David suggested. Since it's minor, I'll just post the code from > arguments.cpp here: > > #if !defined(COMPILER2) && !INCLUDE_JVMCI > UNSUPPORTED_OPTION(ProfileInterpreter, "ProfileInterpreter"); > UNSUPPORTED_OPTION(TraceProfileInterpreter, "TraceProfileInterpreter"); > UNSUPPORTED_OPTION(PrintMethodData, "PrintMethodData"); > #endif > > The ProfileInterpreter related code was in the original code review. The > other two flag checks I just added. That addition seems fine to me. But I'll leave it to the compiler folk to review the core changes. Thanks, David > thanks, > > Chris > > On 2/4/16 6:10 PM, Chris Plummer wrote: >> Hi David, >> >> On 2/4/16 5:43 PM, David Holmes wrote: >>> Hi Chris, >>> >>> On 4/02/2016 5:20 PM, Chris Plummer wrote: >>>> Hello, >>>> >>>> Please review the following for removing Method::_method_data when only >>>> supporting C1 (or more specifically, when not supporting C2 or JVMCI). >>> >>> Does JVMCI exist with C1 only? 
>> My understanding is it can exists with C2 or on its own, but currently >> is not included with C1 builds. >>> The COMPILER2_OR_JVMCI conjunction makes things a bit messy. Can we >>> abstract that behind a single variable, INCLUDE_METHOD_DATA (or some >>> such) to make it cleaner? >> I'll also be using COMPILER2_OR_JVMCI with another change to that >> removes some MethodCounter fields. So yes, I can add >> INCLUDE_METHOD_DATA, but then will need another INCLUDE_XXX for the >> MethodCounter fields I'll be conditionally removing. >>> >>>> This will help reduce dynamic footprint usage for the minimal VM. >>>> >>>> As part of this fix, ProfileInterperter is forced to false unless C2 or >>>> JVMCI is supported. This was mainly done to avoid crashes if it is >>>> turned on and Method::_method_data has been excluded, but also because >>>> it is not useful except to C2 or JVMCI. >>> >>> Are you saying that the information generated by ProfileInterpreter >>> is only used by C2 and JVMCI? If that is case it should really have >>> been a C2 only flag. >>> >> That is my understanding. Coleen confirmed it for me. I believe she >> got her info from the compiler team. BTW, we need a mechanism to make >> these conditionally unsupported flags a constant value when they are >> not supported. It would help deadstrip code. >>> If ProfileInterpreter is forced to false then shouldn't you also be >>> checking TraceProfileInterpreter and PrintMethodData use as well >> Yes, I can add those. >> >> thanks, >> >> Chris >>> >>> Thanks, >>> David >>> >>>> Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 >>>> >>>> Test with JPRT -testset hotspot. >>>> >>>> thanks, >>>> >>>> Chris >> > From david.holmes at oracle.com Thu Feb 25 07:54:49 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 25 Feb 2016 17:54:49 +1000 Subject: RFR(M) 8150353: PPC64LE: Support RTM on linux In-Reply-To: <56CE0975.8060807@linux.vnet.ibm.com> References: <56CE0975.8060807@linux.vnet.ibm.com> Message-ID: <56CEB349.1050102@oracle.com> Hi Gustavo, Just a point of order. All contributions to the OpenJDK must be made through OpenJDK infrastructure. So you must either get someone to host your webrev on cr.openjdk.java.net, or include it inline in email (attachments tend to get stripped). Sorry for the inconvenience. David On 25/02/2016 5:50 AM, Gustavo Romero wrote: > Hi Martin, > > Both little and big endian Linux kernel contain the syscall change, so > I did not include: > > #if defined(COMPILER2) && (defined(AIX) || defined(VM_LITTLE_ENDIAN) > > in globalDefinitions_ppc.hpp. > > Please, could you review the following change? > > Bug: https://bugs.openjdk.java.net/browse/JDK-8150353 > Webrev (hotspot): http://81.de.7a9f.ip4.static.sl-reverse.com/webrev/ > > Summary: > > * Enable RTM support for Linux on PPC64 (LE and BE). > * Fix C2 compiler buffer size issue. > > Thank you. 
> > Regards, > Gustavo > From aleksey.shipilev at oracle.com Thu Feb 25 08:10:40 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Thu, 25 Feb 2016 11:10:40 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CE48C8.2050707@oracle.com> References: <56CE3222.6040207@oracle.com> <56CE3B13.3090700@oracle.com> <56CE41FE.6070103@oracle.com> <56CE48C8.2050707@oracle.com> Message-ID: <56CEB700.9020603@oracle.com> On 02/25/2016 03:20 AM, Vladimir Kozlov wrote: > On 2/24/16 3:51 PM, Aleksey Shipilev wrote: >> On 02/25/2016 02:21 AM, Vladimir Kozlov wrote: >>> What is your story for GC? When an array become visible and GC happens, >>> it will expect only initialized arrays. >> >> New method allows primitive arrays only, and its headers should be >> intact. This is corroborated by the new jtreg test (and benchmarks!) >> that allocate lots of uninitialized arrays, and obviously they get GCed. > > Yes, primitive arrays are fine if the header is correct. In this case > changes are fine but you may need to add a check in > inline_unsafe_newArray() that it is only primitive types. Alas, the class argument may not be constant, and so we would need a runtime check there, which would duplicate the check we already have in Unsafe.java. I'd prefer to follow the upcoming pattern in Mikael's Unsafe cleanup with making as much checks on Java side. > testIAE() should throw exception if IllegalArgumentException is not thrown. D'uh, of course! See updates: http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.01/ http://cr.openjdk.java.net/~shade/8150465/webrev.hs.02/ Cheers, -Aleksey From gromero at linux.vnet.ibm.com Thu Feb 25 08:11:41 2016 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Thu, 25 Feb 2016 05:11:41 -0300 Subject: RFR(M) 8150353: PPC64LE: Support RTM on linux In-Reply-To: <56CEB349.1050102@oracle.com> References: <56CE0975.8060807@linux.vnet.ibm.com> <56CEB349.1050102@oracle.com> Message-ID: <56CEB73D.8030904@linux.vnet.ibm.com> Hi David, OK, I'll fix that. Should I re-send the RFR with the right URL and abandon this one? In case a contribution is not so small, even so is it fine to include it inline? Thank you. Regards, Gustavo On 25-02-2016 04:54, David Holmes wrote: > Hi Gustavo, > > Just a point of order. All contributions to the OpenJDK must be made through OpenJDK infrastructure. So you must either get someone to host your webrev on cr.openjdk.java.net, or include it inline in > email (attachments tend to get stripped). > > Sorry for the inconvenience. > > David > > On 25/02/2016 5:50 AM, Gustavo Romero wrote: >> Hi Martin, >> >> Both little and big endian Linux kernel contain the syscall change, so >> I did not include: >> >> #if defined(COMPILER2) && (defined(AIX) || defined(VM_LITTLE_ENDIAN) >> >> in globalDefinitions_ppc.hpp. >> >> Please, could you review the following change? >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8150353 >> Webrev (hotspot): http://81.de.7a9f.ip4.static.sl-reverse.com/webrev/ >> >> Summary: >> >> * Enable RTM support for Linux on PPC64 (LE and BE). >> * Fix C2 compiler buffer size issue. >> >> Thank you. 
>> >> Regards, >> Gustavo >> > From david.holmes at oracle.com Thu Feb 25 08:21:22 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 25 Feb 2016 18:21:22 +1000 Subject: RFR(M) 8150353: PPC64LE: Support RTM on linux In-Reply-To: <56CEB73D.8030904@linux.vnet.ibm.com> References: <56CE0975.8060807@linux.vnet.ibm.com> <56CEB349.1050102@oracle.com> <56CEB73D.8030904@linux.vnet.ibm.com> Message-ID: <56CEB982.5030706@oracle.com> On 25/02/2016 6:11 PM, Gustavo Romero wrote: > Hi David, > > OK, I'll fix that. > > Should I re-send the RFR with the right URL and abandon this one? I think you can just post the new URL to this one. > In case a contribution is not so small, even so is it fine to include it inline? Well it's preferable, for larger contributions, to find someone to host the webrev for you. Cheers, David > Thank you. > > Regards, > Gustavo > > On 25-02-2016 04:54, David Holmes wrote: >> Hi Gustavo, >> >> Just a point of order. All contributions to the OpenJDK must be made through OpenJDK infrastructure. So you must either get someone to host your webrev on cr.openjdk.java.net, or include it inline in >> email (attachments tend to get stripped). >> >> Sorry for the inconvenience. >> >> David >> >> On 25/02/2016 5:50 AM, Gustavo Romero wrote: >>> Hi Martin, >>> >>> Both little and big endian Linux kernel contain the syscall change, so >>> I did not include: >>> >>> #if defined(COMPILER2) && (defined(AIX) || defined(VM_LITTLE_ENDIAN) >>> >>> in globalDefinitions_ppc.hpp. >>> >>> Please, could you review the following change? >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8150353 >>> Webrev (hotspot): http://81.de.7a9f.ip4.static.sl-reverse.com/webrev/ >>> >>> Summary: >>> >>> * Enable RTM support for Linux on PPC64 (LE and BE). >>> * Fix C2 compiler buffer size issue. >>> >>> Thank you. >>> >>> Regards, >>> Gustavo >>> >> > From aph at redhat.com Thu Feb 25 09:50:22 2016 From: aph at redhat.com (Andrew Haley) Date: Thu, 25 Feb 2016 09:50:22 +0000 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CE3222.6040207@oracle.com> References: <56CE3222.6040207@oracle.com> Message-ID: <56CECE5E.4080105@redhat.com> This is something of a loaded gun pointed at our feet. We'll have to be extremely careful that we can prove that such arrays are never unsafely published. It's the "generic case where a complicated processing is done after the allocation" I'm worried about. The only way to guarantee safety is to prove that the array reference doesn't escape the thread until the array is fully initialized and a release barrier has been executed. I urge extreme caution. Andrew. From adinn at redhat.com Thu Feb 25 09:50:21 2016 From: adinn at redhat.com (Andrew Dinn) Date: Thu, 25 Feb 2016 09:50:21 +0000 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CDF0F9.2070605@redhat.com> References: <56CDE873.2060701@redhat.com> <56CDE974.7070002@redhat.com> <56CDF0A7.9050700@redhat.com> <56CDF0F9.2070605@redhat.com> Message-ID: <56CECE5D.1050706@redhat.com> On 24/02/16 18:05, Andrew Haley wrote: > On 02/24/2016 06:04 PM, Andrew Dinn wrote: >> On 24/02/16 17:33, Andrew Haley wrote: >>> On 02/24/2016 05:29 PM, Andrew Haley wrote: >>>> What are the semantics of Unsafe.weakCompareAndSwapObject? >>>> These methods seem to be undocumented. 
>>>> >>>> Here's my guess: >>>> >>>> compareAndSwapObject : acquire, release >>>> weakCompareAndSwapObject: nothing >>>> weakCompareAndSwapObjectAcquire: acquire >>>> weakCompareAndSwapObjectRelease: release >>> >>> ...but not all of these seem to have C2 graph nodes. I can only see >>> WeakCompareAndSwapX and CompareAndExchangeX. >>> >>> I'm guessing that WeakCompareAndSwapX corresponds to no acquire >>> and no release; CompareAndExchangeX is an acquire and a release. >> >> Are you making this change for AArch64? If so then the important >> question here is whether these are generated in graph shapes which >> include a MemBarAcquire or MemBarRelease. The AArch64 predicates which >> elide the barriers for these nodes are tuned only to the presence of >> CompareAndSwapX and expect to see a MemBarAcquire or MemBarRelease >> wrapped around it. They will not currently match subgraphs which contain >> these other nodes or which omit the membars. > > We've now got WeakCompareAndSwapX and compareAndExchangeX. > > The weak variants shouldn't affect the barrier removal code because > there won't be any barriers to remove. However, the strong variants > have new names. I'm guessing that we'll need to change this: > > bool is_CAS(int opcode) > { > return (opcode == Op_CompareAndSwapI || > opcode == Op_CompareAndSwapL || > opcode == Op_CompareAndSwapN || > opcode == Op_CompareAndSwapP); > } > > to add in the strong (but not the weak) variants of CAS. Yes, agreed that to handle CompareAndExchangeX all that needs changing is to add the associated opcodes to this method -- that's assuming that the subgraph containing a CompareAndExchangeX node still contains the leading MemBarRelease and trailing MemBarAcquire that we also see generated for CompareAndSwapX. If so then the remaining predicates should recognise CompareAndExchangeX as a CAS needing acquire+release semantics and should inhibit translation of the MemBarRelease and MemBarAcquire to a dmb. However, I am not sure about WeakCompareAndSwapX. I was under the impression from Aleksey's earlier comments and the patch proposed for JDK-8148146 that the plan was for translation of weakCompareAndSwapObjectRelease and weakCompareAndSwapObjectAcquire to generate a subgraph which wraps a WeakCompareAndSwapX node within a leading and trailing MemBarCPUOrder that also contains, respectively, only the leading MemBarRelease or only the trailing MemBarAcquire. If that plan is adopted then the rule predicates will need a small tweak to recognise this new, slightly different configuration: - translation of a WeakCompareAndSwapX node would change according to which of these membar nodes is present (neither ==> cas or ldxr+stxr, leading MemBarRelease only ==> casl or ldxr+stlxr, trailing MemBarAcquire ==> casa or ldaxr+stlr, both ==> error!) - translation of the MemBarAcquire or MemBarRelease to a dmb will need to be inhibited in all cases where a WeakCompareAndSwapX is found in the graph. I think it's relatively straightforward to modify the predicates to identify these extra cases/make the encoding plant the relevant cas or ldxr/stxr flavour based on graph shape. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in UK and Wales under Company Registration No. 
3798903 Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Paul Argiry (US) From aph at redhat.com Thu Feb 25 09:52:02 2016 From: aph at redhat.com (Andrew Haley) Date: Thu, 25 Feb 2016 09:52:02 +0000 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CECE5D.1050706@redhat.com> References: <56CDE873.2060701@redhat.com> <56CDE974.7070002@redhat.com> <56CDF0A7.9050700@redhat.com> <56CDF0F9.2070605@redhat.com> <56CECE5D.1050706@redhat.com> Message-ID: <56CECEC2.8070703@redhat.com> On 25/02/16 09:50, Andrew Dinn wrote: > I think it's relatively straightforward to modify the predicates to > identify these extra cases/make the encoding plant the relevant cas or > ldxr/stxr flavour based on graph shape. OK. I have a patch which refactors CAS to allow any combination of acquire and release. Shall I send that patch and you can build on it? Andrew. From adinn at redhat.com Thu Feb 25 09:52:50 2016 From: adinn at redhat.com (Andrew Dinn) Date: Thu, 25 Feb 2016 09:52:50 +0000 Subject: jdk.internal.misc.Unsafe.weakCompareAndSwapObject In-Reply-To: <56CECEC2.8070703@redhat.com> References: <56CDE873.2060701@redhat.com> <56CDE974.7070002@redhat.com> <56CDF0A7.9050700@redhat.com> <56CDF0F9.2070605@redhat.com> <56CECE5D.1050706@redhat.com> <56CECEC2.8070703@redhat.com> Message-ID: <56CECEF2.80704@redhat.com> On 25/02/16 09:52, Andrew Haley wrote: > On 25/02/16 09:50, Andrew Dinn wrote: >> I think it's relatively straightforward to modify the predicates to >> identify these extra cases/make the encoding plant the relevant cas or >> ldxr/stxr flavour based on graph shape. > > OK. I have a patch which refactors CAS to allow any combination of > acquire and release. Shall I send that patch and you can build on it? Yes, please. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Paul Argiry (US) From thomas.stuefe at gmail.com Thu Feb 25 10:57:25 2016 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 25 Feb 2016 11:57:25 +0100 Subject: RFR(M) 8150353: PPC64LE: Support RTM on linux In-Reply-To: <56CEB982.5030706@oracle.com> References: <56CE0975.8060807@linux.vnet.ibm.com> <56CEB349.1050102@oracle.com> <56CEB73D.8030904@linux.vnet.ibm.com> <56CEB982.5030706@oracle.com> Message-ID: Hi Gustavo, I can host this for you, if you have no access to cr.openjdk.java.net. Just send me the patch file. About your change: src/cpu/ppc/vm/globalDefinitions_ppc.hpp small nit: Could be probably be merged with the paragraph above where we do the same thing for AIX, but I do not have strong emotions. src/cpu/ppc/vm/vm_version_ppc.cpp + // Please, refer to commit 4b4fadba057c1af7689fc8fa182b13baL7 Does this refer to an OpenJDK change? If yes, could you please instead mention the OpenJDK bug number instead? src/os/linux/vm/os_linux.cpp os::Linux::initialize_os_info() Please make this code more robust: - Check the return value of sscanf (must be 3, otherwise your assumption about the version format was wrong) - Could this happen: "3.2" ? If yes, could you please handle it too? - Please handle overflow - if any one of minor/fix is > 256, something sensible should happen (for major, this probably indicates an error). Possibly cap out at 256? If version cannot be read successfully, the VM should not abort imho but behave gracefully. Worst that should happen is that RTM gets deactivated. 
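A standalone sketch of the kind of defensive parsing suggested in those review comments follows. It is not the patch under review: the function name, the 0x00MMmmff packing and the fallback behaviour are assumptions made for the example.

  #include <cstdio>
  #include <sys/utsname.h>

  // Parse "major.minor.fix" from uname(2), tolerating short forms like "3.2",
  // capping oversized components, and failing softly on anything unexpected.
  static int parse_kernel_version(unsigned* packed) {
    struct utsname un;
    if (uname(&un) != 0) return -1;

    unsigned major = 0, minor = 0, fix = 0;
    int n = sscanf(un.release, "%u.%u.%u", &major, &minor, &fix);
    if (n < 2) return -1;          // not even "x.y": refuse to guess
    // n == 2 covers releases like "3.2"; the missing fix level stays 0.

    if (major > 255) return -1;    // almost certainly a parsing problem
    if (minor > 255) minor = 255;  // cap so the packed value stays monotonic
    if (fix   > 255) fix   = 255;

    *packed = (major << 16) | (minor << 8) | fix;
    return 0;
  }

  int main() {
    unsigned v = 0;
    if (parse_kernel_version(&v) == 0) {
      printf("kernel version packed as 0x%06x\n", v);
    } else {
      printf("could not parse the kernel version; the feature check would simply stay off\n");
    }
    return 0;
  }

Packing the three components into one integer keeps later comparisons such as "is the kernel at least 4.2?" down to a single compare, which is the convenience the encoded form buys.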
Kind Regards, Thomas On Thu, Feb 25, 2016 at 9:21 AM, David Holmes wrote: > On 25/02/2016 6:11 PM, Gustavo Romero wrote: > >> Hi David, >> >> OK, I'll fix that. >> >> Should I re-send the RFR with the right URL and abandon this one? >> > > I think you can just post the new URL to this one. > > In case a contribution is not so small, even so is it fine to include it >> inline? >> > > Well it's preferable, for larger contributions, to find someone to host > the webrev for you. > > Cheers, > David > > > Thank you. >> >> Regards, >> Gustavo >> >> On 25-02-2016 04:54, David Holmes wrote: >> >>> Hi Gustavo, >>> >>> Just a point of order. All contributions to the OpenJDK must be made >>> through OpenJDK infrastructure. So you must either get someone to host your >>> webrev on cr.openjdk.java.net, or include it inline in >>> email (attachments tend to get stripped). >>> >>> Sorry for the inconvenience. >>> >>> David >>> >>> On 25/02/2016 5:50 AM, Gustavo Romero wrote: >>> >>>> Hi Martin, >>>> >>>> Both little and big endian Linux kernel contain the syscall change, so >>>> I did not include: >>>> >>>> #if defined(COMPILER2) && (defined(AIX) || defined(VM_LITTLE_ENDIAN) >>>> >>>> in globalDefinitions_ppc.hpp. >>>> >>>> Please, could you review the following change? >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8150353 >>>> Webrev (hotspot): http://81.de.7a9f.ip4.static.sl-reverse.com/webrev/ >>>> >>>> Summary: >>>> >>>> * Enable RTM support for Linux on PPC64 (LE and BE). >>>> * Fix C2 compiler buffer size issue. >>>> >>>> Thank you. >>>> >>>> Regards, >>>> Gustavo >>>> >>>> >>> >> From martin.doerr at sap.com Thu Feb 25 11:43:02 2016 From: martin.doerr at sap.com (Doerr, Martin) Date: Thu, 25 Feb 2016 11:43:02 +0000 Subject: RFR(M) 8150353: PPC64LE: Support RTM on linux In-Reply-To: <56CE42AE.7080605@oracle.com> References: <56CE0975.8060807@linux.vnet.ibm.com> <56CE42AE.7080605@oracle.com> Message-ID: <10c9a1cb6d9b40618a094283ac838038@DEWDFE13DE14.global.corp.sap> Hi Vladimir, thanks for taking a look. About version values: We are using a similar scheme for version checks on AIX where we know that the version values are less than 256. It makes comparisons much more convenient. But I agree that we should double-check if it is guaranteed for linux as well (and possibly add an assertion). About scratch buffer size: We only noticed that the scratch buffer was too small when we enable all RTM features: -XX:+UnlockExperimentalVMOptions -XX:+UseRTMLocking -XX:+UseRTMForStackLocks -XX:+UseRTMDeopt We have only tried on PPC64, but I wonder if the current size is sufficient for x86. I currently don't have access to a Skylake machine. I think adding 1024 bytes to the scratch buffer doesn't hurt. (It may also lead to larger CodeBuffers in output.cpp but I don't think this is problematic as long as the real content gets copied to nmethods.) Would you agree? Best regards, Martin -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Donnerstag, 25. Februar 2016 00:54 To: Gustavo Romero ; Doerr, Martin ; hotspot-dev at openjdk.java.net Cc: brenohl at br.ibm.com Subject: Re: RFR(M) 8150353: PPC64LE: Support RTM on linux My concern (but I am not export) is Linux version encoding. Is it true that each value in x.y.z is less then 256? Why not keep them as separate int values? I also thought we have OS versions in make files but we check only gcc version there. Do you have problem with ScratchBufferBlob only on PPC or on some other platforms too? 
May be we should make MAX_inst_size as platform specific value. Thanks, Vladimir On 2/24/16 11:50 AM, Gustavo Romero wrote: > Hi Martin, > > Both little and big endian Linux kernel contain the syscall change, so > I did not include: > > #if defined(COMPILER2) && (defined(AIX) || defined(VM_LITTLE_ENDIAN) > > in globalDefinitions_ppc.hpp. > > Please, could you review the following change? > > Bug: https://bugs.openjdk.java.net/browse/JDK-8150353 > Webrev (hotspot): http://81.de.7a9f.ip4.static.sl-reverse.com/webrev/ > > Summary: > > * Enable RTM support for Linux on PPC64 (LE and BE). > * Fix C2 compiler buffer size issue. > > Thank you. > > Regards, > Gustavo > From stefan.karlsson at oracle.com Thu Feb 25 12:22:22 2016 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 25 Feb 2016 13:22:22 +0100 Subject: RFR: 8150617: nth_bit and friends are broken Message-ID: <56CEF1FE.5040008@oracle.com> Hi all, Please review this patch to fix the nth_bit, right_n_bits, and left_n_bits macros. http://cr.openjdk.java.net/~stefank/8150617/webrev.00 The macros were broken when expressions with low-precedence operators were used. For example (in 64 bit JVMs): nth_bit(true ? 32 : 64) returns 0x20 instead of 0x100000000 nth_bit(1|2) returns 0x0 instead of 0x8 The fix is to add parentheses around all usages of the macro input parameter. I also added some extra parentheses to further disambiguate the expression for readers of the code. I added STATIC_ASSERTS to show the problem, but I can remove them if someone thinks they are unnecessary. Tested with JPRT. Thanks, StefanK From dmitry.dmitriev at oracle.com Thu Feb 25 12:32:14 2016 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Thu, 25 Feb 2016 15:32:14 +0300 Subject: RFR(XS): 8149973: Optimize object alignment check in debug builds. In-Reply-To: <56CDA375.4090501@oracle.com> References: <56CD892B.20409@oracle.com> <56CDA375.4090501@oracle.com> Message-ID: <56CEF44E.5050803@oracle.com> Hello Coleen, Thank you for suggestion, but I'm not sure about that. check_obj_alignment accepts 'oop obj' argument, but 'o' is a oopDesc. Thanks, Dmitry On 24.02.2016 15:35, Coleen Phillimore wrote: > > From > http://cr.openjdk.java.net/~ddmitriev/8149973/webrev.00/src/share/vm/gc/g1/g1OopClosures.inline.hpp.udiff.html > > http://cr.openjdk.java.net/~ddmitriev/8149973/webrev.00/src/share/vm/gc/g1/g1RemSet.inline.hpp.udiff.html > > > Can you just call > > assert(check_obj_alignment(o), "not oop aligned"); > > rather than repeating the & expression? The check_obj_alignment is > inlined. > > Coleen > > On 2/24/16 5:42 AM, Dmitry Dmitriev wrote: >> Hello, >> >> Please, review small optimization to the object alignment check in >> the debug builds. In this fix I replace division by >> MinObjAlignmentInBytes to bitwise AND operation with >> MinObjAlignmentInBytesMask, because MinObjAlignmentInBytes is a power >> of two. Suggested construction already used in MacroAssembler, e.g. >> hotspot/src/cpu/x86/vm/c1_MacroAssembler_x86.cpp). 
>> >> JBS: https://bugs.openjdk.java.net/browse/JDK-8149973 >> webrev.00: http://cr.openjdk.java.net/~ddmitriev/8149973/webrev.00/ >> >> Testing: jprt, hotspot_all >> >> Thanks, >> Dmitry > From aleksey.shipilev at oracle.com Thu Feb 25 12:39:11 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Thu, 25 Feb 2016 15:39:11 +0300 Subject: RFR: 8150617: nth_bit and friends are broken In-Reply-To: <56CEF1FE.5040008@oracle.com> References: <56CEF1FE.5040008@oracle.com> Message-ID: <56CEF5EF.9080602@oracle.com> On 02/25/2016 03:22 PM, Stefan Karlsson wrote: > Please review this patch to fix the nth_bit, right_n_bits, and > left_n_bits macros. > http://cr.openjdk.java.net/~stefank/8150617/webrev.00 Oh snap. Are there actual usages in VM that break? The patch looks good to me. Thanks, -Aleksey From thomas.schatzl at oracle.com Thu Feb 25 13:39:47 2016 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 25 Feb 2016 14:39:47 +0100 Subject: RFR: 8150617: nth_bit and friends are broken In-Reply-To: <56CEF1FE.5040008@oracle.com> References: <56CEF1FE.5040008@oracle.com> Message-ID: <1456407587.11691.20.camel@oracle.com> Hi, On Thu, 2016-02-25 at 13:22 +0100, Stefan Karlsson wrote: > Hi all, > > Please review this patch to fix the nth_bit, right_n_bits, and > left_n_bits macros. > http://cr.openjdk.java.net/~stefank/8150617/webrev.00 > > The macros were broken when expressions with low-precedence operators > were used. For example (in 64 bit JVMs): > nth_bit(true ? 32 : 64) returns 0x20 instead of 0x100000000 > nth_bit(1|2) returns 0x0 instead of 0x8 > > The fix is to add parentheses around all usages of the macro input > parameter. I also added some extra parentheses to further > disambiguate > the expression for readers of the code. > > I added STATIC_ASSERTS to show the problem, but I can remove them if > someone thinks they are unnecessary. Tested with JPRT. looks good. Thomas From aleksey.shipilev at oracle.com Thu Feb 25 13:44:26 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Thu, 25 Feb 2016 16:44:26 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CECE5E.4080105@redhat.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> Message-ID: <56CF053A.3070607@oracle.com> On 02/25/2016 12:50 PM, Andrew Haley wrote: > This is something of a loaded gun pointed at our feet. We'll have to > be extremely careful that we can prove that such arrays are never > unsafely published. It's the "generic case where a complicated > processing is done after the allocation" I'm worried about. > > The only way to guarantee safety is to prove that the array reference > doesn't escape the thread until the array is fully initialized and a > release barrier has been executed. I think the large part of the concern is the protection of object metadata. Current implementation does not eliminate the subsequent barriers after storing the metadata, and so racy publication of uninitialized array should not violate VM invariants. (Disallowing non-primitive arrays is the second part of safeguards -- no garbage oops in uninitialized arrays!) See e.g. 
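The requirement in that last paragraph can be seen concretely in a small self-contained analogy: fill the buffer, publish the pointer with a release store, and read it back with an acquire load. Everything below is invented for the sketch and is unrelated to the actual Unsafe or VarHandles code.

  #include <atomic>
  #include <cstdio>
  #include <thread>

  static std::atomic<int*> published{nullptr};

  static void producer() {
    int* data = new int[1024];                           // deliberately not zero-filled
    for (int i = 0; i < 1024; i++) {
      data[i] = i;                                       // fully initialize first
    }
    published.store(data, std::memory_order_release);    // the "release barrier" above
  }

  static void consumer() {
    int* data = nullptr;
    while ((data = published.load(std::memory_order_acquire)) == nullptr) {
      // spin until the producer publishes
    }
    printf("consumer sees element 42 = %d\n", data[42]); // guaranteed 42 by the acquire/release pair
    delete[] data;
  }

  int main() {
    std::thread t1(producer);
    std::thread t2(consumer);
    t1.join();
    t2.join();
    return 0;
  }

Dropping the release/acquire pair, or letting the pointer escape before the loop finishes, is exactly the unsafe publication being worried about: the consumer could then observe whatever happened to be in that memory beforehand.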
benchmark disassembly with PrintOptoAssembly: Regular allocation: 100 B17: # B18 <- B24 top-of-loop Freq: 95429.2 100 # TLS is in R15 100 movq [R15 + #120 (8-bit)], RSI # ptr 104 PREFETCHNTA [RSI + #192 (32-bit)] 10b movq [RBX], 0x0000000000000001 # ptr 112 PREFETCHNTA [RSI + #256 (32-bit)] 119 movl [RBX + #8 (8-bit)], narrowklass: precise klass [B: 0x00007fa6b000e0e0:Constant:exact * # compressed klass ptr 120 movl [RBX + #12 (8-bit)], RDX # int 123 PREFETCHNTA [RSI + #320 (32-bit)] 12a movq RDI, RBX # spill 12d addq RDI, #16 # ptr 131 PREFETCHNTA [RSI + #384 (32-bit)] 138 shrq RCX, #3 13c addq RCX, #-2 # long 140 xorq rax, rax # ClearArray: shlq rcx,3 # Convert doublewords to bytes rep stosb # Store rax to *rdi++ while rcx-- 14a movq RBP, R11 # spill 14d movq [rsp + #0], R8 # spill 151 movq [rsp + #8], R10 # spill 156 movq [rsp + #16], R9 # spill 156 15b B18: # B36 B19 <- B26 B17 Freq: 95438.9 15b 15b MEMBAR-storestore (empty encoding) There, a StoreStore membar at the end of allocation gives you a safe semantics. For comparison, Unsafe.allocateArrayUninit allocation: 100 B17: # B18 <- B26 top-of-loop Freq: 79963.3 100 # TLS is in R15 100 movq [R15 + #120 (8-bit)], RBX # ptr 104 PREFETCHNTA [RBX + #192 (32-bit)] 10b movq [RAX], 0x0000000000000001 # ptr 112 PREFETCHNTA [RBX + #256 (32-bit)] 119 movl [RAX + #8 (8-bit)], narrowklass: precise klass [B: 0x00007f5a1c00e0e0:Constant:exact * # compressed klass ptr 120 movl [RAX + #12 (8-bit)], RDX # int 123 PREFETCHNTA [RBX + #320 (32-bit)] 12a PREFETCHNTA [RBX + #384 (32-bit)] 131 movq RBP, R8 # spill 134 movq [rsp + #0], RCX # spill 138 movq [rsp + #8], R9 # spill 13d movq [rsp + #16], R10 # spill 13d 142 B18: # B41 B19 <- B28 B17 Freq: 79971.4 142 142 MEMBAR-storestore (empty encoding) "ClearArray" parts are gone (we nuked it), but the StoreStore is still at our guard, protecting the array metadata. Of course, you will still see garbage data if after storing the array elements into the uninitialized array you would publish it racily. But the same is true for the "regular" allocations and subsequent writes. The only difference is whether you see "real" garbage, or some "synthetic" garbage like zeros. It is, of course, a caller responsibility to publish array safely in both cases, if garbage is unwanted. Aside: I really wanted to coalesce the metadata barriers with final field barriers one day, see https://bugs.openjdk.java.net/browse/JDK-8032481. Cheers, -Aleksey From vladimir.x.ivanov at oracle.com Thu Feb 25 13:53:31 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Thu, 25 Feb 2016 16:53:31 +0300 Subject: RFR: 8150617: nth_bit and friends are broken In-Reply-To: <56CEF1FE.5040008@oracle.com> References: <56CEF1FE.5040008@oracle.com> Message-ID: <56CF075B.5030708@oracle.com> Looks good. Best regards, Vladimir Ivanov On 2/25/16 3:22 PM, Stefan Karlsson wrote: > Hi all, > > Please review this patch to fix the nth_bit, right_n_bits, and > left_n_bits macros. > http://cr.openjdk.java.net/~stefank/8150617/webrev.00 > > The macros were broken when expressions with low-precedence operators > were used. For example (in 64 bit JVMs): > nth_bit(true ? 32 : 64) returns 0x20 instead of 0x100000000 > nth_bit(1|2) returns 0x0 instead of 0x8 > > The fix is to add parentheses around all usages of the macro input > parameter. I also added some extra parentheses to further disambiguate > the expression for readers of the code. > > I added STATIC_ASSERTS to show the problem, but I can remove them if > someone thinks they are unnecessary. 
Tested with JPRT. > > Thanks, > StefanK From aph at redhat.com Thu Feb 25 14:13:46 2016 From: aph at redhat.com (Andrew Haley) Date: Thu, 25 Feb 2016 14:13:46 +0000 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CF053A.3070607@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> Message-ID: <56CF0C1A.4040600@redhat.com> On 02/25/2016 01:44 PM, Aleksey Shipilev wrote: > Of course, you will still see garbage data if after storing the array > elements into the uninitialized array you would publish it racily. But > the same is true for the "regular" allocations and subsequent writes. > The only difference is whether you see "real" garbage, or some > "synthetic" garbage like zeros. It is, of course, a caller > responsibility to publish array safely in both cases, if garbage is > unwanted. Of course, my worry with this optimization assumes that programmers make mistakes. But you did say "complicated processing is done after the allocation." And that's where programmers make mistakes. Andrew. From aleksey.shipilev at oracle.com Thu Feb 25 14:36:19 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Thu, 25 Feb 2016 17:36:19 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CF0C1A.4040600@redhat.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> Message-ID: <56CF1163.4040005@oracle.com> On 02/25/2016 05:13 PM, Andrew Haley wrote: > On 02/25/2016 01:44 PM, Aleksey Shipilev wrote: >> Of course, you will still see garbage data if after storing the array >> elements into the uninitialized array you would publish it racily. But >> the same is true for the "regular" allocations and subsequent writes. >> The only difference is whether you see "real" garbage, or some >> "synthetic" garbage like zeros. It is, of course, a caller >> responsibility to publish array safely in both cases, if garbage is >> unwanted. > > Of course, my worry with this optimization assumes that programmers > make mistakes. But you did say "complicated processing is done after > the allocation." And that's where programmers make mistakes. Of course they do; at least half of the Unsafe methods is suitable for shooting oneself in a foot in creative ways. Unsafe is a sharp tool, and Unsafe callers are trusted in their madness. This is not your average Joe's use case, for sure. In other words, callers can and should provide defense in depth when they are using Unsafe. It's not the goal for Unsafe to provide those defenses, if that contradicts performance goals. If you need defenses, code in plain Java. E.g. for suggested use in StringConcatFactory [1], we say: "StringConcatFactory would probably have to provide a few more checks if using any new Unsafe API: notably the "exactness" debug check in MH_INLINE_SIZED_EXACT should probably be turned on by default -- this will check we never ever construct a String with garbage data." This single index check is much cheaper than defensively zeroing the entire array. 
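To make the intended discipline concrete, here is a rough Java sketch of the allocate-then-fully-overwrite pattern (an illustration only: it assumes the method lands on jdk.internal.misc.Unsafe with the (Class, int) signature discussed in this thread, and the index-based "exactness" check is merely in the spirit of MH_INLINE_SIZED_EXACT, not part of the patch under review):

    import jdk.internal.misc.Unsafe;

    class ConcatSketch {
        // Unsafe.getUnsafe() only succeeds for trusted (JDK-internal) callers.
        private static final Unsafe U = Unsafe.getUnsafe();

        static byte[] concat(byte[] a, byte[] b) {
            // Skip the zeroing: every slot below is overwritten before the array escapes.
            byte[] buf = (byte[]) U.allocateUninitializedArray(byte.class, a.length + b.length);
            int index = 0;
            System.arraycopy(a, 0, buf, index, a.length);
            index += a.length;
            System.arraycopy(b, 0, buf, index, b.length);
            index += b.length;
            // Exactness check: fail loudly if any tail slot was left unwritten.
            if (index != buf.length) {
                throw new AssertionError("array not fully initialized");
            }
            return buf;  // only now may the reference escape
        }
    }
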
Thanks, -Aleksey [1] https://bugs.openjdk.java.net/browse/JDK-8150463 From stefan.karlsson at oracle.com Thu Feb 25 14:59:48 2016 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 25 Feb 2016 15:59:48 +0100 Subject: RFR: 8150617: nth_bit and friends are broken In-Reply-To: <56CEF5EF.9080602@oracle.com> References: <56CEF1FE.5040008@oracle.com> <56CEF5EF.9080602@oracle.com> Message-ID: <56CF16E4.5030705@oracle.com> Hi Aleksey, On 2016-02-25 13:39, Aleksey Shipilev wrote: > On 02/25/2016 03:22 PM, Stefan Karlsson wrote: >> Please review this patch to fix the nth_bit, right_n_bits, and >> left_n_bits macros. >> http://cr.openjdk.java.net/~stefank/8150617/webrev.00 > Oh snap. Are there actual usages in VM that break? A cursory glance at the code didn't show any existing problems. I saw this while I was looking at a problem where we compared singed and unsigned integers in nth_bit. > > The patch looks good to me. Thanks, StefanK > > Thanks, > -Aleksey > From stefan.karlsson at oracle.com Thu Feb 25 15:00:05 2016 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 25 Feb 2016 16:00:05 +0100 Subject: RFR: 8150617: nth_bit and friends are broken In-Reply-To: <1456407587.11691.20.camel@oracle.com> References: <56CEF1FE.5040008@oracle.com> <1456407587.11691.20.camel@oracle.com> Message-ID: <56CF16F5.9030207@oracle.com> Thanks, Thomas. StefanK On 2016-02-25 14:39, Thomas Schatzl wrote: > Hi, > > On Thu, 2016-02-25 at 13:22 +0100, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to fix the nth_bit, right_n_bits, and >> left_n_bits macros. >> http://cr.openjdk.java.net/~stefank/8150617/webrev.00 >> >> The macros were broken when expressions with low-precedence operators >> were used. For example (in 64 bit JVMs): >> nth_bit(true ? 32 : 64) returns 0x20 instead of 0x100000000 >> nth_bit(1|2) returns 0x0 instead of 0x8 >> >> The fix is to add parentheses around all usages of the macro input >> parameter. I also added some extra parentheses to further >> disambiguate >> the expression for readers of the code. >> >> I added STATIC_ASSERTS to show the problem, but I can remove them if >> someone thinks they are unnecessary. Tested with JPRT. > looks good. > > Thomas From stefan.karlsson at oracle.com Thu Feb 25 15:00:24 2016 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 25 Feb 2016 16:00:24 +0100 Subject: RFR: 8150617: nth_bit and friends are broken In-Reply-To: <56CF075B.5030708@oracle.com> References: <56CEF1FE.5040008@oracle.com> <56CF075B.5030708@oracle.com> Message-ID: <56CF1708.2070305@oracle.com> Thanks, Vladimir. StefanK On 2016-02-25 14:53, Vladimir Ivanov wrote: > Looks good. > > Best regards, > Vladimir Ivanov > > On 2/25/16 3:22 PM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to fix the nth_bit, right_n_bits, and >> left_n_bits macros. >> http://cr.openjdk.java.net/~stefank/8150617/webrev.00 >> >> The macros were broken when expressions with low-precedence operators >> were used. For example (in 64 bit JVMs): >> nth_bit(true ? 32 : 64) returns 0x20 instead of 0x100000000 >> nth_bit(1|2) returns 0x0 instead of 0x8 >> >> The fix is to add parentheses around all usages of the macro input >> parameter. I also added some extra parentheses to further disambiguate >> the expression for readers of the code. >> >> I added STATIC_ASSERTS to show the problem, but I can remove them if >> someone thinks they are unnecessary. Tested with JPRT. 
>> >> Thanks, >> StefanK From aph at redhat.com Thu Feb 25 15:06:45 2016 From: aph at redhat.com (Andrew Haley) Date: Thu, 25 Feb 2016 15:06:45 +0000 Subject: RFR: 8150652: Remove unused code in AArch64 back end Message-ID: <56CF1885.1060600@redhat.com> Defining min in this way breaks compilation if min is already a #define, which it is on some compilers. http://cr.openjdk.java.net/~aph/8150652/ Andrew. From paul.sandoz at oracle.com Thu Feb 25 15:47:11 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 25 Feb 2016 16:47:11 +0100 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CF1163.4040005@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> <56CF1163.4040005@oracle.com> Message-ID: <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> > On 25 Feb 2016, at 15:36, Aleksey Shipilev wrote: > > On 02/25/2016 05:13 PM, Andrew Haley wrote: >> On 02/25/2016 01:44 PM, Aleksey Shipilev wrote: >>> Of course, you will still see garbage data if after storing the array >>> elements into the uninitialized array you would publish it racily. But >>> the same is true for the "regular" allocations and subsequent writes. >>> The only difference is whether you see "real" garbage, or some >>> "synthetic" garbage like zeros. It is, of course, a caller >>> responsibility to publish array safely in both cases, if garbage is >>> unwanted. >> >> Of course, my worry with this optimization assumes that programmers >> make mistakes. But you did say "complicated processing is done after >> the allocation." And that's where programmers make mistakes. > > Of course they do; at least half of the Unsafe methods is suitable for > shooting oneself in a foot in creative ways. Unsafe is a sharp tool, and > Unsafe callers are trusted in their madness. This is not your average > Joe's use case, for sure. > FTR the contents of the memory allocated by Unsafe.allocateMemory are also uninitialized. Paul. From jesper.wilhelmsson at oracle.com Thu Feb 25 15:53:28 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Thu, 25 Feb 2016 16:53:28 +0100 Subject: RFR(XS): JDK-8150647 - Quarantine TestPLABResize.java until JDK-8150183 is fixed Message-ID: <56CF2378.9010506@oracle.com> Hi, In order to push hs-rt to main today we need to quarantine two tests that are broken. JDK-8150647 - Quarantine TestPLABResize.java until JDK-8150183 is fixed https://bugs.openjdk.java.net/browse/JDK-8150647 JDK-8150318 - serviceability/dcmd/jvmti/LoadAgentDcmdTest.java - Could not find JDK_DIR/lib/x86_64/libinstrument.so https://bugs.openjdk.java.net/browse/JDK-8150318 I'm quarantining these in my local snapshot before pushing to main. In effect it will be the same thing as pushing hs-rt to main first and then push this change directly to main. Webrev: http://cr.openjdk.java.net/~jwilhelm/8150183/webrev.00/ (A single change with two bug IDs. HG updater will close both quarantine bugs.) Thanks, /Jesper From ioi.lam at oracle.com Thu Feb 25 16:09:58 2016 From: ioi.lam at oracle.com (Ioi Lam) Date: Thu, 25 Feb 2016 08:09:58 -0800 Subject: RFR(XS): JDK-8150647 - Quarantine TestPLABResize.java until JDK-8150183 is fixed In-Reply-To: <56CF2378.9010506@oracle.com> References: <56CF2378.9010506@oracle.com> Message-ID: <56CF2756.207@oracle.com> Looks good. Thanks! - Ioi On 2/25/16 7:53 AM, Jesper Wilhelmsson wrote: > Hi, > > In order to push hs-rt to main today we need to quarantine two tests > that are broken. 
> > JDK-8150647 - Quarantine TestPLABResize.java until JDK-8150183 is fixed > https://bugs.openjdk.java.net/browse/JDK-8150647 > > JDK-8150318 - serviceability/dcmd/jvmti/LoadAgentDcmdTest.java - Could > not find JDK_DIR/lib/x86_64/libinstrument.so > https://bugs.openjdk.java.net/browse/JDK-8150318 > > I'm quarantining these in my local snapshot before pushing to main. In > effect it will be the same thing as pushing hs-rt to main first and > then push this change directly to main. > > Webrev: http://cr.openjdk.java.net/~jwilhelm/8150183/webrev.00/ > > (A single change with two bug IDs. HG updater will close both > quarantine bugs.) > > Thanks, > /Jesper From jesper.wilhelmsson at oracle.com Thu Feb 25 16:14:40 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Thu, 25 Feb 2016 17:14:40 +0100 Subject: RFR(XS): JDK-8150647 - Quarantine TestPLABResize.java until JDK-8150183 is fixed In-Reply-To: <56CF2756.207@oracle.com> References: <56CF2378.9010506@oracle.com> <56CF2756.207@oracle.com> Message-ID: <56CF2870.6010107@oracle.com> Thanks! I see that I managed to link to the main bug in the second case (JDK-8150318). The sub-task that I'm using to push the change is: JDK-8150562 - Quarantine LoadAgentDcmdTest.java due to JDK-8150318 https://bugs.openjdk.java.net/browse/JDK-8150562 /Jesper Den 25/2/16 kl. 17:09, skrev Ioi Lam: > Looks good. > > Thanks! > - Ioi > > On 2/25/16 7:53 AM, Jesper Wilhelmsson wrote: >> Hi, >> >> In order to push hs-rt to main today we need to quarantine two tests that are >> broken. >> >> JDK-8150647 - Quarantine TestPLABResize.java until JDK-8150183 is fixed >> https://bugs.openjdk.java.net/browse/JDK-8150647 >> >> JDK-8150318 - serviceability/dcmd/jvmti/LoadAgentDcmdTest.java - Could not >> find JDK_DIR/lib/x86_64/libinstrument.so >> https://bugs.openjdk.java.net/browse/JDK-8150318 >> >> I'm quarantining these in my local snapshot before pushing to main. In effect >> it will be the same thing as pushing hs-rt to main first and then push this >> change directly to main. >> >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8150183/webrev.00/ >> >> (A single change with two bug IDs. HG updater will close both quarantine bugs.) >> >> Thanks, >> /Jesper > From thomas.schatzl at oracle.com Thu Feb 25 16:24:45 2016 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 25 Feb 2016 17:24:45 +0100 Subject: RFR(XS): JDK-8150647 - Quarantine TestPLABResize.java until JDK-8150183 is fixed In-Reply-To: <56CF2378.9010506@oracle.com> References: <56CF2378.9010506@oracle.com> Message-ID: <1456417485.11691.43.camel@oracle.com> Hi Jesper, On Thu, 2016-02-25 at 16:53 +0100, Jesper Wilhelmsson wrote: > Hi, > > In order to push hs-rt to main today we need to quarantine two tests > that are > broken. > > JDK-8150647 - Quarantine TestPLABResize.java until JDK-8150183 is > fixed > https://bugs.openjdk.java.net/browse/JDK-8150647 > > JDK-8150318 - serviceability/dcmd/jvmti/LoadAgentDcmdTest.java - > Could not find > JDK_DIR/lib/x86_64/libinstrument.so > https://bugs.openjdk.java.net/browse/JDK-8150318 > > I'm quarantining these in my local snapshot before pushing to main. > In effect it > will be the same thing as pushing hs-rt to main first and then push > this change > directly to main. > > Webrev: http://cr.openjdk.java.net/~jwilhelm/8150183/webrev.00/ > > (A single change with two bug IDs. HG updater will close both > quarantine bugs.) looks good. 
Thomas From jesper.wilhelmsson at oracle.com Thu Feb 25 16:25:44 2016 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Thu, 25 Feb 2016 17:25:44 +0100 Subject: RFR(XS): JDK-8150647 - Quarantine TestPLABResize.java until JDK-8150183 is fixed In-Reply-To: <1456417485.11691.43.camel@oracle.com> References: <56CF2378.9010506@oracle.com> <1456417485.11691.43.camel@oracle.com> Message-ID: <56CF2B08.2000700@oracle.com> Thanks Thomas! /Jesper Den 25/2/16 kl. 17:24, skrev Thomas Schatzl: > Hi Jesper, > > On Thu, 2016-02-25 at 16:53 +0100, Jesper Wilhelmsson wrote: >> Hi, >> >> In order to push hs-rt to main today we need to quarantine two tests >> that are >> broken. >> >> JDK-8150647 - Quarantine TestPLABResize.java until JDK-8150183 is >> fixed >> https://bugs.openjdk.java.net/browse/JDK-8150647 >> >> JDK-8150318 - serviceability/dcmd/jvmti/LoadAgentDcmdTest.java - >> Could not find >> JDK_DIR/lib/x86_64/libinstrument.so >> https://bugs.openjdk.java.net/browse/JDK-8150318 >> >> I'm quarantining these in my local snapshot before pushing to main. >> In effect it >> will be the same thing as pushing hs-rt to main first and then push >> this change >> directly to main. >> >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8150183/webrev.00/ >> >> (A single change with two bug IDs. HG updater will close both >> quarantine bugs.) > > looks good. > > Thomas > From vladimir.kozlov at oracle.com Thu Feb 25 16:34:56 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 25 Feb 2016 08:34:56 -0800 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CEB700.9020603@oracle.com> References: <56CE3222.6040207@oracle.com> <56CE3B13.3090700@oracle.com> <56CE41FE.6070103@oracle.com> <56CE48C8.2050707@oracle.com> <56CEB700.9020603@oracle.com> Message-ID: <56CF2D30.6020208@oracle.com> Okay. Looks good. Thanks, Vladimir On 2/25/16 12:10 AM, Aleksey Shipilev wrote: > On 02/25/2016 03:20 AM, Vladimir Kozlov wrote: >> On 2/24/16 3:51 PM, Aleksey Shipilev wrote: >>> On 02/25/2016 02:21 AM, Vladimir Kozlov wrote: >>>> What is your story for GC? When an array become visible and GC happens, >>>> it will expect only initialized arrays. >>> >>> New method allows primitive arrays only, and its headers should be >>> intact. This is corroborated by the new jtreg test (and benchmarks!) >>> that allocate lots of uninitialized arrays, and obviously they get GCed. >> >> Yes, primitive arrays are fine if the header is correct. In this case >> changes are fine but you may need to add a check in >> inline_unsafe_newArray() that it is only primitive types. > > Alas, the class argument may not be constant, and so we would need a > runtime check there, which would duplicate the check we already have in > Unsafe.java. I'd prefer to follow the upcoming pattern in Mikael's > Unsafe cleanup with making as much checks on Java side. > >> testIAE() should throw exception if IllegalArgumentException is not thrown. > > D'uh, of course! 
> > See updates: > http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.01/ > http://cr.openjdk.java.net/~shade/8150465/webrev.hs.02/ > > Cheers, > -Aleksey > From vladimir.kozlov at oracle.com Thu Feb 25 16:40:01 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 25 Feb 2016 08:40:01 -0800 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56CEB149.6050208@oracle.com> References: <56B2FBB4.70407@oracle.com> <56B3FE5A.9010806@oracle.com> <56B40480.6060703@oracle.com> <56CE518A.2050706@oracle.com> <56CEB149.6050208@oracle.com> Message-ID: <56CF2E61.1090803@oracle.com> Looks fine to me. I assume you tested with Client VM (only C1). Thanks, Vladimir On 2/24/16 11:46 PM, David Holmes wrote: > On 25/02/2016 10:57 AM, Chris Plummer wrote: >> Hello, >> >> I still need to finish up review of this change. I added the change that >> David suggested. Since it's minor, I'll just post the code from >> arguments.cpp here: >> >> #if !defined(COMPILER2) && !INCLUDE_JVMCI >> UNSUPPORTED_OPTION(ProfileInterpreter, "ProfileInterpreter"); >> UNSUPPORTED_OPTION(TraceProfileInterpreter, "TraceProfileInterpreter"); >> UNSUPPORTED_OPTION(PrintMethodData, "PrintMethodData"); >> #endif >> >> The ProfileInterpreter related code was in the original code review. The >> other two flag checks I just added. > > That addition seems fine to me. But I'll leave it to the compiler folk to review the core changes. > > Thanks, > David > >> thanks, >> >> Chris >> >> On 2/4/16 6:10 PM, Chris Plummer wrote: >>> Hi David, >>> >>> On 2/4/16 5:43 PM, David Holmes wrote: >>>> Hi Chris, >>>> >>>> On 4/02/2016 5:20 PM, Chris Plummer wrote: >>>>> Hello, >>>>> >>>>> Please review the following for removing Method::_method_data when only >>>>> supporting C1 (or more specifically, when not supporting C2 or JVMCI). >>>> >>>> Does JVMCI exist with C1 only? >>> My understanding is it can exists with C2 or on its own, but currently >>> is not included with C1 builds. >>>> The COMPILER2_OR_JVMCI conjunction makes things a bit messy. Can we >>>> abstract that behind a single variable, INCLUDE_METHOD_DATA (or some >>>> such) to make it cleaner? >>> I'll also be using COMPILER2_OR_JVMCI with another change to that >>> removes some MethodCounter fields. So yes, I can add >>> INCLUDE_METHOD_DATA, but then will need another INCLUDE_XXX for the >>> MethodCounter fields I'll be conditionally removing. >>>> >>>>> This will help reduce dynamic footprint usage for the minimal VM. >>>>> >>>>> As part of this fix, ProfileInterperter is forced to false unless C2 or >>>>> JVMCI is supported. This was mainly done to avoid crashes if it is >>>>> turned on and Method::_method_data has been excluded, but also because >>>>> it is not useful except to C2 or JVMCI. >>>> >>>> Are you saying that the information generated by ProfileInterpreter >>>> is only used by C2 and JVMCI? If that is case it should really have >>>> been a C2 only flag. >>>> >>> That is my understanding. Coleen confirmed it for me. I believe she >>> got her info from the compiler team. BTW, we need a mechanism to make >>> these conditionally unsupported flags a constant value when they are >>> not supported. It would help deadstrip code. >>>> If ProfileInterpreter is forced to false then shouldn't you also be >>>> checking TraceProfileInterpreter and PrintMethodData use as well >>> Yes, I can add those. 
>>> >>> thanks, >>> >>> Chris >>>> >>>> Thanks, >>>> David >>>> >>>>> Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 >>>>> >>>>> Test with JPRT -testset hotspot. >>>>> >>>>> thanks, >>>>> >>>>> Chris >>> >> From chris.plummer at oracle.com Thu Feb 25 16:44:55 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Thu, 25 Feb 2016 08:44:55 -0800 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56CF2E61.1090803@oracle.com> References: <56B2FBB4.70407@oracle.com> <56B3FE5A.9010806@oracle.com> <56B40480.6060703@oracle.com> <56CE518A.2050706@oracle.com> <56CEB149.6050208@oracle.com> <56CF2E61.1090803@oracle.com> Message-ID: <56CF2F87.3030909@oracle.com> Yes. "-testset hotspot" does plenty of c1 test runs. thanks, Chris On 2/25/16 8:40 AM, Vladimir Kozlov wrote: > Looks fine to me. I assume you tested with Client VM (only C1). > > Thanks, > Vladimir > > On 2/24/16 11:46 PM, David Holmes wrote: >> On 25/02/2016 10:57 AM, Chris Plummer wrote: >>> Hello, >>> >>> I still need to finish up review of this change. I added the change >>> that >>> David suggested. Since it's minor, I'll just post the code from >>> arguments.cpp here: >>> >>> #if !defined(COMPILER2) && !INCLUDE_JVMCI >>> UNSUPPORTED_OPTION(ProfileInterpreter, "ProfileInterpreter"); >>> UNSUPPORTED_OPTION(TraceProfileInterpreter, >>> "TraceProfileInterpreter"); >>> UNSUPPORTED_OPTION(PrintMethodData, "PrintMethodData"); >>> #endif >>> >>> The ProfileInterpreter related code was in the original code review. >>> The >>> other two flag checks I just added. >> >> That addition seems fine to me. But I'll leave it to the compiler >> folk to review the core changes. >> >> Thanks, >> David >> >>> thanks, >>> >>> Chris >>> >>> On 2/4/16 6:10 PM, Chris Plummer wrote: >>>> Hi David, >>>> >>>> On 2/4/16 5:43 PM, David Holmes wrote: >>>>> Hi Chris, >>>>> >>>>> On 4/02/2016 5:20 PM, Chris Plummer wrote: >>>>>> Hello, >>>>>> >>>>>> Please review the following for removing Method::_method_data >>>>>> when only >>>>>> supporting C1 (or more specifically, when not supporting C2 or >>>>>> JVMCI). >>>>> >>>>> Does JVMCI exist with C1 only? >>>> My understanding is it can exists with C2 or on its own, but currently >>>> is not included with C1 builds. >>>>> The COMPILER2_OR_JVMCI conjunction makes things a bit messy. Can we >>>>> abstract that behind a single variable, INCLUDE_METHOD_DATA (or some >>>>> such) to make it cleaner? >>>> I'll also be using COMPILER2_OR_JVMCI with another change to that >>>> removes some MethodCounter fields. So yes, I can add >>>> INCLUDE_METHOD_DATA, but then will need another INCLUDE_XXX for the >>>> MethodCounter fields I'll be conditionally removing. >>>>> >>>>>> This will help reduce dynamic footprint usage for the minimal VM. >>>>>> >>>>>> As part of this fix, ProfileInterperter is forced to false unless >>>>>> C2 or >>>>>> JVMCI is supported. This was mainly done to avoid crashes if it is >>>>>> turned on and Method::_method_data has been excluded, but also >>>>>> because >>>>>> it is not useful except to C2 or JVMCI. >>>>> >>>>> Are you saying that the information generated by ProfileInterpreter >>>>> is only used by C2 and JVMCI? If that is case it should really have >>>>> been a C2 only flag. >>>>> >>>> That is my understanding. Coleen confirmed it for me. I believe she >>>> got her info from the compiler team. 
BTW, we need a mechanism to make >>>> these conditionally unsupported flags a constant value when they are >>>> not supported. It would help deadstrip code. >>>>> If ProfileInterpreter is forced to false then shouldn't you also be >>>>> checking TraceProfileInterpreter and PrintMethodData use as well >>>> Yes, I can add those. >>>> >>>> thanks, >>>> >>>> Chris >>>>> >>>>> Thanks, >>>>> David >>>>> >>>>>> Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 >>>>>> >>>>>> Test with JPRT -testset hotspot. >>>>>> >>>>>> thanks, >>>>>> >>>>>> Chris >>>> >>> From christian.thalinger at oracle.com Thu Feb 25 19:45:57 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Thu, 25 Feb 2016 09:45:57 -1000 Subject: RFR: 8150652: Remove unused code in AArch64 back end In-Reply-To: <56CF1885.1060600@redhat.com> References: <56CF1885.1060600@redhat.com> Message-ID: <7BB0CCFA-51C0-4C04-8BB4-53A3A8B1D25C@oracle.com> Looks good. > On Feb 25, 2016, at 5:06 AM, Andrew Haley wrote: > > Defining min in this way breaks compilation if min is already a #define, > which it is on some compilers. > > http://cr.openjdk.java.net/~aph/8150652/ > > Andrew. From christian.thalinger at oracle.com Thu Feb 25 19:52:29 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Thu, 25 Feb 2016 09:52:29 -1000 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CE3222.6040207@oracle.com> References: <56CE3222.6040207@oracle.com> Message-ID: + public Object allocateArrayUninit(Class componentType, int length) { Can we use another name like allocateUninitializedArray? > On Feb 24, 2016, at 12:43 PM, Aleksey Shipilev wrote: > > Hi, > > When instantiating arrays from Java, we have to zero the backing storage > to match JLS requirements. In some cases, like with the subsequent > arraycopy, compilers are able to remove zeroing. However, in a generic > case where a complicated processing is done after the allocation, > compilers are unable to reliably figure out the array is covered completely. > > JDK-8150463 is a motivational example of this: Java level String concat > loses to C2's OptimizeStringConcat because C2 can skip zeroing for its > own allocations. > > It might make sense to allow new Unsafe method that will return > uninitialized arrays to trusted Java code: > https://bugs.openjdk.java.net/browse/JDK-8150465 > > Webrevs: > http://cr.openjdk.java.net/~shade/8150465/webrev.hs.01/ > http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.01/ > > It helps that we already have java.lang.reflect.Array.newArray > intrinsic, so we can reuse a lot of code. The intrinsic code nukes the > array allocation in the same way PhaseStringOpts::allocate_byte_array > does it in OptimizeStringConcat. Alas, no such luck in C1, and so it > stays untouched, falling back to normal Java allocations. > > Performance data shows the promising improvements: > http://cr.openjdk.java.net/~shade/8150465/notes.txt > > Also, using this new method brings the best Java-level-only > concatenation strategy to OptimizeStringConcat levels, and beyond. 
> > Testing: new test; targeted microbenchmarks; JPRT (in progress) > > Thanks, > -Aleksey > From aleksey.shipilev at oracle.com Thu Feb 25 20:11:51 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Thu, 25 Feb 2016 23:11:51 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: References: <56CE3222.6040207@oracle.com> Message-ID: <56CF6007.5060607@oracle.com> On 02/25/2016 10:52 PM, Christian Thalinger wrote: > + public Object allocateArrayUninit(Class componentType, int length) { > > Can we use another name like allocateUninitializedArray? Yes, we can: http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.02/ http://cr.openjdk.java.net/~shade/8150465/webrev.hs.03/ This was search-and-replace renaming, and the test is still working fine. Haven't re-spinned JPRT for this one. Cheers, -Aleksey From aleksey.shipilev at oracle.com Thu Feb 25 20:13:05 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Thu, 25 Feb 2016 23:13:05 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CF2D30.6020208@oracle.com> References: <56CE3222.6040207@oracle.com> <56CE3B13.3090700@oracle.com> <56CE41FE.6070103@oracle.com> <56CE48C8.2050707@oracle.com> <56CEB700.9020603@oracle.com> <56CF2D30.6020208@oracle.com> Message-ID: <56CF6051.8080201@oracle.com> On 02/25/2016 07:34 PM, Vladimir Kozlov wrote: > Okay. Looks good. Thanks for review, Vladimir! -Aleksey P.S. FTR, I renamed the method to Unsafe.allocateUninitializedArray, as per Christian's request; see my previous note. From christian.thalinger at oracle.com Thu Feb 25 20:51:09 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Thu, 25 Feb 2016 10:51:09 -1000 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CF6007.5060607@oracle.com> References: <56CE3222.6040207@oracle.com> <56CF6007.5060607@oracle.com> Message-ID: <3456FD3A-CBF8-46AD-9092-5DC7C6A02DD8@oracle.com> > On Feb 25, 2016, at 10:11 AM, Aleksey Shipilev wrote: > > On 02/25/2016 10:52 PM, Christian Thalinger wrote: >> + public Object allocateArrayUninit(Class componentType, int length) { >> >> Can we use another name like allocateUninitializedArray? > > Yes, we can: > http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.02/ > http://cr.openjdk.java.net/~shade/8150465/webrev.hs.03/ Thanks but I wanted the change in hotspot code too. > > This was search-and-replace renaming, and the test is still working > fine. Haven't re-spinned JPRT for this one. > > Cheers, > -Aleksey > > > From aleksey.shipilev at oracle.com Thu Feb 25 21:18:36 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Fri, 26 Feb 2016 00:18:36 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <3456FD3A-CBF8-46AD-9092-5DC7C6A02DD8@oracle.com> References: <56CE3222.6040207@oracle.com> <56CF6007.5060607@oracle.com> <3456FD3A-CBF8-46AD-9092-5DC7C6A02DD8@oracle.com> Message-ID: <56CF6FAC.8000603@oracle.com> On 02/25/2016 11:51 PM, Christian Thalinger wrote: > >> On Feb 25, 2016, at 10:11 AM, Aleksey Shipilev wrote: >> >> On 02/25/2016 10:52 PM, Christian Thalinger wrote: >>> + public Object allocateArrayUninit(Class componentType, int length) { >>> >>> Can we use another name like allocateUninitializedArray? >> >> Yes, we can: >> http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.02/ >> http://cr.openjdk.java.net/~shade/8150465/webrev.hs.03/ > > Thanks but I wanted the change in hotspot code too. 
That wasn't made obvious. Here you go: http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.02/ http://cr.openjdk.java.net/~shade/8150465/webrev.hs.04/ -Aleksey From james.graham at oracle.com Thu Feb 25 22:57:50 2016 From: james.graham at oracle.com (Jim Graham) Date: Thu, 25 Feb 2016 14:57:50 -0800 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> <56CF1163.4040005@oracle.com> <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> Message-ID: <56CF86EE.6070704@oracle.com> Just to play devil's advocate here. It's true that from a code correctness-safety perspective Unsafe programmers can already shoot themselves in the foot with uninitialized allocations, but from the security point of view the two methods don't have the same opportunity to leak information. Unsafe.allocateMemory returns a long, which is just a long to any untrusted code since it can't use the Unsafe methods to access the data in it. The new uninitialized array allocation returns a primitive array which can be inspected by untrusted code for any stale elements that hold private information from a previous allocation - should that array ever be leaked to untrusted code... ...jim On 2/25/2016 7:47 AM, Paul Sandoz wrote: > >> On 25 Feb 2016, at 15:36, Aleksey Shipilev wrote: >> >> On 02/25/2016 05:13 PM, Andrew Haley wrote: >>> On 02/25/2016 01:44 PM, Aleksey Shipilev wrote: >>>> Of course, you will still see garbage data if after storing the array >>>> elements into the uninitialized array you would publish it racily. But >>>> the same is true for the "regular" allocations and subsequent writes. >>>> The only difference is whether you see "real" garbage, or some >>>> "synthetic" garbage like zeros. It is, of course, a caller >>>> responsibility to publish array safely in both cases, if garbage is >>>> unwanted. >>> >>> Of course, my worry with this optimization assumes that programmers >>> make mistakes. But you did say "complicated processing is done after >>> the allocation." And that's where programmers make mistakes. >> >> Of course they do; at least half of the Unsafe methods is suitable for >> shooting oneself in a foot in creative ways. Unsafe is a sharp tool, and >> Unsafe callers are trusted in their madness. This is not your average >> Joe's use case, for sure. >> > > FTR the contents of the memory allocated by Unsafe.allocateMemory are also uninitialized. > > Paul. > From chris.plummer at oracle.com Fri Feb 26 00:01:03 2016 From: chris.plummer at oracle.com (Chris Plummer) Date: Thu, 25 Feb 2016 16:01:03 -0800 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56CF2F87.3030909@oracle.com> References: <56B2FBB4.70407@oracle.com> <56B3FE5A.9010806@oracle.com> <56B40480.6060703@oracle.com> <56CE518A.2050706@oracle.com> <56CEB149.6050208@oracle.com> <56CF2E61.1090803@oracle.com> <56CF2F87.3030909@oracle.com> Message-ID: <56CF95BF.5070300@oracle.com> The TraceProfileInterpreter UNSUPPORTED_OPTION check I added failed to compile for release builds because TraceProfileInterpreter is a developer flag. 
I had to add NOT_PRODUCT: #if !defined(COMPILER2) && !INCLUDE_JVMCI UNSUPPORTED_OPTION(ProfileInterpreter, "ProfileInterpreter"); NOT_PRODUCT(UNSUPPORTED_OPTION(TraceProfileInterpreter, "TraceProfileInterpreter")); UNSUPPORTED_OPTION(PrintMethodData, "PrintMethodData"); #endif Chris On 2/25/16 8:44 AM, Chris Plummer wrote: > Yes. "-testset hotspot" does plenty of c1 test runs. > > thanks, > > Chris > > On 2/25/16 8:40 AM, Vladimir Kozlov wrote: >> Looks fine to me. I assume you tested with Client VM (only C1). >> >> Thanks, >> Vladimir >> >> On 2/24/16 11:46 PM, David Holmes wrote: >>> On 25/02/2016 10:57 AM, Chris Plummer wrote: >>>> Hello, >>>> >>>> I still need to finish up review of this change. I added the change >>>> that >>>> David suggested. Since it's minor, I'll just post the code from >>>> arguments.cpp here: >>>> >>>> #if !defined(COMPILER2) && !INCLUDE_JVMCI >>>> UNSUPPORTED_OPTION(ProfileInterpreter, "ProfileInterpreter"); >>>> UNSUPPORTED_OPTION(TraceProfileInterpreter, >>>> "TraceProfileInterpreter"); >>>> UNSUPPORTED_OPTION(PrintMethodData, "PrintMethodData"); >>>> #endif >>>> >>>> The ProfileInterpreter related code was in the original code >>>> review. The >>>> other two flag checks I just added. >>> >>> That addition seems fine to me. But I'll leave it to the compiler >>> folk to review the core changes. >>> >>> Thanks, >>> David >>> >>>> thanks, >>>> >>>> Chris >>>> >>>> On 2/4/16 6:10 PM, Chris Plummer wrote: >>>>> Hi David, >>>>> >>>>> On 2/4/16 5:43 PM, David Holmes wrote: >>>>>> Hi Chris, >>>>>> >>>>>> On 4/02/2016 5:20 PM, Chris Plummer wrote: >>>>>>> Hello, >>>>>>> >>>>>>> Please review the following for removing Method::_method_data >>>>>>> when only >>>>>>> supporting C1 (or more specifically, when not supporting C2 or >>>>>>> JVMCI). >>>>>> >>>>>> Does JVMCI exist with C1 only? >>>>> My understanding is it can exists with C2 or on its own, but >>>>> currently >>>>> is not included with C1 builds. >>>>>> The COMPILER2_OR_JVMCI conjunction makes things a bit messy. Can we >>>>>> abstract that behind a single variable, INCLUDE_METHOD_DATA (or some >>>>>> such) to make it cleaner? >>>>> I'll also be using COMPILER2_OR_JVMCI with another change to that >>>>> removes some MethodCounter fields. So yes, I can add >>>>> INCLUDE_METHOD_DATA, but then will need another INCLUDE_XXX for the >>>>> MethodCounter fields I'll be conditionally removing. >>>>>> >>>>>>> This will help reduce dynamic footprint usage for the minimal VM. >>>>>>> >>>>>>> As part of this fix, ProfileInterperter is forced to false >>>>>>> unless C2 or >>>>>>> JVMCI is supported. This was mainly done to avoid crashes if it is >>>>>>> turned on and Method::_method_data has been excluded, but also >>>>>>> because >>>>>>> it is not useful except to C2 or JVMCI. >>>>>> >>>>>> Are you saying that the information generated by ProfileInterpreter >>>>>> is only used by C2 and JVMCI? If that is case it should really have >>>>>> been a C2 only flag. >>>>>> >>>>> That is my understanding. Coleen confirmed it for me. I believe she >>>>> got her info from the compiler team. BTW, we need a mechanism to make >>>>> these conditionally unsupported flags a constant value when they are >>>>> not supported. It would help deadstrip code. >>>>>> If ProfileInterpreter is forced to false then shouldn't you also be >>>>>> checking TraceProfileInterpreter and PrintMethodData use as well >>>>> Yes, I can add those. 
>>>>> >>>>> thanks, >>>>> >>>>> Chris >>>>>> >>>>>> Thanks, >>>>>> David >>>>>> >>>>>>> Webrev: http://cr.openjdk.java.net/~cjplummer/8147978/webrev.02/ >>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8147978 >>>>>>> >>>>>>> Test with JPRT -testset hotspot. >>>>>>> >>>>>>> thanks, >>>>>>> >>>>>>> Chris >>>>> >>>> > From john.r.rose at oracle.com Fri Feb 26 01:14:29 2016 From: john.r.rose at oracle.com (John Rose) Date: Thu, 25 Feb 2016 17:14:29 -0800 Subject: [9] RFR (S) 8147978: Remove Method::_method_data for C1 In-Reply-To: <56B2FBB4.70407@oracle.com> References: <56B2FBB4.70407@oracle.com> Message-ID: <5C2E2190-CCBF-4513-BD86-906B6E314D55@oracle.com> On Feb 3, 2016, at 11:20 PM, Chris Plummer wrote: > > Please review the following for removing Method::_method_data when only supporting C1 (or more specifically, when not supporting C2 or JVMCI). This will help reduce dynamic footprint usage for the minimal VM. Even with C2 we could save footprint if we could merge the _method_data and _method_counters field. Even on C1, with this change, we could save more footprint if we could make the _method_counters field a tagged union between a short count and the full method counters field, allocated lazily after (say) 10 iterations of a method. The tracking bug for this is https://bugs.openjdk.java.net/browse/JDK-8013169 I just added a comment explaining the lazy allocation idea. Any takers? ? John From vladimir.kozlov at oracle.com Fri Feb 26 02:23:40 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 25 Feb 2016 18:23:40 -0800 Subject: RFR(M) 8150353: PPC64LE: Support RTM on linux In-Reply-To: <10c9a1cb6d9b40618a094283ac838038@DEWDFE13DE14.global.corp.sap> References: <56CE0975.8060807@linux.vnet.ibm.com> <56CE42AE.7080605@oracle.com> <10c9a1cb6d9b40618a094283ac838038@DEWDFE13DE14.global.corp.sap> Message-ID: <56CFB72C.7030401@oracle.com> The problem with increasing ScratchBufferBlob size is that with Tiered compilation we scale number of compiler threads based on cpu count and increase space in CodeCache accordingly: code_buffers_size += c2_count * C2Compiler::initial_code_buffer_size(); I did experiment on Intel setting ON all RTM flags which can increase size of lock code: $ java -XX:+UnlockExperimentalVMOptions -XX:+UnlockDiagnosticVMOptions -XX:+UseRTMLocking -XX:+UseRTMDeopt -XX:+UseRTMForStackLocks -XX:+PrintPreciseRTMLockingStatistics -XX:+PrintFlagsFinal -version |grep RTM Java HotSpot(TM) 64-Bit Server VM warning: UseRTMLocking is only available as experimental option on this platform. 
bool PrintPreciseRTMLockingStatistics := true {C2 diagnostic} intx RTMAbortRatio = 50 {ARCH experimental} intx RTMAbortThreshold = 1000 {ARCH experimental} intx RTMLockingCalculationDelay = 0 {ARCH experimental} intx RTMLockingThreshold = 10000 {ARCH experimental} uintx RTMRetryCount = 5 {ARCH product} intx RTMSpinLoopCount = 100 {ARCH experimental} intx RTMTotalCountIncrRate = 64 {ARCH experimental} bool UseRTMDeopt := true {ARCH product} bool UseRTMForStackLocks := true {ARCH experimental} bool UseRTMLocking := true {ARCH product} bool UseRTMXendForLockBusy = true {ARCH experimental} I added next lines to the end of Compile::scratch_emit_size() method: if (n->is_Mach() && n->as_Mach()->ideal_Opcode() == Op_FastLock) { tty->print_cr("======== FastLock size: %d ==========", buf.total_content_size()); } if (n->is_Mach() && n->as_Mach()->ideal_Opcode() == Op_FastUnlock) { tty->print_cr("======== FastUnlock size: %d ==========", buf.total_content_size()); } and got: ======== FastLock size: 657 ========== ======== FastUnlock size: 175 ========== Thanks, Vladimir On 2/25/16 3:43 AM, Doerr, Martin wrote: > Hi Vladimir, > > thanks for taking a look. > > About version values: > We are using a similar scheme for version checks on AIX where we know that the version values are less than 256. > It makes comparisons much more convenient. > But I agree that we should double-check if it is guaranteed for linux as well (and possibly add an assertion). > > About scratch buffer size: > We only noticed that the scratch buffer was too small when we enable all RTM features: > -XX:+UnlockExperimentalVMOptions -XX:+UseRTMLocking -XX:+UseRTMForStackLocks -XX:+UseRTMDeopt > We have only tried on PPC64, but I wonder if the current size is sufficient for x86. I currently don't have access to a Skylake machine. > > I think adding 1024 bytes to the scratch buffer doesn't hurt. > (It may also lead to larger CodeBuffers in output.cpp but I don't think this is problematic as long as the real content gets copied to nmethods.) > Would you agree? > > Best regards, > Martin > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Donnerstag, 25. Februar 2016 00:54 > To: Gustavo Romero ; Doerr, Martin ; hotspot-dev at openjdk.java.net > Cc: brenohl at br.ibm.com > Subject: Re: RFR(M) 8150353: PPC64LE: Support RTM on linux > > My concern (but I am not export) is Linux version encoding. Is it true > that each value in x.y.z is less then 256? Why not keep them as separate > int values? > I also thought we have OS versions in make files but we check only gcc > version there. > > Do you have problem with ScratchBufferBlob only on PPC or on some other > platforms too? May be we should make MAX_inst_size as platform specific > value. > > Thanks, > Vladimir > > On 2/24/16 11:50 AM, Gustavo Romero wrote: >> Hi Martin, >> >> Both little and big endian Linux kernel contain the syscall change, so >> I did not include: >> >> #if defined(COMPILER2) && (defined(AIX) || defined(VM_LITTLE_ENDIAN) >> >> in globalDefinitions_ppc.hpp. >> >> Please, could you review the following change? >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8150353 >> Webrev (hotspot): http://81.de.7a9f.ip4.static.sl-reverse.com/webrev/ >> >> Summary: >> >> * Enable RTM support for Linux on PPC64 (LE and BE). >> * Fix C2 compiler buffer size issue. >> >> Thank you. 
>> >> Regards, >> Gustavo >> From aleksey.shipilev at oracle.com Fri Feb 26 07:01:39 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Fri, 26 Feb 2016 10:01:39 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CF86EE.6070704@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> <56CF1163.4040005@oracle.com> <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> <56CF86EE.6070704@oracle.com> Message-ID: <56CFF853.7020207@oracle.com> Unsafe is unsafe, no doubt about it. But if you accept the premise that Unsafe.allocateUninitializedArray is wrong because it can leak data from previous allocation if used incorrectly, then you should equally accept that compilers should not do the same automatic optimization too. Even more so, because the compiler code is much harder to audit for these mistakes; and programmers *do* make mistakes in that complicated compiler code. Notable examples in C2 are OptimizeStringConcat allocating uninitialized array for String (pretty much what we are targeting to do here), and subsequent arraycopy eliminating the array zeroing in a "tightly coupled" allocation. But, we do these optimizations already. Unsafe users *have* to provide an additional protection to avoid such leakage, pretty much how compilers *have* to guarantee safety in those situations too. Even without Unsafe.allocateUninitializedArray, I can certainly construct the buggy JDK code that will leak uninitialized memory to untrusted code: a simple mistake in offset calculation for Unsafe.getX(long,long) is enough to leak. Yet, we don't nuke the raw memory accessors from Unsafe, because they are useful, and we do understand those are sharp tools, and we should be extra careful using them. The same goes for any other Unsafe method. If you treat U.allocateUninitializedArray with the same respect you treating other tools in that toolbox, everything is fine. If you don't, then stay away from Unsafe to begin with. Unsafe is unsafe, there is no doubt about it. -Aleksey On 02/26/2016 01:57 AM, Jim Graham wrote: > Just to play devil's advocate here. > > It's true that from a code correctness-safety perspective Unsafe > programmers can already shoot themselves in the foot with uninitialized > allocations, but from the security point of view the two methods don't > have the same opportunity to leak information. > > Unsafe.allocateMemory returns a long, which is just a long to any > untrusted code since it can't use the Unsafe methods to access the data > in it. > > The new uninitialized array allocation returns a primitive array which > can be inspected by untrusted code for any stale elements that hold > private information from a previous allocation - should that array ever > be leaked to untrusted code... > > ...jim > > On 2/25/2016 7:47 AM, Paul Sandoz wrote: >> >>> On 25 Feb 2016, at 15:36, Aleksey Shipilev >>> wrote: >>> >>> On 02/25/2016 05:13 PM, Andrew Haley wrote: >>>> On 02/25/2016 01:44 PM, Aleksey Shipilev wrote: >>>>> Of course, you will still see garbage data if after storing the array >>>>> elements into the uninitialized array you would publish it racily. But >>>>> the same is true for the "regular" allocations and subsequent writes. >>>>> The only difference is whether you see "real" garbage, or some >>>>> "synthetic" garbage like zeros. It is, of course, a caller >>>>> responsibility to publish array safely in both cases, if garbage is >>>>> unwanted. 
>>>> >>>> Of course, my worry with this optimization assumes that programmers >>>> make mistakes. But you did say "complicated processing is done after >>>> the allocation." And that's where programmers make mistakes. >>> >>> Of course they do; at least half of the Unsafe methods is suitable for >>> shooting oneself in a foot in creative ways. Unsafe is a sharp tool, and >>> Unsafe callers are trusted in their madness. This is not your average >>> Joe's use case, for sure. >>> >> >> FTR the contents of the memory allocated by Unsafe.allocateMemory are >> also uninitialized. >> >> Paul. >> From volker.simonis at gmail.com Fri Feb 26 07:36:41 2016 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 26 Feb 2016 08:36:41 +0100 Subject: Tool to visualize C2 Ideal IR In-Reply-To: <10103492.6964066.1456422155411.JavaMail.zimbra@irisa.fr> References: <1011422486.6957119.1456419690960.JavaMail.zimbra@irisa.fr> <10103492.6964066.1456422155411.JavaMail.zimbra@irisa.fr> Message-ID: Hi Marcelino, IdealGraphVisualizer is now part of the hotspot repository. See: hotspot/src/share/tools/IdealGraphVisualizer You should be able to easily build it yourself. If you encounter any problems, post them on hotspot-dev (CC'ed). Regards, Volker On Thu, Feb 25, 2016 at 6:42 PM, Marcelino Rodriguez cancio wrote: > Hello all, > > Is there any tool to visualize the trace produced by Hotspot's C2 compiler when PrintIdeal is set to true? For example: > > ................ > 780 ConvI2L === _ 794 [[ 774 ]] #long:0..maxint-63:www !orig=[684],[626],[586],[479],[439],[259],208 !jvms: MyBenchmark::testMethod @ bci:31 > 774 L ShiftL === _ 780 209 [[ 773 ]] !orig=[678],[620],[582],[475],[435],[251],210 !jvms: MyBenchmark::testMethod @ bci:31 > 773 A ddP = = = _ 86 86 77 4 [[ 751 ]] !orig=[677],[619],[581],[474],[434],[250],212 !jvms: MyBenchmark::testMethod @ bci:31 > 608 C onL = = = 0 [[ 770 ]] #long:32 > ........... > > I was going to do my own script to turn the graph into GraphViz, but then I saw the work "Visualization of Program Dependence Graphs" by Thomas Wurthinger, Christian Wimmer, and Hanspeter M?ssenb?ck. ( http://dl.acm.org/citation.cfm?id=1788391 ) and seems amazing. But I can't find the tool for download anywhere. > > I know there is a tool for C1: https://java.net/projects/c1visualizer/ > > Best and thanks > Marcelino > > From john.r.rose at oracle.com Fri Feb 26 08:01:22 2016 From: john.r.rose at oracle.com (John Rose) Date: Fri, 26 Feb 2016 00:01:22 -0800 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CEB700.9020603@oracle.com> References: <56CE3222.6040207@oracle.com> <56CE3B13.3090700@oracle.com> <56CE41FE.6070103@oracle.com> <56CE48C8.2050707@oracle.com> <56CEB700.9020603@oracle.com> Message-ID: On Feb 25, 2016, at 12:10 AM, Aleksey Shipilev wrote: > >> Yes, primitive arrays are fine if the header is correct. In this case >> changes are fine but you may need to add a check in >> inline_unsafe_newArray() that it is only primitive types. > > Alas, the class argument may not be constant, and so we would need a > runtime check there, which would duplicate the check we already have in > Unsafe.java. I'd prefer to follow the upcoming pattern in Mikael's > Unsafe cleanup with making as much checks on Java side. Yes, follow that pattern. In unsafe.cpp, you should use an assert (or guarantee) to back up the Java-level checks. The effect of the assert is to help diagnose any failure of the Java-level checks. The compiler intrinsic does not need the double-checking. 
Rather, it should run AFAP (as fast as possible) under the assumption that the Java-level checks have run or don't apply. On Feb 25, 2016, at 11:01 PM, Aleksey Shipilev wrote: > But if you accept the premise that Unsafe.allocateUninitializedArray is > wrong because it can leak data from previous allocation if used > incorrectly, then you should equally accept that compilers should not do > the same automatic optimization too. Even more so, because the compiler > code is much harder to audit for these mistakes; and programmers *do* > make mistakes in that complicated compiler code. Exactly right. Unsafe is the "in your face" way to do the same kind of shady stuff that (otherwise) the C runtime or the JIT IR transforms would have to do. In many cases, the necessary audits and proofs are safer and easier to accomplish when looking at Java code than when looking at C code (behind a JNI mask) or JIT IR transforms (which nobody fully understands). Unsafe is a frank admission that sometimes the barn needs sweeping, and that it might as well be swept in the daylight. We don't sit around in the living room pretending we can't smell it. About the javadoc: > * This method is suitable for special high-performance code > * that is known to overwrite the array contents right after the allocation. I suggest pounding harder on this warning: > In fact, users of this method are required to overwrite the initial (garbage) array > contents before allowing untrusted code, or code in other threads, to observe > the reference to the newly allocated array. In addition, the publication of the > array reference must be safe according to the JMM. Suggesting: Provide a second function to ensure the safe publication, and require users of aUA to call it before publication. > public Object markArrayInitialized(Object array) { /*mem-fence?*/ return array; } It can't really be enforced, but it makes a clear target for best practices. I find it very useful, when thinking about safe object allocation, to distinguish between larval and public stages. The mAI function makes it very explicit where the stage change happens. And it could map directly into the C2 IR, which represents these stages internally. ? John From james.graham at oracle.com Fri Feb 26 10:24:19 2016 From: james.graham at oracle.com (Jim Graham) Date: Fri, 26 Feb 2016 02:24:19 -0800 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CFF853.7020207@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> <56CF1163.4040005@oracle.com> <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> <56CF86EE.6070704@oracle.com> <56CFF853.7020207@oracle.com> Message-ID: <56D027D3.9010609@oracle.com> Hi Aleksey, Agreed on all fronts that Unsafe is playing with fire. I was not objecting, I just wanted to accurately depict the similarities and differences with the existing function before we allow the faulty analogy to move that analysis off the table. "A is like B so we don't need to consider any new info" only works if all aspects of A are covered by aspects of B. As you note, we can leak uninitialized memory from Unsafe.allocateMemory as well, if we copy some of the uninitialized data out of it on behalf of an untrusted caller, but that is one level of "oops" harder than storing the reference to the unsafe primitive array in an accidentally accessible location. 
The potential failures of the existing method are that we dig out unwritten data and return it. It is not a failure to accidentally give them the handle, though, because the handle is useless to them. It would be a huge error to let them overwrite or otherwise change any of these handles, though, but simply returning the handle from a method does not create a leak. The potential failures of the new method are both that we might dig out unwritten data and supply that data, but also that we might accidentally give away the handle itself. It doesn't have to be "uh, oh, I forgot to initialize all of the data because my algorithm was too obscure", it can be that the programmer stored the array reference in a field that has a getter before they are finished initializing it. You can examine the initialization code in fine detail and say "Yep, that's all of the locations in the array", but if you didn't notice that there was a getter that someone could call while you were doing that work, you've leaked data. Or maybe you knew about the getter, but thought that the object was only held by trusted modules - until there's an exploit. The getter may not even exist when they write the initialization code, but someone comes along later and thinks "it would be really handy to have a getter for that array reference" and doesn't notice that it goes through part of its life span as a partially initialized array ("OMG, what's that now?!"). I also don't think that it's a good analogy that these are similar to bugs that the compiler could have. The compiler code is some of the most scrutinized code we have, with a fairly narrow and specific set of code translation operations it performs, maintained by a fairly limited group of engineers who are experts in those issues. But we have a ton of library code in many places and a huge body of engineers working on it - a few orders of magnitude more fingers that aren't trained in memory barriers and proving accessibility that can slip up ("What do you mean they can call a protected method? I thought it was protected for a reason!?"). That widens the gap of potential programmer errors that can create a data leak. It sounds like we probably accept that risk, but let's be on the table about it. All of those are programmer errors, but let's at least acknowledge the scenarios before we simply say "all programmers everywhere shouldn't do bad stuff" and rubber stamp a new feature. In the end, I'm just saying that the specific comment that we already have a mechanism that can create uninitialized memory doesn't imply that there are no new security implications to consider here. That's a logical shortcut to minimize discussion, not a valid conclusion. I'm seeing a fair amount of hand waving here and not much objective analysis of the risks. It's important to nail down the issues if for no other reason than to provide good advice on usage practices in our documentation to our internal developers... ...jim On 2/25/2016 11:01 PM, Aleksey Shipilev wrote: > Unsafe is unsafe, no doubt about it. > > But if you accept the premise that Unsafe.allocateUninitializedArray is > wrong because it can leak data from previous allocation if used > incorrectly, then you should equally accept that compilers should not do > the same automatic optimization too. Even more so, because the compiler > code is much harder to audit for these mistakes; and programmers *do* > make mistakes in that complicated compiler code. 
> > Notable examples in C2 are OptimizeStringConcat allocating uninitialized > array for String (pretty much what we are targeting to do here), and > subsequent arraycopy eliminating the array zeroing in a "tightly > coupled" allocation. But, we do these optimizations already. > > Unsafe users *have* to provide an additional protection to avoid such > leakage, pretty much how compilers *have* to guarantee safety in those > situations too. > > Even without Unsafe.allocateUninitializedArray, I can certainly > construct the buggy JDK code that will leak uninitialized memory to > untrusted code: a simple mistake in offset calculation for > Unsafe.getX(long,long) is enough to leak. Yet, we don't nuke the raw > memory accessors from Unsafe, because they are useful, and we do > understand those are sharp tools, and we should be extra careful using > them. > > The same goes for any other Unsafe method. If you treat > U.allocateUninitializedArray with the same respect you treating other > tools in that toolbox, everything is fine. If you don't, then stay away > from Unsafe to begin with. Unsafe is unsafe, there is no doubt about it. > > -Aleksey > > On 02/26/2016 01:57 AM, Jim Graham wrote: >> Just to play devil's advocate here. >> >> It's true that from a code correctness-safety perspective Unsafe >> programmers can already shoot themselves in the foot with uninitialized >> allocations, but from the security point of view the two methods don't >> have the same opportunity to leak information. >> >> Unsafe.allocateMemory returns a long, which is just a long to any >> untrusted code since it can't use the Unsafe methods to access the data >> in it. >> >> The new uninitialized array allocation returns a primitive array which >> can be inspected by untrusted code for any stale elements that hold >> private information from a previous allocation - should that array ever >> be leaked to untrusted code... >> >> ...jim >> >> On 2/25/2016 7:47 AM, Paul Sandoz wrote: >>> >>>> On 25 Feb 2016, at 15:36, Aleksey Shipilev >>>> wrote: >>>> >>>> On 02/25/2016 05:13 PM, Andrew Haley wrote: >>>>> On 02/25/2016 01:44 PM, Aleksey Shipilev wrote: >>>>>> Of course, you will still see garbage data if after storing the array >>>>>> elements into the uninitialized array you would publish it racily. But >>>>>> the same is true for the "regular" allocations and subsequent writes. >>>>>> The only difference is whether you see "real" garbage, or some >>>>>> "synthetic" garbage like zeros. It is, of course, a caller >>>>>> responsibility to publish array safely in both cases, if garbage is >>>>>> unwanted. >>>>> >>>>> Of course, my worry with this optimization assumes that programmers >>>>> make mistakes. But you did say "complicated processing is done after >>>>> the allocation." And that's where programmers make mistakes. >>>> >>>> Of course they do; at least half of the Unsafe methods is suitable for >>>> shooting oneself in a foot in creative ways. Unsafe is a sharp tool, and >>>> Unsafe callers are trusted in their madness. This is not your average >>>> Joe's use case, for sure. >>>> >>> >>> FTR the contents of the memory allocated by Unsafe.allocateMemory are >>> also uninitialized. >>> >>> Paul. 
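The usage pattern the thread keeps converging on (keep the reference in a local, overwrite every element, fence, then publish) might look roughly like the sketch below. It is illustrative only, not from the patch: the Parser class and the XOR filling are invented, the allocateUninitializedArray signature is the one proposed above, and storeFence() merely stands in for the release fence recommended later in the thread -- whether that is the right fence is itself still an open question.

    import jdk.internal.misc.Unsafe;

    class Parser {
        private static final Unsafe U = Unsafe.getUnsafe();
        private byte[] published;   // readers must only ever see a fully written array

        void decode(byte[] src) {
            // Larval stage: the garbage-filled array stays in a local.
            byte[] buf = (byte[]) U.allocateUninitializedArray(byte.class, src.length);
            for (int i = 0; i < src.length; i++) {
                buf[i] = (byte) (src[i] ^ 0x5A);   // every element gets overwritten
            }
            U.storeFence();    // order the element stores before the publishing store
            published = buf;   // public stage: only now may the reference escape
        }
    }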
>>> > > From vladimir.x.ivanov at oracle.com Fri Feb 26 16:31:47 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Fri, 26 Feb 2016 19:31:47 +0300 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> Message-ID: <56D07DF3.6050805@oracle.com> > Hotspot: > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/hotspot/webrev/index.html Looks good. > JDK: > http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html src/java.base/share/classes/java/lang/invoke/MethodHandles.java: + public VarHandle unreflectVarHandle(Field f) throws IllegalAccessException { + MemberName getField = new MemberName(f, false); + MemberName putField = new MemberName(f, true); + return getFieldVarHandleNoSecurityManager(getField.getReferenceKind(), putField.getReferenceKind(), + f.getDeclaringClass(), getField, putField); + } unreflectField has the following check: Lookup lookup = f.isAccessible() ? IMPL_LOOKUP : this; return lookup.getDirectFieldNoSecurityManager(field.getReferenceKind(), f.getDeclaringClass(), field); } Why don't you do the same? Otherwise, looks good. Best regards, Vladimir Ivanov > > The spec/API review is proceeding here [1]. > > The patches depend on Unsafe changes [2] and ByteBuffer changes [3]. > > Recent (as of today) JPRT runs for core and hotspot tests pass without failure. Many parts of the code have been soaking in the Valhalla repo for over a year, and it's been soaking in the sandbox for quite a while, and many a JPRT run was performed. > > It is planned to push through hs-comp as is the case for the dependent patches, and thus minimise any delays due to integration between forests. > > > The Langtools changes are small. Tweaks were made to support updates to signature polymorphic methods and where they may be located, in addition to supporting compilation of calls to MethodHandle.link*. > > > The Hotspot changes are not very large. It's mostly a matter of augmenting checks for MethodHandle to include that for VarHandle. It's tempting to generalise the "invokehandle" invocation as I believe there are other use-cases where it might be useful, but I resisted temptation here. I wanted to focus on the minimal changes required. > > > The JDK changes are more substantial, but a large proportion are new tests. The source compilation approach taken is to use templates, the same approach as for code in the nio package, to generate both implementation and test source code. The implementations are generated by the build, the tests are pre-generated. I believe the tests should have good coverage but we have yet to run any code coverage tool. > > The approach to invocation of VarHandle signature polymorphic methods is slightly different to that of MethodHandles. I wanted to ensure that linking for the common cases avoids lambda form creation, compilation and therefore class spinning. That reduces start-up costs and also potential circular dependencies that might be induced in the VM boot process if VarHandles are employed early on. > > For common basic (i.e. erased ref and widened primitive) method signatures, namely all those that matter for the efficient atomic operations, there are pre-generated methods that would otherwise be generated from creating and compiling invoker lambda forms. Those methods reside on the VarHandleGuards class. 
When the VM makes an up call to MethodHandleNatives.linkMethod to link a call site then this up-called method will first check if an appropriate pre-generated method exists on VarHandleGuards and if so it links to that, otherwise it falls back to a method on a class generated from compiling a lambda form. For testing purposes there is a system property available to switch off this optimisation when linking [*]. > > Each VarHandle instance of the same variable type produced from the same factory will share an underlying immutable instance of a VarForm that contains a set of MemberName instances, one for each implementation of a signature polymorphic method (a value of null means unsupported). The invoke methods (on VarHandleGuards or on lambda forms) will statically link to such MemberName instances using a call to MethodHandle.linkToStatic. > > There are a couple of TODOs in comments, those are all on non-critical code paths and i plan to chase them up afterwards. > > C1 does not support constant folding for @Stable arrays hence why in certain cases we have exploded stuff into fields that are operated on using if/else loops. We can simplify such code if/when C1 support is added. > > > Thanks, > Paul. > > [1] http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038150.html > [2] http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2016-January/020953.html > http://mail.openjdk.java.net/pipermail/hotspot-dev/2016-January/021514.html > [3] http://mail.openjdk.java.net/pipermail/nio-dev/2016-February/003535.html > > [*] This technique might be useful for common signatures of MH invokers to reduce associated costs of lambda form creation and compilation in the interim of something better. > From christian.thalinger at oracle.com Fri Feb 26 18:46:31 2016 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Fri, 26 Feb 2016 08:46:31 -1000 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56CF6FAC.8000603@oracle.com> References: <56CE3222.6040207@oracle.com> <56CF6007.5060607@oracle.com> <3456FD3A-CBF8-46AD-9092-5DC7C6A02DD8@oracle.com> <56CF6FAC.8000603@oracle.com> Message-ID: <454CC9C7-E498-4F23-A210-5DA188126029@oracle.com> > On Feb 25, 2016, at 11:18 AM, Aleksey Shipilev wrote: > > On 02/25/2016 11:51 PM, Christian Thalinger wrote: >> >>> On Feb 25, 2016, at 10:11 AM, Aleksey Shipilev wrote: >>> >>> On 02/25/2016 10:52 PM, Christian Thalinger wrote: >>>> + public Object allocateArrayUninit(Class componentType, int length) { >>>> >>>> Can we use another name like allocateUninitializedArray? >>> >>> Yes, we can: >>> http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.02/ >>> http://cr.openjdk.java.net/~shade/8150465/webrev.hs.03/ >> >> Thanks but I wanted the change in hotspot code too. > > That wasn't made obvious. > > Here you go: > http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.02/ > http://cr.openjdk.java.net/~shade/8150465/webrev.hs.04/ Looks good. 
> > -Aleksey > > From aleksey.shipilev at oracle.com Fri Feb 26 18:56:04 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Fri, 26 Feb 2016 21:56:04 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56D027D3.9010609@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> <56CF1163.4040005@oracle.com> <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> <56CF86EE.6070704@oracle.com> <56CFF853.7020207@oracle.com> <56D027D3.9010609@oracle.com> Message-ID: <56D09FC4.3030004@oracle.com> Hi Jim, I agree with most of your points. You are correct that comparing allocateMemory and allocateUninitializedArray is cumbersome, and I was not trying to compare these. My comment was about Unsafe at large, notably peek-and-poke methods that we already have, and can already be used to leak out data. This is to frame the discussion into "Unsafe is unsafe" mood. This is not a regular JDK method. As many other Unsafe methods, it comes with caveats, and hordes of developers are not the target audience, even most JDK developers are not the target audience. The target audience is a tiny group of core library developers who are (admittedly) well-versed in reading the labels on unsafe methods before using them. As much as I would like to have a capability-based (and also fingerprint/retina-scan-based) access control to internal APIs, current incarnation of Unsafe requires engineering discipline from users. Unsafe is "off limits" for those who do not understand low-level mechanics (memory ordering, atomicity requirements, interaction with runtime, you name it). Unsafe is a special -- perhaps, the only! -- place in JDK where the tradeoff between performance and safety is heavily tilted towards performance. Anyone from the huge body of engineers who does not understand this and uses Unsafe as just another JDK class, could use a really good talking-to, and probably lots and lots of training. On 02/26/2016 01:24 PM, Jim Graham wrote: > I'm seeing a fair amount of hand waving here and not much objective > analysis of the risks. It's important to nail down the issues if for > no other reason than to provide good advice on usage practices in > our documentation to our internal developers... Hopefully the updated Javadoc provides enough deterrent from accidental use. I'd be happy to amend this with even harsher wording, if you tell me the exact words :) Webrevs: http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.03/ http://cr.openjdk.java.net/~shade/8150465/webrev.hs.04/ Thanks, -Aleksey From Paul.Sandoz at oracle.com Fri Feb 26 19:01:27 2016 From: Paul.Sandoz at oracle.com (Paul Sandoz) Date: Fri, 26 Feb 2016 20:01:27 +0100 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: <56D07DF3.6050805@oracle.com> References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56D07DF3.6050805@oracle.com> Message-ID: Hi Vladimir, Thanks for the reviews. > On 26 Feb 2016, at 17:31, Vladimir Ivanov wrote: > >> Hotspot: >> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/hotspot/webrev/index.html > Looks good. 
> >> JDK: >> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html > src/java.base/share/classes/java/lang/invoke/MethodHandles.java: > + public VarHandle unreflectVarHandle(Field f) throws IllegalAccessException { > + MemberName getField = new MemberName(f, false); > + MemberName putField = new MemberName(f, true); > + return getFieldVarHandleNoSecurityManager(getField.getReferenceKind(), putField.getReferenceKind(), > + f.getDeclaringClass(), getField, putField); > + } > > unreflectField has the following check: > Lookup lookup = f.isAccessible() ? IMPL_LOOKUP : this; > return lookup.getDirectFieldNoSecurityManager(field.getReferenceKind(), f.getDeclaringClass(), field); > } > > Why don't you do the same? > So as not to widen the places where the reflection accessibility bit can be leveraged (calls to setAccessible), and thus not support more things that can stomp on final fields. The documentation on unreflectVarHandle states: "Access checking is performed immediately on behalf of the lookup class, regardless of value of the field's accessible flag.? (There is a typo there i need to fix, ?of value? -> ?of the value?.) There is also a comment in the following code that is not quite correct when i updated the implementation with the accessibility restriction: private VarHandle getFieldVarHandleCommon(byte getRefKind, byte putRefKind, Class refc, MemberName getField, MemberName putField, boolean checkSecurity) throws IllegalAccessException { assert getField.isStatic() == putField.isStatic(); assert getField.isGetter() && putField.isSetter(); assert MethodHandleNatives.refKindIsStatic(getRefKind) == MethodHandleNatives.refKindIsStatic(putRefKind); assert MethodHandleNatives.refKindIsGetter(getRefKind) && MethodHandleNatives.refKindIsSetter(putRefKind); checkField(getRefKind, refc, getField); if (checkSecurity) checkSecurityManager(refc, getField); if (!putField.isFinal()) { // A VarHandle will only support updates final fields if allowed // modes is TRUSTED. In such cases the following checks are // no-ops and therefore there is no need to invoke if the field // is marked final checkField(putRefKind, refc, putField); if (checkSecurity) checkSecurityManager(refc, putField); } I will update to: // A VarHandle does not support updates to final fields, any // such VarHandle to a final field will be read-only and // therefore the following write-based accessibility checks are // only required for non-final fields Paul. > Otherwise, looks good. > > Best regards, > Vladimir Ivanov From john.r.rose at oracle.com Fri Feb 26 23:41:09 2016 From: john.r.rose at oracle.com (John Rose) Date: Fri, 26 Feb 2016 15:41:09 -0800 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56D0D2E2.6010503@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> <56CF1163.4040005@oracle.com> <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> <56CF86EE.6070704@oracle.com> <56CFF853.7020207@oracle.com> <56D027D3.9010609@oracle.com> <56D09FC4.3030004@oracle.com> <56D0D2E2.6010503@oracle.com> Message-ID: <4B0CD00B-187F-4FBE-94F0-35F6D9DF097F@oracle.com> +1 on recommending a fence op in the Java-doc. Also +1 on recommending keeping it in a local. That's overkill, but simple and safe to follow. ? 
John > On Feb 26, 2016, at 2:34 PM, Jim Graham wrote: > > Thanks Aleksey, > > As I said, I just wanted to see more objective diligence in the discussion of the risks here. > > With regard to documentation, I think the changes made to the javadoc include a lot of stating the obvious, which is fine, and with some tricky points mentioned as well. I think most engineers can handle "have I written to every element of the array" considerations, but one area where they may not have expertise would be in areas of how the compiler and/or processor cache mechanisms might reorder memory accesses in unexpected ways. For instance, an earlier comment showed some sort of mem-fence operation that was indicated to ensure that the data was written to the array in a thread-safe manner before any external access was allowed. (I'm guessing that the simple act of calling the method and returning the value would enforce a mem-fence in that case?) That consideration would be a good thing to describe in the javadoc. Also, perhaps recommending that the references be held in a local variable until the initialization phase is complete before storing the reference into a field (that might be part of stating the obvious, other than how it might interact with mem-fence considerations)? > > In the end, I'm not intending to be the voice of opposition on this, just hoping to see the discussion be a little more rounded... > > ...jim > >> On 2/26/2016 10:56 AM, Aleksey Shipilev wrote: >> Hi Jim, >> >> I agree with most of your points. >> >> You are correct that comparing allocateMemory and >> allocateUninitializedArray is cumbersome, and I was not trying to >> compare these. My comment was about Unsafe at large, notably >> peek-and-poke methods that we already have, and can already be used to >> leak out data. This is to frame the discussion into "Unsafe is unsafe" >> mood. >> >> This is not a regular JDK method. As many other Unsafe methods, it comes >> with caveats, and hordes of developers are not the target audience, even >> most JDK developers are not the target audience. >> >> The target audience is a tiny group of core library developers who are >> (admittedly) well-versed in reading the labels on unsafe methods before >> using them. As much as I would like to have a capability-based (and also >> fingerprint/retina-scan-based) access control to internal APIs, current >> incarnation of Unsafe requires engineering discipline from users. >> >> Unsafe is "off limits" for those who do not understand low-level >> mechanics (memory ordering, atomicity requirements, interaction with >> runtime, you name it). Unsafe is a special -- perhaps, the only! -- >> place in JDK where the tradeoff between performance and safety is >> heavily tilted towards performance. >> >> Anyone from the huge body of engineers who does not understand this and >> uses Unsafe as just another JDK class, could use a really good >> talking-to, and probably lots and lots of training. >> >>> On 02/26/2016 01:24 PM, Jim Graham wrote: >>> I'm seeing a fair amount of hand waving here and not much objective >>> analysis of the risks. It's important to nail down the issues if for >>> no other reason than to provide good advice on usage practices in >>> our documentation to our internal developers... >> >> Hopefully the updated Javadoc provides enough deterrent from accidental >> use. 
I'd be happy to amend this with even harsher wording, if you tell >> me the exact words :) >> >> Webrevs: >> http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.03/ >> http://cr.openjdk.java.net/~shade/8150465/webrev.hs.04/ >> >> Thanks, >> -Aleksey >> >> From aph at redhat.com Sat Feb 27 10:24:18 2016 From: aph at redhat.com (Andrew Haley) Date: Sat, 27 Feb 2016 10:24:18 +0000 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56D0D2E2.6010503@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> <56CF1163.4040005@oracle.com> <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> <56CF86EE.6070704@oracle.com> <56CFF853.7020207@oracle.com> <56D027D3.9010609@oracle.com> <56D09FC4.3030004@oracle.com> <56D0D2E2.6010503@oracle.com> Message-ID: <56D17952.5080503@redhat.com> On 26/02/16 22:34, Jim Graham wrote: > I think most engineers can handle "have I written to every element > of the array" considerations, but one area where they may not have > expertise would be in areas of how the compiler and/or processor > cache mechanisms might reorder memory accesses in unexpected ways. > For instance, an earlier comment showed some sort of mem-fence > operation that was indicated to ensure that the data was written to > the array in a thread-safe manner before any external access was > allowed. (I'm guessing that the simple act of calling the method > and returning the value would enforce a mem-fence in that case?) No, it doesn't. The Java memory model relies on dependency ordering. That is to say, if there has been a release fence between writing the array and storing a pointer to that array in a field Obj.f, then no other thread will observe the uninitialized array. This is because any reader of the data in the array must get its address via Obj.f, and all current CPUs enforce address dependency ordering. (An address dependency exists when the value returned by a read is used to compute the address of a subsequent read or write.) One other thing: it's probably not safe to use a StoreStore fence unless the contents of the array are all constants. The reasons for this are complex and I can provide a reference if required, but it's probably best simply to say "use a release fence." Andrew. From varming at gmail.com Mon Feb 29 02:16:48 2016 From: varming at gmail.com (Carsten Varming) Date: Sun, 28 Feb 2016 18:16:48 -0800 Subject: RFR 8150013: ParNew: Prune nmethods scavengable list In-Reply-To: References: Message-ID: Dear Hotspot developers, Any chance of a review of this patch? The patch cut between 7ms and 10ms of every ParNew with one application at Twitter and I expect a 1-2ms improvement for most applications. I touch the code cache and GenCollectedHeap, so I assume I need reviews from both gc and compiler reviewers. Thank you Tony Printezis for the review (posted on the hotspot-gc-dev list). I also need a sponsor. Carsten On Fri, Feb 19, 2016 at 10:52 AM, Carsten Varming wrote: > Dear Hotspot developers, > > I would like to contribute a patch for JDK-8150013 > . The current webrev > can be found here: > http://cr.openjdk.java.net/~cvarming/scavenge_nmethods_auto_prune/2/. > > Suggestions for improvements are very welcome. 
> > Carsten > From erik.joelsson at oracle.com Mon Feb 29 10:19:50 2016 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Mon, 29 Feb 2016 11:19:50 +0100 Subject: RFR: JDK-8150822: Fix typo in JDK-8150201 Message-ID: <56D41B46.7000900@oracle.com> In JDK-8150201, some debug flags were corrected. In one of the overrides, the file name was misspelled so the debug flag correction is not in effect. Bug: https://bugs.openjdk.java.net/browse/JDK-8150822 Patch: diff -r 63a9e10565c4 make/solaris/makefiles/amd64.make --- a/make/solaris/makefiles/amd64.make +++ b/make/solaris/makefiles/amd64.make @@ -39,7 +39,7 @@ # of OPT_CFLAGS. Restore it here. ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) OPT_CFLAGS/generateOptoStub.o += -g0 -xs - OPT_CFLAGS/LinearScan.o += -g0 -xs + OPT_CFLAGS/c1_LinearScan.o += -g0 -xs endif /Erik From vladimir.x.ivanov at oracle.com Mon Feb 29 11:26:20 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Mon, 29 Feb 2016 14:26:20 +0300 Subject: RFR JDK-8149644 Integrate VarHandles In-Reply-To: References: <9C47EF6F-80D6-467E-A5CB-2FDD5FF6AE17@oracle.com> <56D07DF3.6050805@oracle.com> Message-ID: <56D42ADC.4000808@oracle.com> Thanks for the clarifications. JDK part looks good. Best regards, Vladimir Ivanov On 2/26/16 10:01 PM, Paul Sandoz wrote: > Hi Vladimir, > > Thanks for the reviews. > >> On 26 Feb 2016, at 17:31, Vladimir Ivanov wrote: >> >>> Hotspot: >>> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/hotspot/webrev/index.html >> Looks good. >> >>> JDK: >>> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149644-varhandles-integration/jdk/webrev/index.html >> src/java.base/share/classes/java/lang/invoke/MethodHandles.java: >> + public VarHandle unreflectVarHandle(Field f) throws IllegalAccessException { >> + MemberName getField = new MemberName(f, false); >> + MemberName putField = new MemberName(f, true); >> + return getFieldVarHandleNoSecurityManager(getField.getReferenceKind(), putField.getReferenceKind(), >> + f.getDeclaringClass(), getField, putField); >> + } >> >> unreflectField has the following check: >> Lookup lookup = f.isAccessible() ? IMPL_LOOKUP : this; >> return lookup.getDirectFieldNoSecurityManager(field.getReferenceKind(), f.getDeclaringClass(), field); >> } >> >> Why don't you do the same? >> > > So as not to widen the places where the reflection accessibility bit can be leveraged (calls to setAccessible), and thus not support more things that can stomp on final fields. > > The documentation on unreflectVarHandle states: > > "Access checking is performed immediately on behalf of the lookup class, regardless of value of the field's accessible flag.? > > (There is a typo there i need to fix, ?of value? -> ?of the value?.) 
> > > There is also a comment in the following code that is not quite correct when i updated the implementation with the accessibility restriction: > > private VarHandle getFieldVarHandleCommon(byte getRefKind, byte putRefKind, > Class refc, MemberName getField, MemberName putField, > boolean checkSecurity) throws IllegalAccessException { > assert getField.isStatic() == putField.isStatic(); > assert getField.isGetter() && putField.isSetter(); > assert MethodHandleNatives.refKindIsStatic(getRefKind) == MethodHandleNatives.refKindIsStatic(putRefKind); > assert MethodHandleNatives.refKindIsGetter(getRefKind) && MethodHandleNatives.refKindIsSetter(putRefKind); > > checkField(getRefKind, refc, getField); > if (checkSecurity) > checkSecurityManager(refc, getField); > > if (!putField.isFinal()) { > // A VarHandle will only support updates final fields if allowed > // modes is TRUSTED. In such cases the following checks are > // no-ops and therefore there is no need to invoke if the field > // is marked final > checkField(putRefKind, refc, putField); > if (checkSecurity) > checkSecurityManager(refc, putField); > } > > I will update to: > > // A VarHandle does not support updates to final fields, any > // such VarHandle to a final field will be read-only and > // therefore the following write-based accessibility checks are > // only required for non-final fields > > Paul. > >> Otherwise, looks good. >> >> Best regards, >> Vladimir Ivanov > From dmitry.dmitriev at oracle.com Mon Feb 29 11:27:07 2016 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Mon, 29 Feb 2016 14:27:07 +0300 Subject: RFR: 8078112: Integrate Selection/Resolution test suite into jtreg tests Message-ID: <56D42B0B.6050008@oracle.com> Hello, Please review this patch, which integrates the selection/resolution test suite into the JTreg test suite. This test suite was developed by Eric McCorkle (OpenJDK: emc). Thanks Eric for that! This test suite uses a template-based generation scheme to exercise all aspects of the updated selection/resolution rules from JVMS 8. It runs a very large number of tests, representing a very wide variety of cases. Extensive javadoc comments are found throughout the test code which describe the suite's functioning in more detail. Also note that this suite has already undergone extensive review as part of the development process. JBS: https://bugs.openjdk.java.net/browse/JDK-8078112 webrev.00: http://cr.openjdk.java.net/~ddmitriev/8078112/webrev.00/ Testing: Jprt, RBT(all platforms, product & fastdebug builds) Thanks, Dmitry From magnus.ihse.bursie at oracle.com Mon Feb 29 12:12:52 2016 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Mon, 29 Feb 2016 13:12:52 +0100 Subject: RFR: JDK-8150822: Fix typo in JDK-8150201 In-Reply-To: <56D41B46.7000900@oracle.com> References: <56D41B46.7000900@oracle.com> Message-ID: <9B27630C-9DCF-42A8-9CBA-5C93EAFF2FA8@oracle.com> Looks good to me. /Magnus > 29 feb. 2016 kl. 11:19 skrev Erik Joelsson : > > In JDK-8150201, some debug flags were corrected. In one of the overrides, the file name was misspelled so the debug flag correction is not in effect. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8150822 > > Patch: > diff -r 63a9e10565c4 make/solaris/makefiles/amd64.make > --- a/make/solaris/makefiles/amd64.make > +++ b/make/solaris/makefiles/amd64.make > @@ -39,7 +39,7 @@ > # of OPT_CFLAGS. Restore it here. 
> ifeq ($(ENABLE_FULL_DEBUG_SYMBOLS),1) > OPT_CFLAGS/generateOptoStub.o += -g0 -xs > - OPT_CFLAGS/LinearScan.o += -g0 -xs > + OPT_CFLAGS/c1_LinearScan.o += -g0 -xs > endif > > > > /Erik From volker.simonis at gmail.com Mon Feb 29 12:16:11 2016 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 29 Feb 2016 13:16:11 +0100 Subject: Should we use '-fno-asynchronous-unwind-tables' to reduce the size of libjvm.so by 10 percent? Message-ID: Hi, We are currently building and linking the libjvm.so on Linux with -fno-exceptions because we currently don't use C++ exception handling in HotSpot. Nevertheless, g++ generates unwind tables (i.e. .eh_frame sections) in the object files and shared libraries which cannot be stripped from the binary. In the case of libjvm.so, these sections consume 10% of the whole library. It is possible to omit the creation of these sections by using the '-fno-asynchronous-unwind-tables' option during compilation and linking. I've verified that this indeed reduces the size of libjvm.so by 10% on Linux/x86_64 for a product build: -rwxrwxr-x 1 simonis simonis 18798859 Feb 24 18:32 hotspot/linux_amd64_compiler2/product/libjvm.so -rwxrwxr-x 1 simonis simonis 17049867 Feb 25 18:12 hotspot_no_unwind/linux_amd64_compiler2/product/libjvm.so The gcc documentation mentions that the unwind information is used "for stack unwinding from asynchronous events (such as debugger or garbage collector)". But various references [1,2] also mention that using '-fno-asynchronous-unwind-tables' together with '-g' will force gcc to create this information in the debug sections of the object files (i.e. .debug_frame) which can easily be stripped from the object files and libraries. As we build the product version of the libjvm.so with '-g' anyway, I'd suggest to use '-fno-asynchronous-unwind-tables' to reduce its size. I've done some quick tests (debugging, creation of hs_err files) with a product version of libjvm.so which was built with '-fno-asynchronous-unwind-tables' and couldn't find any drawbacks. I could observe that all the data from the current .eh_frame sections has been moved to the .debug_frame sections in the stripped out data of the libjvm.debuginfo file. I've opened "8150828: Consider using '-fno-asynchronous-unwind-tables' to reduce the size of libjvm.so by 10 percent" to track this issue: https://bugs.openjdk.java.net/browse/JDK-8150828 and would be interested in what others think about this "optimization"? The only reason for not using it I can currently think of is that we might have to switch exception handling on when we are integrating the new "JEP 281: HotSpot C++ Unit-Test Framework". 
Regards, Volker [1] http://stackoverflow.com/questions/26300819/why-gcc-compiled-c-program-needs-eh-frame-section [2] https://www.chromium.org/chromium-os/build/c-exception-support From aleksey.shipilev at oracle.com Mon Feb 29 16:32:49 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Mon, 29 Feb 2016 19:32:49 +0300 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <4B0CD00B-187F-4FBE-94F0-35F6D9DF097F@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> <56CF1163.4040005@oracle.com> <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> <56CF86EE.6070704@oracle.com> <56CFF853.7020207@oracle.com> <56D027D3.9010609@oracle.com> <56D09FC4.3030004@oracle.com> <56D0D2E2.6010503@oracle.com> <4B0CD00B-187F-4FBE-94F0-35F6D9DF097F@oracle.com> Message-ID: <56D472B1.6020205@oracle.com> On 27.02.2016 02:41, John Rose wrote: > +1 on recommending a fence op in the Java-doc. > > Also +1 on recommending keeping it in a local. That's overkill, but simple and safe to follow. Thanks Jim, John and Andrew for chiming in. Let's see if this Javadoc variant floats: http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.04/ http://cr.openjdk.java.net/~shade/8150465/webrev.hs.04/ -Aleksey From james.graham at oracle.com Fri Feb 26 22:34:10 2016 From: james.graham at oracle.com (Jim Graham) Date: Fri, 26 Feb 2016 14:34:10 -0800 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56D09FC4.3030004@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> <56CF1163.4040005@oracle.com> <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> <56CF86EE.6070704@oracle.com> <56CFF853.7020207@oracle.com> <56D027D3.9010609@oracle.com> <56D09FC4.3030004@oracle.com> Message-ID: <56D0D2E2.6010503@oracle.com> Thanks Aleksey, As I said, I just wanted to see more objective diligence in the discussion of the risks here. With regard to documentation, I think the changes made to the javadoc include a lot of stating the obvious, which is fine, and with some tricky points mentioned as well. I think most engineers can handle "have I written to every element of the array" considerations, but one area where they may not have expertise would be in areas of how the compiler and/or processor cache mechanisms might reorder memory accesses in unexpected ways. For instance, an earlier comment showed some sort of mem-fence operation that was indicated to ensure that the data was written to the array in a thread-safe manner before any external access was allowed. (I'm guessing that the simple act of calling the method and returning the value would enforce a mem-fence in that case?) That consideration would be a good thing to describe in the javadoc. Also, perhaps recommending that the references be held in a local variable until the initialization phase is complete before storing the reference into a field (that might be part of stating the obvious, other than how it might interact with mem-fence considerations)? In the end, I'm not intending to be the voice of opposition on this, just hoping to see the discussion be a little more rounded... ...jim On 2/26/2016 10:56 AM, Aleksey Shipilev wrote: > Hi Jim, > > I agree with most of your points. > > You are correct that comparing allocateMemory and > allocateUninitializedArray is cumbersome, and I was not trying to > compare these. 
My comment was about Unsafe at large, notably > peek-and-poke methods that we already have, and can already be used to > leak out data. This is to frame the discussion into "Unsafe is unsafe" > mood. > > This is not a regular JDK method. As many other Unsafe methods, it comes > with caveats, and hordes of developers are not the target audience, even > most JDK developers are not the target audience. > > The target audience is a tiny group of core library developers who are > (admittedly) well-versed in reading the labels on unsafe methods before > using them. As much as I would like to have a capability-based (and also > fingerprint/retina-scan-based) access control to internal APIs, current > incarnation of Unsafe requires engineering discipline from users. > > Unsafe is "off limits" for those who do not understand low-level > mechanics (memory ordering, atomicity requirements, interaction with > runtime, you name it). Unsafe is a special -- perhaps, the only! -- > place in JDK where the tradeoff between performance and safety is > heavily tilted towards performance. > > Anyone from the huge body of engineers who does not understand this and > uses Unsafe as just another JDK class, could use a really good > talking-to, and probably lots and lots of training. > > On 02/26/2016 01:24 PM, Jim Graham wrote: >> I'm seeing a fair amount of hand waving here and not much objective >> analysis of the risks. It's important to nail down the issues if for >> no other reason than to provide good advice on usage practices in >> our documentation to our internal developers... > > Hopefully the updated Javadoc provides enough deterrent from accidental > use. I'd be happy to amend this with even harsher wording, if you tell > me the exact words :) > > Webrevs: > http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.03/ > http://cr.openjdk.java.net/~shade/8150465/webrev.hs.04/ > > Thanks, > -Aleksey > > From brian.toal at gmail.com Mon Feb 29 16:32:00 2016 From: brian.toal at gmail.com (Brian Toal) Date: Mon, 29 Feb 2016 08:32:00 -0800 Subject: safepoint accumulating time counter Message-ID: Hi, does the JVM expose total time spent in safepoints via an MBean or another mechanism other than PrintGCApplicationStoppedTime? Specifically, I'm curious whether this feature exists in Java 8. From charlie.hunt at oracle.com Mon Feb 29 17:35:57 2016 From: charlie.hunt at oracle.com (charlie hunt) Date: Mon, 29 Feb 2016 11:35:57 -0600 Subject: safepoint accumulating time counter In-Reply-To: References: Message-ID: <7845BD64-DD13-4ED8-A2A1-B6858BF2FA7A@oracle.com> Hi Brian, Nice to hear from you! Look at +PrintSafepointStatistics. I think you will figure it out from there. AFAIR, info is not exposed in an MBean. hths, Charlie > On Feb 29, 2016, at 10:32 AM, Brian Toal wrote: > > Hi, does the JVM expose total time spent in safepoints via an MBean or > another mechanism other than PrintGCApplicationStoppedTime? Specifically, > I'm curious whether this feature exists in Java 8. From varming at gmail.com Mon Feb 29 18:02:22 2016 From: varming at gmail.com (Carsten Varming) Date: Mon, 29 Feb 2016 13:02:22 -0500 Subject: safepoint accumulating time counter In-Reply-To: <7845BD64-DD13-4ED8-A2A1-B6858BF2FA7A@oracle.com> References: <7845BD64-DD13-4ED8-A2A1-B6858BF2FA7A@oracle.com> Message-ID: Dear Brian, I believe the HotspotRuntimeMBean[1] exports total time spent in safepoints. 
[1]: http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/file/5972ad24ef27/src/share/classes/sun/management/HotspotRuntimeMBean.java Carsten On Mon, Feb 29, 2016 at 12:35 PM, charlie hunt wrote: > Hi Brian, > > Nice to hear from you! > > Look at +PrintSafepointStatistics. I think you will figure it out from > there. > > AFAIR, info is not exposed in an MBean. > > hths, > > Charlie > > > On Feb 29, 2016, at 10:32 AM, Brian Toal wrote: > > > > Hi, does the JVM expose total time spent in safepoints via an MBean or > > another mechanism other than PrintGCApplicationStoppedTime? > Specifically, > > I'm curious whether this feature exists in Java 8. > From roland.westrelin at oracle.com Mon Feb 29 20:52:57 2016 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Mon, 29 Feb 2016 21:52:57 +0100 Subject: [8u] backport of 8148353: [linux-sparc] Crash in libawt.so on Linux SPARC Message-ID: <826FAB32-8FC1-4FDA-AE0E-0F3D083D8AFA@oracle.com> Hi, Please approve and review the following backport to 8u. 8148353 was pushed to jdk9 last week (on Wednesday) and it hasn?t caused any new failures during nightly testing. The change doesn?t apply cleanly to 8u: I had to rework the test because the infrastructure to run native jtreg tests doesn?t seem to exist in 8. I restricted the test to linux-sparc to keep it simple. https://bugs.openjdk.java.net/browse/JDK-8148353 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/1f4f4866aee0 New webrev: http://cr.openjdk.java.net/~roland/8148353/webrev.8u.00/ review thread: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2016-February/thread.html#21530 Note that when I pushed 8148353 to 9, the fix version was set to 8 and so a backport was created for 9. Roland. From charlie.hunt at oracle.com Mon Feb 29 20:57:50 2016 From: charlie.hunt at oracle.com (charlie hunt) Date: Mon, 29 Feb 2016 14:57:50 -0600 Subject: safepoint accumulating time counter In-Reply-To: References: <7845BD64-DD13-4ED8-A2A1-B6858BF2FA7A@oracle.com> Message-ID: Hi Carsten, Thanks for sharing! I?ve kinda lost track of what all is exposed via MBeans ? glad to see Safepoint info is indeed exposed. charlie > On Feb 29, 2016, at 12:02 PM, Carsten Varming wrote: > > Dear Brian, > > I believe the HotspotRuntimeMBean[1] exports total time spent in safepoints. > > [1]: http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/file/5972ad24ef27/src/share/classes/sun/management/HotspotRuntimeMBean.java > > Carsten > > On Mon, Feb 29, 2016 at 12:35 PM, charlie hunt > wrote: > Hi Brian, > > Nice to hear from you! > > Look at +PrintSafepointStatistics. I think you will figure it out from there. > > AFAIR, info is not exposed in an MBean. > > hths, > > Charlie > > > On Feb 29, 2016, at 10:32 AM, Brian Toal > wrote: > > > > Hi, does the JVM expose total time spent in safepoints via an MBean or > > another mechanism other than PrintGCApplicationStoppedTime? Specifically, > > I'm curious whether this feature exists in Java 8. 
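For anyone who wants to poke at those counters directly, a small sketch against the internal sun.management API Carsten linked above. This is unsupported internal API, so hedge accordingly; and as far as I can tell these HotSpot internal MBeans are not registered with the platform MBeanServer by default, which is also why they do not show up in a stock jconsole view. The SafepointStats class name is made up; the getters are the ones declared in the linked interface.

    import sun.management.HotspotRuntimeMBean;
    import sun.management.ManagementFactoryHelper;

    public class SafepointStats {
        public static void main(String[] args) {
            // In-process access only; this does not go through the platform MBeanServer.
            HotspotRuntimeMBean rt = ManagementFactoryHelper.getHotspotRuntimeMBean();
            System.out.println("safepoints taken:    " + rt.getSafepointCount());
            System.out.println("total stopped time:  " + rt.getTotalSafepointTime() + " ms");
            System.out.println("time-to-safepoint:   " + rt.getSafepointSyncTime() + " ms");
        }
    }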
> From james.graham at oracle.com Mon Feb 29 19:19:14 2016 From: james.graham at oracle.com (Jim Graham) Date: Mon, 29 Feb 2016 11:19:14 -0800 Subject: RFR (S) 8150465: Unsafe methods to produce uninitialized arrays In-Reply-To: <56D472B1.6020205@oracle.com> References: <56CE3222.6040207@oracle.com> <56CECE5E.4080105@redhat.com> <56CF053A.3070607@oracle.com> <56CF0C1A.4040600@redhat.com> <56CF1163.4040005@oracle.com> <6D79C6DB-B770-4CAA-9338-154589441F8B@oracle.com> <56CF86EE.6070704@oracle.com> <56CFF853.7020207@oracle.com> <56D027D3.9010609@oracle.com> <56D09FC4.3030004@oracle.com> <56D0D2E2.6010503@oracle.com> <4B0CD00B-187F-4FBE-94F0-35F6D9DF097F@oracle.com> <56D472B1.6020205@oracle.com> Message-ID: <56D499B2.7060100@oracle.com> That looks great! Maybe I'm missing something (memory fences are not in my wheelhouse), but is storeFence() the right fence to use? I would think you would want stores before the fence to not be reordered wrt loads after the fence, but storeFence() only protects against stores after the fence (according to its doc comment). I couldn't find a storeLoadFence() method...? (and storeFence does not incorporate storeload protection) One issue I found confusing, the docs for loadFence() say that a loadloadFence is not provided and gives reasons - then 2 methods later we have loadloadFence() - it turns out that loadloadFence calls the full loadFence method, but its presence contradicts the comment in the earlier method, which is just confusing. It should probably be deprecated and mention that it is provided for convenience and actually does a full loadFence() and move the reason from the other method to the comment on the loadload method (a caller to loadFence() wouldn't likely care about that issue, but a caller to loadloadFence() would need to know it so that documentation really belongs in the latter method). Perhaps the comment was added to loadFence before the convenience loadload method was added? ...jim On 2/29/2016 8:32 AM, Aleksey Shipilev wrote: > On 27.02.2016 02:41, John Rose wrote: >> +1 on recommending a fence op in the Java-doc. >> >> Also +1 on recommending keeping it in a local. That's overkill, but simple and safe to follow. > > Thanks Jim, John and Andrew for chiming in. > > Let's see if this Javadoc variant floats: > http://cr.openjdk.java.net/~shade/8150465/webrev.jdk.04/ > http://cr.openjdk.java.net/~shade/8150465/webrev.hs.04/ > > -Aleksey > From vladimir.kozlov at oracle.com Mon Feb 29 23:35:24 2016 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 29 Feb 2016 15:35:24 -0800 Subject: [8u] backport of 8148353: [linux-sparc] Crash in libawt.so on Linux SPARC In-Reply-To: <826FAB32-8FC1-4FDA-AE0E-0F3D083D8AFA@oracle.com> References: <826FAB32-8FC1-4FDA-AE0E-0F3D083D8AFA@oracle.com> Message-ID: <56D4D5BC.30503@oracle.com> Looks good. Note, the code fix was applied cleanly to jdk 8u. Only test have to be modified. Thanks, Vladimir On 2/29/16 12:52 PM, Roland Westrelin wrote: > Hi, > > Please approve and review the following backport to 8u. > > 8148353 was pushed to jdk9 last week (on Wednesday) and it hasn?t caused any new failures during nightly testing. The change doesn?t apply cleanly to 8u: I had to rework the test because the infrastructure to run native jtreg tests doesn?t seem to exist in 8. I restricted the test to linux-sparc to keep it simple. 
> > https://bugs.openjdk.java.net/browse/JDK-8148353 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/1f4f4866aee0 > > New webrev: > http://cr.openjdk.java.net/~roland/8148353/webrev.8u.00/ > > review thread: > http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2016-February/thread.html#21530 > > Note that when I pushed 8148353 to 9, the fix version was set to 8 and so a backport was created for 9. > > Roland. > From brian.toal at gmail.com Mon Feb 29 23:04:58 2016 From: brian.toal at gmail.com (Brian Toal) Date: Mon, 29 Feb 2016 15:04:58 -0800 Subject: safepoint accumulating time counter In-Reply-To: References: <7845BD64-DD13-4ED8-A2A1-B6858BF2FA7A@oracle.com> Message-ID: When I pull up jconsole and connect to a 1.8 runtime, I see com.sun.management (not sun.management) and under the HotSpotDiagnostic MBean I don't see any of the attributes mentioned. Is there something I need to do to enable the sun.management.HotspotRuntimeMBean? On Mon, Feb 29, 2016 at 10:02 AM, Carsten Varming wrote: > Dear Brian, > > I believe the HotspotRuntimeMBean[1] exports total time spent in > safepoints. > > [1]: > http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/file/5972ad24ef27/src/share/classes/sun/management/HotspotRuntimeMBean.java > > Carsten > > On Mon, Feb 29, 2016 at 12:35 PM, charlie hunt > wrote: > >> Hi Brian, >> >> Nice to hear from you! >> >> Look at +PrintSafepointStatistics. I think you will figure it out from >> there. >> >> AFAIR, info is not exposed in an MBean. >> >> hths, >> >> Charlie >> >> > On Feb 29, 2016, at 10:32 AM, Brian Toal wrote: >> > >> > Hi, does the JVM expose total time spent in safepoints via an MBean or >> > another mechanism other than PrintGCApplicationStoppedTime? >> Specifically, >> > I'm curious whether this feature exists in Java 8. >> > >