From yasuenag at gmail.com Tue Mar 1 01:03:43 2016 From: yasuenag at gmail.com (Yasumasa Suenaga) Date: Tue, 01 Mar 2016 10:03:43 +0900 Subject: [PING] PoC for JDK-4347142: Need method to set Password protection to Zip entries In-Reply-To: <56BB4AFC.3010403@gmail.com> References: <565FB732.4090401@oracle.com> <566C11A7.1070309@gmail.com> <56716147.20100@gmail.com> <567164D8.8040201@oracle.com> <5671796C.1030406@gmail.com> <5672BC96.3080301@gmail.com> <56765A42.3060307@oracle.com> <568C3F46.3040801@oracle.com> <568D6553.4090108@oracle.com> <56BB4A5E.60801@gmail.com> <56BB4AFC.3010403@gmail.com> Message-ID: <56D4EA6F.7010809@gmail.com> Hi all, PING: Could you review new implementation for ZIP encryption? >> http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.04/ > http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.04/Test.java Thanks, Yasumasa On 2016/02/10 23:36, Yasumasa Suenaga wrote: > I've uploaded testcase here: > > http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.04/Test.java > > > Yasumasa > > > On 2016/02/10 23:34, Yasumasa Suenaga wrote: >> Hi Sherman, >> >> I've refactored a patch for this enhancement: >> >> http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.04/ >> >> 1. I changed ZipCryption and implementation class to package private. >> 2. Encryption / Decryption key is allowed passphrase string. >> 3. I added passphrase and validation methods to ZipEntry. >> >> I would like to hear your comment. >> >> >> Thanks, >> >> Yasumasa >> >> >> On 2016/02/01 18:23, KUBOTA Yuji wrote: >>> Hi Sherman and all, >>> >>> Could you please let know your thought and the past case about AES? >>> >>> Thanks, >>> Yuji >>> >>> >>> 2016-01-08 0:01 GMT+09:00 KUBOTA Yuji : >>>> Hi Sherman, >>>> >>>> Thank you for sharing! 
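[Editor's note: for readers unfamiliar with the "traditional" scheme debated throughout this thread, the PKWARE cipher is small enough to sketch in full. The code below follows APPNOTE.TXT sections 6.0/6.1 (three 32-bit keys advanced through a raw CRC-32 table step) and is an illustration only — it is not the code from the webrev, and the real scheme additionally prepends a 12-byte random header used for password verification.]

```java
import java.util.Arrays;

/** Sketch of the "traditional" PKWARE cipher (APPNOTE.TXT 6.0/6.1).
 *  Illustration only -- not the webrev code. */
class TraditionalZipCipher {
    private static final int[] CRC_TABLE = new int[256];
    static {
        for (int n = 0; n < 256; n++) {
            int c = n;
            for (int k = 0; k < 8; k++)
                c = ((c & 1) != 0) ? 0xEDB88320 ^ (c >>> 1) : c >>> 1;
            CRC_TABLE[n] = c;
        }
    }

    // The three 32-bit keys, seeded as mandated by the spec.
    private int key0 = 0x12345678, key1 = 0x23456789, key2 = 0x34567890;

    TraditionalZipCipher(byte[] password) {
        for (byte b : password) updateKeys(b);
    }

    // Raw (unconditioned) CRC-32 table step, as used by the key schedule.
    private static int crc32(int crc, byte b) {
        return CRC_TABLE[(crc ^ b) & 0xFF] ^ (crc >>> 8);
    }

    private void updateKeys(byte plain) {
        key0 = crc32(key0, plain);
        key1 = (key1 + (key0 & 0xFF)) * 134775813 + 1;
        key2 = crc32(key2, (byte) (key1 >>> 24));
    }

    // One keystream byte; "temp" is a 16-bit quantity in the spec.
    private int keystreamByte() {
        int temp = (key2 & 0xFFFF) | 2;
        return ((temp * (temp ^ 1)) >>> 8) & 0xFF;
    }

    byte encrypt(byte plain) {
        byte cipher = (byte) (plain ^ keystreamByte());
        updateKeys(plain);   // keys always advance on the plaintext byte
        return cipher;
    }

    byte decrypt(byte cipher) {
        byte plain = (byte) (cipher ^ keystreamByte());
        updateKeys(plain);
        return plain;
    }
}
```

Two instances initialized with the same password produce identical keystreams, so encrypt/decrypt round-trip byte for byte; the known-plaintext weakness Sherman quotes later in the thread stems from how cheaply these three keys can be recovered.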
>>>> >>>> 2016-01-07 4:04 GMT+09:00 Xueming Shen : >>>>> The reason that I'm not convinced that we really need a public interface of >>>>> ZipCryption here >>>>> is I don't know how useful/helpful/likely it would be going forward that >>>>> someone might really >>>>> use this interface to implement other encryption(s), especially the pkware >>>>> proprietary one, >>>>> I doubt it might be not that straightforward. >>>> >>>> In this proposal, we aim to support "traditional" because most people need it >>>> in secure environment. BTW, could you please share the reason why you did >>>> not support WinZip AES? Do you have a plan to support in the future? >>>> >>>> If you can share the reason, we want to decide the way of implementation with >>>> consideration for your information. I think we can implement by two >>>> way as below. >>>> >>>> 1. Implementing by reference to >>>> http://cr.openjdk.java.net/~sherman/zipmisc/ZipFile.java >>>> This is good simply API. If we need to implement other encryption(s), >>>> try to refactor it. >>>> >>>> 2. Implementing with a package private interface of ZipCryption for next step. >>>> This has two problems as your advice. >>>> >>>> We agree with that the "encryption" and "compression" should be >>>> separated logically. >>>> However, current implementation compress the encrypted data, and buffering it. >>>> It is too tightly-coupled, so we need refactoring to separate the >>>> managing buffer >>>> processing and the stream processing of InflaterInputStream / >>>> DeflaterOutputStream. >>>> >>>> About "push back the bytes belong to next entry", we think >>>> InflaterInputStream.originBuf >>>> of our PoC do not required the needed info. Do this implements have problem? 
>>>> >>>> http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.00/src/java.base/share/classes/java/util/zip/InflaterInputStream.java.cdiff.html >>>> >>>> Thanks, >>>> Yuji >>>> >>>>> In fact I did have a draft implementation that supports WinZip AES about 5-6 >>>>> years ago :-) >>>>> (which also supports compression methods bzip and lzma, btw) Here is the >>>>> top class, It appears >>>>> a general interface might not be that helpful and it might be ideal to >>>>> simply implement it inside >>>>> the JDK, as what is proposed here, when it's really desired. >>>>> >>>>> http://cr.openjdk.java.net/~sherman/zipmisc/ZipFile.java >>>>> >>>>> It is a ZipFile based implementation, so it does not have the headache that >>>>> ZipInputStream has, >>>>> such as to push back the bytes belong to next entry, since the loc might not >>>>> have the needed >>>>> info regarding the size/csize in stream mode. >>>>> >>>>> From abstract point of view. The "encryption" and "compression" are >>>>> different layers, it would >>>>> be ideal to have them in separate classes logically, instead of mixing the >>>>> encryption into >>>>> compression. Sure, it might be convenient and probably may have better >>>>> performance to mix >>>>> them in certain use scenario, but the "encryption" should never appear in >>>>> the public interface >>>>> of those compression classes. Package private interface should be fine, if >>>>> have to. >>>>> >>>>> -Sherman >>>>> >>>>> >>>>>> >>>>>> 2016-01-06 7:10 GMT+09:00 Xueming Shen : >>>>>>> >>>>>>> it appears that instead of adding "password" specific method to these >>>>>>> classes directly, it might be more appropriate to extend the ZipEntry >>>>>>> class >>>>>>> for such "password" functionality. For example, with a pair of new >>>>>>> methods >>>>>>> >>>>>>> boolean ZipEntry.isTraditionalEncryption(). >>>>>>> void ZipEntry.setTraditionalEncryption(String password); >>>>>> >>>>>> Thanks advice, I agree. 
We should re-design the API to extend the >>>>>> ZipEntry class. >>>>>> >>>>>>> The encryption support should/can be added naturally/smoothly with >>>>>>> ZipFile.getInputStream(e), ZipInputstream and >>>>>>> ZipOutputStream.putNextEntry(e), >>>>>>> with no extra new method in these two classes. The implementation checks >>>>>>> the flag (bit0, no bit 6) first and then verifies the password, as an >>>>>>> implementation details. >>>>>> >>>>>> Agree. For this proposal, we aim to support only traditional >>>>>> encryption. So I think we should also check bit 6. >>>>>> >>>>>>> For ZipFile and ZipInputStream, we can add note to the api doc to force >>>>>>> the >>>>>>> invoker to check if the returned ZipEntry indicates it's an encrypted >>>>>>> entry. >>>>>>> If yes, it must to set the appropriate password to the returned ZipEntry >>>>>>> via >>>>>>> ZipEntry.setTraditionalEncryption(password); before reading any byte from >>>>>>> the input stream. >>>>>> >>>>>> Yes, we have to add note the flow of codes to the JavaDoc. >>>>>> >>>>>>> Again, we should not have any "encryption" related public field/method in >>>>>>> DeflaterOutputStream/InflaterInputStream. Ideally these two classes >>>>>>> really >>>>>>> should not be aware of "encryption" at all. >>>>>> >>>>>> Agree, but I think we might be faced technical difficulty about a >>>>>> processing between zlib and the internal buffer of InflaterInputStream >>>>>> / DeflaterOutputStream. Please give us time to implement. >>>>>> >>>>>>> -Sherman >>>>>> >>>>>> Thanks, >>>>>> Yuji >>>>>> >>>>>> >>>>>>> On 01/04/2016 06:26 AM, KUBOTA Yuji wrote: >>>>>>>> >>>>>>>> Hi Sherman and all, >>>>>>>> >>>>>>>> Happy new year to everyone! >>>>>>>> >>>>>>>> Please let know your feedback about this proposal. 
:-) >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Yuji >>>>>>>> >>>>>>>> 2015-12-21 22:38 GMT+09:00 KUBOTA Yuji: >>>>>>>>> >>>>>>>>> Hi Sherman, >>>>>>>>> >>>>>>>>> 2015-12-20 16:35 GMT+09:00 Xueming Shen: >>>>>>>>>> >>>>>>>>>> It is no longer necessary to touch the native code (zip_util.c/h) >>>>>>>>>> after >>>>>>>>>> the >>>>>>>>>> native ZipFile implementation has been moved up to the java level. >>>>>>>>>> Those >>>>>>>>>> native code are for vm access only now, which I dont think care about >>>>>>>>>> the >>>>>>>>>> password support at all. >>>>>>>>> >>>>>>>>> Thanks for your information. We do not take care the native. >>>>>>>>> >>>>>>>>> I discussed with Yasumasa, and our thought is as below. >>>>>>>>> >>>>>>>>>> (1) what's the benefit of exposing the public interface ZipCryption? >>>>>>>>>> the >>>>>>>>>> real >>>>>>>>>> question is whether or not this interface is good enough for other >>>>>>>>>> encryption >>>>>>>>>> implementation to plugin their implementation to support the >>>>>>>>>> ZipFile/Input/ >>>>>>>>>> OutputStream to their encryption spec. >>>>>>>>> >>>>>>>>> We aimed that the public interface ZipCryption supports the >>>>>>>>> extensibillity for other encrypt engine. The JDK core libs developers >>>>>>>>> have to implementation ZipyCryption only. If not provide, the JDK >>>>>>>>> developers must implement ZipStream/Entry by JDK API to design the >>>>>>>>> data structure of entry. >>>>>>>>> If you want to use binary key data such as PKI, you can implement new >>>>>>>>> encrypt/decrypt engine by ZipCryption interface. >>>>>>>>> So we think we should provide this interface to be clearly how to >>>>>>>>> implement a new engine, e.g., cipher algorithm, cipher strength and >>>>>>>>> converting the header, etc. 
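[Editor's note: the extensibility Yuji describes — a single engine interface behind which different ciphers can sit — can be pictured as below. The shape is hypothetical; the names and methods of the actual webrev's ZipCryption may differ. The XOR stand-in is deliberately insecure and exists only to exercise the contract.]

```java
/** Hypothetical shape of a pluggable engine interface; the actual
 *  webrev's interface may differ. */
interface ZipCryption {
    /** (Re)initialize engine state from key material (passphrase bytes, a PKI key, ...). */
    void init(byte[] keyMaterial);
    /** Transform a buffer in place before it is written to the archive. */
    void encrypt(byte[] buf, int off, int len);
    /** Transform a buffer in place after it is read from the archive. */
    void decrypt(byte[] buf, int off, int len);
}

/** Deliberately insecure stand-in, only to exercise the contract. */
class XorZipCryption implements ZipCryption {
    private byte k;
    public void init(byte[] keyMaterial) {
        k = 0;
        for (byte b : keyMaterial) k ^= b;
    }
    public void encrypt(byte[] buf, int off, int len) {
        for (int i = off; i < off + len; i++) buf[i] ^= k;
    }
    public void decrypt(byte[] buf, int off, int len) {
        encrypt(buf, off, len);   // XOR is its own inverse
    }
}
```

An AES-based engine would implement the same three methods, which is the "other encrypt engine" extensibility argued for above.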
>>>>>>>>> >>>>>>>>>> (2) it seems like it might be possible to hide most of the >>>>>>>>>> implementation >>>>>>>>>> and only expose the "String password" (instead of the ZipCryption) as >>>>>>>>>> the >>>>>>>>>> public interface to support the "traditional" encryption. This depends >>>>>>>>>> on the >>>>>>>>>> result of (1) though. >>>>>>>>> >>>>>>>>> Thanks for your clues. We think the string password at first. However, >>>>>>>>> we should also create a new binary interface given we support PKI in >>>>>>>>> the future. >>>>>>>>> >>>>>>>>>> (3) I'm concerned of pushing ZipCryption into >>>>>>>>>> InflaterInputStream/DeflaterOutputStream. >>>>>>>>>> It might be worth considering to replace the ZipCryption >>>>>>>>>> implementation >>>>>>>>>> with >>>>>>>>>> a pair of FilterOutput/InputStream. It would be easy and reasonable to >>>>>>>>>> use >>>>>>>>>> the FilterOutputStream for the ZipOutputStream and the >>>>>>>>>> FilterInputStream >>>>>>>>>> for the >>>>>>>>>> ZipFile. The PushbackInputStream in ZipInputStream might be an issue >>>>>>>>>> ... >>>>>>>>> >>>>>>>>> Thanks for your clues, too. Honestly speaking, we think the current >>>>>>>>> zip implementation may break the data when used PushbackInputStream >>>>>>>>> for the following reasons. >>>>>>>>> >>>>>>>>> * PushbackInputStream uses an unique internal buffer for re-read >>>>>>>>> operation. >>>>>>>>> * But, InflaterInputStream provide date to Inflater per reads and >>>>>>>>> buffer by JNI (zlib). >>>>>>>>> * So we think PushbackInputStream is poor compatibility with >>>>>>>>> InflaterInputStream. >>>>>>>>> >>>>>>>>> We generally use InputStream through ZipEntry#getInputStream(). We do >>>>>>>>> not touch FileInputStream for reading ZIP data. If we call unread() >>>>>>>>> when we use PushbackInputStream as reading ZIP archive, we guess that >>>>>>>>> it will break the reading data. >>>>>>>>> So, our approach do not affect the PushbackInputStream. >>>>>>>>> What do you think about this? 
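[Editor's note: Sherman's layering suggestion in point (3) — keep encryption out of InflaterInputStream/DeflaterOutputStream by composing plain Filter streams — can be demonstrated end to end. The XOR "cipher" here is a placeholder, not the PKWARE scheme; the point is only the composition order: compression on top, encryption underneath.]

```java
import java.io.*;
import java.util.zip.*;

/** Compression layered over a Filter-stream "cipher" (placeholder XOR). */
class LayeringDemo {
    static final int KEY = 0x5A;

    static byte[] compressThenEncrypt(byte[] plain) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        OutputStream enc = new FilterOutputStream(sink) {
            @Override public int hashCode() { return super.hashCode(); }
            @Override public void write(int b) throws IOException {
                out.write(b ^ KEY);   // deflated bytes are transformed on the way out
            }
        };
        try (DeflaterOutputStream def = new DeflaterOutputStream(enc)) {
            def.write(plain);
        }
        return sink.toByteArray();
    }

    static byte[] decryptThenDecompress(byte[] data) throws IOException {
        InputStream dec = new FilterInputStream(new ByteArrayInputStream(data)) {
            @Override public int read() throws IOException {
                int b = in.read();
                return b < 0 ? b : (b ^ KEY);
            }
            @Override public int read(byte[] buf, int off, int len) throws IOException {
                int n = in.read(buf, off, len);
                for (int i = 0; i < n; i++) buf[off + i] ^= KEY;
                return n;
            }
        };
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InflaterInputStream inf = new InflaterInputStream(dec)) {
            byte[] buf = new byte[512];
            for (int n; (n = inf.read(buf)) > 0; ) out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```

DeflaterOutputStream and InflaterInputStream never learn that encryption exists; swapping in a real cipher only means replacing the two Filter streams, which is exactly the separation of layers argued for above.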
>>>>>>>>> >>>>>>>>>> (4) It seems the ZipOutputStream only supports the "stream based" >>>>>>>>>> password, while >>>>>>>>>> the ZipInputStream supports the "entry based" password. Do we really >>>>>>>>>> need >>>>>>>>>> "entry based" support here? >>>>>>>>> >>>>>>>>> As your suggestion, we should support "entry based". We will start to >>>>>>>>> implement "entry based" after this discussion is closed. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Yuji >>>>>>>>> >>>>>>>>>> On 12/17/15, 9:45 PM, Yasumasa Suenaga wrote: >>>>>>>>>>> >>>>>>>>>>> Hi Jason, >>>>>>>>>>> >>>>>>>>>>> Thank you for your comment. >>>>>>>>>>> I've fixed it in new webrev: >>>>>>>>>>> http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.03/ >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> >>>>>>>>>>> Yasumasa >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On 2015/12/17 0:33, Jason Mehrens wrote: >>>>>>>>>>>> >>>>>>>>>>>> The null check of 'entry' at line 351 of ZipFile.getInputStream is >>>>>>>>>>>> in >>>>>>>>>>>> conflict with line 350 and 348. >>>>>>>>>>>> >>>>>>>>>>>> ________________________________________ >>>>>>>>>>>> From: core-libs-dev on >>>>>>>>>>>> behalf >>>>>>>>>>>> of >>>>>>>>>>>> Yasumasa Suenaga >>>>>>>>>>>> Sent: Wednesday, December 16, 2015 8:47 AM >>>>>>>>>>>> To: Sergey Bylokhov; Xueming Shen >>>>>>>>>>>> Cc: core-libs-dev at openjdk.java.net >>>>>>>>>>>> Subject: Re: [PING] PoC for JDK-4347142: Need method to set Password >>>>>>>>>>>> protection to Zip entries >>>>>>>>>>>> >>>>>>>>>>>> Hi Sergey, >>>>>>>>>>>> >>>>>>>>>>>> Thank you for your comment. >>>>>>>>>>>> >>>>>>>>>>>> I added that description in new webrev: >>>>>>>>>>>> http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.02/ >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> >>>>>>>>>>>> Yasumasa >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On 2015/12/16 22:19, Sergey Bylokhov wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> Should the new methods describe how they will work in case of null >>>>>>>>>>>>> params? 
>>>>>>>>>>>>> >>>>>>>>>>>>> On 16/12/15 16:04, Yasumasa Suenaga wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> I adapted this enhancement after JDK-8145260: >>>>>>>>>>>>>> http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.01/ >>>>>>>>>>>>>> >>>>>>>>>>>>>> Could you review it? >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>> >>>>>>>>>>>>>> Yasumasa >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 2015/12/12 21:23, Yasumasa Suenaga wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi Sherman, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Our proposal is affected by JDK-8142508. >>>>>>>>>>>>>>> We have to change ZipFile.java and and ZipFile.c . >>>>>>>>>>>>>>> Thus we will create a new webrev for current (after 8142508) >>>>>>>>>>>>>>> jdk9/dev >>>>>>>>>>>>>>> repos. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Do you have any comments about current webrev? >>>>>>>>>>>>>>> http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.00/ >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> If you have comments, we will fix them in new webrev. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Yasumasa >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On 2015/12/03 16:51, KUBOTA Yuji wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi Sherman, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks for your quick response :) >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I aimed to implement the "traditional" at this proposal by the >>>>>>>>>>>>>>>> below >>>>>>>>>>>>>>>> reasons. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> * We want to prepare API for encrypted zip files at first. >>>>>>>>>>>>>>>> * Many people use the "traditional" in problem-free scope >>>>>>>>>>>>>>>> like a >>>>>>>>>>>>>>>> temporary file. >>>>>>>>>>>>>>>> * We do not know which implementation of the "stronger" is >>>>>>>>>>>>>>>> best >>>>>>>>>>>>>>>> for >>>>>>>>>>>>>>>> openjdk. >>>>>>>>>>>>>>>> * PKWare claims that they have patents about the >>>>>>>>>>>>>>>> "stronger" >>>>>>>>>>>>>>>> on >>>>>>>>>>>>>>>> Zip[1]. 
>>>>>>>>>>>>>>>> * OTOH, WinZip have the alternative implementation of the >>>>>>>>>>>>>>>> "stronger" [2][3]. >>>>>>>>>>>>>>>> * Instead, we prepared the extensibility by ZipCryption >>>>>>>>>>>>>>>> interface >>>>>>>>>>>>>>>> to >>>>>>>>>>>>>>>> implement other encrypt engine, such as the AES based. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thus, I think this PoC should support the "traditional" only. >>>>>>>>>>>>>>>> In the future, anyone who want to implement the "stronger" can >>>>>>>>>>>>>>>> easily >>>>>>>>>>>>>>>> add their code by virtue of this proposal. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> [1] >>>>>>>>>>>>>>>> https://pkware.cachefly.net/webdocs/APPNOTE/APPNOTE-6.3.3.TXT >>>>>>>>>>>>>>>> (1.4 Permitted Use& 7.0 Strong Encryption >>>>>>>>>>>>>>>> Specification) >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> [2] >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> https://en.wikipedia.org/wiki/Zip_(file_format)#Strong_encryption_controversy >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> [3] http://www.winzip.com/aes_info.htm >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>> Yuji >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> 2015-12-03 12:29 GMT+09:00 Xueming >>>>>>>>>>>>>>>> Shen: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Hi Yuji, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I will take a look at your PoC. Might need some time and even >>>>>>>>>>>>>>>>> bring >>>>>>>>>>>>>>>>> in the >>>>>>>>>>>>>>>>> security guy >>>>>>>>>>>>>>>>> to evaluate the proposal. It seems like you are only interested >>>>>>>>>>>>>>>>> in >>>>>>>>>>>>>>>>> the >>>>>>>>>>>>>>>>> "traditional PKWare >>>>>>>>>>>>>>>>> decryption", which is, based on the wiki, "known to be >>>>>>>>>>>>>>>>> seriously >>>>>>>>>>>>>>>>> flawed, and >>>>>>>>>>>>>>>>> in particular >>>>>>>>>>>>>>>>> is vulnerable to known-plaintext attacks":-) Any request to >>>>>>>>>>>>>>>>> support >>>>>>>>>>>>>>>>> "stronger" encryption >>>>>>>>>>>>>>>>> mechanism, such as the AES based? 
>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Regards, >>>>>>>>>>>>>>>>> Sherman >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On 12/2/15 6:48 PM, KUBOTA Yuji wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Hi all, >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> We need reviewer(s) for this PoC. >>>>>>>>>>>>>>>>>> Could you please review this proposal and PoC ? >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>> Yuji >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> 2015-11-26 13:22 GMT+09:00 KUBOTA Yuji: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Hi all, >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> * Sorry for my mistake. I re-post this mail because I sent >>>>>>>>>>>>>>>>>>> before >>>>>>>>>>>>>>>>>>> get >>>>>>>>>>>>>>>>>>> a response of subscription confirmation of core-libs-dev. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Our customers have to handle password-protected zip files. >>>>>>>>>>>>>>>>>>> However, >>>>>>>>>>>>>>>>>>> Java SE does not provide the APIs to handle it yet, so we >>>>>>>>>>>>>>>>>>> must >>>>>>>>>>>>>>>>>>> use >>>>>>>>>>>>>>>>>>> third party library so far. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Recently, we found JDK-4347142: "Need method to set Password >>>>>>>>>>>>>>>>>>> protection to Zip entries", and we tried to implement it. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> The current zlib in JDK is completely unaffected by this >>>>>>>>>>>>>>>>>>> proposal. >>>>>>>>>>>>>>>>>>> The >>>>>>>>>>>>>>>>>>> traditional zip encryption encrypts a data after it is has >>>>>>>>>>>>>>>>>>> been >>>>>>>>>>>>>>>>>>> compressed by zlib.[1] So we do NOT need to change existing >>>>>>>>>>>>>>>>>>> zlib >>>>>>>>>>>>>>>>>>> implementation. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> We've created PoC and uploaded it as webrev: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.00/ >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Test code is as below. 
This code will let you know >>>>>>>>>>>>>>>>>>> how >>>>>>>>>>>>>>>>>>> this >>>>>>>>>>>>>>>>>>> PoC >>>>>>>>>>>>>>>>>>> works. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> http://cr.openjdk.java.net/~ysuenaga/JDK-4347142/webrev.00/Test.java >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> In NTT, a Japanese telecommunications company. We are >>>>>>>>>>>>>>>>>>> providing >>>>>>>>>>>>>>>>>>> many >>>>>>>>>>>>>>>>>>> enterprise systems to customers. Some of them, we need to >>>>>>>>>>>>>>>>>>> implement to >>>>>>>>>>>>>>>>>>> handle password-protected zip file. I guess that this >>>>>>>>>>>>>>>>>>> proposal >>>>>>>>>>>>>>>>>>> is >>>>>>>>>>>>>>>>>>> desired for many developers and users. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> I'm working together with Yasumasa Suenaga, jdk9 committer >>>>>>>>>>>>>>>>>>> (ysuenaga). >>>>>>>>>>>>>>>>>>> We want to implement it if this proposal accepted. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> [1]: >>>>>>>>>>>>>>>>>>> https://pkware.cachefly.net/webdocs/APPNOTE/APPNOTE-6.3.3.TXT >>>>>>>>>>>>>>>>>>> (6.0 Traditional PKWARE Encryption) >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>> Yuji >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>> From amy.lu at oracle.com Tue Mar 1 06:11:49 2016 From: amy.lu at oracle.com (Amy Lu) Date: Tue, 1 Mar 2016 14:11:49 +0800 Subject: JDK 9 RFR of JDK-8038330: tools/jar/JarEntryTime.java fails intermittently on checking extracted file last modified values are the current times Message-ID: <56D532A5.7090204@oracle.com> Please review the patch for test tools/jar/JarEntryTime.java In which two issues fixed: 1. Test fails intermittently on checking the extracted files' last-modified-time are the current times. Instead of compare the file last-modified-time with pre-saved time value ?now? (which is the time *before* current time, especially in a slow run, the time diff of ?now? 
and current time is possibly greater than the 2-second precision (PRECISION)), the test now compares the extracted file's last-modified-time with a newly created file's last-modified-time. 2. Test may fail if run during the Daylight Saving Time change. bug: https://bugs.openjdk.java.net/browse/JDK-8038330 webrev: http://cr.openjdk.java.net/~amlu/8038330/webrev.00/ Thanks, Amy From michael.haupt at oracle.com Tue Mar 1 09:21:37 2016 From: michael.haupt at oracle.com (Michael Haupt) Date: Tue, 1 Mar 2016 10:21:37 +0100 Subject: RFR(M): 8150635: j.l.i.MethodHandles.loop(...) throws IndexOutOfBoundsException In-Reply-To: References: <8D2F1F62-A639-432B-8E13-29245E467BBA@oracle.com> Message-ID: <7D78E2DB-7D0B-45C2-93EE-06A8FABF5BFA@oracle.com> Hi Paul, > On 29.02.2016 at 14:46, Paul Sandoz wrote: >> A new webrev with the above changes (save the renaming) is at http://cr.openjdk.java.net/~mhaupt/8150635/webrev.01 >> > > +1 thank you. I'll push once CCC approves. Best, Michael -- Dr. Michael Haupt | Principal Member of Technical Staff Phone: +49 331 200 7277 | Fax: +49 331 200 7561 Oracle Java Platform Group | LangTools Team | Nashorn Oracle Deutschland B.V. & Co. KG | Schiffbauergasse 14 | 14467 Potsdam, Germany ORACLE Deutschland B.V. & Co. KG | Hauptverwaltung: Riesstraße 25, D-80992 München Registergericht: Amtsgericht München, HRA 95603 Komplementärin: ORACLE Deutschland Verwaltung B.V. | Hertogswetering 163/167, 3543 AS Utrecht, Niederlande Handelsregister der Handelskammer Midden-Nederland, Nr.
30143697 Geschäftsführer: Alexander van der Ven, Jan Schultheiss, Val Maher Oracle is committed to developing practices and products that help protect the environment From thomas.stuefe at gmail.com Tue Mar 1 09:27:27 2016 From: thomas.stuefe at gmail.com (Thomas Stüfe) Date: Tue, 1 Mar 2016 10:27:27 +0100 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: References: <56CE0441.6060308@Oracle.com> Message-ID: Ping... Could I have a reviewer and a sponsor, please? Thank you! Thomas On Thu, Feb 25, 2016 at 5:51 PM, Thomas Stüfe wrote: > Hi Roger, > > thank you for the review! > > New webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ > > Please find my comments inline. > > On Wed, Feb 24, 2016 at 8:28 PM, Roger Riggs > wrote: > >> Hi Thomas, >> >> On 2/24/2016 12:30 PM, Thomas Stüfe wrote: >> >>> Hi all, >>> >>> please take a look at this proposed fix. >>> >>> The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 >>> The Webrev: >>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ >>> >>> Basically, the file descriptor table implemented in linux_close.c may not >>> work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a 50MB >>> table) for high values for RLIMIT_NO_FILE. Please see details in the bug >>> description. >>> >>> The proposed solution is to implement the file descriptor table not as a >>> plain array, but as a two-dimensional sparse array, which grows on demand. >>> This keeps the memory footprint small and fixes the corner cases >>> described >>> in the bug description. >>> >>> Please note that the implemented solution is kept simple, at the cost of >>> somewhat higher (some kb) memory footprint for low values of >>> RLIMIT_NO_FILE. >>> This can be optimized, if we even think it is worth the trouble.
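[Editor's note: the two-dimensional, grow-on-demand table Thomas describes can be sketched as follows. The slab size, the number of slabs, and the locking granularity are illustrative, not the values from the webrev.]

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

#define SLAB_SHIFT 16                  /* 64K entries per second-level slab */
#define SLAB_SIZE  (1 << SLAB_SHIFT)
#define SLAB_MASK  (SLAB_SIZE - 1)
#define NUM_SLABS  1024                /* first level stays a few KB of pointers */

typedef struct { int refcount; } fdEntry_t;   /* payload shortened for the sketch */

static fdEntry_t *slabs[NUM_SLABS];           /* zero-initialized, grows on demand */
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Look up (and lazily allocate) the entry for fd. Memory is only committed
 * for slabs actually touched, so a huge or infinite file-descriptor limit no
 * longer forces one huge flat array up front. */
static fdEntry_t *getFdEntry(int fd) {
    int hi, lo;
    if (fd < 0 || (fd >> SLAB_SHIFT) >= NUM_SLABS)
        return NULL;
    hi = fd >> SLAB_SHIFT;
    lo = fd & SLAB_MASK;
    pthread_mutex_lock(&table_lock);
    if (slabs[hi] == NULL)
        slabs[hi] = calloc(SLAB_SIZE, sizeof(fdEntry_t));
    pthread_mutex_unlock(&table_lock);
    return slabs[hi] ? &slabs[hi][lo] : NULL;
}
```

With these illustrative sizes, a process that only ever touches low fds pays for one slab plus the small pointer array, instead of an array sized for the whole RLIMIT_NO_FILE range.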
>>> >>> Please also note that the proposed implementation now uses a mutex lock >>> for >>> every call to getFdEntry() - I do not think this matters, as this is all >>> in >>> preparation for an IO system call, which are usually way more expensive >>> than a pthread mutex. But again, this could be optimized. >>> >> I would suggest preallocating the index[0] array and then skip the mutex >> for that case. >> That would give the same as current performance. >> >> > I did this. > > >> And I'd suggest a different hi/low split of the indexes to reduce the >> size of pre-allocated memory. >> Most processes are going to use a lot fewer than 16384 fd's. How about >> 2048? >> > > I did this too. Now I calculate the split point based on RLIMIT_NO_FILE. > For small values of RLIMIT_NO_FILE > (<64K), I effectivly fall back to a one-dimensional array by making the > first level table a size 1. For larger values, > multiple second level tables, each 64K size, will be allocated on demand > (save for the first one which is preallocated). > > >> I have my doubts about needing to cover fd's up to the full range of 32 >> bits. >> Can the RLIMIT_NO_FILE be used too parametrize the allocation of the >> first level table? >> >> > I did this. > > Interesting note, I found nowhere in the Posix specs a mentioning that > socked descriptors have to be handed out > sequentially and therefore cannot be larger than RLIMIT_NO_FILE. But in > reality on all operating systems file descriptors > seem to be [0, RLIMIT_NO_FILE). > > > Not specific to your change but it would nice to see consistency between >> libnio and libnet on >> the name of the sigWakeup/INTERRUPT_SIGNAL constant. > > > I agree, but this is out of the scope of this bug fix. > > >> >> >>> This is an implementation proposal for Linux; the same code found its way >>> to BSD and AIX. Should you approve of this fix, I will modify those files >>> too. >>> >> yes please. 
>> >> $.02, Roger >> >> >> > Thanks, Thomas > > >> >>> Thank you and Kind Regards, Thomas >>> >> >> > From david.holmes at oracle.com Tue Mar 1 10:02:38 2016 From: david.holmes at oracle.com (David Holmes) Date: Tue, 1 Mar 2016 20:02:38 +1000 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: References: <56CE0441.6060308@Oracle.com> Message-ID: <56D568BE.2010401@oracle.com> On 1/03/2016 7:27 PM, Thomas St?fe wrote: > Ping... > > Could I have reviewer and a sponsor, please? You don't need a sponsor for this JDK change - you are a Committer. :) Cheers, David > Thanks you! > > Thomas > > On Thu, Feb 25, 2016 at 5:51 PM, Thomas St?fe > wrote: > >> Hi Roger, >> >> thank you for the review! >> >> New webrev: >> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ >> >> Please find my comments inline. >> >> On Wed, Feb 24, 2016 at 8:28 PM, Roger Riggs >> wrote: >> >>> Hi Thomas, >>> >>> On 2/24/2016 12:30 PM, Thomas St?fe wrote: >>> >>>> Hi all, >>>> >>>> please take a look at this proposed fix. >>>> >>>> The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 >>>> The Webrev: >>>> >>>> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ >>>> >>>> Basically, the file descriptor table implemented in linux_close.c may not >>>> work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a 50MB >>>> table) for high values for RLIMIT_NO_FILE. Please see details in the bug >>>> description. >>>> >>>> The proposed solution is to implement the file descriptor table not as >>>> plain array, but as a twodimensional sparse array, which grows on demand. >>>> This keeps the memory footprint small and fixes the corner cases >>>> described >>>> in the bug description. >>>> >>>> Please note that the implemented solution is kept simple, at the cost of >>>> somewhat higher (some kb) memory footprint for low values of >>>> RLIMIT_NO_FILE. 
>>>> This can be optimized, if we even think it is worth the trouble. >>>> >>>> Please also note that the proposed implementation now uses a mutex lock >>>> for >>>> every call to getFdEntry() - I do not think this matters, as this is all >>>> in >>>> preparation for an IO system call, which are usually way more expensive >>>> than a pthread mutex. But again, this could be optimized. >>>> >>> I would suggest preallocating the index[0] array and then skip the mutex >>> for that case. >>> That would give the same as current performance. >>> >>> >> I did this. >> >> >>> And I'd suggest a different hi/low split of the indexes to reduce the >>> size of pre-allocated memory. >>> Most processes are going to use a lot fewer than 16384 fd's. How about >>> 2048? >>> >> >> I did this too. Now I calculate the split point based on RLIMIT_NO_FILE. >> For small values of RLIMIT_NO_FILE >> (<64K), I effectivly fall back to a one-dimensional array by making the >> first level table a size 1. For larger values, >> multiple second level tables, each 64K size, will be allocated on demand >> (save for the first one which is preallocated). >> >> >>> I have my doubts about needing to cover fd's up to the full range of 32 >>> bits. >>> Can the RLIMIT_NO_FILE be used too parametrize the allocation of the >>> first level table? >>> >>> >> I did this. >> >> Interesting note, I found nowhere in the Posix specs a mentioning that >> socked descriptors have to be handed out >> sequentially and therefore cannot be larger than RLIMIT_NO_FILE. But in >> reality on all operating systems file descriptors >> seem to be [0, RLIMIT_NO_FILE). >> >> >> Not specific to your change but it would nice to see consistency between >>> libnio and libnet on >>> the name of the sigWakeup/INTERRUPT_SIGNAL constant. >> >> >> I agree, but this is out of the scope of this bug fix. >> >> >>> >>> >>>> This is an implementation proposal for Linux; the same code found its way >>>> to BSD and AIX. 
Should you approve of this fix, I will modify those files >>>> too. >>>> >>> yes please. >>> >>> $.02, Roger >>> >>> >>> >> Thanks, Thomas >> >> >>> >>>> Thank you and Kind Regards, Thomas >>>> >>> >>> >> From dmitry.samersoff at oracle.com Tue Mar 1 10:20:23 2016 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Tue, 1 Mar 2016 13:20:23 +0300 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: References: Message-ID: <56D56CE7.6070700@oracle.com> Thomas, Sorry for being later. I'm not sure we should take a lock at ll. 131 for each fdTable lookup. As soon as we never deallocate fdTable[base_index] it's safe to try to return value first and then take a slow path (take a lock and check fdTable[base_index] again) -Dmitry On 2016-02-24 20:30, Thomas St?fe wrote: > Hi all, > > please take a look at this proposed fix. > > The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 > The Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ > > Basically, the file descriptor table implemented in linux_close.c may not > work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a 50MB > table) for high values for RLIMIT_NO_FILE. Please see details in the bug > description. > > The proposed solution is to implement the file descriptor table not as > plain array, but as a twodimensional sparse array, which grows on demand. > This keeps the memory footprint small and fixes the corner cases described > in the bug description. > > Please note that the implemented solution is kept simple, at the cost of > somewhat higher (some kb) memory footprint for low values of RLIMIT_NO_FILE. > This can be optimized, if we even think it is worth the trouble. 
> > Please also note that the proposed implementation now uses a mutex lock for > every call to getFdEntry() - I do not think this matters, as this is all in > preparation for an IO system call, which are usually way more expensive > than a pthread mutex. But again, this could be optimized. > > This is an implementation proposal for Linux; the same code found its way > to BSD and AIX. Should you approve of this fix, I will modify those files > too. > > Thank you and Kind Regards, Thomas > -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From christoph.langer at sap.com Tue Mar 1 10:47:46 2016 From: christoph.langer at sap.com (Langer, Christoph) Date: Tue, 1 Mar 2016 10:47:46 +0000 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: <56D56CE7.6070700@oracle.com> References: <56D56CE7.6070700@oracle.com> Message-ID: <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> Hi Dmitry, Thomas, Dmitry, I think you are referring to an outdated version of the webrev, the current one is this: http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ However, I agree - the lock should probably not be taken every time but only in the case where we find the entry table was not yet allocated. So, maybe getFdEntry should always do this: entryTable = fdTable[rootArrayIndex]; // no matter if rootArrayIndex is 0 Then check if entryTable is NULL and if yes then enter a guarded section which does the allocation and before that checks if another thread did it already. Also I'm wondering if the entryArrayMask and the rootArrayMask should be calculated once in the init() function and stored in a static field? Because right now it is calculated every time getFdEntry() is called and I don't think this would be optimized by inlining... 
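[Editor's note: the lock-avoiding lookup that Dmitry and Christoph describe amounts to double-checked locking over the slab pointers, which is safe here only because a slab is never freed once published. A sketch using GCC/Clang atomic builtins — again illustrative names and sizes, not the webrev code:]

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

#define SLAB_SHIFT 16
#define SLAB_SIZE  (1 << SLAB_SHIFT)
#define SLAB_MASK  (SLAB_SIZE - 1)
#define NUM_SLABS  1024

typedef struct { int refcount; } fdEntry_t;

static fdEntry_t *slabs[NUM_SLABS];
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

static fdEntry_t *getFdEntry(int fd) {
    int hi, lo;
    fdEntry_t *slab;
    if (fd < 0 || (fd >> SLAB_SHIFT) >= NUM_SLABS)
        return NULL;
    hi = fd >> SLAB_SHIFT;
    lo = fd & SLAB_MASK;
    /* Fast path: once a slab pointer is published it stays valid forever,
     * so an acquire load suffices and the mutex is skipped entirely. */
    slab = __atomic_load_n(&slabs[hi], __ATOMIC_ACQUIRE);
    if (slab != NULL)
        return &slab[lo];
    /* Slow path: allocate under the lock, re-checking first. */
    pthread_mutex_lock(&table_lock);
    slab = slabs[hi];
    if (slab == NULL) {
        slab = calloc(SLAB_SIZE, sizeof(fdEntry_t));
        __atomic_store_n(&slabs[hi], slab, __ATOMIC_RELEASE);
    }
    pthread_mutex_unlock(&table_lock);
    return slab ? &slab[lo] : NULL;
}
```

This restores the pre-change cost for the common case (slab already present) while keeping the lazy allocation; similarly, the masks Christoph mentions could be computed once at init time rather than per call.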
Best regards Christoph -----Original Message----- From: core-libs-dev [mailto:core-libs-dev-bounces at openjdk.java.net] On Behalf Of Dmitry Samersoff Sent: Dienstag, 1. M?rz 2016 11:20 To: Thomas St?fe ; Java Core Libs Subject: Re: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all Thomas, Sorry for being later. I'm not sure we should take a lock at ll. 131 for each fdTable lookup. As soon as we never deallocate fdTable[base_index] it's safe to try to return value first and then take a slow path (take a lock and check fdTable[base_index] again) -Dmitry On 2016-02-24 20:30, Thomas St?fe wrote: > Hi all, > > please take a look at this proposed fix. > > The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 > The Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ > > Basically, the file descriptor table implemented in linux_close.c may not > work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a 50MB > table) for high values for RLIMIT_NO_FILE. Please see details in the bug > description. > > The proposed solution is to implement the file descriptor table not as > plain array, but as a twodimensional sparse array, which grows on demand. > This keeps the memory footprint small and fixes the corner cases described > in the bug description. > > Please note that the implemented solution is kept simple, at the cost of > somewhat higher (some kb) memory footprint for low values of RLIMIT_NO_FILE. > This can be optimized, if we even think it is worth the trouble. > > Please also note that the proposed implementation now uses a mutex lock for > every call to getFdEntry() - I do not think this matters, as this is all in > preparation for an IO system call, which are usually way more expensive > than a pthread mutex. But again, this could be optimized. > > This is an implementation proposal for Linux; the same code found its way > to BSD and AIX. 
Should you approve of this fix, I will modify those files > too. > > Thank you and Kind Regards, Thomas > -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From jan.lahoda at oracle.com Tue Mar 1 10:54:21 2016 From: jan.lahoda at oracle.com (Jan Lahoda) Date: Tue, 01 Mar 2016 11:54:21 +0100 Subject: RFR 8131913: jdk/internal/jline/console/StripAnsiTest.java can't run in the background Message-ID: <56D574DD.2070606@oracle.com> Hi, I'd like to ask for a review of a patch for JDK-8131913. The fix is to use the "UnsupportedTerminal", which will not try to switch the OS terminal into the raw mode. The proposed patch is here: http://cr.openjdk.java.net/~jlahoda/8131913/webrev.00/index.html Any comments are welcome. Thanks, Jan From daniel.fuchs at oracle.com Tue Mar 1 11:06:28 2016 From: daniel.fuchs at oracle.com (Daniel Fuchs) Date: Tue, 1 Mar 2016 12:06:28 +0100 Subject: RFR: 8150856 - Inconsistent API documentation for @param caller in System.LoggerFinder.getLogger In-Reply-To: References: <56D4840C.3030006@oracle.com> Message-ID: <56D577B4.3010907@oracle.com> On 29/02/16 18:54, Martin Buchholz wrote: > You need to delete the orphaned semicolon Thanks Martin! -- daniel > > On Mon, Feb 29, 2016 at 9:46 AM, Daniel Fuchs wrote: >> Hi, >> >> Please find below a trivial fix for: >> >> https://bugs.openjdk.java.net/browse/JDK-8150856 >> 8150856: Inconsistent API documentation for @param caller >> in System.LoggerFinder.getLogger >> >> http://cr.openjdk.java.net/~dfuchs/webrev_8150856/webrev.00 >> >> The @param caller clause says that caller can be null, whereas >> the @throws clause says that NPE will be thrown. >> >> The @throws clause is correct and @param needs to be fixed. 
>> >> best regards, >> >> -- daniel >> >> --- old/src/java.base/share/classes/java/lang/System.java 2016-02-29 >> 18:41:30.000000000 +0100 >> +++ new/src/java.base/share/classes/java/lang/System.java 2016-02-29 >> 18:41:30.000000000 +0100 >> @@ -1,5 +1,5 @@ >> /* >> - * Copyright (c) 1994, 2014, Oracle and/or its affiliates. All rights >> reserved. >> + * Copyright (c) 1994, 2016, Oracle and/or its affiliates. All rights >> reserved. >> * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. >> * >> * This code is free software; you can redistribute it and/or modify it >> @@ -1419,7 +1419,6 @@ >> * >> * @param name the name of the logger. >> * @param caller the class for which the logger is being requested; >> - * can be {@code null}. >> * >> * @return a {@link Logger logger} suitable for the given caller's >> * use. From peter.levart at gmail.com Tue Mar 1 11:41:28 2016 From: peter.levart at gmail.com (Peter Levart) Date: Tue, 1 Mar 2016 12:41:28 +0100 Subject: JDK 9 RFR of JDK-8038330: tools/jar/JarEntryTime.java fails intermittently on checking extracted file last modified values are the current times In-Reply-To: <56D532A5.7090204@oracle.com> References: <56D532A5.7090204@oracle.com> Message-ID: <56D57FE8.4000008@gmail.com> Hi Amy, I think that the following test: 178 if (!(Math.abs(now - start) >= 0L && Math.abs(end - now) >= 0L)) { ...will always be false. Therefore, the test will always succeed. Perhaps you wanted to test the following: assert start <= end; if (start > now || now > end) { ... Regards, Peter On 03/01/2016 07:11 AM, Amy Lu wrote: > Please review the patch for test tools/jar/JarEntryTime.java > > In which two issues fixed: > > 1. Test fails intermittently on checking the extracted files' > last-modified-time are the current times. > Instead of compare the file last-modified-time with pre-saved time > value 'now' (which is the time *before* current time, especially in a > slow run, the time diff of 'now' 
and current time is possible greater > than 2 seconds precision (PRECISION)), test now compares the extracted > file?s last-modified-time with newly created file last-modified-time. > 2. Test may fail if run during the Daylight Saving Time change. > > > bug: https://bugs.openjdk.java.net/browse/JDK-8038330 > webrev: http://cr.openjdk.java.net/~amlu/8038330/webrev.00/ > > Thanks, > Amy From dmitry.samersoff at oracle.com Tue Mar 1 12:44:40 2016 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Tue, 1 Mar 2016 15:44:40 +0300 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> Message-ID: <56D58EB8.4080504@oracle.com> Christoph, > Dmitry, I think you are referring to an outdated version of the > webrev, the current one is this: Yes. Sorry! You may consider a bit different approach to save memory: Allocate multiple baseTables for different ranges of fd's with plain array of 32 * (fdEntry_t*) for simple case. i.e. if (fd < 32) do plain array lookup if (fd < N1) do two steps lookup in baseTable1 if (fd < N2) do two steps lookup in baseTable2 ... -Dmitry On 2016-03-01 13:47, Langer, Christoph wrote: > Hi Dmitry, Thomas, > > Dmitry, I think you are referring to an outdated version of the > webrev, the current one is this: > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ > > However, I agree - the lock should probably not be taken every time > but only in the case where we find the entry table was not yet > allocated. 
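Dmitry's tiered lookup above could be written out concretely like this. A hypothetical sketch only: the tier boundaries (N1), block size, and entry type are placeholders, and the locking or barrier needed for on-demand allocation is elided to keep the dispatch itself visible.

```c
#include <stdlib.h>

typedef struct { int unused; } fdEntry_t;    /* placeholder entry type */

#define PLAIN_LIMIT 32                       /* tier 0: plain array lookup     */
#define TIER1_LIMIT 65536                    /* tier 1 (N1): two-step lookup   */
#define BLOCK_SIZE  256

static fdEntry_t  plainTable[PLAIN_LIMIT];
static fdEntry_t *baseTable1[TIER1_LIMIT / BLOCK_SIZE];

static fdEntry_t *lookupFdEntry(int fd) {
    if (fd < 0)
        return NULL;
    if (fd < PLAIN_LIMIT)                    /* common case: no indirection */
        return &plainTable[fd];
    if (fd < TIER1_LIMIT) {                  /* two-step lookup in baseTable1 */
        int block = fd / BLOCK_SIZE;
        if (baseTable1[block] == NULL)       /* NOTE: needs a lock/barrier in real code */
            baseTable1[block] = calloc(BLOCK_SIZE, sizeof(fdEntry_t));
        return baseTable1[block] ? &baseTable1[block][fd % BLOCK_SIZE] : NULL;
    }
    return NULL;                             /* further tiers (N2, ...) omitted */
}
```

The appeal of the tiered shape is that the very common low-fd case pays no indirection at all; the cost is a little more dispatch code per lookup.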
> > So, maybe getFdEntry should always do this: entryTable = > fdTable[rootArrayIndex]; // no matter if rootArrayIndex is 0 > > Then check if entryTable is NULL and if yes then enter a guarded > section which does the allocation and before that checks if another > thread did it already. > > Also I'm wondering if the entryArrayMask and the rootArrayMask should > be calculated once in the init() function and stored in a static > field? Because right now it is calculated every time getFdEntry() is > called and I don't think this would be optimized by inlining... > > Best regards Christoph > > -----Original Message----- From: core-libs-dev > [mailto:core-libs-dev-bounces at openjdk.java.net] On Behalf Of Dmitry > Samersoff Sent: Dienstag, 1. M?rz 2016 11:20 To: Thomas St?fe > ; Java Core Libs > Subject: Re: RFR(s): 8150460: > (linux|bsd|aix)_close.c: file descriptor table may become large or > may not work at all > > Thomas, > > Sorry for being later. > > I'm not sure we should take a lock at ll. 131 for each fdTable > lookup. > > As soon as we never deallocate fdTable[base_index] it's safe to try > to return value first and then take a slow path (take a lock and > check fdTable[base_index] again) > > -Dmitry > > > On 2016-02-24 20:30, Thomas St?fe wrote: >> Hi all, >> >> please take a look at this proposed fix. >> >> The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 The >> Webrev: >> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ >> >> >> Basically, the file descriptor table implemented in linux_close.c may not >> work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a >> 50MB table) for high values for RLIMIT_NO_FILE. Please see details >> in the bug description. >> >> The proposed solution is to implement the file descriptor table not >> as plain array, but as a twodimensional sparse array, which grows >> on demand. 
This keeps the memory footprint small and fixes the >> corner cases described in the bug description. >> >> Please note that the implemented solution is kept simple, at the >> cost of somewhat higher (some kb) memory footprint for low values >> of RLIMIT_NO_FILE. This can be optimized, if we even think it is >> worth the trouble. >> >> Please also note that the proposed implementation now uses a mutex >> lock for every call to getFdEntry() - I do not think this matters, >> as this is all in preparation for an IO system call, which are >> usually way more expensive than a pthread mutex. But again, this >> could be optimized. >> >> This is an implementation proposal for Linux; the same code found >> its way to BSD and AIX. Should you approve of this fix, I will >> modify those files too. >> >> Thank you and Kind Regards, Thomas >> > > -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From thomas.stuefe at gmail.com Tue Mar 1 13:13:36 2016 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 1 Mar 2016 14:13:36 +0100 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> Message-ID: Dmitry, Christoph, I am not 100% sure this would work for weak ordering platforms. If I understand you correctly you suggest the double checking pattern: if (basetable[index] == NULL) { lock if (basetable[index] == NULL) { basetable[index] = calloc(size); } unlock } The problem I cannot wrap my head around is how to make this safe for all platforms. Note: I am not an expert for this. How do you prevent the "reading thread reads partially initialized object" problem? 
Consider this: We need to allocate memory, set it completely to zero and then store a pointer to it in basetable[index]. This means we have multiple stores - one store for the pointer, n stores for zero-ing out the memory, and god knows how many stores the C-Runtime allcoator needs to update its internal structures. On weak ordering platforms like ppc (and arm?), the store for basetable[index] may be visible before the other stores, so the reading threads, running on different CPUs, may read a pointer to partially initialized memory. What you need is a memory barrier between the calloc() and store of basetable[index], to prevent the latter store from floating above the other stores. I did not find anything about multithread safety in the calloc() docs, or guaranteed barrier behaviour, nor did I expect anything. In the hotspot we have our memory barrier APIs, but in the JDK I am confined to basic C and there is no way that I know of to do memory barriers with plain Posix APIs. Bottomline, I am not sure. Maybe I am too cautious here, but I do not see a way to make this safe without locking the reader thread too. Also, we are about to do an IO operation - is a mutex really that bad here? Especially with the optimization Roger suggested of pre-allocating the basetable[0] array and omitting lock protection there? Kind Regards, Thomas On Tue, Mar 1, 2016 at 11:47 AM, Langer, Christoph wrote: > Hi Dmitry, Thomas, > > Dmitry, I think you are referring to an outdated version of the webrev, > the current one is this: > > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ > > However, I agree - the lock should probably not be taken every time but > only in the case where we find the entry table was not yet allocated. 
> > So, maybe getFdEntry should always do this: > entryTable = fdTable[rootArrayIndex]; // no matter if rootArrayIndex is 0 > > Then check if entryTable is NULL and if yes then enter a guarded section > which does the allocation and before that checks if another thread did it > already. > > Also I'm wondering if the entryArrayMask and the rootArrayMask should be > calculated once in the init() function and stored in a static field? > Because right now it is calculated every time getFdEntry() is called and I > don't think this would be optimized by inlining... > > Best regards > Christoph > > -----Original Message----- > From: core-libs-dev [mailto:core-libs-dev-bounces at openjdk.java.net] On > Behalf Of Dmitry Samersoff > Sent: Dienstag, 1. M?rz 2016 11:20 > To: Thomas St?fe ; Java Core Libs < > core-libs-dev at openjdk.java.net> > Subject: Re: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor > table may become large or may not work at all > > Thomas, > > Sorry for being later. > > I'm not sure we should take a lock at ll. 131 for each fdTable lookup. > > As soon as we never deallocate fdTable[base_index] it's safe to try to > return value first and then take a slow path (take a lock and check > fdTable[base_index] again) > > -Dmitry > > > On 2016-02-24 20:30, Thomas St?fe wrote: > > Hi all, > > > > please take a look at this proposed fix. > > > > The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 > > The Webrev: > > > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ > > > > Basically, the file descriptor table implemented in linux_close.c may not > > work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a 50MB > > table) for high values for RLIMIT_NO_FILE. Please see details in the bug > > description. > > > > The proposed solution is to implement the file descriptor table not as > > plain array, but as a twodimensional sparse array, which grows on demand. 
> > This keeps the memory footprint small and fixes the corner cases > described > > in the bug description. > > > > Please note that the implemented solution is kept simple, at the cost of > > somewhat higher (some kb) memory footprint for low values of > RLIMIT_NO_FILE. > > This can be optimized, if we even think it is worth the trouble. > > > > Please also note that the proposed implementation now uses a mutex lock > for > > every call to getFdEntry() - I do not think this matters, as this is all > in > > preparation for an IO system call, which are usually way more expensive > > than a pthread mutex. But again, this could be optimized. > > > > This is an implementation proposal for Linux; the same code found its way > > to BSD and AIX. Should you approve of this fix, I will modify those files > > too. > > > > Thank you and Kind Regards, Thomas > > > > > -- > Dmitry Samersoff > Oracle Java development team, Saint Petersburg, Russia > * I would love to change the world, but they won't give me the sources. > From amy.lu at oracle.com Tue Mar 1 13:27:47 2016 From: amy.lu at oracle.com (Amy Lu) Date: Tue, 1 Mar 2016 21:27:47 +0800 Subject: JDK 9 RFR of JDK-8038330: tools/jar/JarEntryTime.java fails intermittently on checking extracted file last modified values are the current times In-Reply-To: <56D57FE8.4000008@gmail.com> References: <56D532A5.7090204@oracle.com> <56D57FE8.4000008@gmail.com> Message-ID: <56D598D3.4050109@oracle.com> On 3/1/16 7:41 PM, Peter Levart wrote: > Hi Amy, > > I think that the following test: > > 178 if (!(Math.abs(now - start) >= 0L && Math.abs(end - now) > >= 0L)) { > > ...will always be false. Therefore, the test will always succeed. > > Perhaps you wanted to test the following: > > assert start <= end; > if (start > now || now > end) { ... Thank you Peter for reviewing. My bad ... I'm updating the webrev and will send updated version tomorrow. 
Thanks, Amy > > > Regards, Peter > > On 03/01/2016 07:11 AM, Amy Lu wrote: >> Please review the patch for test tools/jar/JarEntryTime.java >> >> In which two issues fixed: >> >> 1. Test fails intermittently on checking the extracted files' >> last-modified-time are the current times. >> Instead of compare the file last-modified-time with pre-saved time >> value ?now? (which is the time *before* current time, especially in a >> slow run, the time diff of ?now? and current time is possible greater >> than 2 seconds precision (PRECISION)), test now compares the >> extracted file?s last-modified-time with newly created file >> last-modified-time. >> 2. Test may fail if run during the Daylight Saving Time change. >> >> >> bug: https://bugs.openjdk.java.net/browse/JDK-8038330 >> webrev: http://cr.openjdk.java.net/~amlu/8038330/webrev.00/ >> >> Thanks, >> Amy > From daniel.fuchs at oracle.com Tue Mar 1 13:30:41 2016 From: daniel.fuchs at oracle.com (Daniel Fuchs) Date: Tue, 1 Mar 2016 14:30:41 +0100 Subject: RFR - 8148820: Missing @since Javadoc tag in Logger.log(Level, Supplier) Message-ID: <56D59981.6010903@oracle.com> Hi, Please find below a trivial fix for 8148820: Missing @since Javadoc tag in Logger.log(Level, Supplier) https://bugs.openjdk.java.net/browse/JDK-8148820 This method was added to java.util.logging.Logger in jdk 8, but the @since tag was missing. 
-- daniel diff --git a/src/java.logging/share/classes/java/util/logging/Logger.java b/src/java.logging/share/classes/java/util/logging/Logger.java --- a/src/java.logging/share/classes/java/util/logging/Logger.java +++ b/src/java.logging/share/classes/java/util/logging/Logger.java @@ -839,6 +839,7 @@ * @param level One of the message level identifiers, e.g., SEVERE * @param msgSupplier A function, which when called, produces the * desired log message + * @since 1.8 */ public void log(Level level, Supplier msgSupplier) { if (!isLoggable(level)) { From thomas.stuefe at gmail.com Tue Mar 1 13:33:50 2016 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 1 Mar 2016 14:33:50 +0100 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: <56D58EB8.4080504@oracle.com> References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> <56D58EB8.4080504@oracle.com> Message-ID: Hi Dmitry, On Tue, Mar 1, 2016 at 1:44 PM, Dmitry Samersoff < dmitry.samersoff at oracle.com> wrote: > Christoph, > > > Dmitry, I think you are referring to an outdated version of the > > webrev, the current one is this: > > Yes. Sorry! > > You may consider a bit different approach to save memory: > > Allocate multiple baseTables for different ranges of fd's with > plain array of 32 * (fdEntry_t*) for simple case. > > i.e. if (fd < 32) > do plain array lookup > > if (fd < N1) > do two steps lookup in baseTable1 > > if (fd < N2) > do two steps lookup in baseTable2 > > How does this differ from my approach in http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ ? For fd < 65535, I effectively fall back to a plain array lookup by setting the size of the base table to 1. So, for this case the sparse array degenerates to a one-dimensional plain array. Kind Regards, Thomas > ... 
> > -Dmitry > > > > On 2016-03-01 13:47, Langer, Christoph wrote: > > Hi Dmitry, Thomas, > > > > Dmitry, I think you are referring to an outdated version of the > > webrev, the current one is this: > > > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ > > > > However, I agree - the lock should probably not be taken every time > > but only in the case where we find the entry table was not yet > > allocated. > > > > So, maybe getFdEntry should always do this: entryTable = > > fdTable[rootArrayIndex]; // no matter if rootArrayIndex is 0 > > > > Then check if entryTable is NULL and if yes then enter a guarded > > section which does the allocation and before that checks if another > > thread did it already. > > > > Also I'm wondering if the entryArrayMask and the rootArrayMask should > > be calculated once in the init() function and stored in a static > > field? Because right now it is calculated every time getFdEntry() is > > called and I don't think this would be optimized by inlining... > > > > Best regards Christoph > > > > -----Original Message----- From: core-libs-dev > > [mailto:core-libs-dev-bounces at openjdk.java.net] On Behalf Of Dmitry > > Samersoff Sent: Dienstag, 1. M?rz 2016 11:20 To: Thomas St?fe > > ; Java Core Libs > > Subject: Re: RFR(s): 8150460: > > (linux|bsd|aix)_close.c: file descriptor table may become large or > > may not work at all > > > > Thomas, > > > > Sorry for being later. > > > > I'm not sure we should take a lock at ll. 131 for each fdTable > > lookup. > > > > As soon as we never deallocate fdTable[base_index] it's safe to try > > to return value first and then take a slow path (take a lock and > > check fdTable[base_index] again) > > > > -Dmitry > > > > > > On 2016-02-24 20:30, Thomas St?fe wrote: > >> Hi all, > >> > >> please take a look at this proposed fix. 
> >> > >> The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 The > >> Webrev: > >> > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ > >> > >> > >> > Basically, the file descriptor table implemented in linux_close.c may not > >> work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a > >> 50MB table) for high values for RLIMIT_NO_FILE. Please see details > >> in the bug description. > >> > >> The proposed solution is to implement the file descriptor table not > >> as plain array, but as a twodimensional sparse array, which grows > >> on demand. This keeps the memory footprint small and fixes the > >> corner cases described in the bug description. > >> > >> Please note that the implemented solution is kept simple, at the > >> cost of somewhat higher (some kb) memory footprint for low values > >> of RLIMIT_NO_FILE. This can be optimized, if we even think it is > >> worth the trouble. > >> > >> Please also note that the proposed implementation now uses a mutex > >> lock for every call to getFdEntry() - I do not think this matters, > >> as this is all in preparation for an IO system call, which are > >> usually way more expensive than a pthread mutex. But again, this > >> could be optimized. > >> > >> This is an implementation proposal for Linux; the same code found > >> its way to BSD and AIX. Should you approve of this fix, I will > >> modify those files too. > >> > >> Thank you and Kind Regards, Thomas > >> > > > > > > > -- > Dmitry Samersoff > Oracle Java development team, Saint Petersburg, Russia > * I would love to change the world, but they won't give me the sources. 
> From dmitry.samersoff at oracle.com Tue Mar 1 13:39:15 2016 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Tue, 1 Mar 2016 16:39:15 +0300 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> Message-ID: <56D59B83.3010503@oracle.com> Thomas, We probably can do: if (fdTable[rootArrayIndex] != NULL) { entryTable = fdTable[rootArrayIndex]; } else { // existing code pthread_mutex_lock(&fdTableLock); if (fdTable[rootArrayIndex] == NULL) { .... } } -Dmitry On 2016-03-01 16:13, Thomas St?fe wrote: > Dmitry, Christoph, > > I am not 100% sure this would work for weak ordering platforms. > > If I understand you correctly you suggest the double checking pattern: > > if (basetable[index] == NULL) { > lock > if (basetable[index] == NULL) { > basetable[index] = calloc(size); > } > unlock > } > > The problem I cannot wrap my head around is how to make this safe for > all platforms. Note: I am not an expert for this. > > How do you prevent the "reading thread reads partially initialized > object" problem? > > Consider this: We need to allocate memory, set it completely to zero and > then store a pointer to it in basetable[index]. This means we have > multiple stores - one store for the pointer, n stores for zero-ing out > the memory, and god knows how many stores the C-Runtime allcoator needs > to update its internal structures. > > On weak ordering platforms like ppc (and arm?), the store for > basetable[index] may be visible before the other stores, so the reading > threads, running on different CPUs, may read a pointer to partially > initialized memory. What you need is a memory barrier between the > calloc() and store of basetable[index], to prevent the latter store from > floating above the other stores. 
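The barrier Thomas asks for here is a release/acquire pairing on the published pointer. If a C11 toolchain could be assumed (which, as Thomas notes, the JDK's native code could not at the time — hence the caution), Dmitry's fast-path/slow-path shape could be made safe as sketched below; the names mirror the snippet above, but this is an illustration, not the webrev code.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct { pthread_mutex_t lock; } fdEntry_t;  /* placeholder entry type */

#define BLOCK_SIZE 4096
#define NUM_BLOCKS 1024

/* Root array of atomic pointers, so the fast path can read without the mutex. */
static _Atomic(fdEntry_t *) fdTable[NUM_BLOCKS];
static pthread_mutex_t fdTableLock = PTHREAD_MUTEX_INITIALIZER;

static fdEntry_t *getEntryTable(int rootArrayIndex) {
    /* Fast path: the acquire load pairs with the release store below, so a
     * non-NULL pointer is guaranteed to point at the fully zeroed block. */
    fdEntry_t *entryTable =
        atomic_load_explicit(&fdTable[rootArrayIndex], memory_order_acquire);
    if (entryTable != NULL)
        return entryTable;

    /* Slow path: take the lock and re-check, as in Dmitry's snippet. */
    pthread_mutex_lock(&fdTableLock);
    entryTable = atomic_load_explicit(&fdTable[rootArrayIndex],
                                      memory_order_relaxed);
    if (entryTable == NULL) {
        entryTable = calloc(BLOCK_SIZE, sizeof(fdEntry_t));
        if (entryTable != NULL) {
            /* Release store: the pointer cannot become visible to the fast
             * path before the stores that zeroed the block. */
            atomic_store_explicit(&fdTable[rootArrayIndex], entryTable,
                                  memory_order_release);
        }
    }
    pthread_mutex_unlock(&fdTableLock);
    return entryTable;
}
```

With plain (non-atomic) loads and stores instead, a fast-path reader on a weakly ordered CPU could indeed observe the pointer before the zeroed contents — exactly the hazard Thomas describes.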
> > I did not find anything about multithread safety in the calloc() docs, > or guaranteed barrier behaviour, nor did I expect anything. In the > hotspot we have our memory barrier APIs, but in the JDK I am confined to > basic C and there is no way that I know of to do memory barriers with > plain Posix APIs. > > Bottomline, I am not sure. Maybe I am too cautious here, but I do not > see a way to make this safe without locking the reader thread too. > > Also, we are about to do an IO operation - is a mutex really that bad > here? Especially with the optimization Roger suggested of pre-allocating > the basetable[0] array and omitting lock protection there? > > Kind Regards, > > Thomas > > > > > On Tue, Mar 1, 2016 at 11:47 AM, Langer, Christoph > > wrote: > > Hi Dmitry, Thomas, > > Dmitry, I think you are referring to an outdated version of the > webrev, the current one is this: > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ > > However, I agree - the lock should probably not be taken every time > but only in the case where we find the entry table was not yet > allocated. > > So, maybe getFdEntry should always do this: > entryTable = fdTable[rootArrayIndex]; // no matter if rootArrayIndex > is 0 > > Then check if entryTable is NULL and if yes then enter a guarded > section which does the allocation and before that checks if another > thread did it already. > > Also I'm wondering if the entryArrayMask and the rootArrayMask > should be calculated once in the init() function and stored in a > static field? Because right now it is calculated every time > getFdEntry() is called and I don't think this would be optimized by > inlining... > > Best regards > Christoph > > -----Original Message----- > From: core-libs-dev [mailto:core-libs-dev-bounces at openjdk.java.net > ] On Behalf Of Dmitry > Samersoff > Sent: Dienstag, 1. 
M?rz 2016 11:20 > To: Thomas St?fe >; Java Core Libs > > > Subject: Re: RFR(s): 8150460: (linux|bsd|aix)_close.c: file > descriptor table may become large or may not work at all > > Thomas, > > Sorry for being later. > > I'm not sure we should take a lock at ll. 131 for each fdTable lookup. > > As soon as we never deallocate fdTable[base_index] it's safe to try to > return value first and then take a slow path (take a lock and check > fdTable[base_index] again) > > -Dmitry > > > On 2016-02-24 20:30, Thomas St?fe wrote: > > Hi all, > > > > please take a look at this proposed fix. > > > > The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 > > The Webrev: > > > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ > > > > Basically, the file descriptor table implemented in linux_close.c > may not > > work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a 50MB > > table) for high values for RLIMIT_NO_FILE. Please see details in > the bug > > description. > > > > The proposed solution is to implement the file descriptor table not as > > plain array, but as a twodimensional sparse array, which grows on > demand. > > This keeps the memory footprint small and fixes the corner cases > described > > in the bug description. > > > > Please note that the implemented solution is kept simple, at the > cost of > > somewhat higher (some kb) memory footprint for low values of > RLIMIT_NO_FILE. > > This can be optimized, if we even think it is worth the trouble. > > > > Please also note that the proposed implementation now uses a mutex > lock for > > every call to getFdEntry() - I do not think this matters, as this > is all in > > preparation for an IO system call, which are usually way more > expensive > > than a pthread mutex. But again, this could be optimized. > > > > This is an implementation proposal for Linux; the same code found > its way > > to BSD and AIX. 
Should you approve of this fix, I will modify > those files > > too. > > > > Thank you and Kind Regards, Thomas > > > > > -- > Dmitry Samersoff > Oracle Java development team, Saint Petersburg, Russia > * I would love to change the world, but they won't give me the sources. > > -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From michael.haupt at oracle.com Tue Mar 1 13:46:47 2016 From: michael.haupt at oracle.com (Michael Haupt) Date: Tue, 1 Mar 2016 14:46:47 +0100 Subject: RFR(XS): 8150953: j.l.i.MethodHandles: example section in whileLoop(...) provides example for doWhileLoop Message-ID: Dear all, please review this fix. Bug: https://bugs.openjdk.java.net/browse/JDK-8150953 Webrev: http://cr.openjdk.java.net/~mhaupt/8150953/webrev.00/ The API docs and corresponding JavaDocExampleTest test case for MethodHandles.whileLoop() wrongly used the example for MethodHandles.doWhileLoop(). Thanks, Michael -- Dr. Michael Haupt | Principal Member of Technical Staff Phone: +49 331 200 7277 | Fax: +49 331 200 7561 Oracle Java Platform Group | LangTools Team | Nashorn Oracle Deutschland B.V. & Co. KG | Schiffbauergasse 14 | 14467 Potsdam, Germany ORACLE Deutschland B.V. & Co. KG | Hauptverwaltung: Riesstraße 25, D-80992 München Registergericht: Amtsgericht München, HRA 95603 Komplementärin: ORACLE Deutschland Verwaltung B.V. | Hertogswetering 163/167, 3543 AS Utrecht, Niederlande Handelsregister der Handelskammer Midden-Nederland, Nr. 
30143697 Geschäftsführer: Alexander van der Ven, Jan Schultheiss, Val Maher Oracle is committed to developing practices and products that help protect the environment From dmitry.samersoff at oracle.com Tue Mar 1 13:49:31 2016 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Tue, 1 Mar 2016 16:49:31 +0300 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> <56D58EB8.4080504@oracle.com> Message-ID: <56D59DEB.2090007@oracle.com> Thomas, > For fd < 65535, I effectively fall back to a plain array lookup by > setting the size of the base table to 1. So, for this case the sparse > array degenerates to a one-dimensional plain array. It might be good to make it more explicit: just allocate a separate array for values less than 65535 and skip other machinery if nbr_files.rlim_max less than 65536. But it's just a cosmetic, so feel free to leave the code as is. -Dmitry On 2016-03-01 16:33, Thomas Stüfe wrote: > Hi Dmitry, > > On Tue, Mar 1, 2016 at 1:44 PM, Dmitry Samersoff > > wrote: > > Christoph, > > > Dmitry, I think you are referring to an outdated version of the > > webrev, the current one is this: > > Yes. Sorry! > > You may consider a bit different approach to save memory: > > Allocate multiple baseTables for different ranges of fd's with > plain array of 32 * (fdEntry_t*) for simple case. > > i.e. if (fd < 32) > do plain array lookup > > if (fd < N1) > do two steps lookup in baseTable1 > > if (fd < N2) > do two steps lookup in baseTable2 > > > How does this differ from my approach > in http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ ? > > For fd < 65535, I effectively fall back to a plain array lookup by > setting the size of the base table to 1. So, for this case the sparse > array degenerates to a one-dimensional plain array. 
> > Kind Regards, Thomas > > > > ... > > -Dmitry > > > > On 2016-03-01 13:47, Langer, Christoph wrote: > > Hi Dmitry, Thomas, > > > > Dmitry, I think you are referring to an outdated version of the > > webrev, the current one is this: > > > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ > > > > However, I agree - the lock should probably not be taken every time > > but only in the case where we find the entry table was not yet > > allocated. > > > > So, maybe getFdEntry should always do this: entryTable = > > fdTable[rootArrayIndex]; // no matter if rootArrayIndex is 0 > > > > Then check if entryTable is NULL and if yes then enter a guarded > > section which does the allocation and before that checks if another > > thread did it already. > > > > Also I'm wondering if the entryArrayMask and the rootArrayMask should > > be calculated once in the init() function and stored in a static > > field? Because right now it is calculated every time getFdEntry() is > > called and I don't think this would be optimized by inlining... > > > > Best regards Christoph > > > > -----Original Message----- From: core-libs-dev > > [mailto:core-libs-dev-bounces at openjdk.java.net > ] On Behalf Of Dmitry > > Samersoff Sent: Dienstag, 1. März 2016 11:20 To: Thomas Stüfe > > >; Java > Core Libs > > > Subject: Re: RFR(s): 8150460: > > (linux|bsd|aix)_close.c: file descriptor table may become large or > > may not work at all > > > > Thomas, > > > > Sorry for being late. > > > > I'm not sure we should take a lock at ll. 131 for each fdTable > > lookup. > > > > As soon as we never deallocate fdTable[base_index] it's safe to try > > to return value first and then take a slow path (take a lock and > > check fdTable[base_index] again) > > > > -Dmitry > > > > > > On 2016-02-24 20:30, Thomas Stüfe wrote: > >> Hi all, > >> > >> please take a look at this proposed fix. 
> >> > >> The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 The > >> Webrev: > >> > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ > >> > >> > >> > Basically, the file descriptor table implemented in linux_close.c > may not > >> work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a > >> 50MB table) for high values for RLIMIT_NO_FILE. Please see details > >> in the bug description. > >> > >> The proposed solution is to implement the file descriptor table not > >> as plain array, but as a two-dimensional sparse array, which grows > >> on demand. This keeps the memory footprint small and fixes the > >> corner cases described in the bug description. > >> > >> Please note that the implemented solution is kept simple, at the > >> cost of somewhat higher (some kb) memory footprint for low values > >> of RLIMIT_NO_FILE. This can be optimized, if we even think it is > >> worth the trouble. > >> > >> Please also note that the proposed implementation now uses a mutex > >> lock for every call to getFdEntry() - I do not think this matters, > >> as this is all in preparation for an IO system call, which are > >> usually way more expensive than a pthread mutex. But again, this > >> could be optimized. > >> > >> This is an implementation proposal for Linux; the same code found > >> its way to BSD and AIX. Should you approve of this fix, I will > >> modify those files too. > >> > >> Thank you and Kind Regards, Thomas > >> > > > > > > > -- > Dmitry Samersoff > Oracle Java development team, Saint Petersburg, Russia > * I would love to change the world, but they won't give me the sources. > > -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. 
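[Editorial aside] The layout discussed in this thread — a flat fast path for small fds plus an on-demand second level for large ones — can be sketched as follows. This is a hypothetical model, not the linux_close.c patch itself: the names (SparseFdTable, FLAT_SIZE, SLAB_BITS) are invented, and it is written in Java so the lazy publication of a second-level slab can lean on well-defined atomics rather than platform-specific C barriers.

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// Hypothetical model of the two-level fd table: fds below FLAT_SIZE index a
// pre-allocated flat array; larger fds go through lazily allocated slabs.
final class SparseFdTable {
    static final class FdEntry { volatile boolean closed; }

    static final int FLAT_SIZE = 1 << 16;   // direct slots for fd < 65536
    static final int SLAB_BITS = 10;        // 1024 entries per slab
    static final int SLAB_SIZE = 1 << SLAB_BITS;

    private final FdEntry[] flat = new FdEntry[FLAT_SIZE];
    private final AtomicReferenceArray<FdEntry[]> slabs;

    SparseFdTable(int maxFds) {
        for (int i = 0; i < FLAT_SIZE; i++) flat[i] = new FdEntry();
        int nSlabs = Math.max(0, (maxFds - FLAT_SIZE + SLAB_SIZE - 1) / SLAB_SIZE);
        slabs = new AtomicReferenceArray<>(nSlabs);
    }

    FdEntry get(int fd) {
        if (fd < FLAT_SIZE) return flat[fd];          // fast path, no machinery
        int high = (fd - FLAT_SIZE) >>> SLAB_BITS;
        FdEntry[] slab = slabs.get(high);             // volatile read
        if (slab == null) {                           // allocate slab on demand
            FdEntry[] fresh = new FdEntry[SLAB_SIZE];
            for (int i = 0; i < SLAB_SIZE; i++) fresh[i] = new FdEntry();
            // CAS publishes the fully initialized slab; a racing loser simply
            // re-reads the winner's slab. Slabs are never freed, so entries
            // obtained here stay valid.
            if (!slabs.compareAndSet(high, null, fresh)) {
                slab = slabs.get(high);
            } else {
                slab = fresh;
            }
        }
        return slab[(fd - FLAT_SIZE) & (SLAB_SIZE - 1)];
    }

    public static void main(String[] args) {
        SparseFdTable t = new SparseFdTable(1 << 20);
        System.out.println(t.get(70000) == t.get(70000)); // prints true
    }
}
```

Note one deliberate difference from the webrev under review: instead of a pthread mutex it uses a compare-and-set race in which the loser's allocation is discarded — acceptable only because slabs are never deallocated.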
From thomas.stuefe at gmail.com Tue Mar 1 13:51:21 2016 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 1 Mar 2016 14:51:21 +0100 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: <56D59DEB.2090007@oracle.com> References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> <56D58EB8.4080504@oracle.com> <56D59DEB.2090007@oracle.com> Message-ID: Dmitry, On Tue, Mar 1, 2016 at 2:49 PM, Dmitry Samersoff < dmitry.samersoff at oracle.com> wrote: > Thomas, > > > For fd < 65535, I effectively fall back to a plain array lookup by > > setting the size of the base table to 1. So, for this case the sparse > > array degenerates to a one-dimensional plain array. > > It might be good to make it more explicit: just allocate a separate > array for values less than 65535 and skip other machinery if > nbr_files.rlim_max less than 65536. > > Yes, maybe it makes the code more readable. My code is clever, but I am not a big fan of cleverness if it costs readability. I will prepare a new change. Thanks for reviewing! ..Thomas > But it's just a cosmetic, so feel free to leave the code as is. > > -Dmitry > > > > On 2016-03-01 16:33, Thomas Stüfe wrote: > > Hi Dmitry, > > > > On Tue, Mar 1, 2016 at 1:44 PM, Dmitry Samersoff > > > > wrote: > > > > Christoph, > > > > > Dmitry, I think you are referring to an outdated version of the > > > webrev, the current one is this: > > > > Yes. Sorry! > > > > You may consider a bit different approach to save memory: > > > > Allocate multiple baseTables for different ranges of fd's with > > plain array of 32 * (fdEntry_t*) for simple case. > > > > i.e. 
if (fd < 32) > > do plain array lookup > > > > if (fd < N1) > > do two steps lookup in baseTable1 > > > > if (fd < N2) > > do two steps lookup in baseTable2 > > > > > > How does this differ from my approach > > in > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ > ? > > > > For fd < 65535, I effectively fall back to a plain array lookup by > > setting the size of the base table to 1. So, for this case the sparse > > array degenerates to a one-dimensional plain array. > > > > Kind Regards, Thomas > > > > > > > > ... > > > > -Dmitry > > > > > > > > On 2016-03-01 13:47, Langer, Christoph wrote: > > > Hi Dmitry, Thomas, > > > > > > Dmitry, I think you are referring to an outdated version of the > > > webrev, the current one is this: > > > > > > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ > > > > > > However, I agree - the lock should probably not be taken every > time > > > but only in the case where we find the entry table was not yet > > > allocated. > > > > > > So, maybe getFdEntry should always do this: entryTable = > > > fdTable[rootArrayIndex]; // no matter if rootArrayIndex is 0 > > > > > > Then check if entryTable is NULL and if yes then enter a guarded > > > section which does the allocation and before that checks if another > > > thread did it already. > > > > > > Also I'm wondering if the entryArrayMask and the rootArrayMask > should > > > be calculated once in the init() function and stored in a static > > > field? Because right now it is calculated every time getFdEntry() > is > > > called and I don't think this would be optimized by inlining... > > > > > > Best regards Christoph > > > > > > -----Original Message----- From: core-libs-dev > > > [mailto:core-libs-dev-bounces at openjdk.java.net > > ] On Behalf Of Dmitry > > > Samersoff Sent: Dienstag, 1. 
März 2016 11:20 To: Thomas Stüfe > > > >; Java > > Core Libs > > > > > Subject: Re: RFR(s): > 8150460: > > > (linux|bsd|aix)_close.c: file descriptor table may become large or > > > may not work at all > > > > > > Thomas, > > > > > > Sorry for being late. > > > > > > I'm not sure we should take a lock at ll. 131 for each fdTable > > > lookup. > > > > > > As soon as we never deallocate fdTable[base_index] it's safe to try > > > to return value first and then take a slow path (take a lock and > > > check fdTable[base_index] again) > > > > > > -Dmitry > > > > > > > > > On 2016-02-24 20:30, Thomas Stüfe wrote: > > >> Hi all, > > >> > > >> please take a look at this proposed fix. > > >> > > >> The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 The > > >> Webrev: > > >> > > > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ > > >> > > >> > > >> > > Basically, the file descriptor table implemented in linux_close.c > > may not > > >> work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a > > >> 50MB table) for high values for RLIMIT_NO_FILE. Please see details > > >> in the bug description. > > >> > > >> The proposed solution is to implement the file descriptor table > not > > >> as plain array, but as a two-dimensional sparse array, which grows > > >> on demand. This keeps the memory footprint small and fixes the > > >> corner cases described in the bug description. > > >> > > >> Please note that the implemented solution is kept simple, at the > > >> cost of somewhat higher (some kb) memory footprint for low values > > >> of RLIMIT_NO_FILE. This can be optimized, if we even think it is > > >> worth the trouble. > > >> > > >> Please also note that the proposed implementation now uses a mutex > > >> lock for every call to getFdEntry() - I do not think this matters, > > >> as this is all in preparation for an IO system call, which are > > >> usually way more expensive than a pthread mutex. 
But again, this > > >> could be optimized. > > >> > > >> This is an implementation proposal for Linux; the same code found > > >> its way to BSD and AIX. Should you approve of this fix, I will > > >> modify those files too. > > >> > > >> Thank you and Kind Regards, Thomas > > >> > > > > > > > > > > > > -- > > Dmitry Samersoff > > Oracle Java development team, Saint Petersburg, Russia > > * I would love to change the world, but they won't give me the > sources. > > > > > > > -- > Dmitry Samersoff > Oracle Java development team, Saint Petersburg, Russia > * I would love to change the world, but they won't give me the sources. > From Lance.Andersen at oracle.com Tue Mar 1 13:52:56 2016 From: Lance.Andersen at oracle.com (Lance Andersen) Date: Tue, 1 Mar 2016 08:52:56 -0500 Subject: RFR - 8148820: Missing @since Javadoc tag in Logger.log(Level, Supplier) In-Reply-To: <56D59981.6010903@oracle.com> References: <56D59981.6010903@oracle.com> Message-ID: <1A77B35B-83A5-4E05-8610-F5915BB51904@oracle.com> +1 -- Lance Andersen| Principal Member of Technical Staff | +1.781.442.2037 Oracle Java Engineering 1 Network Drive Burlington, MA 01803 Lance.Andersen at oracle.com Sent from my iPhone > On Mar 1, 2016, at 8:30 AM, Daniel Fuchs wrote: > > Hi, > > Please find below a trivial fix for > > 8148820: Missing @since Javadoc tag in Logger.log(Level, Supplier) > https://bugs.openjdk.java.net/browse/JDK-8148820 > > This method was added to java.util.logging.Logger in jdk 8, but > the @since tag was missing. 
> > -- daniel > > diff --git a/src/java.logging/share/classes/java/util/logging/Logger.java b/src/java.logging/share/classes/java/util/logging/Logger.java > --- a/src/java.logging/share/classes/java/util/logging/Logger.java > +++ b/src/java.logging/share/classes/java/util/logging/Logger.java > @@ -839,6 +839,7 @@ > * @param level One of the message level identifiers, e.g., SEVERE > * @param msgSupplier A function, which when called, produces the > * desired log message > + * @since 1.8 > */ > public void log(Level level, Supplier msgSupplier) { > if (!isLoggable(level)) { From aleksej.efimov at oracle.com Tue Mar 1 13:56:03 2016 From: aleksej.efimov at oracle.com (Aleksej Efimov) Date: Tue, 1 Mar 2016 16:56:03 +0300 Subject: [9] RFR: 8150174: Update JAX-WS RI integration to latest version (2.3.0-SNAPSHOT) In-Reply-To: <3539B35D-4BB7-41E4-83B2-B01B4125A47A@oracle.com> References: <56CE33AF.3060504@oracle.com> <3539B35D-4BB7-41E4-83B2-B01B4125A47A@oracle.com> Message-ID: <56D59F73.5090704@oracle.com> Hi Lance, Thanks for review! 
Best Aleksej On 02/29/2016 02:43 PM, Lance Andersen wrote: > Hi Alejsej > > This looks fine > > Best > Lance > On Feb 24, 2016, at 5:50 PM, Aleksej Efimov > wrote: > >> Hi, >> >> Please, review the bulk update of JAX-WS/B from upstream projects: >> http://cr.openjdk.java.net/~aefimov/jaxws-integrations/8150174/9/00/ >> >> >> Details (list of fixed issues) can be found in bug report: >> https://bugs.openjdk.java.net/browse/JDK-8150174 >> >> The following test sets were executed over JDK9 with integrated changes: >> jdk_other JTREG tests; JCK9 jaxws tests; JAXWS unit tests; >> >> Thanks, >> Aleksej > > > > Lance > Andersen| Principal Member of Technical Staff | +1.781.442.2037 > Oracle Java Engineering > 1 Network Drive > Burlington, MA 01803 > Lance.Andersen at oracle.com > > > From Roger.Riggs at Oracle.com Tue Mar 1 15:06:51 2016 From: Roger.Riggs at Oracle.com (Roger Riggs) Date: Tue, 1 Mar 2016 10:06:51 -0500 Subject: RFR - 8148820: Missing @since Javadoc tag in Logger.log(Level, Supplier) In-Reply-To: <56D59981.6010903@oracle.com> References: <56D59981.6010903@oracle.com> Message-ID: <56D5B00B.4040406@Oracle.com> +1 On 3/1/2016 8:30 AM, Daniel Fuchs wrote: > Hi, > > Please find below a trivial fix for > > 8148820: Missing @since Javadoc tag in Logger.log(Level, Supplier) > https://bugs.openjdk.java.net/browse/JDK-8148820 > > This method was added to java.util.logging.Logger in jdk 8, but > the @since tag was missing. 
> > -- daniel > > diff --git > a/src/java.logging/share/classes/java/util/logging/Logger.java > b/src/java.logging/share/classes/java/util/logging/Logger.java > --- a/src/java.logging/share/classes/java/util/logging/Logger.java > +++ b/src/java.logging/share/classes/java/util/logging/Logger.java > @@ -839,6 +839,7 @@ > * @param level One of the message level identifiers, e.g., > SEVERE > * @param msgSupplier A function, which when called, produces > the > * desired log message > + * @since 1.8 > */ > public void log(Level level, Supplier msgSupplier) { > if (!isLoggable(level)) { From mandy.chung at oracle.com Tue Mar 1 16:24:15 2016 From: mandy.chung at oracle.com (Mandy Chung) Date: Tue, 1 Mar 2016 08:24:15 -0800 Subject: RFR: 8150856 - Inconsistent API documentation for @param caller in System.LoggerFinder.getLogger In-Reply-To: <56D4840C.3030006@oracle.com> References: <56D4840C.3030006@oracle.com> Message-ID: <90997DEB-9CD5-45F2-8C13-5870C31C9096@oracle.com> > On Feb 29, 2016, at 9:46 AM, Daniel Fuchs wrote: > > --- old/src/java.base/share/classes/java/lang/System.java 2016-02-29 18:41:30.000000000 +0100 > +++ new/src/java.base/share/classes/java/lang/System.java 2016-02-29 18:41:30.000000000 +0100 > @@ -1,5 +1,5 @@ > /* > - * Copyright (c) 1994, 2014, Oracle and/or its affiliates. All rights reserved. > + * Copyright (c) 1994, 2016, Oracle and/or its affiliates. All rights reserved. > * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. > * > * This code is free software; you can redistribute it and/or modify it > @@ -1419,7 +1419,6 @@ > * > * @param name the name of the logger. > * @param caller the class for which the logger is being requested; > - * can be {@code null}. > * > * @return a {@link Logger logger} suitable for the given caller's > * use. +1 s/requested;/requested./ as Martin points out. 
Mandy From chris.hegarty at oracle.com Tue Mar 1 16:38:54 2016 From: chris.hegarty at oracle.com (Chris Hegarty) Date: Tue, 1 Mar 2016 16:38:54 +0000 Subject: RFR [9] 8150976: JarFile and MRJAR tests should use the JDK specific Version API Message-ID: Currently JarFile and MRJAR tests use sun.misc.Version to retrieve the major runtime version. They should be updated to use the new JDK specific Version API. Note: There is an issue, 8144062 [1], to revisit the JDK specific Version API to determine if it should be moved, or even standardized. The changes being proposed here may need to be updated, in a trivial way, in the future, but this issue is intended to break the dependency on sun.misc.Version so that 8150162 [2] can make progress. Additionally, the future refactoring will most likely be trivial. http://cr.openjdk.java.net/~chegar/8150976/ https://bugs.openjdk.java.net/browse/JDK-8150976 -Chris. [1] https://bugs.openjdk.java.net/browse/JDK-8144062 [2] https://bugs.openjdk.java.net/browse/JDK-8150162 From ivan.gerasimov at oracle.com Tue Mar 1 17:33:54 2016 From: ivan.gerasimov at oracle.com (Ivan Gerasimov) Date: Tue, 1 Mar 2016 20:33:54 +0300 Subject: RFR: 8149330: Capacity of StringBuilder should not get close to Integer.MAX_VALUE unless necessary In-Reply-To: References: <56CB5F76.3030102@oracle.com> <56CB9B9B.8070509@oracle.com> <56CC78BA.5010409@oracle.com> Message-ID: <56D5D282.8090804@oracle.com> Hello! I added another regtest to perform some basic sanity checks wrt StringBuilder's capacity. In this test we only operate on relatively small sizes. A situation when capacity grows large is checked in a separate test, which is ignored by default. Do you think this fix is good to go? BUGURL: https://bugs.openjdk.java.net/browse/JDK-8149330 WEBREV: http://cr.openjdk.java.net/~igerasim/8149330/03/webrev/ Comments, suggestions are very welcome. 
Sincerely yours, Ivan On 23.02.2016 20:29, Martin Buchholz wrote: > On Tue, Feb 23, 2016 at 7:20 AM, Ivan Gerasimov > wrote: >> While writing this, I just noticed that I actually made a mistake when did >> newCapacity < 0 check. >> This wouldn't catch the overflow when the oldCapacity happens to be >> Integer.MAX_VALUE (which is not possible with the current hotspot, but may >> become an issue one day). > Well done! > > One interesting way that capacity may end up being Integer.MAX_VALUE > is if we switch to char[] for storage. Then in LATIN1 mode you could > store Integer.MAX_VALUE elements even without help from hotspot! > From mikael.vidstedt at oracle.com Tue Mar 1 18:29:55 2016 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Tue, 1 Mar 2016 10:29:55 -0800 Subject: RFR (XS): 8149596: Remove java.nio.Bits copy wrapper methods Message-ID: <56D5DFA3.7010300@oracle.com> As part of JDK-8141491[1] the native methods in java.nio.Bits were removed, and the functionality is instead provided by the VM through j.i.m.Unsafe. The Bits wrapper methods are therefore redundant and can be removed. Bug: https://bugs.openjdk.java.net/browse/JDK-8149596 Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8149596/webrev.00/webrev/ I've run the java/nio jtreg tests and it all passes (modulo a couple of unrelated failures). Cheers, Mikael [1] https://bugs.openjdk.java.net/browse/JDK-8141491 From brian.burkhalter at oracle.com Tue Mar 1 18:48:07 2016 From: brian.burkhalter at oracle.com (Brian Burkhalter) Date: Tue, 1 Mar 2016 10:48:07 -0800 Subject: RFR (XS): 8149596: Remove java.nio.Bits copy wrapper methods In-Reply-To: <56D5DFA3.7010300@oracle.com> References: <56D5DFA3.7010300@oracle.com> Message-ID: Hi Mikael, Not a Reviewer here, but it looks OK to me aside from the copyright year in the template file. Nice to see code removed! 
Brian On Mar 1, 2016, at 10:29 AM, Mikael Vidstedt wrote: > As part of JDK-8141491[1] the native methods in java.nio.Bits were removed, and the functionality is instead provided by the VM through j.i.m.Unsafe. The Bits wrapper methods are therefore redundant and can be removed. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149596 > Webrev:http://cr.openjdk.java.net/~mikael/webrevs/8149596/webrev.00/webrev/ > > I've run the java/nio jtreg tests and it all passes (modulo a couple of unrelated failures). From martinrb at google.com Tue Mar 1 18:54:38 2016 From: martinrb at google.com (Martin Buchholz) Date: Tue, 1 Mar 2016 10:54:38 -0800 Subject: RFR: 8149330: Capacity of StringBuilder should not get close to Integer.MAX_VALUE unless necessary In-Reply-To: <56D5D282.8090804@oracle.com> References: <56CB5F76.3030102@oracle.com> <56CB9B9B.8070509@oracle.com> <56CC78BA.5010409@oracle.com> <56D5D282.8090804@oracle.com> Message-ID: Thanks, Ivan. 135 /** 136 * This method has the same contract as ensureCapacity, but is 137 * never synchronized. 138 */ This comment should be updated, since treatment of negative argument is completely different. Otherwise looks good. On Tue, Mar 1, 2016 at 9:33 AM, Ivan Gerasimov wrote: > Hello! > > I added another regtest to perform some basic sanity checks wrt > StringBuilder's capacity. > In this test I we only operate on relatively small sizes. > A situation when capacity grows large is checked in a separate test, which > is ignored by default. > > Do you think this fix is good to go? > > BUGURL: https://bugs.openjdk.java.net/browse/JDK-8149330 > WEBREV: http://cr.openjdk.java.net/~igerasim/8149330/03/webrev/ > > Comments, suggestions are very welcome. > > Sincerely yours, > Ivan > > > > On 23.02.2016 20:29, Martin Buchholz wrote: >> >> On Tue, Feb 23, 2016 at 7:20 AM, Ivan Gerasimov >> wrote: >>> >>> While writing this, I just noticed that I actually made a mistake when >>> did >>> newCapacity < 0 check. 
>>> This wouldn't catch the overflow when the oldCapacity happens to be >>> Integer.MAX_VALUE (which is not possible with the current hotspot, but >>> may >>> become an issue one day). >> >> Well done! >> >> One interesting way that capacity may end up being Integer.MAX_VALUE >> is if we switch to char[] for storage. Then in LATIN1 mode you could >> store Integer.MAX_VALUE elements even without help from hotspot! >> > From chris.hegarty at oracle.com Tue Mar 1 19:19:29 2016 From: chris.hegarty at oracle.com (Chris Hegarty) Date: Tue, 1 Mar 2016 19:19:29 +0000 Subject: RFR (XS): 8149596: Remove java.nio.Bits copy wrapper methods In-Reply-To: References: <56D5DFA3.7010300@oracle.com> Message-ID: +1. -Chris. On 1 Mar 2016, at 18:48, Brian Burkhalter wrote: > Hi Mikael, > > Not a Reviewer here, but it looks OK to me aside from the copyright year in the template file. > > Nice to see code removed! > > Brian > > On Mar 1, 2016, at 10:29 AM, Mikael Vidstedt wrote: > >> As part of JDK-8141491[1] the native methods in java.nio.Bits were removed, and the functionality is instead provided by the VM through j.i.m.Unsafe. The Bits wrapper methods are therefore redundant and can be removed. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8149596 >> Webrev:http://cr.openjdk.java.net/~mikael/webrevs/8149596/webrev.00/webrev/ >> >> I've run the java/nio jtreg tests and it all passes (modulo a couple of unrelated failures). 
> From martinrb at google.com Tue Mar 1 19:40:59 2016 From: martinrb at google.com (Martin Buchholz) Date: Tue, 1 Mar 2016 11:40:59 -0800 Subject: RFR: 8149330: Capacity of StringBuilder should not get close to Integer.MAX_VALUE unless necessary In-Reply-To: <56CC03CF.4000101@oracle.com> References: <56CB5F76.3030102@oracle.com> <56CB9B9B.8070509@oracle.com> <56CC03CF.4000101@oracle.com> Message-ID: On Mon, Feb 22, 2016 at 11:01 PM, Xueming Shen wrote: > From certain perspective it's a kinda of "regression" that the maximum > capacity for a non-latin1 > buffer/builder is reduced by 2 . But arguably it's really an implementation > detail that how big a > StringBuffer/Builder can really go, as the spec and the implementation > don't/can't guarantee > you can really have a buffer/build with a Integer.MAX_VALUE capacity. On > the other hand on > certain system you might be able to have a bigger buffer/builder for latin-1 > only characters, as > it only requires half the space with the compact string implementation. That > said, I was debating > whether or not the constructor (with the capacity parameter) should check > the capacity, with the > assumption that the buffer/builder might be for non-latin1 input. But it > doesn't like the check > will bring in any benefit... I've done much of this kind of work to "get the last 2x of capacity" in collections and jar/zip, and I've always been surprised how popular these changes are. Serious users run into these limits and they REALLY want that last bit of capacity. So I would do the work to avoid the capacity regression here (or alternatively, don't even bother with LATIN1 compression for "temporary" objects like StringBuilder). 
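[Editorial aside] The overflow corner case Ivan and Martin discuss in this thread can be made concrete with a small sketch. This is not the actual AbstractStringBuilder code — the class name, constant, and headroom value are invented for illustration — but it contrasts the wrap-prone `newCapacity < 0` test with a comparison that cannot wrap.

```java
// Illustration of the capacity-growth overflow hazard; names are hypothetical.
final class Capacity {
    // Common "max array size" idiom: a little headroom below Integer.MAX_VALUE.
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    /** Doubling step guarded only by a sign check; returns -1 on detected overflow.
     *  When oldCapacity == Integer.MAX_VALUE, (oldCapacity << 1) + 2 wraps all the
     *  way around to 0, which is not negative, so the overflow goes undetected. */
    static int naiveNewCapacity(int oldCapacity) {
        int newCapacity = (oldCapacity << 1) + 2;
        return (newCapacity < 0) ? -1 : newCapacity;
    }

    /** Overflow-safe variant: compares before any arithmetic can wrap,
     *  and clamps to MAX_ARRAY_SIZE instead of wrapping past it. */
    static int safeNewCapacity(int oldCapacity, int minCapacity) {
        if (minCapacity < 0 || minCapacity > MAX_ARRAY_SIZE) {
            throw new OutOfMemoryError("requested capacity too large");
        }
        int newCapacity = (oldCapacity <= (MAX_ARRAY_SIZE - 2) / 2)
                ? (oldCapacity << 1) + 2
                : MAX_ARRAY_SIZE;
        return Math.max(newCapacity, minCapacity);
    }

    public static void main(String[] args) {
        System.out.println(naiveNewCapacity(Integer.MAX_VALUE)); // prints 0: overflow missed
        System.out.println(safeNewCapacity(Integer.MAX_VALUE, 10) == MAX_ARRAY_SIZE); // prints true
    }
}
```

The design point matches Ivan's observation: a sign check only catches overflows that happen to land in the negative range, whereas subtracting (or comparing) against a fixed ceiling catches every case, including a wrap through zero.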
From david.holmes at oracle.com Tue Mar 1 20:56:34 2016 From: david.holmes at oracle.com (David Holmes) Date: Wed, 2 Mar 2016 06:56:34 +1000 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: <56D59B83.3010503@oracle.com> References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> <56D59B83.3010503@oracle.com> Message-ID: <56D60202.6030803@oracle.com> On 1/03/2016 11:39 PM, Dmitry Samersoff wrote: > Thomas, > > We probably can do: > > if (fdTable[rootArrayIndex] != NULL) { > entryTable = fdTable[rootArrayIndex]; > } > else { // existing code > pthread_mutex_lock(&fdTableLock); > if (fdTable[rootArrayIndex] == NULL) { > .... > } > } This is double-checked locking and it requires memory barriers to be correct - as Thomas already discussed. David > -Dmitry > > > On 2016-03-01 16:13, Thomas Stüfe wrote: >> Dmitry, Christoph, >> >> I am not 100% sure this would work for weak ordering platforms. >> >> If I understand you correctly you suggest the double checking pattern: >> >> if (basetable[index] == NULL) { >> lock >> if (basetable[index] == NULL) { >> basetable[index] = calloc(size); >> } >> unlock >> } >> >> The problem I cannot wrap my head around is how to make this safe for >> all platforms. Note: I am not an expert for this. >> >> How do you prevent the "reading thread reads partially initialized >> object" problem? >> >> Consider this: We need to allocate memory, set it completely to zero and >> then store a pointer to it in basetable[index]. This means we have >> multiple stores - one store for the pointer, n stores for zero-ing out >> the memory, and god knows how many stores the C-Runtime allocator needs >> to update its internal structures. 
>> >> On weak ordering platforms like ppc (and arm?), the store for >> basetable[index] may be visible before the other stores, so the reading >> threads, running on different CPUs, may read a pointer to partially >> initialized memory. What you need is a memory barrier between the >> calloc() and store of basetable[index], to prevent the latter store from >> floating above the other stores. >> >> I did not find anything about multithread safety in the calloc() docs, >> or guaranteed barrier behaviour, nor did I expect anything. In the >> hotspot we have our memory barrier APIs, but in the JDK I am confined to >> basic C and there is no way that I know of to do memory barriers with >> plain Posix APIs. >> >> Bottomline, I am not sure. Maybe I am too cautious here, but I do not >> see a way to make this safe without locking the reader thread too. >> >> Also, we are about to do an IO operation - is a mutex really that bad >> here? Especially with the optimization Roger suggested of pre-allocating >> the basetable[0] array and omitting lock protection there? >> >> Kind Regards, >> >> Thomas >> >> >> >> >> On Tue, Mar 1, 2016 at 11:47 AM, Langer, Christoph >> > wrote: >> >> Hi Dmitry, Thomas, >> >> Dmitry, I think you are referring to an outdated version of the >> webrev, the current one is this: >> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ >> >> However, I agree - the lock should probably not be taken every time >> but only in the case where we find the entry table was not yet >> allocated. >> >> So, maybe getFdEntry should always do this: >> entryTable = fdTable[rootArrayIndex]; // no matter if rootArrayIndex >> is 0 >> >> Then check if entryTable is NULL and if yes then enter a guarded >> section which does the allocation and before that checks if another >> thread did it already. 
>> >> Also I'm wondering if the entryArrayMask and the rootArrayMask >> should be calculated once in the init() function and stored in a >> static field? Because right now it is calculated every time >> getFdEntry() is called and I don't think this would be optimized by >> inlining... >> >> Best regards >> Christoph >> >> -----Original Message----- >> From: core-libs-dev [mailto:core-libs-dev-bounces at openjdk.java.net >> ] On Behalf Of Dmitry >> Samersoff >> Sent: Dienstag, 1. März 2016 11:20 >> To: Thomas Stüfe > >; Java Core Libs >> > >> Subject: Re: RFR(s): 8150460: (linux|bsd|aix)_close.c: file >> descriptor table may become large or may not work at all >> >> Thomas, >> >> Sorry for being late. >> >> I'm not sure we should take a lock at ll. 131 for each fdTable lookup. >> >> As soon as we never deallocate fdTable[base_index] it's safe to try to >> return value first and then take a slow path (take a lock and check >> fdTable[base_index] again) >> >> -Dmitry >> >> >> On 2016-02-24 20:30, Thomas Stüfe wrote: >> > Hi all, >> > >> > please take a look at this proposed fix. >> > >> > The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 >> > The Webrev: >> > >> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ >> > >> > Basically, the file descriptor table implemented in linux_close.c >> may not >> > work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a 50MB >> > table) for high values for RLIMIT_NO_FILE. Please see details in >> the bug >> > description. >> > >> > The proposed solution is to implement the file descriptor table not as >> > plain array, but as a two-dimensional sparse array, which grows on >> demand. >> > This keeps the memory footprint small and fixes the corner cases >> described >> > in the bug description. >> > >> > Please note that the implemented solution is kept simple, at the >> cost of >> > somewhat higher (some kb) memory footprint for low values of >> RLIMIT_NO_FILE. 
>> > This can be optimized, if we even think it is worth the trouble. >> > >> > Please also note that the proposed implementation now uses a mutex >> lock for >> > every call to getFdEntry() - I do not think this matters, as this >> is all in >> > preparation for an IO system call, which are usually way more >> expensive >> > than a pthread mutex. But again, this could be optimized. >> > >> > This is an implementation proposal for Linux; the same code found >> its way >> > to BSD and AIX. Should you approve of this fix, I will modify >> those files >> > too. >> > >> > Thank you and Kind Regards, Thomas >> > >> >> >> -- >> Dmitry Samersoff >> Oracle Java development team, Saint Petersburg, Russia >> * I would love to change the world, but they won't give me the sources. >> >> > > From ivan.gerasimov at oracle.com Tue Mar 1 20:58:10 2016 From: ivan.gerasimov at oracle.com (Ivan Gerasimov) Date: Tue, 1 Mar 2016 23:58:10 +0300 Subject: RFR: 8149330: Capacity of StringBuilder should not get close to Integer.MAX_VALUE unless necessary In-Reply-To: References: <56CB5F76.3030102@oracle.com> <56CB9B9B.8070509@oracle.com> <56CC78BA.5010409@oracle.com> <56D5D282.8090804@oracle.com> Message-ID: <56D60262.5070003@oracle.com> Thank you Martin! On 01.03.2016 21:54, Martin Buchholz wrote: > Thanks, Ivan. > > 135 /** > 136 * This method has the same contract as ensureCapacity, but is > 137 * never synchronized. > 138 */ I'll update this comment to reflect the real behavior of that method. Sincerely yours, Ivan > This comment should be updated, since treatment of negative argument > is completely different. > > Otherwise looks good. > > > On Tue, Mar 1, 2016 at 9:33 AM, Ivan Gerasimov > wrote: >> Hello! >> >> I added another regtest to perform some basic sanity checks wrt >> StringBuilder's capacity. >> In this test I we only operate on relatively small sizes. >> A situation when capacity grows large is checked in a separate test, which >> is ignored by default. 
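[Editorial aside] Returning to the double-checked locking exchange earlier in this digest: the danger Thomas describes — the pointer store becoming visible before the zeroing stores — is exactly what a publication barrier prevents. The sketch below is hypothetical (invented names, Java rather than C) and shows the same pattern made safe by a volatile write, which in Java supplies the release ordering that portable pre-C11 C code has no way to express.

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// Hypothetical double-checked lazy table. Each AtomicReferenceArray slot acts
// as a volatile variable, so set() cannot be reordered before the
// initialization of the slab it publishes, and get() on another thread is
// guaranteed to see a fully initialized slab or null - never a partial one.
final class DclTable {
    static final int SLAB = 1024;

    private final AtomicReferenceArray<int[]> table;
    private final Object lock = new Object();

    DclTable(int nSlabs) {
        table = new AtomicReferenceArray<>(nSlabs);
    }

    int[] slab(int i) {
        int[] s = table.get(i);          // first check: lock-free volatile read
        if (s == null) {
            synchronized (lock) {        // slow path only when slot looks empty
                s = table.get(i);        // second check, now under the lock
                if (s == null) {
                    s = new int[SLAB];   // fully zeroed before publication...
                    table.set(i, s);     // ...then published with release semantics
                }
            }
        }
        return s;
    }

    public static void main(String[] args) {
        DclTable t = new DclTable(4);
        System.out.println(t.slab(0) == t.slab(0)); // prints true
    }
}
```

The plain-C pattern quoted above has no analogue of that volatile slot, which is why David flags it: on weakly ordered hardware the reader's first (unlocked) check may observe the pointer before the calloc'ed contents, and only an explicit barrier between the initialization and the pointer store rules that out.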
>> >> Do you think this fix is good to go? >> >> BUGURL: https://bugs.openjdk.java.net/browse/JDK-8149330 >> WEBREV: http://cr.openjdk.java.net/~igerasim/8149330/03/webrev/ >> >> Comments, suggestions are very welcome. >> >> Sincerely yours, >> Ivan >> >> >> >> On 23.02.2016 20:29, Martin Buchholz wrote: >>> On Tue, Feb 23, 2016 at 7:20 AM, Ivan Gerasimov >>> wrote: >>>> While writing this, I just noticed that I actually made a mistake when >>>> did >>>> newCapacity < 0 check. >>>> This wouldn't catch the overflow when the oldCapacity happens to be >>>> Integer.MAX_VALUE (which is not possible with the current hotspot, but >>>> may >>>> become an issue one day). >>> Well done! >>> >>> One interesting way that capacity may end up being Integer.MAX_VALUE >>> is if we switch to char[] for storage. Then in LATIN1 mode you could >>> store Integer.MAX_VALUE elements even without help from hotspot! >>> From brent.christian at oracle.com Tue Mar 1 21:16:47 2016 From: brent.christian at oracle.com (Brent Christian) Date: Tue, 1 Mar 2016 13:16:47 -0800 Subject: RFR 8148187 : Remove OS X-specific com.apple.concurrent package Message-ID: <56D606BF.4080603@oracle.com> Hi, A number of internal APIs were carried over into the JDK with the Apple port. Among them was com.apple.concurrent.Dispatch. Supportedness has always been murky here, but Jigsaw necessitates a firmer stance. Some of these APIs have already been removed from JDK 9 [1], some will be supplanted by new, supported APIs [2]. As already discussed in [3] and [4], com.apple.concurrent.Dispatch is no longer in use, as far as we've been able to find. com.apple.concurrent.Dispatch and its supporting code should be removed from JDK 9. It turns out this opens the door for a little module pruning as well. com.apple.concurrent makes up the bulk of the jdk.deploy.osx module. All that's left is native code for libosx, a library relied on by com.apple.eio.FileManager in the java.desktop module. 
By moving libosx over to java.desktop, we are able to do away with the jdk.deploy.osx module altogether. For your review is a webrev of this change: http://cr.openjdk.java.net/~bchristi/8148187/webrev.01/ JBS: https://bugs.openjdk.java.net/browse/JDK-8148187 Automated build+test runs look fine. If, in the future, there is desire for an ExecutorService backed by the native platform (as com.apple.concurrent.Dispatch does for libdispatch on OS X), such a feature could be proposed. Thanks, -Brent 1. "Remove apple script engine code in jdk repository" https://bugs.openjdk.java.net/browse/JDK-8143404 2. JEP 272 : "Platform-Specific Desktop Features" https://bugs.openjdk.java.net/browse/JDK-8048731 3. http://mail.openjdk.java.net/pipermail/macosx-port-dev/2015-May/006934.html 4. http://mail.openjdk.java.net/pipermail/macosx-port-dev/2015-September/006968.html From stuart.marks at oracle.com Tue Mar 1 22:40:05 2016 From: stuart.marks at oracle.com (Stuart Marks) Date: Tue, 1 Mar 2016 14:40:05 -0800 Subject: RFR: jsr166 jdk9 integration wave 5 In-Reply-To: References: Message-ID: <56D61A45.7040005@oracle.com> Hi Martin, I'm a bit confused about exactly what pieces need review here. Since you mentioned me with respect to 8150523, I took a look at the webrev that adds the timeout factors: http://cr.openjdk.java.net/~martin/webrevs/openjdk9/jsr166-jdk9-integration/timeoutFactor/ Do other webrevs still need review as well? I haven't looked at them. But there are others' names on them already.... Otherwise, overall, it looks fine, just a few minor questions/comments: ------------------------------------------------------------ In the following files, various delays of 7 sec, 20 sec, 30 sec, 60 sec, 100 sec, 120 sec, and 1000 sec were changed to 10 sec (exclusive of timeout factor adjustment). Was that intentional? 
I guess making the "backstop" timeouts (e.g., waiting for an executor service to terminate) be uniform is reasonable, but there were some very long timeouts that are now much shorter. At least, something to keep an eye on. test/java/util/concurrent/BlockingQueue/Interrupt.java: test/java/util/concurrent/BlockingQueue/ProducerConsumerLoops.java test/java/util/concurrent/BlockingQueue/SingleProducerMultipleConsumerLoops.java test/java/util/concurrent/CompletableFuture/Basic.java test/java/util/concurrent/ConcurrentHashMap/MapLoops.java test/java/util/concurrent/ConcurrentQueues/ConcurrentQueueLoops.java test/java/util/concurrent/Exchanger/ExchangeLoops.java test/java/util/concurrent/Executors/AutoShutdown.java test/java/util/concurrent/ScheduledThreadPoolExecutor/ZeroCorePoolSize.java test/java/util/concurrent/ThreadPoolExecutor/Custom.java test/java/util/concurrent/ThreadPoolExecutor/SelfInterrupt.java test/java/util/concurrent/ThreadPoolExecutor/ThreadRestarts.java test/java/util/concurrent/locks/Lock/FlakyMutex.java test/java/util/concurrent/locks/LockSupport/ParkLoops.java test/java/util/concurrent/locks/ReentrantLock/LockOncePerThreadLoops.java test/java/util/concurrent/locks/ReentrantLock/SimpleReentrantLockLoops.java test/java/util/concurrent/locks/ReentrantLock/TimeoutLockLoops.java test/java/util/concurrent/locks/StampedLock/Basic.java ------------------------------------------------------------ test/java/util/concurrent/FutureTask/CancelledFutureLoops.java test/java/util/concurrent/ThreadPoolExecutor/TimeOutShrink.java It's slightly odd to see an additional multiplier at the use site of LONG_DELAY_MS, when this doesn't occur in most of the other tests that (formerly) had different timeouts. Put another way, why do these tests have different timeouts, whereas the tests above that had widely differing timeouts were all changed to 10 sec? 
------------------------------------------------------------ In various scheduled thread pool executor tests, as well as in test/java/util/concurrent/ThreadPoolExecutor/ThreadRestarts.java should delays for scheduled tasks also be scaled? If the tests are running in a slow environment, and some timeouts are scaled but not others, it might result in some tasks executing too soon. I guess this depends on the semantics of what's being tested. ------------------------------------------------------------ test/java/util/concurrent/ConcurrentQueues/GCRetention.java - extra commented-out call to forceFullGc() ? - probably would be wise to scale the timeout in finalizeDone.await() ------------------------------------------------------------ test/java/util/concurrent/Exchanger/ExchangeLoops.java test/java/util/concurrent/ExecutorCompletionService/ExecutorCompletionServiceLoops.java The number of iterations was reduced from 100,000 to 2,000, particularly the initial "warm up" run (at least in ExchangeLoops). IIRC the C2 compiler kicks in at 10,000 iterations. The reduced number of iterations (particularly in the initial "warm up" runs) doesn't meet this threshold. 
>> >> http://cr.openjdk.java.net/~martin/webrevs/openjdk9/jsr166-jdk9-integration/ From martinrb at google.com Wed Mar 2 00:06:44 2016 From: martinrb at google.com (Martin Buchholz) Date: Tue, 1 Mar 2016 16:06:44 -0800 Subject: RFR: jsr166 jdk9 integration wave 5 In-Reply-To: <56D61A45.7040005@oracle.com> References: <56D61A45.7040005@oracle.com> Message-ID: Thanks, Stuart! On Tue, Mar 1, 2016 at 2:40 PM, Stuart Marks wrote: > Hi Martin, > > I'm a bit confused about exactly what pieces need review here. Since you > mentioned me with respect to 8150523, I took a look at the webrev that adds > the timeout factors: > > http://cr.openjdk.java.net/~martin/webrevs/openjdk9/jsr166-jdk9-integration/timeoutFactor/ > > Do other webrevs still need review as well? I haven't looked at them. But > there are others' names on them already.... Getting reviews done is a big problem with most open source projects, and openjdk is no exception. And it's even more difficult when code is imported from an upstream project... maybe my usual reviewers have gotten jsr166-review-fatigue... > Otherwise, overall, it looks fine, just a few minor questions/comments: > > ------------------------------------------------------------ > > In the following files, various delays of 7 sec, 20 sec, 30 sec, 60 sec, 100 > sec, 120 sec, and 1000 sec were changed to 10 sec (exclusive of timeout > factor adjustment). Was that intentional? I guess making the "backstop" > timeouts (e.g., waiting for an executor service to terminate) be uniform is > reasonable, but there were some very long timeouts that are now much > shorter. At least, something to keep an eye on. A useful rule of thumb is that 10 sec seems to be enough for any "single operation", except when using a slow VM, in which case we expect someone to provide a timeout factor. But some of these tests do a whole bunch of trials... Anyways, we adjust timeouts based on observed test runtimes, and keep adjusting if tests are observed to fail in the wild. 
I've also done stress testing with these tests using a fastdebug VM at Google. > test/java/util/concurrent/BlockingQueue/Interrupt.java: > test/java/util/concurrent/BlockingQueue/ProducerConsumerLoops.java > test/java/util/concurrent/BlockingQueue/SingleProducerMultipleConsumerLoops.java > test/java/util/concurrent/CompletableFuture/Basic.java > test/java/util/concurrent/ConcurrentHashMap/MapLoops.java > test/java/util/concurrent/ConcurrentQueues/ConcurrentQueueLoops.java > test/java/util/concurrent/Exchanger/ExchangeLoops.java > test/java/util/concurrent/Executors/AutoShutdown.java > test/java/util/concurrent/ScheduledThreadPoolExecutor/ZeroCorePoolSize.java > test/java/util/concurrent/ThreadPoolExecutor/Custom.java > test/java/util/concurrent/ThreadPoolExecutor/SelfInterrupt.java > test/java/util/concurrent/ThreadPoolExecutor/ThreadRestarts.java > test/java/util/concurrent/locks/Lock/FlakyMutex.java > test/java/util/concurrent/locks/LockSupport/ParkLoops.java > test/java/util/concurrent/locks/ReentrantLock/LockOncePerThreadLoops.java > test/java/util/concurrent/locks/ReentrantLock/SimpleReentrantLockLoops.java > test/java/util/concurrent/locks/ReentrantLock/TimeoutLockLoops.java > test/java/util/concurrent/locks/StampedLock/Basic.java > > ------------------------------------------------------------ > > test/java/util/concurrent/FutureTask/CancelledFutureLoops.java > test/java/util/concurrent/ThreadPoolExecutor/TimeOutShrink.java > > It's slightly odd to see an additional multiplier at the use site of > LONG_DELAY_MS, when this doesn't occur in most of the other tests that > (formerly) had different timeouts. Put another way, why do these tests have > different timeouts, whereas the tests above that had widely differing > timeouts were all changed to 10 sec? Looking more closely, we can improve the readability of TimeOutShrink, remove the excessive final wait, and claw back one second of test time. 
--- util/concurrent/ThreadPoolExecutor/TimeOutShrink.java 27 Feb 2016 21:15:57 -0000 1.5 +++ util/concurrent/ThreadPoolExecutor/TimeOutShrink.java 2 Mar 2016 00:01:55 -0000 @@ -39,6 +39,7 @@ public class TimeOutShrink { static final long LONG_DELAY_MS = Utils.adjustTimeout(10_000); + static final long KEEPALIVE_MS = 12L; static void checkPoolSizes(ThreadPoolExecutor pool, int size, int core, int max) { @@ -51,7 +52,8 @@ final int n = 4; final CyclicBarrier barrier = new CyclicBarrier(2*n+1); final ThreadPoolExecutor pool - = new ThreadPoolExecutor(n, 2*n, 1L, TimeUnit.SECONDS, + = new ThreadPoolExecutor(n, 2*n, + KEEPALIVE_MS, MILLISECONDS, new SynchronousQueue()); final Runnable r = new Runnable() { public void run() { try { @@ -64,12 +66,17 @@ barrier.await(); checkPoolSizes(pool, 2*n, n, 2*n); barrier.await(); - while (pool.getPoolSize() > n) - Thread.sleep(100); - Thread.sleep(100); + long nap = KEEPALIVE_MS + (KEEPALIVE_MS >> 2); + for (long sleepyTime = 0L; pool.getPoolSize() > n; ) { + check((sleepyTime += nap) <= LONG_DELAY_MS); + Thread.sleep(nap); + } + checkPoolSizes(pool, n, n, 2*n); + Thread.sleep(nap); checkPoolSizes(pool, n, n, 2*n); pool.shutdown(); - check(pool.awaitTermination(6 * LONG_DELAY_MS, MILLISECONDS)); + check(pool.awaitTermination(LONG_DELAY_MS, MILLISECONDS)); } //--------------------- Infrastructure --------------------------- > ------------------------------------------------------------ > > In various scheduled thread pool executor tests, as well as in > > test/java/util/concurrent/ThreadPoolExecutor/ThreadRestarts.java > > should delays for scheduled tasks also be scaled as well? If the tests are > running in a slow environment, and some timeouts are scaled but not others, > it might result in some tasks executing too soon. I guess this depends on > the semantics of what's being tested. Yeah... 
this change clarifies things a bit and creates daemon threads just in case: --- util/concurrent/ThreadPoolExecutor/ThreadRestarts.java 27 Feb 2016 21:15:57 -0000 1.4 +++ util/concurrent/ThreadPoolExecutor/ThreadRestarts.java 1 Mar 2016 23:27:33 -0000 @@ -22,6 +22,7 @@ public class ThreadRestarts { static final long LONG_DELAY_MS = Utils.adjustTimeout(10_000); + static final long FAR_FUTURE_MS = 10 * LONG_DELAY_MS; public static void main(String[] args) throws Exception { test(false); @@ -33,8 +34,9 @@ ScheduledThreadPoolExecutor stpe = new ScheduledThreadPoolExecutor(10, ctf); try { + // schedule a dummy task in the "far future" Runnable nop = new Runnable() { public void run() {}}; - stpe.schedule(nop, 10*1000L, MILLISECONDS); + stpe.schedule(nop, FAR_FUTURE_MS, MILLISECONDS); stpe.setKeepAliveTime(1L, MILLISECONDS); stpe.allowCoreThreadTimeOut(allowTimeout); MILLISECONDS.sleep(12L); @@ -53,8 +55,9 @@ final AtomicLong count = new AtomicLong(0L); public Thread newThread(Runnable r) { - Thread t = new Thread(r); count.getAndIncrement(); + Thread t = new Thread(r); + t.setDaemon(true); return t; } } > ------------------------------------------------------------ > > test/java/util/concurrent/ConcurrentQueues/GCRetention.java > > - extra commented-out call to forceFullGc() ? intentional exercise for the reader > - probably would be wise to scale the timeout in finalizeDone.await() It's in a retry loop, so it's probably fine. Maybe scale the number of iterations? > ------------------------------------------------------------ > > test/java/util/concurrent/Exchanger/ExchangeLoops.java > test/java/util/concurrent/ExecutorCompletionService/ExecutorCompletionServiceLoops.java > > The number of iterations was reduced from 100,000 to 2,000, particularly the > initial "warm up" run (at least in ExchangeLoops). IIRC the C2 compiler > kicks in at 10,000 iterations. The reduced the number of iterations > (particularly in the initial "warm up" runs) doesn't meet this threshold. 
> Could that be a problem? It's a general problem of any of these "loops" tests. They can run forever if we like, they can be used for benchmarks (derived from Doug's benchmark code), but it's also important to optimize test run time. And we don't have any standard support in jtreg for scaling tests for "stress mode". Although we do sometimes find hotspot bugs, that's not the primary purpose of these tests, and the hotspot team is good at using interesting flag combinations that should invoke C2 even with a small number of iterations. From hboehm at google.com Wed Mar 2 01:27:05 2016 From: hboehm at google.com (Hans Boehm) Date: Tue, 1 Mar 2016 17:27:05 -0800 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: <56D60202.6030803@oracle.com> References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> <56D59B83.3010503@oracle.com> <56D60202.6030803@oracle.com> Message-ID: The preferred C11 solution is to use atomics. Using just memory fences here is tricky, and not fully correct, since data races have undefined semantics in C (and Posix). On Tue, Mar 1, 2016 at 12:56 PM, David Holmes wrote: > On 1/03/2016 11:39 PM, Dmitry Samersoff wrote: > >> Thomas, >> >> We probably can do: >> >> if (fdTable[rootArrayIndex] != NULL) { >> entryTable = fdTable[rootArrayIndex]; >> } >> else { // existing code >> pthread_mutex_lock(&fdTableLock); >> if (fdTable[rootArrayIndex] == NULL) { >> .... >> } >> } >> > > This is double-checked locking and it requires memory barriers to be > correct - as Thomas already discussed. > > David > > > -Dmitry >> >> >> On 2016-03-01 16:13, Thomas St?fe wrote: >> >>> Dmitry, Christoph, >>> >>> I am not 100% sure this would work for weak ordering platforms. 
>>> >>> If I understand you correctly you suggest the double checking pattern: >>> >>> if (basetable[index] == NULL) { >>> lock >>> if (basetable[index] == NULL) { >>> basetable[index] = calloc(size); >>> } >>> unlock >>> } >>> >>> The problem I cannot wrap my head around is how to make this safe for >>> all platforms. Note: I am not an expert for this. >>> >>> How do you prevent the "reading thread reads partially initialized >>> object" problem? >>> >>> Consider this: We need to allocate memory, set it completely to zero and >>> then store a pointer to it in basetable[index]. This means we have >>> multiple stores - one store for the pointer, n stores for zero-ing out >>> the memory, and god knows how many stores the C-Runtime allcoator needs >>> to update its internal structures. >>> >>> On weak ordering platforms like ppc (and arm?), the store for >>> basetable[index] may be visible before the other stores, so the reading >>> threads, running on different CPUs, may read a pointer to partially >>> initialized memory. What you need is a memory barrier between the >>> calloc() and store of basetable[index], to prevent the latter store from >>> floating above the other stores. >>> >>> I did not find anything about multithread safety in the calloc() docs, >>> or guaranteed barrier behaviour, nor did I expect anything. In the >>> hotspot we have our memory barrier APIs, but in the JDK I am confined to >>> basic C and there is no way that I know of to do memory barriers with >>> plain Posix APIs. >>> >>> Bottomline, I am not sure. Maybe I am too cautious here, but I do not >>> see a way to make this safe without locking the reader thread too. >>> >>> Also, we are about to do an IO operation - is a mutex really that bad >>> here? Especially with the optimization Roger suggested of pre-allocating >>> the basetable[0] array and omitting lock protection there? 
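The double-checked pattern quoted above can indeed be made safe with release/acquire ordering, along the lines Hans suggests. A rough C11 sketch — names and sizes here are illustrative, not the webrev's actual code, and it assumes a C11 toolchain, which (as Thomas notes) the JDK's "basic C" native code may not be able to rely on:

```c
#include <stdatomic.h>
#include <stdlib.h>
#include <pthread.h>

#define ROOT_SIZE  256    /* illustrative geometry, not the webrev's */
#define BLOCK_SIZE 1024

typedef struct { int fd; } fdEntry_t;

static _Atomic(fdEntry_t *) basetable[ROOT_SIZE];
static pthread_mutex_t tableLock = PTHREAD_MUTEX_INITIALIZER;

static fdEntry_t *get_block(int index) {
    /* Fast path: acquire load. If we observe a non-NULL pointer, the
       acquire/release pairing guarantees we also observe the zero-fill
       that calloc() performed before the pointer was published. */
    fdEntry_t *block = atomic_load_explicit(&basetable[index],
                                            memory_order_acquire);
    if (block == NULL) {
        pthread_mutex_lock(&tableLock);
        /* Re-check under the lock; relaxed suffices here because the
           mutex itself orders this load against the winning store. */
        block = atomic_load_explicit(&basetable[index],
                                     memory_order_relaxed);
        if (block == NULL) {
            block = calloc(BLOCK_SIZE, sizeof(fdEntry_t));
            if (block != NULL) {
                /* Release store: all prior stores (the zero-fill and the
                   allocator's bookkeeping) become visible no later than
                   the pointer, even on weakly ordered CPUs. */
                atomic_store_explicit(&basetable[index], block,
                                      memory_order_release);
            }
        }
        pthread_mutex_unlock(&tableLock);
    }
    return block;
}
```

Since blocks are never freed, readers that hit the fast path never touch the mutex; the lock only serializes the first allocation of each block.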
>>> >>> Kind Regards, >>> >>> Thomas >>> >>> >>> >>> >>> On Tue, Mar 1, 2016 at 11:47 AM, Langer, Christoph >>> > wrote: >>> >>> Hi Dmitry, Thomas, >>> >>> Dmitry, I think you are referring to an outdated version of the >>> webrev, the current one is this: >>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ >>> >>> However, I agree - the lock should probably not be taken every time >>> but only in the case where we find the entry table was not yet >>> allocated. >>> >>> So, maybe getFdEntry should always do this: >>> entryTable = fdTable[rootArrayIndex]; // no matter if rootArrayIndex >>> is 0 >>> >>> Then check if entryTable is NULL and if yes then enter a guarded >>> section which does the allocation and before that checks if another >>> thread did it already. >>> >>> Also I'm wondering if the entryArrayMask and the rootArrayMask >>> should be calculated once in the init() function and stored in a >>> static field? Because right now it is calculated every time >>> getFdEntry() is called and I don't think this would be optimized by >>> inlining... >>> >>> Best regards >>> Christoph >>> >>> -----Original Message----- >>> From: core-libs-dev [mailto:core-libs-dev-bounces at openjdk.java.net >>> ] On Behalf Of >>> Dmitry >>> Samersoff >>> Sent: Dienstag, 1. M?rz 2016 11:20 >>> To: Thomas St?fe >> >; Java Core Libs >>> >> core-libs-dev at openjdk.java.net>> >>> Subject: Re: RFR(s): 8150460: (linux|bsd|aix)_close.c: file >>> descriptor table may become large or may not work at all >>> >>> Thomas, >>> >>> Sorry for being later. >>> >>> I'm not sure we should take a lock at ll. 131 for each fdTable >>> lookup. >>> >>> As soon as we never deallocate fdTable[base_index] it's safe to try >>> to >>> return value first and then take a slow path (take a lock and check >>> fdTable[base_index] again) >>> >>> -Dmitry >>> >>> >>> On 2016-02-24 20:30, Thomas St?fe wrote: >>> > Hi all, >>> > >>> > please take a look at this proposed fix. 
>>> > >>> > The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 >>> > The Webrev: >>> > >>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ >>> > >>> > Basically, the file descriptor table implemented in linux_close.c >>> may not >>> > work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a >>> 50MB >>> > table) for high values for RLIMIT_NO_FILE. Please see details in >>> the bug >>> > description. >>> > >>> > The proposed solution is to implement the file descriptor table >>> not as >>> > plain array, but as a twodimensional sparse array, which grows on >>> demand. >>> > This keeps the memory footprint small and fixes the corner cases >>> described >>> > in the bug description. >>> > >>> > Please note that the implemented solution is kept simple, at the >>> cost of >>> > somewhat higher (some kb) memory footprint for low values of >>> RLIMIT_NO_FILE. >>> > This can be optimized, if we even think it is worth the trouble. >>> > >>> > Please also note that the proposed implementation now uses a mutex >>> lock for >>> > every call to getFdEntry() - I do not think this matters, as this >>> is all in >>> > preparation for an IO system call, which are usually way more >>> expensive >>> > than a pthread mutex. But again, this could be optimized. >>> > >>> > This is an implementation proposal for Linux; the same code found >>> its way >>> > to BSD and AIX. Should you approve of this fix, I will modify >>> those files >>> > too. >>> > >>> > Thank you and Kind Regards, Thomas >>> > >>> >>> >>> -- >>> Dmitry Samersoff >>> Oracle Java development team, Saint Petersburg, Russia >>> * I would love to change the world, but they won't give me the >>> sources. 
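The sparse two-level layout described in the proposal above boils down to splitting an fd into a root index and a block offset; a sketch with made-up sizes:

```c
#include <stddef.h>

/* Illustrative size only; the webrev chooses its own geometry. */
#define BLOCK_SIZE 1024   /* fd entries per lazily allocated block */

typedef struct { int in_use; } fdEntry_t;

/* Splitting the fd keeps the up-front cost at one small root array of
   pointers; a block is only materialized once some fd in its range is
   actually used, so RLIMIT_NO_FILE=infinite no longer forces one huge
   flat table. */
static void fd_to_index(int fd, size_t *root_index, size_t *block_offset) {
    *root_index   = (size_t)fd / BLOCK_SIZE;
    *block_offset = (size_t)fd % BLOCK_SIZE;
}
```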
>>> >>> >>> >> >> From mandy.chung at oracle.com Wed Mar 2 04:36:24 2016 From: mandy.chung at oracle.com (Mandy Chung) Date: Tue, 1 Mar 2016 20:36:24 -0800 Subject: RFR 8148187 : Remove OS X-specific com.apple.concurrent package In-Reply-To: <56D606BF.4080603@oracle.com> References: <56D606BF.4080603@oracle.com> Message-ID: <66D9A294-73D2-49FB-9AFD-31D57077585B@oracle.com> > On Mar 1, 2016, at 1:16 PM, Brent Christian wrote: > > Hi, > > A number of internal APIs were carried over into the JDK with the Apple port. Among them was com.apple.concurrent.Dispatch. > > Supportedness has always been murky here, but Jigsaw necessitates a firmer stance. Some of these APIs have already been removed from JDK 9 [1], some will be supplanted by new, supported APIs [2]. > > As already discussed in [3] and [4], com.apple.concurrent.Dispatch is no longer in use, as far as we've been able to find. com.apple.concurrent.Dispatch and its supporting code should be removed from JDK 9. > > It turns out this opens the door for a little module pruning as well. com.applet.concurrent makes up the bulk of the jdk.deploy.osx module. All that's left is native code for libosx, a library relied on by com.apple.eio.FileManager in the java.desktop module. By moving libosx over to java.desktop, we are able to do away with the jdk.deploy.osx module altogether. > > For your review is a webrev of this change: > http://cr.openjdk.java.net/~bchristi/8148187/webrev.01/ > It?s good to see jdk.deploy.osx finally going away. The patch looks fine. common/bin/unshuffle_list.txt should be adjusted as well (while this file looks like not being kept up-to-date though) No need to submit a new webrev. 
Mandy From ramanand.patil at oracle.com Wed Mar 2 05:34:09 2016 From: ramanand.patil at oracle.com (Ramanand Patil) Date: Tue, 1 Mar 2016 21:34:09 -0800 (PST) Subject: RFR: JDK-8087104: DateFormatSymbols triggers this.clone() in the constructor In-Reply-To: <56CD66D9.9070605@oracle.com> References: <56CD66D9.9070605@oracle.com> Message-ID: <1e1da08c-4e32-4109-84da-94835ef4f028@default> Hi all, May I request one more review for this bug? [Thank you Masayoshi for your review.] Regards, Ramanand. -----Original Message----- From: Masayoshi Okutsu Sent: Wednesday, February 24, 2016 1:46 PM To: Ramanand Patil; i18n-dev at openjdk.java.net Cc: core-libs-dev at openjdk.java.net Subject: Re: RFR: JDK-8087104: DateFormatSymbols triggers this.clone() in the constructor Looks good to me. Masayoshi On 2/24/2016 4:40 PM, Ramanand Patil wrote: > Hi all, > Please review the fix for bug: https://bugs.openjdk.java.net/browse/JDK-8087104 > Bug Description: DateFormatSymbols caches its own instance and calls this.clone() in the constructor. Because of this, any subclass implementation (which expects a field is always initialized to non-null in the constructor) will throw NPE in its overridden clone() method while using any instance variables which it assumed are initilaized in its contructor. > Webrev: http://cr.openjdk.java.net/~rpatil/8087104/webrev.00/ > Fix: Instead of using its own instance for caching and calling clone in DateFormatSymbols, a nested class SymbolsCacheEntry is introduced. > > > Regards, > > Ramanand. From erik.joelsson at oracle.com Wed Mar 2 07:50:46 2016 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Wed, 2 Mar 2016 08:50:46 +0100 Subject: RFR 8148187 : Remove OS X-specific com.apple.concurrent package In-Reply-To: <56D606BF.4080603@oracle.com> References: <56D606BF.4080603@oracle.com> Message-ID: <56D69B56.2000604@oracle.com> Hello Brent, The build changes look pretty good. 
Just one nit, in the new LibosxLibraries.gmk, please remove the "include LibCommon.gmk" as that is now handled by Lib-java.desktop.gmk. /Erik On 2016-03-01 22:16, Brent Christian wrote: > Hi, > > A number of internal APIs were carried over into the JDK with the > Apple port. Among them was com.apple.concurrent.Dispatch. > > Supportedness has always been murky here, but Jigsaw necessitates a > firmer stance. Some of these APIs have already been removed from JDK > 9 [1], some will be supplanted by new, supported APIs [2]. > > As already discussed in [3] and [4], com.apple.concurrent.Dispatch is > no longer in use, as far as we've been able to find. > com.apple.concurrent.Dispatch and its supporting code should be > removed from JDK 9. > > It turns out this opens the door for a little module pruning as well. > com.applet.concurrent makes up the bulk of the jdk.deploy.osx module. > All that's left is native code for libosx, a library relied on by > com.apple.eio.FileManager in the java.desktop module. By moving > libosx over to java.desktop, we are able to do away with the > jdk.deploy.osx module altogether. > > For your review is a webrev of this change: > http://cr.openjdk.java.net/~bchristi/8148187/webrev.01/ > > JBS: https://bugs.openjdk.java.net/browse/JDK-8148187 > > > Automated build+test runs look fine. > > If, in the future, there is desire for an ExecutorService backed by > the native platform (as com.apple.concurrent.Dispatch does for > libdispatch on OS X), such a feature could be proposed. > > Thanks, > -Brent > > 1. "Remove apple script engine code in jdk repository" > https://bugs.openjdk.java.net/browse/JDK-8143404 > > 2. JEP 272 : "Platform-Specific Desktop Features" > https://bugs.openjdk.java.net/browse/JDK-8048731 > > 3. > http://mail.openjdk.java.net/pipermail/macosx-port-dev/2015-May/006934.html > > 4. 
> http://mail.openjdk.java.net/pipermail/macosx-port-dev/2015-September/006968.html > From yuka.kamiya at oracle.com Wed Mar 2 07:52:13 2016 From: yuka.kamiya at oracle.com (Yuka Kamiya) Date: Wed, 2 Mar 2016 16:52:13 +0900 Subject: RFR: JDK-8087104: DateFormatSymbols triggers this.clone() in the constructor In-Reply-To: <1e1da08c-4e32-4109-84da-94835ef4f028@default> References: <56CD66D9.9070605@oracle.com> <1e1da08c-4e32-4109-84da-94835ef4f028@default> Message-ID: <56D69BAD.50605@oracle.com> Hi Ramanand, Your fix looks good to me. Thanks, -- Yuka On 2016/03/02 14:34, Ramanand Patil wrote: > Hi all, > > May I request one more review for this bug? > > [Thank you Masayoshi for your review.] > > > Regards, > Ramanand. > > -----Original Message----- > From: Masayoshi Okutsu > Sent: Wednesday, February 24, 2016 1:46 PM > To: Ramanand Patil; i18n-dev at openjdk.java.net > Cc: core-libs-dev at openjdk.java.net > Subject: Re: RFR: JDK-8087104: DateFormatSymbols triggers this.clone() in the constructor > > Looks good to me. > > Masayoshi > > On 2/24/2016 4:40 PM, Ramanand Patil wrote: >> Hi all, >> Please review the fix for bug: https://bugs.openjdk.java.net/browse/JDK-8087104 >> Bug Description: DateFormatSymbols caches its own instance and calls this.clone() in the constructor. Because of this, any subclass implementation (which expects a field is always initialized to non-null in the constructor) will throw NPE in its overridden clone() method while using any instance variables which it assumed are initilaized in its contructor. >> Webrev: http://cr.openjdk.java.net/~rpatil/8087104/webrev.00/ >> Fix: Instead of using its own instance for caching and calling clone in DateFormatSymbols, a nested class SymbolsCacheEntry is introduced. >> >> >> Regards, >> >> Ramanand. 
From thomas.stuefe at gmail.com Wed Mar 2 08:09:53 2016 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 2 Mar 2016 09:09:53 +0100 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> <56D59B83.3010503@oracle.com> <56D60202.6030803@oracle.com> Message-ID: Hi Hans, thanks for the hint! But how would I do this for my problem: Allocate memory, zero it out and then store the pointer into a variable seen by other threads, while preventing the other threads from seeing partially initialized memory. I do not understand how atomics would help: I can make the pointer itself an atomic, but that only guarantees memory ordering in regard to this variable, not to the allocated memory. Kind Regards, Thomas On Wed, Mar 2, 2016 at 2:27 AM, Hans Boehm wrote: > The preferred C11 solution is to use atomics. Using just memory fences > here is tricky, and not fully correct, since data races have undefined > semantics in C (and Posix). > > > On Tue, Mar 1, 2016 at 12:56 PM, David Holmes > wrote: > >> On 1/03/2016 11:39 PM, Dmitry Samersoff wrote: >> >>> Thomas, >>> >>> We probably can do: >>> >>> if (fdTable[rootArrayIndex] != NULL) { >>> entryTable = fdTable[rootArrayIndex]; >>> } >>> else { // existing code >>> pthread_mutex_lock(&fdTableLock); >>> if (fdTable[rootArrayIndex] == NULL) { >>> .... >>> } >>> } >>> >> >> This is double-checked locking and it requires memory barriers to be >> correct - as Thomas already discussed. >> >> David >> >> >> -Dmitry >>> >>> >>> On 2016-03-01 16:13, Thomas Stüfe wrote: >>> >>>> Dmitry, Christoph, >>>> >>>> I am not 100% sure this would work for weak ordering platforms. 
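On the question above of whether an atomic pointer orders anything beyond the variable itself: a release store in fact publishes every write sequenced before it to any thread whose acquire load observes the stored value. A minimal publication sketch (hypothetical names, C11 only):

```c
#include <stdatomic.h>
#include <stdlib.h>

static _Atomic(int *) published;

/* Writer: the release store publishes not just the pointer but every
   write sequenced before it -- including calloc's zero-fill. */
static void publish(size_t n) {
    int *p = calloc(n, sizeof(int));
    atomic_store_explicit(&published, p, memory_order_release);
}

/* Reader: a matching acquire load that returns a non-NULL pointer is
   guaranteed to see the initialized contents behind it. */
static int *try_consume(void) {
    return atomic_load_explicit(&published, memory_order_acquire);
}
```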
>>>> >>>> If I understand you correctly you suggest the double checking pattern: >>>> >>>> if (basetable[index] == NULL) { >>>> lock >>>> if (basetable[index] == NULL) { >>>> basetable[index] = calloc(size); >>>> } >>>> unlock >>>> } >>>> >>>> The problem I cannot wrap my head around is how to make this safe for >>>> all platforms. Note: I am not an expert for this. >>>> >>>> How do you prevent the "reading thread reads partially initialized >>>> object" problem? >>>> >>>> Consider this: We need to allocate memory, set it completely to zero and >>>> then store a pointer to it in basetable[index]. This means we have >>>> multiple stores - one store for the pointer, n stores for zero-ing out >>>> the memory, and god knows how many stores the C-Runtime allcoator needs >>>> to update its internal structures. >>>> >>>> On weak ordering platforms like ppc (and arm?), the store for >>>> basetable[index] may be visible before the other stores, so the reading >>>> threads, running on different CPUs, may read a pointer to partially >>>> initialized memory. What you need is a memory barrier between the >>>> calloc() and store of basetable[index], to prevent the latter store from >>>> floating above the other stores. >>>> >>>> I did not find anything about multithread safety in the calloc() docs, >>>> or guaranteed barrier behaviour, nor did I expect anything. In the >>>> hotspot we have our memory barrier APIs, but in the JDK I am confined to >>>> basic C and there is no way that I know of to do memory barriers with >>>> plain Posix APIs. >>>> >>>> Bottomline, I am not sure. Maybe I am too cautious here, but I do not >>>> see a way to make this safe without locking the reader thread too. >>>> >>>> Also, we are about to do an IO operation - is a mutex really that bad >>>> here? Especially with the optimization Roger suggested of pre-allocating >>>> the basetable[0] array and omitting lock protection there? 
>>>> >>>> Kind Regards, >>>> >>>> Thomas >>>> >>>> >>>> >>>> >>>> On Tue, Mar 1, 2016 at 11:47 AM, Langer, Christoph >>>> > wrote: >>>> >>>> Hi Dmitry, Thomas, >>>> >>>> Dmitry, I think you are referring to an outdated version of the >>>> webrev, the current one is this: >>>> >>>> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.01/webrev/ >>>> >>>> However, I agree - the lock should probably not be taken every time >>>> but only in the case where we find the entry table was not yet >>>> allocated. >>>> >>>> So, maybe getFdEntry should always do this: >>>> entryTable = fdTable[rootArrayIndex]; // no matter if >>>> rootArrayIndex >>>> is 0 >>>> >>>> Then check if entryTable is NULL and if yes then enter a guarded >>>> section which does the allocation and before that checks if another >>>> thread did it already. >>>> >>>> Also I'm wondering if the entryArrayMask and the rootArrayMask >>>> should be calculated once in the init() function and stored in a >>>> static field? Because right now it is calculated every time >>>> getFdEntry() is called and I don't think this would be optimized by >>>> inlining... >>>> >>>> Best regards >>>> Christoph >>>> >>>> -----Original Message----- >>>> From: core-libs-dev [mailto:core-libs-dev-bounces at openjdk.java.net >>>> ] On Behalf Of >>>> Dmitry >>>> Samersoff >>>> Sent: Dienstag, 1. M?rz 2016 11:20 >>>> To: Thomas St?fe >>> >; Java Core Libs >>>> >>> core-libs-dev at openjdk.java.net>> >>>> Subject: Re: RFR(s): 8150460: (linux|bsd|aix)_close.c: file >>>> descriptor table may become large or may not work at all >>>> >>>> Thomas, >>>> >>>> Sorry for being later. >>>> >>>> I'm not sure we should take a lock at ll. 131 for each fdTable >>>> lookup. 
>>>>
>>>> Since we never deallocate fdTable[base_index], it's safe to try to
>>>> return the value first and then take a slow path (take a lock and check
>>>> fdTable[base_index] again).
>>>>
>>>> -Dmitry
>>>>
>>>> On 2016-02-24 20:30, Thomas Stüfe wrote:
>>>> > Hi all,
>>>> >
>>>> > please take a look at this proposed fix.
>>>> >
>>>> > The bug: https://bugs.openjdk.java.net/browse/JDK-8150460
>>>> > The Webrev:
>>>> > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/
>>>> >
>>>> > Basically, the file descriptor table implemented in linux_close.c may not
>>>> > work for RLIMIT_NO_FILE=infinite or may grow very large (I saw a 50MB
>>>> > table) for high values of RLIMIT_NO_FILE. Please see details in the bug
>>>> > description.
>>>> >
>>>> > The proposed solution is to implement the file descriptor table not as a
>>>> > plain array, but as a two-dimensional sparse array, which grows on demand.
>>>> > This keeps the memory footprint small and fixes the corner cases described
>>>> > in the bug description.
>>>> >
>>>> > Please note that the implemented solution is kept simple, at the cost of a
>>>> > somewhat higher (some kb) memory footprint for low values of RLIMIT_NO_FILE.
>>>> > This can be optimized, if we even think it is worth the trouble.
>>>> >
>>>> > Please also note that the proposed implementation now uses a mutex lock for
>>>> > every call to getFdEntry() - I do not think this matters, as this is all in
>>>> > preparation for an IO system call, which is usually way more expensive
>>>> > than a pthread mutex. But again, this could be optimized.
>>>> >
>>>> > This is an implementation proposal for Linux; the same code found its way
>>>> > to BSD and AIX. Should you approve of this fix, I will modify those files
>>>> > too.
>>>> > >>>> > Thank you and Kind Regards, Thomas >>>> > >>>> >>>> >>>> -- >>>> Dmitry Samersoff >>>> Oracle Java development team, Saint Petersburg, Russia >>>> * I would love to change the world, but they won't give me the >>>> sources. >>>> >>>> >>>> >>> >>> > From aph at redhat.com Wed Mar 2 08:28:55 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 2 Mar 2016 08:28:55 +0000 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: <56D56CE7.6070700@oracle.com> References: <56D56CE7.6070700@oracle.com> Message-ID: <56D6A447.6050400@redhat.com> On 01/03/16 10:20, Dmitry Samersoff wrote: > The bug: https://bugs.openjdk.java.net/browse/JDK-8150460 >> The Webrev: >> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/ Why use calloc here? Surely it makes more sense to use mmap(MAP_NORESERVE), at least on linux. We're probably only going to be using a small number of FDs, and there's no real point reserving a big block of memory we won't use. Andrew. From Alan.Bateman at oracle.com Wed Mar 2 08:51:19 2016 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Wed, 2 Mar 2016 08:51:19 +0000 Subject: RFR 8148187 : Remove OS X-specific com.apple.concurrent package In-Reply-To: <56D606BF.4080603@oracle.com> References: <56D606BF.4080603@oracle.com> Message-ID: <56D6A987.6040506@oracle.com> On 01/03/2016 21:16, Brent Christian wrote: > > For your review is a webrev of this change: > http://cr.openjdk.java.net/~bchristi/8148187/webrev.01/ This looks good to me, in particular the move of FileManager.m into the right source tree as it was just wrong for that to be in jdk.deploy.osx when the com.apple.eio classes are in the java.desktop module. -Alan. 
From thomas.stuefe at gmail.com  Wed Mar  2 09:43:54 2016
From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=)
Date: Wed, 2 Mar 2016 10:43:54 +0100
Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all
In-Reply-To: <56D6A447.6050400@redhat.com>
References: <56D56CE7.6070700@oracle.com> <56D6A447.6050400@redhat.com>
Message-ID: 

Hi Andrew,

On Wed, Mar 2, 2016 at 9:28 AM, Andrew Haley wrote:

> On 01/03/16 10:20, Dmitry Samersoff wrote:
> > The bug: https://bugs.openjdk.java.net/browse/JDK-8150460
> >> The Webrev:
> >> http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/
>
> Why use calloc here? Surely it makes more sense to use
> mmap(MAP_NORESERVE), at least on linux. We're probably only
> going to be using a small number of FDs, and there's no real
> point reserving a big block of memory we won't use.
>
> Andrew.

I am aware of this. I do not allocate all memory in one go, I allocate on
demand in n-sized steps - that was the point of my implementation as a
sparse array.

Changing my implementation to mmap(MAP_NORESERVE) would not make the code
simpler: I would have to commit the memory before usage. So, I have to put
some committed-pages management atop the reserved range to keep track of
which pages are committed, which aren't.

File descriptors come in no predictable order (usually sequentially, but
there is no guarantee), so I cannot use a simple watermark model either,
where I commit pages to cover the highest file descriptor. I mean I could,
but that would be potentially wasteful if you have big holes in file
descriptor value ranges.

In the end I would end up with exactly the same implementation I have now,
only swapping mmap(MAP_NORESERVE) for calloc() and driving up the reserved
memory size for this process. And arguably, an even more complicated
implementation.
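The two-level, grow-on-demand table being discussed can be sketched roughly like this (simplified and with hypothetical names - not the actual webrev code; the simple variant locks on every lookup, as the original proposal did):

```c
#include <stdlib.h>
#include <pthread.h>

#define ROOT_SIZE  1024   /* slots in the root array                    */
#define BLOCK_SIZE 1024   /* fd entries per lazily allocated block      */

typedef struct { int refcount; } fdEntry_t;

/* Sparse: root slots start NULL; a block is only allocated once a fd in
 * its range is actually seen, so RLIMIT_NOFILE=infinite or huge limits no
 * longer force one giant up-front table. */
static fdEntry_t *root[ROOT_SIZE];
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

static fdEntry_t *getFdEntry(int fd) {
    if (fd < 0 || fd >= ROOT_SIZE * BLOCK_SIZE)
        return NULL;                      /* outside the supported range */
    int root_index  = fd / BLOCK_SIZE;
    int entry_index = fd % BLOCK_SIZE;

    pthread_mutex_lock(&table_lock);      /* simple variant: always lock */
    if (root[root_index] == NULL)
        root[root_index] = calloc(BLOCK_SIZE, sizeof(fdEntry_t));
    fdEntry_t *block = root[root_index];
    pthread_mutex_unlock(&table_lock);

    return block ? &block[entry_index] : NULL;
}
```

Worst-case committed memory is then proportional to the fd ranges actually touched, at the cost of a division and (in this simple variant) a mutex per lookup.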
..Thomas From nadeesh.tv at oracle.com Wed Mar 2 10:30:15 2016 From: nadeesh.tv at oracle.com (nadeesh tv) Date: Wed, 02 Mar 2016 16:00:15 +0530 Subject: RFR:JDK-8030864:Add an efficient getDateTimeMillis method to java.time Message-ID: <56D6C0B7.10205@oracle.com> Hi all, Please review an enhancement for a garbage free epochSecond method. Bug ID: https://bugs.openjdk.java.net/browse/JDK-8030864 webrev: http://cr.openjdk.java.net/~ntv/8030864/webrev.01 -- Thanks and Regards, Nadeesh TV From paul.sandoz at oracle.com Wed Mar 2 11:37:01 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Wed, 2 Mar 2016 12:37:01 +0100 Subject: RFR (XS): 8149596: Remove java.nio.Bits copy wrapper methods In-Reply-To: <56D5DFA3.7010300@oracle.com> References: <56D5DFA3.7010300@oracle.com> Message-ID: <9963229A-EA3C-492F-91B0-1D4995FCD62E@oracle.com> > On 1 Mar 2016, at 19:29, Mikael Vidstedt wrote: > > > As part of JDK-8141491[1] the native methods in java.nio.Bits were removed, and the functionality is instead provided by the VM through j.i.m.Unsafe. The Bits wrapper methods are therefore redundant and can be removed. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8149596 > Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8149596/webrev.00/webrev/ > +1 Paul. > I've run the java/nio jtreg tests and it all passes (modulo a couple of unrelated failures). > > Cheers, > Mikael > > > [1] https://bugs.openjdk.java.net/browse/JDK-8141491 From Alan.Bateman at oracle.com Wed Mar 2 12:02:15 2016 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Wed, 2 Mar 2016 12:02:15 +0000 Subject: RFR [9] 8150976: JarFile and MRJAR tests should use the JDK specific Version API In-Reply-To: References: Message-ID: <56D6D647.10202@oracle.com> On 01/03/2016 16:38, Chris Hegarty wrote: > Currently JarFile and MRJAR tests use sun.misc.Version to retrieve the major > runtime version. They should be updated to use the new JDK specific Version > API. 
> > Note: There is an issue, 8144062 [1], to revisit the JDK specific Version API to > determine if it should be moved, or even standardized. The changes being > proposed here may need to be updated, in a trivial way, in the future, but this > issue is intending to break the dependency on sun.misc.Version so that > 8150162 [2] can make progress. Additionally, the future refactoring will most > likely be trivial. > > http://cr.openjdk.java.net/~chegar/8150976/ > https://bugs.openjdk.java.net/browse/JDK-8150976 > Looks okay to me. -Alan. From scolebourne at joda.org Wed Mar 2 12:11:54 2016 From: scolebourne at joda.org (Stephen Colebourne) Date: Wed, 2 Mar 2016 12:11:54 +0000 Subject: RFR:JDK-8030864:Add an efficient getDateTimeMillis method to java.time In-Reply-To: <56D6C0B7.10205@oracle.com> References: <56D6C0B7.10205@oracle.com> Message-ID: Remove "Subclass can override the default implementation for a more efficient implementation." as it adds no value. In the default implementation of epochSecond(Era era, int yearofEra, int month, int dayOfMonth, int hour, int minute, int second, ZoneOffset zoneOffset) use prolepticYear(era, yearOfEra) and call the other new epochSecond method. See dateYearDay(Era era, int yearOfEra, int dayOfYear) for the design to copy. If this is done, then there is no need to override the method in IsoChronology. In the test, LocalDate.MIN.with(chronoLd) could be LocalDate.from(chronoLd) Thanks Stephen On 2 March 2016 at 10:30, nadeesh tv wrote: > Hi all, > > Please review an enhancement for a garbage free epochSecond method. 
> > Bug ID: https://bugs.openjdk.java.net/browse/JDK-8030864 > > webrev: http://cr.openjdk.java.net/~ntv/8030864/webrev.01 > > -- > Thanks and Regards, > Nadeesh TV > From michael.haupt at oracle.com Wed Mar 2 12:46:43 2016 From: michael.haupt at oracle.com (Michael Haupt) Date: Wed, 2 Mar 2016 13:46:43 +0100 Subject: RFR(M): 8150832: split T8139885 into several tests by functionality Message-ID: <7BE4292E-E917-463A-AAC0-5ECBB7E67CD8@oracle.com> Dear all, please review this change. RFE: https://bugs.openjdk.java.net/browse/JDK-8150832 Webrev: http://cr.openjdk.java.net/~mhaupt/8150832/webrev.00 This is a refactoring; the monolithic test for JEP 274 was split into several tests along functionality covered. Also, data providers and other declarative annotations were introduced where it made sense. Thanks, Michael -- Dr. Michael Haupt | Principal Member of Technical Staff Phone: +49 331 200 7277 | Fax: +49 331 200 7561 Oracle Java Platform Group | LangTools Team | Nashorn Oracle Deutschland B.V. & Co. KG | Schiffbauergasse 14 | 14467 Potsdam, Germany ORACLE Deutschland B.V. & Co. KG | Hauptverwaltung: Riesstra?e 25, D-80992 M?nchen Registergericht: Amtsgericht M?nchen, HRA 95603 Komplement?rin: ORACLE Deutschland Verwaltung B.V. | Hertogswetering 163/167, 3543 AS Utrecht, Niederlande Handelsregister der Handelskammer Midden-Nederland, Nr. 30143697 Gesch?ftsf?hrer: Alexander van der Ven, Jan Schultheiss, Val Maher Oracle is committed to developing practices and products that help protect the environment From claes.redestad at oracle.com Wed Mar 2 12:59:50 2016 From: claes.redestad at oracle.com (Claes Redestad) Date: Wed, 2 Mar 2016 13:59:50 +0100 Subject: RFR(M): 8150832: split T8139885 into several tests by functionality In-Reply-To: <7BE4292E-E917-463A-AAC0-5ECBB7E67CD8@oracle.com> References: <7BE4292E-E917-463A-AAC0-5ECBB7E67CD8@oracle.com> Message-ID: <56D6E3C6.602@oracle.com> Hi, this looks good to me. 
Maybe rename LoopTest to LoopCombinatorTest to add a bit of specificity? /Claes On 2016-03-02 13:46, Michael Haupt wrote: > Dear all, > > please review this change. > RFE: https://bugs.openjdk.java.net/browse/JDK-8150832 > Webrev: http://cr.openjdk.java.net/~mhaupt/8150832/webrev.00 > > This is a refactoring; the monolithic test for JEP 274 was split into several tests along functionality covered. Also, data providers and other declarative annotations were introduced where it made sense. > > Thanks, > > Michael > From michael.haupt at oracle.com Wed Mar 2 13:05:36 2016 From: michael.haupt at oracle.com (Michael Haupt) Date: Wed, 2 Mar 2016 14:05:36 +0100 Subject: RFR(M): 8150832: split T8139885 into several tests by functionality In-Reply-To: <56D6E3C6.602@oracle.com> References: <7BE4292E-E917-463A-AAC0-5ECBB7E67CD8@oracle.com> <56D6E3C6.602@oracle.com> Message-ID: Hi Claes, thanks a lot, and I agree with the renaming. Best, Michael > Am 02.03.2016 um 13:59 schrieb Claes Redestad : > > Hi, > > this looks good to me. > > Maybe rename LoopTest to LoopCombinatorTest to add a bit of specificity? > > /Claes > > On 2016-03-02 13:46, Michael Haupt wrote: >> Dear all, >> >> please review this change. >> RFE: https://bugs.openjdk.java.net/browse/JDK-8150832 >> Webrev: http://cr.openjdk.java.net/~mhaupt/8150832/webrev.00 >> >> This is a refactoring; the monolithic test for JEP 274 was split into several tests along functionality covered. Also, data providers and other declarative annotations were introduced where it made sense. >> >> Thanks, >> >> Michael >> > -- Dr. Michael Haupt | Principal Member of Technical Staff Phone: +49 331 200 7277 | Fax: +49 331 200 7561 Oracle Java Platform Group | LangTools Team | Nashorn Oracle Deutschland B.V. & Co. KG | Schiffbauergasse 14 | 14467 Potsdam, Germany ORACLE Deutschland B.V. & Co. 
KG | Hauptverwaltung: Riesstra?e 25, D-80992 M?nchen Registergericht: Amtsgericht M?nchen, HRA 95603 Komplement?rin: ORACLE Deutschland Verwaltung B.V. | Hertogswetering 163/167, 3543 AS Utrecht, Niederlande Handelsregister der Handelskammer Midden-Nederland, Nr. 30143697 Gesch?ftsf?hrer: Alexander van der Ven, Jan Schultheiss, Val Maher Oracle is committed to developing practices and products that help protect the environment From paul.sandoz at oracle.com Wed Mar 2 13:11:11 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Wed, 2 Mar 2016 14:11:11 +0100 Subject: RFR(XS): 8150953: j.l.i.MethodHandles: example section in whileLoop(...) provides example for doWhileLoop In-Reply-To: References: Message-ID: <21697F17-E4EE-4AB7-8517-831E621758CD@oracle.com> > On 1 Mar 2016, at 14:46, Michael Haupt wrote: > > Dear all, > > please review this fix. > Bug: https://bugs.openjdk.java.net/browse/JDK-8150953 > Webrev: http://cr.openjdk.java.net/~mhaupt/8150953/webrev.00/ > +1 Paul. > The API docs and corresponding JavaDocExampleTest test case for MethodHandles.whileLoop() wrongly used the example for MethodHandles.doWhileLoop(). > > Thanks, > > Michael From amy.lu at oracle.com Wed Mar 2 14:23:11 2016 From: amy.lu at oracle.com (Amy Lu) Date: Wed, 2 Mar 2016 22:23:11 +0800 Subject: JDK 9 RFR of JDK-8038330: tools/jar/JarEntryTime.java fails intermittently on checking extracted file last modified values are the current times In-Reply-To: <56D57FE8.4000008@gmail.com> References: <56D532A5.7090204@oracle.com> <56D57FE8.4000008@gmail.com> Message-ID: <56D6F74F.50803@oracle.com> Please help to review the updated version: http://cr.openjdk.java.net/~amlu/8038330/webrev.01/ Thanks, Amy On 3/1/16 7:41 PM, Peter Levart wrote: > Hi Amy, > > I think that the following test: > > 178 if (!(Math.abs(now - start) >= 0L && Math.abs(end - now) > >= 0L)) { > > ...will always be false. Therefore, the test will always succeed. 
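Peter's observation is easy to make concrete with a small standalone sketch (hypothetical class, not the actual JarEntryTime test): Math.abs(x) >= 0L holds for every long except Long.MIN_VALUE, so the negated conjunction can essentially never become true.

```java
public class RangeCheckDemo {
    /** The original (broken) condition: effectively always false. */
    static boolean brokenCheck(long start, long now, long end) {
        return !(Math.abs(now - start) >= 0L && Math.abs(end - now) >= 0L);
    }

    /** The intended check: fail when 'now' lies outside [start, end]. */
    static boolean correctedCheck(long start, long now, long end) {
        return start > now || now > end;
    }

    public static void main(String[] args) {
        // 'now' far outside the window: the broken check still passes.
        long start = 1000L, end = 2000L, now = 999_999L;
        System.out.println(brokenCheck(start, now, end));    // false -> bug missed
        System.out.println(correctedCheck(start, now, end)); // true  -> bug caught
    }
}
```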
>
> Perhaps you wanted to test the following:
>
> assert start <= end;
> if (start > now || now > end) { ...
>
>
> Regards, Peter
>
> On 03/01/2016 07:11 AM, Amy Lu wrote:
>> Please review the patch for test tools/jar/JarEntryTime.java
>>
>> In which two issues are fixed:
>>
>> 1. Test fails intermittently on checking the extracted files'
>> last-modified-time are the current times.
>> Instead of comparing the file last-modified-time with pre-saved time
>> value "now" (which is the time *before* current time; especially in a
>> slow run, the time diff of "now" and current time is possibly greater
>> than 2 seconds precision (PRECISION)), test now compares the
>> extracted file's last-modified-time with newly created file
>> last-modified-time.
>> 2. Test may fail if run during the Daylight Saving Time change.
>>
>>
>> bug: https://bugs.openjdk.java.net/browse/JDK-8038330
>> webrev: http://cr.openjdk.java.net/~amlu/8038330/webrev.00/
>>
>> Thanks,
>> Amy
>

From nadeesh.tv at oracle.com  Wed Mar  2 15:17:26 2016
From: nadeesh.tv at oracle.com (nadeesh tv)
Date: Wed, 02 Mar 2016 20:47:26 +0530
Subject: RFR:JDK-8030864:Add an efficient getDateTimeMillis method to java.time
In-Reply-To: 
References: <56D6C0B7.10205@oracle.com>
Message-ID: <56D70406.7010000@oracle.com>

Hi,
Stephen, Thanks for the comments.
Please see the updated webrev
http://cr.openjdk.java.net/~ntv/8030864/webrev.02/

Regards,
Nadeesh TV

On 3/2/2016 5:41 PM, Stephen Colebourne wrote:
> Remove "Subclass can override the default implementation for a more
> efficient implementation." as it adds no value.
>
> In the default implementation of
>
> epochSecond(Era era, int yearofEra, int month, int dayOfMonth,
> int hour, int minute, int second, ZoneOffset zoneOffset)
>
> use
>
> prolepticYear(era, yearOfEra)
>
> and call the other new epochSecond method. See dateYearDay(Era era,
> int yearOfEra, int dayOfYear) for the design to copy. If this is done,
> then there is no need to override the method in IsoChronology.
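The delegation Stephen describes can be sketched with simplified stand-ins for the chrono types (hypothetical names and a plain offset-in-seconds parameter; the real java.time.chrono API uses ZoneOffset and richer era handling):

```java
import java.time.LocalDate;

// Simplified stand-in: the real java.time.chrono.Era has more to it.
interface Era { int getValue(); }

interface Chronology {
    int prolepticYear(Era era, int yearOfEra);

    // The proleptic-year-based overload ("the other new epochSecond method").
    long epochSecond(int prolepticYear, int month, int dayOfMonth,
                     int hour, int minute, int second, int offsetSeconds);

    // Era-based overload: convert era/year-of-era once, then delegate,
    // so a concrete chronology needs no override of its own.
    default long epochSecond(Era era, int yearOfEra, int month, int dayOfMonth,
                             int hour, int minute, int second, int offsetSeconds) {
        return epochSecond(prolepticYear(era, yearOfEra),
                           month, dayOfMonth, hour, minute, second, offsetSeconds);
    }
}

class IsoLike implements Chronology {
    @Override
    public int prolepticYear(Era era, int yearOfEra) {
        return era.getValue() == 1 ? yearOfEra : 1 - yearOfEra; // CE vs. BCE
    }

    @Override
    public long epochSecond(int prolepticYear, int month, int dayOfMonth,
                            int hour, int minute, int second, int offsetSeconds) {
        // Stand-in for the garbage-free date arithmetic in the webrev.
        long epochDay = LocalDate.of(prolepticYear, month, dayOfMonth).toEpochDay();
        return epochDay * 86400L + hour * 3600L + minute * 60L
                + second - offsetSeconds;
    }
}
```

With this shape, only the proleptic-year overload ever does real work, which is why the era-based default need not be re-implemented per chronology.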
> > In the test, > > LocalDate.MIN.with(chronoLd) > > could be > > LocalDate.from(chronoLd) > > Thanks > Stephen > > > > > > > On 2 March 2016 at 10:30, nadeesh tv wrote: >> Hi all, >> >> Please review an enhancement for a garbage free epochSecond method. >> >> Bug ID: https://bugs.openjdk.java.net/browse/JDK-8030864 >> >> webrev: http://cr.openjdk.java.net/~ntv/8030864/webrev.01 >> >> -- >> Thanks and Regards, >> Nadeesh TV >> -- Thanks and Regards, Nadeesh TV From kumar.x.srinivasan at oracle.com Wed Mar 2 15:41:01 2016 From: kumar.x.srinivasan at oracle.com (Kumar Srinivasan) Date: Wed, 02 Mar 2016 07:41:01 -0800 Subject: RFR: 8147755: ASM should create correct constant tag for invokestatic on handle point to interface static method Message-ID: <56D7098D.8080306@oracle.com> Hello Remi, et. al., Webrev: http://cr.openjdk.java.net/~ksrini/8147755/webrev.00/ Can you please approve this patch, it is taken out of ASM's svn repo. change id 1795, which addresses the problem described in [1]. Note 1: A couple of @Deprecated annotations and doc comments have been disabled, because we have a catch-22 that an internal and closed component depends on these APIs, and the replacement is not available until we push this patch. A follow up bug [2] has been filed. Note 2: jprt tested, all core-libs, langtools and nashorn regressions pass. HotSpot team has verified that it address their issues. Thank you Kumar [1] https://bugs.openjdk.java.net/browse/JDK-8147755 [2] https://bugs.openjdk.java.net/browse/JDK-8151056 From scolebourne at joda.org Wed Mar 2 15:48:56 2016 From: scolebourne at joda.org (Stephen Colebourne) Date: Wed, 2 Mar 2016 15:48:56 +0000 Subject: RFR:JDK-8030864:Add an efficient getDateTimeMillis method to java.time In-Reply-To: <56D70406.7010000@oracle.com> References: <56D6C0B7.10205@oracle.com> <56D70406.7010000@oracle.com> Message-ID: I think that this is fine now, but Roger/others should also chime in. 
thanks Stephen On 2 March 2016 at 15:17, nadeesh tv wrote: > Hi, > Stephen, Thanks for the comments. > Please see the updated webrev > http://cr.openjdk.java.net/~ntv/8030864/webrev.02/ > > Regards, > Nadeesh TV > > > On 3/2/2016 5:41 PM, Stephen Colebourne wrote: >> >> Remove "Subclass can override the default implementation for a more >> efficient implementation." as it adds no value. >> >> In the default implementation of >> >> epochSecond(Era era, int yearofEra, int month, int dayOfMonth, >> int hour, int minute, int second, ZoneOffset zoneOffset) >> >> use >> >> prolepticYear(era, yearOfEra) >> >> and call the other new epochSecond method. See dateYearDay(Era era, >> int yearOfEra, int dayOfYear) for the design to copy. If this is done, >> then there is no need to override the method in IsoChronology. >> >> In the test, >> >> LocalDate.MIN.with(chronoLd) >> >> could be >> >> LocalDate.from(chronoLd) >> >> Thanks >> Stephen >> >> >> >> >> >> >> On 2 March 2016 at 10:30, nadeesh tv wrote: >>> >>> Hi all, >>> >>> Please review an enhancement for a garbage free epochSecond method. >>> >>> Bug ID: https://bugs.openjdk.java.net/browse/JDK-8030864 >>> >>> webrev: http://cr.openjdk.java.net/~ntv/8030864/webrev.01 >>> >>> -- >>> Thanks and Regards, >>> Nadeesh TV >>> > > -- > Thanks and Regards, > Nadeesh TV > From mandy.chung at oracle.com Wed Mar 2 16:11:56 2016 From: mandy.chung at oracle.com (Mandy Chung) Date: Wed, 2 Mar 2016 08:11:56 -0800 Subject: RFR [9] 8150976: JarFile and MRJAR tests should use the JDK specific Version API In-Reply-To: References: Message-ID: <4E932A02-FBA3-4C01-AEB2-27AAF750C796@oracle.com> > On Mar 1, 2016, at 8:38 AM, Chris Hegarty wrote: > > Currently JarFile and MRJAR tests use sun.misc.Version to retrieve the major > runtime version. They should be updated to use the new JDK specific Version > API. 
> > Note: There is an issue, 8144062 [1], to revisit the JDK specific Version API to > determine if it should be moved, or even standardized. The changes being > proposed here may need to be updated, in a trivial way, in the future, but this > issue is intending to break the dependency on sun.misc.Version so that > 8150162 [2] can make progress. Additionally, the future refactoring will most > likely be trivial. > > http://cr.openjdk.java.net/~chegar/8150976/ > https://bugs.openjdk.java.net/browse/JDK-8150976 > +1 Mandy From ivan.gerasimov at oracle.com Wed Mar 2 17:29:18 2016 From: ivan.gerasimov at oracle.com (Ivan Gerasimov) Date: Wed, 2 Mar 2016 20:29:18 +0300 Subject: [8u-dev] Request for REVIEW and APPROVAL to backport: 8149330: Capacity of StringBuilder should not get close to Integer.MAX_VALUE unless necessary Message-ID: <56D722EE.2070204@oracle.com> Hello! I'm seeking for approval to backport this fix into jdk8u-dev. Comparing to Jdk9, the patch had to be changed mainly due to compact string support introduced in jdk9. However, the fix is essentially the same: we just avoid getting too close to Integer.MAX_VALUE when doing reallocations unless explicitly required. Would you please help review it? 
Bug: https://bugs.openjdk.java.net/browse/JDK-8149330 Jdk9 change: http://hg.openjdk.java.net/jdk9/dev/jdk/rev/123593aacb48 Jdk9 review: http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-February/039018.html http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-March/039182.html Jdk8 webrev: http://cr.openjdk.java.net/~igerasim/8149330/04/webrev/ Sincerely yours, Ivan From Roger.Riggs at Oracle.com Wed Mar 2 18:31:27 2016 From: Roger.Riggs at Oracle.com (Roger Riggs) Date: Wed, 2 Mar 2016 13:31:27 -0500 Subject: RFR:JDK-8030864:Add an efficient getDateTimeMillis method to java.time In-Reply-To: <56D70406.7010000@oracle.com> References: <56D6C0B7.10205@oracle.com> <56D70406.7010000@oracle.com> Message-ID: <56D7317F.3000804@Oracle.com> Hi Nadeesh, Editorial comments: Chronology.java: 716+ "Java epoch" -> "epoch" "minute, second and zoneOffset" -> "minute, second*,* and zoneOffset" (add a comma; two places) "caluculated using given era, prolepticYear," -> "calculated using the era, year-of-era," "to represent" -> remove as unnecessary in all places IsoChronology: "to represent" -> remove as unnecessary in all places These should be fixed to cleanup the specification. The implementation and the tests look fine. Thanks, Roger On 3/2/2016 10:17 AM, nadeesh tv wrote: > Hi, > Stephen, Thanks for the comments. > Please see the updated webrev > http://cr.openjdk.java.net/~ntv/8030864/webrev.02/ > > Regards, > Nadeesh TV > > On 3/2/2016 5:41 PM, Stephen Colebourne wrote: >> Remove "Subclass can override the default implementation for a more >> efficient implementation." as it adds no value. >> >> In the default implementation of >> >> epochSecond(Era era, int yearofEra, int month, int dayOfMonth, >> int hour, int minute, int second, ZoneOffset zoneOffset) >> >> use >> >> prolepticYear(era, yearOfEra) >> >> and call the other new epochSecond method. See dateYearDay(Era era, >> int yearOfEra, int dayOfYear) for the design to copy. 
If this is done, >> then there is no need to override the method in IsoChronology. >> >> In the test, >> >> LocalDate.MIN.with(chronoLd) >> >> could be >> >> LocalDate.from(chronoLd) >> >> Thanks >> Stephen >> >> >> >> >> >> >> On 2 March 2016 at 10:30, nadeesh tv wrote: >>> Hi all, >>> >>> Please review an enhancement for a garbage free epochSecond method. >>> >>> Bug ID: https://bugs.openjdk.java.net/browse/JDK-8030864 >>> >>> webrev: http://cr.openjdk.java.net/~ntv/8030864/webrev.01 >>> >>> -- >>> Thanks and Regards, >>> Nadeesh TV >>> > From coleen.phillimore at oracle.com Wed Mar 2 18:44:07 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 2 Mar 2016 13:44:07 -0500 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM Message-ID: <56D73477.4030100@oracle.com> Summary: replace JVM_GetStackTraceDepth and JVM_GetStackTraceElement, with JVM_GetStackTraceElements that gets all the elements in the StackTraceElement[] These improvements were found during the investigation for replacing Throwable with the StackWalkAPI. This change also adds iterator for BacktraceBuilder to make changing format of backtrace easier. Tested with -testset core, RBT nightly hotspot nightly tests on all platforms, and jck tests on linux x64. Compatibility request is approved. 
open webrev at http://cr.openjdk.java.net/~coleenp/8150778_jdk/ open webrev at http://cr.openjdk.java.net/~coleenp/8150778_hotspot bug link https://bugs.openjdk.java.net/browse/JDK-8150778 Thanks, Coleen From nadeesh.tv at oracle.com Wed Mar 2 18:51:35 2016 From: nadeesh.tv at oracle.com (nadeesh tv) Date: Thu, 03 Mar 2016 00:21:35 +0530 Subject: RFR:JDK-8030864:Add an efficient getDateTimeMillis method to java.time In-Reply-To: <56D7317F.3000804@Oracle.com> References: <56D6C0B7.10205@oracle.com> <56D70406.7010000@oracle.com> <56D7317F.3000804@Oracle.com> Message-ID: <56D73637.3090006@oracle.com> Hi , Please see the updated webrev http://cr.openjdk.java.net/~ntv/8030864/webrev.03/ Thanks and Regards, Nadeesh On 3/3/2016 12:01 AM, Roger Riggs wrote: > Hi Nadeesh, > > Editorial comments: > > Chronology.java: 716+ > "Java epoch" -> "epoch" > "minute, second and zoneOffset" -> "minute, second*,* and > zoneOffset" (add a comma; two places) > "caluculated using given era, prolepticYear," -> "calculated using > the era, year-of-era," > "to represent" -> remove as unnecessary in all places > > IsoChronology: > "to represent" -> remove as unnecessary in all places > > These should be fixed to cleanup the specification. > > The implementation and the tests look fine. > > Thanks, Roger > > > > On 3/2/2016 10:17 AM, nadeesh tv wrote: >> Hi, >> Stephen, Thanks for the comments. >> Please see the updated webrev >> http://cr.openjdk.java.net/~ntv/8030864/webrev.02/ >> >> Regards, >> Nadeesh TV >> >> On 3/2/2016 5:41 PM, Stephen Colebourne wrote: >>> Remove "Subclass can override the default implementation for a more >>> efficient implementation." as it adds no value. >>> >>> In the default implementation of >>> >>> epochSecond(Era era, int yearofEra, int month, int dayOfMonth, >>> int hour, int minute, int second, ZoneOffset zoneOffset) >>> >>> use >>> >>> prolepticYear(era, yearOfEra) >>> >>> and call the other new epochSecond method. 
See dateYearDay(Era era, >>> int yearOfEra, int dayOfYear) for the design to copy. If this is done, >>> then there is no need to override the method in IsoChronology. >>> >>> In the test, >>> >>> LocalDate.MIN.with(chronoLd) >>> >>> could be >>> >>> LocalDate.from(chronoLd) >>> >>> Thanks >>> Stephen >>> >>> >>> >>> >>> >>> >>> On 2 March 2016 at 10:30, nadeesh tv wrote: >>>> Hi all, >>>> >>>> Please review an enhancement for a garbage free epochSecond method. >>>> >>>> Bug ID: https://bugs.openjdk.java.net/browse/JDK-8030864 >>>> >>>> webrev: http://cr.openjdk.java.net/~ntv/8030864/webrev.01 >>>> >>>> -- >>>> Thanks and Regards, >>>> Nadeesh TV >>>> >> > -- Thanks and Regards, Nadeesh TV From daniel.fuchs at oracle.com Wed Mar 2 18:57:32 2016 From: daniel.fuchs at oracle.com (Daniel Fuchs) Date: Wed, 2 Mar 2016 19:57:32 +0100 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <56D73477.4030100@oracle.com> References: <56D73477.4030100@oracle.com> Message-ID: <56D7379C.3030006@oracle.com> Hi Coleen, Nice improvement! Two remarks on http://cr.openjdk.java.net/~coleenp/8150778_jdk/ 1. StackTraceElement.java Does the new constructor in StackTraceElement really need to be public? Can't we keep that package protected? 2. Throwable.java:902 902 * package-protection for use by SharedSecrets. If I'm not mistaken we removed the shared secrets access - IIRC that was used by java.util.logging.LogRecord - which now uses the StackWalker API instead. So maybe you could make the method private and remove the comment as further cleanup. Please don't count me as (R)eviewer for the hotspot changes :-) best regards, -- daniel On 02/03/16 19:44, Coleen Phillimore wrote: > Summary: replace JVM_GetStackTraceDepth and JVM_GetStackTraceElement, > with JVM_GetStackTraceElements that gets all the elements in the > StackTraceElement[] > > These improvements were found during the investigation for replacing > Throwable with the StackWalkAPI. 
This change also adds iterator for > BacktraceBuilder to make changing format of backtrace easier. > > Tested with -testset core, RBT nightly hotspot nightly tests on all > platforms, and jck tests on linux x64. Compatibility request is approved. > > open webrev at http://cr.openjdk.java.net/~coleenp/8150778_jdk/ > open webrev at http://cr.openjdk.java.net/~coleenp/8150778_hotspot > bug link https://bugs.openjdk.java.net/browse/JDK-8150778 > > Thanks, > Coleen From aleksey.shipilev at oracle.com Wed Mar 2 18:58:37 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 2 Mar 2016 21:58:37 +0300 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <56D73477.4030100@oracle.com> References: <56D73477.4030100@oracle.com> Message-ID: <56D737DD.7000700@oracle.com> Hi Coleen, On 03/02/2016 09:44 PM, Coleen Phillimore wrote: > Summary: replace JVM_GetStackTraceDepth and JVM_GetStackTraceElement, > with JVM_GetStackTraceElements that gets all the elements in the > StackTraceElement[] > > These improvements were found during the investigation for replacing > Throwable with the StackWalkAPI. This change also adds iterator for > BacktraceBuilder to make changing format of backtrace easier. > > Tested with -testset core, RBT nightly hotspot nightly tests on all > platforms, and jck tests on linux x64. Compatibility request is approved. > > open webrev at http://cr.openjdk.java.net/~coleenp/8150778_jdk/ > open webrev at http://cr.openjdk.java.net/~coleenp/8150778_hotspot > bug link https://bugs.openjdk.java.net/browse/JDK-8150778 Looks interesting! Is there an underlying reason why we can't return the pre-filled StackTraceElements[] array from the JVM_GetStackTraceElements to begin with? This will avoid leaking StackTraceElement constructor into standard library, *and* allows to make StackTraceElement fields final. Taking stuff back from the standard library is hard, if not impossible, so we better expose as little as possible. 
Other minor nits: * Initializing fields to their default values is a code smell in Java: private transient int depth = 0; * Passing a null array to getStackTraceElement probably SEGVs? I don't see the null checks in native parts. Thanks, -Aleksey From steve.drach at oracle.com Wed Mar 2 19:12:18 2016 From: steve.drach at oracle.com (Steve Drach) Date: Wed, 2 Mar 2016 11:12:18 -0800 Subject: RFR 8150679: closed/javax/crypto/CryptoPermission/CallerIdentification.sh fails after fix for JDK-8132734 Message-ID: Please review the following fix for JDK-8150679 webrev: http://cr.openjdk.java.net/~sdrach/8150679/webrev/ issue: https://bugs.openjdk.java.net/browse/JDK-8150679 The test was modified to demonstrate the problem. From coleen.phillimore at oracle.com Wed Mar 2 19:29:57 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 2 Mar 2016 14:29:57 -0500 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <56D7379C.3030006@oracle.com> References: <56D73477.4030100@oracle.com> <56D7379C.3030006@oracle.com> Message-ID: <56D73F35.3030500@oracle.com> Hi Daniel, Thank you for looking at this so quickly. On 3/2/16 1:57 PM, Daniel Fuchs wrote: > Hi Coleen, > > Nice improvement! > > Two remarks on http://cr.openjdk.java.net/~coleenp/8150778_jdk/ > > 1. StackTraceElement.java > > Does the new constructor in StackTraceElement really need to be > public? Can't we keep that package protected? So I just removed the public keyword, and that seems good. Thanks! > > > 2. Throwable.java:902 > > 902 * package-protection for use by SharedSecrets. > > If I'm not mistaken we removed the shared secrets access - IIRC that > was used by java.util.logging.LogRecord - which now uses the > StackWalker API instead. > > So maybe you could make the method private and remove the comment > as further cleanup. I had just copied the SharedSecrets comments. I'll make getStackTraceElements private also. 
> > Please don't count me as (R)eviewer for the hotspot changes :-) Oh, but you know this code in hotspot, now. That's ok, you don't need to review hotspot code. Thanks! Coleen > > best regards, > > -- daniel > > On 02/03/16 19:44, Coleen Phillimore wrote: >> Summary: replace JVM_GetStackTraceDepth and JVM_GetStackTraceElement, >> with JVM_GetStackTraceElements that gets all the elements in the >> StackTraceElement[] >> >> These improvements were found during the investigation for replacing >> Throwable with the StackWalkAPI. This change also adds iterator for >> BacktraceBuilder to make changing format of backtrace easier. >> >> Tested with -testset core, RBT nightly hotspot nightly tests on all >> platforms, and jck tests on linux x64. Compatibility request is >> approved. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8150778_jdk/ >> open webrev at http://cr.openjdk.java.net/~coleenp/8150778_hotspot >> bug link https://bugs.openjdk.java.net/browse/JDK-8150778 >> >> Thanks, >> Coleen > From xueming.shen at oracle.com Wed Mar 2 19:42:23 2016 From: xueming.shen at oracle.com (Xueming Shen) Date: Wed, 02 Mar 2016 11:42:23 -0800 Subject: JDK 9 RFR of JDK-8038330: tools/jar/JarEntryTime.java fails intermittently on checking extracted file last modified values are the current times In-Reply-To: <56D6F74F.50803@oracle.com> References: <56D532A5.7090204@oracle.com> <56D57FE8.4000008@gmail.com> <56D6F74F.50803@oracle.com> Message-ID: <56D7421F.3020409@oracle.com> +1 though it might be better (?) to check as 184 if (now < start || now > end) { thanks, sherman On 03/02/2016 06:23 AM, Amy Lu wrote: > Please help to review the updated version: > http://cr.openjdk.java.net/~amlu/8038330/webrev.01/ > > Thanks, > Amy > > On 3/1/16 7:41 PM, Peter Levart wrote: >> Hi Amy, >> >> I think that the following test: >> >> 178 if (!(Math.abs(now - start) >= 0L && Math.abs(end - now) >= 0L)) { >> >> ...will always be false. Therefore, the test will always succeed.
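Peter's point about line 178 can be verified with a tiny standalone check (illustrative code, not part of the webrev): `Math.abs(x) >= 0L` holds for any realistic timestamp difference, so the negated condition can never trigger a failure, while a plain range check does catch an out-of-range time.

```java
public class RangeCheckDemo {
    // The original (vacuous) check from the test: true for any realistic
    // inputs, so its negation never reports a failure.
    static boolean originalCheckPasses(long now, long start, long end) {
        return Math.abs(now - start) >= 0L && Math.abs(end - now) >= 0L;
    }

    // The straightforward range check suggested in the review.
    static boolean inRange(long now, long start, long end) {
        return !(now < start || now > end);
    }

    public static void main(String[] args) {
        long start = 1000L, end = 2000L, outside = 5000L;
        System.out.println(originalCheckPasses(outside, start, end)); // true: bug goes unnoticed
        System.out.println(inRange(outside, start, end));             // false: out-of-range detected
    }
}
```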
>> >> Perhaps you wanted to test the following: >> >> assert start <= end; >> if (start > now || now > end) { ... >> >> >> Regards, Peter >> >> On 03/01/2016 07:11 AM, Amy Lu wrote: >>> Please review the patch for test tools/jar/JarEntryTime.java >>> >>> In which two issues fixed: >>> >>> 1. Test fails intermittently on checking the extracted files' last-modified-time are the current times. >>> Instead of compare the file last-modified-time with pre-saved time value ?now? (which is the time *before* current time, especially in a slow run, the time diff of ?now? and current time is possible greater than 2 seconds precision (PRECISION)), test now compares the extracted file?s last-modified-time with newly created file last-modified-time. >>> 2. Test may fail if run during the Daylight Saving Time change. >>> >>> >>> bug: https://bugs.openjdk.java.net/browse/JDK-8038330 >>> webrev: http://cr.openjdk.java.net/~amlu/8038330/webrev.00/ >>> >>> Thanks, >>> Amy >> > From michael.haupt at oracle.com Wed Mar 2 19:53:50 2016 From: michael.haupt at oracle.com (Michael Haupt) Date: Wed, 2 Mar 2016 20:53:50 +0100 Subject: RFR(S): 8150957: j.l.i.MethodHandles.whileLoop(...) fails with IOOBE in the case 'init' is null, 'step' and 'pred' have parameters Message-ID: <452A0D05-93D8-4DCE-941F-A582EB107153@oracle.com> Dear all, please review this change. Bug: https://bugs.openjdk.java.net/browse/JDK-8150957 Webrev: http://cr.openjdk.java.net/~mhaupt/8150957/webrev.00/ The bug was actually fixed with the push for JDK-8150635. This change adds a test for the issue. Thanks, Michael -- Dr. Michael Haupt | Principal Member of Technical Staff Phone: +49 331 200 7277 | Fax: +49 331 200 7561 Oracle Java Platform Group | LangTools Team | Nashorn Oracle Deutschland B.V. & Co. KG | Schiffbauergasse 14 | 14467 Potsdam, Germany ORACLE Deutschland B.V. & Co. 
KG | Hauptverwaltung: Riesstraße 25, D-80992 München Registergericht: Amtsgericht München, HRA 95603 Komplementärin: ORACLE Deutschland Verwaltung B.V. | Hertogswetering 163/167, 3543 AS Utrecht, Niederlande Handelsregister der Handelskammer Midden-Nederland, Nr. 30143697 Geschäftsführer: Alexander van der Ven, Jan Schultheiss, Val Maher Oracle is committed to developing practices and products that help protect the environment From coleen.phillimore at oracle.com Wed Mar 2 19:57:43 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 2 Mar 2016 14:57:43 -0500 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <56D737DD.7000700@oracle.com> References: <56D73477.4030100@oracle.com> <56D737DD.7000700@oracle.com> Message-ID: <56D745B7.4040508@oracle.com> On 3/2/16 1:58 PM, Aleksey Shipilev wrote: > Hi Coleen, > > On 03/02/2016 09:44 PM, Coleen Phillimore wrote: >> Summary: replace JVM_GetStackTraceDepth and JVM_GetStackTraceElement, >> with JVM_GetStackTraceElements that gets all the elements in the >> StackTraceElement[] >> >> These improvements were found during the investigation for replacing >> Throwable with the StackWalkAPI. This change also adds iterator for >> BacktraceBuilder to make changing format of backtrace easier. >> >> Tested with -testset core, RBT nightly hotspot nightly tests on all >> platforms, and jck tests on linux x64. Compatibility request is approved. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8150778_jdk/ >> open webrev at http://cr.openjdk.java.net/~coleenp/8150778_hotspot >> bug link https://bugs.openjdk.java.net/browse/JDK-8150778 > Looks interesting! > > Is there an underlying reason why we can't return the pre-filled > StackTraceElements[] array from the JVM_GetStackTraceElements to begin > with? This will avoid leaking StackTraceElement constructor into > standard library, *and* allows to make StackTraceElement fields final.
> Taking stuff back from the standard library is hard, if not impossible, > so we better expose as little as possible. We measured that it's faster to allocate the StackTraceElement array in Java and it seems cleaner to the Java guys. It came from similar code we've been prototyping for StackFrameInfo. > > Other minor nits: > > * Initializing fields to their default values is a code smell in Java: > private transient int depth = 0; ok, not sure what "code smell" means but it doesn't have to be initialized like this. It's set in the native code. > > * Passing a null array to getStackTraceElement probably SEGVs? I don't > see the null checks in native parts. Yes, it would SEGV. I'll add some checks for null and make sure it's an array of StackTraceElement. Thanks, Coleen > > Thanks, > -Aleksey > From martinrb at google.com Wed Mar 2 20:20:04 2016 From: martinrb at google.com (Martin Buchholz) Date: Wed, 2 Mar 2016 12:20:04 -0800 Subject: [8u-dev] Request for REVIEW and APPROVAL to backport: 8149330: Capacity of StringBuilder should not get close to Integer.MAX_VALUE unless necessary In-Reply-To: <56D722EE.2070204@oracle.com> References: <56D722EE.2070204@oracle.com> Message-ID: Reviewed! On Wed, Mar 2, 2016 at 9:29 AM, Ivan Gerasimov wrote: > Hello! > > I'm seeking for approval to backport this fix into jdk8u-dev. > Comparing to Jdk9, the patch had to be changed mainly due to compact string > support introduced in jdk9. > However, the fix is essentially the same: we just avoid getting too close to > Integer.MAX_VALUE when doing reallocations unless explicitly required. > > Would you please help review it? 
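The growth policy Ivan describes can be sketched in isolation (illustrative code based on the description above, not the actual AbstractStringBuilder patch): prefer doubling, and only go near Integer.MAX_VALUE when the requested minimum capacity actually forces it.

```java
public class GrowthPolicy {
    // Some VMs refuse arrays of exactly Integer.MAX_VALUE elements, so stay
    // a little below it unless the caller explicitly needs more.
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    // Sketch of the idea behind JDK-8149330: double (plus 2, as
    // StringBuilder does), and fall back to hugeCapacity only when doubling
    // overflows or exceeds the soft limit.
    static int newCapacity(int oldCapacity, int minCapacity) {
        int newCapacity = (oldCapacity << 1) + 2;   // may overflow to negative
        if (newCapacity - minCapacity < 0) {
            newCapacity = minCapacity;
        }
        return (newCapacity <= 0 || MAX_ARRAY_SIZE - newCapacity < 0)
                ? hugeCapacity(minCapacity)
                : newCapacity;
    }

    static int hugeCapacity(int minCapacity) {
        if (Integer.MAX_VALUE - minCapacity < 0) {  // overflow: cannot satisfy
            throw new OutOfMemoryError();
        }
        // Exceed the soft limit only when explicitly required.
        return (minCapacity > MAX_ARRAY_SIZE) ? minCapacity : MAX_ARRAY_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(newCapacity(16, 20));    // 34: plain doubling
        System.out.println(newCapacity(16, 100));   // 100: request dominates
    }
}
```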
> > Bug: https://bugs.openjdk.java.net/browse/JDK-8149330 > Jdk9 change: http://hg.openjdk.java.net/jdk9/dev/jdk/rev/123593aacb48 > Jdk9 review: > http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-February/039018.html > http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-March/039182.html > Jdk8 webrev: http://cr.openjdk.java.net/~igerasim/8149330/04/webrev/ > > Sincerely yours, > Ivan From aleksey.shipilev at oracle.com Wed Mar 2 20:21:31 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 2 Mar 2016 23:21:31 +0300 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <56D745B7.4040508@oracle.com> References: <56D73477.4030100@oracle.com> <56D737DD.7000700@oracle.com> <56D745B7.4040508@oracle.com> Message-ID: <56D74B4B.9090708@oracle.com> On 03/02/2016 10:57 PM, Coleen Phillimore wrote: > On 3/2/16 1:58 PM, Aleksey Shipilev wrote: >> Is there an underlying reason why we can't return the pre-filled >> StackTraceElements[] array from the JVM_GetStackTraceElements to begin >> with? This will avoid leaking StackTraceElement constructor into >> standard library, *and* allows to make StackTraceElement fields final. >> Taking stuff back from the standard library is hard, if not impossible, >> so we better expose as little as possible. > > We measured that it's faster to allocate the StackTraceElement array > in Java and it seems cleaner to the Java guys. It came from similar > code we've been prototyping for StackFrameInfo. OK, it's not perfectly clean from implementation standpoint, but this RFE might not be the best opportunity to polish that. At least make StackTraceElement constructor private (better), or package-private (acceptable), and then we are good to go. 
Also, I think you can drop this line: 836 int depth = getStackTraceDepth(); Thanks, -Aleksey From coleen.phillimore at oracle.com Wed Mar 2 20:36:27 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 2 Mar 2016 15:36:27 -0500 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <56D74B4B.9090708@oracle.com> References: <56D73477.4030100@oracle.com> <56D737DD.7000700@oracle.com> <56D745B7.4040508@oracle.com> <56D74B4B.9090708@oracle.com> Message-ID: <56D74ECB.7020503@oracle.com> On 3/2/16 3:21 PM, Aleksey Shipilev wrote: > On 03/02/2016 10:57 PM, Coleen Phillimore wrote: >> On 3/2/16 1:58 PM, Aleksey Shipilev wrote: >>> Is there an underlying reason why we can't return the pre-filled >>> StackTraceElements[] array from the JVM_GetStackTraceElements to begin >>> with? This will avoid leaking StackTraceElement constructor into >>> standard library, *and* allows to make StackTraceElement fields final. >>> Taking stuff back from the standard library is hard, if not impossible, >>> so we better expose as little as possible. >> We measured that it's faster to allocate the StackTraceElement array >> in Java and it seems cleaner to the Java guys. It came from similar >> code we've been prototyping for StackFrameInfo. > OK, it's not perfectly clean from implementation standpoint, but this > RFE might not be the best opportunity to polish that. At least make > StackTraceElement constructor private (better), or package-private > (acceptable), and then we are good to go. Well, the RFE is intended to clean this up but I don't think there's agreement about what the cleanest thing is. If the cleaner API is: StackTraceElement[] getStackTraceElements(); we should change it once and not twice. I'd like to hear other opinions! Since StackTraceElement constructor is called by Throwable it has to be package private but can't be private. I have made it package private. 
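The two API shapes being weighed above can be put side by side (hypothetical stand-in code, not JDK internals; `String` stands in for `StackTraceElement` and a loop stands in for the VM):

```java
// Hypothetical sketch of the two shapes under discussion -- not JDK code.
public class ApiShapes {
    // Shape in the current patch: the caller allocates, the "VM" fills the
    // array in place. Element construction must be reachable from the filler.
    static void fillStackTraceElements(String[] elements) {
        for (int i = 0; i < elements.length; i++) {
            elements[i] = "frame-" + i;  // stand-in for VM-side filling
        }
    }

    // Shape Aleksey suggests: the callee returns a pre-filled array, so no
    // element constructor leaks out and element fields could remain final.
    static String[] stackTraceElements(int depth) {
        String[] elements = new String[depth];
        fillStackTraceElements(elements);
        return elements;
    }

    public static void main(String[] args) {
        System.out.println(stackTraceElements(2).length); // 2
    }
}
```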
> > Also, I think you can drop this line: > 836 int depth = getStackTraceDepth(); Oh, right, I can do that. I was hiding the field depth. I don't need the function either. Thanks! Thank you for looking at this so quickly. Coleen > > Thanks, > -Aleksey > From martinrb at google.com Wed Mar 2 20:37:37 2016 From: martinrb at google.com (Martin Buchholz) Date: Wed, 2 Mar 2016 12:37:37 -0800 Subject: RFR: jsr166 jdk9 integration wave 5 In-Reply-To: References: <56D61A45.7040005@oracle.com> Message-ID: Webrevs updated, incorporating changes to tests in my previous message. From tprintezis at twitter.com Wed Mar 2 22:07:57 2016 From: tprintezis at twitter.com (Tony Printezis) Date: Wed, 2 Mar 2016 17:07:57 -0500 Subject: RFR: 8151098: Introduce multi-slot per-thread cache for StringDecoders/Encoders Message-ID: Hi all, We discussed this change in a previous e-mail thread. Here's a patch for your consideration: http://cr.openjdk.java.net/~tonyp/8151098/webrev.1/ I cloned the Cache class from ThreadLocalCoders and reworked it a bit. The StringDecoder and StringEncoder classes had some common fields (the Charset and the requested charset name). I moved them to a superclass (StringCoder) which made the cache easier to write (I didn't have to create one subclass for the decoder and one for the encoder, as it is the case in ThreadLocalCoders). Feedback very welcome! Tony ----- Tony Printezis | JVM/GC Engineer / VM Team | Twitter @TonyPrintezis tprintezis at twitter.com From david.holmes at oracle.com Thu Mar 3 01:45:03 2016 From: david.holmes at oracle.com (David Holmes) Date: Thu, 3 Mar 2016 11:45:03 +1000 Subject: Custom security policy without replacing files in the OpenJDK? In-Reply-To: References: Message-ID: <56D7971F.8040706@oracle.com> On 27/02/2016 2:56 AM, Marcus Lagergren wrote: > Hi! > > Is it possible to override lib/security/local_policy on app level without patching jdk distro? > i.e. -Duse.this.policy.jar= ? or something?
> Can't find a way to do it http://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html Specifying an Additional Policy File at Runtime It is also possible to specify an additional or a different policy file when invoking execution of an application. This can be done via the "-Djava.security.policy" command line argument, which sets the value of the java.security.policy property. For example, if you use java -Djava.security.manager -Djava.security.policy=someURL SomeApp ... HTH David > Regards > Marcus > From mandy.chung at oracle.com Thu Mar 3 02:18:40 2016 From: mandy.chung at oracle.com (Mandy Chung) Date: Wed, 2 Mar 2016 18:18:40 -0800 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <56D77F55.9010801@oracle.com> References: <56D73477.4030100@oracle.com> <56D737DD.7000700@oracle.com> <56D745B7.4040508@oracle.com> <56D77F55.9010801@oracle.com> Message-ID: <37875252-7E11-4A18-B58A-84DC048AE6A7@oracle.com> > On Mar 2, 2016, at 4:03 PM, Coleen Phillimore wrote: > > Freshly tested changes with jck tests, with missing checks and other changes to use the depth field, as pointed out by Aleksey. I've kept the StackTraceElement[] allocation in Java to match the new API that was approved. > > open webrev at http://cr.openjdk.java.net/~coleenp/8150778.02_hotspot/ > open webrev at http://cr.openjdk.java.net/~coleenp/8150778.02_jdk/ typo in your link: http://cr.openjdk.java.net/~coleenp/8150778.02_jck/ StackTraceElement.java 80 * @since 1.9 This is not needed. Simply take this out. Throwable.java 215 * Native code sets the depth of the backtrace for later retrieval s/Native code/VM/ since VM is setting the depth field. 896 private native void getStackTraceElements(StackTraceElement[] elements); Can you add the method description "Gets the stack trace elements." I only skimmed on the hotspot change.
TestThrowable.java 43 int getDepth(Throwable t) throws Exception { 44 for (Field f : Throwable.class.getDeclaredFields()) { 45 if (f.getName().equals("depth")) { You can replace the above with Throwable.class.getDeclaredField("depth") Otherwise, looks okay. Mandy From coleen.phillimore at oracle.com Thu Mar 3 02:55:09 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 2 Mar 2016 21:55:09 -0500 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <37875252-7E11-4A18-B58A-84DC048AE6A7@oracle.com> References: <56D73477.4030100@oracle.com> <56D737DD.7000700@oracle.com> <56D745B7.4040508@oracle.com> <56D77F55.9010801@oracle.com> <37875252-7E11-4A18-B58A-84DC048AE6A7@oracle.com> Message-ID: <56D7A78D.7040500@oracle.com> Mandy, thank you for reviewing this. On 3/2/16 9:18 PM, Mandy Chung wrote: >> On Mar 2, 2016, at 4:03 PM, Coleen Phillimore wrote: >> >> Freshly tested changes with jck tests, with missing checks and other changes to use the depth field, as pointed out by Aleksey. I've kept the StackTraceElement[] allocation in Java to match the new API that was approved. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8150778.02_hotspot/ >> open webrev at http://cr.openjdk.java.net/~coleenp/8150778.02_jdk/ > typo in your link: > http://cr.openjdk.java.net/~coleenp/8150778.02_jck/ > > StackTraceElement.java > 80 * @since 1.9 Okay, good because it's probably 9.0 anyway. > > This is not needed. Simply take this out. > > Throwable.java > > 215 * Native code sets the depth of the backtrace for later retrieval > > s/Native code/VM/ I changed it to "The JVM sets the depth..." There was another sentence just like this for the backtrace field, which I also changed. > since VM is setting the depth field. > > > 896 private native void getStackTraceElements(StackTraceElement[] elements); > > Can you add the method description > "Gets the stack trace elements." Fixed. > I only skimmed on the hotspot change.
Looks okay to me. > > TestThrowable.java > > 43 int getDepth(Throwable t) throws Exception { > 44 for (Field f : Throwable.class.getDeclaredFields()) { > 45 if (f.getName().equals("depth")) { > > > You can replace the above with Throwable.class.getDeclaredField("depth") Yes, that's better. > Otherwise, looks okay. Thanks! Coleen > Mandy From hboehm at google.com Thu Mar 3 03:08:59 2016 From: hboehm at google.com (Hans Boehm) Date: Wed, 2 Mar 2016 19:08:59 -0800 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> <56D59B83.3010503@oracle.com> <56D60202.6030803@oracle.com> Message-ID: On Wed, Mar 2, 2016 at 12:09 AM, Thomas Stüfe wrote: > > Hi Hans, > > thanks for the hint! > > But how would I do this for my problem: > > Allocate memory, zero it out and then store the pointer into a variable seen by other threads, while preventing the other threads from seeing . I do not understand how atomics would help: I can make the pointer itself an atomic, but that only guarantees memory ordering in regard to this variable, not to the allocated memory. > > Kind Regards, Thomas C11 atomics work essentially like Java volatiles: They order other memory accesses as well. If you declare the pointer to be atomic, and store into it, then another thread reading the newly assigned value will also see the stores preceding the pointer store. Since the pointer is the only value that can be accessed concurrently by multiple threads (with not all accesses reads), it's the only object that needs to be atomic. In this case, it's sufficient to store into the pointer with atomic_store_explicit(&ptr, , memory_order_release); and read it with atomic_load_explicit(&ptr, memory_order_acquire); which are a bit cheaper. However, this is C11 specific, and I don't know whether that's acceptable to use in this context.
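Hans's analogy to Java volatiles can be made concrete in the document's own language (a sketch of safe publication with hypothetical names, not the proposed fdTable code):

```java
public class Publication {
    static class Table {
        final int[] slots;
        Table(int size) { slots = new int[size]; } // zeroed before publication
    }

    // Writing the reference only after the object is initialized, through a
    // volatile field, gives release/acquire ordering: a reader that observes
    // 'table != null' also observes the zeroed contents.
    private static volatile Table table;

    static void publish(int size) {
        table = new Table(size);  // volatile store acts as a release
    }

    static int read(int index) {
        Table t = table;          // volatile load acts as an acquire
        return (t == null) ? -1 : t.slots[index];
    }
}
```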
If you can't assume C11, the least incorrect workaround is generally to make the pointer volatile, precede the store with a fence, and follow the load with a fence. On x86, both fences just need to prevent compiler reordering. From amy.lu at oracle.com Thu Mar 3 04:30:34 2016 From: amy.lu at oracle.com (Amy Lu) Date: Thu, 3 Mar 2016 12:30:34 +0800 Subject: JDK 9 RFR of JDK-8038330: tools/jar/JarEntryTime.java fails intermittently on checking extracted file last modified values are the current times In-Reply-To: <56D7421F.3020409@oracle.com> References: <56D532A5.7090204@oracle.com> <56D57FE8.4000008@gmail.com> <56D6F74F.50803@oracle.com> <56D7421F.3020409@oracle.com> Message-ID: <56D7BDEA.9080903@oracle.com> On 3/3/16 3:42 AM, Xueming Shen wrote: > +1 > > though it might be better (?) to check as > > 184 if (now < start || now > end) { Updated :-) http://cr.openjdk.java.net/~amlu/8038330/webrev.02/ Thanks, Amy > > thanks, > sherman > > > On 03/02/2016 06:23 AM, Amy Lu wrote: >> Please help to review the updated version: >> http://cr.openjdk.java.net/~amlu/8038330/webrev.01/ >> >> Thanks, >> Amy >> >> On 3/1/16 7:41 PM, Peter Levart wrote: >>> Hi Amy, >>> >>> I think that the following test: >>> >>> 178 if (!(Math.abs(now - start) >= 0L && Math.abs(end - >>> now) >= 0L)) { >>> >>> ...will always be false. Therefore, the test will always succeed. >>> >>> Perhaps you wanted to test the following: >>> >>> assert start <= end; >>> if (start > now || now > end) { ... >>> >>> >>> Regards, Peter >>> >>> On 03/01/2016 07:11 AM, Amy Lu wrote: >>>> Please review the patch for test tools/jar/JarEntryTime.java >>>> >>>> In which two issues fixed: >>>> >>>> 1. Test fails intermittently on checking the extracted files' >>>> last-modified-time are the current times. >>>> Instead of compare the file last-modified-time with pre-saved >>>> time value "now" (which is the time *before* current time, >>>> especially in a slow run, the time diff of "now" 
and current time >>>> is possible greater than 2 seconds precision (PRECISION)), test now >>>> compares the extracted file's last-modified-time with newly created >>>> file last-modified-time. >>>> 2. Test may fail if run during the Daylight Saving Time change. >>>> >>>> >>>> bug: https://bugs.openjdk.java.net/browse/JDK-8038330 >>>> webrev: http://cr.openjdk.java.net/~amlu/8038330/webrev.00/ >>>> >>>> Thanks, >>>> Amy >>> >> > From shihua.guo at oracle.com Thu Mar 3 06:29:26 2016 From: shihua.guo at oracle.com (Eric Guo) Date: Thu, 03 Mar 2016 14:29:26 +0800 Subject: RFR: 8059169 [Findbugs]Classes under package com.sun.tools.internal.xjc may expose internal representation by storing an externally mutable object In-Reply-To: <56D7CE5C.2070607@oracle.com> References: <56D7CE5C.2070607@oracle.com> Message-ID: <56D7D9C6.20704@oracle.com> Hi all, Could you please help me to review my code change about issue https://bugs.openjdk.java.net/browse/JDK-8059169 ? webrev: http://cr.openjdk.java.net/~fyuan/eguo/8059169/webrev.00/ . These changes are only for JDK 9. Best regards, Eric From sean.coffey at oracle.com Thu Mar 3 09:04:47 2016 From: sean.coffey at oracle.com (=?UTF-8?Q?Se=c3=a1n_Coffey?=) Date: Thu, 3 Mar 2016 09:04:47 +0000 Subject: [8u-dev] Request for REVIEW and APPROVAL to backport: 8149330: Capacity of StringBuilder should not get close to Integer.MAX_VALUE unless necessary In-Reply-To: References: <56D722EE.2070204@oracle.com> Message-ID: <56D7FE2F.8030207@oracle.com> Ivan, the JBS bug description is scarce on detail. Can you fill it out a bit ? Some examples of the stack trace encountered and links to OpenJDK reviews/discussions will help people who encounter this issue in the future. Regards, Sean. On 02/03/2016 20:20, Martin Buchholz wrote: > Reviewed! > > On Wed, Mar 2, 2016 at 9:29 AM, Ivan Gerasimov > wrote: >> Hello! >> >> I'm seeking for approval to backport this fix into jdk8u-dev. 
>> Comparing to Jdk9, the patch had to be changed mainly due to compact string >> support introduced in jdk9. >> However, the fix is essentially the same: we just avoid getting too close to >> Integer.MAX_VALUE when doing reallocations unless explicitly required. >> >> Would you please help review it? >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8149330 >> Jdk9 change: http://hg.openjdk.java.net/jdk9/dev/jdk/rev/123593aacb48 >> Jdk9 review: >> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-February/039018.html >> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-March/039182.html >> Jdk8 webrev: http://cr.openjdk.java.net/~igerasim/8149330/04/webrev/ >> >> Sincerely yours, >> Ivan From sean.coffey at oracle.com Thu Mar 3 09:07:20 2016 From: sean.coffey at oracle.com (=?UTF-8?Q?Se=c3=a1n_Coffey?=) Date: Thu, 3 Mar 2016 09:07:20 +0000 Subject: [8u-dev] Request for REVIEW and APPROVAL to backport: 8149330: Capacity of StringBuilder should not get close to Integer.MAX_VALUE unless necessary In-Reply-To: <56D7FE2F.8030207@oracle.com> References: <56D722EE.2070204@oracle.com> <56D7FE2F.8030207@oracle.com> Message-ID: <56D7FEC8.3020300@oracle.com> Approved for jdk8u-dev (BTW). Regards, Sean. On 03/03/2016 09:04, Seán Coffey wrote: > Ivan, > > the JBS bug description is scarce on detail. Can you fill it out a bit ? > > Some examples of the stack trace encountered and links to OpenJDK > reviews/discussions will help people who encounter this issue in the > future. > > Regards, > Sean. > > On 02/03/2016 20:20, Martin Buchholz wrote: >> Reviewed! >> >> On Wed, Mar 2, 2016 at 9:29 AM, Ivan Gerasimov >> wrote: >>> Hello! >>> >>> I'm seeking for approval to backport this fix into jdk8u-dev. >>> Comparing to Jdk9, the patch had to be changed mainly due to compact >>> string
>>> However, the fix is essentially the same: we just avoid getting too >>> close to >>> Integer.MAX_VALUE when doing reallocations unless explicitly required. >>> >>> Would you please help review it? >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8149330 >>> Jdk9 change: http://hg.openjdk.java.net/jdk9/dev/jdk/rev/123593aacb48 >>> Jdk9 review: >>> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-February/039018.html >>> >>> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-March/039182.html >>> >>> Jdk8 webrev: http://cr.openjdk.java.net/~igerasim/8149330/04/webrev/ >>> >>> Sincerely yours, >>> Ivan > From amaembo at gmail.com Thu Mar 3 09:26:57 2016 From: amaembo at gmail.com (Tagir F. Valeev) Date: Thu, 3 Mar 2016 15:26:57 +0600 Subject: RFR: 8151123 - Collectors.summingDouble/averagingDouble unnecessarily call mapper twice Message-ID: <876784395.20160303152657@gmail.com> Hello! Please review and sponsor this small change: https://bugs.openjdk.java.net/browse/JDK-8151123 http://cr.openjdk.java.net/~tvaleev/webrev/8151123/r1/ User-supplied mapper function is unnecessarily called twice on each accumulation event in summingDouble and averagingDouble. This function could be computationally intensive which may degrade the performance up to 2x. The patch addresses this issue. Here's also simple JMH benchmark which illustrates the performance gain. http://cr.openjdk.java.net/~tvaleev/webrev/8151123/jmh/ Original: Benchmark (n) Mode Cnt Score Error Units AveragingTest.averageDistance 10 avgt 30 0,571 ± 0,049 us/op AveragingTest.averageDistance 1000 avgt 30 58,573 ± 1,194 us/op AveragingTest.averageDistance 100000 avgt 30 5854,428 ± 71,242 us/op Patched: Benchmark (n) Mode Cnt Score Error Units AveragingTest.averageDistance 10 avgt 30 0,336 ± 0,002 us/op AveragingTest.averageDistance 1000 avgt 30 31,932 ± 0,367 us/op AveragingTest.averageDistance 100000 avgt 30 3794,541 ± 21,599 us/op With best regards, Tagir Valeev. 
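The essence of the fix can be illustrated with a counting mapper (illustrative collector only, not the JDK's implementation, which also uses compensated summation): the accumulator binds the mapped value to a local so the possibly expensive mapper runs exactly once per element.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.ToDoubleFunction;
import java.util.stream.Collector;

public class SummingDemo {
    // Illustrative only -- not java.util.stream.Collectors code. Evaluating
    // the mapper once per element is the essence of the JDK-8151123 fix.
    static <T> Collector<T, ?, Double> summingOnce(ToDoubleFunction<? super T> mapper) {
        return Collector.of(
                () -> new double[1],
                (a, t) -> {
                    double v = mapper.applyAsDouble(t);  // evaluated once
                    a[0] += v;
                },
                (a, b) -> { a[0] += b[0]; return a; },
                a -> a[0]);
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        double sum = List.of("1.5", "2.5").stream()
                .collect(summingOnce(s -> {
                    calls.incrementAndGet();
                    return Double.parseDouble(s);
                }));
        System.out.println(sum + " after " + calls.get() + " mapper calls"); // 4.0 after 2 mapper calls
    }
}
```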
From thomas.stuefe at gmail.com Thu Mar 3 09:32:19 2016 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 3 Mar 2016 10:32:19 +0100 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> <56D59B83.3010503@oracle.com> <56D60202.6030803@oracle.com> Message-ID: Hi Hans, On Thu, Mar 3, 2016 at 4:08 AM, Hans Boehm wrote: > > On Wed, Mar 2, 2016 at 12:09 AM, Thomas Stüfe > wrote: > > > > Hi Hans, > > > > thanks for the hint! > > > > But how would I do this for my problem: > > > > Allocate memory, zero it out and then store the pointer into a variable > seen by other threads, while preventing the other threads from seeing . I > do not understand how atomics would help: I can make the pointer itself an > atomic, but that only guarantees memory ordering in regard to this > variable, not to the allocated memory. > > > > Kind Regards, Thomas > > C11 atomics work essentially like Java volatiles: They order other memory > accesses as well. If you declare the pointer to be atomic, and store into > it, then another thread reading the newly assigned value will also see the > stores preceding the pointer store. Since the pointer is the only value > that can be accessed concurrently by multiple threads (with not all > accesses reads), it's the only object that needs to be atomic. In this > case, it's sufficient to store into the pointer with > > atomic_store_explicit(&ptr, , memory_order_release); > > and read it with > > atomic_load_explicit(&ptr, memory_order_acquire); > > which are a bit cheaper. > > However, this is C11 specific, and I don't know whether that's acceptable > to use in this context. > > If you can't assume C11, the least incorrect workaround is generally to > make the pointer volatile, precede the store with a fence, and follow the > load with a fence. 
On x86, both fences just need to prevent compiler > reordering. > Thank you for that excellent explanation! This may be just my ignorance, but I actually did not know that atomics are now a part of the C standard. I took this occasion to look up all other C11 features and this is quite neat :) Nice to see that C continues to live. I am very hesitant though about introducing C11 features into the JDK. We deal with notoriously oldish compilers, especially on AIX, and I do not want to be the first to force C11, especially not on such a side issue. The more I look at this, the more I think that the costs for a pthread mutex lock are acceptable in this case: we are about to do a blocking IO operation anyway, which is already flanked by two mutex locking calls (in startOp and endOp). I doubt that a third mutex call will be the one making the costs suddenly unacceptable. Especially since they can be avoided altogether for low value mutex numbers (the optimization Roger suggested). I will do some performance tests and check whether the added locking calls are even measurable. Thomas From dmitry.samersoff at oracle.com Thu Mar 3 09:50:13 2016 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Thu, 3 Mar 2016 12:50:13 +0300 Subject: RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all In-Reply-To: References: <56D56CE7.6070700@oracle.com> <585f33afb6f7450f8456eb065f4886a5@DEWDFE13DE11.global.corp.sap> <56D59B83.3010503@oracle.com> <56D60202.6030803@oracle.com> Message-ID: <56D808D5.6060302@oracle.com> Thomas, > The more I look at this, the more I think that the costs for a > pthread mutex lock are acceptable in this case: we are about to do a > blocking IO operation anyway, which is already flanked by two mutex > locking calls (in startOp and endOp). I doubt that a third mutex call > will be the one making the costs suddenly unacceptable. 
Especially > since they can be avoided altogether for low value mutex numbers (the > optimization Roger suggested). After closer look to the code in a whole - I agree with you. -Dmitry On 2016-03-03 12:32, Thomas St?fe wrote: > Hi Hans, > > On Thu, Mar 3, 2016 at 4:08 AM, Hans Boehm > wrote: > > > On Wed, Mar 2, 2016 at 12:09 AM, Thomas St?fe > > wrote: >> >> Hi Hans, >> >> thanks for the hint! >> >> But how would I do this for my problem: >> >> Allocate memory, zero it out and then store the pointer into a >> variable seen by other threads, while preventing the other threads >> from seeing . I do not understand how atomics would help: I can >> make the pointer itself an atomic, but that only guarantees memory >> ordering in regard to this variable, not to the allocated memory. >> >> Kind Regards, Thomas > > C11 atomics work essentially like Java volatiles: They order other > memory accesses as well. If you declare the pointer to be atomic, > and store into it, then another thread reading the newly assigned > value will also see the stores preceding the pointer store. Since > the pointer is the only value that can be accessed concurrently by > multiple threads (with not all accesses reads), it's the only object > that needs to be atomic. In this case, it's sufficient to store into > the pointer with > > atomic_store_explicit(&ptr, , memory_order_release); > > and read it with > > atomic_load_explicit(&ptr, memory_order_acquire); > > which are a bit cheaper. > > > However, this is C11 specific, and I don't know whether that's > acceptable to use in this context. > > If you can't assume C11, the least incorrect workaround is generally > to make the pointer volatile, precede the store with a fence, and > follow the load with a fence. On x86, both fences just need to > prevent compiler reordering. > > > > Thank you for that excellent explanation! > > This may be just my ignorance, but I actually did not know that > atomics are now a part of the C standard. 
I took this occasion to > look up all other C11 features and this is quite neat :) Nice to see > that C continues to live. > > I am very hesitant though about introducing C11 features into the > JDK. We deal with notoriously oldish compilers, especially on AIX, > and I do not want to be the first to force C11, especially not on > such a side issue. > > The more I look at this, the more I think that the costs for a > pthread mutex lock are acceptable in this case: we are about to do a > blocking IO operation anyway, which is already flanked by two mutex > locking calls (in startOp and endOp). I doubt that a third mutex call > will be the one making the costs suddenly unacceptable. Especially > since they can be avoided altogether for low value mutex numbers (the > optimization Roger suggested). > > I will do some performance tests and check whether the added locking > calls are even measurable. > > Thomas > -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From chris.hegarty at oracle.com Thu Mar 3 10:31:58 2016 From: chris.hegarty at oracle.com (Chris Hegarty) Date: Thu, 3 Mar 2016 10:31:58 +0000 Subject: RFR [9] 8151140: Replace use of lambda/method ref in jdk.Version constructor Message-ID: Since 8150163 [1], jdk.Version can now be used earlier in startup, but not always. It was noticed that the use of lambda / method ref in the constructor, in some cases, was the first usage of such, and incurred the initialization costs of the java.lang.invoke infrastructure (which can take a significant amount of time on first access). The solution is to simply avoid the usage, as has been done in other "core" areas, that may be used early in startup.
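A standalone sketch of the loop-based parsing this change introduces (class and method names here are invented; the actual change is in the jdk.Version constructor): nothing in it can trigger java.lang.invoke initialization, since there are no lambdas or method refs.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class VnumParse {
    // Parse a dot-separated $VNUM such as "9.0.1" with a plain loop
    // instead of a stream pipeline with Integer::parseInt.
    static List<Integer> parse(String vnum) {
        List<Integer> list = new ArrayList<>();
        for (String s : vnum.split("\\."))
            list.add(Integer.parseInt(s));
        return Collections.unmodifiableList(list);
    }

    public static void main(String[] args) {
        System.out.println(parse("9.0.1")); // [9, 0, 1]
    }
}
```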
diff --git a/src/java.base/share/classes/jdk/Version.java b/src/java.base/share/classes/jdk/Version.java --- a/src/java.base/share/classes/jdk/Version.java +++ b/src/java.base/share/classes/jdk/Version.java @@ -28,10 +28,10 @@ import java.math.BigInteger; import java.security.AccessController; import java.security.PrivilegedAction; +import java.util.ArrayList; import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; -import java.util.Arrays; import java.util.Collections; import java.util.List; import java.util.Optional; @@ -208,11 +208,10 @@ + s + "'"); // $VNUM is a dot-separated list of integers of arbitrary length - version - = Collections.unmodifiableList( - Arrays.stream(m.group(VNUM_GROUP).split("\\.")) - .map(Integer::parseInt) - .collect(Collectors.toList())); + List list = new ArrayList<>(); + for (String i : m.group(VNUM_GROUP).split("\\.")) + list.add(Integer.parseInt(i)); + version = Collections.unmodifiableList(list); pre = Optional.ofNullable(m.group(PRE_GROUP)); -Chris. [1] https://bugs.openjdk.java.net/browse/JDK-8150976 From paul.sandoz at oracle.com Thu Mar 3 10:38:32 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 3 Mar 2016 11:38:32 +0100 Subject: RFR: 8151123 - Collectors.summingDouble/averagingDouble unnecessarily call mapper twice In-Reply-To: <876784395.20160303152657@gmail.com> References: <876784395.20160303152657@gmail.com> Message-ID: <76827DD4-DB61-45C5-B5E7-DAC6797979B0@oracle.com> > On 3 Mar 2016, at 10:26, Tagir F. Valeev wrote: > > Hello! > > Please review and sponsor this small change: > > https://bugs.openjdk.java.net/browse/JDK-8151123 > http://cr.openjdk.java.net/~tvaleev/webrev/8151123/r1/ > > User-supplied mapper function is unnecessarily called twice on each > accumulation event in summingDouble and averagingDouble. This function > could be computationally intensive which may degrade the performance > up to 2x. The patch addresses this issue. 
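The accumulation step the patch fixes can be sketched standalone (names and layout are hypothetical, loosely mirroring the double[] state Collectors keeps: high-order sum, Kahan compensation, plain sum). The point is that the mapper is evaluated exactly once per element and the value reused:

```java
import java.util.function.ToDoubleFunction;

public class CompensatedSum {
    // a[0] = high-order sum, a[1] = compensation term, a[2] = simple sum
    static <T> void accumulate(double[] a, T t, ToDoubleFunction<T> mapper) {
        double val = mapper.applyAsDouble(t); // evaluate the mapper exactly once
        double y = val - a[1];                // Kahan compensated-sum update
        double u = a[0] + y;
        a[1] = (u - a[0]) - y;
        a[0] = u;
        a[2] += val;                          // plain sum, kept for the NaN/infinity edge case
    }

    public static void main(String[] args) {
        double[] acc = new double[3];
        for (String s : new String[]{"a", "bb", "ccc"})
            accumulate(acc, s, String::length);
        System.out.println(acc[0]); // 6.0
    }
}
```

In the pre-patch shape the `mapper.applyAsDouble(t)` call appeared twice in the accumulator lambda, so a computationally intensive mapper ran twice per element.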
> +1 An embarrassing oversight missed in review, well spotted. I can push for you. ... I find it annoying we have to maintain the compensated and uncompensated sum for the edge case of the compensated sum being NaN and the simple sum being infinite. I measured this a while back; I was surprised it did not appear to make much difference when loops are unrolled and vectorized instructions are used, but I did not perform an in-depth investigation. https://bugs.openjdk.java.net/browse/JDK-8035561 Paul. > Here's also a simple JMH benchmark which illustrates the performance > gain. > http://cr.openjdk.java.net/~tvaleev/webrev/8151123/jmh/ > > Original: > > Benchmark (n) Mode Cnt Score Error Units > AveragingTest.averageDistance 10 avgt 30 0,571 ± 0,049 us/op > AveragingTest.averageDistance 1000 avgt 30 58,573 ± 1,194 us/op > AveragingTest.averageDistance 100000 avgt 30 5854,428 ± 71,242 us/op > > Patched: > > Benchmark (n) Mode Cnt Score Error Units > AveragingTest.averageDistance 10 avgt 30 0,336 ± 0,002 us/op > AveragingTest.averageDistance 1000 avgt 30 31,932 ± 0,367 us/op > AveragingTest.averageDistance 100000 avgt 30 3794,541 ± 21,599 us/op > > With best regards, > Tagir Valeev. > From paul.sandoz at oracle.com Thu Mar 3 10:54:19 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 3 Mar 2016 11:54:19 +0100 Subject: RFR [9] 8151140: Replace use of lambda/method ref in jdk.Version constructor In-Reply-To: References: Message-ID: <199B2B1C-51A7-4DA1-BE75-166A7EB02C33@oracle.com> +1 Paul. > On 3 Mar 2016, at 11:31, Chris Hegarty wrote: > > Since 8150163 [1], jdk.Version can now be used earlier in startup, but not > always. It was noticed that the use of lambda / method ref in the constructor, > in some cases, was the first usage of such, and incurred the initialization > costs of the java.lang.invoke infrastructure (which can take a significant > amount of time on first access).
> > The solution is to simple avoid the usage, as has been done in other ?core" > areas, that may be used early in startup. > > diff --git a/src/java.base/share/classes/jdk/Version.java b/src/java.base/share/classes/jdk/Version.java > --- a/src/java.base/share/classes/jdk/Version.java > +++ b/src/java.base/share/classes/jdk/Version.java > @@ -28,10 +28,10 @@ > import java.math.BigInteger; > import java.security.AccessController; > import java.security.PrivilegedAction; > +import java.util.ArrayList; > import java.util.regex.Matcher; > import java.util.regex.Pattern; > import java.util.stream.Collectors; > -import java.util.Arrays; > import java.util.Collections; > import java.util.List; > import java.util.Optional; > @@ -208,11 +208,10 @@ > + s + "'"); > > // $VNUM is a dot-separated list of integers of arbitrary length > - version > - = Collections.unmodifiableList( > - Arrays.stream(m.group(VNUM_GROUP).split("\\.")) > - .map(Integer::parseInt) > - .collect(Collectors.toList())); > + List list = new ArrayList<>(); > + for (String i : m.group(VNUM_GROUP).split("\\.")) > + list.add(Integer.parseInt(i)); > + version = Collections.unmodifiableList(list); > > pre = Optional.ofNullable(m.group(PRE_GROUP)); > > -Chris. > > [1] https://bugs.openjdk.java.net/browse/JDK-8150976 > > From paul.sandoz at oracle.com Thu Mar 3 10:58:35 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 3 Mar 2016 11:58:35 +0100 Subject: RFR(S): 8150957: j.l.i.MethodHandles.whileLoop(...) fails with IOOBE in the case 'init' is null, 'step' and 'pred' have parameters In-Reply-To: <452A0D05-93D8-4DCE-941F-A582EB107153@oracle.com> References: <452A0D05-93D8-4DCE-941F-A582EB107153@oracle.com> Message-ID: > On 2 Mar 2016, at 20:53, Michael Haupt wrote: > > Dear all, > > please review this change. > Bug: https://bugs.openjdk.java.net/browse/JDK-8150957 > Webrev: http://cr.openjdk.java.net/~mhaupt/8150957/webrev.00/ > > The bug was actually fixed with the push for JDK-8150635. 
This change adds a test for the issue. > Looks good. Minor comment, up to you to accept/reject: you could assert that 'w.i' is the expected value after the loop invocation. Paul. From paul.sandoz at oracle.com Thu Mar 3 11:26:51 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 3 Mar 2016 12:26:51 +0100 Subject: RFR 8150679: closed/javax/crypto/CryptoPermission/CallerIdentification.sh fails after fix for JDK-8132734 In-Reply-To: References: Message-ID: <958CB879-5B99-4AD7-8E23-6B08E960EE16@oracle.com> > On 2 Mar 2016, at 20:12, Steve Drach wrote: > > Please review the following fix for JDK-8150679 > > webrev: http://cr.openjdk.java.net/~sdrach/8150679/webrev/ > issue: https://bugs.openjdk.java.net/browse/JDK-8150679 > > The test was modified to demonstrate the problem. You are essentially bombing out of MR-JAR functionality if the JarEntry is not an instance of JarFileEntry. That might be ok for a short-term solution, but it might require some further deeper investigation on things that extend JarEntry and how it is used by VerifierStream [*]. JarFile: 895 private JarEntry verifiableEntry(ZipEntry ze) { 896 if (ze == null) return null; You don't need this. The code will anyway throw an NPE elsewhere, and the original code threw an NPE when obtaining the name: return new JarVerifier.VerifierStream( getManifestFromReference(), ze instanceof JarFileEntry ? (JarEntry) ze : getJarEntry(ze.getName()), super.getInputStream(ze), jv); 897 if (ze instanceof JarFileEntry) { 898 // assure the name and entry match for verification 899 return ((JarFileEntry)ze).reifiedEntry(); 900 } 901 ze = getJarEntry(ze.getName()); 902 assert ze instanceof JarEntry; This assertion is redundant as the method signature of getJarEntry returns JarEntry. 903 if (ze instanceof JarFileEntry) { 904 return ((JarFileEntry)ze).reifiedEntry(); 905 } 906 return (JarEntry)ze; 907 } MultiReleaseJarURLConnection ...
Given your changes above I am confused how your test passes for instances of URLJarFileEntry since they cannot be reified. Paul. [*] AFAICT JarVerifier directly accesses the fields JarEntry.signers/certs. From paul.sandoz at oracle.com Thu Mar 3 11:48:46 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 3 Mar 2016 12:48:46 +0100 Subject: RFR: jsr166 jdk9 integration wave 5 In-Reply-To: References: <56D61A45.7040005@oracle.com> Message-ID: > On 2 Mar 2016, at 21:37, Martin Buchholz wrote: > > Webrevs updated, incorporating changes to tests in my previous message. Looks ok, but I went through rather quickly. java/util/concurrent/ScheduledThreadPoolExecutor/DelayOverflow.java ... - pool.schedule(keepPoolBusy, 0, TimeUnit.SECONDS); + pool.schedule(keepPoolBusy, 0, DAYS); It probably does not matter that you changed the units here? Paul. From michael.haupt at oracle.com Thu Mar 3 12:13:52 2016 From: michael.haupt at oracle.com (Michael Haupt) Date: Thu, 3 Mar 2016 13:13:52 +0100 Subject: RFR(S): 8150957: j.l.i.MethodHandles.whileLoop(...) fails with IOOBE in the case 'init' is null, 'step' and 'pred' have parameters In-Reply-To: References: <452A0D05-93D8-4DCE-941F-A582EB107153@oracle.com> Message-ID: <12AB5166-76C8-4CBC-BCFC-BAB0B72BA6F7@oracle.com> Hi Paul, > Am 03.03.2016 um 11:58 schrieb Paul Sandoz : > Minor comment, up to you to accept/reject: you could assert that 'w.i' is the expected value after the loop invocation. Thank you. Excellent suggestion, I'll push with that added. Best, Michael -- Dr. Michael Haupt | Principal Member of Technical Staff Phone: +49 331 200 7277 | Fax: +49 331 200 7561 Oracle Java Platform Group | LangTools Team | Nashorn Oracle Deutschland B.V. & Co. KG | Schiffbauergasse 14 | 14467 Potsdam, Germany ORACLE Deutschland B.V. & Co. KG | Hauptverwaltung: Riesstraße 25, D-80992 München Registergericht: Amtsgericht München, HRA 95603 Komplementärin: ORACLE Deutschland Verwaltung B.V.
| Hertogswetering 163/167, 3543 AS Utrecht, Niederlande Handelsregister der Handelskammer Midden-Nederland, Nr. 30143697 Geschäftsführer: Alexander van der Ven, Jan Schultheiss, Val Maher Oracle is committed to developing practices and products that help protect the environment From ivan.gerasimov at oracle.com Thu Mar 3 12:21:48 2016 From: ivan.gerasimov at oracle.com (Ivan Gerasimov) Date: Thu, 3 Mar 2016 15:21:48 +0300 Subject: [8u-dev] Request for REVIEW and APPROVAL to backport: 8149330: Capacity of StringBuilder should not get close to Integer.MAX_VALUE unless necessary In-Reply-To: <56D7FEC8.3020300@oracle.com> References: <56D722EE.2070204@oracle.com> <56D7FE2F.8030207@oracle.com> <56D7FEC8.3020300@oracle.com> Message-ID: <56D82C5C.1060208@oracle.com> Thank you Martin and Seán! I'll add some info to the bug report with a reproducer code and symptoms. Sincerely yours, Ivan On 03.03.2016 12:07, Seán Coffey wrote: > Approved for jdk8u-dev (BTW). > > Regards, > Sean. > > On 03/03/2016 09:04, Seán Coffey wrote: >> Ivan, >> >> the JBS bug description is scarce on detail. Can you fill it out a bit? >> >> Some examples of the stack trace encountered and links to OpenJDK >> reviews/discussions will help people who encounter this issue in the >> future. >> >> Regards, >> Sean. >> >> On 02/03/2016 20:20, Martin Buchholz wrote: >>> Reviewed! >>> >>> On Wed, Mar 2, 2016 at 9:29 AM, Ivan Gerasimov >>> wrote: >>>> Hello! >>>> >>>> I'm seeking approval to backport this fix into jdk8u-dev. >>>> Compared to jdk9, the patch had to be changed mainly due to >>>> the compact string >>>> support introduced in jdk9. >>>> However, the fix is essentially the same: we just avoid getting too >>>> close to >>>> Integer.MAX_VALUE when doing reallocations unless explicitly required. >>>> >>>> Would you please help review it?
>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8149330 >>>> Jdk9 change: http://hg.openjdk.java.net/jdk9/dev/jdk/rev/123593aacb48 >>>> Jdk9 review: >>>> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-February/039018.html >>>> >>>> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-March/039182.html >>>> >>>> Jdk8 webrev: http://cr.openjdk.java.net/~igerasim/8149330/04/webrev/ >>>> >>>> Sincerely yours, >>>> Ivan >> > > From sean.mullan at oracle.com Thu Mar 3 12:30:40 2016 From: sean.mullan at oracle.com (Sean Mullan) Date: Thu, 3 Mar 2016 07:30:40 -0500 Subject: Custom security policy without replacing files in the OpenJDK? In-Reply-To: <56D7971F.8040706@oracle.com> References: <56D7971F.8040706@oracle.com> Message-ID: <56D82E70.9030307@oracle.com> On 03/02/2016 08:45 PM, David Holmes wrote: > On 27/02/2016 2:56 AM, Marcus Lagergren wrote: >> Hi! >> >> Is it possible to override lib/security/local_policy on app level >> without patching jdk distro? >> i.e. -Duse.this.policy.jar= ? or something? >> >> Can't find a way to do it > > http://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html > > > Specifying an Additional Policy File at Runtime > > It is also possible to specify an additional or a different policy file > when invoking execution of an application. This can be done via the > "-Djava.security.policy" command line argument, which sets the value of > the java.security.policy property. For example, if you use > > java -Djava.security.manager -Djava.security.policy=someURL SomeApp I believe Marcus is referring to local_policy.jar which is for enforcing import restrictions on cryptography. This is not a security policy file so the above command line won't work. There is no way to override it from the command line AFAIK.
Thanks, Sean From abrygin at azul.com Thu Mar 3 13:34:24 2016 From: abrygin at azul.com (Andrew Brygin) Date: Thu, 3 Mar 2016 13:34:24 +0000 Subject: RFR [7] 8133206: "java.lang.OutOfMemoryError: unable to create new native thread" caused by upgrade to zlib 1.2.8 In-Reply-To: References: <561E7C96.9010207@azulsystems.com> <5627B1B8.6080307@azulsystems.com> <5641CD9D.8080208@alexkasko.com> <566AB4AF.60902@oracle.com> <566ABCEC.90502@azulsystems.com> <56C6F49C.5090604@azulsystems.com> Message-ID: I'd like to cast a vote for including this fix in jdk9. Probably it has to be done in the original review thread created by Alex: http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-November/036463.html http://mail.openjdk.java.net/pipermail/jdk9-dev/2015-November/003036.html but there has been no activity since November 2015. So, +1 to get this fix in jdk9. Thanks, Andrew On Feb 24, 2016, at 5:07 PM, Dmitry Cherepanov > wrote: On Feb 19, 2016, at 1:55 PM, Nikolay Gorshkov > wrote: Hi Sherman, Sean, Could you please help with making progress on this code review request? This fix has been waiting for review since October. Webrev for jdk7u: http://cr.openjdk.java.net/~nikgor/8133206/jdk7u-dev/webrev.01/ Original mail thread: http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-October/035884.html I'm not an expert in this area but the changes look reasonable to me. +1 for pushing this into JDK9. Thanks Dmitry Webrev for jdk9 (contributed by Alex Kashchenko): http://cr.openjdk.java.net/~akasko/jdk9/8133206/webrev.00/ Original mail thread: http://mail.openjdk.java.net/pipermail/jdk9-dev/2015-November/003036.html Thanks, Nikolay On 11.12.2015 15:09, Nikolay Gorshkov wrote: Hi Sean, Thank you for your attention to this matter! Actually, the code review request was sent to the core-libs-dev alias a month ago: http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-November/036463.html Unfortunately, we haven't got any feedback yet.
Thanks, Nikolay On 11.12.2015 14:34, Seán Coffey wrote: Hi Alex, I'm dropping the jdk7u-dev mailing list for the moment. core-libs-dev is the mailing list where this review should happen. This fix would be required in JDK 9 first as per process. I think Sherman would be best to review if possible. Once it's soaked in JDK 9 for a few weeks, we could consider jdk8u and 7u backports. Regards, Sean. On 10/11/15 10:57, Alex Kashchenko wrote: Hi, On 10/21/2015 04:39 PM, Nikolay Gorshkov wrote: Hi Sherman, Thank you for your reply! My answers are inlined. > Can you be more specific about the "class loading cases" above? Sounds > more like we have a memory leak here (the real root cause)? For example, > the inflateEnd() never gets called? I agree, the real root cause is probably the following issue that exists since the end of 2002: https://bugs.openjdk.java.net/browse/JDK-4797189 "Finalizers not called promptly enough" And it is "the absence of a general solution to the non-heap resource exhaustion problem". zlib's inflateEnd() function is called only by the native method void java.util.zip.Inflater.end(long addr), and this method, in turn, is called only by the void java.util.zip.Inflater.end() and void java.util.zip.Inflater.finalize() methods.
According to the experiments, the typical stack trace for instantiating java.util.zip.Inflater is: java.util.zip.Inflater.(Inflater.java:116) java.util.zip.ZipFile.getInflater(ZipFile.java:450) java.util.zip.ZipFile.getInputStream(ZipFile.java:369) java.util.jar.JarFile.getInputStream(JarFile.java:412) org.jboss.virtual.plugins.context.zip.ZipFileWrapper.openStream(ZipFileWrapper.java:222) org.jboss.classloader.spi.base.BaseClassLoader$2.run(BaseClassLoader.java:592) java.security.AccessController.doPrivileged(Native Method) org.jboss.classloader.spi.base.BaseClassLoader.loadClassLocally(BaseClassLoader.java:591) org.jboss.classloader.spi.base.BaseClassLoader.loadClass(BaseClassLoader.java:447) java.lang.ClassLoader.loadClass(ClassLoader.java:358) java.lang.Class.forName0(Native Method) java.lang.Class.forName(Class.java:278) org.jboss.deployers.plugins.annotations.WeakClassLoaderHolder.loadClass(WeakClassLoaderHolder.java:72) It's quite hard to understand who is responsible for not calling Inflater.end() method explicitly; probably, it is the jboss/application's code. Unfortunately, we were in "it worked before and is broken now" customer situation here, so needed to fix it anyway. > From the doc/impl in inflate() it appears the proposed change should be > fine, though it's a little hacky, as you never know if it starts to return > Z_OK from some future release(s). Since the "current" implementation > never returns Z_OK, it might be worth considering to keep the Z_OK logic > asis in Inflater.c, together with the Z_BUF_ERROR, just in case? OK, I added handling of Z_OK code back. > I would be desired to add some words in Inflater.c to remind the > future maintainer why we switched from partial to finish and why to > check z_buf_error. I agree, added a comment. The updated webrev is available here: http://cr.openjdk.java.net/~nikgor/8133206/jdk7u-dev/webrev.01/ The change looks good to me (not a Reviewer/Committer). Patched jdk7u also passes JCK-7 on RHEL 7.1. 
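Until callers are fixed, the robust pattern on the application side is to release the native zlib state deterministically rather than relying on finalize(). A minimal round-trip showing explicit end() calls (sketch; buffer sizes are arbitrary):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ZlibRoundTrip {
    // Compress and decompress, freeing the native zlib state with end()
    // in finally blocks instead of waiting for finalization.
    static byte[] roundTrip(byte[] data) throws DataFormatException {
        byte[] compressed = new byte[data.length + 64];
        int clen;
        Deflater def = new Deflater();
        try {
            def.setInput(data);
            def.finish();
            clen = def.deflate(compressed);
        } finally {
            def.end(); // releases native state deterministically
        }
        byte[] out = new byte[data.length];
        int olen;
        Inflater inf = new Inflater();
        try {
            inf.setInput(compressed, 0, clen);
            olen = inf.inflate(out);
        } finally {
            inf.end(); // ditto; finalize() never needs to run for this
        }
        byte[] result = new byte[olen];
        System.arraycopy(out, 0, result, 0, olen);
        return result;
    }

    public static void main(String[] args) throws Exception {
        byte[] in = "hello hello hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(roundTrip(in), StandardCharsets.UTF_8));
    }
}
```

In the JBoss stack trace above the Inflater is created inside ZipFile, so the fix has to be in the caller that forgets to close the stream, not in code like this; the sketch only shows the contract end() is meant to satisfy.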
I forward-ported this patch to jdk9 (consulted with Nikolay Gorshkov first), jtreg reproducer for jdk9 also works with jdk7u - http://mail.openjdk.java.net/pipermail/jdk9-dev/2015-November/003036.html From kubota.yuji at gmail.com Thu Mar 3 15:03:01 2016 From: kubota.yuji at gmail.com (KUBOTA Yuji) Date: Fri, 4 Mar 2016 00:03:01 +0900 Subject: [DONG] Re: [DING] Re: [PING] Potential infinite waiting at JMXConnection#createConnection Message-ID: Hi all, Could someone please review this patch? Thanks, Yuji 2016-02-09 15:50 GMT+09:00 KUBOTA Yuji : > Hi David, > > Thank you for your advice and cc-ing! > > I do not have any role yet, so I paste my patches as below. > > diff --git a/src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPChannel.java > b/src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPChannel.java > --- a/src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPChannel.java > +++ b/src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPChannel.java > @@ -222,20 +222,34 @@ > // choose protocol (single op if not reusable socket) > if (!conn.isReusable()) { > out.writeByte(TransportConstants.SingleOpProtocol); > } else { > out.writeByte(TransportConstants.StreamProtocol); > + > + int usableSoTimeout = 0; > + try { > + /* > + * If socket factory had set a non-zero timeout on its > + * own, then restore it instead of using the property- > + * configured value. > + */ > + usableSoTimeout = sock.getSoTimeout(); > + if (usableSoTimeout == 0) { > + usableSoTimeout = responseTimeout; > + } > + sock.setSoTimeout(usableSoTimeout); > + } catch (Exception e) { > + // if we fail to set this, ignore and proceed anyway > + } > out.flush(); > > /* > * Set socket read timeout to configured value for JRMP > * connection handshake; this also serves to guard against > * non-JRMP servers that do not respond (see 4322806). 
> */ > - int originalSoTimeout = 0; > try { > - originalSoTimeout = sock.getSoTimeout(); > sock.setSoTimeout(handshakeTimeout); > } catch (Exception e) { > // if we fail to set this, ignore and proceed anyway > } > > @@ -279,18 +293,11 @@ > * connection. NOTE: this timeout, if configured to a > * finite duration, places an upper bound on the time > * that a remote method call is permitted to execute. > */ > try { > - /* > - * If socket factory had set a non-zero timeout on its > - * own, then restore it instead of using the property- > - * configured value. > - */ > - sock.setSoTimeout((originalSoTimeout != 0 ? > - originalSoTimeout : > - responseTimeout)); > + sock.setSoTimeout(usableSoTimeout); > } catch (Exception e) { > // if we fail to set this, ignore and proceed anyway > } > > out.flush(); > > Thanks, > Yuji > > 2016-02-09 13:11 GMT+09:00 David Holmes : >> Hi Yuji, >> >> Not sure who would look at this so cc'ing net-dev. >> >> Also note that contributions can only be accepted if presented via OpenJKDK >> infrastructure. Links to patches on http://icedtea.classpath.org are not >> acceptable. The patch needs to be included in the email (beware stripped >> attachments) if you can't get it hosted on cr.openjdk.java.net. Sorry. >> >> David >> >> >> On 9/02/2016 12:10 AM, KUBOTA Yuji wrote: >>> >>> Hi all, >>> >>> Could someone review this fix? >>> >>> Thanks, >>> Yuji >>> >>> 2016-02-04 2:27 GMT+09:00 KUBOTA Yuji : >>>> >>>> Hi all, >>>> >>>> Could someone please review and sponsor this fix ? >>>> I write the details of this issue again. Please review it. >>>> >>>> =Problem= >>>> Potential infinite waiting at TCPChannel#createConnection. >>>> >>>> This method flushes the DataOutputStream without the socket >>>> timeout settings when choose stream protocol [1]. 
If the connection is lost >>>> or the destination server does not return a response during the flush, >>>> this method waits forever because the timeout is set to the >>>> default value of SO_TIMEOUT, i.e., infinite. >>>> >>>> [1]: >>>> http://hg.openjdk.java.net/jdk9/dev/jdk/file/7adef1c3afd5/src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPChannel.java#l227 >>>> >>>> I think this issue is rare, but serious. >>>> >>>> =Reproduce= >>>> I wrote a test program to reproduce it. You can reproduce it with the steps below. >>>> >>>> * hg clone >>>> http://icedtea.classpath.org/people/ykubota/fixLoopAtJMXConnectorFactory/ >>>> * cd fixLoopAtJMXConnectorFactory; mvn package >>>> * set "stop_time" in debugcontrol.properties if needed. >>>> * java -cp .:target/debugcontrol-1.0-SNAPSHOT.jar >>>> debugcontrol.DebugController >>>> >>>> This program keeps waiting at TCPChannel#createConnection due to >>>> this issue. After "debugcontroltest.stop_time" ms, the program releases >>>> the wait by sending quit to jdb, which is stopping the destination >>>> server. Finally, it returns 2. >>>> >>>> =Solution= >>>> Set the timeout by using the property-configured value: >>>> sun.rmi.transport.tcp.responseTimeout. >>>> >>>> My patch is below. >>>> >>>> http://icedtea.classpath.org/people/ykubota/fixLoopAtJMXConnectorFactory/file/e31044f0804f/jdk9.patch >>>> >>>> If you run the test program with a JDK9 modified by my patch, the test >>>> program will get java.net.SocketTimeoutException after the connection >>>> timeout happens, then return 0. >>>> >>>> Thanks, >>>> Yuji. >>>> >>>> >>>> 2016-01-13 23:31 GMT+09:00 KUBOTA Yuji : >>>>> >>>>> Hi all, >>>>> >>>>> Can somebody please review and sponsor this fix? >>>>> >>>>> Thanks, >>>>> Yuji >>>>> >>>>> 2016-01-05 17:56 GMT+09:00 KUBOTA Yuji : >>>>>> >>>>>> Hi Jaroslav and core-libs-dev, >>>>>> >>>>>> Thanks, Jaroslav, for your kindness! >>>>>> >>>>>> For core-libs-dev members, here are links to the information about this issue.
>>>>>> >>>>>> * details of problem >>>>>> >>>>>> http://mail.openjdk.java.net/pipermail/jdk9-dev/2015-April/002152.html >>>>>> >>>>>> * patch >>>>>> >>>>>> http://icedtea.classpath.org/people/ykubota/fixLoopAtJMXConnectorFactory/file/e31044f0804f/jdk9.patch >>>>>> >>>>>> * testcase for reproduce >>>>>> >>>>>> http://icedtea.classpath.org/people/ykubota/fixLoopAtJMXConnectorFactory/file/e31044f0804f/testProgram >>>>>> >>>>>> http://mail.openjdk.java.net/pipermail/serviceability-dev/2015-December/018415.html >>>>>> >>>>>> Could you please review these reports? >>>>>> Hope this patch helps to community. >>>>>> >>>>>> Thanks, >>>>>> Yuji >>>>>> >>>>>> 2016-01-04 23:51 GMT+09:00 Jaroslav Bachorik >>>>>> : >>>>>>> >>>>>>> Hi Yuji, >>>>>>> >>>>>>> On 4.1.2016 15:14, KUBOTA Yuji wrote: >>>>>>>> >>>>>>>> >>>>>>>> Hi all, >>>>>>>> >>>>>>>> Could you please review this patch? >>>>>>> >>>>>>> >>>>>>> >>>>>>> Sorry for the long delay. Shanliang has not been present for some time >>>>>>> and >>>>>>> probably this slipped the attention of the others. 
>>>>>>> However, the core-libs mailing list might be a more appropriate place to >>>>>>> review >>>>>>> this change since you are dealing with s.r.t.t.TCPChannel >>>>>>> >>>>>>> (http://icedtea.classpath.org/people/ykubota/fixLoopAtJMXConnectorFactory/file/e31044f0804f/jdk9.patch) >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> -JB- From claes.redestad at oracle.com Thu Mar 3 15:00:20 2016 From: claes.redestad at oracle.com (Claes Redestad) Date: Thu, 3 Mar 2016 16:00:20 +0100 Subject: RFR 8150679: closed/javax/crypto/CryptoPermission/CallerIdentification.sh fails after fix for JDK-8132734 In-Reply-To: <958CB879-5B99-4AD7-8E23-6B08E960EE16@oracle.com> References: <958CB879-5B99-4AD7-8E23-6B08E960EE16@oracle.com> Message-ID: <56D85184.7010204@oracle.com> Hi, On 2016-03-03 12:26, Paul Sandoz wrote: >> On 2 Mar 2016, at 20:12, Steve Drach wrote: >> >> Please review the following fix for JDK-8150679 >> >> webrev: http://cr.openjdk.java.net/~sdrach/8150679/webrev/ Looks OK to me. >> issue: https://bugs.openjdk.java.net/browse/JDK-8150679 >> >> The test was modified to demonstrate the problem. > You are essentially bombing out of MR-JAR functionality if the JarEntry is not an instance of JarFileEntry. That might be ok for a short-term solution, but it might require some further deeper investigation on things that extend JarEntry and how it is used by VerifierStream [*]. I agree with Paul that this needs deeper investigation as a follow-up, but would like to stress that this fix addresses numerous things that are breaking in 9-b108, including many benchmarks. With a number of critical integrations planned in the next couple of weeks I think we need to fast-track a promotion with this fix before that happens so that we can provide reasonable testing. Thanks!
/Claes From forax at univ-mlv.fr Thu Mar 3 15:19:44 2016 From: forax at univ-mlv.fr (forax at univ-mlv.fr) Date: Thu, 3 Mar 2016 16:19:44 +0100 (CET) Subject: RFR: 8147755: ASM should create correct constant tag for invokestatic on handle point to interface static method In-Reply-To: <56D7098D.8080306@oracle.com> References: <56D7098D.8080306@oracle.com> Message-ID: <1687931015.2935950.1457018384303.JavaMail.zimbra@u-pem.fr> comments inlined ... ----- Mail original ----- > De: "Kumar Srinivasan" > À: "core-libs-dev" , "Remi Forax" > Cc: "SUNDARARAJAN.ATHIJEGANNATHAN" , "Michael Haupt" > , "Jaroslav Bachorík" , "Coleen Phillmore" > , "Stas Smirnov" , "harsha wardhana b" > > Envoyé: Mercredi 2 Mars 2016 16:41:01 > Objet: RFR: 8147755: ASM should create correct constant tag for invokestatic on handle point to interface static > method > > Hello Remi, et al., Hi Kumar, > > Webrev: > http://cr.openjdk.java.net/~ksrini/8147755/webrev.00/ > > Can you please approve this patch; it is taken out of ASM's svn repo, > change id 1795, which addresses the problem described in [1]. For anybody else: this revision roughly corresponds to ASM 5.1, which includes: - change non visible StringBuffer to StringBuilder - improve documentation of the Printer API (and related classes) - provide a new Remapper API - add a way to create constant method handles on interfaces (invokestatic/invokespecial), which fixes 8147755. > > Note 1: A couple of @Deprecated annotations and doc comments > have been disabled, because we have a catch-22 that an internal and closed > component depends on these APIs, and the replacement is not available until > we push this patch. A follow up bug [2] has been filed. For ASM, we use @Deprecated when a method is superseded by a new one, not to indicate that calling this method is hazardous, so I'm fine if you want to do the update in two steps. > > Note 2: jprt tested, all core-libs, langtools and nashorn regressions > pass.
The HotSpot team has verified that it addresses their issues. > > Thank you > Kumar Thumbs up from me. Rémi > > [1] https://bugs.openjdk.java.net/browse/JDK-8147755 > [2] https://bugs.openjdk.java.net/browse/JDK-8151056 > From thomas.stuefe at gmail.com Thu Mar 3 16:26:28 2016 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 3 Mar 2016 17:26:28 +0100 Subject: (Round 2) RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all Message-ID: Hi all, https://bugs.openjdk.java.net/browse/JDK-8150460 thanks to all who took the time to review the first version of this fix! This is the new version: http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.02/webrev/ I reworked the fix, trying to add in all the input I got: This fix uses a simple one-dimensional array, preallocated at startup, for low-value file descriptors. Like the code did before. Only for large values of file descriptors does it switch to an overflow table, organized as a two-dimensional sparse array of fixed-size slabs, which are allocated on demand. Only the overflow table is protected by a lock. For 99% of all cases we will be using the plain simple fdTable structure as before. Only for unusually large file descriptor values we will be using this overflow table. Memory footprint is kept low: for small values of RLIMIT_NOFILE, we will only allocate as much space as we need. Only if file descriptor values get large is memory allocated in the overflow table. Note that I avoided the proposed double-checked locking solution: I find it too risky in this place and also unnecessary. When calling getFdEntry(), we will be executing a blocking IO operation afterwards, flanked by two mutex locks (in startOp and endOp). So, I do not think the third mutex lock in getFdEntry will add much, especially since it is only used in case of larger file descriptor values. I also added the fix to bsd_close.c and aix_close.c.
I do not like this code triplication. I briefly played around with unifying this code, but it is more difficult than it seems: the implementations subtly differ between the three platforms, and the Solaris implementation is completely different. It may be a worthwhile cleanup, but that would be a separate issue. I did some artificial tests to check how the code does with many and large file descriptor values; all seemed to work well. I also ran java/net jtreg tests on Linux and AIX. Kind Regards, Thomas From steve.drach at oracle.com Thu Mar 3 17:20:00 2016 From: steve.drach at oracle.com (Steve Drach) Date: Thu, 3 Mar 2016 09:20:00 -0800 Subject: RFR 8150679: closed/javax/crypto/CryptoPermission/CallerIdentification.sh fails after fix for JDK-8132734 In-Reply-To: <958CB879-5B99-4AD7-8E23-6B08E960EE16@oracle.com> References: <958CB879-5B99-4AD7-8E23-6B08E960EE16@oracle.com> Message-ID: > On Mar 3, 2016, at 3:26 AM, Paul Sandoz wrote: > > >> On 2 Mar 2016, at 20:12, Steve Drach wrote: >> >> Please review the following fix for JDK-8150679 >> >> webrev: http://cr.openjdk.java.net/~sdrach/8150679/webrev/ >> issue: https://bugs.openjdk.java.net/browse/JDK-8150679 >> >> The test was modified to demonstrate the problem. > > You are essentially bombing out of MR-JAR functionality if the JarEntry is not an instance of JarFileEntry. If it's not a JarFileEntry, none of the MR functionality has been invoked. > That might be ok for a short-term solution, but it might require some further, deeper investigation on things that extend JarEntry and how it is used by VerifierStream [*]. > > JarFile: > > 895 private JarEntry verifiableEntry(ZipEntry ze) { > 896 if (ze == null) return null; > > You don't need this. The code will throw an NPE elsewhere anyway, and the original code threw an NPE when obtaining the name: Ok, I'll take this out. Feels a bit uncomfortable though. > > return new JarVerifier.VerifierStream( > getManifestFromReference(), > ze instanceof JarFileEntry ? 
> (JarEntry) ze : getJarEntry(ze.getName()), > super.getInputStream(ze), > jv); > > > 897 if (ze instanceof JarFileEntry) { > 898 // assure the name and entry match for verification > 899 return ((JarFileEntry)ze).reifiedEntry(); > 900 } > 901 ze = getJarEntry(ze.getName()); > 902 assert ze instanceof JarEntry; > > This assertion is redundant as the method signature of getJarEntry returns JarEntry. I know it's redundant. It was a statement of fact, but the method signature does the same thing. > > > 903 if (ze instanceof JarFileEntry) { > 904 return ((JarFileEntry)ze).reifiedEntry(); > 905 } > 906 return (JarEntry)ze; > 907 } > > > MultiReleaseJarURLConnection > ... > > Given your changes above I am confused how your test passes for instances of URLJarFileEntry since they cannot be reified. I suspect that it works for regular jar files but not for MR jar files. That's another bug in URLJarFile - it gets a versioned entry that can't be verified. I mentioned this yesterday. I'll write a test and, if warranted, submit a bug on this. > > Paul. > > [*] AFAICT JarVerifier directly accesses the fields JarEntry.signers/certs. Yes, but the overridden fields are the ones of interest. > From dbrosius at mebigfatguy.com Thu Mar 3 18:26:49 2016 From: dbrosius at mebigfatguy.com (Dave Brosius) Date: Thu, 03 Mar 2016 13:26:49 -0500 Subject: Match.appendReplacement with StringBuilder In-Reply-To: References: <958CB879-5B99-4AD7-8E23-6B08E960EE16@oracle.com> Message-ID: <21203c166c7fe7003528521bfffd42a6@baybroadband.net> Greetings, It would be nice if java.util.regex.Matcher had a replacement for Matcher appendReplacement(StringBuffer sb, String replacement) StringBuffer appendTail(StringBuffer sb) that took StringBuilder. 
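For reference, the StringBuffer-based pair being discussed is used in the standard append-replace loop shown below (JDK 9 did ultimately add StringBuilder overloads of both methods):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// The classic append-replace idiom with the StringBuffer-based API:
// appendReplacement copies the text up to each match plus the replacement,
// and appendTail copies whatever follows the last match.
public class AppendReplaceDemo {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("cat");
        Matcher m = p.matcher("one cat two cats in the yard");
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, "dog");
        }
        m.appendTail(sb);
        System.out.println(sb);   // one dog two dogs in the yard
    }
}
```

The StringBuilder overloads follow exactly the same shape; only the buffer type changes.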
From nadeesh.tv at oracle.com Thu Mar 3 18:37:03 2016 From: nadeesh.tv at oracle.com (nadeesh tv) Date: Fri, 04 Mar 2016 00:07:03 +0530 Subject: RFR:JDK-8032051:"ZonedDateTime" class "parse" method fails with short time zone offset ("+01") In-Reply-To: References: <56BCED89.7040007@oracle.com> <56CF2154.6050503@oracle.com> <56D05E0E.6030003@Oracle.com> <56D06876.4020000@Oracle.com> <56D08DA5.5030700@Oracle.com> <56D4692B.1030402@Oracle.com> Message-ID: <56D8844F.1010701@oracle.com> Hi, Stephen, Roger Thanks for the comments. Please see the updated webrev http://cr.openjdk.java.net/~ntv/8032051/webrev.04/ Regards, Nadeesh On 3/1/2016 12:29 AM, Stephen Colebourne wrote: > I'm happy to go back to the spec I proposed before. That spec would > determine colons dynamically only for pattern HH. Otherwise, it would > use the presence/absence of a colon in the pattern as the signal. That > would deal with the ISO-8601 problem and resolve the original issue > (as ISO_OFFSET_DATE_TIME uses HH:MM:ss, which would leniently parse > using colons). > > Writing the spec wording is not easy however. I had: > > When parsing in lenient mode, only the hours are mandatory - minutes > and seconds are optional. The colons are required if the specified > pattern contains a colon. If the specified pattern is "+HH", the > presence of colons is determined by whether the character after the > hour digits is a colon or not. If the offset cannot be parsed then an > exception is thrown unless the section of the formatter is optional. > > which isn't too bad but alternatives are possible. > > Stephen > > > > > On 29 February 2016 at 15:52, Roger Riggs wrote: >> Hi Stephen, >> >> As a fix for the original issue[1], not correctly parsing a ISO defined >> offset, the use of lenient >> was a convenient implementation technique (hack). 
But with the expanded >> definition of lenient, >> it will allow many forms of the offset that are not allowed by the ISO >> specification >> and should not be accepted for DateTimeFormatter.ISO_OFFSET_DATE_TIME. >> In particular, ISO requires the ":" to separate the minutes. >> I'm not sure how to correctly fix the original issue with the new >> specification of lenient offset >> parsing without introducing some more specific implementation information. >> >> >> WRT the lenient parsing mode for appendOffset: >> >> I was considering that the subfields of the offset were to be treated >> leniently but it seems >> you were treating the entire offset field and text as the unit to be treated >> leniently. >> The spec for lenient parsing would be clearer if it were specified as >> allowing any >> of the patterns of appendOffset. The current wording around the character >> after the hour >> may be confusing. >> >> In the specification of appendOffset(pattern, noOffsetText) how about: >> >> "When parsing in lenient mode, the longest valid pattern that matches the >> input is used. Only the hours are mandatory, minutes and seconds are >> optional." >> >> Roger >> >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8032051 >> >> >> >> >> >> On 2/26/2016 1:10 PM, Stephen Colebourne wrote: >>> It is important to also consider the case where the user wants to >>> format using HH:MM but parse seconds if they are provided. >>> >>> As I said above, this is no different to SignStyle, where the user >>> requests something specific on format, but accepts anything on input. >>> >>> The pattern is still used for formatting and strict parsing under >>> these changes. It is effectively ignored in lenient parsing (which is >>> the very definition of leniency). 
>>> >>> Another way to look at it: >>> >>> using a pattern of HH:MM and strict: >>> +02 - disallowed >>> +02:00 - allowed >>> +02:00:00 - disallowed >>> >>> using a pattern of HH:mm and strict: >>> +02 - allowed >>> +02:00 - allowed >>> +02:00:00 - disallowed >>> >>> using any pattern and lenient: >>> +02 - allowed >>> +02:00 - allowed >>> +02:00:00 - allowed >>> >>> This covers pretty much anything a user needs when parsing. >>> >>> Stephen >>> >>> >>> On 26 February 2016 at 17:38, Roger Riggs wrote: >>>> Hi Stephen, >>>> >>>> Even in lenient mode the parser needs to stick to the fields provided in >>>> the >>>> pattern. >>>> If the caller intends to parse seconds, the pattern should include >>>> seconds. >>>> Otherwise the caller has not been able to specify their intent. >>>> That's consistent with lenient mode used in the other fields. >>>> Otherwise, the pattern is irrelevant except for whether it contains a ":" >>>> and makes >>>> the spec nearly useless. >>>> >>>> Roger >>>> >>>> >>>> >>>> On 2/26/2016 12:09 PM, Stephen Colebourne wrote: >>>>> On 26 February 2016 at 15:00, Roger Riggs >>>>> wrote: >>>>>> Hi Stephen, >>>>>> >>>>>> It does not seem natural to me with a pattern of HHMM for it to parse >>>>>> more >>>>>> than 4 digits. >>>>>> I can see lenient modifying the behavior as it it were HHmm, but there >>>>>> is >>>>>> no >>>>>> indication in the pattern >>>>>> that seconds would be considered. How it would it be implied from the >>>>>> spec? >>>>> The spec is being expanded to define what happens. Previously it >>>>> didn't define it at all, and would throw an error. >>>>> >>>>> Lenient parsing typically accepts much more than the strict parsing. >>>>> >>>>> When parsing numbers, you may set the SignStyle to NEVER, but the sign >>>>> will still be parsed in lenient mode >>>>> >>>>> When parsing text, you may select the short output format, but any >>>>> length of text will be parsed in lenient mode. 
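The strict/lenient table above can be exercised directly. A small sketch, assuming a JDK that contains the JDK-8032051 change whose behaviour is being specified in this thread (i.e. 9 or later):

```java
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.DateTimeParseException;

public class LenientOffsetDemo {
    public static void main(String[] args) {
        // Lenient: with pattern "+HH:MM", only the hours are mandatory,
        // so "+02", "+02:00" and "+02:00:00" all parse to the same offset.
        DateTimeFormatter lenient = new DateTimeFormatterBuilder()
                .parseLenient()
                .appendOffset("+HH:MM", "Z")
                .toFormatter();
        for (String s : new String[] {"+02", "+02:00", "+02:00:00"}) {
            ZoneOffset off = ZoneOffset.from(lenient.parse(s));
            System.out.println(s + " -> " + off.getTotalSeconds() + "s");
        }
        // Strict: the same pattern rejects "+02", since MM is mandatory.
        DateTimeFormatter strict = new DateTimeFormatterBuilder()
                .appendOffset("+HH:MM", "Z")
                .toFormatter();
        try {
            strict.parse("+02");
            System.out.println("unexpected: strict parse succeeded");
        } catch (DateTimeParseException e) {
            System.out.println("+02 rejected in strict mode, as expected");
        }
    }
}
```

This matches the table: "+02", "+02:00" and "+02:00:00" are all allowed under lenient parsing, while strict parsing enforces the mandatory elements of the pattern.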
>>>>> >>>>> As such, it is very much in line with the behavour of the API to parse >>>>> a much broader format than the one requested in lenient mode. (None of >>>>> this affects strict mode). >>>>> >>>>> Stephen >>>>> >>>>> >>>>>> In the original issue, appendOffsetId is defined as using the +HH:MM:ss >>>>>> pattern and >>>>>> specific to ISO the MM should be allowed to be optional. There is no >>>>>> question of parsing >>>>>> extra digits not included in the requested pattern. >>>>>> >>>>>> Separately, this is specifying the new lenient behavior of >>>>>> appendOffset(pattern, noffsetText). >>>>>> In that case, I don't think it will be understood that patterns >>>>>> 'shorter' >>>>>> than the input will >>>>>> gobble up extra digits and ':'s. >>>>>> >>>>>> Roger >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On 2/26/2016 9:42 AM, Stephen Colebourne wrote: >>>>>> >>>>>> Lenient can be however lenient we define it to be. Allowing minutes >>>>>> and seconds to be parsed when not specified in the pattern is the key >>>>>> part of the change. Whether the parser copes with both colons and >>>>>> no-colons is the choice at hand here. It seems to me that since the >>>>>> parser can easily handle figuring out whether the colon is present or >>>>>> not, we should just allow the parser to be fully lenient. >>>>>> >>>>>> Stephen >>>>>> >>>>>> >>>>>> On 26 February 2016 at 14:15, Roger Riggs >>>>>> wrote: >>>>>> >>>>>> HI Stephen, >>>>>> >>>>>> How lenient is lenient supposed to be? Looking at the offset test >>>>>> cases, >>>>>> it >>>>>> seems to allow minutes >>>>>> and seconds digits to be parsed even if the pattern did not include >>>>>> them. 
>>>>>> >>>>>> + @DataProvider(name="lenientOffsetParseData") >>>>>> + Object[][] data_lenient_offset_parse() { >>>>>> + return new Object[][] { >>>>>> + {"+HH", "+01", 3600}, >>>>>> + {"+HH", "+0101", 3660}, >>>>>> + {"+HH", "+010101", 3661}, >>>>>> + {"+HH", "+01", 3600}, >>>>>> + {"+HH", "+01:01", 3660}, >>>>>> + {"+HH", "+01:01:01", 3661}, >>>>>> >>>>>> Thanks, Roger >>>>>> >>>>>> >>>>>> >>>>>> On 2/26/2016 6:16 AM, Stephen Colebourne wrote: >>>>>> >>>>>> I don't think this is quite right. >>>>>> >>>>>> if ((length > position + 3) && (text.charAt(position + 3) == ':')) { >>>>>> parseType = 10; >>>>>> } >>>>>> >>>>>> This code will *always* select type 10 (colons) if a colon is found at >>>>>> position+3. Whereas the spec now says that it should only do this if >>>>>> the pattern is "HH". For other patterns, the colon/no-colon choice is >>>>>> defined to be based on the pattern. >>>>>> >>>>>> That said, I'm thinking it is better to make the spec more lenient to >>>>>> match the behaviour as implemented: >>>>>> >>>>>> >>>>>> When parsing in lenient mode, only the hours are mandatory - minutes >>>>>> and seconds are optional. If the character after the hour digits is a >>>>>> colon >>>>>> then the parser will parse using the pattern "HH:mm:ss", otherwise the >>>>>> parser will parse using the pattern "HHmmss". >>>>>> >>>>>> >>>>>> Additional TCKDateTimeFormatterBuilder tests will be needed to >>>>>> demonstrate the above. There should also be a test for data following >>>>>> the lenient parse. 
The following should all succeed: >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> DateTimeFormatterBuilder().parseLenient().appendOffset("HH:MM").appendZoneId(); >>>>>> "+01:00Europe/London" >>>>>> "+0100Europe/London" >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> DateTimeFormatterBuilder().parseLenient().appendOffset("HH:MM").appendLiteral(":").appendZoneId(); >>>>>> "+01:Europe/London" >>>>>> >>>>>> Note this special case, where the colon affects the parse type, but is >>>>>> not ultimately part of the offset, thus it is left to match the >>>>>> appendLiteral(":") >>>>>> >>>>>> You may want to think of some additional nasty edge cases! >>>>>> >>>>>> Stephen >>>>>> >>>>>> On 25 February 2016 at 15:44, nadeesh tv wrote: >>>>>> >>>>>> Hi all, >>>>>> Please see the updated webrev >>>>>> http://cr.openjdk.java.net/~ntv/8032051/webrev.02/ >>>>>> >>>>>> Thanks and Regards, >>>>>> Nadeesh >>>>>> >>>>>> On 2/23/2016 5:17 PM, Stephen Colebourne wrote: >>>>>> >>>>>> Thanks for the changes. >>>>>> >>>>>> In `DateTimeFormatter`, the code should be >>>>>> >>>>>> .parseLenient() >>>>>> .appendOffsetId() >>>>>> .parseStrict() >>>>>> >>>>>> and the same in the other case. This ensures that existing callers who >>>>>> then embed the formatter in another formatter (like the >>>>>> ZONED_DATE_TIME constant) are unaffected. >>>>>> >>>>>> >>>>>> The logic for lenient parsing does not look right as it only handles >>>>>> types 5 and 6. This table shows the mappings needed: >>>>>> >>>>>> "+HH", -> "+HHmmss" or "+HH:mm:ss" >>>>>> "+HHmm", -> "+HHmmss", >>>>>> "+HH:mm", -> "+HH:mm:ss", >>>>>> "+HHMM", -> "+HHmmss", >>>>>> "+HH:MM", -> "+HH:mm:ss", >>>>>> "+HHMMss", -> "+HHmmss", >>>>>> "+HH:MM:ss", -> "+HH:mm:ss", >>>>>> "+HHMMSS", -> "+HHmmss", >>>>>> "+HH:MM:SS", -> "+HH:mm:ss", >>>>>> "+HHmmss", >>>>>> "+HH:mm:ss", >>>>>> >>>>>> Note that the "+HH" pattern is a special case, as we don't know >>>>>> whether to use the colon or non-colon pattern. 
Whether to require >>>>>> colon or not is based on whether the next character after the HH is a >>>>>> colon or not. >>>>>> >>>>>> Proposed appendOffsetId() Javadoc: >>>>>> >>>>>> * Appends the zone offset, such as '+01:00', to the formatter. >>>>>> *

>>>>>> * This appends an instruction to format/parse the offset ID to the >>>>>> builder. >>>>>> * This is equivalent to calling {@code appendOffset("+HH:MM:ss", "Z")}. >>>>>> * See {@link #appendOffset(String, String)} for details on formatting >>>>>> and parsing. >>>>>> >>>>>> Proposed appendOffset(String, String) Javadoc: >>>>>> >>>>>> * During parsing, the offset... >>>>>> >>>>>> changed to: >>>>>> >>>>>> * When parsing in strict mode, the input must contain the mandatory >>>>>> and optional elements are defined by the specified pattern. >>>>>> * If the offset cannot be parsed then an exception is thrown unless >>>>>> the section of the formatter is optional. >>>>>> *

>>>>>> * When parsing in lenient mode, only the hours are mandatory - minutes >>>>>> and seconds are optional. >>>>>> * The colons are required if the specified pattern contains a colon. >>>>>> * If the specified pattern is "+HH", the presence of colons is >>>>>> determined by whether the character after the hour digits is a colon >>>>>> or not. >>>>>> * If the offset cannot be parsed then an exception is thrown unless >>>>>> the section of the formatter is optional. >>>>>> >>>>>> thanks and sorry for delay >>>>>> Stephen >>>>>> >>>>>> >>>>>> >>>>>> On 11 February 2016 at 20:22, nadeesh tv wrote: >>>>>> >>>>>> Hi all, >>>>>> >>>>>> Please review a fix for >>>>>> >>>>>> Bug Id https://bugs.openjdk.java.net/browse/JDK-8032051 >>>>>> >>>>>> webrev http://cr.openjdk.java.net/~ntv/8032051/webrev.01/ >>>>>> >>>>>> -- >>>>>> Thanks and Regards, >>>>>> Nadeesh TV >>>>>> >>>>>> -- >>>>>> Thanks and Regards, >>>>>> Nadeesh TV >>>>>> >>>>>> -- Thanks and Regards, Nadeesh TV From xueming.shen at oracle.com Thu Mar 3 18:45:39 2016 From: xueming.shen at oracle.com (Xueming Shen) Date: Thu, 03 Mar 2016 10:45:39 -0800 Subject: Match.appendReplacement with StringBuilder In-Reply-To: <21203c166c7fe7003528521bfffd42a6@baybroadband.net> References: <958CB879-5B99-4AD7-8E23-6B08E960EE16@oracle.com> <21203c166c7fe7003528521bfffd42a6@baybroadband.net> Message-ID: <56D88653.3090508@oracle.com> On 3/3/16, 10:26 AM, Dave Brosius wrote: > Greetings, > > It would be nice if java.util.regex.Matcher had a replacement for > > Matcher appendReplacement(StringBuffer sb, String > replacement) > StringBuffer appendTail(StringBuffer sb) > > > That took StringBuilder. we have added that in 9, right? 
From nadeesh.tv at oracle.com Thu Mar 3 18:54:47 2016 From: nadeesh.tv at oracle.com (nadeesh tv) Date: Fri, 04 Mar 2016 00:24:47 +0530 Subject: RFR:JDK-8030864:Add an efficient getDateTimeMillis method to java.time In-Reply-To: <56D73637.3090006@oracle.com> References: <56D6C0B7.10205@oracle.com> <56D70406.7010000@oracle.com> <56D7317F.3000804@Oracle.com> <56D73637.3090006@oracle.com> Message-ID: <56D88877.4010202@oracle.com> Hi, Roger - Thanks for the comments Made the necessary changes in the spec Please see the updated webrev http://cr.openjdk.java.net/~ntv/8030864/webrev.05/ On 3/3/2016 12:21 AM, nadeesh tv wrote: > Hi , > > Please see the updated webrev > http://cr.openjdk.java.net/~ntv/8030864/webrev.03/ > > Thanks and Regards, > Nadeesh > > On 3/3/2016 12:01 AM, Roger Riggs wrote: >> Hi Nadeesh, >> >> Editorial comments: >> >> Chronology.java: 716+ >> "Java epoch" -> "epoch" >> "minute, second and zoneOffset" -> "minute, second*,* and >> zoneOffset" (add a comma; two places) >> "caluculated using given era, prolepticYear," -> "calculated using >> the era, year-of-era," >> "to represent" -> remove as unnecessary in all places >> >> IsoChronology: >> "to represent" -> remove as unnecessary in all places >> >> These should be fixed to cleanup the specification. >> >> The implementation and the tests look fine. >> >> Thanks, Roger >> >> >> >> On 3/2/2016 10:17 AM, nadeesh tv wrote: >>> Hi, >>> Stephen, Thanks for the comments. >>> Please see the updated webrev >>> http://cr.openjdk.java.net/~ntv/8030864/webrev.02/ >>> >>> Regards, >>> Nadeesh TV >>> >>> On 3/2/2016 5:41 PM, Stephen Colebourne wrote: >>>> Remove "Subclass can override the default implementation for a more >>>> efficient implementation." as it adds no value. 
>>>> >>>> In the default implementation of >>>> >>>> epochSecond(Era era, int yearofEra, int month, int dayOfMonth, >>>> int hour, int minute, int second, ZoneOffset zoneOffset) >>>> >>>> use >>>> >>>> prolepticYear(era, yearOfEra) >>>> >>>> and call the other new epochSecond method. See dateYearDay(Era era, >>>> int yearOfEra, int dayOfYear) for the design to copy. If this is done, >>>> then there is no need to override the method in IsoChronology. >>>> >>>> In the test, >>>> >>>> LocalDate.MIN.with(chronoLd) >>>> >>>> could be >>>> >>>> LocalDate.from(chronoLd) >>>> >>>> Thanks >>>> Stephen >>>> >>>> >>>> >>>> >>>> >>>> >>>> On 2 March 2016 at 10:30, nadeesh tv wrote: >>>>> Hi all, >>>>> >>>>> Please review an enhancement for a garbage free epochSecond method. >>>>> >>>>> Bug ID: https://bugs.openjdk.java.net/browse/JDK-8030864 >>>>> >>>>> webrev: http://cr.openjdk.java.net/~ntv/8030864/webrev.01 >>>>> >>>>> -- >>>>> Thanks and Regards, >>>>> Nadeesh TV >>>>> >>> >> > -- Thanks and Regards, Nadeesh TV From martinrb at google.com Thu Mar 3 19:40:36 2016 From: martinrb at google.com (Martin Buchholz) Date: Thu, 3 Mar 2016 11:40:36 -0800 Subject: RFR: jsr166 jdk9 integration wave 5 In-Reply-To: References: <56D61A45.7040005@oracle.com> Message-ID: Committing. On Thu, Mar 3, 2016 at 3:48 AM, Paul Sandoz wrote: > java/util/concurrent/ScheduledThreadPoolExecutor/DelayOverflow.java > ? > > - pool.schedule(keepPoolBusy, 0, TimeUnit.SECONDS); > + pool.schedule(keepPoolBusy, 0, DAYS); > > It probably does not matter that you changed the units here? Probably wouldn't have happened if DAYS was spelled HECTOKILOSECONDS ! 
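Returning to the JDK-8030864 thread above: the point of the garbage-free epochSecond methods is to compute the same value as the object-based path without allocating intermediate date-time objects. A quick check, assuming a JDK where these Chronology methods exist (9 or later):

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.chrono.IsoChronology;

public class EpochSecondDemo {
    public static void main(String[] args) {
        ZoneOffset offset = ZoneOffset.ofHours(5);
        // Garbage-free path: no intermediate LocalDate/LocalDateTime created.
        long direct = IsoChronology.INSTANCE.epochSecond(2016, 3, 3, 12, 30, 0, offset);
        // Object-based path, for comparison.
        long viaObjects = LocalDateTime.of(2016, 3, 3, 12, 30, 0).toEpochSecond(offset);
        System.out.println(direct == viaObjects);   // true
    }
}
```

The era-based overload discussed in the review, epochSecond(era, yearOfEra, ...), is specified to delegate through prolepticYear(era, yearOfEra) to the same calculation.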
From Roger.Riggs at Oracle.com Thu Mar 3 19:48:19 2016 From: Roger.Riggs at Oracle.com (Roger Riggs) Date: Thu, 3 Mar 2016 14:48:19 -0500 Subject: [DING] Re: [PING] Potential infinite waiting at JMXConnection#createConnection In-Reply-To: References: Message-ID: <56D89503.1080909@Oracle.com> Hi Yuji, An issue has been created to track this issue: JDK-8151212 Flush in RMI TCPChannel createConnection can hang indefinitely Please send the patch and the reproducer in the body of an email and I'll attach them to the bug report. Thanks, Roger On 2/3/2016 12:27 PM, KUBOTA Yuji wrote: > Hi all, > > Could someone please review and sponsor this fix? > I have written the details of this issue again below. Please review it. > > =Problem= > Potential infinite waiting at TCPChannel#createConnection. > > This method flushes the DataOutputStream without a socket > timeout set when choosing the stream protocol [1]. If the connection is lost > or the destination server does not return a response during the flush, > this method waits forever because the timeout is left at the > default value of SO_TIMEOUT, i.e., infinite. > > [1]: http://hg.openjdk.java.net/jdk9/dev/jdk/file/7adef1c3afd5/src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPChannel.java#l227 > > I think this issue is rare, but serious. > > =Reproduce= > I wrote a test program to reproduce it. You can reproduce the issue as follows. > > * hg clone http://icedtea.classpath.org/people/ykubota/fixLoopAtJMXConnectorFactory/ > * cd fixLoopAtJMXConnectorFactory; mvn package > * set "stop_time" in debugcontrol.properties if needed. > * java -cp .:target/debugcontrol-1.0-SNAPSHOT.jar debugcontrol.DebugController > > This program keeps waiting at TCPChannel#createConnection due to > this issue. After "debugcontroltest.stop_time" ms, the program releases > the wait by sending quit to the jdb that is stopping the destination > server. Finally, it returns 2. 
> > =Solution= > Set timeout by using property-configured value: > sun.rmi.transport.tcp.responseTimeout. > > My patch is below. > http://icedtea.classpath.org/people/ykubota/fixLoopAtJMXConnectorFactory/file/e31044f0804f/jdk9.patch > > If you run the test program with modified JDK9 by my patch, the test > program will get java.net.SocketTimeoutException after the connection > timeout happen, then return 0. > > Thanks, > Yuji. > > > 2016-01-13 23:31 GMT+09:00 KUBOTA Yuji : >> Hi all, >> >> Can somebody please review and sponsor this fix ? >> >> Thanks, >> Yuji >> >> 2016-01-05 17:56 GMT+09:00 KUBOTA Yuji : >>> Hi Jaroslav and core-libs-dev, >>> >>> Thank Jaroslav for your kindness! >>> >>> For core-libs-dev members, links the information about this issue. >>> >>> * details of problem >>> http://mail.openjdk.java.net/pipermail/jdk9-dev/2015-April/002152.html >>> >>> * patch >>> http://icedtea.classpath.org/people/ykubota/fixLoopAtJMXConnectorFactory/file/e31044f0804f/jdk9.patch >>> >>> * testcase for reproduce >>> http://icedtea.classpath.org/people/ykubota/fixLoopAtJMXConnectorFactory/file/e31044f0804f/testProgram >>> http://mail.openjdk.java.net/pipermail/serviceability-dev/2015-December/018415.html >>> >>> Could you please review these reports? >>> Hope this patch helps to community. >>> >>> Thanks, >>> Yuji >>> >>> 2016-01-04 23:51 GMT+09:00 Jaroslav Bachorik : >>>> Hi Yuji, >>>> >>>> On 4.1.2016 15:14, KUBOTA Yuji wrote: >>>>> Hi all, >>>>> >>>>> Could you please review this patch? >>>> >>>> Sorry for the long delay. Shanliang has not been present for some time and >>>> probably this slipped the attention of the others. 
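The failure mode and the effect of the proposed fix can be demonstrated in isolation. This is a self-contained sketch, not the actual RMI code: a peer accepts the connection but never replies, so a read without SO_TIMEOUT would block forever, while a read with SO_TIMEOUT set fails fast:

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// A peer that accepts the connection but never sends a response. Without a
// socket timeout, the client read blocks indefinitely (the situation described
// for TCPChannel#createConnection); with SO_TIMEOUT set, it fails fast.
public class SoTimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread peer = new Thread(() -> {
                try (Socket s = server.accept()) {
                    Thread.sleep(1000);            // never writes a byte
                } catch (Exception e) { /* demo only */ }
            });
            peer.start();
            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                client.setSoTimeout(200);          // the essence of the proposed fix
                client.getOutputStream().flush();
                client.getInputStream().read();    // would hang forever without the timeout
                System.out.println("unexpected: peer responded");
            } catch (SocketTimeoutException e) {
                System.out.println("read timed out, as expected");
            }
            peer.join();
        }
    }
}
```

The patch applies the same idea inside createConnection, using the existing property-configured value sun.rmi.transport.tcp.responseTimeout to bound the handshake instead of a hard-coded number.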
>>>> >>>> However, core-libs mailing list might be more appropriate place to review >>>> this change since you are dealing with s.r.t.t.TCPChannel >>>> (http://icedtea.classpath.org/people/ykubota/fixLoopAtJMXConnectorFactory/file/e31044f0804f/jdk9.patch) >>>> >>>> Regards, >>>> >>>> -JB- From Roger.Riggs at Oracle.com Thu Mar 3 20:59:55 2016 From: Roger.Riggs at Oracle.com (Roger Riggs) Date: Thu, 3 Mar 2016 15:59:55 -0500 Subject: RFR:JDK-8032051:"ZonedDateTime" class "parse" method fails with short time zone offset ("+01") In-Reply-To: <56D8844F.1010701@oracle.com> References: <56BCED89.7040007@oracle.com> <56CF2154.6050503@oracle.com> <56D05E0E.6030003@Oracle.com> <56D06876.4020000@Oracle.com> <56D08DA5.5030700@Oracle.com> <56D4692B.1030402@Oracle.com> <56D8844F.1010701@oracle.com> Message-ID: <56D8A5CB.80107@Oracle.com> Hi Nadeesh, Looks good. Thanks, Roger On 3/3/2016 1:37 PM, nadeesh tv wrote: > Hi, > > Stephen, Roger Thanks for the comments. > > Please see the updated webrev > http://cr.openjdk.java.net/~ntv/8032051/webrev.04/ > > > Regards, > Nadeesh > > > On 3/1/2016 12:29 AM, Stephen Colebourne wrote: >> I'm happy to go back to the spec I proposed before. That spec would >> determine colons dynamically only for pattern HH. Otherwise, it would >> use the presence/absence of a colon in the pattern as the signal. That >> would deal with the ISO-8601 problem and resolve the original issue >> (as ISO_OFFSET_DATE_TIME uses HH:MM:ss, which would leniently parse >> using colons). >> >> Writing the spec wording is not easy however. I had: >> >> When parsing in lenient mode, only the hours are mandatory - minutes >> and seconds are optional. The colons are required if the specified >> pattern contains a colon. If the specified pattern is "+HH", the >> presence of colons is determined by whether the character after the >> hour digits is a colon or not. If the offset cannot be parsed then an >> exception is thrown unless the section of the formatter is optional. 
>> >> which isn't too bad but alternatives are possible. >> >> Stephen >> >> >> >> >> On 29 February 2016 at 15:52, Roger Riggs >> wrote: >>> Hi Stephen, >>> >>> As a fix for the original issue[1], not correctly parsing a ISO defined >>> offset, the use of lenient >>> was a convenient implementation technique (hack). But with the >>> expanded >>> definition of lenient, >>> it will allow many forms of the offset that are not allowed by the ISO >>> specification >>> and should not be accepted forDateTimeFormatter. ISO_OFFSET_DATE_TIME. >>> In particular, ISO requires the ":" to separate the minutes. >>> I'm not sure how to correctly fix the original issue with the new >>> specification of lenient offset >>> parsing without introducing some more specific implementation >>> information. >>> >>> >>> WRT the lenient parsing mode for appendOffset: >>> >>> I was considering that the subfields of the offset were to be treated >>> leniently but it seems >>> you were treating the entire offset field and text as the unit to be >>> treated >>> leniently. >>> The spec for lenient parsing would be clearer if it were specified as >>> allowing any >>> of the patterns of appendOffset. The current wording around the >>> character >>> after the hour >>> may be confusing. >>> >>> In the specification of appendOffset(pattern, noOffsetText) how about: >>> >>> "When parsing in lenient mode, the longest valid pattern that >>> matches the >>> input is used. Only the hours are mandatory, minutes and seconds are >>> optional." >>> >>> Roger >>> >>> >>> [1] https://bugs.openjdk.java.net/browse/JDK-8032051 >>> >>> >>> >>> >>> >>> On 2/26/2016 1:10 PM, Stephen Colebourne wrote: >>>> It is important to also consider the case where the user wants to >>>> format using HH:MM but parse seconds if they are provided. >>>> >>>> As I said above, this is no different to SignStyle, where the user >>>> requests something specific on format, but accepts anything on input. 
>>>> >>>> The pattern is still used for formatting and strict parsing under >>>> these changes. It is effectively ignored in lenient parsing (which is >>>> the very definition of leniency). >>>> >>>> Another way to look at it: >>>> >>>> using a pattern of HH:MM and strict: >>>> +02 - disallowed >>>> +02:00 - allowed >>>> +02:00:00 - disallowed >>>> >>>> using a pattern of HH:mm and strict: >>>> +02 - allowed >>>> +02:00 - allowed >>>> +02:00:00 - disallowed >>>> >>>> using any pattern and lenient: >>>> +02 - allowed >>>> +02:00 - allowed >>>> +02:00:00 - allowed >>>> >>>> This covers pretty much anything a user needs when parsing. >>>> >>>> Stephen >>>> >>>> >>>> On 26 February 2016 at 17:38, Roger Riggs >>>> wrote: >>>>> Hi Stephen, >>>>> >>>>> Even in lenient mode the parser needs to stick to the fields >>>>> provided in >>>>> the >>>>> pattern. >>>>> If the caller intends to parse seconds, the pattern should include >>>>> seconds. >>>>> Otherwise the caller has not been able to specify their intent. >>>>> That's consistent with lenient mode used in the other fields. >>>>> Otherwise, the pattern is irrelevant except for whether it >>>>> contains a ":" >>>>> and makes >>>>> the spec nearly useless. >>>>> >>>>> Roger >>>>> >>>>> >>>>> >>>>> On 2/26/2016 12:09 PM, Stephen Colebourne wrote: >>>>>> On 26 February 2016 at 15:00, Roger Riggs >>>>>> wrote: >>>>>>> Hi Stephen, >>>>>>> >>>>>>> It does not seem natural to me with a pattern of HHMM for it to >>>>>>> parse >>>>>>> more >>>>>>> than 4 digits. >>>>>>> I can see lenient modifying the behavior as it it were HHmm, but >>>>>>> there >>>>>>> is >>>>>>> no >>>>>>> indication in the pattern >>>>>>> that seconds would be considered. How it would it be implied >>>>>>> from the >>>>>>> spec? >>>>>> The spec is being expanded to define what happens. Previously it >>>>>> didn't define it at all, and would throw an error. >>>>>> >>>>>> Lenient parsing typically accepts much more than the strict parsing. 
>>>>>> >>>>>> When parsing numbers, you may set the SignStyle to NEVER, but the >>>>>> sign >>>>>> will still be parsed in lenient mode >>>>>> >>>>>> When parsing text, you may select the short output format, but any >>>>>> length of text will be parsed in lenient mode. >>>>>> >>>>>> As such, it is very much in line with the behavour of the API to >>>>>> parse >>>>>> a much broader format than the one requested in lenient mode. >>>>>> (None of >>>>>> this affects strict mode). >>>>>> >>>>>> Stephen >>>>>> >>>>>> >>>>>>> In the original issue, appendOffsetId is defined as using the >>>>>>> +HH:MM:ss >>>>>>> pattern and >>>>>>> specific to ISO the MM should be allowed to be optional. There >>>>>>> is no >>>>>>> question of parsing >>>>>>> extra digits not included in the requested pattern. >>>>>>> >>>>>>> Separately, this is specifying the new lenient behavior of >>>>>>> appendOffset(pattern, noffsetText). >>>>>>> In that case, I don't think it will be understood that patterns >>>>>>> 'shorter' >>>>>>> than the input will >>>>>>> gobble up extra digits and ':'s. >>>>>>> >>>>>>> Roger >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 2/26/2016 9:42 AM, Stephen Colebourne wrote: >>>>>>> >>>>>>> Lenient can be however lenient we define it to be. Allowing minutes >>>>>>> and seconds to be parsed when not specified in the pattern is >>>>>>> the key >>>>>>> part of the change. Whether the parser copes with both colons and >>>>>>> no-colons is the choice at hand here. It seems to me that since the >>>>>>> parser can easily handle figuring out whether the colon is >>>>>>> present or >>>>>>> not, we should just allow the parser to be fully lenient. >>>>>>> >>>>>>> Stephen >>>>>>> >>>>>>> >>>>>>> On 26 February 2016 at 14:15, Roger Riggs >>>>>>> wrote: >>>>>>> >>>>>>> HI Stephen, >>>>>>> >>>>>>> How lenient is lenient supposed to be? 
Looking at the offset test >>>>>>> cases, >>>>>>> it >>>>>>> seems to allow minutes >>>>>>> and seconds digits to be parsed even if the pattern did not include >>>>>>> them. >>>>>>> >>>>>>> + @DataProvider(name="lenientOffsetParseData") >>>>>>> + Object[][] data_lenient_offset_parse() { >>>>>>> + return new Object[][] { >>>>>>> + {"+HH", "+01", 3600}, >>>>>>> + {"+HH", "+0101", 3660}, >>>>>>> + {"+HH", "+010101", 3661}, >>>>>>> + {"+HH", "+01", 3600}, >>>>>>> + {"+HH", "+01:01", 3660}, >>>>>>> + {"+HH", "+01:01:01", 3661}, >>>>>>> >>>>>>> Thanks, Roger >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 2/26/2016 6:16 AM, Stephen Colebourne wrote: >>>>>>> >>>>>>> I don't think this is quite right. >>>>>>> >>>>>>> if ((length > position + 3) && (text.charAt(position + 3) == >>>>>>> ':')) { >>>>>>> parseType = 10; >>>>>>> } >>>>>>> >>>>>>> This code will *always* select type 10 (colons) if a colon is >>>>>>> found at >>>>>>> position+3. Whereas the spec now says that it should only do >>>>>>> this if >>>>>>> the pattern is "HH". For other patterns, the colon/no-colon >>>>>>> choice is >>>>>>> defined to be based on the pattern. >>>>>>> >>>>>>> That said, I'm thinking it is better to make the spec more >>>>>>> lenient to >>>>>>> match the behaviour as implemented: >>>>>>> >>>>>>> >>>>>>> When parsing in lenient mode, only the hours are mandatory - >>>>>>> minutes >>>>>>> and seconds are optional. If the character after the hour digits >>>>>>> is a >>>>>>> colon >>>>>>> then the parser will parse using the pattern "HH:mm:ss", >>>>>>> otherwise the >>>>>>> parser will parse using the pattern "HHmmss". >>>>>>> >>>>>>> >>>>>>> Additional TCKDateTimeFormatterBuilder tests will be needed to >>>>>>> demonstrate the above. There should also be a test for data >>>>>>> following >>>>>>> the lenient parse. 
The following should all succeed: >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> DateTimeFormatterBuilder().parseLenient().appendOffset("HH:MM").appendZoneId(); >>>>>>> >>>>>>> "+01:00Europe/London" >>>>>>> "+0100Europe/London" >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> DateTimeFormatterBuilder().parseLenient().appendOffset("HH:MM").appendLiteral(":").appendZoneId(); >>>>>>> >>>>>>> "+01:Europe/London" >>>>>>> >>>>>>> Note this special case, where the colon affects the parse type, >>>>>>> but is >>>>>>> not ultimately part of the offset, thus it is left to match the >>>>>>> appendLiteral(":") >>>>>>> >>>>>>> You may want to think of some additional nasty edge cases! >>>>>>> >>>>>>> Stephen >>>>>>> >>>>>>> On 25 February 2016 at 15:44, nadeesh tv >>>>>>> wrote: >>>>>>> >>>>>>> Hi all, >>>>>>> Please see the updated webrev >>>>>>> http://cr.openjdk.java.net/~ntv/8032051/webrev.02/ >>>>>>> >>>>>>> Thanks and Regards, >>>>>>> Nadeesh >>>>>>> >>>>>>> On 2/23/2016 5:17 PM, Stephen Colebourne wrote: >>>>>>> >>>>>>> Thanks for the changes. >>>>>>> >>>>>>> In `DateTimeFormatter`, the code should be >>>>>>> >>>>>>> .parseLenient() >>>>>>> .appendOffsetId() >>>>>>> .parseStrict() >>>>>>> >>>>>>> and the same in the other case. This ensures that existing >>>>>>> callers who >>>>>>> then embed the formatter in another formatter (like the >>>>>>> ZONED_DATE_TIME constant) are unaffected. >>>>>>> >>>>>>> >>>>>>> The logic for lenient parsing does not look right as it only >>>>>>> handles >>>>>>> types 5 and 6. 
This table shows the mappings needed: >>>>>>> >>>>>>> "+HH", -> "+HHmmss" or "+HH:mm:ss" >>>>>>> "+HHmm", -> "+HHmmss", >>>>>>> "+HH:mm", -> "+HH:mm:ss", >>>>>>> "+HHMM", -> "+HHmmss", >>>>>>> "+HH:MM", -> "+HH:mm:ss", >>>>>>> "+HHMMss", -> "+HHmmss", >>>>>>> "+HH:MM:ss", -> "+HH:mm:ss", >>>>>>> "+HHMMSS", -> "+HHmmss", >>>>>>> "+HH:MM:SS", -> "+HH:mm:ss", >>>>>>> "+HHmmss", >>>>>>> "+HH:mm:ss", >>>>>>> >>>>>>> Note that the "+HH" pattern is a special case, as we don't know >>>>>>> whether to use the colon or non-colon pattern. Whether to require >>>>>>> colon or not is based on whether the next character after the HH >>>>>>> is a >>>>>>> colon or not. >>>>>>> >>>>>>> Proposed appendOffsetId() Javadoc: >>>>>>> >>>>>>> * Appends the zone offset, such as '+01:00', to the formatter. >>>>>>> *

>>>>>>> * This appends an instruction to format/parse the offset ID to the builder. >>>>>>> * This is equivalent to calling {@code appendOffset("+HH:MM:ss", >>>>>>> "Z")}. >>>>>>> * See {@link #appendOffset(String, String)} for details on >>>>>>> formatting >>>>>>> and parsing. >>>>>>> >>>>>>> Proposed appendOffset(String, String) Javadoc: >>>>>>> >>>>>>> * During parsing, the offset... >>>>>>> >>>>>>> changed to: >>>>>>> >>>>>>> * When parsing in strict mode, the input must contain the mandatory >>>>>>> and optional elements as defined by the specified pattern. >>>>>>> * If the offset cannot be parsed then an exception is thrown unless >>>>>>> the section of the formatter is optional. >>>>>>> *

>>>>>>> * When parsing in lenient mode, only the hours are mandatory - >>>>>>> minutes >>>>>>> and seconds are optional. >>>>>>> * The colons are required if the specified pattern contains a >>>>>>> colon. >>>>>>> * If the specified pattern is "+HH", the presence of colons is >>>>>>> determined by whether the character after the hour digits is a >>>>>>> colon >>>>>>> or not. >>>>>>> * If the offset cannot be parsed then an exception is thrown unless >>>>>>> the section of the formatter is optional. >>>>>>> >>>>>>> thanks and sorry for delay >>>>>>> Stephen >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 11 February 2016 at 20:22, nadeesh tv >>>>>>> wrote: >>>>>>> >>>>>>> Hi all, >>>>>>> >>>>>>> Please review a fix for >>>>>>> >>>>>>> Bug Id https://bugs.openjdk.java.net/browse/JDK-8032051 >>>>>>> >>>>>>> webrev http://cr.openjdk.java.net/~ntv/8032051/webrev.01/ >>>>>>> >>>>>>> -- >>>>>>> Thanks and Regards, >>>>>>> Nadeesh TV >>>>>>> >>>>>>> -- >>>>>>> Thanks and Regards, >>>>>>> Nadeesh TV >>>>>>> >>>>>>> > From joe.darcy at oracle.com Thu Mar 3 23:01:48 2016 From: joe.darcy at oracle.com (joe darcy) Date: Thu, 3 Mar 2016 15:01:48 -0800 Subject: JDK 9 RFR of JDK-8151226: Mark UdpTest.java as intermittently failing Message-ID: <56D8C25C.1090002@oracle.com> Hello, The test java/net/ipv6tests/UdpTest.java has been observed to intermittently fail (JDK-8143998, JDK-8143097). Until these problems are addressed, the test should be marked accordingly. Please review the patch below which marks the test. Thanks, -Joe diff -r a603b1f1d9a1 test/java/net/ipv6tests/UdpTest.java --- a/test/java/net/ipv6tests/UdpTest.java Thu Mar 03 12:49:12 2016 -0800 +++ b/test/java/net/ipv6tests/UdpTest.java Thu Mar 03 15:01:29 2016 -0800 @@ -1,5 +1,5 @@ /* - * Copyright (c) 2003, Oracle and/or its affiliates. All rights reserved. + * Copyright (c) 2003, 2016, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. 
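The behaviour discussed in this thread can be exercised with a minimal standalone sketch. The class name `LenientOffsetParseDemo` is made up for illustration; the lenient result assumes a JDK in which JDK-8032051 has landed (JDK 9 and later), which is why that part is guarded.

```java
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.DateTimeParseException;
import java.time.temporal.ChronoField;

public class LenientOffsetParseDemo {

    // Parse an offset string with the given formatter and return the offset in seconds.
    static int offsetSeconds(DateTimeFormatter f, String text) {
        return f.parse(text).get(ChronoField.OFFSET_SECONDS);
    }

    public static void main(String[] args) {
        DateTimeFormatter strict = new DateTimeFormatterBuilder()
                .appendOffset("+HH:MM", "Z")
                .toFormatter();
        DateTimeFormatter lenient = new DateTimeFormatterBuilder()
                .parseLenient()
                .appendOffset("+HH:MM", "Z")
                .toFormatter();

        // Strict mode: the input must match the pattern exactly.
        System.out.println(offsetSeconds(strict, "+01:00"));   // 3600

        // Lenient mode: minutes and seconds are optional and may exceed
        // the pattern, per the behaviour proposed in this thread.
        try {
            System.out.println(offsetSeconds(lenient, "+01:01:01"));
        } catch (DateTimeParseException e) {
            System.out.println("lenient offset parsing not supported in this JDK");
        }
    }
}
```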
* * This code is free software; you can redistribute it and/or modify it @@ -24,6 +24,7 @@ /* * @test * @bug 4868820 + * @key intermittent * @summary IPv6 support for Windows XP and 2003 server */ From lance.andersen at oracle.com Thu Mar 3 23:08:28 2016 From: lance.andersen at oracle.com (Lance Andersen) Date: Thu, 3 Mar 2016 18:08:28 -0500 Subject: JDK 9 RFR of JDK-8151226: Mark UdpTest.java as intermittently failing In-Reply-To: <56D8C25C.1090002@oracle.com> References: <56D8C25C.1090002@oracle.com> Message-ID: looks ok joe On Mar 3, 2016, at 6:01 PM, joe darcy wrote: > Hello, > > The test > > java/net/ipv6tests/UdpTest.java > > has been observed to intermittently fail (JDK-8143998, JDK-8143097). > > Until these problems are addressed, the test should be marked accordingly. > > Please review the patch below which marks the test. > > Thanks, > > -Joe > > diff -r a603b1f1d9a1 test/java/net/ipv6tests/UdpTest.java > --- a/test/java/net/ipv6tests/UdpTest.java Thu Mar 03 12:49:12 2016 -0800 > +++ b/test/java/net/ipv6tests/UdpTest.java Thu Mar 03 15:01:29 2016 -0800 > @@ -1,5 +1,5 @@ > /* > - * Copyright (c) 2003, Oracle and/or its affiliates. All rights reserved. > + * Copyright (c) 2003, 2016, Oracle and/or its affiliates. All rights reserved. > * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. 
> * > * This code is free software; you can redistribute it and/or modify it > @@ -24,6 +24,7 @@ > /* > * @test > * @bug 4868820 > + * @key intermittent > * @summary IPv6 support for Windows XP and 2003 server > */ > > Lance Andersen| Principal Member of Technical Staff | +1.781.442.2037 Oracle Java Engineering 1 Network Drive Burlington, MA 01803 Lance.Andersen at oracle.com From kubota.yuji at gmail.com Fri Mar 4 02:06:18 2016 From: kubota.yuji at gmail.com (KUBOTA Yuji) Date: Fri, 4 Mar 2016 11:06:18 +0900 Subject: [DING] Re: [PING] Potential infinite waiting at JMXConnection#createConnection In-Reply-To: <56D89503.1080909@Oracle.com> References: <56D89503.1080909@Oracle.com> Message-ID: Hi Roger, Thank you for your help! My patch and reproducer are as below. * patch diff --git a/src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPChannel.java b/src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPChannel.java --- a/src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPChannel.java +++ b/src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPChannel.java @@ -222,20 +222,34 @@ // choose protocol (single op if not reusable socket) if (!conn.isReusable()) { out.writeByte(TransportConstants.SingleOpProtocol); } else { out.writeByte(TransportConstants.StreamProtocol); + + int usableSoTimeout = 0; + try { + /* + * If socket factory had set a non-zero timeout on its + * own, then restore it instead of using the property- + * configured value. + */ + usableSoTimeout = sock.getSoTimeout(); + if (usableSoTimeout == 0) { + usableSoTimeout = responseTimeout; + } + sock.setSoTimeout(usableSoTimeout); + } catch (Exception e) { + // if we fail to set this, ignore and proceed anyway + } out.flush(); /* * Set socket read timeout to configured value for JRMP * connection handshake; this also serves to guard against * non-JRMP servers that do not respond (see 4322806). 
*/ - int originalSoTimeout = 0; try { - originalSoTimeout = sock.getSoTimeout(); sock.setSoTimeout(handshakeTimeout); } catch (Exception e) { // if we fail to set this, ignore and proceed anyway } @@ -279,18 +293,11 @@ * connection. NOTE: this timeout, if configured to a * finite duration, places an upper bound on the time * that a remote method call is permitted to execute. */ try { - /* - * If socket factory had set a non-zero timeout on its - * own, then restore it instead of using the property- - * configured value. - */ - sock.setSoTimeout((originalSoTimeout != 0 ? - originalSoTimeout : - responseTimeout)); + sock.setSoTimeout(usableSoTimeout); } catch (Exception e) { // if we fail to set this, ignore and proceed anyway } out.flush(); * reproducer ** tree |-- debugcontroltest.properties |-- jmx-test-cert.pkcs12 |-- jmxremote.password |-- pom.xml `-- src `-- main |-- java | `-- debugcontrol | |-- DebugController.java | |-- client | | `-- JMXSSLClient.java | `-- server | `-- JMXSSLServer.java `-- resources `-- jmxremote.password ** debugcontroltest.properties debugcontroltest.host = localhost debugcontroltest.port = 9876 debugcontroltest.stop_time = 120000 debugcontroltest.jmxremote.password.filename = jmxremote.password debugcontroltest.cert.filename = jmx-test-cert.pkcs12 debugcontroltest.cert.type = pkcs12 debugcontroltest.cert.pass = changeit sun.rmi.transport.tcp.responseTimeout = 1000 sun.rmi.transport.tcp.handshakeTimeout = 1000 ** jmx-test-cert.pkcs12 *** binary file. Create pkcs12 certificates by openssl or other commands. ** jmxremote.password (top directory) monitorRole adminadmin controlRole adminadmin ** jmxremote.password (resources directory) *** empty file. 
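The effect the patch guards against can be illustrated with a small self-contained sketch (the class name `SoTimeoutDemo` is made up here; this is not the reproducer itself): a socket with SO_TIMEOUT set fails a blocking read quickly with `SocketTimeoutException` instead of waiting forever, which is what `sock.setSoTimeout(...)` buys the JRMP handshake.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SoTimeoutDemo {

    // Returns true if the blocking read timed out rather than hanging.
    static boolean readTimesOut(int timeoutMillis) throws IOException {
        // The listening socket's backlog completes the TCP handshake,
        // but no data is ever sent back -- like a server stuck mid-handshake.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
            client.setSoTimeout(timeoutMillis);      // analogous to sock.setSoTimeout(...) in the patch
            try {
                client.getInputStream().read();      // would block indefinitely without SO_TIMEOUT
                return false;
            } catch (SocketTimeoutException expected) {
                return true;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("timed out: " + readTimesOut(200));
    }
}
```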
just try `touch src\main\resources\jmxremote.password` ** pom.xml 4.0.0 debugcontrol debugcontrol 1.0-SNAPSHOT ** DebugController.java package debugcontrol; import java.io.BufferedReader; import java.io.File; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.OutputStream; import java.io.PrintWriter; import java.nio.file.FileSystem; import java.nio.file.FileSystems; import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.attribute.AclEntry; import java.nio.file.attribute.AclEntryPermission; import java.nio.file.attribute.AclEntryType; import java.nio.file.attribute.AclFileAttributeView; import java.nio.file.attribute.UserPrincipal; import java.util.Arrays; import java.util.Date; import java.util.List; import java.util.Properties; /** * RMI connection timeout test. This program starts a simple sleep server * program (JMXSSLServer) on external jdb process with a breakpoint at * sun.security.ssl.ServerHandshaker.clientHello set. It then starts a client * process (JMXSSLCient) which tries to connect the sleep/jdb process. * ServerHandshaker.clientHello responds to the client hello message and sends * SSL record back. By setting breakpont in that function, we can emulate the * error mode in which client keeps waiting SSL record from server. * * JMXConnectorFactory.connect() ignores sun.rmi.transport.tcp.responseTimeout, * so wait the response from server infinitely. Once a fixes was added, then * the client return "0" when the connection timeout happen. * This DebugControlTest returns the client's return code. 
*/ public class DebugController { public static final String PROP_FILE_NAME = "debugcontroltest.properties"; private static Properties dctProp = getDctProp(); private static final String HOST = dctProp.getProperty("debugcontroltest.host", "localhost"); private static final String PORT = dctProp.getProperty("debugcontroltest.port", "9876"); private static final int STOP_TIME = Integer.parseInt(dctProp.getProperty("debugcontroltest.stop_time", "60000")); private static FileSystem fs = FileSystems.getDefault(); private static Path jmxremotePasswordPath = fs.getPath(getResourceDirString(), dctProp.getProperty("debugcontroltest.jmxremote.password.filename", "jmxremote.password")); private static Path certPath = fs.getPath(getResourceDirString(), dctProp.getProperty("debugcontroltest.cert.filename", "jmx-test-cert.pkcs12")); private static String keyStoreType = dctProp.getProperty("debugcontroltest.cert.type"); private static String keyStorePass = dctProp.getProperty("debugcontroltest.cert.pass"); private static String responseTimeout = dctProp.getProperty("sun.rmi.transport.tcp.responseTimeout", "100"); private static String handshakeTimeout = dctProp.getProperty("sun.rmi.transport.tcp.handshakeTimeout", "100"); public static void main(String[] args) throws Exception { runJMXServerWithDebugger(); runJMXClientOnSeparateProcess(); } /** * Load properties from PROP_FILE_NAME and returns Properties instance. * If PROP_FILE_NAME was not found, return an empty Properties instance. * * @return Properties instance */ static Properties getDctProp() { File f = new File(PROP_FILE_NAME); Properties props = new Properties(); if (!f.exists()) { return props; } FileReader fr = null; try { fr = new FileReader(f); props.load(fr); fr.close(); } catch (FileNotFoundException fnfe) { } catch (IOException ioe) { System.err.println("[WARN] " + ioe.toString()); } return props; } /** * Prepare to run jdb process. 
* */ static void runJMXServerWithDebugger() throws FileNotFoundException { adjustRemotePasswordPermission(); final Path jdbPath = getJdbPath(); if (jdbPath == null) { throw new FileNotFoundException("jdb executable was not found. Check JDK_HOME, JAVA_HOME or TESTJAVA"); } new Thread("jdb-run-thread") { public void run() { runJMXServerWithDebuggerBody(jdbPath); } }.start(); } /** * Run server.JMXSSLServer with jdb process. * * @param jdbPath */ static void runJMXServerWithDebuggerBody(Path jdbPath) { final String[] args = new String[]{ jdbPath.toString(), "-classpath", getTargetClassPath(), "-J-Duser.language=en", "-Dcom.sun.management.jmxremote.port=" + PORT, "-Dcom.sun.management.jmxremote.password.file=" + jmxremotePasswordPath.toString(), "-Djavax.net.ssl.keyStore=" + certPath.toString(), "-Djavax.net.ssl.keyStoreType=" + keyStoreType, "-Djavax.net.ssl.keyStorePassword=" + keyStorePass, "debugcontrol.server.JMXSSLServer" }; System.out.println("[INFO] Server process args"); for (int i = 0; i < args.length; i++) { System.out.println(" args[" + i + "] " + args[i]); } ProcessBuilder pb = new ProcessBuilder(args); try { Process proc = pb.start(); OutputStream os = proc.getOutputStream(); PrintWriter pw = new PrintWriter(os); InputStream is = proc.getInputStream(); final BufferedReader br = new BufferedReader(new InputStreamReader(is)); InputStream es = proc.getErrorStream(); final BufferedReader bre = new BufferedReader(new InputStreamReader(es)); new Thread("server-stdout") { public void run() { String s = null; try { while ((s = br.readLine()) != null) { System.out.println("ser-out: " + s); } } catch (IOException ioe) { ioe.printStackTrace(); } } }.start(); new Thread("server-stderr") { public void run() { String s = null; try { while ((s = br.readLine()) != null) { System.out.println("ser-err: " + s); } } catch (IOException ioe) { ioe.printStackTrace(); } } }.start(); // jdb commands String[] commands = new String[]{ "stop in 
sun.security.ssl.ServerHandshaker.clientHello", "run" }; for (int i = 0; i < commands.length; i++) { pw.println(commands[i]); pw.flush(); try { Thread.sleep(2000); } catch (InterruptedException ie) { } } try { Thread.sleep(STOP_TIME); } catch (InterruptedException ie) { } System.out.println("[INFO] sending quit to jdb"); pw.println("quit"); pw.flush(); } catch (IOException e) { e.printStackTrace(); } } /** * Run client.JMXSSLClient on separate process. * */ static void runJMXClientOnSeparateProcess() throws FileNotFoundException { // Wait 10 sec to launch server. try { Thread.sleep(10 * 1000L); } catch (InterruptedException ie) { } Path javaPath = getJavaPath(); if (javaPath == null) { throw new FileNotFoundException("java executable was not found. Check JDK_HOME, JAVA_HOME or TESTJAVA"); } final String[] args = new String[]{ javaPath.toString(), "-classpath", getTargetClassPath(), "-Duser.language=en", "-Djavax.net.ssl.trustStore=" + certPath.toString(), "-Djavax.net.ssl.trustStoreType=" + keyStoreType, "-Djavax.net.ssl.trustStorePassword=" + keyStorePass, "-Dsun.rmi.transport.tcp.responseTimeout=" + responseTimeout, "-Dsun.rmi.transport.tcp.handshakeTimeout=" + handshakeTimeout, "debugcontrol.client.JMXSSLClient", HOST,PORT }; System.out.println("[INFO] Client process args:"); for (int i = 0; i < args.length; i++) { System.out.println(" args[" + i + "] " + args[i]); } ProcessBuilder pb = new ProcessBuilder(args); try { Process proc = pb.start(); final InputStream is = proc.getInputStream(); final BufferedReader br = new BufferedReader(new InputStreamReader(is)); final InputStream es = proc.getErrorStream(); final BufferedReader bre = new BufferedReader(new InputStreamReader(es)); long t0 = System.currentTimeMillis(); new Thread("client-stdout") { public void run() { String s = null; try { while ((s = br.readLine()) != null) { System.out.println("cli-out: " + s); } } catch (IOException ioe) { ioe.printStackTrace(); } } }.start(); new Thread("client-stderr") { 
public void run() { String s = null; try { while ((s = bre.readLine()) != null) { System.out.println("cli-err: " + s); } } catch (IOException ioe) { ioe.printStackTrace(); } } }.start(); int rc = proc.waitFor(); long t1 = System.currentTimeMillis(); System.out.println("[INFO] "+ (new Date()).toString() + " Client done. Result code: " + rc); System.out.println("[INFO] Client took " + (t1 - t0) + " msec."); if (System.getenv("DEBUG_CONTROL_TEST_STAY") != null) { System.out.println("[INFO] Press return to exit."); BufferedReader br1 = new BufferedReader(new InputStreamReader(System.in)); br1.readLine(); } // exit with client's return code. System.exit(rc); } catch (IOException ioe) { ioe.printStackTrace(); } catch (InterruptedException ie) { ie.printStackTrace(); } } /** * Set jmxremote.password to readable/writable only to owner. JMX server * side does not start if access to the file is not limited to owner. * setReadable/setWritable calls set file permission to 0600 on u*ix system. * For windows, the calls have not effect. Need to set ACL using * java.nio.file API Files.setFileAttributeView(Path, * AclFileAttributeView.class). 
*/ static void adjustRemotePasswordPermission() { if (onWindows()) { limitAclToOwnerRead(jmxremotePasswordPath); } else { File f = jmxremotePasswordPath.toFile(); f.setReadable(false, false); // chmod a-r f.setReadable(true, true); // chmod u+r f.setWritable(false, false); // chmod a-w f.setWritable(true, true); // chmod u+w } } static Path getJavaPath() { String jdbBasename = "java"; if (isOnWindows()) { jdbBasename = "java.exe"; } String jdkhome = System.getenv("JDK_HOME"); if (jdkhome != null) { Path jdkPath = getFileAroundJdkHome(jdkhome, jdbBasename); if (jdkPath != null) { return jdkPath; } } jdkhome = System.getenv("JAVA_HOME"); if (jdkhome != null) { Path jdkPath = getFileAroundJdkHome(jdkhome, jdbBasename); if (jdkPath != null) { return jdkPath; } } // http://openjdk.java.net/jtreg/tag-spec.html jdkhome = System.getenv("TESTJAVA"); if (jdkhome != null) { Path jdkPath = getFileAroundJdkHome(jdkhome, jdbBasename); if (jdkPath != null) { return jdkPath; } } // try java.home property jdkhome = System.getProperty("java.home"); if (jdkhome != null) { Path jdkPath = getFileAroundJdkHome(jdkhome, jdbBasename); if (jdkPath != null) { return jdkPath; } } return null; } /** * Search jdb around the directory set in JDK_HOME, JAVA_HOME or TESTJAVA if * set and return as Path instance if found. Otherwise, return null. 
* * @return jdbPath */ static Path getJdbPath() { String jdbBasename = "jdb"; if (isOnWindows()) { jdbBasename = "jdb.exe"; } String jdkhome = System.getenv("JDK_HOME"); if (jdkhome != null) { Path jdkPath = getFileAroundJdkHome(jdkhome, jdbBasename); if (jdkPath != null) { return jdkPath; } } jdkhome = System.getenv("JAVA_HOME"); if (jdkhome != null) { Path jdkPath = getFileAroundJdkHome(jdkhome, jdbBasename); if (jdkPath != null) { return jdkPath; } } // http://openjdk.java.net/jtreg/tag-spec.html jdkhome = System.getenv("TESTJAVA"); if (jdkhome != null) { Path jdkPath = getFileAroundJdkHome(jdkhome, jdbBasename); if (jdkPath != null) { return jdkPath; } } // try java.home property jdkhome = System.getProperty("java.home"); if (jdkhome != null) { Path jdkPath = getFileAroundJdkHome(jdkhome, jdbBasename); if (jdkPath != null) { return jdkPath; } } return null; } /** * Find and return filename around home which might be jre or jdk home * * @param home jdk home or jre home * @param filename to find * @return */ static Path getFileAroundJdkHome(String home, String filename) { // try as jdk home Path fpath = fs.getPath(home, "bin", filename); if (Files.exists(fpath)) { return fpath; } // then try as jre dir under jdk fpath = fs.getPath(home).getParent().resolve("bin").resolve(filename); if (Files.exists(fpath)) { return fpath; } return null; } /** * Returns true if os.name property starts with windows ignoring case. 
* * @return true if os.name starts with windows */ static boolean isOnWindows() { String osname = System.getProperty("os.name"); if (osname.toLowerCase().trim().toLowerCase().startsWith("windows")) { return true; } else { return false; } } static boolean onWindows() { String osName = System.getProperty("os.name", "generic").toLowerCase(); return osName.indexOf("windows") >= 0; } static String getResourceDirString() { if (System.getenv("TESTSRC") != null) { return System.getenv("TESTSRC"); } return ""; } static String getTargetClassPath() { if (System.getenv("TESTCLASSES") != null) { return System.getenv("TESTCLASSES"); } return "target/classes"; } /** * Perform acl change equivalent to "cacls path /P :R. * * @param path */ static void limitAclToOwnerRead(Path path) { FileSystem fs = FileSystems.getDefault(); String userName = System.getProperty("user.name"); try { UserPrincipal me = fs.getUserPrincipalLookupService().lookupPrincipalByName(userName); AclEntry entry = AclEntry.newBuilder() .setType(AclEntryType.ALLOW) .setPrincipal(me) .setPermissions(AclEntryPermission.READ_DATA, AclEntryPermission.READ_ATTRIBUTES, AclEntryPermission.READ_NAMED_ATTRS, AclEntryPermission.READ_ACL) .build(); AclFileAttributeView view = Files.getFileAttributeView(path, AclFileAttributeView.class); List owerReadOnlyAcl = Arrays.asList(entry); view.setAcl(owerReadOnlyAcl); } catch (IOException ex) { ex.printStackTrace(); } } } ** JMXSSLClient.java package debugcontrol.client; import java.util.HashMap; import java.util.Map; import java.util.Set; import javax.management.MBeanServerConnection; import javax.management.ObjectInstance; import javax.management.remote.JMXConnector; import javax.management.remote.JMXConnectorFactory; import javax.management.remote.JMXServiceURL; import javax.management.remote.rmi.RMIConnectorServer; import javax.rmi.ssl.SslRMIClientSocketFactory; import java.net.MalformedURLException; import java.io.IOException; import java.net.SocketTimeoutException; /** * 
Connect JMX server at HOST:PORT with SSL, password authetication. * If the connection was made, then run simple remote JMX operations. * This JMXSSLClient returns 0 when a connection timeout happen for test. */ public class JMXSSLClient { private static String HOST = "localhost"; private static int PORT = 9876; public static void main(String[] args) throws Exception { if (args.length == 2) { HOST = args[0]; PORT = Integer.parseInt(args[1]); } execute(); } static void execute() { try { Thread.sleep(5000); } catch (InterruptedException ie) { } try { JMXServiceURL url = new JMXServiceURL(String.format("service:jmx:rmi:///jndi/rmi://%s:%d/jmxrmi", HOST, PORT)); System.out.println("[INFO] Service URL: " + url.toString()); String[] credentials = new String[]{"controlRole", "adminadmin"}; SslRMIClientSocketFactory csf = new SslRMIClientSocketFactory(); Map env = new HashMap(); env.put(RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE, csf); env.put("jmx.remote.credentials", credentials); JMXConnector jmxConnector = JMXConnectorFactory.connect(url, env); MBeanServerConnection mbeanServerConnection = jmxConnector.getMBeanServerConnection(); String[] domains = mbeanServerConnection.getDomains(); System.out.print("Domains:"); for (int i = 0; i < domains.length; i++) { System.out.print(" " + domains[i]); } mbeanList(mbeanServerConnection); System.out.println("[INFO] Client done."); System.exit(0); } catch (MalformedURLException me) { me.printStackTrace(); System.exit(1); } catch (java.rmi.RemoteException re) { // connection refused if (re.getCause() instanceof SocketTimeoutException) { System.out.println("[INFO] Conglaturation. We got a timeout."); System.exit(0); } re.printStackTrace(); System.exit(2); } catch (IOException e) { e.printStackTrace(); System.exit(3); } } /** * Simple mbean server connection operation. 
*/ static void mbeanList(MBeanServerConnection conn) { try { Set mbeans = conn.queryMBeans(null, null); for (ObjectInstance oi : mbeans) { System.out.println(String.format("name=%s,class=%s", oi.getObjectName(), oi.getClassName())); } } catch (IOException ioe) { ioe.printStackTrace(); } } static String getResourceDirString() { if (System.getenv("TESTSRC") != null) { return System.getenv("TESTSRC"); } return (String) System.getProperty("user.dir"); } } ** JMXSSLServer.java package debugcontrol.server; /** * Simple sleep server. * @author KUBOTA Yuji */ public class JMXSSLServer { public static void main(String[] args) { System.out.println("[INFO] Server launched then sleep..."); try { while (true) { Thread.sleep(1000 * 1000L); } } catch (InterruptedException ie) { } } } Thanks, Yuji From xueming.shen at oracle.com Fri Mar 4 06:43:27 2016 From: xueming.shen at oracle.com (Xueming Shen) Date: Thu, 3 Mar 2016 22:43:27 -0800 Subject: RFR: Regex exponential backtracking issue Message-ID: <56D92E8F.8040300@oracle.com> Hi, webrev: http://cr.openjdk.java.net/~sherman/regexBacktracking/webrev/ Backtracking is the powerful and popular feature provided by Java regex engine when the regex pattern contains optional quantifiers or alternations, in which the regex engine can return to the previous saved state (backtracking) to continue matching/finding other alternative when the current try fails. However it is also a "well known" issue that some not-well-written regex might trigger "exponential backtraking" in situation like there is no possible match can be found after exhaustive search, in the worst case the regex engine will just keep going forever, hangup the vm. We have various reports in the past as showed in RegexFun.java [1] for this issue. For example following is the one reported in JDK-6693451. 
regex: "^(\\s*foo\\s*)*$"
input: "foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo fo"

in which the group (\\s*foo\\s*)* can alternatively match any one of "foo", "foo ", " foo" and " foo ". The regex works fine and runs normally if it can find a match (replace the last "fo" in the input with "foo", for example). But in this case, because the input text ends with "fo", the regex can never match the input: it always fails at the last token, since nothing in the regex can match the "fo". Given the nature of the greedy quantifier, the engine will be backtracking/bouncing among these 4 choices at every iteration/level, keep searching to the end of the input exhaustively, fail, backtrack, and try again recursively. Based on the number of "foo"s in the input, it can run forever until the end of the world.

There are lots of suggestions on how to avoid this kind of runaway regex. One is to use a possessive quantifier, if possible, in place of the greedy quantifier ( * -> *+ for example ) to work around this kind of catastrophic backtracking; that normally meets the real need and makes the exponential backtracking go away.

The observation of these exponential backtracking cases indicates that most of this "exponential backtracking" is actually unnecessary. Even if we can't solve all catastrophic backtracking, it might be possible, at a small cost, to work around most of the cases, in particular when (1) the "exponential backtracking" happens at a top-level group + greedy quantifier, and (2) there is no group ref in the pattern.
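The possessive-quantifier workaround mentioned above can be demonstrated with a short sketch (the class name is illustrative). Once the possessive group has consumed text it never gives it back, so the impossible match fails immediately instead of backtracking exponentially:

```java
import java.util.regex.Pattern;

public class PossessiveDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 30; i++) {
            sb.append("foo ");
        }
        String input = sb.append("fo").toString();   // ends in "fo" -> can never match

        // Greedy version from JDK-6693451 would be "^(\\s*foo\\s*)*$",
        // which backtracks exponentially on this input.
        // The possessive form (*+) reports the failure right away.
        boolean matches = Pattern.matches("^(\\s*foo\\s*)*+$", input);
        System.out.println(matches);   // false, returned quickly
    }
}
```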
The "group + greedy repetition" in j.u.regex is implemented via three nodes, "Prolog + Loop + body node", in which the body takes care of (\\s*foo\\s*), the Loop takes care of the looping for "*", and the Prolog, as its name suggests, is just a starter to kick off the looping (read the source for more details). The normal matching sequence works as

Prolog.match() -> Prolog.loop.matchInit() -> body.match() -> loop.match()
-> body.match() -> loop.match() -> body.match() -> loop.match()
... -> body.match() -> loop.match() -> loop.next.match()

The "body.match() -> loop.match() -> body.match() -> loop.match() ..." part loops on the runtime call stack "recursively" (the body.next is pointing to loop). If we take the "recursive looping" part that we are interested in (we only care about the "count >= cmin" situation, as that is where the exponential backtracking happens) out of Loop.match(), loop.match() is as simple as

boolean match(Matcher matcher, int i, CharSequence seq) {
    ---> Before
    boolean b = body.match(matcher, i, seq);
    // If match failed we must backtrack, so
    // the loop count should NOT be incremented
    if (b)
        return true;
    matcher.locals[countIndex] = count;
    ---> After
    return next.match(matcher, i, seq);
}

It appears that each repetition of the "body.match() + loop.match()" looping (though actually recursive on the runtime stack) is matching the input text kind of "linearly" in the order of the loop counter, if this is a "top-level" group repetition (not an inner group inside another repetitive group):

... foo foo foo foo foo foo foo...
    |_ i = n, (body-loop)(body-loop)(body-loop).... ->next()

Given the nature of its "greediness", the engine will exhaust all the possible matches of "body + loop + loop.next" from any given starting position "i".
We can actually assume that if we have failed to match a "body + loop + its down-stream matching" last time at position "i", then the next time we come back here (after backtracking the "up stream" and coming down here again), if we are at exactly the same position "i", we no longer need to try it again; the result will not change (failed). This is under the assumption that we DO NOT have a group ref in the pattern (the result/content of the group ref can change; if the down-stream contains the group ref, we can't assume the result going forward will be the same, even starting at the same position "i"). For example, in the above sample, when we backtrack to loop counter "n", we can dance among the 4 choices "foo", " foo", "foo " and " foo ", but when we pick one and move on to the next iteration/loop at n + 1, the only possible choice is either "foo..." or " foo" (with a leading space character); if we have tried either of them (or both) last time and it failed, we no longer need to try the same "down stream" again. And the good thing is that this "position-lookup-table" can be shared by all cmin <= n <= cmax iterations, again because of the nature of greediness. In the last failed try, the engine has tried/exhausted all possible combinations of body->loop->body->loop...body->loop->next at "i", including the possible exhaustive backtracking done by the embedded inner nodes of this group. So if we are here at the same position "i" again, the result will be the same. So here is my proposed workaround solution. We introduce a "hashmap" (a home-made hashmap for "int", supposed to be cheap and fast) to memorize all the tried-and-failed positions "i", and check this hashmap before starting a new round of the loop.
boolean match(Matcher matcher, int i, CharSequence seq) {
    ----------------
    if (posIndex != -1 && matcher.localsPos[posIndex].contains(i)) {
        return next.match(matcher, i, seq);
    }
    ----------------
    boolean b = body.match(matcher, i, seq);
    // If match failed we must backtrack, so
    // the loop count should NOT be incremented
    if (b)
        return true;
    matcher.locals[countIndex] = count;
    ----------------
    if (posIndex != -1) {
        matcher.localsPos[posIndex].add(i);
    }
    ----------------
    return next.match(matcher, i, seq);
}

This makes most of the exponential backtracking failures in RegexFun.java [1] go away, except for two issues.

(1) The exponential backtracking starts/happens at the second inner group level, where the proposed change might help, but does not make it go away

regex: "(h|h|ih(((i|a|c|c|a|i|i|j|b|a|i|b|a|a|j))+h)ahbfhba|c|i)*"
input: "hchcchicihcchciiicichhcichcihcchiihichiciiiihhcchicchhcihchcihiihciichhccciccichcichiihcchcihhicchcciicchcccihiiihhihihihichicihhcciccchihhhcchichchciihiicihciihcccciciccicciiiiiiiiicihhhiiiihchccchchhhhiiihchihcccchhhiiiiiiiicicichicihcciciihichhhhchihciiihhiccccccciciihhichiccchhicchicihihccichicciihcichccihhiciccccccccichhhhihihhcchchihihiihhihihihicichihiiiihhhhihhhchhichiicihhiiiiihchccccchichci"

(2) It's a {n, m} greedy case. This one can probably be handled the same way as */+, but needs a little more testing.
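The effect of the failed-position table can be modeled outside the engine with a toy recursive matcher. This is my own illustration, not the JDK implementation: matching the pattern (a|aa)* anchored at both ends against a string of a's ending in "b" must fail, and without memoization the number of recursive calls grows exponentially, while a tried-and-failed position set (playing the role of matcher.localsPos[posIndex]) makes it linear.

```java
import java.util.HashSet;
import java.util.Set;

public class MemoDemo {
    static long calls;  // instrumentation: counts recursive invocations

    // Toy matcher for "(a|aa)*" anchored at both ends.
    // 'failed' is the memo table of positions known to fail; pass null to disable it.
    static boolean match(String s, int i, Set<Integer> failed) {
        calls++;
        if (failed != null && failed.contains(i)) return false;
        if (i == s.length()) return true;
        boolean ok = (s.startsWith("a", i) && match(s, i + 1, failed))
                  || (s.startsWith("aa", i) && match(s, i + 2, failed));
        if (!ok && failed != null) failed.add(i);
        return ok;
    }

    public static void main(String[] args) {
        String input = "aaaaaaaaaaaaaaaaaaaaaaaaab"; // 25 a's + 'b': can never match
        calls = 0;
        match(input, 0, null);
        long noMemo = calls;                          // grows like Fibonacci(n)
        calls = 0;
        match(input, 0, new HashSet<>());
        long memo = calls;                            // stays linear in n
        System.out.println(noMemo + " calls without memo, " + memo + " with memo");
    }
}
```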
regex: "(\\w|_){1,64}@"
input: "______________________________"

There is another kind of "abusing" case that overly and repetitively uses ".*" or ".+", such as the one reported in jdk-5014450

http://cr.openjdk.java.net/~sherman/regexBacktracking/RegexFun3.java

regex: "^\\s*sites\\[sites\\.length\\+\\+\\]\\s*=\\s*new\\s+Array\\(.*" +
       "\\s*//\\s*(\\d+).*" +
       "\\s*//\\s*([^-]+).*" +
       "\\s*//\\s*([^-]+).*" +
       "\\s*//\\s*([^-]+).*" +
       "/(?sui).*$"

text: "\tsites[sites.length++] = new Array(\n" +
      "// 1079193366647 - creation time\n" +
      "// 1078902678663 1078852539723 1078753482632 0 0 0 0 0 0 0 0 0 0 0 - creation time last 14 days\n" +
      "// 0 1 0 0 0 0 0 0 0 0 0 0 0 0 - bad\n" +
      "// 0.0030 0.0080 0.01 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -\n\n",

The performance can be much better if we apply a similar optimization to top-level dot-greedy-curly nodes as well, as shown at

http://cr.openjdk.java.net/~sherman/regexBacktracking/webrev2/

But I'm a little concerned about the possibility that the extra checks might slow down the normal case, and I wonder if it is worth the cost. So I'm leaving it out for now.

Another optimization included is for the "CharProperty + Curly" combination; normally this is what you get for regex constructs such as .*, \w* and \s*. The new GreedyCharProperty node takes advantage of the fact that we are iterating char by char / codepoint by codepoint, so the matching implementation is smoother and much faster.

Anyway, please help review and comment.
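For comparison, until an engine-side fix lands, callers can often sidestep the "(\\w|_){1,64}@" style blow-up themselves with an atomic group (or a possessive quantifier), which forbids backtracking into the repetition. A hypothetical illustration:

```java
import java.util.regex.Pattern;

public class AtomicGroupDemo {
    public static void main(String[] args) {
        String input = "______________________________"; // 30 underscores, no '@'

        // Pattern.compile("(\\w|_){1,64}@").matcher(input).find() would
        // backtrack through roughly 2^30 ways of attributing each '_' to
        // the \w or the _ alternative -- do not run that on untrusted input.

        // Atomic group: once the repetition has matched, the engine may not
        // re-enter it, so the failure on '@' is immediate at each start position.
        Pattern safe = Pattern.compile("(?>(\\w|_){1,64})@");
        System.out.println(safe.matcher(input).find());        // false, returns quickly
        System.out.println(safe.matcher("user_name@").find()); // true
    }
}
```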
thanks, Sherman

[1] http://cr.openjdk.java.net/~sherman/regexBacktracking/RegexFun.java

From vaibhav.x.choudhary at oracle.com Fri Mar 4 06:51:47 2016
From: vaibhav.x.choudhary at oracle.com (vaibhav x.choudhary)
Date: Fri, 04 Mar 2016 12:21:47 +0530
Subject: [9] RFR: 8150702: change in javadoc for parseObject for MessageFormat - JDK-8073211
Message-ID: <56D93083.60301@oracle.com>

Hello,

Please review this small fix for the jdk9/dev repo:

Bug: https://bugs.openjdk.java.net/browse/JDK-8150702
Webrev: http://cr.openjdk.java.net/~ntv/vaibhav/8150702/webrev.00/

Reason: MessageFormat does not throw a NullPointerException if the source is null. This condition is explicitly handled in the code by:

if (source == null) {
    Object[] empty = {};
    return empty;
}

--
Thank You,
Vaibhav Choudhary
http://blogs.oracle.com/vaibhav

From jeremymanson at google.com Fri Mar 4 07:11:01 2016
From: jeremymanson at google.com (Jeremy Manson)
Date: Thu, 3 Mar 2016 23:11:01 -0800
Subject: Match.appendReplacement with StringBuilder
In-Reply-To: <56D88653.3090508@oracle.com>
References: <958CB879-5B99-4AD7-8E23-6B08E960EE16@oracle.com> <21203c166c7fe7003528521bfffd42a6@baybroadband.net> <56D88653.3090508@oracle.com>
Message-ID:

https://bugs.openjdk.java.net/browse/JDK-8039124

(Just diving in because I was surprised to see an FR for one of the things I thought we had contributed: I was afraid it hadn't managed to get in! Fortunately, I was wrong!)

Jeremy

On Thu, Mar 3, 2016 at 10:45 AM, Xueming Shen wrote:
> On 3/3/16, 10:26 AM, Dave Brosius wrote:
>> Greetings,
>>
>> It would be nice if java.util.regex.Matcher had a replacement for
>>
>> Matcher appendReplacement(StringBuffer sb, String replacement)
>> StringBuffer appendTail(StringBuffer sb)
>>
>> That took StringBuilder.
>
> we have added that in 9, right?
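For context, the append-and-replace loop under discussion looks like this with the long-standing StringBuffer overloads; the StringBuilder overloads tracked by JDK-8039124 are used the same way. A small self-contained sketch:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AppendReplacementDemo {
    public static void main(String[] args) {
        Matcher m = Pattern.compile("cat").matcher("one cat two cat");
        StringBuffer sb = new StringBuffer(); // Java 9 also accepts StringBuilder here
        while (m.find()) {
            m.appendReplacement(sb, "dog");   // copies text before the match, then the replacement
        }
        m.appendTail(sb);                     // copies the remainder after the last match
        System.out.println(sb);               // prints "one dog two dog"
    }
}
```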
> From christoph.langer at sap.com Fri Mar 4 07:50:42 2016 From: christoph.langer at sap.com (Langer, Christoph) Date: Fri, 4 Mar 2016 07:50:42 +0000 Subject: [PING] RFR: JDK-8150704 XALAN: ERROR: 'No more DTM IDs are available' when transforming with lots of temporary result trees Message-ID: <98624cbb00fe4522846ef256aa1410d4@DEWDFE13DE11.global.corp.sap> Hi, Ping - any comments or reviews for this bugfix? Thanks Christoph From: Langer, Christoph Sent: Freitag, 26. Februar 2016 16:02 To: core-libs-dev at openjdk.java.net Subject: RFR: JDK-8150704 XALAN: ERROR: 'No more DTM IDs are available' when transforming with lots of temporary result trees Hi, I've created a fix proposal for the issue I have reported in this bug: https://bugs.openjdk.java.net/browse/JDK-8150704 The webrev can be found here: http://cr.openjdk.java.net/~clanger/webrevs/8150704.1/ The Xalan parser would eventually run out of DTM IDs if xsl transformations involve lots of temporary result trees. Those are never released although they could. A testcase is included for this. I've also done some cleanups in the Xalan code and in the tests. Thanks in advance for looking at this :) Best regards Christoph From amaembo at gmail.com Fri Mar 4 09:26:55 2016 From: amaembo at gmail.com (Tagir F. Valeev) Date: Fri, 4 Mar 2016 15:26:55 +0600 Subject: RFR-8148748: ArrayList.subList().spliterator() is not late-binding In-Reply-To: <3C9BC5F4-EEE1-4F79-943D-37822E7E5512@oracle.com> References: <27476765.20160129103224@gmail.com> <564410418.20160202102813@gmail.com> <1084445298.20160203182024@gmail.com> <534936148.20160204215527@gmail.com> <1877923692.20160208205352@gmail.com> <3C9BC5F4-EEE1-4F79-943D-37822E7E5512@oracle.com> Message-ID: <161394669.20160304152655@gmail.com> Hello! >> I'm just worrying a little that my changes might conflict with Ivan >> Gerasimov's pending 8079136 issue, so probably it would be better to >> wait till it's reviewed and pushed? 
Ivan said that 8079136 is stalled for a while, so I decided to continue working on 8148748. Here's updated webrev: http://cr.openjdk.java.net/~tvaleev/webrev/8148748/r2/ PS> Re: maintenance, ordinarily i would agree with you, but ArrayList PS> is kind of special being probably the most used collection class. PS> Using an anon-impl for SubList.spliterator seem ok in that respect. Now it's separate anonymous class as you suggested. ArrayListSpliterator is untouched. Note that trySplit() can return original ArrayListSpliterator as after the binding their behavior is compatible. With best regards, Tagir Valeev. From scolebourne at joda.org Fri Mar 4 10:48:20 2016 From: scolebourne at joda.org (Stephen Colebourne) Date: Fri, 4 Mar 2016 10:48:20 +0000 Subject: RFR:JDK-8032051:"ZonedDateTime" class "parse" method fails with short time zone offset ("+01") In-Reply-To: <56D8844F.1010701@oracle.com> References: <56BCED89.7040007@oracle.com> <56CF2154.6050503@oracle.com> <56D05E0E.6030003@Oracle.com> <56D06876.4020000@Oracle.com> <56D08DA5.5030700@Oracle.com> <56D4692B.1030402@Oracle.com> <56D8844F.1010701@oracle.com> Message-ID: In TCKDateTimeFormatterBuilder there is a commented out line in one of the new tests. Should be removed. No need for another review for that - happy otherwise. thanks for the work! Stephen On 3 March 2016 at 18:37, nadeesh tv wrote: > Hi, > > Stephen, Roger Thanks for the comments. > > Please see the updated webrev > http://cr.openjdk.java.net/~ntv/8032051/webrev.04/ > > > Regards, > Nadeesh > > > > On 3/1/2016 12:29 AM, Stephen Colebourne wrote: >> >> I'm happy to go back to the spec I proposed before. That spec would >> determine colons dynamically only for pattern HH. Otherwise, it would >> use the presence/absence of a colon in the pattern as the signal. That >> would deal with the ISO-8601 problem and resolve the original issue >> (as ISO_OFFSET_DATE_TIME uses HH:MM:ss, which would leniently parse >> using colons). 
>> >> Writing the spec wording is not easy however. I had: >> >> When parsing in lenient mode, only the hours are mandatory - minutes >> and seconds are optional. The colons are required if the specified >> pattern contains a colon. If the specified pattern is "+HH", the >> presence of colons is determined by whether the character after the >> hour digits is a colon or not. If the offset cannot be parsed then an >> exception is thrown unless the section of the formatter is optional. >> >> which isn't too bad but alternatives are possible. >> >> Stephen >> >> >> >> >> On 29 February 2016 at 15:52, Roger Riggs wrote: >>> >>> Hi Stephen, >>> >>> As a fix for the original issue[1], not correctly parsing a ISO defined >>> offset, the use of lenient >>> was a convenient implementation technique (hack). But with the expanded >>> definition of lenient, >>> it will allow many forms of the offset that are not allowed by the ISO >>> specification >>> and should not be accepted forDateTimeFormatter. ISO_OFFSET_DATE_TIME. >>> In particular, ISO requires the ":" to separate the minutes. >>> I'm not sure how to correctly fix the original issue with the new >>> specification of lenient offset >>> parsing without introducing some more specific implementation >>> information. >>> >>> >>> WRT the lenient parsing mode for appendOffset: >>> >>> I was considering that the subfields of the offset were to be treated >>> leniently but it seems >>> you were treating the entire offset field and text as the unit to be >>> treated >>> leniently. >>> The spec for lenient parsing would be clearer if it were specified as >>> allowing any >>> of the patterns of appendOffset. The current wording around the >>> character >>> after the hour >>> may be confusing. >>> >>> In the specification of appendOffset(pattern, noOffsetText) how about: >>> >>> "When parsing in lenient mode, the longest valid pattern that matches the >>> input is used. 
Only the hours are mandatory, minutes and seconds are >>> optional." >>> >>> Roger >>> >>> >>> [1] https://bugs.openjdk.java.net/browse/JDK-8032051 >>> >>> >>> >>> >>> >>> On 2/26/2016 1:10 PM, Stephen Colebourne wrote: >>>> >>>> It is important to also consider the case where the user wants to >>>> format using HH:MM but parse seconds if they are provided. >>>> >>>> As I said above, this is no different to SignStyle, where the user >>>> requests something specific on format, but accepts anything on input. >>>> >>>> The pattern is still used for formatting and strict parsing under >>>> these changes. It is effectively ignored in lenient parsing (which is >>>> the very definition of leniency). >>>> >>>> Another way to look at it: >>>> >>>> using a pattern of HH:MM and strict: >>>> +02 - disallowed >>>> +02:00 - allowed >>>> +02:00:00 - disallowed >>>> >>>> using a pattern of HH:mm and strict: >>>> +02 - allowed >>>> +02:00 - allowed >>>> +02:00:00 - disallowed >>>> >>>> using any pattern and lenient: >>>> +02 - allowed >>>> +02:00 - allowed >>>> +02:00:00 - allowed >>>> >>>> This covers pretty much anything a user needs when parsing. >>>> >>>> Stephen >>>> >>>> >>>> On 26 February 2016 at 17:38, Roger Riggs >>>> wrote: >>>>> >>>>> Hi Stephen, >>>>> >>>>> Even in lenient mode the parser needs to stick to the fields provided >>>>> in >>>>> the >>>>> pattern. >>>>> If the caller intends to parse seconds, the pattern should include >>>>> seconds. >>>>> Otherwise the caller has not been able to specify their intent. >>>>> That's consistent with lenient mode used in the other fields. >>>>> Otherwise, the pattern is irrelevant except for whether it contains a >>>>> ":" >>>>> and makes >>>>> the spec nearly useless. 
>>>>> >>>>> Roger >>>>> >>>>> >>>>> >>>>> On 2/26/2016 12:09 PM, Stephen Colebourne wrote: >>>>>> >>>>>> On 26 February 2016 at 15:00, Roger Riggs >>>>>> wrote: >>>>>>> >>>>>>> Hi Stephen, >>>>>>> >>>>>>> It does not seem natural to me with a pattern of HHMM for it to parse >>>>>>> more >>>>>>> than 4 digits. >>>>>>> I can see lenient modifying the behavior as it it were HHmm, but >>>>>>> there >>>>>>> is >>>>>>> no >>>>>>> indication in the pattern >>>>>>> that seconds would be considered. How it would it be implied from >>>>>>> the >>>>>>> spec? >>>>>> >>>>>> The spec is being expanded to define what happens. Previously it >>>>>> didn't define it at all, and would throw an error. >>>>>> >>>>>> Lenient parsing typically accepts much more than the strict parsing. >>>>>> >>>>>> When parsing numbers, you may set the SignStyle to NEVER, but the sign >>>>>> will still be parsed in lenient mode >>>>>> >>>>>> When parsing text, you may select the short output format, but any >>>>>> length of text will be parsed in lenient mode. >>>>>> >>>>>> As such, it is very much in line with the behavour of the API to parse >>>>>> a much broader format than the one requested in lenient mode. (None of >>>>>> this affects strict mode). >>>>>> >>>>>> Stephen >>>>>> >>>>>> >>>>>>> In the original issue, appendOffsetId is defined as using the >>>>>>> +HH:MM:ss >>>>>>> pattern and >>>>>>> specific to ISO the MM should be allowed to be optional. There is >>>>>>> no >>>>>>> question of parsing >>>>>>> extra digits not included in the requested pattern. >>>>>>> >>>>>>> Separately, this is specifying the new lenient behavior of >>>>>>> appendOffset(pattern, noffsetText). >>>>>>> In that case, I don't think it will be understood that patterns >>>>>>> 'shorter' >>>>>>> than the input will >>>>>>> gobble up extra digits and ':'s. 
>>>>>>> >>>>>>> Roger >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 2/26/2016 9:42 AM, Stephen Colebourne wrote: >>>>>>> >>>>>>> Lenient can be however lenient we define it to be. Allowing minutes >>>>>>> and seconds to be parsed when not specified in the pattern is the key >>>>>>> part of the change. Whether the parser copes with both colons and >>>>>>> no-colons is the choice at hand here. It seems to me that since the >>>>>>> parser can easily handle figuring out whether the colon is present or >>>>>>> not, we should just allow the parser to be fully lenient. >>>>>>> >>>>>>> Stephen >>>>>>> >>>>>>> >>>>>>> On 26 February 2016 at 14:15, Roger Riggs >>>>>>> wrote: >>>>>>> >>>>>>> HI Stephen, >>>>>>> >>>>>>> How lenient is lenient supposed to be? Looking at the offset test >>>>>>> cases, >>>>>>> it >>>>>>> seems to allow minutes >>>>>>> and seconds digits to be parsed even if the pattern did not include >>>>>>> them. >>>>>>> >>>>>>> + @DataProvider(name="lenientOffsetParseData") >>>>>>> + Object[][] data_lenient_offset_parse() { >>>>>>> + return new Object[][] { >>>>>>> + {"+HH", "+01", 3600}, >>>>>>> + {"+HH", "+0101", 3660}, >>>>>>> + {"+HH", "+010101", 3661}, >>>>>>> + {"+HH", "+01", 3600}, >>>>>>> + {"+HH", "+01:01", 3660}, >>>>>>> + {"+HH", "+01:01:01", 3661}, >>>>>>> >>>>>>> Thanks, Roger >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 2/26/2016 6:16 AM, Stephen Colebourne wrote: >>>>>>> >>>>>>> I don't think this is quite right. >>>>>>> >>>>>>> if ((length > position + 3) && (text.charAt(position + 3) == ':')) { >>>>>>> parseType = 10; >>>>>>> } >>>>>>> >>>>>>> This code will *always* select type 10 (colons) if a colon is found >>>>>>> at >>>>>>> position+3. Whereas the spec now says that it should only do this if >>>>>>> the pattern is "HH". For other patterns, the colon/no-colon choice is >>>>>>> defined to be based on the pattern. 
>>>>>>> >>>>>>> That said, I'm thinking it is better to make the spec more lenient to >>>>>>> match the behaviour as implemented: >>>>>>> >>>>>>> >>>>>>> When parsing in lenient mode, only the hours are mandatory - minutes >>>>>>> and seconds are optional. If the character after the hour digits is a >>>>>>> colon >>>>>>> then the parser will parse using the pattern "HH:mm:ss", otherwise >>>>>>> the >>>>>>> parser will parse using the pattern "HHmmss". >>>>>>> >>>>>>> >>>>>>> Additional TCKDateTimeFormatterBuilder tests will be needed to >>>>>>> demonstrate the above. There should also be a test for data following >>>>>>> the lenient parse. The following should all succeed: >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> DateTimeFormatterBuilder().parseLenient().appendOffset("HH:MM").appendZoneId(); >>>>>>> "+01:00Europe/London" >>>>>>> "+0100Europe/London" >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> DateTimeFormatterBuilder().parseLenient().appendOffset("HH:MM").appendLiteral(":").appendZoneId(); >>>>>>> "+01:Europe/London" >>>>>>> >>>>>>> Note this special case, where the colon affects the parse type, but >>>>>>> is >>>>>>> not ultimately part of the offset, thus it is left to match the >>>>>>> appendLiteral(":") >>>>>>> >>>>>>> You may want to think of some additional nasty edge cases! >>>>>>> >>>>>>> Stephen >>>>>>> >>>>>>> On 25 February 2016 at 15:44, nadeesh tv >>>>>>> wrote: >>>>>>> >>>>>>> Hi all, >>>>>>> Please see the updated webrev >>>>>>> http://cr.openjdk.java.net/~ntv/8032051/webrev.02/ >>>>>>> >>>>>>> Thanks and Regards, >>>>>>> Nadeesh >>>>>>> >>>>>>> On 2/23/2016 5:17 PM, Stephen Colebourne wrote: >>>>>>> >>>>>>> Thanks for the changes. >>>>>>> >>>>>>> In `DateTimeFormatter`, the code should be >>>>>>> >>>>>>> .parseLenient() >>>>>>> .appendOffsetId() >>>>>>> .parseStrict() >>>>>>> >>>>>>> and the same in the other case. 
This ensures that existing callers >>>>>>> who >>>>>>> then embed the formatter in another formatter (like the >>>>>>> ZONED_DATE_TIME constant) are unaffected. >>>>>>> >>>>>>> >>>>>>> The logic for lenient parsing does not look right as it only handles >>>>>>> types 5 and 6. This table shows the mappings needed: >>>>>>> >>>>>>> "+HH", -> "+HHmmss" or "+HH:mm:ss" >>>>>>> "+HHmm", -> "+HHmmss", >>>>>>> "+HH:mm", -> "+HH:mm:ss", >>>>>>> "+HHMM", -> "+HHmmss", >>>>>>> "+HH:MM", -> "+HH:mm:ss", >>>>>>> "+HHMMss", -> "+HHmmss", >>>>>>> "+HH:MM:ss", -> "+HH:mm:ss", >>>>>>> "+HHMMSS", -> "+HHmmss", >>>>>>> "+HH:MM:SS", -> "+HH:mm:ss", >>>>>>> "+HHmmss", >>>>>>> "+HH:mm:ss", >>>>>>> >>>>>>> Note that the "+HH" pattern is a special case, as we don't know >>>>>>> whether to use the colon or non-colon pattern. Whether to require >>>>>>> colon or not is based on whether the next character after the HH is a >>>>>>> colon or not. >>>>>>> >>>>>>> Proposed appendOffsetId() Javadoc: >>>>>>> >>>>>>> * Appends the zone offset, such as '+01:00', to the formatter. >>>>>>> *

>>>>>>> * This appends an instruction to format/parse the offset ID to the >>>>>>> builder. >>>>>>> * This is equivalent to calling {@code appendOffset("+HH:MM:ss", >>>>>>> "Z")}. >>>>>>> * See {@link #appendOffset(String, String)} for details on formatting >>>>>>> and parsing. >>>>>>> >>>>>>> Proposed appendOffset(String, String) Javadoc: >>>>>>> >>>>>>> * During parsing, the offset... >>>>>>> >>>>>>> changed to: >>>>>>> >>>>>>> * When parsing in strict mode, the input must contain the mandatory >>>>>>> and optional elements are defined by the specified pattern. >>>>>>> * If the offset cannot be parsed then an exception is thrown unless >>>>>>> the section of the formatter is optional. >>>>>>> *

>>>>>>> * When parsing in lenient mode, only the hours are mandatory - >>>>>>> minutes >>>>>>> and seconds are optional. >>>>>>> * The colons are required if the specified pattern contains a colon. >>>>>>> * If the specified pattern is "+HH", the presence of colons is >>>>>>> determined by whether the character after the hour digits is a colon >>>>>>> or not. >>>>>>> * If the offset cannot be parsed then an exception is thrown unless >>>>>>> the section of the formatter is optional. >>>>>>> >>>>>>> thanks and sorry for delay >>>>>>> Stephen >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 11 February 2016 at 20:22, nadeesh tv >>>>>>> wrote: >>>>>>> >>>>>>> Hi all, >>>>>>> >>>>>>> Please review a fix for >>>>>>> >>>>>>> Bug Id https://bugs.openjdk.java.net/browse/JDK-8032051 >>>>>>> >>>>>>> webrev http://cr.openjdk.java.net/~ntv/8032051/webrev.01/ >>>>>>> >>>>>>> -- >>>>>>> Thanks and Regards, >>>>>>> Nadeesh TV >>>>>>> >>>>>>> -- >>>>>>> Thanks and Regards, >>>>>>> Nadeesh TV >>>>>>> >>>>>>> > > -- > Thanks and Regards, > Nadeesh TV > From scolebourne at joda.org Fri Mar 4 11:04:01 2016 From: scolebourne at joda.org (Stephen Colebourne) Date: Fri, 4 Mar 2016 11:04:01 +0000 Subject: RFR:JDK-8030864:Add an efficient getDateTimeMillis method to java.time In-Reply-To: <56D88877.4010202@oracle.com> References: <56D6C0B7.10205@oracle.com> <56D70406.7010000@oracle.com> <56D7317F.3000804@Oracle.com> <56D73637.3090006@oracle.com> <56D88877.4010202@oracle.com> Message-ID: long DAYS_0000_TO_1970 should be extracted as a private static final constant. Otherwise looks good. 
Stephen On 3 March 2016 at 18:54, nadeesh tv wrote: > Hi, > > Roger - Thanks for the comments > > Made the necessary changes in the spec > > Please see the updated webrev > http://cr.openjdk.java.net/~ntv/8030864/webrev.05/ > On 3/3/2016 12:21 AM, nadeesh tv wrote: >> >> Hi , >> >> Please see the updated webrev >> http://cr.openjdk.java.net/~ntv/8030864/webrev.03/ >> >> Thanks and Regards, >> Nadeesh >> >> On 3/3/2016 12:01 AM, Roger Riggs wrote: >>> >>> Hi Nadeesh, >>> >>> Editorial comments: >>> >>> Chronology.java: 716+ >>> "Java epoch" -> "epoch" >>> "minute, second and zoneOffset" -> "minute, second*,* and zoneOffset" >>> (add a comma; two places) >>> >>> "caluculated using given era, prolepticYear," -> "calculated using the >>> era, year-of-era," >>> "to represent" -> remove as unnecessary in all places >>> >>> IsoChronology: >>> "to represent" -> remove as unnecessary in all places >>> >>> These should be fixed to cleanup the specification. >>> >>> The implementation and the tests look fine. >>> >>> Thanks, Roger >>> >>> >>> >>> On 3/2/2016 10:17 AM, nadeesh tv wrote: >>>> >>>> Hi, >>>> Stephen, Thanks for the comments. >>>> Please see the updated webrev >>>> http://cr.openjdk.java.net/~ntv/8030864/webrev.02/ >>>> >>>> Regards, >>>> Nadeesh TV >>>> >>>> On 3/2/2016 5:41 PM, Stephen Colebourne wrote: >>>>> >>>>> Remove "Subclass can override the default implementation for a more >>>>> efficient implementation." as it adds no value. >>>>> >>>>> In the default implementation of >>>>> >>>>> epochSecond(Era era, int yearofEra, int month, int dayOfMonth, >>>>> int hour, int minute, int second, ZoneOffset zoneOffset) >>>>> >>>>> use >>>>> >>>>> prolepticYear(era, yearOfEra) >>>>> >>>>> and call the other new epochSecond method. See dateYearDay(Era era, >>>>> int yearOfEra, int dayOfYear) for the design to copy. If this is done, >>>>> then there is no need to override the method in IsoChronology. 
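The garbage-free epochSecond idea under review can be sketched as plain long arithmetic over the epoch day. This is my own illustration of the principle, not the webrev's code; the method name and field handling below are assumptions:

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class EpochSecondSketch {
    // Compute epoch seconds from date-time fields with long arithmetic,
    // avoiding intermediate LocalDateTime/Instant objects.
    static long epochSecond(int year, int month, int day,
                            int hour, int minute, int second, ZoneOffset offset) {
        long epochDay = LocalDate.of(year, month, day).toEpochDay();
        long secondOfDay = hour * 3600L + minute * 60L + second;
        return epochDay * 86400L + secondOfDay - offset.getTotalSeconds();
    }

    public static void main(String[] args) {
        ZoneOffset off = ZoneOffset.ofHours(2);
        long s = epochSecond(2016, 3, 2, 10, 30, 0, off);
        // Agrees with the allocating path:
        long ref = LocalDateTime.of(2016, 3, 2, 10, 30).toEpochSecond(off);
        System.out.println(s == ref); // true
    }
}
```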
>>>>> In the test,
>>>>>
>>>>> LocalDate.MIN.with(chronoLd)
>>>>>
>>>>> could be
>>>>>
>>>>> LocalDate.from(chronoLd)
>>>>>
>>>>> Thanks
>>>>> Stephen
>>>>>
>>>>> On 2 March 2016 at 10:30, nadeesh tv wrote:
>>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> Please review an enhancement for a garbage free epochSecond method.
>>>>>>
>>>>>> Bug ID: https://bugs.openjdk.java.net/browse/JDK-8030864
>>>>>>
>>>>>> webrev: http://cr.openjdk.java.net/~ntv/8030864/webrev.01
>>>>>>
>>>>>> --
>>>>>> Thanks and Regards,
>>>>>> Nadeesh TV
>>>>>>
>>>>
>>>
>>
>
> --
> Thanks and Regards,
> Nadeesh TV
>

From paul.sandoz at oracle.com Fri Mar 4 11:45:37 2016
From: paul.sandoz at oracle.com (Paul Sandoz)
Date: Fri, 4 Mar 2016 12:45:37 +0100
Subject: RFR-8148748: ArrayList.subList().spliterator() is not late-binding
In-Reply-To: <161394669.20160304152655@gmail.com>
References: <27476765.20160129103224@gmail.com> <564410418.20160202102813@gmail.com> <1084445298.20160203182024@gmail.com> <534936148.20160204215527@gmail.com> <1877923692.20160208205352@gmail.com> <3C9BC5F4-EEE1-4F79-943D-37822E7E5512@oracle.com> <161394669.20160304152655@gmail.com>
Message-ID:

> On 4 Mar 2016, at 10:26, Tagir F. Valeev wrote:
>
> Hello!
>
>>> I'm just worrying a little that my changes might conflict with Ivan
>>> Gerasimov's pending 8079136 issue, so probably it would be better to
>>> wait till it's reviewed and pushed?
>
> Ivan said that 8079136 is stalled for a while, so I decided to
> continue working on 8148748.

Ok. Hopefully Ivan is unblocked now, but I don't think it matters much which one gets in first now, given the implementation approach for this patch.

> Here's updated webrev:
>
> http://cr.openjdk.java.net/~tvaleev/webrev/8148748/r2/
>

Looks good. I especially like:

125 addCollection(l.andThen(list -> list.subList(0, list.size())));

Can you also update SpliteratorTraversingAndSplittingTest?

void addList(Function, ?
extends List> l) { // @@@ If collection is instance of List then add sub-list tests addCollection(l); } > PS> Re: maintenance, ordinarily i would agree with you, but ArrayList > PS> is kind of special being probably the most used collection class. > PS> Using an anon-impl for SubList.spliterator seem ok in that respect. > > Now it's separate anonymous class as you suggested. > ArrayListSpliterator is untouched. Note that trySplit() can return > original ArrayListSpliterator as after the binding their behavior is > compatible. > Very nice, might be worth an extra comment noting that. Up to you. Paul. From daniel.fuchs at oracle.com Fri Mar 4 11:46:23 2016 From: daniel.fuchs at oracle.com (Daniel Fuchs) Date: Fri, 4 Mar 2016 12:46:23 +0100 Subject: RFR 8150840: Add an internal system property to control the default level of System.Logger when java.logging is not present. Message-ID: <56D9758F.6000106@oracle.com> Hi, Please find below a patch for: https://bugs.openjdk.java.net/browse/JDK-8150840 8150840: Add an internal system property to control the default level of System.Logger when java.logging is not present. This patch also introduces a better separation between the SimpleConsoleLogger (created by the DefaultLoggerFinder when java.logging is not there), and the SurrogateLogger, which emulates the behavior of java.util.logging.Logger when java.logging is present but there is no custom configuration (used to be PlatformLogger.DefaultLoggerProxy). best regards, -- daniel From daniel.fuchs at oracle.com Fri Mar 4 11:48:16 2016 From: daniel.fuchs at oracle.com (Daniel Fuchs) Date: Fri, 4 Mar 2016 12:48:16 +0100 Subject: RFR 8150840: Add an internal system property to control the default level of System.Logger when java.logging is not present. 
Message-ID: <56D97600.5080105@oracle.com> [Resending with a link to the patch] Hi, Please find below a patch for: https://bugs.openjdk.java.net/browse/JDK-8150840 8150840: Add an internal system property to control the default level of System.Logger when java.logging is not present. http://cr.openjdk.java.net/~dfuchs/webrev_8150840/webrev.00 This patch also introduces a better separation between the SimpleConsoleLogger (created by the DefaultLoggerFinder when java.logging is not there), and the SurrogateLogger, which emulates the behavior of java.util.logging.Logger when java.logging is present but there is no custom configuration (used to be PlatformLogger.DefaultLoggerProxy). best regards, -- daniel From vaibhav.x.choudhary at oracle.com Fri Mar 4 13:19:19 2016 From: vaibhav.x.choudhary at oracle.com (vaibhav x.choudhary) Date: Fri, 04 Mar 2016 18:49:19 +0530 Subject: [9] RFR: 8151182: HttpHeaders.allValues should return unmodifiable List as per JavaDoc Message-ID: <56D98B57.1050400@oracle.com> Hi, Please review :- Review Link :- http://cr.openjdk.java.net/~ntv/vaibhav/JDK8151182/webrev.00/ Bug ID: https://bugs.openjdk.java.net/browse/JDK-8151182 -- Thank You, Vaibhav Choudhary http://blogs.oracle.com/vaibhav From aleksey.shipilev at oracle.com Fri Mar 4 13:28:19 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Fri, 4 Mar 2016 16:28:19 +0300 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <56D74B4B.9090708@oracle.com> References: <56D73477.4030100@oracle.com> <56D737DD.7000700@oracle.com> <56D745B7.4040508@oracle.com> <56D74B4B.9090708@oracle.com> Message-ID: <56D98D73.4010302@oracle.com> On 03/02/2016 11:21 PM, Aleksey Shipilev wrote: > On 03/02/2016 10:57 PM, Coleen Phillimore wrote: >> On 3/2/16 1:58 PM, Aleksey Shipilev wrote: >>> Is there an underlying reason why we can't return the pre-filled >>> StackTraceElements[] array from the JVM_GetStackTraceElements to begin >>> with? 
This will avoid leaking StackTraceElement constructor into >>> standard library, *and* allows to make StackTraceElement fields final. >>> Taking stuff back from the standard library is hard, if not impossible, >>> so we better expose as little as possible. >> >> We measured that it's faster to allocate the StackTraceElement array >> in Java and it seems cleaner to the Java guys. It came from similar >> code we've been prototyping for StackFrameInfo. > > OK, it's not perfectly clean from implementation standpoint, but this > RFE might not be the best opportunity to polish that. At least make > StackTraceElement constructor private (better), or package-private > (acceptable), and then we are good to go. Okay, here's a little exploration: http://cr.openjdk.java.net/~shade/8150778/StackTraceBench.java The difference between allocating in Java code, and allocating on VM side is marginal on my machine, but I think we are down to native memset performance when allocating on VM side. Therefore, I'd probably stay with Java allocation which codegen we absolutely control. Aside: see the last experiment, avoiding StringTable::intern (shows in profiles a lot!) trims down construction costs down even further. I'd think that is a worthwhile improvement to consider. Cheers, -Aleksey From claes.redestad at oracle.com Fri Mar 4 13:29:19 2016 From: claes.redestad at oracle.com (Claes Redestad) Date: Fri, 4 Mar 2016 14:29:19 +0100 Subject: [9] RFR: 8151182: HttpHeaders.allValues should return unmodifiable List as per JavaDoc In-Reply-To: <56D98B57.1050400@oracle.com> References: <56D98B57.1050400@oracle.com> Message-ID: <56D98DAF.7040201@oracle.com> Hi, even with this code, I think it'd still be possible to do map().get(name) to get to the underlying, mutable list. It seems we have to ensure lists mapped in headers are made unmodifiable, something the method makeUnmodifiable seems to ensure, albeit in a not very thread-safe manner. 
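Claes's point can be shown in a few lines (hypothetical names, not the HttpHeaders code itself): wrapping the map does not protect the value lists, so each list must be wrapped as well.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UnmodifiableValuesDemo {
    public static void main(String[] args) {
        Map<String, List<String>> headers = new HashMap<>();
        headers.put("Accept", new ArrayList<>(Arrays.asList("text/html")));

        // An unmodifiable *map* still hands out the original mutable lists.
        Map<String, List<String>> view = Collections.unmodifiableMap(headers);
        view.get("Accept").add("mutated");          // succeeds!
        System.out.println(headers.get("Accept"));  // [text/html, mutated]

        // The value lists themselves must be wrapped too.
        headers.replaceAll((k, v) -> Collections.unmodifiableList(v));
        try {
            headers.get("Accept").add("again");
        } catch (UnsupportedOperationException e) {
            System.out.println("list is now unmodifiable");
        }
    }
}
```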
/Claes

On 2016-03-04 14:19, vaibhav x.choudhary wrote:
> Hi,
>
> Please review :-
>
> Review Link :- http://cr.openjdk.java.net/~ntv/vaibhav/JDK8151182/webrev.00/
> Bug ID: https://bugs.openjdk.java.net/browse/JDK-8151182
>

From paul.sandoz at oracle.com Fri Mar 4 14:05:55 2016
From: paul.sandoz at oracle.com (Paul Sandoz)
Date: Fri, 4 Mar 2016 15:05:55 +0100
Subject: [9] RFR: 8151182: HttpHeaders.allValues should return unmodifiable List as per JavaDoc
In-Reply-To: <56D98B57.1050400@oracle.com>
References: <56D98B57.1050400@oracle.com>
Message-ID:

Hi Vaibhav,

This will not work, as Claes points out.

You also might wanna check if there are tests in place asserting unmodifiability; if there are none you could add some. Michael can point you in the right direction.

Follow the trail of HttpHeaders1.makeUnmodifiable(), which is used to transition map values from mutable to unmodifiable:

@Override
public void makeUnmodifiable() {
    if (isUnmodifiable)
        return;
    Set<String> keys = new HashSet<>(headers.keySet());
    for (String key : keys) {
        List<String> values = headers.remove(key);
        if (values != null) {
            headers.put(key, Collections.unmodifiableList(values));
        }
    }
    isUnmodifiable = true;
}

In fact duplication of the key set can be avoided if one does this:

Iterator<Map.Entry<String, List<String>>> ie = headers.entrySet().iterator();
while (ie.hasNext()) {
    Map.Entry<String, List<String>> e = ie.next();
    if (e.getValue() != null) {
        e.setValue(Collections.unmodifiableList(e.getValue()));
    } else {
        ie.remove();
    }
}

However, I suspect this could be simplified to:

headers.replaceAll((k, v) -> Collections.unmodifiableList(v));

if there are never any explicit null values placed in the map, which should be the case as that is really an anti-pattern.

Also this:

private List<String> getOrCreate(String name) {
    List<String> l = headers.get(name);
    if (l == null) {
        l = new LinkedList<>();
        headers.put(name, l);
    }
    return l;
}

can be replaced with this:

return headers.computeIfAbsent(name, k -> new LinkedList<>());

Paul.
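Paul's two suggested replacements compose naturally. A minimal standalone sketch (the class and method names are mine; the real HttpHeaders1 keeps its Map<String, List<String>> field the same way):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

public class HeadersSketch {
    private final Map<String, List<String>> headers = new HashMap<>();
    private boolean isUnmodifiable;

    // Replaces the get-or-create-and-put dance with computeIfAbsent.
    void add(String name, String value) {
        headers.computeIfAbsent(name, k -> new LinkedList<>()).add(value);
    }

    // Replaces the remove/put loop: wrap every value list in place.
    void makeUnmodifiable() {
        if (isUnmodifiable) return;
        headers.replaceAll((k, v) -> Collections.unmodifiableList(v));
        isUnmodifiable = true;
    }

    List<String> allValues(String name) {
        return headers.get(name);
    }
}
```

Note that replaceAll would throw a NullPointerException if any value were null, which matches Paul's caveat about never storing explicit null values.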
> On 4 Mar 2016, at 14:19, vaibhav x.choudhary wrote: > > Hi, > > Please review :- > > Review Link :- http://cr.openjdk.java.net/~ntv/vaibhav/JDK8151182/webrev.00/ > Bug ID: https://bugs.openjdk.java.net/browse/JDK-8151182 > > -- > Thank You, > Vaibhav Choudhary > http://blogs.oracle.com/vaibhav > From amy.lu at oracle.com Fri Mar 4 14:35:07 2016 From: amy.lu at oracle.com (Amy Lu) Date: Fri, 4 Mar 2016 22:35:07 +0800 Subject: JDK 9 RFR of JDK-8151286: Remove intermittent key from TestLocalTime.java and move back to tier1 Message-ID: <56D99D1B.3070301@oracle.com> java/util/zip/TestLocalTime.java This test was failing intermittently (reported in JDK-8135108). JDK-8135108 has been resolved and no open bug (no failure reported) till now. This patch is to remove @key intermittent from the test and move it back to tier1. bug: https://bugs.openjdk.java.net/browse/JDK-8151286 webrev: http://cr.openjdk.java.net/~amlu/8151286/webrev.01/ Thanks, Amy --- old/test/TEST.groups 2016-03-04 22:22:13.000000000 +0800 +++ new/test/TEST.groups 2016-03-04 22:22:13.000000000 +0800 @@ -28,7 +28,6 @@ tier1 = \ :jdk_lang \ -java/lang/ProcessHandle/TreeTest.java \ - -java/util/zip/TestLocalTime.java \ :jdk_util \ -java/util/WeakHashMap/GCDuringIteration.java \ -java/util/concurrent/ThreadPoolExecutor/ConfigChanges.java \ @@ -40,7 +39,6 @@ tier2 = \ java/lang/ProcessHandle/TreeTest.java \ - java/util/zip/TestLocalTime.java \ java/util/WeakHashMap/GCDuringIteration.java \ java/util/concurrent/ThreadPoolExecutor/ConfigChanges.java \ java/util/concurrent/forkjoin/FJExceptionTableLeak.java \ --- old/test/java/util/zip/TestLocalTime.java 2016-03-04 22:22:13.000000000 +0800 +++ new/test/java/util/zip/TestLocalTime.java 2016-03-04 22:22:13.000000000 +0800 @@ -24,7 +24,6 @@ /* * @test * @bug 8075526 8135108 - * @key intermittent * @summary Test timestamp via ZipEntry.get/setTimeLocal() */ From amaembo at gmail.com Fri Mar 4 14:38:50 2016 From: amaembo at gmail.com (Tagir F. 
Valeev) Date: Fri, 4 Mar 2016 20:38:50 +0600 Subject: RFR-8148748: ArrayList.subList().spliterator() is not late-binding In-Reply-To: References: <27476765.20160129103224@gmail.com> <564410418.20160202102813@gmail.com> <1084445298.20160203182024@gmail.com> <534936148.20160204215527@gmail.com> <1877923692.20160208205352@gmail.com> <3C9BC5F4-EEE1-4F79-943D-37822E7E5512@oracle.com> <161394669.20160304152655@gmail.com> Message-ID: <334584844.20160304203850@gmail.com> Hello! Thank you for the review! Here's updated webrev: http://cr.openjdk.java.net/~tvaleev/webrev/8148748/r3/ PS> Looks good. I especially like: PS> 125 addCollection(l.andThen(list -> list.subList(0, list.size()))); PS> Can you also update SpliteratorTraversingAndSplittingTest? PS> void addList(Function, ? extends List> l) { PS> // @@@ If collection is instance of List then add sub-list tests PS> addCollection(l); PS> } Done. >> Now it's separate anonymous class as you suggested. >> ArrayListSpliterator is untouched. Note that trySplit() can return >> original ArrayListSpliterator as after the binding their behavior is >> compatible. >> PS> Very nice, might be worth an extra comment noting that. Up to you. Short comments added. With best regards, Tagir Valeev. From amy.lu at oracle.com Fri Mar 4 14:50:42 2016 From: amy.lu at oracle.com (Amy Lu) Date: Fri, 4 Mar 2016 22:50:42 +0800 Subject: JDK 9 RFR of JDK-8151263: Mark java/rmi test LeaseCheckInterval.java as intermittently failing Message-ID: <56D9A0C2.5000204@oracle.com> java/rmi/server/Unreferenced/leaseCheckInterval/LeaseCheckInterval.java This test is known to fail intermittently (JDK-8078587). This patch is to mark the test accordingly with keyword 'intermittent'. 
bug: https://bugs.openjdk.java.net/browse/JDK-8151263 webrev: http://cr.openjdk.java.net/~amlu/8151263/webrev.00/ Thanks, Amy --- old/test/java/rmi/server/Unreferenced/leaseCheckInterval/LeaseCheckInterval.java 2016-03-04 22:49:09.000000000 +0800 +++ new/test/java/rmi/server/Unreferenced/leaseCheckInterval/LeaseCheckInterval.java 2016-03-04 22:49:09.000000000 +0800 @@ -1,5 +1,5 @@ /* - * Copyright (c) 2001, 2012, Oracle and/or its affiliates. All rights reserved. + * Copyright (c) 2001, 2016, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it @@ -43,6 +43,7 @@ * java.rmi/sun.rmi.transport.tcp * @build TestLibrary JavaVM LeaseCheckInterval_Stub SelfTerminator * @run main/othervm LeaseCheckInterval + * @key intermittent */ import java.rmi.Remote; From michael.x.mcmahon at oracle.com Fri Mar 4 15:14:35 2016 From: michael.x.mcmahon at oracle.com (Michael McMahon) Date: Fri, 4 Mar 2016 15:14:35 +0000 Subject: [9] RFR: 8151182: HttpHeaders.allValues should return unmodifiable List as per JavaDoc In-Reply-To: References: <56D98B57.1050400@oracle.com> Message-ID: <56D9A65B.1040306@oracle.com> Yes, there is a mutability test already. We will have to fix the thread safety problem (and also the fact HttpHeaders1 was left public by mistake). Probably will separate the mutable and immutable types completely. Vaibhav, if you'd like to do it, you can define a package private implementation of HttpHeaders whose constructor takes a HttpHeadersImpl and uses final fields to ensure thread safety, rather than using makeUnmodifiable(). I'll contact you with some other ideas. I'll open a new report on this point and we probably should review on net-dev - Michael. On 04/03/16 14:05, Paul Sandoz wrote: > Hi Vaibhav, > > This will not work, as Claes points out. 
You also might wanna check if there are tests in place asserting unmodifiability; if there are none you could add some. Michael can point you in the right direction. > > Follow the trail of HttpHeaders1.makeUnmodifiable(), which is used to transition map values from mutable to unmodifiable: > > @Override > public void makeUnmodifiable() { > if (isUnmodifiable) > return; > > Set<String> keys = new HashSet<>(headers.keySet()); > for (String key : keys) { > List<String> values = headers.remove(key); > if (values != null) { > headers.put(key, Collections.unmodifiableList(values)); > } > } > isUnmodifiable = true; > } > > In fact duplication of the key set can be avoided if one does this: > > Iterator<Map.Entry<String, List<String>>> ie = headers.entrySet().iterator(); > while (ie.hasNext()) { > Map.Entry<String, List<String>> e = ie.next(); > if (e.getValue() != null) { > e.setValue(Collections.unmodifiableList(e.getValue())); > } > else { > ie.remove(); > } > } > > However, I suspect this could be simplified to: > > headers.replaceAll((k, v) -> Collections.unmodifiableList(v)); > > If there are never any explicit null values placed in the map, which should be the case as that is really an anti-pattern. > > Also this: > > private List<String> getOrCreate(String name) { > List<String> l = headers.get(name); > if (l == null) { > l = new LinkedList<>(); > headers.put(name, l); > } > return l; > } > > can be replaced with this: > > return headers.computeIfAbsent(name, k -> new LinkedList<>()); > > Paul.
> >> On 4 Mar 2016, at 14:19, vaibhav x.choudhary wrote: >> >> Hi, >> >> Please review :- >> >> Review Link :- http://cr.openjdk.java.net/~ntv/vaibhav/JDK8151182/webrev.00/ >> Bug ID: https://bugs.openjdk.java.net/browse/JDK-8151182 >> >> -- >> Thank You, >> Vaibhav Choudhary >> http://blogs.oracle.com/vaibhav >> From paul.sandoz at oracle.com Fri Mar 4 15:18:49 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Fri, 4 Mar 2016 16:18:49 +0100 Subject: RFR-8148748: ArrayList.subList().spliterator() is not late-binding In-Reply-To: <334584844.20160304203850@gmail.com> References: <27476765.20160129103224@gmail.com> <564410418.20160202102813@gmail.com> <1084445298.20160203182024@gmail.com> <534936148.20160204215527@gmail.com> <1877923692.20160208205352@gmail.com> <3C9BC5F4-EEE1-4F79-943D-37822E7E5512@oracle.com> <161394669.20160304152655@gmail.com> <334584844.20160304203850@gmail.com> Message-ID: > On 4 Mar 2016, at 15:38, Tagir F. Valeev wrote: > > Hello! > > Thank you for the review! > Thanks. I just realised there are some subtleties where if the top-level list is reduced in size the spliterator of a sublist may on traversal throw an ArrayIndexOutOfBoundsException rather than ConcurrentModificationException. This can already occur for a partially traversed top-level list spliterator, so I am wondering how much we should care. Arguably in the sublist case there is another level of indirection, where errors creep in before traversal, so it suggests we should additionally check the mod count at the start of the traversal methods, and it's probably ok on a best-effort basis to do this for the sublist spliterator and not its splits. Separately, I am not sure why the SubList.iterator has to check that the offset + 1 is within bounds, since expectedModCount = ArrayList.this.modCount should be false. Paul. > Here's updated webrev: > http://cr.openjdk.java.net/~tvaleev/webrev/8148748/r3/ > > PS> Looks good.
I especially like: > > PS> 125 addCollection(l.andThen(list -> list.subList(0, list.size()))); > > PS> Can you also update SpliteratorTraversingAndSplittingTest? > > PS> void addList(Function<Collection<T>, ? extends List<T>> l) { > PS> // @@@ If collection is instance of List then add sub-list tests > PS> addCollection(l); > PS> } > > Done. > >>> Now it's separate anonymous class as you suggested. >>> ArrayListSpliterator is untouched. Note that trySplit() can return >>> original ArrayListSpliterator as after the binding their behavior is >>> compatible. >>> > > PS> Very nice, might be worth an extra comment noting that. Up to you. > > Short comments added. > > With best regards, > Tagir Valeev. > From Roger.Riggs at Oracle.com Fri Mar 4 15:56:23 2016 From: Roger.Riggs at Oracle.com (Roger Riggs) Date: Fri, 4 Mar 2016 10:56:23 -0500 Subject: [DING] Re: [PING] Potential infinite waiting at JMXConnection#createConnection In-Reply-To: References: <56D89503.1080909@Oracle.com> Message-ID: <56D9B027.5030305@Oracle.com> Hi Yuji, The patch and reproducer have been attached to the issue 8151212[1]. Thanks, Roger [1] https://bugs.openjdk.java.net/browse/JDK-8151212 On 3/3/2016 9:06 PM, KUBOTA Yuji wrote: > Hi Roger, > > Thank you for your help! > My patch and reproducer are as below. > > From Roger.Riggs at Oracle.com Fri Mar 4 16:19:00 2016 From: Roger.Riggs at Oracle.com (Roger Riggs) Date: Fri, 4 Mar 2016 11:19:00 -0500 Subject: RFR 8150840: Add an internal system property to control the default level of System.Logger when java.logging is not present. In-Reply-To: <56D97600.5080105@oracle.com> References: <56D97600.5080105@oracle.com> Message-ID: <56D9B574.6090205@Oracle.com> Hi Daniel, Good idea. SimpleConsoleLogger.java: Some of the property accesses could use the existing property actions instead of anonymous inner classes.
static Level getDefaultLevel() { String levelName = AccessController.doPrivileged( new sun.security.action.GetPropertyAction("jdk.system.logger.level", "INFO")); ... Roger On 3/4/2016 6:48 AM, Daniel Fuchs wrote: > [Resending with a link to the patch] > > Hi, > > Please find below a patch for: > > https://bugs.openjdk.java.net/browse/JDK-8150840 > 8150840: Add an internal system property to control the default > level of System.Logger when java.logging is not present. > > http://cr.openjdk.java.net/~dfuchs/webrev_8150840/webrev.00 > > This patch also introduces a better separation between the > SimpleConsoleLogger (created by the DefaultLoggerFinder > when java.logging is not there), and the SurrogateLogger, > which emulates the behavior of java.util.logging.Logger > when java.logging is present but there is no custom > configuration (used to be PlatformLogger.DefaultLoggerProxy). > > best regards, > > -- daniel From amaembo at gmail.com Fri Mar 4 16:42:35 2016 From: amaembo at gmail.com (Tagir F. Valeev) Date: Fri, 4 Mar 2016 22:42:35 +0600 Subject: RFR-8148748: ArrayList.subList().spliterator() is not late-binding In-Reply-To: References: <27476765.20160129103224@gmail.com> <564410418.20160202102813@gmail.com> <1084445298.20160203182024@gmail.com> <534936148.20160204215527@gmail.com> <1877923692.20160208205352@gmail.com> <3C9BC5F4-EEE1-4F79-943D-37822E7E5512@oracle.com> <161394669.20160304152655@gmail.com> <334584844.20160304203850@gmail.com> Message-ID: <1708058894.20160304224235@gmail.com> Hello! AIOOBE is possible for ArrayList itself as well. 
E.g.: ArrayList<Integer> test = new ArrayList<>(Arrays.asList(1,2,3,4)); Spliterator<Integer> spltr = test.spliterator(); spltr.tryAdvance(System.out::println); test.clear(); test.trimToSize(); spltr.tryAdvance(System.out::println); Result (both in Java-8 and Java-9): 1 Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1 at java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1398) So this is not a subList-specific problem. Seems that forEachRemaining is not affected by this, due to the additional length check and making a local copy of the elementData array. At least I don't see a way to make forEachRemaining (either of the subList spliterator or the main ArrayList spliterator) throw AIOOBE. Probably I'm missing something. However it can traverse some unexpected nulls before throwing CME if it was shrunk: ArrayList<Integer> test = new ArrayList<>(Arrays.asList(1,2,3,4)); Spliterator<Integer> spltr = test.spliterator(); spltr.tryAdvance(System.out::println); test.clear(); spltr.forEachRemaining(System.out::println); Output: 1 null null null Exception in thread "main" java.util.ConcurrentModificationException at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1428) This may unexpectedly throw NPE if the forEachRemaining Consumer unconditionally dereferences them, as the program logic suggests that null is impossible here. So the user will see NPE instead of CME. Probably it could be logged as a separate issue. My patch does not cause any regression here (at least, I hope so). Probably some other collections should be checked for similar corner cases as well. With best regards, Tagir Valeev. >> On 4 Mar 2016, at 15:38, Tagir F. Valeev wrote: >> >> Hello! >> >> Thank you for the review! >> PS> Thanks. PS> I just realised there are some subtleties where if the top-level PS> list is reduced in size the spliterator of a sublist may on PS> traversal throw an ArrayIndexOutOfBoundsException rather than ConcurrentModificationException.
PS> This can already occur for a partially traversed top-level list PS> spliterator, so I am wondering how much we should care. Arguably PS> in the sublist case there is another level of indirection, where PS> errors creep in before traversal, so it suggests we should PS> additionally check the mod count at the start of the traversal PS> methods, and it's probably ok on a best-effort basis to do this PS> for the sublist spliterator and not its splits. PS> Separately, I am not sure why the SubList.iterator has to check PS> that the offset + 1 is within bounds, since expectedModCount = PS> ArrayList.this.modCount should be false. PS> Paul. >> Here's updated webrev: >> http://cr.openjdk.java.net/~tvaleev/webrev/8148748/r3/ >> >> PS> Looks good. I especially like: >> >> PS> 125 addCollection(l.andThen(list -> list.subList(0, list.size()))); >> >> PS> Can you also update SpliteratorTraversingAndSplittingTest? >> >> PS> void addList(Function<Collection<T>, ? extends List<T>> l) { >> PS> // @@@ If collection is instance of List then add sub-list tests >> PS> addCollection(l); >> PS> } >> >> Done. >> >>>> Now it's separate anonymous class as you suggested. >>>> ArrayListSpliterator is untouched. Note that trySplit() can return >>>> original ArrayListSpliterator as after the binding their behavior is >>>> compatible. >>>> >> >> PS> Very nice, might be worth an extra comment noting that. Up to you. >> >> Short comments added. >> >> With best regards, >> Tagir Valeev. >> From daniel.fuchs at oracle.com Fri Mar 4 17:05:22 2016 From: daniel.fuchs at oracle.com (Daniel Fuchs) Date: Fri, 4 Mar 2016 18:05:22 +0100 Subject: RFR 8150840: Add an internal system property to control the default level of System.Logger when java.logging is not present. In-Reply-To: <56D9B574.6090205@Oracle.com> References: <56D97600.5080105@oracle.com> <56D9B574.6090205@Oracle.com> Message-ID: <56D9C052.4070703@oracle.com> Hi Roger, Yes that's a good remark: Applied it to SimpleConsoleLogger.java.
http://cr.openjdk.java.net/~dfuchs/webrev_8150840/webrev.01/ -- daniel On 04/03/16 17:19, Roger Riggs wrote: > Hi Daniel, > > Good idea. > > SimpleConsoleLogger.java: > Some of the property accesses could use the existing property > actions instead of anonymous inner classes. > > static Level getDefaultLevel() { > String levelName = AccessController.doPrivileged( new > sun.security.action.GetPropertyAction("jdk.system.logger.level", > "INFO")); ... > > Roger > > > > On 3/4/2016 6:48 AM, Daniel Fuchs wrote: >> [Resending with a link to the patch] >> >> Hi, >> >> Please find below a patch for: >> >> https://bugs.openjdk.java.net/browse/JDK-8150840 >> 8150840: Add an internal system property to control the default >> level of System.Logger when java.logging is not present. >> >> http://cr.openjdk.java.net/~dfuchs/webrev_8150840/webrev.00 >> >> This patch also introduces a better separation between the >> SimpleConsoleLogger (created by the DefaultLoggerFinder >> when java.logging is not there), and the SurrogateLogger, >> which emulates the behavior of java.util.logging.Logger >> when java.logging is present but there is no custom >> configuration (used to be PlatformLogger.DefaultLoggerProxy). >> >> best regards, >> >> -- daniel > From Roger.Riggs at Oracle.com Fri Mar 4 17:05:51 2016 From: Roger.Riggs at Oracle.com (Roger Riggs) Date: Fri, 4 Mar 2016 12:05:51 -0500 Subject: RFR 8150840: Add an internal system property to control the default level of System.Logger when java.logging is not present. In-Reply-To: <150506012761977100@unknownmsgid> References: <56D97600.5080105@oracle.com> <56D9B574.6090205@Oracle.com> <150506012761977100@unknownmsgid> Message-ID: <56D9C06F.4050200@Oracle.com> I thought about that also but it is one of those cases where it is 'too early' for method refs. Roger On 3/4/2016 11:56 AM, David Lloyd wrote: > Can they be method refs, or is this one of those cases where it could > be early boot where none of that stuff works yet?
> -- > - DML > > >> On Mar 4, 2016, at 10:19 AM, Roger Riggs wrote: >> >> Hi Daniel, >> >> Good idea. >> >> SimpleConsolerLogger.java: >> Some of the property accesses could use the existing property actions instead of anonymous inner classes. >> >> static Level getDefaultLevel() { >> String levelName = AccessController.doPrivileged( new sun.security.action.GetPropertyAction("jdk.system.logger.level", "INFO")); ... >> >> Roger >> >> >> >>> On 3/4/2016 6:48 AM, Daniel Fuchs wrote: >>> [Resending with a link to the patch] >>> >>> Hi, >>> >>> Please find below a patch for: >>> >>> https://bugs.openjdk.java.net/browse/JDK-8150840 >>> 8150840: Add an internal system property to control the default >>> level of System.Logger when java.logging is not present. >>> >>> http://cr.openjdk.java.net/~dfuchs/webrev_8150840/webrev.00 >>> >>> This patch also introduces a better separation between the >>> SimpleConsoleLogger (created by the DefaultLoggerFinder >>> when java.logging is not there), and the SurrogateLogger, >>> which emulates the behavior of java.util.logging.Logger >>> when java.logging is present but there is no custom >>> configuration (used to be PlatformLogger.DefaultLoggerProxy). >>> >>> best regards, >>> >>> -- daniel From paul.sandoz at oracle.com Fri Mar 4 17:07:09 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Fri, 4 Mar 2016 18:07:09 +0100 Subject: RFR-8148748: ArrayList.subList().spliterator() is not late-binding In-Reply-To: <1708058894.20160304224235@gmail.com> References: <27476765.20160129103224@gmail.com> <564410418.20160202102813@gmail.com> <1084445298.20160203182024@gmail.com> <534936148.20160204215527@gmail.com> <1877923692.20160208205352@gmail.com> <3C9BC5F4-EEE1-4F79-943D-37822E7E5512@oracle.com> <161394669.20160304152655@gmail.com> <334584844.20160304203850@gmail.com> <1708058894.20160304224235@gmail.com> Message-ID: <84B9DFB1-6BDF-4B41-B2CC-7B1C10A148A4@oracle.com> > On 4 Mar 2016, at 17:42, Tagir F. Valeev wrote: > > Hello! 
> > AIOOBE is possible for ArrayList itself as well. E.g.: > > ArrayList<Integer> test = new ArrayList<>(Arrays.asList(1,2,3,4)); > Spliterator<Integer> spltr = test.spliterator(); > spltr.tryAdvance(System.out::println); > test.clear(); > test.trimToSize(); > spltr.tryAdvance(System.out::println); > > Result (both in Java-8 and Java-9): > > 1 > Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1 > at java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1398) > > So this is not a subList-specific problem. > Yes, that was the first aspect I mentioned. Generally for bulk traversal we opted to throw the CME at the end, which I think is a reasonable compromise, especially since mixed traversal is an edge case. I was not suggesting we revisit that. I was concerned about the sublist case, then I recalled a co-mod check is performed *before* construction: 1282 public Spliterator<E> spliterator() { 1283 checkForComodification(); 1284 1285 // ArrayListSpliterator is not used because late-binding logic 1286 // is different here 1287 return new Spliterator<>() { I forgot about that! Sorry for the noise. We are good. I will push on Monday. Thanks, Paul. > Seems that forEachRemaining is not affected by this, due to additional > length check and making the local copy of elementData array. At least > I don't see the way to make forEachRemaining (either of subList > spliterator or main ArrayList spliterator) throwing AIOOBE. Probably > I'm missing something.
However it can traverse some unexpected nulls > before throwing CME if it was shrunk: > > ArrayList<Integer> test = new ArrayList<>(Arrays.asList(1,2,3,4)); > Spliterator<Integer> spltr = test.spliterator(); > spltr.tryAdvance(System.out::println); > test.clear(); > spltr.forEachRemaining(System.out::println); > > Output: > > 1 > null > null > null > Exception in thread "main" java.util.ConcurrentModificationException > at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1428) > > This may unexpectedly throw NPE if the forEachRemaining Consumer > unconditionally dereferences them as program logic suggests that null > is impossible here. So the user will see NPE instead of CME. > > Probably it could be logged as a separate issue. My patch does not cause > any regression here (at least, I hope so). Probably some other > collections should be checked for similar corner cases as well. > > With best regards, > Tagir Valeev. > >>> On 4 Mar 2016, at 15:38, Tagir F. Valeev wrote: >>> >>> Hello! >>> >>> Thank you for the review! >>> > > PS> Thanks. > > PS> I just realised there are some subtleties where if the top-level > PS> list is reduced in size the spliterator of a sublist may on > PS> traversal throw an ArrayIndexOutOfBoundsException rather than ConcurrentModificationException. > > PS> This can already occur for a partially traversed top-level list > PS> spliterator, so I am wondering how much we should care. Arguably > PS> in the sublist case there is another level of indirection, where > PS> errors creep in before traversal, so it suggests we should > PS> additionally check the mod count at the start of the traversal > PS> methods, and it's probably ok on a best-effort basis to do this > PS> for the sublist spliterator and not its splits. > > PS> Separately, I am not sure why the SubList.iterator has to check > PS> that the offset + 1 is within bounds, since expectedModCount = > PS> ArrayList.this.modCount should be false. > > PS> Paul.
> >>> Here's updated webrev: >>> http://cr.openjdk.java.net/~tvaleev/webrev/8148748/r3/ >>> >>> PS> Looks good. I especially like: >>> >>> PS> 125 addCollection(l.andThen(list -> list.subList(0, list.size()))); >>> >>> PS> Can you also update SpliteratorTraversingAndSplittingTest? >>> >>> PS> void addList(Function<Collection<T>, ? extends List<T>> l) { >>> PS> // @@@ If collection is instance of List then add sub-list tests >>> PS> addCollection(l); >>> PS> } >>> >>> Done. >>> >>>>> Now it's separate anonymous class as you suggested. >>>>> ArrayListSpliterator is untouched. Note that trySplit() can return >>>>> original ArrayListSpliterator as after the binding their behavior is >>>>> compatible. >>>>> >>> >>> PS> Very nice, might be worth an extra comment noting that. Up to you. >>> >>> Short comments added. >>> >>> With best regards, >>> Tagir Valeev. >>> > From kubota.yuji at gmail.com Fri Mar 4 18:19:41 2016 From: kubota.yuji at gmail.com (KUBOTA Yuji) Date: Sat, 5 Mar 2016 03:19:41 +0900 Subject: [DING] Re: [PING] Potential infinite waiting at JMXConnection#createConnection In-Reply-To: <56D9B027.5030305@Oracle.com> References: <56D89503.1080909@Oracle.com> <56D9B027.5030305@Oracle.com> Message-ID: Hi Roger and all, Thanks for your help sharing the patch and the link here :) For the assignee, more details about the attachments are at the link below. http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-February/038593.html Thanks, Yuji 2016-03-05 0:56 GMT+09:00 Roger Riggs : > Hi Yuji, > > The patch and reproducer have been attached to the issue 8151212[1]. > > Thanks, Roger > > [1] https://bugs.openjdk.java.net/browse/JDK-8151212 > > On 3/3/2016 9:06 PM, KUBOTA Yuji wrote: > > Hi Roger, > > Thank you for your help! > My patch and reproducer are as below.
> > > From joe.darcy at oracle.com Fri Mar 4 18:24:59 2016 From: joe.darcy at oracle.com (joe darcy) Date: Fri, 4 Mar 2016 10:24:59 -0800 Subject: JDK 9 RFR of JDK-8151263: Mark java/rmi test LeaseCheckInterval.java as intermittently failing In-Reply-To: <56D9A0C2.5000204@oracle.com> References: <56D9A0C2.5000204@oracle.com> Message-ID: <56D9D2FB.5000008@oracle.com> Looks fine Amy; thanks, -Joe On 3/4/2016 6:50 AM, Amy Lu wrote: > java/rmi/server/Unreferenced/leaseCheckInterval/LeaseCheckInterval.java > > This test is known to fail intermittently (JDK-8078587). This patch is > to mark the test accordingly with keyword 'intermittent'. > > bug: https://bugs.openjdk.java.net/browse/JDK-8151263 > webrev: http://cr.openjdk.java.net/~amlu/8151263/webrev.00/ > > Thanks, > Amy > > --- > old/test/java/rmi/server/Unreferenced/leaseCheckInterval/LeaseCheckInterval.java > 2016-03-04 22:49:09.000000000 +0800 > +++ > new/test/java/rmi/server/Unreferenced/leaseCheckInterval/LeaseCheckInterval.java > 2016-03-04 22:49:09.000000000 +0800 > @@ -1,5 +1,5 @@ > /* > - * Copyright (c) 2001, 2012, Oracle and/or its affiliates. All rights > reserved. > + * Copyright (c) 2001, 2016, Oracle and/or its affiliates. All rights > reserved. > * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. 
> * > * This code is free software; you can redistribute it and/or modify it > @@ -43,6 +43,7 @@ > * java.rmi/sun.rmi.transport.tcp > * @build TestLibrary JavaVM LeaseCheckInterval_Stub SelfTerminator > * @run main/othervm LeaseCheckInterval > + * @key intermittent > */ > > import java.rmi.Remote; > From joe.darcy at oracle.com Fri Mar 4 18:27:40 2016 From: joe.darcy at oracle.com (joe darcy) Date: Fri, 4 Mar 2016 10:27:40 -0800 Subject: JDK 9 RFR of JDK-8151286: Remove intermittent key from TestLocalTime.java and move back to tier1 In-Reply-To: <56D99D1B.3070301@oracle.com> References: <56D99D1B.3070301@oracle.com> Message-ID: <56D9D39C.2090005@oracle.com> Hi Amy, Looks good; thanks, -Joe On 3/4/2016 6:35 AM, Amy Lu wrote: > java/util/zip/TestLocalTime.java > > This test was failing intermittently (reported in JDK-8135108). > JDK-8135108 has been resolved and no open bug (no failure reported) > till now. > > This patch is to remove @key intermittent from the test and move it > back to tier1. 
> > bug: https://bugs.openjdk.java.net/browse/JDK-8151286 > webrev: http://cr.openjdk.java.net/~amlu/8151286/webrev.01/ > > Thanks, > Amy > > > --- old/test/TEST.groups 2016-03-04 22:22:13.000000000 +0800 > +++ new/test/TEST.groups 2016-03-04 22:22:13.000000000 +0800 > @@ -28,7 +28,6 @@ > tier1 = \ > :jdk_lang \ > -java/lang/ProcessHandle/TreeTest.java \ > - -java/util/zip/TestLocalTime.java \ > :jdk_util \ > -java/util/WeakHashMap/GCDuringIteration.java \ > -java/util/concurrent/ThreadPoolExecutor/ConfigChanges.java \ > @@ -40,7 +39,6 @@ > > tier2 = \ > java/lang/ProcessHandle/TreeTest.java \ > - java/util/zip/TestLocalTime.java \ > java/util/WeakHashMap/GCDuringIteration.java \ > java/util/concurrent/ThreadPoolExecutor/ConfigChanges.java \ > java/util/concurrent/forkjoin/FJExceptionTableLeak.java \ > --- old/test/java/util/zip/TestLocalTime.java 2016-03-04 > 22:22:13.000000000 +0800 > +++ new/test/java/util/zip/TestLocalTime.java 2016-03-04 > 22:22:13.000000000 +0800 > @@ -24,7 +24,6 @@ > /* > * @test > * @bug 8075526 8135108 > - * @key intermittent > * @summary Test timestamp via ZipEntry.get/setTimeLocal() > */ > > From john.r.rose at oracle.com Fri Mar 4 18:42:39 2016 From: john.r.rose at oracle.com (John Rose) Date: Fri, 4 Mar 2016 10:42:39 -0800 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <56D98D73.4010302@oracle.com> References: <56D73477.4030100@oracle.com> <56D737DD.7000700@oracle.com> <56D745B7.4040508@oracle.com> <56D74B4B.9090708@oracle.com> <56D98D73.4010302@oracle.com> Message-ID: <08051172-BCC1-4C60-A8DE-11407BE3D07F@oracle.com> Doing more on the Java side means it will be easier generate strings lazily, only if the exception actually prints or presents STEs. All we need to store eagerly per frame is a MemberName and a BCI. ? 
John > On Mar 4, 2016, at 5:28 AM, Aleksey Shipilev wrote: > >> On 03/02/2016 11:21 PM, Aleksey Shipilev wrote: >>> On 03/02/2016 10:57 PM, Coleen Phillimore wrote: >>>> On 3/2/16 1:58 PM, Aleksey Shipilev wrote: >>>> Is there an underlying reason why we can't return the pre-filled >>>> StackTraceElements[] array from the JVM_GetStackTraceElements to begin >>>> with? This will avoid leaking StackTraceElement constructor into >>>> standard library, *and* allows to make StackTraceElement fields final. >>>> Taking stuff back from the standard library is hard, if not impossible, >>>> so we better expose as little as possible. >>> >>> We measured that it's faster to allocate the StackTraceElement array >>> in Java and it seems cleaner to the Java guys. It came from similar >>> code we've been prototyping for StackFrameInfo. >> >> OK, it's not perfectly clean from implementation standpoint, but this >> RFE might not be the best opportunity to polish that. At least make >> StackTraceElement constructor private (better), or package-private >> (acceptable), and then we are good to go. > > Okay, here's a little exploration: > http://cr.openjdk.java.net/~shade/8150778/StackTraceBench.java > > The difference between allocating in Java code, and allocating on VM > side is marginal on my machine, but I think we are down to native memset > performance when allocating on VM side. Therefore, I'd probably stay > with Java allocation which codegen we absolutely control. > > Aside: see the last experiment, avoiding StringTable::intern (shows in > profiles a lot!) trims down construction costs down even further. I'd > think that is a worthwhile improvement to consider. > > Cheers, > -Aleksey > > From dlloyd at redhat.com Fri Mar 4 16:56:27 2016 From: dlloyd at redhat.com (David Lloyd) Date: Fri, 4 Mar 2016 10:56:27 -0600 Subject: RFR 8150840: Add an internal system property to control the default level of System.Logger when java.logging is not present. 
In-Reply-To: <56D9B574.6090205@Oracle.com> References: <56D97600.5080105@oracle.com> <56D9B574.6090205@Oracle.com> Message-ID: <150506012761977100@unknownmsgid> Can they be method refs, or is this one of those cases where it could be early boot where none of that stuff works yet? -- - DML > On Mar 4, 2016, at 10:19 AM, Roger Riggs wrote: > > Hi Daniel, > > Good idea. > > SimpleConsolerLogger.java: > Some of the property accesses could use the existing property actions instead of anonymous inner classes. > > static Level getDefaultLevel() { > String levelName = AccessController.doPrivileged( new sun.security.action.GetPropertyAction("jdk.system.logger.level", "INFO")); ... > > Roger > > > >> On 3/4/2016 6:48 AM, Daniel Fuchs wrote: >> [Resending with a link to the patch] >> >> Hi, >> >> Please find below a patch for: >> >> https://bugs.openjdk.java.net/browse/JDK-8150840 >> 8150840: Add an internal system property to control the default >> level of System.Logger when java.logging is not present. >> >> http://cr.openjdk.java.net/~dfuchs/webrev_8150840/webrev.00 >> >> This patch also introduces a better separation between the >> SimpleConsoleLogger (created by the DefaultLoggerFinder >> when java.logging is not there), and the SurrogateLogger, >> which emulates the behavior of java.util.logging.Logger >> when java.logging is present but there is no custom >> configuration (used to be PlatformLogger.DefaultLoggerProxy). 
>>
>> best regards,
>>
>> -- daniel
>

From mandy.chung at oracle.com Fri Mar 4 22:00:15 2016
From: mandy.chung at oracle.com (Mandy Chung)
Date: Fri, 4 Mar 2016 14:00:15 -0800
Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM
In-Reply-To: <08051172-BCC1-4C60-A8DE-11407BE3D07F@oracle.com>
References: <56D73477.4030100@oracle.com> <56D737DD.7000700@oracle.com> <56D745B7.4040508@oracle.com> <56D74B4B.9090708@oracle.com> <56D98D73.4010302@oracle.com> <08051172-BCC1-4C60-A8DE-11407BE3D07F@oracle.com>
Message-ID: <0EFF60C2-B20E-4075-9355-3FD11A486AFD@oracle.com>

> On Mar 4, 2016, at 10:42 AM, John Rose wrote:
>
> Doing more on the Java side means it will be easier to generate strings lazily, only if the exception actually prints or presents STEs.
>
> All we need to store eagerly per frame is a MemberName and a BCI.

This is what StackWalker stores in StackFrameInfo per frame. I wish we could convert the Throwable backtrace with the stack-walker API. The footprint of MemberName, as well as GC pressure (as they are kept as weak references in the VM), are performance concerns that we will have to look at in a future release.

Mandy

>
> -- John
>
>> On Mar 4, 2016, at 5:28 AM, Aleksey Shipilev wrote:
>>
>>> On 03/02/2016 11:21 PM, Aleksey Shipilev wrote:
>>>> On 03/02/2016 10:57 PM, Coleen Phillimore wrote:
>>>>> On 3/2/16 1:58 PM, Aleksey Shipilev wrote:
>>>>> Is there an underlying reason why we can't return the pre-filled
>>>>> StackTraceElements[] array from the JVM_GetStackTraceElements to begin
>>>>> with? This will avoid leaking StackTraceElement constructor into
>>>>> standard library, *and* allows to make StackTraceElement fields final.
>>>>> Taking stuff back from the standard library is hard, if not impossible,
>>>>> so we better expose as little as possible.
>>>>
>>>> We measured that it's faster to allocate the StackTraceElement array
>>>> in Java and it seems cleaner to the Java guys.
It came from similar >>>> code we've been prototyping for StackFrameInfo. >>> >>> OK, it's not perfectly clean from implementation standpoint, but this >>> RFE might not be the best opportunity to polish that. At least make >>> StackTraceElement constructor private (better), or package-private >>> (acceptable), and then we are good to go. >> >> Okay, here's a little exploration: >> http://cr.openjdk.java.net/~shade/8150778/StackTraceBench.java >> >> The difference between allocating in Java code, and allocating on VM >> side is marginal on my machine, but I think we are down to native memset >> performance when allocating on VM side. Therefore, I'd probably stay >> with Java allocation which codegen we absolutely control. >> >> Aside: see the last experiment, avoiding StringTable::intern (shows in >> profiles a lot!) trims down construction costs down even further. I'd >> think that is a worthwhile improvement to consider. >> >> Cheers, >> -Aleksey >> >> From mark.reinhold at oracle.com Fri Mar 4 22:25:21 2016 From: mark.reinhold at oracle.com (mark.reinhold at oracle.com) Date: Fri, 4 Mar 2016 14:25:21 -0800 (PST) Subject: JEP 285: Spin-Wait Hints Message-ID: <20160304222521.693BC9CF2B@eggemoggin.niobe.net> New JEP Candidate: http://openjdk.java.net/jeps/285 - Mark From john.r.rose at oracle.com Fri Mar 4 22:49:06 2016 From: john.r.rose at oracle.com (John Rose) Date: Fri, 4 Mar 2016 14:49:06 -0800 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <0EFF60C2-B20E-4075-9355-3FD11A486AFD@oracle.com> References: <56D73477.4030100@oracle.com> <56D737DD.7000700@oracle.com> <56D745B7.4040508@oracle.com> <56D74B4B.9090708@oracle.com> <56D98D73.4010302@oracle.com> <08051172-BCC1-4C60-A8DE-11407BE3D07F@oracle.com> <0EFF60C2-B20E-4075-9355-3FD11A486AFD@oracle.com> Message-ID: <62FC4886-B2E6-44FC-BE75-64E916E68391@oracle.com> On Mar 4, 2016, at 2:00 PM, Mandy Chung wrote: > > Footprint of MemberName as well as GC pressure (as they are kept 
as weak references in VM) are the performance concern that we will have to look at it in a future release.

I hope we can increase our investment in MemberName as an all-purpose handle from Java to JVM metadata (like java.lang.Class). Specifically, the weak-pointer logic can probably be tuned to reduce overheads, at the cost of increased coupling between MemberName and the JVM. But the coupling is reasonable; we need something like that, and jlr.Method is way too heavy.

-- John

From nadeesh.tv at oracle.com Sat Mar 5 12:05:34 2016
From: nadeesh.tv at oracle.com (nadeesh tv)
Date: Sat, 05 Mar 2016 17:35:34 +0530
Subject: RFR:JDK-8030864: Add an efficient getDateTimeMillis method to java.time
In-Reply-To:
References: <56D6C0B7.10205@oracle.com> <56D70406.7010000@oracle.com> <56D7317F.3000804@Oracle.com> <56D73637.3090006@oracle.com> <56D88877.4010202@oracle.com>
Message-ID: <56DACB8E.30402@oracle.com>

Hi all,

Please see the updated webrev
http://cr.openjdk.java.net/~ntv/8030864/webrev.06/

Regards,
Nadeesh

On 3/4/2016 4:34 PM, Stephen Colebourne wrote:
> long DAYS_0000_TO_1970 should be extracted as a private static final constant.
>
> Otherwise looks good.
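As background for the epochSecond discussion in this thread: the quantity being computed reduces to a proleptic day count, plus a second-of-day, minus the zone offset. The sketch below is not the webrev's code; the method name is made up, and LocalDate.of merely stands in for the inlined proleptic-day arithmetic that the actual patch uses to stay allocation-free. Field validation is omitted.

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class EpochSecondSketch {

    private static final long SECONDS_PER_DAY = 86400L;

    // Sketch of the epoch-second identity: days since 1970-01-01 converted to
    // seconds, plus the second-of-day, adjusted by the offset from UTC.
    static long epochSecond(int year, int month, int day,
                            int hour, int minute, int second, ZoneOffset offset) {
        long epochDay = LocalDate.of(year, month, day).toEpochDay();
        long secondOfDay = hour * 3600L + minute * 60L + second;
        return epochDay * SECONDS_PER_DAY + secondOfDay - offset.getTotalSeconds();
    }

    public static void main(String[] args) {
        // Cross-check against the existing java.time path.
        ZoneOffset offset = ZoneOffset.ofHoursMinutes(5, 30);
        long expected = LocalDateTime.of(2016, 3, 5, 17, 35, 34).toEpochSecond(offset);
        long actual = epochSecond(2016, 3, 5, 17, 35, 34, offset);
        if (actual != expected) throw new AssertionError(actual + " != " + expected);
        System.out.println(actual == expected); // true
    }
}
```

The point of the enhancement is that the same arithmetic can be done on the raw fields without constructing intermediate LocalDate/LocalDateTime objects, which is where the "garbage free" claim comes from.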
> Stephen > > > On 3 March 2016 at 18:54, nadeesh tv wrote: >> Hi, >> >> Roger - Thanks for the comments >> >> Made the necessary changes in the spec >> >> Please see the updated webrev >> http://cr.openjdk.java.net/~ntv/8030864/webrev.05/ >> On 3/3/2016 12:21 AM, nadeesh tv wrote: >>> Hi , >>> >>> Please see the updated webrev >>> http://cr.openjdk.java.net/~ntv/8030864/webrev.03/ >>> >>> Thanks and Regards, >>> Nadeesh >>> >>> On 3/3/2016 12:01 AM, Roger Riggs wrote: >>>> Hi Nadeesh, >>>> >>>> Editorial comments: >>>> >>>> Chronology.java: 716+ >>>> "Java epoch" -> "epoch" >>>> "minute, second and zoneOffset" -> "minute, second*,* and zoneOffset" >>>> (add a comma; two places) >>>> >>>> "caluculated using given era, prolepticYear," -> "calculated using the >>>> era, year-of-era," >>>> "to represent" -> remove as unnecessary in all places >>>> >>>> IsoChronology: >>>> "to represent" -> remove as unnecessary in all places >>>> >>>> These should be fixed to cleanup the specification. >>>> >>>> The implementation and the tests look fine. >>>> >>>> Thanks, Roger >>>> >>>> >>>> >>>> On 3/2/2016 10:17 AM, nadeesh tv wrote: >>>>> Hi, >>>>> Stephen, Thanks for the comments. >>>>> Please see the updated webrev >>>>> http://cr.openjdk.java.net/~ntv/8030864/webrev.02/ >>>>> >>>>> Regards, >>>>> Nadeesh TV >>>>> >>>>> On 3/2/2016 5:41 PM, Stephen Colebourne wrote: >>>>>> Remove "Subclass can override the default implementation for a more >>>>>> efficient implementation." as it adds no value. >>>>>> >>>>>> In the default implementation of >>>>>> >>>>>> epochSecond(Era era, int yearofEra, int month, int dayOfMonth, >>>>>> int hour, int minute, int second, ZoneOffset zoneOffset) >>>>>> >>>>>> use >>>>>> >>>>>> prolepticYear(era, yearOfEra) >>>>>> >>>>>> and call the other new epochSecond method. See dateYearDay(Era era, >>>>>> int yearOfEra, int dayOfYear) for the design to copy. If this is done, >>>>>> then there is no need to override the method in IsoChronology. 
>>>>>> >>>>>> In the test, >>>>>> >>>>>> LocalDate.MIN.with(chronoLd) >>>>>> >>>>>> could be >>>>>> >>>>>> LocalDate.from(chronoLd) >>>>>> >>>>>> Thanks >>>>>> Stephen >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On 2 March 2016 at 10:30, nadeesh tv wrote: >>>>>>> Hi all, >>>>>>> >>>>>>> Please review an enhancement for a garbage free epochSecond method. >>>>>>> >>>>>>> Bug ID: https://bugs.openjdk.java.net/browse/JDK-8030864 >>>>>>> >>>>>>> webrev: http://cr.openjdk.java.net/~ntv/8030864/webrev.01 >>>>>>> >>>>>>> -- >>>>>>> Thanks and Regards, >>>>>>> Nadeesh TV >>>>>>> >> -- >> Thanks and Regards, >> Nadeesh TV >> -- Thanks and Regards, Nadeesh TV From uschindler at apache.org Sat Mar 5 13:24:37 2016 From: uschindler at apache.org (Uwe Schindler) Date: Sat, 5 Mar 2016 14:24:37 +0100 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) Message-ID: <069f01d176e2$6084d6e0$218e84a0$@apache.org> Hi OpenJDK Core Developers, you may know the Apache Lucene team is testing early access releases of Java 9. We reported many bugs already, but most of them only applied to Hotspot and Lucene itsself. But this problem since build 108 is now really severe, because it breaks the build system already! To allow further testing of Open Source Projects, I'd suggest to revert the Multi-Release-JAR runtime support patch and provide a new preview build ASAP, because we found out after a night of debugging a build system from which we don't know all internals what is causing the problems and there is no workaround. I am very sorry that I have to say this, but it unfortunately build 108 breaks *ALL* versions of Apache Ant, the grandfather of all Java build systems :-) I know also OpenJDK is using it, too! 
So with the Multi-Release JAR file patch applied (see http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c), any Ant-based build - including the JDK build itself - would no longer bootstrap. It is also impossible to build Gradle projects, because Gradle uses Ant internally for many tasks. Maven projects may be affected, too.

Now you might have the question: What happened?

We tried to build Lucene on our Jenkins server, but the build itself failed with a baffling error message:

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:21: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:56: not doesn't support the nested "matches" element.

The first idea was: Ah, there were changes in XML parsing (JDK-8149915). So we debugged the build. But it was quite clear that XML parsing was not the issue. It became clear when we enabled "-debug" on the build: Ant was not loading its internal conditions/tasks/type definitions anymore, so the build system no longer knew almost any type. The debug log showed that Ant was no longer able to load the resource "/org/apache/tools/ant/antlib.xml" from its own JAR file. Instead it printed some strange debugging output (which looked totally broken).

I spent the whole night digging through their code and found the issue: the commit of Multi-Release JAR files (see http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c) broke resource handling in Apache Ant. In short: if you call ClassLoader.getResources() or getResource(), you get back a URL from which you can load the resource - this is all fine and still works. But with the Multi-Release JAR files patch, the URL now has a fragment appended: '#release' (see http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c); this also applies to non-multi-release JAR files like Apache Ant's "ant.jar". In Java 7, Java 8, ...
and Java 9 pre-b108, ClassLoader.getResource()/getResources() returned strings like:

"jar:file:/C:/Program%20Files/Java/apache-ant-1.9.6/lib/ant.jar!/org/apache/tools/ant/antlib.xml"

Now in Java 9 b108 the following is returned:

"jar:file:/C:/Program%20Files/Java/apache-ant-1.9.6/lib/ant.jar!/org/apache/tools/ant/antlib.xml#release"

And here Ant breaks (and I assume many other projects like Maven, too). Ant checks the file extension of the string (because it may load definitions from both XML and properties files). So it does endsWith(".xml"), and of course this now returns false. The effect is that Ant tries to load its own task definitions as a Java properties file instead of XML. Of course this fails, because the data behind this URL is XML, so Ant cannot bootstrap: everything needed to build is missing.

One might say: Ant's code is broken (I agree, it is not nice because it relies on the string representation of the resource URL - which is a no-go anyway), but it is impossible to fix, because Ant is bundled on most developer computers and those will suddenly break with Java 9! There is also no version out there that works around this, so we cannot test anything anymore!

The problematic line in Ant's code is here:
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.ant/ant/1.9.6/org/apache/tools/ant/taskdefs/Definer.java?av=f#259

I'd suggest reverting the Multi-Release JAR file patch and providing a new preview build as soon as possible. I think there is more work needed to fix this. If this does not revert to the original state, it will be impossible to build and test Lucene, Elasticsearch, ... (and almost every Java project out there!). In short: we cannot test anymore, and it is likely that we cannot support Java 9 anymore, because the build system used by most Java projects behind the scenes no longer bootstraps itself.
My suggestion would be to investigate other versions of this patch that do *not* modify the resource URLs by appending a fragment to them (at least not for the "standard" case without an actual multi-release JAR). For new multi-release JAR files I am fine with appending fragments, but please not for default ones. Maybe change the code to handle the URLs from the non-versioned part differently (without a fragment). Leaving the fragment inside may break many other projects, because many programmers are very sloppy when handling URLs (a well-known issue is calling URL#getFile() on a file: URL, which breaks on Windows systems with spaces in the path name). Many people just call toString() on a URL and do some test on it (startsWith, endsWith). So appending fragments is a no-go for backwards compatibility with JAR resources!

I posted this to the mailing list and did not open a bug report on http://bugs.java.com/, because this is a more general issue - feel free to open bug reports around this! I would be very happy if we could find a quick solution for this problem. Until there is a solution we have to stop testing Java 9 with Apache Lucene/Solr/..., and this is not a good sign, especially as Jigsaw will be merged soon.

Thanks for listening,
Uwe

P.S.: I also CCed the Apache Ant team. They should fix the broken code anyway, but this won't help the many projects already out there (e.g. Apache Lucene still has a minimum requirement of Ant 1.8.2 because Mac OS X computers have shipped with that version for years).
----- Uwe Schindler uschindler at apache.org ASF Member, Apache Lucene PMC / Committer Bremen, Germany http://lucene.apache.org/ From claes.redestad at oracle.com Sat Mar 5 13:50:12 2016 From: claes.redestad at oracle.com (Claes Redestad) Date: Sat, 05 Mar 2016 14:50:12 +0100 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: <069f01d176e2$6084d6e0$218e84a0$@apache.org> References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> Message-ID: Hi, similar issues were discovered too late to stop b108, e.g., https://bugs.openjdk.java.net/browse/JDK-8150920. Fix is already in jdk9/dev, so I think the next build should be more well-behaved and hope we can provide it more promptly than normal. If you can build OpenJDK from jdk9/dev and report any remaining issues due to the multi-release feature that would be quite helpful! Thanks! /Claes Uwe Schindler skrev: (5 mars 2016 14:24:37 CET) >Hi OpenJDK Core Developers, > >you may know the Apache Lucene team is testing early access releases of >Java 9. We reported many bugs already, but most of them only applied to >Hotspot and Lucene itsself. But this problem since build 108 is now >really severe, because it breaks the build system already! > >To allow further testing of Open Source Projects, I'd suggest to revert >the Multi-Release-JAR runtime support patch and provide a new preview >build ASAP, because we found out after a night of debugging a build >system from which we don't know all internals what is causing the >problems and there is no workaround. I am very sorry that I have to say >this, but it unfortunately build 108 breaks *ALL* versions of Apache >Ant, the grandfather of all Java build systems :-) I know also OpenJDK >is using it, too! 
So with Multi-Release JAR file patch applied (see >http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c), any >Ant-based build - including the JDK build itsself - would no longer >bootstrap. It is impossible to also build Gradle projects, because >Gradle uses Ant internally for many tasks). Maven projects may be >affected, too. > >Now you might have the question: What happened? > >We tried to build Lucene on our Jenkins server, but the build itsself >failed with a stupid error message: > >BUILD FAILED >/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:21: The >following error occurred while executing this line: >/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:56: >not doesn't support the nested "matches" element. > >The first idea was: Ah, there were changes in XML parsing >(JDK-8149915). So we debugged the build. But it was quite clear that >XML parsing was not the issue. It got quite clear when we enabled >"-debug" on the build. What happened was that Ant was not loading its >internal conditions/tasks/type definitions anymore, so the build system >does not know almost any type anymore. The debug log showed that Ant >was no longer able to load the resource >"/org/apache/tools/ant/antlib.xml" from its own JAR file anymore. >Instead it printed some strange debugging output (which looked totally >broken). > >I spend the whole night digging through their code and found the issue: >The commit of Multi-Release-Jar files (see >http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c) broke >resource handling in Apache Ant. In short: If you call >ClassLoader.getResources() / or getResource() you get back an URL from >where you can load the Resource - this is all fine and still works. >But, with the Multi-Release JAR files patch this now has an URL >fragment appended to the URL: '#release' (see >http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c); this also >applies to non-multi-release JAR files like Apache Ant's "ant.jar". 
> >In Java 7, Java 8,... and Java 9pre-b108, >ClassLoader.getResource()/getResources() returned stuff like: > >"jar:file:/C:/Program%20Files/Java/apache-ant-1.9.6/lib/ant.jar!/org/apache/tools/ant/antlib.xml" > >Now in Java 9b108 the following is returned: > >"jar:file:/C:/Program%20Files/Java/apache-ant-1.9.6/lib/ant.jar!/org/apache/tools/ant/antlib.xml#release" > >And here Ant breaks (and I assume many other projects like Maven, too). >Ant checks for the file extension of the string (because it may load >definitions from both XML and properties files). So it does >endsWith(".xml") and of course this now returns false. The effect is >that Ant tries to load its own task definitions as a java properties >file instead of XML. Of course this fails, because the data behind this >URL is XML. The effect is that Ant cannot bootstrap as everything to >build is missing. > >One might say: Ant's code is broken (I agree, it is not nice because it >relies on the string representation of the resource URL - which is a >no-go anyways), but it is impossible to fix, because Ant is bundled on >most developer computers and those will suddenly break with Java 9! >There is also no version out there that works around this, so we cannot >test anything anymore! > >The problematic line in Ant's code is here: >http://grepcode.com/file/repo1.maven.org/maven2/org.apache.ant/ant/1.9.6/org/apache/tools/ant/taskdefs/Definer.java?av=f#259 > >I'd suggest to please ASAP revert the Multi-Release JAR file patch and >provide a new preview build as soon as possible. I think there is more >work needed to fix this. If this does not revert to the original state, >it will be impossible to build and test Lucene, Elasticsearch,.... (and >almost every Java project out there!). So short: We cannot test anymore >and it is likely that we cannot support Java 9 anymore because the >build system used by most Java projects behind the scenes does not >bootstrap itself anymore. 
> >My suggestion would be to investigate other versions for this patch >that does *not* modify the resource URLs by appending a fragment to >them (at least not for the "standard" case without an actual >Multi-Release Jar). For new multi-release JAR files I am fine with >appending fragments, but please not for default ones. Maybe change code >to handle the URLs from the non-versioned part differently (without >fragment). Leaving the fragment inide may break many othe rprojects, >because many programmers are very sloppy with handling URLs (well-known >issue is calling URL#getFile() of a file:-URL that breaks on Windows >systems and spaces in path name). Many people just call toString() on >URL and do some test on it (startsWith, endsWith). So appending >fragments is a no-go for backwards compatibility with JAR resources! > >I posted this to the mailing list and did not open a bug report on >http://bugs.java.com/, because this is a more general issue - feel free >to open bug reports around this!!! I would be very happy if we could >find a quick solution for this problem. Until there is a solution we >have to stop testing Java 9 with Apache Lucene/Solr/..., and this is >not a good sign, especially as Jigsaw will be merged soon. > >Thanks for listening, >Uwe > >P.S.: I also CCed the Apache Ant team. They should fix the broken code >anyways, but this won't help for many projects already out there (e.g. >Apache Lucene still has a minimum requirement of Ant 1.8.2 because >MacOSX computers ship with that version since years). > >----- >Uwe Schindler >uschindler at apache.org >ASF Member, Apache Lucene PMC / Committer >Bremen, Germany >http://lucene.apache.org/ -- Sent from my Android device with K-9 Mail. Please excuse my brevity. 
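The failure mode described above is easy to demonstrate with plain java.net.URI: the naive check on the full string form is defeated by an appended fragment, while inspecting the parsed components still works. This is only an illustrative sketch; the jar path below is made up, and it does not touch Ant's actual code:

```java
import java.net.URI;

public class FragmentDemo {
    public static void main(String[] args) {
        // Illustrative resource URL of the shape b108 returned, with the
        // spurious "#release" fragment appended (path is hypothetical).
        URI u = URI.create(
            "jar:file:/opt/ant/lib/ant.jar!/org/apache/tools/ant/antlib.xml#release");

        // Naive check on the full string form: broken by the fragment.
        System.out.println(u.toString().endsWith(".xml"));              // false

        // The fragment is parsed out separately, so checking the
        // scheme-specific part (the actual resource location) still works.
        System.out.println(u.getSchemeSpecificPart().endsWith(".xml")); // true
        System.out.println(u.getFragment());                            // release
    }
}
```

This is essentially the distinction Alan points out below: code that classifies resources must look at the path component, not at the raw string representation of the URL.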
From Alan.Bateman at oracle.com Sat Mar 5 14:03:07 2016 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Sat, 5 Mar 2016 14:03:07 +0000 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: <069f01d176e2$6084d6e0$218e84a0$@apache.org> References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> Message-ID: <56DAE71B.7040400@oracle.com> On 05/03/2016 13:24, Uwe Schindler wrote: > : > > I'd suggest to please ASAP revert the Multi-Release JAR file patch and provide a new preview build as soon as possible. I think there is more work needed to fix this. If this does not revert to the original state, it will be impossible to build and test Lucene, Elasticsearch,.... (and almost every Java project out there!). So short: We cannot test anymore and it is likely that we cannot support Java 9 anymore because the build system used by most Java projects behind the scenes does not bootstrap itself anymore. > Sigh, I think those of us that reviewed this missed the point that the fragment is appended by default. This will of course break code that parses URL strings in naive ways (anything looking for ".xml" should be looking at the path component of course). I'll create a bug for this now, assuming you haven't created one already. One general point is that the purpose of EA builds and timely testing by Lucene and other projects is invaluable for shaking out issues. There will be issues periodically and much better to find these within a few days of pushing a change rather than months later. 
-Alan From uschindler at apache.org Sat Mar 5 14:03:36 2016 From: uschindler at apache.org (Uwe Schindler) Date: Sat, 5 Mar 2016 15:03:36 +0100 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> Message-ID: <06a801d176e7$d2ddf6e0$7899e4a0$@apache.org> Hi Claes, is there a way to just build a new runtime library without compiling a full JDK (including Hotspot). So just replacing the jimage files locally? Uwe ----- Uwe Schindler uschindler at apache.org ASF Member, Apache Lucene PMC / Committer Bremen, Germany http://lucene.apache.org/ From: Claes Redestad [mailto:claes.redestad at oracle.com] Sent: Saturday, March 05, 2016 2:50 PM To: Uwe Schindler ; core-libs-dev at openjdk.java.net Cc: rory.odonnell at oracle.com; dev at ant.apache.org; bodewig at apache.org Subject: Re: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) Hi, similar issues were discovered too late to stop b108, e.g., https://bugs.openjdk.java.net/browse/JDK-8150920. Fix is already in jdk9/dev, so I think the next build should be more well-behaved and hope we can provide it more promptly than normal. If you can build OpenJDK from jdk9/dev and report any remaining issues due to the multi-release feature that would be quite helpful! Thanks! /Claes Uwe Schindler > skrev: (5 mars 2016 14:24:37 CET) Hi OpenJDK Core Developers, you may know the Apache Lucene team is testing early access releases of Java 9. We reported many bugs already, but most of them only applied to Hotspot and Lucene itsself. But this problem since build 108 is now really severe, because it breaks the build system already! 
To allow further testing of Open Source Projects, I'd suggest to revert the Multi-Release-JAR runtime support patch and provide a new preview build ASAP, because we found out after a night of debugging a build system from which we don't know all internals what is causing the problems and there is no workaround. I am very sorry that I have to say this, but it unfortunately build 108 breaks *ALL* versions of Apache Ant, the grandfather of all Java build systems :-) I know also OpenJDK is using it, too! So with Multi-Release JAR file patch applied (see http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c), any Ant-based build - including the JDK build itsself - would no longer bootstrap. It is impossible to also build Gradle projects, because Gradle uses Ant internally for many tasks). Maven projects may be affected, too. Now you might have the question: What happened? We tried to build Lucene on our Jenkins server, but the build itsself failed with a stupid error message: BUILD FAILED /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:21: The following error occurred while executing this line: /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:56: not doesn't support the nested "matches" element. The first idea was: Ah, there were changes in XML parsing (JDK-8149915). So we debugged the build. But it was quite clear that XML parsing was not the issue. It got quite clear when we enabled "-debug" on the build. What happened was that Ant was not loading its internal conditions/tasks/type definitions anymore, so the build system does not know almost any type anymore. The debug log showed that Ant was no longer able to load the resource "/org/apache/tools/ant/antlib.xml" from its own JAR file anymore. Instead it printed some strange debugging output (which looked totally broken). 
I spend the whole night digging through their code and found the issue: The commit of Multi-Release-Jar files (see http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c) broke resource handling in Apache Ant. In short: If you call ClassLoader.getResources() / or getResource() you get back an URL from where you can load the Resource - this is all fine and still works. But, with the Multi-Release JAR files patch this now has an URL fragment appended to the URL: '#release' (see http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c); this also applies to non-multi-release JAR files like Apache Ant's "ant.jar". In Java 7, Java 8,... and Java 9pre-b108, ClassLoader.getResource()/getResources() returned stuff like: "jar:file:/C:/Program%20Files/Java/apache-ant-1.9.6/lib/ant.jar!/org/apache/tools/ant/antlib.xml" Now in Java 9b108 the following is returned: "jar:file:/C:/Program%20Files/Java/apache-ant-1.9.6/lib/ant.jar!/org/apache/tools/ant/antlib.xml#release" And here Ant breaks (and I assume many other projects like Maven, too). Ant checks for the file extension of the string (because it may load definitions from both XML and properties files). So it does endsWith(".xml") and of course this now returns false. The effect is that Ant tries to load its own task definitions as a java properties file instead of XML. Of course this fails, because the data behind this URL is XML. The effect is that Ant cannot bootstrap as everything to build is missing. One might say: Ant's code is broken (I agree, it is not nice because it relies on the string representation of the resource URL - which is a no-go anyways), but it is impossible to fix, because Ant is bundled on most developer computers and those will suddenly break with Java 9! There is also no version out there that works around this, so we cannot test anything anymore! 
The problematic line in Ant's code is here: http://grepcode.com/file/repo1.maven.org/maven2/org.apache.ant/ant/1.9.6/org/apache/tools/ant/taskdefs/Definer.java?av=f#259 I'd suggest to please ASAP revert the Multi-Release JAR file patch and provide a new preview build as soon as possible. I think there is more work needed to fix this. If this does not revert to the original state, it will be impossible to build and test Lucene, Elasticsearch,.... (and almost every Java project out there!). So short: We cannot test anymore and it is likely that we cannot support Java 9 anymore because the build system used by most Java projects behind the scenes does not bootstrap itself anymore. My suggestion would be to investigate other versions for this patch that does *not* modify the resource URLs by appending a fragment to them (at least not for the "standard" case without an actual Multi-Release Jar). For new multi-release JAR files I am fine with appending fragments, but please not for default ones. Maybe change code to handle the URLs from the non-versioned part differently (without fragment). Leaving the fragment inide may break many othe rprojects, because many programmers are very sloppy with handling URLs (well-known issue is calling URL#getFile() of a file:-URL that breaks on Windows systems and spaces in path name). Many people just call toString() on URL and do some test on it (startsWith, endsWith). So appending fragments is a no-go for backwards compatibility with JAR resources! I posted this to the mailing list and did not open a bug report on http://bugs.java.com/, because this is a more general issue - feel free to open bug reports around this!!! I would be very happy if we could find a quick solution for this problem. Until there is a solution we have to stop testing Java 9 with Apache Lucene/Solr/..., and this is not a good sign, especially as Jigsaw will be merged soon. Thanks for listening, Uwe P.S.: I also CCed the Apache Ant team. 
They should fix the broken code anyways, but this won't help for many projects already out there (e.g. Apache Lucene still has a minimum requirement of Ant 1.8.2 because MacOSX computers ship with that version since years).

-----
Uwe Schindler
uschindler at apache.org
ASF Member, Apache Lucene PMC / Committer
Bremen, Germany
http://lucene.apache.org/

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

From uschindler at apache.org Sat Mar 5 14:17:26 2016
From: uschindler at apache.org (Uwe Schindler)
Date: Sat, 5 Mar 2016 15:17:26 +0100
Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven)
In-Reply-To: <56DAE71B.7040400@oracle.com>
References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> <56DAE71B.7040400@oracle.com>
Message-ID: <06ad01d176e9$c19e2740$44da75c0$@apache.org>

Thanks Alan,

I am glad that the appending of "#release" is indeed considered a bug.

> > I'd suggest to please ASAP revert the Multi-Release JAR file patch and
> > provide a new preview build as soon as possible. I think there is more work
> > needed to fix this. If this does not revert to the original state, it will be
> > impossible to build and test Lucene, Elasticsearch,.... (and almost every Java
> > project out there!). So short: We cannot test anymore and it is likely that we
> > cannot support Java 9 anymore because the build system used by most Java
> > projects behind the scenes does not bootstrap itself anymore.
>
> Sigh, I think those of us that reviewed this missed the point that the
> fragment is appended by default. This will of course break code that
> parses URL strings in naive ways (anything looking for ".xml" should be
> looking at the path component of course).

This is why I put the Ant developers in CC.
The correct way would be to look at the *decoded* path (not just getPath() because this is also one of the "famous" traps in the URL class - one reason why it should be avoided in favor of URI). URL.toURI().getPath() is the safest way to fix the issue in Apache Ant (Stefan Bodewig: Should I open an issue in Ant?). Maybe Ant developers can fix this code in later versions to handle URLs more correctly. In general there is lots of code out there that incorrectly uses URLs, because developers are lazy... > I'll create a bug for this > now, assuming you haven't created one already. No, I haven't. Thanks for doing this. > One general point is that the purpose of EA builds and timely testing by > Lucene and other projects is invaluable for shaking out issues. There > will be issues periodically and much better to find these within a few > days of pushing a change rather than months later. This is why we do this! The problem with the EA builds is still the large delay until pushes are appearing in builds. In most cases it takes > 2 weeks until an EA build contains pushed fixes. We are still waiting for fixes of JDK-8150280 and JDK-8150436 (duplicate of JDK-8148786). Both issues were resolved a long time ago. The problem is when we have fatal issues like this, because they prevent testing the above bugs (once they are fixed). Thanks, Uwe From amaembo at gmail.com Sat Mar 5 17:35:16 2016 From: amaembo at gmail.com (Tagir F. Valeev) Date: Sat, 5 Mar 2016 23:35:16 +0600 Subject: Stream API: Fuse sorted().limit(n) into single operation Message-ID: <1598030827.20160305233516@gmail.com> Hello! One of the popular bulk data operations is to find a given number of least or greatest elements. Currently the Stream API provides no dedicated operation to do this. Of course, it could be implemented by a custom collector and some third-party libraries already provide it. 
However it would be quite natural to use the existing API: stream.sorted().limit(k) - k least elements stream.sorted(Comparator.reverseOrder()).limit(k) - k greatest elements. In fact, people are already doing this. Some samples could be found on GitHub: https://github.com/search?l=java&q=%22sorted%28%29.limit%28%22&type=Code&utf8=%E2%9C%93 Unfortunately the current implementation of such a sequence of operations is suboptimal: first the whole stream content is dumped into an intermediate array, then fully sorted, and after that the k least elements are selected. On the other hand it's possible to provide a special implementation for this particular case which takes O(k) additional memory and in many cases works significantly faster. I wrote a proof-of-concept implementation, which could be found here: http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/webrev/ The implementation switches to the new algorithm if the limit is less than 1000, which is quite common for such a scenario (supporting bigger values is also possible, but would require more testing). The new algorithm allocates an array of 2*limit elements. When its size is reached, it sorts the array (using Arrays.sort) and discards the second half. After that only those elements are accumulated which are less than the worst element found so far. When the array is filled again, the second half is sorted and merged with the first half. Here's a JMH test with results which covers several input patterns: http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/ You may check the summary first: http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/summary.txt Speedup values bigger than 1 are good. The most significant regression in the sequential mode of the new implementation is on ever-decreasing input (especially with a low limit value). Still, it's not that bad (given the fact that the old implementation processes such input very fast). On the other hand, for random input the new implementation could be an order of magnitude faster. 
Even for ever-ascending input a noticeable speedup (around 40%) could be achieved. For parallel streams the new implementation is almost always faster, especially if you ignore the cases when a parallel stream is unprofitable. What do you think about this improvement? Could it be included into JDK-9? Are there any issues I'm unaware of? I would be really happy to complete this work if this is supported by the JDK team. The current implementation has no primitive specialization and does not optimize the sorting out if the input is known to be sorted, but it's not very hard to add these features as well if you find my idea useful. With best regards, Tagir Valeev. From lowasser at google.com Sat Mar 5 18:30:26 2016 From: lowasser at google.com (Louis Wasserman) Date: Sat, 05 Mar 2016 18:30:26 +0000 Subject: Stream API: Fuse sorted().limit(n) into single operation In-Reply-To: <1598030827.20160305233516@gmail.com> References: <1598030827.20160305233516@gmail.com> Message-ID: Worth noting: Guava uses a similar implementation for Ordering.leastOf, but instead of sorting the array when it's filled, does a quickselect pass to do it in O(k) time instead of O(k log k). We had been planning to put together a Collector implementation for it, since it's actually pretty amenable to Collectorification (clearly a word). On Sat, Mar 5, 2016 at 12:32 PM Tagir F. Valeev wrote: > Hello! > > One of the popular bulk data operation is to find given number of > least or greatest elements. Currently Stream API provides no dedicated > operation to do this. Of course, it could be implemented by custom > collector and some third-party libraries already provide it. However > it would be quite natural to use existing API: > > stream.sorted().limit(k) - k least elements > stream.sorted(Comparator.reverseOrder()).limit(k) - k greatest elements. > > In fact people already doing this. 
Some samples could be found on > GitHub: > > https://github.com/search?l=java&q=%22sorted%28%29.limit%28%22&type=Code&utf8=%E2%9C%93 > > Unfortunately current implementation of such sequence of operations is > suboptimal: first the whole stream content is dumped into intermediate > array, then sorted fully and after that k least elements is selected. > On the other hand it's possible to provide a special implementation > for this particular case which takes O(k) additional memory and in > many cases works significantly faster. > > I wrote proof-of-concept implementation, which could be found here: > http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/webrev/ > The implementation switches to new algorithm if limit is less than > 1000 which is quite common for such scenario (supporting bigger values > is also possible, but would require more testing). New algorithm > allocates an array of 2*limit elements. When its size is reached, it > sorts the array (using Arrays.sort) and discards the second half. > After that only those elements are accumulated which are less than the > worst element found so far. When array is filled again, the second > half is sorted and merged with the first half. > > Here's JMH test with results which covers several input patterns: > http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/ > > You may check summary first: > http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/summary.txt > Speedup values bigger than 1 are good. > > The most significant regression in the sequential mode of the new > implementation is the ever decreasing input (especially with the low > limit value). Still, it's not that bad (given the fact that old > implementation processes such input very fast). On the other hand, for > random input new implementation could be in order of magnitude faster. > Even for ever ascending input noteable speedup (like 40%) could be > achieved. 
> > For parallel stream the new implementation is almost always faster, > especially if you ignore the cases when parallel stream is > unprofitable. > > What do you think about this improvement? Could it be included into > JDK-9? Are there any issues I'm unaware of? I would be really happy to > complete this work if this is supported by JDK team. Current > implementation has no primitive specialization and does not optimize > the sorting out if the input is known to be sorted, but it's not very > hard to add these features as well if you find my idea useful. > > With best regards, > Tagir Valeev. > > From uschindler at apache.org Sat Mar 5 22:43:18 2016 From: uschindler at apache.org (Uwe Schindler) Date: Sat, 5 Mar 2016 23:43:18 +0100 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: <87oaasycsg.fsf@v35516.1blu.de> References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> <56DAE71B.7040400@oracle.com> <06ad01d176e9$c19e2740$44da75c0$@apache.org> <87oaasycsg.fsf@v35516.1blu.de> Message-ID: <072e01d17730$6c6a33d0$453e9b70$@apache.org> Hi Stefan, > -----Original Message----- > From: Stefan Bodewig [mailto:bodewig at apache.org] > Sent: Saturday, March 05, 2016 7:56 PM > To: dev at ant.apache.org; Uwe Schindler > Cc: 'Alan Bateman' ; core-libs- > dev at openjdk.java.net; rory.odonnell at oracle.com; dev at ant.apache.org; > 'Steve Drach' > Subject: Re: Multi-Release JAR file patch as applied to build 108 of Java 9 > breaks almost every project out there (Apache Ant, Gradle, partly Apache > Maven) > > On 2016-03-05, Uwe Schindler wrote: > > > This is why I put the Ant developers in CC. The correct way would be > > to look at the *decoded* path (not just getPath() because this is also > > one of the "famous" traps in the URL class - one reason why it should > > be avoided in favor of URI). 
URL.toURI().getPath() is most safe to fix > > the issue in Apache Ant > > Part of the reason for this certainly is that the code has been written > before the URI class even existed. > > > (Stefan Bodewig: Should I open an issue in Ant?). > > Yes, please do. Thanks Uwe. I opened: https://bz.apache.org/bugzilla/show_bug.cgi?id=59130 > > Maybe Ant developers can fix this code in later versions to handle > > URLs more correct. > > +1 Unfortunately this is not the only issue caused by this. After I tried to build Lucene with the patch applied, the next candidate for the issue broke: Apache Ivy. It was no longer able to load the ivy-settings.xml file from its JAR file. The reason here is a different one: It constructs the JAR file URL on its own (it looks like this), but does not add the #release fragment. And because of this, JarURLConnection does not find the file...: [...] multiple parent causes [...] Caused by: java.io.FileNotFoundException: JAR entry org/apache/ivy/core/settings/ivysett/ivysettings-public.xml not found in C:\Users\Uwe Schindler\.ant\lib\ivy-2.3.0.jar at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:142) at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150) at org.apache.ivy.util.url.BasicURLHandler.openStream(BasicURLHandler.java:171) at org.apache.ivy.util.url.URLHandlerDispatcher.openStream(URLHandlerDispatcher.java:74) at org.apache.ivy.core.settings.XmlSettingsParser.doParse(XmlSettingsParser.java:157) at org.apache.ivy.core.settings.XmlSettingsParser.parse(XmlSettingsParser.java:183) at org.apache.ivy.core.settings.XmlSettingsParser.includeStarted(XmlSettingsParser.java:435) at org.apache.ivy.core.settings.XmlSettingsParser.startElement(XmlSettingsParser.java:211) ... 35 more So it looks like the Multi-release JAR file patch also breaks the other way round: Code constructing JAR URLs according to the standard no longer works. 
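Both failure modes described here — naive string checks on resource URLs and self-derived jar: URLs that lack the fragment — can be sketched without any real JAR file, because the fragment lives purely at the java.net.URL level. A minimal illustration (the paths are made up; the "#release" fragment mimics what b108 appends):

```java
import java.net.URL;

public class JarUrlFragmentDemo {
    public static void main(String[] args) throws Exception {
        // Shape of a resource URL as returned by the b108 class loader (path is hypothetical)
        URL fromLoader = new URL(
            "jar:file:/opt/ant/lib/ant.jar!/org/apache/tools/ant/antlib.xml#release");

        // A naive string check (Ant-style) fails once the fragment is there
        System.out.println(fromLoader.toString().endsWith(".xml")); // false

        // Checking the path component instead still works: getFile() excludes the fragment
        System.out.println(fromLoader.getFile().endsWith(".xml"));  // true

        // Relative resolution (the XSLT-include case) silently drops the fragment,
        // so the resolved URL no longer carries "#release"
        URL sibling = new URL(fromLoader, "other.xsl");
        System.out.println(sibling.getRef()); // null
        System.out.println(sibling);
    }
}
```

This only shows where the fragment survives and where it is lost; the actual FileNotFoundException of course requires a real JAR and a JarURLConnection.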
In my opinion, the JAR URLs should not change at all and the code should transparently choose the right release version. Maybe add a fragment only to explicitly state a specific version (so one would be able to load the Java 7 version). But this could also be done using the META-INF/... path. The default handling should be that "old" and (I think they are standardized) JAR URLs still work as they should - not requiring the fragment! Uwe From mandy.chung at oracle.com Sat Mar 5 23:29:10 2016 From: mandy.chung at oracle.com (Mandy Chung) Date: Sat, 5 Mar 2016 15:29:10 -0800 Subject: RFR 8150840: Add an internal system property to control the default level of System.Logger when java.logging is not present. In-Reply-To: <56D9C052.4070703@oracle.com> References: <56D97600.5080105@oracle.com> <56D9B574.6090205@Oracle.com> <56D9C052.4070703@oracle.com> Message-ID: > On Mar 4, 2016, at 9:05 AM, Daniel Fuchs wrote: > > http://cr.openjdk.java.net/~dfuchs/webrev_8150840/webrev.01/ Looks okay in general. I'm not a fan of using GetPropertyAction. While it's convenient as the class already exists, method refs and anonymous classes make what it does more explicit at the callsite. No big deal. Does -Djava.util.logging.SimpleFormatter.format=... have any effect if java.logging is absent (when used together with jdk.system.logger.level)? It's one of the test cases in SimpleConsoleLoggerTest. I would expect java.util.logging.* properties to be used for java.util.logging configuration only. JUL_FORMAT_PROP_KEY is defined in SimpleConsoleLogger. If I read it correctly, it's only used for the limited doPrivileged. 472 new PropertyPermission(JUL_FORMAT_PROP_KEY, "read")); I was initially confused about what SimpleConsoleLogger does with java.util.logging formatting. If JUL_FORMAT_PROP_KEY is not referenced anywhere else, perhaps just remove the constant variable and have a comment explaining that this getSimpleFormat method is shared with JUL? 
Mandy From amaembo at gmail.com Sun Mar 6 01:55:12 2016 From: amaembo at gmail.com (Tagir F. Valeev) Date: Sun, 6 Mar 2016 07:55:12 +0600 Subject: Stream API: Fuse sorted().limit(n) into single operation In-Reply-To: References: <1598030827.20160305233516@gmail.com> Message-ID: <941441862.20160306075512@gmail.com> Hello! LW> Worth noting: Guava uses a similar implementation for LW> Ordering.leastOf, but instead of sorting the array when it's LW> filled, does a quickselect pass to do it in O(k) time instead of O(k log k). Thank you for mentioning Guava. Unfortunately quickselect is not stable, while sorted().limit(n) must produce a stable result. Quickselect might be good for primitive specializations though. With best regards, Tagir Valeev. LW> We had been planning to put together a Collector implementation LW> for it, since it's actually pretty amenable to Collectorification (clearly a word). LW> On Sat, Mar 5, 2016 at 12:32 PM Tagir F. Valeev wrote: LW> Hello! LW> LW> One of the popular bulk data operation is to find given number of LW> least or greatest elements. Currently Stream API provides no dedicated LW> operation to do this. Of course, it could be implemented by custom LW> collector and some third-party libraries already provide it. However LW> it would be quite natural to use existing API: LW> LW> stream.sorted().limit(k) - k least elements LW> stream.sorted(Comparator.reverseOrder()).limit(k) - k greatest elements. LW> LW> In fact people already doing this. Some samples could be found on LW> GitHub: LW> LW> https://github.com/search?l=java&q=%22sorted%28%29.limit%28%22&type=Code&utf8=%E2%9C%93 LW> LW> Unfortunately current implementation of such sequence of operations is LW> suboptimal: first the whole stream content is dumped into intermediate LW> array, then sorted fully and after that k least elements is selected. 
LW> On the other hand it's possible to provide a special implementation LW> for this particular case which takes O(k) additional memory and in LW> many cases works significantly faster. LW> LW> I wrote proof-of-concept implementation, which could be found here: LW> http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/webrev/ LW> The implementation switches to new algorithm if limit is less than LW> 1000 which is quite common for such scenario (supporting bigger values LW> is also possible, but would require more testing). New algorithm LW> allocates an array of 2*limit elements. When its size is reached, it LW> sorts the array (using Arrays.sort) and discards the second half. LW> After that only those elements are accumulated which are less than the LW> worst element found so far. When array is filled again, the second LW> half is sorted and merged with the first half. LW> LW> Here's JMH test with results which covers several input patterns: LW> http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/ LW> LW> You may check summary first: LW> LW> http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/summary.txt LW> Speedup values bigger than 1 are good. LW> LW> The most significant regression in the sequential mode of the new LW> implementation is the ever decreasing input (especially with the low LW> limit value). Still, it's not that bad (given the fact that old LW> implementation processes such input very fast). On the other hand, for LW> random input new implementation could be in order of magnitude faster. LW> Even for ever ascending input noteable speedup (like 40%) could be LW> achieved. LW> LW> For parallel stream the new implementation is almost always faster, LW> especially if you ignore the cases when parallel stream is LW> unprofitable. LW> LW> What do you think about this improvement? Could it be included into LW> JDK-9? Are there any issues I'm unaware of? I would be really happy to LW> complete this work if this is supported by JDK team. 
Current LW> implementation has no primitive specialization and does not optimize LW> the sorting out if the input is known to be sorted, but it's not very LW> hard to add these features as well if you find my idea useful. LW> LW> With best regards, LW> Tagir Valeev. LW> LW> From uschindler at apache.org Sun Mar 6 09:29:35 2016 From: uschindler at apache.org (Uwe Schindler) Date: Sun, 6 Mar 2016 10:29:35 +0100 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: <072e01d17730$6c6a33d0$453e9b70$@apache.org> References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> <56DAE71B.7040400@oracle.com> <06ad01d176e9$c19e2740$44da75c0$@apache.org> <87oaasycsg.fsf@v35516.1blu.de> <072e01d17730$6c6a33d0$453e9b70$@apache.org> Message-ID: <079801d1778a$b5347080$1f9d5180$@apache.org> > > > This is why I put the Ant developers in CC. The correct way would be > > > to look at the *decoded* path (not just getPath() because this is also > > > one of the "famous" traps in the URL class - one reason why it should > > > be avoided in favor of URI). URL.toURI().getPath() is most safe to fix > > > the issue in Apache Ant > > > > Part of the reason for this certainly is that the code has been written > > before the URI class even existed. > > > > > (Stefan Bodewig: Should I open an issue in Ant?). > > > > Yes, please do. Thanks Uwe. > > I opened: https://bz.apache.org/bugzilla/show_bug.cgi?id=59130 > > > > Maybe Ant developers can fix this code in later versions to handle > > > URLs more correct. > > > > +1 > > Unfortunately this is not the only issue caused by this. After I tried to build > Lucene with the patch applied, the next candidate for the issue broke: > Apache Ivy. It was no longer able to load the ivy-settings.xml file from its JAR > file. 
> > The reason here is another one: It constructs the JAR file URL on its own (it > looks like this), but does not add the #release fragment. And because of this, > JarURLConnection does not find the file...: > > [...] multiple parent causes [...] > Caused by: java.io.FileNotFoundException: JAR entry > org/apache/ivy/core/settings/ivysett/ivysettings-public.xml not found in > C:\Users\Uwe Schindler\.ant\lib\ivy-2.3.0.jar > at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:142) > at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150) > at org.apache.ivy.util.url.BasicURLHandler.openStream(BasicURLHandler.java:171) > at org.apache.ivy.util.url.URLHandlerDispatcher.openStream(URLHandlerDispatcher.java:74) > at org.apache.ivy.core.settings.XmlSettingsParser.doParse(XmlSettingsParser.java:157) > at org.apache.ivy.core.settings.XmlSettingsParser.parse(XmlSettingsParser.java:183) > at org.apache.ivy.core.settings.XmlSettingsParser.includeStarted(XmlSettingsParser.java:435) > at org.apache.ivy.core.settings.XmlSettingsParser.startElement(XmlSettingsParser.java:211) > ... 35 more > > So it looks like the Multi-release JAR file patch also breaks the other way > round: Code constructing JAR URLs according to the standard no longer work. > In my opinion, the JAR URLs should not change at all and the code should > transparently choose the right release version. Maybe add a fragment only > to explicitly state a specific version (so one would be able to load the Java 7 > version). But this could also be done using the META-INF/... path. The default > handling should be that "old" and (I think they are standardized) JAR URLs > still works as they should - not requiring the fragment! I tried another project (a private one) and it failed in similar ways while loading XSL templates. 
This project produced no self-crafted jar:-URLs; instead it relied on relative URL resolving (the same applies to Apache Ivy). A common pattern (especially in the "XML world") is to have relative links in your files, e.g. an XSLT file that includes another one. If you place those XSL or XML files containing relative links in a JAR file, with previous Java versions everything worked as it should. You started the XML parser with the URL returned by the classloader and it was able to also resolve relative links between the files (because the jar: URL protocol correctly supports relative resolving of paths). So an xml/xsl file containing a reference to another file in the same package using a filename like works perfectly with the JAR URL protocol. If the original file had a URL like "jar:file:....!/package/master.xsl" and this was passed to the XML parser [e.g., like TransformerFactory#newTransformer(new StreamSource(classloader.getResource("package/master.xsl").toString())], the XML parser would load "jar:file:....!/package/otherfile.xsl" But because the fragment is lost during resolving relative URLs, this no longer works with Multi-Release JAR files. It looks like JarURLConnection throws FileNotFoundException without the #release fragment. I hope this helps to see why using fragments as part of the identifier is not quite correct in the URL world. I'd use some other way to refer to specific versions. At least let the no-fragment case always load the version-based file. Only use a fragment to refer to another version. Uwe From peter.levart at gmail.com Sun Mar 6 11:05:04 2016 From: peter.levart at gmail.com (Peter Levart) Date: Sun, 6 Mar 2016 12:05:04 +0100 Subject: Stream API: Fuse sorted().limit(n) into single operation In-Reply-To: <1598030827.20160305233516@gmail.com> References: <1598030827.20160305233516@gmail.com> Message-ID: <56DC0EE0.2030402@gmail.com> Hi Tagir, Nice work. 
I looked at the implementation and have two comments:

- in Limiter.put:

127         final boolean put(T t) {
128             int l = limit;
129             T[] d = data;
130             if (l == 1) {
131                 // limit == 1 is the special case: exactly one least element is stored,
132                 // no sorting is performed
133                 if (initial) {
134                     initial = false;
135                     size = 1;
136                 } else if (comparator.compare(t, d[0]) >= 0)
137                     return false;
138                 d[0] = t;
139                 return true;
140             }
141             if (initial) {
142                 if (size == d.length) {
143                     Arrays.sort(d, comparator);
144                     initial = false;
145                     size = l;
146                     put(t);
147                 } else {
148                     d[size++] = t;
149                 }
150                 return true;
151             }
152             if (size == d.length) {
153                 sortTail(d, l, size, comparator);
154                 size = limit;
155             }
156             if (comparator.compare(t, d[l - 1]) < 0) {
157                 d[size++] = t;
158                 return true;
159             }
160             return false;
161         }

...couldn't the nested call to put in line 146 just be skipped, letting the code fall through to the "if" in line 152 (with the return in line 150 moved between lines 148 and 149)? This would also fix the return value of put(), which is ignored for the nested call and replaced with true.

Also, what do you think of the following merging strategy that doesn't need to allocate a temporary array each time you perform a sortTail():

"first" phase:
- accumulate elements data[0] ... data[limit-1] and when reaching limit, sort them and set first = false (this differs from your logic, which accumulates up to data.length elements at first; this is a better strategy because it starts the second phase as soon as possible, and the second phase is more optimal since it already filters the elements it accumulates)

"second" phase:
- accumulate elements < data[limit-1] into data[limit] ... data[data.length-1] and when reaching length, sort the tail and perform a merge which looks like this:
- simulate merge of data[0] ... data[limit-1] with data[limit] ... data[size-1], deriving end indices i and j of each sub-sequence: data[0] ... data[i-1] and data[limit] ... data[j-1];
- move elements data[0] ... 
data[i-1] to positions data[limit-i] ... data[limit-1]
- perform in-place merge of data[limit-i] ... data[limit-1] and data[limit] ... data[j-1] into data[0] ... data[limit-1]

This, I think, results in dividing the additional copying operations by 2 on average and eliminates the allocation of a temporary array for merging, at the cost of a pre-merge step which just derives the end indices. There's a chance that this might improve performance because it trades memory writes for reads. What do you think? Regards, Peter On 03/05/2016 06:35 PM, Tagir F. Valeev wrote: > Hello! > > One of the popular bulk data operation is to find given number of > least or greatest elements. Currently Stream API provides no dedicated > operation to do this. Of course, it could be implemented by custom > collector and some third-party libraries already provide it. However > it would be quite natural to use existing API: > > stream.sorted().limit(k) - k least elements > stream.sorted(Comparator.reverseOrder()).limit(k) - k greatest elements. > > In fact people already doing this. Some samples could be found on > GitHub: > https://github.com/search?l=java&q=%22sorted%28%29.limit%28%22&type=Code&utf8=%E2%9C%93 > > Unfortunately current implementation of such sequence of operations is > suboptimal: first the whole stream content is dumped into intermediate > array, then sorted fully and after that k least elements is selected. > On the other hand it's possible to provide a special implementation > for this particular case which takes O(k) additional memory and in > many cases works significantly faster. > > I wrote proof-of-concept implementation, which could be found here: > http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/webrev/ > The implementation switches to new algorithm if limit is less than > 1000 which is quite common for such scenario (supporting bigger values > is also possible, but would require more testing). New algorithm > allocates an array of 2*limit elements. 
When its size is reached, it > sorts the array (using Arrays.sort) and discards the second half. > After that only those elements are accumulated which are less than the > worst element found so far. When array is filled again, the second > half is sorted and merged with the first half. > > Here's JMH test with results which covers several input patterns: > http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/ > > You may check summary first: > http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/summary.txt > Speedup values bigger than 1 are good. > > The most significant regression in the sequential mode of the new > implementation is the ever decreasing input (especially with the low > limit value). Still, it's not that bad (given the fact that old > implementation processes such input very fast). On the other hand, for > random input new implementation could be in order of magnitude faster. > Even for ever ascending input noteable speedup (like 40%) could be > achieved. > > For parallel stream the new implementation is almost always faster, > especially if you ignore the cases when parallel stream is > unprofitable. > > What do you think about this improvement? Could it be included into > JDK-9? Are there any issues I'm unaware of? I would be really happy to > complete this work if this is supported by JDK team. Current > implementation has no primitive specialization and does not optimize > the sorting out if the input is known to be sorted, but it's not very > hard to add these features as well if you find my idea useful. > > With best regards, > Tagir Valeev. 
> From peter.levart at gmail.com Sun Mar 6 13:00:14 2016 From: peter.levart at gmail.com (Peter Levart) Date: Sun, 6 Mar 2016 14:00:14 +0100 Subject: RFR: JDK-8149925 We don't need jdk.internal.ref.Cleaner any more In-Reply-To: <56D0C5F5.7060509@Oracle.com> References: <56B72242.7050102@gmail.com> <56B7C328.3060800@gmail.com> <56B83553.3020202@oracle.com> <56B874DA.80001@gmail.com> <56B9EB17.7020303@oracle.com> <56C1E765.7080603@oracle.com> <56C1FE37.9010507@oracle.com> <015201d16813$333650c0$99a2f240$@apache.org> <56C34B1B.8050001@gmail.com> <56C43817.7060805@gmail.com> <7BA56B2F-C1C6-4EAF-B900-A825C6B602EF@oracle.com> <56CA080F.6010308@gmail.com> <56CB83FF.4010808@Oracle.com> <56CC8A4A.9080303@gmail.com> <56CEAC28.80802@gmail.com> <56CEB49A.4090000@oracle.com> <56CEC6A5.3070202@gmail.com> <56D0C5F5.7060509@Oracle.com> Message-ID: <56DC29DE.4040006@gmail.com> Hi, I have been asked to split the changes needed to remove jdk.internal.ref.Cleaner into two changesets. The first one is to contain the straightforward non-controversial changes that remove the references to jdk.internal.ref.Cleaner and swap them with java.lang.ref.Cleaner in all places but Direct-X-Buffer. This part also contains changes that replace the use of lambdas and method references with alternatives. Here's the 1st part: http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.07.part1/ And here's the 2nd part that applies on top of part 1: http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.07.part2/ Together they form a functionally equivalent change to webrev.06priv with only two additional cosmetic changes in part 2 (renaming of method Cleaner.cleanNextPending -> Cleaner.cleanNextEnqueued and removal of an obsolete comment in nio Bits). If part2 is to be developed further, I would like to first push part1 so that maintaining the part2 changeset will be easier. 
Regards, Peter On 02/26/2016 10:39 PM, Roger Riggs wrote: > Hi Peter, > > I think this cleans up all the points raised earlier. > The optimization for enqueuing from the reference queue seems ok to me > and should be > more efficient than the previous implementation but I think Mandy or > Alan should look at it also. > > Thanks, Roger > > > On 2/25/2016 4:17 AM, Peter Levart wrote: >> Hi Alan, >> >> On 02/25/2016 09:00 AM, Alan Bateman wrote: >>> >>> >>> On 25/02/2016 07:24, Peter Levart wrote: >>>> : >>>> >>>> I kept the public boolean Cleaner.cleanNextPending() method which >>>> now only deals with enqueued Cleanable(s). I think this method >>>> might still be beneficial for public use in situations where >>>> cleanup actions take relatively long time to execute so that the >>>> rate of cleanup falls behind the rate of registration of new >>>> cleanup actions. >>> I think we need also need to look at the option where this is not >>> public. I have concerns that it is exposing implementation to some >>> extent and that may become an attractive nuisance in the future. >>> This shouldn't be an issue for the NIO buffer usage, we can keep the >>> usage via the shared secrets mechanism. I think this is what Mandy >>> is suggesting too. >>> >>> -Alan. >> >> Sure, no problem. 
Here's a variant that keeps the >> Cleaner.cleanNextPending() method private and exposed via >> SharedSecrets to nio Bits but is otherwise equivalent to webrev.06: >> >> http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.06priv/ >> >> >> Regards, Peter >> > From david.holmes at oracle.com Sun Mar 6 21:32:40 2016 From: david.holmes at oracle.com (David Holmes) Date: Mon, 7 Mar 2016 07:32:40 +1000 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> Message-ID: <56DCA1F8.4040304@oracle.com> On 5/03/2016 11:50 PM, Claes Redestad wrote: > Hi, > > similar issues were discovered too late to stop b108, e.g., https://bugs.openjdk.java.net/browse/JDK-8150920. Fix is already in jdk9/dev, so I think the next build should be more well-behaved and hope we can provide it more promptly than normal. As that bug leads to a non-open bug here's the changeset URL: http://hg.openjdk.java.net/jdk9/dev/jdk/rev/721288127c82 David > If you can build OpenJDK from jdk9/dev and report any remaining issues due to the multi-release feature that would be quite helpful! > > Thanks! > > /Claes > > Uwe Schindler skrev: (5 mars 2016 14:24:37 CET) >> Hi OpenJDK Core Developers, >> >> you may know the Apache Lucene team is testing early access releases of >> Java 9. We reported many bugs already, but most of them only applied to >> Hotspot and Lucene itself. But this problem since build 108 is now >> really severe, because it breaks the build system already! >> >> To allow further testing of Open Source Projects, I'd suggest to revert >> the Multi-Release-JAR runtime support patch and provide a new preview >> build ASAP, because, after a night of debugging a build system whose >> internals we don't fully know, we found out what is causing the >> problems and there is no workaround. 
I am very sorry that I have to say >> this, but unfortunately build 108 breaks *ALL* versions of Apache >> Ant, the grandfather of all Java build systems :-) I know OpenJDK >> is using it, too! So with the Multi-Release JAR file patch applied (see >> http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c), any >> Ant-based build - including the JDK build itself - would no longer >> bootstrap. It is also impossible to build Gradle projects, because >> Gradle uses Ant internally for many tasks. Maven projects may be >> affected, too. >> >> Now you might have the question: What happened? >> >> We tried to build Lucene on our Jenkins server, but the build itself >> failed with a confusing error message: >> >> BUILD FAILED >> /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:21: The >> following error occurred while executing this line: >> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:56: >> not doesn't support the nested "matches" element. >> >> The first idea was: Ah, there were changes in XML parsing >> (JDK-8149915). So we debugged the build, but it was quite clear that >> XML parsing was not the issue. It became clear once we enabled >> "-debug" on the build. What happened was that Ant was no longer loading its >> internal conditions/tasks/type definitions, so the build system >> no longer recognized almost any type. The debug log showed that Ant >> was no longer able to load the resource >> "/org/apache/tools/ant/antlib.xml" from its own JAR file. >> Instead it printed some strange debugging output (which looked totally >> broken). >> >> I spent the whole night digging through their code and found the issue: >> The commit of Multi-Release JAR files (see >> http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c) broke >> resource handling in Apache Ant. 
In short: If you call >> ClassLoader.getResources() or getResource() you get back a URL from >> which you can load the resource - this is all fine and still works. >> But with the Multi-Release JAR files patch this URL now has a >> fragment appended to it: '#release' (see >> http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c); this also >> applies to non-multi-release JAR files like Apache Ant's "ant.jar". >> >> In Java 7, Java 8, ... and Java 9 pre-b108, >> ClassLoader.getResource()/getResources() returned something like: >> >> "jar:file:/C:/Program%20Files/Java/apache-ant-1.9.6/lib/ant.jar!/org/apache/tools/ant/antlib.xml" >> >> In Java 9 b108 the following is returned instead: >> >> "jar:file:/C:/Program%20Files/Java/apache-ant-1.9.6/lib/ant.jar!/org/apache/tools/ant/antlib.xml#release" >> >> And here Ant breaks (and I assume many other projects like Maven, too). >> Ant checks the file extension of the string (because it may load >> definitions from both XML and properties files). So it does >> endsWith(".xml"), and of course this now returns false. The effect is >> that Ant tries to load its own task definitions as a Java properties >> file instead of XML. Of course this fails, because the data behind the >> URL is XML. As a result, Ant cannot bootstrap, because everything needed >> to build is missing. >> >> One might say: Ant's code is broken (I agree, it is not nice that it >> relies on the string representation of the resource URL - which is a >> no-go anyway), but it is impossible to fix, because Ant is bundled on >> most developer computers and those will suddenly break with Java 9! >> There is also no version out there that works around this, so we cannot >> test anything anymore! 
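A robust version of such an extension check looks at the URL's path component rather than at its full string form, since a fragment is never part of getPath(). A minimal sketch (a hypothetical helper for illustration, not Ant's actual code; the jar path is made up):

```java
import java.net.URL;

public class ResourceKindCheck {
    // Robust check: a fragment such as "#release" is parsed into
    // URL.getRef() and never appears in URL.getPath().
    static boolean isXml(URL url) {
        return url.getPath().endsWith(".xml");
    }

    public static void main(String[] args) throws Exception {
        URL u = new URL("jar:file:/opt/ant/lib/ant.jar"
                + "!/org/apache/tools/ant/antlib.xml#release");
        // Fragile variant: the appended fragment makes this false.
        boolean fragile = u.toString().endsWith(".xml");
        System.out.println(isXml(u) + " vs " + fragile); // true vs false
    }
}
```

The path-based check keeps working whether or not a fragment is appended, because java.net.URL strips the fragment into getRef() before the protocol handler ever sees the spec.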
>> >> The problematic line in Ant's code is here: >> http://grepcode.com/file/repo1.maven.org/maven2/org.apache.ant/ant/1.9.6/org/apache/tools/ant/taskdefs/Definer.java?av=f#259 >> >> I'd suggest reverting the Multi-Release JAR file patch ASAP and >> providing a new preview build as soon as possible. I think there is more >> work needed to fix this. If this does not revert to the original state, >> it will be impossible to build and test Lucene, Elasticsearch, ... (and >> almost every Java project out there!). In short: We cannot test anymore, >> and it is likely that we cannot support Java 9 anymore, because the >> build system used by most Java projects behind the scenes does not >> bootstrap itself anymore. >> >> My suggestion would be to investigate other versions of this patch >> that do *not* modify the resource URLs by appending a fragment to >> them (at least not for the "standard" case without an actual >> Multi-Release JAR). For new multi-release JAR files I am fine with >> appending fragments, but please not for default ones. Maybe change the code >> to handle the URLs from the non-versioned part differently (without a >> fragment). Leaving the fragment inside may break many other projects, >> because many programmers are very sloppy with handling URLs (a well-known >> issue is calling URL#getFile() on a file: URL, which breaks on Windows >> systems with spaces in the path name). Many people just call toString() on >> a URL and do some tests on it (startsWith, endsWith). So appending >> fragments is a no-go for backwards compatibility with JAR resources! >> >> I posted this to the mailing list and did not open a bug report on >> http://bugs.java.com/, because this is a more general issue - feel free >> to open bug reports around this!!! I would be very happy if we could >> find a quick solution for this problem. 
Until there is a solution we >> have to stop testing Java 9 with Apache Lucene/Solr/..., and this is >> not a good sign, especially as Jigsaw will be merged soon. >> >> Thanks for listening, >> Uwe >> >> P.S.: I also CCed the Apache Ant team. They should fix the broken code >> anyway, but this won't help the many projects already out there (e.g. >> Apache Lucene still has a minimum requirement of Ant 1.8.2 because >> MacOSX computers have shipped with that version for years). >> >> ----- >> Uwe Schindler >> uschindler at apache.org >> ASF Member, Apache Lucene PMC / Committer >> Bremen, Germany >> http://lucene.apache.org/ > From huizhe.wang at oracle.com Mon Mar 7 02:26:56 2016 From: huizhe.wang at oracle.com (huizhe wang) Date: Sun, 06 Mar 2016 18:26:56 -0800 Subject: [PING] RFR: JDK-8150704 XALAN: ERROR: 'No more DTM IDs are available' when transforming with lots of temporary result trees In-Reply-To: <98624cbb00fe4522846ef256aa1410d4@DEWDFE13DE11.global.corp.sap> References: <98624cbb00fe4522846ef256aa1410d4@DEWDFE13DE11.global.corp.sap> Message-ID: <56DCE6F0.4020303@oracle.com> Hi Christoph, Thanks for reporting and providing a patch for the issue! It looks like a nice solution that may potentially reduce memory requirements for some large templates. Could you also verify that the patch fixes JDK-8150699 [1], which was created the same day as yours? I assume the stylesheet was created just to illustrate the issue. If it's a real use case, then it should have made the variable global to avoid creating a lot of RTFs, and therefore avoid the whole "No more DTM IDs" issue. That would make the process a lot more efficient. Some classes, such as Sort.java, still contain the old header; please update them with the new one, such as that in DOM.java. 
The $Id sections, such as the following, can all be removed; they are from the legacy repository and misleading, since they imply the file was last updated, in this case, in 2005: 20 /* 21 * $Id: Sort.java,v 1.2.4.1 2005/09/12 11:08:12 pvedula Exp $ 22 */ For the new test, it's probably better to add some kind of assertion, e.g. an expected result, rather than failing on a broad Exception. What if the test passes but the transform operation isn't correct because of the changes? The test is also not sufficient. The release methods seem to be okay. However, they don't seem to have been fully exercised in the test (only simple RTFs were created?). In that sense, the sample attached in JDK-8150699 provides an opportunity to better verify the changes. It would be good to add some javadoc or dev notes to the test. While consolidating tests (into TransformerTest), please make sure notes/javadoc are copied over or added. [1] https://bugs.openjdk.java.net/browse/JDK-8150699 Thanks, Joe On 3/3/2016 11:50 PM, Langer, Christoph wrote: > Hi, > > Ping - any comments or reviews for this bugfix? > > Thanks > Christoph > > From: Langer, Christoph > Sent: Freitag, 26. Februar 2016 16:02 > To: core-libs-dev at openjdk.java.net > Subject: RFR: JDK-8150704 XALAN: ERROR: 'No more DTM IDs are available' when transforming with lots of temporary result trees > > Hi, > > I've created a fix proposal for the issue I have reported in this bug: > https://bugs.openjdk.java.net/browse/JDK-8150704 > > The webrev can be found here: > http://cr.openjdk.java.net/~clanger/webrevs/8150704.1/ > > The Xalan parser would eventually run out of DTM IDs if XSL transformations involve lots of temporary result trees. Those are never released, although they could be. A testcase is included for this. I've also done some cleanups in the Xalan code and in the tests. 
> > Thanks in advance for looking at this :) > > Best regards > Christoph > From michael.hixson at gmail.com Mon Mar 7 09:02:34 2016 From: michael.hixson at gmail.com (Michael Hixson) Date: Mon, 7 Mar 2016 01:02:34 -0800 Subject: default random access list spliterator Message-ID: The default List.spliterator() is iterator-based. Could this be improved for random access lists, using List.get(int) to fetch elements instead of List.iterator()? I think it could improve most on Spliterator.trySplit(). The current implementation allocates a new array for split-off elements. I see almost twice the throughput from list.parallelStream().forEach(...) with a custom get(int)-based implementation over the default one. For example, instead of this: default Spliterator spliterator() { return Spliterators.spliterator(this, Spliterator.ORDERED); } I'm suggesting something like this: default Spliterator spliterator() { return (this instanceof RandomAccess) ? Spliterators.randomAccessListSpliterator(this) : Spliterators.spliterator(this, Spliterator.ORDERED); } where randomAccessListSpliterator is new code that looks a lot like Spliterators.ArraySpliterator. -Michael From paul.sandoz at oracle.com Mon Mar 7 09:43:48 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 7 Mar 2016 10:43:48 +0100 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: <56DAE71B.7040400@oracle.com> References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> <56DAE71B.7040400@oracle.com> Message-ID: Hi Uwe, Alan, Uwe, thanks so much for testing and investigating, that is very helpful and really appreciated. The EA process is working as intended, although i wish the result was not so debilitating in this case. Sorry about that. 
> On 5 Mar 2016, at 15:03, Alan Bateman wrote: > > > On 05/03/2016 13:24, Uwe Schindler wrote: >> : >> >> I'd suggest reverting the Multi-Release JAR file patch ASAP and providing a new preview build as soon as possible. I think there is more work needed to fix this. If this does not revert to the original state, it will be impossible to build and test Lucene, Elasticsearch, ... (and almost every Java project out there!). In short: We cannot test anymore, and it is likely that we cannot support Java 9 anymore, because the build system used by most Java projects behind the scenes does not bootstrap itself anymore. >> > Sigh, I think those of us that reviewed this missed the point that the fragment is appended by default. Yes :-( I missed that in review. Here is a possible fix: URLClassPath.java: /** * This class is used to maintain a search path of URLs for loading classes * and resources from both JAR files and directories. @@ -760,7 +759,11 @@ try { // add #runtime fragment to tell JarURLConnection to use // runtime versioning if the underlying jar file is multi-release - url = new URL(getBaseURL(), ParseUtil.encodePath(name, false) + "#runtime"); + if (jar.isMultiRelease()) { + url = new URL(getBaseURL(), ParseUtil.encodePath(name, false) + "#runtime"); + } else { + url = new URL(getBaseURL(), ParseUtil.encodePath(name, false)); + } if (check) { URLClassPath.check(url); } With that fix I can successfully build Lucene (I think the problem with Ivy has the same underlying cause as with Ant; we have also noticed problems with Jetty). My intention was that the #runtime fragment should only be used for MR-JARs. We may need to reconsider that given the fragility of URL processing that has been reported, although MR-JARs are new and it will take time for them to work through the ecosystem, allowing time to weed out the bugs. Ideally the best solution is to change the URL scheme, say "mrjar:file:/...!/...class" 
only for MR-JARs of course, but I considered this might be even more invasive for class scanners etc. (assuming URLs are processed correctly). However, the Jigsaw image is already adjusting the scheme for classes in an image: l.getResource("java/net/URL.class") -> jrt:/java.base/java/net/URL.class and that will also impact other stuff folded into the image. So perhaps we should revisit? Tricky tradeoffs here. > This will of course break code that parses URL strings in naive ways (anything looking for ".xml" should be looking at the path component of course). I'll create a bug for this now, assuming you haven't created one already. > Alan created: https://bugs.openjdk.java.net/browse/JDK-8151339 Thanks, Paul. > One general point is that the purpose of EA builds and timely testing by Lucene and other projects is invaluable for shaking out issues. There will be issues periodically, and it is much better to find these within a few days of pushing a change rather than months later. > > -Alan From amaembo at gmail.com Mon Mar 7 09:57:21 2016 From: amaembo at gmail.com (Tagir F. Valeev) Date: Mon, 7 Mar 2016 15:57:21 +0600 Subject: default random access list spliterator In-Reply-To: References: Message-ID: <1258940150.20160307155721@gmail.com> Hello! I thought about such a possibility. One problem which would arise is that such a spliterator will not be able to properly track modCount and throw ConcurrentModificationException. As a consequence it might produce a silently inconsistent result if structural changes are performed on your list during the traversal. Note that currently you can override spliterator() in your List class this way: Spliterator<E> spliterator() { return IntStream.range(0, size()).mapToObj(this::get).spliterator(); } Such a one-liner produces a spliterator which splits nicely. The drawback is that it's eager-binding and not fail-fast, so it's definitely not an option for the JDK, but possibly acceptable for your project. 
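To make the trade-off concrete, here is a rough sketch of what a get(int)-based, index-splitting spliterator could look like — hypothetical code modeled on Spliterators.ArraySpliterator, not an actual JDK class, and (as discussed) not fail-fast:

```java
import java.util.List;
import java.util.Spliterator;
import java.util.function.Consumer;

// Sketch: splits by halving an index range instead of copying elements
// into an array, so trySplit() allocates no buffer.
class RandomAccessListSpliterator<E> implements Spliterator<E> {
    private final List<E> list;
    private int index;        // current position
    private final int fence;  // one past the last element

    RandomAccessListSpliterator(List<E> list) { this(list, 0, list.size()); }

    private RandomAccessListSpliterator(List<E> list, int origin, int fence) {
        this.list = list; this.index = origin; this.fence = fence;
    }

    @Override public boolean tryAdvance(Consumer<? super E> action) {
        if (index < fence) {
            action.accept(list.get(index++));
            return true;
        }
        return false;
    }

    @Override public Spliterator<E> trySplit() {
        // Hand off the lower half [lo, mid); keep [mid, fence).
        int lo = index, mid = (lo + fence) >>> 1;
        return lo >= mid ? null
            : new RandomAccessListSpliterator<>(list, lo, index = mid);
    }

    @Override public long estimateSize() { return fence - index; }

    @Override public int characteristics() {
        return ORDERED | SIZED | SUBSIZED;
    }
}
```

Wrapping it with StreamSupport.stream(new RandomAccessListSpliterator<>(list), true) yields a parallel stream whose splits are O(1) index arithmetic rather than array copies.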
Another option for JDK would be to leave default List.spliterator() implementation as is, but override it in AbstractList (which already tracks modCount). With best regards, Tagir Valeev. MH> The default List.spliterator() is iterator-based. Could this be MH> improved for random access lists, using List.get(int) to fetch MH> elements instead of List.iterator()? MH> I think it could improve most on Spliterator.trySplit(). The current MH> implementation allocates a new array for split-off elements. I see MH> almost twice the throughput from list.parallelStream().forEach(...) MH> with a custom get(int)-based implementation over the default one. MH> For example, instead of this: MH> default Spliterator spliterator() { MH> return Spliterators.spliterator(this, Spliterator.ORDERED); MH> } MH> I'm suggesting something like this: MH> default Spliterator spliterator() { MH> return (this instanceof RandomAccess) MH> ? Spliterators.randomAccessListSpliterator(this) MH> : Spliterators.spliterator(this, Spliterator.ORDERED); MH> } MH> where randomAccessListSpliterator is new code that looks a lot like MH> Spliterators.ArraySpliterator. MH> -Michael From paul.sandoz at oracle.com Mon Mar 7 10:08:37 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 7 Mar 2016 11:08:37 +0100 Subject: default random access list spliterator In-Reply-To: References: Message-ID: <4519E9C5-BE03-4951-A967-9B88EA41D645@oracle.com> Hi Michael, It could, stay tuned for some possible action on this. This is something we did discuss a while ago [1]. At the time we thought most List implementations would override so did not bother, and admittedly with the frenzy of all other stuff got de-prioritized. But, perhaps we underestimated the integration with existing libraries? 
To do that we would need to adjust the specification of the default behaviour which would also adjust the fail-fast behaviour as Tagir points out (which may be a reasonable compromise in the case, it should be possible to detect certain co-mod cases) Paul. [1] http://mail.openjdk.java.net/pipermail/lambda-libs-spec-experts/2013-May/001770.html > On 7 Mar 2016, at 10:02, Michael Hixson wrote: > > The default List.spliterator() is iterator-based. Could this be > improved for random access lists, using List.get(int) to fetch > elements instead of List.iterator()? > > I think it could improve most on Spliterator.trySplit(). The current > implementation allocates a new array for split-off elements. I see > almost twice the throughput from list.parallelStream().forEach(...) > with a custom get(int)-based implementation over the default one. > > For example, instead of this: > > default Spliterator spliterator() { > return Spliterators.spliterator(this, Spliterator.ORDERED); > } > > I'm suggesting something like this: > > default Spliterator spliterator() { > return (this instanceof RandomAccess) > ? Spliterators.randomAccessListSpliterator(this) > : Spliterators.spliterator(this, Spliterator.ORDERED); > } > > where randomAccessListSpliterator is new code that looks a lot like > Spliterators.ArraySpliterator. > > -Michael From michael.hixson at gmail.com Mon Mar 7 11:35:36 2016 From: michael.hixson at gmail.com (Michael Hixson) Date: Mon, 7 Mar 2016 03:35:36 -0800 Subject: default random access list spliterator In-Reply-To: <4519E9C5-BE03-4951-A967-9B88EA41D645@oracle.com> References: <4519E9C5-BE03-4951-A967-9B88EA41D645@oracle.com> Message-ID: Hi Tagir, Paul, Ah, it looks like Donald Raab had exactly the same suggestion. Sorry for the repeat. I was following that list at that time, and now I'm wondering whether my idea was my own. I agree with everything he said. 
> One problem which would arise is that such spliterator will not be able to properly track modCount and throw ConcurrentModificationException. Putting this in AbstractList instead of List sounds fine. I bet you could detect *more* co-mod cases and still improve performance, given that the current implementation dumps half of the elements into Spliterators.ArraySpliterator, which knows nothing about modifications. > But, perhaps we underestimated the integration with existing libraries? (from the previous thread) > The efficacy question is: what List implementations implement RA that don't already have their own specialized spliterator? Spliterator is pretty tough to implement. AbstractList is easy. I bet most List *views* (as opposed to complete storage) will extend AbstractList and provide get(int) and size(), and maybe a couple of other methods, but not the full catalog. That is my experience anyway. -Michael On Mon, Mar 7, 2016 at 2:08 AM, Paul Sandoz wrote: > Hi Michael, > > It could, stay tuned for some possible action on this. > > This is something we did discuss a while ago [1]. At the time we thought most List implementations would override so did not bother, and admittedly with the frenzy of all other stuff got de-prioritized. But, perhaps we underestimated the integration with existing libraries? > > To do that we would need to adjust the specification of the default behaviour which would also adjust the fail-fast behaviour as Tagir points out (which may be a reasonable compromise in the case, it should be possible to detect certain co-mod cases) > > Paul. > > [1] http://mail.openjdk.java.net/pipermail/lambda-libs-spec-experts/2013-May/001770.html > >> On 7 Mar 2016, at 10:02, Michael Hixson wrote: >> >> The default List.spliterator() is iterator-based. Could this be >> improved for random access lists, using List.get(int) to fetch >> elements instead of List.iterator()? >> >> I think it could improve most on Spliterator.trySplit(). 
The current >> implementation allocates a new array for split-off elements. I see >> almost twice the throughput from list.parallelStream().forEach(...) >> with a custom get(int)-based implementation over the default one. >> >> For example, instead of this: >> >> default Spliterator spliterator() { >> return Spliterators.spliterator(this, Spliterator.ORDERED); >> } >> >> I'm suggesting something like this: >> >> default Spliterator spliterator() { >> return (this instanceof RandomAccess) >> ? Spliterators.randomAccessListSpliterator(this) >> : Spliterators.spliterator(this, Spliterator.ORDERED); >> } >> >> where randomAccessListSpliterator is new code that looks a lot like >> Spliterators.ArraySpliterator. >> >> -Michael > From peter.levart at gmail.com Mon Mar 7 11:47:14 2016 From: peter.levart at gmail.com (Peter Levart) Date: Mon, 7 Mar 2016 12:47:14 +0100 Subject: default random access list spliterator In-Reply-To: <4519E9C5-BE03-4951-A967-9B88EA41D645@oracle.com> References: <4519E9C5-BE03-4951-A967-9B88EA41D645@oracle.com> Message-ID: <56DD6A42.5080002@gmail.com> What about a Spliterator based on List.subList() method? While the specification of List.subList() does not guarantee any specific behavior when underlying list is structurally modified, the implementations (at least implementations in JDK based on AbstractList) do have a fail-fast behavior and there's a chance other implementations too. Regards, Peter On 03/07/2016 11:08 AM, Paul Sandoz wrote: > Hi Michael, > > It could, stay tuned for some possible action on this. > > This is something we did discuss a while ago [1]. At the time we thought most List implementations would override so did not bother, and admittedly with the frenzy of all other stuff got de-prioritized. But, perhaps we underestimated the integration with existing libraries? 
> > To do that we would need to adjust the specification of the default behaviour which would also adjust the fail-fast behaviour as Tagir points out (which may be a reasonable compromise in the case, it should be possible to detect certain co-mod cases) > > Paul. > > [1] http://mail.openjdk.java.net/pipermail/lambda-libs-spec-experts/2013-May/001770.html > >> On 7 Mar 2016, at 10:02, Michael Hixson wrote: >> >> The default List.spliterator() is iterator-based. Could this be >> improved for random access lists, using List.get(int) to fetch >> elements instead of List.iterator()? >> >> I think it could improve most on Spliterator.trySplit(). The current >> implementation allocates a new array for split-off elements. I see >> almost twice the throughput from list.parallelStream().forEach(...) >> with a custom get(int)-based implementation over the default one. >> >> For example, instead of this: >> >> default Spliterator spliterator() { >> return Spliterators.spliterator(this, Spliterator.ORDERED); >> } >> >> I'm suggesting something like this: >> >> default Spliterator spliterator() { >> return (this instanceof RandomAccess) >> ? Spliterators.randomAccessListSpliterator(this) >> : Spliterators.spliterator(this, Spliterator.ORDERED); >> } >> >> where randomAccessListSpliterator is new code that looks a lot like >> Spliterators.ArraySpliterator. >> >> -Michael From paul.sandoz at oracle.com Mon Mar 7 12:55:49 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 7 Mar 2016 13:55:49 +0100 Subject: default random access list spliterator In-Reply-To: References: <4519E9C5-BE03-4951-A967-9B88EA41D645@oracle.com> Message-ID: <417D5844-013A-4DB9-B66F-22F329A23E68@oracle.com> > On 7 Mar 2016, at 12:35, Michael Hixson wrote: > > Hi Tagir, Paul, > > Ah, it looks like Donald Raab had exactly the same suggestion. Sorry > for the repeat. I was following that list at that time, and now I'm > wondering whether my idea was my own. I agree with everything he > said. 
> >> One problem which would arise is that such a spliterator will not be able to properly track modCount and throw ConcurrentModificationException. > > Putting this in AbstractList instead of List sounds fine. That will not work for all libraries (some don't use AbstractList, such as GS/Eclipse collections). > I bet you > could detect *more* co-mod cases and still improve performance, given > that the current implementation dumps half of the elements into > Spliterators.ArraySpliterator, which knows nothing about > modifications. > Certainly there is no doubt that leveraging the random-access property is of benefit performance-wise. >> But, perhaps we underestimated the integration with existing libraries? > (from the previous thread) >> The efficacy question is: what List implementations implement RA that don't already have their own specialized spliterator? > > Spliterator is pretty tough to implement. AbstractList is easy. I > bet most List *views* (as opposed to complete storage) will extend > AbstractList and provide get(int) and size(), and maybe a couple of > other methods, but not the full catalog. That is my experience > anyway. > Surfacing on AbstractList would be my backup solution if we cannot surface it on List, which I think we can, where polling size() is sufficient on a best-effort basis IMO. Paul. From paul.sandoz at oracle.com Mon Mar 7 12:59:11 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 7 Mar 2016 13:59:11 +0100 Subject: default random access list spliterator In-Reply-To: <56DD6A42.5080002@gmail.com> References: <4519E9C5-BE03-4951-A967-9B88EA41D645@oracle.com> <56DD6A42.5080002@gmail.com> Message-ID: > On 7 Mar 2016, at 12:47, Peter Levart wrote: > > What about a Spliterator based on List.subList() method? 
While the specification of List.subList() does not guarantee any specific behavior when the underlying list is structurally modified, the implementations (at least the implementations in the JDK based on AbstractList) do have fail-fast behavior, and there's a chance other implementations do too. > We currently have as the @implSpec: * @implSpec * The default implementation creates a * late-binding spliterator * from the list's {@code Iterator}. The spliterator inherits the * fail-fast properties of the list's iterator. Note the inheritance clause, which also covers the sublist case. We would need to update with something like: "If this list implements RandomAccess then ... and the spliterator is late-binding, and fail-fast on a best-effort basis if it is detected that this list (or any backing list if this list is a sub-list) has been structurally modified while traversing, due to a change in size as returned by the size() method." Paul. From david.lloyd at redhat.com Mon Mar 7 13:46:16 2016 From: david.lloyd at redhat.com (David M. Lloyd) Date: Mon, 7 Mar 2016 07:46:16 -0600 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> <56DAE71B.7040400@oracle.com> Message-ID: <56DD8628.3030302@redhat.com> On 03/07/2016 03:43 AM, Paul Sandoz wrote: > Hi Uwe, Alan, > > Uwe, thanks so much for testing and investigating, that is very helpful and really appreciated. The EA process is working as intended, although I wish the result was not so debilitating in this case. Sorry about that. > [...] > Here is a possible fix: > > URLClassPath.java: > > /** > * This class is used to maintain a search path of URLs for loading classes > * and resources from both JAR files and directories. 
> @@ -760,7 +759,11 @@ > try { > // add #runtime fragment to tell JarURLConnection to use > // runtime versioning if the underlying jar file is multi-release > - url = new URL(getBaseURL(), ParseUtil.encodePath(name, false) + "#runtime"); > + if (jar.isMultiRelease()) { > + url = new URL(getBaseURL(), ParseUtil.encodePath(name, false) + "#runtime"); > + } else { > + url = new URL(getBaseURL(), ParseUtil.encodePath(name, false)); > + } > if (check) { > URLClassPath.check(url); > } > > > With that fix I can successfully build Lucene (I think the problem with Ivy has the same underlying cause as with Ant; we have also noticed problems with Jetty). > > My intention was that the #runtime fragment should only be used for MR-JARs. Does that go far enough, though? I think there is a substantial amount of code which assumes (rightly) that you can build an exact path to a class in a JAR URL, and until today that would work fine. It makes more sense to me that you'd only want to have to add the fragment if you want to tell it "hey, I want Java 8's view of this path" or something - basically, only change the API when you're doing something that the API could not previously do, rather than changing JAR URLs for everyone. > We may need to reconsider that given the fragility of URL processing that has been reported, although MR-JARs are new and it will take time for them to work through the ecosystem, allowing time to weed out the bugs. > > Ideally the best solution is to change the URL scheme, say "mrjar:file:/...!/...class" only for MR-JARs of course, but I considered this might be even more invasive for class scanners etc. (assuming URLs are processed correctly). However, the Jigsaw image is already adjusting the scheme for classes in an image: > > l.getResource("java/net/URL.class") -> jrt:/java.base/java/net/URL.class > > and that will also impact other stuff folded into the image. Yeah, but that is isolated to JDK cases. 
In my experience, very very few tools or containers normally construct URLs for system class path items. I think that a substantially larger pool of software is likely to try accessing JARs by URL (at least going off of a highly unscientific bit of poking around on grepcode), and I don't think that this behavior should change from an API perspective. -- - DML From paul.sandoz at oracle.com Mon Mar 7 14:48:25 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 7 Mar 2016 15:48:25 +0100 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: <56DD8628.3030302@redhat.com> References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> <56DAE71B.7040400@oracle.com> <56DD8628.3030302@redhat.com> Message-ID: <36B80BEA-5F1F-480A-A701-0F9D1AA617D1@oracle.com> > On 7 Mar 2016, at 14:46, David M. Lloyd wrote: >> >> My intention was the #runtime fragment should only be used for MR-JARs. > > Does that go far enough though? I think there is a substantial amount of code which assumes (rightly) that you can build an exact path to a class in a JAR URL and until today that'd work fine. I would question the "rightly" part if it means directly parsing the characters of a URL without taking into account the encoding. I don't know how much of this is just a short-cut or because URL being buggy has forced this approach. > It makes more sense to me that you'd only want to have to add the fragment if you want to tell it "hey I want Java 8's view of this path" or something - basically only change API when you're doing something that the API could not previously do, rather than changing JAR URLs for everyone. > So, a class loader "covering" 
an MR-JAR would: 1) return resource URLs as they do today; and 2) any consumer can opt in by modifying that URL to get a versioned view (scheme or fragment, preferably the former in that case). FWIW that is how the jar-based URL connection works today: you have to opt in. It's only for MR-JAR contained resources from a class loader where that is not the case. We would need to carefully check other JDK areas, especially security/validation, to see what the knock-on effect is. This area is extremely fragile. >> We may need to reconsider that given the fragility of URL processing that has been reported, although MR-JARs are new and it will take time for them to work through the ecosystem, allowing time to weed out the bugs. >> >> Ideally the best solution is to change the URL scheme, say "mrjar:file:/...!/...class" only for MR-JARs of course, but I considered this might be even more invasive for class scanners etc. (assuming URLs are processed correctly). However, the Jigsaw image is already adjusting the scheme for classes in an image: >> >> l.getResource("java/net/URL.class") -> jrt:/java.base/java/net/URL.class >> >> and that will also impact other stuff folded into the image. > > Yeah but that is isolated to JDK cases. Not necessarily in the future, where it will be possible to fold libraries or applications into an image. FWIW certain application servers also have different URL schemes for their class loaders. Thanks, Paul. > In my experience, very very few tools or containers normally construct URLs for system class path items. I think that a substantially larger pool of software is likely to try accessing JARs by URL (at least going off of a highly unscientific bit of poking around on grepcode), and I don't think that this behavior should change from an API perspective. 
> > -- > - DML From peter.levart at gmail.com Mon Mar 7 14:53:10 2016 From: peter.levart at gmail.com (Peter Levart) Date: Mon, 7 Mar 2016 15:53:10 +0100 Subject: default random access list spliterator In-Reply-To: References: <4519E9C5-BE03-4951-A967-9B88EA41D645@oracle.com> <56DD6A42.5080002@gmail.com> Message-ID: <56DD95D6.4080309@gmail.com> On 03/07/2016 01:59 PM, Paul Sandoz wrote: >> On 7 Mar 2016, at 12:47, Peter Levart wrote: >> >> What about a Spliterator based on the List.subList() method? While the specification of List.subList() does not guarantee any specific behavior when the underlying list is structurally modified, the implementations (at least the implementations in the JDK based on AbstractList) do have fail-fast behavior, and there's a chance other implementations do too. >> > We currently have as the @implSpec: > > * @implSpec > * The default implementation creates a > * late-binding spliterator > * from the list's {@code Iterator}. The spliterator inherits the > * fail-fast properties of the list's iterator. > > Note the inheritance clause, which also covers the sublist case. > > We would need to update with something like: > > "If this list implements RandomAccess then... and the spliterator is late-binding, and fail-fast > on a best effort basis if it is detected that this list (or any backing list if this list is a sub-list) has > been structurally modified when traversing due to a change in size as returned by the size() > method." > > Paul. Hi Paul, I don't think you understood my hint. I was thinking of a Spliterator implementation for RandomAccess List(s) that would leverage the List.subList() method to implement splitting and/or fail-fast behavior. As there is a good chance that sub-list implementations already provide fail-fast behavior for structural changes in the backing list.
For example: Spliterator<E> spliterator() { List<E> subList = subList(0, size()); return IntStream.range(0, subList.size()).mapToObj(subList::get).spliterator(); } This is a simple variant of Tagir's eager-binding RandomAccess spliterator which is fail-fast if the List's sub-list is fail-fast. Regards, Peter From peter.levart at gmail.com Mon Mar 7 15:03:00 2016 From: peter.levart at gmail.com (Peter Levart) Date: Mon, 7 Mar 2016 16:03:00 +0100 Subject: default random access list spliterator In-Reply-To: <56DD95D6.4080309@gmail.com> References: <4519E9C5-BE03-4951-A967-9B88EA41D645@oracle.com> <56DD6A42.5080002@gmail.com> <56DD95D6.4080309@gmail.com> Message-ID: <56DD9824.8060504@gmail.com> On 03/07/2016 03:53 PM, Peter Levart wrote: > As there is a good chance that sub-list implementations already > provide fail-fast behavior for structural changes in the backing list. Ah, well... I checked AbstractMutableList in Eclipse collections and it doesn't provide fail-fast behavior for its subList(s), unfortunately... Regards, Peter From paul.sandoz at oracle.com Mon Mar 7 15:15:07 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 7 Mar 2016 16:15:07 +0100 Subject: default random access list spliterator In-Reply-To: <56DD95D6.4080309@gmail.com> References: <4519E9C5-BE03-4951-A967-9B88EA41D645@oracle.com> <56DD6A42.5080002@gmail.com> <56DD95D6.4080309@gmail.com> Message-ID: > On 7 Mar 2016, at 15:53, Peter Levart wrote: > > > > On 03/07/2016 01:59 PM, Paul Sandoz wrote: >>> On 7 Mar 2016, at 12:47, Peter Levart wrote: >>> >>> What about a Spliterator based on the List.subList() method? While the specification of List.subList() does not guarantee any specific behavior when the underlying list is structurally modified, the implementations (at least the implementations in the JDK based on AbstractList) do have fail-fast behavior, and there's a chance other implementations do too.
>>> >> We currently have as the @implSpec: >> >> * @implSpec >> * The default implementation creates a >> * late-binding spliterator >> * from the list's {@code Iterator}. The spliterator inherits the >> * fail-fast properties of the list's iterator. >> >> Note the inheritance clause, which also covers the sublist case. >> >> We would need to update with something like: >> >> "If this list implements RandomAccess then... and the spliterator is late-binding, and fail-fast >> on a best effort basis if it is detected that this list (or any backing list if this list is a sub-list) has >> been structurally modified when traversing due to a change in size as returned by the size() >> method." >> >> Paul. > > Hi Paul, > > I don't think you understood my hint. Clearly not :-) I thought you were asking a general question on subList behaviour. I see what you mean now: specify the default implementation to defer to subList. > I was thinking of a Spliterator implementation for RandomAccess List(s) that would leverage the List.subList() method to implement splitting and/or fail-fast behavior. As there is a good chance that sub-list implementations already provide fail-fast behavior for structural changes in the backing list. For example: > > Spliterator<E> spliterator() { > List<E> subList = subList(0, size()); > return IntStream.range(0, subList.size()).mapToObj(subList::get).spliterator(); > } > > This is a simple variant of Tagir's eager-binding RandomAccess spliterator which is fail-fast if the List's sub-list is fail-fast. > Although that is not late-binding, nor is it terribly efficient (the spliterator of a stream is an escape hatch; we should try to avoid it for critical stuff like that in ArrayList [*]). If there is a constraint to be relaxed I would prefer it be the fail-fast properties. > On 03/07/2016 03:53 PM, Peter Levart wrote: >> As there is a good chance that sub-list implementations already provide fail-fast behavior for structural changes in the backing list.
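Peter's subList-based spliterator sketch can be tried out directly. The demo below is illustrative only: the class name is invented, and ArrayList is just one possible backing list. It shows the fail-fast behavior of the sub-list view being inherited by the resulting spliterator:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.Spliterator;
import java.util.stream.IntStream;

public class SubListSpliteratorDemo {
    // Peter's sketch, lifted out of List as a static helper: build the
    // spliterator from a full-range sub-list view so that any fail-fast
    // behavior of subList is inherited.
    static <E> Spliterator<E> spliterator(List<E> list) {
        List<E> subList = list.subList(0, list.size());
        return IntStream.range(0, subList.size()).mapToObj(subList::get).spliterator();
    }

    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
        Spliterator<String> sp = spliterator(list);
        list.add("d"); // structurally modify the backing list
        try {
            sp.forEachRemaining(s -> { });
            System.out.println("no exception - not fail-fast");
        } catch (ConcurrentModificationException e) {
            System.out.println("ConcurrentModificationException - fail-fast");
        }
    }
}
```

With ArrayList the traversal fails fast, because the sub-list view checks the backing list's modification count on each get(); other collection libraries may not behave this way, which is exactly the caveat raised in this thread.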
> > Ah, well... I checked AbstractMutableList in Eclipse collections and it doesn't provide fail-fast behavior for its subList(s), unfortunately... > Ok. Thanks, Paul. [*] We did use it in the list implementation for Collections.nCopies, which defers to the stream implementation, which in this case is, I think, justifiable. From amaembo at gmail.com Mon Mar 7 15:30:06 2016 From: amaembo at gmail.com (Tagir F. Valeev) Date: Mon, 7 Mar 2016 21:30:06 +0600 Subject: Stream API: Fuse sorted().limit(n) into single operation In-Reply-To: <56DC0EE0.2030402@gmail.com> References: <1598030827.20160305233516@gmail.com> <56DC0EE0.2030402@gmail.com> Message-ID: <271771888.20160307213006@gmail.com> Hello! Thank you for your comments! PL> - in Limiter.put: Nice catch! A good example of how a series of minor code refactorings leads to something strange. Webrev is updated in-place: http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/webrev/ PL> Also, what do you think of the following merging strategy that PL> doesn't need to allocate a temporary array each time you perform a sortTail(): I think the main goal of such algos is to reduce comparator calls. Allocating an additional buffer and some copying operations should not be very expensive (especially given the fact that we don't know the comparator call cost and it could be pretty high). Actually I have a couple of additional optimizations in mind which may speed up some input patterns. But before working on that I would like to get the green light for this feature. I already spent quite a lot of time working on the proof-of-concept implementation. Paul, could you please comment on this? If some time is necessary for the evaluation, no problem, I will wait. If additional clarifications are necessary from my side, I would be happy to answer any questions. With best regards, Tagir Valeev. PL> "first" phase: PL> - accumulate elements data[0] ... 
data[limit-1] and when reaching PL> limit, sort them and set first = false (this differs from your PL> logic which accumulates up to data.length elements at first PL> and is a better strategy, because it starts the second phase PL> as soon as possible and the second phase is more optimal since it PL> already filters the elements that it accumulates) PL> "second" phase: PL> - accumulate elements < data[limit-1] into data[limit] ... PL> data[data.length-1] and when reaching length, sort the tail and PL> perform a merge which looks like this: PL> - simulate merge of data[0] ... data[limit-1] with data[limit] PL> ... data[size-1] deriving end indices i and j of each PL> sub-sequence: data[0] ... data[i-1] and data[limit] ... data[j-1]; PL> - move elements data[0] ... data[i-1] to positions PL> data[limit-i] ... data[limit-1] PL> - perform in-place merge of data[limit-i] ... data[limit-1] and PL> data[limit] ... data[j-1] into data[0] ... data[limit-1] PL> This, I think, results in dividing the additional copying PL> operations by 2 on average and eliminates the allocation of a PL> temporary array for merging, for the cost of a pre-merge step PL> which just derives the end indices. There's a chance that this PL> might improve performance because it trades memory writes for reads. PL> What do you think? PL> Regards, Peter PL> On 03/05/2016 06:35 PM, Tagir F. Valeev wrote: PL> PL> PL> Hello! PL> One of the popular bulk data operations is to find a given number of PL> least or greatest elements. Currently the Stream API provides no dedicated PL> operation to do this. Of course, it could be implemented by a custom PL> collector and some third-party libraries already provide it. However PL> it would be quite natural to use the existing API: PL> stream.sorted().limit(k) - k least elements PL> stream.sorted(Comparator.reverseOrder()).limit(k) - k greatest elements. PL> In fact, people are already doing this. 
Some samples could be found on PL> GitHub: PL> https://github.com/search?l=java&q=%22sorted%28%29.limit%28%22&type=Code&utf8=%E2%9C%93 PL> Unfortunately the current implementation of such a sequence of operations is PL> suboptimal: first the whole stream content is dumped into an intermediate PL> array, then sorted fully and after that the k least elements are selected. PL> On the other hand it's possible to provide a special implementation PL> for this particular case which takes O(k) additional memory and in PL> many cases works significantly faster. PL> I wrote a proof-of-concept implementation, which could be found here: PL> http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/webrev/ PL> The implementation switches to the new algorithm if the limit is less than PL> 1000 which is quite common for such a scenario (supporting bigger values PL> is also possible, but would require more testing). The new algorithm PL> allocates an array of 2*limit elements. When its size is reached, it PL> sorts the array (using Arrays.sort) and discards the second half. PL> After that only those elements are accumulated which are less than the PL> worst element found so far. When the array is filled again, the second PL> half is sorted and merged with the first half. PL> Here's a JMH test with results which covers several input patterns: PL> http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/ PL> You may check the summary first: PL> http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/summary.txt PL> Speedup values bigger than 1 are good. PL> The most significant regression in the sequential mode of the new PL> implementation is the ever decreasing input (especially with the low PL> limit value). Still, it's not that bad (given the fact that the old PL> implementation processes such input very fast). On the other hand, for PL> random input the new implementation could be an order of magnitude faster. PL> Even for ever ascending input a noticeable speedup (like 40%) could be
PL> For parallel stream the new implementation is almost always faster, PL> especially if you ignore the cases when parallel stream is PL> unprofitable. PL> What do you think about this improvement? Could it be included into PL> JDK-9? Are there any issues I'm unaware of? I would be really happy to PL> complete this work if this is supported by JDK team. Current PL> implementation has no primitive specialization and does not optimize PL> the sorting out if the input is known to be sorted, but it's not very PL> hard to add these features as well if you find my idea useful. PL> With best regards, PL> Tagir Valeev. PL> PL> PL> From felix.yang at oracle.com Mon Mar 7 16:04:26 2016 From: felix.yang at oracle.com (Felix Yang) Date: Tue, 8 Mar 2016 00:04:26 +0800 Subject: RFR 8151352, jdk/test/sample fails with "effective library path is outside the test suite" Message-ID: <56DDA68A.6000907@oracle.com> Hi all, please review the fix for two tests under "test/sample/". Bug: https://bugs.openjdk.java.net/browse/JDK-8151352 Webrev: http://cr.openjdk.java.net/~xiaofeya/8151352/webrev.00/ Original declaration, "@library ../../../src/sample...", is invalid with the latest change in jtreg. See https://bugs.openjdk.java.net/browse/CODETOOLS-7901585. This fix doesn't resolve dependency to "src/sample", but only converts them into testng tests and declares "external.lib.roots" to avoid dot-dot. Thanks, Felix From chris.hegarty at oracle.com Mon Mar 7 16:29:42 2016 From: chris.hegarty at oracle.com (Chris Hegarty) Date: Mon, 7 Mar 2016 16:29:42 +0000 Subject: RFR [9] 8151384: Examine sun.misc.ASCIICaseInsensitiveComparator Message-ID: <56DDAC76.5080606@oracle.com> sun.misc.ASCIICaseInsensitiveComparator appears to be a specialized comparator for comparing strings that contain only ASCII characters. Its main usage seems to be in sorted maps that support the character set implementation. This is startup/performance sensitive code. 
It looks like an "optimized" version of String's public case insensitive comparator, for when the strings are known to contain only ASCII characters. The public string case insensitive comparator, in some cases, does a toUpperCase and a toLowerCase. ASCIICaseInsensitiveComparator is trying to avoid this. Looking at String.CASE_INSENSITIVE_ORDER it looks like it can be, somewhat easily, optimized to give similar performance to that of ASCIICaseInsensitiveComparator without much risk. This will allow usages of ASCIICaseInsensitiveComparator to be replaced with String.CASE_INSENSITIVE_ORDER. For one, internal getChar does not pay the cost of bounds checks that charAt does ( which is used by ASCIICaseInsensitiveComparator ). What is in the webrev are specialized versions of compare for when the coders of the strings match. Alternatively, this could be pushed down to String[Latin1|UTF16]. Webrev & bug: http://cr.openjdk.java.net/~chegar/8151384/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8151384 Benchmarks and results ( based, somewhat, on Aleksey's [2] ): http://cr.openjdk.java.net/~chegar/8151384/bench/ Two micro benchmarks: 1) Compare performance of comparing available charset names with ASCIICaseInsensitiveComparator and CASE_INSENSITIVE_ORDER. After the changes, CASE_INSENSITIVE_ORDER marginally outperforms ASCIICaseInsensitiveComparator. 2) Compare general performance of CASE_INSENSITIVE_ORDER. The results show improved performance for all cases, especially when one, or more, strings contain UTF16. Note: this issue does not intend to optimize String.CASE_INSENSITIVE_ORDER as much as possible, just to make reasonable changes that improve performance to a point where it is a reasonable replacement for ASCIICaseInsensitiveComparator. Further optimization should not be prevented, or thwarted, by this work. Note: the usage of ASCIICaseInsensitiveComparator in jar attributes appears to have been done to avoid the allocation cost of toLowerCase. 
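For background, the ASCII shortcut such a comparator can exploit looks roughly like this. This is an illustrative sketch, not the code from the webrev; the class and method names are invented. For ASCII letters, case folding is a range check plus a constant offset, so neither Character.toUpperCase nor Character.toLowerCase is needed:

```java
public class AsciiCompare {
    // Case-insensitive comparison that is exact for ASCII-only strings,
    // roughly the shortcut that sun.misc.ASCIICaseInsensitiveComparator exploits.
    static int compareIgnoreCaseAscii(String a, String b) {
        int len = Math.min(a.length(), b.length());
        for (int i = 0; i < len; i++) {
            char c1 = a.charAt(i), c2 = b.charAt(i);
            if (c1 != c2) {
                c1 = foldAscii(c1);
                c2 = foldAscii(c2);
                if (c1 != c2) {
                    return c1 - c2;
                }
            }
        }
        // Equal prefixes: the shorter string orders first.
        return a.length() - b.length();
    }

    // Folding an ASCII upper-case letter to lower case is a range check
    // plus a constant offset - no Character.toUpperCase/toLowerCase calls.
    static char foldAscii(char c) {
        return (c >= 'A' && c <= 'Z') ? (char) (c + ('a' - 'A')) : c;
    }
}
```

For non-ASCII input this disagrees with String.CASE_INSENSITIVE_ORDER, which performs both an upper-case and a lower-case conversion to handle alphabets with asymmetric case mappings; that extra work is precisely what makes the general comparator harder to optimize.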
This seems acceptable for hashCode, but could be avoided, if necessary. -Chris. [2] http://cr.openjdk.java.net/~shade/density/ From chris.hegarty at oracle.com Mon Mar 7 16:42:39 2016 From: chris.hegarty at oracle.com (Chris Hegarty) Date: Mon, 7 Mar 2016 16:42:39 +0000 Subject: RFR: JDK-8149925 We don't need jdk.internal.ref.Cleaner any more In-Reply-To: <56DC29DE.4040006@gmail.com> References: <56B72242.7050102@gmail.com> <56B7C328.3060800@gmail.com> <56B83553.3020202@oracle.com> <56B874DA.80001@gmail.com> <56B9EB17.7020303@oracle.com> <56C1E765.7080603@oracle.com> <56C1FE37.9010507@oracle.com> <015201d16813$333650c0$99a2f240$@apache.org> <56C34B1B.8050001@gmail.com> <56C43817.7060805@gmail.com> <7BA56B2F-C1C6-4EAF-B900-A825C6B602EF@oracle.com> <56CA080F.6010308@gmail.com> <56CB83FF.4010808@Oracle.com> <56CC8A4A.9080303@gmail.com> <56CEAC28.80802@gmail.com> <56CEB49A.4090000@oracle.com> <56CEC6A5.3070202@gmail.com> <56D0C5F5.7060509@Oracle.com> <56DC29DE.4040006@gmail.com> Message-ID: <56DDAF7F.5070800@oracle.com> On 06/03/16 13:00, Peter Levart wrote: > Hi, > > I have been asked to split the changes needed to remove > jdk.internal.ref.Cleaner into two changesets. The first one is to > contain the straightforward non-controversial changes that remove the > references to jdk.internal.ref.Cleaner and swaps them with > java.lang.ref.Cleaner in all places but Direct-X-Buffer. This part also > contains changes that replace use of lambdas and method references with > alternatives. > > Here's the 1st part: > > http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.07.part1/ Looks good to me. > And here's the 2nd part that applies on top of part 1: > > http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.07.part2/ From what I can see. I think this is good. -Chris. 
> > Together they form functionally equivalent change as in webrev.06priv > with only two additional cosmetic changes to part 2 (renaming of method > Cleaner.cleanNextPending -> Cleaner.cleanNextEnqueued and removal of an > obsolete comment in nio Bits). > > If part2 is to be developed further, I would like to 1st push part1 so > that maintenance of part2 changeset will be easier. > > Regards, Peter > > On 02/26/2016 10:39 PM, Roger Riggs wrote: >> Hi Peter, >> >> I think this cleans up all the points raised earlier. >> The optimization for enqueuing from the reference queue seems ok to me >> and should be >> more efficient than the previous implementation but I think Mandy or >> Alan should look at it also. >> >> Thanks, Roger >> >> >> On 2/25/2016 4:17 AM, Peter Levart wrote: >>> Hi Alan, >>> >>> On 02/25/2016 09:00 AM, Alan Bateman wrote: >>>> >>>> >>>> On 25/02/2016 07:24, Peter Levart wrote: >>>>> : >>>>> >>>>> I kept the public boolean Cleaner.cleanNextPending() method which >>>>> now only deals with enqueued Cleanable(s). I think this method >>>>> might still be beneficial for public use in situations where >>>>> cleanup actions take relatively long time to execute so that the >>>>> rate of cleanup falls behind the rate of registration of new >>>>> cleanup actions. >>>> I think we need also need to look at the option where this is not >>>> public. I have concerns that it is exposing implementation to some >>>> extent and that may become an attractive nuisance in the future. >>>> This shouldn't be an issue for the NIO buffer usage, we can keep the >>>> usage via the shared secrets mechanism. I think this is what Mandy >>>> is suggesting too. >>>> >>>> -Alan. >>> >>> Sure, no problem. 
Here's a variant that keeps the >>> Cleaner.cleanNextPending() method private and exposed via >>> SharedSecrets to nio Bits but is otherwise equivalent to webrev.06: >>> >>> http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.06priv/ >>> >>> >>> Regards, Peter >>> >> > From aleksey.shipilev at oracle.com Mon Mar 7 16:51:35 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Mon, 7 Mar 2016 19:51:35 +0300 Subject: RFR [9] 8151384: Examine sun.misc.ASCIICaseInsensitiveComparator In-Reply-To: <56DDAC76.5080606@oracle.com> References: <56DDAC76.5080606@oracle.com> Message-ID: <56DDB197.3070409@oracle.com> Hi, On 03/07/2016 07:29 PM, Chris Hegarty wrote: > What is in the webrev is specialized versions of compare when > the coder of the strings match. Alternatively, this could be pushed > down to String[Latin1|UTF16]. > > Webrev & bug: > http://cr.openjdk.java.net/~chegar/8151384/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8151384 Overall, good cleanup idea. I think the actual helpers deserve to be pushed to String[Latin1|UTF16], as String is already overloaded with lots of code. See how e.g. String.regionMatches(boolean ignoreCase, int toffset, String other, int ooffset, int len) does it. Nits: *) Change: compareLatin1ToUTF16(v2, v1) * -1; To: -compareLatin1ToUTF16(v2, v1); *) Do we really need to cast up/down to "char" in compareLatin1*? > Benchmarks and results ( based, somewhat, on Aleksey's [1] ): > http://cr.openjdk.java.net/~chegar/8151384/bench/ Comments on benchmarks (that might have impact on validity): *) "# JMH 1.6 (released 388 days ago, please consider updating!)" *) CaseInsensitiveCompare.cmp1_cmp1 is suspiciously unaffected by size. 
That's because benchmark goes through the identity comparison: @Benchmark @CompilerControl(CompilerControl.Mode.DONT_INLINE) public int cmp1_cmp1() { return CASE_INSENSITIVE_ORDER.compare(cmp1_1, cmp1_1); } ...should really be: @Benchmark @CompilerControl(CompilerControl.Mode.DONT_INLINE) public int cmp1_cmp1() { return CASE_INSENSITIVE_ORDER.compare(cmp1_1, cmp1_2); } *) Probable dead-code elimination here: @Benchmark @CompilerControl(CompilerControl.Mode.DONT_INLINE) public void StringCaseInsensitiveComparator() { List strings = AVAILABLE_CHARSETS; for (String s1 : strings) { for (String s2 : strings) { String.CASE_INSENSITIVE_ORDER.compare(s1, s2); } } } ...should be: @Benchmark @CompilerControl(CompilerControl.Mode.DONT_INLINE) public void StringCaseInsensitiveComparator(Blackhole bh) { List strings = AVAILABLE_CHARSETS; for (String s1 : strings) { for (String s2 : strings) { bh.consume(String.CASE_INSENSITIVE_ORDER.compare(s1, s2)); } } } Thanks, -Aleksey From peter.levart at gmail.com Mon Mar 7 16:52:04 2016 From: peter.levart at gmail.com (Peter Levart) Date: Mon, 7 Mar 2016 17:52:04 +0100 Subject: Stream API: Fuse sorted().limit(n) into single operation In-Reply-To: <271771888.20160307213006@gmail.com> References: <1598030827.20160305233516@gmail.com> <56DC0EE0.2030402@gmail.com> <271771888.20160307213006@gmail.com> Message-ID: <56DDB1B4.2020206@gmail.com> Hi Tagir, On 03/07/2016 04:30 PM, Tagir F. Valeev wrote: > Hello! > > Thank you for your comments! > > PL> - in Limiter.put: > > Nice catch! A good example when series of minor code refactorings lead > to something strange. Webrev is updated in-place: > http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/webrev/ > > PL> Also, what do you think of the following merging strategy that > PL> doesn't need to allocate a temporary array each time you perform a sortTail(): > > I think, the main goal of such algos is to reduce comparator calls. 
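The 2*limit buffering scheme from Tagir's proposal (quoted in full below) can be sketched in a few lines. This is a hedged illustration for ints only: the class and method names are invented, and it re-sorts the whole buffer where the actual patch sorts only the tail and merges the halves:

```java
import java.util.Arrays;

public class TopK {
    // Return the k smallest elements of input, in ascending order.
    // Buffer of 2*k elements: when it fills up, sort and keep the best k;
    // afterwards only admit elements smaller than the current k-th smallest.
    static int[] kSmallest(int[] input, int k) {
        int[] buf = new int[2 * k];
        int size = 0;
        boolean primed = false; // true once buf[0..k-1] holds the best k so far, sorted
        for (int v : input) {
            if (primed && v >= buf[k - 1]) {
                continue; // cannot beat the current worst of the best k
            }
            buf[size++] = v;
            if (size == buf.length) {
                Arrays.sort(buf); // the real implementation merges sorted halves instead
                size = k;         // discard the worse half
                primed = true;
            }
        }
        Arrays.sort(buf, 0, size);
        return Arrays.copyOf(buf, Math.min(k, size));
    }
}
```

The filtering step is where the win over sort-everything comes from: for random input, once the buffer is primed most elements are rejected with a single comparison.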
> Allocating additional buffer and some copying operations should not be > very expensive (especially given the fact that we don't know > comparator call cost and it could be pretty high). You are right about that. Comparator can be specified by user and may be expensive. > Actually I have a > couple of additional optimizations in mind which may speedup some > input patterns. But before working on that I would like to get the > green light for this feature. I already spent quite a big time working > on proof-of-concept implementation. Right. Then maybe instead of repeated allocation of scratch array of size 'limit', you could allocate a 3*limit sized array (instead of 2*limit) for the whole fused operation and use this last third as a scratch space for merging. Even better, use 3 distinct arrays of size 'limit' and use them interchangeably: "first" phase: - collect elements into targetArray, sort it, set first = false "second" phase: while there's more: - set primaryArray = targetArray - collect elements < primaryArray[limit-1] into secondaryArray, sort it - merge primaryArray and secondaryArray into targetArray No copying necessary. I'm sure you have something like that in your mind already... Regards, Peter > > Paul, could you please comment on this? If some time is necessary for > the evaluation, no problem, I will wait. If additional clarifications > are necessary from my side, I would be happy to answer any questions. > > With best regards, > Tagir Valeev. > > PL> "first" phase: > > PL> - accumulate elements data[0] ... data[limit-1] and when reaching > PL> limit, sort them and set first = false (this differs from your > PL> logic which accumulates up to data.length elements at first > PL> and is a better strategy, because it starts the second phase > PL> as soon as possible and second phase is more optimal since it > PL> already filters elements that accumulates) > > PL> "second" phase: > > PL> - accumulate elements < data[limit-1] into data[limit] ... 
> PL> data[data.length-1] and when reaching length, sort the tail and > PL> perform merge which looks like this: > PL> - simulate merge of data[0] ... data[limit-1] with data[limit] > PL> ... data[size-1] deriving end indices i and j of each > PL> sub-sequence: data[0] ... data[i-1] and data[limit] ... data[j-1]; > PL> - move elements data[0] ... data[i-1] to positions > PL> data[limit-i] ... data[limit-1] > PL> - perform in-place merge of data[limit-i] ... data[limit-1] and > PL> data[limit] ... data[j-1] into data[0] ... data[limit-1] > > > PL> This, I think, results in dividing the additional copying > PL> operations by 2 in average and eliminates allocation of > PL> temporary array for merging for the cost of pre-merge step > PL> which just derives the end indices. There's a chance that this > PL> might improve performance because it trades memory writes for reads. > > PL> What do you think? > > PL> Regards, Peter > > > > PL> On 03/05/2016 06:35 PM, Tagir F. Valeev wrote: > PL> > PL> > PL> Hello! > > PL> One of the popular bulk data operation is to find given number of > PL> least or greatest elements. Currently Stream API provides no dedicated > PL> operation to do this. Of course, it could be implemented by custom > PL> collector and some third-party libraries already provide it. However > PL> it would be quite natural to use existing API: > > PL> stream.sorted().limit(k) - k least elements > PL> stream.sorted(Comparator.reverseOrder()).limit(k) - k greatest elements. > > PL> In fact people already doing this. Some samples could be found on > PL> GitHub: > PL> https://github.com/search?l=java&q=%22sorted%28%29.limit%28%22&type=Code&utf8=%E2%9C%93 > > PL> Unfortunately current implementation of such sequence of operations is > PL> suboptimal: first the whole stream content is dumped into intermediate > PL> array, then sorted fully and after that k least elements is selected. 
> PL> On the other hand it's possible to provide a special implementation > PL> for this particular case which takes O(k) additional memory and in > PL> many cases works significantly faster. > > PL> I wrote proof-of-concept implementation, which could be found here: > PL> http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/webrev/ > PL> The implementation switches to new algorithm if limit is less than > PL> 1000 which is quite common for such scenario (supporting bigger values > PL> is also possible, but would require more testing). New algorithm > PL> allocates an array of 2*limit elements. When its size is reached, it > PL> sorts the array (using Arrays.sort) and discards the second half. > PL> After that only those elements are accumulated which are less than the > PL> worst element found so far. When array is filled again, the second > PL> half is sorted and merged with the first half. > > PL> Here's JMH test with results which covers several input patterns: > PL> http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/ > > PL> You may check summary first: > PL> http://cr.openjdk.java.net/~tvaleev/patches/sortedLimit/jmh/summary.txt > PL> Speedup values bigger than 1 are good. > > PL> The most significant regression in the sequential mode of the new > PL> implementation is the ever decreasing input (especially with the low > PL> limit value). Still, it's not that bad (given the fact that old > PL> implementation processes such input very fast). On the other hand, for > PL> random input new implementation could be in order of magnitude faster. > PL> Even for ever ascending input noteable speedup (like 40%) could be > PL> achieved. > > PL> For parallel stream the new implementation is almost always faster, > PL> especially if you ignore the cases when parallel stream is > PL> unprofitable. > > PL> What do you think about this improvement? Could it be included into > PL> JDK-9? Are there any issues I'm unaware of? 
I would be really happy to > PL> complete this work if this is supported by JDK team. Current > PL> implementation has no primitive specialization and does not optimize > PL> the sorting out if the input is known to be sorted, but it's not very > PL> hard to add these features as well if you find my idea useful. > > PL> With best regards, > PL> Tagir Valeev. > > > PL> > PL> > PL> > From bodewig at apache.org Sat Mar 5 18:55:59 2016 From: bodewig at apache.org (Stefan Bodewig) Date: Sat, 05 Mar 2016 19:55:59 +0100 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: <06ad01d176e9$c19e2740$44da75c0$@apache.org> (Uwe Schindler's message of "Sat, 5 Mar 2016 15:17:26 +0100") References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> <56DAE71B.7040400@oracle.com> <06ad01d176e9$c19e2740$44da75c0$@apache.org> Message-ID: <87oaasycsg.fsf@v35516.1blu.de> On 2016-03-05, Uwe Schindler wrote: > This is why I put the Ant developers in CC. The correct way would be > to look at the *decoded* path (not just getPath() because this is also > one of the "famous" traps in the URL class - one reason why it should > be avoided in favor of URI). URL.toURI().getPath() is most safe to fix > the issue in Apache Ant Part of the reason for this certainly is that the code has been written before the URI class even existed. > (Stefan Bodewig: Should I open an issue in Ant?). Yes, please do. Thanks Uwe. > Maybe Ant developers can fix this code in later versions to handle > URLs more correct. 
+1 Stefan From andrejohn.mas at gmail.com Sat Mar 5 23:15:04 2016 From: andrejohn.mas at gmail.com (=?utf-8?Q?Andr=C3=A9-John_Mas?=) Date: Sat, 5 Mar 2016 18:15:04 -0500 Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven) In-Reply-To: References: <069f01d176e2$6084d6e0$218e84a0$@apache.org> Message-ID: Hi, Given the issues we are seeing, and I suspect this is not the only code with these assumptions, is there any way this functionality can be limited to "multi-release aware" code, either via a constructor parameter or a new method? What is the most elegant approach? Andre > On 5 Mar, 2016, at 08:50, Claes Redestad wrote: > > Hi, > > similar issues were discovered too late to stop b108, e.g., https://bugs.openjdk.java.net/browse/JDK-8150920. Fix is already in jdk9/dev, so I think the next build should be more well-behaved and hope we can provide it more promptly than normal. > > If you can build OpenJDK from jdk9/dev and report any remaining issues due to the multi-release feature that would be quite helpful! > > Thanks! > > /Claes > > Uwe Schindler skrev: (5 mars 2016 14:24:37 CET) >> Hi OpenJDK Core Developers, >> >> you may know the Apache Lucene team is testing early access releases of >> Java 9. We reported many bugs already, but most of them only applied to >> Hotspot and Lucene itsself. But this problem since build 108 is now >> really severe, because it breaks the build system already! >> >> To allow further testing of Open Source Projects, I'd suggest to revert >> the Multi-Release-JAR runtime support patch and provide a new preview >> build ASAP, because we found out after a night of debugging a build >> system from which we don't know all internals what is causing the >> problems and there is no workaround. 
I am very sorry that I have to say >> this, but unfortunately build 108 breaks *ALL* versions of Apache >> Ant, the grandfather of all Java build systems :-) I know also OpenJDK >> is using it, too! So with the Multi-Release JAR file patch applied (see >> http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c), any >> Ant-based build - including the JDK build itself - would no longer >> bootstrap. It is impossible to also build Gradle projects, because >> Gradle uses Ant internally for many tasks. Maven projects may be >> affected, too. >> >> Now you might have the question: What happened? >> >> We tried to build Lucene on our Jenkins server, but the build itself >> failed with a stupid error message: >> >> BUILD FAILED >> /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:21: The >> following error occurred while executing this line: >> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:56: >> not doesn't support the nested "matches" element. >> >> The first idea was: Ah, there were changes in XML parsing >> (JDK-8149915). So we debugged the build. But it was quite clear that >> XML parsing was not the issue. It got quite clear when we enabled >> "-debug" on the build. What happened was that Ant was not loading its >> internal conditions/tasks/type definitions anymore, so the build system >> does not know almost any type anymore. The debug log showed that Ant >> was no longer able to load the resource >> "/org/apache/tools/ant/antlib.xml" from its own JAR file anymore. >> Instead it printed some strange debugging output (which looked totally >> broken). >> >> I spent the whole night digging through their code and found the issue: >> The commit of Multi-Release-Jar files (see >> http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c) broke >> resource handling in Apache Ant. 
In short: If you call >> ClassLoader.getResources() or getResource() you get back a URL from >> which you can load the resource - this is all fine and still works. >> But with the Multi-Release JAR files patch, the URL now has a >> fragment appended to it: '#release' (see >> http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/f9913ea0f95c); this also >> applies to non-multi-release JAR files like Apache Ant's "ant.jar". >> >> In Java 7, Java 8,... and Java 9 pre-b108, >> ClassLoader.getResource()/getResources() returned URLs like: >> >> "jar:file:/C:/Program%20Files/Java/apache-ant-1.9.6/lib/ant.jar!/org/apache/tools/ant/antlib.xml" >> >> Now in Java 9 b108 the following is returned: >> >> "jar:file:/C:/Program%20Files/Java/apache-ant-1.9.6/lib/ant.jar!/org/apache/tools/ant/antlib.xml#release" >> >> And here Ant breaks (and I assume many other projects like Maven, too). >> Ant checks the file extension of the string (because it may load >> definitions from both XML and properties files). So it does >> endsWith(".xml") and of course this now returns false. The effect is >> that Ant tries to load its own task definitions as a Java properties >> file instead of XML. Of course this fails, because the data behind this >> URL is XML. The result is that Ant cannot bootstrap, as everything needed to >> build is missing. >> >> One might say: Ant's code is broken (I agree, it is not nice because it >> relies on the string representation of the resource URL - which is a >> no-go anyway), but it is impossible to fix, because Ant is bundled on >> most developer computers and those will suddenly break with Java 9! >> There is also no released version that works around this, so we cannot >> test anything anymore!
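The failure mode described above can be reproduced in a few lines. The sketch below uses a hypothetical jar path (only the entry name matches Ant's antlib.xml); it shows why a suffix check on the URL's string form breaks while path-based checks keep working:

```java
import java.net.URL;

public class FragmentSuffixCheck {
    public static void main(String[] args) throws Exception {
        // The shape of URL that b108 returned even for plain (non-multi-release)
        // jars; the jar path here is a hypothetical stand-in.
        URL u = new URL(
            "jar:file:/opt/ant/lib/ant.jar!/org/apache/tools/ant/antlib.xml#release");

        // Ant's Definer tests the *string form* of the URL, so the appended
        // fragment defeats the ".xml" suffix check:
        System.out.println(u.toString().endsWith(".xml")); // false

        // The fragment lives only in the ref component; the path is untouched:
        System.out.println(u.getRef());                    // release
        System.out.println(u.getPath().endsWith(".xml"));  // true
    }
}
```

Code that inspects `u.getPath()` rather than `u.toString()` keeps working, which is why the breakage surfaced mainly in string-based checks like Ant's.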
>> >> The problematic line in Ant's code is here: >> http://grepcode.com/file/repo1.maven.org/maven2/org.apache.ant/ant/1.9.6/org/apache/tools/ant/taskdefs/Definer.java?av=f#259 >> >> I'd suggest reverting the Multi-Release JAR file patch and >> providing a new preview build as soon as possible. I think there is more >> work needed to fix this. If this is not reverted to the original state, >> it will be impossible to build and test Lucene, Elasticsearch,.... (and >> almost every Java project out there!). In short: we cannot test anymore, >> and it is likely that we cannot support Java 9 anymore, because the >> build system used by most Java projects behind the scenes does not >> bootstrap itself anymore. >> >> My suggestion would be to investigate other versions of this patch >> that do *not* modify the resource URLs by appending a fragment to >> them (at least not for the "standard" case without an actual >> Multi-Release JAR). For new multi-release JAR files I am fine with >> appending fragments, but please not for default ones. Maybe change the code >> to handle the URLs from the non-versioned part differently (without >> a fragment). Leaving the fragment inside may break many other projects, >> because many programmers are very sloppy with handling URLs (a well-known >> issue is calling URL#getFile() on a file: URL, which breaks on Windows >> systems with spaces in the path name). Many people just call toString() on >> a URL and do some test on it (startsWith, endsWith). So appending >> fragments is a no-go for backwards compatibility with JAR resources! >> >> I posted this to the mailing list and did not open a bug report on >> http://bugs.java.com/, because this is a more general issue - feel free >> to open bug reports around this!!! I would be very happy if we could >> find a quick solution for this problem.
Until there is a solution we >> have to stop testing Java 9 with Apache Lucene/Solr/..., and this is >> not a good sign, especially as Jigsaw will be merged soon. >> >> Thanks for listening, >> Uwe >> >> P.S.: I also CCed the Apache Ant team. They should fix the broken code >> anyways, but this won't help for many projects already out there (e.g. >> Apache Lucene still has a minimum requirement of Ant 1.8.2 because >> MacOSX computers ship with that version since years). >> >> ----- >> Uwe Schindler >> uschindler at apache.org >> ASF Member, Apache Lucene PMC / Committer >> Bremen, Germany >> http://lucene.apache.org/ > > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity. From mandy.chung at oracle.com Mon Mar 7 18:35:27 2016 From: mandy.chung at oracle.com (Mandy Chung) Date: Mon, 7 Mar 2016 10:35:27 -0800 Subject: RFR: JDK-8149925 We don't need jdk.internal.ref.Cleaner any more In-Reply-To: <56DC29DE.4040006@gmail.com> References: <56B72242.7050102@gmail.com> <56B7C328.3060800@gmail.com> <56B83553.3020202@oracle.com> <56B874DA.80001@gmail.com> <56B9EB17.7020303@oracle.com> <56C1E765.7080603@oracle.com> <56C1FE37.9010507@oracle.com> <015201d16813$333650c0$99a2f240$@apache.org> <56C34B1B.8050001@gmail.com> <56C43817.7060805@gmail.com> <7BA56B2F-C1C6-4EAF-B900-A825C6B602EF@oracle.com> <56CA080F.6010308@gmail.com> <56CB83FF.4010808@Oracle.com> <56CC8A4A.9080303@gmail.com> <56CEAC28.80802@gmail.com> <56CEB49A.4090000@oracle.com> <56CEC6A5.3070202@gmail.com> <56D0C5F5.7060509@Oracle.com> <56DC29DE.4040006@gmail.com> Message-ID: <059F3798-66D4-4300-B618-09868A04E3DC@oracle.com> > On Mar 6, 2016, at 5:00 AM, Peter Levart wrote: > > Hi, > > I have been asked to split the changes needed to remove jdk.internal.ref.Cleaner into two changesets. The first one is to contain the straightforward non-controversial changes that remove the references to jdk.internal.ref.Cleaner and swaps them with java.lang.ref.Cleaner in all places but Direct-X-Buffer. 
This part also contains changes that replace use of lambdas and method references with alternatives. > > Here's the 1st part: > > http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.07.part1/ > webrev.07.part1 looks okay. > And here's the 2nd part that applies on top of part 1: > > http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.07.part2/ > > > Together they form a functionally equivalent change to webrev.06priv, with only two additional cosmetic changes to part 2 (renaming of method Cleaner.cleanNextPending -> Cleaner.cleanNextEnqueued and removal of an obsolete comment in nio Bits). > I studied webrev.06priv and the history of JDK-6857566. I'm not comfortable with any arbitrary thread handling the enqueuing of the pending references (this change is more about the fix for JDK-6857566). I like your proposed change to take over handling the whole chain of pending references at once. The unhookPhase and enqueuePhase add complexity that I think we can avoid. I'm okay with only the system's cleaner thread helping the reference handler thread do its job. Would you consider having the special cleaner thread help with the enqueuing before waiting on the cleaner's ReferenceQueue? The allocating thread may do a System.gc() that may discover phantom reachable references. All it's interested in is the direct byte buffer ones, so that it can deallocate the native memory. What is the downside of having a dedicated Cleaner for direct byte buffers that could special-case them? > If part2 is to be developed further, I would like to first push part1 so that maintenance of the part2 changeset will be easier. It's okay with me to push part1. I'd like to see different prototypes for part2 being explored so we can evaluate the pros and cons of each one. Sorry, I realize this may require additional work.
Mandy From joe.darcy at oracle.com Mon Mar 7 18:51:57 2016 From: joe.darcy at oracle.com (joe darcy) Date: Mon, 7 Mar 2016 10:51:57 -0800 Subject: RFR 8151352, jdk/test/sample fails with "effective library path is outside the test suite" In-Reply-To: <56DDA68A.6000907@oracle.com> References: <56DDA68A.6000907@oracle.com> Message-ID: <56DDCDCD.5080207@oracle.com> Hello, Looks fine; thanks, -Joe On 3/7/2016 8:04 AM, Felix Yang wrote: > Hi all, > please review the fix for two tests under "test/sample/". > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8151352 > Webrev: > http://cr.openjdk.java.net/~xiaofeya/8151352/webrev.00/ > > The original declaration, "@library ../../../src/sample...", is invalid > with the latest change in jtreg. See > https://bugs.openjdk.java.net/browse/CODETOOLS-7901585. This fix > doesn't resolve the dependency on "src/sample", but only converts the tests > into testng tests and declares "external.lib.roots" to avoid dot-dot. > > Thanks, > Felix From steve.drach at oracle.com Mon Mar 7 19:07:11 2016 From: steve.drach at oracle.com (Steve Drach) Date: Mon, 7 Mar 2016 11:07:11 -0800 Subject: RFR: 8151339 Adding fragment to JAR URLs breaks ant Message-ID: Hi, Please review the following changeset. We'd like to get this into build 109, which means by noon today. This is essentially a temporary fix, but it's been tested and Lucene has been built against it. We will follow up with a more comprehensive fix by build 110. webrev: http://cr.openjdk.java.net/~sdrach/8151339/webrev/ issue: https://bugs.openjdk.java.net/browse/JDK-8151339 Thanks Steve From Alan.Bateman at oracle.com Mon Mar 7 19:23:23 2016 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Mon, 7 Mar 2016 19:23:23 +0000 Subject: RFR: 8151339 Adding fragment to JAR URLs breaks ant In-Reply-To: References: Message-ID: <56DDD52B.5070806@oracle.com> On 07/03/2016 19:07, Steve Drach wrote: > Hi, > > Please review the following changeset.
We'd like to get this into > build 109, which means by noon today. This is essentially a temporary > fix, but it's been tested and Lucene has been built against it. We > will follow up with a more comprehensive fix by build 110. > > webrev: http://cr.openjdk.java.net/~sdrach/8151339/webrev/ > > issue: https://bugs.openjdk.java.net/browse/JDK-8151339 > I chatted with Paul about this and I think we were both leaning towards backing out the MR JAR changes until there is a better solution for URLs. If you don't want to go down that route then your short-term fix to get the world working again is okay. -Alan From joe.darcy at oracle.com Mon Mar 7 19:25:39 2016 From: joe.darcy at oracle.com (joe darcy) Date: Mon, 7 Mar 2016 11:25:39 -0800 Subject: JDK 9 RFR of JDK-8151393: Revert changes for JDK-8087104 Message-ID: <56DDD5B3.1040705@oracle.com> Hello, The changes for JDK-8087104 introduced some test failures which have not yet been addressed (JDK-8151310). In order to get a clean snapshot for the next integration, if the fix for JDK-8151310 doesn't arrive in time, the changes for JDK-8087104 should be reverted until they can be otherwise corrected. In case the fix doesn't arrive, I patch -R'ed the changeset for JDK-8087104. The (anti)diff to DateFormatSymbols.java is below; the newly-introduced failing test file from the previous changeset test/java/text/Format/DateFormat/DFSConstructorCloneTest.java is deleted. Thanks, -Joe --- a/src/java.base/share/classes/java/text/DateFormatSymbols.java Fri Mar 04 10:09:54 2016 -0800 +++ b/src/java.base/share/classes/java/text/DateFormatSymbols.java Mon Mar 07 11:20:50 2016 -0800 @@ -1,5 +1,5 @@ /* - * Copyright (c) 1996, 2016, Oracle and/or its affiliates. All rights reserved. + * Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
* * This code is free software; you can redistribute it and/or modify it @@ -606,7 +606,7 @@ try { DateFormatSymbols other = (DateFormatSymbols)super.clone(); - copyMembers(new SymbolsCacheEntry(locale), other); + copyMembers(this, other); return other; } catch (CloneNotSupportedException e) { throw new InternalError(e); @@ -669,7 +669,7 @@ /** * Cache to hold DateFormatSymbols instances per Locale. */ - private static final ConcurrentMap<Locale, SoftReference<SymbolsCacheEntry>> cachedInstances + private static final ConcurrentMap<Locale, SoftReference<DateFormatSymbols>> cachedInstances = new ConcurrentHashMap<>(3); private transient int lastZoneIndex; @@ -683,10 +683,10 @@ locale = desiredLocale; // Copy values of a cached instance if any. - SoftReference<SymbolsCacheEntry> ref = cachedInstances.get(locale); - SymbolsCacheEntry sce; - if (ref != null && (sce = ref.get()) != null) { - copyMembers(sce, this); + SoftReference<DateFormatSymbols> ref = cachedInstances.get(locale); + DateFormatSymbols dfs; + if (ref != null && (dfs = ref.get()) != null) { + copyMembers(dfs, this); return; } @@ -717,11 +717,11 @@ weekdays = toOneBasedArray(resource.getStringArray("DayNames")); shortWeekdays = toOneBasedArray(resource.getStringArray("DayAbbreviations")); - sce = new SymbolsCacheEntry(locale); - ref = new SoftReference<>(sce); - SoftReference<SymbolsCacheEntry> x = cachedInstances.putIfAbsent(locale, ref); + // Put a clone in the cache + ref = new SoftReference<>((DateFormatSymbols)this.clone()); + SoftReference<DateFormatSymbols> x = cachedInstances.putIfAbsent(locale, ref); if (x != null) { - SymbolsCacheEntry y = x.get(); + DateFormatSymbols y = x.get(); if (y == null) { // Replace the empty SoftReference with ref. cachedInstances.put(locale, ref); @@ -812,7 +812,7 @@ * @param src the source DateFormatSymbols. * @param dst the target DateFormatSymbols.
*/ - private void copyMembers(SymbolsCacheEntry src, DateFormatSymbols dst) + private void copyMembers(DateFormatSymbols src, DateFormatSymbols dst) { dst.eras = Arrays.copyOf(src.eras, src.eras.length); dst.months = Arrays.copyOf(src.months, src.months.length); @@ -821,7 +821,7 @@ dst.shortWeekdays = Arrays.copyOf(src.shortWeekdays, src.shortWeekdays.length); dst.ampms = Arrays.copyOf(src.ampms, src.ampms.length); if (src.zoneStrings != null) { - dst.zoneStrings = getZoneStringsImpl(true); + dst.zoneStrings = src.getZoneStringsImpl(true); } else { dst.zoneStrings = null; } @@ -842,43 +842,4 @@ } stream.defaultWriteObject(); } - - private static class SymbolsCacheEntry { - - final String eras[]; - final String months[]; - final String shortMonths[]; - final String weekdays[]; - final String shortWeekdays[]; - final String ampms[]; - final String zoneStrings[][]; - final String localPatternChars; - - SymbolsCacheEntry(Locale locale) { - // Initialize the fields from the ResourceBundle for locale. 
- LocaleProviderAdapter adapter = LocaleProviderAdapter.getAdapter(DateFormatSymbolsProvider.class, locale); - // Avoid any potential recursions - if (!(adapter instanceof ResourceBundleBasedAdapter)) { - adapter = LocaleProviderAdapter.getResourceBundleBased(); - } - ResourceBundle resource = ((ResourceBundleBasedAdapter) adapter).getLocaleData().getDateFormatData(locale); - if (resource.containsKey("Eras")) { - this.eras = resource.getStringArray("Eras"); - } else if (resource.containsKey("long.Eras")) { - this.eras = resource.getStringArray("long.Eras"); - } else if (resource.containsKey("short.Eras")) { - this.eras = resource.getStringArray("short.Eras"); - } else { - this.eras = null; - } - this.months = resource.getStringArray("MonthNames"); - this.shortMonths = resource.getStringArray("MonthAbbreviations"); - this.weekdays = toOneBasedArray(resource.getStringArray("DayNames")); - this.shortWeekdays = toOneBasedArray(resource.getStringArray("DayAbbreviations")); - this.ampms = resource.getStringArray("AmPmMarkers"); - this.zoneStrings = TimeZoneNameUtility.getZoneStrings(locale); - this.localPatternChars = resource.getString("DateTimePatternChars"); - - } - } } From Alan.Bateman at oracle.com Mon Mar 7 19:55:12 2016 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Mon, 7 Mar 2016 19:55:12 +0000 Subject: JDK 9 RFR of JDK-8151393: Revert changes for JDK-8087104 In-Reply-To: <56DDD5B3.1040705@oracle.com> References: <56DDD5B3.1040705@oracle.com> Message-ID: <56DDDCA0.30503@oracle.com> On 07/03/2016 19:25, joe darcy wrote: > Hello, > > The changes for JDK-8087104 introduced some test failures which have > not yet been addressed (JDK-8151310). In order to get a clean snapshot > for the next integration, if the fix for JDK-8151310 doesn't arrive in > time, the changes for JDK-8087104 should be reverted until they can be > otherwise corrected. > > In case the fix doesn't arrive, I patch -R'ed the changset for > JDK-8087104. 
The (anti)diff to DateFormatSymbols.java is below; the > newly-introduced failing test file from the previous changeset > > test/java/text/Format/DateFormat/DFSConstructorCloneTest.java > > is deleted. This looks okay to me. -Alan From kim.barrett at oracle.com Mon Mar 7 20:31:15 2016 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 7 Mar 2016 15:31:15 -0500 Subject: RFR: JDK-8149925 We don't need jdk.internal.ref.Cleaner any more In-Reply-To: <56DC29DE.4040006@gmail.com> References: <56B72242.7050102@gmail.com> <56B7C328.3060800@gmail.com> <56B83553.3020202@oracle.com> <56B874DA.80001@gmail.com> <56B9EB17.7020303@oracle.com> <56C1E765.7080603@oracle.com> <56C1FE37.9010507@oracle.com> <015201d16813$333650c0$99a2f240$@apache.org> <56C34B1B.8050001@gmail.com> <56C43817.7060805@gmail.com> <7BA56B2F-C1C6-4EAF-B900-A825C6B602EF@oracle.com> <56CA080F.6010308@gmail.com> <56CB83FF.4010808@Oracle.com> <56CC8A4A.9080303@gmail.com> <56CEAC28.80802@gmail.com> <56CEB49A.4090000@oracle.com> <56CEC6A5.3070202@gmail.com> <56D0C5F5.7060509@Oracle.com> <56DC29DE.4040006@gmail.com> Message-ID: > On Mar 6, 2016, at 8:00 AM, Peter Levart wrote: > > Hi, > > I have been asked to split the changes needed to remove jdk.internal.ref.Cleaner into two changesets. The first one is to contain the straightforward non-controversial changes that remove the references to jdk.internal.ref.Cleaner and swaps them with java.lang.ref.Cleaner in all places but Direct-X-Buffer. This part also contains changes that replace use of lambdas and method references with alternatives. > > Here's the 1st part: > > http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.07.part1/ webrev.07.part1 looks good to me. (Consider me a "reviewer" and not a "Reviewer" for this.)
> > And here's the 2nd part that applies on top of part 1: > > http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.07.part2/ I've only briefly skimmed part2; I agree with Mandy that this part needs more discussion. From chris.hegarty at oracle.com Mon Mar 7 21:55:52 2016 From: chris.hegarty at oracle.com (Chris Hegarty) Date: Mon, 7 Mar 2016 21:55:52 +0000 Subject: RFR [9] 8151384: Examine sun.misc.ASCIICaseInsensitiveComparator In-Reply-To: <56DDB197.3070409@oracle.com> References: <56DDAC76.5080606@oracle.com> <56DDB197.3070409@oracle.com> Message-ID: <27567165-455D-428A-8FF9-BA3C786A1697@oracle.com> Aleksey, Very helpful, as always. I pushed the methods down into String[Latin1|UTF16], and followed existing style. This is much cleaner. Thanks for catching the silly mistakes in the benchmarks. Updated links: http://cr.openjdk.java.net/~chegar/8151384/webrev.01/ http://cr.openjdk.java.net/~chegar/8151384/bench.01/ -Chris. On 7 Mar 2016, at 16:51, Aleksey Shipilev wrote: > Hi, > > On 03/07/2016 07:29 PM, Chris Hegarty wrote: >> What is in the webrev is specialized versions of compare when >> the coder of the strings match. >> >> Webrev & bug: >> http://cr.openjdk.java.net/~chegar/8151384/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8151384 > > Overall, good cleanup idea. I think the actual helpers deserve to be > pushed to String[Latin1|UTF16], as String is already overloaded with > lots of code. See how e.g. String.regionMatches(boolean ignoreCase, int > toffset, String other, int ooffset, int len) does it. > > Nits: > > *) Change: compareLatin1ToUTF16(v2, v1) * -1; > To: -compareLatin1ToUTF16(v2, v1); > > *) Do we really need to cast up/down to "char" in compareLatin1*?
> > >> Benchmarks and results ( based, somewhat, on Aleksey's [1] ): >> http://cr.openjdk.java.net/~chegar/8151384/bench/ > > Comments on benchmarks (that might have impact on validity): > > *) "# JMH 1.6 (released 388 days ago, please consider updating!)" > > *) CaseInsensitiveCompare.cmp1_cmp1 is suspiciously unaffected by size. > That's because benchmark goes through the identity comparison: > > @Benchmark > @CompilerControl(CompilerControl.Mode.DONT_INLINE) > public int cmp1_cmp1() { > return CASE_INSENSITIVE_ORDER.compare(cmp1_1, cmp1_1); > } > > ...should really be: > > @Benchmark > @CompilerControl(CompilerControl.Mode.DONT_INLINE) > public int cmp1_cmp1() { > return CASE_INSENSITIVE_ORDER.compare(cmp1_1, cmp1_2); > } > > *) Probable dead-code elimination here: > > @Benchmark > @CompilerControl(CompilerControl.Mode.DONT_INLINE) > public void StringCaseInsensitiveComparator() { > List<String> strings = AVAILABLE_CHARSETS; > for (String s1 : strings) { > for (String s2 : strings) { > String.CASE_INSENSITIVE_ORDER.compare(s1, s2); > } > } > } > > ...should be: > > @Benchmark > @CompilerControl(CompilerControl.Mode.DONT_INLINE) > public void StringCaseInsensitiveComparator(Blackhole bh) { > List<String> strings = AVAILABLE_CHARSETS; > for (String s1 : strings) { > for (String s2 : strings) { > bh.consume(String.CASE_INSENSITIVE_ORDER.compare(s1, s2)); > } > } > } > > > Thanks, > -Aleksey > From aleksey.shipilev at oracle.com Mon Mar 7 22:20:39 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Tue, 8 Mar 2016 01:20:39 +0300 Subject: RFR [9] 8151384: Examine sun.misc.ASCIICaseInsensitiveComparator In-Reply-To: <27567165-455D-428A-8FF9-BA3C786A1697@oracle.com> References: <56DDAC76.5080606@oracle.com> <56DDB197.3070409@oracle.com> <27567165-455D-428A-8FF9-BA3C786A1697@oracle.com> Message-ID: <56DDFEB7.9080809@oracle.com> Hi Chris, On 03/08/2016 12:55 AM, Chris Hegarty wrote: > Updated links: > http://cr.openjdk.java.net/~chegar/8151384/webrev.01/ *) Your previous
patch had the explicit access to CharacterDataLatin1.instance.(toLowerCase|toUpperCase). Any reason not to use it in your new patch? Probably saves a shift and branch on a hot path -- j.l.String is one of those magical places where it actually matters. Otherwise looks good. > http://cr.openjdk.java.net/~chegar/8151384/bench.01/ *) This one worries me a little bit. ASCIICaseInsensitive has the anomalous 360 us/op result in one of the forks, and that "best" is better than average StringCaseInsensitive: http://cr.openjdk.java.net/~chegar/8151384/bench.01/AvailableCharsetsCompare_afterChanges.txt This might be the genuine run-to-run variance with compiling the hot loop; I'd recommend using Blackhole.consume, not the accumulator variable to try avoiding that. Otherwise looks good. Thanks, -Aleksey From xueming.shen at oracle.com Mon Mar 7 22:36:03 2016 From: xueming.shen at oracle.com (Xueming Shen) Date: Mon, 07 Mar 2016 14:36:03 -0800 Subject: RFR [9] 8151384: Examine sun.misc.ASCIICaseInsensitiveComparator In-Reply-To: <27567165-455D-428A-8FF9-BA3C786A1697@oracle.com> References: <56DDAC76.5080606@oracle.com> <56DDB197.3070409@oracle.com> <27567165-455D-428A-8FF9-BA3C786A1697@oracle.com> Message-ID: <56DE0253.6090508@oracle.com> Chris, 515 hashCode = name.toLowerCase(Locale.ROOT).hashCode(); otherwise + 1 -sherman On 03/07/2016 01:55 PM, Chris Hegarty wrote: > Aleksey, > > Very helpful, as always. > > I pushed the methods down into String[Latin1|UTF16], and followed existing > style. This is much cleaner. > > Thanks for catching the silly mistakes in the benchmarks. > > Updated links: > http://cr.openjdk.java.net/~chegar/8151384/webrev.01/ > http://cr.openjdk.java.net/~chegar/8151384/bench.01/ > > -Chris. > > > On 7 Mar 2016, at 16:51, Aleksey Shipilev wrote: > >> Hi, >> >> On 03/07/2016 07:29 PM, Chris Hegarty wrote: >>> What is in the webrev is specialized versions of compare when >>> the coder of the strings match. 
Alternatively, this could be pushed >>> down to String[Latin1|UTF16]. >>> >>> Webrev& bug: >>> http://cr.openjdk.java.net/~chegar/8151384/webrev.00/ >>> https://bugs.openjdk.java.net/browse/JDK-8151384 >> Overall, good cleanup idea. I think the actual helpers deserve to be >> pushed to String[Latin1|UTF16], as String is already overloaded with >> lots of code. See how e.g. String.regionMatches(boolean ignoreCase, int >> toffset, String other, int ooffset, int len) does it. >> >> Nits: >> >> *) Change: compareLatin1ToUTF16(v2, v1) * -1; >> To: -compareLatin1ToUTF16(v2, v1); >> >> *) Do we really need to cast up/down to "char" in compareLatin1*? >> >> >>> Benchmarks and results ( based, somewhat, on Aleksey's [1] ): >>> http://cr.openjdk.java.net/~chegar/8151384/bench/ >> Comments on benchmarks (that might have impact on validity): >> >> *) "# JMH 1.6 (released 388 days ago, please consider updating!)" >> >> *) CaseInsensitiveCompare.cmp1_cmp1 is suspiciously unaffected by size. >> That's because benchmark goes through the identity comparison: >> >> @Benchmark >> @CompilerControl(CompilerControl.Mode.DONT_INLINE) >> public int cmp1_cmp1() { >> return CASE_INSENSITIVE_ORDER.compare(cmp1_1, cmp1_1); >> } >> >> ...should really be: >> >> @Benchmark >> @CompilerControl(CompilerControl.Mode.DONT_INLINE) >> public int cmp1_cmp1() { >> return CASE_INSENSITIVE_ORDER.compare(cmp1_1, cmp1_2); >> } >> >> *) Probable dead-code elimination here: >> >> @Benchmark >> @CompilerControl(CompilerControl.Mode.DONT_INLINE) >> public void StringCaseInsensitiveComparator() { >> List strings = AVAILABLE_CHARSETS; >> for (String s1 : strings) { >> for (String s2 : strings) { >> String.CASE_INSENSITIVE_ORDER.compare(s1, s2); >> } >> } >> } >> >> ...should be: >> >> @Benchmark >> @CompilerControl(CompilerControl.Mode.DONT_INLINE) >> public void StringCaseInsensitiveComparator(Blackhole bh) { >> List strings = AVAILABLE_CHARSETS; >> for (String s1 : strings) { >> for (String s2 : strings) { >> 
bh.consume(String.CASE_INSENSITIVE_ORDER.compare(s1, s2)); >> } >> } >> } >> >> >> Thanks, >> -Aleksey >> From Roger.Riggs at Oracle.com Mon Mar 7 22:44:35 2016 From: Roger.Riggs at Oracle.com (Roger Riggs) Date: Mon, 7 Mar 2016 17:44:35 -0500 Subject: RFR:JDK-8030864:Add an efficient getDateTimeMillis method to java.time In-Reply-To: <56DACB8E.30402@oracle.com> References: <56D6C0B7.10205@oracle.com> <56D70406.7010000@oracle.com> <56D7317F.3000804@Oracle.com> <56D73637.3090006@oracle.com> <56D88877.4010202@oracle.com> <56DACB8E.30402@oracle.com> Message-ID: <56DE0453.2070606@Oracle.com> Look fine. Roger On 3/5/2016 7:05 AM, nadeesh tv wrote: > Hi all, > > Please see the updated webrev > http://cr.openjdk.java.net/~ntv/8030864/webrev.06/ > > > Regards, > Nadeesh > On 3/4/2016 4:34 PM, Stephen Colebourne wrote: >> long DAYS_0000_TO_1970 should be extracted as a private static final >> constant. >> >> Otherwise looks good. >> Stephen >> >> >> On 3 March 2016 at 18:54, nadeesh tv wrote: >>> Hi, >>> >>> Roger - Thanks for the comments >>> >>> Made the necessary changes in the spec >>> >>> Please see the updated webrev >>> http://cr.openjdk.java.net/~ntv/8030864/webrev.05/ >>> On 3/3/2016 12:21 AM, nadeesh tv wrote: >>>> Hi , >>>> >>>> Please see the updated webrev >>>> http://cr.openjdk.java.net/~ntv/8030864/webrev.03/ >>>> >>>> Thanks and Regards, >>>> Nadeesh >>>> >>>> On 3/3/2016 12:01 AM, Roger Riggs wrote: >>>>> Hi Nadeesh, >>>>> >>>>> Editorial comments: >>>>> >>>>> Chronology.java: 716+ >>>>> "Java epoch" -> "epoch" >>>>> "minute, second and zoneOffset" -> "minute, second*,* and >>>>> zoneOffset" >>>>> (add a comma; two places) >>>>> >>>>> "caluculated using given era, prolepticYear," -> "calculated >>>>> using the >>>>> era, year-of-era," >>>>> "to represent" -> remove as unnecessary in all places >>>>> >>>>> IsoChronology: >>>>> "to represent" -> remove as unnecessary in all places >>>>> >>>>> These should be fixed to cleanup the specification. 
>>>>> >>>>> The implementation and the tests look fine. >>>>> >>>>> Thanks, Roger >>>>> >>>>> >>>>> >>>>> On 3/2/2016 10:17 AM, nadeesh tv wrote: >>>>>> Hi, >>>>>> Stephen, Thanks for the comments. >>>>>> Please see the updated webrev >>>>>> http://cr.openjdk.java.net/~ntv/8030864/webrev.02/ >>>>>> >>>>>> Regards, >>>>>> Nadeesh TV >>>>>> >>>>>> On 3/2/2016 5:41 PM, Stephen Colebourne wrote: >>>>>>> Remove "Subclass can override the default implementation for a more >>>>>>> efficient implementation." as it adds no value. >>>>>>> >>>>>>> In the default implementation of >>>>>>> >>>>>>> epochSecond(Era era, int yearofEra, int month, int dayOfMonth, >>>>>>> int hour, int minute, int second, ZoneOffset zoneOffset) >>>>>>> >>>>>>> use >>>>>>> >>>>>>> prolepticYear(era, yearOfEra) >>>>>>> >>>>>>> and call the other new epochSecond method. See dateYearDay(Era era, >>>>>>> int yearOfEra, int dayOfYear) for the design to copy. If this is >>>>>>> done, >>>>>>> then there is no need to override the method in IsoChronology. >>>>>>> >>>>>>> In the test, >>>>>>> >>>>>>> LocalDate.MIN.with(chronoLd) >>>>>>> >>>>>>> could be >>>>>>> >>>>>>> LocalDate.from(chronoLd) >>>>>>> >>>>>>> Thanks >>>>>>> Stephen >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 2 March 2016 at 10:30, nadeesh tv wrote: >>>>>>>> Hi all, >>>>>>>> >>>>>>>> Please review an enhancement for a garbage free epochSecond >>>>>>>> method. 
>>>>>>>> >>>>>>>> Bug ID: https://bugs.openjdk.java.net/browse/JDK-8030864 >>>>>>>> >>>>>>>> webrev: http://cr.openjdk.java.net/~ntv/8030864/webrev.01 >>>>>>>> >>>>>>>> -- >>>>>>>> Thanks and Regards, >>>>>>>> Nadeesh TV >>>>>>>> >>> -- >>> Thanks and Regards, >>> Nadeesh TV >>> > From coleen.phillimore at oracle.com Mon Mar 7 22:55:43 2016 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 7 Mar 2016 17:55:43 -0500 Subject: RFR 8150778: Reduce Throwable.getStackTrace() calls to the JVM In-Reply-To: <56D98D73.4010302@oracle.com> References: <56D73477.4030100@oracle.com> <56D737DD.7000700@oracle.com> <56D745B7.4040508@oracle.com> <56D74B4B.9090708@oracle.com> <56D98D73.4010302@oracle.com> Message-ID: <56DE06EF.7040408@oracle.com> Hi Aleksey, This is an interesting experiment. On 3/4/16 8:28 AM, Aleksey Shipilev wrote: > On 03/02/2016 11:21 PM, Aleksey Shipilev wrote: >> On 03/02/2016 10:57 PM, Coleen Phillimore wrote: >>> On 3/2/16 1:58 PM, Aleksey Shipilev wrote: >>>> Is there an underlying reason why we can't return the pre-filled >>>> StackTraceElements[] array from the JVM_GetStackTraceElements to begin >>>> with? This will avoid leaking StackTraceElement constructor into >>>> standard library, *and* allows to make StackTraceElement fields final. >>>> Taking stuff back from the standard library is hard, if not impossible, >>>> so we better expose as little as possible. >>> We measured that it's faster to allocate the StackTraceElement array >>> in Java and it seems cleaner to the Java guys. It came from similar >>> code we've been prototyping for StackFrameInfo. >> OK, it's not perfectly clean from implementation standpoint, but this >> RFE might not be the best opportunity to polish that. At least make >> StackTraceElement constructor private (better), or package-private >> (acceptable), and then we are good to go. 
> Okay, here's a little exploration: > http://cr.openjdk.java.net/~shade/8150778/StackTraceBench.java > > The difference between allocating in Java code, and allocating on VM > side is marginal on my machine, but I think we are down to native memset > performance when allocating on VM side. Therefore, I'd probably stay > with Java allocation which codegen we absolutely control. Thanks for the experiment. We measured a greater performance difference. The theory is that through Java, allocation is a TLAB pointer update in most cases, vs going through all the C++ code to do allocation. The small difference for performance here isn't critical, but having the allocation in Java looks nicer to the Java programmers here. > Aside: see the last experiment, avoiding StringTable::intern (shows in > profiles a lot!) trims down construction costs down even further. I'd > think that is a worthwhile improvement to consider. Hm, this is an interesting experiment. I've been looking for a better way to store the name of the method rather than cpref. thanks, Coleen > > Cheers, > -Aleksey > > From uschindler at apache.org Mon Mar 7 23:24:06 2016 From: uschindler at apache.org (Uwe Schindler) Date: Tue, 8 Mar 2016 00:24:06 +0100 Subject: 8151339 Adding fragment to JAR URLs breaks ant In-Reply-To: References: Message-ID: <011101d178c8$74a815a0$5df840e0$@apache.org> Hi Steve, Thanks for the quick fix! I am not able to test this on the short term, but I trust you that Lucene builds now. I am a bit nervous, because it does not explain the Ivy issues, but I will try to create some test cases with relative jar:-URL resolving tomorrow. This may help with resolving the problems in build 110. 
I just want to make sure, that the following also works:
- Get URL from classloader to a resource file
- resolve a relative file against this URL and load it by URL
(this is common pattern for parsing XML resources from JAR files that refer relatively to other resources in same JAR file by href)

Keep me informed when build 109 is downloadable.

Uwe

-----
Uwe Schindler
uschindler at apache.org
ASF Member, Apache Lucene PMC / Committer
Bremen, Germany
http://lucene.apache.org/

> -----Original Message-----
> From: core-libs-dev [mailto:core-libs-dev-bounces at openjdk.java.net] On
> Behalf Of Steve Drach
> Sent: Monday, March 07, 2016 8:07 PM
> To: core-libs-dev ; paul Sandoz
> ; Alan Bateman ;
> Xueming Shen
> Subject: RFR: 8151339 Adding fragment to JAR URLs breaks ant
>
> Hi,
>
> Please review the following changeset. We'd like to get this into build 109,
> which means by noon today. This is essentially a temporary fix, but it's been
> tested and Lucene has been built against it. We will follow up with a more
> comprehensive fix by build 110.
>
> webrev: http://cr.openjdk.java.net/~sdrach/8151339/webrev/
>
> issue: https://bugs.openjdk.java.net/browse/JDK-8151339
>
>
> Thanks
> Steve
>

From steve.drach at oracle.com Mon Mar 7 23:51:11 2016
From: steve.drach at oracle.com (Steve Drach)
Date: Mon, 7 Mar 2016 15:51:11 -0800
Subject: 8151339 Adding fragment to JAR URLs breaks ant
In-Reply-To: <011101d178c8$74a815a0$5df840e0$@apache.org>
References: <011101d178c8$74a815a0$5df840e0$@apache.org>
Message-ID: <77CDF307-CA29-4C80-B41C-458AD9F98CE3@oracle.com>

Hi Uwe,

> Thanks for the quick fix! I am not able to test this on the short term, but I trust you that Lucene builds now. I am a bit nervous, because it does not explain the Ivy issues, but I will try to create some test cases with relative jar:-URL resolving tomorrow. This may help with resolving the problems in build 110.
If you can come up with small, easily reproducible test cases for any errors you find, that would help a lot.

>
> I just want to make sure, that the following also works:
> - Get URL from classloader to a resource file
> - resolve a relative file against this URL and load it by URL
> (this is common pattern for parsing XML resources from JAR files that refer relatively to other resources in same JAR file by href)

Please try everything.

>
> Keep me informed when build 109 is downloadable.

I'll try.

Steve

>
> Uwe
>
> -----
> Uwe Schindler
> uschindler at apache.org
> ASF Member, Apache Lucene PMC / Committer
> Bremen, Germany
> http://lucene.apache.org/
>
>> -----Original Message-----
>> From: core-libs-dev [mailto:core-libs-dev-bounces at openjdk.java.net] On
>> Behalf Of Steve Drach
>> Sent: Monday, March 07, 2016 8:07 PM
>> To: core-libs-dev ; paul Sandoz
>> ; Alan Bateman ;
>> Xueming Shen
>> Subject: RFR: 8151339 Adding fragment to JAR URLs breaks ant
>>
>> Hi,
>>
>> Please review the following changeset. We'd like to get this into build 109,
>> which means by noon today. This is essentially a temporary fix, but it's been
>> tested and Lucene has been built against it. We will follow up with a more
>> comprehensive fix by build 110.
>>
>> webrev: http://cr.openjdk.java.net/~sdrach/8151339/webrev/
>>
>> issue: https://bugs.openjdk.java.net/browse/JDK-8151339
>>
>>
>> Thanks
>> Steve
>>
>
>

From joe.darcy at oracle.com Mon Mar 7 23:53:58 2016
From: joe.darcy at oracle.com (joe darcy)
Date: Mon, 7 Mar 2016 15:53:58 -0800
Subject: 8151339 Adding fragment to JAR URLs breaks ant
In-Reply-To: <77CDF307-CA29-4C80-B41C-458AD9F98CE3@oracle.com>
References: <011101d178c8$74a815a0$5df840e0$@apache.org>
 <77CDF307-CA29-4C80-B41C-458AD9F98CE3@oracle.com>
Message-ID: <56DE1496.1070908@oracle.com>

IIRC, if all goes according to plan, the next build should be available for download on Friday.
HTH,

-Joe

On 3/7/2016 3:51 PM, Steve Drach wrote:
> Hi Uwe,
>
>> Thanks for the quick fix! I am not able to test this on the short term, but I trust you that Lucene builds now. I am a bit nervous, because it does not explain the Ivy issues, but I will try to create some test cases with relative jar:-URL resolving tomorrow. This may help with resolving the problems in build 110.
> If you can come up with small, easily reproducible test cases for any errors you find, that would help a lot.
>> I just want to make sure, that the following also works:
>> - Get URL from classloader to a resource file
>> - resolve a relative file against this URL and load it by URL
>> (this is common pattern for parsing XML resources from JAR files that refer relatively to other resources in same JAR file by href)
> Please try everything.
>
>> Keep me informed when build 109 is downloadable.
> I'll try.
>
> Steve
>
>> Uwe
>>
>> -----
>> Uwe Schindler
>> uschindler at apache.org
>> ASF Member, Apache Lucene PMC / Committer
>> Bremen, Germany
>> http://lucene.apache.org/
>>
>>> -----Original Message-----
>>> From: core-libs-dev [mailto:core-libs-dev-bounces at openjdk.java.net] On
>>> Behalf Of Steve Drach
>>> Sent: Monday, March 07, 2016 8:07 PM
>>> To: core-libs-dev ; paul Sandoz
>>> ; Alan Bateman ;
>>> Xueming Shen
>>> Subject: RFR: 8151339 Adding fragment to JAR URLs breaks ant
>>>
>>> Hi,
>>>
>>> Please review the following changeset. We'd like to get this into build 109,
>>> which means by noon today. This is essentially a temporary fix, but it's been
>>> tested and Lucene has been built against it. We will follow up with a more
>>> comprehensive fix by build 110.
>>> >>> webrev: http://cr.openjdk.java.net/~sdrach/8151339/webrev/ >>> >>> issue: https://bugs.openjdk.java.net/browse/JDK-8151339 >>> >>> >>> Thanks >>> Steve >>> >> From ivan at azulsystems.com Tue Mar 8 01:04:35 2016 From: ivan at azulsystems.com (Ivan Krylov) Date: Mon, 7 Mar 2016 17:04:35 -0800 Subject: RFR(XS): 8147844: new method j.l.Thread.onSpinWait() (was j.l.Runtime) In-Reply-To: <56A90406.7060107@azulsystems.com> References: <56A8C6A9.8080705@azulsystems.com> <56A8CFBC.608@azulsystems.com> <56A8E05A.7040500@oracle.com> <56A90406.7060107@azulsystems.com> Message-ID: <56DE2523.8090303@azulsystems.com> The current wording of what is being called now JEP-285 [1] has placed onSpinWait() method into j.l.Thread. Hence, a new revision of the webrev. Everything is the same, except now it is the Thread class. http://cr.openjdk.java.net/~ikrylov/8147844.jdk.03/ Please, approve. Thanks, Ivan [1] - openjdk.java.net/jeps/285 On 27/01/2016 09:53, Ivan Krylov wrote: > Updated to http://cr.openjdk.java.net/~ikrylov/8147844.jdk.02/ > The sample JavaDoc has been updated too: > http://ivankrylov.github.io/onspinwait/api/java/lang/Runtime.html#onSpinWait-- > > Alan, Thank you. > > On 27/01/2016 18:20, Alan Bateman wrote: >> >> >> On 27/01/2016 14:10, Ivan Krylov wrote: >>> Indeed, thanks! >>> New webrev http://cr.openjdk.java.net/~ikrylov/8147844.jdk.01/ >>> >> Can you add @since 9 too? >> >> -Alan. > From felix.yang at oracle.com Tue Mar 8 01:20:40 2016 From: felix.yang at oracle.com (Felix Yang) Date: Tue, 8 Mar 2016 09:20:40 +0800 Subject: RFR 8151352, jdk/test/sample fails with "effective library path is outside the test suite" In-Reply-To: <56DDCDCD.5080207@oracle.com> References: <56DDA68A.6000907@oracle.com> <56DDCDCD.5080207@oracle.com> Message-ID: <56DE28E8.9010101@oracle.com> Joe, thank you for the quick review. Amy, could you sponsor this change? 
-Felix On 2016/3/8 2:51, joe darcy wrote: > Hello, > > Looks fine; thanks, > > -Joe > > On 3/7/2016 8:04 AM, Felix Yang wrote: >> Hi all, >> please review the fix for two tests under "test/sample/". >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8151352 >> Webrev: >> http://cr.openjdk.java.net/~xiaofeya/8151352/webrev.00/ >> >> Original declaration, "@library ../../../src/sample...", is invalid >> with the latest change in jtreg. See >> https://bugs.openjdk.java.net/browse/CODETOOLS-7901585. This fix >> doesn't resolve dependency to "src/sample", but only converts them >> into testng tests and declares "external.lib.roots" to avoid dot-dot. >> >> Thanks, >> Felix > From amy.lu at oracle.com Tue Mar 8 01:27:10 2016 From: amy.lu at oracle.com (Amy Lu) Date: Tue, 8 Mar 2016 09:27:10 +0800 Subject: RFR 8151352, jdk/test/sample fails with "effective library path is outside the test suite" In-Reply-To: <56DE28E8.9010101@oracle.com> References: <56DDA68A.6000907@oracle.com> <56DDCDCD.5080207@oracle.com> <56DE28E8.9010101@oracle.com> Message-ID: <56DE2A6E.20606@oracle.com> On 3/8/16 9:20 AM, Felix Yang wrote: > Joe, > thank you for the quick review. > > Amy, > could you sponsor this change? Sure, I will sponsor this for you. Thanks, Amy > > -Felix > On 2016/3/8 2:51, joe darcy wrote: >> Hello, >> >> Looks fine; thanks, >> >> -Joe >> >> On 3/7/2016 8:04 AM, Felix Yang wrote: >>> Hi all, >>> please review the fix for two tests under "test/sample/". >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8151352 >>> Webrev: >>> http://cr.openjdk.java.net/~xiaofeya/8151352/webrev.00/ >>> >>> Original declaration, "@library ../../../src/sample...", is invalid >>> with the latest change in jtreg. See >>> https://bugs.openjdk.java.net/browse/CODETOOLS-7901585. This fix >>> doesn't resolve dependency to "src/sample", but only converts them >>> into testng tests and declares "external.lib.roots" to avoid dot-dot. 
>>> >>> Thanks, >>> Felix >> > From david.holmes at oracle.com Tue Mar 8 01:40:21 2016 From: david.holmes at oracle.com (David Holmes) Date: Tue, 8 Mar 2016 11:40:21 +1000 Subject: RFR(XS): 8147844: new method j.l.Thread.onSpinWait() (was j.l.Runtime) In-Reply-To: <56DE2523.8090303@azulsystems.com> References: <56A8C6A9.8080705@azulsystems.com> <56A8CFBC.608@azulsystems.com> <56A8E05A.7040500@oracle.com> <56A90406.7060107@azulsystems.com> <56DE2523.8090303@azulsystems.com> Message-ID: <56DE2D85.6000900@oracle.com> Hi Ivan, On 8/03/2016 11:04 AM, Ivan Krylov wrote: > The current wording of what is being called now JEP-285 [1] has placed > onSpinWait() method into j.l.Thread. > Hence, a new revision of the webrev. Everything is the same, except now > it is the Thread class. > > http://cr.openjdk.java.net/~ikrylov/8147844.jdk.03/ Make sure the commit comment reflects the new synopsis :) I thought at some point there was discussion of giving a usage example in the javadoc? I think most people would be quite puzzled after reading the technical spec alone. Thanks, David > Please, approve. > > Thanks, > > Ivan > > [1] - openjdk.java.net/jeps/285 > > On 27/01/2016 09:53, Ivan Krylov wrote: >> Updated to http://cr.openjdk.java.net/~ikrylov/8147844.jdk.02/ >> The sample JavaDoc has been updated too: >> http://ivankrylov.github.io/onspinwait/api/java/lang/Runtime.html#onSpinWait-- >> >> >> Alan, Thank you. >> >> On 27/01/2016 18:20, Alan Bateman wrote: >>> >>> >>> On 27/01/2016 14:10, Ivan Krylov wrote: >>>> Indeed, thanks! >>>> New webrev http://cr.openjdk.java.net/~ikrylov/8147844.jdk.01/ >>>> >>> Can you add @since 9 too? >>> >>> -Alan. 
>> > From ivan at azulsystems.com Tue Mar 8 05:42:44 2016 From: ivan at azulsystems.com (Ivan Krylov) Date: Mon, 7 Mar 2016 21:42:44 -0800 Subject: RFR(XS): 8147844: new method j.l.Thread.onSpinWait() (was j.l.Runtime) In-Reply-To: <56DE2D85.6000900@oracle.com> References: <56A8C6A9.8080705@azulsystems.com> <56A8CFBC.608@azulsystems.com> <56A8E05A.7040500@oracle.com> <56A90406.7060107@azulsystems.com> <56DE2523.8090303@azulsystems.com> <56DE2D85.6000900@oracle.com> Message-ID: <56DE6654.4000401@azulsystems.com> On 07/03/2016 17:40, David Holmes wrote: > Hi Ivan, > > On 8/03/2016 11:04 AM, Ivan Krylov wrote: >> The current wording of what is being called now JEP-285 [1] has placed >> onSpinWait() method into j.l.Thread. >> Hence, a new revision of the webrev. Everything is the same, except now >> it is the Thread class. >> >> http://cr.openjdk.java.net/~ikrylov/8147844.jdk.03/ > > Make sure the commit comment reflects the new synopsis :) Yes, indeed, noticed this too late or sent to early. Fixed in place. > I thought at some point there was discussion of giving a usage example > in the javadoc? I think most people would be quite puzzled after > reading the technical spec alone. Ok, I will add something like an example once I figure out the javadoc syntax. Will post here when done. Thanks, Ivan > > Thanks, > David > >> Please, approve. >> >> Thanks, >> >> Ivan >> >> [1] - openjdk.java.net/jeps/285 >> >> On 27/01/2016 09:53, Ivan Krylov wrote: >>> Updated to http://cr.openjdk.java.net/~ikrylov/8147844.jdk.02/ >>> The sample JavaDoc has been updated too: >>> http://ivankrylov.github.io/onspinwait/api/java/lang/Runtime.html#onSpinWait-- >>> >>> >>> >>> Alan, Thank you. >>> >>> On 27/01/2016 18:20, Alan Bateman wrote: >>>> >>>> >>>> On 27/01/2016 14:10, Ivan Krylov wrote: >>>>> Indeed, thanks! >>>>> New webrev http://cr.openjdk.java.net/~ikrylov/8147844.jdk.01/ >>>>> >>>> Can you add @since 9 too? >>>> >>>> -Alan. 
>>> >> From paul.sandoz at oracle.com Tue Mar 8 10:08:47 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Tue, 8 Mar 2016 11:08:47 +0100 Subject: 8151339 Adding fragment to JAR URLs breaks ant In-Reply-To: <011101d178c8$74a815a0$5df840e0$@apache.org> References: <011101d178c8$74a815a0$5df840e0$@apache.org> Message-ID: <4503C3C8-9727-4095-AAAD-4E6E0C161CAF@oracle.com> > On 8 Mar 2016, at 00:24, Uwe Schindler wrote: > > Hi Steve, > > Thanks for the quick fix! I am not able to test this on the short term, but I trust you that Lucene builds now. I built it successfully a few times from scratch (downloading half the internet :-) ). > I am a bit nervous, because it does not explain the Ivy issues, but I will try to create some test cases with relative jar:-URL resolving tomorrow. Thanks. Note that the resource URLs produced from the class loader should no longer have the #runtime fragment, unless those resources are from an MR-JAR. #runtime is the signal to the URL protocol implementations to process as runtime versioned resource. > This may help with resolving the problems in build 110. > > I just want to make sure, that the following also works: > - Get URL from classloader to a resource file > - resolve a relative file against this URL and load it by URL > (this is common pattern for parsing XML resources from JAR files that refer relatively to other resources in same JAR file by href) > If you have a small test project you can share we can give it a test run in the interim. It could be that the URL resolving mechanism worked incorrectly with a #fragment in the way (especially that mechanism operated directly on the characters of the URL). > Keep me informed when build 109 is downloadable. > Will do. Paul. 
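The pattern Uwe describes can be sketched in a few lines; the jar path below is made up, and only the relative-resolution behaviour of java.net.URL's two-argument constructor is exercised. Resolution like this is exactly the step a stray #fragment on the base URL could disturb:

```java
import java.net.URL;

public class RelativeJarUrlDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical resource URL, shaped like what a class loader
        // returns for a resource inside a JAR file.
        URL base = new URL("jar:file:/tmp/app.jar!/xml/schema/main.xsd");

        // Resolve a sibling resource relative to the base, as an XML parser
        // following an href="types.xsd" would do.
        URL sibling = new URL(base, "types.xsd");
        System.out.println(sibling); // resolves within the same JAR entry path
    }
}
```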
From paul.sandoz at oracle.com Tue Mar 8 10:40:48 2016
From: paul.sandoz at oracle.com (Paul Sandoz)
Date: Tue, 8 Mar 2016 11:40:48 +0100
Subject: Multi-Release JAR file patch as applied to build 108 of Java 9 breaks almost every project out there (Apache Ant, Gradle, partly Apache Maven)
In-Reply-To: References: <069f01d176e2$6084d6e0$218e84a0$@apache.org>
Message-ID: <1EAF518D-F8A5-4EC2-BE1D-838497FBE4E4@oracle.com>

Hi Andre,

> On 6 Mar 2016, at 00:15, André-John Mas wrote:
>
> Hi,
>
> Given the issues we are seeing, and I suspect this is not the only code with these assumptions, is there any way this functionality can be limited to "multi-release aware" code, either via a constructor parameter or a new method? What is the most elegant approach?
>

For resource URLs, associated with an MR-JAR, and obtained from a class loader, here are three possible routes we could take:

1) Modify the resource URLs, cognisant of the known issues processing such URLs;

2) Resource URLs are reified; or

3) Resource URLs are not modified (meaning they are not runtime versioned).

By 2) I mean that:

  URL u = loader.getResource("foo/Bar.class")

may return u that is say:

  "jar:file:/…!/META-INF/versions/9/foo/Bar.class"

rather than:

  "jar:file:/…!/foo/Bar.class"

But we need to work through the implications of that approach.

Paul.

From chris.hegarty at oracle.com Tue Mar 8 11:07:06 2016
From: chris.hegarty at oracle.com (Chris Hegarty)
Date: Tue, 8 Mar 2016 11:07:06 +0000
Subject: RFR [9] 8151384: Examine sun.misc.ASCIICaseInsensitiveComparator
In-Reply-To: <56DDFEB7.9080809@oracle.com>
References: <56DDAC76.5080606@oracle.com> <56DDB197.3070409@oracle.com>
 <27567165-455D-428A-8FF9-BA3C786A1697@oracle.com> <56DDFEB7.9080809@oracle.com>
Message-ID: 

On 7 Mar 2016, at 22:36, Xueming Shen wrote:

> 515 hashCode = name.toLowerCase(Locale.ROOT).hashCode();

Fixed.
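The Locale.ROOT point above matters because toLowerCase is locale-sensitive; a small illustration of why locale-independent folding is the safe choice for hashing charset names (the charset name here is arbitrary):

```java
import java.util.Locale;

public class CaseFoldDemo {
    public static void main(String[] args) {
        String name = "ISO-8859-1";

        // Locale.ROOT gives locale-independent folding. Under a Turkish
        // locale, 'I' lower-cases to dotless i ('\u0131'), so the same
        // charset name would fold, and hash, differently.
        String root = name.toLowerCase(Locale.ROOT);
        String turkish = name.toLowerCase(Locale.forLanguageTag("tr"));

        System.out.println(root);
        System.out.println(root.equals(turkish));
    }
}
```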
On 7 Mar 2016, at 22:20, Aleksey Shipilev wrote:

> Hi Chris,
>
> On 03/08/2016 12:55 AM, Chris Hegarty wrote:
>> Updated links:
>> http://cr.openjdk.java.net/~chegar/8151384/webrev.01/
>
> *) Your previous patch had the explicit access to
> CharacterDataLatin1.instance.(toLowerCase|toUpperCase). Any reason not
> to use it in your new patch? Probably saves a shift and branch on a hot
> path -- j.l.String is one of those magical places where it actually matters.

I added this back for Latin1 compareToCI, which seems the most common path.

> Otherwise looks good.

Thanks.

>> http://cr.openjdk.java.net/~chegar/8151384/bench.01/
>
> *) This one worries me a little bit. ASCIICaseInsensitive has the
> anomalous 360 us/op result in one of the forks, and that "best" is
> better than average StringCaseInsensitive:
>
> http://cr.openjdk.java.net/~chegar/8151384/bench.01/AvailableCharsetsCompare_afterChanges.txt
>
> This might be the genuine run-to-run variance with compiling the hot
> loop; I'd recommend using Blackhole.consume, not the accumulator
> variable to try avoiding that.

Blackhole added. The results show that the ASCII version is marginally better, even after the changes, but I suspect this can be further optimised in the future.

> Otherwise looks good.

-Chris.

From chris.hegarty at oracle.com Tue Mar 8 11:10:12 2016
From: chris.hegarty at oracle.com (Chris Hegarty)
Date: Tue, 8 Mar 2016 11:10:12 +0000
Subject: RFR [9] 8151384: Examine sun.misc.ASCIICaseInsensitiveComparator
In-Reply-To: References: <56DDAC76.5080606@oracle.com> <56DDB197.3070409@oracle.com>
 <27567165-455D-428A-8FF9-BA3C786A1697@oracle.com> <56DDFEB7.9080809@oracle.com>
Message-ID: <1A8F2FC2-53E5-43A0-AE5D-F4277D652B5A@oracle.com>

… and the links:

http://cr.openjdk.java.net/~chegar/8151384/webrev.02/
http://cr.openjdk.java.net/~chegar/8151384/bench.02/

-Chris.
On 8 Mar 2016, at 11:07, Chris Hegarty wrote: > On 7 Mar 2016, at 22:36, Xueming Shen wrote: > >> 515 hashCode = name.toLowerCase(Locale.ROOT).hashCode(); > > Fixed. > > On 7 Mar 2016, at 22:20, Aleksey Shipilev wrote: > >> Hi Chris, >> >> On 03/08/2016 12:55 AM, Chris Hegarty wrote: >>> Updated links: >>> http://cr.openjdk.java.net/~chegar/8151384/webrev.01/ >> >> *) Your previous patch had the explicit access to >> CharacterDataLatin1.instance.(toLowerCase|toUpperCase). Any reason not >> to use it in your new patch? Probably saves a shift and branch on a hot >> path -- j.l.String is one of those magical places where it actually matters. > > I added this back for Latin1 compareToCI, which seems the most common > path. > >> Otherwise looks good. > > Thanks. > >>> http://cr.openjdk.java.net/~chegar/8151384/bench.01/ >> >> *) This one worries me a little bit. ASCIICaseInsensitive has the >> anomalous 360 us/op result in one of the forks, and that "best" is >> better than average StringCaseInsensitive: >> >> http://cr.openjdk.java.net/~chegar/8151384/bench.01/AvailableCharsetsCompare_afterChanges.txt >> >> This might be the genuine run-to-run variance with compiling the hot >> loop; I'd recommend using Blackhole.consume, not the accumulator >> variable to try avoiding that. > > Blackhole added. The results show that the ASCII version is marginally better, > even after the changes, but I suspect this can be further optimised in the future. > >> Otherwise looks good. > > -Chris. 
> From aleksey.shipilev at oracle.com Tue Mar 8 11:16:32 2016 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Tue, 8 Mar 2016 14:16:32 +0300 Subject: RFR [9] 8151384: Examine sun.misc.ASCIICaseInsensitiveComparator In-Reply-To: <1A8F2FC2-53E5-43A0-AE5D-F4277D652B5A@oracle.com> References: <56DDAC76.5080606@oracle.com> <56DDB197.3070409@oracle.com> <27567165-455D-428A-8FF9-BA3C786A1697@oracle.com> <56DDFEB7.9080809@oracle.com> <1A8F2FC2-53E5-43A0-AE5D-F4277D652B5A@oracle.com> Message-ID: <56DEB490.30902@oracle.com> On 03/08/2016 02:10 PM, Chris Hegarty wrote: > ? and the links: > > http://cr.openjdk.java.net/~chegar/8151384/webrev.02/ > http://cr.openjdk.java.net/~chegar/8151384/bench.02/ Looks good. Ship it! -Aleksey From paul.sandoz at oracle.com Tue Mar 8 13:06:37 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Tue, 8 Mar 2016 14:06:37 +0100 Subject: RFR 8151163 All Buffer implementations should leverage Unsafe unaligned accessors Message-ID: <142D9ABF-C144-42E6-9FA2-48A90A51C3A9@oracle.com> Hi, Please pre-emptively review a fix to update the buffer implementations to leverage the Unsafe unaligned accessors: http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8151163-buffer-unsafe-unaligned-access/webrev/ http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8151163-buffer-unsafe-unaligned-access-hotspot/webrev/ The JDK changes depend on those for the following which is in CCC review: https://bugs.openjdk.java.net/browse/JDK-8149469 ByteBuffer API and implementation enhancements for VarHandles http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8149469-byte-buffer-align-and-unifying-enhancements/webrev/ The changes in this webrev take advantage of those for JDK-8149469 and apply the unsafe double addressing scheme so certain byte buffer view implementations can work across heap and direct buffers. This should improve the performance on x86 for: 1) direct ByteBuffers using the wider unit size method accessors; and 2) wider unit size views over heap ByteBuffers. 
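As a plain illustration of what a wider-unit view over heap and direct buffers looks like from the API side (this only demonstrates view buffers, not the patch itself):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.LongBuffer;

public class ViewBufferDemo {
    public static void main(String[] args) {
        // A long view behaves the same over a heap buffer (array base) and
        // a direct buffer (raw address); the double addressing scheme lets
        // one view implementation serve both cases.
        for (ByteBuffer bb : new ByteBuffer[] {
                ByteBuffer.allocate(16), ByteBuffer.allocateDirect(16) }) {
            bb.order(ByteOrder.LITTLE_ENDIAN);
            LongBuffer view = bb.asLongBuffer();
            view.put(0, 0x0102030405060708L);
            // Little-endian: the low-order byte 0x08 lands at byte index 0.
            System.out.println(bb.get(0));
        }
    }
}
```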
As a consequence Bits.java has been greatly reduced in size :-)

The HotSpot changes update the test that was originally added when the heap ByteBuffer method accessors were updated to utilise unsafe unaligned access. I split the test out so as to reduce the execution time, since I doubled the number of tests. These tests could be improved for views at various aligned/unaligned positions in the byte buffer, but I left that for now.

I plan to push through hs-comp since JDK-8149469 will go through hs-comp. Later on today I will kick off a JPRT hotspot test run.

This is a small step towards unifying the buffer implementations using the unsafe double addressing scheme:

https://bugs.openjdk.java.net/browse/JDK-6509032

Thanks,
Paul.

From aleksey.shipilev at oracle.com Tue Mar 8 13:37:39 2016
From: aleksey.shipilev at oracle.com (Aleksey Shipilev)
Date: Tue, 8 Mar 2016 16:37:39 +0300
Subject: RFR 8151163 All Buffer implementations should leverage Unsafe unaligned accessors
In-Reply-To: <142D9ABF-C144-42E6-9FA2-48A90A51C3A9@oracle.com>
References: <142D9ABF-C144-42E6-9FA2-48A90A51C3A9@oracle.com>
Message-ID: <56DED5A3.8030603@oracle.com>

On 03/08/2016 04:06 PM, Paul Sandoz wrote:
> Please pre-emptively review a fix to update the buffer implementations to leverage the Unsafe unaligned accessors:
>
> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8151163-buffer-unsafe-unaligned-access/webrev/

*) My concern with using double-register Unsafe calls is that compilers are unable to speculate on the (hb == null) value, which means you will have the additional field read on the fast path. See:
  https://bugs.openjdk.java.net/browse/JDK-8150921
  http://cr.openjdk.java.net/~shade/8150921/notes.txt

So, while I agree that using double-register unaligned accessors is cleaner, I'd try to special-case the (bb.hb == null) case for Heap* buffers. Current patch might still be better than going through Bits though.
> The changes in this webrev take advantage of those for JDK-8149469
> and apply the unsafe double addressing scheme so certain byte buffer
> view implementations can work across heap and direct buffers. This
> should improve the performance on x86 for:

I understand the idea, but I think we would need to verify this before pushing.

Thanks,
-Aleksey

From xueming.shen at oracle.com Tue Mar 8 17:45:41 2016
From: xueming.shen at oracle.com (Xueming Shen)
Date: Tue, 8 Mar 2016 09:45:41 -0800
Subject: RFR: Regex exponential backtracking issue --- more cleanup/tuning
Message-ID: <56DF0FC5.4030709@oracle.com>

Hi,

While waiting patiently for someone to help review the proposal for the exponential backtracking issue [1], I went ahead and replaced those "CharProperty constant nodes" with IntPredicate. We were hoping to have closures back then when working on those CharProperty classes, which ended up with those make()/clone() methods. Now it might be the time to replace them with what we wanted at the beginning.

http://cr.openjdk.java.net/~sherman/regexClosure/webrev.02/

Here are the notes about the changes:

(1) Pulled the "broken" printNodeTree (for debugging) out of Pattern. This one has not worked as expected for a while. I do have a working copy and have to put it in every time I need to debug the engine. So now I have replaced printNodeTree with the working one, putting it in a separate class, j.u.regex.PrintPattern, which can now print out the clean and complete node tree of the pattern. For example,

Pattern: [a-z0-9]+|ABCDEFG
0: 1: 2: 3: 4: 5: <-branch.separator-> 6: 7: 8:

(2) The optimization for the greedy repetition of a "CharProperty", which parses the greedy repetition of a single "CharProperty", such as \p{IsGreek}+, or the most commonly used .*, into a single/smooth loop node.

from

Pattern: \p{IsGreek}+
0: 1: 2:
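The IntPredicate direction Sherman describes can be pictured with a small sketch; the predicate below only shows the shape of the idea (a character class is conceptually a predicate over code points) and is not the webrev's actual code:

```java
import java.util.function.IntPredicate;

public class CharPredicateSketch {
    public static void main(String[] args) {
        // A character class like [\p{IsGreek}0-9] expressed directly as a
        // predicate lambda, the closure-style replacement for CharProperty
        // subclasses with their make()/clone() plumbing.
        IntPredicate greekOrDigit = cp ->
                Character.UnicodeScript.of(cp) == Character.UnicodeScript.GREEK
                || (cp >= '0' && cp <= '9');

        // Composition (negation, union, intersection) comes for free.
        IntPredicate other = greekOrDigit.negate();

        System.out.println(greekOrDigit.test('\u03b1')); // Greek small alpha
        System.out.println(greekOrDigit.test('7'));
        System.out.println(greekOrDigit.test('x'));
    }
}
```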