From kishor.kharbas at intel.com Thu Nov 3 00:03:50 2016
From: kishor.kharbas at intel.com (Kharbas, Kishor)
Date: Thu, 3 Nov 2016 00:03:50 +0000
Subject: Update public key
Message-ID: 

Hello!

I was recently accepted to be an Author for OpenJDK. I am trying to upload a patch to cr.openjdk.java.net; however, I get a "Permission Denied (public key)" error. I am sure I registered with the correct public key, but since I have run out of options to fix this, I wanted to update the public key to check if that fixes the error.

My user name is: kkharbas

Public key:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKKVjuIBirneatPqJayOHmri9Y58mEcpYCg8KzU3HRSb9MyJduHrM0zgBlOjA8N1hbq/hZK8UqJQ/eKTUIvNtST8B2jkdXeaE2crH41Fbm2ey/JchbNwvZ4er2BlZQbZwkGat/sSjTtU5aQcvRq8Y7SYxnzkR/ZNnM1jHBP103W7Kva0oyf4Gv3oNk7yrbjIJYTcE+Yd3TIPg8wZbboUFlr0toRNxBFmFFtXQmuqu+hqjuTYMoo4JHt3sFSGziS2yM+2W+/BCRedoAn1G+qsdVxhsPXnRYGi7vpQD2gXn48ppJLZL/R8/4wr3ZcmqxjsrMql3g2rAcjrUcWtTDuwpT kishor.kharbas at intel.com

Thanks
Kishor

From tim.bell at oracle.com Thu Nov 3 00:42:25 2016
From: tim.bell at oracle.com (Tim Bell)
Date: Wed, 2 Nov 2016 17:42:25 -0700
Subject: Update public key
In-Reply-To: 
References: 
Message-ID: 

Hello Kishor:

> I was recently accepted to be an Author for OpenJDK. I am trying to upload a patch to cr.openjdk.java.net; however, I get a "Permission Denied (public key)" error.
> I am sure I registered with the correct public key, but since I have run out of options to fix this, I wanted to update the public key to check if that fixes the error.

For operational issues such as this, please contact ops at openjdk.java.net in the future.

In this case, we only got part of the key string. Please send your id_rsa.pub file as an attachment along with your JDK username to keys(at)openjdk.java.net.
For more information, refer to the guide here: http://openjdk.java.net/guide/producingChangeset.html#sshGen

Tim

From kishor.kharbas at intel.com Thu Nov 3 17:33:18 2016
From: kishor.kharbas at intel.com (Kharbas, Kishor)
Date: Thu, 3 Nov 2016 17:33:18 +0000
Subject: Update public key
In-Reply-To: 
References: 
Message-ID: 

Thanks Tim, I sent the key to keys(at)openjdk.java.net.

- Kishor

-----Original Message-----
From: Tim Bell [mailto:tim.bell at oracle.com]
Sent: Wednesday, November 2, 2016 5:42 PM
To: Kharbas, Kishor
Cc: discuss at openjdk.java.net; ops at openjdk.java.net
Subject: Re: Update public key

Hello Kishor:

> I was recently accepted to be an Author for OpenJDK. I am trying to upload a patch to cr.openjdk.java.net; however, I get a "Permission Denied (public key)" error.
> I am sure I registered with the correct public key, but since I have run out of options to fix this, I wanted to update the public key to check if that fixes the error.

For operational issues such as this, please contact ops at openjdk.java.net in the future.

In this case, we only got part of the key string. Please send your id_rsa.pub file as an attachment along with your JDK username to keys(at)openjdk.java.net.

For more information, refer to the guide here: http://openjdk.java.net/guide/producingChangeset.html#sshGen

Tim

From keith at deenlo.com Sat Nov 12 16:45:42 2016
From: keith at deenlo.com (Keith Turner)
Date: Sat, 12 Nov 2016 11:45:42 -0500
Subject: Java needs an immutable byte array wrapper
Message-ID: 

While trying to design an API for Fluo, it's become clear to me that Java could really benefit from an immutable byte array wrapper. Something like java.lang.String except for byte arrays instead of char arrays. It would be nice if this new type interoperated well with byte[], String, ByteBuffer, InputStream, OutputStream etc.

I wrote the following blog post about my experiences with this issue while designing an API for Fluo.
http://fluo.apache.org/blog/2016/11/10/immutable-bytes/

Is there any reason something like this should not be added to Java?

Thanks,

Keith

From roman at kennke.org Sat Nov 12 17:04:37 2016
From: roman at kennke.org (Roman Kennke)
Date: Sat, 12 Nov 2016 18:04:37 +0100
Subject: Java needs an immutable byte array wrapper
In-Reply-To: 
References: 
Message-ID: <1478970277.2649.3.camel@kennke.org>

On Saturday, 12.11.2016 at 11:45 -0500, Keith Turner wrote:
> While trying to design an API for Fluo, it's become clear to me that
> Java could really benefit from an immutable byte array wrapper.
> Something like java.lang.String except for byte arrays instead of
> char arrays. It would be nice if this new type interoperated well with
> byte[], String, ByteBuffer, InputStream, OutputStream etc.
>
> I wrote the following blog post about my experiences with this issue
> while designing an API for Fluo.
>
> http://fluo.apache.org/blog/2016/11/10/immutable-bytes/
>
> Is there any reason something like this should not be added to Java?

You mean something like NIO ByteBuffers and related APIs?

Roman

From keith at deenlo.com Sat Nov 12 17:16:37 2016
From: keith at deenlo.com (Keith Turner)
Date: Sat, 12 Nov 2016 12:16:37 -0500
Subject: Java needs an immutable byte array wrapper
In-Reply-To: <1478970277.2649.3.camel@kennke.org>
References: <1478970277.2649.3.camel@kennke.org>
Message-ID: 

On Sat, Nov 12, 2016 at 12:04 PM, Roman Kennke wrote:
> On Saturday, 12.11.2016 at 11:45 -0500, Keith Turner wrote:
>> While trying to design an API for Fluo, it's become clear to me that
>> Java could really benefit from an immutable byte array wrapper.
>> Something like java.lang.String except for byte arrays instead of
>> char
>> arrays. It would be nice if this new type interoperated well with
>> byte[], String, ByteBuffer, InputStream, OutputStream etc.
>>
>> I wrote the following blog post about my experiences with this issue
>> while designing an API for Fluo.
>>
>> http://fluo.apache.org/blog/2016/11/10/immutable-bytes/
>>
>> Is there any reason something like this should not be added to Java?
>
> You mean something like NIO ByteBuffers and related APIs?

As I discussed in the blog post, ByteBuffer does not fit the bill of what I need. In the blog post I have the following little program as an example to show that ByteBuffer is not immutable in the way String is.

byte[] bytes1 = new byte[] {1,2,3,(byte)250};
ByteBuffer bb1 = ByteBuffer.wrap(bytes1).asReadOnlyBuffer();

System.out.println(bb1.hashCode());
bytes1[2]=89;
System.out.println(bb1.hashCode());
bb1.get();
System.out.println(bb1.hashCode());

Would not want to use ByteBuffer as a map key. Would be nice if Java had something like ByteString[1] or Bytes[2]. Having a type like that in Java would allow it to be used in library APIs and avoid copies between multiple implementations of an immutable byte array wrapper.

[1]: https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/ByteString
[2]: https://static.javadoc.io/org.apache.fluo/fluo-api/1.0.0-incubating/org/apache/fluo/api/data/Bytes.html

>
> Roman
>

From peter.lawrey at gmail.com Sat Nov 12 17:53:40 2016
From: peter.lawrey at gmail.com (Peter Lawrey)
Date: Sat, 12 Nov 2016 17:53:40 +0000
Subject: Java needs an immutable byte array wrapper
In-Reply-To: 
References: <1478970277.2649.3.camel@kennke.org>
Message-ID: 

Java 9 String has a byte [] at its core. I suspect it's not appropriate but worth thinking about.

We have a BytesStore class which wraps bytes on or off heap which can be used for keys.

On 12 Nov 2016 17:17, "Keith Turner" wrote:
> On Sat, Nov 12, 2016 at 12:04 PM, Roman Kennke wrote:
> > On Saturday, 12.11.2016 at 11:45 -0500, Keith Turner wrote:
> >> While trying to design an API for Fluo, it's become clear to me that
> >> Java could really benefit from an immutable byte array wrapper.
> >> Something like java.lang.String except for byte arrays instead of
> >> char
> >> arrays. It would be nice if this new type interoperated well with
> >> byte[], String, ByteBuffer, InputStream, OutputStream etc.
> >>
> >> I wrote the following blog post about my experiences with this issue
> >> while designing an API for Fluo.
> >>
> >> http://fluo.apache.org/blog/2016/11/10/immutable-bytes/
> >>
> >> Is there any reason something like this should not be added to Java?
> >
> > You mean something like NIO ByteBuffers and related APIs?
>
> As I discussed in the blog post, ByteBuffer does not fit the bill of
> what I need. In the blog post I have the following little program as
> an example to show that ByteBuffer is not immutable in the way String
> is.
>
> byte[] bytes1 = new byte[] {1,2,3,(byte)250};
> ByteBuffer bb1 = ByteBuffer.wrap(bytes1).asReadOnlyBuffer();
>
> System.out.println(bb1.hashCode());
> bytes1[2]=89;
> System.out.println(bb1.hashCode());
> bb1.get();
> System.out.println(bb1.hashCode());
>
> Would not want to use ByteBuffer as a map key. Would be nice if Java
> had something like ByteString[1] or Bytes[2].
>
> [1]: https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/ByteString
> [2]: https://static.javadoc.io/org.apache.fluo/fluo-api/1.0.0-incubating/org/apache/fluo/api/data/Bytes.html
>
> >
> > Roman
> >

From john.r.rose at oracle.com Sat Nov 12 18:04:56 2016
From: john.r.rose at oracle.com (John Rose)
Date: Sat, 12 Nov 2016 10:04:56 -0800
Subject: Java needs an immutable byte array wrapper
In-Reply-To: 
References: 
Message-ID: <58B4FC32-CF92-4E07-AAD0-0F22C5A26031@oracle.com>

On Nov 12, 2016, at 8:45 AM, Keith Turner wrote:
>
> While trying to design an API for Fluo, it's become clear to me that
> Java could really benefit from an immutable byte array wrapper.
> Something like java.lang.String except for byte arrays instead of char
> arrays. It would be nice if this new type interoperated well with
> byte[], String, ByteBuffer, InputStream, OutputStream etc.
>
> I wrote the following blog post about my experiences with this issue
> while designing an API for Fluo.
>
> http://fluo.apache.org/blog/2016/11/10/immutable-bytes/

That's a good blog entry; thanks, especially the pointer to ByteString.

Of course Java needs a type like that, but our story for immutability is still in flux, so folks are being cautious about adopting such features.

In a similar vein, I would like to see the ability to freeze Java arrays (make them immutable), and (independently) add more API points to them. But the ideas are not fully baked yet.

See also this application for immutable bytes:
https://bugs.openjdk.java.net/browse/JDK-8161256
John

From keith at deenlo.com Sat Nov 12 18:45:28 2016
From: keith at deenlo.com (Keith Turner)
Date: Sat, 12 Nov 2016 13:45:28 -0500
Subject: Java needs an immutable byte array wrapper
In-Reply-To: <58B4FC32-CF92-4E07-AAD0-0F22C5A26031@oracle.com>
References: <58B4FC32-CF92-4E07-AAD0-0F22C5A26031@oracle.com>
Message-ID: 

On Sat, Nov 12, 2016 at 1:04 PM, John Rose wrote:
> On Nov 12, 2016, at 8:45 AM, Keith Turner wrote:
>
> While trying to design an API for Fluo, it's become clear to me that
> Java could really benefit from an immutable byte array wrapper.
> Something like java.lang.String except for byte arrays instead of char
> arrays. It would be nice if this new type interoperated well with
> byte[], String, ByteBuffer, InputStream, OutputStream etc.
>
> I wrote the following blog post about my experiences with this issue
> while designing an API for Fluo.
>
> http://fluo.apache.org/blog/2016/11/10/immutable-bytes/
>
> That's a good blog entry; thanks, especially the pointer to ByteString.
>
> Of course Java needs a type like that, but our story for immutability
> is still in flux, so folks are being cautious about adopting such features.
>
> In a similar vein, I would like to see the ability to freeze Java arrays
> (make them immutable), and (independently) add more API points

Is the concept of freezing byte arrays written up anywhere?

> to them. But the ideas are not fully baked yet.
>
> See also this application for immutable bytes:
> https://bugs.openjdk.java.net/browse/JDK-8161256
>
> John

From keith at deenlo.com Sat Nov 12 18:46:37 2016
From: keith at deenlo.com (Keith Turner)
Date: Sat, 12 Nov 2016 13:46:37 -0500
Subject: Java needs an immutable byte array wrapper
In-Reply-To: 
References: <1478970277.2649.3.camel@kennke.org>
Message-ID: 

On Sat, Nov 12, 2016 at 12:53 PM, Peter Lawrey wrote:
> Java 9 String has a byte [] at its core. I suspect it's not appropriate but
> worth thinking about.

I am not sure, I would have to look into it.
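For reference, the kind of immutable wrapper this thread is asking for can be sketched minimally as follows. This is only an illustration in the spirit of protobuf's ByteString and Fluo's Bytes (the class name and methods here are invented, not a proposed JDK API); defensive copies on the way in and out mean the wrapped contents can never change after construction, so equals/hashCode stay stable and the type is safe as a map key:

```java
import java.util.Arrays;

public final class Bytes {
    private final byte[] data;
    private int hash; // cached lazily, like String.hashCode()

    private Bytes(byte[] data) {
        this.data = data;
    }

    // Defensive copy in: later mutation of src cannot affect this Bytes.
    public static Bytes copyOf(byte[] src) {
        return new Bytes(Arrays.copyOf(src, src.length));
    }

    public int length() {
        return data.length;
    }

    public byte byteAt(int i) {
        return data[i];
    }

    // Defensive copy out: callers never see the internal array.
    public byte[] toByteArray() {
        return Arrays.copyOf(data, data.length);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Bytes && Arrays.equals(data, ((Bytes) o).data);
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {
            h = Arrays.hashCode(data);
            hash = h;
        }
        return h;
    }
}
```

A real version would add the interoperability discussed in the thread (views as ByteBuffer, InputStream, etc.); the copies are the price of the immutability guarantee unless arrays themselves can be frozen.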
Would there always be conversions to/from char when creating Strings from byte[] and when calling String.getBytes()?

Also would like something that interoperates well with ByteBuffer, InputStream, OutputStream for byte sequence data, like protobuf's ByteString and Fluo's Bytes do.

> We have a BytesStore class which wraps bytes on or off heap which can be
> used for keys.

I suspect many projects roll their own thing for this.

> On 12 Nov 2016 17:17, "Keith Turner" wrote:
>> On Sat, Nov 12, 2016 at 12:04 PM, Roman Kennke wrote:
>> > On Saturday, 12.11.2016 at 11:45 -0500, Keith Turner wrote:
>> >> While trying to design an API for Fluo, it's become clear to me that
>> >> Java could really benefit from an immutable byte array wrapper.
>> >> Something like java.lang.String except for byte arrays instead of
>> >> char
>> >> arrays. It would be nice if this new type interoperated well with
>> >> byte[], String, ByteBuffer, InputStream, OutputStream etc.
>> >>
>> >> I wrote the following blog post about my experiences with this issue
>> >> while designing an API for Fluo.
>> >>
>> >> http://fluo.apache.org/blog/2016/11/10/immutable-bytes/
>> >>
>> >> Is there any reason something like this should not be added to Java?
>> >
>> > You mean something like NIO ByteBuffers and related APIs?
>>
>> As I discussed in the blog post, ByteBuffer does not fit the bill of
>> what I need. In the blog post I have the following little program as
>> an example to show that ByteBuffer is not immutable in the way String
>> is.
>>
>> byte[] bytes1 = new byte[] {1,2,3,(byte)250};
>> ByteBuffer bb1 = ByteBuffer.wrap(bytes1).asReadOnlyBuffer();
>>
>> System.out.println(bb1.hashCode());
>> bytes1[2]=89;
>> System.out.println(bb1.hashCode());
>> bb1.get();
>> System.out.println(bb1.hashCode());
>>
>> Would not want to use ByteBuffer as a map key. Would be nice if Java
>> had something like ByteString[1] or Bytes[2].
>> Having a type like that
>> in Java would allow it to be used in library APIs and avoid copies
>> between multiple implementations of an immutable byte array wrapper.
>>
>> [1]:
>> https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/ByteString
>> [2]:
>> https://static.javadoc.io/org.apache.fluo/fluo-api/1.0.0-incubating/org/apache/fluo/api/data/Bytes.html
>>
>> >
>> > Roman
>> >

From per at bothner.com Sun Nov 13 04:06:55 2016
From: per at bothner.com (Per Bothner)
Date: Sat, 12 Nov 2016 20:06:55 -0800
Subject: string indexing (was: Java needs an immutable byte array wrapper)
In-Reply-To: 
References: <1478970277.2649.3.camel@kennke.org>
Message-ID: 

On 11/12/2016 09:53 AM, Peter Lawrey wrote:
> Java 9 String has a byte [] at its core. I suspect it's not appropriate but
> worth thinking about.

Interesting. I would be even more interested if they could make codePointAt and codePointCount be constant-time: A number of programming languages define a string as a sequence of code-points, and the indexing operator that their standard libraries provide is basically codePointAt. Example languages include Python3, Scheme, and the XQuery/XPath/XSLT family.

Implementing string indexing for such a language on the JVM gives you the unpalatable choice of either having indexing take linear time, or not using java.lang.String and thus hurting Java interoperability.

Note it would be easy to change the Java9 String implementation such that codePointAt was constant-time in the case of BMP-only (no-surrogate) strings. Just use a bit in the 'coder' field to indicate that the string is BMP-only. Doing so would be a big and easy win for the common BMP-only case, though it doesn't give us guaranteed constant-time indexing - a single non-BMP character breaks that.
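The preceding point can be shown with a small self-contained program (this is standard java.lang.String behavior, nothing specific to any proposal): a single non-BMP character is stored as a surrogate pair, so char indices and code-point indices diverge, and locating the n-th code point requires a linear scan:

```java
public class CodePointDemo {
    // Char index of the n-th code point; offsetByCodePoints walks the
    // string linearly because surrogate pairs have no fixed positions.
    static int nthCodePointIndex(String s, int n) {
        return s.offsetByCodePoints(0, n);
    }

    public static void main(String[] args) {
        String bmp = "abc";               // BMP-only: chars == code points
        String mixed = "a\uD83D\uDE00b";  // 'a', U+1F600 (surrogate pair), 'b'

        System.out.println(bmp.codePointCount(0, bmp.length()));      // 3
        System.out.println(mixed.length());                           // 4 chars
        System.out.println(mixed.codePointCount(0, mixed.length()));  // 3 code points
        System.out.println(nthCodePointIndex(mixed, 2));              // 3, not 2
    }
}
```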
As a compromise I recently implemented an IString class, which gives you O(1) codepoint indexing while still being compact and implementing CharSequence efficiently:

http://sourceware.org/viewvc/kawa/branches/invoke/gnu/lists/IString.java?view=markup
[Warning: this has not been tested much.]

Still, it would be much nicer if we could use java.lang.String directly. It wouldn't be very expensive. Note that the offsets array in my IString class only adds 0.24 bytes per 2-byte char, so roughly 12%. It is possible to encode the Java9 'coder' field using the IString 'offsets' field (by using a static flag array for the LATIN1 case).

--
--Per Bothner
per at bothner.com http://per.bothner.com/

From zen at freedbms.net Sun Nov 13 12:21:48 2016
From: zen at freedbms.net (Zenaan Harkness)
Date: Sun, 13 Nov 2016 23:21:48 +1100
Subject: string indexing (was: Java needs an immutable byte array wrapper)
In-Reply-To: 
References: <1478970277.2649.3.camel@kennke.org>
Message-ID: <20161113122148.GA5258@x220-a02>

On Sat, Nov 12, 2016 at 08:06:55PM -0800, Per Bothner wrote:
> On 11/12/2016 09:53 AM, Peter Lawrey wrote:
> >Java 9 String has a byte [] at its core. I suspect it's not
> >appropriate but worth thinking about.

Time to read up on that, thanks.

> Interesting. I would be even more interested if they could make
> codePointAt and codePointCount be constant-time: A number of
> programming languages define a string as a sequence of code-points,
> and the indexing operator that their standard libraries provide is
> basically codePointAt. Example languages include Python3, Scheme, and
> the XQuery/XPath/XSLT family.

Ack. Although grapheme indexing is probably more generally useful for multi-lingual UI. Swift basically gets "String" right as far as my reading of Swift's docs goes - not only code-points, but graphemes, the next layer of indexing above code-points. I cannot speak to Swift's implementation as to storage / time tradeoffs made.
Trying to create a simple string formatter (left, right, centered) that was also "multi lingual" led me into the deep dark past of Java's (pre v1.0) decision to go with UTF-16 (sensible at the time), which has been known to be deficient for 20 years (it was prior to Java 1.1 that Unicode ascertained it needed more than 16 bits), and yet java.lang.String never got updated, at least until recently with Java 9, which now lays the foundation for a sane string class.

Took me two full working weeks to sort out the mess in my head, so I wrote up the details of that exploration here:

https://zenaan.github.io/zen/javadoc/zen/lang/string.html
(Note, this was pre-Java 9)

Hopefully by Java 10, 11 or 12, we might see full grapheme support in Java (as is the case in Swift), now that String is implemented with byte array storage.

> Implementing string indexing for such a language on the JVM gives you
> the unpalatable choice of either having indexing take linear time, or
> not using java.lang.String and thus hurting Java interoperability.

Can class finality be bypassed at the JVM level? With byte[] underlying Java 9's String class, code-point and grapheme indexing could be in a subclass?

The trade-off then is between the storage (and construction time) cost of the extra layers of indexing (code-points, then graphemes on top of that), vs the run-time performance hit of dynamically finding these index points every time they are needed. There is no universal "best" option of course... it always depends on the application.

> Note it would be easy to change the Java9 String implementation such
> that codePointAt was constant-time in the case of BMP-only
> (no-surrogate) strings.

I.e. without increasing storage cost. I don't think code-points really solve the significant problem though (discovery of grapheme boundaries when one truly needs to handle multiple languages).

> Just use a bit in the 'coder' field to indicate that the string is
> BMP-only.
> Doing so would be a big and easy win for the common
> BMP-only case, though it doesn't give us guaranteed constant-time
> indexing - a single non-BMP character breaks that.

Again, my write-up highlights the issues with code-points - we have combining "characters", non-displayed "characters" and plenty more besides - it is graphemes (and non-graphemes) that, at the UI layer at least, we really need to know about.

> As a compromise I recently implemented an IString class, which gives
> you O(1) codepoint indexing while still being compact and implementing
> CharSequence efficiently:
>
> http://sourceware.org/viewvc/kawa/branches/invoke/gnu/lists/IString.java?view=markup
> [Warning: this has not been tested much.]

Thanks. "CharSequence" is deceptive. It should be called CodePointSequence or something else again... "char" is -so- overloaded in Java in particular.

> Still, it would be much nicer if we could use java.lang.String
> directly. It wouldn't be very expensive. Note that the offsets array
> in my IString class only adds 0.24 bytes per 2-byte char, so roughly
> 12%. It is possible to encode the Java9 'coder' field using the
> IString 'offsets' field (by using a static flag array for the LATIN1
> case).

I strongly believe that the immutability of byte arrays would provide the safety that java.lang.String otherwise provides, and that as long as removing String finality did not significantly impact performance of code in the wild, the new byte[] String would be entirely sufficient for one or two additional, and optional, indexing layers - one for code-points, and the top layer for graphemes.
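The grapheme layer discussed above can be approximated today with the JDK's own java.text.BreakIterator, whose character instance iterates user-perceived character boundaries. An illustrative sketch (not any proposed String API):

```java
import java.text.BreakIterator;

public class GraphemeDemo {
    // Count user-perceived characters by walking boundary positions.
    static int graphemeCount(String s) {
        BreakIterator it = BreakIterator.getCharacterInstance();
        it.setText(s);
        int count = 0;
        while (it.next() != BreakIterator.DONE) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        String s = "e\u0301"; // 'e' + combining acute accent
        System.out.println(s.length());                      // 2 chars
        System.out.println(s.codePointCount(0, s.length())); // 2 code points
        System.out.println(graphemeCount(s));                // 1 grapheme
    }
}
```

Indexing by grapheme this way is linear time; constant-time access would need an extra offsets index built per string, which is exactly the storage-versus-time trade-off described above.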
Regards,
Zenaan

From per at bothner.com Mon Nov 14 01:28:36 2016
From: per at bothner.com (Per Bothner)
Date: Sun, 13 Nov 2016 17:28:36 -0800
Subject: string indexing
In-Reply-To: <20161113122148.GA5258@x220-a02>
References: <1478970277.2649.3.camel@kennke.org> <20161113122148.GA5258@x220-a02>
Message-ID: 

On 11/13/2016 04:21 AM, Zenaan Harkness wrote:
> Although grapheme indexing is probably more generally useful for
> multi-lingual UI.

Quite possibly. However, a code-point can be represented as an unboxed int. A grapheme requires memory allocation. You cannot store it in a register or even a fixed number of registers, unless you use an indirect substring representation (base string, start offset, end offset), which has its own problems.

You can always build a grapheme-based API on top of a codepoint API, but not vice versa. You can of course do the same on top of a UTF16 code-unit API, but it's more error-prone and unnatural: At least code-points have some natural semantic meaning; code-units do not.

> "CharSequence" is deceptive. Should be called CodePointSequence or
> something else again... "char" is -so- overloaded in Java in particular.

java.lang.CharSequence is *not* a sequence of code-points. It's a sequence of UTF-16 code-units, just like java.lang.String.

--
--Per Bothner
per at bothner.com http://per.bothner.com/

From zen at freedbms.net Mon Nov 14 02:07:43 2016
From: zen at freedbms.net (Zenaan Harkness)
Date: Mon, 14 Nov 2016 13:07:43 +1100
Subject: string indexing
In-Reply-To: 
References: <1478970277.2649.3.camel@kennke.org> <20161113122148.GA5258@x220-a02>
Message-ID: <20161114020743.GA6823@x220-a02>

On Sun, Nov 13, 2016 at 05:28:36PM -0800, Per Bothner wrote:
> On 11/13/2016 04:21 AM, Zenaan Harkness wrote:
> >Although grapheme indexing is probably more generally useful for
> >multi-lingual UI.
>
> Quite possibly. However, a code-point can be represented as an unboxed
> int. A grapheme requires memory allocation.
> You cannot store it in a
> register or even a fixed number of registers, unless you use an indirect
> substring representation (base string, start offset, end offset), which
> has its own problems.
>
> You can always build a grapheme-based API on top of a codepoint API,
> but not vice versa. You can of course do the same on top of a UTF16
> code-unit API, but it's more error-prone and unnatural: At least
> code-points have some natural semantic meaning; code-units do not.

Ack. I would only refer here of course: http://utf8everywhere.org/

Java is what it is, and String is particularly unfortunate - Java 9's byte[] implementation is a performance improvement in some situations, but still messy:

http://stackoverflow.com/questions/38213239/what-is-java-9s-new-string-implementaion
" Because most usages of Strings are Latin-1 and only require one byte, Java-9's String will be updated to be implemented under the hood as a byte array with an encoding flag field to note if it is a byte array. If the characters are not Latin-1 and require more than one byte it will be stored as a UTF-16 char array (2 bytes per char) and the flag. "

> >"CharSequence" is deceptive. Should be called CodePointSequence or
> >something else again... "char" is -so- overloaded in Java in particular.
>
> java.lang.CharSequence is *not* a sequence of code-points.
> It's a sequence of UTF-16 code-units, just like java.lang.String.

All the more reason its name is problematic.

From karthik.ganesan at oracle.com Mon Nov 14 16:23:05 2016
From: karthik.ganesan at oracle.com (Karthik Ganesan)
Date: Mon, 14 Nov 2016 10:23:05 -0600
Subject: Project Proposal: Trinity
Message-ID: <5829E4E9.9080604@oracle.com>

Hi,

I would like to propose the creation of a new Project: Project Trinity.

This Project would explore enhanced execution of bulk aggregate calculations over Streams through offloading calculations to hardware accelerators.
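As a concrete illustration of the kind of calculation in question, here is an ordinary parallel Stream pipeline today; every element is examined independently, which is exactly what makes such bulk aggregates candidates for SIMD or accelerator offload:

```java
import java.util.stream.IntStream;

public class BulkAggregate {
    // Sum of the multiples of 3 below n: a data-parallel bulk aggregate.
    static long sumOfMultiplesOf3(int n) {
        return IntStream.range(0, n)
                .parallel()               // CPU data parallelism today
                .filter(x -> x % 3 == 0)  // per-element, order-independent work
                .asLongStream()
                .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfMultiplesOf3(1_000_000)); // 166666833333
    }
}
```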
Streams allow developers to express calculations such that data parallelism can be efficiently exploited. Such calculations are prime candidates for leveraging enhanced data-oriented instructions on CPUs (such as SIMD instructions) or offloading to hardware accelerators (such as the SPARC Data Accelerator co-processor, further referred to as DAX [1]).

To identify a path to improving performance and power efficiency, Project Trinity will explore how libraries like Streams can be enhanced to leverage data processing hardware features to execute Streams more efficiently.

Directions for exploration include:
- Building a streams-like library optimized for offload to
-- hardware accelerators (such as DAX), or
-- a GPU, or
-- SIMD instructions;
- Optimizations in the Graal compiler to automatically transform suitable Streams pipelines, taking advantage of data processing hardware features;
- Explorations with Project Valhalla to expand the range of effective acceleration to Streams of value types.

Success will be evaluated based upon:
(1) speedups and resource efficiency gains achieved for a broad range of representative streams calculations under offload,
(2) ease of use of the hardware acceleration capability, and
(3) ensuring that there is no time or space overhead for non-accelerated calculations.

Can I please request the support of the Core Libraries Group as the Sponsoring Group, with myself as the Project Lead.

Warm Regards,
Karthik Ganesan

[1] https://community.oracle.com/docs/DOC-994842

From volker.simonis at gmail.com Mon Nov 14 16:49:45 2016
From: volker.simonis at gmail.com (Volker Simonis)
Date: Mon, 14 Nov 2016 17:49:45 +0100
Subject: Project Proposal: Trinity
In-Reply-To: <5829E4E9.9080604@oracle.com>
References: <5829E4E9.9080604@oracle.com>
Message-ID: 

Hi Karthik,

we had project "Sumatra" [1] for this, which has been inactive for quite some time.
We also have project "Panama" [2] which, as far as I understand, is also looking into auto-parallelization/vectorization. See for example the "Vectors for Java" presentation from JavaOne which describes some very similar ideas to yours.

What justifies the creation of yet another project instead of doing this work in the context of the existing projects? What in your approach is different to the one described in [3], which is already, at least partially, implemented in project Panama?

Thanks,
Volker

[1] http://openjdk.java.net/projects/sumatra/
[2] http://openjdk.java.net/projects/panama/
[3] http://cr.openjdk.java.net/~psandoz/conferences/2016-JavaOne/j1-2016-vectors-for-java-CON1560.pdf

On Mon, Nov 14, 2016 at 5:23 PM, Karthik Ganesan wrote:
> Hi,
>
> I would like to propose the creation of a new Project: Project Trinity.
>
> This Project would explore enhanced execution of bulk aggregate calculations
> over Streams through offloading calculations to hardware accelerators.
>
> Streams allow developers to express calculations such that data parallelism
> can be efficiently exploited. Such calculations are prime candidates for
> leveraging enhanced data-oriented instructions on CPUs (such as SIMD
> instructions) or offloading to hardware accelerators (such as the SPARC Data
> Accelerator co-processor, further referred to as DAX [1]).
>
> To identify a path to improving performance and power efficiency, Project
> Trinity will explore how libraries like Streams can be enhanced to leverage
> data processing hardware features to execute Streams more efficiently.
>
> Directions for exploration include:
> - Building a streams-like library optimized for offload to
> -- hardware accelerators (such as DAX), or
> -- a GPU, or
> -- SIMD instructions;
> - Optimizations in the Graal compiler to automatically transform suitable
> Streams pipelines, taking advantage of data processing hardware features;
> - Explorations with Project Valhalla to expand the range of effective
> acceleration to Streams of value types.
>
> Success will be evaluated based upon:
> (1) speedups and resource efficiency gains achieved for a broad range of
> representative streams calculations under offload,
> (2) ease of use of the hardware acceleration capability, and
> (3) ensuring that there is no time or space overhead for non-accelerated
> calculations.
>
> Can I please request the support of the Core Libraries Group as the
> Sponsoring Group with myself as the Project Lead.
>
> Warm Regards,
> Karthik Ganesan
>
> [1] https://community.oracle.com/docs/DOC-994842
>

From karthik.ganesan at oracle.com Tue Nov 15 05:57:58 2016
From: karthik.ganesan at oracle.com (Karthik Ganesan)
Date: Mon, 14 Nov 2016 23:57:58 -0600
Subject: Project Proposal: Trinity
In-Reply-To: 
References: <5829E4E9.9080604@oracle.com>
Message-ID: 

Hi Volker,

Thanks for your comments and the relevant questions. We have reviewed projects Sumatra and Panama and talked to members who are familiar with the projects.

Project Sumatra was aimed at translation of Java byte code to execute on GPUs, which was an ambitious goal and a challenging task to take up. In this project, we aim to come up with APIs targeting the most common analytics operations that can be readily offloaded to accelerators transparently. Most of the information needed for offload to the accelerator is expected to be readily provided by the API semantics, thereby avoiding the need for tedious byte code analysis.
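To make the idea concrete, here is a hypothetical sketch of one such building block ("select"); the class, method name, and signature are invented here purely for illustration, and a real implementation would dispatch to an accelerator such as DAX instead of this plain-CPU fallback:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class SelectDemo {
    // "Select": return the indices of elements exceeding a threshold.
    // The API exposes the operation's semantics directly, so a runtime
    // could offload it without analyzing user byte code.
    static int[] select(int[] data, int threshold) {
        return IntStream.range(0, data.length)
                .filter(i -> data[i] > threshold)
                .toArray();
    }

    public static void main(String[] args) {
        int[] data = {5, 12, 7, 40, 3};
        System.out.println(Arrays.toString(select(data, 6))); // [1, 2, 3]
    }
}
```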
While the vector API (part of Panama) brings some much-wanted abstraction for vectors, it is still loop based and is most useful for superword-style operations leveraging SIMD units on general-purpose cores. The aim of this proposed project is to provide a more abstract API (similar to the Streams API) that will work directly on streams of data and transparently accommodate a wider set of heterogeneous accelerators, such as DAX and GPUs, underneath.

Initially, the project will focus on coming up with a complete set of APIs, relevant input/output formats, and optimized data structures and storage formats that can be used as building blocks to build high-performance analytics applications/frameworks in Java. Simple examples of such operations include scan, select, filter, lookup, transcode, merge, sort, etc. Additionally, this project will also need to address functionality such as operating system library calls and handling garbage collection needs amidst offload. The artifacts provided by Project Panama, including the code snippets (or even the Vector API), along with value types from Project Valhalla, will come in handy wherever applicable in this project.

Overall, I feel that the goals of this project and the needed work are different from what the Vector API is targeting. Hope this answers your question.

Regards,
Karthik

On 11/14/2016 10:49 AM, Volker Simonis wrote:
> Hi Karthik,
>
> we had project "Sumatra" [1] for this, which has been inactive for quite some time.
> We also have project "Panama" [2] which, as far as I understand, is
> also looking into auto-parallelization/vectorization. See for example
> the "Vectors for Java" presentation from JavaOne which describes some
> very similar ideas to yours.
>
> What justifies the creation of yet another project instead of doing
> this work in the context of the existing projects?
> What in your approach is different to the one described in [3] which > is already, at least partially, implemented in project Panama? > > Thanks, > Volker > > [1] http://openjdk.java.net/projects/sumatra/ > [2] http://openjdk.java.net/projects/panama/ > [3] http://cr.openjdk.java.net/~psandoz/conferences/2016-JavaOne/j1-2016-vectors-for-java-CON1560.pdf > > On Mon, Nov 14, 2016 at 5:23 PM, Karthik Ganesan > wrote: >> Hi, >> >> I would like to propose the creation of a new Project: Project Trinity. >> >> This Project would explore enhanced execution of bulk aggregate calculations >> over Streams through offloading calculations to hardware accelerators. >> >> Streams allow developers to express calculations such that data parallelism >> can be efficiently exploited. Such calculations are prime candidates for >> leveraging enhanced data-oriented instructions on CPUs (such as SIMD >> instructions) or offloading to hardware accelerators (such as the SPARC Data >> Accelerator co-processor, further referred to as DAX [1]). >> >> To identify a path to improving performance and power efficiency, Project >> Trinity will explore how libraries like Streams can be enhanced to leverage >> data processing hardware features to execute Streams more efficiently. >> >> Directions for exploration include: >> - Building a streams-like library optimized for offload to >> -- hardware accelerators (such as DAX), or >> -- a GPU, or >> -- SIMD instructions; >> - Optimizations in the Graal compiler to automatically transform suitable >> Streams pipelines, taking advantage of data processing hardware features; >> - Explorations with Project Valhalla to expand the range of effective >> acceleration to Streams of value types. 
>> >> Success will be evaluated based upon: >> (1) speedups and resource efficiency gains achieved for a broad range of >> representative streams calculations under offload, >> (2) ease of use of the hardware acceleration capability, and >> (3) ensuring that there is no time or space overhead for non-accelerated >> calculations. >> >> Can I please request the support of the Core Libraries Group as the >> Sponsoring Group with myself as the Project Lead. >> >> Warm Regards, >> Karthik Ganesan >> >> [1] https://community.oracle.com/docs/DOC-994842 >> From volker.simonis at gmail.com Tue Nov 15 08:13:46 2016 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 15 Nov 2016 09:13:46 +0100 Subject: Project Proposal: Trinity In-Reply-To: References: <5829E4E9.9080604@oracle.com> Message-ID: Hi Karthik, thanks a lot for your quick answer and the detailed description of the project's goals and relation to the other projects. This all sounds reasonable and very interesting! I wish I'll find some time to take a deeper look at your project :) Wish you all the best, Volker On Tue, Nov 15, 2016 at 6:57 AM, Karthik Ganesan wrote: > Hi Volker, > > Thanks for your comments and the relevant questions. We have reviewed > projects Sumatra and Panama and talked to members who are familiar with the > projects. > > Project Sumatra was aimed at translation of Java byte code to execute on > GPU, which was an ambitious goal and a challenging task to take up. In this > project, we aim to come up with APIs targeting the most common Analytics > operations that can be readily offloaded to accelerators transparently. Most > of the information needed for offload to the accelerator is expected to be > readily provided by the API semantics and there by, simplifying the need to > do tedious byte code analysis. 
> > While the vector API (part of Panama) brings some most wanted abstraction > for vectors, it is still loop based and is most useful for superword type of > operations leveraging SIMD units on general purpose cores. The aim of this > proposed project is to provide a more abstract API (similar to the Streams > API) that will directly work on streams of data and transparently > accommodate a wider set of heterogeneous accelerators like DAX, GPUs > underneath. Initially, the project will focus on coming up with a complete > set of APIs, relevant input/output formats, optimized data structures and > storage format that can be used as building blocks to build high performance > analytics applications/frameworks in Java. Simple examples of such > operations will include Scan, select, filter, lookup, transcode, merge, sort > etc. Additionally, this project will also require more functionality like > operating system library calls, handling Garbage Collection needs amidst > offload etc. > > The artifacts provided by Project Panama including the code snippets (or > even the Vector API) along with value types from project Valhalla will come > in handy to be leveraged wherever it is applicable in this project. Overall, > I feel that the goals of this project and the needed work are different from > what the Vector API is targeting. Hope this answers your question. > > Regards, > > Karthik > > > On 11/14/2016 10:49 AM, Volker Simonis wrote: >> >> Hi Karthik, >> >> we had project "Sumatra" [1] for this which is inactive since quite some >> time. >> We also have project "Panama" [2] which, as far as I understand, is >> also looking into auto-parallelization/vectorization. See for example >> the "Vectors for Java" presentation from JavaOne which describes some >> very similar ideas to yours. >> >> What justifies the creation of yet another project instead of doing >> this work in the context of the existing projects? 
>> What in your approach is different to the one described in [3] which >> is already, at least partially, implemented in project Panama? >> >> Thanks, >> Volker >> >> [1] http://openjdk.java.net/projects/sumatra/ >> [2] http://openjdk.java.net/projects/panama/ >> [3] >> http://cr.openjdk.java.net/~psandoz/conferences/2016-JavaOne/j1-2016-vectors-for-java-CON1560.pdf >> >> On Mon, Nov 14, 2016 at 5:23 PM, Karthik Ganesan >> wrote: >>> >>> Hi, >>> >>> I would like to propose the creation of a new Project: Project Trinity. >>> >>> This Project would explore enhanced execution of bulk aggregate >>> calculations >>> over Streams through offloading calculations to hardware accelerators. >>> >>> Streams allow developers to express calculations such that data >>> parallelism >>> can be efficiently exploited. Such calculations are prime candidates for >>> leveraging enhanced data-oriented instructions on CPUs (such as SIMD >>> instructions) or offloading to hardware accelerators (such as the SPARC >>> Data >>> Accelerator co-processor, further referred to as DAX [1]). >>> >>> To identify a path to improving performance and power efficiency, Project >>> Trinity will explore how libraries like Streams can be enhanced to >>> leverage >>> data processing hardware features to execute Streams more efficiently. >>> >>> Directions for exploration include: >>> - Building a streams-like library optimized for offload to >>> -- hardware accelerators (such as DAX), or >>> -- a GPU, or >>> -- SIMD instructions; >>> - Optimizations in the Graal compiler to automatically transform suitable >>> Streams pipelines, taking advantage of data processing hardware features; >>> - Explorations with Project Valhalla to expand the range of effective >>> acceleration to Streams of value types. 
>>> >>> Success will be evaluated based upon: >>> (1) speedups and resource efficiency gains achieved for a broad range of >>> representative streams calculations under offload, >>> (2) ease of use of the hardware acceleration capability, and >>> (3) ensuring that there is no time or space overhead for non-accelerated >>> calculations. >>> >>> Can I please request the support of the Core Libraries Group as the >>> Sponsoring Group with myself as the Project Lead. >>> >>> Warm Regards, >>> Karthik Ganesan >>> >>> [1] https://community.oracle.com/docs/DOC-994842 >>> > From vladimir.x.ivanov at oracle.com Wed Nov 16 11:01:12 2016 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Wed, 16 Nov 2016 14:01:12 +0300 Subject: Project Proposal: Trinity In-Reply-To: References: <5829E4E9.9080604@oracle.com> Message-ID: <3a56402c-6bab-7159-50be-9a3668896aab@oracle.com> Thanks for the clarifications, Karthik. Have you considered conducting the experiments in Project Panama? IMO the goals you stated fit Panama really well. There are different directions being explored at the moment and most of them are relevant for the project you propose. Though current explorations in Vector API are primarily focused on SIMD support, there are no inherent barriers in extending the scope to GPUs and special-purpose HW accelerators. It doesn't mean in any way it should be part of Vector API work, but both projects should win from such collaboration. During explorations you can rely on new FFI, foreign data layout support, machine code snippets, and value types (since they are important for Panama, there will be regular syncs between projects and the engineering costs can be amortized). Overall, I expect all participants to benefit from such synergy. Best regards, Vladimir Ivanov On 11/15/16 8:57 AM, Karthik Ganesan wrote: > Hi Volker, > > Thanks for your comments and the relevant questions. 
We have reviewed > projects Sumatra and Panama and talked to members who are familiar with > the projects. > > Project Sumatra was aimed at translation of Java byte code to execute on > GPU, which was an ambitious goal and a challenging task to take up. In > this project, we aim to come up with APIs targeting the most common > Analytics operations that can be readily offloaded to accelerators > transparently. Most of the information needed for offload to the > accelerator is expected to be readily provided by the API semantics and > there by, simplifying the need to do tedious byte code analysis. > > While the vector API (part of Panama) brings some most wanted > abstraction for vectors, it is still loop based and is most useful for > superword type of operations leveraging SIMD units on general purpose > cores. The aim of this proposed project is to provide a more abstract > API (similar to the Streams API) that will directly work on streams of > data and transparently accommodate a wider set of heterogeneous > accelerators like DAX, GPUs underneath. Initially, the project will > focus on coming up with a complete set of APIs, relevant input/output > formats, optimized data structures and storage format that can be used > as building blocks to build high performance analytics > applications/frameworks in Java. Simple examples of such operations will > include Scan, select, filter, lookup, transcode, merge, sort etc. > Additionally, this project will also require more functionality like > operating system library calls, handling Garbage Collection needs amidst > offload etc. > > The artifacts provided by Project Panama including the code snippets (or > even the Vector API) along with value types from project Valhalla will > come in handy to be leveraged wherever it is applicable in this project. > Overall, I feel that the goals of this project and the needed work are > different from what the Vector API is targeting. Hope this answers your > question. 
> > Regards, > > Karthik > > On 11/14/2016 10:49 AM, Volker Simonis wrote: >> Hi Karthik, >> >> we had project "Sumatra" [1] for this which is inactive since quite >> some time. >> We also have project "Panama" [2] which, as far as I understand, is >> also looking into auto-parallelization/vectorization. See for example >> the "Vectors for Java" presentation from JavaOne which describes some >> very similar ideas to yours. >> >> What justifies the creation of yet another project instead of doing >> this work in the context of the existing projects? >> What in your approach is different to the one described in [3] which >> is already, at least partially, implemented in project Panama? >> >> Thanks, >> Volker >> >> [1] http://openjdk.java.net/projects/sumatra/ >> [2] http://openjdk.java.net/projects/panama/ >> [3] >> http://cr.openjdk.java.net/~psandoz/conferences/2016-JavaOne/j1-2016-vectors-for-java-CON1560.pdf >> >> >> On Mon, Nov 14, 2016 at 5:23 PM, Karthik Ganesan >> wrote: >>> Hi, >>> >>> I would like to propose the creation of a new Project: Project Trinity. >>> >>> This Project would explore enhanced execution of bulk aggregate >>> calculations >>> over Streams through offloading calculations to hardware accelerators. >>> >>> Streams allow developers to express calculations such that data >>> parallelism >>> can be efficiently exploited. Such calculations are prime candidates for >>> leveraging enhanced data-oriented instructions on CPUs (such as SIMD >>> instructions) or offloading to hardware accelerators (such as the >>> SPARC Data >>> Accelerator co-processor, further referred to as DAX [1]). >>> >>> To identify a path to improving performance and power efficiency, >>> Project >>> Trinity will explore how libraries like Streams can be enhanced to >>> leverage >>> data processing hardware features to execute Streams more efficiently. 
>>> >>> Directions for exploration include: >>> - Building a streams-like library optimized for offload to >>> -- hardware accelerators (such as DAX), or >>> -- a GPU, or >>> -- SIMD instructions; >>> - Optimizations in the Graal compiler to automatically transform >>> suitable >>> Streams pipelines, taking advantage of data processing hardware >>> features; >>> - Explorations with Project Valhalla to expand the range of effective >>> acceleration to Streams of value types. >>> >>> Success will be evaluated based upon: >>> (1) speedups and resource efficiency gains achieved for a broad range of >>> representative streams calculations under offload, >>> (2) ease of use of the hardware acceleration capability, and >>> (3) ensuring that there is no time or space overhead for non-accelerated >>> calculations. >>> >>> Can I please request the support of the Core Libraries Group as the >>> Sponsoring Group with myself as the Project Lead. >>> >>> Warm Regards, >>> Karthik Ganesan >>> >>> [1] https://community.oracle.com/docs/DOC-994842 >>> > From karthik.ganesan at oracle.com Wed Nov 16 22:39:55 2016 From: karthik.ganesan at oracle.com (Karthik Ganesan) Date: Wed, 16 Nov 2016 16:39:55 -0600 Subject: Project Proposal: Trinity In-Reply-To: <3a56402c-6bab-7159-50be-9a3668896aab@oracle.com> References: <5829E4E9.9080604@oracle.com> <3a56402c-6bab-7159-50be-9a3668896aab@oracle.com> Message-ID: <582CE03B.5040109@oracle.com> Hi Vladimir, Thanks for the support. I certainly agree with you regarding the value in collaboration between the ongoing vector API efforts and this project, going forward. I believe that the tools that will be provided by Panama and Valhalla have set the stage to start exploring something like Trinity, which may either consume some of these artifacts and/or choose to improve/expand them over the course of this project. 
It would be great if the members familiar with Vector API can participate in the initial discussions of this project and help steer the relevant aspects of Trinity in the right direction. Regards, Karthik On 16-11-16 05:01 AM, Vladimir Ivanov wrote: > Thanks for the clarifications, Karthik. > > Have you considered conducting the experiments in Project Panama? > > IMO the goals you stated fit Panama really well. There are different > directions being explored at the momemnt and most of them are relevant > for the project you propose. > > Though current explorations in Vector API are primarily focused on > SIMD support, there are no inherent barriers in extending the scope to > GPUs and special-purpose HW accelerators. It doesn't mean in any way > it should be part of Vector API work, but both projects should win > from such collaboration. > > During explorations you can rely on new FFI, foreign data layout > support, machine code snippets, and value types (since they are > important for Panama, there will be regular syncs between projects and > the engineering costs can be amortized). > > Overall, I expect all participants to benefit from such synergy. > > Best regards, > Vladimir Ivanov > > On 11/15/16 8:57 AM, Karthik Ganesan wrote: >> Hi Volker, >> >> Thanks for your comments and the relevant questions. We have reviewed >> projects Sumatra and Panama and talked to members who are familiar with >> the projects. >> >> Project Sumatra was aimed at translation of Java byte code to execute on >> GPU, which was an ambitious goal and a challenging task to take up. In >> this project, we aim to come up with APIs targeting the most common >> Analytics operations that can be readily offloaded to accelerators >> transparently. Most of the information needed for offload to the >> accelerator is expected to be readily provided by the API semantics and >> there by, simplifying the need to do tedious byte code analysis. 
>> >> While the vector API (part of Panama) brings some most wanted >> abstraction for vectors, it is still loop based and is most useful for >> superword type of operations leveraging SIMD units on general purpose >> cores. The aim of this proposed project is to provide a more abstract >> API (similar to the Streams API) that will directly work on streams of >> data and transparently accommodate a wider set of heterogeneous >> accelerators like DAX, GPUs underneath. Initially, the project will >> focus on coming up with a complete set of APIs, relevant input/output >> formats, optimized data structures and storage format that can be used >> as building blocks to build high performance analytics >> applications/frameworks in Java. Simple examples of such operations will >> include Scan, select, filter, lookup, transcode, merge, sort etc. >> Additionally, this project will also require more functionality like >> operating system library calls, handling Garbage Collection needs amidst >> offload etc. >> >> The artifacts provided by Project Panama including the code snippets (or >> even the Vector API) along with value types from project Valhalla will >> come in handy to be leveraged wherever it is applicable in this project. >> Overall, I feel that the goals of this project and the needed work are >> different from what the Vector API is targeting. Hope this answers your >> question. >> >> Regards, >> >> Karthik >> >> On 11/14/2016 10:49 AM, Volker Simonis wrote: >>> Hi Karthik, >>> >>> we had project "Sumatra" [1] for this which is inactive since quite >>> some time. >>> We also have project "Panama" [2] which, as far as I understand, is >>> also looking into auto-parallelization/vectorization. See for example >>> the "Vectors for Java" presentation from JavaOne which describes some >>> very similar ideas to yours. >>> >>> What justifies the creation of yet another project instead of doing >>> this work in the context of the existing projects? 
>>> What in your approach is different to the one described in [3] which >>> is already, at least partially, implemented in project Panama? >>> >>> Thanks, >>> Volker >>> >>> [1] http://openjdk.java.net/projects/sumatra/ >>> [2] http://openjdk.java.net/projects/panama/ >>> [3] >>> http://cr.openjdk.java.net/~psandoz/conferences/2016-JavaOne/j1-2016-vectors-for-java-CON1560.pdf >>> >>> >>> >>> On Mon, Nov 14, 2016 at 5:23 PM, Karthik Ganesan >>> wrote: >>>> Hi, >>>> >>>> I would like to propose the creation of a new Project: Project >>>> Trinity. >>>> >>>> This Project would explore enhanced execution of bulk aggregate >>>> calculations >>>> over Streams through offloading calculations to hardware accelerators. >>>> >>>> Streams allow developers to express calculations such that data >>>> parallelism >>>> can be efficiently exploited. Such calculations are prime >>>> candidates for >>>> leveraging enhanced data-oriented instructions on CPUs (such as SIMD >>>> instructions) or offloading to hardware accelerators (such as the >>>> SPARC Data >>>> Accelerator co-processor, further referred to as DAX [1]). >>>> >>>> To identify a path to improving performance and power efficiency, >>>> Project >>>> Trinity will explore how libraries like Streams can be enhanced to >>>> leverage >>>> data processing hardware features to execute Streams more efficiently. >>>> >>>> Directions for exploration include: >>>> - Building a streams-like library optimized for offload to >>>> -- hardware accelerators (such as DAX), or >>>> -- a GPU, or >>>> -- SIMD instructions; >>>> - Optimizations in the Graal compiler to automatically transform >>>> suitable >>>> Streams pipelines, taking advantage of data processing hardware >>>> features; >>>> - Explorations with Project Valhalla to expand the range of effective >>>> acceleration to Streams of value types. 
>>>> >>>> Success will be evaluated based upon: >>>> (1) speedups and resource efficiency gains achieved for a broad >>>> range of >>>> representative streams calculations under offload, >>>> (2) ease of use of the hardware acceleration capability, and >>>> (3) ensuring that there is no time or space overhead for >>>> non-accelerated >>>> calculations. >>>> >>>> Can I please request the support of the Core Libraries Group as the >>>> Sponsoring Group with myself as the Project Lead. >>>> >>>> Warm Regards, >>>> Karthik Ganesan >>>> >>>> [1] https://community.oracle.com/docs/DOC-994842 >>>> >> From paul.sandoz at oracle.com Tue Nov 22 00:11:22 2016 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 21 Nov 2016 16:11:22 -0800 Subject: Project Proposal: Trinity In-Reply-To: <5829E4E9.9080604@oracle.com> References: <5829E4E9.9080604@oracle.com> Message-ID: Hi Karthik, Thanks for sending this. Some thoughts. I can see a number of DAX API focused explorations here: 1) A DAX-specific API bound to libdax using JNI 2) A DAX-specific API bound to libdax using Panama 3) A DAX-like API leveraging technologies in either 1) or 2) Each may allow one to get the most out of a DAX accelerator. I think 2) and 3) are complementary to efforts in Panama. 3) is where alternative implementations leveraging SIMDs and GPUs might also be a good fit. As one goes further down the abstraction road it gets a little fuzzier and there may be duplication and IMO we should be vigilant and consider consolidating particular aspects in such cases. And as one goes further down the abstraction road, to say java.util.stream.Stream, i believe the problem gets much harder. The set of valid j.u.stream.Stream pipelines that might map to DAX operations, and further might map efficiently, is likely to be quite small and to the developer the performance model unclear. Cracking lambdas is certainly not easy, and is likely to be costly as well. 
To some extent project Sumatra ran into such difficulties, although i think in your case the problem is a little easier than that Sumatra is trying to solve. Still, it's not easy to detect and translate appropriate j.u.stream.Stream pipelines into another form. As i understand it DAX provides a number of fairly simple bulk transformation operations over arrays of data, with some flexibility in the element layout of that data. Focusing an API on those operations and layouts is likely to be a more tractable problem. That might include off-heap memory with compatible panama layouts, or on-heap somehow compatible with layouts for simple value types. Cue hand-waving :-) but in the spirit of 3) this might be the sweet spot. Paul. > On 14 Nov 2016, at 08:23, Karthik Ganesan wrote: > > Hi, > > I would like to propose the creation of a new Project: Project Trinity. > > This Project would explore enhanced execution of bulk aggregate calculations over Streams through offloading calculations to hardware accelerators. > > Streams allow developers to express calculations such that data parallelism can be efficiently exploited. Such calculations are prime candidates for leveraging enhanced data-oriented instructions on CPUs (such as SIMD instructions) or offloading to hardware accelerators (such as the SPARC Data Accelerator co-processor, further referred to as DAX [1]). > > To identify a path to improving performance and power efficiency, Project Trinity will explore how libraries like Streams can be enhanced to leverage data processing hardware features to execute Streams more efficiently. 
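The "tractable" shape Paul sketches, a narrow API over simple bulk transformations on arrays, might look roughly like this. All names and signatures below are illustrative assumptions, not a proposed API, and the scalar loops stand in for what a real implementation would dispatch to libdax, a GPU, or SIMD code:

```java
import java.util.Arrays;

// Hypothetical sketch of a DAX-like bulk-operation API: fixed-function
// transformations over arrays, rather than arbitrary lambdas.
public final class BulkOps {

    private BulkOps() {}

    // Scan: return the indices of all elements greater than a threshold.
    public static int[] scanGreaterThan(int[] data, int threshold) {
        int[] hits = new int[data.length];
        int n = 0;
        for (int i = 0; i < data.length; i++) {
            if (data[i] > threshold) {
                hits[n++] = i;
            }
        }
        return Arrays.copyOf(hits, n);
    }

    // Select: gather the elements at the given indices into a new array.
    public static int[] select(int[] data, int[] indices) {
        int[] out = new int[indices.length];
        for (int i = 0; i < indices.length; i++) {
            out[i] = data[indices[i]];
        }
        return out;
    }
}
```

Because each operation is a fixed bulk transformation, an accelerator can run it without any of the lambda-cracking problems that arise when translating arbitrary j.u.stream.Stream pipelines.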
> > Directions for exploration include: > - Building a streams-like library optimized for offload to > -- hardware accelerators (such as DAX), or > -- a GPU, or > -- SIMD instructions; > - Optimizations in the Graal compiler to automatically transform suitable Streams pipelines, taking advantage of data processing hardware features; > - Explorations with Project Valhalla to expand the range of effective acceleration to Streams of value types. > > Success will be evaluated based upon: > (1) speedups and resource efficiency gains achieved for a broad range of representative streams calculations under offload, > (2) ease of use of the hardware acceleration capability, and > (3) ensuring that there is no time or space overhead for non-accelerated calculations. > > Can I please request the support of the Core Libraries Group as the Sponsoring Group with myself as the Project Lead. > > Warm Regards, > Karthik Ganesan > > [1] https://community.oracle.com/docs/DOC-994842 > From lavlozm at gmail.com Wed Nov 23 16:00:43 2016 From: lavlozm at gmail.com (Abdessamed MANSOURI) Date: Wed, 23 Nov 2016 17:00:43 +0100 Subject: Backport Java 7 Regex to Java 6 runtime Message-ID: Hello all, We want to backport the Java 7 Regex implementation to the Java 6 runtime to benefit from some Java 7 Regex features such as Named Capturing Groups ... I have already read some of the OpenJDK 7 code and it doesn't seem too complicated, because java.regex has few dependencies on other packages (java.*, sun.* ...) and no native dependencies. I just want to ask you (if you have time, of course) whether we are authorized under the GPL v2 licence to backport the regex code and put it in a closed-source application, and if you have any pointers or advice for the backport, please share them. Thank you and have a nice day. 
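For context, here is a minimal illustration (not part of the thread) of the Java 7 named-capturing-group feature the backport is after; neither the `(?<name>...)` pattern syntax nor `Matcher.group(String)` exists in the Java 6 java.util.regex API:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NamedGroups {
    public static void main(String[] args) {
        // Java 7: groups can be named in the pattern and read back by name,
        // instead of by fragile positional indices.
        Pattern p = Pattern.compile("(?<year>\\d{4})-(?<month>\\d{2})");
        Matcher m = p.matcher("2016-11");
        if (m.matches()) {
            System.out.println(m.group("year"));   // prints 2016
            System.out.println(m.group("month"));  // prints 11
        }
    }
}
```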
-- Thanks, Abdessamed MANSOURI From aph at redhat.com Wed Nov 23 16:16:25 2016 From: aph at redhat.com (Andrew Haley) Date: Wed, 23 Nov 2016 16:16:25 +0000 Subject: Backport Java 7 Regex to Java 6 runtime In-Reply-To: References: Message-ID: <4d1a2118-7d5c-a197-f1f7-b6beddd519eb@redhat.com> On 23/11/16 16:00, Abdessamed MANSOURI wrote: > We want to backport Java 7 Regex to Java 6 runtime to benifits of some Java > 7 Regex features as Named Capturing Group ..., i already read some of > OpenJDK 7 codes and it doens't seems too much complicated cause there isn't > much dependencies to others packages (java.*, sun.* ...) nor native > dependencies from java.regex > > I just want to ask you (if you have time ofc) if we authorized to backport > regex according to GPL v2 licence and put it in closed source application, > and if you have some points and advice to backport it please share it. Two things: We can't do it if in any way it deviates from the JDK 6 specification. JDK 6 is very old. It's highly questionable if this makes any sense. Andrew. From karthik.ganesan at oracle.com Wed Nov 23 20:43:23 2016 From: karthik.ganesan at oracle.com (Karthik Ganesan) Date: Wed, 23 Nov 2016 14:43:23 -0600 Subject: Project Proposal: Trinity In-Reply-To: References: <5829E4E9.9080604@oracle.com> Message-ID: <5835FF6B.8020202@oracle.com> Hi Paul, Thanks for the well thought out comments and suggestions. Overall, the suggested directions sound reasonable to me as a good starting point for the project team to explore further. Regards, Karthik On 16-11-21 06:11 PM, Paul Sandoz wrote: > Hi Karthik, > > Thanks for sending this. Some thoughts. > > I can see a number of DAX API focused explorations here: > > 1) A DAX-specific API bound to libdax using JNI > 2) A DAX-specific API bound to libdax using Panama > 3) A DAX-like API leveraging technologies in either 1) or 2) > > Each may allow one to get the most out of a DAX accelerator. 
> > I think 2) and 3) are complimentary to efforts in Panama. > > 3) is where alternative implementations leveraging SIMDs and GPUs might also be a good fit. > > As one goes further down the abstraction road it gets a little fuzzier and there may be duplication and IMO we should be vigilant and consider consolidating particular aspects in such cases. > > And as one goes further down the abstraction road, to say java.util.stream.Stream, i believe the problem gets much harder. The set of valid j.u.stream.Stream pipelines that might map to DAX operations, and further might map efficiently, is likely to be quite small and to the developer the performance model unclear. Cracking lambdas is certainly not easy, and is likely to be costly as well. To some extent project Sumatra ran into such difficulties, although i think in your case the problem is a little easier than that Sumatra is trying to solve. Still, it?s not easy to detect and translate appropriate j.u.stream.Stream pipelines into another form. > > As i understand it DAX provides a number of fairly simple bulk transformation operations over arrays of data, with some flexibility in the element layout of that data. Focusing an API on those operations and layouts is likely to be a more tractable problem. That might include off-heap memory with compatible panama layouts, or on-heap somehow compatible with layouts for simple value types. Cue hand-waving :-) but in the spirit of 3) this might be the sweet spot. > > Paul. > > > >> On 14 Nov 2016, at 08:23, Karthik Ganesan wrote: >> >> Hi, >> >> I would like to propose the creation of a new Project: Project Trinity. >> >> This Project would explore enhanced execution of bulk aggregate calculations over Streams through offloading calculations to hardware accelerators. >> >> Streams allow developers to express calculations such that data parallelism can be efficiently exploited. 
Such calculations are prime candidates for leveraging enhanced data-oriented instructions on CPUs (such as SIMD instructions) or offloading to hardware accelerators (such as the SPARC Data Accelerator co-processor, further referred to as DAX [1]). >> >> To identify a path to improving performance and power efficiency, Project Trinity will explore how libraries like Streams can be enhanced to leverage data processing hardware features to execute Streams more efficiently. >> >> Directions for exploration include: >> - Building a streams-like library optimized for offload to >> -- hardware accelerators (such as DAX), or >> -- a GPU, or >> -- SIMD instructions; >> - Optimizations in the Graal compiler to automatically transform suitable Streams pipelines, taking advantage of data processing hardware features; >> - Explorations with Project Valhalla to expand the range of effective acceleration to Streams of value types. >> >> Success will be evaluated based upon: >> (1) speedups and resource efficiency gains achieved for a broad range of representative streams calculations under offload, >> (2) ease of use of the hardware acceleration capability, and >> (3) ensuring that there is no time or space overhead for non-accelerated calculations. >> >> Can I please request the support of the Core Libraries Group as the Sponsoring Group with myself as the Project Lead. 
>> >> Warm Regards, >> Karthik Ganesan >> >> [1] https://community.oracle.com/docs/DOC-994842 >> From akashche at redhat.com Thu Nov 24 09:54:22 2016 From: akashche at redhat.com (Alex Kashchenko) Date: Thu, 24 Nov 2016 09:54:22 +0000 Subject: Backport Java 7 Regex to Java 6 runtime In-Reply-To: <4d1a2118-7d5c-a197-f1f7-b6beddd519eb@redhat.com> References: <4d1a2118-7d5c-a197-f1f7-b6beddd519eb@redhat.com> Message-ID: <5836B8CE.2050200@redhat.com> Hi Abdessamed, On 11/23/2016 04:16 PM, Andrew Haley wrote: > On 23/11/16 16:00, Abdessamed MANSOURI wrote: >> We want to backport Java 7 Regex to Java 6 runtime to benifits of some Java >> 7 Regex features as Named Capturing Group ..., Maybe this regex library will suit your needs - https://github.com/tony19/named-regexp >> i already read some of >> OpenJDK 7 codes and it doens't seems too much complicated cause there isn't >> much dependencies to others packages (java.*, sun.* ...) nor native >> dependencies from java.regex >> >> I just want to ask you (if you have time ofc) if we authorized to backport >> regex according to GPL v2 licence and put it in closed source application, >> and if you have some points and advice to backport it please share it. > > Two things: > > We can't do it if in any way it deviates from the JDK 6 specification. > > JDK 6 is very old. It's highly questionable if this makes any sense. > > Andrew. > -- -Alex From lavlozm at gmail.com Thu Nov 24 12:30:18 2016 From: lavlozm at gmail.com (Abdessamed MANSOURI) Date: Thu, 24 Nov 2016 13:30:18 +0100 Subject: Backport Java 7 Regex to Java 6 runtime In-Reply-To: <5836B8CE.2050200@redhat.com> References: <4d1a2118-7d5c-a197-f1f7-b6beddd519eb@redhat.com> <5836B8CE.2050200@redhat.com> Message-ID: Thank you all for your time. 
Andrew, we have an application that runs on Java 6 (migrating to Java 7 would cost too much time) and we want to benefit from some Java 7 features, so we want to backport them (in our client's environment). I'm not suggesting this to the community (I'm not an Author or a Committer ...), but since you brought it up, I would like to make a comment: why has the Java standard library been distributed for so many years as a monolithic library? Nearly all classes are in rt.jar, bundled with the sun.*, com.sun.* ... packages. If there were a separate jar for regex, the upgrade would be a little easier (I know that is another subject, but I think it is the source of the problem). Alex, thank you for the link; unfortunately it doesn't support Unicode character classes, so I think the best solution is to proceed with the backport. 2016-11-24 10:54 GMT+01:00 Alex Kashchenko : > Hi Abdessamed, > > On 11/23/2016 04:16 PM, Andrew Haley wrote: > >> On 23/11/16 16:00, Abdessamed MANSOURI wrote: >> >>> We want to backport Java 7 Regex to Java 6 runtime to benifits of some >>> Java >>> 7 Regex features as Named Capturing Group ..., >>> >> > Maybe this regex library will suite your needs - > https://github.com/tony19/named-regexp > > > i already read some of >>> OpenJDK 7 codes and it doens't seems too much complicated cause there >>> isn't >>> much dependencies to others packages (java.*, sun.* ...) nor native >>> dependencies from java.regex >>> >>> I just want to ask you (if you have time ofc) if we authorized to >>> backport >>> regex according to GPL v2 licence and put it in closed source >>> application, >>> and if you have some points and advice to backport it please share it. >>> >> >> Two things: >> >> We can't do it if in any way it deviates from the JDK 6 specification. >> >> JDK 6 is very old. It's highly questionable if this makes any sense. >> >> Andrew.
>> >> > -- > -Alex > > -- Thanks, Abdessamed MANSOURI From aph at redhat.com Thu Nov 24 13:09:35 2016 From: aph at redhat.com (Andrew Haley) Date: Thu, 24 Nov 2016 13:09:35 +0000 Subject: Backport Java 7 Regex to Java 6 runtime In-Reply-To: References: <4d1a2118-7d5c-a197-f1f7-b6beddd519eb@redhat.com> <5836B8CE.2050200@redhat.com> Message-ID: On 24/11/16 12:30, Abdessamed MANSOURI wrote: > Andrew, we have an application which turns of Java 6 (the migration to Java > 7 costs too much time) and we want to benifits of some Java 7 features, so > we want to backport it (in our client enviroment), i'm not suggesting that > to the community (I'm not author nor committer ...), Sure. > but as you invoked it, i would like to make a comment, why does Java > standard lib over many years was distributed as monolithic lib?, > nearly all classes are in rt.jar bundled with sun.*, com.sun.* > ... packages , if there were a jar for regex the upgrade would be a > little easy (i know that is another subject but its the source of > problem as i think). Not entirely, because other parts of the runtime library use regexps. Surely it's easy anyway: all you have to do is import the regex library and put it in an appropriate package. But to answer your original question a little better, the code is GPL + Classpath exception, which might well be what you need legally speaking. But I am not a lawyer and cannot give you legal advice. Andrew. 
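For readers unfamiliar with the Java 7 feature motivating the backport: named capturing groups, added to java.util.regex in Java 7, let a match be read back by name via `(?<name>...)` and `Matcher.group(String)`. A minimal sketch (the pattern and input are invented for illustration; it runs on Java 7+, while only the numbered-group access at the end compiles against Java 6):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NamedGroups {
    public static void main(String[] args) {
        // (?<name>...) and Matcher.group(String) are Java 7 additions;
        // Java 6 supports only the numbered form, e.g. m.group(1).
        Pattern p = Pattern.compile("(?<year>\\d{4})-(?<month>\\d{2})-(?<day>\\d{2})");
        Matcher m = p.matcher("Backported on 2016-11-24.");
        if (m.find()) {
            System.out.println(m.group("year"));  // prints 2016
            System.out.println(m.group("month")); // prints 11
            // The equivalent numbered access also works on Java 6:
            System.out.println(m.group(1));       // prints 2016
        }
    }
}
```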
From thomas.wuerthinger at oracle.com Fri Nov 25 23:07:28 2016 From: thomas.wuerthinger at oracle.com (Thomas Wuerthinger) Date: Sat, 26 Nov 2016 00:07:28 +0100 Subject: Project Proposal: Trinity In-Reply-To: <5835FF6B.8020202@oracle.com> References: <5829E4E9.9080604@oracle.com> <5835FF6B.8020202@oracle.com> Message-ID: <361E4150-5C8E-4A24-A537-9E7E8A9D9172@oracle.com> Karthik, On your point below about automatically transforming suitable Java code via the Graal compiler for execution with DAX: the Graal team is very interested in assisting you in exploring this area. It could also make DAX more widely applicable to third-party Java data processing libraries. - thomas > On 23 Nov 2016, at 21:43, Karthik Ganesan wrote: > > Hi Paul, > > Thanks for the well thought out comments and suggestions. Overall, the suggested directions sound reasonable to me as a good starting point for the project team to explore further. > > Regards, > Karthik > > On 16-11-21 06:11 PM, Paul Sandoz wrote: >> Hi Karthik, >> >> Thanks for sending this. Some thoughts. >> >> I can see a number of DAX API focused explorations here: >> >> 1) A DAX-specific API bound to libdax using JNI >> 2) A DAX-specific API bound to libdax using Panama >> 3) A DAX-like API leveraging technologies in either 1) or 2) >> >> Each may allow one to get the most out of a DAX accelerator. >> >> I think 2) and 3) are complementary to efforts in Panama. >> >> 3) is where alternative implementations leveraging SIMDs and GPUs might also be a good fit. >> >> As one goes further down the abstraction road it gets a little fuzzier and there may be duplication and IMO we should be vigilant and consider consolidating particular aspects in such cases. >> >> And as one goes further down the abstraction road, to say java.util.stream.Stream, i believe the problem gets much harder.
The set of valid j.u.stream.Stream pipelines that might map to DAX operations, and further might map efficiently, is likely to be quite small and to the developer the performance model unclear. Cracking lambdas is certainly not easy, and is likely to be costly as well. To some extent project Sumatra ran into such difficulties, although i think in your case the problem is a little easier than that Sumatra is trying to solve. Still, it's not easy to detect and translate appropriate j.u.stream.Stream pipelines into another form. >> >> As i understand it DAX provides a number of fairly simple bulk transformation operations over arrays of data, with some flexibility in the element layout of that data. Focusing an API on those operations and layouts is likely to be a more tractable problem. That might include off-heap memory with compatible panama layouts, or on-heap somehow compatible with layouts for simple value types. Cue hand-waving :-) but in the spirit of 3) this might be the sweet spot. >> >> Paul. >> >> >> >>> On 14 Nov 2016, at 08:23, Karthik Ganesan wrote: >>> >>> Hi, >>> >>> I would like to propose the creation of a new Project: Project Trinity. >>> >>> This Project would explore enhanced execution of bulk aggregate calculations over Streams through offloading calculations to hardware accelerators. >>> >>> Streams allow developers to express calculations such that data parallelism can be efficiently exploited. Such calculations are prime candidates for leveraging enhanced data-oriented instructions on CPUs (such as SIMD instructions) or offloading to hardware accelerators (such as the SPARC Data Accelerator co-processor, further referred to as DAX [1]). >>> >>> To identify a path to improving performance and power efficiency, Project Trinity will explore how libraries like Streams can be enhanced to leverage data processing hardware features to execute Streams more efficiently.
>>> >>> Directions for exploration include: >>> - Building a streams-like library optimized for offload to >>> -- hardware accelerators (such as DAX), or >>> -- a GPU, or >>> -- SIMD instructions; >>> - Optimizations in the Graal compiler to automatically transform suitable Streams pipelines, taking advantage of data processing hardware features; >>> - Explorations with Project Valhalla to expand the range of effective acceleration to Streams of value types. >>> >>> Success will be evaluated based upon: >>> (1) speedups and resource efficiency gains achieved for a broad range of representative streams calculations under offload, >>> (2) ease of use of the hardware acceleration capability, and >>> (3) ensuring that there is no time or space overhead for non-accelerated calculations. >>> >>> Can I please request the support of the Core Libraries Group as the Sponsoring Group with myself as the Project Lead. >>> >>> Warm Regards, >>> Karthik Ganesan >>> >>> [1] https://community.oracle.com/docs/DOC-994842 >>> > From lavlozm at gmail.com Sat Nov 26 01:37:30 2016 From: lavlozm at gmail.com (Abdessamed MANSOURI) Date: Sat, 26 Nov 2016 02:37:30 +0100 Subject: Backport Java 7 Regex to Java 6 runtime In-Reply-To: References: <4d1a2118-7d5c-a197-f1f7-b6beddd519eb@redhat.com> <5836B8CE.2050200@redhat.com> Message-ID: Thank you Andrew for your time and response, we did it, it was easy as you've mentioned, we will check the license with a lawyer. 2016-11-26 2:36 GMT+01:00 Abdessamed MANSOURI : > Thank you Andrew for your time and response, we did it, it was easy as > you've mentioned, we will check the license with a lawyer. > > 2016-11-24 14:29 GMT+01:00 Daniel Drozdzewski < > daniel.drozdzewski at gmail.com>: > >> Abdessamed, >> >> what *sort* of Unicode support do you require? >> >> I appreciate that you might not want to disclose too much, however >> posting a generic regex question somewhere (Stack Overflow) would yield >> some valuable insights.
>> >> Porting SDK libraries between Java versions sounds very drastic. >> >> Please note that the mentioned library uses java.util.regex.Pattern so you >> should be fine (I think). >> >> Please inspect Unicode support in the Character class in Java 6: >> https://docs.oracle.com/javase/6/docs/api/java/lang/Character.html >> >> ... specifically look at General Unicode categories - these can be used >> in patterns. >> >> >> >> >> >> >> On 24 November 2016 at 12:30, Abdessamed MANSOURI >> wrote: >> >>> Thank you all for your time. >>> >>> Andrew, we have an application which turns of Java >>> 6 (the migration to >>> Java >>> 7 costs too much time) and we want to benifits of some Java 7 features, >>> so >>> we want to backport it (in our client enviroment), i'm not suggesting >>> that >>> to the community (I'm not author nor committer ...), but as you invoked >>> it, >>> i would like to make a comment, why does Java standard lib over many >>> years >>> was distributed as monolithic lib?, nearly all classes are in rt.jar >>> bundled with sun.*, com.sun.* ... packages , if there were a jar for >>> regex >>> the upgrade would be a little easy (i know that is another subject but >>> its >>> the source of problem as i think). >>> >>> Alex, Thank you for the link, unfortunnately it doesn't support Unicode >>> character class, so i think the best solution is to process on backport.
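Daniel's point can be illustrated: Unicode general-category classes such as \p{Lu} are already supported by java.util.regex on Java 6, so some Unicode needs may not require a backport at all (script classes like \p{IsGreek} and binary properties like \p{IsAlphabetic} did arrive only with Java 7). A small sketch with invented inputs:

```java
import java.util.regex.Pattern;

public class UnicodeCategories {
    public static void main(String[] args) {
        // \p{Lu} (uppercase letter) and \p{Nd} (decimal digit) are Unicode
        // general categories, supported by java.util.regex since before Java 7.
        // "\u03A3" is Greek capital sigma; "\u0664\u0662" are Arabic-Indic digits.
        boolean upper = Pattern.matches("\\p{Lu}+", "\u03A3OK");
        boolean digits = Pattern.matches("\\p{Nd}+", "\u0664\u0662");
        System.out.println(upper + " " + digits); // prints true true
    }
}
```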
>>> >>> 2016-11-24 10:54 GMT+01:00 Alex Kashchenko : >>> >>> > Hi Abdessamed, >>> > >>> > On 11/23/2016 04:16 PM, Andrew Haley wrote: >>> > >>> >> On 23/11/16 16:00, Abdessamed MANSOURI wrote: >>> >> >>> >>> We want to backport Java 7 Regex to Java 6 runtime to benifits of >>> some >>> >>> Java >>> >>> 7 Regex features as Named Capturing Group ..., >>> >>> >>> >> >>> > Maybe this regex library will suite your needs - >>> > https://github.com/tony19/named-regexp >>> > >>> > >>> > i already read some of >>> >>> OpenJDK 7 codes and it doens't seems too much complicated cause there >>> >>> isn't >>> >>> much dependencies to others packages (java.*, sun.* ...) nor native >>> >>> dependencies from java.regex >>> >>> >>> >>> I just want to ask you (if you have time ofc) if we authorized to >>> >>> backport >>> >>> regex according to GPL v2 licence and put it in closed source >>> >>> application, >>> >>> and if you have some points and advice to backport it please share >>> it. >>> >>> >>> >> >>> >> Two things: >>> >> >>> >> We can't do it if in any way it deviates from the JDK 6 specification. >>> >> >>> >> JDK 6 is very old. It's highly questionable if this makes any sense. >>> >> >>> >> Andrew. >>> >> >>> >> >>> > -- >>> > -Alex >>> > >>> > >>> >>> >>> -- >>> Thanks, >>> >>> Abdessamed MANSOURI >>> >> >> >> >> -- >> Daniel Drozdzewski >> > > > > -- > Abdessamed MANSOURI > Consultant développeur en nouvelles technologies - ALTI Algérie > Lot N° 30 - Lotissement 20 août 1955 - Oued Romane - EL Achour - 16106 -
> Alger > Tél 1 : 06 73 37 72 58 > Tél 2 : 05 56 66 57 56 > Email : amansouri at alti-dz.com > -- Thanks, Abdessamed MANSOURI From karthik.ganesan at oracle.com Mon Nov 28 20:39:40 2016 From: karthik.ganesan at oracle.com (Karthik Ganesan) Date: Mon, 28 Nov 2016 14:39:40 -0600 Subject: Project Proposal: Trinity In-Reply-To: <361E4150-5C8E-4A24-A537-9E7E8A9D9172@oracle.com> References: <5829E4E9.9080604@oracle.com> <5835FF6B.8020202@oracle.com> <361E4150-5C8E-4A24-A537-9E7E8A9D9172@oracle.com> Message-ID: <583C960C.7030309@oracle.com> Hi Thomas, Thank you for the support and I look forward to the collaboration. Regards, Karthik On 16-11-25 05:07 PM, Thomas Wuerthinger wrote: > Karthik, > > On your point below to automatically transform suitable Java code via the Graal compiler for execution with DAX: The Graal team is very interested to assist you in exploring this area. It could make DAX more widely applicable also to third party Java data processing libraries. > > - thomas > > >> On 23 Nov 2016, at 21:43, Karthik Ganesan wrote: >> >> Hi Paul, >> >> Thanks for the well thought out comments and suggestions. Overall, the suggested directions sound reasonable to me as a good starting point for the project team to explore further. >> >> Regards, >> Karthik >> >> On 16-11-21 06:11 PM, Paul Sandoz wrote: >>> Hi Karthik, >>> >>> Thanks for sending this. Some thoughts. >>> >>> I can see a number of DAX API focused explorations here: >>> >>> 1) A DAX-specific API bound to libdax using JNI >>> 2) A DAX-specific API bound to libdax using Panama >>> 3) A DAX-like API leveraging technologies in either 1) or 2) >>> >>> Each may allow one to get the most out of a DAX accelerator. >>> >>> I think 2) and 3) are complementary to efforts in Panama. >>> >>> 3) is where alternative implementations leveraging SIMDs and GPUs might also be a good fit.
>>> >>> As one goes further down the abstraction road it gets a little fuzzier and there may be duplication and IMO we should be vigilant and consider consolidating particular aspects in such cases. >>> >>> And as one goes further down the abstraction road, to say java.util.stream.Stream, i believe the problem gets much harder. The set of valid j.u.stream.Stream pipelines that might map to DAX operations, and further might map efficiently, is likely to be quite small and to the developer the performance model unclear. Cracking lambdas is certainly not easy, and is likely to be costly as well. To some extent project Sumatra ran into such difficulties, although i think in your case the problem is a little easier than that Sumatra is trying to solve. Still, it's not easy to detect and translate appropriate j.u.stream.Stream pipelines into another form. >>> >>> As i understand it DAX provides a number of fairly simple bulk transformation operations over arrays of data, with some flexibility in the element layout of that data. Focusing an API on those operations and layouts is likely to be a more tractable problem. That might include off-heap memory with compatible panama layouts, or on-heap somehow compatible with layouts for simple value types. Cue hand-waving :-) but in the spirit of 3) this might be the sweet spot. >>> >>> Paul. >>> >>> >>> >>>> On 14 Nov 2016, at 08:23, Karthik Ganesan wrote: >>>> >>>> Hi, >>>> >>>> I would like to propose the creation of a new Project: Project Trinity. >>>> >>>> This Project would explore enhanced execution of bulk aggregate calculations over Streams through offloading calculations to hardware accelerators. >>>> >>>> Streams allow developers to express calculations such that data parallelism can be efficiently exploited.
Such calculations are prime candidates for leveraging enhanced data-oriented instructions on CPUs (such as SIMD instructions) or offloading to hardware accelerators (such as the SPARC Data Accelerator co-processor, further referred to as DAX [1]). >>>> >>>> To identify a path to improving performance and power efficiency, Project Trinity will explore how libraries like Streams can be enhanced to leverage data processing hardware features to execute Streams more efficiently. >>>> >>>> Directions for exploration include: >>>> - Building a streams-like library optimized for offload to >>>> -- hardware accelerators (such as DAX), or >>>> -- a GPU, or >>>> -- SIMD instructions; >>>> - Optimizations in the Graal compiler to automatically transform suitable Streams pipelines, taking advantage of data processing hardware features; >>>> - Explorations with Project Valhalla to expand the range of effective acceleration to Streams of value types. >>>> >>>> Success will be evaluated based upon: >>>> (1) speedups and resource efficiency gains achieved for a broad range of representative streams calculations under offload, >>>> (2) ease of use of the hardware acceleration capability, and >>>> (3) ensuring that there is no time or space overhead for non-accelerated calculations. >>>> >>>> Can I please request the support of the Core Libraries Group as the Sponsoring Group with myself as the Project Lead. >>>> >>>> Warm Regards, >>>> Karthik Ganesan >>>> >>>> [1] https://community.oracle.com/docs/DOC-994842 >>>> From dalibor.topic at oracle.com Tue Nov 29 11:46:06 2016 From: dalibor.topic at oracle.com (dalibor topic) Date: Tue, 29 Nov 2016 12:46:06 +0100 Subject: JDK 9 Outreach survey summary Message-ID: Hi, thanks to everyone who participated in the JDK 9 Outreach survey! There were 37 respondents in total. 
The respondents were active in a broad range of free and open source software projects, from Apache Software Foundation projects like Apache Ant, Apache BookKeeper, Apache Kafka, Apache Lucene/Solr, Apache Maven and Apache POI via Eclipse Foundation projects such as Eclipse and vert.x, and language runtimes such as Apache Groovy, Clojure and Ruby, over enterprise development oriented projects such as Hibernate, WildFly, JBoss WS, Spring Security, MVC 1.0 and SnoopEE to independent projects such as JUnit 5, GraphHopper, Orient DB, Groovy FX, JavaSlang, JITWatch, JMRI, LWJGL3 and of course OpenJDK itself, along with Project Jigsaw. 33 respondents (i.e. 89%) have tried building or running their project with JDK 9 Early Access builds, while just 4 (i.e. 11%) had not done so at the time of the survey. The majority of respondents (20, i.e. 54%) indicated they planned to support JDK 9 in their project within 6 months after JDK 9 GA. The next largest group (16%, i.e. 6 respondents) indicated that they planned to support JDK 9 immediately from the get-go, i.e. the GA date. The third largest group (14%, i.e. 5 respondents) indicated that they planned to support JDK 9 within 12 months of JDK 9 GA. Along with one respondent whose response differentiated between immediate support at JDK 9 GA and within a few months after the release based on the type of project they were working on, that brings the tally of respondents planning to support JDK 9 in their projects within the first year of JDK 9's release to 32, i.e. 86%. 30 respondents (i.e. 81%) rated their experience migrating or adopting JDK 9 so far, with the average rating of 3.2 falling between Mediocre (3.0) and Good (4.0). The largest group of respondents (40%, i.e. 12) rated it as Good, while 11, i.e. 37%, rated it as Mediocre.
The comments provided some insight into the very varied challenges, from balancing support for JDK 1.5 - JDK 9 in a code base, to challenges surfaced by strong encapsulation of JDK internals, such as instrumentation of such classes, edge cases with reflection-based hacks in popular libraries, and the general pace of the larger ecosystem of dependencies catching up and adjusting to JDK 9 changes. On the desktop side, one respondent reported insurmountable challenges in getting Eclipse to build JavaFX projects using JDK 9, while another one reported that they ended up re-implementing desktop support in their project using JavaScript to be able to support both JDK 8 and JDK 9 due to changes in native platform APIs on OS X in particular. One respondent felt that the process was nice, and that everyone from OpenJDK, in particular Oracle and Red Hat, was being very helpful. Three respondents provided URLs to announcements of their projects' plans to support features from JDK 9: https://marketplace.eclipse.org/content/java-9-support-beta-neon , http://jmri.org/releasenotes/jmri4.5.3.shtml and https://cwiki.apache.org/confluence/display/MAVEN/Java+9+-+Jigsaw . Last but not least, 8 respondents provided general feedback. One participant pointed out that the available information is incomplete in some accounts, and spread across multiple documents, with undocumented or not yet implemented compiler options, along with the not yet complete specification. The regular changes in the JDK & JRE, in particular regarding class loading, had affected their project multiple times. Another participant remarked that uncertainty around Project Jigsaw was delaying adding support for JDK 9 in their project. One participant pointed out that they could begin testing their project efficiently only once their build tool supports JDK 9.
Meanwhile, another participant pointed out that their testing was limited to JARs without module information, and as such was testing the module system in a limited fashion. Finally, one participant commented that their inability to comment directly on reported bugs made bug handling cumbersome. cheers, dalibor topic -- Dalibor Topic | Principal Product Manager Phone: +494089091214 | Mobile: +491737185961 ORACLE Deutschland B.V. & Co. KG | Kühnehöfe 5 | 22761 Hamburg ORACLE Deutschland B.V. & Co. KG Hauptverwaltung: Riesstr. 25, D-80992 München Registergericht: Amtsgericht München, HRA 95603 Komplementärin: ORACLE Deutschland Verwaltung B.V. Hertogswetering 163/167, 3543 AS Utrecht, Niederlande Handelsregister der Handelskammer Midden-Niederlande, Nr. 30143697 Geschäftsführer: Alexander van der Ven, Jan Schultheiss, Val Maher Oracle is committed to developing practices and products that help protect the environment