From doug.simon at oracle.com Tue Jan 15 22:49:45 2019
From: doug.simon at oracle.com (Doug Simon)
Date: Tue, 15 Jan 2019 23:49:45 +0100
Subject: JVMCI 0.54 released
Message-ID: <68831D77-85FB-401B-B9A8-34866A33612F@oracle.com>

Changes in JVMCI 0.54 include:

- GR-13330: Add workaround for resolving Object.clone against an array type (JDK-8215748).
- GR-13308: Support use of JVMCI class loader when using JVMCI shared library.
- GR-13307: Replace JVMCIJavaMode with boolean JVMCIUseSharedLib flag.
- GR-12528: Remove HotSpotJVMCIMetaAccessContext.
- GR-13066: Push/pop local JNI frame for top-level call into JVMCI shared library.
- GR-13075: Use VM call to determine if a type can have its methods intrinsified.

The OracleJDK-based binaries will soon be available at:
http://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html

The OpenJDK Windows, Linux and macOS binaries are mirrored at:
https://github.com/graalvm/openjdk8-jvmci-builder/releases/tag/jvmci-0.54

-Doug

From usrinivasan at twitter.com Thu Jan 17 00:47:48 2019
From: usrinivasan at twitter.com (Uma Srinivasan)
Date: Wed, 16 Jan 2019 16:47:48 -0800
Subject: Graal workshop/CGO 2019 Call For Participation
In-Reply-To: <296533564.683804.1547685347123@mail.yahoo.com>
References: <3808f18500eb5460c61d18a87.98850042a3.20190112182252.f25761a304.9a9d2d04@mail254.atl101.mcdlv.net> <296533564.683804.1547685347123@mail.yahoo.com>
Message-ID:

Registration for the Graal workshop at CGO is open: http://cgo.org/cgo2019/registration/

2019 IEEE/ACM International Symposium on Code Generation and Optimization
Feb 16th to 20th, 2019, Washington DC, USA
(Co-located with PPoPP, HPCA and CC.)

The International Symposium on Code Generation and Optimization (CGO) provides a premier venue to bring together researchers and practitioners working at the interface of hardware and software on a wide range of optimization and code generation techniques and related issues. The conference spans the spectrum from purely static to fully dynamic approaches, and from pure software-based methods to specific architectural features and support for code generation and optimization.

Registration is open at http://cgo.org/cgo2019/registration/

**Early registration ends on January 31st, 2019, at 5 pm EST.**

**CGO 2019 is also offering travel support for students attending US or non-US universities.** For more information see: http://cgo.org/cgo2019/travel-grants/

*Workshops & Tutorials*

The following workshops and tutorials will be co-located with CGO: http://cgo.org/cgo2019/acceptedWorkshopTutorial/

-- Tools and Languages Mentoring Workshop (TLMW). Organizers: Michel Steuwer
-- LLVM Performance Workshop. Organizers: Johannes Doerfert, Sebastian Pop, Aditya Kumar
-- Optimization, Modeling, Auto-Tuning and Space Exploration (OMASE) Workshop. Organizers: Martin KONG, Tobias GROSSER, Mary HALL
-- Science, Art, Voodoo: Using and Developing The Graal JIT Compiler. Organizers: Uma Srinivasan, Chris Thalinger, Flavio Brasil
-- The First International Workshop on the Intersection of High Performance Computing and Machine Learning. Organizers: Jiajia Li, Guoyang Chen, Shuaiwen Leon Song, Guangming Tan, and Weifeng Zhang
-- International Workshop On Code Optimisation For Multi And Many-Cores (COSMIC). Organizers: Pavlos Petoumenos, Chris Cummins, Zheng Wang, Hugh Leather
-- Workshop on Compilers for Machine Learning. Organizers: Albert Cohen, Jacques Pienaar and Tatiana Shpeisman
-- Tutorial -- Vulkan: Graphics and compute compilation on GPU.
   Organizers: Chu-Cheow Lim, Ruihao Zhang, Chunling Hu, Alexander Bakst

*Accepted Paper List* http://cgo.org/cgo2019/accepted/

For more information, please visit the CGO 2019 conference web site at http://cgo.org/cgo2019/

*Copyright © 2019 International Symposium on Code Generation and Optimization, All rights reserved.*

*Our mailing address is:*
International Symposium on Code Generation and Optimization
2 Penn Plaza, Suite 701
New York, NY 10121

From yifei.zhang1992 at outlook.com Thu Jan 17 08:57:19 2019
From: yifei.zhang1992 at outlook.com (Yifei Zhang)
Date: Thu, 17 Jan 2019 08:57:19 +0000
Subject: Building GraalVM upon JDK 11
Message-ID: <0E86E4A9-2E1E-4F1F-AA78-07BEEC18DFD3@outlook.com>

Dear all,

I am trying to build Graal suites other than the compiler (e.g., substratevm, graaljs) based on JDK 11. The following commands are executed under the graal/vm directory:

$ export JAVA_HOME=/jdk11
$ export EXTRA_JAVA_HOMES=/jdk8
$ mx --dynamicimports /substratevm,/graal-nodejs build

But the build fails. It seems that mx is trying to find some files, such as jvmci-services.jar, under the directory $JAVA_HOME/jre/lib, which does not exist in JDK 11. I was wondering if the build commands above are correct. Is anyone else trying to build Graal on JDK 11? Has anyone encountered the same issue?

Cheers,
Yifei

From doug.simon at oracle.com Thu Jan 17 09:21:15 2019
From: doug.simon at oracle.com (Doug Simon)
Date: Thu, 17 Jan 2019 10:21:15 +0100
Subject: Building GraalVM upon JDK 11
In-Reply-To: <0E86E4A9-2E1E-4F1F-AA78-07BEEC18DFD3@outlook.com>
References: <0E86E4A9-2E1E-4F1F-AA78-07BEEC18DFD3@outlook.com>
Message-ID: <0FAA4CE8-0107-4D3F-A9D9-B6612B3973A5@oracle.com>

Hi Yifei,

Building a complete GraalVM is not yet supported on any JDK other than 8. We are currently working on native-image support for later JDKs. The https://github.com/oracle/graal/blob/master/vm/README.md file will be updated once this support is ready.

-Doug

> On 17 Jan 2019, at 09:57, Yifei Zhang wrote:
>
> Dear all,
> I am trying to build Graal suites other than the compiler (e.g., substratevm, graaljs) based on JDK 11. The following commands are executed under the graal/vm directory:
>
> $ export JAVA_HOME=/jdk11
> $ export EXTRA_JAVA_HOMES=/jdk8
> $ mx --dynamicimports /substratevm,/graal-nodejs build
>
> But the build fails. It seems that mx is trying to find some files, such as jvmci-services.jar, under the directory $JAVA_HOME/jre/lib, which does not exist in JDK 11.
> I was wondering if the build commands above are correct. Is anyone else trying to build Graal on JDK 11? Has anyone encountered the same issue?
>
> Cheers,
> Yifei

From jean-philippe.halimi at intel.com Fri Jan 18 00:05:37 2019
From: jean-philippe.halimi at intel.com (Halimi, Jean-Philippe)
Date: Fri, 18 Jan 2019 00:05:37 +0000
Subject: Dynamic type in method substitution
Message-ID:

Dear all,

Happy new year! :)

I am spending time implementing the DigestBase::shaImplCompressMultiBlock stub for Graal, as it provides a hashing performance improvement. The method is implemented in the DigestBase base class, but the stub depends on which derived class the actual instance is of.
Quick diagram:

DigestBase
 - Implements shaImplCompressMB
 |       \        \
SHA1    SHA2     SHA5
state   state    state

Each SHA class has its own state object, but DigestBase implements the shaImplCompressMultiBlock intrinsified method. This stub needs to figure out which SHA class it is an instance of.

I have the following code:

    @MethodSubstitution(isStatic = false)
    static int shaImplCompressMB(Object receiver, byte[] buf, int ofs, int limit) {
        ResolvedJavaType type = INJECTED_INTRINSIC_CONTEXT.getIntrinsicMethod().getDeclaringClass();
        ResolvedJavaType sha1 = HotSpotReplacementsUtil.getType(INJECTED_INTRINSIC_CONTEXT, "sun/security/provider/SHA");
        ResolvedJavaType sha256 = HotSpotReplacementsUtil.getType(INJECTED_INTRINSIC_CONTEXT, "sun/security/provider/SHA2");
        ResolvedJavaType sha512 = HotSpotReplacementsUtil.getType(INJECTED_INTRINSIC_CONTEXT, "sun/security/provider/SHA5");

        if (type == sha1) {
            Object realReceiver = getRealReceiver(receiver);
            Object state = getState(realReceiver, "sun/security/provider/SHA");
            return HotSpotBackend.shaImplCompressMBStub(getBufAddr(buf, ofs), getStateAddr(state), ofs, limit);
        } else if (type == sha256) {
            Object realReceiver = getRealReceiver(receiver);
            Object state = getState(realReceiver, "sun/security/provider/SHA2");
            return HotSpotBackend.sha2ImplCompressMBStub(getBufAddr(buf, ofs), getStateAddr(state), ofs, limit);
        } else if (type == sha512) {
            Object realReceiver = getRealReceiver(receiver);
            Object state = getState(realReceiver, "sun/security/provider/SHA5");
            return HotSpotBackend.sha5ImplCompressMBStub(getBufAddr(buf, ofs), getStateAddr(state), ofs, limit);
        } else {
            return -1;
        }
    }

I would like to know if this is the correct way to proceed, especially lines 3, 4, 5 and 6, which respectively get the type of the object instance and the types of each derived class.

Thank you for your help.
-Jp

From tom.rodriguez at oracle.com Fri Jan 18 01:08:19 2019
From: tom.rodriguez at oracle.com (Tom Rodriguez)
Date: Thu, 17 Jan 2019 17:08:19 -0800
Subject: Dynamic type in method substitution
In-Reply-To:
References:
Message-ID: <5b0ca4b2-2776-b84d-cb54-831c9538f200@oracle.com>

Halimi, Jean-Philippe wrote on 1/17/19 4:05 PM:
> Dear all,
>
> Happy new year! :)
>
> I am spending time implementing the DigestBase::shaImplCompressMultiBlock stub for Graal, as it provides a hashing performance improvement. The method is implemented in the DigestBase base class, but the stub depends on which derived class the actual instance is of.
>
> Quick diagram:
>
> DigestBase
>  - Implements shaImplCompressMB
>  |       \        \
> SHA1    SHA2     SHA5
> state   state    state
>
> Each SHA class has its own state object, but DigestBase implements the shaImplCompressMultiBlock intrinsified method. This stub needs to figure out which SHA class it is an instance of.

C2 does this using a complicated PredicatedCallGenerator that appears to forcibly load the SHA1/SHA256/SHA512 classes during code generation, so I think this is a reasonable Graal interpretation of this pattern.
>
> I have the following code:
>
> @MethodSubstitution(isStatic = false)
> static int shaImplCompressMB(Object receiver, byte[] buf, int ofs, int limit) {
>     ResolvedJavaType type = INJECTED_INTRINSIC_CONTEXT.getIntrinsicMethod().getDeclaringClass();
>     ResolvedJavaType sha1 = HotSpotReplacementsUtil.getType(INJECTED_INTRINSIC_CONTEXT, "sun/security/provider/SHA");
>     ResolvedJavaType sha256 = HotSpotReplacementsUtil.getType(INJECTED_INTRINSIC_CONTEXT, "sun/security/provider/SHA2");
>     ResolvedJavaType sha512 = HotSpotReplacementsUtil.getType(INJECTED_INTRINSIC_CONTEXT, "sun/security/provider/SHA5");
>
>     if (type == sha1) {
>         Object realReceiver = getRealReceiver(receiver);
>         Object state = getState(realReceiver, "sun/security/provider/SHA");
>         return HotSpotBackend.shaImplCompressMBStub(getBufAddr(buf, ofs), getStateAddr(state), ofs, limit);
>     } else if (type == sha256) {
>         Object realReceiver = getRealReceiver(receiver);
>         Object state = getState(realReceiver, "sun/security/provider/SHA2");
>         return HotSpotBackend.sha2ImplCompressMBStub(getBufAddr(buf, ofs), getStateAddr(state), ofs, limit);
>     } else if (type == sha512) {
>         Object realReceiver = getRealReceiver(receiver);
>         Object state = getState(realReceiver, "sun/security/provider/SHA5");
>         return HotSpotBackend.sha5ImplCompressMBStub(getBufAddr(buf, ofs), getStateAddr(state), ofs, limit);
>     } else {
>         return -1;
>     }
> }
>
> I would like to know if this is the correct way to proceed, especially lines 3, 4, 5 and 6, which respectively get the type of the object instance and the types of each derived class.

Your type test is wrong though. You can't use INJECTED_INTRINSIC_CONTEXT like this. It's a placeholder value for an @Fold method and can't simply be used directly. It's also not the value you want, since it would be DigestBase I think. You need to be type checking the receiver. I think it should be:

    if (doInstanceof(sha1, realReceiver)) {

See CipherBlockChainingSubstitutions for an example of this.

I think those are ok, but make sure that HotSpotReplacementsUtil.getType has an @Fold annotation on it. Otherwise it will try to inline the whole thing instead of appearing as a constant. I thought this had already appeared in another PR but don't see it right now. It should look like this:

    @Fold
    static ResolvedJavaType getType(@InjectedParameter IntrinsicContext context, String typeName) {

Shouldn't the final return -1 case be a dispatch to the real method in case there are other subclasses? As in:

    } else {
        return shaImplCompressMB(receiver, buf, ofs, limit);
    }

This should result in a normal invoke site being emitted.

I think that should get you close to a working result.

tom

>
> Thank you for your help.
> -Jp
>

From tbaldridge at gmail.com Sun Jan 20 20:52:43 2019
From: tbaldridge at gmail.com (Timothy Baldridge)
Date: Sun, 20 Jan 2019 13:52:43 -0700
Subject: Efficient function composition with Truffle
Message-ID:

Something that I haven't figured out yet in my studies of Truffle is the best way to deal with higher-order functions in a Truffle language. Let's say we have a function like comp from Clojure:

    (defn comp [a b]
      (fn inner [x]
        (a (b x))))

The semantics here are fairly simple: comp takes two functions and composes them. A problem on the JVM (and I think in Truffle) is that these call sites quickly become megamorphic. There may be thousands of functions in a runtime, and so if comp is used a lot, the callsites in the inner function (calls to a and b) have to become indirect.
This problem exists in many situations, for example with reduce:

    (defn sum [coll]
      (reduce + 0 coll))

Reduce may be called in many places in the runtime, but in this specific case, the callsite that invokes + can be monomorphic.

So what's the best way to code this sort of thing? Do I manually clone AST trees that use higher-order functions? Is there some sort of feature in Truffle that allows me to say "duplicate this entire AST whenever this node changes?"

Thanks,

Timothy Baldridge

From chris.seaton at oracle.com Sun Jan 20 21:07:48 2019
From: chris.seaton at oracle.com (Chris Seaton)
Date: Sun, 20 Jan 2019 21:07:48 +0000
Subject: Efficient function composition with Truffle
In-Reply-To:
References:
Message-ID: <811C4294-E9D1-42F7-82AF-0F0C20F8AAB9@oracle.com>

Truffle automatically splits methods that have become megamorphic.

Splitting means that multiple copies are made which can specialise independently, with independent inline caches. Splitting can be recursive. It happens based on heuristics that look at how many specialisations your nodes are actually using and similar properties, and can be tweaked by the language implementor if needed, or forced or disabled.

This is a major advantage over implementing languages on the JVM in the conventional way, where, as you say, higher-order methods quickly become megamorphic. The effect is particularly strong in languages like Ruby, where many control structures are method calls.

Are you not seeing splitting working automatically already? You should see references to 'split' in -Dgraal.TraceTruffleCompilation=true.

Chris

> On 20 Jan 2019, at 20:52, Timothy Baldridge wrote:
>
> Something that I haven't figured out yet in my studies of Truffle is the
> best way to deal with higher-order functions in a Truffle language. Let's
> say we have a function like comp from Clojure:
>
> (defn comp [a b]
>   (fn inner [x]
>     (a (b x))))
>
> The semantics here are fairly simple: comp takes two functions and composes
> them. A problem on the JVM (and I think in Truffle) is that these call
> sites quickly become megamorphic. There may be thousands of functions in a
> runtime, and so if comp is used a lot, the callsites in the inner function
> (calls to a and b) have to become indirect.
>
> This problem exists in many situations, for example with reduce:
>
> (defn sum [coll]
>   (reduce + 0 coll))
>
> Reduce may be called in many places in the runtime, but in this specific
> case, the callsite that invokes + can be monomorphic.
>
> So what's the best way to code this sort of thing? Do I manually clone AST
> trees that use higher-order functions? Is there some sort of feature in
> Truffle that allows me to say "duplicate this entire AST whenever this node
> changes?"
>
> Thanks,
>
> Timothy Baldridge

From tbaldridge at gmail.com Sun Jan 20 21:21:57 2019
From: tbaldridge at gmail.com (Timothy Baldridge)
Date: Sun, 20 Jan 2019 14:21:57 -0700
Subject: Efficient function composition with Truffle
In-Reply-To: <811C4294-E9D1-42F7-82AF-0F0C20F8AAB9@oracle.com>
References: <811C4294-E9D1-42F7-82AF-0F0C20F8AAB9@oracle.com>
Message-ID:

Thanks for the info! No, I simply wasn't seeing any information about this in the documentation. On that subject, how would I know this from the docs? Almost everything I can read on Truffle is either super high-level "Truffle merges Nodes, and look, you can be polyglot on the JVM!", or is something like the API docs that are so low level it's hard to know how it all fits together.
On Sun, Jan 20, 2019 at 2:07 PM Chris Seaton wrote:
> Truffle automatically splits methods that have become megamorphic.
>
> Splitting means that multiple copies are made which can specialise independently, with independent inline caches. Splitting can be recursive. It happens based on heuristics that look at how many specialisations your nodes are actually using and similar properties, and can be tweaked by the language implementor if needed, or forced or disabled.
>
> This is a major advantage over implementing languages on the JVM in the conventional way, where, as you say, higher-order methods quickly become megamorphic. The effect is particularly strong in languages like Ruby, where many control structures are method calls.
>
> Are you not seeing splitting working automatically already? You should see references to 'split' in -Dgraal.TraceTruffleCompilation=true.
>
> Chris
>
> > On 20 Jan 2019, at 20:52, Timothy Baldridge wrote:
> >
> > Something that I haven't figured out yet in my studies of Truffle is the
> > best way to deal with higher-order functions in a Truffle language. Let's
> > say we have a function like comp from Clojure:
> >
> > (defn comp [a b]
> >   (fn inner [x]
> >     (a (b x))))
> >
> > The semantics here are fairly simple: comp takes two functions and composes
> > them. A problem on the JVM (and I think in Truffle) is that these call
> > sites quickly become megamorphic. There may be thousands of functions in a
> > runtime, and so if comp is used a lot, the callsites in the inner function
> > (calls to a and b) have to become indirect.
> >
> > This problem exists in many situations, for example with reduce:
> >
> > (defn sum [coll]
> >   (reduce + 0 coll))
> >
> > Reduce may be called in many places in the runtime, but in this specific
> > case, the callsite that invokes + can be monomorphic.
> >
> > So what's the best way to code this sort of thing? Do I manually clone AST
> > trees that use higher-order functions? Is there some sort of feature in
> > Truffle that allows me to say "duplicate this entire AST whenever this node
> > changes?"
> >
> > Thanks,
> >
> > Timothy Baldridge

--
"One of the main causes of the fall of the Roman Empire was that -- lacking zero -- they had no way to indicate successful termination of their C programs."
(Robert Firth)

From chris.seaton at oracle.com Sun Jan 20 21:32:13 2019
From: chris.seaton at oracle.com (Chris Seaton)
Date: Sun, 20 Jan 2019 21:32:13 +0000
Subject: Efficient function composition with Truffle
In-Reply-To:
References: <811C4294-E9D1-42F7-82AF-0F0C20F8AAB9@oracle.com>
Message-ID: <404FF04A-5257-48A4-BB62-77534302AFEC@oracle.com>

There's this file and others in the same directory:

https://github.com/oracle/graal/blob/master/truffle/docs/splitting/MonomorphizationUseCases.md

We sometimes call splitting "monomorphization", as that's what it achieves, and we also sometimes call it "cloning", as that's how it's achieved, sorry for all the names! I know I've covered it in some of my talks, but I haven't particularly signposted it.

It's not mentioned much in the API documentation as it's supposed to be automatic. Here's one reference:

https://www.graalvm.org/truffle/javadoc/com/oracle/truffle/api/nodes/DirectCallNode.html#cloneCallTarget--

> On 20 Jan 2019, at 21:21, Timothy Baldridge wrote:
>
> Thanks for the info! No, I simply wasn't seeing any information about this in the documentation. On that subject, how would I know this from the docs?
> Almost everything I can read on Truffle is either super high-level "Truffle merges Nodes, and look, you can be polyglot on the JVM!", or is something like the API docs that are so low level it's hard to know how it all fits together.
>
> On Sun, Jan 20, 2019 at 2:07 PM Chris Seaton wrote:
> > Truffle automatically splits methods that have become megamorphic.
> >
> > Splitting means that multiple copies are made which can specialise independently, with independent inline caches. Splitting can be recursive. It happens based on heuristics that look at how many specialisations your nodes are actually using and similar properties, and can be tweaked by the language implementor if needed, or forced or disabled.
> >
> > This is a major advantage over implementing languages on the JVM in the conventional way, where, as you say, higher-order methods quickly become megamorphic. The effect is particularly strong in languages like Ruby, where many control structures are method calls.
> >
> > Are you not seeing splitting working automatically already? You should see references to 'split' in -Dgraal.TraceTruffleCompilation=true.
> >
> > Chris
> >
> >> On 20 Jan 2019, at 20:52, Timothy Baldridge wrote:
> >>
> >> Something that I haven't figured out yet in my studies of Truffle is the
> >> best way to deal with higher-order functions in a Truffle language. Let's
> >> say we have a function like comp from Clojure:
> >>
> >> (defn comp [a b]
> >>   (fn inner [x]
> >>     (a (b x))))
> >>
> >> The semantics here are fairly simple: comp takes two functions and composes
> >> them. A problem on the JVM (and I think in Truffle) is that these call
> >> sites quickly become megamorphic. There may be thousands of functions in a
> >> runtime, and so if comp is used a lot, the callsites in the inner function
> >> (calls to a and b) have to become indirect.
> >>
> >> This problem exists in many situations, for example with reduce:
> >>
> >> (defn sum [coll]
> >>   (reduce + 0 coll))
> >>
> >> Reduce may be called in many places in the runtime, but in this specific
> >> case, the callsite that invokes + can be monomorphic.
> >>
> >> So what's the best way to code this sort of thing? Do I manually clone AST
> >> trees that use higher-order functions? Is there some sort of feature in
> >> Truffle that allows me to say "duplicate this entire AST whenever this node
> >> changes?"
> >>
> >> Thanks,
> >>
> >> Timothy Baldridge
> >>
> >> --
> >> "One of the main causes of the fall of the Roman Empire was that -- lacking zero -- they had no way to indicate successful termination of their C programs."
> >> (Robert Firth)

From boris.spasojevic at oracle.com Mon Jan 21 08:33:51 2019
From: boris.spasojevic at oracle.com (Boris Spasojevic)
Date: Mon, 21 Jan 2019 09:33:51 +0100
Subject: Efficient function composition with Truffle
In-Reply-To: <404FF04A-5257-48A4-BB62-77534302AFEC@oracle.com>
References: <811C4294-E9D1-42F7-82AF-0F0C20F8AAB9@oracle.com> <404FF04A-5257-48A4-BB62-77534302AFEC@oracle.com>
Message-ID: <67b428f6-8b0e-5cd7-2c96-5755b80ede1d@oracle.com>

Hi Timothy,

You might want to take a look at the entire contents of https://github.com/oracle/graal/blob/master/truffle/docs/splitting, as it describes the direction we are taking splitting. The current splitting heuristic is bad for a multitude of reasons and we are (hopefully soon) moving to the new approach described in the linked documentation.
The short version is that we are giving the language implementor the option to decide which nodes are likely to benefit from being split, and annotate them (@ReportPolymorphism, http://www.graalvm.org/truffle/javadoc/com/oracle/truffle/api/dsl/ReportPolymorphism.html). Such nodes will inform the runtime when they turn polymorphic, allowing the runtime to attempt to "monomorphize" that node through splitting.

BoriS

On 01/20/2019 10:32 PM, Chris Seaton wrote:
> There's this file and others in the same directory:
>
> https://github.com/oracle/graal/blob/master/truffle/docs/splitting/MonomorphizationUseCases.md
>
> We sometimes call splitting "monomorphization", as that's what it achieves, and we also sometimes call it "cloning", as that's how it's achieved, sorry for all the names! I know I've covered it in some of my talks, but I haven't particularly signposted it.
>
> It's not mentioned much in the API documentation as it's supposed to be automatic. Here's one reference:
>
> https://www.graalvm.org/truffle/javadoc/com/oracle/truffle/api/nodes/DirectCallNode.html#cloneCallTarget--
>
>> On 20 Jan 2019, at 21:21, Timothy Baldridge wrote:
>>
>> Thanks for the info! No, I simply wasn't seeing any information about this in the documentation. On that subject, how would I know this from the docs? Almost everything I can read on Truffle is either super high-level "Truffle merges Nodes, and look, you can be polyglot on the JVM!", or is something like the API docs that are so low level it's hard to know how it all fits together.
>>
>> On Sun, Jan 20, 2019 at 2:07 PM Chris Seaton wrote:
>> Truffle automatically splits methods that have become megamorphic.
>>
>> Splitting means that multiple copies are made which can specialise independently, with independent inline caches. Splitting can be recursive. It happens based on heuristics that look at how many specialisations your nodes are actually using and similar properties, and can be tweaked by the language implementor if needed, or forced or disabled.
>>
>> This is a major advantage over implementing languages on the JVM in the conventional way, where, as you say, higher-order methods quickly become megamorphic. The effect is particularly strong in languages like Ruby, where many control structures are method calls.
>>
>> Are you not seeing splitting working automatically already? You should see references to 'split' in -Dgraal.TraceTruffleCompilation=true.
>>
>> Chris
>>
>>> On 20 Jan 2019, at 20:52, Timothy Baldridge wrote:
>>>
>>> Something that I haven't figured out yet in my studies of Truffle is the
>>> best way to deal with higher-order functions in a Truffle language. Let's
>>> say we have a function like comp from Clojure:
>>>
>>> (defn comp [a b]
>>>   (fn inner [x]
>>>     (a (b x))))
>>>
>>> The semantics here are fairly simple: comp takes two functions and composes
>>> them. A problem on the JVM (and I think in Truffle) is that these call
>>> sites quickly become megamorphic. There may be thousands of functions in a
>>> runtime, and so if comp is used a lot, the callsites in the inner function
>>> (calls to a and b) have to become indirect.
>>>
>>> This problem exists in many situations, for example with reduce:
>>>
>>> (defn sum [coll]
>>>   (reduce + 0 coll))
>>>
>>> Reduce may be called in many places in the runtime, but in this specific
>>> case, the callsite that invokes + can be monomorphic.
>>>
>>> So what's the best way to code this sort of thing? Do I manually clone AST
>>> trees that use higher-order functions?
>>> Is there some sort of feature in Truffle that allows me to say "duplicate this entire AST whenever this node changes?"
>>>
>>> Thanks,
>>>
>>> Timothy Baldridge
>>
>>
>> --
>> "One of the main causes of the fall of the Roman Empire was that -- lacking zero -- they had no way to indicate successful termination of their C programs."
>> (Robert Firth)
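
A minimal sketch of the kind of node Boris describes above, assuming a Truffle-DSL call-dispatch node in a hypothetical language implementation. The class name, the cache limit of 3, and the use of CallTarget as the dispatch key are illustrative assumptions, not code from Graal or any project mentioned in the thread; the point is only the shape: a few cached direct calls (which the runtime can split), an indirect fallback, and the polymorphism report that feeds the splitting heuristic.

    import com.oracle.truffle.api.CallTarget;
    import com.oracle.truffle.api.dsl.Cached;
    import com.oracle.truffle.api.dsl.ReportPolymorphism;
    import com.oracle.truffle.api.dsl.Specialization;
    import com.oracle.truffle.api.nodes.DirectCallNode;
    import com.oracle.truffle.api.nodes.IndirectCallNode;
    import com.oracle.truffle.api.nodes.Node;

    // Hypothetical call-dispatch node; because of @ReportPolymorphism it tells the
    // runtime when it goes polymorphic, so splitting can try to re-monomorphize it.
    @ReportPolymorphism
    public abstract class DispatchNode extends Node {

        public abstract Object executeDispatch(CallTarget target, Object[] arguments);

        // Monomorphic or low-degree polymorphic case: cache up to 3 call targets,
        // each behind its own DirectCallNode, which the runtime may clone/split.
        @Specialization(guards = "target == cachedTarget", limit = "3")
        protected Object doDirect(CallTarget target, Object[] arguments,
                        @Cached("target") CallTarget cachedTarget,
                        @Cached("create(cachedTarget)") DirectCallNode callNode) {
            return callNode.call(arguments);
        }

        // Megamorphic case: the cache overflowed, the polymorphism has been
        // reported, and we fall back to an indirect call.
        @Specialization(replaces = "doDirect")
        protected Object doIndirect(CallTarget target, Object[] arguments,
                        @Cached("create()") IndirectCallNode callNode) {
            return callNode.call(target, arguments);
        }
    }

The Truffle DSL generates the concrete subclass (DispatchNodeGen) from this abstract node; a language would instantiate it at each call site via the generated create() factory.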