From alexander.senier at tu-dresden.de Fri Jun 2 10:45:38 2017 From: alexander.senier at tu-dresden.de (Alexander Senier) Date: Fri, 2 Jun 2017 12:45:38 +0200 Subject: Dumping graphs on the command line Message-ID: <9cea193d-a408-e87c-b91b-f88125e3898a@tu-dresden.de> Hi all, I'm looking into Graal to create data flow graphs from Java programs. Following the info from the repository I successfully got graphs in IGV from the unittests (Linux, version 0.23 from the website): $ JAVA_HOME=../../labsjdk1.8.0_121-jvmci-0.26/ ../../mx/mx igv& $ PATH=/opt/graalvm-0.23/bin:$PATH JAVA_HOME=../../labsjdk1.8.0_121-jvmci-0.26/ ../../mx/mx unittest -Dgraal.Dump= -Dgraal.PrintCFG=true BC_aload I see the graphs in IGV as well as the CFGPrinter log output. Subsequently, I tried to get the same graphs for a custom example: $ /opt/graalvm-0.23/bin/javac DH.java $ /opt/graalvm-0.23/bin/java -Dgraal.Dump= -Dgraal.PrintCFG=true -XX:+UseJVMCICompiler DH While the program is compiled and run, no graphs are generated in IGV nor do I see any log files. I see no graal related messages either. What is the canonical way to dump graphs from a compilation, preferably on the Linux command line? Thanks! Cheers, Alex -- Dipl.-Inf. Alexander Senier Scientific Assistant TU Dresden Faculty of Computer Science Institute of System Architecture Chair of Privacy and Data Security 01062 Dresden Tel.: +49 351 463-38719 Fax : +49 351 463-38255 From doug.simon at oracle.com Fri Jun 2 11:13:47 2017 From: doug.simon at oracle.com (Doug Simon) Date: Fri, 2 Jun 2017 13:13:47 +0200 Subject: Dumping graphs on the command line In-Reply-To: <9cea193d-a408-e87c-b91b-f88125e3898a@tu-dresden.de> References: <9cea193d-a408-e87c-b91b-f88125e3898a@tu-dresden.de> Message-ID: <0C83C0FA-A14A-4F70-8B61-833FAF41FD2E@oracle.com> Hi Alex, Graphs will only be dumped for methods compiled by Graal. Most likely, your example is small enough that it never reaches the top tier compiler. To force a method to be compiled by Graal, you need to either make sure it's hot (e.g., by placing calls to it in a loop with many iterations) or use -XX:CompileOnly[1]. The former is usually a better approach since you'll get a compilation with a more realistic profile. To see what is being compiled by Graal, use either the -XX:+PrintCompilation or -Dgraal.PrintCompilation=true options (the latter shows C1 compilations as well). To restrict compilation only to Graal, use -XX:-TieredCompilation. As you're probably aware, you need -XX:+UseJVMCICompiler in all cases. -Doug [1] https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html > On 2 Jun 2017, at 12:45, Alexander Senier wrote: > > Hi all, > > I'm looking into Graal to create data flow graphs from Java programs. > Following the info from the repository I successfully got graphs in IGV > from the unittests (Linux, version 0.23 from the website): > > $ JAVA_HOME=../../labsjdk1.8.0_121-jvmci-0.26/ ../../mx/mx igv& > $ PATH=/opt/graalvm-0.23/bin:$PATH > JAVA_HOME=../../labsjdk1.8.0_121-jvmci-0.26/ ../../mx/mx unittest > -Dgraal.Dump= -Dgraal.PrintCFG=true BC_aload > > I see the graphs in IGV as well as the CFGPrinter log output. > Subsequently, I tried to get the same graphs for a custom example: > > $ /opt/graalvm-0.23/bin/javac DH.java > $ /opt/graalvm-0.23/bin/java -Dgraal.Dump= -Dgraal.PrintCFG=true > -XX:+UseJVMCICompiler DH > > While the program is compiled and run, no graphs are generated in IGV > nor do I see any log files. I see no graal related messages either. 
> > What is the canonical way to dump graphs from a compilation, preferably > on the Linux command line? > > Thanks! > > Cheers, > Alex > > -- > Dipl.-Inf. Alexander Senier > Scientific Assistant > > TU Dresden > Faculty of Computer Science > Institute of System Architecture > Chair of Privacy and Data Security > 01062 Dresden > > Tel.: +49 351 463-38719 > Fax : +49 351 463-38255 From doug.simon at oracle.com Fri Jun 2 13:06:37 2017 From: doug.simon at oracle.com (Doug Simon) Date: Fri, 2 Jun 2017 15:06:37 +0200 Subject: Change in graph dump levels Message-ID: There was a recent change in the default Graal graph dumping levels. Dump level 1 = Only ~5 graphs per method: after parsing, after inlining, after high tier, after mid tier, after low tier Dump level 2 = One graph after each applied top-level phase Dump level 3 = One graph after each phase (including sub phases) Dump level 4 = Graphs within phases where interesting for a phase, max ~5 per phase Dump level 5 = Graphs per node before/after Phases run during inlining are not considered ?top level? - i.e., phases preparing inlined graphs will only appear on Inline:3 or higher levels. Graphs prepared as part of snippet or stub installation will not appear except when -Dgraal.DebugStubsAndSnippets=true is specified. The levels are further documented here: https://github.com/graalvm/graal/blob/3b1d7f2e9c3bb6a57600dbbcad20e5ecea36a20f/compiler/src/org.graalvm.compiler.debug/src/org/graalvm/compiler/debug/Debug.java#L131-L176 The motivation is to only have a few graphs dumped by default that represent the major stages in the compilation pipeline. This makes it easier to capture smaller bug reports from customers. -Doug From alexander.senier at tu-dresden.de Fri Jun 2 13:12:02 2017 From: alexander.senier at tu-dresden.de (Alexander Senier) Date: Fri, 2 Jun 2017 15:12:02 +0200 Subject: Dumping graphs on the command line In-Reply-To: <0C83C0FA-A14A-4F70-8B61-833FAF41FD2E@oracle.com> References: <9cea193d-a408-e87c-b91b-f88125e3898a@tu-dresden.de> <0C83C0FA-A14A-4F70-8B61-833FAF41FD2E@oracle.com> Message-ID: Hi Doug, thanks, that was very helpful to put me on the right track. I found that -XX:-TieredCompilation in combination with -Xcomp (and a lot of patience ;-) results in all intersting methods being dumped. Cheers, Alex On 06/02/2017 01:13 PM, Doug Simon wrote: > Hi Alex, > > Graphs will only be dumped for methods compiled by Graal. Most likely, your example is small enough that it never reaches the top tier compiler. To force a method to be compiled by Graal, you need to either make sure it's hot (e.g., by placing calls to it in a loop with many iterations) or use -XX:CompileOnly[1]. The former is usually a better approach since you'll get a compilation with a more realistic profile. To see what is being compiled by Graal, use either the -XX:+PrintCompilation or -Dgraal.PrintCompilation=true options (the latter shows C1 compilations as well). To restrict compilation only to Graal, use -XX:-TieredCompilation. As you're probably aware, you need -XX:+UseJVMCICompiler in all cases. > > -Doug > > [1] https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html > >> On 2 Jun 2017, at 12:45, Alexander Senier wrote: >> >> Hi all, >> >> I'm looking into Graal to create data flow graphs from Java programs. 
>> Following the info from the repository I successfully got graphs in IGV >> from the unittests (Linux, version 0.23 from the website): >> >> $ JAVA_HOME=../../labsjdk1.8.0_121-jvmci-0.26/ ../../mx/mx igv& >> $ PATH=/opt/graalvm-0.23/bin:$PATH >> JAVA_HOME=../../labsjdk1.8.0_121-jvmci-0.26/ ../../mx/mx unittest >> -Dgraal.Dump= -Dgraal.PrintCFG=true BC_aload >> >> I see the graphs in IGV as well as the CFGPrinter log output. >> Subsequently, I tried to get the same graphs for a custom example: >> >> $ /opt/graalvm-0.23/bin/javac DH.java >> $ /opt/graalvm-0.23/bin/java -Dgraal.Dump= -Dgraal.PrintCFG=true >> -XX:+UseJVMCICompiler DH >> >> While the program is compiled and run, no graphs are generated in IGV >> nor do I see any log files. I see no graal related messages either. >> >> What is the canonical way to dump graphs from a compilation, preferably >> on the Linux command line? >> >> Thanks! >> >> Cheers, >> Alex >> >> -- >> Dipl.-Inf. Alexander Senier >> Scientific Assistant >> >> TU Dresden >> Faculty of Computer Science >> Institute of System Architecture >> Chair of Privacy and Data Security >> 01062 Dresden >> >> Tel.: +49 351 463-38719 >> Fax : +49 351 463-38255 > -- Dipl.-Inf. Alexander Senier Scientific Assistant TU Dresden Faculty of Computer Science Institute of System Architecture Chair of Privacy and Data Security 01062 Dresden Tel.: +49 351 463-38719 Fax : +49 351 463-38255 From timothy.l.harris at oracle.com Mon Jun 5 15:36:00 2017 From: timothy.l.harris at oracle.com (Tim Harris) Date: Mon, 5 Jun 2017 08:36:00 -0700 (PDT) Subject: EconomicMapImpl.setKey ArrayIndexOutOfBoundsException Message-ID: Hi, I am occasionally getting an ArrayIndexOutOfBoundsException during start-up of a multi-threaded program (below). I suspect this is a data race with the entries array being expanded in one thread concurrent with the access from the static constructor in another thread. If that is correct then does EconomicMapImpl need synchronization here, or should the change be somewhere else? --Tim Java HotSpot(TM) 64-Bit Server VM (build 25.71-b01-internal-jvmci-0.26, mixed mode) Caused by: java.lang.ArrayIndexOutOfBoundsException: 62 at org.graalvm.util.impl.EconomicMapImpl.setKey(EconomicMapImpl.java:781) at org.graalvm.util.impl.EconomicMapImpl.put(EconomicMapImpl.java:432) at org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugins.put(InvocationPlugins.java:634) at org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugins.register(InvocationPlugins.java:913) at org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugins$Registration.registerMethodSubstitution(InvocationPlugins.java:378) at org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugins$Registration.registerMethodSubstitution(InvocationPlugins.java:361) at com.oracle.rts.WorkRequestSubstitutions.(WorkRequestSubstitutions.java:148) From doug.simon at oracle.com Mon Jun 5 19:29:31 2017 From: doug.simon at oracle.com (Doug Simon) Date: Mon, 5 Jun 2017 21:29:31 +0200 Subject: EconomicMapImpl.setKey ArrayIndexOutOfBoundsException In-Reply-To: References: Message-ID: <4652DB63-3F43-4E75-BD42-FF6238CD73AE@oracle.com> Hi Tim, Since you are registering plugins at an arbitrary point in time (and not as part of Graal initialization), you need to use org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugins.LateRegistration like NFI does[1]. 
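A minimal sketch of that late-registration pattern, modeled on the NFI code referenced in [1] below; the exact LateRegistration constructor and register() signatures are assumptions and may differ between Graal versions:

    import java.lang.reflect.Type;
    import org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugin;
    import org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugins;

    // Registers a plugin after Graal initialization. LateRegistration is
    // AutoCloseable; closing it is what publishes the pending registrations
    // (assumed from the NFI usage), so keep the try-with-resources block small.
    static void registerLate(InvocationPlugins plugins, Type declaringType,
                    InvocationPlugin plugin, String name, Type... argumentTypes) {
        try (InvocationPlugins.LateRegistration lr =
                        new InvocationPlugins.LateRegistration(plugins, declaringType)) {
            lr.register(plugin, name, argumentTypes);
        }
    }

The registerLate helper and its parameters are only illustrative; the point is that registrations made at an arbitrary point in time go through LateRegistration inside the try-with-resources block rather than through the plain Registration helper that appears in the stack trace above.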
-Doug [1] https://github.com/graalvm/graal/blob/3b1d7f2e9c3bb6a57600dbbcad20e5ecea36a20f/compiler/src/org.graalvm.compiler.truffle.hotspot/src/org/graalvm/compiler/truffle/hotspot/nfi/NativeCallStubGraphBuilder.java#L110 > On 5 Jun 2017, at 17:36, Tim Harris wrote: > > Hi, > > > > I am occasionally getting an ArrayIndexOutOfBoundsException during start-up of a multi-threaded program (below). > > > > I suspect this is a data race with the entries array being expanded in one thread concurrent with the access from the static constructor in another thread. > > > > If that is correct then does EconomicMapImpl need synchronization here, or should the change be somewhere else? > > > > --Tim > > > > > > > > > > Java HotSpot(TM) 64-Bit Server VM (build 25.71-b01-internal-jvmci-0.26, mixed mode) > > > > Caused by: java.lang.ArrayIndexOutOfBoundsException: 62 > > at org.graalvm.util.impl.EconomicMapImpl.setKey(EconomicMapImpl.java:781) > > at org.graalvm.util.impl.EconomicMapImpl.put(EconomicMapImpl.java:432) > > at org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugins.put(InvocationPlugins.java:634) > > at org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugins.register(InvocationPlugins.java:913) > > at org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugins$Registration.registerMethodSubstitution(InvocationPlugins.java:378) > > at org.graalvm.compiler.nodes.graphbuilderconf.InvocationPlugins$Registration.registerMethodSubstitution(InvocationPlugins.java:361) > > at com.oracle.rts.WorkRequestSubstitutions.(WorkRequestSubstitutions.java:148) > > From adinn at redhat.com Tue Jun 6 08:41:45 2017 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 6 Jun 2017 09:41:45 +0100 Subject: New PR for AArch64 Address Lowering Message-ID: Last night I submitted a new PR for the AArch64 Address Lowering patch (which enables use of offset addressing on AArch64-generated code). It includes changes required to pass the style checker: https://github.com/graalvm/graal/pull/222 I had been seeing problems with this patch when running a jigsaw--enabled netbeans on jdk9 -- exceptions were being thrown when reading binary file data. I now believe the exceptions were related to my local disk retaining a combination of old projects and netbeans config files. Having cleared out these dirs I was able to run and, more importantly, rerun a netbeans session with no exceptions being thrown. I also successfully ran some other basic smoke tests, including javac, and jedit, with no errors. So, I think the patch is ready for merge. The Travis job for the PR https://travis-ci.org/graalvm/graal/builds/239644906?utm_source=github_status&utm_medium=notification appears to have failed with (javac) compiler errors in one of the tests e.g. /home/travis/build/graalvm/graal/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/hotspot/test/HotSpotGraalMBeanTest.java:241: error: MockResolvedJavaMethod is not abstract and does not override abstract method isIntrinsicCandidate() in HotSpotResolvedJavaMethod private static class MockResolvedJavaMethod implements HotSpotResolvedJavaMethod { All three failed build jobs have encountered the same problem. This looks like it is nothing to do with my changes. I suspect the latest graal tree is borked. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 
03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From doug.simon at oracle.com Tue Jun 6 13:28:11 2017 From: doug.simon at oracle.com (Doug Simon) Date: Tue, 6 Jun 2017 15:28:11 +0200 Subject: New PR for AArch64 Address Lowering In-Reply-To: References: Message-ID: <670ACB5C-CB13-41E6-9E76-DE3666B1F14F@oracle.com> > On 6 Jun 2017, at 10:41, Andrew Dinn wrote: > > Last night I submitted a new PR for the AArch64 Address Lowering patch > (which enables use of offset addressing on AArch64-generated code). It > includes changes required to pass the style checker: > > https://github.com/graalvm/graal/pull/222 > > I had been seeing problems with this patch when running a > jigsaw--enabled netbeans on jdk9 -- exceptions were being thrown when > reading binary file data. I now believe the exceptions were related to > my local disk retaining a combination of old projects and netbeans > config files. Having cleared out these dirs I was able to run and, more > importantly, rerun a netbeans session with no exceptions being thrown. I > also successfully ran some other basic smoke tests, including javac, and > jedit, with no errors. So, I think the patch is ready for merge. > > The Travis job for the PR > > > https://travis-ci.org/graalvm/graal/builds/239644906?utm_source=github_status&utm_medium=notification > > appears to have failed with (javac) compiler errors in one of the tests e.g. > > /home/travis/build/graalvm/graal/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/hotspot/test/HotSpotGraalMBeanTest.java:241: > error: MockResolvedJavaMethod is not abstract and does not override > abstract method isIntrinsicCandidate() in HotSpotResolvedJavaMethod > > private static class MockResolvedJavaMethod implements > HotSpotResolvedJavaMethod { > > > All three failed build jobs have encountered the same problem. This > looks like it is nothing to do with my changes. I suspect the latest > graal tree is borked. The issue is that when adding HotSpotResolvedJavaMethod.isIntrinsicCandidate(), I did it inconsistently between jvmci-8 and jvmci-9: http://hg.openjdk.java.net/graal/graal-jvmci-8/file/6523380966cd/jvmci/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaMethod.java#l114 http://hg.openjdk.java.net/jdk9/dev/hotspot/file/e64b1cb48d6e/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaMethod.java#l114 I'll push through a fix for this to Graal at which point you can retry the Travis gate. -Doug From v.vergu at gmail.com Wed Jun 7 06:33:34 2017 From: v.vergu at gmail.com (Vlad Vergu) Date: Wed, 7 Jun 2017 09:33:34 +0300 Subject: Getting delegate of an EngineTruffleObject Message-ID: Hi guys, I?m working on a (meta-)interpreter for a dynamic semantics specification language. The interpreter uses Truffle. I?m in the process of updating from Truffle 0.15 to present times. I got stuck at 0.23. The meta-interpreter had half-implemented support for foreign access which it doesn?t need and i don?t want to maintain it further until i get the chance to redo it properly. The issue i?m having is that calls to PolyglotEngine.eval(?) return an EngineTruffleObject instances since 0.23. The EngineTruffleObject class is package private. How do i get the delegate object? Thank you. meta-issue: is this the right list for these questions? 
Cheers, Vlad From java at stefan-marr.de Wed Jun 7 07:47:03 2017 From: java at stefan-marr.de (Stefan Marr) Date: Wed, 7 Jun 2017 09:47:03 +0200 Subject: Getting delegate of an EngineTruffleObject In-Reply-To: References: Message-ID: <1F7D8AAB-5B17-4090-94EB-6457D9C9CA7D@stefan-marr.de> Hi Vlad: > On 7 Jun 2017, at 08:33, Vlad Vergu wrote: > > The issue i?m having is that calls to PolyglotEngine.eval(?) return an EngineTruffleObject instances since 0.23. The EngineTruffleObject class is package private. How do i get the delegate object? I am using something like this: Value result = engine.eval(?); Obj o = result.as(Object.class); or SObject o = result.as(SObject.class); > meta-issue: is this the right list for these questions? Yes, beside this list, I think there?s only the gitter channel: https://gitter.im/graalvm/graal-core Best regards Stefan -- Stefan Marr Johannes Kepler Universit?t Linz http://stefan-marr.de/research/ From v.vergu at gmail.com Thu Jun 8 08:19:59 2017 From: v.vergu at gmail.com (Vlad Vergu) Date: Thu, 8 Jun 2017 11:19:59 +0300 Subject: Getting delegate of an EngineTruffleObject In-Reply-To: <1F7D8AAB-5B17-4090-94EB-6457D9C9CA7D@stefan-marr.de> References: <1F7D8AAB-5B17-4090-94EB-6457D9C9CA7D@stefan-marr.de> Message-ID: <874DCFD2-01EC-464E-BD0F-23B309FA80BE@gmail.com> Hi Stefan, That works, thank you! Best regards, Vlad > On 7 Jun 2017, at 10:47, Stefan Marr wrote: > > Hi Vlad: > >> On 7 Jun 2017, at 08:33, Vlad Vergu wrote: >> >> The issue i?m having is that calls to PolyglotEngine.eval(?) return an EngineTruffleObject instances since 0.23. The EngineTruffleObject class is package private. How do i get the delegate object? > > I am using something like this: > > Value result = engine.eval(?); > Obj o = result.as(Object.class); > > or > > SObject o = result.as(SObject.class); > >> meta-issue: is this the right list for these questions? > > Yes, beside this list, I think there?s only the gitter channel: https://gitter.im/graalvm/graal-core > > Best regards > Stefan > > > -- > Stefan Marr > Johannes Kepler Universit?t Linz > http://stefan-marr.de/research/ > > > From v.vergu at gmail.com Fri Jun 9 08:44:40 2017 From: v.vergu at gmail.com (Vlad Vergu) Date: Fri, 9 Jun 2017 11:44:40 +0300 Subject: How to initialize a TruffleLanguage was Getting delegate of an EngineTruffleObject In-Reply-To: <874DCFD2-01EC-464E-BD0F-23B309FA80BE@gmail.com> References: <1F7D8AAB-5B17-4090-94EB-6457D9C9CA7D@stefan-marr.de> <874DCFD2-01EC-464E-BD0F-23B309FA80BE@gmail.com> Message-ID: Hi, I?m running into a different issue now, trying to update to Truffle 0.25. When I create a RootNode in the implementation of TruffleLanguage#parse(ParsingRequest) an IllegalArgumentException is thrown with message ?Truffle language instance is not initialized?. The language class has a @TruffleLanguage.Registration. What should i be doing to initialize the language? Thanks. Best regards, Vlad > On 8 Jun 2017, at 11:19, Vlad Vergu wrote: > > Hi Stefan, > > That works, thank you! > > Best regards, > Vlad > >> On 7 Jun 2017, at 10:47, Stefan Marr wrote: >> >> Hi Vlad: >> >>> On 7 Jun 2017, at 08:33, Vlad Vergu wrote: >>> >>> The issue i?m having is that calls to PolyglotEngine.eval(?) return an EngineTruffleObject instances since 0.23. The EngineTruffleObject class is package private. How do i get the delegate object? 
>> >> I am using something like this: >> >> Value result = engine.eval(?); >> Obj o = result.as(Object.class); >> >> or >> >> SObject o = result.as(SObject.class); >> >>> meta-issue: is this the right list for these questions? >> >> Yes, beside this list, I think there?s only the gitter channel: https://gitter.im/graalvm/graal-core >> >> Best regards >> Stefan >> >> >> -- >> Stefan Marr >> Johannes Kepler Universit?t Linz >> http://stefan-marr.de/research/ >> >> >> > From java at stefan-marr.de Fri Jun 9 08:52:15 2017 From: java at stefan-marr.de (Stefan Marr) Date: Fri, 9 Jun 2017 10:52:15 +0200 Subject: How to initialize a TruffleLanguage was Getting delegate of an EngineTruffleObject In-Reply-To: References: <1F7D8AAB-5B17-4090-94EB-6457D9C9CA7D@stefan-marr.de> <874DCFD2-01EC-464E-BD0F-23B309FA80BE@gmail.com> Message-ID: <1265007C-3745-4830-A29C-38F3392D68FF@stefan-marr.de> Hi Vlad: > On 9 Jun 2017, at 10:44, Vlad Vergu wrote: > > When I create a RootNode in the implementation of TruffleLanguage#parse(ParsingRequest) an IllegalArgumentException is thrown with message ?Truffle language instance is not initialized?. The language class has a @TruffleLanguage.Registration. What should i be doing to initialize the language? Thanks. Sounds familiar. Don?t remember the exact details thought. I presume, you reached parse() via an eval() call on the polyglot engine? Make sure you don?t have the singleton instance of the language around anymore, I think that interfered with initialization in my case. And make sure that the instance of your language is the correct one. Best regards Stefan -- Stefan Marr Johannes Kepler Universit?t Linz http://stefan-marr.de/research/ From v.vergu at gmail.com Fri Jun 9 09:00:16 2017 From: v.vergu at gmail.com (Vlad Vergu) Date: Fri, 9 Jun 2017 12:00:16 +0300 Subject: How to initialize a TruffleLanguage was Getting delegate of an EngineTruffleObject In-Reply-To: <1265007C-3745-4830-A29C-38F3392D68FF@stefan-marr.de> References: <1F7D8AAB-5B17-4090-94EB-6457D9C9CA7D@stefan-marr.de> <874DCFD2-01EC-464E-BD0F-23B309FA80BE@gmail.com> <1265007C-3745-4830-A29C-38F3392D68FF@stefan-marr.de> Message-ID: <1E1B9794-3DAC-40FF-8E47-0C57E7030A30@gmail.com> Hi Stefan, > On 9 Jun 2017, at 11:52, Stefan Marr wrote: > > Hi Vlad: > >> On 9 Jun 2017, at 10:44, Vlad Vergu wrote: >> >> When I create a RootNode in the implementation of TruffleLanguage#parse(ParsingRequest) an IllegalArgumentException is thrown with message ?Truffle language instance is not initialized?. The language class has a @TruffleLanguage.Registration. What should i be doing to initialize the language? Thanks. > > Sounds familiar. > Don?t remember the exact details thought. > > I presume, you reached parse() via an eval() call on the polyglot engine? Indeed > Make sure you don?t have the singleton instance of the language around anymore, I think that interfered with initialization in my case. > And make sure that the instance of your language is the correct one. Thanks. Removing the singleton instance of language solved this issue. It looks like the build was applying some older annotation processor which complained when the singleton instance was missing. 
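For reference, a rough sketch of the shape a language class takes once the static singleton is gone and the framework creates the instance itself. The class names (MyLang, MyContext, MyRootNode, MyObject) are placeholders, the Registration attributes reflect roughly the Truffle 0.25 API, and the exact set of required overrides is an assumption for other versions:

    import com.oracle.truffle.api.CallTarget;
    import com.oracle.truffle.api.Truffle;
    import com.oracle.truffle.api.TruffleLanguage;

    // No static INSTANCE field: the framework instantiates the language, and the
    // framework-created instance ("this") is the one to hand to RootNode and
    // friends; a separately constructed singleton is what triggers the
    // "Truffle language instance is not initialized" error seen in this thread.
    @TruffleLanguage.Registration(name = "MyLang", version = "0.1", mimeType = "application/x-mylang")
    public final class MyLang extends TruffleLanguage<MyContext> {

        @Override
        protected MyContext createContext(Env env) {
            return new MyContext(env);
        }

        @Override
        protected CallTarget parse(ParsingRequest request) throws Exception {
            // Build the AST from request.getSource() and wrap it in a RootNode
            // constructed with this language instance.
            return Truffle.getRuntime().createCallTarget(new MyRootNode(this, request.getSource()));
        }

        @Override
        protected Object getLanguageGlobal(MyContext context) {
            return context.getGlobalObject();
        }

        @Override
        protected boolean isObjectOfLanguage(Object object) {
            return object instanceof MyObject;
        }
    }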
Best regards, Vlad From christian.humer at gmail.com Fri Jun 9 09:39:14 2017 From: christian.humer at gmail.com (Christian Humer) Date: Fri, 09 Jun 2017 09:39:14 +0000 Subject: How to initialize a TruffleLanguage was Getting delegate of an EngineTruffleObject In-Reply-To: <1E1B9794-3DAC-40FF-8E47-0C57E7030A30@gmail.com> References: <1F7D8AAB-5B17-4090-94EB-6457D9C9CA7D@stefan-marr.de> <874DCFD2-01EC-464E-BD0F-23B309FA80BE@gmail.com> <1265007C-3745-4830-A29C-38F3392D68FF@stefan-marr.de> <1E1B9794-3DAC-40FF-8E47-0C57E7030A30@gmail.com> Message-ID: Hi Vlad, Indeed. The annotation processor should complain about that with a warning. Thanks Stefan for helping out. - Christian Humer On 09.06.2017 11:00:16, "Vlad Vergu" wrote: >Hi Stefan, > >> On 9 Jun 2017, at 11:52, Stefan Marr wrote: >> >> Hi Vlad: >> >>> On 9 Jun 2017, at 10:44, Vlad Vergu wrote: >>> >>> When I create a RootNode in the implementation of >>>TruffleLanguage#parse(ParsingRequest) an IllegalArgumentException is >>>thrown with message ?Truffle language instance is not initialized?. >>>The language class has a @TruffleLanguage.Registration. What should i >>>be doing to initialize the language? Thanks. >> >> Sounds familiar. >> Don?t remember the exact details thought. >> >> I presume, you reached parse() via an eval() call on the polyglot >>engine? > >Indeed > >> Make sure you don?t have the singleton instance of the language >>around anymore, I think that interfered with initialization in my >>case. >> And make sure that the instance of your language is the correct one. > >Thanks. Removing the singleton instance of language solved this issue. >It looks like the build was applying some older annotation processor >which complained when the singleton instance was missing. > >Best regards, >Vlad From konstantin.novikov at phystech.edu Sun Jun 11 19:55:38 2017 From: konstantin.novikov at phystech.edu (=?UTF-8?B?0J3QvtCy0LjQutC+0LIsINCa0L7QvdGB0YLQsNC90YLQuNC9INCc0LjRhdCw0LnQu9C+0LLQuA==?= =?UTF-8?B?0Yc=?=) Date: Sun, 11 Jun 2017 22:55:38 +0300 Subject: Graal benchmarking Message-ID: Hello, everyone I'm working on graal register allocation optimization and have troubles with testing. I've seen many presentations that tell how to build and run distinct tests, but they don't describe how to run whole suits, only results of such testings. Is there the way to run for example a DaCapo suit using mx or other tool? How could one test not only performance, but change in generated codesize? P.S. graaltest also fails to give correct output because clean repository fails in four tests and it uses seconds as the unit of measurement Thankfully, Bachelor of Moscow Institute of Physics and Technology Konstantin Novikov From alexander.senier at tu-dresden.de Mon Jun 12 06:52:52 2017 From: alexander.senier at tu-dresden.de (Alexander Senier) Date: Mon, 12 Jun 2017 08:52:52 +0200 Subject: BIGV file format Message-ID: <3c3cd0ac-db4d-ae3c-6113-0a2573a2e529@tu-dresden.de> Hi, I need to extend an existing tool to read the graphs output by Graal. The BinaryParser class in IGV and Graals BinaryGraphPrinter give some hints what to do, but I was wondering whether the format is documented somewhere? Thanks! Cheers, Alex -- Dipl.-Inf. 
Alexander Senier Scientific Assistant TU Dresden Faculty of Computer Science Institute of System Architecture Chair of Privacy and Data Security 01062 Dresden Tel.: +49 351 463-38719 Fax : +49 351 463-38255 From doug.simon at oracle.com Mon Jun 12 08:18:27 2017 From: doug.simon at oracle.com (Doug Simon) Date: Mon, 12 Jun 2017 10:18:27 +0200 Subject: BIGV file format In-Reply-To: <3c3cd0ac-db4d-ae3c-6113-0a2573a2e529@tu-dresden.de> References: <3c3cd0ac-db4d-ae3c-6113-0a2573a2e529@tu-dresden.de> Message-ID: <1C295C8C-EDDD-40D9-A449-8D7E7198A914@oracle.com> Hi Alex, At this point in time, there is no documentation of the format other than the code (including BinaryGraphPrinter[1]). What's more, we are currently changing the format to improve the encoding. @Jaroslav: As part of the format changes, I think it wouldn't hurt to document the format (e.g., docs/BIGV.md). -Doug [1] https://github.com/graalvm/graal/blob/c3411d374452a7000e80c6000aa3ee0e8d8c4d41/compiler/src/org.graalvm.compiler.printer/src/org/graalvm/compiler/printer/BinaryGraphPrinter.java > On 12 Jun 2017, at 08:52, Alexander Senier wrote: > > Hi, > > I need to extend an existing tool to read the graphs output by Graal. > The BinaryParser class in IGV and Graals BinaryGraphPrinter give some > hints what to do, but I was wondering whether the format is documented > somewhere? > > Thanks! > > Cheers, > Alex > > -- > Dipl.-Inf. Alexander Senier > Scientific Assistant > > TU Dresden > Faculty of Computer Science > Institute of System Architecture > Chair of Privacy and Data Security > 01062 Dresden > > Tel.: +49 351 463-38719 > Fax : +49 351 463-38255 From jaroslav.tulach at oracle.com Mon Jun 12 09:36:26 2017 From: jaroslav.tulach at oracle.com (Jaroslav Tulach) Date: Mon, 12 Jun 2017 11:36:26 +0200 Subject: BIGV file format In-Reply-To: <1C295C8C-EDDD-40D9-A449-8D7E7198A914@oracle.com> References: <3c3cd0ac-db4d-ae3c-6113-0a2573a2e529@tu-dresden.de> <1C295C8C-EDDD-40D9-A449-8D7E7198A914@oracle.com> Message-ID: <3435954.ONjNyydAnv@pracovni> On pond?l? 12. ?ervna 2017 10:18:27 CEST Doug Simon wrote: > Hi Alex, > > At this point in time, there is no documentation of the format other than > the code (including BinaryGraphPrinter[1]). What's more, we are currently > changing the format to improve the encoding. > > @Jaroslav: As part of the format changes, I think it wouldn't hurt to > document the format (e.g., docs/BIGV.md). Yes, I am currently enhancing the protocol to be more efficient in various ways. I can also document the format when I am at it. -jt > [1] > https://github.com/graalvm/graal/blob/c3411d374452a7000e80c6000aa3ee0e8d8c4 > d41/compiler/src/org.graalvm.compiler.printer/src/org/graalvm/compiler/print > er/BinaryGraphPrinter.java > > On 12 Jun 2017, at 08:52, Alexander Senier > > wrote: > > > > Hi, > > > > I need to extend an existing tool to read the graphs output by Graal. > > The BinaryParser class in IGV and Graals BinaryGraphPrinter give some > > hints what to do, but I was wondering whether the format is documented > > somewhere? > > > > Thanks! > > > > Cheers, > > Alex From aleksandar.prokopec at oracle.com Mon Jun 12 11:38:56 2017 From: aleksandar.prokopec at oracle.com (Aleksandar Prokopec) Date: Mon, 12 Jun 2017 13:38:56 +0200 Subject: Graal benchmarking In-Reply-To: References: Message-ID: <67928990-c5e5-312a-b8af-d61742572dca@oracle.com> Hi Konstantin, There is a standard way to run the benchmarks used in Graal with our build tool mx. 
You need to run the following for DaCapo benchmarks: $ mx build $ mx benchmark 'dacapo:*' To run a specific benchmark, e.g. fop, you need to do: $ mx benchmark dacapo:fop To get a detailed list of all the options and benchmark suites, you can run: $ mx benchmark --help These benchmarks will typically show you the running times. To see other things, such as the generated codesize, you will need to activate one of Graal's debug counters or output VM statistics. $ mx benchmark dacapo:fop -- -Dgraal.Count= ... |-> BytecodesParsed=18058 |-> CE_ImprovedPhis=39 |-> CE_KilledIfs=16 |-> CanonicalizationConsideredNodes=56611 |-> CanonicalizedNodes=3398 |-> CompiledAndInstalledBytecodes=17619 |-> CompiledBytecodes=17619 ... Since you want to track code size after register allocation, an even better way is to use VM flags: $ mx benchmark dacapo:fop -- -XX:+PrintNMethodStatistics You should look for JVMCI-compiled methods here (not C1, that's a different compiler): ... Statistics for 315 bytecoded nmethods for JVMCI: total in heap = 487096 header = 98280 relocation = 15816 main code = 249815 stub code = 4753 oops = 1552 metadata = 14352 scopes data = 48256 scopes pcs = 47328 dependencies = 3232 handler table = 192 ... Alternatively, this option will give you some additional information about the total compiled bytecode size and the total compiled code size: $ mx benchmark dacapo:fop -- -XX:+CITime Hope this helps, Alex On 11.06.2017 21:55, ???????, ?????????? ?????????? wrote: > Hello, everyone > > I'm working on graal register allocation optimization and have troubles > with testing. I've seen many presentations that tell how to build and run > distinct tests, but they don't describe how to run whole suits, only > results of such testings. > Is there the way to run for example a DaCapo suit using mx or other tool? > How could one test not only performance, but change in generated codesize? > > P.S. graaltest also fails to give correct output because clean repository > fails in four tests and it uses seconds as the unit of measurement > > Thankfully, > Bachelor of Moscow Institute of Physics and Technology > Konstantin Novikov From doug.simon at oracle.com Mon Jun 12 12:50:00 2017 From: doug.simon at oracle.com (Doug Simon) Date: Mon, 12 Jun 2017 14:50:00 +0200 Subject: BIGV file format In-Reply-To: <1C295C8C-EDDD-40D9-A449-8D7E7198A914@oracle.com> References: <3c3cd0ac-db4d-ae3c-6113-0a2573a2e529@tu-dresden.de> <1C295C8C-EDDD-40D9-A449-8D7E7198A914@oracle.com> Message-ID: <1ECA7808-C3E8-44F1-B870-52FFE52EB241@oracle.com> Hi Alex, Are you able to provide more information about the tool you are developing/using that reads the BIGV format? This may help guide us in terms of what's important. -Doug > On 12 Jun 2017, at 10:18, Doug Simon wrote: > > Hi Alex, > > At this point in time, there is no documentation of the format other than the code (including BinaryGraphPrinter[1]). What's more, we are currently changing the format to improve the encoding. > > @Jaroslav: As part of the format changes, I think it wouldn't hurt to document the format (e.g., docs/BIGV.md). > > -Doug > > [1] https://github.com/graalvm/graal/blob/c3411d374452a7000e80c6000aa3ee0e8d8c4d41/compiler/src/org.graalvm.compiler.printer/src/org/graalvm/compiler/printer/BinaryGraphPrinter.java > >> On 12 Jun 2017, at 08:52, Alexander Senier wrote: >> >> Hi, >> >> I need to extend an existing tool to read the graphs output by Graal. 
>> The BinaryParser class in IGV and Graals BinaryGraphPrinter give some >> hints what to do, but I was wondering whether the format is documented >> somewhere? >> >> Thanks! >> >> Cheers, >> Alex >> >> -- >> Dipl.-Inf. Alexander Senier >> Scientific Assistant >> >> TU Dresden >> Faculty of Computer Science >> Institute of System Architecture >> Chair of Privacy and Data Security >> 01062 Dresden >> >> Tel.: +49 351 463-38719 >> Fax : +49 351 463-38255 > From alexander.senier at tu-dresden.de Tue Jun 13 14:46:55 2017 From: alexander.senier at tu-dresden.de (Alexander Senier) Date: Tue, 13 Jun 2017 16:46:55 +0200 Subject: BIGV file format In-Reply-To: <1ECA7808-C3E8-44F1-B870-52FFE52EB241@oracle.com> References: <3c3cd0ac-db4d-ae3c-6113-0a2573a2e529@tu-dresden.de> <1C295C8C-EDDD-40D9-A449-8D7E7198A914@oracle.com> <1ECA7808-C3E8-44F1-B870-52FFE52EB241@oracle.com> Message-ID: <07892d26-7d12-47db-0f9c-96b11a8991fa@tu-dresden.de> Hi Doug, sure, I'm happy to do that. We develop a toolset to partition security protocol implementations into interacting components. Based on the security guarantees (confidentiality, integrity) required by the elements of the protocol we divide it into partitions. The idea is that some partitions are untrusted and require no guarantees (e.g. networking) while others do (encryption). If components are isolated and interact over well-defined interfaces only, errors in untrusted partitions don't affect security critical ones. If we could do this for large software automatically, this would greatly decrease the trusted code size. This is where Graal comes into play. So far, our models are derived manually from a specification, which is tedious, error-prone and doesn't scale. The idea is that we use an existing implementation of a security protocol and build a sample app around it. For this application we perform a data flow analysis between subprograms. We may, however, need to analyze the control flow to identify security-critical branches in some cases (e.g. when checking whether a signature was correct). As for the our needs wrt. to the output format, the information I see in IGV right now looks sufficient. I don't think we require any graph from the compile steps after parsing for our purpose, so a dump level 0 with only one graph per method would be nice. From the output, we need be able to unambiguously identify the methods invoked so that we can annotate them (e.g. to mark the bounds of our model or map library functions to known primitives). In the future, we may also want to map back the log output to the source code to be able to create source code for the components we have identified. A standard binary format for which libraries exist would be great. I have no strong opinion on that - there are plenty of potential options (CBOR, BSON, XDR...). What is slightly annoying for integration into an automatic toolchain is the non-deterministic names of the log files. One single file with a fixed name would be nice here. But I guess that's a multithreading issue and it's not super critical. Hopefully I could make our intention clear. Thanks for asking! I'm interested in feedback if you see a better way to obtain the information we need. Cheers, Alex On 06/12/2017 02:50 PM, Doug Simon wrote: > Hi Alex, > > Are you able to provide more information about the tool you are developing/using that reads the BIGV format? This may help guide us in terms of what's important. > > -Doug > -- Dipl.-Inf. 
Alexander Senier Scientific Assistant TU Dresden Faculty of Computer Science Institute of System Architecture Chair of Privacy and Data Security 01062 Dresden Tel.: +49 351 463-38719 Fax : +49 351 463-38255 From doug.simon at oracle.com Tue Jun 13 15:29:49 2017 From: doug.simon at oracle.com (Doug Simon) Date: Tue, 13 Jun 2017 17:29:49 +0200 Subject: BIGV file format In-Reply-To: <07892d26-7d12-47db-0f9c-96b11a8991fa@tu-dresden.de> References: <3c3cd0ac-db4d-ae3c-6113-0a2573a2e529@tu-dresden.de> <1C295C8C-EDDD-40D9-A449-8D7E7198A914@oracle.com> <1ECA7808-C3E8-44F1-B870-52FFE52EB241@oracle.com> <07892d26-7d12-47db-0f9c-96b11a8991fa@tu-dresden.de> Message-ID: Hi Alex, Thanks for the background. Sounds like an interesting tool you are developing. One suggestion is to use Graal as an analysis framework instead of relying on graph dumping output. We do this ourselves to check certain invariants in the Graal code base with the CheckgraalInvariants[1] unit test. This way you don't need to force the code you're interesting in analyzing to be compiled Graal but simply feed the relevant jars to the tool. -Doug [1] https://github.com/graalvm/graal/blob/1c5e3a5e544335e85dd271853161bceb635ac376/compiler/src/org.graalvm.compiler.core.test/src/org/graalvm/compiler/core/test/CheckGraalInvariants.java > On 13 Jun 2017, at 16:46, Alexander Senier wrote: > > Hi Doug, > > sure, I'm happy to do that. We develop a toolset to partition security > protocol implementations into interacting components. Based on the > security guarantees (confidentiality, integrity) required by the > elements of the protocol we divide it into partitions. The idea is that > some partitions are untrusted and require no guarantees (e.g. > networking) while others do (encryption). If components are isolated and > interact over well-defined interfaces only, errors in untrusted > partitions don't affect security critical ones. If we could do this for > large software automatically, this would greatly decrease the trusted > code size. > > This is where Graal comes into play. So far, our models are derived > manually from a specification, which is tedious, error-prone and doesn't > scale. The idea is that we use an existing implementation of a security > protocol and build a sample app around it. For this application we > perform a data flow analysis between subprograms. We may, however, need > to analyze the control flow to identify security-critical branches in > some cases (e.g. when checking whether a signature was correct). > > As for the our needs wrt. to the output format, the information I see in > IGV right now looks sufficient. I don't think we require any graph from > the compile steps after parsing for our purpose, so a dump level 0 with > only one graph per method would be nice. From the output, we need be > able to unambiguously identify the methods invoked so that we can > annotate them (e.g. to mark the bounds of our model or map library > functions to known primitives). In the future, we may also want to map > back the log output to the source code to be able to create source code > for the components we have identified. > > A standard binary format for which libraries exist would be great. I > have no strong opinion on that - there are plenty of potential options > (CBOR, BSON, XDR...). What is slightly annoying for integration into an > automatic toolchain is the non-deterministic names of the log files. One > single file with a fixed name would be nice here. 
But I guess that's a > multithreading issue and it's not super critical. > > Hopefully I could make our intention clear. Thanks for asking! I'm > interested in feedback if you see a better way to obtain the information > we need. > > Cheers, > Alex > > On 06/12/2017 02:50 PM, Doug Simon wrote: >> Hi Alex, >> >> Are you able to provide more information about the tool you are developing/using that reads the BIGV format? This may help guide us in terms of what's important. >> >> -Doug >> > > -- > Dipl.-Inf. Alexander Senier > Scientific Assistant > > TU Dresden > Faculty of Computer Science > Institute of System Architecture > Chair of Privacy and Data Security > 01062 Dresden > > Tel.: +49 351 463-38719 > Fax : +49 351 463-38255 From aleksandar.prokopec at oracle.com Wed Jun 14 08:09:30 2017 From: aleksandar.prokopec at oracle.com (Aleksandar Prokopec) Date: Wed, 14 Jun 2017 10:09:30 +0200 Subject: Entry point publishing in Truffle OptimizedCallTarget, on AArch64 Message-ID: Dear Andrew, I have yesterday submitted a pull request to the Graal compiler, which should resolve a rare race condition that happens during Truffle's code installation. In short, the race condition existed because: after the code got installed with JVMCI, the "entryPoint" field in the "InstalledCode" class became visible to other threads, but the Truffle-level assumptions were not yet associated with that code [2]. My fix solves this by introducing the rule that Truffle OptimizedCallTarget's entry points addresses can only be jumped to if the lowest bit of the "entryPoint" field is set to "1". In other words, setting the lowest bit means that the "entryPoint" is published. The tail call code that gets patched at the beginning of the code of every OptimizedCallTarget must now not only check that the "entryPoint" is non-null, but also check that the lowest bit is "1". I have implemented the fix on x86, SPARC and AArch64. However, I was only able to run and test this x86 and SPARC, since we have no AArch64 machines. Since you've been involved with a lot of AArch64 maintenance in the past, I am assuming that you have access to a proper machine. If so, could you perhaps validate that my fix on AArch64 is correct (files under links [3] [4] [5]), or suggest changes? Note that the new functionality, "entryPoint" tagging, is currently disabled [6] on AArch64, since I was not sure about my fix. Thanks a lot, Alex [1] https://github.com/graalvm/graal/commit/28b5474a1f6882f30c883c33012cd1bac8c2a646 [2] https://github.com/graalvm/graal/blob/master/compiler/src/org.graalvm.compiler.truffle/src/org/graalvm/compiler/truffle/TruffleCompiler.java#L234 [3] https://github.com/graalvm/graal/commit/28b5474a1f6882f30c883c33012cd1bac8c2a646#diff-d53ecccd225be6a6ece41d8c64579c7a [4] https://github.com/graalvm/graal/commit/28b5474a1f6882f30c883c33012cd1bac8c2a646#diff-7a3deff5793b399e8a3b008e14fb14cf [5] https://github.com/graalvm/graal/commit/28b5474a1f6882f30c883c33012cd1bac8c2a646#diff-6a1d15e2ba69aa5ed34a906ecf895a1f [6] https://github.com/graalvm/graal/commit/28b5474a1f6882f30c883c33012cd1bac8c2a646#diff-b171c7113ea386f9a5af29d46cd4e7bcR238 From adinn at redhat.com Wed Jun 14 10:54:06 2017 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 14 Jun 2017 11:54:06 +0100 Subject: Entry point publishing in Truffle OptimizedCallTarget, on AArch64 In-Reply-To: References: Message-ID: Hi Aleksander, I have quickly eyeballed your patch and it looks ok. 
I'll try applying the patch and building it this afternoon (once I have got some other graal fixes I am working on out of the way). If it builds ok I will then take out the disabling check and try to run it. Is there a specific test I can use to ensure I exercise the changed code? regards, Andrew Dinn ----------- On 14/06/17 09:09, Aleksandar Prokopec wrote: > Dear Andrew, > > I have yesterday submitted a pull request to the Graal compiler, which > should resolve a rare race condition that happens during Truffle's code > installation. In short, the race condition existed because: after the > code got installed with JVMCI, the "entryPoint" field in the > "InstalledCode" class became visible to other threads, but the > Truffle-level assumptions were not yet associated with that code [2]. > > My fix solves this by introducing the rule that Truffle > OptimizedCallTarget's entry points addresses can only be jumped to if > the lowest bit of the "entryPoint" field is set to "1". In other words, > setting the lowest bit means that the "entryPoint" is published. The > tail call code that gets patched at the beginning of the code of every > OptimizedCallTarget must now not only check that the "entryPoint" is > non-null, but also check that the lowest bit is "1". > > I have implemented the fix on x86, SPARC and AArch64. However, I was > only able to run and test this x86 and SPARC, since we have no AArch64 > machines. Since you've been involved with a lot of AArch64 maintenance > in the past, I am assuming that you have access to a proper machine. If > so, could you perhaps validate that my fix on AArch64 is correct (files > under links [3] [4] [5]), or suggest changes? > > Note that the new functionality, "entryPoint" tagging, is currently > disabled [6] on AArch64, since I was not sure about my fix. > > Thanks a lot, > Alex > > [1] > https://github.com/graalvm/graal/commit/28b5474a1f6882f30c883c33012cd1bac8c2a646 > > > [2] > https://github.com/graalvm/graal/blob/master/compiler/src/org.graalvm.compiler.truffle/src/org/graalvm/compiler/truffle/TruffleCompiler.java#L234 > > > [3] > https://github.com/graalvm/graal/commit/28b5474a1f6882f30c883c33012cd1bac8c2a646#diff-d53ecccd225be6a6ece41d8c64579c7a > > > [4] > https://github.com/graalvm/graal/commit/28b5474a1f6882f30c883c33012cd1bac8c2a646#diff-7a3deff5793b399e8a3b008e14fb14cf > > > [5] > https://github.com/graalvm/graal/commit/28b5474a1f6882f30c883c33012cd1bac8c2a646#diff-6a1d15e2ba69aa5ed34a906ecf895a1f > > > [6] > https://github.com/graalvm/graal/commit/28b5474a1f6882f30c883c33012cd1bac8c2a646#diff-b171c7113ea386f9a5af29d46cd4e7bcR238 > > > -- regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From aleksandar.prokopec at oracle.com Wed Jun 14 12:09:38 2017 From: aleksandar.prokopec at oracle.com (Aleksandar Prokopec) Date: Wed, 14 Jun 2017 14:09:38 +0200 Subject: Entry point publishing in Truffle OptimizedCallTarget, on AArch64 In-Reply-To: References: Message-ID: Hi Andrew, Thanks a lot! Yes, pulling out the check in TruffleCompiler should enable this on AArch64. To make it even easier, I added a change with a flag that turns the new functionality on and off on AArch64 (should land on github in half an hour or so). On SPARC and x86, I was testing the fix with plain "mx unittest", and this helped me detect correctness issues. 
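As an aside for readers following the thread, the publication rule from the first message boils down to a small predicate over the entryPoint value. The snippet below is only a conceptual Java model (the real check is emitted as machine code in the OptimizedCallTarget entry stub, and on AArch64 it is gated behind the -Dgraal.AArch64EntryPointTagging=true flag mentioned later in this thread); the helper names are illustrative:

    // Conceptual model of the tagged entryPoint protocol described at the start
    // of this thread; entryPoint is the field in jdk.vm.ci.code.InstalledCode
    // referred to above (0 when the code is not runnable).
    static boolean isPublished(long entryPoint) {
        // The lowest bit is only set once the code *and* its Truffle assumptions
        // have been fully installed, so a zero or untagged value means the caller
        // must fall back to the interpreter/call boundary.
        return entryPoint != 0L && (entryPoint & 1L) != 0L;
    }

    static long codeAddress(long entryPoint) {
        // The tag bit has to be cleared before jumping to the machine code.
        return entryPoint & ~1L;
    }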
Best, Alex On 14.06.2017 12:54, Andrew Dinn wrote: > Hi Aleksander, > > I have quickly eyeballed your patch and it looks ok. I'll try applying > the patch and building it this afternoon (once I have got some other > graal fixes I am working on out of the way). > > If it builds ok I will then take out the disabling check and try to run > it. Is there a specific test I can use to ensure I exercise the changed > code? > > regards, > > > Andrew Dinn > ----------- > > On 14/06/17 09:09, Aleksandar Prokopec wrote: >> Dear Andrew, >> >> I have yesterday submitted a pull request to the Graal compiler, which >> should resolve a rare race condition that happens during Truffle's code >> installation. In short, the race condition existed because: after the >> code got installed with JVMCI, the "entryPoint" field in the >> "InstalledCode" class became visible to other threads, but the >> Truffle-level assumptions were not yet associated with that code [2]. >> >> My fix solves this by introducing the rule that Truffle >> OptimizedCallTarget's entry points addresses can only be jumped to if >> the lowest bit of the "entryPoint" field is set to "1". In other words, >> setting the lowest bit means that the "entryPoint" is published. The >> tail call code that gets patched at the beginning of the code of every >> OptimizedCallTarget must now not only check that the "entryPoint" is >> non-null, but also check that the lowest bit is "1". >> >> I have implemented the fix on x86, SPARC and AArch64. However, I was >> only able to run and test this x86 and SPARC, since we have no AArch64 >> machines. Since you've been involved with a lot of AArch64 maintenance >> in the past, I am assuming that you have access to a proper machine. If >> so, could you perhaps validate that my fix on AArch64 is correct (files >> under links [3] [4] [5]), or suggest changes? >> >> Note that the new functionality, "entryPoint" tagging, is currently >> disabled [6] on AArch64, since I was not sure about my fix. 
>> >> Thanks a lot, >> Alex >> >> [1] >> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_graal_commit_28b5474a1f6882f30c883c33012cd1bac8c2a646&d=DwIDaQ&c=RoP1YumCXCgaWHvlZYR8PQcxBKCX5YTpkKY057SbK10&r=F9qKVJ7-WS5ke7QixbHuGLRTnPdc51bZWn_11BBmt4s&m=nqpBoAHLwE9FcNDH-mvEOCWhxi3DPd5KVv87hCYV0fs&s=xR_uGNp9T1nrrYUvnneQ8qumHggTC_2Kyf_k8HgZHOE&e= >> >> >> [2] >> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_graal_blob_master_compiler_src_org.graalvm.compiler.truffle_src_org_graalvm_compiler_truffle_TruffleCompiler.java-23L234&d=DwIDaQ&c=RoP1YumCXCgaWHvlZYR8PQcxBKCX5YTpkKY057SbK10&r=F9qKVJ7-WS5ke7QixbHuGLRTnPdc51bZWn_11BBmt4s&m=nqpBoAHLwE9FcNDH-mvEOCWhxi3DPd5KVv87hCYV0fs&s=hHnh-d6lrCrovDLAqoxgojcuxMEwILqk3U9z6W-rCgc&e= >> >> >> [3] >> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_graal_commit_28b5474a1f6882f30c883c33012cd1bac8c2a646-23diff-2Dd53ecccd225be6a6ece41d8c64579c7a&d=DwIDaQ&c=RoP1YumCXCgaWHvlZYR8PQcxBKCX5YTpkKY057SbK10&r=F9qKVJ7-WS5ke7QixbHuGLRTnPdc51bZWn_11BBmt4s&m=nqpBoAHLwE9FcNDH-mvEOCWhxi3DPd5KVv87hCYV0fs&s=QFoNpzZDa5btWFKI7vrfPzOq4f39eqNtrVpCzXcs2m0&e= >> >> >> [4] >> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_graal_commit_28b5474a1f6882f30c883c33012cd1bac8c2a646-23diff-2D7a3deff5793b399e8a3b008e14fb14cf&d=DwIDaQ&c=RoP1YumCXCgaWHvlZYR8PQcxBKCX5YTpkKY057SbK10&r=F9qKVJ7-WS5ke7QixbHuGLRTnPdc51bZWn_11BBmt4s&m=nqpBoAHLwE9FcNDH-mvEOCWhxi3DPd5KVv87hCYV0fs&s=EJMEqOujL_qiUKdJZ2CtGPCpbt7z8dqEQEsjpPPvWt0&e= >> >> >> [5] >> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_graal_commit_28b5474a1f6882f30c883c33012cd1bac8c2a646-23diff-2D6a1d15e2ba69aa5ed34a906ecf895a1f&d=DwIDaQ&c=RoP1YumCXCgaWHvlZYR8PQcxBKCX5YTpkKY057SbK10&r=F9qKVJ7-WS5ke7QixbHuGLRTnPdc51bZWn_11BBmt4s&m=nqpBoAHLwE9FcNDH-mvEOCWhxi3DPd5KVv87hCYV0fs&s=vIJBLetDqbnS5RKo0TMXdCxKuzQNRGFOMBPTZ_XJ0kI&e= >> >> >> [6] >> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_graal_commit_28b5474a1f6882f30c883c33012cd1bac8c2a646-23diff-2Db171c7113ea386f9a5af29d46cd4e7bcR238&d=DwIDaQ&c=RoP1YumCXCgaWHvlZYR8PQcxBKCX5YTpkKY057SbK10&r=F9qKVJ7-WS5ke7QixbHuGLRTnPdc51bZWn_11BBmt4s&m=nqpBoAHLwE9FcNDH-mvEOCWhxi3DPd5KVv87hCYV0fs&s=lcRjsxWQo2lOr4GSwtqMdRQjd6gicBorHLaauTaFlcU&e= >> >> >> From adinn at redhat.com Wed Jun 14 12:57:26 2017 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 14 Jun 2017 13:57:26 +0100 Subject: New PR fixes issue with shared RawAddress instances in AArch4 address lowering Message-ID: The problems I had seen with reading of binary files in netbeans recurred once the AArch64 address lowering patch went live. I tracked it down to a problem with RawAddress instances being shared between reads with different loaded data types (more details in the PR). The fix is a fairly simple tweak to ensure that addresses are lowered one at a time. n.b. if lowering of a shared RawAddress from two different uses does actually result in the same displacement mode AArch64Address (because the loaded data types are the same) then the unique add in the lower method should ensure only a single AArch64Address appears in the graph. I submitted the following PR https://github.com/graalvm/graal/pull/224 which has passed checks and is sitting ready to be pulled. Could someone please review it? regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 
03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From adinn at redhat.com Wed Jun 14 15:45:03 2017 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 14 Jun 2017 16:45:03 +0100 Subject: Entry point publishing in Truffle OptimizedCallTarget, on AArch64 In-Reply-To: References: Message-ID: <5ee23a21-ec32-b5c2-0420-71ad349d39ee@redhat.com> Hi Aleksander, On 14/06/17 13:09, Aleksandar Prokopec wrote: > Yes, pulling out the check in TruffleCompiler should enable this on > AArch64. To make it even easier, I added a change with a flag that turns > the new functionality on and off on AArch64 (should land on github in > half an hour or so). It built ok on AArch64 using the latest head -- n.b. that's minus the switch you mentioned. > On SPARC and x86, I was testing the fix with plain "mx unittest", and > this helped me detect correctness issues. Well, it's not entirely clear that this is ok on AArch64 because the unit tests do not run cleanly on the latest head (23 failures). However, I ran the unit tests with your disabling check in place and then with the check removed and I saw the same number of failures. So, I think your patch is ok. I also fired up netbeans to exercise your patch i a heavily multi-threaded app. It ran without any errors manifesting (I also included my pending fix for the AArch64 address lowering code which is needed to avoid bytebuffer access errors). So, in sum: review passed, ship it! regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From adinn at redhat.com Wed Jun 14 16:09:04 2017 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 14 Jun 2017 17:09:04 +0100 Subject: Entry point publishing in Truffle OptimizedCallTarget, on AArch64 In-Reply-To: <5ee23a21-ec32-b5c2-0420-71ad349d39ee@redhat.com> References: <5ee23a21-ec32-b5c2-0420-71ad349d39ee@redhat.com> Message-ID: <390b91a7-dd59-de7a-dabc-fa6892feb55a@redhat.com> On 14/06/17 16:45, Andrew Dinn wrote: > Hi Aleksander, > > On 14/06/17 13:09, Aleksandar Prokopec wrote: > >> Yes, pulling out the check in TruffleCompiler should enable this on >> AArch64. To make it even easier, I added a change with a flag that turns >> the new functionality on and off on AArch64 (should land on github in >> half an hour or so). > > It built ok on AArch64 using the latest head -- n.b. that's minus the > switch you mentioned. > >> On SPARC and x86, I was testing the fix with plain "mx unittest", and >> this helped me detect correctness issues. > > Well, it's not entirely clear that this is ok on AArch64 because the > unit tests do not run cleanly on the latest head (23 failures). > > However, I ran the unit tests with your disabling check in place and > then with the check removed and I saw the same number of failures. So, I > think your patch is ok. Correction: Looking at the details of the test failures it appears that many of them are caused by an error that might be related to your code. Many of the tests are manifesting exceptions under method AArch64MacroAssembler.patchJumpTarget in a call to ConditionFlag.fromEncoding. 
Here is the offending code: int instruction = getInt(branch); int branchOffset = jumpTarget - branch; PatchLabelKind type = PatchLabelKind.fromEncoding(instruction); switch (type) { case BRANCH_CONDITIONALLY: ConditionFlag cf = ConditionFlag.fromEncoding(instruction >>> PatchLabelKind.INFORMATION_OFFSET); super.b(cf, branchOffset, branch); break; case BRANCH_UNCONDITIONALLY: super.b(branchOffset, branch); . . . So, there is something in the format of a generated branch that the assembler does not understand. If you are generating a TBZ that is susceptible to patching -- even if it is not actually installed -- then that might account for why I am seeing these specific unit test failures. I'll take a deeper look to see if I can diagnose whether the offending instruction is a TBZ and where it is being generated from. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From adinn at redhat.com Wed Jun 14 16:37:39 2017 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 14 Jun 2017 17:37:39 +0100 Subject: Entry point publishing in Truffle OptimizedCallTarget, on AArch64 In-Reply-To: <390b91a7-dd59-de7a-dabc-fa6892feb55a@redhat.com> References: <5ee23a21-ec32-b5c2-0420-71ad349d39ee@redhat.com> <390b91a7-dd59-de7a-dabc-fa6892feb55a@redhat.com> Message-ID: On 14/06/17 17:09, Andrew Dinn wrote: > Correction: > > Looking at the details of the test failures it appears that many of them > are caused by an error that might be related to your code. > > Many of the tests are manifesting exceptions under method > AArch64MacroAssembler.patchJumpTarget in a call to > ConditionFlag.fromEncoding. Here is the offending code: > > int instruction = getInt(branch); > int branchOffset = jumpTarget - branch; > PatchLabelKind type = PatchLabelKind.fromEncoding(instruction); > switch (type) { > case BRANCH_CONDITIONALLY: > ConditionFlag cf = > ConditionFlag.fromEncoding(instruction >>> > PatchLabelKind.INFORMATION_OFFSET); > super.b(cf, branchOffset, branch); > break; > case BRANCH_UNCONDITIONALLY: > super.b(branchOffset, branch); > . . . > > So, there is something in the format of a generated branch that the > assembler does not understand. If you are generating a TBZ that is > susceptible to patching -- even if it is not actually installed -- then > that might account for why I am seeing these specific unit test > failures. I'll take a deeper look to see if I can diagnose whether the > offending instruction is a TBZ and where it is being generated from. This is certainly the problem that is causing some of the unit tests to fail (I don't yet know about all of them). The implementation of tbz/nz is using the same encoding as cbz/nz to encode patch details into the instruction slot for a branch with an as yet unbound label. This is use later y the jump patch routine to construct the required jump instruction. So, with the current implementation of tbz that fails for two reasons. The patch info doesn't include the uimm6 argument needed to provide the test bit position. It also means that the patch code tries to patch the jump as a cbz/nz rather than a tbz/nz. I'll see if I can come up with a correct implementation and then see what that does to the failing unit test count. This probably requires a new type of patch info encoding (TEST_BRANCH_CONDITIONALLY.encoding) with a new layout for the other data that includes the uimm6 but position value. 
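To make the idea concrete, here is a sketch of what such an encoding and its patch case could look like, following the naming already used in patchJumpTarget above. The bit layout, the TEST_BRANCH_CONDITIONALLY constant and the tbz/tbnz overloads are assumptions for illustration only, not the actual fix:

    // Assumed placeholder layout written into the instruction slot when tbz/tbnz
    // is emitted against an unbound label:
    //   [ PatchLabelKind | negated (1 bit) | uimm6 bit index (6 bits) | register number ]
    static int encodeTestBranchPlaceholder(boolean negated, int uimm6, int regNumber) {
        int information = (regNumber << 7) | (uimm6 << 1) | (negated ? 1 : 0);
        return PatchLabelKind.TEST_BRANCH_CONDITIONALLY.encoding
                        | (information << PatchLabelKind.INFORMATION_OFFSET);
    }

    // Matching case in patchJumpTarget, next to BRANCH_CONDITIONALLY above:
    case TEST_BRANCH_CONDITIONALLY: {
        int information = instruction >>> PatchLabelKind.INFORMATION_OFFSET;
        boolean negated = (information & 1) != 0;
        int uimm6 = (information >>> 1) & 0x3F;   // test-bit position lost by the cbz/nz encoding
        int reg = information >>> 7;              // mapping back to a Register is elided here
        if (negated) {
            super.tbnz(reg, uimm6, branchOffset, branch);  // assumed overload
        } else {
            super.tbz(reg, uimm6, branchOffset, branch);   // assumed overload
        }
        break;
    }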
I'll let you know when I have a suitable patch.

regards,

Andrew Dinn
-----------
Senior Principal Software Engineer
Red Hat UK Ltd
Registered in England and Wales under Company Registration No. 03798903
Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander

From java at stefan-marr.de  Thu Jun 15 08:52:57 2017
From: java at stefan-marr.de (Stefan Marr)
Date: Thu, 15 Jun 2017 10:52:57 +0200
Subject: Unsafe access and aot-image building
Message-ID: 

Hi:

I am experimenting with using SVM/the AOT image building from GraalVM 0.24 for SOMns.

One difficulty is that I am not aware of much publicly available information. So, I am kind of flying blind here.

The next challenge is that SOMns still uses my own object model, which is implemented with unsafe operations. And, I'd really like to keep it, if at all possible.

With a bit of guidance, I got to the point where I get the following error:

error: Field AnalysisField is used as an offset in an unsafe operation, but no value recomputation found.

And the `fieldOffset` is the offset used by the unsafe access.

In code [1] this looks like this:

  public static final class ObjectDirectStorageLocation {

    private final long fieldOffset;

    public ObjectDirectStorageLocation(final ObjectLayout layout, final SlotDefinition slot,
        final int objFieldIdx) {
      super(layout, slot);
      fieldOffset = SObject.getObjectFieldOffset(objFieldIdx);
    }

The underlying problem is that this of course depends on functionality to determine the object offset, roughly something like this:

  private static long FIRST_OBJECT_FIELD_OFFSET = getFirstObjectFieldOffset();
  private static final long OBJECT_FIELD_LENGTH = getObjectFieldLength();
  public static final int NUM_OBJECT_FIELDS = 5;

  public static long getObjectFieldOffset(final int fieldIndex) {
    assert 0 <= fieldIndex && fieldIndex < NUM_OBJECT_FIELDS;
    return FIRST_OBJECT_FIELD_OFFSET + fieldIndex * 8 /* OBJECT_FIELD_LENGTH */;
  }

`getFirstObjectFieldOffset()` normally uses reflection, but with Peter's help, I used the svm.jar to build a replacement class, which is supposed to make sure that the image will use the correct offset for the object access at run time:

  @TargetClass(SObject.class)
  public final class SObjectReplacement {
    @RecomputeFieldValue(kind = Kind.FieldOffset, declClass = SMutableObject.class, name = "field1")
    @Alias
    private static long FIRST_OBJECT_FIELD_OFFSET;
  }

However, when I do this, it still causes the very same error. I am sure that SVM picks up the SObjectReplacement class, because it throws an error if the class is not marked final.

Any help, pointers, and suggestions would be very welcome. I suppose I am missing something to make the analysis happy with the replacement. But I have no clue what that might be. Do I need to do something with the `fieldOffset` field itself, instead of the fields it relies indirectly on?
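[Editorial sketch, not part of the original message: for readers unfamiliar with the pattern, this is roughly what a reflection-based getFirstObjectFieldOffset() looks like on HotSpot. SMutableObject and the field name "field1" are taken from the message above; everything else is illustrative rather than the actual SOMns source. An offset resolved this way is exactly the kind of value that @RecomputeFieldValue(kind = Kind.FieldOffset) has to recompute for the native image.]

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    // Illustrative only: not the actual SOMns source.
    final class ReflectiveFieldOffset {
        private static final Unsafe UNSAFE = loadUnsafe();

        private static Unsafe loadUnsafe() {
            try {
                Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
                theUnsafe.setAccessible(true);
                return (Unsafe) theUnsafe.get(null);
            } catch (ReflectiveOperationException e) {
                throw new AssertionError(e);
            }
        }

        // Resolves a field offset reflectively at class-initialization time, e.g.
        // fieldOffset(SMutableObject.class, "field1") for the first object field.
        // Such a value is only valid for the HotSpot layout, which is why it must
        // be recomputed when the image is built.
        static long fieldOffset(Class<?> holder, String fieldName) {
            try {
                return UNSAFE.objectFieldOffset(holder.getDeclaredField(fieldName));
            } catch (NoSuchFieldException e) {
                throw new AssertionError(e);
            }
        }
    }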
Thanks Stefan [1] https://github.com/smarr/SOMns/blob/master/src/som/interpreter/objectstorage/StorageLocation.java#L152 -- Stefan Marr Johannes Kepler Universit?t Linz http://stefan-marr.de/research/ From aleksandar.prokopec at oracle.com Thu Jun 15 09:17:07 2017 From: aleksandar.prokopec at oracle.com (Aleksandar Prokopec) Date: Thu, 15 Jun 2017 11:17:07 +0200 Subject: Entry point publishing in Truffle OptimizedCallTarget, on AArch64 In-Reply-To: References: <5ee23a21-ec32-b5c2-0420-71ad349d39ee@redhat.com> <390b91a7-dd59-de7a-dabc-fa6892feb55a@redhat.com> Message-ID: <57fb8ec9-63fe-0faa-6799-0d49f0502e6e@oracle.com> Hi Andrew, Of course, I should have taken a look at the patching code more closely. It looks like this will definitely require adding the TEST_BRANCH_CONDITIONALLY value, and doing a different emit during patching based on that. Btw, you probably saw it, but in my latest push, the new functionality is enabled with the "-Dgraal. AArch64EntryPointTagging=true" option. Thanks a lot for taking a look at this, Alex On 14.06.2017 18:37, Andrew Dinn wrote: > On 14/06/17 17:09, Andrew Dinn wrote: >> Correction: >> >> Looking at the details of the test failures it appears that many of them >> are caused by an error that might be related to your code. >> >> Many of the tests are manifesting exceptions under method >> AArch64MacroAssembler.patchJumpTarget in a call to >> ConditionFlag.fromEncoding. Here is the offending code: >> >> int instruction = getInt(branch); >> int branchOffset = jumpTarget - branch; >> PatchLabelKind type = PatchLabelKind.fromEncoding(instruction); >> switch (type) { >> case BRANCH_CONDITIONALLY: >> ConditionFlag cf = >> ConditionFlag.fromEncoding(instruction >>> >> PatchLabelKind.INFORMATION_OFFSET); >> super.b(cf, branchOffset, branch); >> break; >> case BRANCH_UNCONDITIONALLY: >> super.b(branchOffset, branch); >> . . . >> >> So, there is something in the format of a generated branch that the >> assembler does not understand. If you are generating a TBZ that is >> susceptible to patching -- even if it is not actually installed -- then >> that might account for why I am seeing these specific unit test >> failures. I'll take a deeper look to see if I can diagnose whether the >> offending instruction is a TBZ and where it is being generated from. > This is certainly the problem that is causing some of the unit tests to > fail (I don't yet know about all of them). > > The implementation of tbz/nz is using the same encoding as cbz/nz to > encode patch details into the instruction slot for a branch with an as > yet unbound label. This is use later y the jump patch routine to > construct the required jump instruction. So, with the current > implementation of tbz that fails for two reasons. The patch info doesn't > include the uimm6 argument needed to provide the test bit position. It > also means that the patch code tries to patch the jump as a cbz/nz > rather than a tbz/nz. > > I'll see if I can come up with a correct implementation and then see > what that does to the failing unit test count. This probably requires a > new type of patch info encoding (TEST_BRANCH_CONDITIONALLY.encoding) > with a new layout for the other data that includes the uimm6 but > position value. I'll let you know when I have a suitable patch. > > regards, > > > Andrew Dinn > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in England and Wales under Company Registration No. 
03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From vojin.jovanovic at oracle.com Thu Jun 15 23:03:01 2017 From: vojin.jovanovic at oracle.com (Vojin Jovanovic) Date: Fri, 16 Jun 2017 01:03:01 +0200 Subject: Unsafe access and aot-image building In-Reply-To: References: Message-ID: Hi Stefan, What fails is the verification that use to avoid accidental SEGFAULTS that requires the `fieldOffset` to be recomputed which is not the case here. What you did is correct, but our verification is too narrow to figure that out. We will find a way to loosen this verification step for the future. For now, you can disable offset verification by adding -H:-ThrowUnsafeOffsetErrors to your image build command. Hope this helps. Cheers, -- Vojin On 6/15/17 10:52 AM, Stefan Marr wrote: > Hi: > > I am experimenting with using SVM/the AOT image building from GraalVM 0.24 for SOMns. > > One difficulty is that I am not aware of much publicly available information. > So, I am kind of flying blind here. > > The next challenge is that SOMns still uses my own object model, which is implemented with unsafe operations. And, I?d really like to keep it, if at all possible. > > With a bit of guidance, I got so far that I get the following error: > > error: Field AnalysisField is used as an offset in an unsafe operation, but no value recomputation found. > > And the `fieldOffset` is the offset used by the unsafe access. > > In code [1] this looks like this: > > public static final class ObjectDirectStorageLocation { > > private final long fieldOffset; > > public ObjectDirectStorageLocation(final ObjectLayout layout, final SlotDefinition slot, > final int objFieldIdx) { > super(layout, slot); > fieldOffset = SObject.getObjectFieldOffset(objFieldIdx); > } > > > > The underlying problem is that this of course depends on functionality to determine the object offset, roughly something like this: > > private static long FIRST_OBJECT_FIELD_OFFSET = getFirstObjectFieldOffset(); > private static final long OBJECT_FIELD_LENGTH = getObjectFieldLength(); > public static final int NUM_OBJECT_FIELDS = 5; > > public static long getObjectFieldOffset(final int fieldIndex) { > assert 0 <= fieldIndex && fieldIndex < NUM_OBJECT_FIELDS; > return FIRST_OBJECT_FIELD_OFFSET + fieldIndex * 8 /* OBJECT_FIELD_LENGTH */; > } > > `getFirstObjectFieldOffset()` is normally using reflection, but Peter?s help, I used the svm.jar to build a replacement class, which is supposed to make sure that the image will use the correct offset for the object access at run time: > > @TargetClass(SObject.class) > public final class SObjectReplacement { > @RecomputeFieldValue(kind = Kind.FieldOffset, declClass = SMutableObject.class, name = "field1") > @Alias > private static long FIRST_OBJECT_FIELD_OFFSET; > } > > However, when I do this, it still has cause the very same error. > I am sure that SVM picks up the SObjectReplacement class, because it throws an error if the class is not marked final. > > Any help, pointers, and suggestions would be very welcome. > I suppose I am missing something to make the analysis happy with the replacement. But I have no clue what that might be. > Do I need to do something with the `fieldOffset` field itself, instead of the fields it relies indirectly on? 
> > Thanks > Stefan > > > > [1] https://github.com/smarr/SOMns/blob/master/src/som/interpreter/objectstorage/StorageLocation.java#L152 > From christian.wimmer at oracle.com Thu Jun 15 23:19:24 2017 From: christian.wimmer at oracle.com (Christian Wimmer) Date: Thu, 15 Jun 2017 16:19:24 -0700 Subject: Unsafe access and aot-image building In-Reply-To: References: Message-ID: <8916b3da-c760-79d4-030d-63768f6bfc96@oracle.com> Stefan, You might not have all necessary substitutions in place yet. If the ObjectDirectStorageLocation instance is created during native image generation, then it still has the wrong field offset in place. In that case, you need a separate substitution also for ObjectDirectStorageLocation.fieldOffset Even if you don't have such instances in the image (yet), you can add the substitution, it doesn't hurt. And then the verification will pass too. If you access a field directly via its offset, you have to register that field in a Feature.beforeAnalysis method with registerAsUnsafeWritten() In general, you have to be very careful when accessing fields with unsafe and when handling raw field offsets. For example, field order is different between HotSpot and Substrate VM. So what you think is the "first field" might no longer be the first field in the image. And finding all places where a field offset might leak to is non trivial. -Christian On 06/15/2017 04:03 PM, Vojin Jovanovic wrote: > Hi Stefan, > > What fails is the verification that use to avoid accidental SEGFAULTS > that requires the `fieldOffset` to be recomputed which is not the case > here. What you did is correct, but our verification is too narrow to > figure that out. We will find a way to loosen this verification step for > the future. > > For now, you can disable offset verification by adding > > -H:-ThrowUnsafeOffsetErrors > > to your image build command. > > Hope this helps. > > Cheers, > > -- Vojin > > On 6/15/17 10:52 AM, Stefan Marr wrote: >> Hi: >> >> I am experimenting with using SVM/the AOT image building from GraalVM >> 0.24 for SOMns. >> >> One difficulty is that I am not aware of much publicly available >> information. >> So, I am kind of flying blind here. >> >> The next challenge is that SOMns still uses my own object model, which >> is implemented with unsafe operations. And, I?d really like to keep >> it, if at all possible. >> >> With a bit of guidance, I got so far that I get the following error: >> >> error: Field >> AnalysisField> accessed: false reads: true written: true> is used as an offset in an >> unsafe operation, but no value recomputation found. >> >> And the `fieldOffset` is the offset used by the unsafe access. 
>> >> In code [1] this looks like this: >> >> public static final class ObjectDirectStorageLocation { >> >> private final long fieldOffset; >> >> public ObjectDirectStorageLocation(final ObjectLayout layout, >> final SlotDefinition slot, >> final int objFieldIdx) { >> super(layout, slot); >> fieldOffset = SObject.getObjectFieldOffset(objFieldIdx); >> } >> >> >> >> The underlying problem is that this of course depends on functionality >> to determine the object offset, roughly something like this: >> >> private static long FIRST_OBJECT_FIELD_OFFSET = >> getFirstObjectFieldOffset(); >> private static final long OBJECT_FIELD_LENGTH = >> getObjectFieldLength(); >> public static final int NUM_OBJECT_FIELDS = 5; >> >> public static long getObjectFieldOffset(final int fieldIndex) { >> assert 0 <= fieldIndex && fieldIndex < NUM_OBJECT_FIELDS; >> return FIRST_OBJECT_FIELD_OFFSET + fieldIndex * 8 /* >> OBJECT_FIELD_LENGTH */; >> } >> >> `getFirstObjectFieldOffset()` is normally using reflection, but >> Peter?s help, I used the svm.jar to build a replacement class, which >> is supposed to make sure that the image will use the correct offset >> for the object access at run time: >> >> @TargetClass(SObject.class) >> public final class SObjectReplacement { >> @RecomputeFieldValue(kind = Kind.FieldOffset, declClass = >> SMutableObject.class, name = "field1") >> @Alias >> private static long FIRST_OBJECT_FIELD_OFFSET; >> } >> >> However, when I do this, it still has cause the very same error. >> I am sure that SVM picks up the SObjectReplacement class, because it >> throws an error if the class is not marked final. >> >> Any help, pointers, and suggestions would be very welcome. >> I suppose I am missing something to make the analysis happy with the >> replacement. But I have no clue what that might be. >> Do I need to do something with the `fieldOffset` field itself, instead >> of the fields it relies indirectly on? >> >> Thanks >> Stefan >> >> >> >> [1] >> https://github.com/smarr/SOMns/blob/master/src/som/interpreter/objectstorage/StorageLocation.java#L152 >> >> > From adinn at redhat.com Fri Jun 16 10:09:02 2017 From: adinn at redhat.com (Andrew Dinn) Date: Fri, 16 Jun 2017 11:09:02 +0100 Subject: Entry point publishing in Truffle OptimizedCallTarget, on AArch64 In-Reply-To: <57fb8ec9-63fe-0faa-6799-0d49f0502e6e@oracle.com> References: <5ee23a21-ec32-b5c2-0420-71ad349d39ee@redhat.com> <390b91a7-dd59-de7a-dabc-fa6892feb55a@redhat.com> <57fb8ec9-63fe-0faa-6799-0d49f0502e6e@oracle.com> Message-ID: Hi Aleksandr, I fixed up the tbz implementation and removed the guard option/checks and this allows all the unit tests which were broken by this change on AArch64 to complete successfully. There are still 12 CountedLoop tests that are failing but they relate to a different issue (SafeSignedDiv node is found to have null before state during snippet replacement). I'll investigate that separately. The PR to fix the tbz code is here https://github.com/graalvm/graal/pull/225 regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 
03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander

From aleksandar.prokopec at oracle.com  Fri Jun 16 10:14:15 2017
From: aleksandar.prokopec at oracle.com (Aleksandar Prokopec)
Date: Fri, 16 Jun 2017 12:14:15 +0200
Subject: Entry point publishing in Truffle OptimizedCallTarget, on AArch64
In-Reply-To: 
References: <5ee23a21-ec32-b5c2-0420-71ad349d39ee@redhat.com>
 <390b91a7-dd59-de7a-dabc-fa6892feb55a@redhat.com>
 <57fb8ec9-63fe-0faa-6799-0d49f0502e6e@oracle.com>
Message-ID: 

Hi Andrew,

Thanks a lot for fixing the tbz implementation!
I will go through your PR now.

Regards,
Alex

On 16.06.2017 12:09, Andrew Dinn wrote:
> Hi Aleksandr,
>
> I fixed up the tbz implementation and removed the guard option/checks
> and this allows all the unit tests which were broken by this change on
> AArch64 to complete successfully.
>
> There are still 12 CountedLoop tests that are failing but they relate to
> a different issue (SafeSignedDiv node is found to have null before state
> during snippet replacement). I'll investigate that separately.
>
> The PR to fix the tbz code is here
>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_graal_pull_225&d=DwICaQ&c=RoP1YumCXCgaWHvlZYR8PQcxBKCX5YTpkKY057SbK10&r=F9qKVJ7-WS5ke7QixbHuGLRTnPdc51bZWn_11BBmt4s&m=q18pEVlu0g0G9XUD3XwLb_E4r5bT-ffrg_qkidgSuPI&s=OJvs00Ebd8EhLrvrWLZmnafEP-hrGhndcVCYF1ql_yE&e=
>
> regards,
>
>
> Andrew Dinn
> -----------
> Senior Principal Software Engineer
> Red Hat UK Ltd
> Registered in England and Wales under Company Registration No. 03798903
> Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander

From adinn at redhat.com  Fri Jun 16 11:13:57 2017
From: adinn at redhat.com (Andrew Dinn)
Date: Fri, 16 Jun 2017 12:13:57 +0100
Subject: RFR: PR to fix failing CountedLoop unit tests
Message-ID: <7d952b0e-52ce-663b-00fa-7b72e568f344@redhat.com>

After fixing the issues with tbz on AArch64 there were still 12 test failures in the CountedLoop tests. It turns out they were to do with the way Div/Rem nodes were being replaced with snippets including some underlying AArch64-specific SafeDiv/Rem nodes. The following PR fixes the tests:

https://github.com/graalvm/graal/pull/226

The SafeDiv/Rem nodes override the corresponding Div/Rem nodes, inheriting their state and behaviour. The latter includes a beforeState used to pass a FrameState in the call to the generator's emitDiv/Rem methods. The snippet deals with any cases that might generate exceptions. Hence the SafeDiv/Rem nodes don't need to be assigned a beforeState. Ditto, the generator doesn't need to construct a FrameState to pass to emitDiv/Rem. Unfortunately, the SafeDiv/Rem nodes inherit their generate call and so inherit an intervening call to gen.state(this). The problem is fixed by overriding generate on classes DivNode, UnsignedDivNode, RemNode and UnsignedRemNode to make the same call as their parent except with the FrameState argument passed as null.

regards,

Andrew Dinn
-----------
Senior Principal Software Engineer
Red Hat UK Ltd
Registered in England and Wales under Company Registration No.
03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander

From jaroslav.tulach at oracle.com  Mon Jun 19 12:16:28 2017
From: jaroslav.tulach at oracle.com (Jaroslav Tulach)
Date: Mon, 19 Jun 2017 14:16:28 +0200
Subject: NetBeans formatter for mx projects
Message-ID: <1628943.Cv1offkIYD@pracovni>

Dear NetBeans IDE users,
since today you can instruct your favorite IDE to format sources on save in the exact format requested by your mx configuration. The following steps are needed:

1. get the most recent mx - e.g. call `mx update`
2. regenerate IDE projects - e.g. call `mx netbeansinit`
3. install the formatter module into your IDE:
   http://plugins.netbeans.org/plugin/50877/eclipse-code-formatter-for-java-eclipse-mars-4-5

That is all. Once you save a file, it shall be properly reformatted thanks to
https://github.com/graalvm/mx/commit/fb045df21796d36dc57fcd04f24773d2c4cce967
and the hard work of Benno.

Enjoy and send many thanks to Benno who created the formatter module.
-jt

From adinn at redhat.com  Mon Jun 19 13:02:30 2017
From: adinn at redhat.com (Andrew Dinn)
Date: Mon, 19 Jun 2017 14:02:30 +0100
Subject: RFR: null checks for architectures which cannot fold uncompress into address
Message-ID: <0bed12bd-f053-a9c3-b39e-a6dca8a0ab5f@redhat.com>

Hi,

I just submitted a PR to fix a problem that manifests on AArch64 and, possibly, on SPARC.

https://github.com/graalvm/graal/pull/227

The problem on AArch64 is that null checks are still being inserted before reads from narrow oop base addresses -- whereas on x86 the read is used to perform an implicit null check. This happens when UseTrappingNullChecksPhase detects a Read from an oop base occurring within the scope of an enclosing IF IsNull(oop) test. The IF is successfully removed but the Read fails to be recognised as referring to the same oop as the IsNull check. So, instead of the Read being used to trap the null case it is prefixed with a separate NullCheck node.

The failure happens because the oop compare assumes that address lowering has removed the intervening CompressNode which feeds the read. AArch64 cannot convert field addresses to the required form to achieve this result because it does not provide a shifted register with offset addressing mode. The fix is to make the comparison smarter, indirecting through any CompressionNode found as the base or index of the read address. I suspect this will also help on SPARC since it too is not able to fold out CompressionNode instances in its address lowering.

See the PR for full details.

regards,

Andrew Dinn
-----------
Senior Principal Software Engineer
Red Hat UK Ltd
Registered in England and Wales under Company Registration No. 03798903
Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander

From java at stefan-marr.de  Mon Jun 19 16:08:07 2017
From: java at stefan-marr.de (Stefan Marr)
Date: Mon, 19 Jun 2017 18:08:07 +0200
Subject: Instrumentation API Basics, Debugger, and Threading
Message-ID: <08FBF2EE-1DF1-477D-941A-D6452ACCD10F@stefan-marr.de>

Hi:

I am having issues tracking down what I think is a race condition in the instrumentation code for the debugger.

From time to time, I see the following assertion failing [1] (the second one):

  assert source.getSteppingLocation() == SteppingLocation.AFTER_CALL;
  // there is only one binding that can lead to a after event
  if (stepping.get()) {
      assert source.getContext().lookupExecutionEventNode(callBinding) == source;  <-- This one fails

So, now I am wondering whether my assumptions about how things work are correct.
Are any of the nodes involved here created on-demand? Could it be that `source` is somehow already outdated at this line? I have a program that runs two threads and sets breakpoints and does stepping on the same user code. However, sometimes, the first thread doesn?t stop where it should, and continues instead. And the above assertion is failing during debugging. The two might be related, but I can?t tell for sure. I guess, what I am asking for is a basic description of the general assumptions made in this code in the DebuggerSession. Thanks Stefan [1] https://github.com/graalvm/graal/blob/master/truffle/src/com.oracle.truffle.api.debug/src/com/oracle/truffle/api/debug/DebuggerSession.java#L682 -- Stefan Marr Johannes Kepler Universit?t Linz http://stefan-marr.de/research/ From adinn at redhat.com Tue Jun 20 14:20:34 2017 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 20 Jun 2017 15:20:34 +0100 Subject: RFR: Implement PrefetchAllocate for AArch64 Message-ID: <1d122c4d-dd09-6b83-98c9-8dc54f6c0459@redhat.com> I just submitted a PR for $SUBJECT https://github.com/graalvm/graal/pull/228 The implementation is complete and functional and all unit tests pass so I think it is ready to be integrated (full details in the PR). One word of warning is in order (again full details are in the PR). I expected all prefetch instructions to use a base register plus small displacement (3, 4 and 5 * cache line size). However, it appears that some eager constant coalescing in FixReads.replaceConstantInputs can lead to prefetch instructions being generated with offsets incremented by the allocated object size. If that size is large this can foil displacement embedding, forcing some ugly constant loads that are actually easily avoidable. I think this is a problem in FixReads, which seems to be pre-empting decisions better left to the AddressLowering implementation. I would be interested to hear opinions from other devs. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From adinn at redhat.com Tue Jun 20 14:39:59 2017 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 20 Jun 2017 15:39:59 +0100 Subject: Travis build is currently borked Message-ID: It seems there is no valid jdk9 available on the travis build host See https://travis-ci.org/graalvm/graal/jobs/244940121 . . . $ ${JAVA_HOME}/bin/java -version /home/travis/.travis/job_stages: line 54: /home/travis/build/graalvm/graal/../jdk-9/bin/java: No such file or directory . . . regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From doug.simon at oracle.com Tue Jun 20 15:15:26 2017 From: doug.simon at oracle.com (Doug Simon) Date: Tue, 20 Jun 2017 17:15:26 +0200 Subject: Travis build is currently borked In-Reply-To: References: Message-ID: <8F9ED7C1-E925-4A24-897A-847A5BF3060B@oracle.com> This is incredibly frustrating. Looks like we're forced to update to 9-ea+174 now. I'll start the process... We really need https://github.com/travis-ci/travis-ci/issues/7337 to be resolved. -Doug > On 20 Jun 2017, at 16:39, Andrew Dinn wrote: > > It seems there is no valid jdk9 available on the travis build host > > See https://travis-ci.org/graalvm/graal/jobs/244940121 > > . . . 
> $ ${JAVA_HOME}/bin/java -version > > /home/travis/.travis/job_stages: line 54: > /home/travis/build/graalvm/graal/../jdk-9/bin/java: No such file or > directory > . . . > > regards, > > > Andrew Dinn > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in England and Wales under Company Registration No. 03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From adinn at redhat.com Tue Jun 20 15:42:14 2017 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 20 Jun 2017 16:42:14 +0100 Subject: Travis build is currently borked In-Reply-To: <8F9ED7C1-E925-4A24-897A-847A5BF3060B@oracle.com> References: <8F9ED7C1-E925-4A24-897A-847A5BF3060B@oracle.com> Message-ID: <37faa22a-cdc1-e8ba-65fb-e360c7b3e18e@redhat.com> Hi Doug, On 20/06/17 16:15, Doug Simon wrote: > This is incredibly frustrating. Looks like we're forced to update to > 9-ea+174 now. I'll start the process... Thanks for looking into this. Does the current graal work on 9-ea+174 or will that require some fixes to jvmci/graal code? > We really need https://github.com/travis-ci/travis-ci/issues/7337 to > be resolved. Hmm, well -- I am not sure but I think the latest Red Hat jdk8 releases include all the 8u121 bug fixes ;-) regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From doug.simon at oracle.com Tue Jun 20 19:34:48 2017 From: doug.simon at oracle.com (Doug Simon) Date: Tue, 20 Jun 2017 21:34:48 +0200 Subject: Travis build is currently borked In-Reply-To: <37faa22a-cdc1-e8ba-65fb-e360c7b3e18e@redhat.com> References: <8F9ED7C1-E925-4A24-897A-847A5BF3060B@oracle.com> <37faa22a-cdc1-e8ba-65fb-e360c7b3e18e@redhat.com> Message-ID: > On 20 Jun 2017, at 17:42, Andrew Dinn wrote: > > Hi Doug, > > On 20/06/17 16:15, Doug Simon wrote: >> This is incredibly frustrating. Looks like we're forced to update to >> 9-ea+174 now. I'll start the process... > > Thanks for looking into this. > > Does the current graal work on 9-ea+174 or will that require some fixes > to jvmci/graal code? Yes, there are some minor fixes (mostly related to https://bugs.openjdk.java.net/browse/JDK-8180785) for which I already have an internal PR. > >> We really need https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_travis-2Dci_travis-2Dci_issues_7337&d=DwICaQ&c=RoP1YumCXCgaWHvlZYR8PQcxBKCX5YTpkKY057SbK10&r=i1-Hef1Qrt47JSTmUR8-SfhVlDSGnFCGV-TedFESCK4&m=VNTQTi3OamBY3G86X96RPm7WpLZ2bRHSeQeQtQaMg5k&s=j2p6Ub6Ob1JYd4nrF-ChKeFEIAh82tNldLFJHkTC2_Y&e= to >> be resolved. > > Hmm, well -- I am not sure but I think the latest Red Hat jdk8 releases > include all the 8u121 bug fixes ;-) Maybe you can convince the Travis team to provide Red Hat binaries in addition to Ubuntu binaries ;-) -Doug From adinn at redhat.com Wed Jun 21 14:32:55 2017 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 21 Jun 2017 15:32:55 +0100 Subject: RFR: New test for direct byte buffers Message-ID: <73ac6f86-402a-8d85-40e5-a37777429b30@redhat.com> As requested, I have created some new tests for direct byte buffers which are submitted via the following PR: https://github.com/graalvm/graal/pull/229 This passes unit test on x86 and AArch64. Also, I reverted the tweak to the AArch64 address lowering patch and found that it failed on AArch64 as expected i.e. 
it would have caught the original error (yeah, I also eyeballed the generated code and it did indeed fail for the expected reason). regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From adinn at redhat.com Thu Jun 22 12:00:57 2017 From: adinn at redhat.com (Andrew Dinn) Date: Thu, 22 Jun 2017 13:00:57 +0100 Subject: Problem building latest head with latest jdk9 Message-ID: I had to patch the latest head as below to get it to build. Did someone forget to update this mock class? regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander ----- 8 -------- 8 -------- 8 -------- 8 -------- 8 -------- 8 --- diff --git a/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/hotspot/test/HotSpotGraalMBeanTest.java b/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/hotspot/test/HotSpotGraalMBeanTest.java index 7c0b0a8..7e32496 100644 --- a/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/hotspot/test/HotSpotGraalMBeanTest.java +++ b/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/hotspot/test/HotSpotGraalMBeanTest.java @@ -626,6 +626,12 @@ public class HotSpotGraalMBeanTest { } @Override + public ResolvedJavaType getHostClass() { + throw new UnsupportedOperationException(); + } + + + @Override public boolean isInstance(JavaConstant obj) { throw new UnsupportedOperationException(); } ----- 8 -------- 8 -------- 8 -------- 8 -------- 8 -------- 8 --- From jaroslav.tulach at oracle.com Fri Jun 23 06:22:32 2017 From: jaroslav.tulach at oracle.com (Jaroslav Tulach) Date: Fri, 23 Jun 2017 08:22:32 +0200 Subject: Problem building latest head with latest jdk9 In-Reply-To: References: Message-ID: <1612294.0DAaYLfB3y@pracovni> On ?tvrtek 22. ?ervna 2017 13:00:57 CEST Andrew Dinn wrote: > I had to patch the latest head as below to get it to build. Did someone > forget to update this mock class? I wrote this Mock class and it compiled when I wrote it. Looks like things changed incompatibly since then. Probably nobody expected the HotSpotResolvedObjectType would be implemented by an independent party. -jt > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in England and Wales under Company Registration No. 
03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander > > > ----- 8 -------- 8 -------- 8 -------- 8 -------- 8 -------- 8 --- > diff --git > a/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/ho > tspot/test/HotSpotGraalMBeanTest.java > b/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/h > otspot/test/HotSpotGraalMBeanTest.java index 7c0b0a8..7e32496 100644 > --- > a/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/ho > tspot/test/HotSpotGraalMBeanTest.java +++ > b/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/ho > tspot/test/HotSpotGraalMBeanTest.java @@ -626,6 +626,12 @@ public class > HotSpotGraalMBeanTest { > } > > @Override > + public ResolvedJavaType getHostClass() { > + throw new UnsupportedOperationException(); > + } > + > + > + @Override > public boolean isInstance(JavaConstant obj) { > throw new UnsupportedOperationException(); > } > ----- 8 -------- 8 -------- 8 -------- 8 -------- 8 -------- 8 --- From doug.simon at oracle.com Fri Jun 23 09:33:54 2017 From: doug.simon at oracle.com (Doug Simon) Date: Fri, 23 Jun 2017 11:33:54 +0200 Subject: Problem building latest head with latest jdk9 In-Reply-To: <1612294.0DAaYLfB3y@pracovni> References: <1612294.0DAaYLfB3y@pracovni> Message-ID: <7C4E767A-2FBF-4D05-AC97-8FC89D2801AC@oracle.com> The fix has just landed on github: https://github.com/graalvm/graal/commit/4cbc5cd7e0951d1ebd156bad317749224e7d2589 > On 23 Jun 2017, at 08:22, Jaroslav Tulach wrote: > > On ?tvrtek 22. ?ervna 2017 13:00:57 CEST Andrew Dinn wrote: >> I had to patch the latest head as below to get it to build. Did someone >> forget to update this mock class? > > I wrote this Mock class and it compiled when I wrote it. Looks like things > changed incompatibly since then. Probably nobody expected the > HotSpotResolvedObjectType would be implemented by an independent party. > > -jt > > >> ----------- >> Senior Principal Software Engineer >> Red Hat UK Ltd >> Registered in England and Wales under Company Registration No. 
03798903 >> Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander >> >> >> ----- 8 -------- 8 -------- 8 -------- 8 -------- 8 -------- 8 --- >> diff --git >> a/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/ho >> tspot/test/HotSpotGraalMBeanTest.java >> b/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/h >> otspot/test/HotSpotGraalMBeanTest.java index 7c0b0a8..7e32496 100644 >> --- >> a/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/ho >> tspot/test/HotSpotGraalMBeanTest.java +++ >> b/compiler/src/org.graalvm.compiler.hotspot.test/src/org/graalvm/compiler/ho >> tspot/test/HotSpotGraalMBeanTest.java @@ -626,6 +626,12 @@ public class >> HotSpotGraalMBeanTest { >> } >> >> @Override >> + public ResolvedJavaType getHostClass() { >> + throw new UnsupportedOperationException(); >> + } >> + >> + >> + @Override >> public boolean isInstance(JavaConstant obj) { >> throw new UnsupportedOperationException(); >> } >> ----- 8 -------- 8 -------- 8 -------- 8 -------- 8 -------- 8 --- > > From java at stefan-marr.de Thu Jun 29 13:24:25 2017 From: java at stefan-marr.de (Stefan Marr) Date: Thu, 29 Jun 2017 15:24:25 +0200 Subject: [CfP][Meta'17] Workshop on Meta-Programming Techniques and Reflection Message-ID: Call for Papers: Meta?17 ======================== Workshop on Meta-Programming Techniques and Reflection Co-located with SPLASH 2017 October 22, 2017, Vancouver, Canada Twitter @MetaAtSPLASH http://2017.splashcon.org/track/meta-2017 The heterogeneity of mobile computing, cloud applications, multicore architectures, and other systems leads to increasing complexity of software and requires new approaches to programming languages and software engineering tools. To manage the complexity, we require generic solutions that can be adapted to specific application domains or use cases, making metaprogramming an important topic of research once more. However, the challenges with metaprogramming are still manifold. They start with fundamental issues such as typing of reflective programs, continue with practical concerns such as performance and tooling, and reach into the empirical field to understand how metaprogramming is used and how it affects software maintainability. Thus, while industry accepted metaprogramming on a wide scale with Ruby, Scala, JavaScript and others, academia still needs to answer a wide range of questions to bring it to the same level of convenience, tooling, and programming styles to cope with the increasing complexity of software systems. This workshop aims to explore meta-level technologies that help tackling the heterogeneity, scalability and openness requirements of emerging computations platforms. ### Topics of Interest The workshop is a venue for all approaches that embrace metaprogramming: - from static to dynamic techniques - reflection, meta-level architectures, staging, open language runtimes applications to middleware, frameworks, and DSLs - optimization techniques to minimize runtime overhead - contract systems, or typing of reflective programs reflection and metaobject protocols to enable tooling - case studies and evaluation of such techniques, e.g., to build applications, language extensions, or tools - empirical evaluation of metaprogramming solutions - security in reflective systems and capability-based designs - meta-level architectures and reflective middleware for modern runtime platforms (e.g. 
IoT, cyber-physical systems, mobile/cloud/grid computing, etc) - surveys, conceptualization, and taxonomization of existing approaches In short, we invite contributions to the workshop on a wide range of topics related to design, implementation, and application of reflective APIs and meta-programming techniques, as well as empirical studies and typing for such systems and languages. ### Workshop Format and Submissions This workshop welcomes the presentation of new ideas and emerging problems as well as mature work as part of a mini-conference format. Furthermore, we plan interactive brainstorming and demonstration sessions between the formal presentations to enable an active exchange of ideas. The workshop papers will be published in the ACM DL, if not requested otherwise by the authors. Thus, they will be part of SPLASH workshop proceedings. Therefore, papers are to be submitted using the SIGPLAN acmart style: http://www.sigplan.org/Resources/Author/. Please use the provided double-column templates for Latex http://www.sigplan.org/sites/default/files/acmart/current/acmart-sigplanproc-template.tex) or Word http://www.acm.org/publications/proceedings-template. - technical paper: max. 8 pages, excluding references - position and work-in-progress paper: 1-4 pages, excluding references - technology demos or a posters: 1-page abstract Demos, posters, position and work-in-progress papers can be submitted on a second, later deadline to discuss the latest results and current work. For the submission, please use the submission system at: https://meta17.hotcrp.com/ ### Important Dates Abstract Submission: 07 August 2017 Paper Submission: 14 August 2017 Author Notification: 06 September 2017 Position/WIP Paper Deadline: 08 September 2017 Camera Ready Deadline: 18 September 2017 Position/WIP Notification: 21 September 2017 ### Program Committee The program committee consists of the organizers and the following reviewers: Anya Helen Bagge, University of Bergen, Norway Daniele Bonetta, Oracle Labs, Austria Nicolas Cardozo, Universidad de los Andes, Colombia Sebastian Erdweg, TU Delf, The Nederlands Robert Hirschfeld, HPI, Germany Roberto Ierusalimschy, PUC-Rio, Brazil Pablo Inostroza, CWI, The Nederlands Kim Mens, Universite Catholique de Louvain, Belgium Cyrus Omar, Carnegie Mellon University, USA Guillermo Polito, CNRS, France Tiark Rompf, Purdue University, USA Tom Van Cutsem, Nokia Bell Labs, Belgium Takuo Watanabe, Tokyo Institute of Technology, Japan ### Workshop Organizers Shigeru Chiba, University of Tokyo Elisa Gonzalez Boix, Vrije Universiteit Brussel Stefan Marr, Johannes Kepler University Linz -- Stefan Marr Johannes Kepler Universit?t Linz http://stefan-marr.de/research/