<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body style="padding-bottom: 1px;">
    <p>Hi Paul, <br>
    </p>
    <p>   This sounds great. We (the TornadoVM team at the University of
      Manchester) would like to collaborate and support this project
      moving forward. <br>
    </p>
    <p>Juan<br>
    </p>
    <div class="moz-cite-prefix">On 14/09/2023 00:31, Paul Sandoz wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:0FB7C8EC-BC30-4697-9DE0-8D90EDE6D330@oracle.com">
      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
      <div dir="auto"
style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">
        Hi Juan,<br>
        <div><br>
          <blockquote type="cite">
            <div>On Sep 13, 2023, at 10:03 AM, Juan Fumero
              <a class="moz-txt-link-rfc2396E" href="mailto:juan.fumero@paravox.ai"><juan.fumero@paravox.ai></a> wrote:</div>
            <br class="Apple-interchange-newline">
            <div>
              <div style="padding-bottom: 1px;">
                <p>Hi Paul, <br>
     I think this is a great initiative and much
                  needed in the Java world. I have a few questions.
                  <br>
                  <br>
                  1) <br>
                  <i>> Babylon will ensure that code reflection is
                    fit for purpose by creating a  GPU programming model
                    for Java that leverages code reflection and is 
                    implemented as a Java library.</i><br>
                  <br>
                  Does this mean that one of the goals of the project is
                  to define how GPUs should be programmed using the Code
                  Reflection API, or for Java in general?
                </p>
              </div>
            </div>
          </blockquote>
          <div><br>
          </div>
          <div>The intent is a general approach that depends on the
            support of code reflection (and Panama FFM).</div>
          <div><br>
          </div>
          <div>I think it is up to us, as members of the OpenJDK
            community, to determine where we head with regard to the
            GPU programming model, any concrete artifacts that could be
            produced, and where the dividing lines may be between APIs,
            implementations, and vendors. Gary can speak more to this
            than I.</div>
          <br>
          <blockquote type="cite">
            <div>
              <div style="padding-bottom: 1px;">
                <p>Is Babylon limited to GPUs? Are you also considering
                  other types of accelerators (e.g., AI accelerators,
                  RISC-V accelerators, etc.)?
                  <br>
                  <br>
                </p>
              </div>
            </div>
          </blockquote>
          <div><br>
          </div>
          <div>In principle it's not limited. As you have shown with
            TornadoVM, the same programming model for GPUs can apply to
            other forms of highly parallel hardware, like FPGAs, where a
            program is “printed out” or uniquely arranged in some
            malleable hardware. In this case, assuming the programming
            model is applicable, it seems predominantly an area of
            implementation focus that someone could choose to take on in
            their own implementation.</div>
          <div><br>
          </div>
          <div>I think the more specialized the hardware, the more
            limited the programming. So in some cases a parallel
            programming model may not apply, like with hardware that
            specializes only in multiplying tensors, which in effect
            reduces to some form of library calls.</div>
        </div>
        <div><br>
          <blockquote type="cite">
            <div>
              <div style="padding-bottom: 1px;">
                <p>We have other programming models such as TornadoVM
                  [1], which can be programmed using different styles
                  (e.g., loop-parallel programs and kernel APIs). How
                  will the new model(s) accommodate existing solutions?
                  Is this still to be defined?<br>
                </p>
              </div>
            </div>
          </blockquote>
          <div><br>
          </div>
          <div>Again Gary can speak more to this, but I suspect the
            design will focus predominantly on a range-based kernel
            model (similar to Tornado’s kernel API). But in principle I
            imagine it may be possible to plug in different kernel models
            (or copy parts of the design) where code reflection could be
            applied with different and more sophisticated approaches to
            program analysis and compilation, such as for a loop-based
            kernel model.</div>
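          <div><br>
          </div>
          <div>For illustration only, a kernel in such a range-based
            model might be written as ordinary Java along these lines (a
            minimal sketch; the class and method names are hypothetical,
            not an API from Babylon or TornadoVM): one invocation
            computes one element, identified by a global index over the
            range, and a code-reflection-based library could obtain the
            code model of the method, check it is expressible on the
            device, and lower it to, say, PTX or SPIR-V.</div>
          <pre>
// Minimal sketch of a range-based kernel written in plain Java.
// Hypothetical names; only the shape of the method matters here.
final class VectorAddKernel {
    // One invocation per work item; globalId plays the role of a
    // CUDA/OpenCL global thread id over a 1D range of c.length items.
    static void add(float[] a, float[] b, float[] c, int globalId) {
        if (globalId >= c.length) {
            return; // guard against an over-provisioned range
        }
        c[globalId] = a[globalId] + b[globalId];
    }
}
</pre>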
          <div><br>
          </div>
          <div>Two key areas of focus I see are:</div>
          <div><br>
          </div>
          <div>1) the extraction of kernel call graphs using code
            reflection, as discussed in Gary’s JVMLS talk. Thus a
            developer does not have to explicitly build a task graph (as
            currently required by TornadoVM) and instead a specialized
            compiler does that work. (Note that this does not render any
            existing task graph API redundant; it just moves it more
            into the background as an important lower-level building
            block that the developer is not required to use directly.)</div>
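          <div><br>
          </div>
          <div>As a minimal sketch of what such a compiler would work
            from (hypothetical names, not a proposed API): a plain Java
            routine that invokes two kernels in sequence; the data
            dependency between them, here through the tmp array, is
            exactly the task graph that a developer would otherwise have
            to build explicitly.</div>
          <pre>
// Minimal sketch: a compute routine whose kernel call graph
// (scale followed by offset, linked by tmp) could be extracted by a
// specialized compiler using code reflection. Hypothetical names.
final class Pipeline {
    static void scale(float[] in, float[] out, float factor, int i) {
        if (i &lt; out.length) out[i] = in[i] * factor;
    }

    static void offset(float[] in, float[] out, float delta, int i) {
        if (i &lt; out.length) out[i] = in[i] + delta;
    }

    // Sequential reference semantics; on a device each loop becomes a
    // kernel launch over the corresponding range.
    static void compute(float[] input, float[] result, float[] tmp) {
        for (int i = 0; i &lt; tmp.length; i++) scale(input, tmp, 2.0f, i);
        for (int i = 0; i &lt; result.length; i++) offset(tmp, result, 1.0f, i);
    }
}
</pre>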
          <div><br>
          </div>
          <div>2) the ability to call pre-defined “native” kernels that
            exist somewhere else, e.g., in a GPU-enabled library, which
            may also be a solution for leveraging more exotic but
            constrained hardware.</div>
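          <div><br>
          </div>
          <div>On point 2, the Panama FFM API already gives a natural
            way to bind to such pre-built kernels. The following is a
            minimal sketch only; the library name and symbol
            (libvendorblas, vendor_saxpy) are hypothetical placeholders
            for a GPU-enabled vendor library.</div>
          <pre>
// Minimal sketch: binding a pre-built "native" kernel with the FFM API.
// The library and symbol names are hypothetical placeholders.
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

final class NativeKernels {
    private static final Linker LINKER = Linker.nativeLinker();
    private static final SymbolLookup LIB =
            SymbolLookup.libraryLookup("libvendorblas.so", Arena.global());

    // C signature assumed: void vendor_saxpy(int n, float alpha,
    //                                        const float *x, float *y);
    private static final MethodHandle SAXPY = LINKER.downcallHandle(
            LIB.find("vendor_saxpy").orElseThrow(),
            FunctionDescriptor.ofVoid(ValueLayout.JAVA_INT,
                    ValueLayout.JAVA_FLOAT,
                    ValueLayout.ADDRESS, ValueLayout.ADDRESS));

    static void saxpy(int n, float alpha,
                      MemorySegment x, MemorySegment y) throws Throwable {
        SAXPY.invokeExact(n, alpha, x, y);
    }
}
</pre>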
          <div><br>
          </div>
          <blockquote type="cite">
            <div>
              <div style="padding-bottom: 1px;">
                <p>2) <br>
                  <i>> We do not currently plan to deliver the GPU
                    programming model into the JDK. However, work on
                    that model could identify JDK features and
                    enhancements of general utility which could be
                    addressed in future work.</i><br>
                  <br>
                  Does this mean that the GPU programming model will
                  only be used as motivation to develop the Code
                  Reflection APIs for different use cases?<br>
                </p>
                <p>3) Is there any intent to support JVM languages with
                  these models (e.g., R, Scala, etc.), or will they be
                  specific to the Java language?
                  <br>
                  <br>
                </p>
              </div>
            </div>
          </blockquote>
          <div><br>
          </div>
          <div>It’s specific to the Java language and reflection of Java
            code.</div>
          <div><br>
          </div>
          <blockquote type="cite">
            <div>
              <div style="padding-bottom: 1px;">
                <p>4) I believe we also need new types. As we discussed
                  at JVMLS this year, we will also need NDArray and
                  Tensor types, Vector types, and Panama-based types for
                  AI and heterogeneous computing. This is aligned with
                  Gary's talk at JVMLS [2], in which he proposed the HAT
                  (Heterogeneous Accelerator Toolkit) initiative and
                  Panama-based types. Will this also be part of the
                  Babylon project?
                </p>
              </div>
            </div>
          </blockquote>
          <div><br>
          </div>
          <div>I think we will inevitably explore some of that, and they
            may be of such “general utility” that we could decide to
            address them in future work. However, I am wary of focusing
            too much on imperfections in this effort, especially as in
            many of these cases there is a tendency to focus on syntax
            rather than the underlying model, e.g., arrays (which
            require much deeper and more careful thinking, but the
            result will be much better for it). It won’t be perfect, and
            we can feed those imperfections into possible future work.</div>
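          <div><br>
          </div>
          <div>For concreteness, the kind of Panama-based type mentioned
            in question 4 could, very roughly, be a MemorySegment-backed
            container along the following lines (a minimal sketch;
            FloatMatrix is a hypothetical name, not a proposed API). The
            interesting design work is in the underlying model, not this
            surface syntax.</div>
          <pre>
// Minimal sketch of a Panama-backed, NDArray-style type. Hypothetical
// name and shape; off-heap storage makes the data directly shareable
// with native and GPU-side code.
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

final class FloatMatrix {
    private final MemorySegment data;
    private final int rows, cols;

    FloatMatrix(Arena arena, int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
        this.data = arena.allocate(ValueLayout.JAVA_FLOAT, (long) rows * cols);
    }

    float get(int r, int c) {
        return data.getAtIndex(ValueLayout.JAVA_FLOAT, (long) r * cols + c);
    }

    void set(int r, int c, float v) {
        data.setAtIndex(ValueLayout.JAVA_FLOAT, (long) r * cols + c, v);
    }

    MemorySegment segment() { return data; } // hand off to native kernels
}
</pre>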
          <div><br>
          </div>
          <div>Paul.</div>
          <div><br>
          </div>
          <br>
          <blockquote type="cite">
            <div>
              <div style="padding-bottom: 1px;">
                <p>[1] <a class="moz-txt-link-freetext"
href="https://tornadovm.readthedocs.io/en/latest/programming.html#core-programming"
                    moz-do-not-send="true">
https://tornadovm.readthedocs.io/en/latest/programming.html#core-programming</a></p>
                <p>[2] <a class="moz-txt-link-freetext"
                    href="https://www.youtube.com/watch?v=lbKBu3lTftc"
                    moz-do-not-send="true">
                    https://www.youtube.com/watch?v=lbKBu3lTftc</a><br>
                </p>
                <p><br>
                  Thanks<br>
                  Juan<br>
                </p>
                <p><br>
                </p>
                <div class="moz-cite-prefix">On 13/09/2023 01:37, Paul
                  Sandoz wrote:<br>
                </div>
                <blockquote type="cite"
cite="mid:EFFC47D1-408D-4BD6-9316-294F8A9BCCAB@oracle.com">
                  Hi Ethan,
                  <div><br>
                  </div>
                  <div>Current/prior work includes Mojo, MLIR, C# LINQ,
                    Julia [1], Swift for TensorFlow [2], Haskell [3].</div>
                  <div><br>
                  </div>
                  <div>In the context of lunch and Python, what I had in
                    mind is machine learning and all those frameworks,
                    and I was also thinking about introspection of
                    Python code, which IIUC is what TorchDynamo [4]
                    does.</div>
                  <div><br>
                  </div>
                  <div>Paul. </div>
                  <div>
                    <div><br>
                    </div>
                    <div>[1] <a href="https://arxiv.org/abs/1712.03112"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">https://arxiv.org/abs/1712.03112</a></div>
                    <div><br>
                    </div>
                    <div>[2] <a
href="https://llvm.org/devmtg/2018-10/slides/Hong-Lattner-SwiftForTensorFlowGraphProgramExtraction.pdf"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">https://llvm.org/devmtg/2018-10/slides/Hong-Lattner-SwiftForTensorFlowGraphProgramExtraction.pdf</a></div>
                    <div><br>
                    </div>
                    <div>[3] <a
href="http://conal.net/papers/essence-of-ad/essence-of-ad-icfp.pdf"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">http://conal.net/papers/essence-of-ad/essence-of-ad-icfp.pdf</a></div>
                    <div><br>
                    </div>
                    <div>[4] <a
href="https://pytorch.org/docs/stable/dynamo/index.html"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">https://pytorch.org/docs/stable/dynamo/index.html</a></div>
                    <div><br>
                      <blockquote type="cite">
                        <div>On Sep 12, 2023, at 12:31 PM, Ethan McCue <a
                            class="moz-txt-link-rfc2396E"
                            href="mailto:ethan@mccue.dev"
                            moz-do-not-send="true">
                            <ethan@mccue.dev></a> wrote:</div>
                        <br class="Apple-interchange-newline">
                        <div>
                          <div dir="ltr">Can you elaborate more on prior
                            work / the state of affairs in other
                            language ecosystems? In the talk you
                            reference Python "eating Java's lunch" - do
                            they have a comparable set of features or
                            some mechanism that serves the same goal
                            (write code in Python, derive GPU
                            kernel/autodiffed/etc. code)?</div>
                          <br>
                          <div class="gmail_quote">
                            <div dir="ltr" class="gmail_attr">On Wed,
                              Sep 6, 2023 at 12:44 PM Paul Sandoz <<a
                                href="mailto:paul.sandoz@oracle.com"
                                moz-do-not-send="true"
                                class="moz-txt-link-freetext">paul.sandoz@oracle.com</a>>
                              wrote:<br>
                            </div>
                            <blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                              I hereby invite discussion of a new
                              Project, Babylon, whose primary goal<br>
                              will be to extend the reach of Java to
                              foreign programming models such as <br>
                              SQL, differentiable programming, machine
                              learning models, and GPUs.<br>
                              <br>
                              Focusing on the last example, suppose a
                              Java developer wants to write a GPU <br>
                              kernel in Java and execute it on a GPU.
                              The developer’s Java code must, <br>
                              somehow, be analyzed and transformed into
                              an executable GPU kernel. A Java <br>
                              library could do that, but it requires
                              access to the Java code in symbolic <br>
                              form. Such access is, however, currently
                              limited to the use of non-standard <br>
                              APIs or to conventions at different points
                              in the program’s life cycle <br>
                              (compile time or run time), and the
                              symbolic forms available (abstract <br>
                              syntax trees or bytecodes) are often
                              ill-suited to analysis and transformation.<br>
                              <br>
                              Babylon will extend Java's reach to
                              foreign programming models with an <br>
                              enhancement to reflective programming in
                              Java, called code reflection. This <br>
                              will enable standard access, analysis, and
                              transformation of Java code in a <br>
                              suitable form. Support for a foreign
                              programming model can then be more <br>
                              easily implemented as a Java library.<br>
                              <br>
                              Babylon will ensure that code reflection
                              is fit for purpose by creating a <br>
                              GPU programming model for Java that
                              leverages code reflection and is <br>
                              implemented as a Java library. To reduce
                              the risk of bias we will also <br>
                              explore, or encourage the exploration of,
                              other programming models such as <br>
                              SQL and differentiable programming, though
                              we may do so less thoroughly.<br>
                              <br>
                              Code reflection consists of three parts:<br>
                              <br>
                              1) The modeling of Java programs as code
                              models, suitable for access,<br>
                                 analysis, and transformation.<br>
                              2) Enhancements to Java reflection,
                              enabling access to code models at compile<br>
                                 time and run time.<br>
                              3) APIs to build, analyze, and transform
                              code models.<br>
                              <br>
                              For further details please see the JVM
                              Language Summit 2023 presentations <br>
                              entitled "Code Reflection" [1] and "Java
                              and GPU … are we nearly there yet?" <br>
                              [2].<br>
                              <br>
                              I propose to lead this Project with an
                              initial set of Reviewers that<br>
                              includes, but is not limited to, Maurizio
                              Cimadamore, Gary Frost, and<br>
                              Sandhya Viswanathan.<br>
                              <br>
                              For code reflection this Project will
                              start with a clone of the current JDK <br>
                              main-line release, JDK 22, and track
                              main-line releases going forward.<br>
                              For the GPU programming model this Project
                              will create a separate repository,<br>
                              that is dependent on code reflection
                              features as they are developed.<br>
                              <br>
                              We expect to deliver Babylon over time, in
                              a series of JEPs that will likely<br>
                              span multiple feature releases.<br>
                              We do not currently plan to deliver the
                              GPU programming model into the JDK.<br>
                              However, work on that model could identify
                              JDK features and enhancements of <br>
                              general utility which could be addressed
                              in future work.<br>
                              <br>
                              Comments?<br>
                              <br>
                              Paul.<br>
                              <br>
                              [1] <a
href="https://cr.openjdk.org/~psandoz/conferences/2023-JVMLS/Code-Reflection-JVMLS-23-08-07.pdf"
                                rel="noreferrer" target="_blank"
                                moz-do-not-send="true"
                                class="moz-txt-link-freetext">
https://cr.openjdk.org/~psandoz/conferences/2023-JVMLS/Code-Reflection-JVMLS-23-08-07.pdf</a><br>
                                  <a
href="https://youtu.be/xbk9_6XA_IY"
                                rel="noreferrer" target="_blank"
                                moz-do-not-send="true">
                                https://youtu.be/xbk9_6XA_IY</a><br>
                              <br>
                              [2] <a
href="https://youtu.be/lbKBu3lTftc"
                                rel="noreferrer" target="_blank"
                                moz-do-not-send="true">
                                https://youtu.be/lbKBu3lTftc</a><br>
                              <br>
                            </blockquote>
                          </div>
                        </div>
                      </blockquote>
                    </div>
                    <br>
                  </div>
                </blockquote>
              </div>
              </div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <pre class="moz-signature" cols="72">-- 
CTO, Paravox Ltd</pre>
  </body>
</html>