[code-reflection] RFR: Onnx subgraphs, lambda execution and BB removal [v3]
Adam Sotona
asotona at openjdk.org
Wed Feb 26 18:45:06 UTC 2025
On Wed, 26 Feb 2025 18:39:09 GMT, Adam Sotona <asotona at openjdk.org> wrote:
>> cr-examples/onnx/src/test/java/oracle/code/onnx/MNISTDemo.java line 52:
>>
>>> 50:
>>> 51: public static Tensor<Float> cnn(Tensor<Float> inputImage) {
>>> 52: return OnnxRuntime.execute(() -> {
>>
>> I think it's preferable to eventually express the following:
>>
>> OnnxRuntime.execute(() -> cnn(inputImage));
>>
>> i.e. I would like to see that expression embedded in the drawing code.
>>
>> This generally requires that we support invocations to other ONNX functions (or that we inline them when transforming), which we should work out eventually anyway.
>>
>> For the case of a single method invocation, it should be possible to take a shortcut for now (for clues, see `LambdaOp.methodReference`, which we could refine). So I would recommend following up on the shortcut and leaning into it, so that we can pull the `execute` out of the `cnn` method.
>>
>> With this approach we still need to eventually work out the difference between initializers and inputs. I believe the former are the generally accepted way of representing constant inputs, rather than representing them more directly as constants in the graph.
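
For context, the proposed shape would look roughly like this (a sketch against the `Tensor` and `OnnxRuntime` types from the snippet above; the identity body is just a placeholder, not the real model):

    // `cnn` becomes a pure ONNX function, with no runtime wrapper inside.
    public static Tensor<Float> cnn(Tensor<Float> inputImage) {
        return inputImage; // placeholder for the actual ONNX op graph
    }

    // The drawing code then applies the runtime at the call site:
    Tensor<Float> prediction = OnnxRuntime.execute(() -> cnn(inputImage));
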
>
> Initializers are now at the top of my to-do list. They are technically just differently serialized Constant ops. I'm still not sure how we should model them - maybe as an Initializer op?
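
To make "differently serialized Constant ops" concrete, here is a toy sketch (these types are hypothetical, not the actual code model): both ops carry the same tensor payload, and only the export step differs.

    // Hypothetical sketch, not the Babylon code model.
    sealed interface ConstantLike permits ConstantOp, InitializerOp {}
    // Emitted as a Constant node in the graph:
    record ConstantOp(String name, float[] data, long[] shape) implements ConstantLike {}
    // Same payload, but written to the GraphProto initializer list instead:
    record InitializerOp(String name, float[] data, long[] shape) implements ConstantLike {}
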
A method call in a lambda is another level of complexity. I guess OnnxTransformer should inline it?
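
To illustrate what inlining would mean here, a minimal sketch over a toy expression model (all types and names below are made up for illustration; the real transformer would work on the Babylon code model instead): a call to a known function is replaced by the callee's body, with parameters substituted by the call's arguments.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Toy expression model, not the Babylon code model.
    sealed interface Expr permits Var, Add, Call {}
    record Var(String name) implements Expr {}
    record Add(Expr l, Expr r) implements Expr {}
    record Call(String fn, List<Expr> args) implements Expr {}
    record Fn(List<String> params, Expr body) {}

    final class Inliner {
        // Replace calls to known functions by their bodies, bottom-up.
        static Expr inline(Expr e, Map<String, Fn> fns) {
            return switch (e) {
                case Var v -> v;
                case Add a -> new Add(inline(a.l(), fns), inline(a.r(), fns));
                case Call c -> {
                    Fn f = fns.get(c.fn());
                    if (f == null) yield c; // unknown target: keep the call
                    Map<String, Expr> sub = new HashMap<>();
                    for (int i = 0; i < f.params().size(); i++)
                        sub.put(f.params().get(i), inline(c.args().get(i), fns));
                    yield substitute(f.body(), sub); // splice the callee body in
                }
            };
        }

        // Substitute variables by expressions in the callee body.
        static Expr substitute(Expr e, Map<String, Expr> sub) {
            return switch (e) {
                case Var v -> sub.getOrDefault(v.name(), v);
                case Add a -> new Add(substitute(a.l(), sub), substitute(a.r(), sub));
                case Call c -> new Call(c.fn(),
                        c.args().stream().map(x -> substitute(x, sub)).toList());
            };
        }
    }
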
-------------
PR Review Comment: https://git.openjdk.org/babylon/pull/326#discussion_r1972150336