[nicl] NativeScope to allocate larger than 64K
Henry Jen
henry.jen at oracle.com
Thu May 31 06:18:22 UTC 2018
Hi,
While we have been able to get away without this so far, there are cases where we need to transfer data larger than 64K. One example I tried is LabelImage[1], which reads a whole image into a Java array.
This webrev[2] implements a simple and straightforward strategy: if the current block does not have enough memory for the request, shrink and shelve the current block and allocate a new one. However, if the request is larger than 32K, we simply allocate a dedicated block for it and shelve that new block instead. This way, we can be assured that the current block always has more than 32K available for small allocations.
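In case a sketch helps picture the flow, here is a rough, hypothetical illustration of that strategy. The class and constant names, and the ByteBuffer-based bookkeeping, are mine for illustration only and are not the actual NativeScope code in the webrev (which also tracks blocks for freeing, omitted here):

import java.nio.ByteBuffer;

// Illustrative sketch only; not the NativeScope implementation.
public class ArenaSketch {
    // Assumed sizes matching the figures in the mail: 64K blocks, 32K threshold.
    private static final int BLOCK_SIZE = 64 * 1024;
    private static final int LARGE_THRESHOLD = 32 * 1024;

    private ByteBuffer currentBlock = ByteBuffer.allocateDirect(BLOCK_SIZE);

    public ByteBuffer allocate(int size) {
        if (size > LARGE_THRESHOLD) {
            // Large request: give it its own dedicated block and keep the
            // current block, so small allocations still have >32K of headroom.
            return ByteBuffer.allocateDirect(size);
        }
        if (currentBlock.remaining() < size) {
            // Not enough room left: shelve the current block and start a
            // fresh 64K block for subsequent small allocations.
            currentBlock = ByteBuffer.allocateDirect(BLOCK_SIZE);
        }
        // Carve the requested slice out of the current block.
        ByteBuffer slice = currentBlock.slice();
        slice.limit(size);
        currentBlock.position(currentBlock.position() + size);
        return slice;
    }

    public static void main(String[] args) {
        ArenaSketch arena = new ArenaSketch();
        ByteBuffer small = arena.allocate(1024);        // served from the current block
        ByteBuffer large = arena.allocate(128 * 1024);  // gets its own block
        System.out.println(small.remaining() + " / " + large.capacity());
    }
}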
Thoughts?
Cheers,
Henry
[1] https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java/src/main/java/org/tensorflow/examples/LabelImage.java
[2] http://cr.openjdk.java.net/~henryjen/panama/NativeScopeIncrease/webrev/