A few FFM questions

John Rose john.r.rose at oracle.com
Tue Jul 11 02:35:25 UTC 2023


(For all you teachers out there, Maurizio’s reply is a
master class in API design.)

On 10 Jul 2023, at 17:09, Maurizio Cimadamore wrote:

> On 10/07/2023 16:02, Brian S O’Neill wrote:
>
>    I want to avoid polluting the other thread, which no longer has a
>    valid subject line anyhow…
>
>    1) The size limit of a memory segment is defined by a signed value,
>    thus limiting the address space to be 63-bit. Is it likely that all
>    64-bit architectures today and in the future only use the lower half
>    of the address range?
>
> While it’s true that a memory segment can never be as big as the 
> full address space, in reality that limit is big enough - e.g. it is 
> hard to imagine a developer wanting to express a region of memory 
> whose size is bigger than 2^63. I believe the angle you are coming 
> from here is that you want to use a memory segment to model the entire 
> heap (at least judging from the other thread you started), and then 
> just use a raw long pointer as an “offset” into the segment. Given 
> the lack of unsigned offsets, this will not work 100%: the FFM API 
> specifies that the “long” argument you pass to 
> MemorySegment::get (same for the var handle) is a non-negative offset, 
> relative to the start of the segment. That is, a negative offset is, 
> by definition, /outside/ the memory segment. So, I do not see a way to 
> add support for full unsigned longs as memory segment sizes/offsets - 
> in fact, I believe that allowing negative values in places where we 
> expect an offset is almost always a smell, and a place for bugs to 
> hide.
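>
> For instance (a hedged illustration, not part of the original reply; 
> it assumes the Java 21 API and java.lang.foreign imports):
>
>     MemorySegment segment = Arena.ofAuto().allocate(100);
>     long first = segment.get(ValueLayout.JAVA_LONG, 0);  // fine: offset >= 0
>     segment.get(ValueLayout.JAVA_LONG, -8);              // throws IndexOutOfBoundsException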
>
> In terms of interfacing with native code, note that the FFM API can 
> still wrap whatever address a native library throws at it, by wrapping 
> the base address (a long) into a segment using 
> MemorySegment::ofAddress. This method takes a raw address value, which 
> can be negative. Of course the size of the segment will be limited to 
> “only” Long.MAX_VALUE, which seems reasonable enough.
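>
> Concretely, wrapping such an address might look like this (a hedged 
> sketch, not part of the original reply; the method name is made up and 
> the Java 21 API is assumed):
>
>     // wrap a raw address handed back by native code; the long may well
>     // be negative when interpreted as a signed value
>     static MemorySegment wrapNativeAddress(long rawAddress, long byteSize) {
>         MemorySegment zeroLength = MemorySegment.ofAddress(rawAddress);
>         // reinterpret is a restricted method (requires native access)
>         return zeroLength.reinterpret(byteSize);
>     }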
>
> While it’s theoretically possible to add a different kind of segment 
> that behaves in a different way (e.g. allowing negative offsets), I 
> believe the cost vs. benefit ratio for doing so would be very 
> unfavourable.
>
>    2) The MemorySegment copy operation is safe for use against
>    overlapping ranges, and is thus a “memmove” operation. My application
>    would benefit from also having a “memcpy” operation, for those cases
>    where I know that the ranges don’t overlap. Can such an operation be
>    added?
>
> This is not an issue, or at least not one that should be solved via 
> new API surface: MemorySegment::copy relies on Unsafe::copyMemory, 
> which does memmove vs. memcpy depending on whether it can prove that 
> the ranges overlap. Rather than duplicating the API, if we find cases 
> where the existing logic doesn’t work well, we should probably 
> invest in rectifying and/or improving that logic.
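>
> For reference, a minimal sketch of the existing bulk-copy entry point 
> (not part of the original reply; assumes java.lang.foreign imports):
>
>     try (Arena arena = Arena.ofConfined()) {
>         MemorySegment src = arena.allocate(1024);
>         MemorySegment dst = arena.allocate(1024);
>         // overlap handling is decided internally, so callers never
>         // need to choose between a memcpy-like and a memmove-like copy
>         MemorySegment.copy(src, 0, dst, 0, 512);
>     }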
>
>    3) Dynamic memory allocation is performed against Arenas, but freeing
>    the memory is only allowed when the Arena is closed. I find this to
>    be cumbersome at times, and in one case I ended up creating a single
>    Arena instance paired with each allocation. The other solution is to
>    directly call malloc/free, which is what I’m using in some places.
>    Both solutions are messy, so is it possible to add a simple
>    malloc/free API?
>
> The API quite deeply embraces the idea that one lifetime might be 
> associated with multiple allocations (this is explained in more detail 
> here [1]). While this is an arbitrary choice (after all, there is no 
> perfect way to deal with deallocation of native resources in a way 
> that is both safe and efficient), malloc/free is not the right 
> primitive to design a safe API to manage off-heap resources, for at 
> least two reasons:
>
>  * it is too fine-grained: there’s no way to group together
>    logically-related resources (think of the relationship between
>    dlopen, dlsym and dlclose)
>  * it is not safe-by-default: anyone can free a pointer, even one they
>    did not create
>
> In contrast, the lifetime-centric approach adopted by the FFM API 
> allows developers to group logically related segments in the same 
> lifetime, which is often a very useful move, and allows the API to 
> scale to managing the lifetimes of things that are not just malloc’ed 
> segments (such as the lifetime of an upcall stub, or that of a 
> native library loaded with dlopen). This also allows the API to detect 
> pesky conditions such as memory leaks and/or use-after-free. There are 
> of course cases where this way of doing things is not a perfect fit, 
> and lower-level access to malloc/free is preferable. In these 
> cases, developers can, as you observed, just call malloc/free through 
> downcall method handles, and deal with memory allocation/deallocation 
> completely manually. Or they can create a one-off arena to deal with 
> that allocation.
>
> To do the latter, you probably would need some kind of abstraction on 
> top:
>
>     record HeapAllocation(MemorySegment segment, Arena arena) implements AutoCloseable {
>         @Override
>         public void close() { arena().close(); }
>
>         static HeapAllocation mallocConfined(long bytes) {
>             Arena arena = Arena.ofConfined();
>             return new HeapAllocation(arena.allocate(bytes), arena);
>         }
>     }
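>
> A one-off allocation would then look something like this (a hedged 
> usage sketch, not part of the original reply):
>
>     try (HeapAllocation allocation = HeapAllocation.mallocConfined(64)) {
>         MemorySegment segment = allocation.segment();
>         segment.set(ValueLayout.JAVA_LONG, 0, 42L);
>     } // closing the allocation closes its confined arena, freeing the memory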
>
> To do the former you need to write some wrappers for malloc/free:
>
>     static final MethodHandle MALLOC = Linker.nativeLinker()....
>     static final MethodHandle FREE = Linker.nativeLinker()....
>
>     static MemorySegment malloc(long bytes) throws Throwable {
>         return (MemorySegment) MALLOC.invokeExact(bytes);
>     }
>
>     static void free(MemorySegment segment) throws Throwable {
>         FREE.invokeExact(segment);
>     }
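>
> The elided handle setup above might look roughly like the following - 
> a hedged sketch assuming the Java 21 Linker API, with linker options 
> and error handling omitted:
>
>     static final Linker LINKER = Linker.nativeLinker();
>
>     // void* malloc(size_t size)
>     static final MethodHandle MALLOC = LINKER.downcallHandle(
>             LINKER.defaultLookup().find("malloc").orElseThrow(),
>             FunctionDescriptor.of(ValueLayout.ADDRESS, ValueLayout.JAVA_LONG));
>
>     // void free(void* ptr)
>     static final MethodHandle FREE = LINKER.downcallHandle(
>             LINKER.defaultLookup().find("free").orElseThrow(),
>             FunctionDescriptor.ofVoid(ValueLayout.ADDRESS));
>
> Note that the segment returned by such a malloc handle is zero-length, 
> so it needs to be resized (e.g. via MemorySegment::reinterpret) before 
> it can be accessed.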
>
> Of course the arena-based approach provides more temporal safety (but 
> might also result in more overhead for enforcing said safety, which 
> I’m not sure you’d be too happy with). With the raw malloc/free 
> wrappers, what you see is what you get: you are playing the “power 
> user” card, and ensuring correctness (read: avoiding use-after-free) 
> is now up to you.
>
> I don’t think either approach looks too “messy”. Of course 
> one-lifetime-per-segment is not the ideal sweet spot the FFM API is 
> designed for, and one has to write some extra code, but the FFM API 
> still allows you to do what you need to do (or to completely bypass 
> temporal safety altogether, if you decide to do so). After having 
> spent considerable time looking at possible approaches to deal with 
> memory safety, we did not find a “simpler malloc/free API” that 
> was good enough as a building block for managing temporal resources in 
> the FFM API.
>
>    4) GuardUnsafeAccess. I understand that this was added to ensure that
>    accessing a memory mapped file which has been truncated doesn’t crash
>    the JVM. What is the overhead of this check? Given that my module
>    requires restricted access anyhow, it can already crash the JVM.
>    Would it be possible to support unguarded memory access operations
>    too?
>
> This seems another case (like memcpy vs memmove) where it is assumed 
> that there’s some overhead associated with operation XYZ (which the 
> JVM does to ensure some kind of safety in the common case), hence the 
> request for a backdoor. I’d like to turn this kind of argument 
> around and instead ask for some benchmark showing that the costs 
> associated with these operations are too high (and, if so, we might be 
> able to find ways to improve the status quo w/o necessarily changing 
> the API).
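>
> To make this concrete, a benchmark along these lines would be a useful 
> starting point - a hedged sketch, not part of the original reply, 
> assuming JMH plus the Java 21 FFM API (imports from 
> org.openjdk.jmh.annotations, java.lang.foreign and java.nio elided; 
> the file name is made up):
>
>     @State(Scope.Thread)
>     public class MappedReadBench {
>
>         Arena arena;
>         MemorySegment mapped;
>
>         @Setup
>         public void setup() throws IOException {
>             arena = Arena.ofConfined();
>             try (FileChannel channel = FileChannel.open(Path.of("data.bin"),
>                     StandardOpenOption.READ)) {
>                 // map the whole file into a segment tied to the confined arena
>                 mapped = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size(), arena);
>             }
>         }
>
>         @TearDown
>         public void tearDown() {
>             arena.close();
>         }
>
>         @Benchmark
>         public long sumLongs() {
>             long sum = 0;
>             for (long offset = 0; offset + 8 <= mapped.byteSize(); offset += 8) {
>                 sum += mapped.get(ValueLayout.JAVA_LONG, offset);
>             }
>             return sum;
>         }
>     }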
>
> Cheers
> Maurizio
>
> [1] - https://cr.openjdk.org/~mcimadamore/panama/why_lifetimes.html
>