Off heap vs on heap memory access performance for a DBS
Brian S O'Neill
bronee at gmail.com
Fri Jul 28 13:45:51 UTC 2023
On 2023-07-28 12:15 AM, Johannes Lichtenberger wrote:
> Hello,
>
> I think I mentioned it already, but currently I'm thinking about it again.
>
> Regarding the index trie in my spare time project, I'm wondering whether
> it makes sense, as currently I'm creating fine-grained on heap nodes
> during insertions/updates/deletes (1024 per page). Once a page is read
> again from storage, I'm keeping these nodes in a byte array of byte
> arrays until they are read for the first time. One thing, though, is
> that the nodes may store strings inline and thus are of variable size
> (and thus the pages are of variable size, too, padded to be word
> aligned IIRC).
>
> I'm currently auto-committing after approx 500_000 nodes have been
> created (afterwards they can be garbage collected) and in total there
> are more than 320 million nodes in one test.
>
> I think I could store the nodes in MemorySegments instead of using on
> heap classes / instances and dynamically reallocate memory if a node
> value is changed.
>
> However, I'm not sure, as it means a lot of work, and maybe off heap
> memory access is always slightly worse than on heap!?
>
I've been maintaining a (mostly) pure Java database library for several
years now, and I can give you the short answer: Yes, you should consider
supporting off heap memory. Now for the long answer...
The Tupl database supports a cache backed by Java byte arrays or by off
heap (native) memory. By design, the cache is very friendly to
generational garbage collectors -- all the memory is allocated up front,
and it's recycled without any GC involvement. Database pages are fixed
in size (4096 bytes by default), which simplifies memory management and
persistence. Under load, GC activity is very low.
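As a rough illustration (hypothetical class, not Tupl's actual code), a cache along these lines allocates fixed-size byte[] pages once and then hands them out and takes them back through a free list, so steady-state work produces no garbage:

import java.util.ArrayDeque;

// Hypothetical sketch: all pages are allocated up front as fixed-size byte
// arrays and recycled through a free list, so the cache itself creates no
// garbage once it is warmed up.
final class PageCache {
    static final int PAGE_SIZE = 4096;

    private final ArrayDeque<byte[]> freePages = new ArrayDeque<>();

    PageCache(long capacityBytes) {
        int pageCount = (int) (capacityBytes / PAGE_SIZE);
        for (int i = 0; i < pageCount; i++) {
            freePages.push(new byte[PAGE_SIZE]); // allocated once, up front
        }
    }

    // Borrow a page; null means the cache is exhausted and the caller must evict.
    synchronized byte[] acquire() {
        return freePages.poll();
    }

    // Return a page to the free list instead of letting it become garbage.
    synchronized void release(byte[] page) {
        freePages.push(page);
    }
}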
The off heap memory is managed by using the Unsafe class, but I'm
converting it to use the Panama API. The Panama version is almost as
fast as using Unsafe, and Maurizio (as he mentioned earlier) is working
on closing the performance gap.
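For reference, the Panama side looks roughly like this with the java.lang.foreign API (method names shifted a bit across the preview releases, so treat this as a sketch rather than Tupl's code):

import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

// Sketch of off heap page access via the Panama foreign memory API.
public class OffHeapPageDemo {
    public static void main(String[] args) {
        long pageSize = 4096;

        try (Arena arena = Arena.ofConfined()) {
            // One native (off heap) page, 8-byte aligned; freed deterministically
            // when the arena closes.
            MemorySegment page = arena.allocate(pageSize, 8);

            // Bounds-checked accesses, roughly where Unsafe.putLong/getLong
            // were used before.
            page.set(ValueLayout.JAVA_LONG, 0, 0x1234_5678L);
            long value = page.get(ValueLayout.JAVA_LONG, 0);

            System.out.println("read back: " + Long.toHexString(value));
        } // native memory released here, with no GC involvement
    }
}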
For small caches (1GB), the performance of the byte[] version and the
off heap version is the same. The problem is scalability. With larger
caches, even generational garbage collectors slow down. With a 16GB
cache, the byte[] version is about 50% slower than the off heap version.
This is with the G1 collector. Generational ZGC fares better: there the
byte[] version is only about 25% slower. Things get much, much worse as
the cache grows even larger, but I don't have numbers readily available.
Note that if you're allocating and freeing MemorySegments all the time,
you might not see as much of a performance improvement compared to byte
arrays. A generational collector is very good at managing short-lived
objects. To see a huge win, you'll need to do your own memory management.
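One way to do your own memory management (again a hypothetical sketch, not what Tupl does internally) is to carve one large off heap allocation into fixed-size slices up front and recycle those slices, instead of allocating and freeing a MemorySegment per node:

import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.util.ArrayDeque;

// Hypothetical pool: one big native allocation is sliced up front and the
// slices are reused, so nodes never trigger per-object allocation or GC work.
final class SegmentPool implements AutoCloseable {
    private final Arena arena = Arena.ofShared();
    private final ArrayDeque<MemorySegment> free = new ArrayDeque<>();

    SegmentPool(int sliceCount, long sliceSize) {
        // 8-byte alignment so callers can store longs at slice offsets.
        MemorySegment block = arena.allocate(sliceCount * sliceSize, 8);
        for (int i = 0; i < sliceCount; i++) {
            free.push(block.asSlice(i * sliceSize, sliceSize));
        }
    }

    synchronized MemorySegment acquire() {
        return free.poll(); // null means the pool is exhausted
    }

    synchronized void release(MemorySegment slice) {
        free.push(slice);
    }

    @Override
    public void close() {
        arena.close(); // releases the whole block at once
    }
}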