Observations and Questions on Flattening Behavior and Memory Alignment in Value Objects

Joe Mwangi joemwangimburu at gmail.com
Sun Oct 26 13:30:52 UTC 2025


Hi Valhalla Development Team,

Thank you for providing the latest build for testing progress on value
objects in the JVM. I’ve been running a few experiments to understand the
behavior of custom value types and their memory characteristics, and I must
say I’m very impressed by the implementation so far. Memory usage appears
significantly reduced in many cases (sometimes by nearly ¾ of the original footprint).

Here’s a simple test I used:

public class ValhallaTest {
    // A small value record: three shorts, 6 bytes of field data.
    value record PointRecord(short x, short y, short z) {}

    void main() throws InterruptedException {
        Thread.sleep(9000); // allow time to attach VisualVM
        System.out.println("Starting");
        int size = 10_000_000;
        var pointRecords = new PointRecord[size];
        for (int i = 0; i < size; i++) {
            pointRecords[i] = new PointRecord((short) 2, (short) 2, (short) 3);
        }
        Thread.sleep(20000); // keep the array live while inspecting the heap
    }
}

Using VisualVM, I inspected live objects and heap usage, with the following
observations:

   1. No individual PointRecord objects were detected — only the
   PointRecord[] array, confirming full flattening (no identity objects).

   2. PointRecord(short, short, short) logically occupies 6 bytes, but the
   array reports 80 000 000 B for 10 M elements → 8 bytes per element,
   suggesting alignment to 64 bits.

   3. PointRecord(short x, short y) → 40 000 000 B → 4 bytes per element.

   4. PointRecord(byte x) → 20 000 000 B → 2 bytes per element (a rough
   in-code cross-check of these sizes is sketched just below).
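
Here is that cross-check as a minimal sketch, using Runtime free-memory
deltas. The FootprintEstimate class and its helper are my own illustration,
and System.gc() is only a hint, so the output is approximate and worth
validating against a heap tool such as VisualVM:

// Approximate per-element footprint via Runtime memory deltas.
// Requires a Valhalla EA build for the value record syntax.
public class FootprintEstimate {
    value record PointRecord(short x, short y, short z) {}

    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        int size = 10_000_000;
        System.gc(); // best-effort hint only
        long before = usedBytes();
        var points = new PointRecord[size];
        for (int i = 0; i < size; i++) {
            points[i] = new PointRecord((short) 2, (short) 2, (short) 3);
        }
        System.gc();
        long after = usedBytes();
        System.out.printf("~%.2f bytes/element%n", (after - before) / (double) size);
        // Keep the array reachable so it is not collected before measuring.
        System.out.println(points.length);
    }
}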

It appears the prototype aligns flattened array elements to the smallest
power of two ≥ the logical size (not just 4-byte boundaries), though the
single-byte case rounding up to 2 bytes hints at either a minimum element
size of 2 bytes or an extra marker byte. Beyond 64 bits of logical size
(> 8 bytes), flattening seems to stop, possibly falling back to ordinary
references to heap-buffered objects, which makes sense given mutation and
tearing concerns.
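
To make the tearing concern concrete, here is a hypothetical sketch: the
LargePoint type and the racy update pattern are mine, and whether a given
build would actually tear such elements is exactly the open question. If
elements wider than 64 bits were flattened without atomicity, the reader
below could in principle observe an element mixing fields from the two
writers:

// Hypothetical illustration of tearing risk for a value type whose
// flattened size would exceed 64 bits (two longs = 128 bits of fields).
// With reference semantics or atomic flattening, no torn read can occur.
public class TearingSketch {
    value record LargePoint(long x, long y) {}

    public static void main(String[] args) throws InterruptedException {
        var arr = new LargePoint[1];
        arr[0] = new LargePoint(0, 0);

        Thread writerA = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) arr[0] = new LargePoint(1, 1);
        });
        Thread writerB = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) arr[0] = new LargePoint(2, 2);
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                LargePoint p = arr[0]; // could this mix (1, 2) or (2, 1)?
                if (p.x() != p.y()) {
                    System.out.println("Torn read observed: " + p);
                }
            }
        });
        writerA.start(); writerB.start(); reader.start();
        writerA.join(); writerB.join(); reader.join();
    }
}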

A few questions came up from these results:

   1. Will arrays need to be immutable to guarantee flattening for value
   elements larger than 64 bits? Consider parsing large files, where a big,
   mutable array of larger value objects would be important, e.g., to feed
   SIMD-style parsing.

   2. Since value objects are scalarized across stack calls, will there be
   tooling to analyze whether scalarization actually occurs (e.g., does the
   JVM impose register-pressure limits, and does a value object fall back
   to a heap-buffered form when there is not enough register space)?

   3. In C, struct size can be predicted from the field types. For Java
   value objects, since layout is JVM-dependent, is there a plan for tooling
   (perhaps jcmd or JFR integration) to expose explicit size/layout
   information beyond array inspection? The example above shows that a value
   object with 6 bytes of fields actually occupies 8 bytes, whereas the
   equivalent C struct would remain 6 bytes (a sketch of the kind of
   introspection I have in mind follows below).
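
As an illustration of question 3, here is a minimal sketch using JOL
(org.openjdk.jol:jol-core), which already exposes this for identity
classes; whether it reports flattened value-class layouts correctly on the
Valhalla EA builds is an assumption I have not verified:

// Sketch: introspecting field offsets, padding, and instance size with
// JOL. Assumes jol-core is on the classpath.
import org.openjdk.jol.info.ClassLayout;

public class LayoutInspect {
    value record PointRecord(short x, short y, short z) {}

    public static void main(String[] args) {
        // Prints a table of field offsets, sizes, and padding.
        System.out.println(ClassLayout.parseClass(PointRecord.class).toPrintable());
    }
}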

Overall, this is very exciting work. The model feels both efficient and
semantically robust, offering a fresh take compared to languages that rely
on purely compile-time memory determinism. I’ll continue exploring
performance and GC interaction aspects later, but even this preliminary
testing shows remarkable promise.

Thanks again to the entire team for the great work.

Kind regards,
Joe Mwangi