MaxRAM questions

Thomas Stüfe thomas.stuefe at gmail.com
Thu Aug 31 13:50:10 UTC 2023


Hi,

I recently looked more closely at MaxRAM, and there are some things I
cannot quite figure out.

MaxRAM has default values hard-coded into the JVM. Those values are quite
low by today's standards, e.g. on x64, we run with a default value of 128
GB.

First off, these defaults have always depended on the compiler (C1, C2)
and the architecture, for the whole observable repo history. Why, though? What
do these defaults have to do with the JIT and the architecture?

In older JVMs (up to and including JDK 11, changed with
https://bugs.openjdk.org/browse/JDK-8222252), we capped the physical memory
value used for heap ergonomics at MaxRAM: on a box with, say, 4 TB of main
memory, -XX:MaxRAMPercentage=25 would give you only 32 GB (25% of the
128 GB cap), not 1 TB.
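To make the arithmetic concrete, here is a simplified sketch of that pre-JDK-8222252 ergonomic, assuming the x64 defaults mentioned above (MaxRAM = 128 GB, MaxRAMPercentage = 25); this is not the actual HotSpot code, just the capping rule as I understand it:

```java
// Simplified sketch (not actual HotSpot source) of the old MaxHeapSize
// ergonomics: physical memory is first capped at MaxRAM, then the
// percentage is applied.
public class OldErgonomics {
    static final long GB = 1024L * 1024 * 1024;

    static long maxHeap(long physicalMem, long maxRam, double maxRamPercentage) {
        long phys = Math.min(physicalMem, maxRam); // the MaxRAM cap
        return (long) (phys * maxRamPercentage / 100.0);
    }

    public static void main(String[] args) {
        // A 4 TB box: 25% of min(4 TB, 128 GB) = 32 GB, not 1 TB.
        System.out.println(maxHeap(4096 * GB, 128 * GB, 25.0) / GB + " GB");
    }
}
```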

After https://bugs.openjdk.org/browse/JDK-8222252, it got more complex.
Now, if you specify MaxRAMPercentage *manually*, the percentage applies to
the full physical memory: running with -XX:MaxRAMPercentage=25 on that 4 TB
box gives you 1 TB. But if you rely on the default value of
MaxRAMPercentage (also 25), the 128 GB MaxRAM cap still applies, and we
still run with 32 GB.
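The post-JDK-8222252 behavior can be sketched the same way; the only difference (as far as I can tell, again not the actual HotSpot source) is that the MaxRAM cap is skipped when MaxRAMPercentage was set on the command line:

```java
// Simplified sketch of the post-JDK-8222252 behavior: the MaxRAM cap
// applies only when MaxRAMPercentage was left at its default.
public class NewErgonomics {
    static final long GB = 1024L * 1024 * 1024;

    static long maxHeap(long physicalMem, long maxRam,
                        double maxRamPercentage, boolean percentageSetByUser) {
        long phys = percentageSetByUser
                ? physicalMem                      // user asked: use full RAM
                : Math.min(physicalMem, maxRam);   // default: MaxRAM cap
        return (long) (phys * maxRamPercentage / 100.0);
    }

    public static void main(String[] args) {
        long fourTb = 4096 * GB, maxRam = 128 * GB;
        // -XX:MaxRAMPercentage=25 given explicitly: 25% of 4 TB = 1 TB.
        System.out.println(maxHeap(fourTb, maxRam, 25.0, true) / GB + " GB");
        // Default percentage (also 25): 25% of min(4 TB, 128 GB) = 32 GB.
        System.out.println(maxHeap(fourTb, maxRam, 25.0, false) / GB + " GB");
    }
}
```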

The comment history under https://bugs.openjdk.org/browse/JDK-8222252
suggests that the MaxRAM defaults are something of a safety belt against
using excessive memory. But then, 128 GB is just an arbitrary number. I can
see the point of a safety belt as a percentage of physical memory (and the
25% is ingrained into all developers by now), but I don't get the
additional cap at 32 GB. Well, the resulting 32 GB is conveniently still
within CompressedOops range. Was that the reason for the 128 GB?

Does anyone know more details?

Thomas


More information about the hotspot-gc-dev mailing list