RFR: 8350596: [Linux] Increase default MaxRAMPercentage for containerized workloads
Kirk Pepperdine
kirk at kodewerk.com
Thu Sep 18 16:51:58 UTC 2025
Hi all,
I have to strongly agree with Erik on this point. Going from one “guess” to another doesn’t seem like a good solution to this question. We have deep experience with the current “guess”, and much of what we have built is built around it. Changing it to another “guess” risks breaking everything that rests on the previous one.
One of the confusions that I’ve seen, and have tried to educate people towards a better understanding of, is the difference between virtual and real memory (address spaces). The risk here is that a larger default heap means committing more real memory: if you give the Java heap more memory, it will use it, so everyone relying on the defaults will immediately need more real memory. If you want to OOM a significant number of apps that rely on the defaults, a great way to do it is to change the default to a larger value.
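To make the virtual/real distinction concrete, here is a minimal sketch (illustrative only, not part of the patch) showing that maxMemory() is just a limit until allocation forces the GC to commit real memory behind it:

    import java.util.ArrayList;
    import java.util.List;

    public class CommitDemo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // maxMemory() reflects -Xmx / MaxRAMPercentage: the limit the
            // GC may grow to. totalMemory() is what is committed so far.
            System.out.printf("max=%dM committed=%dM%n",
                    rt.maxMemory() >> 20, rt.totalMemory() >> 20);

            // Retain allocations so the live set grows; the GC has to
            // commit real memory to back it, and the process RSS rises.
            List<byte[]> retained = new ArrayList<>();
            long target = rt.maxMemory() / 2; // stop well short of the limit
            while (rt.totalMemory() < target) {
                retained.add(new byte[8 << 20]); // 8 MB per chunk
            }
            System.out.printf("after allocating: committed=%dM%n",
                    rt.totalMemory() >> 20);
        }
    }

Run it with a larger -XX:MaxRAMPercentage and the committed figure, and hence the real memory bill, grows with it.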
Kind regards,
Kirk
> On Sep 18, 2025, at 2:03 AM, Erik Österlund <eosterlund at openjdk.org> wrote:
>
> On Wed, 7 May 2025 09:29:16 GMT, Severin Gehwolf <sgehwolf at openjdk.org> wrote:
>
>> Please take a look at this proposal to fix the "Java needs so much memory" perception in containers. The idea is to bump the default `MaxRAMPercentage` to a higher value. The patch proposes 75%, but we could just as well use 50% if people feel more comfortable with that. The common Kubernetes-style deployment, a container with a memory limit in place and a single process running inside it, isn't well catered for today when the application just uses the default configuration: only 25% of the container memory will be used for the Java heap, arguably wasting much of the remaining memory that has been granted to the container by the memory limit (which the JVM detects and uses as physical memory).
>>
>> I've filed a CSR for this as well, for which I'm also looking for reviewers, and I intend to write a release note about this change since it carries some risk, although the escape hatch is pretty simple: set `-XX:MaxRAMPercentage=25.0` to go back to the old behaviour.
>>
>> Testing:
>> - [x] GHA - tier 1 (windows failures seem infra related)
>> - [x] hotspot and jdk container tests on cg v2 and cg v1 including the two new tests.
>>
>> Thoughts? Opinions?
>
> The question of how much heap memory an arbitrary Java program needs to run is in many ways similar to the halting problem of answering how much time a Java program needs to run. There is no great answer. 25% was a wild guess. Sometimes it's okay, sometimes it's awful. There are plenty of situations where it is not at all what you want. But I'm very skeptical of coming up with a new wild guess in the hope of improving performance while risking getting OOM-killed. In a 512 MB container, 25% might be too small and you really want 75% to get better performance, except when direct mapped byte buffers use too much memory. But in a 128 MB container, 75% might be too much, as the JIT compiled code and metaspace might need more than 32 MB.
>
> I think trying to find a good answer to how much heap a Java program should use without running it is hopeless, and I'm not thrilled about changing from one bad guess to another bad guess, rather than having a more complete way of reasoning about *why* a limit is too high or too low and adapting accordingly at runtime.
>
> When these sorts of proposals started popping up, I started working on automatic heap sizing instead, so that we can recognize that there is actually no static limit when the user hasn't set one, and deal with that without exhausting memory by using some clever policies. There is now a JEP for both ZGC (cf. https://openjdk.org/jeps/8329758) and G1 (cf. https://openjdk.org/jeps/8359211) to do automatic heap sizing. Given their arrival, do we still need to mess around with these guesses? If not, then changing from one bad guess to another bad guess might just introduce risk. I'd prefer to let automatic heap sizing solve this better instead.
>
> -------------
>
> PR Comment: https://git.openjdk.org/jdk/pull/25086#issuecomment-3306363643
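As an aside to the quoted proposal: the effect of the default is easy to observe empirically. A minimal sketch (the class name is made up for illustration) that prints the heap limit the JVM derived from the container:

    public class HeapLimitCheck {
        public static void main(String[] args) {
            // Under a container memory limit, MaxHeapSize is derived from
            // MaxRAMPercentage (25% by default today) of the detected limit.
            System.out.println("Effective max heap: "
                    + (Runtime.getRuntime().maxMemory() >> 20) + " MB");
        }
    }

In a container with a 512 MB limit this should report roughly 128 MB today; under the proposed 75% it would be roughly 384 MB, and the escape hatch -XX:MaxRAMPercentage=25.0 quoted above restores the old value (exact figures vary a little because the JVM aligns heap sizes).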