RFR: 8350596: [Linux] Increase default MaxRAMPercentage for containerized workloads

Severin Gehwolf sgehwolf at openjdk.org
Thu Sep 18 17:26:24 UTC 2025


On Thu, 18 Sep 2025 09:00:23 GMT, Erik Österlund <eosterlund at openjdk.org> wrote:

> The question of how much heap memory an arbitrary Java program needs to run is in many ways similar to the halting-problem question of how much time a Java program needs to run. There is no great answer.

Agreed. Nevertheless, there is a problem worth solving: improving the memory *utilization* of Java when deploying to the cloud. This isn't a *performance* measure; it should be performance neutral. The goal is to improve the memory utilization story in the default config.

> 25% was a wild guess. Sometimes it's okay, sometimes it is awful. There are plenty of situations when it is not at all what you want.

At the time, container deployments with a single process in them weren't a thing. They are now.

> But I'm very skeptical about coming up with a new wild guess hoping to improve performance, while risking getting killed. In a 512 MB container, 25% might be too small and you really want to use 75% to get better performance, except when direct-mapped byte buffers use too much memory. But in a 128 MB container, 75% might be too much as the JIT-compiled code and metaspace might need more than 32 MB.

We have more data now, so the default should adjust. Keep in mind that the proposal is to change the `MaxRAMPercentage` value only in a fairly specific setup: a container with a memory limit set at the container level. Since this changes only `MaxRAMPercentage`, it merely feeds into the heuristics machinery that determines the actual `MaxHeapSize`. I've attached two charts to the bug showing that the `MaxRAMPercentage` bump has no effect on JVMs with 250 MB of memory or less.
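
As a quick way to see what the ergonomics actually derived, a minimal sketch like the following can be run inside the container; the class name and the example limits in the comments are illustrative assumptions, not part of the proposal:

```java
// Minimal sketch: print the max heap the JVM ergonomics derived from the
// container memory limit and MaxRAMPercentage. Class name and the example
// limits in the comments are illustrative only.
public class ShowEffectiveHeap {
    public static void main(String[] args) {
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Effective max heap: %d MB%n",
                maxHeapBytes / (1024 * 1024));
        // e.g. run as: java -XX:MaxRAMPercentage=75.0 ShowEffectiveHeap
        // inside a container started with a 512 MB memory limit.
    }
}
```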
 
> I think trying to find a good answer to how much heap a Java program should use without running it is hopeless, and don't feel thrilled about changing the guesses from one bad guess to another bad guess, rather than having a more complete way of reasoning about _why_ a limit is too high or too low, and adapting accordingly at runtime.

Hopeless seems a stretch. I'm arguing that we have more data today, and that we need to think about how to gradually adjust the defaults to the new normal.

> When these sorts of proposals started popping up, I started working on automatic heap sizing instead so that we would be able to recognize that there is actually no static limit if the user hasn't said so, but we can deal with that without exhausting memory with some clever policies. Now there is a JEP for both ZGC (cf. https://openjdk.org/jeps/8329758) and G1 (cf. https://openjdk.org/jeps/8359211) to do automatic heap sizing. Given their arrival, do we still need to mess around with these guesses?

I think so. While those JEP drafts are heading in the right direction, they don't solve the problem for the rest of the GCs. Keep in mind that for many small deployments, say 1 core, ergonomics selects Serial GC, so the problem remains there.
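
One way to check which collector ergonomics picked is to dump the GC MXBean names; this is a hedged sketch (class name is made up), and on a 1-core container without explicit GC flags it typically reports the Serial GC beans:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Minimal sketch: list the collectors ergonomics selected for this JVM.
// On a 1-core container without explicit GC flags, this typically prints
// the Serial GC beans "Copy" and "MarkSweepCompact".
public class ShowSelectedGC {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```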

> If not, then I think changing from one bad guess to another bad guess might just introduce risk. I'd prefer to let automatic heap sizing solve this better instead.

I agree that automatic heap sizing should solve this problem, but it needs to do so for all cases. It's also not clear when those JEPs will be widely available. Until they are, we should adjust the defaults to the changed reality to ease some of this pain.

-------------

PR Comment: https://git.openjdk.org/jdk/pull/25086#issuecomment-3308676336
