Container-aware heap sizing for OpenJDK
Jonathan Joo
jonathanjoo at google.com
Wed Sep 21 01:58:49 UTC 2022
Hi all,
Apologies for the late response - I had missed the replies due to the way I
had set up my inbox filtering 😓. I may have also missed entire emails due
to this, so please feel free to re-reply to this email if I have not
addressed your questions.
I'll try to address all of the email comments in one go, so please bear
with me - long email ahead!
-----
@ Thomas Schatzl
Long time no talk :) Thanks for your detailed response.
> Most of these suggestions seem to be fairly consistent with existing
> RFEs (e.g. [1], [2], [3], ...) that have been discussed before with you
> (e.g. in [4]) and been considered really nice to have iirc.
>
> Agreed that there is a lot of overlap between AHS and the currently open
RFEs. Happy to converge on these now that I have a better understanding of
this area from working on it! Notably, [1] and [2] are very similar to the
two manageable flags that are part of AHS, so I agree that there is room
for collaboration here. [3] is somewhat orthogonal to AHS in terms of
implementation, but it would be very helpful for AHS, since the more
frequently we can uncommit memory, the closer we can keep the heap to our
target heap size.
> I am not convinced that having a thread inside the JVM is really the
> best solution. Constantly querying the _environment_ for changes seems
> to be traditionally outside of the scope of the JVM.
> [...]
> Using some external process (however it is distributed) seems to be a
> much more flexible option (not only in customizability but also in terms
> of the release cycle for it). I would suggest to at least separate this
> effort from improving the JVM capabilities.
Just to clarify - the AHS thread is not part of the JVM itself. It is a
separate thread, kicked off in our Java launcher at the same time as the
JVM, that lives in a completely separate process. Thus the functionality
that pulls from the environment and sets the manageable flags is not part
of hotspot/the JVM. The changes to the JVM itself are not that intrusive,
and they follow logic somewhat similar to
https://github.com/tschatzl/jdk/tree/8238687-investigate-memory-uncommit-during-young-gc2.
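To make the mechanics concrete: manageable flags are precisely the flags
that can be rewritten while the JVM is running - from a separate process
via the attach API (this is what `jinfo -flag <name>=<value> <pid>` does),
or in-process via HotSpotDiagnosticMXBean. A minimal in-process sketch
(the flag name below is a placeholder, not the actual AHS flag):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class ManageableFlagUpdater {
        private static final HotSpotDiagnosticMXBean DIAG =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        // Rewrites any flag declared "manageable" in the running JVM;
        // the two AHS flags would be updated in the same way.
        // "CurrentTargetHeapSize" is a hypothetical flag name.
        static void setTargetHeapSize(long bytes) {
            DIAG.setVMOption("CurrentTargetHeapSize", Long.toString(bytes));
        }
    }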
> some applications that exhibit a very "phased" behavior
> results show very bad behavior.
This is helpful to know - as someone who has never heard of SPECjbb2015,
would it be easy for me to try running it with my prototype?
-----
@ Severin:
> 1. How is AHS enabled? Is it on by default or is it opt-in?
>
AHS is controlled by a flag in Google's version of the JDK launcher. Right
now at Google it is opt-in, but we plan to enable it by default for certain
subsets of jobs (namely those already enrolled in a service meant to make
tuning more hands-off). If that rollout goes smoothly, we will broaden
adoption (though it will probably remain opt-in).
> 2. Is the prototype working for all GCs available in OpenJDK or
> specific to G1?
>
This prototype currently works only with G1, and we don't have the
bandwidth at the moment to extend it to other GCs :(
> 3. Would this be a Linux only feature?
>
Currently yes - it depends on things like cgroups, which I'm not sure are
available on other platforms. That being said, if an equivalent feature
exists on other platforms, I don't see why it wouldn't work!
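For context, the container-side signal itself is straightforward to read.
A minimal sketch, assuming cgroup v2 mounted at the standard location
(cgroup v1 exposes memory.limit_in_bytes and memory.usage_in_bytes
instead, and production code would resolve the process's actual cgroup
from /proc/self/cgroup):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class ContainerMemory {
        private static final Path LIMIT = Path.of("/sys/fs/cgroup/memory.max");
        private static final Path USAGE = Path.of("/sys/fs/cgroup/memory.current");

        static long limitBytes() throws IOException {
            String s = Files.readString(LIMIT).trim();
            // "max" means no memory limit is configured for this cgroup.
            return s.equals("max") ? Long.MAX_VALUE : Long.parseLong(s);
        }

        static long usageBytes() throws IOException {
            return Long.parseLong(Files.readString(USAGE).trim());
        }
    }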
-----
@ Thomas Stüfe
> Can you describe the adjustment logic in more detail or is there a public
> prototype?
>
There isn't currently a public prototype - I'll have to double check with
legal about the level of detail I can include in a public forum and get
back to you. (I imagine it should be fine, but a public-facing doc hasn't
been written yet :P)
-----
@ Fazil
> Any chance to be considered as a JEP in OpenJDK?
>
I'm not familiar with the process of turning an idea/prototype into a JEP
- do you have any suggestions on how to go about it? Is there an approval
process for creating a JEP, or can anyone propose one?
-----
@ Volker
> 1. Is there a public prototype available?
>
Not yet, unfortunately!
> 2. For which GCs have you implemented it?
>
Just G1 - I would say the implementation is fairly GC-dependent, so it may
be a bit of work to get it working with other GCs. (That being said, I'm
not familiar with the other GCs, so maybe it won't be as bad as I think?)
> 3. On which platforms does it currently work?
>
Just Linux at the moment.
> 4. Do you use information from procfs on Linux (or what else) to get
> the memory state of the system?
>
I can provide more details once I get approval from legal to share more
specifics publicly!
> 5. How often do you query the system and JVM state and update the
> settings? Is this (or can it be) correlated to the JVMs allocation
> rate?
>
Right now it defaults to once every 5 seconds, but it is configurable to
run at whatever frequency is appropriate for the server. Correlating it
with the JVM's allocation rate is a good idea - that should definitely be
doable.
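Structurally, the monitoring side is just a fixed-rate loop; a rough
sketch (names and interval are illustrative, not our production code):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    class AhsPoller {
        // Periodically re-read container usage/limits and GC CPU overhead,
        // then update the two manageable flags. An adaptive variant could
        // derive the interval from the JVM's recent allocation rate.
        static void start(Runnable recomputeFlags, long intervalSeconds) {
            ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(
                recomputeFlags, 0, intervalSeconds, TimeUnit.SECONDS);
        }
    }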
> 6. Can you please explain your approach in some more detail (e.g. how
> does heap shrinking work)? I suppose you have "Current target heap
> size" <= "Current maximum heap expansion size" <= Xmx. I can
> understand how your monitoring thread updates "Current maximum heap
> expansion size" based on the systems memory usage. But it's harder to
> understand how you set "Current target heap size" based on GC CPU
> overhead, and when and how do you trigger heap resizing (both
> increasing and shrinking) based on these values? And why do you need
> two new variables? Wouldn't "Current target heap size" be enough
> (also see next question for more context)?
>
Will provide more details on these questions when legal approves! But at a
high level: the target heap size is a soft target that we try to bring the
heap to. If, say, we have a large amount of legitimate heap usage that
cannot be cleaned up (and is still higher than the target heap size), we
allow the heap size to stay above the target indefinitely.
This heap size target is driven by a GC CPU overhead target: if we are
spending more CPU time on GC than the overhead target allows, we increase
the heap size target, and vice versa.
The second flag (current maximum heap expansion size) is a hard limit: it
unconditionally blocks allocations that would cause us to hit container
OOM.
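To illustrate the shape of that feedback loop (the constants and step
sizes below are placeholders for illustration, not our production
heuristics):

    class HeapTargetController {
        // Illustrative target: fraction of CPU time spent in GC.
        static final double GC_CPU_OVERHEAD_TARGET = 0.05;

        // Soft target: if GC is eating more CPU than the target, a larger
        // heap means fewer collections; if GC is cheap, the heap can shrink.
        static long nextTargetHeapSize(long currentTarget,
                                       double observedGcCpuOverhead) {
            if (observedGcCpuOverhead > GC_CPU_OVERHEAD_TARGET) {
                return (long) (currentTarget * 1.10); // too much CPU in GC: grow
            }
            return (long) (currentTarget * 0.95);     // GC cheap enough: shrink
        }
    }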
> 7. What you propose is very similar to the CurrentMaxHeapSize proposed
> by "JEP draft: Dynamic Max Memory Limit" [1] except that the latter
> proposal misses the part about how to automatically set and update
> this value.
> 8. We already have "JEP 346: Promptly Return Unused Committed Memory
> from G1" [2] since JDK 12. Can you explain why this isn't enough? I.e.
> if the GC already returns as much heap memory as possible back to the
> system, what else can you do to further improve the situation?
Agreed! When initially sketching out the proposal, I looked into both of
the JEPs you mention in 7. and 8. JEP 346 is not sufficient for our use
case since it is not available in JDK 11, and IIRC there was trouble
backporting it to JDK 11, so it was not usable on our end, at least for a
while.
Furthermore, while periodic GC would help to some extent, it leaves us
trying to determine the optimal GC periodicity per server to achieve what
we want. That seems a less informed decision than specifically forcing
more GC at times of low container free space. We have actually
experimented with similar features within Google and saw that they were
not sufficient for preventing container OOMs.
-----
@ Stefan
> It would probably be good to use the same name for the other GCs.
>
Acknowledged - will change before upstreaming.
-----
@ Kirk
> From your description here, you're using CPU (GC overheads) to help you
> resize. Do you mind elaborating on how this works?
>
We haven't spent much time focusing on the distribution of heap size
between the young and old generations, but I agree that this is an area
that needs more active investigation for our prototype. Currently our
model does the simplest thing: it shrinks and expands the heap when
necessary, without modifying the ratio between the young and old
generations. But we plan to look more into this as we encounter more types
of workloads.
> Might I suggest that a quicker way is to start large and then resize to
> smaller. The reason for doing this is because small clips the signals you
> need to look at to know how big things need to be. Starting big should give
> you a cleaner, unclipped signal to work with.
>
Thank you for the suggestion -- noted! One issue we've run into is that G1
tends to use as much heap as it is given, so oftentimes starting large
does not give us the right signals as to whether we should shrink the
container. But I think with AHS, this becomes a viable approach.
-----
@ Ioi
> - In the simplest case, you have a single JVM process running inside the
> container. How do you balance its Java heap vs non-Java heap usage?
We don't bound non-Java heap usage -- we assume that there are no
memory leaks and that non-Java heap usage is legitimate. Thus, with
AHS enabled, the onus is on the JVM heap to adapt to increases in
non-Java heap usage.
> - If you have two JVM processes running inside the container, how do
> they coordinate?
We haven't really tried AHS with multiple JVM processes running inside
the same container yet, but I imagine it should work mostly the same.
Assuming there is indeed enough space in the container for both
processes to work within the target GC CPU overhead, heap usage for
both should stay fairly constant, and each AHS thread should prevent
its JVM from exceeding a heap usage that would result in container
OOMs.
> - If the fluctuation is caused by other processes, can the JVM react
> quickly (run GC and free up caches) to respond to quick spikes? Do we
> need to configure the container to allow temporarily over-budget
> (something like "you can be 100MB over budget for less than 20ms") so
> the JVM has time to shrink itself?
> - Conversely, how can a spiky process request the JVM to temporarily
> give up some memory?
The JVM can react pretty quickly - basically, if free space decreases
quickly, then the next time the JVM tries to expand the heap due to an
allocation, the expansion fails, and the JVM runs a GC to free up some
space. During this GC it does its best to shrink the heap to its
target size. Rather than allowing the container to go over budget, we
keep some buffer: we don't allow expansions that would cause container
usage to exceed 95% of the container limit.
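That buffer check amounts to something like the following (the 95% figure
is from above; the names are illustrative, not our actual code):

    class ExpansionGuard {
        static final double SAFETY_FRACTION = 0.95;

        // Permit a heap expansion only if it keeps total container usage
        // below the safety fraction of the container limit.
        static boolean mayExpand(long expansionBytes,
                                 long containerUsageBytes,
                                 long containerLimitBytes) {
            return containerUsageBytes + expansionBytes
                    <= (long) (containerLimitBytes * SAFETY_FRACTION);
        }
    }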
> It seems to me that for the more complex scenarios, it's not enough for
> each individual JVM to make decisions on its own. We may need some sort
> of inter-process coordination.
Agreed that this is not a one-size-fits-all solution for all possible
scenarios, especially the more complex ones.
-----
Again, apologies for the delay in responding to the questions, but I hope I
answered everything here. Will be more diligent about monitoring this
discussion thread.
Appreciate all the thoughtful questions and discussions!
~ Jonathan
On Fri, Sep 16, 2022 at 10:41 AM Kirk Pepperdine <kirk.pepperdine at gmail.com>
wrote:
> Hi Jonathan,
>
> Very interesting experiment. This sizing issue is something that is
> befuddling a significant portion of those responsible for
> deploying containerized Java applications. Ioi nicely points out that the
> old goal of "play nice" when configuring memory is in conflict with the new
> goal of "be greedy". Thus a re-visiting of memory sizing ergonomics is
> something that I certainly welcome. The cloud providers have been
> interested in better (for some weak definition of better) memory resizing
> dynamics for quite some time, so this is also a hot-button topic.
>
> I'm not sure how much I have to add over what others have commented on
> but, I don't believe we need inter-process communication, at least not
> in the first instance nor do we need a watcher thread (again, at least not
> in the first instance). The one thing that I see here, if I'm reading this
> correctly, is that there is a focus on total heap size. For generational
> collectors, like G1, young and tenured play two different roles and thus
> require different tuning strategies. Tuning young is about controlling the
> promotion of transients into tenured. The two big things that drive
> transients into tenured are undersized survivor space and frequency
> collections (accelerated aging). Thus young sizing should be heavily
> influenced by allocation rates. This is considerably different than tenured
> where the driving metric is live set size (LSS). Thus tenured should be
> LSS + some working space. From this, it follows that max heap will be the
> sum of the parts. From your description here, you're using CPU (GC
> overheads) to help you resize. Do you mind elaborating on how this works?
>
> Another side note is that you mention sizing is trial and error where you
> start small and then make bigger as needed. Might I suggest that a quicker
> way is to start large and then resize to smaller. The reason for doing this
> is because small clips the signals you need to look at to know how big
> things need to be. Starting big should give you a cleaner, unclipped signal
> to work with.
>
> Kind regards,
> Kirk
>
>
> On Tue, Sep 13, 2022 at 12:17 PM Jonathan Joo <jonathanjoo at google.com>
> wrote:
>
>> Hello hotspot-dev and hotspot-gc-dev,
>>
>> My name is Jonathan, and I'm working on the Java Platform Team at Google.
>> Here, we are working on a project to address Java container memory issues,
>> as we noticed that a significant number of Java servers hit container OOM
>> issues due to people incorrectly tuning their heap size with respect to the
>> container size. Because our containers have other RAM consumers which
>> fluctuate over time, it is often difficult to determine a priori what is an
>> appropriate Xmx to set for a particular server.
>>
>> We set about trying to solve this by dynamically adjusting the Java
>> heap/gc behavior based on the container usage information that we pass into
>> the JVM. We have seen promising results so far, reducing container OOMs by
>> a significant amount, and oftentimes also reducing average heap usage (with
>> the tradeoff of more CPU time spent doing GC).
>>
>> Below (under the dotted line) is a more detailed explanation of our
>> initial approach. Does this sound like something that may be useful for the
>> general OpenJDK community? If so, would some of you be open to further
>> discussion? I would also like to better understand what container
>> environments look like outside of Google, to see how we could modify our
>> approach for the more general case.
>>
>> Thank you!
>>
>>
>> Jonathan
>> ------------------------------------------------------------------------
>> Introduction:
>>
>> Adaptable Heap Sizing (AHS) is a project internal to Google that is meant
>> to simplify configuration and improve the stability of applications in
>> container environments. The key is that in a containerized environment, we
>> have access to container usage and limit information. This can be used as a
>> signal to modify Java heap behavior, helping prevent container OOMs.
>> Problem:
>>
>> - Containers at Google must be properly sized to not only the JVM
>>   heap, but other memory consumers as well. These consumers include
>>   non-heap Java (e.g. native code allocations), and simultaneously
>>   running non-Java processes.
>> - Common antipattern we see here at Google:
>>   - We have an application running into container OOMs.
>>   - An engineer raises both container memory limit and Xmx by the same
>>     amount, since there appears to be insufficient memory.
>>   - The application has reduced container OOMs, but is still prone to
>>     them, since G1 continues to use most of Xmx.
>>   - This results in many jobs being configured with much more RAM than
>>     they need, but still running into container OOM issues.
>>
>> Hypothesis:
>>
>> - For preventing container OOM: Why can't heap expansions be bounded
>>   by the remaining free space in the container?
>> - For preventing the `unnecessarily high Xmx` antipattern: Why can't
>>   target heap size be set based on GC CPU overhead?
>> - From our work on Adaptable Heap Sizing, it appears they can!
>>
>> Design:
>>
>> - We add two manageable flags in the JVM:
>>   - Current maximum heap expansion size
>>   - Current target heap size
>> - A separate thread runs alongside the JVM, querying:
>>   - Container memory usage/limits
>>   - GC CPU overhead metrics from the JVM.
>> - This thread then uses this information to calculate new values for
>>   the two new JVM flags, and continually updates them at runtime.
>> - The `Current maximum heap expansion size` informs the JVM what is
>>   the maximum amount we can expand the heap by, while staying within
>>   container limits. This is a hard limit, and trying to expand more
>>   than this amount results in behavior equivalent to hitting the Xmx
>>   limit.
>> - The `Current target heap size` is a soft target value, which is used
>>   to resize the heap (when possible) so as to bring GC CPU overhead
>>   toward its target value.
>>
>> Results:
>>
>> - At Google, we have found that this design works incredibly well in
>>   our initial rollout, even for large and complex workloads.
>> - After deploying this to dozens of applications:
>>   - Significant memory savings for previously misconfigured jobs (many
>>     of which reduced their heap usage by 50% or more)
>>   - Significantly reduced occurrences of container OOM (100% reduction
>>     in the vast majority of cases)
>>   - No correctness issues
>>   - No latency regressions*
>> - We plan to deploy AHS across a much wider subset of applications by
>>   EOY '22.
>>
>> *Caveats:
>>
>> - Enabling this feature might require tuning of the newly introduced
>>   default GC CPU overhead target to avoid regressions.
>> - Time spent doing GC for an application may increase significantly
>>   (though generally we've seen in practice that even if this is the
>>   case, end-to-end latency does not increase a noticeable amount)
>> - Enabling AHS results in frequent heap resizings, but we have not
>>   seen evidence of any negative effects as a result of these more
>>   frequent heap resizings.
>> - AHS is not necessarily a replacement for proper JVM tuning, but
>>   should generally work better than an untuned or improperly tuned
>>   configuration.
>> - AHS is not intended for every possible workload, and there could be
>>   pathological cases where AHS results in worse behavior.
>>
>>
>
> --
> Kind regards,
> Kirk Pepperdine
>
> http://www.kodewerk.com
> http://www.javaperformancetuning.com
>