RFR: 8222145: Add -XX:SoftMaxHeapSize flag

Per Liden per.liden at oracle.com
Fri Apr 12 08:16:48 UTC 2019


Hi,

On 04/11/2019 02:02 PM, Rodrigo Bruno wrote:
> Hello all,
> 
> the main target of these two patches is highly dynamic environments, 
> where the application should get, at each moment, just enough
> resources to run. Consider the following stack: application - JVM - 
> customer container - cloud provider VM. The container is charged for
> memory consumption. Samples are taken frequently.
> 
> The CurrentMaxHeapSize idea is to allow the customer to inform the 
> application of how much memory it should use at most. This is necessary
> because many applications will read the max heap size and just use 
> all the available memory, for example to cache data. With this patch,
> we give the application an opportunity to read the new max heap size, 
> which will return the CurrentMaxHeapSize containing the value that
> the user wants.
> 
> The difference here from using update_target_occupancy (besides 
> potential overshooting) is that this is not visible to the application
> (please correct me if this is not true). In other words, if I change the 
> threshold at runtime, can the application see the change?
> 
> Probably a good solution is somewhere in the middle, where we can let 
> the application know the current limit and maybe be a bit more
> aggressive when we get closer to the threshold (current max heap size).
> 
> What do you think?

It can certainly be discussed whether setting SoftMaxHeapSize should or 
shouldn't be reflected in the heap MXBean, Runtime.maxMemory(), etc. 
However, I'm thinking it would be better to expose this information 
through some new API. An app that uses Runtime.maxMemory() to determine 
the size of some cache would kind of have to be SoftMaxHeapSize-aware 
anyway, in the sense that it would need to continuously poll 
Runtime.maxMemory() in case the limit is adjusted.
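
For illustration, a minimal sketch of such a cache; the class name, the 
25% budget and the 10 second poll interval are arbitrary choices for the 
example, not anything prescribed by the flag:

    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class MaxMemoryAwareCache {

        // Made-up policy: let the cache use at most 25% of the max heap.
        private volatile long budgetBytes;

        void refreshBudget() {
            long newBudget = Runtime.getRuntime().maxMemory() / 4;
            if (newBudget != budgetBytes) {
                budgetBytes = newBudget;
                // A real cache would evict entries here until it fits
                // the new budget again.
                System.out.println("cache budget is now "
                        + (budgetBytes >> 20) + " MB");
            }
        }

        public static void main(String[] args) {
            MaxMemoryAwareCache cache = new MaxMemoryAwareCache();
            // If the reported max heap size can change at runtime, the
            // application has to keep polling it to notice the new limit.
            Executors.newSingleThreadScheduledExecutor()
                     .scheduleAtFixedRate(cache::refreshBudget,
                                          0, 10, TimeUnit.SECONDS);
        }
    }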

It would also be a behavioral change, so there are compatibility 
questions lurking here. For example, Runtime.maxMemory() would no longer 
return a stable value, and it could end up being less than 
Runtime.totalMemory(). Introducing Runtime.softMaxMemory() or something 
like that would be safer.
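
As a purely hypothetical sketch of that direction (softMaxMemory() is 
only a suggested name, not an existing JDK method, so the helper below 
simply falls back to the hard limit):

    public class SoftLimitProbe {

        // Hypothetical stand-in: Runtime.softMaxMemory() does not exist,
        // so this just returns the hard limit for now.
        static long softMaxMemory() {
            return Runtime.getRuntime().maxMemory();
        }

        public static void main(String[] args) {
            long hard = Runtime.getRuntime().maxMemory(); // stays stable
            long soft = softMaxMemory();                  // may change over time
            System.out.printf("hard max = %d MB, soft max = %d MB%n",
                              hard >> 20, soft >> 20);
        }
    }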

cheers,
Per

> 
> Cheers,
> rodrigo
> 
> Thomas Schatzl <thomas.schatzl at oracle.com> wrote on Thursday, 
> 11/04/2019 at 12:11:
> 
>     Hi Ruslan,
> 
>     On Wed, 2019-04-10 at 18:53 +0200, Ruslan Synytsky wrote:
>      > So happy to get this long-awaited improvement!
>      >
>      > There are additional materials related to the same issue: JEP draft:
>      > Dynamic Max Memory Limit
>      > https://openjdk.java.net/jeps/8204088
>      > https://bugs.openjdk.java.net/browse/JDK-8204088
> 
>     The most recent webrev is at
>     http://cr.openjdk.java.net/~tschatzl/jelastic/cmx/ (I will call it the
>     CurrentMaxHeapSize approach in the following); note that this change
>     implements something slightly different: it modifies the current *hard*
>     max heap size. This means that, once the new value has been successfully
>     set, the heap will not grow beyond it, and if the application tries to,
>     an OOME is thrown.
> 
>     The -XX:SoftMaxHeapSize value is only a goal at which the collector
>     starts reclaiming memory. The collector may still, at will, use more
>     memory (up to the regular MaxHeapSize).
>     Of course, how much it overshoots will depend on the collector.
> 
>     The effect is similar, but not exactly the same in border cases; if
>     somebody were to work on this (maybe Lin from the other sub-thread, or
>     you?), would something like that be an acceptable solution for you too?
> 
>     I believe for this to work (and this is mostly a guess), it may be
>     sufficient to call G1IHOPControl::update_target_occupancy() with the
>     new SoftMaxHeapSize flag at certain places(*) instead of modifying the
>     actual maximum heap sizes like in the other attempt.
>     The heap sizing algorithm should then automatically do the right thing,
>     particularly in combination with JEP 346.
> 
>     (*) Being very conservative, only at (induced) safepoints; however,
>     since this would only change the timing of when and how GCs are
>     triggered, there should not be too much harm otherwise.
> 
>     This is different from changing the current (hard) max heap size; more
>     care may need to be taken (i.e. more analysis) there, which is why it
>     has not been picked up by the Oracle GC team at this time.
> 
>     There is at least one way, if not more, of misusing the
>     CurrentMaxHeapSize approach that comes to my mind (a random example):
>     what would the user expect to happen if he updated
>     CurrentMaxHeapSize in an OOME handler caused by reaching
>     CurrentMaxHeapSize?
> 
>     In addition to that, some details need to be worked out about what
>     should happen when, e.g. should memory automatically be uncommitted
>     as soon as possible, or some time later? Unfortunately JDK-8222145
>     does not give an answer here either.
> 
>      > Rodrigo Bruno implemented a patch for G1 already. The algorithm is
>      > described in the research paper
>      > http://ranger.uta.edu/~jrao/papers/ISMM18.pdf (flag
>      > name CurrentMaxMemory).
>      >
>      > Should we join the efforts and deliver it together?
> 
>     We would be happy for contributions :) - unfortunately we at Oracle do
>     not have the time right now for significant work on it similar to
>     JEP 346.
> 
>     Thanks,
>        Thomas
> 
> 


