Heap Size Ergonomics in docker containers
Bob Vandette
bob.vandette at oracle.com
Wed Oct 25 12:44:22 UTC 2017
> On Oct 25, 2017, at 5:36 AM, Thomas Schatzl <thomas.schatzl at oracle.com> wrote:
>
> On Tue, 2017-10-24 at 14:43 -0400, Bob Vandette wrote:
>> I’m trying to test the container support that I’m adding to
>> JDK 18.3 and I’m running into an issue with the Heap size
>> calculation.
>>
>> If I create a docker container that has 100MB of total memory on a
>> 64-bit Linux box, I’m ending up with a 126MB Max Heap which results
>> in the process getting killed by the OOM killer.
>>
>> This is due to the fact that MaxHeapSize is 126MB and phys_mem (100MB)
>> is not less than MinRAMPercentage (50%) of MaxHeapSize.
>>
>> I would think in a small memory system, you don’t ever want the Heap
>> size to be more than somewhere around 70% of total RAM.
>>
>> A quick fix for my problem might be this, but this still
>> leaves some pathological cases where phys_mem is greater than but
>> close to MaxHeapSize.
>>
>> if phys_mem < MaxHeapSize || phys_mem < MaxHeapSize * MinRAMPercentage
>>     use phys_mem * MinRAMPercentage
>>
>> I could improve on this with:
>>
>> if MaxHeapSize > 70% of phys_mem || phys_mem < MaxHeapSize * MinRAMPercentage
>>     use phys_mem * MinRAMPercentage
>>
>> [...]
>> Let me know what you think.
>>
>
> that seems to be almost the same as making the default value of
> MaxHeapSize 70% smaller.
Not really. That just creates the same problem if the container’s memory is set
to some value near the lower default.
> Also that 70% only seems to serve the same
> purpose as MaxRAMPercentage, i.e. imo MaxRAMPercentage should have
> covered that.
I think it makes sense to have a MaxRAMPercentage along with a MinRAMPercentage.
You don’t want to use the same percentage for many GBs of RAM as you’d use with 128MB,
so I don’t agree that we should cover all cases with MaxRAMPercentage.
>
> I kind of question this line in the code:
>
> // Not-small physical memory, so require a heap at least
> // as large as MaxHeapSize
> reasonable_max = MAX2(reasonable_max, (julong)MaxHeapSize);
>
> So we just found out a "reasonable max" as a part of the calculation,
> and then we ignore it…
I don’t think it’s ignored. reasonable_max will be set to 25% of phys_mem
which could be larger than MaxHeapSize.
>
> The problem seems to be the default value of MaxHeapSize being 126M for
> 64 bit machines. I guess this has been made to provide a "good Java
> experience" with machines with only a few 100 MBs (overriding
> MaxRAMPercentage). From that point of view, I am not sure why the
> default value is that large given typical (PC) memory sizes nowadays.
>
> So one option could be reducing the default MaxHeapSize to some lower,
> other random value (E.g. the absolute minimum it needs to run a Hello
> World program plus some buffer for good measure)
Not going there. This would cause too many regressions for existing
deployments that upgrade to JDK 18.3.
>
> Then again, the given 126MB as absolute minimum seems to be as good as
> any other value (96MB for 32 bit systems) - everyone else just needs to
> set -Xmx. I mean, if the next person wants to run the VM in a 50M
> docker container, and defaults fail, the same complaints will start
> again, so as long as the MAX2() calculation is there any random minimum
> heap size will cause this issue.
>
> So another option would be to remove that MAX2 calculation…
Without changing anything else, this would cause us to select 25% of total available
memory, which I believe is too small.
I still think we need to find a way of determining when MaxHeapSize is set too close to,
or higher than, physical memory in order to know when to use MinRAMPercentage. My test
for 70% is the best solution I’ve been able to come up with so far.
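For reference, the 70% test could be sketched like this (a hypothetical simplification assuming the defaults quoted in this thread, MaxHeapSize = 126MB and MinRAMPercentage = 50; the MaxRAMPercentage path is omitted for brevity):

```cpp
#include <cstdint>

const uint64_t M = 1024 * 1024;

// Hypothetical sketch of the proposed check, not actual HotSpot code.
uint64_t pick_max_heap(uint64_t phys_mem) {
  const uint64_t default_max_heap = 126 * M; // 64-bit ergonomic default
  // If the default heap would take more than ~70% of physical memory,
  // fall back to MinRAMPercentage (50%) of physical memory instead.
  if (default_max_heap * 100 > phys_mem * 70) {
    return phys_mem / 2; // 50% == default MinRAMPercentage
  }
  return default_max_heap; // otherwise keep the existing default
}
```

With a 100MB container this yields a 50MB heap instead of 126MB; on a 1GB machine the condition is false and the default is kept, so existing deployments are unaffected.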
Bob.
>
> Thomas
>