Pls review 6887571

Peter B. Kessler Peter.Kessler at Sun.COM
Mon Oct 5 18:21:10 PDT 2009


I think that's

    http://bugs.sun.com/view_bug.do?bug_id=6255634

The argument against it is multiple (identical?) JVMs, each requesting 75% of the _installed_ memory.  Or multiple JVMs, the first of which gets 75% of the _available_ memory, the second of which gets 75% of what's left, the third of which gets 75% of what's left, etc.  But for a single JVM on a machine, it might make sense, at least as an option.  Really what's wanted is "external ergonomics" and chunked heaps.
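
Concretely, a back-of-the-envelope sketch of that arithmetic (nothing here is
from the CR; the 75% policy and the class below are purely illustrative):

    // Illustrative only: a "fixed fraction of memory" default
    // over-subscribes with several JVMs, and starves late arrivals.
    import java.lang.management.ManagementFactory;

    public class FractionalHeapSketch {
        public static void main(String[] args) {
            double fraction = 0.75;   // hypothetical ergonomics policy
            int jvms = 3;             // identical JVMs on one machine

            // Policy A: each JVM sizes its heap from _installed_ memory.
            // Total request = 3 * 75% = 225% of RAM: over-subscribed.
            System.out.printf("A: total requested = %.0f%% of installed RAM%n",
                              jvms * fraction * 100);

            // Policy B: each JVM sizes its heap from memory still _available_.
            // Shares shrink geometrically: 75%, 18.75%, ~4.7%, ...
            double available = 1.0;   // installed RAM normalized to 1.0
            for (int i = 1; i <= jvms; i++) {
                double heap = fraction * available;
                available -= heap;
                System.out.printf("B: JVM %d gets %.1f%% of installed RAM%n",
                                  i, heap * 100);
            }

            // For a single JVM the policy is plausible; a Sun JDK exposes
            // installed memory through the com.sun.management extension:
            com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            long mxBytes = (long) (fraction * os.getTotalPhysicalMemorySize());
            System.out.println("single-JVM -Xmx candidate: " + mxBytes + " bytes");
        }
    }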

			... peter

David Holmes - Sun Microsystems wrote:
> Any reason -Xmx has to be hardcoded rather than calculated based on the 
> installed/available memory?  Then OldSize and NewSize can/should be 
> fractions of the -Xmx value.
> 
> Just a thought ...
> 
> David
> 
> Paul Hohensee said the following on 10/03/09 03:05:
>> You're right, they should migrate back.  If there's a need for them to 
>> diverge
>> in the future, we can split them back out then.
>>
>> Thanks,
>>
>> Paul
>>
>> Y.S.Ramakrishna at Sun.COM wrote:
>>> Hi Paul --
>>>
>>> Looks good (and about time too!).
>>> Should the parameters whose defaults are now
>>> uniform across platforms migrate back to globals.hpp
>>> from their current globals_*.hpp locations? Or do you
>>> see the need for continuing to keep them in platform-specific
>>> globals_*.hpp?
>>>
>>> -- ramki
>>>
>>> On 10/02/09 07:32, Paul Hohensee wrote:
>>>> 6887571: Increase default heap config sizes
>>>>
>>>> Webrev at
>>>>
>>>> http://cr.openjdk.java.net/~phh/6887571/webrev.00/
>>>>
>>>> From the CR description:
>>>>
>>>> The default client VM heap config since ~2000 has been the 
>>>> equivalent of
>>>>
>>>> -Xmx64m -XX:OldSize=4m -XX:NewSize=2m -XX:NewRatio=8
>>>> -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15
>>>>
>>>> for sparc32, and
>>>>
>>>> -Xmx64m -XX:OldSize=4m -XX:NewSize=1m -XX:NewRatio=12
>>>> -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15
>>>>
>>>> for x86.  OldSize and NewSize are the initial committed sizes of the 
>>>> old
>>>> and young gens respectively.  A full GC is required to increase the 
>>>> committed
>>>> size of the old gen.
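>>>>
>>>> (Aside, not from the CR: to confirm the values a particular build
>>>> actually settles on, assuming a build recent enough to ship the
>>>> product flag -XX:+PrintFlagsFinal, one can run
>>>>
>>>> java -client -XX:+PrintFlagsFinal -version | \
>>>>     egrep 'MaxHeapSize|OldSize|NewSize|NewRatio|SurvivorRatio|MaxTenuringThreshold'
>>>>
>>>> which prints the final, post-ergonomics values of the -XX flags.)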
>>>>
>>>> At the time, 64m was half of the 128m of memory typically available 
>>>> on high-end
>>>> desktops, many client applications were satisfied with small heaps 
>>>> (hence the
>>>> low -Xms value), and gc times were such that the young gen had to be 
>>>> fairly small
>>>> in order to minimize pause times.
>>>>
>>>> Since that time, low end desktops and laptops, as well as netbooks 
>>>> and smartbooks,
>>>> typically come with 256m, client applications have become much more 
>>>> "server-like",
>>>> and we've realized that small young gen sizes increase the frequency 
>>>> of young GCs 
>>>> and the amount of transient data promoted to the old gen to levels 
>>>> that noticeably
>>>> impact startup and steady-state performance, principally by 
>>>> provoking full GCs.
>>>> We also note that young gen collection times are proportional to the 
>>>> total survivor
>>>> size rather than young gen size and that small (in absolute terms) 
>>>> survivor spaces
>>>> cause promotion of transient objects, thereby eventually provoking 
>>>> unnecessary
>>>> full GCs.
>>>>
>>>> This change makes the default heap config
>>>>
>>>> -Xmx128m -XX:OldSize=14m -XX:NewSize=4m -XX:NewRatio=2
>>>> -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15
>>>>
>>>> I.e., it leaves SurvivorRatio and MaxTenuringThreshold alone, but 
>>>> increases absolute
>>>> survivor space size significantly.  We still want as many objects to 
>>>> die in the young
>>>> gen as possible, so MaxTenuringThreshold remains at maximum.  
>>>> NewRatio is
>>>> set to the server default of 2, thereby increasing the young gen size 
>>>> and reducing the number of young collections.
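>>>>
>>>> As a rough check on the survivor sizing (using the usual HotSpot
>>>> relations young = heap / (NewRatio + 1) and survivor = young /
>>>> (SurvivorRatio + 2)): the old x86 defaults allow a young gen of at
>>>> most 64m / 13 ~= 4.9m with ~0.5m survivor spaces, while the new
>>>> defaults allow 128m / 3 ~= 42.7m with ~4.3m survivor spaces, roughly
>>>> a 9x increase in absolute survivor size.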
>>>>
>>>> JavaFX startup benchmark runs show an almost 11% improvement, while 
>>>> generic
>>>> client startup benchmark runs show up to 14% improvement.  Footprint 
>>>> increases
>>>> somewhat, ranging from 2% for noop to 37% for netbeans.
>>>>
>>>> Thanks,
>>>>
>>>> Paul
>>>>
>>>


