RFE (m): JDK-7197666: java -d64 -version core dumps in a box with lots of memory
Jon Masamitsu
jon.masamitsu at oracle.com
Fri Apr 5 18:48:59 PDT 2013
On 4/5/2013 8:15 AM, Bengt Rutisson wrote:
>
> Hi Jon,
>
> Thanks for looking at this!
>
> On 4/4/13 9:54 PM, Jon Masamitsu wrote:
>> Bengt,
>>
>> http://cr.openjdk.java.net/~brutisso/7197666/webrev.01/src/share/vm/memory/allocation.inline.hpp.frames.html
>>
>>
>>   118   if (_use_malloc) {
>>   119     _addr = AllocateHeap(_size, F);
>>   120     return (E*)_addr;
>>   121   }
>>
>>
>> Did you consider checking the value of _addr and going on
>> to the mmap based allocation if the malloc() failed?
>
> That's a good idea! I added that logic. I only fall back to mmap if
> the size is larger than one page. It seems to me that if we are
> failing really small allocations we should not try to continue, and
> if we started mmapping really small sizes we would waste a lot of
> memory, since mmapped memory needs to be page aligned.
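>
> In pseudo-code the fallback looks something like this (a sketch, not
> the exact webrev code; it assumes AllocateHeap can be asked to return
> NULL on failure instead of exiting the VM):
>
>   if (_use_malloc) {
>     _addr = AllocateHeap(_size, F, AllocFailStrategy::RETURN_NULL);
>     if (_addr == NULL && _size > (size_t)os::vm_page_size()) {
>       // A large malloc failed; retry with mmap below.
>       _use_malloc = false;
>     } else {
>       // malloc succeeded, or the request is at most one page and
>       // not worth mmapping (mmapped memory is page aligned).
>       return (E*)_addr;
>     }
>   }
>   // ...falls through to the os::reserve_memory() /
>   // os::commit_memory() based path...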
>
> Here is an updated webrev including the fixed comment suggested by
> John Cuthbertson too:
>
> http://cr.openjdk.java.net/~brutisso/7197666/webrev.02/
Thanks. Looks good.
Jon
>
> Thanks,
> Bengt
>
>>
>> Jon
>>
>> On 4/4/13 5:17 AM, Bengt Rutisson wrote:
>>>
>>> Hi all,
>>>
>>> Coleen and Thomas have already looked at the preliminary version of
>>> this request. Thanks!
>>>
>>> Removing the preliminary part of this request now and asking for
>>> full reviews.
>>>
>>> Here is an updated webrev based on the comments from Coleen and Thomas:
>>>
>>> http://cr.openjdk.java.net/~brutisso/7197666/webrev.01/
>>>
>>> Thanks,
>>> Bengt
>>>
>>>
>>> On 3/28/13 11:09 PM, Bengt Rutisson wrote:
>>>>
>>>> Hi all,
>>>>
>>>> Sending this to both runtime and GC since I think it concerns both
>>>> areas.
>>>>
>>>> I'd like some feedback on this preliminary change. I still want to
>>>> do some more testing and evaluation before I ask for final reviews:
>>>>
>>>> http://cr.openjdk.java.net/~brutisso/7197666/webrev.00/
>>>>
>>>> In particular I would like some feedback on these questions:
>>>>
>>>> - I am adding a flag that has the same value on all platforms
>>>> except Solaris x86. There is the product_pd flag macro to support
>>>> this, but there is no experimental_pd macro. I would have
>>>> preferred to make my new flag experimental. Should I add
>>>> experimental_pd, or should I just use a product flag?
>>>>
>>>> - Even with product_pd I think I still have to go into all the
>>>> different platform files and add the exact same code to give the
>>>> flag a default value on all platforms. Is there a way to have a
>>>> default value and only override it on Solaris x86? (The pattern I
>>>> mean is sketched below these questions.)
>>>>
>>>> - The class I am adding, ArrayAllocator, wants to choose between
>>>> doing malloc and mmap. Normally we use ReservedSpace and
>>>> VirtualSpace to get mapped memory. However, those classes are kind
>>>> of clumsy when I just want to allocate one chunk of memory. It is
>>>> much simpler to use the os::reserve_memory() and
>>>> os::commit_memory() methods directly. I think my use case here
>>>> motivates using these methods directly, but is there some reason
>>>> not to do that? (See the sketch below.)
>>>>
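>>>> To make the first two questions concrete, the pd pattern I am
>>>> referring to looks roughly like this, using HeapBaseMinAddress
>>>> (which comes up below) as the example:
>>>>
>>>>   // globals.hpp declares the flag without a value (inside the
>>>>   // big RUNTIME_FLAGS macro, hence the line continuations):
>>>>   product_pd(uintx, HeapBaseMinAddress,                          \
>>>>           "OS specific low limit for heap base address")         \
>>>>
>>>>   // and each platform file supplies its own default, e.g. in
>>>>   // globals_solaris_x86.hpp:
>>>>   define_pd_global(uintx, HeapBaseMinAddress, 256*M);
>>>>
>>>> And for the third question, the direct path I mean is roughly this
>>>> (a sketch with error handling elided, not the webrev code):
>>>>
>>>>   char* addr = os::reserve_memory(size);
>>>>   os::commit_memory(addr, size, false /* executable */);
>>>>   // ... use the memory ...
>>>>   os::uncommit_memory(addr, size);
>>>>   os::release_memory(addr, size);
>>>>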
>>>> Some background on the change:
>>>>
>>>> The default implementation of malloc on Solaris has several
>>>> limitations compared to malloc on other platforms. One limitation is
>>>> that it can only use one consecutive chunk of memory. Another
>>>> limitation is that it always allocates in this single chunk of
>>>> memory no matter how large the requested amount of memory is. Other
>>>> malloc implementations normally use mapped memory for large
>>>> allocations.
>>>>
>>>> The Java heap is mapped in memory and we try to pick a good address
>>>> for it. The lowest allowed address is controlled by
>>>> HeapBaseMinAddress. This is only 256 MB on Solaris x86 (other
>>>> platforms have at least 2 GB). Since the C heap ends up below the
>>>> Java heap, this means that in some cases the C heap is limited to
>>>> 256 MB.
>>>>
>>>> When we run with ParallelOldGC we get three task queues per GC
>>>> thread, and each task queue mallocs 1 MB. The failing machine in
>>>> the bug report has lots of CPUs and ends up with 83 GC threads.
>>>> That adds up to 249 MB, which is more than we can get out of the
>>>> 256 MB limited C heap, considering that there are other things
>>>> that get malloced too.
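>>>>
>>>> Spelled out:
>>>>
>>>>   83 GC threads * 3 queues per thread * 1 MB per queue = 249 MB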
>>>>
>>>> So, the problems occur mostly on Solaris x86. My suggested fix
>>>> tries to address this by letting the task queues be mapped instead
>>>> of malloced on Solaris x86. Instead of inlining this logic in
>>>> taskqueue.cpp I added a more general class. The reason for this is
>>>> that I think we need to use the same logic in more places,
>>>> especially for G1, which is mallocing quite a lot.
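>>>>
>>>> For illustration, a task queue can then use the class along these
>>>> lines (a sketch of the intended usage, not the exact webrev code):
>>>>
>>>>   ArrayAllocator<E, F> _array_allocator;
>>>>   ...
>>>>   // Internally picks malloc or mmap depending on the size and
>>>>   // the platform-dependent threshold.
>>>>   _elems = _array_allocator.allocate(N);
>>>>   ...
>>>>   _array_allocator.free();  // frees or unmaps to match allocate()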
>>>>
>>>> Since I think malloc on other platforms uses mapped memory for large
>>>> malloc requests, I think it is enough for this change to have effect
>>>> on Solaris. The other platforms probably have better heuristics
>>>> than I can come up with for which sizes should be mapped. On Sparc
>>>> we have the same limitation with malloc, but we have more memory
>>>> available for the C heap. This is why I have only enabled this for
>>>> Solaris x86.
>>>>
>>>> Also, I will be on vacation for a few days. Back in the office
>>>> Thursday, April 4. I'm happy for any feedback on this, but if I
>>>> don't respond until next week you know why :)
>>>>
>>>> Thanks,
>>>> Bengt
>>>
>>
>
>