RFE (m) (Preliminary): JDK-7197666: java -d64 -version core dumps in a box with lots of memory
Bengt Rutisson
bengt.rutisson at oracle.com
Thu Mar 28 22:09:30 UTC 2013
Hi all,
Sending this to both runtime and GC since I think it concerns both areas.
I'd like some feedback on this preliminary change. I still want to do
some more testing and evaluation before I ask for final reviews:
http://cr.openjdk.java.net/~brutisso/7197666/webrev.00/
In particular I would like some feedback on these questions:
- I am adding a flag that has the same value on all platforms except
Solaris x86. There is the product_pd flag macro to support this, but
there is no experimental_pd macro. I would have preferred to make my
new flag experimental. Should I add experimental_pd or should I just use
a product flag? (See the first sketch below the questions.)
- Even with product_pd I think I still have to go into all the
different platform files and add the exact same code to give the flag a
default value on all platforms. Is there a way to have a default value
and only override it on Solaris x86?
- The class I am adding, ArrayAllocator, wants to choose between doing
malloc and mmap. Normally we use ReservedSpace and VirtualSpace to get
mapped memory. However, those classes are kind of clumsy when I just
want to allocate one chunk of memory. It is much simpler to use the
os::reserve_memory() and os::commit_memory() methods directly. I think
my use case here motivates using these methods directly, but is there
some reason not to do that? (See the second sketch below.)
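To make the first two questions more concrete, here is roughly the kind
of declaration I have in mind with product_pd. The flag name and the
default values here are just for illustration:

  // globals.hpp - shared declaration of a platform-dependent product flag
  product_pd(uintx, ArrayAllocatorMallocLimit,                            \
             "Allocations above this size use mapped memory "             \
             "instead of malloc")

  // globals_solaris_x86.hpp - low limit so that large allocations are mapped
  define_pd_global(uintx, ArrayAllocatorMallocLimit, 64*K);

  // globals_linux_x86.hpp etc. - effectively "always use malloc"
  define_pd_global(uintx, ArrayAllocatorMallocLimit, max_uintx);

As the second question says, the define_pd_global() lines would be the
same on every platform except Solaris x86.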
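For the third question, the choice ArrayAllocator makes is basically the
following. This is a standalone sketch using plain POSIX mmap, not the
actual webrev code, and the 64 KB threshold is just an example:

  #include <cstdlib>
  #include <sys/mman.h>

  struct ArrayAllocatorSketch {
    static const size_t MallocLimit = 64 * 1024;  // example threshold

    // Small requests go to malloc, large ones to anonymous mapped memory.
    static void* allocate(size_t bytes, bool* mapped) {
      if (bytes <= MallocLimit) {
        *mapped = false;
        return ::malloc(bytes);
      }
      *mapped = true;
      void* addr = ::mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANON, -1, 0);
      return addr == MAP_FAILED ? NULL : addr;
    }

    // Free with the same mechanism that was used to allocate.
    static void release(void* addr, size_t bytes, bool mapped) {
      if (mapped) {
        ::munmap(addr, bytes);
      } else {
        ::free(addr);
      }
    }
  };

In the real class the mapped case goes through os::reserve_memory() and
os::commit_memory() instead, which is what my question is about.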
Some background on the change:
The default implementation of malloc on Solaris has several limitations
compared to malloc on other platforms. One limitation is that it can
only use a single contiguous chunk of memory. Another limitation is that
it always allocates from this single chunk no matter how large the
requested amount of memory is. Other malloc implementations normally use
mapped memory for large allocations.
The Java heap is mapped in memory and we try to pick a good address for
it. The lowest allowed address is controlled by HeapBaseMinAddress. This
is only 256 MB on Solaris x86 (other platforms have at least 2 GB).
Since the C heap ends up below the Java heap, it is in some cases
limited to 256 MB.
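(The per-platform default is easy to check with

  java -d64 -XX:+PrintFlagsFinal -version | grep HeapBaseMinAddress

if anyone wants to compare.)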
When we run with ParallelOldGC we get three task queues per GC thread.
Each task queue mallocs 1 MB. The failing machine in the bug report
has lots of CPUs and ends up with 83 GC threads. That is 249 MB, which
is more than we can get out of the 256 MB limited C heap considering
that there are other things that get malloced too.
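To spell out the arithmetic, assuming the usual ergonomics of
8 + (ncpus - 8) * 5/8 parallel GC threads and taking a 128-CPU machine
as an example:

  8 + (128 - 8) * 5 / 8         = 83 GC threads
  83 threads * 3 queues * 1 MB  = 249 MB of C heap just for task queues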
So, the problems occur mostly on Solaris x86. My suggested fix tries to
address this by letting the task queues be mapped instead of malloced on
Solaris x86. Instead of inlining this logic in taskqueue.cpp I added a
more general class. The reason for this is that I think we need to use
the same logic in more places, especially for G1, which is mallocing
quite a lot.
Since I think malloc on other platforms uses mapped memory for large
malloc requests, I think it is enough for this change to have an effect
on Solaris. The other platforms probably have better heuristics than I
can come up with for which sizes should be mapped. On Sparc we have the
same limitation with malloc, but we have more memory available for the C
heap. This is why I have only enabled this for Solaris x86.
Also, I will be on vacation for a few days. Back in the office Thursday
April 4. I'm happy for any feedback on this, but if I don't respond
before next week you know why :)
Thanks,
Bengt