RFR: Add NMT support for Java heap

Per Liden per.liden at oracle.com
Tue Dec 12 12:06:31 UTC 2017


On 2017-12-12 12:28, Aleksey Shipilev wrote:
> On 12/12/2017 12:20 PM, Per Liden wrote:
>> On 2017-12-12 11:50, Aleksey Shipilev wrote:
>>> On 12/12/2017 11:42 AM, Per Liden wrote:
>>>> As Aleksey noticed, we don't register the Java heap with the native memory tracker. Here's a patch
>>>> to do that.
>>>>
>>>> http://cr.openjdk.java.net/~pliden/zgc/nmt_java_heap/webrev.0/
>>>
>>> Patch looks good, but the NMT "reserved" data is off the charts:
>>>
>>> Total: reserved=17180280855KB, committed=17143487KB
>>> -                 Java Heap (reserved=17179869184KB, committed=16777216KB)
>>>                             (mmap: reserved=17179869184KB, committed=16777216KB)
>>>
>>> I guess this should not pass ZAddressSpaceSize, and instead report the reserved space of the first
>>> mapping?
>>>
>>> +  // Register address space with native memory tracker
>>> +  nmt_reserve(ZAddressSpaceStart, ZAddressSpaceSize);
>>
>> I think this is correct actually. But it depends on how one views things I guess. As I see it, I
>> want to be able to look in /proc/../maps and with NMT see what the different mappings correlate to.
>> If we only registered the first heap view, then there would be a big mysterious reservation that
>> would go unaccounted. That doesn't sound right to me, but I'm open to hearing other opinions on
>> this. The big number there covers all addresses for all heap views/mappings (i.e. the actual address
>> space that is reserved). It should be noted that, in ZGC, the heap address space doesn't have a 1:1
>> relation with max heap size.
>
> In single-mapping GCs with -Xmx100g, I would expect to see reserved=100G for Java heap.
>
> In multi-mapping GCs with -Xmx100g, I would expect to see either reserved=100G, or reserved=N*100G,
> where N is the number of mappings.
>
> Looking at /proc for ZGC, it seems we reserve the entire range from the "lo" of the first mapping to
> the "hi" of the last mapping for the Java heap?
>
> VmPeak:	18256719348 kB
> VmSize:	18256719348 kB
>
> Oh wow. So NMT is not lying there.
>
> But this does look like an overly pessimistic thing to do. If there are multiple mappings that differ in
> upper bits, that means there is enough unused space between the mappings, and we don't actually have
> to reserve it?

We actually do want to reserve it anyway, for two reasons.

1) In ZGC, by having a heap address space much larger than the heap size 
we are essentially immune to address space fragmentation. I.e. we can 
always find a hole big enough for any allocation, without the need to 
first compact the heap.
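
To make "much larger" concrete: the NMT figures quoted above work out to 
roughly a 16TB address space reservation backing a 16GB committed heap. A 
quick stand-alone sketch of that arithmetic (just unit conversion on the 
quoted numbers, nothing ZGC-specific):

  #include <cstdio>

  int main() {
    // Figures from the NMT "Java Heap" line above, in KB.
    const unsigned long long reserved_kb  = 17179869184ULL; // address space across all heap views
    const unsigned long long committed_kb = 16777216ULL;    // actual heap memory
    std::printf("reserved  = %llu GB\n", reserved_kb / (1024 * 1024));  // 16384 GB, i.e. 16 TB
    std::printf("committed = %llu GB\n", committed_kb / (1024 * 1024)); // 16 GB
    return 0;
  }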

2) CollectedHeap::is_in_reserved() is often used as an inexpensive check 
of whether something points into the heap. By reserving all addresses in 
all heap views, this check remains inexpensive, as we know that some 
other random mmap() call in the VM didn't end up in between two heap 
views. This is also very useful when debugging. For example, when dumping 
memory or looking at stack traces you can easily see whether something is 
an oop or not (oops always start with 0x00001..., 0x000008... or 
0x000004...).
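
As a rough illustration (a sketch of the idea, not the actual 
CollectedHeap code), reserving the whole range means the check can stay a 
single pair of comparisons against one [start, start + size) interval, 
rather than a lookup over the individual heap view mappings:

  #include <cstdint>
  #include <cstddef>

  // Sketch: with the entire address space reserved, "does this point into
  // the heap?" is just a range check; no per-mapping lookup is needed, and
  // no foreign mmap() can sit inside the range and cause false positives.
  static bool is_in_reserved_sketch(const void* addr, const void* start, std::size_t size) {
    const std::uintptr_t p = reinterpret_cast<std::uintptr_t>(addr);
    const std::uintptr_t s = reinterpret_cast<std::uintptr_t>(start);
    return p >= s && p - s < size;
  }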

>
> The way I look at it, "reserve" is something that helps to diagnose memory handling problems, and
> over-reservation masks most of that.

I agree that this would also be a reasonable way of looking at this.

> It probably makes sense to commit this NMT patch, and then
> figure out if we want to reserve less?

I'll give others a chance to have an opinion before committing.

Thanks for reviewing!

cheers,
Per

>
> Thanks,
> -Aleksey
>

