RFR: 8017629: G1: UseSHM in combination with a G1HeapRegionSize > os::large_page_size() falls back to use small pages
Stefan Karlsson
stefan.karlsson at oracle.com
Mon Apr 11 13:52:27 UTC 2016
Hi Thomas,
On 2016-04-11 14:39, Thomas Stüfe wrote:
> Hi Stefan,
>
> short question, why the mmap before the shmat? Why not shmat right
> away at the requested address?
If we have a requested address, we do exactly what you propose:
if (req_addr == NULL && alignment > os::large_page_size()) {
  return shmat_with_large_alignment(shmid, bytes, alignment);
} else {
  return shmat_with_normal_alignment(shmid, req_addr);
}
...
static char* shmat_with_normal_alignment(int shmid, char* req_addr) {
  char* addr = (char*)shmat(shmid, req_addr, 0);
  if ((intptr_t)addr == -1) {
    shm_warning_with_errno("Failed to attach shared memory.");
    return NULL;
  }
  return addr;
}
It's when you don't have a requested address that mmap is used to find a
large enough virtual memory area.
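For reference, the large-alignment path does roughly the following. This is a sketch of the idea only, not the exact webrev code; it reuses shm_warning_with_errno from the snippet above, and the helper names and error handling are illustrative:

#include <stdint.h>     // uintptr_t
#include <sys/mman.h>   // mmap, munmap
#include <sys/shm.h>    // shmat, SHM_REMAP

// Sketch only -- not the exact webrev code. Pre-reserve an aligned area
// with an anonymous mmap and then attach the SysV segment on top of it.
// SHM_REMAP (Linux-specific) lets shmat() replace the reservation instead
// of failing because the range is already mapped.
static char* shmat_with_large_alignment(int shmid, size_t bytes, size_t alignment) {
  // Over-reserve so that an aligned start address can be carved out.
  size_t extra_size = bytes + alignment;
  char* base = (char*)mmap(NULL, extra_size, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
  if (base == MAP_FAILED) {
    shm_warning_with_errno("Failed to reserve memory for shmat.");
    return NULL;
  }
  // Align the start address up and give back the unused head and tail.
  char* aligned = (char*)(((uintptr_t)base + alignment - 1) & ~(uintptr_t)(alignment - 1));
  if (aligned > base) {
    munmap(base, aligned - base);
  }
  if (aligned + bytes < base + extra_size) {
    munmap(aligned + bytes, (base + extra_size) - (aligned + bytes));
  }
  // Attach the large-page segment over the remaining pre-reserved area.
  char* addr = (char*)shmat(shmid, aligned, SHM_REMAP);
  if ((intptr_t)addr == -1) {
    shm_warning_with_errno("Failed to attach shared memory.");
    munmap(aligned, bytes);
    return NULL;
  }
  return addr;
}

The pre-reservation is only there to pick an aligned address; the attach itself is still an ordinary shmat(), so the segment keeps its large pages.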
>
> Also note that mmap- and shmat-allocated memory may have different
> alignment requirements: mmap requires a page-aligned request address,
> whereas shmat requires alignment to SHMLBA, which may be multiple
> pages (e.g. for ARM:
> http://lxr.free-electrons.com/source/arch/arm/include/asm/shmparam.h#L9).
> So, for this shmat-over-mmap trick to work, the request address has to be
> aligned to SHMLBA, not just to the page size.
>
> I see that you assert that the requested address is aligned to
> os::large_page_size(), which I would assume is a multiple of SHMLBA,
> but I am not sure of this.
I've added some defensive code and asserts to catch this case if/when
that assumption fails:
http://cr.openjdk.java.net/~stefank/8017629/webrev.02.delta/
http://cr.openjdk.java.net/~stefank/8017629/webrev.02
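The guard amounts to something like the following. This is a sketch of the intent only; the helper name is illustrative and the actual code is in the webrev:

#include <stdint.h>    // uintptr_t
#include <sys/shm.h>   // SHMLBA

// Sketch of the intent of the defensive code -- see the webrev for the
// real version. shmat() requires SHMLBA alignment, which on some
// platforms (e.g. ARM) is larger than a single page.
static bool is_shmlba_aligned(uintptr_t value) {
  return value % SHMLBA == 0;
}

// Used in the attach paths, along the lines of:
//   assert(is_shmlba_aligned(alignment), "alignment must be SHMLBA aligned");
//   assert(is_shmlba_aligned((uintptr_t)req_addr), "req_addr must be SHMLBA aligned");
// with a graceful 'return NULL' in product builds rather than relying on
// the assert alone.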
I need to verify that this works on machines other than my local Linux
x64 machine.
Thanks,
StefanK
>
> Kind Regards, Thomas
>
>
>
> On Mon, Apr 11, 2016 at 1:03 PM, Stefan Karlsson
> <stefan.karlsson at oracle.com> wrote:
>
> Hi all,
>
> Please review this patch to enable SHM large page allocations even
> when the requested alignment is larger than os::large_page_size().
>
> http://cr.openjdk.java.net/~stefank/8017629/webrev.01
> https://bugs.openjdk.java.net/browse/JDK-8017629
>
> G1 is affected by this bug since it requires the heap to start at
> an address that is aligned to the heap region size. The patch
> fixes this by changing the UseSHM large page allocation code:
> first, virtual memory with the correct alignment is pre-reserved,
> and then the large pages are attached to this memory area.
>
> Tested with vm.gc.testlist and ExecuteInternalVMTests.
>
> Thanks,
> StefanK
>
>