Huge pages
Volker Simonis
volker.simonis at gmail.com
Thu Apr 11 07:19:28 PDT 2013
On Wed, Apr 10, 2013 at 10:49 PM, Tiago Stürmer Daitx <tdaitx at linux.vnet.ibm.com> wrote:
> I'll try the patch test later on. For now I'll just give a feedback on
> the other questions...
>
> On Wed, 2013-04-10 at 19:26 +0200, Volker Simonis wrote:
>
> > But this looks good. Is this really the case where you get the "(errno
> > = 16)" warning?
>
> Please, ignore the previous results as I mixed the results by running
> either with or without the properties files. I finally tracked the error
> down to the maximum heap size.
>
> With "-XX:+UseCompressedOops -Xmx1984m -XX:+UseLargePages -XX:+UseSHM"
> hugepages work and strace gives:
> [pid 37859] shmget(IPC_PRIVATE, 2147483648, IPC_CREAT|SHM_HUGETLB|0600)
> = 5046272
> [pid 37859] shmat(5046272, 0x80000000, 0) = 0x80000000
> [pid 37859] shmctl(5046272, IPC_RMID, 0) = 0
>
>
> With "-XX:+UseCompressedOops -Xmx1985m -XX:+UseLargePages -XX:+UseSHM"
> hugepages do not work and strace gives:
> [pid 37926] shmget(IPC_PRIVATE, 2181038080, IPC_CREAT|SHM_HUGETLB|0600)
> = 5079040
> [pid 37926] shmat(5079040, 0x77e000000, 0) = -1 EBUSY (Device or
> resource busy)
> [pid 37926] shmctl(5079040, IPC_RMID, 0) = 0
> OpenJDK 64-Bit Server VM warning: Failed to attach shared memory (errno
> = 16).
>
> As expected, after shmat fails it allocates the same address using mmap.
>
> In short, a maximum heap size equal to or larger than 1985m fails to
> allocate a hugepage through shm calls. BTW, I tried to understand how
> the shmaddr is calculated, but it does not seem to be something I can
> learn in a couple of minutes. What I noticed, though, is that the shmaddr
> varies a lot at the 1984/1985m threshold:
> 1. Xmx = 1985m: 0x77e000000
> 2. Xmx = 1984m: 0x80000000
>
>
This is also related to compressed oops. Compressed oops need 'special'
(i.e. as low as possible) addresses to work efficiently. Best is to have
the whole heap below 4GB, in which case you can just use 32-bit pointers. If
you can't allocate your complete heap below 4GB you can still use 32-bit
pointers, but you'll have to shift them to get the actual virtual address.
The worst case is if you can't allocate below 32GB, in which case you will
have to shift and add an offset to your 32-bit oop before you get the actual
address. The VM tries to reserve the heap as low as possible in order to
get the best compressed oops mode. You can trace this with
-XX:+PrintCompressedOopsMode in debug builds and with
-XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode in a product
VM. Notice that the actual heap is slightly bigger than the value you
specified with -Xmx because the VM allocates some additional amount of
space for internal purposes:
java *-Xmx1984m* -XX:+UseCompressedOops -XX:+UnlockDiagnosticVMOptions
-XX:+PrintCompressedOopsMode -version
...
heap address: 0x0000000080000000, size: *2048 MB*, zero based Compressed
Oops, 32-bits Oops
vs.
java *-Xmx1985m* -XX:+UseCompressedOops -XX:+UnlockDiagnosticVMOptions
-XX:+PrintCompressedOopsMode -version
...
heap address: 0x000000077e000000, size: *2080 MB*, zero based Compressed
Oops
> > I don't really get this. Can you please explain the commands before
> > the java call in more detail.
> >
> For libhugetlbfs I have just followed the tutorial by Bill Burros -
> available at http://goo.gl/uIjio - in which I just replaced some env
> variables with the hugectl command (available in the libhugetlbfs-utils
> package).
>
OK, as far as I understand, libhugetlbfs is a library which can be
preloaded and which overrides malloc to use huge pages. I think this is an
extra level of complexity and we should first concentrate on making both
UseHugeTLBFS and UseSHM work.
> From hugectl man:
> --shm This option overrides shmget() to back shared memory regions with
> hugepages if possible. Segment size requests will be aligned to fit to
> the default hugepage size region.
>
> > By the way, just for reference, large page support was implemented by
> > Andrew Haley from RedHat:
> >
> > PING: Linux: Support transparent hugepages
> > 7034464 : Support transparent large pages on Linux
> >
> > but probably nobody has tested it on Linux/PPC :(
> >
> I'll check if someone in LTC did work with or tested hugepages on
> Linux/PPC - I highly doubt they wouldn't tackle that. I do know that
> there are guys who had some trouble with hugepages on J9, but that was a
> long time ago and they didn't recall if the error was related to EBUSY.
>
>
> > I don't think you need the mounts for using Java with large pages
> > because the VM only uses shmat/shmget system calls or mmap with
> > MAP_HUGETLB (see
> > http://www.mjmwired.net/kernel/Documentation/vm/hugetlbpage.txt)
> >
> Yeah, I actually don't, but when forcing its use (by preloading the
> libhugetlbfs.so lib or using hugectl) the mounts are required AFAIK.
>
>
> > But I'll be quite happy if you could try it out and tell me your
> > experience. I would be especially interested in whether you can observe
> > some performance improvements, for SPECjvm for example.
>
> As for performance, I did see improved performance with hugepages - I
> had to use "-Xmx1984m" for it to work; previously all tests were run
> using "-Xmx2560m".
>
> Regards,
> Tiago
>
> --
> Tiago Stürmer Daitx
> tdaitx at linux.vnet.ibm.com
> IBM - Linux Technology Center
>
>
More information about the ppc-aix-port-dev
mailing list