YGC time increasing suddenly

Luciano Molinari lucmolinari at gmail.com
Sat Dec 21 08:15:07 PST 2013


Hi Aaron,

Thanks for sharing your experience and these links. I'm also not familiar
with NUMA systems, so I'm studying them as well.

I'll run some tests based on your comments.

Regards


On Fri, Dec 20, 2013 at 6:27 PM, Aaron Daubman <daubman at gmail.com> wrote:

> Luciano,
>
>> Until today I wasn't using huge pages, but today I ran a test locking my
>> app to only one NUMA node (Wolfgang's tip) and using huge pages. As the
>> app has fewer resources in this case, I decreased the request rate to
>> 7k/s. The app ran properly for ~35 min and then the same problem appeared.
>>
>
> Interestingly, I ran the exact same tests last night: enabling hugepages
> and shared memory on a NUMA system and then binding the JVM to a single
> socket. Apologies if all this is already common knowledge (I've had to do
> a lot of reading on this recently, at least ;-))
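>
> For reference, a minimal sketch of how I then launch the JVM bound to
> node0 (the heap sizes and jar name are just placeholders for your app):
>
> numactl --cpunodebind=0 --membind=0 \
>     java -Xms8g -Xmx8g -jar yourapp.jar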
>
> Note that hugepage allocation must be done post-boot in order to actually
> allocate the desired number of hugepages on the socket you will be working
> with.
> I decided to try out the hugeadm utility, and found this to work for me:
>
> /usr/bin/numactl --cpunodebind=0 --membind=0 hugeadm \
>     --pool-pages-min 2M:12G --obey-mempolicy --pool-pages-max 2M:12G \
>     --create-group-mounts <groupname> --set-recommended-shmmax \
>     --set-shm-group echonest --set-recommended-min_free_kbytes
>
> This would allocate 12G of 2M hugepages on socket0.
> Note that if you use /etc/grub.conf or even sysctl to allocate hugepages,
> they will be split evenly among the sockets, so you will have fewer usable
> hugepages than desired if locking a process to a single socket.
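>
> (If you'd rather avoid hugeadm, the per-node sysfs interface described in
> hugetlbpage.txt below should achieve the same thing; a sketch, assuming
> node0 and 2M pages:)
>
> # as root: allocate 6144 x 2M hugepages (= 12G) on node0 only
> echo 6144 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages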
>
> I found I was actually able to handle slightly higher load at similar
> response times using half the system's resources, so I am essentially
> wasting money on this dual-socket system =(
>
> Some reading I found useful for this:
> https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
> "Interaction of Task Memory Policy with Huge Page Allocation/Freeing"
>
> https://www.kernel.org/doc/Documentation/sysctl/vm.txt
>
>
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/main-cpu.html#s-cpu-tuning
> "4.1.2.2. Controlling NUMA Policy with numactl"
>
> http://users.sdsc.edu/~glockwood/comp/affinity.php#define:numactl
>
> on hugeadm:
> https://lwn.net/Articles/376606/
>
> I verified this was working as desired with:
> # cat /sys/devices/system/node/node*/meminfo | fgrep Huge
> Node 0 HugePages_Total:  6144
> Node 0 HugePages_Free:   1541
> Node 0 HugePages_Surp:      0
> Node 1 HugePages_Total:     0
> Node 1 HugePages_Free:      0
> Node 1 HugePages_Surp:      0
>
>
>
>> What's the best way to set up the JVM considering NUMA? Disable NUMA?
>> Run the JVM with -XX:+UseNUMA? I'll run a test with this parameter.
>>
>
> +UseNUMA should definitely help if you are going to run on a multi-socket
> NUMA system and if you enable +UseParallelGC (which, IIRC, is enabled by
> default for your config):
>
> http://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html#numa
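>
> e.g., roughly the flag combination I'd expect to use (the heap sizes are
> placeholders, not a recommendation):
>
> java -XX:+UseParallelGC -XX:+UseNUMA -Xms8g -Xmx8g -jar yourapp.jar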
>
> As a side question, I am trying to stick with G1, which still lacks NUMA
> support. Does anybody know why this is still the case, or whether support
> will be coming anytime soon?
> https://bugs.openjdk.java.net/browse/JDK-7005859
> http://openjdk.java.net/jeps/157
>
> Another side question that may be useful:
> I have found that I need to add -XX:+UseSHM in addition to
> -XX:+UseLargePages in order to actually use LargePages - does this make
> sense / is this expected? I could find very little documentation /
> reference to UseSHM.
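>
> For what it's worth, this is the combination that worked for me (a sketch;
> it assumes shmmax and the hugetlb shm group were already set up, e.g. by
> the hugeadm invocation above):
>
> java -XX:+UseLargePages -XX:+UseSHM -Xms8g -Xmx8g -jar yourapp.jar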
>
>
>


-- 
Luciano Davoglio Molinari