[PATCH] JDK-8205051 (UseNUMA memory interleaving vs cpunodebind & localalloc)
roshan mangal
roshanmangal at gmail.com
Tue Sep 25 06:48:19 UTC 2018
Hi All,

This patch is for https://bugs.openjdk.java.net/browse/JDK-8205051

Issue:
If the JVM is not allowed to run on all of the NUMA nodes (because of numactl, cgroups, docker, etc.), a significant fraction of the Java heap becomes unusable, causing early GCs.
Every thread records its locality group (lgrp) and allocates memory from that lgrp. The lgrp id is the same as the NUMA node id. A thread running on a CPU that belongs to NUMA node 0 records Thread->lgrp as lgrp0 and allocates only from NUMA node 0. Once NUMA node 0 is full, a GC is triggered even though other NUMA nodes still have free memory.

Solution proposed:
Build, for each lgrp, a list of NUMA nodes sorted by distance, and fall back to the next nearest NUMA node when the closer node(s) are full.
The system below has eight NUMA nodes, with the following distance table:

node distances:
node   0   1   2   3   4   5   6   7
  0:  10  16  16  16  32  32  32  32
  1:  16  10  16  16  32  32  32  32
  2:  16  16  10  16  32  32  32  32
  3:  16  16  16  10  32  32  32  32
  4:  32  32  32  32  10  16  16  16
  5:  32  32  32  32  16  10  16  16
  6:  32  32  32  32  16  16  10  16
  7:  32  32  32  32  16  16  16  10
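
For reference, these distances correspond to what libnuma reports through numa_distance(). A small standalone sketch (illustration only, not part of the patch; compile with -lnuma) that prints the same matrix:

#include <numa.h>
#include <cstdio>

int main() {
  if (numa_available() < 0) {
    std::printf("libnuma not available\n");
    return 1;
  }
  int max_node = numa_max_node();
  for (int i = 0; i <= max_node; i++) {
    for (int j = 0; j <= max_node; j++) {
      // numa_distance() returns the relative distances shown in the table above
      std::printf("%3d ", numa_distance(i, j));
    }
    std::printf("\n");
  }
  return 0;
}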
The corresponding list for each lgrp will be:

Thread's lgrp   Order of allocation across NUMA nodes
lgrp0  [ numaNode0->numaNode1->numaNode2->numaNode3->numaNode4->numaNode5->numaNode6->numaNode7 ]
lgrp1  [ numaNode1->numaNode0->numaNode2->numaNode3->numaNode4->numaNode5->numaNode6->numaNode7 ]
lgrp2  [ numaNode2->numaNode0->numaNode1->numaNode3->numaNode4->numaNode5->numaNode6->numaNode7 ]
lgrp3  [ numaNode3->numaNode0->numaNode1->numaNode2->numaNode4->numaNode5->numaNode6->numaNode7 ]
lgrp4  [ numaNode4->numaNode5->numaNode6->numaNode7->numaNode0->numaNode1->numaNode2->numaNode3 ]
lgrp5  [ numaNode5->numaNode4->numaNode6->numaNode7->numaNode0->numaNode1->numaNode2->numaNode3 ]
lgrp6  [ numaNode6->numaNode4->numaNode5->numaNode7->numaNode0->numaNode1->numaNode2->numaNode3 ]
lgrp7  [ numaNode7->numaNode4->numaNode5->numaNode6->numaNode0->numaNode1->numaNode2->numaNode3 ]
Allocating on a NUMA node that is far from the CPU can cause performance issues; sometimes triggering a GC is a better option than allocating from a NUMA node at a large distance, i.e. with high memory latency. For this, I have added the option "NUMAAllocationDistanceLimit", which restricts memory allocation from far nodes.

On the system above, if we set -XX:NUMAAllocationDistanceLimit=16, the corresponding list for each lgrp becomes (a standalone sketch of how these lists are built follows the table):

Thread's lgrp   Order of allocation across NUMA nodes
lgrp0 [ numaNode0->numaNode1->numaNode2->numaNode3 ]
lgrp1 [ numaNode1->numaNode0->numaNode2->numaNode3 ]
lgrp2 [ numaNode2->numaNode0->numaNode1->numaNode3 ]
lgrp3 [ numaNode3->numaNode0->numaNode1->numaNode2 ]
lgrp4 [ numaNode4->numaNode5->numaNode6->numaNode7 ]
lgrp5 [ numaNode5->numaNode4->numaNode6->numaNode7 ]
lgrp6 [ numaNode6->numaNode4->numaNode5->numaNode7 ]
lgrp7 [ numaNode7->numaNode4->numaNode5->numaNode6 ]
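
The patch below builds these per-lgrp lists inside os::Linux using the existing libnuma bindings. As a rough standalone illustration of the same idea (names such as distance_limit are hypothetical stand-ins for -XX:NUMAAllocationDistanceLimit; compile with -lnuma):

#include <numa.h>
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// Hypothetical stand-in for -XX:NUMAAllocationDistanceLimit
static const int distance_limit = 16;

int main() {
  if (numa_available() < 0) return 1;
  int max_node = numa_max_node();
  for (int node = 0; node <= max_node; node++) {
    // Collect (distance, node) pairs within the limit; the node itself has the
    // smallest distance (10 in the table above) and so ends up first after sorting.
    std::vector<std::pair<int, int> > order;
    for (int other = 0; other <= max_node; other++) {
      int d = numa_distance(node, other);
      if (d > 0 && d <= distance_limit) {
        order.push_back(std::make_pair(d, other));
      }
    }
    std::sort(order.begin(), order.end());  // ascending NUMA distance
    std::printf("lgrp%d [", node);
    for (size_t k = 0; k < order.size(); k++) {
      std::printf(" numaNode%d", order[k].second);
    }
    std::printf(" ]\n");
  }
  return 0;
}

The patch itself keeps the same ordering in a small linked list (os::Linux::numaNode) and additionally skips nodes that do not exist or are not in the process's binding.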
#################################### PATCH ####################################
diff --git a/src/hotspot/os/linux/globals_linux.hpp b/src/hotspot/os/linux/globals_linux.hpp
--- a/src/hotspot/os/linux/globals_linux.hpp
+++ b/src/hotspot/os/linux/globals_linux.hpp
@@ -62,6 +62,9 @@
product(bool, UseContainerSupport, true, \
"Enable detection and runtime container configuration support") \
\
+ product(int, NUMAAllocationDistanceLimit, INT_MAX, \
+ "NUMA node distance limit for across lgrp memory allocation") \
+ \
product(bool, PreferContainerQuotaForCPUCount, true, \
"Calculate the container CPU availability based on the value" \
" of quotas (if set), when true. Otherwise, use the CPU" \
diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp
--- a/src/hotspot/os/linux/os_linux.cpp
+++ b/src/hotspot/os/linux/os_linux.cpp
@@ -2939,10 +2939,13 @@
  set_numa_nodes_ptr((struct bitmask **)libnuma_dlsym(handle, "numa_nodes_ptr"));
  // Create an index -> node mapping, since nodes are not always consecutive
  _nindex_to_node = new (ResourceObj::C_HEAP, mtInternal) GrowableArray<int>(0, true);
+  // Create a numaNode index array for node mapping. Each index points to a linked list of numaNodes.
+  _nindex_to_numaNode = new (ResourceObj::C_HEAP, mtInternal) GrowableArray<os::Linux::numaNode*>(0, true);
  rebuild_nindex_to_node_map();
  // Create a cpu -> node mapping
  _cpu_to_node = new (ResourceObj::C_HEAP, mtInternal) GrowableArray<int>(0, true);
  rebuild_cpu_to_node_map();
+  build_numaNode_distance_map();
  return true;
 }
}
@@ -2968,6 +2971,34 @@
  }
}

+void os::Linux::build_numaNode_distance_map() {
+  // Get the highest numbered NUMA node in the system
+  int highest_node_number = Linux::numa_max_node();
+  nindex_to_numaNode()->clear();
+  // Start each numaNode list with the node itself as the first entry
+  for (int node = 0; node <= highest_node_number; node++) {
+    os::Linux::numaNode* newNumaNode = NULL;
+    if (Linux::isnode_in_existing_nodes(node) && Linux::isnode_in_bound_nodes(node)) {
+      newNumaNode = new (ResourceObj::C_HEAP, mtInternal) os::Linux::numaNode(node, numa_distance(node, node));
+    }
+    nindex_to_numaNode()->append(newNumaNode);
+  }
+  // Then add the other NUMA nodes to each list in ascending order of distance
+  for (int node = 0; node <= highest_node_number; node++) {
+    if (Linux::isnode_in_existing_nodes(node) && Linux::isnode_in_bound_nodes(node) && nindex_to_numaNode()->at(node) != NULL) {
+      for (int next_node = 0; next_node <= highest_node_number; next_node++) {
+        if (node != next_node && Linux::isnode_in_existing_nodes(next_node) && Linux::isnode_in_bound_nodes(next_node)) {
+          int distance = numa_distance(node, next_node);
+          // Insert next_node into the list for node if the distance is within NUMAAllocationDistanceLimit
+          if (distance <= NUMAAllocationDistanceLimit) {
+            os::Linux::numaNode::insert_node(&nindex_to_numaNode()->at(node), next_node, distance);
+          }
+        }
+      }
+    }
+  }
+}
+
// rebuild_cpu_to_node_map() constructs a table mapping cpud id to node id.
// The table is later used in get_node_by_cpu().
void os::Linux::rebuild_cpu_to_node_map() {
@@ -3051,6 +3082,7 @@
GrowableArray<int>* os::Linux::_cpu_to_node;
GrowableArray<int>* os::Linux::_nindex_to_node;
+GrowableArray<os::Linux::numaNode *>* os::Linux::_nindex_to_numaNode;
os::Linux::sched_getcpu_func_t os::Linux::_sched_getcpu;
os::Linux::numa_node_to_cpus_func_t os::Linux::_numa_node_to_cpus;
os::Linux::numa_max_node_func_t os::Linux::_numa_max_node;
diff --git a/src/hotspot/os/linux/os_linux.hpp b/src/hotspot/os/linux/os_linux.hpp
--- a/src/hotspot/os/linux/os_linux.hpp
+++ b/src/hotspot/os/linux/os_linux.hpp
@@ -66,7 +66,46 @@
  // BB, Minor Version
  // CC, Fix Version
  static uint32_t _os_version;
+ public:
+
+  // Class numaNode holds a NUMA node id and its distance, and is used to build,
+  // for each numaNode, a list of nodes in ascending order of NUMA distance.
+  class numaNode : public ResourceObj {
+   public:
+    int nodeId;
+    int nodeDistance;
+    numaNode* next;
+    numaNode() {
+      nodeId = -1;
+      nodeDistance = INT_MAX;
+      next = NULL;
+    }
+    numaNode(int newNodeId, int newNodeDistance) {
+      nodeId = newNodeId;
+      nodeDistance = newNodeDistance;
+      next = NULL;
+    }
+    // Insert a node into the list at head_ref, keeping ascending order of NUMA distance.
+    static void insert_node(numaNode** head_ref, int newNodeId, int newNodeDistance) {
+      // Create a node with newNodeId and newNodeDistance
+      numaNode* new_node = new (ResourceObj::C_HEAP, mtInternal) numaNode(newNodeId, newNodeDistance);
+      // If the list is empty or the head is farther away, make new_node the first node
+      if (*head_ref == NULL || (*head_ref)->nodeDistance > newNodeDistance) {
+        new_node->next = *head_ref;
+        *head_ref = new_node;
+      } else {
+        // Otherwise insert new_node so that the list stays sorted by nodeDistance
+        numaNode* itr = *head_ref;
+        while (itr->next != NULL && itr->next->nodeDistance <= newNodeDistance) {
+          itr = itr->next;
+        }
+        new_node->next = itr->next;
+        itr->next = new_node;
+      }
+    }
+  };
+  static GrowableArray<os::Linux::numaNode *>* _nindex_to_numaNode;
 protected:
  static julong _physical_memory;
@@ -90,6 +129,7 @@
  static void rebuild_cpu_to_node_map();
  static void rebuild_nindex_to_node_map();
+  static void build_numaNode_distance_map();
  static GrowableArray<int>* cpu_to_node()    { return _cpu_to_node; }
  static GrowableArray<int>* nindex_to_node() { return _nindex_to_node; }
@@ -120,6 +160,7 @@
  static void *dlopen_helper(const char *name, char *ebuf, int ebuflen);
  static void *dll_load_in_vmthread(const char *name, char *ebuf, int ebuflen);
+  static GrowableArray<os::Linux::numaNode*>* nindex_to_numaNode() { return _nindex_to_numaNode; }
  static void init_thread_fpu_state();
  static int  get_fpu_control_word();
  static void set_fpu_control_word(int fpu_control);
diff --git a/src/hotspot/share/gc/parallel/mutableNUMASpace.cpp b/src/hotspot/share/gc/parallel/mutableNUMASpace.cpp
--- a/src/hotspot/share/gc/parallel/mutableNUMASpace.cpp
+++ b/src/hotspot/share/gc/parallel/mutableNUMASpace.cpp
@@ -795,36 +795,55 @@
    thr->set_lgrp_id(lgrp_id);
  }
-  int i = lgrp_spaces()->find(&lgrp_id, LGRPSpace::equals);
+  // Get the list of nodes for lgrp_id, sorted by ascending NUMA distance
+  os::Linux::numaNode* itr_numaNode = os::Linux::nindex_to_numaNode()->at(lgrp_id);
-  // It is possible that a new CPU has been hotplugged and
-  // we haven't reshaped the space accordingly.
-  if (i == -1) {
-    i = os::random() % lgrp_spaces()->length();
-  }
+  LGRPSpace* ls = NULL;
+  MutableSpace* s = NULL;
+  HeapWord* p = NULL;
+  // Try each lgrp in the list until an allocation succeeds
+  while (itr_numaNode != NULL) {
+    // The numaNode id is the same as the lgrp_id
+    int nearest_available_lgrp = itr_numaNode->nodeId;
+    int i = lgrp_spaces()->find(&nearest_available_lgrp, LGRPSpace::equals);
+    // It is possible that a new CPU has been hotplugged and
+    // we haven't reshaped the space accordingly.
+    if (i == -1) {
+      i = os::random() % lgrp_spaces()->length();
+    }
+    ls = lgrp_spaces()->at(i);
+    s = ls->space();
+    p = s->allocate(size);
-  LGRPSpace* ls = lgrp_spaces()->at(i);
-  MutableSpace *s = ls->space();
-  HeapWord *p = s->allocate(size);
-
-  if (p != NULL) {
-    size_t remainder = s->free_in_words();
-    if (remainder < CollectedHeap::min_fill_size() && remainder > 0) {
-      s->set_top(s->top() - size);
-      p = NULL;
+    if (p != NULL) {
+      size_t remainder = s->free_in_words();
+      if (remainder < CollectedHeap::min_fill_size() && remainder > 0) {
+        s->set_top(s->top() - size);
+        p = NULL;
+      }
    }
-  }
-  if (p != NULL) {
-    if (top() < s->top()) { // Keep _top updated.
-      MutableSpace::set_top(s->top());
+    if (p != NULL) {
+      if (top() < s->top()) { // Keep _top updated.
+        MutableSpace::set_top(s->top());
+      }
    }
-  }
-  // Make the page allocation happen here if there is no static binding..
-  if (p != NULL && !os::numa_has_static_binding()) {
-    for (HeapWord *i = p; i < p + size; i += os::vm_page_size() >> LogHeapWordSize) {
-      *(int*)i = 0;
+    // Make the page allocation happen here if there is no static binding..
+    if (p != NULL && !os::numa_has_static_binding()) {
+      for (HeapWord* j = p; j < p + size; j += os::vm_page_size() >> LogHeapWordSize) {
+        *(int*)j = 0;
+      }
    }
-  }
+    // If p is NULL, move on to the next nearest numaNode in the list
+    if (p == NULL) {
+      ls->set_allocation_failed();
+      itr_numaNode = itr_numaNode->next;
+      if (itr_numaNode != NULL) {
+        thr->set_lgrp_id(itr_numaNode->nodeId);
+      }
+    } else {
+      return p;
+    }
+  } // End of while
  if (p == NULL) {
    ls->set_allocation_failed();
  }
@@ -839,43 +858,62 @@
    lgrp_id = os::numa_get_group_id();
    thr->set_lgrp_id(lgrp_id);
  }
+  // Get the list of nodes for lgrp_id, sorted by ascending NUMA distance
+  os::Linux::numaNode* itr_numaNode = os::Linux::nindex_to_numaNode()->at(lgrp_id);
-  int i = lgrp_spaces()->find(&lgrp_id, LGRPSpace::equals);
-  // It is possible that a new CPU has been hotplugged and
-  // we haven't reshaped the space accordingly.
-  if (i == -1) {
-    i = os::random() % lgrp_spaces()->length();
-  }
-  LGRPSpace *ls = lgrp_spaces()->at(i);
-  MutableSpace *s = ls->space();
-  HeapWord *p = s->cas_allocate(size);
-  if (p != NULL) {
-    size_t remainder = pointer_delta(s->end(), p + size);
-    if (remainder < CollectedHeap::min_fill_size() && remainder > 0) {
-      if (s->cas_deallocate(p, size)) {
-        // We were the last to allocate and created a fragment less than
-        // a minimal object.
-        p = NULL;
-      } else {
-        guarantee(false, "Deallocation should always succeed");
+  LGRPSpace* ls = NULL;
+  MutableSpace* s = NULL;
+  HeapWord* p = NULL;
+
+  while (itr_numaNode != NULL) {
+    // The numaNode id is the same as the lgrp_id
+    int nearest_available_lgrp = itr_numaNode->nodeId;
+    int i = lgrp_spaces()->find(&nearest_available_lgrp, LGRPSpace::equals);
+    // It is possible that a new CPU has been hotplugged and
+    // we haven't reshaped the space accordingly.
+    if (i == -1) {
+      i = os::random() % lgrp_spaces()->length();
+    }
+    ls = lgrp_spaces()->at(i);
+    s = ls->space();
+    p = s->cas_allocate(size);
+    if (p != NULL) {
+      size_t remainder = pointer_delta(s->end(), p + size);
+      if (remainder < CollectedHeap::min_fill_size() && remainder > 0) {
+        if (s->cas_deallocate(p, size)) {
+          // We were the last to allocate and created a fragment less than
+          // a minimal object.
+          p = NULL;
+        } else {
+          guarantee(false, "Deallocation should always succeed");
+        }
      }
    }
-  }
-  if (p != NULL) {
-    HeapWord* cur_top, *cur_chunk_top = p + size;
-    while ((cur_top = top()) < cur_chunk_top) { // Keep _top updated.
-      if (Atomic::cmpxchg(cur_chunk_top, top_addr(), cur_top) == cur_top) {
-        break;
+    if (p != NULL) {
+      HeapWord* cur_top, *cur_chunk_top = p + size;
+      while ((cur_top = top()) < cur_chunk_top) { // Keep _top updated.
+        if (Atomic::cmpxchg(cur_chunk_top, top_addr(), cur_top) == cur_top) {
+          break;
+        }
      }
    }
-  }
-
-  // Make the page allocation happen here if there is no static binding.
-  if (p != NULL && !os::numa_has_static_binding() ) {
-    for (HeapWord *i = p; i < p + size; i += os::vm_page_size() >> LogHeapWordSize) {
-      *(int*)i = 0;
+    // Make the page allocation happen here if there is no static binding.
+    if (p != NULL && !os::numa_has_static_binding() ) {
+      for (HeapWord *j = p; j < p + size; j += os::vm_page_size() >> LogHeapWordSize) {
+        *(int*)j = 0;
+      }
    }
-  }
+    // If p is NULL, move on to the next nearest numaNode in the list
+    if (p == NULL) {
+      ls->set_allocation_failed();
+      itr_numaNode = itr_numaNode->next;
+      if (itr_numaNode != NULL) {
+        thr->set_lgrp_id(itr_numaNode->nodeId);
+      }
+    } else {
+      return p;
+    }
+  } // End of while
  if (p == NULL) {
    ls->set_allocation_failed();
  }
################################## PATCH END ##################################
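With the patch applied, the limit is set like any other -XX flag together with the existing NUMA options, e.g.:
numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -XX:NUMAAllocationDistanceLimit=16 GCBench
On the distance table above, a limit of 16 keeps cross-lgrp allocation within the nearer group of four nodes; the default (INT_MAX) allows fallback to any bound node.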
CMD1 and CMD2 are the tests mentioned in the bug.
CMD1> numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version
...
[0.212s][info][gc,heap,exit ] PSYoungGen total 611840K, used 13129K
[0x0000000580100000, 0x00000005aab80000, 0x0000000800000000)
[0.212s][info][gc,heap,exit ] eden space 524800K, 2% used
[0x0000000580100000,0x0000000580dd2700,0x00000005a0180000)
[0.212s][info][gc,heap,exit ] lgrp 0 space 65600K, 20% used
[0x0000000580100000,0x0000000580dd2700,0x0000000584110000)
[0.212s][info][gc,heap,exit ] lgrp 1 space 65600K, 0% used
[0x0000000584110000,0x0000000584110000,0x0000000588120000)
[0.212s][info][gc,heap,exit ] lgrp 2 space 65600K, 0% used
[0x0000000588120000,0x0000000588120000,0x000000058c130000)
[0.212s][info][gc,heap,exit ] lgrp 3 space 65600K, 0% used
[0x000000058c130000,0x000000058c130000,0x0000000590140000)
[0.212s][info][gc,heap,exit ] lgrp 4 space 65600K, 0% used
[0x0000000590140000,0x0000000590140000,0x0000000594150000)
[0.212s][info][gc,heap,exit ] lgrp 5 space 65600K, 0% used
[0x0000000594150000,0x0000000594150000,0x0000000598160000)
[0.212s][info][gc,heap,exit ] lgrp 6 space 65600K, 0% used
[0x0000000598160000,0x0000000598160000,0x000000059c170000)
[0.212s][info][gc,heap,exit ] lgrp 7 space 65600K, 0% used
[0x000000059c170000,0x000000059c170000,0x00000005a0180000)
...
CMD2> numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version
...
[0.221s][info][gc,heap,exit ] PSYoungGen total 611840K, used 13129K
[0x0000000580100000, 0x00000005aab80000, 0x0000000800000000)
[0.221s][info][gc,heap,exit ] eden space 524800K, 2% used
[0x0000000580100000,0x0000000580dd26f8,0x00000005a0180000)
[0.221s][info][gc,heap,exit ] lgrp 0 space 65600K, 20% used
[0x0000000580100000,0x0000000580dd26f8,0x0000000584110000)
[0.221s][info][gc,heap,exit ] lgrp 1 space 65600K, 0% used
[0x0000000584110000,0x0000000584110000,0x0000000588120000)
[0.221s][info][gc,heap,exit ] lgrp 2 space 65600K, 0% used
[0x0000000588120000,0x0000000588120000,0x000000058c130000)
[0.221s][info][gc,heap,exit ] lgrp 3 space 65600K, 0% used
[0x000000058c130000,0x000000058c130000,0x0000000590140000)
[0.221s][info][gc,heap,exit ] lgrp 4 space 65600K, 0% used
[0x0000000590140000,0x0000000590140000,0x0000000594150000)
[0.221s][info][gc,heap,exit ] lgrp 5 space 65600K, 0% used
[0x0000000594150000,0x0000000594150000,0x0000000598160000)
[0.221s][info][gc,heap,exit ] lgrp 6 space 65600K, 0% used
[0x0000000598160000,0x0000000598160000,0x000000059c170000)
[0.221s][info][gc,heap,exit ] lgrp 7 space 65600K, 0% used
[0x000000059c170000,0x000000059c170000,0x00000005a0180000)
...
Note: --localalloc is the default allocation policy in numactl, hence there is no difference between CMD1 and CMD2. If the "local node" is low on free memory, the kernel will try to allocate memory from other nodes.
There is a documentation issue in the numactl man page; I have filed a bug for it:
https://bugzilla.kernel.org/show_bug.cgi?id=200777
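
For completeness, which nodes the process is actually allowed to allocate memory on (the restriction the patch checks via the os::Linux binding wrappers) can be inspected with libnuma directly; a minimal sketch, again illustration only (compile with -lnuma):

#include <numa.h>
#include <cstdio>

int main() {
  if (numa_available() < 0) return 1;
  // Nodes from which this process may allocate memory (numactl --membind, cgroups, ...)
  struct bitmask* mem_nodes = numa_get_membind();
  int max_node = numa_max_node();
  for (int node = 0; node <= max_node; node++) {
    if (numa_bitmask_isbitset(mem_nodes, node)) {
      std::printf("memory allocation allowed on node %d\n", node);
    }
  }
  numa_bitmask_free(mem_nodes);
  return 0;
}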
CMD3 (with fix)> numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA GCBench
...
[0.465s][info][gc,heap,exit ] Heap
[0.465s][info][gc,heap,exit ] PSYoungGen total 611840K, used 498561K
[0x0000000580100000, 0x00000005aab80000, 0x0000000800000000)
[0.465s][info][gc,heap,exit ] eden space 524800K, 95% used
[0x0000000580100000,0x000000059e7e0550,0x00000005a0180000)
[0.465s][info][gc,heap,exit ] lgrp 0 space 65600K, 100% used
[0x0000000580100000,0x0000000584110000,0x0000000584110000)
[0.465s][info][gc,heap,exit ] lgrp 1 space 65600K, 100% used
[0x0000000584110000,0x0000000588120000,0x0000000588120000)
[0.465s][info][gc,heap,exit ] lgrp 2 space 65600K, 100% used
[0x0000000588120000,0x000000058c130000,0x000000058c130000)
[0.465s][info][gc,heap,exit ] lgrp 3 space 65600K, 100% used
[0x000000058c130000,0x0000000590140000,0x0000000590140000)
[0.465s][info][gc,heap,exit ] lgrp 4 space 65600K, 100% used
[0x0000000590140000,0x0000000594150000,0x0000000594150000)
[0.465s][info][gc,heap,exit ] lgrp 5 space 65600K, 100% used
[0x0000000594150000,0x0000000598160000,0x0000000598160000)
[0.465s][info][gc,heap,exit ] lgrp 6 space 65600K, 100% used
[0x0000000598160000,0x000000059c170000,0x000000059c170000)
[0.465s][info][gc,heap,exit ] lgrp 7 space 65600K, 60% used
[0x000000059c170000,0x000000059e7e0550,0x00000005a0180000)
...
After the fix, allocation happens from all lgrps.
CMD4 (without fix)> numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA GCBench
...
[0.991s][info][gc,heap,exit ] Heap
[0.991s][info][gc,heap,exit ] PSYoungGen total 2186240K, used 120371K
[0x0000000580100000, 0x000000060ad00000, 0x0000000800000000)
[0.991s][info][gc,heap,exit ] eden space 2099200K, 5% used
[0x0000000580100000,0x0000000586d70358,0x0000000600300000)
[0.991s][info][gc,heap,exit ] lgrp 0 space 262400K, 42% used
[0x0000000580100000,0x0000000586d70358,0x0000000590140000)
[0.991s][info][gc,heap,exit ] lgrp 1 space 262400K, 0% used
[0x0000000590140000,0x0000000590140000,0x00000005a0180000)
[0.991s][info][gc,heap,exit ] lgrp 2 space 262400K, 0% used
[0x00000005a0180000,0x00000005a0180000,0x00000005b01c0000)
[0.991s][info][gc,heap,exit ] lgrp 3 space 262400K, 0% used
[0x00000005b01c0000,0x00000005b01c0000,0x00000005c0200000)
[0.991s][info][gc,heap,exit ] lgrp 4 space 262400K, 0% used
[0x00000005c0200000,0x00000005c0200000,0x00000005d0240000)
[0.991s][info][gc,heap,exit ] lgrp 5 space 262400K, 0% used
[0x00000005d0240000,0x00000005d0240000,0x00000005e0280000)
[0.991s][info][gc,heap,exit ] lgrp 6 space 262400K, 0% used
[0x00000005e0280000,0x00000005e0280000,0x00000005f02c0000)
[0.991s][info][gc,heap,exit ] lgrp 7 space 262400K, 0% used
[0x00000005f02c0000,0x00000005f02c0000,0x0000000600300000)
...
Without the fix, allocation happens only in lgrp0.
I ran “make run-test TEST="tier1 tier2" JTREG="JOBS=1"”. There are no new regression failures after the fix.
Thanks,
Roshan Mangal