<!DOCTYPE html><html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<p>The aliases you have mailed are not support aliases.</p>
<p>I suggest checking with your JDK vendor for assistance.<br>
</p>
<p>-Joe<br>
</p>
<div class="moz-cite-prefix">On 2/28/2024 8:33 PM, shanghe chen
wrote:<br>
</div>
<blockquote type="cite" cite="mid:CY5PR11MB62353B46130B7BEAA4C442C3EA5F2@CY5PR11MB6235.namprd11.prod.outlook.com">
<div dir="ltr">
<div>
<div>
<div dir="ltr">Hi! We still suspect that some peak
allocation behavior of the C2 compiler causes the memory
used by Java to exceed the Docker limit and trigger the
OOM killer:
<div dir="ltr"><br>
</div>
<div>We are continuing to investigate the Docker OOM that
occurred in our Java web service, hosted inside a Docker
container with a 32 GB memory limit, running</div>
<div dir="ltr"><br>
</div>
<div>openjdk version "11.0.14.1" 2022-02-08</div>
<div>OpenJDK Runtime Environment Temurin-11.0.14.1+1
(build 11.0.14.1+1)</div>
<div>OpenJDK 64-Bit Server VM Temurin-11.0.14.1+1 (build
11.0.14.1+1, mixed mode)</div>
<div dir="ltr"><br>
</div>
<div>with the following Java options:</div>
<div dir="ltr"><br>
</div>
<div>-Xms28g</div>
<div>-Xmx28g</div>
<div>-Xss256k</div>
<div>-XX:+UseG1GC</div>
<div>-XX:MaxGCPauseMillis=200</div>
<div>-XX:ParallelGCThreads=12</div>
<div>-XX:MetaspaceSize=512m</div>
<div>-XX:MaxMetaspaceSize=512m</div>
<div>-XX:InitialCodeCacheSize=128m</div>
<div>-XX:ReservedCodeCacheSize=512m</div>
<div>-XX:MinHeapFreeRatio=30</div>
<div>-XX:MaxHeapFreeRatio=50</div>
<div>-XX:CICompilerCount=4</div>
<div>-XX:+UseCompressedOops</div>
<div>-XX:SoftRefLRUPolicyMSPerMB=0</div>
<div>-XX:-OmitStackTraceInFastThrow</div>
<div>-XX:+AggressiveOpts</div>
<div>-XX:+PrintGC</div>
<div>-XX:+PrintGCDetails</div>
<div>-XX:+PrintClassHistogram</div>
<div>-XX:MaxTenuringThreshold=10</div>
<div>-XX:+IgnoreUnrecognizedVMOptions</div>
<div>-Djava.lang.Integer.IntegerCache.high=1000000</div>
<div>-Dcustomer.java.lang.Integer.IntegerCache.high=1000000</div>
<div>-Djava.util.concurrent.ForkJoinPool.common.parallelism=8</div>
<div>-Djdk.attach.allowAttachSelf=true</div>
<div>-Dfastjson.parser.safeMode=true</div>
<div>-Dlog4j2.formatMsgNoLookups=true</div>
<div>-Dio.netty.allocator.numDirectArenas=8</div>
<div dir="ltr"><br>
</div>
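<div>Since the options above already cap the heap, metaspace, and code
cache, the remaining suspect is native memory that those limits do not
cover. One way we could try to confirm whether compiler arenas account
for the growth is JDK Native Memory Tracking with a baseline diff (a
minimal sketch; the pid and the elided launch options are placeholders):</div>

```shell
# Start the JVM with detailed NMT (adds some overhead; "summary" mode is cheaper)
java -XX:NativeMemoryTracking=detail ...   # existing options elided

# Once the service is warm, record a baseline
jcmd <pid> VM.native_memory baseline

# Later, diff against the baseline; the "Compiler" and "Arena Chunk"
# categories show how much the compiler arenas have grown since then
jcmd <pid> VM.native_memory summary.diff
```

<div>Note that transient C2 arena chunks may show up under "Arena Chunk"
rather than "Compiler", which could explain why a single sampled
java.nmt.compiler.committed value looks small.</div>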
<div>To investigate this in an environment with a larger
memory limit, we used strace and found continuous Arena
growth with stack traces through
C2Compiler::compile_method like the following:</div>
<div dir="ltr"><br>
</div>
<div>[pid 24629] 20:40:44 mmap(NULL, 335544320,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c35378000</div>
<div>> /usr/lib64/libc-2.17.so(mmap64+0x3a) [0xf8fca]</div>
<div>>
/opt/container/lib/libjemalloc.so(je_pages_map+0x47)
[0x50657]</div>
<div>>
/opt/container/lib/libjemalloc.so(je_extent_alloc_mmap+0x13)
[0x4ab23]</div>
<div>>
/opt/container/lib/libjemalloc.so(extent_grow_retained+0x709)
[0x48ed9]</div>
<div>>
/opt/container/lib/libjemalloc.so(je_extent_alloc_wrapper+0x5bf)
[0x4997f]</div>
<div>>
/opt/container/lib/libjemalloc.so(je_arena_extent_alloc_large+0x173)
[0x20123]</div>
<div>>
/opt/container/lib/libjemalloc.so(je_large_malloc+0xb9)
[0x4b739]</div>
<div>>
/opt/container/lib/libjemalloc.so(je_malloc_default+0x6c3)
[0xf3f3]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(os::malloc(unsigned
long, MemoryType, NativeCallStack const&)+0xfc)
[0xc5edec]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(Arena::grow(unsigned
long, AllocFailStrategy::AllocFailEnum)+0x109)
[0x44d1f9]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(Arena::Arealloc(void*,
unsigned long, unsigned long,
AllocFailStrategy::AllocFailEnum)+0x201) [0x44d541]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(Node_Array::grow(unsigned
int)+0x56) [0xc348a6]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(PhaseCFG::insert_anti_dependences(Block*,
Node*, bool)+0x750) [0x827160]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(PhaseCFG::schedule_late(VectorSet&,
Node_Stack&)+0x447) [0x82a0c7]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(PhaseCFG::global_code_motion()+0x314)
[0x82d564]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(PhaseCFG::do_global_code_motion()+0x49)
[0x82dea9]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(Compile::Code_Gen()+0x1d4)
[0x66cc64]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(Compile::Compile(ciEnv*,
C2Compiler*, ciMethod*, int, bool, bool, bool, bool,
DirectiveSet*)+0xd4a) [0x6704da]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(C2Compiler::compile_method(ciEnv*,
ciMethod*, int, DirectiveSet*)+0xd3) [0x588193]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(CompileBroker::invoke_compiler_on_method(CompileTask*)+0x445)
[0x67a8c5]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(CompileBroker::compiler_thread_loop()+0x5a7)
[0x67c1f7]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(JavaThread::thread_main_inner()+0x1b9)
[0xed4899]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(Thread::call_run()+0x14e)
[0xed149e]</div>
<div>>
/usr/java/jdk11/lib/server/libjvm.so(thread_native_entry(Thread*)+0xed)
[0xc714cd]</div>
<div>> /usr/lib64/libpthread-2.17.so(start_thread+0xc4)
[0x7ea4]</div>
<div>> /usr/lib64/libc-2.17.so(__clone+0x6c) [0xfeb0c]</div>
<div dir="ltr"><br>
</div>
<div>The strace output for mmap shows calls (all with the
above stack trace) that sum to more than 1 GB of memory:</div>
<div dir="ltr"><br>
</div>
<div>[pid 24629] 20:40:33 mmap(NULL, 29360128,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5ca6678000</div>
<div>[pid 24629] 20:40:35 mmap(NULL, 50331648,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c9ee78000</div>
<div>[pid 24629] 20:40:36 mmap(NULL, 58720256,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c9b678000</div>
<div>[pid 24629] 20:40:36 mmap(NULL, 67108864,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c97678000</div>
<div>[pid 24629] 20:40:36 mmap(NULL, 83886080,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c92678000</div>
<div>[pid 24629] 20:40:39 mmap(NULL, 100663296,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c8c678000</div>
<div>[pid 24629] 20:40:41 mmap(NULL, 134217728,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c7d678000</div>
<div>[pid 24629] 20:40:42 mmap(NULL, 167772160,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c73678000</div>
<div>[pid 24629] 20:40:42 mmap(NULL, 201326592,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c67678000</div>
<div>[pid 24629] 20:40:43 mmap(NULL, 234881024,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c59378000</div>
<div>[pid 24629] 20:40:43 mmap(NULL, 268435456,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c49378000</div>
<div>[pid 24629] 20:40:44 mmap(NULL, 335544320,
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =
0x7f5c35378000</div>
<div dir="ltr"><br>
</div>
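<div>For reference, a capture like the one above can be reproduced by
attaching strace to the running JVM and filtering to the mapping
syscalls (a sketch; the pid is a placeholder, and -k requires a
reasonably recent strace built with stack-unwinding support):</div>

```shell
# Follow all threads (-f), trace only mmap/munmap, print the user-space
# stack for each call (-k), and write the log to a file for later analysis
strace -f -k -e trace=mmap,munmap -o mmap.log -p <pid>
```

<div>Summing the mmap lengths in mmap.log per stack trace is how the
per-call-site totals above were attributed.</div>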
<div>Meanwhile, /proc/$pid/status shows the VmHWM metric
growing from 31.04 GB to 32.12 GB over the same period,
which means that at some moment the process used as much
as 32.12 GB. Yet in Native Memory Tracking we only sample
a record with java.nmt.compiler.committed=122MB.</div>
<div>With the -XX:+PrintCompilation option it seems that
not many methods were being compiled (20:40:33 to
20:40:44 corresponds to seconds 1937 to 1948; some
package names have been replaced):</div>
<div dir="ltr"><br>
</div>
<div>1936335 58263 4
com.wecorp.webiz.product.service.v1.PublicParametersEntity::put
(4606 bytes)</div>
<div>1936376 52134 3
org.apache.catalina.valves.AbstractAccessLogValve$RequestURIElement::addElement
(24 bytes) made not entrant</div>
<div>1936507 58259 3
com.wecorp.webiz.facade.mapper.PublicParametersEntityMapper::mapToInternal
(6527 bytes)</div>
<div>1936634 58262 4
com.wecorp.webiz.cache.DataBusinessV2::getHash (143
bytes)</div>
<div>1936986 56014 3
com.wecorp.webiz.cache.DataBusinessV2::getHash (143
bytes) made not entrant</div>
<div>1937112 58251 4
java.util.IdentityHashMap::containsKey (55 bytes)</div>
<div>1937141 898 3 java.util.IdentityHashMap::containsKey
(55 bytes) made not entrant</div>
<div>1937288 58264 4
com.wecorp.webiz.business.dynamic.simplefilter.Filter$$Lambda$1463/0x0000000800b54040::apply
(13 bytes)</div>
<div>1937334 36436 2
com.wecorp.webiz.business.dynamic.simplefilter.Filter$$Lambda$1463/0x0000000800b54040::apply
(13 bytes) made not entrant</div>
<div>1938478 58267 4
com.wecorp.webiz.facade.mapper.WebizFacilitiesEntityMapper::mapToInternal
(1541 bytes)</div>
<div>1938588 52139 ! 3
org.apache.catalina.core.StandardHostValve::invoke (402
bytes) made not entrant</div>
<div>1938739 58272 ! 4
io.grpc.internal.MessageDeframer::readRequiredBytes (558
bytes)</div>
<div>1939036 53988 4
com.wecorp.webiz.cache.BaseInfoBusinessV10::getByBaseId
(109 bytes) made not entrant</div>
<div>1942257 23486 ! 3
io.grpc.internal.MessageDeframer::readRequiredBytes (558
bytes) made not entrant</div>
<div>1942481 58276 4
com.wecorp.webiz.business.plan.ResourceTag::getSpecialTypes
(552 bytes)</div>
<div>1942733 47364 3
com.wecorp.webiz.facade.mapper.WebizFacilitiesEntityMapper::mapToInternal
(1541 bytes) made not entrant</div>
<div>1942768 58166 3
com.wecorp.webiz.business.plan.ResourceTag::getSpecialTypes
(552 bytes) made not entrant</div>
<div>1943829 58278 4
com.wecorp.webiz.business.plan.model.hour.HourStgy::isHour
(14 bytes)</div>
<div>1943841 58282 4
com.wecorp.webiz.business.plan.model.hour.CodeHour::isHour
(73 bytes)</div>
<div>1944863 58167 3
com.wecorp.webiz.business.plan.model.hour.CodeHour::isHour
(73 bytes) made not entrant</div>
<div>1944928 58168 3
com.wecorp.webiz.business.plan.model.hour.HourStgy::isHour
(14 bytes) made not entrant</div>
<div>1945127 58271 4
com.wecorp.webiz.business.factory.lazy.LazyToWebizVMS::isSupport
(39 bytes)</div>
<div>1945134 46559 3
com.wecorp.webiz.business.factory.lazy.LazyToWebizVMS::isSupport
(39 bytes) made not entrant</div>
<div>1946397 58269 4
io.grpc.Deadline$SystemTicker::nanoTime (4 bytes)</div>
<div>1946418 23483 3
io.grpc.Deadline$SystemTicker::nanoTime (4 bytes) made
not entrant</div>
<div>1947083 58295 4
com.wecorp.infra.kafka.common.protocol.types.Struct::instance
(13 bytes)</div>
<div>1947448 54974 4
com.wecorp.webiz.business.dynamic.webizfilterrules.FilterSpecialWebiz::filter
(174 bytes) made not entrant</div>
<div>1949042 40631 3
com.wecorp.infra.kafka.common.protocol.types.Struct::instance
(13 bytes) made not entrant</div>
<div>1949471 58297 4
com.wecorp.webiz.business.dynamic.webizfilterrules.FilterSpecialWebiz::filter
(174 bytes)</div>
<div>1949474 58306 4
com.wecorp.webiz.product.cache.Cache::isDataReady (26
bytes)</div>
<div>1949479 27680 3
com.wecorp.webiz.product.cache.Cache::isDataReady (26
bytes) made not entrant</div>
<div dir="ltr"><br>
</div>
<div>May I ask why there is continuous peak allocation in
C2Compiler::compile_method, specifically in
<span style="text-decoration: none; display: inline !important; background-color: rgb(255, 255, 255);">
PhaseCFG::insert_anti_dependences</span>? It seems
that Arena::grow (in
<a class="moz-txt-link-freetext" href="https://github.com/openjdk/jdk11u/blob/master/src/hotspot/share/memory/arena.cpp#L353">https://github.com/openjdk/jdk11u/blob/master/src/hotspot/share/memory/arena.cpp#L353</a>)
does not free individual chunks, so at some point all of
the allocated memory is in use at once?</div>
<div>And can we avoid this kind of continuously growing
large allocation in a memory-limited Docker environment
while still keeping the C2 compiler enabled?</div>
<div dir="ltr"><br>
</div>
Thank you for any response!</div>
</div>
</div>
</div>
</blockquote>
</body>
</html>