Metaspace OOM before reaching MaxMetaspaceSize
Shengpei Zhang
shengpei92 at gmail.com
Fri Apr 26 00:19:50 UTC 2019
Hi,
We recently saw a weird Metaspace OOM. The hs_err_pid log showed the
metaspace usage was 325078K, while MaxMetaspaceSize was set to 4G.
The machine was running Java 10.
A few observations:
1. It only happened on one machine, and we were not able to reproduce it.
2. The error log indicated a memory allocation failure in native memory.
[2019-03-29T11:18:01.298-0700][126538][info ][gc,metaspace,freelist]
[2019-03-29T11:18:01.298-0700][126538][info ][gc,metaspace,freelist] Chunkmanager (non-class):
[2019-03-29T11:18:01.298-0700][126538][info ][gc,metaspace,freelist]   62578 specialized (128 bytes) chunks, total 64079872 bytes
[2019-03-29T11:18:01.298-0700][126538][info ][gc,metaspace,freelist]   5778 small (512 bytes) chunks, total 23666688 bytes
[2019-03-29T11:18:01.298-0700][126538][info ][gc,metaspace,freelist]   27 medium (8192 bytes) chunks, total 1769472 bytes
[2019-03-29T11:18:01.298-0700][126538][info ][gc,metaspace,freelist]   13 humongous chunks, total 8890368 bytes
[2019-03-29T11:18:01.298-0700][126538][info ][gc,metaspace,freelist]   total size: 98406400 bytes.
[2019-03-29T11:18:01.298-0700][126538][info ][gc,metaspace,freelist] Chunkmanager (class): unavailable.
java.lang.OutOfMemoryError: Metaspace
Dumping heap to java_pid240.hprof ...
          vmop  [ threads: total initially_running wait_to_block ][ time: spin block sync cleanup vmop ] page_trap_count
66467.977: ICBufferFull  [ 3427 154 353 ][ 0 60 63 8 0 ]  58
[2019-03-29T11:18:01.308-0700][459 ][info ][safepoint ] Total time for which application threads were stopped: 0.0725507 seconds, Stopping threads took: 0.0630158 seconds
[thread 384508 also had an error]
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 16384 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /data/hs_err_pid240.log
3. The hs_err_pid log showed the metaspace memory usage was 325078K, far
less than the reserved size of 568176K, and MaxMetaspaceSize was set to
4G. I checked that the machine had plenty of free native memory at the
time. If there had been a huge allocation, I assume the JVM would have
tried to reserve more memory first?
Heap:
 garbage-first heap   total 188743680K, used 47178320K [0x00007ee320000000, 0x00007f1020000000)
  region size 32768K, 70 young (2293760K), 0 survivors (0K)
 Metaspace       used 325078K, capacity 451348K, committed 547448K, reserved 568176K
Also, it showed that there was about 5G of free memory on the machine:
/proc/meminfo:
MemTotal: 263892284 kB
MemFree: 5111480 kB
MemAvailable: 42423728 kB
Buffers: 752620 kB
Cached: 36011560 kB
SwapCached: 0 kB
Active: 221763832 kB
Inactive: 31633872 kB
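(For reference, the used/committed/max numbers from the hs_err summary can also be read at runtime through the standard `MemoryPoolMXBean` API; this is just a sketch of how we could watch the Metaspace pool on a live JVM, not something from the crash log itself.)

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MetaspacePeek {
    public static void main(String[] args) {
        // Walk all memory pools and print the "Metaspace" pool, whose
        // used/committed/max values correspond to the hs_err "Metaspace"
        // summary line. getMax() is -1 when MaxMetaspaceSize is not set.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if ("Metaspace".equals(pool.getName())) {
                MemoryUsage u = pool.getUsage();
                System.out.printf("Metaspace used=%dK committed=%dK max=%dK%n",
                        u.getUsed() / 1024,
                        u.getCommitted() / 1024,
                        u.getMax() < 0 ? -1 : u.getMax() / 1024);
            }
        }
    }
}
```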
4. The stack trace:
--------------- T H R E A D ---------------
Current thread (0x00007edec0023840): JavaThread "http-client-exchange-39327" daemon [_thread_new, id=384507, stack(0x00007ed6c8a6b000,0x00007ed6c8c6c000)]
Stack: [0x00007ed6c8a6b000,0x00007ed6c8c6c000], sp=0x00007ed6c8c6aa90, free space=2046k
Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0xdf3ee2] VMError::report_and_die(int, char const*, char const*, __va_list_tag*, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x162
V [libjvm.so+0xdf4b8b] VMError::report_and_die(Thread*, char const*, int, unsigned long, VMErrorType, char const*, __va_list_tag*)+0x2b
V [libjvm.so+0x6a7c98] report_vm_out_of_memory(char const*, int, unsigned long, VMErrorType, char const*, ...)+0xd8
V [libjvm.so+0xbfdc8e] os::pd_commit_memory(char*, unsigned long, bool)+0xce
V [libjvm.so+0xbf647f] os::commit_memory(char*, unsigned long, bool)+0x1f
V [libjvm.so+0xbff1f8] os::pd_create_stack_guard_pages(char*, unsigned long)+0x68
V [libjvm.so+0xd9013c] JavaThread::create_stack_guard_pages()+0x5c
V [libjvm.so+0xd92c7c] JavaThread::run()+0x2c
V [libjvm.so+0xc03e82] thread_native_entry(Thread*)+0xf2
C [libpthread.so.0+0x7e25] start_thread+0xc5
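One thing worth noting (an assumption on my part, not something the log confirms): on Linux, mmap can fail with ENOMEM even when plenty of physical memory is free, if the process has hit the per-process mapping limit vm.max_map_count. A quick sketch to compare a process's current mapping count against that limit:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class MapCount {
    public static void main(String[] args) throws IOException {
        // Each line of /proc/self/maps is one virtual memory mapping.
        // mmap() fails with ENOMEM once this count reaches vm.max_map_count,
        // regardless of how much physical memory is free. (Linux only.)
        long mappings;
        try (Stream<String> lines = Files.lines(Paths.get("/proc/self/maps"))) {
            mappings = lines.count();
        }
        long limit = Long.parseLong(
                Files.readAllLines(Paths.get("/proc/sys/vm/max_map_count"))
                        .get(0).trim());
        System.out.println("mappings=" + mappings + " limit=" + limit);
    }
}
```

(For another process, substitute its pid for "self" in the path; the limit itself can only be raised via sysctl as root.)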
We have never seen such an issue before. Does anyone have insights?
Thanks,
Shengpei
More information about the hotspot-dev mailing list