RFR(xs): 8185033: On Metaspace OOM, ChunkManager composition should be logged.

Thomas Stüfe thomas.stuefe at gmail.com
Fri Jul 21 12:21:34 UTC 2017


Hi Mikael,

thanks for looking at this!

The const-correctness changes make sense. I started to do this myself but
reverted that part and added it as a todo to
https://bugs.openjdk.java.net/browse/JDK-8185034. But sure, we can do this
in this change.

As for making the counters size_t, I am not sure. I do not think size_t is
a good fit for numbers-of-things; to me, size_t always means a memory size
or an address range. I think counters are better represented by int or
unsigned, if you do not like uint32_t. I also think the original decision
to make the chunk counters size_t is debatable.

But I do not feel strongly about it; if you prefer size_t, I can use
size_t.

Thanks, Thomas


On Fri, Jul 21, 2017 at 10:46 AM, Mikael Gerdin <mikael.gerdin at oracle.com>
wrote:

> Hi Thomas,
>
>
> On 2017-07-21 08:54, Thomas Stüfe wrote:
>
>> Dear all,
>>
>> May I please get reviews and a sponsor for the following small
>> enhancement.
>>
>> Issue: https://bugs.openjdk.java.net/browse/JDK-8185033
>> Webrev:
>> http://cr.openjdk.java.net/~stuefe/webrevs/8185033-On-Metaspace-OOM-chunkManager-composition-should-be-logged/webrev.00/webrev/
>>
>> This patch adds a bit of logging if a Metaspace OOM happens. We will now
>> get a printout of the composition of the global chunk managers (non-class
>> and, if necessary, class).
>>
>> Before it looked like this:
>>
>> [272.727s][info][gc,metaspace,freelist] All Metaspace:
>> [272.728s][info][gc,metaspace,freelist] data space:   Chunk accounting:
>> used in chunks 100752K + unused in chunks 650K  +  capacity in free chunks
>> 8293K = 109696K  capacity in allocated chunks 101403K
>> [272.728s][info][gc,metaspace,freelist] class space:   Chunk accounting:
>> used in chunks 4047K + unused in chunks 97K  +  capacity in free chunks
>> 6085K = 10230K  capacity in allocated chunks 4145K
>> [272.729s][info][gc,metaspace,freelist] Total fragmentation waste (words)
>> doesn't count free space
>> [272.729s][info][gc,metaspace,freelist]   data: 183 specialized(s) 960, 169 small(s) 41, 1507 medium(s) 4480, large count 1
>> [272.729s][info][gc,metaspace,freelist]  class: 183 specialized(s) 0, 13
>> small(s) 12, 111 medium(s) 8, large count 1
>>
>> Now:
>> [250.548s][info][gc,metaspace,freelist] All Metaspace:
>> [250.549s][info][gc,metaspace,freelist] data space:   Chunk accounting:
>> used in chunks 100748K + unused in chunks 638K  +  capacity in free chunks
>> 10357K = 111744K  capacity in allocated chunks 101387K
>> [250.550s][info][gc,metaspace,freelist] class space:   Chunk accounting:
>> used in chunks 4047K + unused in chunks 97K  +  capacity in free chunks
>> 6085K = 10230K  capacity in allocated chunks 4145K
>> [250.550s][info][gc,metaspace,freelist] Total fragmentation waste (words)
>> doesn't count free space
>> [250.550s][info][gc,metaspace,freelist]   data: 183 specialized(s) 960, 165 small(s) 27, 1507 medium(s) 4504, large count 1
>> [250.550s][info][gc,metaspace,freelist]  class: 183 specialized(s) 0, 13
>> small(s) 12, 111 medium(s) 14, large count 1
>> [250.550s][info][gc,metaspace,freelist] Chunkmanager (non-class):
>> [250.550s][info][gc,metaspace,freelist]   89 specialized (128 bytes)
>> chunks, total 91136 bytes
>> [250.550s][info][gc,metaspace,freelist]   2567 small (512 bytes) chunks,
>> total 10514432 bytes
>> [250.550s][info][gc,metaspace,freelist]   0 medium (8192 bytes) chunks,
>> total 0 bytes
>> [250.550s][info][gc,metaspace,freelist]   0 humongous chunks, total 0
>> bytes
>> [250.550s][info][gc,metaspace,freelist]   total size: 10605568.
>> [250.550s][info][gc,metaspace,freelist] Chunkmanager (class):
>> [250.550s][info][gc,metaspace,freelist]   87 specialized (128 bytes)
>> chunks, total 89088 bytes
>> [250.550s][info][gc,metaspace,freelist]   2999 small (256 bytes) chunks,
>> total 6141952 bytes
>> [250.550s][info][gc,metaspace,freelist]   0 medium (4096 bytes) chunks,
>> total 0 bytes
>> [250.550s][info][gc,metaspace,freelist]   0 humongous chunks, total 0
>> bytes
>> [250.550s][info][gc,metaspace,freelist]   total size: 6231040.
>>
>> This helps in understanding the underlying cause of the OOM, if that cause
>> is related to chunks lingering around in the freelist. See also [1], a JEP
>> proposing to improve metaspace chunk allocation to increase the chance of
>> chunk reuse. This patch is also a preparation for the prototype work of [1].
>>
>> Note that, admittedly, metaspace.cpp will again gain a bit of complexity
>> with this patch. When examining the code, I was tempted to refactor, but
>> refrained from doing so to keep the patch focused. Still, I think
>> metaspace.cpp (especially the code associated with statistics) could be
>> cleaned up a bit. I opened a follow-up item [2] to collect cleanup issues.
>> I am willing to do that myself, but would prefer it to be done after all
>> functional patches - which still have a slight chance of being backported
>> to jdk9 - have been added.
>>
>
> I know this is a bit counter to what you said about keeping the patch
> small but what do you think about doing this on top of your change?
> http://cr.openjdk.java.net/~mgerdin/8185033/webrev.00/src/share/vm/memory/metaspace.cpp.udiff.html
>
> I'm not sure why you went with uint32_t for the lengths since it's just a
> couple of bytes and you can get rid of the casts.
> I also took the liberty to squash out two const_casts which were making my
> eyes twitch... :)
>
> /Mikael
>
>
>
>> The patch was built and tested on Windows x64, Linux x64 and AIX. I also
>> ran the jtreg hotspot/runtime/Metaspace tests with no issues.
>>
>> Kind Regards, Thomas
>>
>> ----
>>
>> [1] https://bugs.openjdk.java.net/browse/JDK-8166690
>> [2] https://bugs.openjdk.java.net/browse/JDK-8185034
>>
>>

