Request for review: JDK-8009561 NPG: Metaspace fragmentation when retiring a Metachunk

Mikael Gerdin mikael.gerdin at oracle.com
Fri Aug 2 11:46:18 UTC 2013


Jon,

On 2013-08-01 17:02, Jon Masamitsu wrote:
> Mikael,
>
>>
>> The "4x size limit" experiment uses Dither::atLeast but returns the
>> block if it's 4x larger than the allocation request. My intention was
>> to avoid unnecessary fragmentation of the large free blocks.
>
> By the "word size total" (of allocations) this seems to be the best
> strategy.
> Do you have any hesitancy about using this strategy?

I ran the refworkload footprint3_real benchmark on linux-x64 and did not 
see any regressions with the "4x size limit" experiment compared to the 
current strategy.

I don't think the benchmark causes that many freelist allocations, but it 
feels a bit safer to have confirmed that the change at least does not 
cause any severe regressions.

The incremental patch for this change is simple:

diff -r cddb9731fdd6 src/share/vm/memory/metaspace.cpp
--- a/src/share/vm/memory/metaspace.cpp
+++ b/src/share/vm/memory/metaspace.cpp
@@ -810,6 +810,15 @@
    if (free_block == NULL) {
      return NULL;
    }
+
+  if (UseNewCode3) {
+    size_t block_size = free_block->size();
+    if (block_size > 4 * word_size) {
+      return_block((MetaWord*)free_block, block_size);
+      return NULL;
+    }
+  }
+
    if (UseNewCode2) {
     gclog_or_tty->print_cr("=== Got a free block of size: %d for allocation of size: %d",
         free_block->word_size(), word_size);
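
For reference, a rough sketch of how the whole freelist path could look
with the splitting from webrev.0+ and the 4x limit folded in. This is
only a sketch: the dictionary()->get_chunk call and the surrounding
structure are my assumptions, and the real code also needs a
minimum-size check before returning a tail to the dictionary.

MetaWord* BlockFreelist::get_block(size_t word_size) {
  // Find a free block that is at least word_size words large.
  Metablock* free_block =
      dictionary()->get_chunk(word_size,
                              FreeBlockDictionary<Metablock>::atLeast);
  if (free_block == NULL) {
    return NULL;
  }

  const size_t block_size = free_block->size();

  // "4x size limit": refuse a block that is much larger than the
  // request so that large free blocks are not chipped apart by small
  // allocations.
  if (block_size > 4 * word_size) {
    return_block((MetaWord*)free_block, block_size);
    return NULL;
  }

  MetaWord* new_block = (MetaWord*)free_block;

  // Split: hand out the front of the block and return the unused tail
  // to the freelist (the tail must also be large enough to be a
  // dictionary entry, a check elided here).
  if (block_size > word_size) {
    return_block(new_block + word_size, block_size - word_size);
  }
  return new_block;
}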

/Mikael

>
> Jon
>
> On 8/1/13 4:42 AM, Mikael Gerdin wrote:
>> Jon,
>>
>> On 2013-06-12 00:51, Jon Masamitsu wrote:
>>>
>>> On 6/11/13 2:46 PM, Mikael Gerdin wrote:
>>>> Jon
>>>>
>>>> On 06/07/2013 07:36 PM, Jon Masamitsu wrote:
>>>>>
>>>>> On 6/7/2013 8:28 AM, Mikael Gerdin wrote:
>>>>>> Jon,
>>>>>>
>>>>>>
>>>>>> On 2013-06-06 16:50, Jon Masamitsu wrote:
>>>>>>> Mikael,
>>>>>>>
>>>>>>> Thanks.  I'd be interested in seeing the instrumentation you
>>>>>>> add.  Might be worth adding as an enhancement in a later
>>>>>>> changeset.
>>>>>>
>>>>>> I did a 1hr KS run today with and without block splitting; here's
>>>>>> what I came up with (in an entirely non-scientific way)
>>>>>>
>>>>>> http://cr.openjdk.java.net/~mgerdin/8009561/splitting.txt
>>>>>> http://cr.openjdk.java.net/~mgerdin/8009561/splitting.png
>>>>>
>>>>> Good graphs.
>>>>>
>>>>> The behavior is what we expect (I think).  With splitting we are able
>>>>> to do more small allocations from the dictionary (where we split a
>>>>> larger block to get a smaller block) and get fewer larger blocks
>>>>> allocated (some have been split).
>>>>>
>>>>>
>>>>>>
>>>>>> We hit the HWM (high-water mark) 4 times with splitting and 5 times
>>>>>> without splitting.
>>>>>
>>>>> Because we don't have to expand (get new chunks) as often, which is
>>>>> good, I would surmise.
>>>>>
>>>>>> On the other hand: splitting did leave us with more metaspace memory
>>>>>> committed in the end.
>>>>>
>>>>> One explanation would be that allocations of larger blocks need to
>>>>> come out of newly committed space instead of the dictionary (where
>>>>> the large blocks have been broken up).
>>>>>
>>>>> Is there a policy that we could use that says
>>>>>
>>>>> "break up a larger block for a smaller block allocation only if ..."
>>>>>
>>>>> You fill in the blank?
>>>>
>>>> ...only if the larger block is less than 4 times larger than the
>>>> allocation? 2 times? 8 times?
>>>>
>>>> I could try some more KS runs but I'm unsure if the figures I come up
>>>> with are actually relevant.
>>>
>>> I also don't know if more KS runs would be relevant.  Can you ask the
>>> dictionary how many blocks there are of the size you're going to split?
>>> If we only split if there are more than 4 blocks of that size, that
>>> would moderate the splitting a bit.
>>
>> I did some more experiments and generated some more histograms:
>> http://cr.openjdk.java.net/~mgerdin/8009561/splitting2.txt
>> http://cr.openjdk.java.net/~mgerdin/8009561/splitting2.png
>>
>> The "4x size limit" experiment uses Dither::atLeast but returns the
>> block if it's 4x larger than the allocation request. My intention was
>> to avoid unnecessary fragmentation of the large free blocks.
>>
>> The "4 freelist entry limit" experiment uses Dither::atLeast and
>> queries the freelist for the amount of blocks of that particular size,
>> freelist allocation is refused if there are less than 4 blocks
>> remaining of the block size returned from the Dither::atLeast query.
>>
>> I also added the total number of words successfully allocated from the
>> freelists for each experiment.
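
To spell out the "4 freelist entry limit" check above as code, roughly
(num_blocks_of_size is a made-up name for the count query; the real
dictionary interface may differ):

Metablock* free_block =
    dictionary()->get_chunk(word_size,
                            FreeBlockDictionary<Metablock>::atLeast);
if (free_block != NULL) {
  size_t block_size = free_block->size();
  // Refuse the freelist allocation if fewer than 4 blocks of this size
  // remain after the lookup, so the size class is not drained.
  if (num_blocks_of_size(block_size) < 4) {  // hypothetical helper
    return_block((MetaWord*)free_block, block_size);
    free_block = NULL;
  }
}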
>>
>> /Mikael
>>
>>>
>>> Jon
>>>>
>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> I put up the very simple instrumentation at:
>>>>>> http://cr.openjdk.java.net/~mgerdin/8009561/instrument/webrev
>>>>>>
>>>>>> I also changed the allocation_from_dictionary_limit to 4k to force us
>>>>>> to make more freelist allocations.
>>>>>
>>>>> Does it really make sense to have any allocation_from_dictionary_limit?
>>>>> I know it was initially added because allocation from a freelist takes
>>>>> longer, but to have a static limit like that just seems to put that
>>>>> space forever beyond reach.
>>>>
>>>> I thought you had added the limit. I sort of feel that 64k is a bit
>>>> much, but the code would definitely be simpler if there were none.
>>>> We already take the hit of acquiring a Mutex for each Metaspace
>>>> allocation, so maybe the dictionary lookup isn't that expensive?
>>>>
>>>>>
>>>>> Thanks for the numbers.
>>>>
>>>> You're welcome.
>>>>
>>>> /Mikael
>>>>
>>>>>
>>>>> Jon
>>>>>
>>>>>>
>>>>>> /Mikael
>>>>>>
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>>
>>>>>>> On 6/6/13 2:22 AM, Mikael Gerdin wrote:
>>>>>>>> Jon,
>>>>>>>>
>>>>>>>> On 2013-06-06 04:41, Jon Masamitsu wrote:
>>>>>>>>>
>>>>>>>>> On 6/5/2013 7:04 AM, Mikael Gerdin wrote:
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> Can I have some reviews of this small fix to the Metaspace memory
>>>>>>>>>> allocation path?
>>>>>>>>>>
>>>>>>>>>> Problem:
>>>>>>>>>> When a Metaspace allocation request cannot be satisfied by the
>>>>>>>>>> current chunk, the chunk is retired and a new chunk is requested.
>>>>>>>>>> This causes whatever is left in the chunk to be effectively
>>>>>>>>>> leaked.
>>>>>>>>>>
>>>>>>>>>> Suggested fix:
>>>>>>>>>> Put the remaining memory in each chunk on the Metablock freelist
>>>>>>>>>> so it can be used to satisfy future allocations.
>>>>>>>>>>
>>>>>>>>>> Possible addition:
>>>>>>>>>> When allocating from the block free list, use
>>>>>>>>>> FreeBlockDictionary<Metablock>::atLeast instead of
>>>>>>>>>> FreeBlockDictionary<Metablock>::exactly and split the Metablock
>>>>>>>>>> if it's large enough.
>>>>>>>>>>
>>>>>>>>>> One might argue that this increases the fragmentation of the
>>>>>>>>>> memory on the block free list, but I think that we primarily want
>>>>>>>>>> to use the block free list for small allocations and allocate
>>>>>>>>>> from chunks for large allocations.
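
As a sketch, the suggested fix amounts to something like the following
when the current chunk is retired (retire_current_chunk and the
accessors are my naming assumptions, not necessarily the webrev's):

void SpaceManager::retire_current_chunk() {
  Metachunk* chunk = current_chunk();
  size_t remaining_words = chunk->free_word_size();
  // Instead of leaking the unused tail of the retired chunk, allocate
  // it out of the chunk and hand it to the Metablock freelist so it
  // can satisfy future small allocations. Tails too small to be a
  // freelist entry would still have to be skipped.
  if (remaining_words > 0) {
    block_freelists()->return_block(chunk->allocate(remaining_words),
                                    remaining_words);
  }
}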
>>>>>>>>>>
>>>>>>>>>> Webrev:
>>>>>>>>>> Only fix:
>>>>>>>>>> http://cr.openjdk.java.net/~mgerdin/8009561/webrev.0/
>>>>>>>>>
>>>>>>>>> The "Only fix" looks good.  Did you test with
>>>>>>>>> metaspace_slow_verify=true?
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Incremental webrev for splitting blocks:
>>>>>>>>>> http://cr.openjdk.java.net/~mgerdin/8009561/webrev.0%2b/
>>>>>>>>>
>>>>>>>>> Change looks good.
>>>>>>>>>
>>>>>>>>> Did you do any long-running tests with the block splitting?  Such
>>>>>>>>> as 24 hours with kitchensink?  Something that would reuse
>>>>>>>>> Metablocks so that we can see if we are fragmenting instead of
>>>>>>>>> reusing?
>>>>>>>>>
>>>>>>>>
>>>>>>>> I did some runs earlier but I don't have any data from them.
>>>>>>>> I can try to get an instrumented build together and run KS over the
>>>>>>>> weekend.
>>>>>>>>
>>>>>>>> /Mikael
>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Bug links:
>>>>>>>>>> https://jbs.oracle.com/bugs/browse/JDK-8009561
>>>>>>>>>> http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=8009561
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>> /Mikael
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>
>>>>
>>>
>


