RFR (M) 8164921: Memory leaked when instrumentation.retransformClasses() is called repeatedly

Coleen Phillimore coleen.phillimore at oracle.com
Mon Oct 3 15:46:01 UTC 2016


Serguei, thank you for looking at this.

On 9/30/16 7:57 PM, serguei.spitsyn at oracle.com wrote:
> Hi Coleen,
>
>
> Just wanted to share a couple of comments I have so far.
>
>
> http://cr.openjdk.java.net/~coleenp/8164921.01/webrev/src/share/vm/memory/metaspace.cpp.frames.html
>
>   302   debug_only(GrowableArray<Metablock*>* _all_blocks);
>   855   DEBUG_ONLY(_all_blocks = new (ResourceObj::C_HEAP, mtClass) GrowableArray<Metablock*>(100, true));
>   863   DEBUG_ONLY(delete _all_blocks;)
>   891   DEBUG_ONLY(_all_blocks->remove((Metablock*)new_block));
>   908   DEBUG_ONLY(_all_blocks->remove((Metablock*)free_block));
>   922   DEBUG_ONLY(_all_blocks->remove((Metablock*)new_block));
>
>   It is not clear why the different macro debug_only() is used at L302.
>   Could it be DEBUG_ONLY as well?
Yes, good catch.  I thought I had recompiled after adding the 
DEBUG_ONLY, but I must not have.  It should be DEBUG_ONLY.
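
For the record, the fix is just the macro name on that field declaration,
i.e. something like:

    // Debug-only tracking of returned blocks, for the double-free checking
    // mentioned in the summary below.
    DEBUG_ONLY(GrowableArray<Metablock*>* _all_blocks);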
>   866 void BlockFreelist::return_block(MetaWord* p, size_t word_size) {
>   867   Metablock* free_chunk = ::new (p) Metablock(word_size);
>   868   if (word_size < SmallBlocks::small_block_min_size()) {
>   869     // Dark matter
>   870     return;
>   871   } else if (word_size < SmallBlocks::small_block_max_size()) {
>
>   In case of dark matter the free_chunk is leaked.  It is better to
>   rearrange the fragment to something like this:
>
>     void BlockFreelist::return_block(MetaWord* p, size_t word_size) {
>       if (word_size < SmallBlocks::small_block_min_size()) {
>         return;  // Dark matter
>       }
>       Metablock* free_chunk = ::new (p) Metablock(word_size);
>       if (word_size < SmallBlocks::small_block_max_size()) {
Okay, yes, this seems like it would be a bug, since 
SmallBlocks::small_block_min_size() should equal sizeof(Metablock) in 
words; but since we align up to sizeof(Metablock) during allocation, we 
won't ever actually hit this.  I'll change it as suggested anyway.
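
In full, the rearranged version would be roughly as below.  This is a
sketch only: the small_blocks()->return_block() and dictionary()->return_chunk()
hand-offs stand in for the actual bodies in the webrev, which are not
quoted above.

    void BlockFreelist::return_block(MetaWord* p, size_t word_size) {
      // Check for dark matter before constructing anything on the memory,
      // so a Metablock header isn't built on a block we then abandon.
      if (word_size < SmallBlocks::small_block_min_size()) {
        return;  // Dark matter: too small to track.
      }
      Metablock* free_chunk = ::new (p) Metablock(word_size);
      if (word_size < SmallBlocks::small_block_max_size()) {
        // Small block (3-11 words): goes into the per-size array.
        small_blocks()->return_block(free_chunk, word_size);
      } else {
        // Big enough for a BinaryTreeDictionary entry.
        dictionary()->return_chunk(free_chunk);
      }
    }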
>   886 MetaWord* BlockFreelist::get_block(size_t word_size) {
>   887   if (word_size < SmallBlocks::small_block_max_size() &&
>             small_blocks()->list_at(word_size).count() > 0) {
>   . . .
>   892     return new_block;
>   893   }
>   894
>   895   if (word_size < BlockFreelist::min_size()) {
>   896     // Dark matter.  Too small for dictionary.
>   897     return NULL;
>   898   }
>
>   It'd be better to reorder the fragments 887-893 and 895-898.
>
> Thanks,
> Serguei
The order is important because we first try for something in the 
small_blocks array, then try for something in the dictionary.  
BlockFreelist::min_size() is the size of a minimal dictionary entry and 
is not the same as SmallBlocks::small_block_min_size().  So the logic 
is: look for something in the small blocks (3-11 words), but if the 
size is too small for the dictionary (minimum 12 words), return NULL.  
In that case the block is allocated from the Metachunk and not from the 
small_blocks/dictionary of returned blocks.

Thank you for starting your review.  Please let me know when you have 
further comments.

Coleen
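
P.S.  For the archives, the resulting flow in get_block() is roughly as
follows.  A sketch only: small_blocks()->get_block() and
dictionary()->get_chunk() are stand-ins for the actual calls in the webrev.

    MetaWord* BlockFreelist::get_block(size_t word_size) {
      // 1. Small request: reuse a returned block of exactly this size from
      //    the small_blocks array, if one is available.
      if (word_size < SmallBlocks::small_block_max_size() &&
          small_blocks()->list_at(word_size).count() > 0) {
        MetaWord* new_block = (MetaWord*) small_blocks()->get_block(word_size);
        return new_block;
      }

      // 2. Otherwise, if the request is still too small for a dictionary
      //    entry, give up; the caller allocates from the Metachunk instead.
      if (word_size < BlockFreelist::min_size()) {
        return NULL;  // Dark matter.  Too small for dictionary.
      }

      // 3. Big enough: look in the BinaryTreeDictionary of returned blocks.
      Metablock* free_block = dictionary()->get_chunk(word_size);
      if (free_block == NULL) {
        return NULL;
      }
      // (Splitting off the unused tail of an oversized block is elided here;
      // see the webrev.)
      return (MetaWord*) free_block;
    }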
> On 9/30/16 12:02, Coleen Phillimore wrote:
>> Summary: Return Metablocks smaller than dictionary's dark matter.
>>
>> This change was contributed by Jon Masamitsu and myself.  To reclaim
>> "dark matter" this change adds an array of small blocks by size,
>> created lazily, to return Metablocks smaller than the
>> BinaryTreeDictionary entry's minimum size.  This change also fixes a
>> small-object double-free bug and adds debugging code to check for
>> this case.
>>
>> With this change, the submitted test case runs indefinitely.  It also
>> passed rbt tier 1-5 testing.
>>
>> open webrev at http://cr.openjdk.java.net/~coleenp/8164921.01/webrev
>> bug link https://bugs.openjdk.java.net/browse/JDK-8164921
>>
>> Thanks,
>> Coleen and Jon
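
To make the quoted summary concrete: the "array of small blocks by size"
can be modeled as one LIFO free list per word size, indexed by size.  The
standalone sketch below is illustrative only, not the code in the webrev
(among other things, the lazy creation mentioned above is omitted); the
3- and 12-word bounds come from the discussion earlier in this thread.

    #include <cassert>
    #include <cstddef>

    // A freed block; the list link is stored in the block's own memory.
    struct FreeBlock {
      FreeBlock* next;
    };

    class SmallBlocksModel {
      // Sizes in words.  Blocks of 12 or more words would go to the
      // BinaryTreeDictionary instead of this array.
      static const size_t _min_word_size = 3;
      static const size_t _max_word_size = 12;
      FreeBlock* _lists[_max_word_size - _min_word_size];  // one list per size

     public:
      SmallBlocksModel() {
        for (size_t i = 0; i < _max_word_size - _min_word_size; i++) {
          _lists[i] = NULL;
        }
      }

      // Record a freed block of exactly word_size words starting at p.
      void return_block(void* p, size_t word_size) {
        assert(word_size >= _min_word_size && word_size < _max_word_size);
        FreeBlock* b = static_cast<FreeBlock*>(p);
        b->next = _lists[word_size - _min_word_size];
        _lists[word_size - _min_word_size] = b;
      }

      // Reuse a freed block of exactly word_size words, or NULL if none.
      void* get_block(size_t word_size) {
        if (word_size < _min_word_size || word_size >= _max_word_size) {
          return NULL;
        }
        FreeBlock* b = _lists[word_size - _min_word_size];
        if (b != NULL) {
          _lists[word_size - _min_word_size] = b->next;
        }
        return b;
      }
    };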

