Request for review(s) - 8031323: Optionally align objects copied to survivor spaces

Jon Masamitsu jon.masamitsu at oracle.com
Wed Jul 30 04:27:01 UTC 2014


Bengt and Thomas,

I updated with the latest hs-gc and had to do some merging by
hand.  The part of the change that merged cleanly is here.

http://cr.openjdk.java.net/~jmasa/8031323/webrev_merge.03/

The changes I merged by hand are

http://cr.openjdk.java.net/~jmasa/8031323/webrev_delta.03/

The complete change is

http://cr.openjdk.java.net/~jmasa/8031323/webrev.03/

I moved parGCAllocBuffer.inline.hpp from src/share/vm/memory to

src/share/vm/gc_implementation/shared/parGCAllocBuffer.inline.hpp

I compared these with webrev.02 and believe they are consistent
with it.  I re-tested and there were no surprises.

Jon

On 5/14/2014 8:38 AM, Bengt Rutisson wrote:
>
> Hi Jon,
>
> Thanks for fixing all of this!
>
> Looks good. Reviewed.
>
> Bengt
>
> On 5/14/14 4:15 PM, Jon Masamitsu wrote:
>> Bengt,
>>
>> Thanks for looking at this.  Responses in-line.
>>
>> On 5/5/2014 4:52 AM, Bengt Rutisson wrote:
>>>
>>> Hi Jon,
>>>
>>> On 2014-05-02 19:25, Jon Masamitsu wrote:
>>>> New webrev with problems addressed except as noted below.
>>>>
>>>> http://cr.openjdk.java.net/~jmasa/8031323/webrev.01/
>>>
>>>
>>> To make it compile I had to remove the inline keyword from the 
>>> declaration of CollectedHeap::align_allocation_or_fail():
>>>
>>> diff --git a/src/share/vm/gc_interface/collectedHeap.hpp b/src/share/vm/gc_interface/collectedHeap.hpp
>>> --- a/src/share/vm/gc_interface/collectedHeap.hpp
>>> +++ b/src/share/vm/gc_interface/collectedHeap.hpp
>>> @@ -353,7 +353,7 @@
>>>
>>>    // Return the address "addr" aligned by "alignment_in_bytes" if such
>>>    // an address is below "end".  Return NULL otherwise.
>>> -  inline static HeapWord* align_allocation_or_fail(HeapWord* addr,
>>> +  static HeapWord* align_allocation_or_fail(HeapWord* addr,
>>>                                                     HeapWord* end,
>>>                                                     unsigned short alignment_in_bytes);
>>>
>>>
>>> The includes for the usages of 
>>> CollectedHeap::align_allocation_or_fail() need to be updated.
>>>
>>> space.cpp should include collectedHeap.inline.hpp.
>>>
>>> psPromotionLAB.hpp is using 
>>> CollectedHeap::align_allocation_or_fail() and thus would need the 
>>> include too, but we should not include inline files from hpp files, 
>>> so you need to move PSYoungPromotionLAB::allocate() into 
>>> psPromotionLAB.inline.hpp and have that include 
>>> collectedHeap.inline.hpp.
>>>
>>> Similarly parGCAllocBuffer.hpp would need the include too since 
>>> ParGCAllocBuffer::allocate_aligned() uses 
>>> CollectedHeap::align_allocation_or_fail(). But again we should not 
>>> include inline files in hpp files so 
>>> ParGCAllocBuffer::allocate_aligned() should probably be moved into 
>>> the cpp file. Or, if we are worried about performance regressions, we 
>>> need to introduce a parGCAllocBuffer.inline.hpp file and move it there.
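>>>
>>> As an illustration of that last option (only a sketch -- the include
>>> guard, the exact signature, and the use of _top/_end below are
>>> assumptions, not the actual patch), the new file would have roughly
>>> this shape:
>>>
>>> // gc_implementation/shared/parGCAllocBuffer.inline.hpp (hypothetical)
>>> #ifndef SHARE_VM_GC_IMPLEMENTATION_SHARED_PARGCALLOCBUFFER_INLINE_HPP
>>> #define SHARE_VM_GC_IMPLEMENTATION_SHARED_PARGCALLOCBUFFER_INLINE_HPP
>>>
>>> #include "gc_implementation/shared/parGCAllocBuffer.hpp"
>>> #include "gc_interface/collectedHeap.inline.hpp"
>>>
>>> // Defined here, not in parGCAllocBuffer.hpp, so that no hpp file ends
>>> // up including an inline.hpp file; callers include this file instead.
>>> inline HeapWord* ParGCAllocBuffer::allocate_aligned(size_t word_sz,
>>>                                                      unsigned short alignment_in_bytes) {
>>>   // Bump _top to the requested alignment, filling the gap with a dummy
>>>   // object; give up if the aligned address falls outside the buffer.
>>>   HeapWord* aligned_top = CollectedHeap::align_allocation_or_fail(_top, _end,
>>>                                                                   alignment_in_bytes);
>>>   if (aligned_top == NULL) {
>>>     return NULL;
>>>   }
>>>   _top = aligned_top;
>>>   return allocate(word_sz);   // ordinary allocation at the aligned top
>>> }
>>>
>>> #endif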
>>>
>> Now compiles with and without USE_PRECOMPILED_HEADER=0.  I left the
>> inline methods as inline.
>>
>>>
>>>
>>> In CollectedHeap::align_allocation_or_fail(), I think we can replace 
>>> this:
>>>
>>>   if(new_addr > addr && new_addr < end) {
>>>
>>> with:
>>>
>>>   assert(new_addr > addr, "some good err_msg");
>>>   if(new_addr < end) {
>>>
>>>
>>
>> Fixed.
>>
>>>
>>> In ContiguousSpace::allocate_aligned() I think we can make the 
>>> assert for the alignment of obj a bit stronger. Now we do:
>>>
>>> assert(is_aligned(obj) && is_aligned(new_top), "checking alignment");
>>>
>>> But we have just gone through some trouble to make sure that obj is 
>>> aligned to SurvivorAlignmentInBytes, so I think we should assert 
>>> that rather than just the 8-byte alignment that Space::is_aligned() 
>>> gives us.
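>>>
>>> Concretely, something along these lines (just a sketch; it assumes the
>>> is_ptr_aligned() helper is usable in this context):
>>>
>>>   // Check the stronger survivor alignment on obj, not just the default
>>>   // object alignment (hypothetical form of the strengthened assert).
>>>   assert(is_ptr_aligned(obj, SurvivorAlignmentInBytes) && is_aligned(new_top),
>>>          "checking alignment");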
>>>
>>
>> Fixed.
>>
>>>
>>> PSYoungPromotionLAB::allocate() and 
>>> ContiguousSpace::allocate_aligned() are part of the code duplication 
>>> that you plan on fixing with JDK-8042321. I'm fine with fixing that 
>>> in a separate change, but it would be nice if we can make these 
>>> methods look more similar. Right now they use different return 
>>> patterns.
>>>
>>> PSYoungPromotionLAB::allocate() uses:
>>>
>>> if (allocation worked) {
>>>   return obj;
>>> }
>>> ...
>>> return NULL;
>>>
>>> But ContiguousSpace::allocate_aligned() uses an else statement:
>>>
>>> if (allocation worked) {
>>>   return obj;
>>> } else {
>>>   ...
>>>   return NULL;
>>> }
>>>
>> Fixed.  Both have an else clause.
>>
>>> Similarly to the comment about the assert in 
>>> ContiguousSpace::allocate_aligned() I think the assert in 
>>> PSYoungPromotionLAB::allocate() can be strengthened to assert that 
>>> obj is SurvivorAlignmentInBytes aligned.
>>>
>>
>> Fixed.
>>
>>>
>>> A minor thing about oop.pcgc.inline.hpp
>>>
>>> -// Used by parallel old GC.
>>> +// forward_to_atomic() is used by parallel old and G1 GCs.
>>>
>>> You updated this comment to include G1, but the method is actually 
>>> also used by ParNew (see 
>>> ParNewGeneration::copy_to_survivor_space_avoiding_promotion_undo()) 
>>> and I don't find any usages in ParallelOld. I would suggest just 
>>> getting rid of the comment. It is not important to have a comment 
>>> saying who uses it.
>>>
>>>
>> Deleted comment.
>>
>>>
>>> Another very minor thing. The extra line feed added around line 1200 
>>> in parNewGeneration.cpp does not seem related to this change:
>>
>> I think the extra blank line was in the original.  I added it back.
>>
>>>
>>> http://cr.openjdk.java.net/~jmasa/8031323/webrev.01/src/share/vm/gc_implementation/parNew/parNewGeneration.cpp.cdiff.html 
>>>
>>>
>>>
>>> Testing. There are tests for this feature coming, right? What kind 
>>> of testing have you done so far on the current patch?
>>
>> I added diagnostics to count the number of allocations that were
>> aligned to 64 bytes, then ran with different values of
>> SurvivorAlignmentInBytes, in particular 8 bytes and 64 bytes.  I
>> printed the fraction of survivor allocations and promoted allocations
>> that were aligned and not aligned after each collection.
>> I looked at output such as the following (for 8-byte alignment)
>>
>> Promoted addresses aligned 462 missed 3178 p_fraction  0.126923
>> Survivor addresses aligned 933 missed 6050 ss_fraction  0.133610
>>
>> and (for 64-byte alignment)
>>
>> Promoted addresses aligned 471 missed 3168 p_fraction  0.129431
>> Survivor addresses aligned 7107 missed 0 ss_fraction  1.000000
>>
>> to see if it was as I expected it to be.
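>>
>> Counting of this sort amounts to something like the following sketch
>> (a hypothetical reconstruction -- the names, the zero guard, and the
>> output format are made up; this code is not in any of the webrevs):
>>
>> // Count how many survivor-space allocations landed on the requested
>> // boundary, and report the fraction after each collection.
>> static size_t _survivor_aligned = 0;
>> static size_t _survivor_missed  = 0;
>>
>> static void note_survivor_allocation(HeapWord* obj) {
>>   if (((uintptr_t) obj % SurvivorAlignmentInBytes) == 0) {
>>     _survivor_aligned++;
>>   } else {
>>     _survivor_missed++;
>>   }
>> }
>>
>> static void print_survivor_alignment_stats() {
>>   size_t total = _survivor_aligned + _survivor_missed;
>>   if (total == 0) return;
>>   gclog_or_tty->print_cr("Survivor addresses aligned " SIZE_FORMAT
>>                          " missed " SIZE_FORMAT " ss_fraction %f",
>>                          _survivor_aligned, _survivor_missed,
>>                          (double) _survivor_aligned / (double) total);
>> }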
>>
>> The webrev with the changes is at
>>
>> http://cr.openjdk.java.net/~jmasa/8031323/webrev_delta.02/
>>
>> The webrev shows the changes, but also contains (if you look beyond
>> just the changes) the diagnostic printing code I described above.
>> You can ignore that, since it is not in the complete webrev.
>>
>> The complete webrev is at
>>
>> http://cr.openjdk.java.net/~jmasa/8031323/webrev.02/
>>
>> Thanks again.
>>
>> Jon
>>
>>
>>
>>>
>>> Thanks,
>>> Bengt
>>>
>>>
>>>>
>>>> Thanks.
>>>>
>>>>
>>>> On 03/18/2014 06:30 AM, Bengt Rutisson wrote:
>>>>>
>>>>> Hi Jon,
>>>>>
>>>>> On 2014-03-17 05:40, Jon Masamitsu wrote:
>>>>>> 8031323: Optionally align objects copied to survivor spaces
>>>>>>
>>>>>> Add the option to specify an alignment for allocations into the
>>>>>> survivor spaces.  Survivor space allocations will overflow if the
>>>>>> survivor spaces have not been increased.   The expected use
>>>>>> is to align survivor space allocations to cache line sizes.
>>>>>>
>>>>>> http://cr.openjdk.java.net/~jmasa/8031323/webrev.00/
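>>>>>>
>>>>>> For context, enabling the option would look something like this
>>>>>> (the -XX:+UnlockExperimentalVMOptions gating is an assumption and
>>>>>> is not stated in this mail; only the SurvivorAlignmentInBytes name
>>>>>> and the 64-byte value appear in the review thread):
>>>>>>
>>>>>>   java -XX:+UnlockExperimentalVMOptions -XX:SurvivorAlignmentInBytes=64 ...
>>>>>>
>>>>>> where 64 bytes matches a typical x86 cache line size.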
>>>>>
>>>>> For G1 it looks to me like we are aligning objects in the old 
>>>>> regions as well. I thought the intent was to only align in the 
>>>>> survivor regions. Am I missing something?
>>>>
>>>> Yes, that was an error.   Fixed in the new webrev.
>>>>
>>>>>
>>>>> Note that it is not enough to check the alloc purpose for G1 since 
>>>>> G1CollectedHeap::par_allocate_during_gc() only takes the purpose 
>>>>> as a hint and may still return buffers from other regions than 
>>>>> expected for a purpose. But I guess that in this case it is not 
>>>>> too important to get it exactly right, so it may be enough to 
>>>>> check the alloc purpose value.
>>>>
>>>> I did use the purpose, and there are cases where alignments in the
>>>> survivor spaces can be missed.  I think the number missed is a very
>>>> small percentage of the total, so I'm going to let those slip by.  I
>>>> thought that preferable to the more extensive coding changes it would
>>>> take to catch them.
>>>>>
>>>>>
>>>>> The code in PSYoungPromotionLAB::allocate(), 
>>>>> ParGCAllocBuffer::allocate_aligned() and 
>>>>> ContiguousSpace::allocate_aligned() is very similar. Is there a 
>>>>> way to consolidate these methods?
>>>>>
>>>> I looked at this but did not come up with a suitable way to fix it.
>>>> I've opened a CR to track the issue.
>>>>
>>>> https://bugs.openjdk.java.net/browse/JDK-8042321
>>>>
>>>>
>>>>>
>>>>> The code in CollectedHeap::align_allocation_or_fail() looks quite 
>>>>> different from other places in the code base where we do padding. 
>>>>> What do you think about doing something like this instead?
>>>>>
>>>>> inline HeapWord* CollectedHeap::align_allocation_or_fail(HeapWord* addr,
>>>>>                                                           HeapWord* end,
>>>>>                                                           unsigned short alignment_in_bytes) {
>>>>>   if (alignment_in_bytes <= ObjectAlignmentInBytes) {
>>>>>     return addr;
>>>>>   }
>>>>>
>>>>>   assert(is_ptr_aligned(addr, HeapWordSize), "");
>>>>>   assert(is_size_aligned(alignment_in_bytes, HeapWordSize), "");
>>>>>
>>>>>   HeapWord* new_addr = (HeapWord*) align_pointer_up(addr, alignment_in_bytes);
>>>>>   size_t padding = pointer_delta(new_addr, addr);
>>>>>   if (padding == 0) {
>>>>>     return addr;
>>>>>   }
>>>>>
>>>>>   if (padding < CollectedHeap::min_fill_size()) {
>>>>>     padding += alignment_in_bytes / HeapWordSize;
>>>>>     assert(padding >= CollectedHeap::min_fill_size(), "");
>>>>>     new_addr = addr + padding;
>>>>>   }
>>>>>
>>>>>   if(new_addr > addr && new_addr < end) {
>>>>>     CollectedHeap::fill_with_object(addr, padding);
>>>>>     return new_addr;
>>>>>   } else {
>>>>>     return NULL;
>>>>>   }
>>>>> }
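>>>>>
>>>>> The same padding logic can be tried outside the VM with a small
>>>>> standalone program like the one below (only an illustration -- the
>>>>> types and constants are simplified stand-ins for the HotSpot ones):
>>>>>
>>>>> #include <cassert>
>>>>> #include <cstddef>
>>>>> #include <cstdint>
>>>>> #include <cstdio>
>>>>>
>>>>> typedef uintptr_t HeapWord;                 // stand-in for HotSpot's HeapWord
>>>>> static const size_t HeapWordSize   = sizeof(HeapWord);
>>>>> static const size_t MinFillerWords = 2;     // stand-in for min_fill_size()
>>>>>
>>>>> static HeapWord* align_or_fail(HeapWord* addr, HeapWord* end,
>>>>>                                size_t alignment_in_bytes) {
>>>>>   assert((alignment_in_bytes & (alignment_in_bytes - 1)) == 0 &&
>>>>>          "alignment must be a power of two");
>>>>>   uintptr_t p       = (uintptr_t) addr;
>>>>>   uintptr_t aligned = (p + alignment_in_bytes - 1) & ~(uintptr_t)(alignment_in_bytes - 1);
>>>>>   size_t padding = (aligned - p) / HeapWordSize;   // gap in words
>>>>>   if (padding == 0) {
>>>>>     return addr;                          // already aligned, nothing to fill
>>>>>   }
>>>>>   if (padding < MinFillerWords) {
>>>>>     // Gap too small to describe with a filler object: skip ahead by one
>>>>>     // more alignment unit so the gap becomes large enough to fill.
>>>>>     padding += alignment_in_bytes / HeapWordSize;
>>>>>   }
>>>>>   HeapWord* new_addr = addr + padding;
>>>>>   if (new_addr < end) {
>>>>>     // The real code formats a filler object over [addr, new_addr) here.
>>>>>     return new_addr;
>>>>>   }
>>>>>   return NULL;                            // aligned address does not fit
>>>>> }
>>>>>
>>>>> int main() {
>>>>>   HeapWord heap[32];
>>>>>   HeapWord* r = align_or_fail(heap + 1, heap + 32, 64);
>>>>>   if (r != NULL) {
>>>>>     printf("padded %td words to reach a 64-byte boundary\n", r - (heap + 1));
>>>>>   }
>>>>>   return 0;
>>>>> }
>>>>>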
>>>>
>>>> Fixed that as suggested.
>>>>
>>>> Jon
>>>>>
>>>>> Thanks,
>>>>> Bengt
>>>>>
>>>>>>
>>>>>> Jon
>>>>>
>>>>
>>>
>>
>



