RFR (S): 8145672: Remove dependency of G1FromCardCache to HeapRegionRemSet
Jon Masamitsu
jon.masamitsu at oracle.com
Mon Dec 21 21:22:13 UTC 2015
On 12/21/2015 03:37 AM, Thomas Schatzl wrote:
> Hi Jon,
>
> thanks for the review.
>
> Comments inline.
>
> On Fri, 2015-12-18 at 10:23 -0800, Jon Masamitsu wrote:
>> Thomas,
>>
>> A couple of questions about code that is not part of your change (you
>> can ignore these) and a suggestion on expanding a comment.
>>
>> http://cr.openjdk.java.net/~tschatzl/8145672/webrev/src/share/vm/gc/g1/g1CollectedHeap.cpp.frames.html
>>
>> Not part of your change, but is the guarantee() in the right place?
>>
>>> 1936 const uint max_region_idx = (1U << (sizeof(RegionIdx_t)*BitsPerByte-1)) - 1;
>>> 1937 guarantee((max_regions() - 1) <= max_region_idx, "too many regions");
>>> 1938
>>> 1939 G1RemSet::init_heap(max_regions());
>> Meaning, isn't _max_regions initialized in init_heap() after the
>> guarantee()?
> There is no _max_regions; it is derived from
> G1CollectedHeap::reserved(). Also, the init_heap() you are referring to
> is G1RemSet::init_heap().
>
> I renamed the method to G1RemSet::initialize(), which is of course more
> generic and bland, but I hope it avoids this kind of confusion. (I did
> not want to use after_heap_initialized() or the like, as I found that
> even worse.)
Ok.
>> http://cr.openjdk.java.net/~tschatzl/8145672/webrev/src/share/vm/gc/g1/g1FromCardCache.cpp.frames.html
>>
>> Again, not part of your change, but can lines 52 and 53 be lifted out
>> of the loop at line 51?
>>
>>> 51 for (uint i = 0; i < G1RemSet::num_par_rem_sets(); i++) {
>>> 52 uint end_idx = (start_idx + (uint)new_num_regions);
>>> 53 assert(end_idx <= _max_regions, "Must be within max.");
>>> 54 for (uint j = start_idx; j < end_idx; j++) {
>>> 55 set(i, j, InvalidCard);
>>> 56 }
>>> 57 }
> Done.
>
>> http://cr.openjdk.java.net/~tschatzl/8145672/webrev/src/share/vm/gc/g1/g1RemSet.hpp.frames.html
>>
>> Thank you for adding this comment.
>>
>>> 79 // Gives an approximation on how many threads add records to a remembered set
>>> 80 // in parallel for sizing buffers to decrease performance losses due to sharing.
>>> 81 // Examples are mutator threads together with the concurrent refinement threads
>>> 82 // or GC threads.
>>> 83 static uint num_par_rem_sets();
>> Can you add what kind of buffers to "for sizing ??? buffers"?
> I changed it to:
>
> 78 // Gives an approximation on how many threads can be expected to add records to
> 79 // a remembered set in parallel. This can be used for sizing data structures to
> 80 // decrease performance losses due to data structure sharing.
> 81 // Examples for quantities that influence this value are the maximum number of
> 82 // mutator threads, maximum number of concurrent refinement or GC threads.
>
> I hope this is somewhat better.
Thanks.
>
> The current formula may actually be overkill for very large systems (like
> using the processor count in the formula directly). I think it could be
> scaled down to some degree for "large" systems without a performance
> penalty. However, that is different work.
>
>> Would this be too specific to the current implementation? Meaning, do
>> you want a more general description?
>>
>> // Examples are the maximum number of mutator threads plus the maximum number of
>> // concurrent refinement threads or the maximum number of GC threads, whichever is
>> // larger.
> See above.
>
>> Leave out the "maximum" (in all 3 places if it is understood).
>>
>> Otherwise, looks good.
>>
>> None of these is critical to the patch (since they are mostly not part of
>> your change), so no need to delay your push for any of them.
> New webrevs at
> http://cr.openjdk.java.net/~tschatzl/8145672/webrev.1/ (full)
> http://cr.openjdk.java.net/~tschatzl/8145672/webrev.0_to_1/ (diff)
>
http://cr.openjdk.java.net/~tschatzl/8145672/webrev.0_to_1/src/share/vm/gc/g1/g1RemSet.hpp.udiff.html
This doesn't quite sound right.
+ // Initialize data that depends on that the heap size is known.
Maybe
// Initialize data that depends on the heap size being known.
I don't need a new webrev.
Jon
More information about the hotspot-gc-dev mailing list