RFR: 8150676: Use BufferNode index

Jon Masamitsu jon.masamitsu at oracle.com
Wed Mar 2 20:09:05 UTC 2016


Kim,

http://cr.openjdk.java.net/~kbarrett/8150676/webrev.00/src/share/vm/gc/g1/dirtyCardQueue.hpp.frames.html

55 // Apply the closure to all active elements, from index to size. If


Change "size" to "_sz"?  It's hard to understand what "size" is from 
only the
signature of apply_closure().  Using "_sz" gives a (weak) hint that the 
upper limit
is a field in the class.
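
For example, the comment could read something like (suggested wording only):

    // Apply the closure to all active elements, from index to _sz.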

   69   static bool apply_closure_to_buffer(CardTableEntryClosure* cl,

Would "apply_closure_to_node" now be a better name?

http://cr.openjdk.java.net/~kbarrett/8150676/webrev.00/src/share/vm/gc/g1/satbMarkQueue.cpp.frames.html

> 129 if (retain_entry(entry, g1h)) {
> 130 // Found keeper. Search high to low for an entry to discard.
> 131 while ((src < --dst) && retain_entry(*dst, g1h)) { }
> 132 if (src >= dst) break; // Done if no discard found.

If I'm at line 132, then "src" points to a keeper and is the
lowest-addressed keeper.  If that's true, and "dst" is allowed to be
less than "src" at line 132, as it seems to be, then the index
calculation

> 137 _index = pointer_delta(dst, buf, 1);


seems like it will be off (too small).
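
For reference, here is roughly the pattern as I read it (a simplified
sketch with made-up names; retain() stands in for retain_entry(entry, g1h),
and this is not the actual SATBMarkQueue::filter code):

    #include <cstddef>

    // Stand-in for retain_entry(entry, g1h) in this sketch.
    static bool retain(void* entry) { return entry != NULL; }

    // Two-fingered compaction of buf[index, size): keepers end up in the
    // high part of the buffer; returns the new start index of that part.
    static size_t filter_sketch(void** buf, size_t index, size_t size) {
      void** src = buf + index;   // scans low to high, looking for keepers
      void** dst = buf + size;    // scans high to low, looking for discards
      for ( ; src < dst; ++src) {
        if (retain(*src)) {
          // Found keeper.  Search high to low for an entry to discard.
          while ((src < --dst) && retain(*dst)) { }
          if (src >= dst) break;  // Done if no discard found.
          *dst = *src;            // Move the keeper into the discarded slot.
        }
      }
      return dst - buf;           // retained entries are [dst, buf + size)
    }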

http://cr.openjdk.java.net/~kbarrett/8150676/webrev.00/src/share/vm/gc/g1/ptrQueue.cpp.frames.html

209 BufferNode* node = BufferNode::make_node_from_buffer(_buf, _index);

Move line 209 above line 190

  190     if (_lock) {

and delete line 222?

222 BufferNode* node = BufferNode::make_node_from_buffer(_buf, _index);
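
Something like this is what I had in mind (a sketch of the restructuring
only, not the actual ptrQueue.cpp code; the branch bodies are elided):

    BufferNode* node = BufferNode::make_node_from_buffer(_buf, _index);
    if (_lock) {
      // ... existing code, now using the already-created node ...
    } else {
      // ... existing code, using the same node instead of re-creating
      // it at line 222 ...
    }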

Jon


On 03/01/2016 08:00 PM, Kim Barrett wrote:
>> On Mar 1, 2016, at 9:56 PM, Kim Barrett <kim.barrett at oracle.com> wrote:
>>
>> Please review this change to PtrQueue and its derivatives to maintain
>> the index in BufferNode where needed.  This allowed the removal of
>> code to fill inactive leading portions of buffers with NULL and to
>> remove code to skip over NULL entries.
>>
>> Removed unused DirtyCardQueueSet::apply_closure_to_all_completed_buffers,
>> rather than fixing its BufferNode manipulation.
>>
>> Further changed SATBMarkQueue::filter to use two-fingered compaction,
>> which may further reduce the number of writes to the buffer during
>> filtering.  For example, using specjbb2015, with over 2.5M buffers
>> processed, the number of writes using the new two-fingered compaction
>> (12M) was a factor of 50 fewer than needed by the (non-NULLing)
>> sliding algorithm (60M), and a factor of 250 fewer than the original
>> sliding algorithm (330M).
> Oops, the relative factors are correct, but the values for the sliding write counts
> are wrong.  Should be 12M (two-fingered), 600M (new slide) and 3300M (old slide).
>
>>   On average, filtering a buffer removed
>> about 75% of the entries in that test.
>>
>> CR:
>> https://bugs.openjdk.java.net/browse/JDK-8150676
>>
>> Webrev:
>> http://cr.openjdk.java.net/~kbarrett/8150676/webrev.00
>>
>> Testing:
>> JPRT
>> Aurora ad-hoc defaults + GC nightly + Runtime nightly
>
