RFR(M/L): 7145569: G1: optimize nmethods scanning

Mikael Gerdin mikael.gerdin at oracle.com
Thu Jun 27 08:25:52 UTC 2013


John,

On 2013-06-26 23:30, John Cuthbertson wrote:
> Hi Everyone,
>
> A new version of the changes can be found at:
> http://cr.openjdk.java.net/~johnc/7145569/webrev.2/
>
> Changes in this version include:
>
> * The verification code blob closure in g1CollectedHeap.cpp now inherits
> directly from CodeBlobClosure rather than CodeBlobToOopClosure to
> address Mikael's final comment.
> * The list of nmethods has been moved from the HeapRegion class to the
> HeapRegionRemSet class. The routines in HeapRegion are now, as Thomas
> suggested, wrappers and invoke the associated method from the RSet owned
> by the heap region.
> * Added a routine to return the number of bytes currently occupied by
> the list of nmethods and used that in the G1SummarizeRSetStats output
> and the G1PrintRegionLivenessInfo output. I've been back and forth about
> whether to include the size of the memory occupied by the code roots
> in the value returned by HeapRegionRemSet::mem_size(). In this
> implementation I've included it, but have also added separate output for
> it in the RSet stats and region liveness output.
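For readers following along, the shape of that change is roughly the following (a freestanding, simplified sketch: the class shapes and names here are stand-ins, not the actual HotSpot declarations, and a std::vector stands in for HotSpot's GrowableArray):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for a compiled method; the real nmethod is far richer.
struct nmethod {};

// The remembered set now owns the list of nmethods (code roots) whose
// embedded oops point into the owning region.
class HeapRegionRemSet {
  std::vector<nmethod*> _strong_code_roots; // GrowableArray in HotSpot
public:
  void add_strong_code_root(nmethod* nm) { _strong_code_roots.push_back(nm); }
  size_t strong_code_roots_list_length() const { return _strong_code_roots.size(); }

  // Bytes occupied by the code-root list: counts the backing array's
  // capacity, not just its length, since that is what is allocated.
  size_t strong_code_roots_mem_size() const {
    return sizeof(*this) + _strong_code_roots.capacity() * sizeof(nmethod*);
  }

  // Total remembered-set footprint now includes the code-root list.
  size_t mem_size() const {
    size_t other_rs_bytes = 128; // placeholder for OtherRegionsTable etc.
    return other_rs_bytes + strong_code_roots_mem_size();
  }
};

// HeapRegion keeps the old entry points as thin wrappers that simply
// forward to the RSet it owns.
class HeapRegion {
  HeapRegionRemSet _rem_set;
public:
  HeapRegionRemSet* rem_set() { return &_rem_set; }
  void add_strong_code_root(nmethod* nm) { _rem_set.add_strong_code_root(nm); }
  size_t strong_code_roots_mem_size() { return _rem_set.strong_code_roots_mem_size(); }
};
```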
>
> In the webrev I have folded Mikael's and Thomas' first set of comments
> into the main patch:
>
> http://cr.openjdk.java.net/~johnc/7145569/webrev.2/webrev.1.optimize-nmethod-scanning/
>
> Thomas' second set of comments are given in:
>
> http://cr.openjdk.java.net/~johnc/7145569/webrev.2/webrev.2.thomas-comments-2/
>
> And, as always, the whole tamale is in:
>
> http://cr.openjdk.java.net/~johnc/7145569/webrev.2/webrev.all/

Looks good to me.

/Mikael

>
> An example of the new RSet stats output is:
>
>   Concurrent RS processed 154 cards
>    Of 21 completed buffers:
>             21 (100.0%) by concurrent RS threads.
>              0 (  0.0%) by mutator threads.
>    Concurrent RS threads times (s)
>            0.00     0.00     0.00     0.00
>    Concurrent sampling threads times (s)
>            0.00
>    Total heap region rem set sizes = 231K.  Max = 1K.
>    Static structures = 32K, free_lists = 0K.
>      0 occupied cards represented.
>      Max size region = 0:(O)[0xac400000,0xac446590,0xac500000], size = 2K, occupied = 0K.
>      Did 0 coarsenings.
>    Total heap region code-root set sizes = 10K.  Max = 0K.
>      Max size region = 0:(O)[0xac400000,0xac446590,0xac500000], size = 1K, num_elems = 32.
>
>
> The new G1PrintRegionLivenessInfo output is:
>
> ### PHASE Post-Marking @ 1.673
> ### HEAP  committed: 0xac400000-0xb5b00000  reserved:
> 0xac400000-0xec400000  region-size: 1048576
> ###
> ###   type         address-range       used  prev-live  next-live      gc-eff     remset  code-roots
> ###                                 (bytes)    (bytes)    (bytes)  (bytes/ms)    (bytes)     (bytes)
> ###   OLD  0xac400000-0xac500000     288144     284688     284688    295956.8       1852         196
> ###   FREE 0xac500000-0xac600000          0          0          0         0.0       1732          76
> ###   FREE 0xac600000-0xac700000          0          0          0         0.0       1732          76
> ###   FREE 0xac700000-0xac800000          0          0          0         0.0       1732          76
> ###   FREE 0xac800000-0xac900000          0          0          0         0.0       1732          76
> ###   FREE 0xac900000-0xaca00000          0          0          0         0.0       1732          76
> ###   FREE 0xaca00000-0xacb00000          0          0          0         0.0       1732          76
> [snip]
> ###   FREE 0xb5700000-0xb5800000          0          0          0         0.0       1732          76
> ###   FREE 0xb5800000-0xb5900000          0          0          0         0.0       1732          76
> ###   FREE 0xb5900000-0xb5a00000          0          0          0         0.0       1732          76
> ###   FREE 0xb5a00000-0xb5b00000          0          0          0         0.0       1732          76
> ###
> ### SUMMARY  capacity: 151.00 MB  used: 12.40 MB / 8.21 %  prev-live: 12.40 MB / 8.21 %  next-live: 2.67 MB / 1.77 %  remset: 0.28 MB  code-roots: 0.01 MB
>
> Testing:
> specjvm98, client/server with -XX:+VerifyBeforeGC -XX:+VerifyAfterGC
> -XX:+VerifyDuringGC -XX:+G1VerifyHeapRegionCodeRoots
> -XX:+G1SummarizeRSetStats -XX:G1SummarizeRSetStatsPeriod=5
> -XX:+G1PrintRegionLivenessInfo
> Kitchensink (4h runs), client/server with -XX:+VerifyBeforeGC
> -XX:+VerifyAfterGC -XX:+VerifyDuringGC -XX:+G1VerifyHeapRegionCodeRoots
>
> Thanks,
>
> JohnC
>
> On 6/21/2013 2:59 AM, Thomas Schatzl wrote:
>> Hi,
>>
>> On Thu, 2013-06-20 at 14:00 -0700, John Cuthbertson wrote:
>>> Hi Thomas,
>>>
>>> On 6/19/2013 1:33 PM, Thomas Schatzl wrote:
>>>> Hi,
>>>>
>>>>     one issue I forgot to mention: currently when printing remembered set
>>>> sizes, the new nmethod remembered set is not taken into account.
>>>>
>>>> I am asking myself: isn't the _strong_code_root_list a type of
>>>> remembered set, and shouldn't it be added to this size? What do you
>>>> think?
>>> I think you're right. In fact I've been calling it a sort of remembered set.
>> Great.
>>
>>>> Additionally it would be really nice to make this information available
>>>> in some form so that further (size related) issues in that area could be
>>>> diagnosed.
>>> In the short term we could certainly add a strong_code_root_mem_size()
>>> routine to HeapRegion and call that from the appropriate places. It
>>> should be straightforward - GrowableArrays keep track of their capacity.
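The capacity point is the key one: the memory a growable list occupies is determined by its allocated capacity, not its current length. A minimal, freestanding illustration (this is not HotSpot's GrowableArray, just a sketch of the same idea):

```cpp
#include <cassert>
#include <cstddef>

// Minimal growable array: mem_size() reports the header plus the full
// backing allocation, i.e. capacity rather than length.
template <typename T>
class GrowableArraySketch {
  T*     _data;
  size_t _len;
  size_t _cap;
public:
  explicit GrowableArraySketch(size_t initial_cap = 8)
    : _data(new T[initial_cap]), _len(0), _cap(initial_cap) {}
  ~GrowableArraySketch() { delete[] _data; }

  void append(const T& e) {
    if (_len == _cap) {
      // Double the backing store when full, copying existing elements.
      size_t new_cap = _cap * 2;
      T* bigger = new T[new_cap];
      for (size_t i = 0; i < _len; i++) bigger[i] = _data[i];
      delete[] _data;
      _data = bigger;
      _cap  = new_cap;
    }
    _data[_len++] = e;
  }

  size_t length() const { return _len; }

  // Bytes currently occupied by this structure.
  size_t mem_size() const { return sizeof(*this) + _cap * sizeof(T); }
};
```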
>> This seems suboptimal, as it would require all places that use
>> G1RemSet::mem_size() to add this extra call (though it is possible); my
>> intention is to include this value in the regular printouts of
>> remembered set sizes.
>>
>>>> Unfortunately at the moment the _strong_roots_code_list is located in
>>>> HeapRegion, not in the remembered set, where the method to get its size
>>>> is.
>>> Again, calling a routine from the HeapRegion that "owns" the RSet should
>>> be OK.  I'll send out a new webrev when I've done this.
>>>
>>> Longer term, moving the code roots into the actual RSet structure (not
>>> OtherRegionsTable though) is desirable. I'll investigate how much effort
>>> it will be to do this.
>> That's exactly what I had in mind. Thanks for looking into this.
>>
>> Thanks,
>>    Thomas
>>
>



More information about the hotspot-gc-dev mailing list