RFR (M) 8202813: Move vm_weak processing from SystemDictionary to WeakProcessor

coleen.phillimore at oracle.com
Tue May 29 13:13:36 UTC 2018



On 5/29/18 8:38 AM, Stefan Johansson wrote:
>
>
> On 2018-05-29 13:46, coleen.phillimore at oracle.com wrote:
>>
>>
>> On 5/29/18 6:27 AM, Stefan Johansson wrote:
>>> Hi Coleen,
>>>
>>> On 2018-05-25 17:32, coleen.phillimore at oracle.com wrote:
>>>> Summary: SystemDictionary has all strong roots.  The weak 
>>>> oop_storage is processed by the WeakProcessor so it can be scanned 
>>>> and cleared concurrently and/or by parallel threads.
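
For context, weak processing over an OopStorage boils down to the loop 
below.  This is a minimal standalone sketch with stand-in types: the 
closure names mirror HotSpot's, but the flat-vector storage and the 
function itself are illustrative, not the actual WeakProcessor code.

#include <cstddef>
#include <vector>

typedef void* oop;   // stand-in for HotSpot's ordinary object pointer

struct BoolObjectClosure {            // the GC's liveness oracle
  virtual bool do_object_b(oop obj) = 0;
};

struct OopClosure {                   // marks/forwards a live referent
  virtual void do_oop(oop* p) = 0;
};

// Stand-in for OopStorage: a flat array of root slots.  The real
// OopStorage is block-structured, which is what lets parallel GC
// workers claim disjoint chunks of it.
typedef std::vector<oop> RootStorage;

// Weak processing: slots whose referents the GC marked are kept (and
// possibly forwarded); the rest are cleared.  A parallel or concurrent
// version runs this same loop over separate chunks of the storage.
void weak_oops_do(RootStorage& storage,
                  BoolObjectClosure* is_alive,
                  OopClosure* keep_alive) {
  for (size_t i = 0; i < storage.size(); i++) {
    oop* slot = &storage[i];
    if (*slot == NULL) continue;      // already cleared
    if (is_alive->do_object_b(*slot)) {
      keep_alive->do_oop(slot);       // referent survives this cycle
    } else {
      *slot = NULL;                   // weak reference is cleared
    }
  }
}
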
>>>>
>>>> Please see bug for explanation.
>>>>
>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8202813.01/webrev
>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8202813
>>>>
>>>> There might be more GC code that can be cleaned out since 
>>>> G1RootClosures->weak_oops() seems unused now, but I'd like to defer 
>>>> that to a smaller cleanup.  Also ClassLoaderData is still processed 
>>>> inconsistently by the GCs, which would be nice to clean up.
>>>>
>>>> Tested with mach5 hs-tier1-5 with no failures.  Also ran Per's 
>>>> gc-test-suite, a dev-submit job for performance testing, and 
>>>> runThese with all collectors, with and without -XX:-ClassUnloading, 
>>>> as well as the class unloading gc tests (which are also in the 
>>>> numbered hs-tier runs).
>>>
>>> The changes look good, just a couple of questions:
>>> * In your performance runs did you check the pause times? From what 
>>> I can see, some of the processing has moved from a parallel phase to 
>>> a serial one, which could be problematic if the vm_weak_oop_storage 
>>> is large.
>>
>> Thank you for looking at this, Stefan.  The 
>> SystemDictionary::vm_weak_oop_storage is not large (~500 entries avg 
>> in my measurements with SPECjbb2015).  I don't think it benefited 
>> from parallelism.  That said, I don't think the few strong roots in 
>> SystemDictionary are worth processing in their own thread anymore 
>> either (the set used to be a lot larger when this was all set up).  
>> It might make sense to move all of these strong roots together 
>> into Universe.
>>
>> The goal is to make the oop_storage processing parallel, which Kim is 
>> working on, and not have these separate.
>>
> Sounds good.
>
>> I did generate all of the gc.log files from a performance run but 
>> haven't dug them out yet.  Eric Caspole's been helping me graph the 
>> pause times, so can we talk offline about what to look for in these files?
>>
> Graphs are always nice :) One good thing that works again with the 
> latest JFR changes is to do the aurora-perf runs with JFR enabled. 
> That will give some basic GC events which are automatically parsed and 
> displayed in the report. That makes it a lot easier to spot pause time 
> regressions since they seldom show up in the benchmark result.
>
> What I would look out for is longer Young GC pause times, looking at 
> the average over a run is usually a good first step.
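
For reference, on JDKs where the jfr command-line tool exists (it 
postdates the build under review, so take this as the modern 
equivalent of the same workflow), the GC pause events can be recorded 
and dumped like this:

  java -XX:StartFlightRecording=filename=gc.jfr <benchmark args>
  jfr print --events jdk.GCPhasePause gc.jfr

Averaging the pause durations from those events over a run gives the 
first-pass comparison described above.
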
>
>>>
>>> * When turning off ClassUnloading we treated the vm_weak_oop_storage 
>>> as strong roots in the past; was that not needed? I don't see this 
>>> being done after this change.
>>
>> It's not needed because the oops in the vm_weak_oop_storage relating 
>> to ClassUnloading will be marked when we process the 
>> ClassLoaderDataGraph, so this extra walk only marked the additional 
>> ResolvedMethodTable oops. This table has appropriate NULL checks and 
>> should probably be removed from SystemDictionary.  It was added there 
>> for convenience.
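
To make that concrete, here is a toy model in plain C++ -- none of it 
is actual HotSpot code, the names just mirror the discussion.  With 
-XX:-ClassUnloading the ClassLoaderDataGraph is walked as a strong 
root first, so any class-related slot in vm_weak_oop_storage is 
already marked by the time the weak pass runs; only entries nothing 
else reached (e.g. dead ResolvedMethodTable oops) get cleared, and 
the table's NULL checks tolerate that.

#include <cstddef>
#include <set>
#include <vector>

typedef void* oop;      // stand-in for HotSpot's object pointer
std::set<oop> marked;   // toy stand-in for the GC's mark state

// Phase 1: strong roots.  With class unloading disabled, everything
// reachable through the ClassLoaderDataGraph is marked here.
void mark_strong(const std::vector<oop>& cldg_reachable) {
  for (size_t i = 0; i < cldg_reachable.size(); i++) {
    marked.insert(cldg_reachable[i]);
  }
}

// Phase 2: the weak pass over vm_weak_oop_storage.  Marked referents
// survive; unmarked ones are cleared.
void weak_process(std::vector<oop>& vm_weak) {
  for (size_t i = 0; i < vm_weak.size(); i++) {
    if (vm_weak[i] != NULL && marked.count(vm_weak[i]) == 0) {
      vm_weak[i] = NULL;
    }
  }
}

int main() {
  int a, b;                    // two dummy "objects"
  oop class_oop = &a;          // also reachable via the CLD graph
  oop rmt_oop   = &b;          // only referenced from the weak table

  std::vector<oop> cldg_reachable(1, class_oop);
  std::vector<oop> vm_weak;
  vm_weak.push_back(class_oop);
  vm_weak.push_back(rmt_oop);

  mark_strong(cldg_reachable); // no extra strong pass over vm_weak needed
  weak_process(vm_weak);       // class_oop survives, rmt_oop is cleared
  return 0;
}
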
>>
>
> Thanks for the explanation. With that, the change looks good code-wise, 
> and if you feel comfortable with the performance I don't see any 
> problem moving forward with the change.

The overall performance results were neutral (critical jOPS), so I think 
they are fine, but I'll dig out the data and post it in the bug report 
for the record (and to double-check that nothing is amiss).

Thanks!
Coleen

>
> Cheers,
> Stefan
>
>> Thank you!
>> Coleen
>>
>>>
>>> Thanks,
>>> Stefan
>>>
>>>>
>>>> Thanks,
>>>> Coleen
>>


