RFR 8247808: Move JVMTI strong oops to OopStorage
serguei.spitsyn at oracle.com
Wed Jul 15 22:26:29 UTC 2020
Hi Coleen,
Thank you for the explanation.
Thanks,
Serguei
On 7/15/20 12:45, coleen.phillimore at oracle.com wrote:
>
> Thank you for reviewing this, Serguei.
>
> On 7/15/20 1:33 PM, serguei.spitsyn at oracle.com wrote:
>> Hi Coleen,
>>
>> The update looks okay to me.
>> Also, I wonder what should happen to the JvmtiExport::weak_oops_do().
>
> Unfortunately, JvmtiExport::weak_oops_do() calls
> JvmtiTagMap::weak_oops_do(), which ends up doing more than just GC
> processing of the weak oops in the hashtable used for object tagging.
> Since the hash code for a tagged object is the address of its oop,
> GC also has to rehash the entries for any objects that have moved.
>
> I had a patch once that tried to fix this by using weak OopStorage
> and object->identity_hash(), but hashing all the objects that JVMTI
> was trying to tag didn't turn out to be a good thing to do. I ended
> up abandoning that change.
>
> Thanks,
> Coleen
>>
>> Thanks,
>> Serguei
>>
>>
>> On 7/15/20 08:38, coleen.phillimore at oracle.com wrote:
>>>
>>> Hi, This patch has been reviewed and I was waiting for the ability
>>> to define different OopStorages, but I'd like to fix that in a
>>> further change after the GC changes have been agreed upon and
>>> reviewed. Adding a new JVMTI OopStorage in the new mechanism is a
>>> smaller change.
>>>
>>> open webrev at
>>> http://cr.openjdk.java.net/~coleenp/2020/8247808.01/webrev
>>>
>>> Retested with tier1-3.
>>>
>>> Thanks,
>>> Coleen
>>>
>>>
>>>
>>> On 6/18/20 3:48 PM, coleen.phillimore at oracle.com wrote:
>>>>
>>>>
>>>> On 6/18/20 3:58 AM, Thomas Schatzl wrote:
>>>>> Hi,
>>>>>
>>>>> On 18.06.20 03:09, coleen.phillimore at oracle.com wrote:
>>>>>>
>>>>>>
>>>>>> On 6/17/20 7:49 PM, David Holmes wrote:
>>>>>>> Hi Coleen,
>>>>>>>
>>>>>>> On 18/06/2020 7:25 am, coleen.phillimore at oracle.com wrote:
>>>>>>>> Summary: Remove JVMTI oops_do calls from JVMTI and GCs
>>>>>>>>
>>>>>>>> Tested with tier1-3, also built shenandoah to verify shenandoah
>>>>>>>> changes.
>>>>>>>>
>>>>> [...]
>>>>>>
>>>>>> Kim noticed that G1 and ParallelGC should be processing these
>>>>>> roots in parallel (with many threads, since OopStorage has that
>>>>>> support) and he's going to or has filed a bug to fix it. As we
>>>>>> add more things to OopStorage (see upcoming RFRs), this will
>>>>>> become important.
>>>>>>
>>>>>
>>>>> I do not know which exact roots you want to move into OopStorage,
>>>>> but I would like to mention this concern: with moving everything
>>>>> into a single OopStorage (i.e. vm_globals in this case), I am
>>>>> worried that important information about the source of these oops
>>>>> gets lost.
>>>>>
>>>>> That makes it hard to understand where these oops came from when
>>>>> there is a performance problem in the "VM Globals" bucket.
>>>> Hi Thomas,
>>>>
>>>> I understand this concern. On the GC list there is a discussion
>>>> about having the ability to create different strong OopStorages,
>>>> changing the OopStorage code to process these roots and report
>>>> statistics in parallel (and/or concurrent), and not having to
>>>> cascade the code through all the GCs.
>>>>
>>>> I'm going to hold this change until this discussion is complete and
>>>> move the JVMTI and services/management oops_do oops into a
>>>> different OopStorage that can make use of this. Then you'll have
>>>> your statistics and we won't have classes needing traversal with
>>>> oops_do.
>>>>
>>>> Thanks,
>>>> Coleen
>>>>
>>>>>
>>>>> This may not apply to JVMTI oops, but other subsystems may have a
>>>>> significant number of oops, where it would be very interesting to
>>>>> know where a particular slowdown comes from.
>>>>>
>>>>> So I would prefer to keep some accounting here.
>>>>>
>>>>> Thanks,
>>>>> Thomas
>>>>
>>>
>>
>