RFR 8247808: Move JVMTI strong oops to OopStorage
coleen.phillimore at oracle.com
Thu Jun 18 19:48:11 UTC 2020
On 6/18/20 3:58 AM, Thomas Schatzl wrote:
> Hi,
>
> On 18.06.20 03:09, coleen.phillimore at oracle.com wrote:
>>
>>
>> On 6/17/20 7:49 PM, David Holmes wrote:
>>> Hi Coleen,
>>>
>>> On 18/06/2020 7:25 am, coleen.phillimore at oracle.com wrote:
>>>> Summary: Remove JVMTI oops_do calls from JVMTI and GCs
>>>>
>>>> Tested with tier1-3, also built shenandoah to verify shenandoah
>>>> changes.
>>>>
> [...]
>>
>> Kim noticed that G1 and ParallelGC should be processing these roots
>> in parallel (with many threads, since OopStorage has that support),
>> and he has filed, or is going to file, a bug to fix that. As we add
>> more things to OopStorage (see upcoming RFRs), this will become
>> important.
>>
>
> I do not know which exact roots you want to move into OopStorage, but
> I would like to mention this concern: by moving everything into a
> single OopStorage (i.e. vm_globals in this case), I am worried that
> important information about the source of these oops gets lost.
>
> That makes it hard to understand where these oops came from when
> there is a performance problem in the "VM Globals" bucket.
Hi Thomas,
I understand this concern. On the GC list there is a discussion about
adding the ability to create different strong OopStorages, changing the
OopStorage code to process these roots and report statistics in parallel
(and/or concurrently), and not having to cascade the code through all
the GCs.
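
For concreteness, here is a rough sketch of what a dedicated strong
OopStorage with parallel processing could look like. The storage name,
the holder class, and the exact ParState signatures are assumptions for
illustration based on the current OopStorage code, not the final design:

  #include "gc/shared/oopStorage.hpp"
  #include "gc/shared/oopStorageParState.hpp"
  #include "memory/iterator.hpp"

  // One storage per subsystem keeps per-source accounting possible
  // (hypothetical name; created during VM initialization).
  static OopStorage* _jvmti_strong = NULL;

  class JvmtiRootsParState {
    // Shared claim state: constructed once per GC pause.  Every GC
    // worker then calls oops_do() on the same object and claims
    // disjoint blocks of the storage, so the roots are processed in
    // parallel without any JVMTI-specific iteration code in the GCs.
    OopStorage::ParState<false /* concurrent */, false /* is_const */> _par_state;
   public:
    JvmtiRootsParState() : _par_state(_jvmti_strong) {}
    void oops_do(OopClosure* cl) { _par_state.oops_do(cl); } // per worker

  };

Timing each storage's oops_do separately is what would give the
per-source statistics you are asking about.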
I'm going to hold this change until that discussion is complete, and
then move the JVMTI and services/management oops_do oops into a
different OopStorage that can make use of this. Then you'll have your
statistics, and we won't have classes needing traversal with oops_do.
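
To show what the move itself looks like on the JVMTI side: a raw oop
field that every collector had to visit via oops_do() becomes a slot
allocated from the storage, so the GCs find it with no JVMTI-specific
code. Again just a sketch; the holder class is hypothetical:

  #include "gc/shared/oopStorage.hpp"
  #include "oops/access.hpp"

  class JvmtiOopSlot {       // hypothetical holder
    oop* _obj;               // slot handed out by the storage
   public:
    void set(OopStorage* storage, oop value) {
      _obj = storage->allocate();              // strong root from now on
      NativeAccess<>::oop_store(_obj, value);  // GC-aware store
    }
    oop get() const { return NativeAccess<>::oop_load(_obj); }
    void clear(OopStorage* storage) {
      storage->release(_obj);                  // no oops_do() needed
      _obj = NULL;
    }
  };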
Thanks,
Coleen
>
> This may not apply to JVMTI oops, but other subsystems may have a
> significant number of oops where it would be very interesting to know
> where a particular slowdown comes from.
>
> So I would prefer to keep some accounting here.
>
> Thanks,
> Thomas