RFR: 8201318: Introduce GCThreadLocalData to abstract GC-specific data belonging to a thread
Per Liden
per.liden at oracle.com
Wed Apr 11 12:38:02 UTC 2018
Thanks Martin!
/Per
On 04/11/2018 12:28 PM, Doerr, Martin wrote:
> Hi Per,
>
> thanks for simplifying the platform code. I just verified that it still builds on PPC64 and s390.
>
> Best regards,
> Martin
>
>
> -----Original Message-----
> From: Per Liden [mailto:per.liden at oracle.com]
> Sent: Mittwoch, 11. April 2018 11:55
> To: Aleksey Shipilev <shade at redhat.com>; Roman Kennke <rkennke at redhat.com>; Doerr, Martin <martin.doerr at sap.com>; hotspot-dev at openjdk.java.net; Robbin Ehn <robbin.ehn at oracle.com>; OSTERLUND,ERIK <erik.osterlund at oracle.com>
> Subject: Re: RFR: 8201318: Introduce GCThreadLocalData to abstract GC-specific data belonging to a thread
>
> Hi Aleksey,
>
> On 04/10/2018 05:04 PM, Aleksey Shipilev wrote:
>> On 04/10/2018 02:51 PM, Per Liden wrote:
>>> A couple of commits were pushed, which caused conflicts with this change, so here's a rebased version:
>>>
>>> http://cr.openjdk.java.net/~pliden/8201318/webrev.1
>>
>> *) I wonder if we can make special accessors for the composite offsets? E.g. instead of
>>
>>     Address in_progress(thread, in_bytes(G1ThreadLocalData::satb_mark_queue_offset() +
>>                                          SATBMarkQueue::byte_offset_of_active()));
>>     Address queue_index(thread, in_bytes(G1ThreadLocalData::satb_mark_queue_offset() +
>>                                          SATBMarkQueue::byte_offset_of_index()));
>>     Address buffer(thread, in_bytes(G1ThreadLocalData::satb_mark_queue_offset() +
>>                                     SATBMarkQueue::byte_offset_of_buf()));
>>
>> ...do:
>>
>> Address in_progress(thread, in_bytes(G1ThreadLocalData::satb_mark_queue_active_offset()));
>> Address queue_index(thread, in_bytes(G1ThreadLocalData::satb_mark_queue_index_offset()));
>> Address buffer(thread, in_bytes(G1ThreadLocalData::satb_mark_queue_buf_offset()));
>>
>> That probably eliminates the header dependency on SATBMarkQueue/DirtyCardQueue from
>> compiler/assembler stubs?
>
> Done.
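> Sketched roughly, the combined accessors fold the queue-internal offsets
> into G1ThreadLocalData itself (the classes below are simplified stand-ins
> with made-up field layouts, not the actual code from the webrev):

```cpp
#include <cassert>
#include <cstddef>

// Simplified stand-in for HotSpot's SATBMarkQueue; the field names and
// layout here are illustrative only.
struct SATBMarkQueue {
  bool   _active;
  size_t _index;
  void*  _buf;
  static size_t byte_offset_of_active() { return offsetof(SATBMarkQueue, _active); }
  static size_t byte_offset_of_index()  { return offsetof(SATBMarkQueue, _index); }
  static size_t byte_offset_of_buf()    { return offsetof(SATBMarkQueue, _buf); }
};

struct G1ThreadLocalData {
  SATBMarkQueue _satb_mark_queue;

  // Composite base offset (what the old call sites added to manually):
  static size_t satb_mark_queue_offset() {
    return offsetof(G1ThreadLocalData, _satb_mark_queue);
  }

  // Combined per-field accessors: callers no longer need the
  // SATBMarkQueue header just to compute a field address.
  static size_t satb_mark_queue_active_offset() {
    return satb_mark_queue_offset() + SATBMarkQueue::byte_offset_of_active();
  }
  static size_t satb_mark_queue_index_offset() {
    return satb_mark_queue_offset() + SATBMarkQueue::byte_offset_of_index();
  }
  static size_t satb_mark_queue_buf_offset() {
    return satb_mark_queue_offset() + SATBMarkQueue::byte_offset_of_buf();
  }
};
```

> The addition happens once, inside G1ThreadLocalData, instead of at every
> stub that builds an Address.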
>
>>
>> *) Should these new methods accept JavaThread*? I.e. we are expecting the GC thread-local data only
>> for JavaThreads, like attach/detach does?
>>
>> virtual void on_thread_create(Thread* thread);
>> virtual void on_thread_destroy(Thread* thread);
>> virtual void on_thread_attach(JavaThread* thread);
>> virtual void on_thread_detach(JavaThread* thread);
>
> The intention is that on_thread_create/destroy is called for all
> threads, whereas on_thread_attach/detach is only called for JavaThreads
> (i.e. threads that are added to the "threads list"). There's been
> discussions about treating all threads in the same way, i.e. they would
> all be on the threads list and they would all receive attach/detach
> calls. It's not clear yet if that is a good idea, but if that change is
> made in the future, then we might want to revisit this interface.
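> A minimal sketch of that split (Thread/JavaThread below are trivial
> stand-ins, not HotSpot's real classes): the signatures themselves enforce
> that attach/detach only ever see JavaThreads.

```cpp
#include <string>
#include <vector>

// Stand-ins for illustration only; the real hooks take HotSpot's
// Thread/JavaThread types.
struct Thread {};
struct JavaThread : public Thread {};

struct BarrierSet {
  std::vector<std::string> calls;  // records hook invocations, for the sketch
  virtual ~BarrierSet() {}
  // Called for every thread, Java or not:
  virtual void on_thread_create(Thread*)  { calls.push_back("create"); }
  virtual void on_thread_destroy(Thread*) { calls.push_back("destroy"); }
  // Called only for threads added to the threads list, i.e. JavaThreads:
  virtual void on_thread_attach(JavaThread*) { calls.push_back("attach"); }
  virtual void on_thread_detach(JavaThread*) { calls.push_back("detach"); }
};

// A non-Java thread only ever sees create/destroy:
inline void run_gc_worker(BarrierSet* bs) {
  Thread t;
  bs->on_thread_create(&t);
  // ... work ...
  bs->on_thread_destroy(&t);
}

// A JavaThread additionally attaches to and detaches from the threads list:
inline void run_java_thread(BarrierSet* bs) {
  JavaThread t;
  bs->on_thread_create(&t);
  bs->on_thread_attach(&t);
  // ... execution ...
  bs->on_thread_detach(&t);
  bs->on_thread_destroy(&t);
}
```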
>
>>
>> *) Wait, why do we call JVMCI members JavaThread_*?
>>
>> 64 static int JavaThread_satb_mark_queue_offset;
>> 65 static int JavaThread_dirty_card_queue_offset;
>
> Good catch! I switched to using the declare_constant_with_value()
> construct, so these members are now gone.
>
>>
>> *) I would probably add assert(UseG1GC) to G1ThreadLocalData accessor methods
>
> Done. Note however that we can't add asserts to the *_offset()
> functions, as those will be called by Graal regardless of whether G1 is
> enabled or not.
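> The shape of that split, sketched (UseG1GC here is a plain bool standing
> in for the real -XX:+UseG1GC flag, and the bodies are placeholders):

```cpp
#include <cassert>
#include <cstddef>

static bool UseG1GC = true;  // stand-in for the real VM flag

class G1ThreadLocalData {
public:
  // Data accessors can assert, since they are only reached when G1 runs:
  static bool satb_is_active(/* Thread* thread */) {
    assert(UseG1GC && "should only be called when G1 is enabled");
    return true;  // placeholder body for the sketch
  }

  // Offset functions must stay assert-free: Graal queries them while
  // building its view of the VM, whether or not G1 is the active GC.
  static size_t satb_mark_queue_offset() {
    return 0;  // placeholder value for the sketch
  }
};
```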
>
> As discussed elsewhere in this thread, the reason for keeping the
> GCThreadLocalData member early in Thread is to allow GC barriers to
> generate compact code. As Erik indicates, this may not matter that much
> in the end, but I also don't think it hurts to let it sit there.
>
> Also, I've shrunk GCThreadLocalData to 112 bytes, which is what G1
> currently needs. ZGC currently uses 156 bytes (IIRC), but that's a
> change that should stay in the ZGC repo for now.
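> The layout idea behind those numbers, sketched: Thread reserves an opaque,
> fixed-size blob early in its layout, and each collector overlays its own
> view on it. The 112 bytes is the figure above; the member names and the
> G1Data fields are made up for illustration.

```cpp
#include <cstddef>

// Opaque per-thread GC storage, sized to what G1 currently needs.
class GCThreadLocalData {
  char _data[112];  // GC-specific layout, opaque to Thread itself
};

class Thread {
  // Placed early so GC barrier code can address its fields with small,
  // compact offsets from the thread register.
  GCThreadLocalData _gc_data;
  // ... the rest of Thread's (much larger) state follows ...
};

// A collector reinterprets the blob as its own layout; it just has to fit.
struct G1Data {
  bool   satb_active;
  size_t satb_index;
  void*  satb_buf;
  size_t dirty_card_index;
  void*  dirty_card_buf;
};
static_assert(sizeof(G1Data) <= sizeof(GCThreadLocalData),
              "GC-specific data must fit in the reserved blob");
```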
>
> Updated webrev here:
>
> Diff: http://cr.openjdk.java.net/~pliden/8201318/webrev.1vs2/
> Full: http://cr.openjdk.java.net/~pliden/8201318/webrev.2/
>
> Thanks for reviewing.
>
> /Per
>
>>
>> Thanks,
>> -Aleksey
>>