TLAB and NUMA aware allocator

Vitaly Davidovich vitalyd at gmail.com
Tue Oct 2 11:29:43 UTC 2012


Thanks Igor, makes sense.

Vitaly

Sent from my phone
On Oct 1, 2012 7:42 PM, "Igor Veresov" <iggy.veresov at gmail.com> wrote:

> On Linux we just hope that the scheduler will leave a thread on the same
> node, which is what happens in reality. Also, per the generational hypothesis,
> we hope that in most cases the data will already be dead when such a
> migration happens.
>
> And like Jon said it's not an issue on Solaris.
>
> igor
>
> On Sep 27, 2012, at 12:41 PM, Vitaly Davidovich <vitalyd at gmail.com> wrote:
>
> Thanks Jon -- that blog entry was useful.
>
> Vitaly
>
> On Thu, Sep 27, 2012 at 12:55 AM, Jon Masamitsu <jon.masamitsu at oracle.com> wrote:
>
>> Vitaly,
>>
>> The current implementation depends on a thread not migrating
>> between nodes.  On Solaris that naturally happens.  I don't
>> remember the details but it's something like Solaris sees that
>> a thread XX is executing on node AA and using memory on AA,
>> so it leaves XX on AA.  On Linux I'm guessing (really guessing)
>> that there is a way to create an affinity between XX and AA.
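>>
>> For example (just a sketch, assuming libnuma is installed; see numa(3)),
>> a thread could bind itself to a node with something like:
>>
>>   #include <numa.h>    /* numa_available(), numa_run_on_node() */
>>   #include <stdio.h>
>>
>>   /* Sketch: restrict the calling thread to the CPUs of 'node' so that
>>    * its TLAB (and subsequent allocations) stay node-local. */
>>   static int pin_to_node(int node) {
>>       if (numa_available() < 0) {          /* no NUMA support */
>>           fprintf(stderr, "NUMA not available\n");
>>           return -1;
>>       }
>>       return numa_run_on_node(node);       /* 0 on success, -1 on error */
>>   }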
>>
>> This has all the things I ever knew about it.
>>
>> https://blogs.oracle.com/jonthecollector/entry/help_for_the_numa_weary
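>>
>> (That's the NUMA-aware allocator you get with -XX:+UseNUMA on top of the
>> parallel collector, e.g. something like: java -XX:+UseParallelGC -XX:+UseNUMA ...)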
>>
>> Jon
>>
>>
>> On 9/26/2012 4:09 PM, Vitaly Davidovich wrote:
>>
>>> Hi guys,
>>>
>>> If I understand it correctly, the NUMA allocator splits eden into regions
>>> and tries to ensure that an allocated object lands in a region local to
>>> the mutator thread.  How does this affect TLABs? Specifically, a TLAB will
>>> be handed out to a thread from the current node.  If the Java thread then
>>> migrates to a different node, its TLAB is presumably still on the previous
>>> node, leading to cross-node traffic? Is there a notion of a processor-local
>>> TLAB? In that case, access to already allocated objects would take a hit,
>>> but new allocations would not.
>>>
>>> The way I imagine a processor-local TLAB working is that when a thread
>>> migrates, the previous TLAB becomes available to whichever Java thread is
>>> on that processor now - that is, TLAB ownership changes.  The migrated
>>> thread then picks up allocations in the new TLAB.
>>>
>>> Allocation can still be bump-the-pointer, since only one thread can be
>>> running on the processor at a time.
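>>>
>>> Roughly what I have in mind, as a C sketch (names made up; HotSpot's real
>>> TLAB is per Java thread, in threadLocalAllocBuffer.hpp):
>>>
>>>   #include <stddef.h>
>>>
>>>   /* One buffer per processor; whichever thread is on the processor owns it. */
>>>   typedef struct {
>>>       char *top;   /* next free byte                */
>>>       char *end;   /* one past the last usable byte */
>>>   } cpu_local_tlab;
>>>
>>>   /* Bump-the-pointer allocation; no atomics needed if only the single
>>>    * thread currently on this processor touches the buffer. */
>>>   static void *tlab_alloc(cpu_local_tlab *t, size_t size) {
>>>       if ((size_t)(t->end - t->top) < size)
>>>           return NULL;   /* caller refills from node-local eden / slow path */
>>>       void *obj = t->top;
>>>       t->top += size;
>>>       return obj;
>>>   }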
>>>
>>> Is this or something like it already there? If not, what challenges am I
>>> overlooking from my high-level view?
>>>
>>> Thanks
>>>
>>> Sent from my phone
>>>
>>>
>
>