On Linux we simply rely on the scheduler leaving a thread on the same node, which is what happens in practice. Also, per the generational hypothesis, we expect that in most cases the data will already be dead by the time such a migration happens.

And, as Jon said, it's not an issue on Solaris.

igor

On Sep 27, 2012, at 12:41 PM, Vitaly Davidovich <vitalyd@gmail.com> wrote:

Thanks Jon -- that blog entry was useful.

Vitaly

On Thu, Sep 27, 2012 at 12:55 AM, Jon Masamitsu <jon.masamitsu@oracle.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Vitaly,<br>
<br>
The current implementation depends on a thread not migrating<br>
between nodes. On solaris that naturally happens. I don't<br>
remember the details but it's something like Solaris sees that<br>
a thread XX is executing on node AA and using memory on AA<br>
so it leaves XX on AA. On linux I'm guessing (really guessing)<br>
that there is a way to create an affinity between XX on AA.<br>
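
(For illustration only: a minimal sketch of what explicitly binding a thread to a node could look like on Linux with libnuma. This is an assumption about how such an affinity might be set up outside the JVM, not something HotSpot does itself; the function name is made up.)

    /* Bind the calling thread to the CPUs of one NUMA node (link with -lnuma). */
    #include <numa.h>

    static int pin_to_node(int node) {
        if (numa_available() < 0)
            return -1;                 /* no NUMA support on this system */
        return numa_run_on_node(node); /* 0 on success, -1 on failure */
    }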

This has all the things I ever knew about it.

https://blogs.oracle.com/jonthecollector/entry/help_for_the_numa_weary

Jon

On 9/26/2012 4:09 PM, Vitaly Davidovich wrote:

Hi guys,

If I understand it correctly, the NUMA allocator splits eden into per-node regions and tries to ensure that an allocated object lands in a region local to the mutator thread. How does this affect TLABs? Specifically, a TLAB will be handed out to a thread from the current node. If the Java thread then migrates to a different node, its TLAB is presumably still on the previous node, leading to cross-node traffic? Is there a notion of a processor-local TLAB? In that case, access to already-allocated objects would take a hit, but new allocations would not.
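
(Roughly, the mental model here -- a sketch with made-up names and sizes, not actual HotSpot code: eden is split into one slice per node, and a TLAB is carved out of the slice for the node the allocating thread is currently running on. The slices are assumed to be set up at heap initialization.)

    #define _GNU_SOURCE
    #include <sched.h>      /* sched_getcpu() */
    #include <numa.h>       /* numa_node_of_cpu(); link with -lnuma */
    #include <stddef.h>

    #define MAX_NODES 64

    typedef struct { char *top; char *end; } Region;
    static Region eden_slice[MAX_NODES];   /* one eden slice per node, filled at startup */

    static char *hand_out_tlab(size_t tlab_size) {
        int node = numa_node_of_cpu(sched_getcpu());    /* node we run on right now */
        if (node < 0 || node >= MAX_NODES)
            return NULL;                                /* lookup failed */
        Region *r = &eden_slice[node];
        if (r->top + tlab_size > r->end)
            return NULL;          /* slice exhausted: in reality, refill or trigger GC */
        char *tlab = r->top;
        r->top += tlab_size;      /* real code needs a CAS here; threads race on the slice */
        return tlab;
    }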

The way I imagine a processor-local TLAB working is: when a thread migrates, the previous TLAB becomes available to whichever Java thread is on-proc there now - that is, TLAB ownership changes. The migrated thread then picks up allocations in the new TLAB.

It can still be bump-the-pointer, since only one hardware thread can be running on the processor at a time.
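
(A rough sketch of that idea -- again made-up names, not an existing implementation. Note one challenge it exposes: a naive version races if the thread is preempted and migrated between reading the CPU id and bumping the pointer.)

    #define _GNU_SOURCE
    #include <sched.h>      /* sched_getcpu() */
    #include <stddef.h>

    #define MAX_CPUS 256

    typedef struct { char *top; char *end; } Tlab;
    static Tlab cpu_tlab[MAX_CPUS];           /* one TLAB per hardware thread */

    static void *allocate(size_t size) {
        int cpu = sched_getcpu();             /* whichever CPU we run on now */
        if (cpu < 0 || cpu >= MAX_CPUS)
            return NULL;
        Tlab *t = &cpu_tlab[cpu];
        if (t->top + size > t->end)
            return NULL;                      /* refill from the node-local eden slice */
        void *obj = t->top;
        t->top += size;                       /* unsafe if we migrated after sched_getcpu() */
        return obj;
    }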

Is this or something like it already there? If not, what challenges am I overlooking from my high-level view?

Thanks

Sent from my phone