The Azul concurrent collector also makes heavy use of virtual memory tricks.

The downside of implementing these techniques is typically that mucking with VM mappings can be very inefficient: there's a process-wide lock involved, and changing mappings also causes TLB shootdowns to invalidate the old translations on other cores. So doing it at a fine grain costs a lot, and you need to be relatively clever to get good performance. (Azul uses a kernel module with bulk remap operations and the ability to map the same physical memory at multiple virtual addresses at the same time.)
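(Just to make the primitive concrete, here's a rough sketch of my own of the "remap instead of copy" trick using plain Linux mremap() -- it is not how Azul or the paper actually implement it, and each call like this takes the process-wide mm lock and can trigger shootdowns, which is exactly why you'd want to batch them:)

/* Toy example: relocate a "large object" to a new virtual address by moving
 * its page-table entries with mremap() instead of memcpy'ing the data. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 2 * 1024 * 1024;          /* pretend this is a large object */

    /* "from-space": some pages filled with data */
    void *from = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (from == MAP_FAILED) { perror("mmap from"); return 1; }
    memset(from, 0xAB, len);

    /* Reserve a destination range, then move the pages into it. The kernel
     * just rewires the page tables; no object data is copied. */
    void *dest = mmap(NULL, len, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (dest == MAP_FAILED) { perror("mmap dest"); return 1; }

    void *to = mremap(from, len, len, MREMAP_MAYMOVE | MREMAP_FIXED, dest);
    if (to == MAP_FAILED) { perror("mremap"); return 1; }

    printf("moved %zu bytes from %p to %p, first byte 0x%02x\n",
           len, from, to, ((unsigned char *)to)[0]);   /* prints 0xab */
    return 0;
}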
I highly recommend reading the paper, though:
http://dl.acm.org/citation.cfm?id=1993491&dl=ACM&coll=DL&CFID=197266941&CFTOKEN=89353319
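(And for the non-compacting half of the idea -- just unmapping pages that contain no live objects -- the basic building blocks on Linux are munmap() or madvise(MADV_DONTNEED). Another throwaway sketch of mine, not something taken from either paper:)

/* Toy example: release the physical memory behind fully dead pages while
 * keeping the heap's virtual address range reserved. For an anonymous private
 * mapping, madvise(MADV_DONTNEED) drops the backing pages; touching the range
 * again later yields fresh zero pages. Assumes 4 KiB pages. */
#include <stdio.h>
#include <sys/mman.h>

static int release_dead_pages(void *page_start, size_t len) {
    /* munmap() would work too, if the collector never reuses these addresses. */
    if (madvise(page_start, len, MADV_DONTNEED) != 0) {
        perror("madvise");
        return -1;
    }
    return 0;
}

int main(void) {
    size_t page = 4096, npages = 256;
    char *heap = mmap(NULL, npages * page, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (heap == MAP_FAILED) { perror("mmap"); return 1; }

    heap[0] = 1;                          /* dirty the first page              */
    release_dead_pages(heap, page);       /* the GC found no live objects here */
    printf("first byte after release: %d\n", heap[0]);   /* prints 0 */
    return 0;
}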
-Todd

On Tue, Mar 26, 2013 at 3:40 PM, Jesper Wilhelmsson <jesper.wilhelmsson@oracle.com> wrote:
> Hi Anjul,
>
> Similar things have been done before. The example that comes to mind right
> now is the mapping collector [1], but I know there has been other work on
> this as well.
> /Jesper
>
> [1] http://dl.acm.org/citation.cfm?id=1346281.1346294&coll=DL&dl=GUIDE&CFID=197206966&CFTOKEN=64021895
>
> Anjul wrote on 26/3/13 6:35 AM:
>> Garbage collectors move data to eliminate holes in virtual memory, both to
>> make new allocations faster and to make it feasible to allocate large
>> contiguous chunks.
>>
>> In modern 64-bit OSes, two significant optimization possibilities seem to
>> arise. One is to simply not do compaction: instead, unmap pages that contain
>> no live objects and keep allocating new objects at further and further areas
>> of virtual memory by mapping pages in. On a 64-bit system this could be
>> pretty sustainable.
>> The other possibility is that if a large object does need to be
>> compacted/moved to a different virtual address, the pages that contain it
>> could simply be remapped to a different area of virtual memory without
>> copying any data.
>>
>> There would be extra work, relative to copying, for reorganizing the page
>> tables, but I think that might be logarithmically smaller.
>>
>> This seems to ensure that there is no hole larger than a page. Sparsely
>> occupied pages could be copied as usual or, with some bookkeeping, used for
>> allocating small objects.
>>
>> Is there a problem with this scheme? Are there any JVMs out there that do
>> this, or are any shortly expected to do so?
-- 
Todd Lipcon
Software Engineer, Cloudera