Proposal: Better HashMap.resize() when memory is tight
Peter Arrenbrecht
peter.arrenbrecht at gmail.com
Wed Nov 21 08:52:03 UTC 2007
Hi all,
I recently thought about how to resize hash maps. Looking at the JDK 6
source, I saw that java.util.HashMap transfers the old entries while
both the old and the new table are still reachable, so a doubling
resize transiently needs the old reference array plus one twice its
size. When memory is tight, it might be better to first concatenate
the entry lists into one big list, throw away the old table, allocate
the new one, and then fill it from the concatenated list. So:
    @Override
    void resize( int _newCapacity )
    {
        try {
            fastResize( _newCapacity ); // This would be the current resize code.
        }
        catch (OutOfMemoryError e) {
            tightResize( _newCapacity );
        }
    }
    @SuppressWarnings( "unchecked" )
    private void tightResize( int newCapacity )
    {
        Entry head = joinLists();
        // Drop the old table *before* allocating the new one; otherwise the
        // old array is still reachable during the allocation and the peak
        // footprint is no better than the fast path.
        table = null;
        Entry[] newTable = new Entry[ newCapacity ];
        threshold = (int) (newCapacity * loadFactor);
        table = newTable;
        transfer( head, newTable );
    }
    @SuppressWarnings("unchecked")
    private Entry joinLists()
    {
        final Entry[] src = table;
        final int n = src.length;
        Entry head = null;
        int i = 0;
        while (i < n && null == head) {
            head = src[ i++ ];
        }
        Entry tail = head;
        // Walk to the end of the first chain; without this, the link below
        // would overwrite head.next and silently drop the rest of it.
        if (null != tail) {
            while (null != tail.next) {
                tail = tail.next;
            }
        }
        assert i >= n || null != tail;
        while (i < n) {
            Entry e = src[ i++ ];
            if (null != e) {
                tail.next = e;
                do {
                    tail = e;
                    e = e.next;
                } while (null != e);
            }
        }
        return head;
    }
    @SuppressWarnings("unchecked")
    private void transfer( Entry head, Entry[] tgt )
    {
        final int n = tgt.length; // index by the target table's length
        while (head != null) {
            Entry e = head;
            head = head.next;
            int i = indexFor( e.hash, n );
            e.next = tgt[ i ];
            tgt[ i ] = e;
        }
    }
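For anyone who wants to try the idea outside of the JDK, here is a
self-contained sketch of the same scheme on a toy chained hash map.
The TightMap class, its Entry type, and the 3/4 load factor are
illustrative assumptions for this example, not the JDK's code; the
point is only that tightResize nulls the table field before allocating
the replacement array, so at most one table is reachable at a time.

```java
// Toy chained hash map demonstrating the concatenate-then-reallocate
// resize. Not java.util.HashMap; names and sizes are illustrative.
class TightMap {
    static final class Entry {
        final int hash;
        final String key;
        String value;
        Entry next;
        Entry(int hash, String key, String value, Entry next) {
            this.hash = hash; this.key = key; this.value = value; this.next = next;
        }
    }

    Entry[] table = new Entry[4];
    int size;

    static int indexFor(int hash, int length) {
        return hash & (length - 1); // power-of-two table sizes, as in HashMap
    }

    void put(String key, String value) {
        int hash = key.hashCode();
        int i = indexFor(hash, table.length);
        for (Entry e = table[i]; e != null; e = e.next) {
            if (e.hash == hash && e.key.equals(key)) { e.value = value; return; }
        }
        table[i] = new Entry(hash, key, value, table[i]);
        if (++size > table.length * 3 / 4) tightResize(table.length * 2);
    }

    String get(String key) {
        int hash = key.hashCode();
        for (Entry e = table[indexFor(hash, table.length)]; e != null; e = e.next) {
            if (e.hash == hash && e.key.equals(key)) return e.value;
        }
        return null;
    }

    void tightResize(int newCapacity) {
        // 1. Concatenate every bucket chain into one list.
        Entry head = null;
        for (Entry e : table) {
            if (e == null) continue;
            Entry chainTail = e;
            while (chainTail.next != null) chainTail = chainTail.next;
            chainTail.next = head; // append previously collected chains
            head = e;
        }
        // 2. Drop the old table so GC can reclaim it before allocating.
        table = null;
        Entry[] tgt = new Entry[newCapacity];
        // 3. Redistribute the single list into the new table.
        while (head != null) {
            Entry e = head;
            head = head.next;
            int i = indexFor(e.hash, newCapacity);
            e.next = tgt[i];
            tgt[i] = e;
        }
        table = tgt;
    }
}
```

Putting a few hundred keys through this map forces several resizes and
every entry remains retrievable, which is the property the proposed
joinLists/transfer pair has to preserve.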
What do you think?
-peo