HashMap bug for large sizes

Kasper Nielsen kasperni at gmail.com
Fri Jun 1 19:28:22 UTC 2012


On 01-06-2012 21:12, Eamonn McManus wrote:
> It seems to me that since the serialization of HashMaps with more than
> Integer.MAX_VALUE entries produces an output that cannot be
> deserialized, nobody can be using it, and we are free to change it.
> For example we could say that if the read size is -1 then the next
> item in the stream is a long that is the true size, and arrange for
> that to be true in writeObject when there are more than
> Integer.MAX_VALUE entries.
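For the record, that encoding could look something like this (just a sketch; the helper names are made up, not actual HashMap code):

import java.io.*;

class LongSizeEncoding {
    // write side: -1 is a sentinel meaning "the true size follows as a long"
    static void writeSize(ObjectOutputStream s, long size) throws IOException {
        if (size > Integer.MAX_VALUE) {
            s.writeInt(-1);
            s.writeLong(size);
        } else {
            s.writeInt((int) size);
        }
    }

    // read side: a real size is never negative, so -1 is unambiguous
    static long readSize(ObjectInputStream s) throws IOException {
        int n = s.readInt();
        return n == -1 ? s.readLong() : n;
    }
}

Streams written by older JDKs always contain a non-negative int here, so they would keep deserializing unchanged.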
Yeah, I thought of something along the same lines:

long mapSize = size;  // the true size (assuming the size field is widened to long)
s.writeInt(mapSize > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) mapSize);
for (long i = 0; i < Math.min(mapSize, Integer.MAX_VALUE); i++) {
    // ... write element i
}

if (mapSize >= Integer.MAX_VALUE) {
    s.writeLong(mapSize);  // write the real size
    for (long i = Integer.MAX_VALUE; i < mapSize; i++) {
        // ... write remaining elements
    }
}
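
and the matching read side would be something like (again only a sketch; s is the ObjectInputStream):

long mapSize = s.readInt();
for (long i = 0; i < mapSize; i++) {
    // ... read element i
}

if (mapSize == Integer.MAX_VALUE) {
    mapSize = s.readLong();  // the real size
    for (long i = Integer.MAX_VALUE; i < mapSize; i++) {
        // ... read remaining elements
    }
}

Since Integer.MAX_VALUE in the int slot doubles as the "more follows" marker, old streams with fewer entries deserialize unchanged.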

> Whether there really are people who have HashMaps with billions of
> entries that they want to serialize with Java serialization is another
> question.

But it is not just serializing a HashMap that does not work; HashMap.size() and HashMap.clear() are broken as well.
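Presumably because the size is kept in an int field, which silently wraps around once the map passes 2^31 - 1 entries; a minimal demonstration of the wrap:

public class SizeOverflow {
    public static void main(String[] args) {
        int size = Integer.MAX_VALUE;
        size++;                    // int overflow: wraps to Integer.MIN_VALUE
        System.out.println(size);  // prints -2147483648
    }
}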

- Kasper


