Re: BigByteBuffer/MappedBigByteBuffer

Jürgen Baier baier at ontoprise.de
Thu Jun 11 06:54:36 PDT 2009


Hi Alan,

thanks for the quick answer.

> The original early draft did propose a set of 64-bit addressable buffers,
> but we aren't taking that proposal any further. The real demand is for
> 64-bit arrays or collections. For I/O, the use case is contiguous mapping
> of file regions larger than 2 GB. At one point Doug Lea and the
> collections group looked into creating a package of big arrays for each
> of the scalar types. With such a solution we could do something to allow
> the array to be backed by a file mapping. More recently, there was a
> proposal for large arrays [1]. This one is on the list for "further
> consideration" [2] by the COIN project.
>
> Can you say any more about your needs?

There are two important requirements:

1. 64-bit buffers
-----------------
If I have (e.g.) 12 GB of RAM available then it should be possible to
address more than 32 bits' worth of memory. This is often required for
simulations, mathematical software, bioinformatics software, main-memory
databases, etc.
Without native support for large arrays/buffers, developers have to
implement some workaround (e.g. use multiple ByteBuffers/32-bit arrays,
or implement some native code).
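The multi-buffer workaround can be sketched as a small wrapper class that
exposes long indices over an array of direct buffers. (The class name
ChunkedByteBuffer and the chunk size here are made up for the example; this
is not an existing API.)

```java
import java.nio.ByteBuffer;

// Sketch of the workaround: a 64-bit addressable byte store backed by an
// array of direct ByteBuffers, each at most 1 GB.
class ChunkedByteBuffer {
    private static final int CHUNK_SHIFT = 30;            // 1 GB chunks
    private static final long CHUNK_SIZE = 1L << CHUNK_SHIFT;
    private static final long CHUNK_MASK = CHUNK_SIZE - 1;

    private final ByteBuffer[] chunks;

    ChunkedByteBuffer(long capacity) {
        int n = (int) ((capacity + CHUNK_SIZE - 1) >>> CHUNK_SHIFT);
        chunks = new ByteBuffer[n];
        for (int i = 0; i < n; i++) {
            long remaining = capacity - ((long) i << CHUNK_SHIFT);
            chunks[i] = ByteBuffer.allocateDirect(
                    (int) Math.min(CHUNK_SIZE, remaining));
        }
    }

    byte get(long index) {
        return chunks[(int) (index >>> CHUNK_SHIFT)]
                .get((int) (index & CHUNK_MASK));
    }

    void put(long index, byte value) {
        chunks[(int) (index >>> CHUNK_SHIFT)]
                .put((int) (index & CHUNK_MASK), value);
    }
}
```

The really awkward part of this workaround is multi-byte access (getLong()
etc.) across a chunk boundary, which a native 64-bit buffer would handle
transparently.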

64-bit arrays might be a good replacement for the big buffers. But (at
least on my system) a direct byte buffer is much faster than a non-direct
(heap) byte buffer.


2. Memory-mapped I/O
--------------------
Contiguous mapping of files larger than 2 GB is very useful, too (for all
kinds of applications which have to deal with files of this size).
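Today such files have to be covered by several mappings, because
FileChannel.map() limits each region to Integer.MAX_VALUE bytes. A sketch
of that workaround (the helper class BigFileMapper is made up for the
example):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Workaround sketch: cover a file larger than 2 GB with several
// MappedByteBuffers, since each FileChannel.map() region is int-limited.
class BigFileMapper {
    static MappedByteBuffer[] mapWhole(FileChannel ch) throws IOException {
        long size = ch.size();
        int n = (int) ((size + Integer.MAX_VALUE - 1) / Integer.MAX_VALUE);
        MappedByteBuffer[] regions = new MappedByteBuffer[Math.max(n, 1)];
        for (int i = 0; i < regions.length; i++) {
            long pos = (long) i * Integer.MAX_VALUE;
            long len = Math.min(Integer.MAX_VALUE, size - pos);
            regions[i] = ch.map(FileChannel.MapMode.READ_ONLY, pos, len);
        }
        return regions;
    }
}
```

A contiguous MappedBigByteBuffer would replace the region array (and the
address arithmetic the caller then has to do) with a single mapping.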

> At one point Doug Lea and the collections group looked into creating a
> package of big arrays for each of the scalar types. With such a solution
> we could do something to allow the array to be backed by a file mapping.

This sounds very interesting, but the implementation might be difficult.



However, it seems to me that for most use cases the existing NIO buffer
API would be fine if it were 64-bit addressable. A BigByteBuffer is just a
new API (and developers who need it can use it), whereas native support
for 64-bit arrays seems to be much more effort (and most developers will
never need it). So I'd rather vote for big buffers than for big arrays.

The ByteBuffer implementation of Sun's JDK uses the sun.misc.Unsafe class,
which already seems to be able to address 64 bits (look at
ByteBuffer.allocateDirect() and Unsafe.allocateMemory()). So I got the
impression that (at least for this implementation) just a few method
signatures would have to be changed (e.g. allocateDirect(long)).
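A rough illustration of that point: Unsafe.allocateMemory() already takes
a long size, and getByte()/putByte() already take long addresses. (The
helper class and method below are made up for the example; reaching
theUnsafe via reflection is an implementation detail of Sun's JDK.)

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// The native side of direct buffers is already 64-bit capable: allocation
// sizes and addresses are longs, even though ByteBuffer's public API is not.
class UnsafeLongAddressing {
    static long writeAndReadBack() throws Exception {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long size = 64;                      // small here; a long allows > 2 GB
        long base = unsafe.allocateMemory(size);
        try {
            unsafe.putByte(base + 63, (byte) 7);
            return unsafe.getByte(base + 63);
        } finally {
            unsafe.freeMemory(base);
        }
    }
}
```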

-Juergen




More information about the nio-discuss mailing list