Optimization potential in Reader#read(CharBuffer)
Philippe Marschall
philippe.marschall at gmail.com
Thu Dec 10 20:03:48 UTC 2020
Hello
I recently came across Reader#read(CharBuffer) and noticed it is
missing an optimization for heap buffers. When the target buffer is
backed by an array, we can read directly into that array instead of
going through an intermediate one. Something like the following:
public int read(CharBuffer target) throws IOException {
    int len = target.remaining();
    int n;
    if (target.hasArray()) {
        // Read directly into the backing array, honoring the
        // buffer's current position, then advance the position
        // by the number of chars actually read.
        char[] cbuf = target.array();
        int off = target.arrayOffset() + target.position();
        n = this.read(cbuf, off, len);
        if (n > 0)
            target.position(target.position() + n);
    } else {
        char[] cbuf = new char[len];
        n = read(cbuf, 0, len);
        if (n > 0)
            target.put(cbuf, 0, n);
    }
    return n;
}
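For reference, here is a small demo of the semantics any optimized
override has to preserve (using the default JDK implementation and a
StringReader): characters land at the buffer's current position, and
the position advances by the number of chars read.

```java
import java.io.IOException;
import java.io.StringReader;
import java.nio.CharBuffer;

public class ReadCharBufferDemo {
    public static void main(String[] args) throws IOException {
        StringReader reader = new StringReader("hello");
        CharBuffer target = CharBuffer.allocate(16);
        // Default implementation copies via a temporary char[].
        int n = reader.read(target);
        target.flip();
        System.out.println("read " + n + " chars: " + target.toString());
    }
}
```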
This would get rid of the intermediate allocation and copy in the case
of a heap buffer.
I don't have any microbenchmarks to prove this is faster, but it seems intuitive.
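Short of a real JMH benchmark, a quick sanity check at least confirms
that the direct-into-array path and the copy-via-temp-array path
produce identical buffer contents (a sketch, with hypothetical helper
names):

```java
import java.io.IOException;
import java.io.StringReader;
import java.nio.CharBuffer;

public class HeapPathCheck {
    // Proposed fast path: write straight into the backing array.
    static int readDirect(StringReader r, CharBuffer target) throws IOException {
        int off = target.arrayOffset() + target.position();
        int n = r.read(target.array(), off, target.remaining());
        if (n > 0)
            target.position(target.position() + n);
        return n;
    }

    // Current behavior: read into a temporary array, then copy.
    static int readViaTemp(StringReader r, CharBuffer target) throws IOException {
        char[] tmp = new char[target.remaining()];
        int n = r.read(tmp, 0, tmp.length);
        if (n > 0)
            target.put(tmp, 0, n);
        return n;
    }

    public static void main(String[] args) throws IOException {
        String data = "The quick brown fox";
        CharBuffer a = CharBuffer.allocate(64);
        CharBuffer b = CharBuffer.allocate(64);
        readDirect(new StringReader(data), a);
        readViaTemp(new StringReader(data), b);
        a.flip();
        b.flip();
        System.out.println("contents match: " + a.toString().equals(b.toString()));
    }
}
```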
Additionally, there seem to be the following optimization potentials:
* The off-heap path potentially allocates a very large intermediate
array, larger than TRANSFER_BUFFER_SIZE. It may be worth limiting the
array size to TRANSFER_BUFFER_SIZE. Options are to read in a loop,
which may require acquiring #lock to keep the read atomic, or to
simply let the caller deal with short reads.
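The loop variant could look roughly like the sketch below. Note the
caveats: TRANSFER_BUFFER_SIZE here is a stand-in constant (set tiny to
exercise the loop), and looping until the buffer is full changes the
single-read semantics, which is exactly why the lock/atomicity
question above matters.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;

public class ChunkedTransfer {
    static final int TRANSFER_BUFFER_SIZE = 8;   // tiny, to exercise the loop

    // Cap the intermediate array at a fixed chunk size and loop,
    // instead of allocating one array as large as remaining().
    static int readChunked(Reader in, CharBuffer target) throws IOException {
        int total = 0;
        char[] tmp = new char[Math.min(target.remaining(), TRANSFER_BUFFER_SIZE)];
        while (target.hasRemaining()) {
            int len = Math.min(target.remaining(), tmp.length);
            int n = in.read(tmp, 0, len);
            if (n < 0)
                return total > 0 ? total : -1;
            target.put(tmp, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // A direct char buffer, i.e. one with no backing array.
        CharBuffer direct = ByteBuffer.allocateDirect(64).asCharBuffer(); // 32 chars
        int n = readChunked(new StringReader("abcdefghijklmnopqrstuvwxyz"), direct);
        direct.flip();
        System.out.println("read " + n + ": " + direct.toString());
    }
}
```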
* It may be worth looking into overriding #read(CharBuffer) in
InputStreamReader and pass the CharBuffer to the StreamDecoder to
avoid more intermediate allocations and copies there.
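Since StreamDecoder is a JDK-internal class, the principle can only be
illustrated here with the public CharsetDecoder API: bytes are decoded
straight into the caller's CharBuffer, with no intermediate char[] at
all (a sketch with a hypothetical decodeInto helper that simply slurps
the whole stream):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.StandardCharsets;

public class DecodeIntoTarget {
    static int decodeInto(InputStream in, CharsetDecoder dec, CharBuffer target)
            throws IOException {
        int start = target.position();
        byte[] bytes = in.readAllBytes();     // simplification: read everything
        ByteBuffer bb = ByteBuffer.wrap(bytes);
        dec.decode(bb, target, true);         // decode directly into target
        dec.flush(target);
        return target.position() - start;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(
                "hello".getBytes(StandardCharsets.UTF_8));
        CharBuffer target = CharBuffer.allocate(16);
        int n = decodeInto(in, StandardCharsets.UTF_8.newDecoder(), target);
        target.flip();
        System.out.println("decoded " + n + " chars: " + target.toString());
    }
}
```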
Sorry if this is the wrong mailing list and should go to core-libs-dev
or a different list instead.
Cheers
Philippe