RFR: 8316156: (ch) Channels.newOutputStream(ch).write(bigByteArray) allocates a lot of direct memory [v5]
Alan Bateman
alanb at openjdk.org
Tue Sep 19 10:14:42 UTC 2023
On Tue, 19 Sep 2023 09:37:39 GMT, Paul Wagland <duke at openjdk.org> wrote:
> Doesn't this fix only the reported problem, though? If there is an InputStream out there that can also drop a large buffer in, or even code that just does `channelOutputStream.write(largeBuffer)`, then this will still fail?
>
> The main issue I had when I reported the problem is that I can no longer safely use `Files.copy`, since it relies on the passed InputStream only doing "small" writes, at least in the face of a limited direct-memory allocation. This fixes the particular InputStream that I was using, but doesn't fix the generic problem, which means that `Files.copy` is _still_ unsafe?
There's a compatibility impact to changing the channel read/write implementations to limit direct buffer usage when called with a heap-based ByteBuffer containing a large number of bytes. We have to be very cautious about changing anything there because it would break existing (already broken) code that assumes a short read/write is not possible. We found two such cases in the JDK: one is fixed, the other is WIP. Changing the adaptor class used by Channels.newInputStream raises the same concern. I talked to @bplb off-list about this and we agreed that any change here requires a lot more thought.
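To illustrate the compatibility concern, here is a minimal sketch (the `ShortWriteDemo` class and `drain` helper are illustrative, not from the PR): code that assumes a single `write` consumes the whole buffer would break if the channel implementation started doing short writes, while the standard drain loop keeps working either way.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class ShortWriteDemo {
    // Robust pattern: loop until the buffer is drained, so the caller
    // still works even if the channel does a short write.
    static int drain(WritableByteChannel ch, ByteBuffer buf) throws IOException {
        int total = 0;
        while (buf.hasRemaining())
            total += ch.write(buf);
        return total;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        WritableByteChannel ch = Channels.newChannel(sink);

        // Fragile pattern (what the "existing broken code" does):
        //   ch.write(buf);   // assumes the one call drains the buffer
        int written = drain(ch, ByteBuffer.wrap(new byte[1000]));
        System.out.println(written);   // 1000
    }
}
```

The `WritableByteChannel.write` contract already permits writing fewer bytes than requested, which is why callers are expected to loop; the compatibility risk comes from code that was never written that way.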
As regards Files.copy(InputStream, Path): it uses the input stream's transferTo method to transfer the bytes to the target file. It could be changed to write in chunks, or the input streams that have all their bytes on the heap could write in chunks. Either change (or both) is okay and has no compatibility concerns.
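One possible shape for the chunked-write idea above, as a sketch only (the `writeFully` helper and the `CHUNK` size are hypothetical, not part of the PR): slicing a large heap array into bounded `ByteBuffer` windows keeps any temporary direct-buffer allocation inside the channel implementation small, regardless of the array's total size.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class ChunkedTransfer {
    // Hypothetical chunk size; a real implementation would tune this.
    private static final int CHUNK = 16 * 1024;

    // Write src to ch in bounded chunks so the channel never sees a
    // single huge heap buffer (which can trigger a correspondingly
    // large temporary direct-buffer allocation).
    static void writeFully(WritableByteChannel ch, byte[] src) throws IOException {
        int pos = 0;
        while (pos < src.length) {
            int n = Math.min(CHUNK, src.length - pos);
            ByteBuffer buf = ByteBuffer.wrap(src, pos, n);
            while (buf.hasRemaining())
                ch.write(buf);   // loop: a channel write may be short
            pos += n;
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        writeFully(Channels.newChannel(sink), new byte[100_000]);
        System.out.println(sink.size());   // 100000
    }
}
```

The same bounded-window approach could live either in the transferTo implementations of heap-backed input streams or in the copy loop itself.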
-------------
PR Comment: https://git.openjdk.org/jdk/pull/15733#issuecomment-1725217098