RFR: 8316156: (ch) Channels.newOutputStream(ch).write(bigByteArray) allocates a lot of direct memory [v4]

Bernd duke at openjdk.org
Thu Sep 14 23:37:39 UTC 2023


On Thu, 14 Sep 2023 06:35:56 GMT, Alan Bateman <alanb at openjdk.org> wrote:

>> src/java.base/share/classes/sun/nio/ch/ChannelInputStream.java line 73:
>> 
>>> 71:         int rem = bb.limit() - pos;
>>> 72:         while (rem > 0) {
>>> 73:             int size = Integer.min(rem, DEFAULT_BUFFER_SIZE);
>> 
>> Should this limit in the read case not apply to direct buffers? (I.e., they are already allocated.) Also, should it really use "DEFAULT_"? Maybe something more like a "CHUNK_LIMIT" of around 128k?
>
>> Should this limit in the read case not apply to direct buffers? (I.e., they are already allocated.) Also, should it really use "DEFAULT_"? Maybe something more like a "CHUNK_LIMIT" of around 128k?
> 
> There is an argument that the channels (the implementations SocketChannel, FileChannel, ...) should clamp the size when called with a ByteBuffer that is backed by a byte[]. I think we have to be cautious about changing things at that level as it would have much wider impact. It also gets more complicated with scatter/gather ops.
> 
> So clamping in the input stream/output streams as done in the COS.write change is okay.

Yes, capping makes sense, but not at the 8k level. If my application happens to know it is writing to, say, a RAID 5 system with a 128k segment size, and it has already allocated a direct buffer for that, it is really wasteful to copy the data through heap buffers in 8k chunks (especially since that can also trigger partial writes or rewrites at the OS level). That is why I think the cap should be larger in some cases; it could depend on the target (a file handle can typically take large blocks, and a network handle could depend on the window size). I don't think synthetic benchmarks cover this very well (at least that's my impression when the discussion is about 8k or 16k chunks).
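For readers following along, the clamping being discussed can be sketched as below. This is not the PR's actual code; the 128k `CHUNK_LIMIT` and the `writeFully` helper are hypothetical names chosen for illustration, and the loop simply bounds how much of a large byte[] is handed to the channel per write call:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;
import java.util.Arrays;

public class ChunkedWrite {

    // Hypothetical cap for illustration; the PR under discussion
    // clamps to a much smaller default (8 KiB).
    static final int CHUNK_LIMIT = 128 * 1024;

    // Write all of src to ch, at most CHUNK_LIMIT bytes per write,
    // so any temporary (e.g. direct) buffer the channel allocates
    // behind the scenes stays bounded.
    static void writeFully(WritableByteChannel ch, byte[] src) throws IOException {
        int pos = 0;
        while (pos < src.length) {
            int size = Integer.min(src.length - pos, CHUNK_LIMIT);
            ByteBuffer bb = ByteBuffer.wrap(src, pos, size);
            while (bb.hasRemaining()) {
                ch.write(bb); // a channel write may be partial
            }
            pos += size;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] big = new byte[300 * 1024];
        for (int i = 0; i < big.length; i++) {
            big[i] = (byte) i;
        }
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        writeFully(Channels.newChannel(sink), big);
        System.out.println(sink.size() == big.length
                && Arrays.equals(sink.toByteArray(), big));
    }
}
```

Running the sketch prints `true`: the data arrives intact even though it crosses several chunk boundaries. The open question in the thread is what `CHUNK_LIMIT` should be, and whether it should vary with the target channel.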

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/15733#discussion_r1326616686


More information about the nio-dev mailing list