RFR: 8351339: WebSocket::sendBinary assume that user supplied buffers are BIG_ENDIAN [v5]
Volkan Yazici
vyazici at openjdk.org
Mon Mar 24 09:52:08 UTC 2025
On Sun, 23 Mar 2025 08:50:30 GMT, Chen Liang <liach at openjdk.org> wrote:
>> src/java.net.http/share/classes/jdk/internal/net/http/websocket/Frame.java line 147:
>>
>>> 145: * Positions the {@link #offset} at 0, which is needed for vectorized masking, by masking the necessary number of bytes.
>>> 146: */
>>> 147: private void initVectorMask(ByteBuffer src, ByteBuffer dst) {
>>
>> This method and `applyPlainMask` use the big-endian `maskBytes`, which can be wrong if `dst` is not big-endian. Should we just assert that `dst` is big-endian everywhere, as that seems to be the case?
>
> Opened IntelliJ and verified that `dst` is always big-endian, since it comes from `ByteBuffer.allocate` in both client and server contexts.
@liach, `initVectorMask()` operates byte by byte; nothing is vectorized there, so there are no endianness concerns. `applyVectorMask()` chooses the mask with the correct endianness based on the input:
    assert src.order() == dst.order() : "vectorized masking is only allowed on matching byte orders";
    long maskLong = ByteOrder.LITTLE_ENDIAN == src.order() ? maskLongLe : maskLongBe;
AFAICT, both methods are ready to perform vectorized masking independently of the input endianness, granted that the `src` and `dst` byte orders match. Am I missing something?
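
For anyone following along, below is a minimal, self-contained sketch of how the two pieces fit together: a byte-by-byte prologue that advances `offset` to 0, followed by 8-bytes-at-a-time XOR masking with a mask long packed to match the buffers' byte order. The class and constructor here are hypothetical (the real logic lives in `Frame.java`'s masker); only the field and method names mirror the ones discussed above:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Hypothetical sketch; names mirror the discussion, not the actual Frame.java code.
    final class MaskerSketch {

        private final byte[] maskBytes = new byte[4]; // the 4-byte WebSocket mask
        private final long maskLongBe;                // mask repeated twice, big-endian packing
        private final long maskLongLe;                // mask repeated twice, little-endian packing
        private int offset;                           // current position within the 4-byte mask

        MaskerSketch(int mask) {
            ByteBuffer.wrap(maskBytes).putInt(mask);  // putInt writes big-endian by default
            long be = Integer.toUnsignedLong(mask);
            maskLongBe = (be << 32) | be;
            long le = Integer.toUnsignedLong(Integer.reverseBytes(mask));
            maskLongLe = (le << 32) | le;
        }

        // Byte-by-byte prologue: no vectorization, hence no endianness concerns.
        // Masks single bytes until offset wraps to 0, i.e. to a mask boundary.
        void initVectorMask(ByteBuffer src, ByteBuffer dst) {
            while (offset != 0 && src.hasRemaining() && dst.hasRemaining()) {
                dst.put((byte) (src.get() ^ maskBytes[offset]));
                offset = (offset + 1) & 3;
            }
        }

        // Vectorized path: XORs 8 bytes at a time with the mask long whose
        // packing matches the (matching) byte order of src and dst.
        void applyVectorMask(ByteBuffer src, ByteBuffer dst) {
            assert src.order() == dst.order()
                    : "vectorized masking is only allowed on matching byte orders";
            assert offset == 0;
            long maskLong = ByteOrder.LITTLE_ENDIAN == src.order() ? maskLongLe : maskLongBe;
            while (src.remaining() >= Long.BYTES && dst.remaining() >= Long.BYTES) {
                dst.putLong(src.getLong() ^ maskLong);
            }
        }
    }

The key invariant is that the prologue leaves `offset` at 0, so every `getLong` starts on a mask boundary and the packed mask long lines up with the payload regardless of whether the buffers are big- or little-endian.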
-------------
PR Review Comment: https://git.openjdk.org/jdk/pull/24033#discussion_r2009822775