RFR: 8351339: WebSocket::sendBinary assume that user supplied buffers are BIG_ENDIAN [v5]

Chen Liang liach at openjdk.org
Mon Mar 24 14:53:14 UTC 2025


On Mon, 24 Mar 2025 09:47:14 GMT, Volkan Yazici <vyazici at openjdk.org> wrote:

>> Opened IntelliJ and verified that `dst` is always big-endian, coming from `ByteBuffer.allocate`, in both client and server contexts.
>
> @liach, `initVectorMask()` operates byte by byte (nothing is vectorized), so no endianness concerns arise there. `applyVectorMask()` chooses the mask with the correct endianness based on the input:
> 
> 
> assert src.order() == dst.order() : "vectorized masking is only allowed on matching byte orders";
> long maskLong = ByteOrder.LITTLE_ENDIAN == src.order() ? maskLongLe : maskLongBe;
> 
> 
> AFAICT, both methods are ready to perform vectorization independent of the input endianness, provided that the `src` and `dst` endiannesses match. Am I missing something?
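
As a sketch of how the quoted vectorized path could work (the helper names `maskLongBe`/`maskLongLe` mirror the variables in the snippet; this is not the actual JDK implementation): the 4-byte masking key is widened to a 64-bit mask whose byte layout matches the buffers' shared byte order, so a single `getLong`/XOR/`putLong` step produces the same bytes as scalar masking:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class VectorMaskSketch {
    // Build the 64-bit mask for big-endian buffers: key octet 0 in the
    // most significant byte, the 32-bit mask repeated twice.
    static long maskLongBe(byte[] maskBytes) {
        long m = ((maskBytes[0] & 0xFFL) << 24)
               | ((maskBytes[1] & 0xFFL) << 16)
               | ((maskBytes[2] & 0xFFL) << 8)
               |  (maskBytes[3] & 0xFFL);
        return (m << 32) | m;
    }

    // The little-endian mask is simply the byte-reversed big-endian one.
    static long maskLongLe(byte[] maskBytes) {
        return Long.reverseBytes(maskLongBe(maskBytes));
    }

    public static void main(String[] args) {
        byte[] mask = {0x01, 0x02, 0x03, 0x04};
        ByteBuffer src = ByteBuffer.allocate(8).order(ByteOrder.BIG_ENDIAN);
        for (int i = 0; i < 8; i++) src.put(i, (byte) i);
        ByteBuffer dst = ByteBuffer.allocate(8).order(src.order());

        // As in the quoted snippet: pick the mask matching the shared order.
        long maskLong = ByteOrder.LITTLE_ENDIAN == src.order()
                ? maskLongLe(mask) : maskLongBe(mask);
        dst.putLong(0, src.getLong(0) ^ maskLong);

        // The vectorized result equals byte-by-byte masking.
        for (int i = 0; i < 8; i++) {
            byte expected = (byte) (src.get(i) ^ mask[i & 3]);
            if (dst.get(i) != expected) throw new AssertionError("mismatch at " + i);
        }
        System.out.println("vectorized masking matches byte-by-byte");
    }
}
```

Because `Long.reverseBytes` distributes over XOR, using the reversed mask with order-aware `getLong`/`putLong` on matching little-endian buffers yields the same output bytes.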

For this line below:

dst.put(j, (byte) (src.get(i) ^ maskBytes[offset]));

Because `maskBytes` is big-endian, if `dst` is little-endian (which is never the case right now, since all trusted callers use a big-endian `dst` `ByteBuffer`), we should use `maskBytes[3 - offset]` instead (with `offset` cycling through 0..3), right?
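
For reference, a minimal sketch of the scalar path under discussion (the names `src`, `dst`, `maskBytes`, and `offset` mirror the discussion, not the actual JDK fields). Note that the absolute `get(i)`/`put(j)` calls are plain byte accesses, so the buffers' declared byte order never changes which byte sits at a given index; only the order in which `maskBytes` itself is stored matters:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class ScalarMaskSketch {
    // XOR each payload byte with the masking-key byte at the cycling offset.
    static void mask(ByteBuffer src, ByteBuffer dst, byte[] maskBytes) {
        int offset = 0;
        for (int i = src.position(), j = dst.position(); i < src.limit(); i++, j++) {
            // If maskBytes were stored in the reverse order, the lookup
            // here would have to become maskBytes[3 - offset].
            dst.put(j, (byte) (src.get(i) ^ maskBytes[offset]));
            offset = (offset + 1) & 3;   // masking key is 4 bytes
        }
    }

    public static void main(String[] args) {
        byte[] mask = {0x10, 0x20, 0x30, 0x40};
        ByteBuffer src = ByteBuffer.wrap(new byte[]{1, 2, 3, 4, 5});
        ByteBuffer dst = ByteBuffer.allocate(5);
        mask(src, dst, mask);
        System.out.println(Arrays.toString(dst.array()));
        // prints [17, 34, 51, 68, 21]
    }
}
```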

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/24033#discussion_r2010339869


More information about the net-dev mailing list