RFR JDK-8157273: Simplify outgoing messages queueing policy in WebSocket API

Pavel Rappo pavel.rappo at oracle.com
Thu Jun 9 13:20:53 UTC 2016


      “I’ve been waiting for you”
(Speaking in Darth Vader’s voice)


> On 9 Jun 2016, at 13:38, Simone Bordet <sbordet at webtide.com> wrote:
> 
> I'm not sure I understand all this.
> 
> I thought the idea was that you always want *at most* one outstanding write.
> Multiple writes should be chained via CS.
> Concurrent writes are handled by applications.
> 
> Are you saying that with this change, you allow infinite buffering ?
> To be clear, do you want to allow this (pseudocode):
> 
> ByteBuffer buffer = ...;
> while (true) {
>  int read = readFromHugeFile(buffer);
>  ws.sendBinary(buffer, read < 0);
> }
> 
> Thanks !

Yes, you are correct. Here are some reasons behind this decision:

1. This is friendlier to users. One doesn't *have to* carefully build a tree of
CS dependencies.
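To illustrate what that tree of dependencies looks like, here is a minimal
sketch; `send` is a hypothetical stand-in for a `WebSocket.sendX` method, which
in the real API likewise returns a CompletableFuture:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ChainedSends {

    // Hypothetical stand-in for WebSocket.sendText: each call completes
    // asynchronously, like a real non-blocking write would.
    static final List<String> wire = new ArrayList<>();

    static CompletableFuture<Void> send(String message) {
        return CompletableFuture.runAsync(() -> {
            synchronized (wire) {
                wire.add(message);
            }
        });
    }

    // Chain each send on the previous CompletionStage so that at most one
    // write is ever outstanding: this is the dependency tree the user builds.
    static List<String> sendAll(int n) {
        CompletableFuture<Void> chain = CompletableFuture.completedFuture(null);
        for (int i = 0; i < n; i++) {
            String msg = "message-" + i;
            chain = chain.thenCompose(v -> send(msg));
        }
        chain.join();
        return wire;
    }

    public static void main(String[] args) {
        System.out.println(sendAll(3)); // [message-0, message-1, message-2]
    }
}
```

With queueing in the implementation, the loop body collapses to a bare
`send(msg)` call and the chaining disappears.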

2. "One outstanding write" works perfectly only in one-by-one mode:

   webSocket.sendX(message).thenRun(() -> webSocket.request(1))
   
Though it might look good on a conference slide, as soon as we decide to use
some other strategy, e.g. a request window akin to the one used in this example
from java.util.concurrent.Flow:

   public void onSubscribe(Subscription subscription) {
     long initialRequestSize = bufferSize;
     count = bufferSize - bufferSize / 2; // re-request when half consumed
     (this.subscription = subscription).request(initialRequestSize);
   }
   public void onNext(T item) {
     if (--count <= 0)
       subscription.request(count = bufferSize - bufferSize / 2);
     consumer.accept(item);
   }

it becomes noticeably more difficult to maintain a strict non-overlapping
order. A user will *have to* keep their own queue of the last `bufferSize` CFs.
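A rough sketch of the bookkeeping such a user would be forced into, again with
a hypothetical `send` standing in for the real API call: with a request window,
onNext may fire while earlier sends are still in flight, so every new send must
be chained on the tail and the pending CFs remembered:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class WindowedSender {

    static final int BUFFER_SIZE = 4;
    static final List<String> wire = new ArrayList<>();

    // Hypothetical async send, standing in for webSocket.sendText(...).
    static CompletableFuture<Void> send(String message) {
        return CompletableFuture.runAsync(() -> {
            synchronized (wire) {
                wire.add(message);
            }
        });
    }

    // The user's own queue of in-flight sends: chaining on the tail keeps
    // writes strictly non-overlapping, the deque bounds the bookkeeping.
    final Deque<CompletableFuture<Void>> pending = new ArrayDeque<>();
    CompletableFuture<Void> tail = CompletableFuture.completedFuture(null);

    void onNext(String item) {
        tail = tail.thenCompose(v -> send(item));
        pending.addLast(tail);
        while (pending.size() > BUFFER_SIZE) {
            pending.removeFirst(); // drop the oldest to bound memory
        }
    }

    public static void main(String[] args) {
        WindowedSender s = new WindowedSender();
        for (int i = 0; i < 10; i++) {
            s.onNext("item-" + i);
        }
        s.tail.join();
        System.out.println(wire.get(0) + " ... " + wire.get(9));
    }
}
```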

3. It's good for implementations. A performance-obsessed implementation might
decide to group several queued writes into one gathering write at the lower
level, squeezing extra microseconds out of the latency and reducing the number
of calls to the OS.
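For instance, here is a sketch of that idea using NIO's gathering write on a
Pipe (just a stand-in for the real socket channel): three queued buffers go
down in a single call:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;

public class GatheringWriteSketch {

    // Flush several queued messages with a single gathering write:
    // one call into the OS instead of one per buffer.
    static long flush(Pipe.SinkChannel sink, ByteBuffer[] queued)
            throws IOException {
        return sink.write(queued);
    }

    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        ByteBuffer[] queued = {
            ByteBuffer.wrap("one".getBytes(StandardCharsets.UTF_8)),
            ByteBuffer.wrap("two".getBytes(StandardCharsets.UTF_8)),
            ByteBuffer.wrap("three".getBytes(StandardCharsets.UTF_8))
        };
        System.out.println(flush(pipe.sink(), queued)); // 11 bytes, one call
        pipe.sink().close();
        pipe.source().close();
    }
}
```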

4. I don't think it conflicts in any way with the back-pressure we have in the
API. After all, it's up to the user how many outstanding writes they want to
have. The implementation controls *the incoming flow*, not allowing unrequested
messages to be read/cached/stored. It is then the user's responsibility to keep
an eye on their outgoing queue so it doesn't overflow. After all, the user is
provided with everything needed to do this in a non-blocking, async fashion!
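One possible shape of such self-policing, sketched with a hypothetical `send`
and an atomic counter capping the number of outstanding writes without
blocking:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedOutgoing {

    static final int MAX_OUTSTANDING = 64;
    static final AtomicInteger outstanding = new AtomicInteger();

    // Hypothetical async send, standing in for webSocket.sendBinary(...).
    static CompletableFuture<Void> send(String message) {
        return CompletableFuture.runAsync(() -> { /* perform the write */ });
    }

    // Non-blocking guard: refuse once the cap is reached; the caller can
    // retry (or buffer elsewhere) when earlier sends complete.
    static boolean trySend(String message) {
        if (outstanding.incrementAndGet() > MAX_OUTSTANDING) {
            outstanding.decrementAndGet();
            return false; // back off without blocking
        }
        send(message).whenComplete((r, t) -> outstanding.decrementAndGet());
        return true;
    }

    public static void main(String[] args) {
        System.out.println(trySend("hello")); // true
    }
}
```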

I would appreciate hearing from you on this.

Thanks,
-Pavel



More information about the net-dev mailing list