RFR JDK-8157273: Simplify outgoing messages queueing policy in WebSocket API

Pavel Rappo pavel.rappo at oracle.com
Fri Jun 10 13:17:55 UTC 2016


> On 10 Jun 2016, at 11:09, Simone Bordet <simone.bordet at gmail.com> wrote:
> 
> Hi,
> 
> On Thu, Jun 9, 2016 at 3:20 PM, Pavel Rappo <pavel.rappo at oracle.com> wrote:
>> Yes, you are correct. Here are some reasons behind this decision:
>> 
>> 1. This is more friendly. One doesn't *have to* carefully build a tree of CS
>> dependencies.
> 
> This is a bit weak.
> On the other hand, it is less "friendly" because it allows the application
> to blow up the heap, and to discover that only in production.

I think it wouldn't be wise to rely on the API to catch potentially bad
behaviour for you. Even with a single outstanding write you cannot be sure the
implementation will not throw an IllegalStateException *sometime later* in
production due to a peculiar state you've missed while coding. How's that any
different from "blowing up the heap"? I hope no-one will suggest catching ISE
and "trying again later".

The bottom line is that it should not be a substitute for a good battery of
rigorous tests.

If you want a guard dog, you can always have an atomic counter that keeps an eye
on the total number of outstanding sends. Increment it each time a sendX has
returned a CF. Decrement when the returned CF has completed. If at any time the
number goes above a certain threshold, throw an exception of your choice, log
the thing, send an important email, etc.
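
A minimal sketch of such a guard (the class and its names are hypothetical; the
only assumption about the API is that every sendX returns a CompletableFuture):

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical guard: counts sends that have returned a CF but have not
    // completed yet, and complains when the count exceeds a threshold
    final class OutstandingSendGuard {

        private final AtomicInteger outstanding = new AtomicInteger();
        private final int threshold;

        OutstandingSendGuard(int threshold) { this.threshold = threshold; }

        <T> CompletableFuture<T> track(CompletableFuture<T> cf) {
            int n = outstanding.incrementAndGet();
            // decrement when the send completes, normally or exceptionally
            cf.whenComplete((result, error) -> outstanding.decrementAndGet());
            if (n > threshold) {
                // or log the thing, send an important email, etc.
                throw new IllegalStateException("outstanding sends: " + n);
            }
            return cf;
        }
    }

which you would then use as guard.track(webSocket.sendText(...)) around every
send.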

I suppose the vast majority of users will either use the one-by-one mode or
will initially request Long.MAX_VALUE messages, thus reducing everything to a
simple callback push mode.

Queueing will work for both cases, while a "one-outstanding-write" policy will
only work for the former.
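
To illustrate the push-mode case, here is a rough sketch of an echoing
listener (the Listener callbacks and the request(n)/sendText signatures are
assumptions made for illustration, not quotes of the proposed API): it
requests everything up front and echoes each Text message back. With queueing,
each echo is simply enqueued behind the previous ones; with one outstanding
write, the next echo would fail whenever the previous one has not completed
yet.

    import java.util.concurrent.CompletionStage;

    // Push mode: request Long.MAX_VALUE messages in onOpen and echo each
    // incoming Text message back with a sendText of its own
    class EchoListener implements WebSocket.Listener {

        @Override
        public void onOpen(WebSocket webSocket) {
            webSocket.request(Long.MAX_VALUE);
        }

        @Override
        public CompletionStage<?> onText(WebSocket webSocket,
                                         CharSequence data,
                                         boolean last) {
            webSocket.sendText(data, last); // may outpace earlier completions
            return null;
        }
    }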

>> 4. I don't think it somehow conflicts with the back-pressure we have in the API.
>> After all, it's up to a user how many outstanding writes they want to have. The
>> implementation controls *the incoming flow*, not allowing unrequested messages
>> to be read/cached/stored. Well, it's a user's responsibility to keep an eye on
>> their outgoing queue to not allow it to overflow. After all, the user is
>> provided with everything to do this in a non-blocking async fashion!
>> 
>> I would appreciate hearing from you on this.
> 
> With this bullet you are basically saying that it's applications'
> responsibility to handle A) non-completed sequential writes and B)
> concurrent writes.

Sorry Simone, could you please elaborate on what a "non-completed sequential
write" is? As for concurrent writes, the answer is no. Maybe this requires a
bit of spec clarification; we'll see. The point is that Text and Binary
messages are very simple. They work as a simple FIFO. If the app ensures the
sendX calls are invoked in a particular order:

   sendText(A), sendText(B), sendText(C), ...

then the messages will be sent in the same order:

   A, B, C...

Ping, Pong and Close are, as you know, a bit different. These are so-called
"control" messages. They can be interjected in between the frames of a
"non-control" message (e.g. Text or Binary).

I'm sure the API should be open for this behaviour. And in this case the only
way to enforce a particular order is to correlate sends with the completions
of previous ones. Not returns of CFs, but completions of CFs. For example, if
I want to make sure a Close message is sent *after* a huge Text, then I do
this:

    webSocket.sendText("In the beginning God created the heaven and the earth...")
             .thenCompose(WebSocket::sendClose);

rather than this:

    webSocket.sendText("In the beginning God created the heaven and the earth...");
    webSocket.sendClose();

With one outstanding write we close the door to the possibility of interjecting.
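
For contrast, interjecting is exactly what one may want for a Ping in the
middle of a huge Text (again a sketch; sendPing taking a ByteBuffer is an
assumption):

    webSocket.sendText("In the beginning God created the heaven and the earth...");
    // a control message: may legitimately be interjected by the implementation
    // in between the frames of the huge Text message above
    webSocket.sendPing(ByteBuffer.wrap(new byte[] {0x1}));

Under one outstanding write the sendPing would have to wait for (or would be
rejected until) the completion of the sendText.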

> Having said that, I think this is more an implementation problem than
> an API problem.
> The API supports both a model with an "at most one outstanding write"
> as well as other models, such as:
> 
> * allow only 1 outstanding write from the same thread, but allow
> concurrent writes
> * allow a bounded number of outstanding writes from any thread
> * allow an unbounded number of outstanding writes from any thread
> 
> FYI, we had this problem in javax.websocket, where Jetty chose one
> policy, but Tomcat another.
> We wrote a chat application using the standard APIs against Jetty, and it
> worked fine; deployed in Tomcat, it failed.
> This resulted in changing the application to assume the minimum
> behavior (i.e. at most 1 outstanding write).
> 
> If, for any reason, you have to change the implementation and go back
> to an at most 1 outstanding write model, you will break all
> applications out there.
> 
> Again FYI, API design is not only about the signature of the API, but
> also about their semantic.

Exactly. I've never said otherwise. An API is an interconnected system of
types with well-defined semantics. Ideally, implementations should be
arbitrarily substitutable, akin to an "is-a" relationship in the spirit of
the LSP.

> In Servlet 3.1 the semantic of
> Servlet[Input|Output]Stream.[read|write]() was changed (from blocking
> to non-blocking), despite no signature changes in the API.
> This resulted in breaking all Servlet Filter implementations that were
> wrapping the request or the response to process the content.

This is a very good example for all the API designers out there, thanks!

> I think that having a stricter semantic such as at most 1 outstanding
> write helps since applications will know exactly what to expect and
> will be forced to think about handling non-completed sequential writes
> and concurrent writes, and do that properly, and be in full control.
> 
> A wider semantic of unbounded outstanding writes may lead to writing
> applications that don't care about non-completed sequential writes and
> concurrent writes, and that may possibly blow up the heap in production.
> The fix is the same as above, write the application thinking about this
> properly, but now there is double work: the application queueing, and
> the implementation queueing.
> 
> Alternatively, you can make this semantic explicitly configurable in
> the Builder ?

I wouldn't engage with this right now.

Thanks a lot for your comments!
-Pavel


