Benefits of Rx, Without the Complexity

eric at kolotyluk.net eric at kolotyluk.net
Tue Jun 21 13:51:00 UTC 2022


Thank you for that Brian… when people started expounding on backpressure, it was as if they had invented flow control… whereas I had implemented X.25 (including HDLC) twice in the ’80s, on two different platforms…

 

I agree that we need to analyze the system to put the right constraints in the right places, but can this be automated more, such that it is more ‘magic’? Is anyone researching this? Not like a magic bullet, but well… I never thought cars could drive themselves… but they do…

 

Largely, I was hoping there were better ways to build systems that don’t thrash; thrashing is appalling behaviour I have seen before, even in systems I have built myself. I want systems that make better use of resources by default, so we don’t have to waste resources just to avoid thrashing. However, I believe Virtual Threads make better use of resources, much as Virtual Memory does, so we are definitely heading in the right direction.

 

I guess for now we need to continue to be aware. Several times I have seen the Loom advice not to use thread pools to limit concurrency with virtual threads, but to use semaphores instead, and for good reason. These are good lessons.
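
A minimal sketch of that semaphore approach (assuming a Loom EA / JDK 19+ build with virtual threads; the limit of 10 and doWork are placeholders for whatever scarce resource needs protecting):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class SemaphoreLimit {
        // Cap concurrent use of the scarce resource (e.g. database connections) at 10;
        // the virtual threads themselves stay cheap and unpooled.
        private static final Semaphore PERMITS = new Semaphore(10);

        public static void main(String[] args) {
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 1_000; i++) {
                    int task = i;
                    executor.submit(() -> {
                        PERMITS.acquire();      // blocks (cheaply) once 10 tasks hold permits
                        try {
                            doWork(task);       // placeholder for the resource-bound work
                        } finally {
                            PERMITS.release();
                        }
                        return null;            // Callable, so acquire() may throw InterruptedException
                    });
                }
            }   // close() waits for the submitted tasks to finish
        }

        private static void doWork(int task) {
            // stand-in for calling the rate-limited resource
        }
    }

The semaphore bounds the resource; the virtual threads just block on it, which is exactly what they are cheap at.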

 

Anyway, great answers, thanks.

 

Cheers, Eric

 

From: Brian Goetz <brian.goetz at oracle.com> 
Sent: June 21, 2022 6:11 AM
To: eric at kolotyluk.net; 'loom-dev' <loom-dev at openjdk.java.net>
Subject: Re: Benefits of Rx, Without the Complexity

 

"Backpressure" is just a fancy new term for the age-old concept of bounding resource utilization by stalling or refusing excessive requests.  Examples include: 

 - thread pools -- limit the number of concurrently executing tasks
 - semaphores -- limit the number of a critical resource (socket connections, open files, etc), by stalling incremental requests until the resource becomes free
 - producer/consumer with blocking queues -- when consumers are overloaded, stall the producers
 - queuing of socket connect requests in the OS, and refusing additional requests after the queue gets too long
 - various networking flow-control protocols using send credits, sliding windows, etc. (XMODEM, TCP)

These are often used in conjunction; a thread pool may have a fixed number of threads *and* a bounded queue for waiting tasks.  OSes queue socket requests until some limit, and then refuse incremental requests.  These are all techniques of slowly pushing the load back to the source.  
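
As a rough sketch of that combination (sizes arbitrary): a fixed pool of 8 threads with a queue bounded at 100 tasks, where a caller-runs rejection policy stalls the producer when both limits are hit:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class BoundedPool {
        public static void main(String[] args) {
            // At most 8 worker threads and at most 100 queued tasks.  When both limits
            // are hit, CallerRunsPolicy runs the task on the submitting thread, stalling
            // the producer -- the load is pushed back to the source.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    8, 8,                               // core and maximum pool size
                    0L, TimeUnit.MILLISECONDS,          // keep-alive (unused with a fixed size)
                    new ArrayBlockingQueue<>(100),      // bounded queue of waiting tasks
                    new ThreadPoolExecutor.CallerRunsPolicy());

            for (int i = 0; i < 10_000; i++) {
                int request = i;
                pool.submit(() -> handle(request));     // slows down once the queue fills
            }
            pool.shutdown();
        }

        private static void handle(int request) {
            // placeholder for real work
        }
    }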

But, backpressure is not magic; it still requires analysis and control.  Before you can use it effectively, you have to identify what the resources are that might get over-consumed, and choose a strategy for managing it.  Obvious strategies include "don't call me, I'll call you", queue size limits, etc.  But none of these are applied magically; they require you to configure them.  Reactive's contribution, such as it is, is to put these concepts in the foreground, where users are reminded to think about them.  

All of these techniques are still available to us.  But we have to identify what resources are in danger of being over-consumed, and protect them appropriately.  The concepts for doing so are older than Java -- semaphores, blocking queues, etc.

On 6/20/2022 2:33 PM, eric at kolotyluk.net wrote:

After tinkering with Loom and learning a lot of revised synchronous-style practices, I was recently watching another presentation on Reactive Programming, and it got me thinking about how some asynchronous practices, such as backpressure, could be expressed in the synchronous world of Virtual Threads, Structured Concurrency, etc.

 

After working with Akka in Scala for years, using reactive practices, I had a sense that it might be possible to build applications/services that would not thrash. They would go up to 100% utilization, without thrashing, and then just refuse more work. Sorry mate, I won’t do that now, maybe talk to the Load Balancer about spawning some more siblings… I don’t know how true this sense is, only that it’s a hopeful sense.

 

While I have dabbled with java.util.concurrent.Flow using Virtual Threads successfully, I still find the cognitive load of the Rx APIs higher than I would like, but Rx is well disciplined and has many other benefits, such as backpressure.
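
A minimal sketch of that kind of Flow usage (buffer size and counts are arbitrary): a SubmissionPublisher delivering to a subscriber on virtual threads, where submit() blocks the producer once the subscriber's buffer fills -- that blocking is the backpressure:

    import java.util.concurrent.Executors;
    import java.util.concurrent.Flow;
    import java.util.concurrent.SubmissionPublisher;

    public class FlowOnVirtualThreads {
        public static void main(String[] args) throws InterruptedException {
            // Deliver items to subscribers on virtual threads, with a buffer of 16 items
            // per subscriber; submit() blocks while the buffer is full.
            try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>(
                    Executors.newVirtualThreadPerTaskExecutor(), 16)) {

                publisher.subscribe(new Flow.Subscriber<Integer>() {
                    private Flow.Subscription subscription;

                    @Override public void onSubscribe(Flow.Subscription subscription) {
                        this.subscription = subscription;
                        subscription.request(1);        // pull one item at a time
                    }
                    @Override public void onNext(Integer item) {
                        System.out.println("consumed " + item);
                        subscription.request(1);        // ask for the next item
                    }
                    @Override public void onError(Throwable t) { t.printStackTrace(); }
                    @Override public void onComplete()        { System.out.println("done"); }
                });

                for (int i = 0; i < 100; i++) {
                    publisher.submit(i);                // stalls while the buffer is full
                }
            }                                           // close() signals onComplete
            Thread.sleep(1_000);                        // crude wait for the async delivery
        }
    }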

 

In the future, can we build simpler synchronous APIs with the benefits of asynchronous APIs such as Rx, leveraging the scalability/throughput of Virtual Threads and the discipline of Structured Concurrency? I guess I am just lazy, and I don’t want to think harder than I have to.

 

Cheers, Eric

 


