Positive Feedback: Structured concurrency is amazing!

Mike Rettig mike.rettig at gmail.com
Thu Jul 30 18:53:17 UTC 2020


>I swapped the map out with a concurrent hashmap, wrapped the block with try-with-resources and a virtual executor service and threw the loading operations onto that pool.

Did you try this with a fixed-size native thread pool for your
executor service? If you are using a database connection pool, I'd
recommend creating a fixed-size thread pool equal in size to your
connection pool to get optimal performance. If you aren't using a
connection pool, be careful not to overwhelm the database with
requests when using many native or virtual threads. Database
connections are typically more expensive than threads (native or
virtual).
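
For reference, here's a rough sketch of what I mean. The connection
pool size, the keys, and loadValue(...) are just placeholders for
whatever your app actually does:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class FixedPoolLoader {

    // Size the pool to match the DB connection pool so no thread sits
    // idle waiting for a connection.
    static Map<String, String> loadAll(List<String> keys, int connectionPoolSize)
            throws InterruptedException {
        Map<String, String> results = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(connectionPoolSize);
        try {
            List<Callable<Object>> tasks = new ArrayList<>();
            for (String key : keys) {
                tasks.add(() -> { results.put(key, loadValue(key)); return null; });
            }
            pool.invokeAll(tasks); // blocks until every load has finished
        } finally {
            pool.shutdown();
        }
        return results;
    }

    // Stand-in for the per-key database read described in the original post.
    static String loadValue(String key) {
        return "value-for-" + key;
    }
}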

>Without loom the way to solve this would have involved weaving a shared IO Executor to the right location.

See above. Why can't you use an executor service backed by native threads?
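
Swapping between the two is basically a one-line change. Reusing the
names from the sketch above, and assuming a Loom build where
ExecutorService is AutoCloseable (the exact virtual-thread factory
name has varied between builds; recent mainline JDKs spell it
Executors.newVirtualThreadPerTaskExecutor()):

// Virtual-thread version, roughly the shape of the original change.
try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
    for (String key : keys) {
        pool.submit(() -> results.put(key, loadValue(key)));
    }
} // close() waits for the submitted tasks to finish

// Native-thread version: same shape, different factory.
try (ExecutorService pool = Executors.newFixedThreadPool(connectionPoolSize)) {
    for (String key : keys) {
        pool.submit(() -> results.put(key, loadValue(key)));
    }
}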

Mike

On Wed, Jul 29, 2020 at 1:16 PM Thomas May <tmay at clearwateranalytics.com> wrote:
>
> In one of the presentations it was mentioned that positive real-world feedback would be appreciated.  So that's what I'm here to give.
>
> While working with a real-world app, I was getting frustrated with how much time it was spending loading data.  There was a portion of code that sequentially loaded values from the database into a map.
>
> I swapped the map out for a ConcurrentHashMap, wrapped the block in try-with-resources around a virtual executor service, and threw the loading operations onto that pool.  All in all, it was about a three-line code change.
>
> The loading code went from roughly a 30-minute load time to a 1-minute load time with hardly any extra memory used (I didn't notice any).  The number of new system threads spawned was fairly limited, and it was downright fun watching the VM chew through the data.  This ended up saving me a bunch of time waiting for data to load while I did other work.
>
> Without Loom, solving this would have involved weaving a shared IO executor through to the right location.  The codebase is old, which makes doing that extremely daunting.
>
> Structured concurrency was fast, simple, and the right thing to do.  Making things run concurrently has never been this simple.  I look forward to the day when it stabilizes!
>
> Y'all have done a great job and hit on something really special with this.
>
> (This was with a JDK 15 build of Loom using the G1 garbage collector, if that helps at all.)
>

