Feedback on Structured Concurrency (JEP 525, 6th Preview)

Jige Yu yujige at gmail.com
Mon Oct 13 23:48:00 UTC 2025


On Mon, Oct 13, 2025 at 10:37 AM <forax at univ-mlv.fr> wrote:

>
>
> ------------------------------
>
> *From: *"Jige Yu" <yujige at gmail.com>
> *To: *"Remi Forax" <forax at univ-mlv.fr>
> *Cc: *"loom-dev" <loom-dev at openjdk.org>
> *Sent: *Sunday, October 12, 2025 6:49:19 PM
> *Subject: *Re: Feedback on Structured Concurrency (JEP 525, 6th Preview)
>
> Thanks for the quick reply, Remi!
> I'll focus on discussing alternatives, which hopefully should also help
> clarify my concerns about the current API.
>
> On Sun, Oct 12, 2025 at 6:43 AM Remi Forax <forax at univ-mlv.fr> wrote:
>
>>
>>
>> ------------------------------
>>
>> *From: *"Jige Yu" <yujige at gmail.com>
>> *To: *"loom-dev" <loom-dev at openjdk.org>
>> *Sent: *Sunday, October 12, 2025 7:32:33 AM
>> *Subject: *Feedback on Structured Concurrency (JEP 525, 6th Preview)
>>
>> Hi Project Loom. First and foremost, I want to express my gratitude for
>> the effort that has gone into structured concurrency. API design in this
>> space is notoriously difficult, and this feedback is offered with the
>> greatest respect for the team's work and in the spirit of collaborative
>> refinement.
>>
>> My perspective is that of a developer looking to use Structured
>> Concurrency for common, IO-intensive fan-out operations. My focus is to
>> replace everyday async callback hell, or reactive chains with something
>> simpler and more readable.
>>
>> My feedback will lack depth in the highly specialized areas of concurrent
>> programming, and I acknowledge this viewpoint may bias it.
>> ------------------------------
>>
>> [...]
>>
>> *Suggestions for a Simpler Model*
>>
>> My preference is that the API for the most common use cases should be
>> more *declarative and functional*.
>>
>>    1.
>>
>>    *Simplify the "Gather All" Pattern:* The primary "fan-out and gather"
>>    use case could be captured in a simple, high-level construct. An average
>>    user shouldn't need to learn the wide API surface of StructuredTaskScope +
>>    Joiner + the lifecycles. For example:
>>    Java
>>
>>    // Ideal API for the 80% use case
>>    Robot robot = Concurrently.call(
>>        () -> fetchArm(),
>>        () -> fetchLeg(),
>>        (arm, leg) -> new Robot(arm, leg)
>>    );
>>
>>
>>
>> I'm curious how you want to type that API: does it work only for two
>> tasks, or do you have an overload for each arity (2 tasks, 3 tasks, etc.)?
>> And how are exceptions supposed to work, given that the type system of
>> Java is not able to merge type variables representing exceptions correctly?
>>
>
>
> Just a handful of overloads. Looking at Google's internal code base, up
> to 5 concurrent fan-out tasks probably covers 95% of use cases. The other
> 5% can either build their own helpers like:
>
> // MoreConcurrency
> <T1, T2, ..., T10, R> R concurrently(
>     Supplier<T1>, ..., Supplier<T10>,
>     Function10<T1, T2, ..., T10, R> combiner) {
>   return concurrently(  // just nest some concurrent calls
>      () -> concurrently(task1, task2, ..., task5, Tuple5::new),
>      () -> concurrently(task6, ..., task10, Tuple5::new),
>      (tuple1, tuple2) -> combiner.apply(tuple1.a(), tuple1.b(), ..., tuple2.e()));
> }
>
>
> Or, they can use the homogeneous mapConcurrent() gatherer, and deal with
> some type casting.
>
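> For illustration, here is a rough sketch of that casting workaround, using
> the JDK's Gatherers.mapConcurrent(); Arm, Leg, fetchArm() and fetchLeg() are
> the made-up types from the Robot example above, and the checked-exception
> handling is deliberately simplified:
>
> // Sketch of the "homogeneous mapConcurrent() plus casting" workaround.
> List<Callable<Object>> tasks = List.of(() -> fetchArm(), () -> fetchLeg());
> List<Object> results = tasks.stream()
>     .gather(Gatherers.mapConcurrent(tasks.size(), task -> {
>       try {
>         return task.call();
>       } catch (Exception e) {
>         throw new RuntimeException(e);  // checked exceptions handled here
>       }
>     }))
>     .toList();  // mapConcurrent() preserves encounter order
> Robot robot = new Robot((Arm) results.get(0), (Leg) results.get(1));
>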
> In terms of exceptions, directly propagating checked exceptions across
> threads may not always be desirable because their stack traces will be
> confusing. This is why, traditionally, Future throws ExecutionException
> with the stack traces chained together. It should be a conscious choice by
> the developer if they don't mind losing the extra stack trace.
>
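> As a reminder of the traditional behavior I'm referring to (a sketch only;
> fetchArmViaFuture() and the rewrapping choice are mine, Arm and fetchArm()
> come from the Robot example above):
>
> // The checked exception thrown by the task surfaces as the cause of
> // ExecutionException, with the caller-side and worker-side stack traces
> // chained together.
> static Arm fetchArmViaFuture(ExecutorService executor)
>     throws InterruptedException {
>   Future<Arm> future = executor.submit(() -> fetchArm());
>   try {
>     return future.get();
>   } catch (ExecutionException e) {
>     Throwable cause = e.getCause();  // the original checked exception
>     throw new IllegalStateException("fetchArm failed", cause);
>   }
> }
>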
> I was thinking of one of Google's internal compile-time plugins to help
> with exception propagation. But before I dive into the details, allow me to
> clarify the principle that I implicitly adhere to:
>
> *Any Checked Exception Must Be Explicitly Caught or Declared As Throws*
>
> There must be no secret pathway where it can become unchecked without the
> developer's explicit acknowledgement.
>
> And that is why I'm concerned about the current SC API, where a checked
> exception can be thrown in the Callable lambda without having to be caught,
> and then at the call site it has become unchecked.
>
> (well, except maybe InterruptedException, which probably shouldn't have
> required the developer to catch and handle)
>
> Now I'll explain what Google's internal plugin does. It's built around
> TunnelException, which is an unchecked exception. For streams, it's used
> like:
>
> try {
>   return list.stream().map(v -> tunnel(() -> process(v))).toList();
> } catch (TunnelException e) {
>   try {
>     // If you forgot a checked exception, compilation will FAIL
>
>     throw e.rethrow(IOException.class, InvalidSyntaxException.class);
>   } catch (IOException ioe) {
>     ...
>   } catch (InvalidSyntaxException ise) {
>      ...
>   }
> }
>
>
> At the javac level, tunnel() expects a Callable, which does allow checked
> exceptions to be magically "unchecked" as TunnelException. And at runtime,
> the TunnelException will be thrown as is by Stream.
>
> But the ErrorProne plugin will recognize that the special tunnel() call has
> suppressed a few checked exception types (in this case, IOException and
> InvalidSyntaxException). The plugin will then validate that, within the
> same lexical scope, rethrow() must be called with those two exception types.
> Thus compile-time enforcement of checked exceptions remains, and at the
> catch site we still get the compiler checks for checked exceptions we have
> forgotten to catch, or for checked exception types that cannot possibly be
> thrown.
>
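> To make the mechanism concrete, here is a rough sketch of the runtime half;
> the names mirror my example above, but the signatures are my approximation,
> not the actual internal API (the compile-time enforcement lives in the
> ErrorProne check and is not shown):
>
> public final class TunnelException extends RuntimeException {
>   TunnelException(Exception cause) { super(cause); }
>
>   // Rethrows the cause if it is one of the declared checked types;
>   // anything else was not supposed to be tunneled, so fail loudly.
>   // (Plus overloads for other arities.)
>   public <X1 extends Exception, X2 extends Exception> RuntimeException
>       rethrow(Class<X1> type1, Class<X2> type2) throws X1, X2 {
>     Throwable cause = getCause();
>     if (type1.isInstance(cause)) throw type1.cast(cause);
>     if (type2.isInstance(cause)) throw type2.cast(cause);
>     throw new AssertionError("unexpected tunneled exception", cause);
>   }
>
>   // Statically imported at the call sites above.
>   public static <T> T tunnel(Callable<T> task) {
>     try {
>       return task.call();
>     } catch (RuntimeException e) {
>       throw e;
>     } catch (Exception e) {
>       throw new TunnelException(e);
>     }
>   }
> }
>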
> I played with this idea inside Google, using it for this functional
> concurrently() flavor of structured concurrency. And it worked out ok:
>
> try {
>   return Concurrently.call(
>       () -> tunnel(() -> fetchArm()),
>       () -> tunnel(() -> fetchLeg()),
>       (arm, leg) -> new Robot(arm, leg)
>   );
> } catch (TunnelException e) {
>   throw e.rethrow(RpcException.class);
>   // or wrap it in an appropriate application-level exception
> }
>
>
> I'm not saying that Google's ErrorProne plugin should be adopted verbatim
> by Loom. I had actually hoped that the Java team, being the god of Java,
> could do more, giving us a more systematic solution to checked exceptions
> in structured concurrency. Google's ErrorProne plugin can be considered a
> baseline: at worst, this is what we can do.
>
> That said, it's understandable that this whole
> checked-exception-does-not-work-across-abstractions problem is considered
> an orthogonal issue and Loom decides it's not in scope.
>
> But even then, it's probably prudent to use Supplier instead of Callable
> for fork(), or in this hypothetical functional SC.
>
> The reason I prefer Supplier is that it's consistent with the established
> checked exception philosophy, and it forces the developer to handle the
> checked exceptions. Even if you do want to propagate them unchecked, it
> should be an explicit choice, either by using a plain old try-catch-rethrow,
> or the developer (or Project Loom) can provide an explicit "unchecker"
> helper to save boilerplate:
>
> public static <T> Supplier<T> unchecked(Callable<T> task) {
>   return () -> {
>     try {
>       return task.call();
>     } catch (RuntimeException e) {
>       throw e;
>     } catch (Exception e) {
>       throw new UncheckedExecutionException(e);
>     }
>   };
> }
>
> Then it's only a matter of changing the call site to the following:
>
>   return Concurrently.call(
>       unchecked(() -> fetchArm()),
>       unchecked(() -> fetchLeg()),
>       (arm, leg) -> new Robot(arm, leg));
>
>
> Exception management is really, really hard in Java, mostly because of
> checked exceptions and IDEs failing to implement the fact that exceptions
> should be caught as late as possible.
>
> You can use a Supplier or any other functional interface of
> java.util.function to force users to manually deal with exceptions; sadly,
> what I'm seeing is that my students write code that swallows exceptions or
> throws everything as a RuntimeException (the default behavior of Eclipse
> and IntelliJ, respectively).
>
> We already have a way to deal with exceptions in Executor/Callable/Future:
> the default behavior wraps every exception.
> Yes, you get only one part of the tunneling, and you have to write the
> rethrowing part yourself, but at least that default behavior is better than
> letting users deal with exceptions on their own.
>

Agreed that checked exception management is hard. But by using
Function/Supplier there is at least a consistency card to play: it will be
the same user experience as Stream.

Users still have to catch checked exceptions in the lambdas, but they have
to do that already with streams anyway.

And as you said, it's important to have the API integrate seamlessly with
the rest of Java. So sticking to the same precedent as Stream, and not
allowing checked exceptions to sneakily become unchecked, would seem like a
safe bet.
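
As a sketch of what that consistent, Supplier-based call site could look
like (Concurrently.call() is still the hypothetical API from my earlier
example, and IOException / UncheckedIOException are just placeholders for
whatever fetchArm() and fetchLeg() actually declare):

// Checked exceptions are handled inside the lambdas, exactly as they
// would be in a Stream pipeline.
Robot robot = Concurrently.call(
    () -> {
      try {
        return fetchArm();
      } catch (IOException e) {
        throw new UncheckedIOException(e);  // an explicit, visible choice
      }
    },
    () -> {
      try {
        return fetchLeg();
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    },
    (arm, leg) -> new Robot(arm, leg));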


>
>
>
>>    2.
>>
>>    *Separate Race Semantics into Composable Operations:* The "race"
>>    pattern feels like a distinct use case that could be implemented more
>>    naturally using composable, functional APIs like Stream gatherers, rather
>>    than requiring a specialized API at all. For example, if
>>    mapConcurrent() fully embraced structured concurrency, guaranteeing
>>    fail-fast and happens-before, a recoverable race could be written
>>    explicitly:
>>    Java
>>
>>    // Pseudo-code for a recoverable race using a stream gatherer
>>    <T> T race(Collection<Callable<T>> tasks, int maxConcurrency) {
>>        var exceptions = new ConcurrentLinkedQueue<RpcException>();
>>        return tasks.stream()
>>            .gather(mapConcurrent(maxConcurrency, task -> {
>>                try {
>>                    return task.call();
>>                } catch (RpcException e) {
>>                    if (isRecoverable(e)) { // Selectively recover
>>                        exceptions.add(e);
>>                        return null; // Suppress and continue
>>                    }
>>                    throw new RuntimeException(e); // Fail fast on non-recoverable
>>                }
>>            }))
>>            .filter(Objects::nonNull)
>>            .findFirst() // Short-circuiting and cancellation
>>            .orElseThrow(() -> new AggregateException(exceptions));
>>    }
>>
>>    While this is slightly more verbose than the JEP example, it's
>>    familiar Stream semantics that people have already learned, and it offers
>>    explicit control over which exceptions are recoverable versus fatal. The
>>    boilerplate for exception aggregation could easily be wrapped in a helper
>>    method.
>>
>>
>> Several points:
>> - I believe the current STS API has no way to deal with whether the
>> exception is recoverable or not, because it's far easier to do that at the
>> end of the callable.
>>   Your example becomes:
>>
>>     sts.fork(() -> {
>>       try {
>>         taskCall();
>>       } catch(RPCException e) {
>>         ...
>>       }
>>     });
>>
> Yes. Though my point is that this now becomes an *opt-in*. It should be
> an opt-out. Swallowing exceptions should not be the default behavior.
>
> And for the anySuccessfulResultOrThrow() joiner, I don't know that it helps
> much, because even if the exception is not recoverable, you'd still throw in
> the lambda, and it will still be swallowed by the joiner.
>
>
> anySuccessfulResultOrThrow() has the semantics of stopping the STS when
> one result is found.
> So some callables may never run, and you may never know whether a Callable
> failed or not.
>
> Given those semantics, not propagating the exceptions through the joiner
> seems the right thing to do; again, you are not even sure that all
> callables will run.
>

This is inconsistent with the fail-fast semantics we get from Streams
though.

list.parallelStream()
    .map(item -> processAndMayThrow(item))
    .findAny();


It still truthfully throws whatever exception was thrown before the first
successful item was found. Sure, you can't predict which items will be
processed. But failures that have already happened cannot be ignored.

Otherwise, unexpected fatal errors won't be reported until a success is
found. If the success takes a long time, or if it blocks and waits for
things, it can defeat fail-fast, or even hang the program. A throttling
error won't stop the program from flooding the server; a security-audit
error won't stop the subtasks from doing whatever bad things they would be
doing.


>
>> - You do not want to post the result/exception of a task into a
>> concurrent data structure. I think the idea of the STS API in this case is
>> to fork all the tasks and then look at all the subtasks.
>>
>
> It probably is. What I was trying to say is that the mapConcurrent()
> approach feels more natural, and safer.
>
>
>>   I believe it's more efficient, because there is no CAS to be done when
>> the main thread looks at the subtasks afterwards, compared to the joiner
>> maintaining a concurrent data structure.
>>
> This may be my blind spot. I've always assumed that structured concurrency
> where I need to fan out IO-blocking tasks isn't usually on the hot path.
> Even with virtual threads, isn't context switching still expensive enough
> that low-level micro-optimizations don't matter much?
>
>
>>    3.
>>
>>    *Reserve Complexity for Complex Cases:* The low-level
>>    StructuredTaskScope and its policy mechanism are powerful tools.
>>    However, they should be positioned as the "expert-level" API for building
>>    custom frameworks. Or perhaps just keep them in the traditional
>>    ExecutorService API. The everyday developer experience should be centered
>>    around simpler, declarative constructs that cover the most frequent needs.
>>
>>
>> For me, that's why you have an open Joiner interface for experts, and
>> already-available Joiners (like all.../any...) that are more for everyday
>> developers.
>>
>>
> Yeah. My point is that the current Joiner interface looks too much like an
> inviting couch: an average developer would immediately start to think, "oh,
> I have a use case I may be able to implement by overriding onComplete()!"
> But *you don't really need it*.
>
> As an analogy, consider the Stream API. Most of us would just use the
> Stream API, passing in lambdas, collectors, etc. We would not think of
> implementing our own BaseStream, which imho would have been an unfortunate
> distraction.
>
>
> Wrong guy, I've implemented quite a lot of spliterators (the abstraction
> used by the Stream implementation).
>
> More seriously, yes, you may implement onComplete() or the Predicate of
> allUntil() when you should not, but it's like implementing a Spliterator:
> not a lot of people will do it anyway, and it's clearly marked for experts.
>
>
>
> ------------------------------
> *InterruptedException*
> Lastly, my view of InterruptedException matches what you've said: it being
> a checked exception is unfortunate. It forces people to catch it, which
> then makes it easier to make the mistake of forgetting to re-interrupt the
> thread. And actually, few people even understand it (where it comes from,
> what triggers it, what needs to be done).
>
> Even if you do painstakingly declare throws InterruptedException all the
> way up the call stack, as the usual best practice suggests, the end result
> is still just as if it were unchecked in the first place, except that an
> unchecked exception wouldn't have demanded so much maintenance effort from
> developers: the top-level handler catches and handles it once and for all.
>
> So I'd consider it a plus if the SC API hides away InterruptedException.
> Heck, mapConcurrent() already hides it away without forcing users to catch
> it.
>
> If you expect average users to mishandle it, the better alternative may
> be to handle it for them already, including perhaps re-interrupting the
> thread and turning it into an UncheckedInterruptedException, so that most
> developers won't be given the chance to make the mistake.
>
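> A minimal sketch of what that handling could look like
> (UncheckedInterruptedException is a hypothetical class, not an existing JDK
> type, and blockingJoin() stands in for whatever blocking call the API makes):
>
> public final class UncheckedInterruptedException extends RuntimeException {
>   public UncheckedInterruptedException(InterruptedException cause) {
>     super(cause);
>   }
> }
>
> // Inside the API's blocking operation:
> try {
>   var result = blockingJoin();  // hypothetical blocking call
> } catch (InterruptedException e) {
>   Thread.currentThread().interrupt();  // re-assert the interrupt status
>   throw new UncheckedInterruptedException(e);
> }
>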
>
> Again, you can think that InterruptedException should not be a checked
> exception; I will go even further and say Java should not have checked
> exceptions. But this is not the kind of fix you should do in an API; it
> should be done at the language level.
> It's more important to have an API that integrates seamlessly with the
> rest of Java, hence using InterruptedException when a blocking join() is
> interrupted.
>

Personally, I like mapConcurrent()'s model. It doesn't throw
InterruptedException, but it interrupts the subtask threads and leaves it
to the subtask lambdas to catch and respond to interruption. That seems
reasonable to me.
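
For example, a subtask lambda under mapConcurrent() can respond to
interruption like this (a sketch only; stocks, maxConcurrency and the
blocking fetchQuote() call are made up for illustration):

// A subtask mapper that translates interruption itself.
var quotes = stocks.stream()
    .gather(Gatherers.mapConcurrent(maxConcurrency, symbol -> {
      try {
        return fetchQuote(symbol);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // re-assert the interrupt status
        throw new RuntimeException(e);       // one reasonable way to bail out
      }
    }))
    .toList();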

>
> regards,
> Rémi
>
>