RFR 9: 8077350 Process API Updates Implementation Review

Peter Levart peter.levart at gmail.com
Tue May 19 08:34:42 UTC 2015


Hi Roger,

On 05/18/2015 11:49 PM, Roger Riggs wrote:
> The earlier spec specifically used the common pool but from a 
> specification point of view
> it allows more flexibility in the implementation not to bind onExit to 
> the commonPool.
> It's not clear that the attributes of threads used to handle output 
> or manage Processes
> are the same as those of the commonPool.
>
> The default sizing of the commonPool is based on the number of processors
> and does not grow (except via the ManagedBlocker mechanism).
> The number of threads waiting for processes is unrelated to the number 
> of CPUs.
> It is simpler, from a maintenance point of view, to use a separate 
> pool and not depend on consistent
> assumptions between Process handling and the commonPool.

I'm not advocating using the commonPool for waiting on the process 
exit(s). We have special low-stack-size reaper threads for that 
purpose, and in the future there could perhaps be just a single thread. 
I'm also not advocating using the commonPool in the default 
Process.onExit() implementation, where the thread must wait for the 
process to exit. I'm just questioning the use of commonPool vs. 
AsyncExecutor in ProcessImpl.onExit() and ProcessHandleImpl.onExit(), 
where an async CompletableFuture stage is inserted to avoid exposing 
the internal low-stack-size threads.

The code that is executed by the commonPool (or AsyncExecutor) for 
onExit() continuations doesn't wait for the process to exit. The number 
of processes spawned doesn't matter directly; only the rate of process 
exits does. The code executed will be cleanup-type / post-exit-action 
code, which seems to be typical commonPool work. Code dealing with 
process input/output will typically execute concurrently with the 
process, while it is still running, and not in onExit() continuations.
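
For illustration, a typical usage might look something like this (a 
sketch only, assuming the proposed Process.onExit() returning a 
CompletableFuture<Process>):

     import java.io.BufferedReader;
     import java.io.IOException;
     import java.io.InputStreamReader;

     public class OnExitUsage {
         public static void main(String[] args) throws IOException {
             Process p = new ProcessBuilder("ls", "-l").start();

             // process output is consumed concurrently, while the process
             // is still running, on a thread the application controls
             Thread reader = new Thread(() -> {
                 try (BufferedReader in = new BufferedReader(
                         new InputStreamReader(p.getInputStream()))) {
                     String line;
                     while ((line = in.readLine()) != null) {
                         System.out.println(line);
                     }
                 } catch (IOException e) {
                     e.printStackTrace();
                 }
             });
             reader.start();

             // the onExit() continuation only runs short post-exit /
             // cleanup-type code
             p.onExit()
              .thenAccept(proc ->
                      System.out.println("exit status: " + proc.exitValue()))
              .join();  // for the demo only, so the JVM doesn't exit early
         }
     }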

On 05/18/2015 11:49 PM, Roger Riggs wrote:
> One alternative is for onExit to return a CF subclass that maps 
> synchronous methods
> to the corresponding async methods.  Each call to onExit would return 
> a new proxy
> that would delegate to a new CF created from the 
> ExitCompletion.handle(...).
> For all requests that involved calling app code it would delegate to 
> the corresponding
> async method.  For example,
>
>         CompletableFuture<ProcessHandle> cf =
>                 ProcessHandleImpl.completion(getPid(), false)
>                     .handle((exitStatus, unusedThrowable) -> this);
>         return new CompletionProxy(cf);
>
> The CompletionProxy can ensure that all CFs that call application code 
> are Async.
> It would override the synchronous methods and redirect to the 
> corresponding Async method.
>
>         public <U> CompletableFuture<U> thenApply(Function<? super 
> ProcessHandle, ? extends U> fn) {
>             return thenApplyAsync(fn);
>         }
>
> The CompletionProxy can also supply the Executor as needed, unless 
> supplied by the application.
>
> It adds a simple delegation but removes the extra task dispatch except 
> where needed.
>
> Does that work as you would expect?
>
> Thanks, Roger

I thought about that too. Unfortunately, there are also the 
xxxAsync(..., Executor) methods, which appear to attach asynchronous 
continuations. And they do, if the passed-in Executor dispatches to 
separate threads. But users can pass in a "SynchronousExecutor" like 
the following:

     import java.util.concurrent.Executor;

     // runs each submitted task directly on the calling thread
     public class SynchronousExecutor implements Executor {
         @Override
         public void execute(Runnable command) {
             command.run();
         }
     }

...which makes the xxxAsync(..., Executor) methods equivalent to the 
synchronous-continuation-attaching methods.
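
So, for example, if the internal completion were signalled directly by 
the reaper thread (i.e. without the inserted async stage), the 
following "asynchronous" continuation would actually run user code on 
that internal thread (illustration only; handle is some ProcessHandle 
and cleanup is arbitrary user code):

     // despite the "Async" in the name, cleanup runs on whatever thread
     // completes the future -- here, the internal reaper thread
     handle.onExit().thenApplyAsync(ph -> cleanup(ph), new SynchronousExecutor());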

It would be nice to have an API that guaranteed a division of threads 
between internal code and user code, but such an API would have to be 
built around something other than the Executor interface. I think what 
we have with the insertion of an async stage is the best we can do with 
the CompletableFuture API.
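
For reference, by "insertion of an async stage" I mean roughly the 
shape of the snippet you quoted, with handle(...) replaced by its 
*Async variant (a sketch, not the exact webrev code; which executor 
backs the async stage is exactly the question below):

        CompletableFuture<ProcessHandle> cf =
                ProcessHandleImpl.completion(getPid(), false)
                    // the async stage shields the internal low-stack-size
                    // thread from running user continuations directly
                    .handleAsync((exitStatus, unusedThrowable) -> this);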

The only remaining question is whether to use the AsyncExecutor in the 
ProcessImpl and ProcessHandleImpl .onExit() methods for executing the 
inserted async stage (and any synchronous continuations of it), or only 
in the default implementation of Process.onExit() for waiting on 
process exit. The AsyncExecutor doesn't pre-start any threads and, when 
not used, represents no system overhead. That is good if it is normally 
not used, and it would make it a good candidate for the default 
Process.onExit() implementation, which is only exercised by uncommon 
custom Process implementations. Using it in the ProcessImpl and 
ProcessHandleImpl .onExit() methods means it is used after every 
onExit() call, albeit potentially only briefly, so chances are that 
only a single thread will ever be started, dying after 60s of 
inactivity. commonPool() is no different from that perspective, but, 
being a common pool, there is a chance it is also used for other 
purposes, so potentially fewer thread starts and stops.

There's also a lack of uniformity if the AsyncExecutor is used, which 
can have surprising effects. For example, a user switches from a 
synchronous onExit() continuation to an asynchronous one and, as a 
result, the executor changes, which can result in different behavior.
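
Concretely, with the AsyncExecutor backing the inserted stage 
(illustration only; fn is arbitrary user code):

     handle.onExit().thenApply(fn);       // fn runs on an AsyncExecutor thread
     handle.onExit().thenApplyAsync(fn);  // fn runs on a commonPool thread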

I would be *for* using the AsyncExecutor in Process[Handle]Impl if it 
could be set as the default Executor for any continuation transitively 
derived from the returned CompletableFuture. The jsr166 team has 
announced changes that would allow that (subclassing CF so that the 
methods return the subclass), so this could perhaps be the way to go...
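
Something along these lines, where the hook names are just my guess at 
what such subclassing support could look like (not a committed API):

     import java.util.concurrent.CompletableFuture;
     import java.util.concurrent.Executor;

     public class OnExitCompletableFuture<T> extends CompletableFuture<T> {
         // placeholder standing in for the internal AsyncExecutor
         private static final Executor ASYNC_EXECUTOR =
                 command -> new Thread(command).start();

         // hypothetical hook: the executor used by xxxAsync(...) methods
         // that don't take an explicit Executor
         public Executor defaultExecutor() {
             return ASYNC_EXECUTOR;
         }

         // hypothetical hook: derived stages are created through this
         // factory, so they keep the subclass and its default executor
         public <U> CompletableFuture<U> newIncompleteFuture() {
             return new OnExitCompletableFuture<>();
         }
     }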

Regards, Peter



