NIO.2 and GC activity

Leon Finker leonfin at optonline.net
Thu Apr 16 18:30:43 PDT 2009


I agree there is no advantage for writes, and I don't advocate it there. On
reads, I disagree that the order can't be guaranteed or re-sequenced on
completion/failure. One can always re-establish the original order in which
the reads were scheduled simply by associating a counter with each scheduled
read and re-sequencing the completions (and failures) as they arrive. Yes, one
has to be aware that if a read is scheduled on a thread outside the Windows
I/O pool and that thread exits immediately, the I/O request will be canceled.
But that applies to the current NIO.2 implementation even without simultaneous
reads. I agree the code is somewhat involved and one has to deal with
concurrency, but it's an option for the core communication framework libraries
out there. This optimization is not for every case; it's for specific
communication/protocol patterns, so I'm not sure a comparison would be fair.
It's just a suggestion based on experience. If there is no need for this from
the community, then you're right that it can be added later.
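
Roughly, the re-sequencing I have in mind looks like the sketch below. It is
only an illustration: it assumes a hypothetical channel that allows several
reads to be outstanding at once (the current AsynchronousSocketChannel rejects
a second read with ReadPendingException), and the sequencer is the part that
restores the original scheduling order on completion or failure. The tag would
travel with each read, for example as its attachment.

import java.nio.ByteBuffer;
import java.util.Map;
import java.util.TreeMap;

public class ReadSequencer {

    private long nextTag;          // stamped on each read as it is scheduled
    private long nextToDeliver;    // the tag the application expects next
    private final TreeMap<Long, ByteBuffer> parked = new TreeMap<>();  // out-of-order completions

    // Called when a read is scheduled; the returned tag travels with the read.
    public synchronized long tagRead() {
        return nextTag++;
    }

    // Called from the completion handler; buffers are handed on in the
    // original scheduling order.
    public synchronized void completed(long tag, ByteBuffer buffer) {
        parked.put(tag, buffer);
        drain();
    }

    // A failed read must still consume its slot, otherwise later completions
    // would be held up forever.
    public synchronized void failed(long tag) {
        parked.put(tag, ByteBuffer.allocate(0));  // placeholder; real code would surface the error
        drain();
    }

    private void drain() {
        Map.Entry<Long, ByteBuffer> head;
        while ((head = parked.firstEntry()) != null && head.getKey() == nextToDeliver) {
            parked.pollFirstEntry();
            nextToDeliver++;
            deliver(head.getValue());
        }
    }

    private void deliver(ByteBuffer buffer) {
        // hand the buffer to the protocol / application layer
    }
}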

-----Original Message-----
From: Alan.Bateman at Sun.COM [mailto:Alan.Bateman at Sun.COM] 
Sent: Thursday, April 16, 2009 11:57 AM
To: Leon Finker
Cc: nio-dev at openjdk.java.net
Subject: Re: NIO.2 and GC activity

Leon Finker wrote:
> I realized after I sent the previous email that maybe this is not cross
> platform, so it may not be feasible. It is fully supported on Windows. Yes,
> I'm referring to buffer(s) always being available to the driver. The goal is
> to try and saturate the driver with outstanding read buffers (to a limit, of
> course) to minimize any buffering on its side. The buffering in the driver
> will in most cases happen while NIO.2 hands off the read buffer and the
> application does some simplistic processing on it. With more than one
> outstanding read buffer, this can be minimized.
Yes, I'm familiar with it, but it's a bit more complicated in that it
requires a guarantee on the order in which the I/O operations are
executed, and it also requires the application to be able to deal with
concurrent or even out-of-order notifications. There are a couple of
cases where the ordering can't be guaranteed. For example, I/O
operations are never initiated directly by non-pooled threads (because
the thread may terminate, causing any outstanding I/O operations it
initiated to abort; this is a Windows kernel thing). This and other cases
mean that it is possible for multiple reads on the same channel to be
initiated in the kernel in a different order than might be expected,
making it impossible to re-sequence the completion notifications.
Furthermore, it just doesn't make sense for write operations: for
example, a write may complete without writing all bytes, in which case a
queued write would corrupt the stream. I'm not against queuing of reads
on streams, but it may just be too advanced for many developers and
creates the potential for some nasty bugs. Maybe it is something to add
in the future, as an opt-in rather than the default. In any case, it
would be interesting to compare the performance against code that
initiates a read early in the completion handler.
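
For reference, the baseline I mean, a single outstanding read that is
re-initiated as early as possible inside the completion handler, looks roughly
like the sketch below (simplified; buffer management and error handling are
only hinted at):

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

public class EarlyReadHandler implements CompletionHandler<Integer, ByteBuffer> {

    private final AsynchronousSocketChannel channel;

    public EarlyReadHandler(AsynchronousSocketChannel channel) {
        this.channel = channel;
    }

    public void start(ByteBuffer buffer) {
        channel.read(buffer, buffer, this);
    }

    @Override
    public void completed(Integer bytesRead, ByteBuffer buffer) {
        if (bytesRead < 0) {
            return;  // end of stream; real code would close the channel
        }
        buffer.flip();
        ByteBuffer data = copyOf(buffer);    // take the data off the read buffer ...
        buffer.clear();
        channel.read(buffer, buffer, this);  // ... so the next read can be initiated immediately
        process(data);                       // then do the application-level processing
    }

    @Override
    public void failed(Throwable exc, ByteBuffer buffer) {
        // close the channel / report the error
    }

    private static ByteBuffer copyOf(ByteBuffer src) {
        ByteBuffer copy = ByteBuffer.allocate(src.remaining());
        copy.put(src);
        copy.flip();
        return copy;
    }

    private void process(ByteBuffer data) {
        // application / protocol processing
    }
}

Note that as soon as the next read has been initiated, its completion may be
dispatched to another pool thread, so process(...) has to tolerate that (or
hand the copies off to a single consumer).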

-Alan.



