8199791: (se) More Selector cleanup

David Lloyd david.lloyd at redhat.com
Tue Mar 20 16:19:01 UTC 2018


On Tue, Mar 20, 2018 at 10:46 AM, Alan Bateman <Alan.Bateman at oracle.com> wrote:
> On 20/03/2018 15:25, David Lloyd wrote:
>>
>> :
>> My understanding is that eventfd has lower overhead in the kernel (no
>> buffer space allocation for example, and dispatch is faster as a
>> result of this and other things) and certainly in userspace (using
>> only one FD instead of two).
>>
> The number of Selectors is usually small (at least since the use of
> temporary selectors was eliminated some time ago) so it would be good to get
> some data to see if this is worth doing.

Makes sense.  I can say a few things for sure though:

* Frameworks do use temp selectors to simulate blocking (it would be
pretty nice to have an API to replace this, FWIW), so depending on how
that is done this can mean many hundreds or thousands of selectors
lying around; halving the FD requirements of each would be a good
thing
* Writing to an empty pipe apparently causes a page to be allocated
for its buffer; on systems with bigger pages (64K, for example, as
found on some ARM-based systems) that can add up to a lot of pointless
overhead for what will usually be a single byte here or there
* I believe that sockets and socketpairs have similar behavior
* An eventfd can queue 2^64 events without any additional memory
overhead (though this is unlikely to be a problem in practice outside
of extreme environments)
* Eventfd allows all pending events to be cleared in a single operation

I don't think there is likely to be any substantial difference in
latency or event-delivery performance, based on what existing research
I can find.  But it does seem like an easy way to shed some weight
from a common process.  Collecting data will likely entail finding
some way to monitor kernel memory usage and allocation; I'm not quite
sure how to approach that yet.  I'll think about it some more.
-- 
- DML


More information about the nio-dev mailing list