RFR(s): 8150460: (linux|bsd|aix)_close.c: file descriptor table may become large or may not work at all

Thomas Stüfe thomas.stuefe at gmail.com
Wed Mar 2 09:43:54 UTC 2016

Hi Andrew,

On Wed, Mar 2, 2016 at 9:28 AM, Andrew Haley <aph at redhat.com> wrote:

> On 01/03/16 10:20, Dmitry Samersoff wrote:
> > The bug: https://bugs.openjdk.java.net/browse/JDK-8150460
> > The Webrev:
> > http://cr.openjdk.java.net/~stuefe/webrevs/8150460-linux_close-fdTable/webrev.00/webrev/
>
> Why use calloc here?  Surely it makes more sense to use
> mmap(MAP_NORESERVE), at least on linux.  We're probably only
> going to be using a small number of FDs, and there's no real
> point reserving a big block of memory we won't use.
> Andrew.
I am aware of this. I do not allocate all the memory in one go; I allocate
on demand in n-sized steps - that was the point of implementing it as a
sparse array.
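For illustration, an on-demand sparse table along these lines (all names and
sizes here are hypothetical, not the actual webrev code) could look like:

```c
#include <stdlib.h>

/* Hypothetical sketch of a sparse fd table: an array of slab pointers,
 * each slab covering SLAB_SIZE consecutive fd values. A slab is only
 * calloc'ed when an fd in its range is first seen, so memory use grows
 * in n-sized steps rather than being reserved up front. */
#define SLAB_SIZE 64
#define MAX_SLABS 1024          /* covers fds 0 .. 64*1024-1 */

typedef struct { int in_use; } fd_entry_t;

static fd_entry_t *slabs[MAX_SLABS]; /* all NULL until needed */

static fd_entry_t *fd_entry_for(int fd) {
    if (fd < 0 || fd >= SLAB_SIZE * MAX_SLABS)
        return NULL;
    int s = fd / SLAB_SIZE;
    if (slabs[s] == NULL)
        slabs[s] = calloc(SLAB_SIZE, sizeof(fd_entry_t));
    return slabs[s] ? &slabs[s][fd % SLAB_SIZE] : NULL;
}
```

A lookup for fd 100 would allocate only slab 1 (fds 64..127), leaving the
other slab pointers NULL.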

Changing my implementation to mmap(MAP_NORESERVE) would not make the code
simpler.

I would have to commit the memory before usage. So, I would have to put some
committed-pages management atop the reserved range to keep track of which
pages are committed and which are not. File descriptors arrive in no
predictable order (usually sequentially, but there is no guarantee), so I
cannot use a simple watermark model either, where I commit pages up to the
highest file descriptor seen. I mean I could, but that would be potentially
wasteful if there are big holes in the file descriptor value range.

In the end I would end up with exactly the same implementation I have now,
only with mmap(MAP_NORESERVE) swapped in for calloc() and with a larger
reserved memory size for this process. And arguably, an even more
complicated one.
