RFR: 8359706: Add file descriptor count and maximum limit to VM.info [v2]
Joel Sikström
jsikstro at openjdk.org
Wed Oct 29 14:51:44 UTC 2025
On Wed, 29 Oct 2025 12:32:51 GMT, Kieran Farrell <kfarrell at openjdk.org> wrote:
>> Currently, it is only possible to read the number of open file descriptors of a Java process via the `UnixOperatingSystemMXBean`, which is only accessible via JMX-enabled tools. To improve serviceability, it would be beneficial to be able to view this information from jcmd VM.info output or hs_err_pid crash logs. This could help diagnose resource exhaustion and troubleshoot "too many open files" errors in Java processes on Unix platforms.
>>
>> This PR adds reporting of the current open file descriptor count to both jcmd VM.info output and hs_err_pid crash logs by refactoring the native JNI logic from `Java_com_sun_management_internal_OperatingSystemImpl_getOpenFileDescriptorCount0` of the `UnixOperatingSystemMXBean` into hotspot. Apple's API for retrieving the open file descriptor count fills an array with the actual FDs in order to determine the count. To avoid using `malloc` to store this array in a potential signal handling context where stack space may be limited, the Apple implementation instead allocates a fixed 32KB buffer on the stack to store the open FDs and only reports the result if the count is below the maximum (1024 FDs). This should cover the majority of use cases.
>
> Kieran Farrell has updated the pull request incrementally with one additional commit since the last revision:
>
> updates
Just FYI: you can easily test what's being printed in hs_err files by using the `-XX:ErrorHandlerTest=14` flag, which crashes the VM in a "controlled way".
> To improve serviceability, it would be beneficial to be able to view this information from jcmd VM.info output or hs_err_pid crash logs. This could help diagnose resource exhaustion and troubleshoot "too many open files" errors in Java processes on Unix
I'm trying to understand what functionality you're after here. Is the specific number of open file descriptors important, or is it enough to just report that there are "a lot" of file descriptors open? If the latter, @tstuefe's suggested approach of looking at FDSize in `/proc/self/status` and not reporting an exact number when it is above some limit is a good compromise, I think.
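For reference, a rough sketch of what that FDSize-based fallback could look like on Linux (the helper name and the -1-on-failure convention here are just illustrative, not code from the PR):

```c++
// Illustrative sketch: read FDSize from /proc/self/status. Note that FDSize is
// the size of the process's FD table (it only grows), so it is an upper-bound
// proxy rather than the exact number of currently open descriptors.
#include <stdio.h>
#include <string.h>

static long read_fd_table_size() {
  FILE* f = fopen("/proc/self/status", "r");
  if (f == NULL) return -1;                 // cannot read, report nothing
  char line[256];
  long fd_size = -1;
  while (fgets(line, sizeof(line), f) != NULL) {
    if (strncmp(line, "FDSize:", 7) == 0) {
      sscanf(line + 7, "%ld", &fd_size);    // value follows the "FDSize:" tag
      break;
    }
  }
  fclose(f);
  return fd_size;
}
```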
I couldn't find any good documentation for Mac's `proc_pidinfo`, but looking at the source code it's not an expensive operation at all (https://github.com/apple/darwin-xnu/blob/2ff845c2e033bd0ff64b5b6aa6063a1f8f65aa32/bsd/kern/proc_info.c#L486). I know too little about AIX to have an opinion, but if it's an issue there, maybe we can skip reporting it for now?
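For context, my understanding of the macOS call is roughly the following (a sketch only; the 1024 cap mirrors the approach described in the PR description and is not something libproc requires):

```c++
// Illustrative sketch: count open FDs with proc_pidinfo(PROC_PIDLISTFDS) using
// a fixed stack buffer, and give up if the buffer might have been too small.
#include <libproc.h>
#include <sys/proc_info.h>
#include <unistd.h>

static int count_open_fds_capped() {
  const int MAX_SAFE_FDS = 1024;
  struct proc_fdinfo fds[MAX_SAFE_FDS];            // fixed stack buffer, no malloc
  int bytes = proc_pidinfo(getpid(), PROC_PIDLISTFDS, 0, fds, (int)sizeof(fds));
  if (bytes <= 0) {
    return -1;                                     // call failed, report nothing
  }
  int count = bytes / PROC_PIDLISTFD_SIZE;         // one proc_fdinfo per open FD
  if (count >= MAX_SAFE_FDS) {
    return -1;                                     // result may have been truncated
  }
  return count;
}
```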
src/hotspot/os/bsd/os_bsd.cpp line 2511:
> 2509: const int MAX_SAFE_FDS = 1024;
> 2510: struct proc_fdinfo fds[MAX_SAFE_FDS];
> 2511: struct proc_bsdinfo bsdinfo;
Unused.
src/hotspot/os/bsd/os_bsd.cpp line 2515:
> 2513: kern_return_t kres;
> 2514: int res;
> 2515: size_t fds_size;
Unused.
-------------
PR Review: https://git.openjdk.org/jdk/pull/27971#pullrequestreview-3393702318
PR Comment: https://git.openjdk.org/jdk/pull/27971#issuecomment-3461988789
PR Review Comment: https://git.openjdk.org/jdk/pull/27971#discussion_r2473317135
PR Review Comment: https://git.openjdk.org/jdk/pull/27971#discussion_r2473317397