RFR: 8330077: Allow max number of events to be buffered to be configurable to avoid OVERFLOW_EVENT [v2]

Brian Burkhalter bpb at openjdk.org
Thu Apr 11 22:03:42 UTC 2024


On Thu, 11 Apr 2024 21:53:14 GMT, Fabian Meumertzheim <duke at openjdk.org> wrote:

>> If you are going the system property route, then something like this (not tested) might be in order:
>> 
>> 
>>     private static final int SYSTEM_MAX_EVENT_LIST_SIZE = 100_000; // value TBD
>>     private static final int DEFAULT_MAX_EVENT_LIST_SIZE = 512;
>> 
>>     static final int MAX_EVENT_LIST_SIZE;
>>     static {
>>         String rawValue = GetPropertyAction.privilegedGetProperty
>>             ("jdk.nio.file.maxWatchEvents",
>>             String.valueOf(DEFAULT_MAX_EVENT_LIST_SIZE));
>>         int intValue;
>>         try {
>>             intValue = Math.min(Math.max(Math.toIntExact(Long.decode(rawValue)),
>>                                          DEFAULT_MAX_EVENT_LIST_SIZE),
>>                                 SYSTEM_MAX_EVENT_LIST_SIZE);
>>         } catch (NumberFormatException nfe) {
>>             intValue = DEFAULT_MAX_EVENT_LIST_SIZE;
>>         } catch (ArithmeticException ae) { // rawValue overflowed an int
>>             intValue = SYSTEM_MAX_EVENT_LIST_SIZE;
>>         }
>>         MAX_EVENT_LIST_SIZE = intValue;
>>     }
>
> Thanks for the suggestion, the validation was definitely lacking. I pushed a new version that uses `Math.clamp` to clamp to `[512, Integer.MAX_VALUE]` and thus at least cover the lower bound. 
> 
> I learned that e.g. NTFS allows up to 2^32 entries per directory (in theory, of course), so I don't see a strong argument in favor of an upper bound motivated by hardware. Do you have a particular bottleneck or problematic use case in mind?
> 
> What about the case where the event list size reaches `Integer.MAX_VALUE`? An OVERFLOW event could be better than an OOM from the list growing too large; what do you think?

I wrote a prototype of `WatchService` for macOS that used `kqueue(2)`, and it ultimately proved unviable because it exhausted the available file descriptors. That was what prompted my comment, but it is not really relevant here.

In the absence of such a real-world system constraint, some moderately large upper bound could perhaps be established based on empirical observation. If you are already reacting to the OVERFLOW events, I doubt we need to go all the way to `Integer.MAX_VALUE`. Maybe all that's needed is something large enough to reduce the frequency of OVERFLOW events. I don't have any insight into that.
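
For illustration, here is a minimal, untested sketch that combines the `Math.clamp` lower bound from the quoted reply with a moderately large upper bound. The property name `jdk.nio.file.maxWatchEvents` and the constant names are carried over from the earlier snippet, and the `100_000` cap is a placeholder; none of this necessarily matches the actual PR:

    import sun.security.action.GetPropertyAction; // JDK-internal helper, as in the snippet above

    class WatchLimits { // hypothetical holder class, for illustration only
        private static final int DEFAULT_MAX_EVENT_LIST_SIZE = 512;
        // Placeholder cap; the real value would come from empirical observation.
        private static final int SYSTEM_MAX_EVENT_LIST_SIZE = 100_000;

        static final int MAX_EVENT_LIST_SIZE = maxEventListSize();

        private static int maxEventListSize() {
            String rawValue = GetPropertyAction.privilegedGetProperty(
                "jdk.nio.file.maxWatchEvents",
                String.valueOf(DEFAULT_MAX_EVENT_LIST_SIZE));
            try {
                // Math.clamp(long, int, int) (JDK 21+) narrows to int and enforces both bounds,
                // so no separate ArithmeticException handling is needed.
                return Math.clamp(Long.decode(rawValue),
                                  DEFAULT_MAX_EVENT_LIST_SIZE,
                                  SYSTEM_MAX_EVENT_LIST_SIZE);
            } catch (NumberFormatException nfe) {
                return DEFAULT_MAX_EVENT_LIST_SIZE; // malformed value: fall back to the default
            }
        }
    }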

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/18739#discussion_r1561802263

