RFR: 6478546: FileInputStream.read() throws OutOfMemoryError when there is plenty available [v2]

Brian Burkhalter bpb at openjdk.org
Wed Jul 26 00:03:41 UTC 2023


On Tue, 25 Jul 2023 05:51:26 GMT, Alan Bateman <alanb at openjdk.org> wrote:

>> It's based on micro-benchmarks. Having the loops in Java reduces throughput but allocating memory using `malloc(len)` also reduces throughput as `len` gets larger and this threshold appears to balance the two.
>
> Are these micro-benchmarks dropping the file system cache so that there is real file I/O? I wasn't expecting to see a buffer larger than 1 MB, so I'm curious what the benchmarks say.

There was no dropping of the file system cache. I would expect that to have more of an effect on write throughput than on read throughput.

I have since modified the benchmark to invoke `FileChannel::force` in a `@TearDown` method at the invocation level. That is not really copacetic, but it appears to give more believable results.
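For illustration, a minimal sketch of the `FileChannel::force` call in question (the JMH `@TearDown(Level.Invocation)` wrapper is omitted so the snippet runs standalone; the class name and file sizes here are invented for the example):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ForceDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("force-demo", ".bin");
        try (FileChannel fc = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            // Write 64 KiB of data.
            fc.write(ByteBuffer.wrap(new byte[64 * 1024]));
            // Force content and metadata to the storage device, so a
            // subsequently timed operation is not served purely from dirty
            // pages in the file system cache. In the benchmark this call
            // would live in a JMH @TearDown(Level.Invocation) method.
            fc.force(true);
        }
        System.out.println(Files.size(tmp));
        Files.delete(tmp);
    }
}
```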

New runs show that if the malloc limit is 1 MB, the revised code underperforms the current code until the array size surpasses 1.5 MB. If the limit is 1.5 MB, however, the new code improves throughput for all sizes tested. Based on this I reduced the malloc limit to 1572864 (1.5 MB).
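The actual change loops in native code over a buffer whose allocation is capped, but the idea can be sketched at the Java level. This is an illustrative sketch only, not the JDK implementation; the constant name and helper are invented for the example:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ChunkedReadSketch {
    // Hypothetical cap mirroring the native malloc limit discussed above.
    static final int MALLOC_LIMIT = 1572864; // 1.5 MB

    // Fill dst by issuing reads of at most MALLOC_LIMIT bytes each, so no
    // single transfer requires a native allocation larger than the cap.
    static int readFully(InputStream in, byte[] dst) throws IOException {
        int total = 0;
        while (total < dst.length) {
            int chunk = Math.min(dst.length - total, MALLOC_LIMIT);
            int n = in.read(dst, total, chunk);
            if (n < 0) {
                break; // end of stream
            }
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] src = new byte[4 * 1024 * 1024]; // 4 MiB, well above the cap
        byte[] dst = new byte[src.length];
        int n = readFully(new ByteArrayInputStream(src), dst);
        System.out.println(n);
    }
}
```

The trade-off being measured is visible here: a smaller cap means more loop iterations per large read, while a larger cap means bigger (and, per the benchmarks, slower) individual allocations.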

These results are for macOS only thus far.

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/14981#discussion_r1274227677
