Shouldn't InputStream/Files::readAllBytes throw something other than OutOfMemoryError?
Anthony Vanelverdinghe
anthony.vanelverdinghe at gmail.com
Sun Mar 12 14:24:06 UTC 2017
Files::readAllBytes is specified to throw an OutOfMemoryError "if an
array of the required size cannot be allocated, for example the file is
larger than 2GB". In Java 9, InputStream::readAllBytes now does the
same. However, this overloads the meaning of OutOfMemoryError: it means
either "the JVM is out of memory" or "the resulting array would require
long-based indices".
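Concretely, the specified behavior can be triggered with a sketch like
the following (the file path here is hypothetical):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class ReadAllBytesDemo {
        public static void main(String[] args) throws IOException {
            // Hypothetical path to a file larger than 2GB.
            Path bigFile = Paths.get("/data/bigfile.bin");
            // Per the specification, this throws OutOfMemoryError when
            // the required array cannot be allocated, e.g. because the
            // file exceeds 2GB, even if the heap has plenty of free space.
            byte[] bytes = Files.readAllBytes(bigFile);
            System.out.println("Read " + bytes.length + " bytes");
        }
    }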
In my opinion, this overloading is problematic, because:
- OutOfMemoryError has very clear semantics, and I don't see the link
between OOME and the fact that a resulting byte[] would need to be
larger than 2GB. If I have 5GB of free heap space and try to read a
3GB file, I'd expect something like an UnsupportedOperationException,
but definitely not an OutOfMemoryError.
- the former meaning is an actual Error, whereas the latter is an
Exception from which the application can recover.
- developers might be tempted to catch the OOME and retry reading the
file/input stream in chunks, no matter the cause of the OOME (see the
sketch after this list).
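To make the last point concrete, here is a minimal sketch of the
pattern a developer might be tempted to write (the consume method is
hypothetical); the catch block has no way to tell whether the OOME
means "array would exceed 2GB" or "heap genuinely exhausted":

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ChunkedFallback {
        static void process(Path file) throws IOException {
            try {
                byte[] all = Files.readAllBytes(file);
                consume(all, all.length);
            } catch (OutOfMemoryError e) {
                // Dangerous: this also swallows genuine heap
                // exhaustion, not just the "file larger than 2GB" case.
                try (InputStream in = Files.newInputStream(file)) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        consume(buf, n); // process the next chunk
                    }
                }
            }
        }

        // Hypothetical sink for the bytes that were read.
        static void consume(byte[] data, int len) { /* ... */ }
    }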
What was the rationale for using OutOfMemoryError here? And would it
still be possible to change this before Rampdown Phase 2?
Kind regards,
Anthony