sun.nio.ch.Util: Don't cache an unlimited amount of memory
Ariel Weisberg
ariel at weisberg.ws
Tue Dec 29 16:57:01 UTC 2015
Hi,
Channels.new[Input|Output]Stream can trigger this as well. It would be an
odd thing to put in the javadoc there, since the copying and caching are an
artifact of a specific channel implementation and not of channels in general.
One solution that would be nice is to break down large writes into
several smaller writes, so the per-thread buffer size is at least
bounded rather than set to the size of the largest I/O ever seen.
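Something along these lines, purely as a sketch (the helper class and the
1 MB cap are made up for illustration, not anything in the JDK):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.WritableByteChannel;

    final class ChunkedWriter {
        // Cap on how much is handed to the channel per call, so the temporary
        // direct buffer cached per thread never grows past this size.
        private static final int MAX_CHUNK = 1 << 20; // 1 MB, illustrative

        // Writes src completely, but never passes more than MAX_CHUNK bytes
        // to the channel in a single write() call.
        static void writeFully(WritableByteChannel ch, ByteBuffer src)
                throws IOException {
            while (src.hasRemaining()) {
                ByteBuffer slice = src.slice();
                slice.limit(Math.min(slice.remaining(), MAX_CHUNK));
                int n = ch.write(slice);          // at most MAX_CHUNK copied to a direct buffer
                src.position(src.position() + n); // advance the original buffer by what was written
            }
        }
    }

A wrapper like that can also live in application code today, without any
JDK change.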
For sockets this is usually not a problem, although I can imagine there
is some implementation somewhere that will care. For files it is a bit
more problematic, especially if O_SYNC or O_DIRECT are in use.
Filesystems and the layers below them also behave differently depending
on read/write size, so hiding the fact that something was one big I/O
can have an impact. My intuition is that anything based on breaking down
I/Os is not going to pass muster, but the option exists.
Could this be handled better with a different pooling approach, one that
is not per-thread above some size threshold? There are a bunch of
permutations that trade off allocation/deallocation cost against
concurrency, e.g. blocking threads until in-flight operations complete
and free up globally pooled buffers, in order to bound the peak footprint.
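For example, something shaped roughly like this (a sketch only, not the
existing BufferCache; the names, pool size and buffer size are all
illustrative):

    import java.nio.ByteBuffer;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // A globally bounded pool for "large" temporary direct buffers: at most
    // POOL_SIZE of them exist at once, and a thread needing one above the
    // per-thread threshold blocks until another operation completes and
    // returns a buffer. Peak native footprint is bounded by
    // POOL_SIZE * BUFFER_SIZE instead of (threads) * (largest I/O seen).
    final class GlobalDirectPool {
        private static final int POOL_SIZE = 8;
        private static final int BUFFER_SIZE = 4 << 20; // 4 MB, illustrative

        private final BlockingQueue<ByteBuffer> pool =
                new ArrayBlockingQueue<>(POOL_SIZE);

        GlobalDirectPool() {
            for (int i = 0; i < POOL_SIZE; i++) {
                pool.add(ByteBuffer.allocateDirect(BUFFER_SIZE));
            }
        }

        ByteBuffer acquire() throws InterruptedException {
            return pool.take();  // blocks while all buffers are in use
        }

        void release(ByteBuffer buf) {
            buf.clear();
            pool.offer(buf);     // hand the buffer back for other threads
        }
    }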
Regards, Ariel
On Tue, Dec 29, 2015, at 10:59 AM, Evan Jones wrote:
> No, we don't do scatter/gather I/O. Most Twitter services (Finagle
> which uses Netty 3) call NIO with variable sized heap ByteBuffers. The
> "leak" is caused by the fact that each thread caches a single direct
> ByteBuffer of the maximum size it has ever seen. Hence, if 0.01% of
> requests eventually cause a ~100 MB chunk to be sent, then each thread
> ends up with slowly growing native memory usage that never decreases,
> even if the "typical" size is ~1 MB. I still think it is very
> surprising and wrong that the JDK caches such enormous amounts of
> memory in this scenario. I would much rather have a "performance
> problem" that I can fix by managing my own buffers, than a mysterious
> native memory leak that is difficult to track down.
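For anyone who wants to see the behaviour described above, a small
standalone reproduction (the path, sizes and thread count are
illustrative; watch the process RSS grow to roughly threads x largest
write and stay there):

    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Each pool thread mostly writes ~1 MB heap buffers, but occasionally
    // writes a ~100 MB one. After that, the per-thread cache in
    // sun.nio.ch.Util keeps a ~100 MB direct buffer for that worker, so
    // native usage ratchets up toward (threads) x (largest write seen)
    // and never comes back down.
    public class TempBufferGrowth {
        public static void main(String[] args) throws Exception {
            ExecutorService workers = Executors.newFixedThreadPool(8);
            try (FileChannel ch = FileChannel.open(Paths.get("growth.bin"),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                for (int task = 0; task < 10_000; task++) {
                    boolean rareHugeWrite = (task % 1_000 == 0);
                    int size = rareHugeWrite ? 100 << 20 : 1 << 20;
                    workers.submit(() -> {
                        ByteBuffer heap = ByteBuffer.allocate(size); // heap buffer forces a copy...
                        return ch.write(heap);                       // ...through a cached direct buffer
                    });
                }
                workers.shutdown();
                workers.awaitTermination(1, TimeUnit.HOURS);
            }
        }
    }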
>
> My (simple) proposal will have a performance impact for applications
> that do large I/O with heap ByteBuffers. However, I would argue
> those apps are already slow because of the copy, and probably won't
> notice :).
>
> Would you be interested in a more sophisticated solution that would
> allow huge cached buffers to eventually expire? For example, if the
> "recent" usage of the cache has been much smaller than the allocated
> buffers, free the buffers and re-allocate? For the problematic
> application that I found, *any* policy for when to expire would
> effectively solve this problem, since the "peak" usage is
> significantly worse than the "average" usage.
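As a sketch of what such an expiry policy could look like (the class,
window and slack factor are invented here for illustration, not the
actual BufferCache fields):

    import java.nio.ByteBuffer;

    // Sketch of an expiry policy for a per-thread cache holding one temporary
    // direct buffer: if the cached buffer is much larger than anything
    // requested in the last WINDOW acquisitions, drop it so a right-sized
    // one is allocated instead.
    final class ExpiringBufferCache {
        private static final int WINDOW = 1024; // how many requests "recent" covers
        private static final int SLACK  = 4;    // keep the buffer if within 4x of the recent peak

        private ByteBuffer cached;              // the single cached direct buffer (per thread)
        private int recentMax;                  // largest size requested in the current window
        private int requestsInWindow;

        ByteBuffer get(int size) {
            recentMax = Math.max(recentMax, size);
            if (++requestsInWindow >= WINDOW) {
                // Window over: if the cached buffer dwarfs recent usage, let it
                // go and let the GC/Cleaner reclaim the native memory.
                if (cached != null && cached.capacity() > (long) recentMax * SLACK) {
                    cached = null;
                }
                recentMax = size;
                requestsInWindow = 0;
            }
            if (cached == null || cached.capacity() < size) {
                cached = ByteBuffer.allocateDirect(size);
            }
            cached.clear();
            cached.limit(size);
            return cached;
        }
    }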
>
> My final suggestion: I would be happy to attempt to revise the javadoc
> about direct buffers to make it clearer that using heap buffers for
> I/O will cause copies, and will also cause the JDK to cache native
> memory. At least then we could argue that this behaviour is "as
> designed." :)
>
>
> On Tue, Dec 29, 2015 at 4:17 AM, Alan Bateman
> <Alan.Bateman at oracle.com> wrote:
>>
>> On 27/12/2015 20:35, Evan Jones wrote:
>>>
>>> Summary: nio Util caches an unlimited amount of memory for temporary
>>> direct ByteBuffers, which effectively is a native memory leak.
>>> Applications that do large I/Os can inadvertently waste gigabytes of
>>> native memory or run out of memory. I suggest it should only cache a
>>> "small" amount of memory per-thread (e.g. 1 MB), and maybe have a flag
>>> to allow changing the limit for applications where this causes a
>>> performance regression.
>>>
>>> 1. Would JDK committers support this change?
>>>
>>> 2. If so, any suggestions for the default and/or how to override it
>>> with a flag?
>>>
>>> Tony Printezis (CCed here) added a flag to Twitter's internal JVM/JDK
>>> to limit the size of this cache, which we could probably use as a
>>> starting point for a patch.
>>
>> Limiting the size of the buffer cache might help in some scenarios, it
>> just means a bit more complexity and yet another tuning option.
>>
>> Do you do scatter/gather I/O? The current implementation will cache up
>> to IOV_MAX buffers per thread, but if you aren't doing scatter/gather
>> I/O then caching a maximum of one buffer per thread should reduce the
>> memory usage. It wouldn't be hard to modify the BufferCache
>> implementation to track the number of per-thread buffers in use so that
>> this count is the maximum cached rather than IOV_MAX. I'm curious if
>> you've looked into doing anything along those lines.
>>
>> -Alan
>
> --
> Evan Jones http://evanjones.ca/