MemorySegment JVM memory leak

Uwe Schindler uschindler at apache.org
Wed Apr 22 11:15:18 UTC 2020


Hi,

Just some comments from the opposite side:

> > I am also doing some testing in C using mmap and am sometimes having
> > the same issue as in Java, where memory consumption is very high. But
> > I am still investigating and have not yet concluded.
> >
> I did exactly the same to narrow down the issue, and I too was having
> very high memory consumption with big mappings.
> 
> This is my main loop in C:
> 
> char * region = mmap(.......);
> 
> for (long l = 0; l < SIZE; l += 4096) {
>     memcpy(buf, &region[l], 4096);
>     madvise(region, l, MADV_DONTNEED); // <--------
> }

This exact behavior is wanted for memory-mapped files in most cases: the resident memory will be cleaned up later, and the OS kernel does a good job with it. E.g., if Lucene/Solr/Elasticsearch used MADV_DONTNEED, their whole I/O would go crazy. The reason Lucene/Solr/Elasticsearch rely on this is the type of I/O they do: https://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html - those servers rely on the default behavior of the Linux kernel! So please don't add anything like this unconditionally into MappedByteBuffer and the segment API! It's no memory leak, it's just normal behaviour!
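
For illustration, a minimal sketch of this default behavior using plain standard NIO (not the Panama API; the class and file names here are illustrative, not anyone's actual code): the file is mapped read-only and simply read through, and the kernel keeps the touched pages in the page cache and evicts them on its own under memory pressure, with no explicit madvise needed:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;

public class MappedReadDemo {
    // Map a file read-only and sum its bytes. Resident memory grows only
    // because the kernel caches the touched pages; the kernel reclaims them
    // itself when memory is needed elsewhere, which is exactly the behavior
    // Lucene-style workloads rely on.
    static long sumBytes(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            long sum = 0;
            while (map.hasRemaining()) {
                sum += map.get() & 0xFF;
            }
            return sum;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("mapped-demo", ".bin");
        byte[] data = new byte[8192];
        Arrays.fill(data, (byte) 1);
        Files.write(tmp, data);
        System.out.println(sumBytes(tmp)); // prints 8192
        Files.delete(tmp);
    }
}
```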

I agree, it's a problem for anonymous mappings if they can't be cleaned up, but those should be unmapped after usage. For mmapped files, the memory consumption is not actually higher; it's just better use of the file system cache. If the kernel runs short of space for other stuff that has no disk backend (anonymous mappings), it will free the occupied resources or give them a disk backend by using the swap file.

I still have one wish: sometimes we would like to pass MADV_DONTNEED for memory-mapped files, e.g. when we only read them once. With MappedByteBuffer this is not possible at the moment; there was already a proposal to allow setting such flags. The same applies to normal file I/O, where the equivalent is fadvise and could be implemented as an open option in the java.nio.file.Files class. So maybe add something like an OpenOption.WONTNEED to the Files API. Thanks.
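
Until such a flag exists, a possible workaround sketch for the read-once case (plain NIO again; the OpenOption.WONTNEED above is only a proposal, and the class name here is illustrative): don't map the file at all, but stream it through a small reusable buffer, so the application itself never pins more than one buffer's worth of data:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;

public class StreamOnceDemo {
    // Read a file sequentially through one page-sized buffer instead of
    // mapping it whole; no huge virtual mapping is created, and the page
    // cache footprint is left entirely to the kernel's readahead logic.
    static long checksum(Path file) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(4096);
        long sum = 0;
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            while (ch.read(buf) != -1) {
                buf.flip();
                while (buf.hasRemaining()) {
                    sum += buf.get() & 0xFF;
                }
                buf.clear();
            }
        }
        return sum;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("stream-once", ".bin");
        byte[] data = new byte[10000];
        Arrays.fill(data, (byte) 2);
        Files.write(tmp, data);
        System.out.println(checksum(tmp)); // prints 20000
        Files.delete(tmp);
    }
}
```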

Uwe

> That second line allowed me to get back to normal consumption.
> 
> Maurizio
> 
> > Regards, Ahmed.
> >
> > *From:*Maurizio Cimadamore <maurizio.cimadamore at oracle.com>
> > *Sent:* Wednesday, April 22, 2020 2:07 PM
> > *To:* Ahmed <ahmdprog.java at gmail.com>; panama-dev at openjdk.java.net
> > *Subject:* Re: MemorySegment JVM memory leak
> >
> > On 22/04/2020 09:42, Ahmed wrote:
> >
> >     Maurizio,
> >
> >     I am doing more investigation regarding memory consumption issue.
> >     A quick question, are you using linux mmap API to implement Java
> >     memory segment classes ?.
> >
> > Yes, mapped memory segments (like mapped byte buffers) are implemented
> > using mmap.
> >
> > What I did discover (thanks to Jim for the hint) is that sometimes
> > Linux systems benefit from also calling madvise(MADV_DONTNEED) after
> > you are done with a certain segment. This helps tell the OS to unload
> > the mapped pages - this did the trick for me even in your first
> > example - which is why I've added the new 'unload' method. It is
> > possible that this issue was also present with direct buffers, but the
> > fact that we could not create mappings larger than 2G had somehow
> > concealed the problem.
> >
> > Maurizio
> >
> >     Regards, Ahmed
> >
> >     *From:*Maurizio Cimadamore <maurizio.cimadamore at oracle.com>
> >     <mailto:maurizio.cimadamore at oracle.com>
> >     *Sent:* Tuesday, April 21, 2020 2:28 PM
> >     *To:* Ahmed <ahmdprog.java at gmail.com>
> >     <mailto:ahmdprog.java at gmail.com>; panama-dev at openjdk.java.net
> >     <mailto:panama-dev at openjdk.java.net>
> >     *Subject:* Re: MemorySegment JVM memory leak
> >
> >     Thanks for the extra info. From the looks of it, you have no
> >     "leak" in the sense that, after GC, you always end up with 2M of
> >     memory allocated on heap.
> >
> >     In Java, a memory leak typically manifests when the size of the
> >     heap _after GC_ keeps growing - but this doesn't seem to be the
> >     case here.
> >
> >     I see that on application startup you get this: "Periodic GC
> >     disabled", which I don't get, but I don't think that's the problem.
> >
> >     In your second image you can clearly see that your resident memory
> >     is 29G, whereas your heap, even before GC, is 300M, so it's not the
> >     Java heap eating up your memory - the problem you have is native
> >     memory consumption.
> >
> >     To double check, I took your second test [2] again, and re-ran it
> >     on two different machines; in both cases I get constant resident
> >     memory, at around 130M (with both the latest Panama build and JDK 14).
> >     I honestly don't see how this test could behave differently, given
> >     that you are creating smaller mapped segments and you are
> >     unmapping them after each memory copy - this should be enough to
> >     tell the OS to get rid of the mapped pages!
> >
> >     The very first test you shared had an issue though, and we are
> >     addressing that through a new API method, which allows the memory
> >     segment API to tell the OS that you are "done" with a given
> >     portion of the mapped segment (otherwise the OS might keep it
> >     around for longer, resulting in thrashing).
> >
> >     I've just integrated this:
> >
> >     https://github.com/openjdk/panama-foreign/pull/115
> >
> >     which has support for a new MappedMemorySegment::unload method
> >     that should be useful in this context. With this method, in
> >     principle, you should be able to take your original example [1]
> >     and modify it a bit so that:
> >
> >     * on each iteration you take a slice from the original segment
> >     * you do the copy from the byte array to the slice
> >     * you call unload() on the slice
> >
> >     This should keep the memory pressure constant during your
> >     benchmark. If that works, you will then want to tune your test so
> >     that the calls to the 'unload' method are not too many (so as not
> >     to generate too many system calls) and not too few (so as not to
> >     overload your memory).
> >
> >     Maurizio
> >
> >     [1] - https://mail.openjdk.java.net/pipermail/panama-dev/2020-April/008555.html
> >
> >     [2] - https://mail.openjdk.java.net/pipermail/panama-dev/2020-April/008569.html
> >
> >     On 21/04/2020 07:00, Ahmed wrote:
> >
> >         Maurizio,
> >
> >
> >
> >         I went further crazy in testing. I reformatted the server and installed the latest Oracle Linux 8.1 and the latest JDK 14.0.1, and I have a dedicated SSD disk for this testing.
> >
> >
> >
> >         Find the attached screenshots from before and while executing the Java code I provided you earlier, with the option you suggested.
> >
> >
> >
> >         Regards, Ahmed.
> >
> >
> >
> >
> >
> >         -----Original Message-----
> >
> >         From: Maurizio Cimadamore<maurizio.cimadamore at oracle.com>
> <mailto:maurizio.cimadamore at oracle.com>
> >
> >         Sent: Monday, April 20, 2020 7:31 PM
> >
> >         To:ahmdprog.java at gmail.com
> <mailto:ahmdprog.java at gmail.com>;panama-dev at openjdk.java.net
> <mailto:panama-dev at openjdk.java.net>
> >
> >         Subject: Re: MemorySegment JVM memory leak
> >
> >
> >
> >
> >
> >         On 20/04/2020 16:13,ahmdprog.java at gmail.com
> <mailto:ahmdprog.java at gmail.com>  wrote:
> >
> >             Maurizio,
> >
> >
> >
> >             Since JDK 14 was released I have been doing testing for my project. I confirm to you that every time there is a memory leak caused by writing/reading huge amounts of data. I tested on my Mac + Oracle Linux. All my testing uses mapped files on disk.
> >
> >
> >
> >             Moreover, MappedByteBuffer is much faster than the new MemorySegment introduced in JDK 14.
> >
> >
> >
> >         Have you tried running with the option I've suggested?
> >
> >
> >
> >         Maurizio
> >
> >
> >
> >
> >
> >             Regards, Ahmed.
> >
> >
> >
> >             -----Original Message-----
> >
> >             From: Maurizio Cimadamore<maurizio.cimadamore at oracle.com>
> <mailto:maurizio.cimadamore at oracle.com>
> >
> >             Sent: Monday, April 20, 2020 6:54 PM
> >
> >             To:ahmdprog.java at gmail.com
> <mailto:ahmdprog.java at gmail.com>;panama-dev at openjdk.java.net
> <mailto:panama-dev at openjdk.java.net>
> >
> >             Subject: Re: MemorySegment JVM memory leak
> >
> >
> >
> >
> >
> >             On 20/04/2020 15:09,ahmdprog.java at gmail.com
> <mailto:ahmdprog.java at gmail.com>  wrote:
> >
> >                 Maurizio,
> >
> >
> >
> >                 Every time I read/write huge data using a memory segment, the JVM eats all of my 32G of RAM.
> >
> >             Are you sure the memory being "eaten" is heap memory?
> >
> >
> >
> >             Try running with -verbose:gc
> >
> >
> >
> >             In my case it prints:
> >
> >
> >
> >             [0.764s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 23M->2M(252M) 1.346ms
> >             [1.951s][info][gc] GC(1) Pause Young (Normal) (G1 Evacuation Pause) 50M->2M(252M) 0.996ms
> >             [5.478s][info][gc] GC(2) Pause Young (Normal) (G1 Evacuation Pause) 148M->2M(252M) 3.701ms
> >             [9.196s][info][gc] GC(3) Pause Young (Normal) (G1 Evacuation Pause) 148M->2M(252M) 0.908ms
> >             [14.374s][info][gc] GC(4) Pause Young (Normal) (G1 Evacuation Pause) 148M->2M(252M) 1.283ms
> >             [18.810s][info][gc] GC(5) Pause Young (Normal) (G1 Evacuation Pause) 148M->2M(252M) 1.026ms
> >
> >
> >
> >             As you can see, after an initial (normal) ramp up, _heap_ memory usage stabilizes at ~148M
> >
> >
> >
> >             Maurizio
> >
> >
> >
> >                 I am using Oracle Linux server.
> >
> >
> >
> >                 Regards, Ahmed.
> >
> >
> >
> >                 -----Original Message-----
> >
> >                 From: Maurizio Cimadamore<maurizio.cimadamore at oracle.com>
> <mailto:maurizio.cimadamore at oracle.com>
> >
> >                 Sent: Monday, April 20, 2020 5:59 PM
> >
> >                 To:ahmdprog.java at gmail.com
> <mailto:ahmdprog.java at gmail.com>;panama-dev at openjdk.java.net
> <mailto:panama-dev at openjdk.java.net>
> >
> >                 Subject: Re: MemorySegment JVM memory leak
> >
> >
> >
> >
> >
> >                 On 20/04/2020 14:40,ahmdprog.java at gmail.com
> <mailto:ahmdprog.java at gmail.com>  wrote:
> >
> >                     Thank you Maurizio for feedback and explanation.
> >
> >
> >
> >                     But there is something else. Even though I am closing the memory segment, JVM memory keeps being occupied. I also sliced the memory segment to a ByteBuffer, as in the code below, and it shows the same behaviour.
> >
> >
> >
> >                     I did a lot of testing regarding memory segment reading/writing. Every time it is the same behaviour: the JVM memory consumed is very high.
> >
> >
> >
> >                     The purpose of an off-heap memory segment is not to touch the heap memory of the JVM, but unfortunately that is not the case in the current JVM implementation.
> >
> >                 Your new example is essentially unmapping memory all the time, so it should not run into any leak issues. On my machine resident memory stays constant at approximately 200M. This is not too different from what I get using a simple "hello world" Java application which just prints the same string in a loop. Do you observe more heap usage than just 200M on your machine?
> >
> >
> >
> >                 Maurizio
> >
> >
> >
> >     public static void testingMemorySegmentV2() {
> >         String strFileName = "/disk3/data.index" + System.currentTimeMillis();
> >         File fileObjectFileName = new File(strFileName);
> >         if (fileObjectFileName.exists() == false) {
> >             try {
> >                 fileObjectFileName.createNewFile();
> >             } catch (IOException e) {
> >             } catch (Exception e) {
> >             }
> >         }
> >         long lngMemorySegmentFileSize = 107374182400L; // 100 G
> >         byte[] bytesArrayString = new byte[4096];
> >         MemorySegment sourceSegment = MemorySegment.ofArray(bytesArrayString);
> >         long lngTotalNumberOfPagesForAllFile = lngMemorySegmentFileSize / 4096;
> >         try {
> >             for (long i = 0; i < lngTotalNumberOfPagesForAllFile; i++) {
> >                 MemorySegment memorySegmentTmp = MemorySegment.mapFromPath(new File(strFileName).toPath(), lngMemorySegmentFileSize, FileChannel.MapMode.READ_WRITE);
> >                 MemorySegment memorySegmentTmp2 = memorySegmentTmp.asSlice(i * 4096, 4096);
> >                 ByteBuffer buffer = memorySegmentTmp2.asByteBuffer();
> >                 buffer.put(bytesArrayString);
> >                 memorySegmentTmp.close();
> >             }
> >         } catch (IOException e) {
> >             e.printStackTrace();
> >         }
> >     }
> >
> >
> >
> >                     Regards, Ahmed.
> >
> >
> >
> >                     -----Original Message-----
> >
> >                     From: Maurizio Cimadamore<maurizio.cimadamore at oracle.com>
> <mailto:maurizio.cimadamore at oracle.com>
> >
> >                     Sent: Monday, April 20, 2020 5:16 PM
> >
> >                     To:ahmdprog.java at gmail.com
> <mailto:ahmdprog.java at gmail.com>;panama-dev at openjdk.java.net
> <mailto:panama-dev at openjdk.java.net>
> >
> >                     Subject: Re: MemorySegment JVM memory leak
> >
> >
> >
> >                     Hi,
> >
> >                     I've tried your example and I think it's running as expected. With
> a little caveat (described below).
> >
> >
> >
> >                     On my machine, the test completes, with resident memory pegged at about 8G, while virtual memory was 100G. The latter is normal, since in order to make the memory accessible to your process, mmap has to reserve 100G of memory in the virtual address space. These 100G will of course not be committed all at once; the policy by which this is done is heavily OS dependent. Most OSes will have some logic in order to discard unused pages, so that your application will not crash; also, most OSes will attempt to "prefetch" more than one page in order to speed up access.
> >
> >
> >
> >                     Now, what's puzzling is why the resident memory is so high - and I think I found out what happens: basically this test is generating an awful lot of dirty pages - since these pages are not flushed back to disk (e.g. in a way similar to what MappedByteBuffer::force does), all these pages have to be kept around for longer in main memory (again, the details and thresholds are system specific).
> >
> >
> >
> >                     Since I'm in the middle of adding force() support to mapped segments:
> >
> >
> >
> >                     https://git.openjdk.java.net/panama-foreign/pull/115
> >
> >
> >
> >                     I did a simple experiment: I added a call to the new MappedMemorySegment::force() after the call to MemoryAddress::copy, and re-ran the test. And now the resident memory was pegged at 150KB :-)
> >
> >
> >
> >                     So, I believe your issue is that, when managing very large files, you have to be disciplined in how you sync the contents of main memory back into the mapped file - if you leave it implicit (e.g. to the OS), you might end up in a not-so-desirable place.
> >
> >
> >
> >                     Does this help?
> >
> >
> >
> >                     Maurizio
> >
> >
> >
> >                     On 18/04/2020 17:33,ahmdprog.java at gmail.com
> <mailto:ahmdprog.java at gmail.com>  wrote:
> >
> >                         Gentlemen,
> >
> >
> >
> >
> >
> >
> >
> >                         There is a memory leak in the JVM while writing and reading a byte array to a memory segment. Below is a simple example that generates a 100G file with zero bytes. While running the code below, you will see that the JVM consumes all server memory.
> >
> >
> >
> >
> >
> >
> >
> >                         Unfortunately, I also tested reading an array of bytes. It has the same memory leak issue.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >         public static void testingMemorySegment() {
> >             String strFileName = "/disk3/bigdata.index" + System.currentTimeMillis();
> >             File fileObjectFileName = new File(strFileName);
> >             if (fileObjectFileName.exists() == false) {
> >                 try {
> >                     fileObjectFileName.createNewFile();
> >                 } catch (IOException e) {
> >                 } catch (Exception e) {
> >                 }
> >             }
> >             long lngMemorySegmentFileSize = 107374182400L; // 100 G
> >             byte[] bytesArrayString = new byte[4096];
> >             MemorySegment sourceSegment = MemorySegment.ofArray(bytesArrayString);
> >             long lngTotalNumberOfPagesForAllFile = lngMemorySegmentFileSize / 4096;
> >             try {
> >                 MemorySegment memorySegmentTmp = MemorySegment.mapFromPath(new File(strFileName).toPath(), lngMemorySegmentFileSize, FileChannel.MapMode.READ_WRITE);
> >                 MemoryAddress address = memorySegmentTmp.baseAddress();
> >                 MemoryAddress sourceAddress = sourceSegment.baseAddress();
> >                 for (long i = 0; i < lngTotalNumberOfPagesForAllFile; i++) {
> >                     MemoryAddress.copy(sourceAddress, address.addOffset(i * 4096), 4096);
> >                 }
> >                 memorySegmentTmp.close();
> >             } catch (IOException e) {
> >                 e.printStackTrace();
> >             }
> >         }
> >
> >
> >


