MemorySegment JVM memory leak

ahmdprog.java at gmail.com ahmdprog.java at gmail.com
Mon Apr 20 13:40:28 UTC 2020


Thank you, Maurizio, for the feedback and explanation.

But there is something else. Even though I close the memory segment, JVM memory stays occupied. I also tried slicing the memory segment into a ByteBuffer, as in the code below, and the behaviour is the same.

I did a lot of testing of memory segment reads and writes; every time the behaviour is the same: JVM memory consumption is very high.

The purpose of an off-heap memory segment is to avoid touching the JVM heap, but unfortunately that is not what happens with the current implementation.

public static void testingMemorySegmentV2() {
        String strFileName = "/disk3/data.index" + System.currentTimeMillis();
        File fileObjectFileName = new File(strFileName);
        if (!fileObjectFileName.exists()) {
            try {
                fileObjectFileName.createNewFile();
            } catch (IOException e) {
                e.printStackTrace(); // don't swallow the failure silently
            }
        }
        long lngMemorySegmentFileSize = 107374182400L; // 100 G
        byte[] bytesArrayString = new byte[4096];      // one zero-filled page
        long lngTotalNumberOfPagesForAllFile = lngMemorySegmentFileSize / 4096;
        try {
            for (long i = 0; i < lngTotalNumberOfPagesForAllFile; i++) {
                // NB: this re-maps the whole 100 G file on every iteration
                MemorySegment memorySegmentTmp = MemorySegment.mapFromPath(
                        new File(strFileName).toPath(), lngMemorySegmentFileSize,
                        FileChannel.MapMode.READ_WRITE);
                MemorySegment memorySegmentTmp2 = memorySegmentTmp.asSlice(i * 4096, 4096);
                ByteBuffer buffer = memorySegmentTmp2.asByteBuffer();
                buffer.put(bytesArrayString);
                memorySegmentTmp.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
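For comparison, the same per-page write pattern can be sketched with plain NIO's MappedByteBuffer, mapping the file once outside the loop instead of re-mapping it on every iteration. This is only an illustrative stand-in for the segment API: the temp file, sizes, and names below are made up for the sketch (a small file replaces the 100 G one).

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapOnceDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("map-once", ".index"); // stand-in for /disk3 file
        f.deleteOnExit();
        long fileSize = 1L * 1024 * 1024;   // 1 MB stand-in for the 100 G file
        int pageSize = 4096;
        long pages = fileSize / pageSize;
        byte[] page = new byte[pageSize];   // one zero-filled page
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileChannel ch = raf.getChannel()) {
            // Map the file once, outside the loop; map() grows the file to fileSize.
            MappedByteBuffer mapped = ch.map(FileChannel.MapMode.READ_WRITE, 0, fileSize);
            for (long i = 0; i < pages; i++) {
                // Slice out one 4096-byte window per iteration.
                ByteBuffer slice = mapped.duplicate()
                        .position((int) (i * pageSize))
                        .limit((int) (i * pageSize + pageSize));
                slice.put(page);
            }
        }
        System.out.println("file length = " + f.length());
    }
}
```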

Regards, Ahmed.

-----Original Message-----
From: Maurizio Cimadamore <maurizio.cimadamore at oracle.com> 
Sent: Monday, April 20, 2020 5:16 PM
To: ahmdprog.java at gmail.com; panama-dev at openjdk.java.net
Subject: Re: MemorySegment JVM memory leak

Hi,
I've tried your example and I think it's running as expected, with a little caveat (described below).

On my machine, the test completes, with resident memory pegged at about 8G while virtual memory was at 100G. The latter is normal: in order to make the memory accessible to your process, mmap has to reserve 100G in the virtual address space. These 100G will of course not all be committed at once; the policy by which this happens is heavily OS-dependent. Most OSes have logic to discard unused pages so that your application will not crash, and most will also attempt to "prefetch" more than one page in order to speed up access.

Now, what's puzzling is why the resident memory is so high, and I think I found out what happens: basically, this test generates an awful lot of dirty pages. Since these pages are not flushed back to disk (e.g. in a way similar to what MappedByteBuffer::force does), they all have to be kept in main memory for longer (again, the details and thresholds are system-specific).

Since I'm in the middle of adding force() support to mapped segments:

https://git.openjdk.java.net/panama-foreign/pull/115

I did a simple experiment: I added a call to the new MappedMemorySegment::force() after the call to MemoryAddress::copy and re-ran the test. This time the resident memory was pegged at 150KB :-)

So, I believe your issue is that, when managing very large files, you have to be disciplined about how you sync the contents of main memory back into the mapped file; if you leave it implicit (i.e. up to the OS), you might end up in a not-so-desirable place.
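For illustration, here is a rough sketch of the same idea using the long-standing MappedByteBuffer::force (standing in for the MappedMemorySegment::force being added in the pull request, which may still change). The temp file, sizes, and sync interval are made-up placeholders:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ForceDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("force-demo", ".index"); // placeholder file
        f.deleteOnExit();
        long fileSize = 4L * 1024 * 1024;   // 4 MB instead of 100 G
        int pageSize = 4096;
        long pages = fileSize / pageSize;
        int syncEvery = 128;                // flush after every 128 dirtied pages
        byte[] zeros = new byte[pageSize];
        int forced = 0;
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, fileSize);
            for (long i = 0; i < pages; i++) {
                buf.put(zeros);             // dirty one page
                if ((i + 1) % syncEvery == 0) {
                    buf.force();            // flush dirty pages back to disk
                    forced++;
                }
            }
        }
        System.out.println("wrote " + pages + " pages, forced " + forced + " times");
    }
}
```

Flushing every N pages bounds the number of dirty pages the OS has to keep resident, at the cost of extra sync calls; the interval is a tuning knob.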

Does this help?

Maurizio

On 18/04/2020 17:33, ahmdprog.java at gmail.com wrote:
> Gentlemen,
>
> There is a memory leak in the JVM while writing and reading a byte
> array to a memory segment. Below is a simple example that generates a
> 100G file of zero bytes. While running it, you will see that the JVM
> consumes all of the server's memory.
>
> Unfortunately, I tested reading an array of bytes as well; it has the
> same memory-leak issue.
>
> public static void testingMemorySegment() {
>         String strFileName = "/disk3/bigdata.index" + System.currentTimeMillis();
>         File fileObjectFileName = new File(strFileName);
>         if (fileObjectFileName.exists() == false) {
>             try {
>                 fileObjectFileName.createNewFile();
>             } catch (IOException e) {
>             } catch (Exception e) {
>             }
>         }
>         long lngMemorySegmentFileSize = 107374182400l; // 100 G
>         byte[] bytesArrayString = new byte[4096];
>         MemorySegment sourceSegment = MemorySegment.ofArray(bytesArrayString);
>         long lngTotalNumberOfPagesForAllFile = lngMemorySegmentFileSize / 4096;
>         try {
>             MemorySegment memorySegmentTmp = MemorySegment.mapFromPath(new File(strFileName).toPath(), lngMemorySegmentFileSize, FileChannel.MapMode.READ_WRITE);
>             MemoryAddress address = memorySegmentTmp.baseAddress();
>             MemoryAddress sourceAddress = sourceSegment.baseAddress();
>             for (long i = 0; i < lngTotalNumberOfPagesForAllFile; i++) {
>                 MemoryAddress.copy(sourceAddress, address.addOffset(i * 4096), 4096);
>             }
>             memorySegmentTmp.close();
>         } catch (IOException e) {
>             e.printStackTrace();
>         }
>     }
>


