MemorySegment JVM memory leak

Uwe Schindler uschindler at apache.org
Wed Apr 22 12:46:04 UTC 2020


Hi,

> > Just some comments from the opposite side:
> >
> >>> I am also doing some testing with C using mmap and am sometimes seeing
> >>> the same issue as in Java, where memory consumption is very high. But I
> >>> am still investigating and have not yet concluded.
> >>>
> >> I did exactly the same to narrow down the issue, and I too was having
> >> very high memory consumption with big mappings.
> >>
> >> This is my main loop in C:
> >>
> >> char * region = mmap(.......);
> >>
> >> for (long l = 0 ; l < SIZE ; l+= 4096) {
> >>       memcpy(buf, &region[l], 4096);
> >>       madvise(region, l, MADV_DONTNEED); // <--------
> >>     }
> > This exact behavior is wanted for memory mapped files in most cases; the
> > resident memory will be cleaned up later and the OS kernel does a good job
> > with it. E.g., if Lucene/Solr/Elasticsearch used MADV_DONTNEED, their whole
> > IO would go crazy. The reason Lucene/Solr/Elasticsearch rely on this is the
> > type of I/O they are doing:
> > https://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html -
> > those servers rely on the way the Linux kernel handles it by default! So
> > please don't add anything like this into MappedByteBuffer and the segment
> > API! It's no memory leak, it's just normal behaviour!
> 
> I'm not proposing to add this so that it's called automatically!
> 
> What I did is to add an extra method (similar to load()) which does the
> madvise. If you don't want to use it, you don't have to!

OK, that's fine - actually, after I sent my mail I saw your comment about the new API. This is also something that might be useful directly on MappedByteBuffer, not only on the segment API, so please consider adding it there, too. It should not be too complicated. We would really appreciate it, especially as we can't move to the new segment API at the moment because of the thread-confinement issues.
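
Just to double-check that I understood the intended usage, here is a rough, untested sketch of how I would use the new method from the segment API (the cast, the MappedMemorySegment slice type, the unload granularity and the file path are my own assumptions):

    import jdk.incubator.foreign.MappedMemorySegment;
    import jdk.incubator.foreign.MemoryAddress;
    import jdk.incubator.foreign.MemorySegment;

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;

    public class UnloadSketch {
        public static void main(String[] args) throws IOException {
            long fileSize = 107374182400L;      // 100 GiB, as in the original test
            long chunk = 256L * 1024 * 1024;    // assumed unload granularity: 256 MiB
            byte[] page = new byte[4096];
            MemorySegment source = MemorySegment.ofArray(page);

            // Assumption: mapFromPath hands back a segment that is (or can be cast
            // to) MappedMemorySegment, the type that carries the new unload() method.
            try (MappedMemorySegment mapped = (MappedMemorySegment) MemorySegment.mapFromPath(
                    Path.of("/disk3/bigdata.index"), fileSize, FileChannel.MapMode.READ_WRITE)) {
                for (long off = 0; off < fileSize; off += 4096) {
                    MemorySegment slice = mapped.asSlice(off, 4096);
                    MemoryAddress.copy(source.baseAddress(), slice.baseAddress(), 4096);
                    // Every 256 MiB, tell the OS we are done with that region
                    // (roughly madvise(MADV_DONTNEED) under the hood).
                    if ((off + 4096) % chunk == 0) {
                        ((MappedMemorySegment) mapped.asSlice(off + 4096 - chunk, chunk)).unload();
                    }
                }
            }
        }
    }

The interesting tuning knob seems to be the unload granularity: too fine-grained means one syscall per page, too coarse and the memory pressure builds up again.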

> > I agree, it's a problem for anonymous mappings if they can't be cleaned up,
> > but those should be unmapped after usage. For mmapped files, the memory
> > consumption is not higher; it's just better use of the file system cache. If
> > the kernel lacks enough space for other stuff that has no disk backend
> > (anonymous mappings), it will free the occupied resources or add a disk
> > backend by using the swap file.
> 
> This is not what I observed on my machines (and I suspect Ahmed also
> seeing the same). If you just do a loop iterating over a 100G mapped
> file, you will eventually run out of RAM and the system will start
> swapping like crazy to the point of stopping being responsive. I don't
> think this is an acceptable behavior, at least in this specific case.

The problem is that you are looping from the beginning to the end of the region. I am not fully familiar with the Linux kernel code in recent versions, but it tries to be intelligent about memory mapping. If you read the whole file like this, you are somewhat misusing the mmap API; reading the file with sequential I/O is much better. mmap is ideal for random access to files where you don't need everything at once.

If you touch every block one by one, it's the same as MappedByteBuffer#load(): stuff that was recently loaded is preferentially kept in physical memory, so stuff that has not been accessed for longer has to go to swap. How this happens depends on the vm.swappiness sysctl kernel setting (which is 60 by default, a bad setting for some workloads on servers, see below). With a value of 60, swapping out is preferred over simply freeing recently claimed buffers. Especially with the sequential-read antipattern, I would not be surprised if the Linux kernel has an optimization that assumes this data is needed more often (as sequential reads are mostly a sign of database scans, where file system caching is hardly required).
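
To make the antipattern concrete, this is roughly what "touching every block" looks like with a plain MappedByteBuffer (an untested sketch, the file name is made up); it is effectively a manual load() and pulls the whole mapping into resident memory:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileChannel.MapMode;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class SequentialTouch {
        public static void main(String[] args) throws IOException {
            Path file = Path.of("/disk3/bigdata.index");   // made-up path
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                long size = ch.size();
                for (long pos = 0; pos < size; pos += Integer.MAX_VALUE) {
                    long len = Math.min(Integer.MAX_VALUE, size - pos);  // 2 GiB mapping limit
                    MappedByteBuffer map = ch.map(MapMode.READ_ONLY, pos, len);
                    for (long i = 0; i < len; i += 4096) {
                        map.get((int) i);   // touching one byte per page makes the page resident
                    }
                }
            }
        }
    }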

I'd test my code with "sysctl -w vm.swappiness=10" (or similar settings close to 0).

In addition, if you read a 100 GiB file using an InputStream, you may also see your whole system begin swapping! The reason is that the kernel wants to keep the 100 GiB file in the file system cache, because you have read it. The only difference is that you don't see the memory bound to your process in top, so it can easily be overlooked, but the kernel still tries to cache all that data. Here, too, when reading a file sequentially and only once, you should pass posix_fadvise(fd, ..., POSIX_FADV_DONTNEED).

This is why I suggested adding an OpenOption.WONT_NEED, or OpenOption.DONT_CACHE (to be more operating-system neutral), to the Java file API and passing this recommendation to the file descriptor via fadvise, or adding some API to control this on RandomAccessFile as well. See the end of my last mail.
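
Purely to illustrate the proposal (none of this exists today; the DONT_CACHE name is invented and only appears in a comment), usage could look roughly like this:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class DontCacheIdea {
        public static void main(String[] args) throws IOException {
            Path bigFile = Path.of("/disk3/bigdata.index");   // made-up path
            // Hypothetical: a "DONT_CACHE" open option does not exist in any JDK;
            // it is only the proposal from this mail. The idea is that opening a
            // file with it would make the JDK pass the corresponding fadvise hint
            // (POSIX_FADV_DONTNEED or POSIX_FADV_NOREUSE) for the descriptor, so a
            // one-time sequential read does not pollute the file system cache.
            try (InputStream in = Files.newInputStream(bigFile,
                    StandardOpenOption.READ /* , OpenOption.DONT_CACHE (hypothetical) */)) {
                byte[] buf = new byte[1 << 16];
                while (in.read(buf) != -1) {
                    // process the data once, without expecting it to stay cached
                }
            }
        }
    }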

> In the Lucene case, IIRC, you are using mapped regions that are not this
> huge, so maybe you are not seeing the issue?

If you remember my talk at the committers meeting in Brussels: Elasticsearch servers sometimes memory-map up to a terabyte on machines with 64 or 128 GiB of physical RAM and still work fine. All of this is mmapped in MappedByteBuffers of 1 GiB each (due to the 32-bit limitation). The difference to your case is: we use random access, and not everything is needed at the same time. Our I/O pressure is much lower than in your synthetic test, where the system has no time to clean up because it wants to page data in as fast as possible. With random access instead of sequential access, the system also has time to free other resources and to decide to evict pages that have not been used for a long time.
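
As a very rough sketch of that access pattern (this is not Lucene's actual MMapDirectory code; the file name and chunk handling are simplified): map the file in 1 GiB chunks and read from random positions, so only the touched pages ever become resident:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileChannel.MapMode;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.ThreadLocalRandom;

    public class ChunkedMapping {
        private static final long CHUNK = 1L << 30;   // 1 GiB per MappedByteBuffer

        public static void main(String[] args) throws IOException {
            Path index = Path.of("/disk3/bigdata.index");   // made-up path
            try (FileChannel ch = FileChannel.open(index, StandardOpenOption.READ)) {
                long size = ch.size();
                int chunks = (int) ((size + CHUNK - 1) / CHUNK);
                MappedByteBuffer[] maps = new MappedByteBuffer[chunks];
                for (int i = 0; i < chunks; i++) {
                    long off = (long) i * CHUNK;
                    maps[i] = ch.map(MapMode.READ_ONLY, off, Math.min(CHUNK, size - off));
                }
                // Random access: only the pages actually touched become resident, so
                // the kernel has time to reclaim what has not been used for a while.
                long sum = 0;
                for (int n = 0; n < 1_000; n++) {
                    long pos = ThreadLocalRandom.current().nextLong(size);
                    sum += maps[(int) (pos / CHUNK)].get((int) (pos % CHUNK));
                }
                System.out.println(sum);
            }
        }
    }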

In addition, for Solr and Elasticsearch the recommendation is to either disable swap completely or run with vm.swappiness=1 or 10 (see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html). As the heap usage of Lucene is low (you can run an Elasticsearch server with 4 or 8 GiB of heap space and still manage a terabyte of index), a node dedicated to search can simply work without swap. In that case the operating system has no better option than freeing the unused pages.

> Of course I'm not advocating for doing an madvise as frequently as in my
> example above - that was just a hacky snippet - a well behaved client
> will have to balance the memory pressure with all the other concerns.
> But it seems to me that (while I know it's no memory leak) the "just
> leave the OS doing its job" strategy is not cutting it, at least not in
> this particular use case, and for this mapping size, and the OS needs
> some kicking in order to keep memory pressure under acceptable terms.
> 
> Maurizio
> >
> > I still have some wish: sometimes we would like to pass MADV_DONTNEED for
> > memory mapped files, e.g. when we only read them once. With MappedByteBuffer
> > this is not possible at the moment; there was already a proposal to allow
> > setting those flags. The same goes for normal file IO, where it's fadvise,
> > and it should be implemented as an open option in the java.nio.file.Files
> > class. So maybe add some OpenOption.WONTNEED to the Files API. Thanks.

Adding the fadvise setting to the nio.Files OpenOptions is still something that would be nice to have, especially for the reasons noted above.

> >> That second line allowed me to get back to normal consumption.
> >>
> >> Maurizio
> >>
> >>> Regards, Ahmed.
> >>>
> >>> *From:* Maurizio Cimadamore <maurizio.cimadamore at oracle.com>
> >>> *Sent:* Wednesday, April 22, 2020 2:07 PM
> >>> *To:* Ahmed <ahmdprog.java at gmail.com>; panama-dev at openjdk.java.net
> >>> *Subject:* Re: MemorySegment JVM memory leak
> >>>
> >>> On 22/04/2020 09:42, Ahmed wrote:
> >>>
> >>>      Maurizio,
> >>>
> >>>      I am doing more investigation regarding the memory consumption issue.
> >>>      A quick question: are you using the Linux mmap API to implement the
> >>>      Java memory segment classes?
> >>>
> >>> Yes, mapped memory segments (like mapped byte buffers) are implemented
> >>> using mmap.
> >>>
> >>> What I did discover (thanks to Jim for the hint) is that sometimes
> >>> Linux systems benefit from also calling madvise(DONT_NEED) after you
> >>> are done with a certain segment. This helps telling the OS to unload
> >>> the mapped pages - this did the trick for me even in your first
> >>> example - which is why I've added the new 'unload' method. It is
> >>> possible that this issue was also present with direct buffers, but the
> >>> fact that we could not create mappings larger than 2G had somehow
> >>> concealed the problem.
> >>>
> >>> Maurizio
> >>>
> >>>      Regards, Ahmed
> >>>
> >>>      *From:* Maurizio Cimadamore <maurizio.cimadamore at oracle.com>
> >>>      *Sent:* Tuesday, April 21, 2020 2:28 PM
> >>>      *To:* Ahmed <ahmdprog.java at gmail.com>; panama-dev at openjdk.java.net
> >>>      *Subject:* Re: MemorySegment JVM memory leak
> >>>
> >>>      Thanks for the extra info. From the looks of it, you have no
> >>>      "leak" in the sense that, after GC, you always end up with 2M of
> >>>      memory allocated on heap.
> >>>
> >>>      In Java, a memory leak typically manifests when the size of the
> >>>      heap _after GC_ keeps growing - but this doesn't seem to be the
> >>>      case here.
> >>>
> >>>      I see that on application startup you get this: "Periodic GC
> >>>      disabled", which I don't get, but I don't think that's the problem.
> >>>
> >>>      In your second image you can clearly see that your resident memory
> >>>      is 29G, whereas your heap, even before GC is 300M, so it's not
> >>>      Java heap eating up your memory - the problem you have is native
> >>>      memory consumption.
> >>>
> >>>      To double check, I got your second test [2] again, and re-ran it
> >>>      on two different machines; in both cases I get constant resident
> >>>      memory, at around 130M (with both the latest Panama build and JDK 14).
> >>>      I honestly don't see how this test could behave differently, given
> >>>      that you are creating smaller mapped segments and you are
> >>>      unmapping them after each memory copy - this should be enough to
> >>>      tell the OS to get rid of the mapped pages!
> >>>
> >>>      The very first test you shared had an issue though, and we are
> >>>      addressing that through a new API method, which allows the memory
> >>>      segment API to tell the OS that you are "done" with a given
> >>>      portion of the mapped segment (otherwise the OS might keep it
> >>>      around for longer, resulting in thrashing).
> >>>
> >>>      I've just integrated this:
> >>>
> >>>      https://github.com/openjdk/panama-foreign/pull/115
> >>>
> >>>      Which has support for a new MappedMemorySegment::unload method,
> >>>      which should be useful in this context. With this method, in
> >>>      principle, you should be able to take your original example [1]
> >>>      and modify it a bit so that:
> >>>
> >>>      * on each iteration you take a slice from the original segment
> >>>      * you do the copy from the byte array to the slice
> >>>      * you call unload() on the slice
> >>>
> >>>      This should keep the memory pressure constant during your
> >>>      benchmark. If that works, you will want then to tune your test so
> >>>      that the calls to the 'unload' method are not too many (so as not
> >>>      to generate too many system calls) and not too few (so as not to
> >>>      overload your memory).
> >>>
> >>>      Maurizio
> >>>
> >>>      [1] - https://mail.openjdk.java.net/pipermail/panama-dev/2020-April/008555.html
> >>>      [2] - https://mail.openjdk.java.net/pipermail/panama-dev/2020-April/008569.html
> >>>      On 21/04/2020 07:00, Ahmed wrote:
> >>>
> >>>          Maurizio,
> >>>
> >>>          I went further crazy in testing. I reformatted the server,
> >>>          installed the latest Oracle Linux 8.1 and the latest JDK 14.0.1,
> >>>          and I have a dedicated SSD disk for this testing.
> >>>
> >>>          Find the attached screenshots taken before and while executing
> >>>          the Java code I provided you earlier, with the option you
> >>>          suggested.
> >>>
> >>>          Regards, Ahmed.
> >>>
> >>>          -----Original Message-----
> >>>          From: Maurizio Cimadamore <maurizio.cimadamore at oracle.com>
> >>>          Sent: Monday, April 20, 2020 7:31 PM
> >>>          To: ahmdprog.java at gmail.com; panama-dev at openjdk.java.net
> >>>          Subject: Re: MemorySegment JVM memory leak
> >>>
> >>>          On 20/04/2020 16:13, ahmdprog.java at gmail.com wrote:
> >>>              Maurizio,
> >>>
> >>>              Since JDK 14 was released I have been doing testing for my
> >>>              project. I confirm to you that all the time there is a memory
> >>>              leak caused by writing/reading a huge amount of data. I tested
> >>>              on my Mac + Oracle Linux. All my testing uses mapped files on
> >>>              disk.
> >>>
> >>>              Moreover, MappedByteBuffer is much faster than the new
> >>>              MemorySegment introduced in JDK 14.
> >>>
> >>>          Have you tried running with the option I've suggested?
> >>>
> >>>          Maurizio
> >>>
> >>>              Regards, Ahmed.
> >>>
> >>>              -----Original Message-----
> >>>              From: Maurizio Cimadamore <maurizio.cimadamore at oracle.com>
> >>>              Sent: Monday, April 20, 2020 6:54 PM
> >>>              To: ahmdprog.java at gmail.com; panama-dev at openjdk.java.net
> >>>              Subject: Re: MemorySegment JVM memory leak
> >>>
> >>>              On 20/04/2020 15:09, ahmdprog.java at gmail.com wrote:
> >>>                  Maurizio,
> >>>
> >>>                  All the time, when reading/writing huge data using memory
> >>>                  segments, the JVM eats all my 32G of RAM.
> >>>
> >>>              Are you sure the memory being "eaten" is heap memory?
> >>>
> >>>              Try running with -verbose:gc
> >>>
> >>>              In my case it prints:
> >>>
> >>>              [0.764s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 23M->2M(252M) 1.346ms
> >>>              [1.951s][info][gc] GC(1) Pause Young (Normal) (G1 Evacuation Pause) 50M->2M(252M) 0.996ms
> >>>              [5.478s][info][gc] GC(2) Pause Young (Normal) (G1 Evacuation Pause) 148M->2M(252M) 3.701ms
> >>>              [9.196s][info][gc] GC(3) Pause Young (Normal) (G1 Evacuation Pause) 148M->2M(252M) 0.908ms
> >>>              [14.374s][info][gc] GC(4) Pause Young (Normal) (G1 Evacuation Pause) 148M->2M(252M) 1.283ms
> >>>              [18.810s][info][gc] GC(5) Pause Young (Normal) (G1 Evacuation Pause) 148M->2M(252M) 1.026ms
> >>>
> >>>              As you can see, after an initial (normal) ramp up, _heap_ memory
> >>>              usage stabilizes at ~148M
> >>>
> >>>
> >>>
> >>>              Maurizio
> >>>
> >>>
> >>>
> >>>                  I am using Oracle Linux server.
> >>>
> >>>                  Regards, Ahmed.
> >>>
> >>>                  -----Original Message-----
> >>>                  From: Maurizio Cimadamore <maurizio.cimadamore at oracle.com>
> >>>                  Sent: Monday, April 20, 2020 5:59 PM
> >>>                  To: ahmdprog.java at gmail.com; panama-dev at openjdk.java.net
> >>>                  Subject: Re: MemorySegment JVM memory leak
> >>>
> >>>                  On 20/04/2020 14:40, ahmdprog.java at gmail.com wrote:
> >>>                      Thank you Maurizio for the feedback and explanation.
> >>>
> >>>                      But there is something else. Even though I am closing the
> >>>                      memory segment, the JVM heap stays occupied. I also tried
> >>>                      slicing the memory segment to a ByteBuffer, as in the code
> >>>                      below, and it is the same behaviour.
> >>>
> >>>                      I did a lot of testing regarding memory segment
> >>>                      reading/writing. All the time it is the same behaviour:
> >>>                      JVM memory consumption is very high.
> >>>
> >>>                      The purpose of an off-heap memory segment is not to touch
> >>>                      the heap memory of the JVM, but unfortunately that is not
> >>>                      the case in the current JVM implementation.
> >>>
> >>>                  Your new example is essentially unmapping memory all the time,
> >>>                  so it should not run into any leak issues. On my machine
> >>>                  resident memory stays constant at approx 200M. This is not too
> >>>                  different from what I get using a simple "hello world" Java
> >>>                  application which just prints the same string in a loop. Do
> >>>                  you observe more heap usage than just 200M on your machine?
> >>>
> >>>                  Maurizio
> >>>
> >>>                      public static void testingMemorySegmentV2() {
> >>>                          String strFileName = "/disk3/data.index" + System.currentTimeMillis();
> >>>                          File fileObjectFileName = new File(strFileName);
> >>>                          if (fileObjectFileName.exists() == false) {
> >>>                              try {
> >>>                                  fileObjectFileName.createNewFile();
> >>>                              } catch (IOException e) {
> >>>                              } catch (Exception e) {
> >>>                              }
> >>>                          }
> >>>                          long lngMemorySegmentFileSize = 107374182400l; // 100 G
> >>>                          byte[] bytesArrayString = new byte[4096];
> >>>                          MemorySegment sourceSegment = MemorySegment.ofArray(bytesArrayString);
> >>>                          long lngTotalNumberOfPagesForAllFile = lngMemorySegmentFileSize / 4096;
> >>>                          try {
> >>>                              for (long i = 0; i < lngTotalNumberOfPagesForAllFile; i++) {
> >>>                                  MemorySegment memorySegmentTmp = MemorySegment.mapFromPath(new File(strFileName).toPath(), lngMemorySegmentFileSize, FileChannel.MapMode.READ_WRITE);
> >>>                                  MemorySegment memorySegmentTmp2 = memorySegmentTmp.asSlice(i * 4096, 4096);
> >>>                                  ByteBuffer buffer = memorySegmentTmp2.asByteBuffer();
> >>>                                  buffer.put(bytesArrayString);
> >>>                                  memorySegmentTmp.close();
> >>>                              }
> >>>                          } catch (IOException e) {
> >>>                              e.printStackTrace();
> >>>                          }
> >>>                      }
> >>>
> >>>
> >>>
> >>>                      Regards, Ahmed.
> >>>
> >>>
> >>>
> >>>                      -----Original Message-----
> >>>                      From: Maurizio Cimadamore <maurizio.cimadamore at oracle.com>
> >>>                      Sent: Monday, April 20, 2020 5:16 PM
> >>>                      To: ahmdprog.java at gmail.com; panama-dev at openjdk.java.net
> >>>                      Subject: Re: MemorySegment JVM memory leak
> >>>
> >>>                      Hi,
> >>>
> >>>                      I've tried your example and I think it's running as
> >>>                      expected, with a little caveat (described below).
> >>>
> >>>                      On my machine, the test completes - with resident memory
> >>>                      pegged at about 8G, while virtual memory was 100G. The
> >>>                      latter is normal, since in order to make the memory
> >>>                      accessible to your process, mmap has to reserve 100G of
> >>>                      memory in the virtual address space. These 100G will of
> >>>                      course not be committed all at once - the policy by which
> >>>                      this is done is heavily OS dependent. Most OSes have some
> >>>                      logic to discard unused pages, so that your application
> >>>                      will not crash; also, most OSes will attempt to "prefetch"
> >>>                      more than one page in order to speed up access.
> >>>
> >>>                      Now, what's puzzling is why the resident memory is so high -
> >>>                      and I think I found out what happens: basically this test
> >>>                      is generating an awful lot of dirty pages - since these
> >>>                      pages are not flushed back to disk (e.g. in a way similar
> >>>                      to what MappedByteBuffer::force does), all these pages have
> >>>                      to be kept around for longer in main memory (again, the
> >>>                      details and thresholds are system specific).
> >>>
> >>>                      Since I'm in the middle of adding force() support to mapped
> >>>                      segments:
> >>>
> >>>                      https://git.openjdk.java.net/panama-foreign/pull/115
> >>>
> >>>                      I did a simple experiment: I added a call to the new
> >>>                      MappedMemorySegment::force() after the call to
> >>>                      MemoryAddress::copy, and re-ran the test. And now the
> >>>                      resident memory was pegged at 150KB :-)
> >>>
> >>>                      So, I believe your issue is that, when managing very large
> >>>                      files, you have to be disciplined in how you sync the
> >>>                      contents of main memory back into the mapped file - if you
> >>>                      leave it implicit (e.g. to the OS), you might end up in a
> >>>                      not-so-desirable place.
> >>>
> >>>                      Does this help?
> >>>
> >>>                      Maurizio
> >>>
> >>>                      On 18/04/2020 17:33, ahmdprog.java at gmail.com wrote:
> >>>                          Gentlemen,
> >>>
> >>>                          There is a memory leak in the JVM while writing and
> >>>                          reading a byte array to a memory segment. Below is a
> >>>                          simple example that generates a 100G file with zero
> >>>                          bytes. While running the code below, you will see that
> >>>                          the JVM consumes all server memory.
> >>>
> >>>                          Unfortunately, I also tested reading an array of bytes.
> >>>                          It has the same memory leak issue.
> >>>
> >>>                          public static void testingMemorySegment() {
> >>>                              String strFileName = "/disk3/bigdata.index" + System.currentTimeMillis();
> >>>                              File fileObjectFileName = new File(strFileName);
> >>>                              if (fileObjectFileName.exists() == false) {
> >>>                                  try {
> >>>                                      fileObjectFileName.createNewFile();
> >>>                                  } catch (IOException e) {
> >>>                                  } catch (Exception e) {
> >>>                                  }
> >>>                              }
> >>>                              long lngMemorySegmentFileSize = 107374182400l; // 100 G
> >>>                              byte[] bytesArrayString = new byte[4096];
> >>>                              MemorySegment sourceSegment = MemorySegment.ofArray(bytesArrayString);
> >>>                              long lngTotalNumberOfPagesForAllFile = lngMemorySegmentFileSize / 4096;
> >>>                              try {
> >>>                                  MemorySegment memorySegmentTmp = MemorySegment.mapFromPath(new File(strFileName).toPath(), lngMemorySegmentFileSize, FileChannel.MapMode.READ_WRITE);
> >>>                                  MemoryAddress address = memorySegmentTmp.baseAddress();
> >>>                                  MemoryAddress sourceAddress = sourceSegment.baseAddress();
> >>>                                  for (long i = 0; i < lngTotalNumberOfPagesForAllFile; i++) {
> >>>                                      MemoryAddress.copy(sourceAddress, address.addOffset(i * 4096), 4096);
> >>>                                  }
> >>>                                  memorySegmentTmp.close();
> >>>                              } catch (IOException e) {
> >>>                                  e.printStackTrace();
> >>>                              }
> >>>                          }



More information about the panama-dev mailing list