NMT does not seem to count malloc correctly
Thomas Stüfe
thomas.stuefe@gmail.com
Thu Aug 6 15:45:59 UTC 2020
Oh of course. How silly of me. Thanks Zhengyu.
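
For the record, here is the ordering Zhengyu describes below, as a tiny
standalone sketch. The names are invented and the real bookkeeping lives in
MemTracker (which also reads the allocation size back from its malloc header
rather than taking it as a parameter), but the order of operations is the
point:

#include <cstdio>
#include <cstdlib>

// Stand-in for NMT's internal malloc accounting.
static size_t nmt_outstanding = 0;

// Sketch of os::malloc(): allocate, then record with NMT.
static void* sketch_os_malloc(size_t size) {
  void* p = ::malloc(size);
  if (p != nullptr) {
    nmt_outstanding += size;  // NMT counts the allocation
  }
  return p;
}

// Sketch of os::free(): NMT is told about the free *before* the
// underlying ::free() happens.
static void sketch_os_free(void* p, size_t size) {
  nmt_outstanding -= size;    // NMT has already forgotten the block here...
  // ::free(p);               // ...so commenting out only this line leaks
                              // at the glibc level, invisibly to NMT.
  (void)p;                    // silence the unused-parameter warning
}

int main() {
  void* p = sketch_os_malloc(1024 * 1024);
  sketch_os_free(p, 1024 * 1024);
  // glibc still holds the megabyte, but NMT reports zero outstanding:
  printf("NMT outstanding: %zu bytes\n", nmt_outstanding);
  return 0;
}

So NMT's counters track what os::malloc()/os::free() were asked to do, not
what glibc actually still holds.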
On Thu, Aug 6, 2020, 17:38 Zhengyu Gu <zgu@redhat.com> wrote:
> By commenting out ::free(), you created a real memory leak, but not from
> NMT's perspective, because NMT has already adjusted its internal counting
> to reflect the os::free() call.
>
> -Zhengyu
>
>
>
> On 8/6/20 11:28 AM, Thomas Stüfe wrote:
> > Hi,
> >
> > maybe I am missing something fundamental, or there is something wrong
> > with NMT.
> >
> > As an experiment for an unrelated issue, I created a C-Heap leak by
> > simply commenting out the ::free() call inside os::free():
> >
> > --- a/src/hotspot/share/runtime/os.cpp Thu Aug 06 17:08:02 2020 +0200
> > +++ b/src/hotspot/share/runtime/os.cpp Thu Aug 06 17:08:08 2020 +0200
> > @@ -804,7 +804,7 @@
> > size_t size = guarded.get_user_size();
> > inc_stat_counter(&free_bytes, size);
> > membase = guarded.release_for_freeing();
> > - ::free(membase);
> > +// ::free(membase);
> > #else
> > void* membase = MemTracker::record_free(memblock, MemTracker::tracking_level());
> > ::free(membase);
> >
> > As expected, this quickly leads to very high memory usage. In my tiny
> > example (the specifics really do not matter) I soon end up with ~4G RSS,
> > and glibc reports about 3G of outstanding allocations:
> >
> > Process Memory:
> > Virtual Size: 11136852K (peak: 11136852K)
> > Resident Set Size: 4057860K (peak: 4057860K) (anon: 4013076K, file: 44784K, shmem: 0K)
> > Swapped out: 0K
> > C-Heap outstanding allocations: 3070399K
> >
> >
> > However, NMT reports only about 600M of committed memory (which, AFAICS,
> > should include all outstanding allocations done via os::malloc()):
> >
> > thomas@mainframe:~$ jjjcmd Interl VM.native_memory scale=1
> >
> > 19441:
> >
> > Native Memory Tracking:
> >
> > Total: reserved=6227876720, committed=626285424
> >
> > ....
> >
> > And the malloc lines add up to only about 68M:
> >
> > thomas@mainframe:~$ jjjcmd Interl VM.native_memory scale=1 | grep malloc
> > (malloc=4044913 #55107)
> > (malloc=38688 #146)
> > (malloc=10812522 #258557)
> > (malloc=18157899 #3629)
> > (malloc=42496 #311)
> > (malloc=216 #7)
> > (malloc=594920 #1788)
> > (malloc=27304 #2)
> > (malloc=2781872 #60716)
> > (malloc=543008 #5144)
> > (malloc=200416)
> > (malloc=152 #7)
> > (malloc=6788 #214)
> > (malloc=18977 #485)
> > (malloc=994280 #2797)
> > (malloc=153248 #744)
> > (malloc=2152 #12)
> >
> > I must be missing something obvious here. What is it?
> >
> > Thank you,
> >
> > Thomas
> >
>
>