[RFR]8215623: Add incremental dump for jmap histo
臧琳
zanglin5 at jd.com
Sun May 5 02:42:07 UTC 2019
Dear Serguei,
Thanks a lot for your review.
System.err.println(" incremental dump support:");
+ System.err.println(" chunkcount=<N> object number counted (in Kilo) to trigger incremental dump");
+ System.err.println(" maxfilesize=<N> size limit of incremental dump file (in KB)");
From this description it is not clear at all what chunkcount means.
Is it to define how many heap objects are dumped in one chunk?
If so, would it be better to name it chunksize instead, where chunksize is measured in heap objects?
Then would it be better to use the same units to define maxfilesize as well?
(I'm not insisting on this, just asking.)
The original meaning of "chunkcount" is how many objects are dumped in one chunk, and "maxfilesize" is the size limit of the dump file.
For example, "chunkcount=1, maxfilesize=10" means that intermediate data will be written to the dump file for every 1000 objects, and
when the dump file grows larger than 10 KB, the file is erased and rewritten with the latest dumped data.
The reason I didn't use an object count to control the dump file size is that there can be humongous objects, which may make the file too large.
Do you think using object size instead of chunkcount would be a good option? That way the two options would use the same units.
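To make the discussion concrete, here is a minimal sketch of the policy described above, not the actual jmap patch: objects are buffered, a chunk is flushed every chunkcount * 1000 objects, and when appending a chunk would push the file past maxfilesize KB, the file is erased and rewritten with only the latest chunk. The class and method names (IncrementalHisto, recordObject, flushChunk) are illustrative assumptions.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative sketch of the proposed incremental-dump behavior;
// names and structure are hypothetical, not from the actual webrev.
public class IncrementalHisto {
    private final Path dumpFile;
    private final long chunkObjects;   // flush every chunkcount * 1000 objects
    private final long maxFileBytes;   // maxfilesize is given in KB

    private final StringBuilder buffer = new StringBuilder();
    private long objectsSinceFlush = 0;

    public IncrementalHisto(Path dumpFile, long chunkCountKilo, long maxFileSizeKb) {
        this.dumpFile = dumpFile;
        this.chunkObjects = chunkCountKilo * 1000;
        this.maxFileBytes = maxFileSizeKb * 1024;
    }

    // Record one dumped object's histogram line; flush when a chunk is full.
    public void recordObject(String line) throws IOException {
        buffer.append(line).append('\n');
        if (++objectsSinceFlush >= chunkObjects) {
            flushChunk();
        }
    }

    // Append the buffered chunk; if that would exceed the size limit,
    // erase the file and rewrite it with only the latest dumped data.
    private void flushChunk() throws IOException {
        byte[] chunk = buffer.toString().getBytes(StandardCharsets.UTF_8);
        long current = Files.exists(dumpFile) ? Files.size(dumpFile) : 0;
        if (current + chunk.length > maxFileBytes) {
            Files.write(dumpFile, chunk);   // default options truncate and rewrite
        } else {
            Files.write(dumpFile, chunk,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
        buffer.setLength(0);
        objectsSinceFlush = 0;
    }
}
```

This also shows why a pure object count cannot bound the file size: the truncation decision has to look at the byte size of the file, since a chunk containing humongous objects can be arbitrarily large.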
BRs,
Lin