Low-Overhead Heap Profiling

Tony Printezis tprintezis at twitter.com
Thu Jun 25 21:08:45 UTC 2015


Hi Kirk,

(long time!) See inline.

On June 25, 2015 at 2:54:04 AM, Kirk Pepperdine (kirk.pepperdine at gmail.com) wrote:


But, seriously, why didn’t you like my proposal? It can do anything your scheme can, with fewer and simpler code changes. The only thing it cannot do is sample based on object count (i.e., every 100 objects) rather than object size (i.e., every 1MB of allocations). But I think sampling based on size is the right approach here (IMHO).



I would think that size-based sampling would create a size-based bias in your sampling.


That’s actually true. And this could be good (if you’re interested in what’s filling up your eden, the larger objects might be of more interest) or bad (if you want to get a general idea of what’s being allocated, the size bias might make you miss some types of objects / allocation sites).
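To make the bias concrete, here is a minimal sketch of size-based sampling (hypothetical names, not HotSpot code): a byte counter is decremented on every allocation, and a sample is taken each time a fixed byte threshold (say 1MB) is crossed. Because a larger object consumes more of the counter, it is proportionally more likely to trigger a sample:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: illustrates size-based (bytes-allocated) sampling and its
// bias toward large objects. The class and method names are invented for
// this example; this is not the proposed HotSpot implementation.
class SizeBasedSampler {
    private final long thresholdBytes;   // e.g., 1MB between samples
    private long bytesUntilSample;       // counts down as bytes are allocated
    final Map<String, Long> samplesBySite = new HashMap<>();

    SizeBasedSampler(long thresholdBytes) {
        this.thresholdBytes = thresholdBytes;
        this.bytesUntilSample = thresholdBytes;
    }

    // Called on every allocation with the object's size and allocation site.
    void onAllocation(String site, long sizeBytes) {
        bytesUntilSample -= sizeBytes;
        if (bytesUntilSample <= 0) {
            samplesBySite.merge(site, 1L, Long::sum);  // record a sample
            bytesUntilSample += thresholdBytes;        // arm the next sample
        }
    }
}
```

With a 1MB threshold, a site allocating 100 x 64KB objects (6.4MB total) and a site allocating 10 x 1MB objects (10.5MB total) get samples roughly in proportion to bytes allocated, not object count — which is exactly the bias described above: sites allocating many small objects can be under-represented per object.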



Since, IME, it’s allocation frequency that is more damaging to performance, I’d prefer to see time-boxed sampling.


Do you mean “sample every X ms, say”?



Tony




Kind regards,
Kirk Pepperdine



-----

Tony Printezis | JVM/GC Engineer / VM Team | Twitter

@TonyPrintezis
tprintezis at twitter.com



More information about the hotspot-gc-dev mailing list