Low-Overhead Heap Profiling

Kirk Pepperdine kirk.pepperdine at gmail.com
Fri Jun 26 07:01:34 UTC 2015


Hi Jeremy,

Sorry I wasn’t so clear; it’s not about collection, it’s about allocation. And in that regard it’s not about size, it’s about frequency. People tend to allocate small objects frequently and avoid allocating large objects frequently; the assumption is that large is expensive but small isn’t. These events will show up in execution profilers, but given the safepoint bias of execution profilers and other factors, the problem is often clearer in a memory profiler.
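
As a rough illustration (a hypothetical fragment, nothing more), the two loops below allocate roughly the same number of bytes, but the first produces a thousand times as many allocation events:

    // Hypothetical fragment: same bytes allocated, very different
    // allocation frequency.
    public class AllocationFrequency {
        static volatile byte[] sink;  // keep the JIT from eliding the allocations

        public static void main(String[] args) {
            // ~1 GB in small objects: 1,000,000 allocation events.
            for (int i = 0; i < 1_000_000; i++) {
                sink = new byte[1_024];          // 1 KB each
            }
            // ~1 GB in large objects: only 1,000 allocation events.
            for (int i = 0; i < 1_000; i++) {
                sink = new byte[1_024 * 1_024];  // 1 MB each
            }
        }
    }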

Kind regards,
Kirk

On Jun 25, 2015, at 7:34 PM, Jeremy Manson <jeremymanson at google.com> wrote:

> Why would allocation frequency be more damaging to performance? Allocation is cheap, and as long as the objects become dead before the young-generation collection, it costs the same to collect one 1MB object as it does to collect 1000 1KB objects, since a copying collection only does work proportional to what survives.
> 
> Jeremy
> 
> On Wed, Jun 24, 2015 at 11:54 PM, Kirk Pepperdine <kirk.pepperdine at gmail.com> wrote:
>> 
>> But, seriously, why didn’t you like my proposal? It can do anything your scheme can with fewer and simpler code changes. The only thing it cannot do is sample based on object count (e.g., every 100 objects) instead of object size (e.g., every 1MB of allocations). But I think sampling based on size is the right approach here (IMHO).
>> 
>> 
> 
> I would think that size-based sampling would create a size-based bias in your sample. Since, IME, it’s allocation frequency that is more damaging to performance, I’d prefer to see time-boxed sampling (see the sketch below).
> 
> Kind regards,
> Kirk Pepperdine
> 
> 
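A minimal sketch of the size-based scheme being discussed, for concreteness (the names and the single global counter are my assumptions, not the actual proposal; a real implementation would keep the counter per thread):

    // Hypothetical sketch of size-based sampling: take a sample roughly
    // every SAMPLE_INTERVAL bytes allocated, rather than every N objects.
    public class SizeBasedSampler {
        static final long SAMPLE_INTERVAL = 1_024 * 1_024;  // 1 MB of allocations
        static long bytesUntilSample = SAMPLE_INTERVAL;

        // Conceptually invoked on every allocation of `size` bytes.
        static void onAllocation(Object obj, long size) {
            bytesUntilSample -= size;
            if (bytesUntilSample <= 0) {
                recordSample(obj, size);  // capture stack trace, type, size...
                bytesUntilSample += SAMPLE_INTERVAL;
            }
        }

        static void recordSample(Object obj, long size) {
            // Elided: record the sample somewhere cheap to read back later.
        }
    }

Note that under this scheme an object’s chance of being sampled is roughly proportional to its size, which is the size-based bias described above; a count-based sampler would decrement the counter by one per allocation instead, and a time-boxed sampler would record at most one sample per fixed time window regardless of size or count.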
