Low-Overhead Heap Profiling

JC Beyler jcbeyler at google.com
Mon Apr 17 19:37:02 UTC 2017


Hi all,

I worked on getting a few numbers for overhead and accuracy for my feature.
I'm unsure if this is the right place to provide the full data, so I am
just summarizing it here for now.

- Overhead of the feature

Using the DaCapo benchmark suite (http://dacapobench.org/), my initial
results show that sampling adds roughly 2.4% overhead with a 512k sampling
interval, 512k being our default setting (a rough sketch of what that
interval means follows the notes below).

- Note: this was without the tradesoap, tradebeans, and tomcat benchmarks,
  since they did not work with my JDK9 build (it seems to be an issue
  between DaCapo and JDK9)
- I want to rerun next week to make sure the numbers are stable
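
To make the 512k setting more concrete, here is a rough sketch, in plain
Java and purely for illustration, of the kind of byte-based sampling this
relies on: count bytes as they are allocated, and when a threshold drawn
around the 512k mean is crossed, record the allocating stack trace. The
class, method names, and the exponential draw are my own illustration and
not the actual VM code.

    import java.util.concurrent.ThreadLocalRandom;

    // Illustrative sketch only: the real sampler lives inside the VM.
    final class SamplerSketch {
        private static final long MEAN_SAMPLING_INTERVAL = 512 * 1024; // bytes

        // Bytes left to allocate before the next sample is taken.
        private long bytesUntilNextSample = nextInterval();

        // Conceptually invoked on every allocation of `size` bytes.
        void onAllocation(long size, Runnable recordStackTrace) {
            bytesUntilNextSample -= size;
            if (bytesUntilNextSample <= 0) {
                recordStackTrace.run();          // capture the allocating stack here
                bytesUntilNextSample = nextInterval();
            }
        }

        // Draw the next interval from an exponential distribution with a
        // 512 KiB mean, so that on average each allocated byte has the same
        // chance of triggering a sample.
        private static long nextInterval() {
            double u = ThreadLocalRandom.current().nextDouble();
            return (long) (-Math.log(1.0 - u) * MEAN_SAMPLING_INTERVAL) + 1;
        }
    }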

- Accuracy of the feature

I wrote a small microbenchmark that allocates from two different stack
traces at a given ratio, for example 10% from stack trace S1 and 90% from
stack trace S2. The microbenchmark was run 20 times; I averaged the results
and checked the accuracy. Statistically it seems sound: when I allocated
10% from S1 and 90% from S2, with a 512k sampling interval I obtained
9.61% for S1 and 90.49% for S2.
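
For reference, the microbenchmark is conceptually along the lines of the
sketch below (names, sizes, and iteration counts are made up for
illustration, not the exact code I ran): two allocation sites, one picked
with probability p and the other with probability 1 - p, and the sampled
profile is then compared against p.

    import java.util.Random;

    public class AllocationRatioBench {
        private static final int ALLOCATION_SIZE = 1024;   // bytes per allocation
        private static final int ITERATIONS = 10_000_000;

        static byte[] stackTraceS1() { return new byte[ALLOCATION_SIZE]; }
        static byte[] stackTraceS2() { return new byte[ALLOCATION_SIZE]; }

        public static void main(String[] args) {
            double p = 0.10;            // fraction of allocations from S1
            Random rng = new Random(42);
            long kept = 0;              // keep allocations from being optimized away
            for (int i = 0; i < ITERATIONS; i++) {
                byte[] b = (rng.nextDouble() < p) ? stackTraceS1() : stackTraceS2();
                kept += b.length;
            }
            System.out.println("allocated bytes: " + kept);
            // With sampling enabled, the profile should attribute roughly 10%
            // of the sampled allocations to stackTraceS1 and 90% to
            // stackTraceS2.
        }
    }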

Let me know if there are any questions on the numbers and if you'd like to
see some more data.

Note: this was done using our internal JDK8 implementation since the webrev
provided by http://cr.openjdk.java.net/~rasbold/heapz/webrev.00/index.html does
not yet contain the whole implementation and therefore would have been
misleading.

Thanks,
Jc


On Tue, Apr 4, 2017 at 3:55 PM, JC Beyler <jcbeyler at google.com> wrote:

> Hi all,
>
> To move the discussion forward, with Chuck Rasbold's help to make a
> webrev, we pushed this:
> http://cr.openjdk.java.net/~rasbold/heapz/webrev.00/index.html
> 415 lines changed: 399 ins; 13 del; 3 mod; 51122 unchg
>
> This is not a final change that implements the whole proposal from the JBS
> entry: https://bugs.openjdk.java.net/browse/JDK-8177374; what it does show
> are parts of the proposed implementation, and hopefully it can get the
> conversation started as I work through the details.
>
> For example, the changes to C2 for the allocations are here:
> http://cr.openjdk.java.net/~rasbold/heapz/webrev.00/src/share/vm/opto/macro.cpp.patch
>
> Hopefully this all makes sense and thank you for all your future comments!
> Jc
>
>
> On Tue, Dec 13, 2016 at 1:11 PM, JC Beyler <jcbeyler at google.com> wrote:
>
>> Hello all,
>>
>> This is a follow-up from Jeremy's initial email from last year:
>> http://mail.openjdk.java.net/pipermail/serviceability-dev/2015-June/017543.html
>>
>> I've gone ahead and started preparing this, and Jeremy and I went down
>> the route of actually writing it up in JEP form:
>> https://bugs.openjdk.java.net/browse/JDK-8171119
>>
>> I think the original conversation that happened last year in that thread
>> still holds true:
>>
>>  - We have a patch at Google that we think others might be interested in
>>     - It provides a means to understand where the allocation hotspots are
>> at a very low overhead
>>     - Since it is at a low overhead, we can leave it on by default
>>
>> So I come to the mailing list with Jeremy's initial question:
>> "I thought I would ask if there is any interest / if I should write a
>> JEP / if I should just forget it."
>>
>> A year ago, it seemed some thought it was a good idea; is this still true?
>>
>> Thanks,
>> Jc
>>
>>
>>
>