RFR (S): 7177917: Failed test java/lang/Math/PowTests.java
roland.westrelin at oracle.com
Tue Jun 26 01:53:50 PDT 2012
>> I'm not sure I understand what you mean by: "Then run the same methods over different values (to cover at least some cases in our code) which will produce NaNs to force recompile (or not as in your first implementation). Measure performance with NaNs.".
> Your first implementation does not deoptimize NaN cases so I wanted to see if there is difference in performance.
Sorry. It's unclear to me. Is there anything beyond these:
>> I wrote a micro benchmark that:
>> - chooses 1 million "good" random values for pow
>> - time the computation of pow for the 1 million values
>> - force the uncommon trap and recompilation
>> - do the measurement again with the same 1 million values
that you wanted to see measured?
>> I did the same thing with exp, and I measured with both the previous and current versions of the code, but I don't see any difference.
> Did you verify that during the first round you did not get NaN and already-deoptimized code? There should be a difference. Did you pre-generate the random values?
Yes to both questions.
Here is the micro benchmark:
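[The benchmark code itself is not preserved in this archive. Below is a hypothetical sketch following the steps described above (pre-generate 1 million "good" values, time pow, force the uncommon trap via NaN, time again); all class and method names are illustrative, not the author's original code.]

```java
import java.util.Random;

// Hypothetical micro benchmark reconstructing the steps described in the
// thread; not the original attachment.
public class PowBench {
    static final int N = 1_000_000;

    // Consume results so the JIT cannot eliminate the pow calls.
    static double sink;

    // Time Math.pow over the pre-generated inputs, returning milliseconds.
    static double timePow(double[] bases, double[] exps) {
        long start = System.nanoTime();
        double sum = 0.0;
        for (int i = 0; i < bases.length; i++) {
            sum += Math.pow(bases[i], exps[i]);
        }
        sink = sum;
        return (System.nanoTime() - start) / 1e6;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        // Pre-generate "good" random values (finite, non-NaN results).
        double[] bases = new double[N];
        double[] exps = new double[N];
        for (int i = 0; i < N; i++) {
            bases[i] = 1.0 + rnd.nextDouble();   // in [1, 2)
            exps[i] = rnd.nextDouble() * 10.0;   // in [0, 10)
        }

        // Warm up, then first timed run on good values only.
        for (int i = 0; i < 20; i++) timePow(bases, exps);
        System.out.println("before trap: " + timePow(bases, exps) + " ms");

        // Force the NaN path: a negative base with a non-integer exponent
        // makes Math.pow return NaN, which should hit the uncommon trap
        // and trigger recompilation.
        for (int i = 0; i < 20_000; i++) {
            sink = Math.pow(-2.0, 0.5 + i * 1e-6);
        }

        // Measure again with the same pre-generated good values.
        for (int i = 0; i < 20; i++) timePow(bases, exps);
        System.out.println("after trap:  " + timePow(bases, exps) + " ms");
    }
}
```

The negative-base, non-integer-exponent case is one of the NaN special cases specified for `Math.pow`; any input that reliably produces NaN in the compiled code would serve to force the deoptimization.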