valhalla/valhalla/hotspot: multiple value return caused interpreter performance regression

Sergey Kuksenko sergey.kuksenko at
Fri Jun 9 23:29:56 UTC 2017


See my notes below.

On 06/09/2017 08:15 AM, Karen Kinnear wrote:
> I agree with Roland on pushing Frederic’s changes before considering the vreturn improvements
> (I think it is waiting on a code review?)
> My perspective on interpreter performance:
>      We do not expect the interpreter implementation in MVT to give significant performance
> benefits for value types over references.
Neither do I.
> Our goal has been to try to break even and minimize
> performance loss.
Exactly, that's what I am tracking right now.
>      In general for hotspot, our guidance on interpreter performance has been to look
> at it in two contexts
>           1) general throughput when run with a JIT
The easiest way - easy to measure, stable results, etc.
>           2) startup impact, since it runs prior to the JIT

> i.e. performance numbers for -Xint alone are not sufficient to drive decisions.

That's quite a hard way. I have to say that right now we don't have 
startup benchmarks where the interpreter has a significant impact on 
startup performance. Other components, like class loading, JIT 
compilation, and verification, have a higher impact than the 
interpreter. I would guess that even with a 3x slowdown of the 
interpreter, we wouldn't see more than a 10% slowdown of startup time, 
which is acceptable. I will think about that kind of startup benchmark 
after finishing the JIT work.
At the same time, by simply measuring interpreter performance (-Xint) we 
get an upper-bound estimate of how much the interpreter may influence 
startup (I agree that it's a rather loose upper bound).
For example, before the multiple value return patch, the interpreter for 
value types was ~10% slower than the analogous code with references, and 
in that case there was no need to worry about startup performance (I 
mean the interpreter's impact).
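As a rough illustration of that kind of comparison, here is a plain-Java sketch (all class and method names are made up). Real MVT value types need the experimental tooling, so `Point` below is an ordinary reference class standing in for a value type; running the same class with and without `-Xint` gives an upper-bound feel for the interpreter's share:

```java
// Hypothetical micro-measurement sketch, not the actual MVT benchmark:
//
//   java PointBench          // JIT enabled
//   java -Xint PointBench    // interpreter only
public class PointBench {
    // Reference-class stand-in for a value type.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // Returns a freshly allocated Point, like the reference-based code
    // the value-type version was compared against.
    static Point mid(Point a, Point b) {
        return new Point((a.x + b.x) / 2, (a.y + b.y) / 2);
    }

    // Hot loop; returns a checksum so the work can't be optimized away.
    static long run() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            Point p = mid(new Point(i, i + 1), new Point(i + 2, i + 3));
            sum += p.x + p.y;
        }
        return sum;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long sum = run();
        long t1 = System.nanoTime();
        System.out.println("checksum=" + sum + " elapsed-ns=" + (t1 - t0));
    }
}
```

A proper measurement would of course use JMH rather than a single hand-timed loop; the ratio between the two runs, not the absolute numbers, is the point here.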

In the case of a 3x interpreter slowdown:
We won't see any startup regression (on any application) right now, due 
to the fact that all base classes loaded at startup are value-free, and 
interpretation of value-free classes is untouched by that slowdown. And 
I would guess that we won't see any startup regressions caused by the 
interpreter within the MVT project activity (that doesn't mean I won't 
check it, but I don't expect anything visible). The situation is the 
same for extra allocations (boxing) in the interpreter - extra boxing or 
value buffering will have the same kind of performance impact on general 
startup.
Of course, the situation may change later - when base classes use value 
types heavily, or customers create applications with a lot of value 
classes at startup.
If we've got a 3x slowdown (with -Xint), and if (and only if) the 
developers have ideas for how to fix it easily, we should fix it. If it 
can't be fixed without significant effort, let's postpone it - but not 
forget it.
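The extra-allocation point can be illustrated with a plain-Java boxing analogue (a hypothetical sketch - class and method names are made up, and ordinary Integer boxing stands in here for the interpreter's value buffering):

```java
// Each boxed iteration may allocate, while the primitive loop allocates
// nothing - the same kind of extra-allocation cost discussed above.
public class BoxingCost {
    // Boxed accumulator: "acc = acc + i" unboxes, adds, and re-boxes,
    // allocating a new Integer for values outside the small-value cache.
    static int sumBoxed(int n) {
        Integer acc = 0;
        for (int i = 0; i < n; i++) acc = acc + i;
        return acc;
    }

    // Primitive accumulator: no allocation at all.
    static int sumPrim(int n) {
        int acc = 0;
        for (int i = 0; i < n; i++) acc += i;
        return acc;
    }

    public static void main(String[] args) {
        // Same result either way; only the allocation behavior differs.
        System.out.println(sumBoxed(10_000) == sumPrim(10_000));
    }
}
```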

> I haven’t seen the results
>      Sergey - do you have a wiki page with test results for MVT or a way to share intermediate
>      performance information for specific changes? (delighted you are testing our optimizations!)
Yes, I am working on this right now.
> thanks,
> Karen
>> On Jun 9, 2017, at 6:54 AM, Roland Westrelin <rwestrel at> wrote:
>> Hi Sergey,
>>> I've just realized that this patch causes more than a 3x interpreter
>>> performance regression (if a method contains a local variable of value type).
>>> Question to all: How big INTERPRETER performance regression will be
>>> considered as OK?
>> Thanks for the performance number. My change adds 2 runtime calls on
>> method returns with a value type from interpreted callee to interpreted
>> caller so a big performance drop is not surprising.
>> My understanding is that we don't care too much about interpreter
>> performance at this point. In any case, I would rather wait for
>> Frederic's buffering change to be pushed before considering any
>> improvement to the vreturn implementation.
>> Roland.

Best regards,
Sergey Kuksenko
