[aarch64-port-dev ] aarch64 and arm64 jdk benchmarks result sharing

Zhongwei Yao zhongwei.yao at linaro.org
Thu Apr 20 02:03:12 UTC 2017


On 19 April 2017 at 19:30, Andrew Haley <aph at redhat.com> wrote:

> Looking at this some more, I can't tell which benchmarks are
> throughput (big is good) and which are time (small is good).  This
> makes it very hard for me to see where community AArch64 is lagging
> performance.
>
> I think Dacapo is throughput?  And TeraSort is time?
>
> The results for Partner B's hardware on Dacapo are so weird that I
> think there must be something wrong.  lusearch didn't work at all, so
> there must be a code generation problem somewhere.


On our Partner B's hardware, lusearch could run, but it failed to converge
during the warmup stage every time, so its result is recorded as 0 in our data.

The sunflow case is similar to lusearch: sunflow could run, but it failed to
converge most of the time. Checking our results, it converged only once, so
the standard deviation is 0 because there is only a single data point. I
should have added a note about this case. Sorry for the confusion.
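For reference, the convergence behaviour described above comes from the DaCapo harness's converge mode, which can be reproduced from the command line. This is a sketch: the jar file name and the iteration cap are assumptions, so adjust them for the DaCapo release actually in use.

```shell
# Run lusearch in converge mode (-C): the harness keeps iterating until
# run-to-run performance stabilises, or reports a convergence failure.
# dacapo-9.12-bach.jar is an assumed file name for illustration only.
java -jar dacapo-9.12-bach.jar -C lusearch

# Cap the iteration count (-n) so a non-converging workload such as
# sunflow terminates instead of warming up indefinitely.
java -jar dacapo-9.12-bach.jar -C -n 20 sunflow
```

A run that never converges under `-C` is the situation reported above, where the benchmark executes but no stable measurement is produced.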


> SPECjvm I understand, it's well-designed and stable, and I'm quite
> satisfied with the results there.
>
> With regard to the default size of the code cache, I think that's a
> hangover from the time when we hadn't solved the problem of generating
> far calls, and I'm considering changing it to be the same as other
> architectures.  The problem with switching, though, is that while it
> won't make any significant difference to throughput it will increase
> code size, but I can see that it makes sense that we don't run out of
> code cache before other architectures do.
>
> Andrew.
>
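For anyone wanting to compare code-cache behaviour across architectures, the relevant HotSpot setting can be inspected and overridden from the command line. A sketch follows; the 240m override and the `benchmark.jar` name are illustrative assumptions, not values recommended by this thread.

```shell
# Print the code cache size actually in effect for this JVM build
# and architecture (the default under discussion above).
java -XX:+PrintFlagsFinal -version | grep ReservedCodeCacheSize

# Override the reserved code cache size for a benchmark run.
# 240m and benchmark.jar are placeholders for illustration only.
java -XX:ReservedCodeCacheSize=240m -jar benchmark.jar
```

Pinning the flag explicitly in benchmark runs also makes results comparable across JVM builds whose defaults differ.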



-- 
Best regards,
Zhongwei

