RFR: 8282365: Consolidate and improve division by constant idealizations [v39]

Quan Anh Mai qamai at openjdk.org
Tue Dec 26 07:34:12 UTC 2023


On Sun, 24 Dec 2023 01:23:08 GMT, John R Rose <jrose at openjdk.org> wrote:

>> I took the formula quite literally from the paper so I don't think there is a need for a separate test for those cases. It is also covered in the transformation tests from the Java side.
>> 
>> ![image](https://github.com/openjdk/jdk/assets/49088128/cc1b1c5e-a37b-4510-9aff-839f0337a532)
>
> It is great that we are factoring out these algorithm steps into their own separately reviewable and maintainable API points.  I would even say it is *necessary* to do this, compared with the alternatives, such as (what we used to do) writing ad-hoc file-local helper functions or even hand-inlined statements.
> 
> A big part of the advantage is that we can catch bugs at the subroutine level, rather than at the system level.  But this only works if we write (or at least try hard to write) unit tests for each subroutine.  It doesn’t matter very much whether the subroutine comes from a published source or whether we created it ourselves somehow.  (The risk model for where failures come from differs a little, since the published source is presumably better reviewed than our own work.)  In either case a unit test (gtest) adds a lot of value.  The gtest can detect either of two interesting problems: 1. an error in the algorithm (this happens even if it is published) or 2. an error in our encoding of the algorithm.  The second is probably more likely.  It can happen due to source code errors or due to compiler bugs.  Either way, the gtest adds value by making it more likely that, if something goes wrong, we will find it before system integration and deployment.
> 
> So, based on what I can see here, I recommend writing at least a simple gtest for each subroutine we are writing, regardless of its source.
> 
> (Reminder:  When using random numbers as test inputs, please ensure that the gtests are seeded reproducibly.  Double check for pre-existing uses of random generators in the gtests that satisfy this requirement.)
> 
> Thanks for this very good work; I’m very glad you tackled it.  Getting numerics correct is always tricky, but this will pay off.
> 
> P.S. Another reason to use a unit test on a published algorithm: the system's validity can be demonstrated without reference to the publication. A self-contained proof is better for us.

@rose00 Thanks for your input; I have added a unit test for this case.
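
For illustration only, here is a minimal sketch of what a reproducibly seeded gtest along these lines could look like. The helper and its (magic, shift) convention are hypothetical stand-ins (a textbook round-up magic-constant construction, not the actual subroutine in this PR), the test name is made up, and the 128-bit product relies on the GCC/Clang `unsigned __int128` extension:

```c++
#include <cstdint>
#include <random>
#include "gtest/gtest.h"

// Stand-in helper (hypothetical): for d >= 2, compute (magic, shift) so that
// for every 32-bit unsigned n,
//   n / d == (uint32_t)(((unsigned __int128)n * magic) >> shift).
// This is the classic round-up construction: shift = 32 + ceil(log2(d)),
// magic = ceil(2^shift / d).
static void magic_divide_constants_u32(uint32_t d, uint64_t& magic, uint32_t& shift) {
  uint32_t s = 0;                               // s = ceil(log2(d))
  while (((uint64_t)1 << s) < d) {
    s++;
  }
  unsigned __int128 two_pow = (unsigned __int128)1 << (32 + s);
  magic = (uint64_t)((two_pow + d - 1) / d);    // ceil(2^(32+s) / d), fits in 34 bits
  shift = 32 + s;
}

TEST(DivConstants, u32_magic_random) {
  std::mt19937_64 rng(8282365);                 // fixed seed: failures reproduce exactly
  std::uniform_int_distribution<uint32_t> dist(2, UINT32_MAX);
  for (int i = 0; i < 1000; i++) {
    const uint32_t d = dist(rng);
    uint64_t magic;
    uint32_t shift;
    magic_divide_constants_u32(d, magic, shift);
    for (int j = 0; j < 100; j++) {
      const uint32_t n = dist(rng);
      // Multiply-shift quotient; compare against the exact hardware quotient.
      const uint32_t q = (uint32_t)(((unsigned __int128)n * magic) >> shift);
      ASSERT_EQ(n / d, q) << "d=" << d << " n=" << n;
    }
  }
}
```

Seeding the generator with a fixed value (here the bug id, just as an example) and printing the failing (d, n) pair in the assertion message means any failure can be replayed and debugged directly, which is the point of the reminder above.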

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/9947#discussion_r1436298767

