Array equality, comparison and mismatch

Paul Sandoz paul.sandoz at oracle.com
Tue Oct 13 09:22:35 UTC 2015


> On 13 Oct 2015, at 05:46, Mike Duigou <openjdk at duigou.org> wrote:
> 
>>> - I apologize if this was discussed earlier in the thread, but why is the comparison of floats and doubles done first with the == operator on the int bits, and only then with the compare method?
>> I was being consistent with the test used for the existing equals
>> methods of float[] and double[]. Note that the Float/Double.*to*Bits
>> methods are intrinsic.
> 
> I guess my worry is that the == floatToIntBits would be redundant, as the implementation of compare() might have exactly the same test as its first step. It would be a reasonable optimization, since it would have the benefit of loading the values into registers before doing the more expensive relational comparison.
> 

I did not concentrate on this area too much since this code is likely to change, but I just looked a little more closely at some benchmark results and the generated code.

Analysis so far indicates big gains are to be had on larger arrays, with better performance or no impact on small arrays, if I do the following instead:

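  // Common case: identical raw bits mean the elements are equal, so the
  // NaN-aware Double.compare is only invoked on a raw-bit mismatch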
  if (Double.doubleToRawLongBits(a[i]) !=
      Double.doubleToRawLongBits(b[i])) {
      int c = Double.compare(a[i], b[i]);
      if (c != 0) return c;
  }

(If C2 inlining occurs, the registers corresponding to the two array elements, e.g. xmm*, should be reused.)
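
To Mike's point about redundancy: as far as I recall, Double.compare starts with relational comparisons rather than a bit test, along these lines (a sketch from memory of the JDK source, not the exact code):

  public static int compare(double d1, double d2) {
      if (d1 < d2)
          return -1;           // neither value is NaN, d1 is smaller
      if (d1 > d2)
          return 1;            // neither value is NaN, d1 is larger

      // Cannot use doubleToRawLongBits here because of NaNs;
      // doubleToLongBits collapses all NaN patterns to one canonical value
      long thisBits    = Double.doubleToLongBits(d1);
      long anotherBits = Double.doubleToLongBits(d2);

      return (thisBits == anotherBits ?  0 : // values are equal
              (thisBits < anotherBits ? -1 : // (-0.0, 0.0) or (!NaN, NaN)
               1));                          // (0.0, -0.0) or (NaN, !NaN)
  }

So the raw bits guard is not redundant: when the bits are identical, which should be the common case while scanning equal prefixes, compare() is not entered at all.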

That gets the NaN checking off the critical path. I think that is reasonable to do, given the assumption that varying forms of NaN are likely to be rare. The same assumption also applies to the Unsafe vectorization, which first compares raw bits.

Similar modifications can be made to the equals methods.
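
For example, a double[] equals loop with the same guard might look like this (an illustrative sketch, assuming a and b are the two arrays and length is their common length):

  for (int i = 0; i < length; i++) {
      if (Double.doubleToRawLongBits(a[i]) !=
          Double.doubleToRawLongBits(b[i])) {
          // Raw bits differ, but the elements may still be equal if both
          // are NaNs with different bit patterns; doubleToLongBits
          // canonicalizes NaNs, matching the existing equals semantics
          if (Double.doubleToLongBits(a[i]) !=
              Double.doubleToLongBits(b[i]))
              return false;
      }
  }
  return true;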

Even though these methods are likely to change, I will probably modify the current float/double methods. Better to come out with good performance now, in case for some reason the Unsafe vectorization does not make it in.

Thanks,
Paul.


