The value of floatToRawIntBits(0.0f/0.0f) is different on x86_64 and aarch64?

Tianhua huang huangtianhua223 at gmail.com
Tue Jul 16 04:04:58 UTC 2019


@Joe, thanks for your reply.
In fact I was running the Scala tests of apache/spark on an aarch64 platform, see
https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala#L732
and the test failed because the two results are equal on aarch64 (Int =
2143289344). The Scala test expects the results to be different, see
assert(floatToRawIntBits(0.0f/0.0f) != floatToRawIntBits(Float.NaN));
the values are the same in Java, but the test expects them to differ in Scala.
Since the behaviour differs between x86_64 and aarch64, this confused me a
lot. I opened an issue against Scala, https://github.com/scala/bug/issues/11632,
and a topic at
https://users.scala-lang.org/t/the-value-of-floattorawintbits-0-0f-0-0f-is-different-on-x86-64-and-aarch64-platforms/4845;
if you are interested, you are welcome to join the discussion. Thank you.
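
For reference, here is a minimal standalone Java sketch of the comparison the
test performs; the division goes through a local variable so that javac does
not fold it into a compile-time constant (the class and variable names are
only for illustration):

    import static java.lang.Float.floatToRawIntBits;

    public class NaNBitsCheck {
        public static void main(String[] args) {
            // Evaluate the division at run time rather than letting the
            // compiler fold it into a constant.
            float zero = 0.0f;
            int computed  = floatToRawIntBits(zero / zero);
            int canonical = floatToRawIntBits(Float.NaN); // typically 0x7fc00000

            System.out.println("zero/zero -> 0x" + Integer.toHexString(computed));
            System.out.println("Float.NaN -> 0x" + Integer.toHexString(canonical));
            System.out.println("raw bits equal? " + (computed == canonical));
        }
    }

On the aarch64 machine both raw patterns come out as 0x7fc00000 (decimal
2143289344), so they compare equal and the assertion fails; on x86_64 the
run-time division can produce a different pattern, which is why the assertion
passes there.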

On Tue, Jul 16, 2019 at 12:57 AM Joe Darcy <joe.darcy at oracle.com> wrote:

> Hello,
>
> Adding a bit more background below...
>
> On 7/15/2019 12:10 AM, Aleksey Shipilev wrote:
> > On 7/15/19 8:46 AM, Aleksey Shipilev wrote:
> >> On 7/15/19 6:05 AM, Tianhua huang wrote:
> >>> The value of floatToRawIntBits(0.0f/0.0f) is different on x86_64 and
> >>> aarch64? Does it depend on the platform? I think the behaviour should
> >>> be the same on different platforms, right?
> >> Why should it be? 0.0f/0.0f is NaN. There are multiple allowed
> representations of NaN in IEEE-754.
> >> And there is a difference with the "raw" conversion:
> >>
> >> "If the argument is NaN, the result is the integer representing the
> actual NaN value. Unlike the
> >> floatToIntBits method, floatToRawIntBits does not collapse all the bit
> patterns encoding a NaN to a
> >> single "canonical" NaN value."
> >> (https://docs.oracle.com/javase/8/docs/api/java/lang/Float.html#floatToRawIntBits-float-)
> >>
>
> As Aleksey notes, the IEEE 754 standard defines many possible bit
> strings to encode NaN values. Additionally, the default NaN bit pattern
> for a freshly created NaN is not specified by the standard and does
> indeed vary by platform. Moreover, the floating-point standard, at least
> through its 2008 revision, does not specify the bits of the NaN output
> when an operation has multiple NaN inputs.
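>
> To make that concrete, here is a minimal sketch; the payload 0x7fc12345
> is just one arbitrary quiet-NaN pattern among the many that IEEE 754
> allows, not a value any particular platform is guaranteed to produce:
>
>     import static java.lang.Float.*;
>
>     public class NaNPatterns {
>         public static void main(String[] args) {
>             // Any float whose exponent bits are all ones and whose significand
>             // is non-zero is a NaN; build one with a non-default payload.
>             float oddNaN = intBitsToFloat(0x7fc12345);
>
>             // The raw conversion reports whatever bits the value carries
>             // (on typical hardware a quiet-NaN payload survives the round trip).
>             System.out.println(Integer.toHexString(floatToRawIntBits(oddNaN))); // 7fc12345
>
>             // The non-raw conversion collapses every NaN to the canonical pattern.
>             System.out.println(Integer.toHexString(floatToIntBits(oddNaN)));    // 7fc00000
>         }
>     }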
>
> The intention of this design for NaN handling was to allow flexibility
> for "retrospective diagnostic" debugging features to be developed, which
> in practice did not happen very much.
>
> (And I won't even go into the platform-specific differences between
> quiet NaNs and signaling NaNs, differences not exposed by the Java
> platform.)
>
> When it comes to reproducing the raw *bits* of floating-point results,
> the Java platform is not necessarily reproducible because the underlying
> standard allows implementation variation. When it comes to reproducing
> the *values* of floating-point results, the Java platform is
> reproducible (subject to a complicated disclaimer about over/underflow
> and non-strict floating-point).
>
> For reproducibility, Float.floatToIntBits has an internal NaN check and
> returns a canonical NaN bit pattern. To see whether two floating-point
> values x and y are semantically equivalent,
>
>      Double.compare(x, y) == 0
>
> will do a sensible comparison.
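>
> For instance (a jshell-style fragment; the second NaN's payload below is
> an arbitrary illustrative value):
>
>     double x = Double.NaN;
>     double y = Double.longBitsToDouble(0x7ff8000000000123L); // NaN with a non-canonical payload
>
>     System.out.println(x == y);               // false: == is never true when a NaN is involved
>     System.out.println(Double.compare(x, y)); // 0: compare treats every NaN as equal to every NaN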
>
> I don't see any JDK bug here, and the platform provides the necessary
> primitives to do reproducible comparisons of floating-point values.
>
> HTH,
>
> -Joe
>
>

