RFR: 8302976: C2 intrinsification of Float.floatToFloat16 and Float.float16ToFloat yields different result than the interpreter
Joe Darcy
darcy at openjdk.org
Wed Feb 22 03:49:25 UTC 2023
On Wed, 22 Feb 2023 02:08:27 GMT, Sandhya Viswanathan <sviswanathan at openjdk.org> wrote:
> Change java/lang/Float.java and the corresponding shared runtime constant expression evaluation to generate a QNaN.
> Hardware floating-point instructions generate QNaNs, not SNaNs, across the double, float, and float16 data types; for a QNaN the most significant bit of the mantissa is set to 1.
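For reference, a minimal sketch of what "most significant bit of mantissa" means in the binary16 encoding; the masks below come from the standard IEEE 754 binary16 layout, and the helper name is illustrative only, not part of the patch:

    // binary16: exponent mask 0x7c00, significand mask 0x03ff;
    // the usual quiet-NaN convention sets the top significand bit (0x0200).
    static boolean isQuietBinary16NaN(short bits) {
        return (bits & 0x7c00) == 0x7c00   // exponent field all ones
            && (bits & 0x03ff) != 0        // non-zero significand => NaN
            && (bits & 0x0200) != 0;       // top significand bit set => quiet
    }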
I'd like to see a more informative description of the problem:
"float16 NaN values handled differently with and without intrinsification"
If that is the issue reported, it may not be a problem, as opposed to
"incorrect value returned under Float.float16ToFloat intrinsification", etc.
src/java.base/share/classes/java/lang/Float.java line 1101:
> 1099: return (short)(sign_bit
> 1100: | 0x7c00 // max exponent + 1
> 1101: | 0x0200 // QNaN
I don't understand what is being done here. From IEEE 754-2019:
"Besides issues such as byte order which affect all data
interchange, certain implementation options allowed by this standard must also be considered:
― for binary formats, how signaling NaNs are distinguished from quiet NaNs
― for decimal formats, whether binary or decimal encoding is used.
This standard does not define how these parameters are to be communicated."
The code in java.lang.Float in particular is meant to be usable on all host CPUs, so architecture-specific assumptions about QNaN vs. SNaN should be avoided.
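A payload-preserving sketch, assuming the intent is to carry the input float NaN's significand bits into the binary16 result rather than hard-code a quiet-NaN pattern; the method name and exact bit folding are illustrative, not taken from the patch:

    // Sketch for the NaN/infinity branch only: assumes the caller has
    // already checked that the float's exponent field is all ones.
    static short nanFloatToFloat16(float f) {
        int bits = Float.floatToRawIntBits(f);
        int signBit = (bits >>> 16) & 0x8000;          // sign into bit 15
        // Fold all 23 float significand bits into the 10-bit binary16
        // significand so a NaN input can never collapse to infinity.
        int significand = ((bits & 0x007f_e000) >> 13) // top 10 bits
                        | ((bits & 0x0000_1ff0) >> 4)  // next 9 bits
                        | (bits & 0x0000_000f);        // low 4 bits
        return (short)(signBit | 0x7c00 | significand);
    }

Folding the payload this way keeps whatever NaN the caller supplied visible in the result instead of baking in one architecture's quiet-NaN choice.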
-------------
PR: https://git.openjdk.org/jdk/pull/12704