OpenJDK or SE Java Floating Point Options?

sminervini.prism sminervini.prism at protonmail.com
Fri Apr 15 02:32:18 UTC 2022


To core-libs-dev at openjdk.java.net,

The OpenJDK Java platform, particularly the runtime, in its present state, has one key problem. Arithmetic performed by operators on variables or data of types float and double, as well as by method calls on java.lang.StrictMath, can acquire denormal or pronormal values. This is always quoted as being because of Java's adherence to the IEEE 754 standard,
which has a 2008 version, and a more recent but non-free revision released in 2019.

Programmers need range accurate integer and decimal mathematics in higher level computer languages such as Java, certainly in languages higher level than Assembly or C. While Java does have a work-around approach to these floating point problems, involving BigDecimal and BigInteger, and another approach that supplies a calculator class, the big-math library,

https://github.com/eobermuhlner/big-math ,

using and relying on these stop-gap, work-around solutions is a poor approach which can eventually become problematic. BigInteger and BigDecimal occupy more memory than needed; they are slower; they are messier in source code to program, debug, understand and edit; they exclude access to operator syntax; and they are never reified with official Java library classes or interfaces.
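
To make the contrast concrete, here is a minimal sketch, with an illustrative class name, of the same multiplication done with the double operator and with the BigDecimal work-around:

//----------------------------------------------------------
//The Java Language. The BigDecimal work-around, for contrast.

import java.math.BigDecimal;

public class Workaround
{
    public static void main(String ... args)
    {
        //Operator arithmetic on double picks up trailing digits.
        double c = 0.1D * 0.1D;
        System.out.println(c); //prints 0.010000000000000002

        //The work-around is exact here, but verbose, slower,
        //and shut out of operator syntax.
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.1");
        System.out.println(a.multiply(b)); //prints 0.01
    }
}
//----------------------------------------------------------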

The IEEE 754 standard document, in its recapitulation of floating point arithmetic, mentions traps, status exceptions and flags, phenomena that Java, as of OpenJDK 17, has not adopted. The prevailing view has been that because 754 does not stipulate exactly what happens with underflow or overflow, nor stipulate that it is a base 10 system on top of a binary one, underflow and overflow therefore aren't, and can't be, bugs or software logic errors, by "definition of IEEE 754".
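
As a minimal illustration of that point, assuming nothing beyond the standard library (the class name Silent is illustrative), overflow and underflow in Java today pass without any trap, exception or flag:

//----------------------------------------------------------
//The Java Language. Overflow and underflow pass silently.

public class Silent
{
    public static void main(String ... args)
    {
        double overflow = Double.MAX_VALUE * 2.0;
        double underflow = Double.MIN_NORMAL / 2.0;
        System.out.println(overflow); //Infinity, no trap or flag raised
        System.out.println(underflow); //1.1125369292536007E-308, a denormal
        System.out.println(Double.MIN_VALUE / 2.0); //0.0, silent underflow to zero
    }
}
//----------------------------------------------------------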

It is this view which has led a series of related bug reports, submitted through the Java online bug database system, to simply be denied.

The original justification for floating point denormals and pronormals is a view that compromises digit place accuracy for speed. This is diametrically opposed to a more important view, which must have all digit places accurate for full range accuracy, because that is simply required by some technical task the language is designed and supported to perform. In modern times, however, this compromise is no longer needed, since accuracy can dovetail with speed given modern hardware evolution, in the shape of SSE and similar register sets.

Since 754 is not stated to be either a base 10 or a base 2 system, from base 10's point of view it could be either.

Binary numbers and their arithmetic are for the sake of machines. Decimal mathematics is for the sake of human beings.
Given this, IEEE 754 is incomplete, leading to inconsistency and finally to incoherency. This means that 754 has a "blind spot" which does lead to logic errors.

This state of affairs has made floating point correction at the original operator and StrictMath method call level, within Java itself, both pertinent and critical.

Notwithstanding calculator classes and comparisons, consider the following two code fragments:

//----------------------------------------------------------
//The C++ Language. Arithmetic only, no comparisons.

#include <iostream>

using namespace std;

int main()
{
    cout << "Program has started..." << endl;
    double a = 0.1; //no D suffix in C++; 0.1 is already a double literal
    double b = 0.1;
    double c = a*b;
    cout << endl << c << endl << endl;
    float d = 0.1F;
    float e = 0.1F;
    float f = d*e;
    cout << f << endl << endl;
    cout << "Program has Finished.";
    return 0;
}

/*
Program has started...

0.01

0.01

Program has Finished.*/

//----------------------------------------------------------
//The Java Language. Arithmetic only, no comparisons.

import static java.lang.System.*;
public class Start
{
    public static void main(String ... args)
    {
        out.println("Program has started...");
        double a = 0.1D;
        double b = 0.1D;
        double c = a*b;
        out.println();
        out.println(c);
        float d = 0.1F;
        float e = 0.1F;
        float f = d*e;
        out.println();
        out.println(f);
        out.println();
        out.println("Program has Finished.");
    }
}
/*
Program has started...

0.010000000000000002

0.010000001

Program has Finished.*/
//----------------------------------------------------------

The first fragment runs on 64 bit Windows 10 and compiles with the Twilight Dragon Media C++ compiler (available for Windows from https://jmeubank.github.io/tdm-gcc/download/ ). Similar GNU C++ compilers are available for the Mac, Unix, Linux and others, for free, with their source code available too.

The first code fragment above is an example of range accurate floating point arithmetic which can be accomplished quickly, and which is available from Free, Open Source Software. Something like this could be accomplished even more easily in Java by refactoring that approach into the OpenJDK.
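
One presentational note when comparing the two outputs: C++ iostreams print at most six significant digits by default. A minimal Java sketch, with an illustrative class name, that renders the same products at that precision:

//----------------------------------------------------------
//The Java Language. Rendering at the C++ iostream default of
//six significant digits, for comparison of the two outputs.

public class SixDigits
{
    public static void main(String ... args)
    {
        double c = 0.1D * 0.1D;
        float f = 0.1F * 0.1F;
        System.out.println(String.format("%.6g", c)); //0.0100000
        System.out.println(String.format("%.6g", f)); //0.0100000
    }
}
//----------------------------------------------------------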

While Java has the comparison operators ==, !=, >, <, >=, <= working immediately on float and double (which C++ hasn't), C++ does have range accurate, base 10 decimal floating point arithmetic on its float, double and long double types. This, combined with the C++ code fragment included above, provides a case model both for what is possible and for where extant code resources may be found as a starting point.
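
A minimal sketch of those comparison operators working directly on double, and of where the extra trailing digits surface (class name illustrative):

//----------------------------------------------------------
//The Java Language. Comparators working directly on double.

public class Compare
{
    public static void main(String ... args)
    {
        double c = 0.1D * 0.1D;
        System.out.println(c == 0.01D); //false under current behaviour
        System.out.println(c > 0.01D); //true: the trailing digits tip it over
    }
}
//----------------------------------------------------------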

C++ taps into SSE, or equivalent additional hardware registers, with updated arithmetic implementation code, to deal with decimal-to-binary carries that straddle past the range end of a floating point type, so that the final base 10 digit, calculated to and fro in relation to binary, is considered and placed properly.

The concern has been that repairing this will generate some kind of incompatibility for Java. Ultimately this concern doesn't make sense, since floating point denormal or pronormal values are only ever inaccurate, and operate at speeds similar to SSE oriented correction anyway. They cannot be specifically needed for anything, and correcting them in place will only lead to advantage, certainly at the same speeds.

Correction could be implemented by altering the default operational behaviour of the runtime. However, if compatibility is of greater concern, something like a main method annotation, with similar annotations for Threads and Runnables, could be introduced. The compiler would detect it and compile in a bit for that code space, switching therein to the required floating point arithmetic/StrictMath behaviour. This might even allow both modes to interact, if done carefully enough.
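
A sketch of what such an opt-in could look like follows. The annotation name, retention and semantics here are illustrative assumptions only, not an existing or proposed API:

//----------------------------------------------------------
//The Java Language. A hypothetical opt-in annotation sketch.

import java.lang.annotation.*;

@Retention(RetentionPolicy.CLASS)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface RangeAccurate {} //hypothetical name and semantics

public class AnnotatedStart
{
    @RangeAccurate //the compiler would detect this and compile in a
                   //bit for this code space, switching its behaviour
    public static void main(String ... args)
    {
        //Under the proposal, operator arithmetic here would produce
        //range accurate results; today it does not.
        System.out.println(0.1D * 0.1D);
    }
}
//----------------------------------------------------------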

Given all these things, is it possible for the Java Community Process to consider these matters in these terms, and to implement in-place removal of floating point denormal and pronormal values (utilising SSE or equivalent hardware registers and refactoring available, faster code), together with an improved StrictMath in some manner?

So that all Java programmers can develop their software in a more straightforward and efficient manner, along any of these paths?

Many thanks,

Sergio Minervini.

Sent with ProtonMail ( https://protonmail.com/ ) secure email.

