Questions about Java Floating Point and StrictMath alteration in JRE.

A Z poweruserm at live.com.au
Thu Mar 17 02:18:24 UTC 2022


To core-libs-dev,

Raffaello has said:

'Let's get rid of binary floating-point arithmetic in Java, after 27
years of honorable service!
Let's adopt decimal floating-point arithmetic, where 1 / 3 + 1 / 3 is
still different from 2 / 3, but who cares?'

I am not suggesting that we get rid of the binary floating point
implementation behind decimal expression, for arithmetic,
comparisons and functions. I am suggesting that we
should find an immediate way to correct and remove all
denormal and pronormal cases, so that float and double
operations are base-10 digit accurate for all operation cases.
All of this is possible by means of SSE hardware and similar.

Java floating point has been a problem, requiring overly elaborate
workarounds, for as long as denormal and pronormal
values have been generated by it.  It has been stated formally
enough elsewhere that floating point denormal and pronormal
values were originally designed as a trade-off between range
and accuracy.  My point in all this is that, more often than not,
people still need range and accuracy to go together, whether or
not accuracy degrades only outside the range.  Things
like language specifications and standards have done nothing
to assist Java in these areas of need.

There are cases, since there is diametrically opposed thinking involved,
where Java programmers need continuous range accuracy
with the use of float and double, yet these secondary possibilities
have not been accommodated in as efficient a manner as has become
possible, and necessary.  2D graphics and 3D world operations,
where pixel accuracy must always be maintained, are
a real example.  The broader context, however, is that there could
be dual-mode floating point, with both the present mode and
an enhanced mode made accessible.

I am aware that recurring decimals, like 1/3 and 2/3, can't be perfectly
expressed in a truncated-range floating point number; however,
that is a property of rational numbers and of the
necessarily limited number of digits, which is to do with decimal numbers
and not with floating point.  This is not any real kind of problem,
since truncation of recurring rational numbers is inevitable
anyway.
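
For illustration, even pure base-10 arithmetic has to cut a recurring
decimal short somewhere.  Here is a minimal Java sketch using BigDecimal
(the class name and the choice of 20 digits are only illustrative):

import java.math.BigDecimal;
import java.math.MathContext;

public class RecurringDecimalTruncation
{
    public static void main(String[] args)
    {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal(3);

        try
        {
            // 1/3 has a non-terminating decimal expansion, so no exact
            // decimal quotient exists and BigDecimal throws.
            one.divide(three);
        }
        catch (ArithmeticException e)
        {
            System.out.println("No exact decimal result: " + e.getMessage());
        }

        // With a chosen precision the quotient is rounded, just as on paper.
        System.out.println(one.divide(three, new MathContext(20)));
    }
}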

C++, alongside many scientific pocket calculators, does
have the phenomenon of final decimal place rounding,
which is apparently a property of calculating for humans,
and is not related to floating point 'errors'.  Notwithstanding
how C++ comparisons work too, which Java has
for all intents and purposes corrected, the following is a clear example,
contradicting Raffaello:

#include <iostream>

using namespace std;

int main()
{
    cout << "Program has started..." << endl;
    long double a = 1.0L;
    long double b = 3.0L;
    long double c = a / b;
    // cout prints 6 significant digits by default, so the stored binary
    // value of 1/3 is rounded to "0.333333" for display.
    cout << endl << c << endl << endl;
    long double d = c + c;
    cout << d << endl << endl;       // prints "0.666667"
    long double e = 2.0L / 3.0L;     // C++ uses the L suffix here, not Java's D
    cout << e << endl << endl;       // also prints "0.666667"
    cout << "Program has Finished.";
    return 0;
}

When this fragment is run, Raffaello's example
turns out to be inaccurate, as it will for
the rest of C++ range arithmetic, arithmetic which Java
could adopt.  Since final decimal place rounding can be
performed by almost anything, take version 4 and later of Qalculate!,
for example, https://qalculate.github.io/;
with that done, fairly well any coherent language
can therefore uphold
1.0F/3.0F + 1.0F/3.0F == 2.0F/3.0F

1.0D/3.0D + 1.0D/3.0D == 2.0D/3.0D

alongside any other such salient example.
Raffaello's example doesn't ultimately make sense
for Java SE, OpenJDK, or any other possible
computer language running on stock hardware.  It is possible, and already
present elsewhere, that computer languages,
Java included, can ultimately uphold range-accurate
decimal mathematics by means of floating point,
which may be kept, altered, or even
mutually augmented, in the Java language.
That includes the presence and operation
of binary storage, and binary algorithms for manipulation
behind that, using 64-bit registers and additional SSE-style
carry registers too.
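
A minimal Java sketch of the kind of final decimal place rounding
described above, applied before a comparison; the helper name and the
choice of 15 significant digits are only illustrative.  The classic
0.1 + 0.2 versus 0.3 case is used because the raw binary doubles there
really do differ:

import java.math.BigDecimal;
import java.math.MathContext;

public class DecimalRoundedCompare
{
    // Illustrative helper: round a double to 15 significant decimal digits,
    // the same kind of final decimal place rounding that cout and pocket
    // calculators perform before display.
    static BigDecimal round15(double x)
    {
        return new BigDecimal(x).round(new MathContext(15));
    }

    public static void main(String[] args)
    {
        double lhs = 0.1D + 0.2D;
        double rhs = 0.3D;

        System.out.println(lhs == rhs);                                 // false on raw doubles
        System.out.println(round15(lhs).compareTo(round15(rhs)) == 0);  // true after decimal rounding
    }
}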

To Andrew Dinn, I may reply with the following:

I am aware of IEEE 754, the Java Language Specification
and the Java Virtual Machine Specification. My understanding
is that Java doesn't adhere to these standards enough.  At any
rate, the end standard of base 10 mathematics is, rationally,
the greater issue.

You can have range binary mathematics, or range decimal
mathematics, and you can convert between them either way,
but you can't really conflate both of them at the same
time, the way that Java presently does.  What you end up
with is something which is only useful as a push-and-pull
format, requiring yet another implementation on top of
it, like BigDecimal, BigInteger, and a Calculator class
typed for them, such as the third-party big-math library.

This kind of arrangement is slow, wastes memory, is
more difficult and messy than it needs to be to produce, debug,
and edit, and even loses other properties, such as operator syntax.
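
As a small sketch of that verbosity (the class and variable names are
purely illustrative), here is the same expression written once with
primitive doubles and once through BigDecimal:

import java.math.BigDecimal;
import java.math.MathContext;

public class OperatorSyntaxContrast
{
    public static void main(String[] args)
    {
        // Primitive doubles: compact operator syntax, binary representation.
        double x = (0.1D + 0.2D) * 3.0D / 7.0D;

        // The same expression via BigDecimal: base-10 accurate, but every
        // operator becomes a method call plus an explicit rounding context.
        MathContext mc = new MathContext(34);
        BigDecimal y = new BigDecimal("0.1")
                .add(new BigDecimal("0.2"))
                .multiply(new BigDecimal("3.0"))
                .divide(new BigDecimal("7.0"), mc);

        System.out.println(x);
        System.out.println(y);
    }
}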

Is there anyone on core-libs-dev who is prepared to see these
things enter into Java, or to make dual-mode operation of Java
floating point arithmetic and java.lang.StrictMath manifest?
Runtime switches and manifest file line entries are both viable,
straightforward options.

?

Sergio Minervini.





