Floating Point Repair?
sminervini.prism
sminervini.prism at protonmail.com
Wed Apr 27 10:47:11 UTC 2022
To the OpenJDK project and the Java Community Process,
The way we understand float and double arithmetic and functions to work is that a decimal value is submitted, perhaps as a typed literal, converted to and stored in binary, operated on in binary, and the answer is converted back to decimal. This appears to be a rational approach, given the way that all of Java's constructors, methods, fields, classes, interfaces and libraries are oriented, and certainly given the basic precept that binary is for computers and decimal is for humans. Isn't this how all of IT should work, anyway?
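As a small illustration of that pipeline (the class name below is ours, purely for the example), the decimal literal 0.1 is stored as the nearest binary64 value, and it is only the conversion back to decimal that decides whether the difference is hidden or exposed:

import java.math.BigDecimal;

public class DecimalBinaryRoundTrip {
    public static void main(String[] args) {
        double d = 0.1; // stored as the nearest binary64 value to 0.1

        // new BigDecimal(double) shows the exact binary value that was stored:
        // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(new BigDecimal(d));

        // Double.toString picks the shortest decimal string that converts back
        // to the same binary value, so it prints "0.1" again.
        System.out.println(Double.toString(d));
    }
}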
IEEE 754 doesn't say anything specific about the base-10 digit degradation that can appear at the right-hand side of float and double values in numerous known examples, whether caused by representation, arithmetic, method calls, rounding, or anything else. On the understanding set out above, however, all such examples become logic errors that need to be repaired, either by default or by mutual agreement, and certainly in an efficient way, simply because denary accuracy is required from the compiled Java program and from anything further built on it.
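A minimal sketch of the kind of right-hand-side degradation meant here (again, the class name is only for the example): each operand below is a short, exact decimal, but the rounded binary result converts back to a longer decimal string.

public class DigitDegradation {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2);   // 0.30000000000000004
        System.out.println(1.03 - 0.42); // 0.6100000000000001
    }
}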
The present workarounds, namely BigInteger, BigDecimal, and the public, non-Oracle, non-OpenJDK big-math library, are too slow and use too much memory for key purposes, and they don't allow the use of arithmetic operators. These are things that we need, as do others further afield. While Valhalla may address some of these problems, however far away it is, it still won't be as efficient, useful, or necessary as float and double corrected across their decimal value range.
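For comparison, a minimal sketch of the BigDecimal workaround referred to above: it keeps the decimal digits exact, but every operation becomes a method call on a heap object rather than an operator on a primitive, which is where the speed, memory, and usability cost comes from.

import java.math.BigDecimal;

public class BigDecimalWorkaround {
    public static void main(String[] args) {
        // String constructors keep the decimal values exact.
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");

        System.out.println(a.add(b));  // 0.3, but no '+' operator available
        System.out.println(0.1 + 0.2); // 0.30000000000000004 with primitives
    }
}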
Our understanding is that the digit degradation can be repaired finitely, to work within the value range of float and double. Generally, all method calls, and therefore all arithmetic operations, boil down to two binary numbers at a time with one operator, where the range of those two binary numbers is limited by the upper and lower range of the decimal data they must have come from in the first place. Couldn't just a few extra registers always capture that small extra amount of information, if all the relevant maths is always innately two numbers and one operator, and the largest decimal digit is 9, leading to the following telling result in binary,
9₁₀ = 1001₂ ?
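As a hedged sketch only, one user-level reading of repairing within the value range is to round each binary result back to the 15 significant decimal digits that a double is guaranteed to carry; the repair method below is purely illustrative and is not an existing or proposed JDK API.

import java.math.BigDecimal;
import java.math.MathContext;

public class RangeRepairSketch {
    // Round a double result back to 15 significant decimal digits,
    // the precision a double can always guarantee. Illustrative only.
    static double repair(double x) {
        return new BigDecimal(x, new MathContext(15)).doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(0.1 + 0.2);         // 0.30000000000000004
        System.out.println(repair(0.1 + 0.2)); // 0.3
    }
}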
What seems to be required, on top of the two value range limits each for float (32-bit) and double (64-bit), between their minimum and maximum, and hopefully using an optimised floating point formula, maybe an adjustment of the present one, is two more range limits of consideration past the right-hand side, past the smallest decimal digit. Extra SSE registers, or their descendants, exist alongside the 32-bit and 64-bit registers in the floating point units of all relevant Java-compatible CPU hardware. Can't those extra register bits, even if they sit outside the 32 and 64 bits, catch any extra bits and tidy up the binary for the denary? Or is what may happen between SSE and RAM, or in RAM on its own, a prohibitive consideration?
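Whether or not extra register bits could be exposed more generally, one narrow place where the JDK already lets extra internal precision through is Math.fma (Java 9 and later), which computes a*b + c with a single rounding, so bits that a plain a*b would discard survive into the result. A small sketch:

public class ExtraBitsSketch {
    public static void main(String[] args) {
        // 0.1 * 10.0 rounds the product to exactly 1.0, so the tiny
        // representation error of 0.1 is discarded before the subtraction.
        System.out.println(0.1 * 10.0 - 1.0);          // 0.0

        // Math.fma keeps the exact product internally and rounds only once,
        // so those extra bits survive into the final result.
        System.out.println(Math.fma(0.1, 10.0, -1.0)); // 5.551115123125783E-17
    }
}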
For decimal accuracy with float and double, and the original view of them both: if changing the default is not an option, then surely a dual-mode, compatibility approach would address all concerns. Even a separate patch need not be out of the reckoning.
Is all of this reasoning, fact, and context enough to convince the JCP to genuinely do something to repair Java floating point errors, errors from the total-range denary point of view?
Sent with [ProtonMail](https://protonmail.com/) secure email.