RFR: JDK-8277175: Add a parallel multiply method to BigInteger [v3]

kabutz duke at openjdk.java.net
Wed Nov 17 20:49:43 UTC 2021


On Wed, 17 Nov 2021 19:48:25 GMT, kabutz <duke at openjdk.java.net> wrote:

>> BigInteger currently uses three different algorithms for multiply: the simple quadratic algorithm, the slightly better Karatsuba once we exceed a bit-count threshold, and Toom-Cook 3 once we go into several thousands of bits. Since Toom-Cook 3 is a recursive algorithm, it is trivial to parallelize. In order to be consistent with other classes such as Arrays and Collection, I have added a parallelMultiply() method. Internally we have added a parameter to the private multiply method to indicate whether the calculation should be done in parallel.
>> 
>> The performance improvements are as one would expect. Fibonacci of 100 million (using a single-threaded Dijkstra's sum-of-squares version) completes in 9.2 seconds with parallelMultiply() vs 25.3 seconds with the sequential multiply() method. This is on my 1-8-2 laptop. The final multiplications involve very large numbers, which then benefit from the parallelization of Toom-Cook 3. Fibonacci 100 million is a 347084 bit number.
>> 
>> We have also parallelized the private square() method. Internally, square() defaults to sequential execution.
>> 
>> Some benchmark results, run on my 1-6-2 server:
>> 
>> 
>> Benchmark                                          (n)  Mode  Cnt      Score      Error  Units
>> BigIntegerParallelMultiply.multiply            1000000    ss    4     51.707 ±   11.194  ms/op
>> BigIntegerParallelMultiply.multiply           10000000    ss    4    988.302 ±  235.977  ms/op
>> BigIntegerParallelMultiply.multiply          100000000    ss    4  24662.063 ± 1123.329  ms/op
>> BigIntegerParallelMultiply.parallelMultiply    1000000    ss    4     49.337 ±   26.611  ms/op
>> BigIntegerParallelMultiply.parallelMultiply   10000000    ss    4    527.560 ±  268.903  ms/op
>> BigIntegerParallelMultiply.parallelMultiply  100000000    ss    4   9076.551 ± 1899.444  ms/op
>> 
>> 
>> We can see that for larger calculations (fib 100m), the execution is 2.7x faster in parallel. For medium size (fib 10m) it is 1.873x faster, and for small (fib 1m) it is roughly the same. Considering that the Fibonacci algorithm we used was itself sequential, and that the last three multiplications dominate, 2.7x faster should probably be considered quite good on a 1-6-2 machine.
>
> kabutz has updated the pull request incrementally with one additional commit since the last revision:
> 
>   Removed JVM flags from benchmark

One concern I had was what would happen when the common pool parallelism is set to 0. In the parallelSort mechanism, sorting is only done in parallel if the parallelism is at least 2; thus a parallel sort on a dual-core machine (without hyper-threading) runs sequentially. However, parallelMultiply() seems to work fine with different common pool sizes. We always have one thread more than the common pool parallelism doing work, since the thread that calls parallelMultiply() also performs calculations (a rough sketch of that fork/join shape follows the results table below):

Showing only the cases for Fibonacci(100m):


Method            (n)           common_parallelism  ms/op
multiply          100_000_000   0                   24619.004         
parallelMultiply  100_000_000   0                   24977.280        
multiply          100_000_000   1                   24526.498
parallelMultiply  100_000_000   1                   18169.969
multiply          100_000_000   2                   24528.111         
parallelMultiply  100_000_000   2                   11073.637         
multiply          100_000_000   3                   24861.324
parallelMultiply  100_000_000   3                    9472.986
multiply          100_000_000   4                   25036.295         
parallelMultiply  100_000_000   4                    9151.940         
multiply          100_000_000   5                   25129.139
parallelMultiply  100_000_000   5                    8964.403
multiply          100_000_000  11                   24717.125         
parallelMultiply  100_000_000  11                    9024.319
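
To make concrete why one extra thread participates even at low common-pool parallelism, here is a rough sketch of the fork/join shape, not the actual patch: the caller forks one sub-product onto the common pool and computes the other itself. For simplicity it splits one operand in half rather than recursing along the real Toom-Cook 3 decomposition, the threshold is made up, and an Arrays.parallelSort-style parallelism guard is shown only as a comment, since the numbers above suggest it is not strictly needed here.


    import java.math.BigInteger;
    import java.util.concurrent.RecursiveTask;

    final class ParallelMultiplySketch {

        // Made-up cut-over; the real code only parallelizes in the Toom-Cook 3 range.
        private static final int THRESHOLD_BITS = 1 << 15;

        static BigInteger parallelMultiply(BigInteger a, BigInteger b) {
            // An Arrays.parallelSort-style guard would look like:
            //   if (ForkJoinPool.getCommonPoolParallelism() == 1) return a.multiply(b);
            // but the measurements above suggest it is not necessary here.
            return multiply(a, b);
        }

        private static BigInteger multiply(BigInteger a, BigInteger b) {
            if (a.bitLength() < THRESHOLD_BITS || b.bitLength() < THRESHOLD_BITS) {
                return a.multiply(b);          // small operands: plain sequential multiply
            }
            // Split a = hi * 2^k + lo, so a*b = (hi*b)<<k + lo*b.  The real code
            // instead recurses along the Toom-Cook 3 split.
            int k = a.bitLength() / 2;
            BigInteger hi = a.shiftRight(k);
            BigInteger lo = a.subtract(hi.shiftLeft(k));

            // Fork one sub-product onto the common ForkJoinPool ...
            RecursiveTask<BigInteger> hiTask = new RecursiveTask<>() {
                @Override protected BigInteger compute() { return multiply(hi, b); }
            };
            hiTask.fork();
            // ... while the calling thread computes the other half itself, which is
            // why one more thread than the common-pool parallelism ends up busy.
            BigInteger loPart = multiply(lo, b);
            return hiTask.join().shiftLeft(k).add(loPart);
        }
    }
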

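For context on how the benchmark exercises these huge multiplications, a Dijkstra sum-of-squares ("fast doubling") Fibonacci along the lines mentioned above might look like the sketch below. This is just the shape, not the exact benchmark code; parallelMultiply() is the method proposed in this PR, and swapping it for multiply() gives the sequential baseline.


    import java.math.BigInteger;

    final class FibonacciSketch {
        /** Returns [fib(n), fib(n+1)] using the doubling identities. */
        static BigInteger[] fib(long n) {
            if (n == 0) {
                return new BigInteger[] { BigInteger.ZERO, BigInteger.ONE };
            }
            BigInteger[] half = fib(n / 2);
            BigInteger a = half[0];        // fib(k)   where k = n/2
            BigInteger b = half[1];        // fib(k+1)
            // fib(2k)   = fib(k) * (2*fib(k+1) - fib(k))
            // fib(2k+1) = fib(k)^2 + fib(k+1)^2
            BigInteger c = a.parallelMultiply(b.shiftLeft(1).subtract(a));
            BigInteger d = a.parallelMultiply(a).add(b.parallelMultiply(b));
            return (n % 2 == 0)
                    ? new BigInteger[] { c, d }
                    : new BigInteger[] { d, c.add(d) };
        }

        public static void main(String[] args) {
            System.out.println(fib(1_000_000)[0].bitLength() + " bits");
        }
    }

The algorithm itself is single-threaded; only the final few multiplications of enormous operands fan out onto the common pool, which matches the observation that those last multiplications dominate the run time.
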
-------------

PR: https://git.openjdk.java.net/jdk/pull/6409

