RFR: JDK-8277175: Add a parallel multiply method to BigInteger [v7]
kabutz
duke at openjdk.java.net
Fri Jan 14 09:18:28 UTC 2022
On Thu, 16 Dec 2021 21:54:43 GMT, Paul Sandoz <psandoz at openjdk.org> wrote:
>>> > embarrassingly parallelizable
>>>
>>> Having looked at [embarrassingly parallel](https://en.wikipedia.org/wiki/Embarrassingly_parallel), I'm not certain that this particular problem qualifies. The algorithm is easy to parallelize, but in the end we are still working with some rather large numbers, so memory bandwidth will be the dominant factor. I'd expect to see a linear speedup if it were "perfectly parallel", but this does not come close to that.
>>
>> I ran fibonacci(100_000_000) with multiply() and parallelMultiply(). For multiply() we had:
>>
>>
>> real 0m25.627s
>> user 0m26.767s
>> sys 0m1.197s
>>
>>
>> and for parallelMultiply() we had
>>
>>
>> real 0m10.030s
>> user 1m2.205s
>> sys 0m2.489s
>>
>>
>> Thus we are 2.5 times faster on a 1-6-2 machine (1 socket, 6 cores, 2 threads per core), but use more than 2x the user time. If it were perfectly parallel, shouldn't we expect parallelMultiply() to show a user time close to 26s and a real time of roughly 4.3s?
>
>> > embarrassingly parallelizable
>>
>> Having looked at [embarrassingly parallel](https://en.wikipedia.org/wiki/Embarrassingly_parallel), I'm not certain that this particular problem qualifies. The algorithm is easy to parallelize, but in the end we are still working with some rather large numbers, so memory bandwidth will be the dominant factor. I'd expect to see a linear speedup if it were "perfectly parallel", but this does not come close to that.
>
> Ok, fair point, to avoid possible confusion I have removed "embarrassingly". I don't think we need to refer to other algorithms.
Hi @PaulSandoz, is there anything else that we need to do? Or is this in the hopper for Java 19 already?
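For readers who want to reproduce the fibonacci(100_000_000) comparison above, a fast-doubling Fibonacci in the following style is one way to do it. This is a minimal sketch, not the benchmark actually used in the PR: the class and method names are my own, and the multiply step is passed in as a parameter so the same code runs with the existing multiply() or, on JDK 19+, with BigInteger::parallelMultiply.

```java
import java.math.BigInteger;
import java.util.function.BinaryOperator;

public class FibBench {
    /**
     * Fast-doubling Fibonacci. Maintains the pair (F(k), F(k+1)) and walks
     * the bits of n from most to least significant, using the identities
     *   F(2k)   = F(k) * (2*F(k+1) - F(k))
     *   F(2k+1) = F(k)^2 + F(k+1)^2
     * The multiply step is pluggable so callers can compare
     * BigInteger::multiply with BigInteger::parallelMultiply (JDK 19+).
     */
    static BigInteger fib(long n, BinaryOperator<BigInteger> mul) {
        BigInteger a = BigInteger.ZERO; // F(k)
        BigInteger b = BigInteger.ONE;  // F(k+1)
        for (int i = 63 - Long.numberOfLeadingZeros(n); i >= 0; i--) {
            BigInteger d = mul.apply(a, b.shiftLeft(1).subtract(a)); // F(2k)
            BigInteger e = mul.apply(a, a).add(mul.apply(b, b));     // F(2k+1)
            a = d;
            b = e;
            if (((n >>> i) & 1) != 0) { // odd index: shift the pair up by one
                BigInteger c = a.add(b);
                a = b;
                b = c;
            }
        }
        return a;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        // Swap in BigInteger::parallelMultiply on JDK 19+ to compare timings.
        BigInteger f = fib(1_000_000, BigInteger::multiply);
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("F(1_000_000) has " + f.bitLength() + " bits, computed in " + ms + " ms");
    }
}
```

Only the final few doubling steps operate on numbers large enough for the parallel Toom-Cook threshold to kick in, which is consistent with the sub-linear speedup discussed above.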
-------------
PR: https://git.openjdk.java.net/jdk/pull/6409
More information about the core-libs-dev mailing list