Interesting Benchmarks
Eric Kolotyluk
eric at kolotyluk.net
Tue Nov 9 21:20:31 UTC 2021
Okay, after correcting some code mistakes, my latest benchmark results are
(the ratio column is each run's throughput relative to its platform/virtual
counterpart):
Benchmark                                     tested  throughput    ratio
PrimeThreads.platformPrimesTo_1000               500       4.098    0.274
PrimeThreads.platformPrimesTo_10_000           5,000       5.068    0.054
PrimeThreads.platformPrimesTo_10_000_000   5,000,000       4.791    0.032
PrimeThreads.virtualPrimesTo_1000                500      14.961    3.651
PrimeThreads.virtualPrimesTo_10_000            5,000      93.500   18.449
PrimeThreads.virtualPrimesTo_10_000_000    5,000,000     151.248   31.569
This is based on
public static void futurePrimes22(long limit, ThreadFactory threadFactory) {
    try (var executorService = Executors.newThreadPerTaskExecutor(threadFactory)) {
        // Submit one task per odd candidate; each task yields the candidate or null.
        var tasks = LongStream.iterate(3, x -> x < limit, x -> x + 2)
            .mapToObj(candidate -> executorService.submit(() ->
                isPrime(candidate, 10, 30) ? candidate : null))
            .collect(Collectors.toList());
        // Join the futures and keep the primes. Note the terminal toList():
        // without a terminal operation the stream is never evaluated and
        // Future.get() is never called.
        var result = tasks.stream().filter(task -> {
            try {
                return task.get() != null;
            } catch (InterruptedException | ExecutionException e) {
                e.printStackTrace();
                return false;
            }
        }).toList();
    }
}
and
static boolean isPrime(long candidate, long minimumLag, long maximumLag) {
    // Sleep roughly minimum + sqrt(maximum - minimum) milliseconds to simulate
    // the latency of a network call.
    BinaryOperator<Long> lag = (minimum, maximum) -> {
        if (minimum <= 0 || maximum <= 0) return 0L;
        var approximateLag = minimum + (long) Math.nextUp(Math.sqrt(maximum - minimum));
        try {
            Thread.sleep(approximateLag);
            return approximateLag;
        } catch (InterruptedException e) {
            e.printStackTrace();
            return 0L;
        }
    };
    lag.apply(minimumLag, maximumLag); // Simulate network request overhead
    if (candidate == 2) return true;
    if ((candidate & 1) == 0) return false; // filter out even numbers
    var limit = (long) Math.nextUp(Math.sqrt(candidate));
    for (long divisor = 3; divisor <= limit; divisor += 2) {
        // Thread.onSpinWait(); // If you think this will help, it likely won't
        if (candidate % divisor == 0) return false;
    }
    lag.apply(minimumLag, maximumLag); // Simulate network response overhead
    return true;
}
where I vary limit and threadFactory.
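For reference, the two thread factories are obtained roughly like this (a
minimal sketch of the setup, not the exact JMH wiring; the limit value shown
is just an example):

ThreadFactory platformFactory = Thread.ofPlatform().factory();
ThreadFactory virtualFactory  = Thread.ofVirtual().factory();

futurePrimes22(10_000, platformFactory); // platform-thread run
futurePrimes22(10_000, virtualFactory);  // virtual-thread run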
So, I am trying to understand Ignaz's observation that, because I am using
newThreadPerTaskExecutor(), I would not actually be launching 5,000,000
threads at once; I want to be able to explain this to others. Given that the
first thing isPrime() does is call Thread.sleep() for 10 to 30 ms, wouldn't
that give newThreadPerTaskExecutor() a chance to ramp up a lot of threads? Is
there something that limits the number of threads that will be started? Is
10 to 30 ms too short to start that many threads?
Sorry if I seem a little naive, but I want to understand.
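One way I could check this empirically (just a sketch; the counting() helper
below is hypothetical and not part of the benchmark) is to wrap the
ThreadFactory so it records the peak number of its threads alive at the same
time:

// Hypothetical helper: wraps a ThreadFactory and tracks how many of its
// threads are alive at once (uses java.util.concurrent.atomic.AtomicInteger).
static ThreadFactory counting(ThreadFactory delegate, AtomicInteger live, AtomicInteger peak) {
    return runnable -> delegate.newThread(() -> {
        peak.accumulateAndGet(live.incrementAndGet(), Math::max);
        try {
            runnable.run();
        } finally {
            live.decrementAndGet();
        }
    });
}

// Usage sketch:
// var live = new AtomicInteger();
// var peak = new AtomicInteger();
// futurePrimes22(10_000, counting(Thread.ofVirtual().factory(), live, peak));
// System.out.println("peak live threads: " + peak.get());

If the peak comes out close to the number of candidates, the executor really
is ramping up that many threads at once; if it is much lower, earlier tasks
are finishing before the last ones are submitted, as Ignaz suggested.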
Cheers, Eric
On Mon, Nov 8, 2021 at 8:19 PM Eric Kolotyluk <eric at kolotyluk.net> wrote:
> Thanks for the insights Ignaz...
>
> Yes, I am using Executors.newThreadPerTaskExecutor(threadFactory), but I
> am new to this API. At this point, I don't really care if I did have
> 5,000,000 Platform Threads running, only wanted to compare the throughput
> of Platform Threads to Virtual Threads. If you are suggesting this was not
> a good experiment, I want to correct that.
>
> I can see now some more mistakes I made, so will have to rerun the
> benchmarks again.
>
> Cheers, Eric
>
> On Mon, Nov 8, 2021 at 12:50 PM Ignaz Birnstingl <ignazb at gmail.com> wrote:
>
>> Hi Eric,
>>
>> > I would
>> > not have thought it possible to run 5,000,000 Platform Threads.
>>
>> As far as I can tell by looking at your code you didn't. I think you
>> submit the tasks sequentially to an ExecutorService created with
>> Executors.newThreadPerTaskExecutor(). This causes many tasks (=threads)
>> to
>> end before the last one is submitted.
>>
>> I would suggest you use a different (thread caching) ExecutorService for
>> platform threads. Otherwise you are mainly measuring thread creation
>> time
>> of virtual threads vs platform threads.
>>
>> Ignaz
>>
>