Better default for ParallelGCThreads and ConcGCThreads by using number of physical cores and CPU mask.
Jungwoo Ha
jwha at google.com
Mon Jan 13 21:39:07 UTC 2014
>
> In CMSCollector there is still this code to change the value for
> ConcGCThreads based on AdjustGCThreadsToCores.
>
>
> 639   if (AdjustGCThreadsToCores) {
> 640     FLAG_SET_DEFAULT(ConcGCThreads, ParallelGCThreads / 2);
> 641   } else {
> 642     FLAG_SET_DEFAULT(ConcGCThreads, (3 + ParallelGCThreads) / 4);
> 643   }
>
> Do you think that is needed or can we use the same logic in both cases
> given that ParallelGCThreads has a different value if
> AdjustGCThreadsToCores is enabled.
>
I am happy to just use FLAG_SET_DEFAULT(ConcGCThreads, ParallelGCThreads /
2); in both cases.
The original hotspot code used FLAG_SET_DEFAULT(ConcGCThreads, (3 +
ParallelGCThreads) / 4); which I think is somewhat arbitrary.
Now that ParallelGCThreads will be reduced on some configurations, dividing
it by 4 seems to make ConcGCThreads too small.
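To illustrate: if ParallelGCThreads ends up as 4 (e.g. a 4-core machine
under the new default), the old formula gives (3 + 4) / 4 = 1 concurrent
thread, while ParallelGCThreads / 2 gives 2. A minimal sketch of the unified
branch I have in mind (illustration only, not the exact patch; the
FLAG_IS_DEFAULT guard is assumed from the surrounding code):

  // Sketch: use one default regardless of AdjustGCThreadsToCores, since
  // ParallelGCThreads is already reduced when the flag is enabled.
  if (FLAG_IS_DEFAULT(ConcGCThreads)) {
    FLAG_SET_DEFAULT(ConcGCThreads, ParallelGCThreads / 2);
  }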
>
> Also, I don't fully understand the name AdjustGCThreadsToCores. In
> VM_Version::calc_parallel_worker_threads() for x86 we simply divide
> active_core_count by 2 if this flag is enabled. So, the flag does not
> really adjust to the cores. It seems like it reduces the number of GC
> threads. How about calling the flag ReduceGCThreads or something like that?
>
The flag can be named better. However, ReduceGCThreads doesn't seem to
reflect what this flag does.
I am pretty bad at naming, so let me summarize what this flag is actually
doing.
The flag adjusts the number of GC threads to the number of "available"
physical cores, as reported by the /proc filesystem and restricted by the
CPU mask set with sched_setaffinity().
For example, ParallelGCThreads will remain the same regardless of whether
hyperthreading is turned on or off.
The current hotspot code will use twice as many GC threads if hyperthreading
is on.
GC usually causes a huge number of cache misses, so having two GC threads
competing for the same physical core hurts GC throughput.
The current hotspot code also doesn't consider the CPU mask at all.
For example, even if the machine has 64 cores, when the CPU mask restricts
the process to 2 cores, the current hotspot code still calculates the number
of GC threads based on 64.
So this flag really derives the number of GC threads from the number of
physical cores available to the JVM process.
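To make the idea concrete, here is a small standalone sketch of the
core-counting part. It is an illustration only, not the patch itself: the
patch reads the /proc filesystem, whereas this sketch reads the sysfs
topology files and calls sched_getaffinity() directly, and the name
available_physical_cores() is made up for the example.

  #ifndef _GNU_SOURCE
  #define _GNU_SOURCE 1   // for sched_getaffinity / CPU_ISSET with glibc
  #endif
  #include <sched.h>      // sched_getaffinity, cpu_set_t, CPU_ISSET
  #include <stdio.h>
  #include <fstream>
  #include <set>
  #include <utility>

  // Count the physical cores this process may actually run on: for every
  // logical CPU allowed by the affinity mask, read its (package_id, core_id)
  // pair from sysfs and count distinct pairs.  Hyperthread siblings share a
  // pair, so they are counted once; CPUs masked out are skipped entirely.
  // Masks larger than CPU_SETSIZE are not handled in this sketch.
  static int available_physical_cores() {
    cpu_set_t mask;
    if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
      return -1;  // real code would fall back to active_processor_count()
    }
    std::set<std::pair<int, int> > cores;
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++) {
      if (!CPU_ISSET(cpu, &mask)) {
        continue;
      }
      char path[128];
      int package_id = -1;
      int core_id = -1;
      snprintf(path, sizeof(path),
               "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
               cpu);
      std::ifstream pkg(path);
      pkg >> package_id;
      snprintf(path, sizeof(path),
               "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
      std::ifstream core(path);
      core >> core_id;
      cores.insert(std::make_pair(package_id, core_id));
    }
    return (int) cores.size();
  }

  int main() {
    printf("available physical cores: %d\n", available_physical_cores());
    return 0;
  }

With hyperthreading on, sibling logical CPUs map to the same
(package_id, core_id) pair, so the count stays equal to the physical core
count; and with an affinity mask of 2 CPUs on a 64-core machine it returns
at most 2, which is exactly the behavior the flag is meant to capture.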
> I think I pointed this out earlier, but I don't feel comfortable reviewing
> the changes in os_linux_x86.cpp. I hope someone from the Runtime team can
> review that.
>
Can you clarify what you meant? The /proc and CPU mask handling is dependent
on Linux and x86, and I only tested on that platform.
The assumptions I used here are based on the x86 cache architecture.
Jungwoo