RFR: 8364638: Refactor and make accumulated GC CPU time code generic [v4]

Albert Mingkun Yang ayang at openjdk.org
Tue Aug 12 11:56:19 UTC 2025


On Thu, 7 Aug 2025 09:49:12 GMT, Jonas Norlinder <duke at openjdk.org> wrote:

>> Hi all,
>> 
>> This PR refactors the newly added GC CPU time code from [JDK-8359110](https://bugs.openjdk.org/browse/JDK-8359110).
>> 
>> As a stepping-stone towards consolidating CPU time tracking (e.g. in hsperf counters and GCTraceCPUTime) and providing a unified interface for tracking the CPU time of various Hotspot components, this code can be refactored. This PR introduces a new interface to retrieve CPU time for various Hotspot components; it currently supports:
>> 
>> CPUTimeUsage::GC::total() // the sum of gc_threads(), vm_thread(), stringdedup()
>> 
>> CPUTimeUsage::GC::gc_threads()
>> CPUTimeUsage::GC::vm_thread()
>> CPUTimeUsage::GC::stringdedup()
>> 
>> CPUTimeUsage::Runtime::vm_thread()
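>> 
>> A call site might then look roughly like this (illustrative only; the methods are assumed to return CPU time in nanoseconds as a jlong):
>> 
>> jlong gc_cpu_time = CPUTimeUsage::GC::total();
>> jlong vm_cpu_time = CPUTimeUsage::Runtime::vm_thread();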
>> 
>> 
>> I moved `CPUTimeUsage` to `src/hotspot/share/services`, since that directory already houses similar performance tracking code such as `RuntimeService`, and the class is no longer specific to GC.
>> 
>> I also made a minor improvement to the CPU time logging during exit. Since `CPUTimeUsage` supports more components than just GC, I changed the log tag from `gc,cpu` to `cpu` and created a detailed table:
>> 
>> 
>> [71.425s][info][cpu   ] === CPU time Statistics =============================================================
>> [71.425s][info][cpu   ]                                                                             CPUs
>> [71.425s][info][cpu   ]                                                                s       %  utilized
>> [71.425s][info][cpu   ]    Process
>> [71.425s][info][cpu   ]      Total                                             1616.3627  100.00      22.6
>> [71.425s][info][cpu   ]      VM Thread                                            5.2992    0.33       0.1
>> [71.425s][info][cpu   ]      Garbage Collection                                  83.7322    5.18       1.2
>> [71.425s][info][cpu   ]        GC Threads                                        82.7671    5.12       1.2
>> [71.425s][info][cpu   ]        VM Thread                                          0.9651    0.06       0.0
>> [71.425s][info][cpu   ] =====================================================================================
>> 
>> 
>> Additionally, if CPU time retrieval fails, it should not be the caller's responsibility to log warnings, as that would bloat the code unnecessarily. I noticed that `os` already logs a warning when some methods fail, so I continued on that path.
>
> Jonas Norlinder has updated the pull request incrementally with one additional commit since the last revision:
> 
>   Improve robustness

src/hotspot/os/linux/os_linux.cpp line 4953:

> 4951:       // to detach itself from the VM - which should result in ESRCH.
> 4952:       assert_status(rc == ESRCH, rc, "pthread_getcpuclockid failed");
> 4953:       log_warning(os)("Could not sample thread CPU time");

Maybe different messages can be printed, so that we know better what went wrong if `-1` is returned.
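
For example, a sketch only (the message wording and the use of `os::strerror` are illustrative):

if (rc == ESRCH) {
  // The thread is exiting and has detached itself from the VM.
  log_warning(os)("Could not sample thread CPU time: thread has terminated");
} else {
  log_warning(os)("Could not sample thread CPU time: pthread_getcpuclockid failed (%s)",
                  os::strerror(rc));
}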

src/hotspot/share/gc/shared/collectedHeap.cpp line 610:

> 608: }
> 609: 
> 610: double calc_usage(double component_cpu_time, double process_cpu_time) {

Could `percent_of` be used instead?
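
i.e. something along the lines of (assuming `percent_of` from `globalDefinitions.hpp` matches the intended semantics):

double usage = percent_of(component_cpu_time, process_cpu_time);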

src/hotspot/share/gc/shared/collectedHeap.cpp line 627:

> 625: 
> 626:   LogTarget(Info, cpu) cpuLog;
> 627:   if (cpuLog.is_enabled()) {

Can use early-return to reduce one indentation level.
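
i.e. roughly:

LogTarget(Info, cpu) cpuLog;
if (!cpuLog.is_enabled()) {
  return;
}
// ... logging code follows at one less indentation level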

src/hotspot/share/gc/shared/collectedHeap.hpp line 468:

> 466:   virtual void gc_threads_do(ThreadClosure* tc) const = 0;
> 467: 
> 468:   jlong elapsed_gc_cpu_time() const;

Seems unused.

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/26621#discussion_r2269575257
PR Review Comment: https://git.openjdk.org/jdk/pull/26621#discussion_r2269582730
PR Review Comment: https://git.openjdk.org/jdk/pull/26621#discussion_r2269586175
PR Review Comment: https://git.openjdk.org/jdk/pull/26621#discussion_r2269589768

