RFR: 8258431: Provide a JFR event with live set size estimate

Aleksey Shipilev shade at openjdk.java.net
Mon Mar 1 14:21:40 UTC 2021


On Mon, 22 Feb 2021 17:20:49 GMT, Thomas Schatzl <tschatzl at openjdk.org> wrote:

>> The purpose of this change is to expose a 'cheap' estimate of the current live set size (the meaning of 'current' depends on the particular GC implementation; in the worst case it means 'as of the last full GC') in the form of a periodically emitted JFR event.
>> 
>> ## Introducing new JFR event
>> 
>> While there is already a 'GC Heap Summary' JFR event, it does not fit the requirements: it is closely tied to the GC cycle, so e.g. for ZGC or Shenandoah it may not fire for quite a long time, increasing the risk of no heap summary events being present in the JFR recording at all.
>> Because of this I am proposing to add a new 'Heap Usage Summary' event which will be emitted periodically, by default once per JFR chunk, and will contain information about the heap capacity, the used bytes and the live bytes. This information is available from all GC implementations and can be provided at literally any time.
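>> 
>> For illustration, a minimal sketch of how the periodic emitter could look (the event class and field setter names are placeholders of mine, not necessarily the final ones):
>> 
>> ```c++
>> // Hypothetical sketch of the periodic emitter; EventHeapUsageSummary and its
>> // field setters are assumed names, while capacity()/used() already exist on
>> // CollectedHeap and live() is the new method proposed in this change.
>> static void emit_heap_usage_summary() {
>>   CollectedHeap* heap = Universe::heap();
>>   EventHeapUsageSummary event;
>>   event.set_heapCapacity(heap->capacity());
>>   event.set_heapUsed(heap->used());
>>   event.set_heapLive(heap->live());
>>   event.commit();
>> }
>> ```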
>> 
>> ## Implementation
>> 
>> The implementation differs from GC to GC because each GC algorithm/implementation provides a slightly different way to track liveness. The common part is a `size_t live() const` method added to the `CollectedHeap` superclass and the use of a cached 'liveness' value computed after the last GC cycle. If 'liveness' hasn't been calculated yet, the implementation defaults to returning the 'used' value.
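>> 
>> A rough sketch of that shared pattern (simplified model, not the literal patch):
>> 
>> ```c++
>> // Simplified model of the common part: a cached live-set estimate that each
>> // GC updates after its cycle, falling back to used() before the first
>> // computation.
>> class CollectedHeap {
>> protected:
>>   volatile size_t _live = 0;   // last computed live set size, in bytes
>> public:
>>   virtual size_t used() const = 0;
>>   virtual size_t live() const {
>>     size_t l = _live;
>>     return l == 0 ? used() : l; // not computed yet -> report used()
>>   }
>> };
>> ```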
>> 
>> The implementations are based on my (rather shallow) knowledge of the inner workings of the respective GC engines, and I am open to suggestions to make them better/correct.
>> 
>> ### Epsilon GC
>> 
>> Trivial implementation - just return `used()` instead.
>> 
>> ### Serial GC
>> 
>> Here we utilize the fact that the mark-copy phase is naturally compacting, so the number of bytes after the copy is 'live', and that the mark-sweep implementation keeps internal info about objects that are 'dead' but excluded from the compaction effort; we can use these numbers to derive the old-gen live set size (used bytes minus the cumulative size of those 'un-dead' objects).
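>> 
>> In other words (illustrative only, the names are mine):
>> 
>> ```c++
>> // Old gen after mark-sweep: "dead wood" is the cumulative size of objects
>> // found dead during marking but deliberately left in place by the compaction.
>> static size_t serial_old_live(size_t old_used_bytes, size_t dead_wood_bytes) {
>>   return old_used_bytes - dead_wood_bytes;
>> }
>> // For the copying young collection, only live objects survive the copy, so
>> // the young-gen live estimate is simply the used bytes right after the GC.
>> ```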
>> 
>> ### Parallel GC
>> 
>> For Parallel GC the liveness is calculated as the sum of used bytes in all regions after the last GC cycle. This seems to be a safe bet because this collector is always compacting (AFAIK).
>> 
>> ### G1 GC
>> 
>> Using the `G1ConcurrentMark::remark()` method, the live set size is computed as the sum of `_live_words` from the associated `G1RegionMarkStats` objects. Here I am not 100% sure this approach covers all eventualities, and it would be great to have someone skilled in the G1 implementation chime in so I can fix it. However, the numbers I am getting for G1 are comparable to other GCs for the same application.
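>> 
>> Roughly, the summation looks like this (sketch only; the real code runs in the remark path and the exact accessors may differ):
>> 
>> ```c++
>> // Sum the per-region live words gathered during marking and convert to bytes.
>> static size_t g1_live_bytes(const G1RegionMarkStats* stats, uint num_regions) {
>>   size_t live_words = 0;
>>   for (uint i = 0; i < num_regions; i++) {
>>     live_words += stats[i]._live_words;
>>   }
>>   return live_words * HeapWordSize;
>> }
>> ```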
>> 
>> ### Shenandoah
>> 
>> In Shenandoah, the regions keep the liveness info. However, the VM op used for iterating regions is a safe-pointing one, so it is preferable to run it in an already safe-pointed context.
>> This leads to hooking into `ShenandoahConcurrentMark::finish_mark()` and `ShenandoahSTWMark::mark()`, where at the end of the marking process the liveness info is summarized and stored in the `ShenandoahHeap::_live` volatile field, which is later read by the event-emitting code.
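>> 
>> Conceptually, the end-of-mark hook boils down to this (sketch; the helper name is mine, not the actual patch):
>> 
>> ```c++
>> // While still at a safepoint at the end of marking, sum the per-region live
>> // data and publish it for the periodic JFR event to read later.
>> void ShenandoahHeap::update_live_estimate() {
>>   size_t live = 0;
>>   for (size_t i = 0; i < num_regions(); i++) {
>>     live += get_region(i)->get_live_data_bytes();
>>   }
>>   _live = live;  // the volatile field read by the event-emitting code
>> }
>> ```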
>> 
>> ### ZGC
>> 
>> `ZStatHeap` already holds the liveness info, so this implementation just makes it accessible via the `ZCollectedHeap::live()` method.
>
> The change also misses the liveness update after G1 Full GC: it should at least reset the internal liveness counter to 0 so that `used()` is used.
> I think there is the same issue for Parallel Full GC. Serial seems to be handled.
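> 
> Something along these lines, I think (sketch; the exact hook location is an assumption):
> 
> ```c++
> // Hypothetical hook at the end of a G1/Parallel full collection: clear the
> // cached estimate so live() falls back to used() until the next marking
> // cycle recomputes it.
> void reset_live_estimate() {
>   _live = 0;
> }
> ```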

Another general comment about Shenandoah: it would seem easier to piggyback liveness summarization on the region iteration that the heuristics does at the end of mark anyway. See `ShenandoahHeuristics::choose_collection_set`. I can do that when you are done with your changes, or you can try it yourself. A sketch of what I mean follows.
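
Roughly (the helper and setter names are hypothetical; the point is to reuse the existing region walk rather than add another pass):

```c++
// Hypothetical shape: while the heuristics walks all regions at the end of
// mark, accumulate the live bytes it already reads and publish the total once.
void summarize_live_during_cset_selection(ShenandoahHeap* heap) {
  size_t live = 0;
  for (size_t i = 0; i < heap->num_regions(); i++) {
    ShenandoahHeapRegion* r = heap->get_region(i);
    live += r->get_live_data_bytes();
    // ... existing choose_collection_set() logic for region r ...
  }
  heap->set_live(live);  // hypothetical setter updating the cached _live field
}
```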

-------------

PR: https://git.openjdk.java.net/jdk/pull/2579


