RFR: 8258431: Provide a JFR event with live set size estimate

Aleksey Shipilev shade at openjdk.java.net
Thu Feb 18 10:29:40 UTC 2021


On Mon, 15 Feb 2021 17:23:44 GMT, Jaroslav Bachorik <jbachorik at openjdk.org> wrote:

> The purpose of this change is to expose a 'cheap' estimate of the current live set size (the meaning of 'current' depends on each particular GC implementation, but in the worst case it means 'at the last full GC') in the form of a periodically emitted JFR event.
> 
> ## Introducing new JFR event
> 
> While there is already a 'GC Heap Summary' JFR event, it does not fit the requirements because it is closely tied to the GC cycle; e.g. for ZGC or Shenandoah a cycle may not happen for quite a long time, increasing the risk of the heap summary events not being present in the JFR recording at all. 
> Because of this I am proposing to add a new 'Heap Usage Summary' event which will be emitted periodically, by default on each JFR chunk, and will contain the heap capacity and the used and live bytes. This information is available from all GC implementations and can be provided at literally any time.
> 
> ## Implementation
> 
> The implementation differs from GC to GC because each GC algorithm/implementation provides a slightly different way to track liveness. The common part is the `size_t live() const` method added to the `CollectedHeap` superclass and the use of a cached 'liveness' value computed after the last GC cycle. If 'liveness' hasn't been calculated yet, the implementation defaults to returning the 'used' value.
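> 
> In rough pseudo-code, the common hook looks like this (illustrative sketch only; `SomeHeap` is a placeholder and the exact declarations may differ):
> 
>     // collectedHeap.hpp - new virtual accessor
>     virtual size_t live() const = 0;
> 
>     // typical per-GC pattern: fall back to used() until the first
>     // liveness figure has been computed after a GC cycle
>     size_t SomeHeap::live() const {
>       size_t cached = _live_size;   // updated at the end of a GC cycle
>       return cached > 0 ? cached : used();
>     }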
> 
> The implementations are based on my (rather shallow) knowledge of the inner workings of the respective GC engines and I am open to suggestions to make them better/more correct.
> 
> ### Epsilon GC
> 
> Trivial implementation - just return `used()` instead.
> 
> ### Serial GC
> 
> Here we utilize the fact that the mark-copy phase is naturally compacting, so the number of bytes after the copy is 'live', and that the mark-sweep implementation keeps internal info about objects that are 'dead' but excluded from the compaction effort; we can use these numbers to derive the old-gen live set size (used bytes minus the cumulative size of those excluded 'dead' objects).
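> 
> Illustratively (not the actual code, generation and variable names are placeholders):
> 
>     // young gen: mark-copy is compacting, so what is used after the copy is live
>     size_t young_live = young_gen->used();
>     // old gen: subtract the cumulative size of the dead objects that
>     // mark-sweep decided to keep in place
>     size_t old_live = old_gen->used() - dead_space_kept;
>     size_t live_estimate = young_live + old_live;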
> 
> ### Parallel GC
> 
> For Parallel GC the liveness is calculated as the sum of used bytes in all regions after the last GC cycle. This seems to be a safe bet because this collector is always compacting (AFAIK).
> 
> ### G1 GC
> 
> In the `G1ConcurrentMark::remark()` method the live set size is computed as the sum of `_live_words` from the associated `G1RegionMarkStats` objects. I am not 100% sure this approach covers all eventualities, so it would be great to have someone skilled in the G1 implementation chime in so I can fix it. However, the numbers I am getting for G1 are comparable to the other GCs for the same application.
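> 
> Conceptually (sketch only; field and method names are approximate):
> 
>     // at the end of G1ConcurrentMark::remark()
>     size_t live_words = 0;
>     for (uint region = 0; region < _g1h->max_regions(); region++) {
>       live_words += _region_mark_stats[region]._live_words;
>     }
>     _g1h->set_live(live_words * HeapWordSize);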
> 
> ### Shenandoah
> 
> In Shenandoah, the regions keep the liveness info. However, the VM op used for iterating regions is a safe-pointing one, so it is preferable to run it in an already safe-pointed context.
> This leads to hooking into `ShenandoahConcurrentMark::finish_mark()` and `ShenandoahSTWMark::mark()`, where at the end of the marking process the liveness info is summed up and stored into the volatile `ShenandoahHeap::_live` field - which is later read by the event-emitting code.
> 
> ### ZGC
> 
> `ZStatHeap` already holds the liveness info - so this implementation just makes it accessible via the `ZCollectedHeap::live()` method.

Interesting! Cursory review follows.

src/hotspot/share/gc/g1/g1CollectedHeap.cpp line 4578:

> 4576: 
> 4577: void G1CollectedHeap::set_live(size_t bytes) {
> 4578:   Atomic::release_store(&_live_size, bytes);

I don't think this requires `release_store`, regular `store` would be enough. G1 folks can say for sure.

src/hotspot/share/gc/parallel/parallelScavengeHeap.hpp line 100:

> 98:   HeapWord* mem_allocate_old_gen(size_t size);
> 99: 
> 100: 

Excess newline?

src/hotspot/share/gc/shared/collectedHeap.hpp line 217:

> 215:   virtual size_t capacity() const = 0;
> 216:   virtual size_t used() const = 0;
> 217:   // a best-effort estimate of the live set size

Suggestion:

// Returns an estimate of the live set size. Because the live set changes over time,
// this is a best-effort estimate by each of the implementations. These estimates
// are usually most precise right after a GC cycle.

src/hotspot/share/gc/shared/genCollectedHeap.cpp line 1144:

> 1142:   _old_gen->prepare_for_compaction(&cp);
> 1143:   _young_gen->prepare_for_compaction(&cp);
> 1144: 

Stray newline?

src/hotspot/share/gc/shared/genCollectedHeap.hpp line 183:

> 181:     size_t live = _live_size;
> 182:     return live > 0 ? live : used();
> 183:   };

I think the implementation belongs in `genCollectedHeap.cpp`.

src/hotspot/share/gc/shared/generation.hpp line 140:

> 138:   virtual size_t used() const = 0;      // The number of used bytes in the gen.
> 139:   virtual size_t free() const = 0;      // The number of free bytes in the gen.
> 140:   virtual size_t live() const = 0;

Needs a comment to match the lines above? Say, `// The estimate of live bytes in the gen.`

src/hotspot/share/gc/shenandoah/shenandoahConcurrentGC.cpp line 579:

> 577:     event.set_heapLive(heap->live());
> 578:     event.commit();
> 579:   }

At first sight, this belongs in `ShenandoahConcurrentMark::finish_mark()`. Placing the event here would also fire it when the concurrent GC is cancelled, which is not what you want.

src/hotspot/share/gc/shenandoah/shenandoahConcurrentMark.cpp line 265:

> 263:   ShenandoahHeap* const heap = ShenandoahHeap::heap();
> 264:   heap->set_concurrent_mark_in_progress(false);
> 265:   heap->mark_finished();

Let's not rename this method. Introduce a new method, `ShenandoahHeap::update_live`, and call it every time after `mark_complete_marking_context()` is called.
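
Something like this (just a sketch):

// e.g. in ShenandoahConcurrentMark::finish_mark() and ShenandoahSTWMark::mark()
heap->set_concurrent_mark_in_progress(false);
heap->mark_complete_marking_context();
heap->update_live();   // new method that recomputes the cached live size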

src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp line 627:

> 625: 
> 626: size_t ShenandoahHeap::live() const {
> 627:   size_t live = Atomic::load_acquire(&_live);

I understand you copy-pasted from the same file. We have removed `_acquire` with #2504. Do `Atomic::load` here.

src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp line 655:

> 653: 
> 654: void ShenandoahHeap::set_live(size_t bytes) {
> 655:   Atomic::release_store_fence(&_live, bytes);

Same, do `Atomic::store` here.
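
I.e., something like:

size_t ShenandoahHeap::live() const {
  return Atomic::load(&_live);
}

void ShenandoahHeap::set_live(size_t bytes) {
  Atomic::store(&_live, bytes);
}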

src/hotspot/share/gc/shenandoah/shenandoahHeap.inline.hpp line 494:

> 492:   mark_complete_marking_context();
> 493: 
> 494:   class ShenandoahCollectLiveSizeClosure : public ShenandoahHeapRegionClosure {

We don't usually use in-method declarations like this; pull it out of the method.

src/hotspot/share/gc/shenandoah/shenandoahHeap.inline.hpp line 511:

> 509: 
> 510:   ShenandoahCollectLiveSizeClosure cl;
> 511:   heap_region_iterate(&cl);

I think you want `parallel_heap_region_iterate` on this path, and do `Atomic::add(&_live, r->get_live_data_bytes())` in the closure. We shall see whether it makes sense to make this fully concurrent...
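
Roughly (sketch, assuming `_live` stays a volatile `size_t`):

class ShenandoahCollectLiveSizeClosure : public ShenandoahHeapRegionClosure {
private:
  volatile size_t* const _live;
public:
  ShenandoahCollectLiveSizeClosure(volatile size_t* live) : _live(live) {}
  void heap_region_do(ShenandoahHeapRegion* r) {
    // accumulate per-region liveness; parallel workers may call this concurrently
    Atomic::add(_live, r->get_live_data_bytes());
  }
  bool is_thread_safe() { return true; }
};

// reset _live to zero first, then:
ShenandoahCollectLiveSizeClosure cl(&_live);
parallel_heap_region_iterate(&cl);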

src/hotspot/share/gc/epsilon/epsilonHeap.hpp line 80:

> 78:   virtual size_t capacity()     const { return _virtual_space.committed_size(); }
> 79:   virtual size_t used()         const { return _space->used(); }
> 80:   virtual size_t live()         const { return used(); }

I'd prefer to call `_space->used()` directly here. Minor optimization, I know.

-------------

Changes requested by shade (Reviewer).

PR: https://git.openjdk.java.net/jdk/pull/2579

