RFR: 8338534: GenShen: Handle alloc failure differently when immediate garbage is pending

William Kemper wkemper at openjdk.org
Sat Aug 31 00:10:34 UTC 2024


On Wed, 21 Aug 2024 14:48:55 GMT, Kelvin Nilsen <kdnilsen at openjdk.org> wrote:

> Several changes are implemented here:
> 
> 1. Re-order the phases that execute immediately after final mark so that concurrent cleanup runs sooner (but still after concurrent weak reference processing)
> 2. After immediate garbage has been reclaimed by concurrent cleanup, notify waiting allocators
> 3. If an allocation failure occurs while immediate garbage recycling is pending, stall the allocation but do not cancel the concurrent GC

Changes requested by wkemper (Committer).

src/hotspot/share/gc/shenandoah/shenandoahController.cpp line 72:

> 70:                  byte_size_in_proper_unit(req.size() * HeapWordSize), proper_unit_for_byte_size(req.size() * HeapWordSize));
> 71: 
> 72:     if (Atomic::load(&_anticipated_immediate_garbage) < req.size()) {

To make sure I understand: if final mark anticipated enough immediate garbage to satisfy this request (computed when it rebuilt the free set after choosing the collection set), then we do not cancel the GC. Instead, this thread blocks as though the GC had already been cancelled, and it will be notified when concurrent cleanup completes.

src/hotspot/share/gc/shenandoah/shenandoahController.cpp line 101:

> 99: 
> 100: void ShenandoahController::notify_alloc_failure_waiters(bool clear_alloc_failure) {
> 101:   if (clear_alloc_failure) {

Why would we not clear the alloc failure? This seems like it would confuse the control thread. Won't the control thread attempt to notify alloc failure waiters again when the cycle finishes?

-------------

PR Review: https://git.openjdk.org/shenandoah/pull/479#pullrequestreview-2273675393
PR Review Comment: https://git.openjdk.org/shenandoah/pull/479#discussion_r1739549118
PR Review Comment: https://git.openjdk.org/shenandoah/pull/479#discussion_r1739551197

