RFR: Enforce max regions [v3]
Kelvin Nilsen
kdnilsen at openjdk.org
Wed Dec 7 21:30:13 UTC 2022
On Wed, 7 Dec 2022 18:19:50 GMT, Y. Srinivas Ramakrishna <ysr at openjdk.org> wrote:
>> Kelvin Nilsen has updated the pull request incrementally with one additional commit since the last revision:
>>
>> Fix white space and add an assertion
>
> src/hotspot/share/gc/shenandoah/shenandoahHeapRegion.cpp line 1040:
>
>> 1038: // Then fall through to finish the promotion after releasing the heap lock.
>> 1039: } else {
>> 1040: return 0;
>
> This is interesting. Doing some thinking out loud here.
>
> I realize we want to enforce the generation sizes very strictly (indicated, in a formal sense, by the affiliation of regions to generations), but I do wonder whether humongous regions should enter into that calculus at all. In this case, the reason we would typically want to designate a humongous object as old (via promotion through this method) is that we don't want to spend effort scanning its contents; after all, we never spend any time copying it when it survives a minor collection. Under the circumstances, it appears we would always want humongous objects that are primitive-type arrays to stay in young (never be promoted, although I admit it might make sense to not pay even the cost of marking one that has been around forever, per the generational hypothesis), whereas a humongous object that has references (i.e., ages into the old generation) is affiliated with old and is "promoted" even if there aren't any available regions in old. In other words, humongous objects, because they are never copied, have affiliations that need not enter the promotion calculus in a strict manner.
>
> For these reasons, I'd think that humongous object promotions should be treated specially and old generation size should not be a criterion for determining generational affiliation of humongous regions.
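To make sure I'm reading the suggestion right, the proposed policy amounts to something like the following. This is a hypothetical, self-contained sketch, not the actual code: none of the types or names below are the real HotSpot API, and the real path also has to coordinate with the heap lock.

    // Hypothetical sketch of the proposed policy: humongous regions bypass
    // the old-generation capacity check because their objects are never
    // copied. All types and names here are illustrative, not HotSpot code.
    #include <cstddef>

    struct Region {
      bool   humongous;       // holds (the start of) a humongous object
      bool   has_references;  // not a primitive-type array
      size_t used;            // bytes used by the region
    };

    struct OldGen {
      size_t capacity;
      size_t used;
      bool can_fit(size_t bytes) const { return used + bytes <= capacity; }
    };

    // Returns the number of bytes promoted; 0 means promotion was refused.
    size_t try_promote(Region& r, OldGen& old_gen) {
      if (r.humongous) {
        // Proposed special case: old-gen capacity is not a criterion.
        // Primitive arrays stay young; objects with references become old.
        return r.has_references ? r.used : 0;
      }
      if (!old_gen.can_fit(r.used)) {
        return 0;  // strict enforcement for regular regions, as today
      }
      old_gen.used += r.used;
      return r.used;
    }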
I'm going to add a TODO comment here so that we can think about changing this behavior. I totally agree with your rationale. The problem is that "assumptions" and "invariants" are scattered throughout the existing implementation and need to be carefully reconsidered if we allow the rules to bend. (For example: there are lots of size_t subtractions that may wrap around to huge, meaningless numbers, and if we run with ShenandoahVerify enabled, it will complain if the size of a generation exceeds its capacity.)
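For what it's worth, here is a minimal standalone illustration of the size_t hazard. The values are made up and this is not HotSpot code; it just shows why letting used exceed capacity is dangerous with unsigned arithmetic:

    // If 'used' is ever allowed to exceed 'capacity', an unsigned
    // subtraction does not go negative; it wraps to a huge value.
    #include <cstdio>
    #include <cstddef>

    int main() {
      std::size_t capacity = 100 * 1024 * 1024;  // 100 MB budget
      std::size_t used     = capacity + 1;       // one byte over budget
      std::size_t free_b   = capacity - used;    // wraps to SIZE_MAX
      std::printf("apparent free bytes: %zu\n", free_b);
      return 0;
    }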
-------------
PR: https://git.openjdk.org/shenandoah/pull/179