[crac] RFR: Wait until G1 GC has finished before creating a snapshot. [v6]
Anton Kozlov
akozlov at openjdk.org
Thu Aug 3 12:07:03 UTC 2023
On Sat, 29 Jul 2023 01:26:38 GMT, Jan Kratochvil <jkratochvil at openjdk.org> wrote:
>> @rvansa reported that his snapshots are about 2x-3x bigger than they should be. He also found that this only happens if the snapshot is taken too quickly after the GC has run.
>>
>> One can reproduce the race case by:
>>
>> --- a/src/hotspot/share/gc/g1/g1UncommitRegionTask.hpp
>> +++ b/src/hotspot/share/gc/g1/g1UncommitRegionTask.hpp
>> @@ -35,7 +35,7 @@ class G1UncommitRegionTask : public G1ServiceTask {
>> // is short, while still making reasonable progress.
>> static const uint UncommitSizeLimit = 128 * M;
>> // Initial delay in milliseconds after GC before the regions are uncommitted.
>> - static const uint UncommitInitialDelayMs = 100;
>> + static const uint UncommitInitialDelayMs = 10*1000;
>> // The delay between two uncommit task executions.
>> static const uint UncommitTaskDelayMs = 10;
>
> Jan Kratochvil has updated the pull request incrementally with one additional commit since the last revision:
>
> Make -active volatile.
> - bugreported by Sergey Nazarkin
src/hotspot/share/gc/g1/g1UncommitRegionTask.cpp line 133:
> 131: // Nothing more to do, change state and report a summary.
> 132: set_active(false);
> 133: _active_sem.signal();
AFAICS signal() increments the semaphore count, so the count ends up being the number of completed executions of the uncommit task. That means another wait() may succeed even while the operation is still in progress, by consuming a permit accumulated during a previous execution.
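To restate the concern as a minimal standalone sketch (not the actual patch; it only assumes HotSpot's counting Semaphore from utilities/semaphore.hpp):

    #include "utilities/semaphore.hpp"

    static Semaphore _active_sem(0);

    // Runs once per uncommit cycle over the JVM's lifetime.
    void uncommit_cycle_finished() {
      _active_sem.signal();        // count grows: 1, 2, 3, ...
    }

    // Called before the checkpoint.
    void wait_for_uncommit() {
      _active_sem.wait();          // may consume a permit left over from an
                                   // earlier cycle while a new one is still
                                   // running, so it can return too early
    }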
And I share the concerns about access to `_active`. For safe access there should be a critical section, or at least acquire/release ordering; volatile alone does not give that.
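One possible shape for the flag access, purely as a sketch (it assumes the Atomic::load_acquire / Atomic::release_store helpers from runtime/atomic.hpp; the real fix may prefer a proper lock instead):

    #include "runtime/atomic.hpp"

    // Sketch: pair the writer's release store with the reader's acquire load,
    // so that the uncommit work done before clearing the flag is visible to
    // the thread that observes _active == false.
    void G1UncommitRegionTask::set_active(bool active) {
      Atomic::release_store(&_active, active);
    }

    bool G1UncommitRegionTask::is_active() {
      return Atomic::load_acquire(&_active);
    }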
Do we really need to synchronize with the G1UncommitRegionTask at all? Could we instead call `G1CollectedHeap::uncommit_regions` directly (it is what the UncommitTask eventually calls anyway)? Roughly along these lines:
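(A sketch only: `uncommit_regions` is the method mentioned above; `has_uncommittable_regions`, the chunking, and the locking/safepoint context this would have to run in are assumptions on my side.)

    // Drive the uncommit synchronously at the checkpoint instead of waiting
    // for the service task (locking/safepoint requirements not shown).
    G1CollectedHeap* g1h = G1CollectedHeap::heap();
    while (g1h->has_uncommittable_regions()) {
      g1h->uncommit_regions(UINT_MAX /* no chunk limit for the sketch */);
    }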
-------------
PR Review Comment: https://git.openjdk.org/crac/pull/93#discussion_r1283105222