RFR: Purge batched matrix cleanup
Christine Flood
cflood at redhat.com
Fri Oct 13 17:14:29 UTC 2017
This looks like a good cleanup.
Thank you,
Christine
On Fri, Oct 13, 2017 at 11:43 AM, Roman Kennke <rkennke at redhat.com> wrote:
> On 13.10.2017 at 17:36, Aleksey Shipilev wrote:
>
>> http://cr.openjdk.java.net/~shade/shenandoah/matrix-no-batched/webrev.01/
>>
>> Before asynchronous region recycling, we faced problems with matrix
>> cleanup: it takes a while, and we could not afford it during the
>> pause. So we had to do deferred, batched, parallel matrix cleanup [1]
>> to alleviate the STW cost. Now that region recycling and matrix
>> cleanup are handled in the concurrent phase, this matters much less.
>>
>> The current code handles an interesting complication: we cannot add
>> the batched-cleanup regions to the free set, for fear we would clean
>> up the matrix for regions the free set is already using for
>> allocation, thus breaking the matrix. On closer inspection, the same
>> thing happens when allocation paths *assist* with recycling some of
>> the trash regions into empty ones! This is a rare case, but it is
>> nevertheless a bug.
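
To make the hazard concrete, here is a minimal sketch of the race; the
ConnectionMatrix type and the ordering described in the comments are
only an illustration of the problem above and do not match the actual
Shenandoah sources:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Simplified stand-in for the connection matrix: one byte per
    // (from, to) region pair, recording inter-region references.
    struct ConnectionMatrix {
      size_t num_regions;
      std::vector<uint8_t> cells;   // num_regions x num_regions

      explicit ConnectionMatrix(size_t n) : num_regions(n), cells(n * n, 0) {}

      void set_connected(size_t from, size_t to) {
        cells[from * num_regions + to] = 1;
      }

      // Wipe both the row (outgoing) and the column (incoming) of a region.
      void clear_region(size_t idx) {
        for (size_t r = 0; r < num_regions; r++) {
          cells[idx * num_regions + r] = 0;
          cells[r * num_regions + idx] = 0;
        }
      }
    };

    // The race with deferred, batched cleanup, step by step:
    //   1. Region R becomes trash after the cycle and is queued for the
    //      deferred batched matrix cleanup.
    //   2. An allocation path assists and recycles R into an empty
    //      region; the free set hands R out, and new objects in R record
    //      fresh connections via set_connected(R, ...).
    //   3. The batched cleanup finally runs clear_region(R) and erases
    //      the connections recorded in step 2, leaving the matrix stale.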
>>
>> This can be mitigated by acquiring the heap lock for the batched matrix
>> cleanup, but it would
>> potentially block allocators for hundreds of milliseconds, which defeats
>> the purpose.
>>
>> My suggestion is to ditch the batched matrix cleanup code and let
>> async recycling do the right thing. Allocators would assist with
>> matrix cleanup if async recycling is late. Experiments show this adds
>> around 100us of latency on the allocation path with 32K regions
>> (which is above our target anyhow), and it is negligible with the 4K
>> target.
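
For what it's worth, the exactly-once property that makes this safe can
be sketched roughly as below. The state names and the atomic transition
are only an illustration of the idea (the real code presumably does its
recycling under the heap lock), not the actual Shenandoah implementation:

    #include <atomic>
    #include <cstddef>

    // Illustrative region states; not the real Shenandoah state machine.
    enum RegionState { REGULAR, TRASH, RECYCLING, EMPTY };

    struct Region {
      std::atomic<int> state{REGULAR};
      size_t index;
    };

    // Stand-in for wiping the region's row and column in the matrix,
    // as in the earlier sketch.
    void clear_matrix_for(size_t /*region_index*/) { /* ... */ }

    // Recycle one trash region: whoever wins the TRASH -> RECYCLING
    // transition clears the matrix, and only then publishes the region
    // as EMPTY, so the matrix is cleaned exactly once and always before
    // the region can be handed out for allocation again.
    void recycle(Region* r) {
      int expected = TRASH;
      if (r->state.compare_exchange_strong(expected, RECYCLING)) {
        clear_matrix_for(r->index);
        r->state.store(EMPTY, std::memory_order_release);
      }
    }

    // The concurrent recycler walks trash regions in the background and
    // calls recycle(). If it has not reached a region yet, the
    // allocation path calls recycle() itself before using the region:
    // that is the "assist", the ~100us cost mentioned above.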
>>
>> Testing: hotspot_gc_shenandoah
>>
>> Thanks,
>> -Aleksey
>>
>> [1] http://mail.openjdk.java.net/pipermail/shenandoah-dev/2017-May/002299.html
>>
> I think that makes sense. Patch looks good.
>
> Roman