RFR: Improve abstraction for runtime allocations
Aleksey Shipilev
shade at redhat.com
Mon May 14 09:12:58 UTC 2018
On 05/13/2018 06:45 PM, Roman Kennke wrote:
> This rebases the patch on top of the PLAB change. It basically reverts
> some changes in plab that I needed to do there:
>
> Diff:
> http://cr.openjdk.java.net/~rkennke/allocations-rt/webrev.01.diff/
> Full:
> http://cr.openjdk.java.net/~rkennke/allocations-rt/webrev.01/
The patch looks good.
But the sheer amount of reshuffling in shared code unnerves me. In my mind, it feels safer to do the
mem_allocate rework in upstream first, make sure upstream does not break with it, and then pick it up
from there. Otherwise, we risk investing in some code shape that would be flat-out rejected
upstream, and we would have to redo it again.
Alternative: seeing how the only use of mem_allocate is in CH::common_mem_allocate_noinit, maybe we
should instead just disable the TLAB alloc block there, and do only the Shenandoah part of the
mem_allocate changes. E.g.:
---------- 8< ---------------------------------------------------------------
  HeapWord* result = NULL;
  if (!UseShenandoahGC) {
    if (UseTLAB) {
      result = allocate_from_tlab(klass, THREAD, size);
      if (result != NULL) {
        assert(!HAS_PENDING_EXCEPTION,
               "Unexpected exception, will result in uninitialized storage");
        return result;
      }
    }
  }

  bool gc_overhead_limit_was_exceeded = false;
  result = Universe::heap()->mem_allocate(size, klass, THREAD,
                                          &gc_overhead_limit_was_exceeded);
---------- 8< ---------------------------------------------------------------
Then have an overload of CH::mem_allocate that ignores the new Klass* and Thread* params. This keeps
shared changes to a minimum, until upstream accepts them.
-Aleksey