RFR: Parallel +AlwaysPreTouch should run with max workers
Aleksey Shipilev
shade at redhat.com
Mon Jun 11 13:53:40 UTC 2018
Current +AlwaysPreTouch is not parallel (it used to be!), because the default policy starts with a
single active GC worker during init. We need to spin up more active workers for this phase. We know
this affects sensitive workloads on NUMA machines, because pre-touching with many threads effectively
interleaves memory across nodes, so this should improve performance there.
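The fix below relies on ShenandoahWorkerScope, an RAII helper that temporarily raises the gang's active worker count and restores it on scope exit. A minimal standalone sketch of that pattern (the `WorkGang` and `WorkerScope` types here are hypothetical stand-ins, not the HotSpot classes):

```cpp
#include <cassert>

// Hypothetical stand-in for a worker gang: tracks how many of its
// worker threads are currently allowed to run tasks.
struct WorkGang {
  unsigned total_workers;
  unsigned active_workers;
};

// RAII scope in the spirit of ShenandoahWorkerScope: raise the active
// worker count on entry, restore the previous count on exit.
class WorkerScope {
  WorkGang* _gang;
  unsigned _saved;
public:
  WorkerScope(WorkGang* gang, unsigned n_workers)
      : _gang(gang), _saved(gang->active_workers) {
    // Clamp to the gang's capacity, then activate that many workers.
    _gang->active_workers = (n_workers < gang->total_workers)
                                ? n_workers : gang->total_workers;
  }
  ~WorkerScope() {
    _gang->active_workers = _saved;  // back to the default policy's choice
  }
};
```

Any task submitted while the scope is live runs with the raised worker count; once the scope ends, the default single-worker init policy is back in effect.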
diff -r e4a301c3f11b src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp
--- a/src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp Mon Jun 11 13:54:58 2018 +0200
+++ b/src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp Mon Jun 11 15:49:38 2018 +0200
@@ -280,10 +280,11 @@
   assert (!AlwaysPreTouch, "Should have been overridden");
   // For NUMA, it is important to pre-touch the storage under bitmaps with worker threads,
   // before initialize() below zeroes it with initializing thread. For any given region,
   // we touch the region and the corresponding bitmaps from the same thread.
+  ShenandoahWorkerScope scope(workers(), _max_workers);
   log_info(gc, heap)("Parallel pretouch " SIZE_FORMAT " regions with " SIZE_FORMAT " byte pages",
                      _num_regions, page_size);
   ShenandoahPretouchTask cl(bitmap0.base(), bitmap1.base(), _bitmap_size, page_size);
   _workers->run_task(&cl);
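For context, the pretouch task itself only needs to write one byte per OS page so that the first-touch NUMA policy backs each page from the touching thread's node. A standalone sketch of that idea using plain `std::thread` (this is not the HotSpot `ShenandoahPretouchTask` code, just an illustration of the technique):

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Touch one byte per page in parallel so each page is first touched by
// the worker that will later use it; under a first-touch NUMA policy
// this places the page on that worker's node.
void pretouch_parallel(char* base, size_t size, size_t page_size,
                       unsigned n_workers) {
  size_t pages = (size + page_size - 1) / page_size;
  size_t chunk = (pages + n_workers - 1) / n_workers;
  std::vector<std::thread> workers;
  for (unsigned w = 0; w < n_workers; w++) {
    workers.emplace_back([=]() {
      size_t first = w * chunk;
      size_t last  = (first + chunk < pages) ? first + chunk : pages;
      for (size_t p = first; p < last; p++) {
        base[p * page_size] = 0;  // one write per page is enough
      }
    });
  }
  for (auto& t : workers) t.join();
}
```

With a single worker every page would be touched (and hence placed) by one thread on one node, which is exactly the behavior the patch avoids.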
Testing: tier1_gc_shenandoah, +AlwaysPreTouch with -Xmx100g
Thanks,
-Aleksey