RFR: Purge batched matrix cleanup

Dominik Inführ dominik.infuehr at gmail.com
Sat Oct 14 21:59:59 UTC 2017


The class ShenandoahMatrixCleanupTask no longer seems to be used after this
cleanup; the attached diff removes it.

Dominik

On Fri, Oct 13, 2017 at 7:14 PM, Christine Flood <cflood at redhat.com> wrote:

> This looks like a good cleanup.
>
> Thank You,
>
> Christine
>
>
> On Fri, Oct 13, 2017 at 11:43 AM, Roman Kennke <rkennke at redhat.com> wrote:
>
> > On 13.10.2017 at 17:36, Aleksey Shipilev wrote:
> >
> >> http://cr.openjdk.java.net/~shade/shenandoah/matrix-no-batched/webrev.01/
> >>
> >> Before asynchronous region recycling, we faced problems with matrix
> >> cleanups: they take a while, and we could not accept that cost in the
> >> pause. Back then we had to do deferred, batched, parallel matrix cleanup
> >> [1] to alleviate the STW costs. Now that region recycling and matrix
> >> cleanups are handled in the concurrent phase, this matters much less.
> >>
> >> The current code handles an interesting complication: we cannot add the
> >> batched-cleanup regions to the free set, for fear of cleaning up the
> >> matrix for regions the free set is already using for allocation, thus
> >> breaking the matrix. On closer inspection, the same thing happens when
> >> allocation paths *assist* with recycling some of the trash regions into
> >> empty ones! That is a rare case, but it is nevertheless a bug.
> >>
> >> This can be mitigated by acquiring the heap lock for the batched matrix
> >> cleanup, but it would
> >> potentially block allocators for hundreds of milliseconds, which defeats
> >> the purpose.
> >>
> >> My suggestion is to ditch the batched matrix cleanup code and leverage
> >> async recycling to do the right thing. Allocators would normally assist
> >> with matrix cleanup if async recycling is late. Experiments show this
> >> adds around 100us of latency on the allocation path with 32K regions
> >> (which is above our target anyhow), and it is negligible with the 4K
> >> target.
> >>
> >> Testing: hotspot_gc_shenandoah
> >>
> >> Thanks,
> >> -Aleksey
> >>
> >> [1] http://mail.openjdk.java.net/pipermail/shenandoah-dev/2017-May/002299.html
> >>
> > I think that makes sense. Patch looks good.
> >
> >
> > Roman
> >
> >
>
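For context on the cost discussed above: the per-region price of the
allocation-path assist is essentially clearing one row and one column of the
jbyte connection matrix. Below is a minimal standalone sketch of that,
assuming a flat stride x stride byte layout like the one visible in the diff;
the class and method names are hypothetical and only model the idea, they are
not the actual ShenandoahConnectionMatrix API.

#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>

// Toy model of a region connection matrix: one byte per (from, to) region
// pair, laid out as a flat stride x stride array.
class ConnectionMatrixModel {
private:
  size_t _stride;                      // number of heap regions
  std::vector<unsigned char> _matrix;  // _stride * _stride connection flags

public:
  explicit ConnectionMatrixModel(size_t stride)
    : _stride(stride), _matrix(stride * stride, 0) {}

  void set_connected(size_t from, size_t to) {
    _matrix[from * _stride + to] = 1;
  }

  bool is_connected(size_t from, size_t to) const {
    return _matrix[from * _stride + to] != 0;
  }

  // Recycling region r wipes its outgoing connections (one contiguous row)
  // and its incoming connections (one strided column), ~2 * stride bytes.
  void clear_region(size_t r) {
    ::memset(&_matrix[r * _stride], 0, _stride);
    for (size_t from = 0; from < _stride; from++) {
      _matrix[from * _stride + r] = 0;
    }
  }
};

int main() {
  ConnectionMatrixModel matrix(8);     // tiny heap, just to exercise the model
  matrix.set_connected(1, 5);
  matrix.clear_region(5);
  printf("connected after clear: %d\n", matrix.is_connected(1, 5));

  // Back-of-the-envelope for the numbers above: with 32K regions one clear
  // touches a 32 KB row plus 32K strided column bytes, i.e. tens of
  // thousands of potential cache misses -- in the ballpark of the ~100us
  // assist cost; at the 4K-region target it is an order of magnitude less.
  const size_t regions = 32 * 1024;
  printf("bytes touched per region clear at %zu regions: %zu\n",
         regions, 2 * regions);
  return 0;
}

The interesting part in the real code is not the clearing itself but making
sure it never races with the free set handing the same region out for
allocation, which is exactly the bug the webrev above removes.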
-------------- next part --------------
diff --git a/src/share/vm/gc/shenandoah/shenandoahConnectionMatrix.cpp b/src/share/vm/gc/shenandoah/shenandoahConnectionMatrix.cpp
--- a/src/share/vm/gc/shenandoah/shenandoahConnectionMatrix.cpp
+++ b/src/share/vm/gc/shenandoah/shenandoahConnectionMatrix.cpp
@@ -93,46 +93,3 @@
     }
   }
 }
-
-class ShenandoahMatrixCleanupTask : public AbstractGangTask {
-private:
-  volatile size_t _idx;
-  const size_t    _stride;
-  jbyte* const    _matrix;
-  const size_t*   _idxs;
-  const size_t    _count;
-public:
-  ShenandoahMatrixCleanupTask(jbyte* const matrix, size_t stride,
-                              size_t* const idxs, size_t count) :
-          AbstractGangTask("Shenandoah Matrix Cleanup task"),
-          _idx(0), _stride(stride), _matrix(matrix),
-          _idxs(idxs), _count(count) {};
-
-  void work(uint worker_id) {
-    size_t chunk_size = 256; // educated guess
-
-    size_t stride = _stride;
-    size_t count = _count;
-    const size_t* idxs = _idxs;
-    jbyte* matrix = _matrix;
-
-    while (true) {
-      size_t chunk_end = Atomic::add(chunk_size, &_idx);
-      size_t chunk_start = chunk_end - chunk_size;
-      chunk_end = MIN2(stride, chunk_end);
-
-      if (chunk_start >= stride) return;
-
-      for (size_t r = chunk_start; r < chunk_end; r++) {
-        size_t start = r * stride;
-        for (size_t i = 0; i < count; i++) {
-          size_t t = start + idxs[i];
-          if (matrix[t] != 0) {
-            matrix[t] = 0;
-          }
-        }
-      }
-    }
-  }
-
-};

