RFR: 8324933: ConcurrentHashTable::statistics_calculate synchronization is expensive

Thomas Schatzl tschatzl at openjdk.org
Wed Jan 31 11:15:02 UTC 2024


On Tue, 30 Jan 2024 10:48:18 GMT, Erik Österlund <eosterlund at openjdk.org> wrote:

> In the ConcurrentHashTable::statistics_calculate function, we enter and exit a ScopedCS with the global counter for every single bucket. This has shown up as quite expensive on some machines, so we should make the synchronization less intense here. This patch adds simple batching so that we synchronize once per 128 buckets instead of once per bucket.
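
For illustration, a minimal sketch of the change described above (before vs. after batching). `CriticalSection` and `do_bucket_stats` are placeholder stand-ins for the real ScopedCS and per-bucket statistics work, not the actual HotSpot code:

  #include <cstddef>

  // Placeholder critical section: enters on construction, exits on destruction.
  struct CriticalSection {
    CriticalSection()  { /* enter global-counter critical section */ }
    ~CriticalSection() { /* exit critical section */ }
  };

  static void do_bucket_stats(size_t /* bucket */) { /* accumulate statistics */ }

  void stats_per_bucket(size_t num_buckets) {
    // Before: the critical section is entered and exited for every bucket.
    for (size_t i = 0; i < num_buckets; i++) {
      CriticalSection cs;
      do_bucket_stats(i);
    }
  }

  void stats_per_batch(size_t num_buckets) {
    // After: the critical section is entered once per batch of 128 buckets.
    const size_t batch_size = 128;
    for (size_t batch_start = 0; batch_start < num_buckets; batch_start += batch_size) {
      CriticalSection cs;
      for (size_t i = batch_start; i < num_buckets && i < batch_start + batch_size; i++) {
        do_bucket_stats(i);
      }
    }
  }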

Changes requested by tschatzl (Reviewer).

src/hotspot/share/utilities/concurrentHashTable.inline.hpp line 1238:

> 1236:     } else {
> 1237:       // Not last batch; walk over the current batch
> 1238:       batch_end = batch_start + batch_size;

Something like (untested):
Suggestion:

  for (size_t start_batch = 0; start_batch < _table->_size; start_batch += batch_size) {
    size_t batch_end = MIN2(start_batch + batch_size, _table->_size);


seems much easier to follow (and corresponds to the "usual" shape of such loops) than the proposed code.
For extra performance, `_table->_size` could be hoisted out of the loop as well, but the compiler may already do that anyway.
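
To make the suggested shape concrete, here is a self-contained sketch; `std::min` stands in for HotSpot's MIN2, and the bucket vector and names are placeholders rather than the real ConcurrentHashTable fields:

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  // Count entries across buckets, synchronizing once per batch of 128 buckets.
  size_t count_entries(const std::vector<std::vector<int>>& buckets) {
    const size_t batch_size = 128;
    const size_t size = buckets.size();   // hoisted once, as suggested above
    size_t total = 0;
    for (size_t batch_start = 0; batch_start < size; batch_start += batch_size) {
      const size_t batch_end = std::min(batch_start + batch_size, size);  // MIN2-style clamp
      // A ScopedCS would be entered here, once per batch.
      for (size_t i = batch_start; i < batch_end; i++) {
        total += buckets[i].size();       // per-bucket statistics work
      }
    }
    return total;
  }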

-------------

PR Review: https://git.openjdk.org/jdk/pull/17629#pullrequestreview-1853472676
PR Review Comment: https://git.openjdk.org/jdk/pull/17629#discussion_r1472664852

