RFR: 8296401: ConcurrentHashTable::bulk_delete might miss to delete some objects
Ivan Walulya
iwalulya at openjdk.org
Mon Dec 5 11:04:04 UTC 2022
On Mon, 5 Dec 2022 10:44:54 GMT, Leo Korinth <lkorinth at openjdk.org> wrote:
> If the user of the hash table has buckets sized above 256, the exponential growth of the extra array will start at a very small size and will not be optimal. It will work, and that is IMO enough.
I thought the benefit of having a `BULK_DELETE_LIMIT` was to limit the cleanup done by an individual thread. This effectively limits the duration of the critical section.
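To illustrate what I mean, here is a minimal sketch of that idea (not the HotSpot `ConcurrentHashTable` code; the names `Bucket`, `kBulkDeleteLimit` and `bulk_delete_limited` are made up for the example): a single pass may unlink at most a fixed number of nodes while the bucket lock is held, and the actual deletion is deferred until after the lock is released, so the lock hold time does not grow with the number of doomed entries.

```c++
#include <cstddef>
#include <cstdio>
#include <mutex>

// Hypothetical node and bucket types, standing in for the real hash table.
struct Node {
  int   value;
  Node* next;
};

struct Bucket {
  std::mutex lock;        // stands in for the per-bucket critical section
  Node*      head = nullptr;
};

// Upper bound on how many nodes one pass may unlink while the lock is held,
// analogous in spirit to BULK_DELETE_LIMIT.
static const size_t kBulkDeleteLimit = 256;

// Unlink at most kBulkDeleteLimit nodes matching `eval` inside the critical
// section, stash them in `ndel`, and return how many were collected.
// The caller frees them after the lock is dropped.
template <typename EVAL_FUNC>
size_t bulk_delete_limited(Bucket* bucket, EVAL_FUNC eval, Node** ndel) {
  size_t dels = 0;
  std::lock_guard<std::mutex> guard(bucket->lock);
  Node** prev_next = &bucket->head;
  for (Node* node = bucket->head; node != nullptr && dels < kBulkDeleteLimit; ) {
    Node* next = node->next;
    if (eval(node->value)) {
      *prev_next = next;     // unlink under the lock
      ndel[dels++] = node;   // defer the actual delete
    } else {
      prev_next = &node->next;
    }
    node = next;
  }
  return dels;
}

int main() {
  Bucket bucket;
  // Build a small chain 0..9.
  for (int i = 9; i >= 0; i--) {
    bucket.head = new Node{ i, bucket.head };
  }
  Node* ndel[kBulkDeleteLimit];
  // Remove the even values; free the collected nodes outside the lock.
  size_t dels = bulk_delete_limited(&bucket, [](int v) { return v % 2 == 0; }, ndel);
  for (size_t i = 0; i < dels; i++) {
    delete ndel[i];
  }
  std::printf("deleted %zu nodes\n", dels);
  // Free the remaining nodes.
  for (Node* n = bucket.head; n != nullptr; ) {
    Node* next = n->next;
    delete n;
    n = next;
  }
  return 0;
}
```

A caller that needs to remove more than the limit would loop, freeing the stashed nodes between passes, so no single critical section grows with the number of entries to delete.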
-------------
PR: https://git.openjdk.org/jdk/pull/10983