RFR: 8296401: ConcurrentHashTable::bulk_delete might miss to delete some objects

Leo Korinth lkorinth at openjdk.org
Mon Dec 5 13:38:09 UTC 2022


On Mon, 5 Dec 2022 11:00:22 GMT, Ivan Walulya <iwalulya at openjdk.org> wrote:

> > If the user of the hash table has buckets sized above 256, the exponential growth of the extra array will start at a very small size and will not be optimal. It will work, and that is IMO enough.
> 
> I thought the benefit of having a `BULK_DELETE_LIMIT` was to limit the cleanup done by an individual thread. This effectively limits the duration of the critical section.

That is true. Do you think that is a problem with the new implementation?

-------------

PR: https://git.openjdk.org/jdk/pull/10983
