RFR: 8296401: ConcurrentHashTable::bulk_delete might miss to delete some objects

Leo Korinth lkorinth at openjdk.org
Mon Nov 7 09:31:13 UTC 2022


On Fri, 4 Nov 2022 13:38:23 GMT, Leo Korinth <lkorinth at openjdk.org> wrote:

> ConcurrentHashTable::bulk_delete might fail to delete some objects if a bucket has more than 256 entries. Current uses of ConcurrentHashTable are not harmed by this behaviour. 
> 
> I modified gtest:ConcurrentHashTable to detect the problem (first commit), and fixed the problem in the code (second commit).
> 
> Tests pass tiers 1-3.

Entries will be deleted BULK_DELETE_LIMIT (256) at a time until the bucket is empty; the control flow will then exit via a `break` (instead of a `continue`).
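A minimal sketch of that chunked-deletion pattern (illustrative only, not the actual HotSpot code; `bulk_delete_bucket`, the vector-based bucket, and the predicate type are hypothetical stand-ins): matching entries are removed up to BULK_DELETE_LIMIT per pass, and the same bucket is re-scanned until a pass removes fewer than the limit, so entries beyond the first 256 are not skipped.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative constant matching the limit discussed above.
static const size_t BULK_DELETE_LIMIT = 256;

// Hypothetical sketch: deletes all entries in `bucket` matching `doomed`,
// at most BULK_DELETE_LIMIT per pass, looping until the bucket holds no
// more doomed entries. Returns the total number of entries deleted.
template <typename Pred>
size_t bulk_delete_bucket(std::vector<int>& bucket, Pred doomed) {
  size_t deleted = 0;
  for (;;) {
    std::vector<int> kept;
    size_t removed_this_pass = 0;
    for (int v : bucket) {
      if (removed_this_pass < BULK_DELETE_LIMIT && doomed(v)) {
        ++removed_this_pass;  // delete: drop from the bucket
      } else {
        kept.push_back(v);    // keep (or defer to a later pass)
      }
    }
    bucket.swap(kept);
    deleted += removed_this_pass;
    if (removed_this_pass < BULK_DELETE_LIMIT) {
      break;  // a partial chunk means no doomed entries remain
    }
    // A full chunk was removed: re-scan the same bucket. Without this
    // repeat pass, entries beyond the first 256 would be silently missed.
  }
  return deleted;
}
```

With a bucket of 600 doomed entries, the loop runs three passes (256 + 256 + 88) before breaking, whereas a single pass would have left 344 entries behind.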

-------------

PR: https://git.openjdk.org/jdk/pull/10983
