RFR: 8296401: ConcurrentHashTable::bulk_delete might miss to delete some objects

Leo Korinth lkorinth at openjdk.org
Mon Nov 7 09:01:44 UTC 2022


On Fri, 4 Nov 2022 13:38:23 GMT, Leo Korinth <lkorinth at openjdk.org> wrote:

> ConcurrentHashTable::bulk_delete might fail to delete some objects if a bucket has more than 256 entries. Current uses of ConcurrentHashTable are not harmed by this behaviour.
> 
> I modified gtest:ConcurrentHashTable to detect the problem (first commit), and fixed the problem in the code (second commit).
> 
> Tests pass tier 1-3.

The function `delete_check_nodes` stops deleting after 256 entries:

     if (dels == num_del) {
        break;
     }

The `for (;;)` loop I added keeps removing entries until the bucket is empty.

-------------

PR: https://git.openjdk.org/jdk/pull/10983


More information about the hotspot-dev mailing list