RFR: 8296401: ConcurrentHashTable::bulk_delete might miss to delete some objects [v2]
Axel Boldt-Christmas
aboldtch at openjdk.org
Mon Dec 5 10:42:12 UTC 2022
On Mon, 5 Dec 2022 10:17:18 GMT, Leo Korinth <lkorinth at openjdk.org> wrote:
>> ConcurrentHashTable::bulk_delete might fail to delete some objects if a bucket has more than 256 entries. Current uses of ConcurrentHashTable are not harmed by this behaviour.
>>
>> I modified the ConcurrentHashTable gtest to detect the problem (first commit) and fixed it in the code (second commit).
>>
>> Tests pass tiers 1-3.
>
> Leo Korinth has updated the pull request incrementally with two additional commits since the last revision:
>
> - growable stacked
> - Revert "working!"
>
> This reverts commit 5366f22c7202eaa2182976c084d02e9af4f56de0.
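For context on the failure mode: before the fix, victims beyond the first BULK_DELETE_LIMIT (256) entries in a bucket were never recorded, so they survived the bulk delete. Here is a minimal sketch of the shape of the fix, assuming a simplified `Node`; `ndel_stack`, `extra`, and `BULK_DELETE_LIMIT` are taken from the quoted diff below, while `collect_for_delete` and `eval_f` are illustrative names, not the HotSpot code:

```c++
#include <cstddef>
#include <vector>

struct Node { Node* next; };

static const size_t BULK_DELETE_LIMIT = 256;

// Collect every node that eval_f marks for deletion. The first
// BULK_DELETE_LIMIT victims go into the fixed on-stack array;
// anything beyond that spills into the growable vector instead
// of being silently dropped.
template <typename Eval>
size_t collect_for_delete(Node* head, Eval eval_f,
                          Node* (&ndel_stack)[BULK_DELETE_LIMIT],
                          std::vector<Node*>& extra) {
  size_t nd = 0;
  for (Node* cur = head; cur != nullptr; cur = cur->next) {
    if (eval_f(cur)) {
      if (nd < BULK_DELETE_LIMIT) {
        ndel_stack[nd] = cur;
      } else {
        extra.push_back(cur);  // overflow, previously lost
      }
      nd++;
    }
  }
  return nd;
}
```

The delete loop afterwards indexes into whichever store holds slot `node_it`, which is the selection the quoted line 524 performs.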
src/hotspot/share/utilities/concurrentHashTable.inline.hpp line 524:
> 522: }
> 523: for (size_t node_it = 0; node_it < nd; node_it++) {
> 524: Node* ndel = node_it < BULK_DELETE_LIMIT ? ndel_stack[node_it] : extra.at(node_it - BULK_DELETE_LIMIT);
This needs to be a reference (`Node*& ndel`); otherwise `ndel` is a local copy, and the later `DEBUG_ONLY(ndel = (Node*)POISON_PTR;)` is a no-op that poisons only the copy instead of the slot in `ndel_stack`/`extra`.
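To see why, here is a minimal standalone sketch (not the HotSpot code; `Node`, the vector, and the poison value are stand-ins):

```c++
#include <cassert>
#include <cstdint>
#include <vector>

struct Node { int value; };

// Stand-in value for HotSpot's debug POISON_PTR.
static Node* const POISON_PTR =
    reinterpret_cast<Node*>(uintptr_t(0xf1f1f1f1));

int main() {
  std::vector<Node*> slots = { new Node{1}, new Node{2} };

  // Copy: the poison assignment only overwrites the local.
  for (size_t i = 0; i < slots.size(); i++) {
    Node* ndel = slots[i];   // local copy
    delete ndel;
    ndel = POISON_PTR;       // no-op w.r.t. slots[i]
  }
  // slots still holds the (dangling) original pointers here.

  // Reference: the poison assignment writes through to the
  // container, so stale reuse is catchable in debug builds.
  slots = { new Node{3}, new Node{4} };
  for (size_t i = 0; i < slots.size(); i++) {
    Node*& ndel = slots[i];  // reference into the container
    delete ndel;
    ndel = POISON_PTR;       // actually poisons slots[i]
    assert(slots[i] == POISON_PTR);
  }
}
```

With the copy, the container keeps a dangling pointer; with the reference, any later read of the slot trips over the poison value instead of silently reusing freed memory.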
-------------
PR: https://git.openjdk.org/jdk/pull/10983