RFR: 8296401: ConcurrentHashTable::bulk_delete might miss to delete some objects
Leo Korinth
lkorinth at openjdk.org
Mon Dec 5 10:48:55 UTC 2022
On Mon, 7 Nov 2022 21:01:47 GMT, David Holmes <dholmes at openjdk.org> wrote:
> Thanks for the explanation. What is the point of having a BULK_DELETE_LIMIT?
One advantage is that you can stack-allocate the buffer used to temporarily store the pointers. I guess the original idea was to loop and reuse the stack buffer, as I did in my first implementation (now reverted, thanks to comments from Robbin). The problem with that approach is that you cannot guarantee progress: given a hash collision, you could possibly fill the bucket faster than the system could empty it.
Instead of an extra loop, my new implementation uses a growable array to hold the extra entries. It therefore reads the bucket head only once, and the problem cannot appear. The growable array is initialized with a zero argument, which to my understanding of the code does *not* allocate any backing array. I therefore think the code should be almost equally performant in the normal case, with a bucket size <= 256. If the user of the hash table has buckets larger than 256, the exponential growth of the extra array starts at a very small size and will not be optimal. It will still work, and that is IMO enough.
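To make the shape of the new approach concrete, here is a minimal stand-alone sketch of the pattern, assuming a simplified Node type and using std::vector as a stand-in for the zero-capacity GrowableArray; the names and types here are illustrative, not the actual HotSpot code:

    #include <cstddef>
    #include <vector>

    // Illustrative node type standing in for the hash table's bucket chain.
    struct Node {
      int   value;
      Node* next;
    };

    const size_t BULK_DELETE_LIMIT = 256;

    // One pass over the bucket: the chain head is read only once, matching
    // nodes are unlinked and remembered. The first BULK_DELETE_LIMIT pointers
    // go into a stack buffer; anything beyond that spills into a growable
    // overflow container that starts out empty.
    template <typename EVAL_F, typename DEL_F>
    void bulk_delete_bucket(Node** bucket_head, EVAL_F& eval_f, DEL_F& del_f) {
      Node*  stack_buffer[BULK_DELETE_LIMIT];
      size_t ndel = 0;
      std::vector<Node*> overflow;  // no backing allocation in the common case

      Node** link = bucket_head;    // the bucket head is read here, once
      while (Node* node = *link) {
        if (eval_f(node->value)) {
          *link = node->next;       // unlink the node from the chain
          if (ndel < BULK_DELETE_LIMIT) {
            stack_buffer[ndel++] = node;
          } else {
            overflow.push_back(node);  // grows only for unusually long buckets
          }
        } else {
          link = &node->next;
        }
      }

      // In the real table the nodes are only freed after the required
      // synchronization with concurrent readers; this sketch skips that.
      for (size_t i = 0; i < ndel; i++) {
        del_f(stack_buffer[i]->value);
        delete stack_buffer[i];
      }
      for (Node* node : overflow) {
        del_f(node->value);
        delete node;
      }
    }

In the common case ndel never reaches BULK_DELETE_LIMIT, the overflow container stays empty, and no heap allocation happens at all.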
I have also tried two other solutions:
1) Using a plain array, but then I need to count the length of the list before allocating (and optionally apply the predicate function already during that counting pass). That extra pass might hurt performance in the normal case.
2) Dynamically allocating the growable array only if we need it (see the sketch below). The code was not as clean, although I guess it could be somewhat faster for really big buckets (but at that point performance really does not matter, and we should arguably assert).
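For illustration, option 2 would look roughly like the following hypothetical collector; only the lazy allocation is shown, and none of the names come from the actual patch:

    #include <cstddef>
    #include <vector>

    // Sketch of option 2: heap-allocate the overflow container lazily, the
    // first time the stack buffer runs out of room.
    struct Collector {
      static constexpr size_t BULK_DELETE_LIMIT = 256;
      void*               _stack_buffer[BULK_DELETE_LIMIT];
      size_t              _ndel = 0;
      std::vector<void*>* _overflow = nullptr;  // usually stays null

      void append(void* node) {
        if (_ndel < BULK_DELETE_LIMIT) {
          _stack_buffer[_ndel++] = node;
        } else {
          if (_overflow == nullptr) {
            // Pay for the heap allocation only on very large buckets.
            _overflow = new std::vector<void*>();
          }
          _overflow->push_back(node);
        }
      }

      ~Collector() { delete _overflow; }  // extra bookkeeping on every path
    };

The extra null check and the explicit cleanup illustrate why this variant reads less cleanly than a growable array created with zero capacity.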
Do you have any good code to benchmark this change?
-------------
PR: https://git.openjdk.org/jdk/pull/10983