RFR (S) 8206471: Race with ConcurrentHashTable deleting items on insert with cleanup thread

coleen.phillimore at oracle.com
Tue Jul 10 00:55:25 UTC 2018


Thanks for the code review and suggestions for improvement.
Coleen

On 7/9/18 7:16 PM, David Holmes wrote:
> Looks good Coleen - thanks!
>
> David
>
> On 10/07/2018 3:13 AM, coleen.phillimore at oracle.com wrote:
>>
>>
>> On 7/9/18 7:32 AM, coleen.phillimore at oracle.com wrote:
>>>
>>> On 7/8/18 9:20 PM, David Holmes wrote:
>>>> Hi Coleen,
>>>>
>>>> On 7/07/2018 5:41 AM, coleen.phillimore at oracle.com wrote:
>>>>> Summary: Only fetch Node::next once and use that result.
>>>>>
>>>>> A racing thread could NULL next->next()->next() between two 
>>>>> successive loads. The Node itself is stable until the 
>>>>> write_synchronize(), but the pointers may be updated. See the bug 
>>>>> for more detail.
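
For illustration, a minimal sketch of the double-load hazard described
above, using a simplified Node model on std::atomic rather than the
actual HotSpot sources (racy_advance and fixed_advance are illustrative
names):

    #include <atomic>

    struct Node {
      std::atomic<Node*> _next{nullptr};
      // Stand-in for the load-acquire that Node::next() performs in the
      // real table.
      Node* next() const { return _next.load(std::memory_order_acquire); }
    };

    // Racy pattern: next() is loaded twice, so a concurrent deleter that
    // relinks or NULLs pointers between the two loads can make the second
    // load return something other than what the first load saw.
    Node* racy_advance(Node* node) {
      if (node->next() != nullptr) {      // first load-acquire
        return node->next()->next();      // second load-acquire may differ
      }
      return nullptr;
    }

    // Fixed pattern: fetch next() once and use that result for both the
    // check and the dereference, as the summary above describes.
    Node* fixed_advance(Node* node) {
      Node* next = node->next();          // single load-acquire
      return next != nullptr ? next->next() : nullptr;
    }
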
>>>>>
>>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8206471.01/webrev
>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8206471
>>>>
>>>> The change looks good.
>>>>
>>>> Could there be a similar race at:
>>>>
>>>>  552       bucket->release_assign_node_ptr(rem_n_prev, rem_n->next());
>>>>  553       rem_n = rem_n->next();
>>>>
>>
>> Probably not, because this instance holds the bucket lock, but I'll 
>> change it anyway.
>>
>>>> Even if not, it is marginally more performant to only do the 
>>>> load-acquire once.
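
As a sketch, the single-load variant at lines 552-553 might look like
the following, again with a simplified std::atomic stand-in for the
real ConcurrentHashTable types (release_assign and unlink_and_advance
are illustrative names, not the HotSpot API):

    #include <atomic>

    struct Node {
      std::atomic<Node*> _next{nullptr};
      Node* next() const { return _next.load(std::memory_order_acquire); }
    };

    // Stand-in for the release-store done by release_assign_node_ptr().
    static void release_assign(std::atomic<Node*>* slot, Node* value) {
      slot->store(value, std::memory_order_release);
    }

    // One load-acquire of rem_n->next() feeds both the unlinking store
    // and the advance, instead of loading it once for each.
    static Node* unlink_and_advance(std::atomic<Node*>* rem_n_prev,
                                    Node* rem_n) {
      Node* next = rem_n->next();         // single load-acquire
      release_assign(rem_n_prev, next);   // unlink rem_n from the chain
      return next;                        // new value for rem_n
    }
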
>>>>
>>>> Similarly:
>>>>
>>>>  663 new_table->get_bucket(odd_index)->release_assign_node_ptr(odd,
>>>>  664 aux->next());
>>>>  665 new_table->get_bucket(even_index)->release_assign_node_ptr(even,
>>>>  666 aux->next());
>>>>
>>>> combined with:
>>>>
>>>>  685     aux = aux->next();
>>>>
>>>> makes for 3 load-acquires (and 2 if we take the else at line #675).
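
The same caching would apply to the unzip step at lines 663-685: one
load-acquire of aux->next() can feed both release-stores and the
advance. A condensed sketch under the same simplified model (the real
odd/even branching is elided, and unzip_step is an illustrative name):

    #include <atomic>

    struct Node {
      std::atomic<Node*> _next{nullptr};
      Node* next() const { return _next.load(std::memory_order_acquire); }
    };

    static void release_assign(std::atomic<Node*>* slot, Node* value) {
      slot->store(value, std::memory_order_release);
    }

    // One load-acquire of aux->next() replaces the three separate loads.
    static Node* unzip_step(std::atomic<Node*>* odd_slot,
                            std::atomic<Node*>* even_slot,
                            Node* aux) {
      Node* aux_next = aux->next();         // single load-acquire, reused
      release_assign(odd_slot, aux_next);   // where the odd chain is set
      release_assign(even_slot, aux_next);  // where the even chain is set
      return aux_next;                      // advance: aux = aux->next()
    }
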
>>>>
>>>> And again:
>>>>
>>>>  982       bucket->release_assign_node_ptr(rem_n_prev, rem_n->next());
>>>>  983       rem_n = rem_n->next();
>>>
>> I believe the value of next() is stable in all these cases, and it's 
>> fine to load it only once.
>>
>> open webrev at http://cr.openjdk.java.net/~coleenp/8206471.02/webrev
>>
>> I ran the extensive gtests that Robbin wrote to cover zipping and 
>> unzipping the hashtable, and I'm rerunning hs-tier1,2.
>>
>> Thanks,
>> Coleen
>>
>>> Thank you for noticing these other double loads.  I'll study them to 
>>> see whether there's a race or a reason they have double loads, and 
>>> I'll change them unless there is a reason not to.
>>>
>>> Thanks!
>>> Coleen
>>>
>>>>
>>>> Thanks,
>>>> David
>>>> -----
>>>>
>>>>> Tested with the SymbolTable changes and the tests that had been 
>>>>> failing.  Also tested with mach5 hs-tier1-5 (in progress).
>>>>>
>>>>> This is actually Robbin's fix, and my review is that it looks good.
>>>>>
>>>>> Thanks,
>>>>> Coleen
>>>
>>


