RFR(L): Low-latency hashtable for read-mostly scenarios

Robbin Ehn robbin.ehn at oracle.com
Wed May 2 08:34:30 UTC 2018


Hi Coleen,

On 2018-05-01 00:18, coleen.phillimore at oracle.com wrote:
> 
> Robbin,
> 
> What is gi_bd in these names in the test?  Can you give them more descriptive names?
> 
> cht_gi_bd_insert_new
> cht_gi_bd_get_old
> 
> Are they get/insert bulk/delete?

Yes, fixed. Updated in the original mail.

> 
> I'm fine with the rest of the code.   It's a huge project and it enables 
> concurrent string and symbol table cleanup, so this is a big enhancement.

Thanks, Robbin.

> 
> thanks,
> Coleen
> 
> On 4/26/18 3:38 AM, Robbin Ehn wrote:
>>
>> Hi all, please review.
>>
>> The lower latency of the new GCs puts higher demands on runtime data
>> structures in terms of latency. This is a concurrent hashtable using the
>> global-counter (8195099).
>>
>> Webrev:
>> http://cr.openjdk.java.net/~rehn/8195098/v0/webrev/
>>
>> Bug:
>> https://bugs.openjdk.java.net/browse/JDK-8195098
>>
>> * Readers never block or spin.
>> * Inserts CAS from the read-side; they need to re-CAS during a re-size or
>> deletion in the targeted bucket, or on an insert collision. (Inserts are
>> conceptually readers.)
>> * Deletes lock the targeted bucket; all other buckets are free for operations.
>> * Re-size locks one bucket at a time; all other buckets are free for
>> operations.
>>
>> It does a concurrent re-size by doubling the size and using one more bit of
>> the hash. That means a bucket in the old table contains nodes destined for one
>> of two buckets in the new table. Each node in the chain is sorted into one of
>> the two buckets while concurrent readers proceed. Shrinking just takes two
>> node chains and puts them together in the corresponding bucket. To keep track
>> of what is visible to the concurrent readers it uses a global-counter, which
>> is needed during re-size and for deletes.
>>
>> A gtest is provided which passes on our platforms, and we also have a
>> prototype of the stringtable using this which passes tiers 1-5 on our
>> platforms.
>>
>> Various people have pre-reviewed various parts, thanks! And a special thanks to
>> Coleen for a lot of reviewing!
>>
>> Thanks, Robbin
> 


More information about the hotspot-dev mailing list