RFR(L): 8195098: Low latency hashtable for read-mostly scenarios

Robbin Ehn robbin.ehn at oracle.com
Thu May 3 09:30:43 UTC 2018


Hi all,

After feedback from DavidH (volatile) here is an update:
Inc:
http://cr.openjdk.java.net/~rehn/8195098/v2/inc/webrev/
Full:
http://cr.openjdk.java.net/~rehn/8195098/v2/full/webrev/

Thanks!

/Robbin

On 05/02/2018 11:03 AM, Robbin Ehn wrote:
> Hi all,
> 
> Here is an update with Gerard's and Coleen's input.
> 
> Inc:
> http://cr.openjdk.java.net/~rehn/8195098/v1/inc/webrev/
> Full:
> http://cr.openjdk.java.net/~rehn/8195098/v1/full/webrev/
> 
> Thanks, Robbin
> 
> On 2018-04-26 09:38, Robbin Ehn wrote:
>>
>> Hi all, please review.
>>
>> The lower latency of the new GCs places higher demands on runtime data
>> structures in terms of latency. This is a concurrent hashtable using the
>> global counter (8195099).
>>
>> Webrev:
>> http://cr.openjdk.java.net/~rehn/8195098/v0/webrev/
>>
>> Bug:
>> https://bugs.openjdk.java.net/browse/JDK-8195098
>>
>> * Readers never block or spin.
>> * Inserts CAS from the read side, and need to re-CAS if a resize or a
>> deletion hits the targeted bucket, or on an insert collision. (Inserts
>> are conceptually readers.)
>> * Deletes lock the targeted bucket; all other buckets remain free for
>> operations.
>> * Resize locks one bucket at a time; all other buckets remain free for
>> operations.
>>
>> It resizes concurrently by doubling the table size and using one more
>> bit of the hash. That means a bucket in the old table contains nodes
>> belonging to one of two buckets in the new table. Each node in the chain
>> is sorted into one of the two new buckets while concurrent readers run.
>> Shrinking just takes two node chains and puts them together in the
>> corresponding bucket. To keep track of what is visible to concurrent
>> readers it uses a global counter, needed during resize and for deletes.
>>
>> A gtest is provided which passes on our platforms, and we also have a
>> prototype of the StringTable using this which passes tiers 1-5 on our
>> platforms.
>>
>> Various people have pre-reviewed various parts, thanks! And a special thanks to
>> Coleen for a lot of reviewing!
>>
>> Thanks, Robbin
