RFR: 8355726: LinkedBlockingDeque fixes and improvements
kabutz
duke at openjdk.org
Fri May 2 15:24:45 UTC 2025
On Fri, 2 May 2025 13:19:13 GMT, Doug Lea <dl at openjdk.org> wrote:
>> We logged several bugs on the LinkedBlockingDeque. This aggregates them into a single bug report and PR.
>>
>> 1. LinkedBlockingDeque does not immediately throw InterruptedException in put/take
>>
>> The LinkedBlockingDeque does not behave consistently with other concurrency components. If we call putFirst(), putLast(), takeFirst(), or takeLast() on a thread that has been interrupted, it does not immediately throw an InterruptedException, the way ArrayBlockingQueue and LinkedBlockingQueue do, because we call lock() instead of lockInterruptibly(). It only throws an InterruptedException once the deque is full (on put) or empty (on take). Since interruption is frequently used as a shutdown mechanism, this can prevent code from ever shutting down.
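(A hedged illustration of the difference, using a plain ReentrantLock rather than the deque itself: lock() ignores a pending interrupt, while lockInterruptibly() fails fast, which is the behavior the fix gives the blocking put/take methods. The class name is mine, not from the PR.)

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        Thread.currentThread().interrupt();   // simulate a pending interrupt

        lock.lock();                          // acquires despite the interrupt
        lock.unlock();                        // (interrupt status is still set)

        boolean threwImmediately = false;
        try {
            lock.lockInterruptibly();         // throws without waiting
        } catch (InterruptedException e) {
            threwImmediately = true;
        }
        System.out.println("lockInterruptibly threw: " + threwImmediately);
    }
}
```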
>>
>> 2. LinkedBlockingDeque.clear() should preserve weakly-consistent iterators
>>
>> LinkedBlockingDeque.clear() should preserve weakly-consistent iterators by linking f.prev and f.next back to f, allowing the iterators to continue from the first or last node respectively. This would be consistent with how the other node-based weakly-consistent queues (LinkedBlockingQueue, LinkedTransferQueue, and ConcurrentLinkedQueue/Deque) work.
>>
>> The LBD already supports self-linking, since that is done by the unlinkFirst() and unlinkLast() methods, and the iterators and the spliterator thus all handle self-linked nodes.
>>
>> This can be fixed very easily by linking both f.prev and f.next back to f.
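(A simplified sketch of that fix; Node here is a minimal stand-in for LinkedBlockingDeque's internal node class, not the JDK source.)

```java
public class ClearSelfLinkDemo {
    static final class Node<E> {
        E item;
        Node<E> prev, next;
        Node(E item) { this.item = item; }
    }

    // A clear() that self-links each unlinked node, so a weakly-consistent
    // iterator parked on that node can detect removal (next == this) and
    // resume from the head or tail, as unlinkFirst()/unlinkLast() already
    // arrange for single-node removals.
    static <E> void clear(Node<E> first) {
        for (Node<E> f = first; f != null; ) {
            Node<E> next = f.next;
            f.item = null;
            f.prev = f;   // link f.prev back to f
            f.next = f;   // link f.next back to f
            f = next;
        }
    }

    public static void main(String[] args) {
        Node<String> a = new Node<>("a"), b = new Node<>("b");
        a.next = b; b.prev = a;
        clear(a);
        System.out.println(a.next == a && a.prev == a && b.next == b);
    }
}
```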
>>
>> 3. LinkedBlockingDeque offer() creates nodes even if capacity has been reached
>>
>> The JavaDoc of LinkedBlockingDeque states: "Linked nodes are dynamically created upon each insertion unless this would bring the deque above capacity." In the current implementation, however, nodes are always created, even if the deque is full. This is because count is not volatile, so we only check whether the deque is full inside the linkFirst()/linkLast() methods, by which point we have already locked and created the Node. Instead, count could be made volatile, and we could check it before locking.
>>
>> In the current version, calling offer() on a full LinkedBlockingDeque creates unnecessary objects and contention. Similarly, poll() and peek() could exit before locking by checking the count field.
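(A hedged sketch of that fast path on a toy bounded container, not the JDK implementation; the class and field names are mine. The racy volatile read is safe for offer(): the deque really was full at the instant of the read, which is a valid point at which to report failure, the same pattern LinkedBlockingQueue.offer() uses with its AtomicInteger count.)

```java
import java.util.concurrent.locks.ReentrantLock;

public class FastPathOfferDemo<E> {
    private final int capacity;
    private volatile int count;                 // assumption: count made volatile
    private final ReentrantLock lock = new ReentrantLock();

    FastPathOfferDemo(int capacity) { this.capacity = capacity; }

    public boolean offer(E e) {
        // Fast path: if the container is full, fail before locking and
        // before allocating a node.
        if (count == capacity)
            return false;
        lock.lock();
        try {
            if (count == capacity)              // recheck under the lock
                return false;
            // ... a real deque would allocate and link a Node here ...
            count = count + 1;                  // writes happen only under the lock
            return true;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        FastPathOfferDemo<Integer> d = new FastPathOfferDemo<>(2);
        System.out.println(d.offer(1) + " " + d.offer(2) + " " + d.offer(3));
    }
}
```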
>>
>> 4. LinkedBlockingDeque allows us to overflow size with addAll()
>>
>> In LinkedBlockingDeque.addAll() we first build up the chain of nodes and then add that chain in bulk to the existing nodes. We count the nodes in "int n" and then whilst hol...
>
> I agree that these changes make behavior more consistent with LinkedBlockingQueue, which is worth doing. Thanks for finding straightforward ways to do these that don't impact anything else.
> Changing the count field to volatile and sometimes reading it outside the lock introduces more potential non-sequential consistency (for example, checking size while adding) but does not impact observable guarantees. It may have a small performance impact in either direction that could vary across platforms, but I have not detected any.
> There are possible (not likely, and still legal according to the spec) observable consequences of using lockInterruptibly, but it is better to have them now than to remain surprisingly different from the other blocking queues.
Thanks @DougLea!
-------------
PR Comment: https://git.openjdk.org/jdk/pull/24925#issuecomment-2847463059