RFR: Locked allocation
Zhengyu Gu
zgu at redhat.com
Thu Dec 15 15:11:39 UTC 2016
Looks good!
One minor thing:
ShenandoahHeap::assert_heaplock_owned_by_current_thread() can be a debug_only method.
Thanks,
-Zhengyu
On 12/15/2016 09:40 AM, Roman Kennke wrote:
> So, here comes the update.
>
> - I improved the lock to be a scoped locker (similar to MutexLocker),
> this should help to keep things in order.
> - It also keeps track of the locking thread in debug builds, and
> provides asserts that the current thread holds the lock.
> - I added this check in a few places in ShenandoahFreeSet, and then
> realized that consequently I should also require the same lock in any
> code that reads or modifies the ShenandoahFreeSet structure. Therefore
> I added locking to the few places that build the free list. While
> strictly speaking this is overkill, it doesn't hurt either.
>
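[A minimal, self-contained sketch of the scoped-locker idea described above. The names (HeapLock, HeapLocker) and the use of std::atomic/std::thread are illustrative stand-ins for HotSpot's Atomic::cmpxchg and Thread::current(); the actual code in the webrev may differ.]

```cpp
#include <atomic>
#include <cassert>
#include <thread>

class HeapLock {
  std::atomic<int> _state{0};   // 0 = unlocked, 1 = locked
  std::thread::id _owner{};     // tracked for debug asserts only
public:
  void lock() {
    int expected = 0;
    // CAS spin until we flip 0 -> 1.
    while (!_state.compare_exchange_weak(expected, 1)) {
      expected = 0;
    }
    _owner = std::this_thread::get_id();
  }
  void unlock() {
    _owner = std::thread::id();
    _state.store(0, std::memory_order_release);
  }
  // Analogue of assert_heaplock_owned_by_current_thread().
  void assert_owned_by_current_thread() const {
    assert(_state.load() == 1 && _owner == std::this_thread::get_id());
  }
};

// Scoped locker in the style of MutexLocker: the constructor acquires,
// the destructor releases, so the lock cannot leak on early returns.
class HeapLocker {
  HeapLock& _lock;
public:
  explicit HeapLocker(HeapLock& lock) : _lock(lock) { _lock.lock(); }
  ~HeapLocker() { _lock.unlock(); }
};
```

[The RAII shape is what "keeps things in order": every exit path out of the scope, including early returns, releases the lock.]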
> - I also found the reason for the assert: the implementations of
> current() and next() had been a little inconsistent, which led to the
> allocating thread seeing the same region 2x when hitting the upper
> boundary (i.e. shortly before OOM), and therefore accounting the
> remaining free space 2x. I changed current() to only return the region
> at the current ptr, and next() to only advance that ptr (but not
> return anything), and adjusted the calling code.
>
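[An illustrative sketch (not the actual ShenandoahFreeSet code) of the cursor contract described above: current() is a side-effect-free read of the region at the cursor, and next() only advances the cursor. With this split, reading current() twice without calling next() yields the same region once instead of double-counting it.]

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Region { size_t free_bytes; };

class FreeSetCursor {
  const std::vector<Region>& _regions;
  size_t _current = 0;
public:
  explicit FreeSetCursor(const std::vector<Region>& regions)
    : _regions(regions) {}
  // Returns the region at the cursor, or nullptr past the end.
  // Deliberately has no side effects.
  const Region* current() const {
    return _current < _regions.size() ? &_regions[_current] : nullptr;
  }
  // Advances the cursor; deliberately returns nothing.
  void next() { _current++; }
};
```

[With this contract, summing free space is a plain loop: read current(), accumulate, call next(); each region is visited exactly once even at the upper boundary.]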
> - I fixed the few things that Aleksey and Zhengyu mentioned too.
>
> Tested with specjvm, jmh-specjvm, gc-bench, jtreg in release and
> fastdebug.
>
> http://cr.openjdk.java.net/~rkennke/lockedalloc/webrev.01
>
> Ok?
>
> Roman
>
>
> Am Mittwoch, den 14.12.2016, 19:57 +0100 schrieb Roman Kennke:
>> Am Mittwoch, den 14.12.2016, 13:10 -0500 schrieb Zhengyu Gu:
>>> Great job! It simplifies the logic a lot!
>>>
>>> A few minor suggestions:
>>>
>>> - ShenandoahFreeSet::clear()
>>>
>>> I only see one path to this method, and it is from a safepoint, so
>>> replacing the fence with a safepoint assertion should be appropriate.
>> Ah yes. I was thinking it solved the assert that you and others were
>> facing. My reasoning was that other threads within the same safepoint
>> would need to see the update. However, now that I think about it,
>> those other threads would need to go through our new-fangled lock,
>> and thus a CAS, and thus a fence... hmmm. Will need to try again.
>> You may be right and this fence is bogus.
>>
>>> - Asserting on _heap_lock == 1 on code paths that are protected by
>>> the lock makes the code more readable.
>> Yes. I actually had the same idea: store the locking thread for
>> debug checking, use an opaque lock object, and even add a scoped
>> locker. All that should contribute to sanity.
>>
>>> - Will this lock be hot?
>> I don't think it's very hot.
>>
>>> and you want to check safepoint during spinning?
>> Nope. The whole point of this exercise was to avoid potentially
>> safepointing (and thus requiring oopmap, debug-info, etc blah blah at
>> write barriers) :-)
>>
>>> I wonder if it has impact on TTSP
>> I doubt it. gc-bench didn't show any such thing. In fact, it might
>> be better than before now, at least when you've got threads racing
>> to allocate humongous objects. The previous code was not even
>> guaranteed to complete (it could interleave claiming regions, never
>> finding a contiguous block).
>>
>> Will come up with a patch later. Need food first. ;-)
>>
>> Roman
>>
>>> Thanks,
>>>
>>> -Zhengyu
>>>
>>> On 12/14/2016 11:36 AM, Roman Kennke wrote:
>>>> This patch throws out all the lockfree allocation madness, and
>>>> implements a much simpler locked allocation. Since we can't easily
>>>> use Mutex and friends, and also don't need most of their
>>>> functionality (wait/notify, nesting, etc), I implemented a very
>>>> simple (simple as in, can read-and-understand it in one glance)
>>>> CAS-based spin-lock. This is wrapped around the normal allocation
>>>> path, the humongous allocation path and the heap growing path. It
>>>> does not lock around the call to full-gc, as this involves other
>>>> locks and, as CHF says, there are alligators there ;-)
>>>>
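[A sketch of how a single CAS spin-lock can serialize the allocation paths, as described above. The heap model here (a bump pointer with a growable limit) is invented purely for illustration; only the locking shape mirrors what the mail describes. Note there is no safepoint check in the spin loop, on purpose: the lock is only held for short, non-blocking operations.]

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

static std::atomic<int> heap_lock{0};

// Acquire on construction, release on destruction.
struct ScopedHeapLock {
  ScopedHeapLock() {
    int expected = 0;
    while (!heap_lock.compare_exchange_weak(expected, 1)) expected = 0;
  }
  ~ScopedHeapLock() { heap_lock.store(0, std::memory_order_release); }
};

struct ToyHeap {
  size_t used = 0;
  size_t capacity = 1024;

  // Normal and humongous allocation both run under the same lock, so
  // 'used' can never race ahead of 'capacity' (the inconsistency the
  // locked scheme fixes).
  size_t allocate(size_t bytes) {
    ScopedHeapLock lock;
    if (used + bytes > capacity) {
      grow(bytes);  // heap growing also happens under the lock
    }
    size_t offset = used;
    used += bytes;
    return offset;
  }

private:
  void grow(size_t at_least) {
    while (used + at_least > capacity) capacity *= 2;
  }
};
```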
>>>> This does immensely simplify ShenandoahFreeSet, especially the
>>>> racy humongous allocation path. It does fix the bug that some
>>>> people have encountered about used not being consistent with
>>>> capacity.
>>>>
>>>> I've tested it using gc-bench (no regression in allocation
>>>> throughput), SPECjvm and jtreg tests. Looks all fine.
>>>>
>>>> When reviewing, please pay special attention to the lock in
>>>> ShenandoahHeap::allocate_memory()!
>>>>
>>>> http://cr.openjdk.java.net/~rkennke/lockedalloc/webrev.00/
>>>>
>>>> Ok?
>>>>
>>>> Roman
>>>