RFR 8197518: Kerberos krb5 authentication: AuthList's put method leads to performance issue
Daniel Fuchs
daniel.fuchs at oracle.com
Fri Feb 23 16:38:01 UTC 2018
Hi Max,
85 // has larger timestamp.
86 entries.addFirst(t);
shouldn't you set oldestTime here as well?
best regards,
-- daniel
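A minimal sketch of the point being raised, assuming a simplified AuthList holding a LinkedList of timestamps (class and field names here are illustrative, not the actual sun.security.krb5 code): when the entry is added to an empty list, oldestTime must be initialized too, or it will keep a stale value.

```java
import java.util.LinkedList;

// Hypothetical sketch, not the actual AuthList implementation.
public class AddFirstSketch {
    private final LinkedList<Long> entries = new LinkedList<>();
    private long oldestTime = Long.MIN_VALUE;

    public void put(long timestamp) {
        // Normal case: the new entry has the largest timestamp, goes to the head.
        entries.addFirst(timestamp);
        if (entries.size() == 1) {
            // The question above: when the list was empty, the new entry is
            // also the oldest one, so oldestTime should be set here as well.
            oldestTime = timestamp;
        }
    }

    public long oldestTime() {
        return oldestTime;
    }

    public static void main(String[] args) {
        AddFirstSketch list = new AddFirstSketch();
        list.put(1000L);
        System.out.println("oldestTime=" + list.oldestTime());
    }
}
```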
On 23/02/2018 15:21, Xuelei Fan wrote:
> Looks fine to me.
>
> Xuelei
>
> On 2/23/2018 6:13 AM, Weijun Wang wrote:
>> Updated at http://cr.openjdk.java.net/~weijun/8197518/webrev.02/.
>>
>> 1. ConcurrentHashMap is now used in MemoryCache. The
>> "content.remove(key)" call is gone, but it was effectively useless
>> anyway because rc.isEmpty() will normally never be true. I'll think
>> about how to clean up unused AuthList objects in another RFE.
>>
>> 2. AuthList now removes entries from the tail backwards until one is
>> within the lifespan. I still kept an oldestTime field. In my experiment
>> it is useful, but a bigger value does not help much, so I just
>> hardcoded it to 5 seconds.
>>
>> Thanks
>> Max
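The tail-backward cleanup with a hardcoded 5-second slack described above can be sketched roughly as follows (class and method names are illustrative, not the actual webrev.02 code; timestamps in milliseconds, newest entries at the head of the list):

```java
import java.util.LinkedList;

// Rough sketch of the described cleanup, not the actual AuthList code.
public class TailCleanupSketch {
    // Hardcoded 5-second slack, as mentioned in the mail above.
    private static final long CLEAN_SLACK_MS = 5_000;

    private final LinkedList<Long> entries = new LinkedList<>(); // newest first
    private long oldestTime = Long.MIN_VALUE;

    public void put(long timestamp) {
        if (entries.isEmpty()) {
            oldestTime = timestamp;
        }
        entries.addFirst(timestamp);
    }

    /** Drops entries older than lifespanMs, walking backward from the tail. */
    public void cleanup(long now, long lifespanMs) {
        // Skip the walk entirely unless the oldest entry expired more than
        // CLEAN_SLACK_MS ago -- this is the point of keeping oldestTime.
        if (entries.isEmpty() || now - oldestTime < lifespanMs + CLEAN_SLACK_MS) {
            return;
        }
        while (!entries.isEmpty() && now - entries.getLast() >= lifespanMs) {
            entries.removeLast();
        }
        oldestTime = entries.isEmpty() ? Long.MIN_VALUE : entries.getLast();
    }

    public int size() {
        return entries.size();
    }

    public static void main(String[] args) {
        TailCleanupSketch list = new TailCleanupSketch();
        list.put(0);        // will have expired by now=70_000
        list.put(60_000);   // still within the 60-second lifespan
        list.cleanup(70_000, 60_000);
        System.out.println("size=" + list.size());
    }
}
```

The backward walk touches only the expired tail, so when cleanups are frequent each one is cheap, which is the effect discussed in the rest of the thread.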
>>
>>> On Feb 23, 2018, at 4:57 PM, Weijun Wang <weijun.wang at oracle.com> wrote:
>>>
>>> Brilliant idea. I will do more experiment.
>>>
>>> Thanks
>>> Max
>>>
>>>> On Feb 23, 2018, at 11:25 AM, Xuelei Fan <xuelei.fan at oracle.com> wrote:
>>>>
>>>> Hi Weijun,
>>>>
>>>> I thought more about the performance impact. The impact may mainly
>>>> come from the large number of cached entries.
>>>>
>>>> The current implementation needs to go over the full list: find the
>>>> first expired item and then remove the rest. The previous version had
>>>> an additional pass with entries.indexOf(). Could we just start from
>>>> the end of the list?
>>>>
>>>> while (!entries.isEmpty()) {
>>>>     E e = entries.removeLast();
>>>>     if (!isExpired(e)) {
>>>>         entries.addLast(e);  // not expired: put it back and stop
>>>>         break;
>>>>     }
>>>> }
>>>>
>>>> If the list is really big, and the lifetime is significantly long as
>>>> well (>> 1 minute), iterating from the oldest item (backward from the
>>>> end of the list) may be much more effective. LinkedList itself is not
>>>> synchronized, so as long as there are not too many items to go over,
>>>> the performance should be fine. I'm hesitant to hard-code a cleanup
>>>> every 1 minute. If we clean often, there should not be too many items
>>>> to go over in the list, so we might be able to remove the "at most
>>>> every minute" strategy.
>>>>
>>>> Xuelei
>>>>
>>>> On 2/22/2018 5:55 PM, Weijun Wang wrote:
>>>>> Updated webrev at
>>>>> http://cr.openjdk.java.net/~weijun/8197518/webrev.01/.
>>>>>> On Feb 23, 2018, at 9:02 AM, Weijun Wang <weijun.wang at oracle.com>
>>>>>> wrote:
>>>>>>
>>>>>> You mean I can save it somewhere and only update it when a cleanup
>>>>>> is performed?
>>>>>>
>>>>>> This should be better. Now there will be only isEmpty(),
>>>>>> getFirst() and addFirst(), and one fewer getLast().
>>>>>>
>>>>>> Thanks
>>>>>> Max
>>>>>>
>>>>>>> On Feb 23, 2018, at 1:45 AM, Xuelei Fan <xuelei.fan at oracle.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Looks like list synchronization is a factor in the performance
>>>>>>> impact. Maybe you can keep a private timestamp for the oldest
>>>>>>> entry, so the entries list is not accessed/iterated/cleaned up
>>>>>>> until necessary. The "at most every minute" approach may not be
>>>>>>> a good strategy in some situations.
>>>>> In fact, it's now almost "exactly every minute". In what situations
>>>>> do you think it's not good? I cannot use size() because I have to
>>>>> remember all entries within the lifespan to be correct.
>>>>> Thanks
>>>>> Max
>>>>>>>
>>>>>>> Xuelei
>>>>>>>
>>>>>>> On 2/22/2018 12:36 AM, Weijun Wang wrote:
>>>>>>>> Please take a review at
>>>>>>>> http://cr.openjdk.java.net/~weijun/8197518/webrev.00/
>>>>>>>> Two notes:
>>>>>>>> 1. I tried list.subList(here, end).clear() but it's not faster.
>>>>>>>> 2. I have looked at ConcurrentHashMap + ConcurrentSkipListMap
>>>>>>>> but will need more time to verify its correctness and measure
>>>>>>>> the performance gain. Since the bug is reported on 8u, a safer
>>>>>>>> fix looks better.
>>>>>>>> Noreg-perf.
>>>>>>>> Thanks
>>>>>>>> Max
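The ConcurrentHashMap direction mentioned in note 2 can be sketched as follows (a hypothetical fragment, not the actual MemoryCache code): computeIfAbsent creates the per-key list atomically, so no external lock on the whole map is needed when concurrent clients authenticate.

```java
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical fragment, not the actual sun.security.krb5 MemoryCache.
public class MemoryCacheSketch {
    private final ConcurrentMap<String, List<Long>> content =
            new ConcurrentHashMap<>();

    public void add(String key, long timestamp) {
        // computeIfAbsent creates the per-key list atomically; two threads
        // adding under the same key never observe a missing or torn entry.
        content.computeIfAbsent(key,
                k -> Collections.synchronizedList(new LinkedList<>()))
               .add(timestamp);
    }

    public int count(String key) {
        List<Long> list = content.get(key);
        return list == null ? 0 : list.size();
    }

    public static void main(String[] args) {
        MemoryCacheSketch cache = new MemoryCacheSketch();
        cache.add("client1", 1000L);
        cache.add("client1", 2000L);
        System.out.println("count=" + cache.count("client1"));
    }
}
```

A map populated this way never needs a racy content.remove(key) to stay consistent, which matches the note that the removal was effectively useless.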
>>>>>>
>>>
>>
More information about the security-dev
mailing list