Request for reviews (S): 6916062: assert(_inserts <= _insert_limit, "hash table overflow") in NodeHash::hash_insert
Tom Rodriguez
tom.rodriguez at oracle.com
Thu Sep 30 16:03:50 PDT 2010
Excellent. That all looks great then. Maybe a limit of 16 * unique would be a better default than 1024? 1024 is OK as well, though.
tom
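A limit of this kind amounts to counting iterations of the IGVN driver loop and recording a compilation bailout once the count passes some multiple of the node count. A minimal sketch of that shape, assuming a hypothetical counter and the 16 * unique factor mentioned above (illustrative only, not the code in the webrev):

    // Hypothetical guard in the IGVN driver loop -- illustrative sketch only.
    uint loop_count = 0;
    while (_worklist.size() > 0) {
      Node* n = _worklist.pop();
      if (++loop_count >= 16 * C->unique()) {   // the limit under discussion
        // Bail out gracefully instead of looping (and growing the node hash) forever.
        C->record_method_not_compilable("infinite loop in PhaseIterGVN::optimize");
        return;
      }
      // ... transform n and push its changed uses back on the worklist ...
    }

With a guard like this the compiler skips the method (as in the "COMPILE SKIPPED" message quoted below) rather than growing the node hash table until it overflows.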
On Sep 30, 2010, at 3:14 PM, Vladimir Kozlov wrote:
> I ran full CTW with 10*unique and did not get bailouts.
> Only when I used 2*unique did I get 6 cases. I reran them with the product VM
> and they exited gracefully:
>
> 2234 COMPILE SKIPPED: infinite loop in PhaseIterGVN::optimize (not retryable)
>
> Vladimir
>
> On 9/30/10 12:54 PM, Vladimir Kozlov wrote:
>> I ran the previous code (webrev.02, assert included, fastdebug VM only) through CTW and the assert never triggered. It also passed
>> nsk.stress testing. All exits from igvn.optimize() are checked with C->failing(). But I will check whether it exits
>> correctly with a lower limit.
>>
>> Thanks,
>> Vladimir
>>
>> On 9/30/10 12:19 PM, Tom Rodriguez wrote:
>>> 1024 * unique seems like a high enough limit, but did you check CTW to see if it ever triggers? Also, would we handle a
>>> bailout here gracefully? It would be good to set the limit lower and make sure we handle it properly. I'd be curious
>>> to see some statistics on how many iterations we normally perform in that loop, but maybe that's a question for another
>>> day.
>>>
>>> tom
>>>
>>> On Sep 30, 2010, at 12:00 PM, Vladimir Kozlov wrote:
>>>
>>>> I updated the changes with a bailout from the IGVN optimization loop:
>>>>
>>>> http://cr.openjdk.java.net/~kvn/6916062/webrev.03
>>>>
>>>> Thanks,
>>>> Vladimir
>>>>
>>>> On 9/29/10 12:14 PM, Vladimir Kozlov wrote:
>>>>> Thank you, Tom
>>>>>
>>>>> On 9/29/10 12:00 PM, Tom Rodriguez wrote:
>>>>>> That seems fine for a backport. Maybe we should consider some product-level code to bail out if we do too many rounds
>>>>>> of IGVN?
>>>>>
>>>>> Yes, it is a good idea to bail out. I will add it.
>>>>>
>>>>> Thanks,
>>>>> Vladimir
>>>>>
>>>>>>
>>>>>> tom
>>>>>>
>>>>>> On Sep 28, 2010, at 7:08 PM, Vladimir Kozlov wrote:
>>>>>>
>>>>>>> http://cr.openjdk.java.net/~kvn/6916062/webrev.02
>>>>>>>
>>>>>>> Fixed 6916062: assert(_inserts <= _insert_limit, "hash table overflow") in NodeHash::hash_insert
>>>>>>>
>>>>>>> Changes for 6711117 put a memory node back on the IGVN
>>>>>>> worklist if its address type does not match the cached type,
>>>>>>> in the hope that the address nodes will be processed and
>>>>>>> the type will be updated. But they do not check that
>>>>>>> there are nodes on the worklist which would trigger that
>>>>>>> address processing. As a result the memory node is pulled
>>>>>>> from and pushed onto the IGVN worklist indefinitely.
>>>>>>> On each iteration NodeHash::_inserts is incremented,
>>>>>>> forcing the hash table to grow until the integer value
>>>>>>> NodeHash::_max overflows (becomes 0), which triggers the bug's assert.
>>>>>>>
>>>>>>> I decided to go with the simplest fix, which is easy to backport,
>>>>>>> and to leave fixing the address type inconsistency for later
>>>>>>> (I have bug 6715629).
>>>>>>> I also added asserts to catch such situations.
>>>>>>>
>>>>>>> Tested with failing case, CTW, JPRT.
>>>>>>
>>>
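The overflow described in the original report is just the node hash table doubling until a 32-bit size wraps around to zero. A self-contained illustration of that arithmetic, assuming a small initial table size (plain C++, not HotSpot's NodeHash code):

    #include <cstdio>

    int main() {
      unsigned int max = 16;                 // assumed small initial table size
      for (int grow = 1; grow <= 32; grow++) {
        max <<= 1;                           // each grow doubles the table size
        if (max == 0) {                      // the 32-bit value has wrapped around
          std::printf("table size wrapped to 0 after %d grows\n", grow);
          return 0;
        }
      }
      return 0;
    }

With the bailout in place, compilation should stop well before the table can reach that point.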