Request for review: 7110152 assert(size_in_words <= (julong)max_jint) failed: no overflow

Bengt Rutisson bengt.rutisson at oracle.com
Wed Nov 9 23:36:26 UTC 2011


On 2011-11-10 00:14, Vladimir Kozlov wrote:
> Bengt Rutisson wrote:
>>
>> ...except that I just saw that CollectedHeap::array_allocate_nozero() 
>> uses int.
>
> Yes, this is what I tried to tell you :)

Right. Thanks. :-)

> You also missed code in typeArrayKlass::allocate_permanent() and 
> the nozero code in typeArrayKlass::allocate_common(). Maybe there are 
> also other places where CollectedHeap::array_allocate is called.

Yes, unfortunately this propagates to typeArrayOopDesc::object_size(), 
which returns an int. So the int size really propagates quite far.

I'll dig into it a bit, but if the change gets too large, I guess an 
alternative is to go back to limiting 
arrayOopDesc::max_array_length() to return max_jint - header size. It 
seems like there is a lot of code that relies on an object size 
fitting into an int.
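
Just to make that alternative concrete, it would boil down to 
something like the following (only a rough sketch, not the actual 
arrayOop.hpp code; header_size() and max_jint are the existing names, 
the exact expression is just illustrative):

    // Sketch of the alternative cap. On a 64 bit VM the worst case is
    // an element that takes a full heap word (e.g. double[] or long[]),
    // so an array of at most max_jint - header_size(type) elements has
    // an object size of at most max_jint words, which fits in an int.
    static int32_t max_array_length(BasicType type) {
      return (int32_t)(max_jint - arrayOopDesc::header_size(type));
    }

With a cap like that, the double[MAX_JINT] reproducer below would, as 
far as I can tell, fail with the usual too-large-array error instead 
of hitting the assert, and none of the int-sized code paths would 
need to change.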

> Could you also remove the 'size' argument in 
> post_allocation_install_obj_klass(), which does not use it?

Sure. Done.

Bengt

>
> Thanks,
> Vladimir
>
>>
>> I'll fix that and update the webrev.
>>
>> Bengt
>>
>> On 2011-11-09 23:47, Bengt Rutisson wrote:
>>>
>>> Vladimir,
>>>
>>> Thanks for looking at this!
>>>
>>> On 2011-11-09 23:06, Vladimir Kozlov wrote:
>>>> Why not change it in all CollectedHeap methods?
>>>
>>> The other methods in CollectedHeap that use an int for the size 
>>> handle non-array objects. Those should not have been affected by my 
>>> other change (since it only changed the maximum array length).
>>>
>>> I have not investigated whether or not it is possible to create an 
>>> object (that is not an array) whose size is larger than an int can 
>>> hold. I assume that there is some limit to how many fields an object 
>>> can have, but I don't know what that limit is or whether it 
>>> guarantees that the largest object fits into an int. It also seems 
>>> much less likely that anyone would actually create such a large 
>>> object.
>>>
>>> I'd like to change this only for arrays for now, since we have 
>>> failures in the nightlies. If you think it is ok, I could create a 
>>> separate CR to investigate whether or not the int object sizes in 
>>> CollectedHeap are safe.
>>>
>>> Bengt
>>>
>>>>
>>>> Vladimir
>>>>
>>>> Bengt Rutisson wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> Could I have a couple of reviews for this small change?
>>>>> http://cr.openjdk.java.net/~brutisso/7110152/webrev.01/
>>>>>
>>>>> The problem is that CollectedHeap::array_allocate() takes its size 
>>>>> as an int instead of a size_t. This was exposed by the fix I made 
>>>>> to 7102044 recently.
>>>>>
>>>>> That change allows arrays to be max_jint elements long on 64 bit 
>>>>> systems. This is correct since the resulting size fits in a 64 bit 
>>>>> size_t even when we add the object header. However, that size 
>>>>> overflows an int.
>>>>>
>>>>> Here is a small reproducer that will trigger the assert mentioned 
>>>>> in 7110152 when it is run on a 64 bit VM:
>>>>>
>>>>> public class MaxJIntArray {
>>>>>     public static void main(String[] args) {
>>>>>         final int MAX_JINT = 2147483647;
>>>>>         double[] a = new double[MAX_JINT];
>>>>>         System.out.println("Allocated a[" + a.length + "]");
>>>>>     }
>>>>> }
>>>>>
>>>>> The fix is to remove the assert (since it is too strict) and to 
>>>>> change CollectedHeap::array_allocate() to take a size_t instead 
>>>>> of an int.
>>>>>
>>>>> CR:
>>>>> 7110152 assert(size_in_words <= (julong)max_jint) failed: no overflow
>>>>> http://monaco.sfbay.sun.com/detail.jsf?cr=7110152
>>>>>
>>>>> Bengt
>>>
>>



