Request for review (S): 7102044 G1: VM crashes with assert(old_end != new_end) failed: don't call this otherwise
Bengt Rutisson
bengt.rutisson at oracle.com
Thu Oct 27 05:36:23 PDT 2011
Hi again, Vladimir
Inline...
On 2011-10-26 17:44, Vladimir Kozlov wrote:
> Hi Bengt,
>
> I thought the VM should throw an OOM exception in such cases (as in your
> example) since you can't allocate such arrays in a 32-bit VM. Does the
> size check for OOM happen after the heap resize calculation?
>
> Is SIZE_MAX defined on all platforms?
>
> You still have a problem with short[] and char[] since
> 2*max_jint+header_size will overflow. Also max_array_length() puts
> an unneeded limitation on the max length of double[] and long[] in a
> 64-bit VM; it should be max_jint. I think we can do something like:
>
> const size_t max_words_per_size_t =
> align_size_down((SIZE_MAX/HeapWordSize - header_size(type)),
> MinObjAlignment);
> const size_t max_elements_per_size_t = HeapWordSize *
> max_words_per_size_t / type2aelembytes(type);
> if ((size_t)max_jint < max_elements_per_size_t)
> return max_jint;
> return (int32_t)max_elements_per_size_t;
I think I misunderstood this before. You mean that the whole max length
calculation can be replaced by the code above. I like this, so I changed
the implementation to do as you suggested.
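To convince myself the clamping behaves as intended, I sketched the calculation as a standalone program. It simulates a 32-bit size_t with uint32_t so the result is visible on any host; the constants and helper names are stand-ins for the real HotSpot values, not the actual code:

```cpp
#include <cassert>
#include <cstdint>

typedef uint32_t size32_t;                 // simulate a 32-bit size_t
const size32_t SIZE32_MAX      = UINT32_MAX;
const size32_t HeapWordSize32  = 4;        // bytes per word on 32-bit
const size32_t MinObjAlignment = 2;        // in words (stand-in value)
const int32_t  max_jint        = INT32_MAX;

static size32_t align_size_down(size32_t size, size32_t alignment) {
  return size & ~(alignment - 1);
}

// Sketch of the suggested max_array_length(): find the largest element
// count whose total object size still fits in a (32-bit) size_t, then
// clamp to max_jint. header_words and bytes_per_element stand in for
// header_size(type) and type2aelembytes(type).
static int32_t max_array_length32(size32_t header_words,
                                  size32_t bytes_per_element) {
  const size32_t max_words_per_size_t =
      align_size_down(SIZE32_MAX / HeapWordSize32 - header_words,
                      MinObjAlignment);
  const size32_t max_elements_per_size_t =
      HeapWordSize32 * max_words_per_size_t / bytes_per_element;
  if ((size32_t)max_jint < max_elements_per_size_t)
    return max_jint;
  return (int32_t)max_elements_per_size_t;
}
```

On this simulated 32-bit layout an int[] tops out well below max_jint, so the reproducer's new int[2147483639] is rejected up front, while a byte[] is still clamped at max_jint.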
Two comments. First, this actually changes the implementation for 64-bit. We
will always use max_jint (even for double, long, etc.) where we used to
use an aligned value with the header subtracted. I think this is more
correct, but it is different from before.
Second, an alternative to your implementation would be:
#ifdef _LP64
return max_jint;
#endif
const size_t max_words_per_size_t =
align_size_down((SIZE_MAX/HeapWordSize - header_size(type)),
MinObjAlignment);
const size_t max_elements_per_size_t = HeapWordSize *
max_words_per_size_t / type2aelembytes(type);
return (int32_t)max_elements_per_size_t;
That would be a little clearer and a bit more efficient, but it looks a
little awkward to have an #ifdef here.
What would you prefer?
Bengt
>
>
> check_overflow() should use the julong type for its expressions instead
> of size_t to catch overflow:
>
> +bool arrayOopDesc::check_overflow(BasicType type) {
> + julong length = (julong)max_array_length(type);
> + julong bytes_per_element = (julong)type2aelembytes(type);
> + julong bytes = length * bytes_per_element +
> (julong)header_size_in_bytes();
> + return (julong)(size_t)bytes == bytes;
> +}
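
Just to convince myself that the round-trip cast really catches the overflow, I tried it in isolation, again simulating a 32-bit size_t with uint32_t (the names here are stand-ins, not the actual HotSpot code):

```cpp
#include <cassert>
#include <cstdint>

typedef uint64_t julong;     // HotSpot's 64-bit unsigned integer type
typedef uint32_t size32_t;   // simulate a 32-bit size_t

// Does length * bytes_per_element + header_bytes fit in a (32-bit)
// size_t? The cast to size32_t truncates to 32 bits; casting back and
// comparing detects whether any high bits were lost, i.e. whether the
// byte size overflowed.
static bool fits_in_size_t(julong length, julong bytes_per_element,
                           julong header_bytes) {
  julong bytes = length * bytes_per_element + header_bytes;
  return (julong)(size32_t)bytes == bytes;
}
```

With the reproducer's int[2147483639] and an assumed 12-byte header, the byte size is about 8.6 GB, so the check fails, while a size that fits passes.
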
>
> Rename check_overflow() to check_max_length_overflow() or something.
>
> I don't think we need the incorrect old_max_array_length() method and
> the corresponding checks for LP64.
>
> Initialization and execution of additional JPRT tests will increase
> jobs time. I think you could add InternalVMTests to existing client
> and server tests:
>
> < $(PRODUCT_HOME)/bin/java $(JAVA_OPTIONS) -version
> ---
> > $(PRODUCT_HOME)/bin/java $(JAVA_OPTIONS)
> -XX:+IgnoreUnrecognizedVMOptions -XX:+ExecuteInternalVMTests -version
>
> regards,
> Vladimir
>
> On 10/26/11 6:18 AM, Bengt Rutisson wrote:
>>
>> Hi all,
>>
>> Can I get a couple of reviews to this fairly small change?
>> http://cr.openjdk.java.net/~brutisso/7102044/webrev.01/
>>
>> Sorry for the wide distribution, but the bug is reported against GC,
>> the actual change (in arrayOop.hpp) is, I think, a runtime
>> responsibility, and the method that is being changed is used in a few
>> places in C2. So, I figured I'd like everyone to
>> know that this is about to go in.
>>
>> CR:
>> 7102044 G1: VM crashes with assert(old_end != new_end) failed: don't
>> call this otherwise
>> http://monaco.sfbay.sun.com/detail.jsf?cr=7102044
>>
>> Background
>> arrayOopDesc::max_array_length() returns the maximum length an array
>> can have based on the type of the array elements.
>> This is passed on through the allocation path as a word size. When
>> we need to expand the heap we have to convert the
>> word size to a byte size. The problem is that on 32-bit platforms
>> this conversion may overflow a size_t, which means we may
>> call expand() with a much smaller size than the required
>> size, or even with a size of 0.
>>
>> Here is a small reproducer that will trigger the assert mentioned in
>> the CR if we run it with -XX:+UseG1GC:
>>
>> public class LargeObj {
>> public static void main(String[] args) {
>> int[] a = new int[2147483639];
>> }
>> }
>>
>> It is not only G1 that has this problem. ParallelGC has the same
>> issue; it just doesn't have an assert to detect it. It
>> can be argued that the GCs should check for overflows, but since
>> arrayOopDesc::max_array_length() catches the problem earlier
>> and solves it for all GCs, I think it is the better place to
>> fix the problem.
>>
>> I added some unit tests to test the new implementation. To make sure
>> that these tests are run I had to update some more
>> files. The actual change is in arrayOop.hpp. The rest of the files
>> have just been changed to allow the unit tests to run.
>>
>> A few questions:
>>
>> (1) In arrayOopDesc::max_array_length() I use MIN2 to make sure that
>> we don't overflow a size_t. This is only needed on
>> 32-bit platforms. I prefer this general code to #ifndef
>> _LP64. Any thoughts?
>>
>> (2) I made the unit tests run as JPRT make targets. I guess it can
>> also be done as jtreg tests, but that way it would
>> only be executed for PIT etc. Any thoughts on this?
>>
>> Thanks,
>> Bengt
More information about the hotspot-compiler-dev mailing list