RFR 8134995(M): [REDO] GC: implement ranges (optionally constraints) for those flags that have them missing

sangheon.kim sangheon.kim at oracle.com
Wed Sep 23 21:00:04 UTC 2015



On 09/23/2015 01:35 PM, Kim Barrett wrote:
> On Sep 21, 2015, at 7:05 PM, sangheon.kim <sangheon.kim at oracle.com> wrote:
>> On 09/21/2015 02:59 PM, Kim Barrett wrote:
>>> On Sep 10, 2015, at 8:01 PM, sangheon.kim <sangheon.kim at oracle.com> wrote:
>>>> CR:
>>>> https://bugs.openjdk.java.net/browse/JDK-8134995
>>>>
>>>> Webrev:
>>>> http://cr.openjdk.java.net/~sangheki/8134995/webrev.00/
>>>> http://cr.openjdk.java.net/~sangheki/8134995/webrev.00_to_8078555
>>>>
>>>> Testing:
>>>> JPRT, UTE(vm.quick-pcl) and test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java.
>>> ------------------------------------------------------------------------------
>>> src/share/vm/gc/g1/g1_globals.hpp
>>>   142   product(intx, G1ConcRefinementServiceIntervalMillis, 300,                 \
>>> ...
>>>   145           range(0, max_jint)                                                \
>>>
>>> Why is this being changed from max_intx to max_jint?
>> This option is passed to a 'long' parameter, and 'long' is 32 bits on Windows.
> I think you are saying that on Windows the value of this option is
> passed to some API that is limited to a 32bit value?  If so, then OK.
Yes, you are right.
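
To illustrate, here is a minimal standalone sketch (not the actual HotSpot
call site; 'wait_millis' and the surrounding code are hypothetical stand-ins)
of why a value that is eventually passed as a 'long' has to be capped at
max_jint rather than max_intx:

  // Minimal sketch, assuming the flag value ends up in a call taking a
  // 'long' timeout. On Windows (LLP64) 'long' is only 32 bits, so the
  // flag's upper bound is max_jint instead of max_intx.
  #include <cstdint>
  #include <cstdio>

  // Hypothetical stand-in for the real wait/sleep call taking 'long' millis.
  static void wait_millis(long timeout_ms) {
    std::printf("waiting %ld ms\n", timeout_ms);
  }

  int main() {
    const int64_t max_jint = INT32_MAX;  // upper bound used in range(0, max_jint)
    int64_t interval = 300;              // default of G1ConcRefinementServiceIntervalMillis

    // The narrowing cast is only lossless because the range check guarantees
    // the value fits in 32 bits, even where sizeof(long) == 4 (Windows/LLP64).
    if (interval >= 0 && interval <= max_jint) {
      wait_millis(static_cast<long>(interval));
    }
    return 0;
  }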

>
>>> ------------------------------------------------------------------------------
>>> src/share/vm/gc/g1/g1_globals.hpp
>>>   169   develop(intx, G1RSetRegionEntriesBase, 256,                               \
>>>   173   product(intx, G1RSetRegionEntries, 0,                                     \
>>>   179   develop(intx, G1RSetSparseRegionEntriesBase, 4,                           \
>>>   184   product(intx, G1RSetSparseRegionEntries, 0,                               \
>>>   240   product(uintx, G1ConcRefinementThreads, 0,                                \
>>>
>>> All of these have their max range value changed to be divided by 4.
>>> What's this about?  What is this magic "4"?
>> 'G1RSetRegionEntries', 'G1RSetSparseRegionEntries' and 'G1ConcRefinementThreads' are ultimately multiplied by the pointer size.
>> So we need to divide by 4 on 32-bit to avoid overflow. We don't need this on 64-bit, but I thought it would still be enough for 64-bit.
>> 'G1RSetRegionEntriesBase' and 'G1RSetSparseRegionEntriesBase' were changed to have the same range as 'G1RSetRegionEntries' and 'G1RSetSparseRegionEntries'.
> Dividing by 4 doesn't account for being multiplied by pointer size on
> a 64bit platform.
You are right; this is a limitation for 32-bit platforms.
Both 'jint' and 'intx' are 32 bits on a 32-bit platform, and these options
are used as 'size_t' values after being multiplied by the pointer size,
so we need to divide the upper bound by 4 on 32-bit platforms.
On a 64-bit platform we don't need to divide by 4, since 'size_t' is
big enough to hold 'max_jint * 8', but I think the value divided by 4
is still a large enough upper bound for 64-bit platforms.

Jon also pointed out that the '4' looks like a magic number, so I suggested
using 'jintSize' instead.
What do you think?
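
For reference, a small standalone sketch of the arithmetic (illustrative
names only, not HotSpot code): with a 32-bit 'size_t', max_jint multiplied
by the 4-byte pointer size overflows, while (max_jint-1)/4 multiplied by 4
still fits:

  #include <cstdint>
  #include <cstdio>

  int main() {
    const uint64_t max_jint      = INT32_MAX;   // 2^31 - 1
    const uint64_t ptr_size_32   = 4;           // pointer size on a 32-bit VM (jintSize)
    const uint64_t size_t_max_32 = UINT32_MAX;  // 2^32 - 1, i.e. 32-bit size_t max

    // Without the cap: max_jint * 4 is about 2^33, which overflows a 32-bit size_t.
    std::printf("uncapped: %llu overflows: %s\n",
                (unsigned long long)(max_jint * ptr_size_32),
                (max_jint * ptr_size_32 > size_t_max_32) ? "yes" : "no");

    // With range(0, (max_jint-1)/4): the product stays under 2^31 and fits.
    const uint64_t capped = (max_jint - 1) / ptr_size_32;
    std::printf("capped:   %llu overflows: %s\n",
                (unsigned long long)(capped * ptr_size_32),
                (capped * ptr_size_32 > size_t_max_32) ? "yes" : "no");
    return 0;
  }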

> In light of that, I don't understand this part of
> your response: "We don't need this on 64-bit, but I thought it would
> still be enough for 64-bit."
I think the explanation above covers this.

>
>>> ------------------------------------------------------------------------------
>>> src/share/vm/gc/g1/g1_globals.hpp
>>>   240   product(uintx, G1ConcRefinementThreads, 0,                                \
>>> ...
>>>   243           range(0, (max_jint-1)/4)                                          \
>>>
>>> I know we discussed the problem of thread counts.  I'd suggested
>>> perhaps basing the upper bound on the number of processors.  (Some
>>> care might be needed for uniprocessor systems.) I couldn't find any
>>> followup on that suggestion though.
>> At first I agreed with your suggestion of using the number of processors if we need an upper bound.
>> But in the email thread where we discussed this, I decided not to handle OOM in our range/constraint validation framework.
>> Dmitry told us the testing framework is fine with OOM (it only checks for an exit value of 1), and Gerard also agreed not to handle it in validation.
>> So I think the only thing we need to do for this option is to exclude it from the test.
>>
>> If we don't exclude tests that use too many resources, they could cause problems for other tests running in parallel.
>> So it would be better to exclude these tests.
> See my response to Jon on this part.
>
> I'm OK with going ahead with this changeset despite the above open
> discussions.  The rest of the changes look good to me, and these
> couple of lingering items are no worse than, and perhaps an
> improvement on, the status quo.  We can follow up on the above items
> separately.
Okay, following up separately sounds like a good idea.

Thanks,
Sangheon

