RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code

Per Liden per.liden at oracle.com
Mon Jun 18 09:55:52 UTC 2018


On 06/14/2018 05:01 PM, Chris Phillips wrote:
> Hi
> Any further comments or changes?
> On 06/06/18 05:56 PM, Chris Phillips wrote:
>> Hi Per,
>>
>> On 06/06/18 05:48 PM, Per Liden wrote:
>>> Hi Chris,
>>>
>>> On 06/06/2018 11:15 PM, Chris Phillips wrote:
>>>> Hi Per,
>>>>
>>>> On 06/06/18 04:47 PM, Per Liden wrote:
>>>>> Hi Chris,
>>>>>
>>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 06/06/18 02:23 PM, Per Liden wrote:
>>>>>>> On 2018-06-06 18:29, Andrew Haley wrote:
>>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote:
>>>>>>>>> Please review this set of changes to shared code
>>>>>>>>> related to S390 (31bit) Zero self-build type mismatch failures.
>>>>>>>>>
>>>>>>>>> Bug:    https://bugs.openjdk.java.net/browse/JDK-8203030
>>>>>>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0
>>>>>>>>
>>>>>>>> Can you explain this a little more?  What is the type of size_t on
>>>>>>>> s390x?  What is the type of uintptr_t?  What are the errors?
>>>>>>>
>>>>>>> I would like to understand this too.
>>>>>>>
>>>>>>> cheers,
>>>>>>> Per
>>>>>>>
>>>>>>>
>>>>>> Quoting from the original bug's review request:
>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html
>>>>>>
>>>>>> "This
>>>>>> is a problem when one parameter is of size_t type and the second of
>>>>>> uintx type and the platform has size_t defined as eg. unsigned long as
>>>>>> on s390 (32-bit)."
>>>>>
>>>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t are
>>>>> on s390?
>>>> See Dan's explanation.
>>>>>
>>>>> I fail to see how any of this matters to _entries here? What am I
>>>>> missing?
>>>>>
>>>>
>>>> By changing the type to match its actual usage, we avoid the need
>>>> to patch src/hotspot/share/gc/g1/g1StringDedupTable.cpp around
>>>> line 617; since the usage is consistent and local, I patched at
>>>> the definition.
>>>>
>>>> - _table->_entries, percent_of(_table->_entries, _table->_size),
>>>> _entry_cache->size(), _entries_added, _entries_removed);
>>>> + _table->_entries, percent_of( (size_t)(_table->_entries),
>>>> _table->_size), _entry_cache->size(), _entries_added, _entries_removed);
>>>>
>>>> percent_of will complain about types otherwise.
>>>
>>> Ok, so why don't you just cast it in the call to percent_of? Your
>>> current patch has ripple effects that you fail to take into account. For
>>> example, _entries is still printed using UINTX_FORMAT and compared
>>> against other uintx variables. You're now mixing types in an unsound way.
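
Put differently (a hypothetical sketch with made-up names, not the actual G1
dedup table code): keeping the field as uintx and casting only at the one call
that needs a size_t/size_t pair leaves every other UINTX_FORMAT print and
uintx comparison untouched:

  #include <cinttypes>
  #include <cstdint>
  #include <cstdio>

  typedef uintptr_t uintx;  // HotSpot-style alias, repeated here for the sketch

  template <typename T>
  double percent_of_like(T numerator, T denominator) {
    return denominator != 0 ? 100.0 * (double)numerator / (double)denominator
                            : 0.0;
  }

  struct DedupTableLike {
    size_t _size;
    uintx  _entries;  // stays uintx: printed and compared as uintx everywhere else

    void print_statistics() const {
      // The cast is confined to the one call that needs two size_t arguments.
      printf("entries %" PRIuPTR ", %.1f%% full\n",
             _entries, percent_of_like((size_t)_entries, _size));
    }
  };

  int main() {
    DedupTableLike t = { 1024, 256 };
    t.print_statistics();
    return 0;
  }
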
>>
>> Hmm, missed that, so I will do the cast instead, as you suggest.
>> (Fixing at the definition is what was suggested the last time
>> around, so I tried to do that where it was consistent; obviously
>> this one is not.)
>> Thanks.
>>
>>> cheers,
>>> Per
>>>
>>>>
>>>>
>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp
>>>>> @@ -120,11 +120,11 @@
>>>>>      // Cache for reuse and fast alloc/free of table entries.
>>>>>      static G1StringDedupEntryCache* _entry_cache;
>>>>>
>>>>>      G1StringDedupEntry**            _buckets;
>>>>>      size_t                          _size;
>>>>> -  uintx                           _entries;
>>>>> +  size_t                          _entries;
>>>>>      uintx                           _shrink_threshold;
>>>>>      uintx                           _grow_threshold;
>>>>>      bool                            _rehash_needed;
>>>>>
>>>>> cheers,
>>>>> Per
>>>>>
>>>>>>
>>>>>> Hope that helps,
>>>>>> Chris
>>>>>>
>>>>>> (I'll answer further if needed but the info is in the bugs and
>>>>>> review thread mostly)
>>>>>> See:
>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030
>>>>>> and:
>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html
>>>>>>
>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938
>>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459
>>>>>> For more info.
>>>>>>
>>>>>
>>>>>
>>>
>>>
>> Cheers!
>> Chris
>>
>>
>>
> 
> Finally got through testing and the submit run again after Per's
> requested change; here's the new webrev:
>     http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2
> Attached is the passing run from the submit queue.
> 
> Please review...


src/hotspot/share/gc/cms/cms_globals.hpp
----------------------------------------
Instead of changing the type of ParGCDesiredObjsFromOverflowList, I'd 
suggest you just change the single place where you need a cast, in 
ParScanThreadState::take_from_overflow_stack().
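
Roughly like this (a sketch with hypothetical stand-in names, not the exact
ParNew code): the uintx flag keeps its type and is converted exactly once,
where it meets a size_t computation:

  #include <algorithm>
  #include <cstddef>
  #include <cstdint>

  typedef uintptr_t uintx;

  // Hypothetical stand-ins for the flag and the overflow stack query.
  static uintx  ParGCDesiredObjsFromOverflowList_like = 20;
  static size_t overflow_stack_size_like() { return 100; }

  static size_t objs_to_take_from_overflow() {
    // Single call-site cast: the flag stays uintx everywhere else.
    return std::min((size_t)ParGCDesiredObjsFromOverflowList_like,
                    overflow_stack_size_like());
  }

  int main() {
    return objs_to_take_from_overflow() == 20 ? 0 : 1;
  }
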

If you change the type of ParGCDesiredObjsFromOverflowList, you also 
have to clean up a number of places where it's already explicitly cast 
to size_t in concurrentMarkSweepGeneration.cpp.


src/hotspot/share/gc/parallel/parallel_globals.hpp
--------------------------------------------------
Please also change the type of ParallelOldDeadWoodLimiterMean to size_t.


src/hotspot/share/gc/parallel/psParallelCompact.cpp
---------------------------------------------------
No need to cast ParallelOldDeadWoodLimiterStdDev, since you've already 
changed its type. And if you change ParallelOldDeadWoodLimiterMean to 
size_t as well, you don't need to touch this file at all.


src/hotspot/share/runtime/globals.hpp
-------------------------------------
-define_pd_global(uintx,  InitialCodeCacheSize,       160*K);
-define_pd_global(uintx,  ReservedCodeCacheSize,      32*M);
+define_pd_global(size_t,  InitialCodeCacheSize,       160*K);
+define_pd_global(size_t,  ReservedCodeCacheSize,      32*M);

I would avoid changing these types, otherwise you need to go around and 
clean up a number of other places where it says it's a uintx, like here:

1909   product_pd(uintx, InitialCodeCacheSize,                          \
1910           "Initial code cache size (in bytes)")                    \
1911           range(os::vm_page_size(), max_uintx)                     \

Also, it seems you've already added the cast you need for 
InitialCodeCacheSize in codeCache.cpp, so that type change looks 
unnecessary.
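
The underlying issue (sketched here in simplified form with hypothetical
names, not our actual flag macros) is that the shared flag declaration and the
platform default are two sides of the same variable, so changing the type of
only one side just moves the mismatch somewhere else:

  #include <cstddef>
  #include <cstdint>

  typedef uintptr_t uintx;

  // Platform side: the pd default value (simplified stand-in).
  const size_t pd_InitialCodeCacheSize_like = 160 * 1024;

  // Shared side: the flag is still declared and used as uintx, so every
  // UINTX_FORMAT print, range(..., max_uintx) check and comparison assumes
  // that type; on a target where size_t and uintx are distinct types, the
  // mismatch simply reappears here instead of at the original call site.
  uintx InitialCodeCacheSize_like = (uintx)pd_InitialCodeCacheSize_like;

  int main() {
    return InitialCodeCacheSize_like == 160u * 1024u ? 0 : 1;
  }
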


Btw, patch no longer applies to the latest jdk/jdk.


cheers,
Per

> 
> Chris
> 

