RFR (XXS): 8079156: 32 bit Java 9-fastdebug hit assertion in client mode with StackShadowPages flag value from 32 to 50
David Holmes
david.holmes at oracle.com
Tue Jul 14 23:55:28 UTC 2015
This simple change seems fine to me too, particularly with the RFE that
was filed.
Thanks,
David
On 15/07/2015 12:06 AM, Coleen Phillimore wrote:
>
> 1536 looks fine to me, but keep your comment. I don't need to see
> another webrev and I'll sponsor it for you.
>
> Thanks,
> Coleen
>
> On 7/14/15 10:01 AM, gerard ziemski wrote:
>> David, Coleen,
>>
>> Should I post a new webrev with the code Vladimir just suggested, i.e.:
>>
>> CodeBuffer buffer("deopt_blob", 1536, 1024);
>>
>> or are we OK to proceed as is?
>>
>>
>> cheers
>>
>> On 07/13/2015 07:42 PM, Vladimir Kozlov wrote:
>>> On David's question: the size of the code is rounded to 64 bytes (the minimum
>>> allocation unit in the CodeCache), so you don't need to round it yourself.
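>>>
>>> (For example - assuming the rounding is up to the next 64-byte boundary -
>>> a 1500-byte request would effectively occupy 1536 = 24*64 bytes; the
>>> 1536 suggested below is already an exact multiple of 64.)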
>>>
>>> We do calculate additional padding on sparc, taking into account the
>>> instruction sizes and the number of StackShadowPages (which the
>>> banging code loops over):
>>>
>>> #ifdef ASSERT
>>>   if (UseStackBanging) {
>>>     // extra room for the shadow-page banging loop
>>>     pad += StackShadowPages*16 + 32;
>>>   }
>>> #endif
>>>
>>> But on x86 the instruction sizes are variable, so it is not easy to
>>> pre-calculate the padding.
>>> So we simply set a big number, and if we hit the assert in
>>> codeBuffer.hpp we increase it:
>>>
>>> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/file/3e15bdb908cb/src/cpu/x86/vm/sharedRuntime_x86_64.cpp#l2841
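>>>
>>> (A minimal sketch of that pattern, with hypothetical sizes - the real
>>> emitter and assert live in sharedRuntime_x86_*.cpp and codeBuffer.hpp:)
>>>
>>> // Pick a generous fixed instruction-section size; if the emitted code
>>> // ever outgrows it, the CodeBuffer bounds assert fires in a fastdebug
>>> // build and we simply bump the constant.
>>> CodeBuffer buffer("deopt_blob", 1536 /* insts */, 1024 /* locs */);
>>> MacroAssembler* masm = new MacroAssembler(&buffer);
>>> // ... emit the blob, including the StackShadowPages banging loop ...
>>> // Overflowing the instruction section trips the assert in codeBuffer.hpp.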
>>>
>>>
>>> I would suggest simply increasing it by 512 (32-bit code is smaller
>>> than 64-bit) and being done with it:
>>>
>>> CodeBuffer buffer("deopt_blob", 1536, 1024);
>>>
>>> Thanks,
>>> Vladimir
>>>
>>> On 7/13/15 10:38 AM, Coleen Phillimore wrote:
>>>>
>>>> It would be nice to get an opinion from the compiler group. The
>>>> deopt_blob had this stack banging as some sort of
>>>> assertion check that the underlying compiled code already had the
>>>> stack banging. I'm not sure how helpful it is in finding any problems
>>>> if it crashes here, because I'm not sure how well
>>>> stack walking for implicit exceptions works through
>>>> a deopt blob. My thought is that the code never expects an implicit
>>>> exception in a deopt blob, so it wouldn't work very well.
>>>>
>>>> If this is the case, Gerard could simply remove this code from the
>>>> deopt blob, avoiding the sizing issue entirely.
>>>>
>>>> Thanks,
>>>> Coleen
>>>>
>>>>
>>>> On 7/13/15 10:17 AM, gerard ziemski wrote:
>>>>> hi David,
>>>>>
>>>>> On 07/13/2015 12:34 AM, David Holmes wrote:
>>>>>> Hi Gerard,
>>>>>>
>>>>>> On 11/07/2015 7:09 AM, gerard ziemski wrote:
>>>>>>> (resending - forgot to include the issue number in the title)
>>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> Please review this very small fix:
>>>>>>>
>>>>>>> bug: https://bugs.openjdk.java.net/browse/JDK-8079156
>>>>>>> webrev: http://cr.openjdk.java.net/~gziemski/8079156_rev0
>>>>>>
>>>>>> I'd like to hear from the compiler folk about this bug as I'm
>>>>>> unclear on why StackBanging adds to the code buffer,
>>>>>> and why this suddenly seems to be a new problem - did something
>>>>>> change? Or have we not exercised the range of values
>>>>>> before?
>>>>>
>>>>> My best educated guess is that the issue was always there, but
>>>>> we never exercised that path - Dmitry only added
>>>>> his testing framework for exercising the ranges after the
>>>>> range/constraint check feature went in recently, and
>>>>> that's when we found it.
>>>>>
>>>>>
>>>>>>
>>>>>> I'd also like to understand whether the code buffer resizing
>>>>>> should be rounded up (or whether it will be rounded up
>>>>>> internally)? e.g. power of two, multiple of nK for some n etc.
>>>>>
>>>>> I checked the instr sizes of existing CodeBuffers, and some of the
>>>>> values I saw were 416, 536, 544, 560 and 568, so I'm not
>>>>> sure what the constraint here is, if there really is one (other than
>>>>> being divisible by 4, which 112 is as well).
>>>>>
>>>>>
>>>>>>
>>>>>> The 112 value seems odd for 50 pages - is this 2 bytes (words?)
>>>>>> per page plus some fixed overhead? Can it be
>>>>>> expressed as a function of StackShadowPages rather than hardwiring
>>>>>> to 112 which only works for values < 50?
>>>>>
>>>>> Hardcoding the value for 50 pages should be OK here since that's
>>>>> the max value that StackShadowPages can take.
>>>>>
>>>>> Expressing it as some function would not be all that simple - you
>>>>> would need to take into account that the default size
>>>>> is enough for some number of StackShadowPages (i.e. 32), then find out the
>>>>> fixed size for the stack banging function. In the end
>>>>> you would end up with some hardcoded values anyhow, so why not make
>>>>> it super simple as we did here?
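>>>>>
>>>>> (For illustration, such a function would look roughly like the sketch
>>>>> below - the per-page and baseline byte counts are made-up placeholders,
>>>>> which is exactly the problem:)
>>>>>
>>>>> // Hypothetical sizing helper; the constants are guesses, not measured.
>>>>> static int deopt_blob_insts_size() {
>>>>>   const int default_size   = 1024; // already enough for <= 32 shadow pages
>>>>>   const int baseline_pages = 32;
>>>>>   const int bytes_per_bang = 8;    // guess at the per-page banging code
>>>>>   int extra = MAX2(0, (int)StackShadowPages - baseline_pages);
>>>>>   return default_size + extra * bytes_per_bang;
>>>>> }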
>>>>>
>>>>> The other way is to calculate things dynamically and I actually did
>>>>> that: my first fix was based on creating a temp
>>>>> CodeBuffer and feeding it only shadow stack banging code to find
>>>>> out the exact size requirement for that code, but I
>>>>> was told that this might confuse some compiler code later that
>>>>> wouldn't expect it. The other unknown was whether the
>>>>> temp code buffer code actually made it into the cache (is it
>>>>> flushed by the destructor?). I tried to find a way to
>>>>> wipe out the instr section before the destructor, but couldn't find
>>>>> any APIs for doing so.
>>>>>
>>>>> I don't know the answers to those issues, so even though I liked
>>>>> the idea of using a temp buffer to find out precisely
>>>>> how much more memory we used, in the end I settled on the simplest
>>>>> solution that works.
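>>>>>
>>>>> (Roughly, that first attempt looked like the sketch below - a
>>>>> hypothetical reconstruction, not the actual code from my webrev:)
>>>>>
>>>>> // Emit only the shadow-page banging code into a scratch CodeBuffer and
>>>>> // read back how many instruction bytes it needed. The open questions:
>>>>> // does the scratch code end up in the code cache, and can it be wiped?
>>>>> int measured_bang_size() {
>>>>>   ResourceMark rm;
>>>>>   CodeBuffer scratch("scratch_bang", 2048, 1024);
>>>>>   MacroAssembler* masm = new MacroAssembler(&scratch);
>>>>>   // ... emit just the banging loop here, as the real emitter does ...
>>>>>   return scratch.insts_size();
>>>>> }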
>>>>>
>>>>> Would folks from the compiler group like to comment?
>>>>>
>>>>>
>>>>> cheers
>>>>>
>>>>
>>>
>>>
>>
>