Casting reference array to any-T array.

Michael Barker mikeb01 at gmail.com
Thu Jan 8 01:20:37 UTC 2015


Aaah, okay.  For some reason I'd convinced myself that the VM would have
the mechanics to actually do the specialisation.

On 8 January 2015 at 13:35, Vitaly Davidovich <vitalyd at gmail.com> wrote:

> Problem is specialization is javac-time, VM not involved.
>
> Sent from my phone
> On Jan 7, 2015 7:33 PM, "Michael Barker" <mikeb01 at gmail.com> wrote:
>
>> Yes, but taken a slight step further.  As you point out, specialising
>> everything will lead to bloating the number of classes.  I was thinking
>> about Hotspot specialising some combinations of generic classes and
>> specific reference types based on some heuristic/profiling information.
>>
>> On 8 January 2015 at 11:32, Vitaly Davidovich <vitalyd at gmail.com> wrote:
>>
>>> Ah, you're talking about specialized classes as a whole (I was referring
>>> to just the arrays aspect).  Yes, if it were to specialize every single
>>> type, then you'd get better type information.  Downside is you now explode
>>> the number of method definitions in the runtime.  In .NET, for example,
>>> generic methods are not specialized for reference types, in part for this
>>> reason I believe.  Generally speaking, the downside to creating distinct
>>> structures per type is the explosion in the number of types at runtime.  I
>>> encourage you to read this oldish blog post by Joe Duffy (MSFT engineer):
>>> http://joeduffyblog.com/2011/10/23/on-generics-and-some-of-the-associated-overheads/
>>>
>>> On Wed, Jan 7, 2015 at 5:23 PM, Michael Barker <mikeb01 at gmail.com>
>>> wrote:
>>>
>>>> My understanding is that it does type profiling at the callsite, and
>>>> something like HashMap.hash() will encounter such a wide variety of
>>>> classes that it will rarely be anything other than fully megamorphic.
>>>> My guess was that if there were a specialised class for a specific
>>>> reference type then this could become monomorphic.
>>>>
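>>>> As a rough illustration (the hash() body below is only an
>>>> approximation of what the JDK actually does, and the specialised
>>>> variant is purely hypothetical):
>>>>
>>>>     final class HashProfileSketch {
>>>>         // Every erased map funnels its keys through one shared method
>>>>         // like this, so the key.hashCode() callsite is profiled across
>>>>         // String, Integer, Long, ... and ends up megamorphic.
>>>>         static int erasedHash(Object key) {
>>>>             int h;
>>>>             return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
>>>>         }
>>>>
>>>>         // A String-specialised copy of the same code: the hashCode()
>>>>         // receiver is always a String, so the callsite stays
>>>>         // monomorphic and is a trivial inlining candidate.
>>>>         static int stringHash(String key) {
>>>>             int h;
>>>>             return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
>>>>         }
>>>>     }
>>>>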
>>>> On 8 January 2015 at 11:12, Vitaly Davidovich <vitalyd at gmail.com>
>>>> wrote:
>>>>
>>>>> Right, but the reason I'm doubtful that this will have any impact is
>>>>> that the JIT already does type profiling, and the runtime types it sees
>>>>> (and the statistics around that) won't change due to erasure.  My "make
>>>>> its life easier" comment was a guess that perhaps some code paths in the
>>>>> optimizer don't need to be taken (e.g. don't look at profiling info if
>>>>> it now knows statically that an array is composed of final classes).
>>>>>
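>>>>> Something like the following is what I have in mind (a contrived
>>>>> sketch, not anything the VM actually generates):
>>>>>
>>>>>     final class ElementTypeSketch {
>>>>>         // Erased backing storage: the receiver type at e.toString()
>>>>>         // is only known from the type profile, so any speculative
>>>>>         // inlining needs a guard with a fallback path.
>>>>>         static int totalLength(Object[] elements) {
>>>>>             int total = 0;
>>>>>             for (Object e : elements) {
>>>>>                 total += e.toString().length();
>>>>>             }
>>>>>             return total;
>>>>>         }
>>>>>
>>>>>         // Statically String[]: String is final, so e.toString() is
>>>>>         // known exactly without looking at profiling info at all.
>>>>>         static int totalLength(String[] elements) {
>>>>>             int total = 0;
>>>>>             for (String e : elements) {
>>>>>                 total += e.toString().length();
>>>>>             }
>>>>>             return total;
>>>>>         }
>>>>>     }
>>>>>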
>>>>> On Wed, Jan 7, 2015 at 5:09 PM, Michael Barker <mikeb01 at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> The current functionality continues with the erasure plan.  However,
>>>>>>>> I wouldn't mind doing better!
>>>>>>>
>>>>>>>
>>>>>>> Yeah, I can't immediately think of a critical reason why it can't
>>>>>>> stay erased.  For the JIT optimizer, having a narrower upper bound on
>>>>>>> the type may make its life easier, although I don't know if it'll have
>>>>>>> any material difference.  The one question is what reflection will do
>>>>>>> (and any code based on reflection, such as custom serialization, code
>>>>>>> generation, etc):
>>>>>>>
>>>>>>
>>>>>> (Caveat, I'm not a compiler expert so this is a bit of a guess.)
>>>>>>
>>>>>> One possible place where this could be used is within the
>>>>>> optimiser.  E.g. if Hotspot could see a specialised HashMap<String,
>>>>>> String> instead of an erased one, then it could determine that calls
>>>>>> to hashCode and equals would be monomorphic and apply more aggressive
>>>>>> inlining.  This could lead to a jump in performance across a broad
>>>>>> range of apps (hands up who uses Strings and HashMaps :-).  My
>>>>>> understanding is that the megamorphic dispatch (of hashCode/equals)
>>>>>> is one of the more significant costs within HashMap.
>>>>>>
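>>>>>> Roughly the shape I'm picturing (purely illustrative; a real
>>>>>> HashMap lookup is of course more involved than this):
>>>>>>
>>>>>>     final class LookupSketch {
>>>>>>         // Erased lookup: key.hashCode() and k.equals(key) can
>>>>>>         // dispatch to any type in the heap, so in a busy app both
>>>>>>         // callsites tend to be megamorphic.
>>>>>>         static int indexOf(Object[] keys, Object key) {
>>>>>>             int hash = key.hashCode();
>>>>>>             for (int i = 0; i < keys.length; i++) {
>>>>>>                 Object k = keys[i];
>>>>>>                 if (k != null && k.hashCode() == hash
>>>>>>                         && k.equals(key)) {
>>>>>>                     return i;
>>>>>>                 }
>>>>>>             }
>>>>>>             return -1;
>>>>>>         }
>>>>>>
>>>>>>         // Hypothetical String-specialised lookup: the hashCode and
>>>>>>         // equals targets are known exactly (String is final), so
>>>>>>         // Hotspot could inline them without any type guard.
>>>>>>         static int indexOf(String[] keys, String key) {
>>>>>>             int hash = key.hashCode();
>>>>>>             for (int i = 0; i < keys.length; i++) {
>>>>>>                 String k = keys[i];
>>>>>>                 if (k != null && k.hashCode() == hash
>>>>>>                         && k.equals(key)) {
>>>>>>                     return i;
>>>>>>                 }
>>>>>>             }
>>>>>>             return -1;
>>>>>>         }
>>>>>>     }
>>>>>>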
>>>>>> If that was possible then it would be pretty cool!
>>>>>>
>>>>>> Mike.
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>

