Remove redundant calls of toString()

Claes Redestad claes.redestad at oracle.com
Tue Apr 29 09:20:59 UTC 2014


On 2014-04-29 09:31, Remi Forax wrote:
> On 04/28/2014 05:43 PM, Claes Redestad wrote:
>> On 04/28/2014 08:57 AM, David Holmes wrote:
>>> On 28/04/2014 1:05 PM, Otávio Gonçalves de Santana wrote:
>>>> In my opinion not, because Objects.requireNonNull is more readable
>>>> than just string.toString(). This way it is clearer which field is
>>>> required, and it doesn't impact performance.
>>>
>>> An invocation of requireNonNull is potentially more expensive than 
>>> the implicit null check that happens with foo.toString().
>>>
>>> David
>>> ----- 
>>
>> My thought was that these two would be inlined to the exact same 
>> thing, so I did a quick test to see what happens when you do 
>> foo.toString() versus Objects.requireNonNull(foo) on a set of 
>> randomly generated String[]'s with different amounts of null 
>> elements (0p: no null entries, 1p: 1% null entries, etc.):
>>
>> Benchmark                                   Mode  Samples        Mean  Mean error    Units
>> s.m.ThrowAwayBenchmark.nullToString0p      thrpt        6  356653.044    3573.707   ops/ms
>> s.m.ThrowAwayBenchmark.nullToString1p      thrpt        6  353128.903    2764.102   ops/ms
>> s.m.ThrowAwayBenchmark.nullToString10p     thrpt        6  297956.571    9580.251   ops/ms
>> s.m.ThrowAwayBenchmark.nullToString50p     thrpt        6  158172.036    1893.096   ops/ms
>> s.m.ThrowAwayBenchmark.nullToString100p    thrpt        6   18194.614     472.091   ops/ms
>> s.m.ThrowAwayBenchmark.requireNonNull0p    thrpt        6  357855.126    2979.090   ops/ms
>> s.m.ThrowAwayBenchmark.requireNonNull1p    thrpt        6   67601.134    7004.689   ops/ms
>> s.m.ThrowAwayBenchmark.requireNonNull10p   thrpt        6    8150.595     538.970   ops/ms
>> s.m.ThrowAwayBenchmark.requireNonNull50p   thrpt        6    1604.919     220.903   ops/ms
>> s.m.ThrowAwayBenchmark.requireNonNull100p  thrpt        6     820.626      60.752   ops/ms
>>
>> Yikes! As long as the value is never null they're inlined nicely and 
>> neither has the upper hand performance-wise, but as soon as you get 
>> some null values, Objects.requireNonNull degenerates much faster than 
>> its foo.toString() counterpart. I think this is a JIT issue: 
>> optimizing exception paths might not be the highest priority, but 
>> Objects.requireNonNull is used pretty extensively in the JDK, and my 
>> expectation would be that it shouldn't degrade performance when 
>> things actually are null now and then.
>>
>> /Claes
>
> This is a known issue. I think it's not related to the way the JIT 
> handles exception paths, but to what is called 'profile pollution'.
> HotSpot's JITs have two ways to do a null check: either do nothing (yes, 
> nothing), let the system fault, and come back from the dead using a 
> signal handler (this solution is named an implicit null check), or do 
> an explicit null check, i.e. a conditional jump.
> An implicit null check is faster, but if the receiver is null it costs 
> you dearly, so the VM profiles receivers to remember whether the 
> receiver of each call can be null or not. The problem is that when you 
> call foo.toString(), the profile information is associated with the 
> instruction that does foo.toString(), while if the call is 
> Objects.requireNonNull(foo), the profile associated with the null check 
> is stored inside the method requireNonNull. So if one call to 
> requireNonNull in the entire program throws an NPE, the profile inside 
> requireNonNull now registers that it may fail, and for the VM all calls 
> to requireNonNull may fail. So currently you should not use 
> requireNonNull if the value is not required to be null*
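To restate Rémi's point in code (my own illustrative sketch, not taken 
from his mail): both methods below null-check their argument, but the 
JIT records the check in different places.

    import java.util.Objects;

    class ProfileSketch {
        // Implicit null check: the profile is tied to this call site, so
        // a null seen here only affects this method.
        static String viaToString(Object foo) {
            return foo.toString();
        }

        // The check happens inside Objects.requireNonNull, so its profile
        // is shared by every caller in the program; one caller that ever
        // passes null makes the check look like it "may fail" for all of
        // them.
        static String viaRequireNonNull(Object foo) {
            return Objects.requireNonNull(foo).toString();
        }
    }
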
I guess I should have given more details. :-) I ran my micros on a 
number of forked VMs to ensure I didn't get excessive run-to-run 
variation, which among other things avoids profile pollution. Also, I 
sloppily collected very few samples and only did minimal warmup, after I 
saw that the micros produced scores that were orders of magnitude apart 
and that the values stabilized after less than two seconds. Flags: -f 3 
-wi 4 -i 2

When I run the micros sequentially in the same VM I see some added 
degradation for the exceptional cases, but the 0p cases still inline to 
the optimal(?) 360k ops/ms, so while profile pollution might come into 
play when provoked, HotSpot seems to manage these simple micros well 
enough.
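
For reference, the micros were roughly along these lines (a from-memory 
sketch, not the exact ThrowAwayBenchmark source; array size, seed and 
null ratio are made up):

    import java.util.Objects;
    import java.util.Random;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    @State(Scope.Thread)
    public class ThrowAwayBenchmark {

        // Roughly 1% null entries; the other variants (0p, 10p, 50p,
        // 100p) just use a different null ratio.
        String[] strings1p;
        int i;

        @Setup
        public void setup() {
            Random r = new Random(42);
            strings1p = new String[1024];
            for (int j = 0; j < strings1p.length; j++) {
                strings1p[j] = r.nextInt(100) < 1 ? null : "str" + j;
            }
        }

        @Benchmark
        public String nullToString1p() {
            String s = strings1p[i++ & (strings1p.length - 1)];
            try {
                return s.toString();
            } catch (NullPointerException npe) {
                return null;
            }
        }

        @Benchmark
        public String requireNonNull1p() {
            String s = strings1p[i++ & (strings1p.length - 1)];
            try {
                return Objects.requireNonNull(s);
            } catch (NullPointerException npe) {
                return null;
            }
        }
    }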
> So the problem is that HotSpot blindly trusts the recorded profile 
> rather than the profile that can come from the arguments of a call.
> The good news is that recently some patches were included in the jdk9 
> tree to fix, or at least mitigate, that issue.
>
I've now run tests on JDK8 FCS and a recent internal build of JDK9 and 
see the same characteristics. How recently are we talking here? :-)

> cheers,
> Rémi
> * read that last sentence again, it seems very logical, no ?
>
While your reading of the method name and suggested approach make sense 
right now, the javadoc for Objects.requireNonNull states that "this 
method is designed primarily for doing parameter validation in methods 
and constructors". To me, "parameter validation" suggests that we're 
dealing with unknowns and that there's at least some calculated risk 
that the input might be null, so the intent is really to encourage 
explicit fail-fast designs, no? I thus think it makes sense to try to 
ensure these methods have the same performance characteristics as 
provoking an implicit null check by dereferencing an object.
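
Concretely, the pattern the javadoc seems to have in mind is something 
like this (a generic example of my own, not code from the JDK):

    import java.util.Objects;

    class Connection {
        private final String host;

        // Fail fast at construction time with a clear message instead of
        // with a harder-to-diagnose NPE somewhere later on.
        Connection(String host) {
            this.host = Objects.requireNonNull(host, "host must not be null");
        }
    }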

/Claes


