RFR: 8259065: Optimize MessageDigest.getInstance [v4]
Valerie Peng
valeriep at openjdk.java.net
Thu Jan 7 19:54:00 UTC 2021
On Thu, 7 Jan 2021 03:59:13 GMT, Claes Redestad <redestad at openjdk.org> wrote:
>> By caching default constructors used in `java.security.Provider::newInstanceUtil` in a `ClassValue`, we can reduce the overhead of allocating instances in a variety of places, e.g., `MessageDigest::getInstance`, without compromising thread-safety or security.
>>
>> On the provided microbenchmark `MessageDigest.getInstance(digesterName)` improves substantially for any `digesterName` - around -90ns/op and -120B/op:
>> Benchmark                                                     (digesterName)  Mode  Cnt    Score    Error  Units
>> GetMessageDigest.getInstance                                             md5  avgt   30  293.929 ± 11.294  ns/op
>> GetMessageDigest.getInstance:·gc.alloc.rate.norm                         md5  avgt   30  424.028 ±  0.003   B/op
>> GetMessageDigest.getInstance                                           SHA-1  avgt   30  322.928 ± 16.503  ns/op
>> GetMessageDigest.getInstance:·gc.alloc.rate.norm                       SHA-1  avgt   30  688.039 ±  0.003   B/op
>> GetMessageDigest.getInstance                                         SHA-256  avgt   30  338.140 ± 13.902  ns/op
>> GetMessageDigest.getInstance:·gc.alloc.rate.norm                     SHA-256  avgt   30  640.037 ±  0.002   B/op
>> GetMessageDigest.getInstanceWithProvider                                 md5  avgt   30  312.066 ± 12.805  ns/op
>> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm             md5  avgt   30  424.029 ±  0.003   B/op
>> GetMessageDigest.getInstanceWithProvider                               SHA-1  avgt   30  345.777 ± 16.669  ns/op
>> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm           SHA-1  avgt   30  688.040 ±  0.003   B/op
>> GetMessageDigest.getInstanceWithProvider                             SHA-256  avgt   30  371.134 ± 18.485  ns/op
>> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm         SHA-256  avgt   30  640.039 ±  0.004   B/op
>> Patch:
>> Benchmark                                                     (digesterName)  Mode  Cnt    Score    Error  Units
>> GetMessageDigest.getInstance                                             md5  avgt   30  210.629 ±  6.598  ns/op
>> GetMessageDigest.getInstance:·gc.alloc.rate.norm                         md5  avgt   30  304.021 ±  0.002   B/op
>> GetMessageDigest.getInstance                                           SHA-1  avgt   30  229.161 ±  8.158  ns/op
>> GetMessageDigest.getInstance:·gc.alloc.rate.norm                       SHA-1  avgt   30  568.030 ±  0.002   B/op
>> GetMessageDigest.getInstance                                         SHA-256  avgt   30  260.013 ± 15.032  ns/op
>> GetMessageDigest.getInstance:·gc.alloc.rate.norm                     SHA-256  avgt   30  520.030 ±  0.002   B/op
>> GetMessageDigest.getInstanceWithProvider                                 md5  avgt   30  231.928 ± 10.455  ns/op
>> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm             md5  avgt   30  304.020 ±  0.002   B/op
>> GetMessageDigest.getInstanceWithProvider                               SHA-1  avgt   30  247.178 ± 11.209  ns/op
>> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm           SHA-1  avgt   30  568.029 ±  0.002   B/op
>> GetMessageDigest.getInstanceWithProvider                             SHA-256  avgt   30  265.625 ± 10.465  ns/op
>> GetMessageDigest.getInstanceWithProvider:·gc.alloc.rate.norm         SHA-256  avgt   30  520.030 ±  0.003   B/op
>>
>> See: https://cl4es.github.io/2021/01/04/Investigating-MD5-Overheads.html#reflection-overheads for context.
>
> Claes Redestad has updated the pull request incrementally with one additional commit since the last revision:
>
> Address review comments from @valeriep
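
For anyone skimming the archive, here is a minimal, self-contained sketch of the ClassValue-based constructor caching described in the quoted summary above. The class and method names are made up for illustration; this is not the actual Provider::newInstanceUtil code from the patch.

    import java.lang.reflect.Constructor;

    // Illustrative sketch: a ClassValue lazily computes and caches the public no-arg
    // Constructor per implementation class, so repeated lookups avoid re-resolving it
    // reflectively on every call.
    final class ConstructorCache {
        private static final ClassValue<Constructor<?>> DEFAULT_CTORS =
                new ClassValue<Constructor<?>>() {
                    @Override
                    protected Constructor<?> computeValue(Class<?> clazz) {
                        try {
                            return clazz.getConstructor();
                        } catch (NoSuchMethodException e) {
                            throw new IllegalStateException(
                                    clazz + " has no public no-arg constructor", e);
                        }
                    }
                };

        static Object newInstance(Class<?> implClass) throws ReflectiveOperationException {
            // ClassValue handles the per-class caching and synchronization internally,
            // and its entries do not prevent the class from being unloaded.
            return DEFAULT_CTORS.get(implClass).newInstance();
        }
    }

That per-class caching inside ClassValue is what keeps the fast path thread-safe without introducing an explicit map plus locking.
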
src/java.base/share/classes/java/security/Provider.java line 1072:
> 1070: }
> 1071: public int hashCode() {
> 1072: return 31*31 + type.hashCode()*31 + algorithm.hashCode();
Well, perhaps we should just revert to Objects.hash(...) (better readability, and it would automatically pick up any future enhancement to that method)? Or, if we want to keep the hand-written calculation that mirrors the current Objects.hash(...) implementation, the 31*31 part doesn't seem too useful and can be removed for this particular case.
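
To make the comparison concrete, here is a small stand-in with the two fields involved. The names are purely illustrative; this is not the actual key class in Provider.

    import java.util.Objects;

    final class KeySketch {
        private final String type;
        private final String algorithm;

        KeySketch(String type, String algorithm) {
            this.type = type;
            this.algorithm = algorithm;
        }

        // Readable form; allocates a varargs Object[] on every call.
        // Objects.hash(type, algorithm) expands to
        //     31*31 + 31*type.hashCode() + algorithm.hashCode()
        // which is exactly the expression on line 1072 above.
        int hashUsingObjectsHash() {
            return Objects.hash(type, algorithm);
        }

        // Allocation-free form without the constant: the 31*31 term only shifts every
        // hash by the same amount, so dropping it leaves HashMap bucketing unchanged.
        int hashInlined() {
            return type.hashCode() * 31 + algorithm.hashCode();
        }
    }

Either way, equal keys hash equally; the trade-off is readability versus avoiding the varargs array allocation on a hot lookup path.
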
-------------
PR: https://git.openjdk.java.net/jdk/pull/1933