RFR: 8282047: Enhance StringDecode/Encode microbenchmarks

Claes Redestad redestad at openjdk.java.net
Fri Feb 25 09:08:04 UTC 2022


On Thu, 24 Feb 2022 19:01:32 GMT, Brent Christian <bchristi at openjdk.org> wrote:

>> Splitting out these micro changes from #7231
>> 
>> - Clean up and simplify setup and code
>> - Add variants with inputs of varying lengths and encoding weights, as well as relevant mixes of each, so that we both cover interesting corner cases and verify that performance behaves when there's a multitude of input shapes. Both the simple and the mixed variants are useful diagnostic tools.
>> - Drop all charsets except UTF-8 from the default run configuration. Motivation: defaults should give good coverage while keeping runtimes at bay. Additionally, if the charset under test can't encode the higher code points used in these micros, the results may be misleading. If you test with ISO-8859-1, for example, the UTF-16 bytes in StringDecode.decodeUTF16 will all have been replaced by `?`, so the test is effectively the same as testing ASCII-only.
>
> test/micro/org/openjdk/bench/java/lang/StringDecode.java line 93:
> 
>> 91:     public void decodeAsciiLong(Blackhole bh) throws Exception {
>> 92:         bh.consume(new String(longAsciiString, charset));
>> 93:         bh.consume(new String(longAsciiString, 0, 1024 + 31, charset));
> 
> I imagine the 1024+31 addition gets compiled down, and is not executed during the test, right?

Yes, the addition of two integer literals is already constant folded by javac:

```java
public class ConstantFold {
    public static void main(String... args) {
        int foo = 1024 + 31;  // folded to 1055 at compile time
    }
}
```

javap -v output:
```
      stack=1, locals=2, args_size=1
         0: sipush        1055
         3: istore_1
         4: return
```
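
As an aside on the charset point quoted above, here's a minimal, hypothetical sketch (the class name is made up and not part of the benchmark) of how encoding with ISO-8859-1 silently turns non-Latin-1 characters into `?`:

```java
import java.nio.charset.StandardCharsets;

public class Iso88591ReplacementDemo {
    public static void main(String... args) {
        // ASCII, Latin-1, Cyrillic and CJK characters in one string
        String mixed = "abc\u00e5\u0438\u4e2d";
        // Characters without an ISO-8859-1 mapping are replaced by '?'
        byte[] bytes = mixed.getBytes(StandardCharsets.ISO_8859_1);
        // Round-tripping shows the data loss: prints "abcå??"
        System.out.println(new String(bytes, StandardCharsets.ISO_8859_1));
    }
}
```

Input bytes prepared this way no longer exercise the non-ASCII decode paths, which is why such charsets were dropped from the default run configuration.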

-------------

PR: https://git.openjdk.java.net/jdk/pull/7516

