Benchmarking Smalltalk on JVM
Mark Roos
mroos at roos.com
Tue Jan 31 16:52:50 PST 2012
I just loaded about 250K lines of Smalltalk code into my JVM implementation, so now I can start
some real benchmarks using our application. All of this was done on a Mac.
My first try was an object load which takes about 20 files and creates a pretty complex
object set. This takes 100 seconds in ST, and using the initial jdk7 release I also get
100 seconds. Not bad. But I see that one of the major slowdowns is in my use of boxed
integers vs ST's use of Fixnums, so I did some more detailed experiments.
I used this code snippet, which creates and drops about 2 million Integers and which ST
runs in about 10ms:
| bytes pos sum |
bytes := ByteArray new: 1000000.
sum := 0.
pos := 1.
[pos <= 1000000] whileTrue: [
    sum := bytes at: pos.
    pos := pos + 1].
^sum
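
To make the boxing cost concrete, here is a rough Java sketch of what that loop does if
every SmallInteger is represented as a boxed java.lang.Integer (illustrative only, not my
actual generated code): each of the million iterations boxes the element read and the
incremented index, which is where the roughly 2 million short-lived Integers come from.

    // Illustrative sketch: a naive boxed-Integer version of the Smalltalk loop.
    // Each iteration allocates two Integers (the element read and the new index),
    // so one million iterations create about two million short-lived objects.
    public final class BoxedLoopSketch {
        public static Integer run() {
            byte[] bytes = new byte[1000000];
            Integer sum = new Integer(0);
            Integer pos = new Integer(1);
            while (pos.intValue() <= 1000000) {
                sum = new Integer(bytes[pos.intValue() - 1]); // box the element read
                pos = new Integer(pos.intValue() + 1);        // box the incremented index
            }
            return sum;
        }

        public static void main(String[] args) {
            long start = System.nanoTime();
            Integer result = run();
            System.out.println("sum=" + result + " in " + (System.nanoTime() - start) / 1000000 + "ms");
        }
    }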
With the initial jdk7 release I get 400ms; moving to jdk8 b20 drops it to 117ms (very nice).
I then converted some constructor lookups to statics to get to 66ms.
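
The change is roughly of this shape: route boxing through a plain static factory instead
of a looked-up constructor, so the JIT sees a simple call it can inline. StInteger below
is just an illustrative stand-in for my integer wrapper, not the real class.

    // Illustrative sketch of the constructor-to-static change.
    public final class StInteger {
        private final int value;

        private StInteger(int value) {
            this.value = value;
        }

        // A plain static factory call at every box site, instead of going
        // through a looked-up constructor.
        public static StInteger box(int value) {
            return new StInteger(value);
        }

        public int intValue() {
            return value;
        }
    }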
Then the obvious move of adding an integer cache, for which I used the jTalk range of
-2000 to 4000, gave 30ms.
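
The cache itself is the usual small-integer trick, sketched here against the same
illustrative StInteger: values in the hot -2000..4000 range are shared, and everything
else still allocates.

    // Illustrative sketch of a small-integer cache covering -2000..4000.
    public final class StIntegerCache {
        private static final int LOW = -2000;
        private static final int HIGH = 4000;
        private static final StInteger[] CACHE = new StInteger[HIGH - LOW + 1];

        static {
            for (int i = 0; i < CACHE.length; i++) {
                CACHE[i] = StInteger.box(LOW + i);
            }
        }

        // Values inside the cached range are shared; anything outside
        // still allocates a fresh wrapper.
        public static StInteger valueOf(int value) {
            if (value >= LOW && value <= HIGH) {
                return CACHE[value - LOW];
            }
            return StInteger.box(value);
        }
    }

That covers the constants and small values, but the loop index leaves the cached range as
soon as pos passes 4000, so it still allocates on most iterations.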
And finally, to handle the index integer, I created a MutableInteger, which dropped me to 5ms.
So 2X better than the ST I started with.
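
The MutableInteger is along these lines (again just a sketch): for a value the runtime
knows is only a loop counter, keep one object and bump its field in place instead of
boxing a fresh integer on every increment.

    // Illustrative sketch of a MutableInteger used for loop indexes: one object,
    // updated in place, instead of a new boxed integer per increment.
    public final class MutableInteger {
        private int value;

        public MutableInteger(int value) {
            this.value = value;
        }

        public int intValue() {
            return value;
        }

        // Bump the value in place; no allocation per increment.
        public void add(int delta) {
            value += delta;
        }
    }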
But then I upgraded to jdk8 b23, and now the best I see is 16ms. It also seems like the
JIT sometimes compiles and sometimes doesn't, even using the same startup sequence.
Bleeding edge, I would guess.
But for the final test I used jdk7u4, and my load is 73 seconds. Not as good as the best
jdk8 b20 (60 seconds), but faster than native Smalltalk.
looking good
mark