How high are the memory costs of polymorphic inline caches?

Raffaello Giulietti raffaello.giulietti at gmail.com
Wed Aug 20 08:09:54 UTC 2014


On 2014-08-20 02:30, Florin Mateoc wrote:
> Raffaello Giulietti <raffaello.giulietti at ...> writes:
>
>>
>> * Most VMs are not multi-core aware. They only expose green-threads to
>> users (they call them "processes") and do not even implement concurrent
>> and/or parallel garbage collectors under the hood.
>>
>
> I don't think this is really an issue for an already written massive
> application such as yours. We have a similarly large Smalltalk application
> that we successfully translated to Java (source-to-source, that's yet
> another option) and, since it is already very complex, mapping it to a more
> complicated concurrency model (native threads vs green threads) is quite a
> challenge. The only place where we are truly using multiprocessing is to
> parallelize the startup which is now much longer, since Java does not have a
> snapshotting capability - in short, I don't think there is any gain to be
> had here.
>

At a minimum, a multi-threaded VM can implement concurrent and parallel 
GC under the hood: the ST programmer does not see the intricacies of 
concurrency and still benefits from the hardware.
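
Just as an illustration of how invisible this can be to application 
code: on HotSpot, choosing a parallel or a (mostly) concurrent 
collector is nothing more than a startup flag (MainClass below is only 
a placeholder for the application's entry point):

    java -XX:+UseParallelGC       MainClass   # parallel, stop-the-world collections
    java -XX:+UseConcMarkSweepGC  MainClass   # mostly-concurrent old-generation collector
    java -XX:+UseG1GC             MainClass   # parallel and concurrent, region-based collector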

Further, a multi-threaded VM can expose parallel abstractions to the ST 
programmer, in particular when operating on big collections, without 
much fuss.

For example, on the server side we built, and currently heavily benefit 
from, a limited framework that exploits parallelism for exactly this 
purpose: processing big collections. However, since the VMs are not 
multi-threaded, we do this by running several Smalltalk VMs in parallel, 
coordinating their activities via sockets and shared memory. Not the 
most efficient or transparent way, but what else can we do to exploit 
our servers with 512 GiB of memory and 64 cores?

This overhead could be drastically reduced with the right abstractions 
in a multi-threaded VM and in the library classes.

I'm not necessarily advocating the native shared-memory concurrency 
model of the JVM. Rather, I would like to see something akin to Java 8 
parallel streams, and to see the CPU usage of Smalltalk jump from 12.5% 
to 100% on my $1500, 8-core, 12 GiB laptop.




>> * Smalltalk on the JVM can leverage the sheer amount of Java libraries
>> in a way that a programmer could feel very natural. Native VMs need a
>> much less transparent approach, e.g., to memory management, when
>> invoking Java code (if at all).
>>
>
> Even this comes with pitfalls for a large application. We now have multiple
> libraries, sometimes different versions of the same library, coming from
> different prerequisite chains, many of them providing, again and again, some
> little utility that was already implemented in the main application, but
> nobody bothered to look for it - not that it is easy for new developers to
> know what is already there.
>
> I should also add that the memory footprint has at least doubled compared to
> Smalltalk, and this is with a 32-bit JRE (we run with both).
>
> At least it does not run slower (nor faster).
>


We have a Smalltalk-to-JVM bridge that exploits the invocation API of 
JNI, thus loading and running the JVM inside the same OS-level process 
that runs Smalltalk. This way, we are able to leverage Java code from 
Smalltalk. It works nicely, with quite limited overhead. The only 
problem is memory management in the Java space: the ST code has to 
ensure (e.g., with ensure:) that Java resources are released in a 
timely fashion. But this is not possible in every circumstance, and we 
sometimes leak Java objects, because the ST GC and the JVM GC operate 
on different heaps.
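
For readers who wonder where that leak comes from: the bridge has to 
keep a strong Java-side reference for every object it hands out to 
Smalltalk, and the JVM GC cannot know when the Smalltalk side has 
forgotten its proxy. The sketch below is only a toy model of that 
mechanism (HandleTable and BridgeHandle are invented names, not our 
actual bridge classes); the explicit close() is what the ensure: block 
on the Smalltalk side ultimately has to trigger.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    /** Toy model of the table that keeps Java objects alive for the Smalltalk side. */
    final class HandleTable {
        private static final Map<Long, Object> PINNED = new ConcurrentHashMap<>();
        private static final AtomicLong NEXT_ID = new AtomicLong();

        /** Pin an object and return the id handed across to Smalltalk. */
        static long pin(Object o) {
            long id = NEXT_ID.incrementAndGet();
            PINNED.put(id, o);
            return id;
        }

        /** Unpin: from now on the JVM GC is free to collect the object. */
        static void release(long id) {
            PINNED.remove(id);
        }
    }

    /** What a Smalltalk proxy stands for; close() must be called explicitly. */
    final class BridgeHandle implements AutoCloseable {
        private final long id;

        BridgeHandle(Object o) {
            this.id = HandleTable.pin(o);
        }

        @Override
        public void close() {
            // Forgetting this call is exactly the leak described above:
            // the Smalltalk GC may drop its proxy, but the Java object
            // stays reachable through the table forever.
            HandleTable.release(id);
        }
    }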



> That said, the other arguments still stand.
>
> Cheers,
>
> Florin
>

Cheers
Raffaello



