early steps on the road to customization
John Rose
john.r.rose at oracle.com
Sat Aug 29 20:22:48 UTC 2020
On Aug 28, 2020, at 3:12 PM, John Rose <john.r.rose at oracle.com> wrote:
>
> Here’s a discussion of the issue with CV::remove:
>
> https://bugs.openjdk.java.net/browse/JDK-8238260
I linked your contributions to that bug. Thanks again.
We could generalize the MutableCallSite handshake to
cover both MCS and CV and whatever third and fourth
things we need in the future. I think this is a worthy
and timely project. Since Vladimir Ivanov is the author
of the existing handshakes, I’m CC-ing him. Here are
some further thoughts on this…
The pieces of the puzzle are, I think, as follows:
1. A compile-time constant folding protocol. This should
accept, at JIT time (not in Java execution), one or more “live”
values (obtained by the JIT using previous constant propagation)
and apply some compile-time function to those values to obtain
a new “live” value, plus an indication of the status of that value.
The status is “unknown” (no constant is available now), or “certain”
(constant is locked in place and will never change), or “speculative”
(constant is known but may change in the future). The “live”
values could be mere Java object references, for starters, but
I think the protocols can also support primitives and Valhalla
inline objects. Arrays probably need some special processing too.
(A rough sketch of such a folding result appears after item 3 below.)
2. For speculative constants, there must be a way of baking a
record of the speculation into the n-method’s dependencies.
This record must include one (or maybe more) live values
which can assist a dependency checker in deciding when to
deoptimize the n-method. (See code/dependencies.hpp
and MethodHandles::add_dependent_nmethod.)
3. There must be an algorithm for detecting revocation of
speculative constants and notifying dependent n-methods.
Such an algorithm probably has two phases: an event-triggered
pass which locates a set of n-methods which are possibly
affected (worst case, *all* n-methods), and an analysis pass
which visits each n-method, scans its dependency list for
relevant dependencies, and evaluates a function to detect
invalidated dependencies. (See flush_dependent_nmethods.)
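To make item 1 concrete, here’s a rough sketch, in plain Java, of the
shape a folding result might take. Every name here (FoldStatus,
FoldResult) is invented for illustration; the real thing would be a
JVM/JIT interface, not library code.

// Hypothetical sketch only: FoldStatus and FoldResult are invented names.
final class FoldProtocolSketch {

    /** Status of a constant handed back to the JIT by the folding protocol. */
    enum FoldStatus {
        UNKNOWN,      // no constant is available now
        CERTAIN,      // constant is locked in place and will never change
        SPECULATIVE   // constant is known but may change; needs a dependency record
    }

    /** Result of a JIT-time folding query; the "live" value is opaque here. */
    record FoldResult(FoldStatus status, Object liveValue) {
        static FoldResult unknown() {
            return new FoldResult(FoldStatus.UNKNOWN, null);
        }
    }
}

An Object-typed liveValue is a simplification; per item 1, primitives
and Valhalla inline objects would need richer carriers.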
These protocols seem ambitious, but I think we can do things
which simplify them.
1. The JIT cannot execute Java code directly, but it certainly
models Java code completely. Mutable heap variables are
modeled today using @Stable, and a similar annotation could
be created which produces the information needed in this
application. The annotation would have to include an indication
of which events, if any, can invalidate the annotated variable.
(By contract, with @Stable, invalidation is irrelevant.)
Note that reading heap variables does not require Java code
execution in JIT.
2. If the JIT decides (in step 1) to speculate on the value of a
mutable heap variable, it can easily create a “witness object”
that encodes the observation. The witness could contain a
base address and offset (unsafe Object/long pair) plus the
witnessed value. To be type-polymorphic, the witnessed
value might be encoded as another Object/long pair (in private
storage never modified) plus a size in bytes. To check the
validity of a witness, a bitwise comparison would be enough
in most cases, although maybe not all. The witness would
also encode which events could potentially cause invalidation.
This witness would be stored (as a live reference) in the
n-method dependency list. (A sketch of such a witness appears
after item 3 below.)
3. There must be a (trusted) JVM down-call which posts
invalidation events. This would take one (or maybe more)
parameters which allow the JVM to concentrate only on
n-methods which contain witnesses that pertain to the
invalidation event. This call must correctly sequence
with the update to a heap variable which causes the
invalidation, which probably means that the down-call
actually *performs* the update.
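To pin down the witness from item 2, here’s a minimal plain-Java
sketch. The class name and fields are invented for illustration, and
the real re-validation check would be JVM-internal rather than Java
code.

// Invented for illustration; not an existing JVM or JDK class.
final class SpeculationWitness {
    final Object   varBase;      // base of the witnessed heap variable
    final long     varOffset;    // unsafe-style offset within that base
    final Object   valueBase;    // private, never-modified copy of the witnessed value
    final long     valueOffset;
    final long     sizeInBytes;  // width to compare bitwise
    final Class<?> invalidationToken;  // which events could invalidate this witness

    SpeculationWitness(Object varBase, long varOffset,
                       Object valueBase, long valueOffset,
                       long sizeInBytes, Class<?> invalidationToken) {
        this.varBase = varBase;
        this.varOffset = varOffset;
        this.valueBase = valueBase;
        this.valueOffset = valueOffset;
        this.sizeInBytes = sizeInBytes;
        this.invalidationToken = invalidationToken;
    }

    /**
     * Re-validation: the JVM would load sizeInBytes bytes at
     * (varBase, varOffset) and compare them bitwise against
     * (valueBase, valueOffset). Only the intent is recorded here;
     * a Java-level stub cannot do the raw loads.
     */
    boolean stillValid() {
        throw new UnsupportedOperationException("JVM-internal check");
    }
}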
Today, there are two such down-calls in the JVM, named
setCallSiteTargetNormal and setCallSiteTargetVolatile.
They are customized to the two different types of mutable
call site. But we don’t need one down-call for each kind of
invalidation, just one parameterized by a classification token
(and maybe a secondary token). I suppose a Class object
would make a fine token for classifying invalidation events.
A Class could allow CHA-like considerations to play a role in
witness checking (using the built-in isAssignableFrom logic,
which the JVM can execute at any time). Either a primary
or secondary down-call argument could be a Class object
which could be applied to either the base reference (of
the witnessed heap variable) or to the witnessed *value*
of that heap variable. (I don’t have an application for this
at present.)
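As a strawman for what a single parameterized down-call could look
like: the name and signature below are invented, and today the
invalidation-related entry points are just the two setCallSiteTarget*
calls mentioned above.

// Strawman only: invalidateAndUpdate is not an existing JVM entry point.
final class SpeculationRuntime {
    private SpeculationRuntime() {}

    /**
     * Perform the store that triggers invalidation and let the JVM find
     * the affected n-methods. The primary token classifies the
     * invalidation event (a Class works, since isAssignableFrom can run
     * at any time); the secondary token could refine it, e.g. against
     * the witnessed value's type.
     */
    static native void invalidateAndUpdate(Object varBase, long varOffset,
                                           Object newValue,
                                           Class<?> primaryToken,
                                           Class<?> secondaryToken);
}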
So this pencils out to:
1a. An annotation (trusted) like @Stable, say @Speculative.
Perhaps it doesn’t need any parameters (see below), which
would make it simpler to implement in the JVM.
1b. A SpeculativeBinding object (JVM internal) to serve as
a witness. There’s a simple Java API for it, but it’s mostly just
a dumb record, consulted by the n-method dependency
logic. The existing internal class CallSiteContext is a
first cut. Java fields are variableType, variableBase,
variableOffset, valueBase, and valueOffset; others can be
injected by the JVM for handshakes, as currently with
CallSiteContext. Re-validating a SpeculativeBinding requires
loading the two values at the two base/offset pairs (both of
the common type) and comparing them for something
like same-ness (acmp or even just pointer comparison).
It would be reasonable for a JIT-time query to return a
freshly-allocated SpeculativeBinding object as a possible
result to a query against a @Speculative-annotated variable.
(A consolidated sketch of 1a, 1b, and 3 appears after item 3 below.)
2. A new dependency type Dependencies::speculative_value.
This would generalize and replace call_site_target_value.
3. A new JVM down-call SpeculativeBinding::update
which would take care of the necessary state changes.
It seems straightforward to design this as a virtual call,
which means the caller would need to come up with an
instance of an SB in order to tell it to invalidate. That
turns out to be problematic, since when the original
Java library code needs to update a @Speculative heap
variable, it doesn’t really know whether the JIT has
created a witness for it. Instead, the update call needs
to be told to update the heap variable, by whatever means
necessary, and the JVM needs full control over how to
find any relevant witnesses. There are a number of ways
to get around this tricky bit; perhaps the simplest is to
make SpeculativeBinding be an abstract superclass
and fold it into the data structures that contain the
affected heap variables. (Note that some of them are
arrays; the SB superclass would fold into the *holder*
of the array.) This may be too intrusive, but if it works
maybe the annotation is superfluous, which would be
nice. Another possibility is to create a fresh temporary
SpeculativeWitness object for every update, and allow
the JVM to somehow swap in a previously created one
(created by the JIT and stored in n-method dependency
lists) when that is relevant; the update operation would
be virtual on the witness; the JIT would replace the update
call with something suitable that handshakes with n-methods.
Another possibility (most straightforward) is to have the
update call be purely static, and take a bunch of parameters
about what’s getting updated where, plus which invalidation
events might be relevant. The JVM runtime would nose around
for matching n-methods on that variable and DTRT.
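Pulling 1a, 1b, and 3 together, here is a consolidated plain-Java
sketch of the strawman API. Everything here is provisional: the real
annotation would be trusted, and SpeculativeBinding and its update
would be JVM-injected internals, not ordinary library code.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Strawman names only: @Speculative, SpeculativeBinding, and update(...)
// are proposals from this thread, not existing JDK API.
final class SpeculativeSketch {

    /** 1a. A trusted @Stable-like annotation; deliberately parameterless. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Speculative {}

    /**
     * 1b. The witness/binding record consulted by n-method dependency
     * logic (compare the existing internal CallSiteContext). Additional
     * fields for handshakes would be injected by the JVM.
     */
    abstract static class SpeculativeBinding {
        Class<?> variableType;
        Object   variableBase;
        long     variableOffset;
        Object   valueBase;
        long     valueOffset;

        /**
         * 3 (static flavor). Update the heap variable by whatever means
         * necessary and let the JVM runtime locate any matching
         * witnesses in n-method dependency lists; eventClass says which
         * kind of invalidation this is.
         */
        static void update(Object variableBase, long variableOffset,
                           Object newValue, Class<?> eventClass) {
            // JVM-internal in the real design: performs the store, then
            // flushes any n-methods whose speculative_value dependencies
            // witnessed the old value.
            throw new UnsupportedOperationException("sketch only");
        }
    }
}

Whether update ends up static (as sketched here) or virtual on a
per-variable witness, per the alternatives above, is the main open
design choice.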
— John