Bounds checks with unsafe array access

John Rose john.r.rose at oracle.com
Wed Sep 10 20:20:21 UTC 2014


On Sep 10, 2014, at 3:47 AM, Paul Sandoz <paul.sandoz at oracle.com> wrote:

> The patch for JDK-8003585 makes no difference.

My first thought is: why not?  Isn't that just a bug?

For virtualizing arrays, we need either an explicit range-check intrinsic, or robust canonicalization of the standard idioms into CmpU (which is what we use internally).

Actually, we now have Integer.compareUnsigned!  If we agree that is an intrinsic for range checks, the JIT will have a better optimization target to aim at.   Ultimately, we need an intrinsic (perhaps more intentional than Integer.compareUnsigned) that will encourage the JIT to treat it like a range check, doing iteration range splitting, predication, and all the rest.
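For illustration, the unsigned-compare idiom looks like this as plain Java (a sketch; `inRange` is a hypothetical helper name, not an existing JDK method):

```java
public class RangeCheck {
    // One unsigned comparison replaces the two-sided check (i >= 0 && i < length):
    // a negative i, reinterpreted as unsigned, compares greater than any
    // non-negative array length, so it falls out of range automatically.
    static boolean inRange(int i, int length) {
        return Integer.compareUnsigned(i, length) < 0;
    }

    public static void main(String[] args) {
        System.out.println(inRange(3, 10));   // in bounds
        System.out.println(inRange(-1, 10));  // negative index rejected
        System.out.println(inRange(10, 10));  // length itself is out of bounds
    }
}
```

This is the same shape the JIT uses internally (CmpU), which is what makes it a plausible canonical target.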

> (Note: in general we cannot assume that "int index = i & (a.length - 1)" always occurs before the bounds checks, otherwise I would have explicitly written "if (a.length == 0) throw ...")

Right.  You want to factor the range check into a bit of code that doesn't know how it's going to be used, but then gets the same perks as normal array code, including the a.length==0 stuff, and the loop opts I mentioned.
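To make the quoted masking idiom concrete, here is a minimal sketch (hypothetical code, in the style of power-of-two hash tables): the mask only yields a valid index when the length is a nonzero power of two, which is why the zero-length check has to come first.

```java
public class MaskedLookup {
    // Assumes table.length is a power of two (possibly zero).  The mask
    // hash & (length - 1) lands in [0, length) only when length > 0;
    // for length == 0 the mask is -1 and every access would be out of bounds.
    static int get(int[] table, int hash) {
        if (table.length == 0)
            throw new IllegalStateException("empty table");
        int index = hash & (table.length - 1);  // no further bounds check needed
        return table[index];
    }
}
```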

> Ideally similar code as shown for an aaload should be generated. Any suggestions/ideas on how to make that happen?

First, agree on a range check intrinsic.  Then, treat optimization equity failures as JIT bugs.

On Sep 10, 2014, at 6:16 AM, Paul Sandoz <paul.sandoz at oracle.com> wrote:

> it's about somehow conveying/expressing the constraint that the array length is always > 0.

BTW, the code-shape way of doing this is approximately:

  Thing a = this.a;
  int alen = a.lengthField;
  if (alen <= 0)  throw new BigHairyError();
  for (int i = 0; i < alen; i++) {
    // hey, this loop executes a positive number of times!
  }

This is assuming that you want to rule out the negative-length case.  You can rule it out on its own, or fold it into the zero check as the "alen <= 0" test above does.

The more complicated thing would be defining an int field whose bit-31 is never 1.  This can be done in principle by throwing a hint to the JVM, followed by a special global analysis (and all the reflection and serialization interlocks; yuck).  We hardly ever get that heroic; it would need a huge payoff to justify.

Another way to work towards this would be to slip in some integer-range profiling, somewhere.  Putting that on all integer values would be too expensive, but it would be reasonable to sprinkle it in a few places, like arraylength instructions and the rangecheck intrinsic.  Somewhat similarly, we do type profiling on aastore instructions.

On Sep 10, 2014, at 6:36 AM, Vladimir Ivanov <vladimir.x.ivanov at oracle.com> wrote:

> @Stable isn't a full match for final field case, since it doesn't treat default values as constants. But I agree, it's close.

Lazy finals have to reserve a sentinel value to implement laziness (a side-flag would not be portable enough), and the default is the obvious victim.

@Stable also works on array elements; that would require support for array-of-lazy-final in a lazy final formulation.  (Here, defaults have to be the sentinel again, unless we fill such arrays with some other value on creation.)
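As a sketch of the sentinel approach (patterned after String.hashCode, which caches into a default-zero int field; the names here are hypothetical):

```java
public class LazyHash {
    private final byte[] data;
    private int hash;  // default 0 doubles as the "not yet computed" sentinel

    public LazyHash(byte[] data) { this.data = data; }

    public int hash() {
        int h = hash;
        if (h == 0) {                        // sentinel hit: compute and cache
            for (byte b : data) h = 31 * h + b;
            hash = h;                        // a value that happens to hash to 0
        }                                    // is simply recomputed on each call
        return h;
    }
}
```

The cost of reserving the default as the sentinel is exactly the caveat above: a legitimately-default value can never be observed as "already computed."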

> Hotspot already constant folds loads from static final fields and there's an experimental flag TrustFinalNonStaticFields for final instance fields.
> 
> What we miss right now for preserving correctness w.r.t. Reflection API is a way to track dependencies between final fields and nmethods and invalidate all nmethods which (possibly) embed changed final values.

Yes.  Well, we have a tracking bug for this.

— John