RFR: 8332895: Support interpolation for backgrounds and borders [v3]

John Hendrikx jhendrikx at openjdk.org
Sun Aug 4 22:54:36 UTC 2024


On Sun, 4 Aug 2024 14:51:47 GMT, Michael Strauß <mstrauss at openjdk.org> wrote:

>> Sorry if this came across as a bit negative, it's not intended in that way.  I just think optimizing too early has far more drawbacks than it has benefits. In part this is also a reflection on a lot of FX code; some code is clearly optimized from a C/C++ developer perspective which not only translates poorly to Java, but also makes the code "too dangerous" to modify.
>
> I don't understand this argument at all. Why would it be reasonable to _only_ build a more efficient architecture _if_ there's such a huge problem with the "naive" alternative that this alone would be readily apparent? If taken to the extreme, this line of thinking leads to death by a thousand papercuts, no single issue big enough that it's a problem all by itself, but taken together it amounts to sluggish performance.
> 
> Here's the thing: this code could reasonably be executed tens of thousands of times per second. Think of a complex user interface that changes its visual state for many nodes at once. What you seem to perceive as a micro-optimization is actually a part of an architecture that involves many types in the `Background` and `Border` area.
> 
> The basic idea is: if I can rely on `interpolate()` returning existing instances (either the start value or the end value) if nothing has changed, then the "parent object's" interpolate method can use an identity comparison (which is basically for free) to choose whether to return itself (or the end value), or return a new instance.
> 
> Let's look at the `Border.interpolate()` implementation:
> 
> @Override
> public Border interpolate(Border endValue, double t) {
>     Objects.requireNonNull(endValue, "endValue cannot be null");
>     if (t <= 0) return this;
>     if (t >= 1) return endValue;
> 
>     // interpolateListsPairwise() is implemented such that it returns existing list instances
>     // (i.e. the 'this.images' or 'endValue.images' arguments) as an indication that the result
>     // is shallow-equal to either of the input arguments. This allows us to very quickly detect
>     // if we can return 'this' or 'endValue' without allocating a new Border instance.
>     List<BorderImage> newImages = images == endValue.images ?
>         images : InterpolationUtils.interpolateListsPairwise(images, endValue.images, t);
> 
>     List<BorderStroke> newStrokes = strokes == endValue.strokes ?
>         strokes : InterpolationUtils.interpolateListsPairwise(strokes, endValue.strokes, t);
> 
>     if (images == newImages && strokes == newStrokes) {
>         return this;
>     }
> 
>     if (endValue.images == newImages && endValue.strokes == newStrokes) {
>         return endValue;
>     }
> 
>     return new Border(newStrokes, newImages);
> }
> 
> 
> `interpolateListsPairwise()` _already knows_ whether both lists are equal, which makes the `return this` and `return endValue` branches after the interpolation basically free. The alternative to this would be to either pointle...

I think you said that you didn't actually test this, so we don't know if this will perform better. It's based on an assumption of what Java will do, and on assumptions about what is "bad" and what is "good".  Without having tested this, you are optimizing blind.  I would be entirely satisfied if you had said "I tested this with a few thousand transitions, the allocation turned out to be a hot spot, so I optimized it, which is the reason this code looks a bit out of the ordinary".

I also would have liked a comment explaining any pitfalls introduced by the optimization; the use of `==` by callers is unusual, and a crucial piece of information when I see code like:

        if (secondList.isEmpty()) {
            return firstList.isEmpty() ? firstList : secondList;
        }

It would deserve at least an explanation that you're deliberately returning one of the passed-in arguments so that callers can do reference-equality checks.
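For illustration, a comment along these lines would capture that contract. This is a minimal, self-contained sketch; only the empty-list shortcut mirrors the snippet above, and the rest of the method body is a hypothetical stand-in, not the actual PR implementation:

```java
import java.util.List;

public class PairwiseInterpolation {

    /**
     * Pairwise-interpolates two lists (simplified stand-in for the PR's
     * helper; only the empty-list shortcut mirrors the real code).
     *
     * CONTRACT: if the result is shallow-equal to 'firstList' or 'secondList',
     * that exact instance is returned. Callers rely on this and use reference
     * equality ('==') on the result to detect "nothing changed" without
     * allocating a new object; returning an equal-but-distinct list would
     * silently break them.
     */
    static <T> List<T> interpolateListsPairwise(List<T> firstList, List<T> secondList, double t) {
        if (secondList.isEmpty()) {
            // The result is empty, i.e. shallow-equal to 'secondList' (or to
            // both inputs when 'firstList' is also empty), so return one of
            // the input instances rather than a new empty list.
            return firstList.isEmpty() ? firstList : secondList;
        }
        // Illustrative placeholder for the real element-by-element
        // interpolation: just hand back one of the inputs based on 't'.
        return t < 0.5 ? firstList : secondList;
    }

    public static void main(String[] args) {
        List<String> strokes = List.of("stroke");
        List<String> empty = List.of();
        // The caller's identity check works because the contract guarantees
        // an input instance comes back when nothing changed:
        System.out.println(interpolateListsPairwise(strokes, empty, 0.5) == empty); // prints "true"
    }
}
```

With a comment like that in place, the unusual `==` usage at the call sites is no longer a trap for the next maintainer.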

So you may have been wasting your time, and perhaps future maintainers' time as well, making the code less flexible and harder to understand, and possibly introducing new bugs and edge cases for no real reason.

Java's JIT is very finicky.  For example, an earlier comment asked if the first `if` case is the most common one, the assumption being that this would improve performance.  However, you can't make that assumption at all (I've done tests showing it's almost impossible to predict which "if order" will be faster on the JVM); re-arranging a function or "unrolling" it can have large negative consequences if it means the JIT will no longer inline your function.

Now, I will give you that this **probably** will be faster, but I'm making an assumption here... I really can't be sure.  That still leaves the explanatory comments that I think you should add, documenting the assumptions that were made for this optimization.

>  If taken to the extreme, this line of thinking leads to death by a thousand papercuts, no single issue big enough that it's a problem all by itself, but taken together it amounts to sluggish performance.

No; when hunting for performance problems, a profiler will readily show which part of the overall process is taking up the time, how often each part is called, and how much time it takes in total.  Put a profiler on a case with a lot of transitions and see where the time is actually being spent if you want to optimize it.

What optimization generally does is make code harder to understand (almost unavoidable when optimizing); the code will make more assumptions (which is where the performance is gained) and will often require more debugging time and more handling of edge cases (more complex code has more bugs and edge cases, requiring more tests).

When applied randomly to a code base (i.e. without profiling), optimization only makes things harder for no tangible benefit, and closes off easy improvements later (undoing optimizations to enable a better alternative has been the topic of several of my PRs).  So if you're going to make something more complicated, and harder to refactor and extend later (which can have far greater pay-offs), then I'm saying there should be a good reason for it.

-------------

PR Review Comment: https://git.openjdk.org/jfx/pull/1522#discussion_r1703410399

