[OpenJDK 2D-Dev] <AWT Dev> [8] Review request for 8011059 [macosx] Make JDK demos look perfect on retina displays

Jim Graham james.graham at oracle.com
Wed Nov 13 20:15:13 UTC 2013


Hi Sergey,

On 11/13/13 11:25 AM, Sergey Bylokhov wrote:
> On 13.11.2013 22:49, Jim Graham wrote:
>>
>>
>> On 11/13/13 4:33 AM, Sergey Bylokhov wrote:
>>> On 12.11.2013 23:43, Jim Graham wrote:
>>>> - The logic in using the transform is also a bit murky.  I think if
>>>> you set the scale on a retina display to exactly 1/2 it would use the
>>>> HiDPI version even though the scale was 1:1.  Since I support not
>>>> examining the transform there, I'm going to present this as another
>>>> reason why we should just base it on devScale, but the logic could be
>>>> fixed if we really plan to use the transform here.
>>> It also allows the user to use a transform and a hint to force the
>>> scaled version of the images to be drawn, even to a BufferedImage for
>>> example. Which can be useful in case of sequential drawing of
>>> BI -> BI -> retina.
>>
>> I think you misunderstood me here.  I'm not saying anything about the
>> BI->BI->retina case.  I'm saying that even on pure retina your logic
>> chooses the wrong scale because of mistakes made in examining the
>> transform state type.
> No, I understood. But the transform of the sg2d already includes devScale.

I fully realize that.  I am pointing out a problem that actually 
involves understanding that.  Please read more closely.

> This part of the fix was changed from version to version. We could use
> devScale only, or a transform if it is scaled, or any transform. If
> nobody objects, we could use any type of transform.

But you are only referencing the transform under certain conditions, 
and the conditions under which you ignore it are important.  I'm not 
trying to question your goals here - I understand what they are - I am 
trying to point out that you implemented *your* goals wrong.

It would be great if you only used devScale, or if you only used the 
transform, or if the logic you wrote had some sane combination of the 
two; but the code as written has logic errors that cause it to make 
inaccurate assumptions about the relationship of devScale to the 
current transform, and that leads to inconsistent behaviors that don't 
match anyone's expectations.  devScale may indeed be an optimization 
for "what is in the transform", but your logic doesn't really detect 
that case correctly.

Note that the current code has the following behaviors, some of which 
are inconsistent:

- If I am on a devScale=2 display and I am rendering the image with the 
default transform of 2x then I will get the @2x image.  This is good.

- If I am on a devScale=1 display and I am rendering the image with a 
scaled transform of 2x (the identical transform to the above), then I 
get the @2x image.  This is consistent with Mike Swingler's recent 
clarifications and is good.

- If I am on a devScale=1 display and I am rendering the image with a 
scaled transform of 2x, but rotated by 90 degrees, then I get the 
regular version of the image despite the fact that it is the same "size" 
as the previous case.  The image shouldn't suddenly shift like that if I 
just apply a rotation.  This is inconsistent.

- If I am on a devScale=1 display and I am rendering the image with an 
identity transform, then I get the regular image.  This is good.

- If I am on a devScale=2 display and I render with an identity 
transform (note that I would have to supply a downscale of 0.5 to get 
this), then I have the same transform as above, but I get the @2x 
image, in contrast with the exact same sizing in the previous example. 
This is inconsistent.

- If I am on a devScale=2 display and I downsize to < 0.5, then the 
current code will detect that I should get the regular image because the 
resulting scale when composited with devScale is <1.0 and is registered 
as a SCALE type.  This is good.

- If I am on a devScale=2 display and I downsize to <0.5, but I also 
rotate slightly, then I will get the @2x image downscaled even though I 
would have gotten the regular image had I not rotated.  This is 
inconsistent.

Basically, you've ignored all transformState values other than 
TRANSLATESCALE, and the holes in that logic will create many 
inconsistencies, only some of which I've outlined above.
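
To make the rotation cases concrete, here is a small standalone 
illustration (my own snippet, not code from the patch) showing that a 
90 degree rotation leaves the per-pixel stretch of a 2x scale 
unchanged, even though the transform is no longer classified as a 
simple scale:

import java.awt.geom.AffineTransform;

public class RotationConsistencyDemo {
    public static void main(String[] args) {
        AffineTransform scale2x = AffineTransform.getScaleInstance(2, 2);
        AffineTransform rotated = new AffineTransform(scale2x);
        rotated.rotate(Math.PI / 2);  // same 2x stretch, rotated 90 degrees

        // The classification changes...
        System.out.println(scale2x.getType());  // TYPE_UNIFORM_SCALE
        System.out.println(rotated.getType());  // includes TYPE_QUADRANT_ROTATION

        // ...but every image pixel is still stretched by exactly 2x:
        double unit[] = { 1, 0, 0, 1 };
        rotated.deltaTransform(unit, 0, unit, 0, 2);
        System.out.println(Math.hypot(unit[0], unit[1]));  // 2.0
        System.out.println(Math.hypot(unit[2], unit[3]));  // 2.0
    }
}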

> My point about BI1-BI2-retina is that the user cannot control devScale,
> and so cannot force BI1 to use the scaled version. But if we take into
> account the current transform of the SG2D, then the user can control
> that via the transform and hint.

I understand that you want to take the transform into account.

I understand that this is consistent with Mike Swingler's latest 
clarifications.

I am trying to help you actually do that, because your current code 
only does it in the most simplistic and naive way, one that is full of 
inconsistencies.  I am trying to help you do that *consistently*...

Also, in case you don't know how to extract a scale accurately from a 
GENERAL transform, the code is simple.  You want to figure out how much 
each image pixel is stretched.  That involves delta-transforming unit 
vectors in x and y:

// Unit vectors along x and y, packed as (x0, y0, x1, y1):
double coords[] = { 1, 0, 0, 1 };
// deltaTransform ignores translation and transforms both vectors in place:
tx.deltaTransform(coords, 0, coords, 0, 2);
// The length of each transformed unit vector is the stretch on that axis:
double xScale = Math.hypot(coords[0], coords[1]);
double yScale = Math.hypot(coords[2], coords[3]);

Obviously, for TRANSLATE_ONLY or IDENTITY transforms, xScale/yScale 
are 1 (regardless of devScale, since if the transform has devolved to a 
translate then the programmer scaled the devScale out of it already). 
You can avoid the calculation in that case (but don't default to 
devScale).

And for simple scales, the above simplifies to the tx.getScaleX() and 
tx.getScaleY() values as you coded (also ignoring devScale).

And even for the case of a GENERAL transform, the above can be 
simplified.  The tx.deltaTransform method in that case simply loads 
the array with:

{ m00, m10, m01, m11 }

and so the hypot methods are simply:

double xScale = Math.hypot(tx.getScaleX(), tx.getShearY());
double yScale = Math.hypot(tx.getShearX(), tx.getScaleY());
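
Putting those cases together, a consistent helper might look something 
like this (just a sketch, not the patch: it keys off the public 
AffineTransform.getType() flags rather than SG2D's internal 
transformState, and the method name is invented):

import java.awt.geom.AffineTransform;

// Sketch: per-axis pixel stretch of a transform, per the cases above.
static double[] pixelScales(AffineTransform tx) {
    int type = tx.getType();
    if (type == AffineTransform.TYPE_IDENTITY
            || type == AffineTransform.TYPE_TRANSLATION) {
        // Identity or translate-only: pixels map 1:1
        // (and we do not fall back to devScale here).
        return new double[] { 1.0, 1.0 };
    }
    int complex = AffineTransform.TYPE_GENERAL_ROTATION
                | AffineTransform.TYPE_QUADRANT_ROTATION
                | AffineTransform.TYPE_FLIP
                | AffineTransform.TYPE_GENERAL_TRANSFORM;
    if ((type & complex) == 0) {
        // Translate+scale only: the diagonal entries are the scales.
        return new double[] { tx.getScaleX(), tx.getScaleY() };
    }
    // General case: measure how far unit vectors in x and y stretch.
    return new double[] {
        Math.hypot(tx.getScaleX(), tx.getShearY()),
        Math.hypot(tx.getShearX(), tx.getScaleY())
    };
}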

It might be possible to create an optimization for quadrant-rotated 
images, but that case is rare enough, and the savings over the general 
code above small enough, that it isn't worth worrying about.  If we are 
really worried about performance in the general transform case then we 
could bake the above logic into the invalidateTransform() method at the 
cost of adding 2 new fields to SG2D.  I think the above formulas are 
fine given the complexity of rendering with rotated/sheared transforms.
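
If we did go that route, the caching could be as simple as something 
like the following (a sketch only; the field names are made up and 
this is not in the current patch; "transform" is the SG2D transform):

// Two hypothetical new fields on SunGraphics2D:
private double pixelScaleX = 1.0;
private double pixelScaleY = 1.0;

// At the end of invalidateTransform(), once "transform" is updated:
pixelScaleX = Math.hypot(transform.getScaleX(), transform.getShearY());
pixelScaleY = Math.hypot(transform.getShearX(), transform.getScaleY());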

Note that I didn't include any optimizations that use the value in 
devScale, as that would only come into play if we had some way to 
detect that "the transform is the default transform that was created 
from the devScale"; but the transform is immediately classified as 
TRANSLATESCALE on a retina drawable, and that could mean anything.

One inconsistency in the current patch is that devScale is used even 
in the case of an identity transform; but the fact that the transform 
is identity means that the programmer already reduced the default 
devScale out of it, so you shouldn't be using devScale there.  The 
other inconsistency is that you don't attempt to compute the pixel 
scale when the transform is "more than just a scale".

			...jim


