RFR: 8339974: Graphics2D.drawString doesn't always work with Font derived from AffineTransform [v2]
Daniel Gredler
duke at openjdk.org
Fri Oct 4 21:44:38 UTC 2024
On Fri, 4 Oct 2024 21:10:58 GMT, Phil Race <prr at openjdk.org> wrote:
>> Daniel Gredler has updated the pull request incrementally with one additional commit since the last revision:
>>
>> Add bug ID and summary to test classes
>
> I've run it through all our automated testing and it looks OK - by which I mean the existing tests all pass.
> I would still like to do some manual testing because, as you can tell from the gyrations you had to do
> in order to write these tests, visual verification is more reliable for many transform-related bugs.
>
> Of the 3 times I ran it, I had one failure, on one platform (Windows Server 2016):
> java.lang.RuntimeException: No x-edge at center: scale=1, quadrants=2
> at RotatedScaledFontTest.test(RotatedScaledFontTest.java:74)
> at RotatedScaledFontTest.main(RotatedScaledFontTest.java:39)
> at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
> at java.base/java.lang.reflect.Method.invoke(Method.java:573)
> at com.sun.javatest.regtest.agent.MainWrapper$MainTask.run(MainWrapper.java:138)
> at java.base/java.lang.Thread.run(Thread.java:1576)
>
> I would have expected complete consistency, but perhaps that older system uses an older version of
> the Arial font and it just tripped over your "roughlyEqual" threshold?
> I'm not sure I believe that, though... so a bit of an odd one.
>
> Perhaps you can add some extra tolerance and print out the values that caused the failure.
@prrace Thanks for the update. Not a problem; I'll have a look at making the requested changes.
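For the extra tolerance plus diagnostics, I'm picturing something roughly like the sketch below (the class and method names, message format, and tolerance value are just illustrative, not the actual RotatedScaledFontTest code):

```java
// Illustrative only: names, message format, and tolerance are hypothetical,
// not the actual RotatedScaledFontTest code.
public class ToleranceCheckSketch {

    public static void main(String[] args) {
        // Example: fails and reports the offending values if the measured
        // x-coordinate is more than 2 pixels away from the expected one.
        checkRoughlyEqual("x-edge at center", 100, 103, 2, 1, 2);
    }

    static void checkRoughlyEqual(String what, int expected, int actual,
                                  int tolerance, int scale, int quadrants) {
        if (Math.abs(expected - actual) > tolerance) {
            throw new RuntimeException(what + ": expected=" + expected
                    + " actual=" + actual + " tolerance=" + tolerance
                    + " scale=" + scale + " quadrants=" + quadrants);
        }
    }
}
```

That way, if the Windows Server 2016 failure shows up again, the log would tell us exactly how far off the measurement was.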
Regarding the test failure, did it succeed twice on Windows Server 2016, then? Very odd, if so. I'm wondering if there's some sort of test interaction. Do the JDK tests run in a deterministic order, or does the test order have a random element to it? Also, do the tests run sequentially, or concurrently?
Definitely agree on the limitations of automated tests in this area... one approach I've seen is to commit a manually-reviewed output image and then run automated pixel-by-pixel checks against it. That ensures the automated tests are checking against high-quality expectations, which is very nice, but it has the downside of requiring re-review whenever some small under-the-covers change alters even one output pixel (e.g. an unrelated HarfBuzz upgrade).
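As a very rough sketch of that golden-image approach (the file name, image size, and rendering details here are purely hypothetical, not anything from this PR):

```java
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Rough sketch of a golden-image check; "golden.png" and the rendering
// details are hypothetical.
public class GoldenImageCheck {

    public static void main(String[] args) throws Exception {
        BufferedImage expected = ImageIO.read(new File("golden.png")); // manually-reviewed image
        BufferedImage actual = render();
        if (expected.getWidth() != actual.getWidth()
                || expected.getHeight() != actual.getHeight()) {
            throw new RuntimeException("Image size mismatch");
        }
        for (int y = 0; y < expected.getHeight(); y++) {
            for (int x = 0; x < expected.getWidth(); x++) {
                if (expected.getRGB(x, y) != actual.getRGB(x, y)) {
                    throw new RuntimeException("Pixel mismatch at (" + x + ", " + y + ")");
                }
            }
        }
    }

    private static BufferedImage render() {
        BufferedImage image = new BufferedImage(400, 400, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2d = image.createGraphics();
        g2d.setColor(Color.WHITE);
        g2d.fillRect(0, 0, image.getWidth(), image.getHeight());
        // Draw with a font derived from an AffineTransform (rotation + scale),
        // i.e. the kind of scenario this PR is exercising.
        AffineTransform at = AffineTransform.getRotateInstance(Math.PI / 2);
        at.scale(2, 2);
        Font font = new Font(Font.SANS_SERIF, Font.PLAIN, 20).deriveFont(at);
        g2d.setFont(font);
        g2d.setColor(Color.BLACK);
        g2d.drawString("TEST", 200, 200);
        g2d.dispose();
        return image;
    }
}
```

The exact-equality comparison is what forces the re-review problem I mentioned: any change that shifts a single pixel means regenerating and re-reviewing the golden image.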
-------------
PR Comment: https://git.openjdk.org/jdk/pull/20993#issuecomment-2394707909