RFR: 8254129: IR Test Framework to support regex-based matching on the IR in JTreg compiler tests [v2]

Christian Hagedorn chagedorn at openjdk.java.net
Tue Apr 20 13:41:09 UTC 2021


On Mon, 19 Apr 2021 16:59:45 GMT, Igor Ignatyev <iignatyev at openjdk.org> wrote:

>> Maybe first some general background on why we have a whitelist/blacklist in the first place. When a user writes an `@IR` rule, we do not want him to have to think about every possible VM flag combination (which he is not specifically using in his test, for example with `addFlags`) that could break his `@IR` rule, and then to restrict the `@IR` rule with `IR::applyIfXX()` for those flags.
>> 
>> I agree that the whitelist is probably not complete and could be improved (but I think we should not whitelist different GCs, as these affect the barrier code in the IR). Regarding whitelist vs. blacklist, I discussed it with @TobiHartmann. We eventually decided to use a whitelist, which seemed easier to maintain and is probably a lot shorter than a blacklist. In addition, if a new flag is introduced or removed, we most likely do not need to worry about updating the whitelist, as the (currently) whitelisted flags will most likely remain unchanged. This first whitelist version is also just a best-guess solution. We might still miss some flags that do not affect the IR, but we can be confident that we will not run into surprising failures due to forgetting to blacklist a particular flag. But this is open for discussion, of course. It would be interesting to hear what people think about it.
>
> I'll need to think about it a bit more, but my first impression after reading your comment is that we don't actually want to allow any flags, so we might be better off just using `@requires vm.flagless` or reusing the underlying code to check that we don't have any flags that can "significantly" change behavior.

Our idea was to use the whitelist (or blacklist) only to restrict the IR verification that is done after the test VM terminates. The framework should still run each test with any flag combination, which has proven quite useful in Valhalla for finding bugs.
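
To make that concrete, here is a minimal sketch of such an `@IR` rule restricted with `applyIf` (annotation and attribute names as used in this PR; the package, `IRNode.LOAD`, the flag, and the test body are only illustrative):

import jdk.test.lib.hotspot.ir_framework.*;

public class SimpleIRTest {
    public static void main(String[] args) {
        TestFramework.run(); // run the @Test methods of the calling class in a separate test VM
    }

    @Test
    // The rule is only verified if the test VM runs with -XX:+UseCompressedOops;
    // the test method itself always executes, and non-whitelisted flags passed to
    // the framework only disable IR verification, not the test run.
    @IR(failOn = IRNode.LOAD, applyIf = {"UseCompressedOops", "true"})
    public int test() {
        return 42;
    }
}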

>> I think we should keep it for the cases where a user wants a more specific setup for his tests in the calling test class. For example, if he wants to define a new default warm-up for all of his tests (using `setDefaultWarmup`) or if he needs to combine different options for which there is no `runXX` method (helper classes, scenarios, and/or flags). In this case, he needs to create a `TestFramework()` builder and then call the different builder methods. We could argue that a single constructor, in which the test class needs to be specified, is enough. But I think most of the time the user wants to test the calling class, for which this additional constructor is quite useful.
>
>> But I think most of the time, the user wants to test the calling class for which this additional constructor is quite useful.
> 
> I concur that in most cases users would want to use the caller as the `testClass`, yet we already have `run` and friends to cover that (most common) use case. Users who need to construct a fancier `TestFramework` most probably won't be doing it in their `testClass`, and if they do, the overhead of writing `new TestFramework(getClass())` is just 10 keystrokes and is negligible (given the assumption of a more complicated use case).

That is true. The entire test will probably look more complex in such a case, so it seems reasonable to provide only the constructor with a class argument. However, if we are going to remove most of the (current) `runXX()` methods as discussed in a comment below, then keeping this parameterless constructor might be justified again? Anyhow, I'm fine with both. (Nit: when used from `main()`, we would need to use `MyTestClass.class` instead of `getClass()`.)
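
For reference, a sketch of what such a more specific setup could look like with the class-argument constructor (builder method names as mentioned above; that the methods chain and that `start()` triggers the run are assumptions):

import jdk.test.lib.hotspot.ir_framework.TestFramework;

public class CustomSetupTest {
    public static void main(String[] args) {
        // From main(), the test class has to be named explicitly (CustomSetupTest.class),
        // since getClass() is not available in a static context.
        new TestFramework(CustomSetupTest.class)
                .addFlags("-XX:+UseCompressedOops")  // flags with no dedicated runXX() variant
                .setDefaultWarmup(5000)              // new default warm-up for all tests of this class
                .start();
    }
}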

>> I think you are probably right. I added this builder approach later during the development of the framework. It might be better to keep the `TestFramework` interface simple(r). However, I will wait for others to comment on that before changing/removing all of these. 
>> 
>> If we're going to simplify it, I suggest keeping `run()` and maybe also `runWithFlags()` as the (probably?) most common use cases.
>
>> I suggest keeping run() and maybe also runWithFlags() as the (probably?) most common use cases.
> 
> I guess you are right, `run()` and `runWithFlags(String[])` will be the most commonly used ones.

Sounds good. Will do these changes if others agree.
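
For completeness, the two entry points that would remain (signatures assumed from the discussion above):

import jdk.test.lib.hotspot.ir_framework.TestFramework;

public class EntryPoints {
    public static void main(String[] args) {
        // Default setup, using the calling class as the test class:
        TestFramework.run();
        // Or, to pass additional flags to the test VM:
        // TestFramework.runWithFlags("-XX:-TieredCompilation");
    }
}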

>> I will address all `\n` in a separate commit as there are a lot of them. Could I also just use `System.lineSeparator()` everywhere I'm currently using `\n`, like `System.err.println(System.lineSeparator() + e.getExceptionInfo())`? That might be easier when I'm using `\n` with a `StringBuilder`, for example.
>
> Sure, you can use `System.lineSeparator()`; it's just a matter of personal choice/style.

I fixed them.

>> I guess it's not necessary. Removed it.
>
> I noticed that you have leading `\n` in other exceptions; do you plan to remove it from there as well?

Sometimes I found the resulting format of the error message nicer with the additional new lines. For example, the exception at [L940](https://github.com/openjdk/jdk/blob/a563fb33ead7d2c8c9a5266d7187c956327281b2/test/lib/jdk/test/lib/hotspot/ir_framework/TestFramework.java#L940):


JavaTest Message: Test threw exception: jdk.test.lib.hotspot.ir_framework.TestFormatException: 

Violations (1)
--------------
 - Cannot use explicit compile command annotations ...

vs. no new lines:

JavaTest Message: Test threw exception: jdk.test.lib.hotspot.ir_framework.TestFormatException: Violations (1)
--------------
 - Cannot use explicit compile command annotations ...

I will check the other exceptions again to see whether they really need the new lines.
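
For illustration, a hypothetical helper showing how such a message could be built with `System.lineSeparator()` (the helper name is made up and the framework's actual reporting code may differ; `TestFormatException` is assumed to be unchecked with a `String` constructor):

import java.util.List;
import jdk.test.lib.hotspot.ir_framework.TestFormatException;

class ViolationReport {
    static void reportViolations(List<String> violations) {
        String nl = System.lineSeparator();
        StringBuilder msg = new StringBuilder();
        msg.append(nl).append(nl) // end the "TestFormatException: " line and add a blank line
           .append("Violations (").append(violations.size()).append(')').append(nl)
           .append("--------------");
        violations.forEach(v -> msg.append(nl).append(" - ").append(v));
        throw new TestFormatException(msg.toString());
    }
}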

>> I wanted to distinguish the following cases (but I'm open to use a different approach):
>> 
>> - `TestFormat::check` (using `TestFormatException`): The format of a test/helper class does not conform to the format the framework expects. The user needs to fix this (for example, a wrong annotation was used). I report these kinds of failures before actually running any of the tests [here](https://github.com/openjdk/jdk/blob/7ed789dc98691d7c7fc2295e045a9f54b9fa6277/test/lib/jdk/test/lib/hotspot/ir_framework/TestFrameworkExecution.java#L265).
>> - `TestRun::check` (using `TestRunException`): Thrown while the framework is [executing a test](https://github.com/openjdk/jdk/blob/7ed789dc98691d7c7fc2295e045a9f54b9fa6277/test/lib/jdk/test/lib/hotspot/ir_framework/TestFrameworkExecution.java#L799), there is nothing wrong with the format of a test.
>> - `TestFramework::check` (using `TestFrameworkException`): If such a check fails, there is a bug in the framework itself (e.g. a failed internal framework assertion). The user cannot really do anything about it without fixing the framework itself.
>> 
>> I have to double-check that I'm using the right kind of exception everywhere.
>
> I understand that you want to use different exception types to distinguish different kinds of failures; I just don't see much benefit in having `::check` and `::fail` methods, as they are just `if (p) throw new X(m)` or `throw new X(m)`.
> 
> If, for example, you want `TestFrameworkException` to always have `Internal Test Framework exception - please file a bug`, you can prepend it in `TestFrameworkException::<init>`.

I see your point. Having `fail` indeed does not add much benefit compared to just having `throw new XX` for `TestFramework` and `TestRun`. I replaced these `fail` methods with direct `throw new XX` statements and only kept the `check` methods. For `TestFormat`, I think it makes more sense to keep the `check` and `fail` methods, as I'm also logging the failures and sometimes not even throwing an exception directly.
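
A hypothetical sketch of that split, keeping a `check` method while prepending the fixed prefix once in the exception constructor (class and helper names here are only illustrative, not the actual framework code):

class TestFrameworkException extends RuntimeException {
    TestFrameworkException(String message) {
        // The fixed prefix is added once here instead of at every throw site.
        super("Internal Test Framework exception - please file a bug:" + System.lineSeparator() + message);
    }
}

final class FrameworkChecks { // illustrative holder for the remaining check() helper
    static void check(boolean condition, String failureMessage) {
        if (!condition) {
            throw new TestFrameworkException(failureMessage);
        }
    }
    // Former fail() call sites now simply use: throw new TestFrameworkException("...");
}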

-------------

PR: https://git.openjdk.java.net/jdk/pull/3508

