Running JDK pre-submit testing on GitHub

Jaikiran Pai jai.forums2013 at gmail.com
Tue Sep 29 14:26:13 UTC 2020


Hello Robin,

Just catching up on the emails and I notice that the jdk repo now has
this pre-submit feature integrated into master. Thank you for
implementing this.

A couple of inputs with my latest round of testing[1] against a personal
branch:

1. The Windows hotspot gc tier1 testing failed[2] due to "not enough
space on disk". I'm not sure whether new test additions in that area
have increased disk space usage or whether this is just an intermittent
issue; I hadn't noticed it in my previous testing.

2. The macOS one ran for a while and then was marked as failed[3]. The
logs for that run don't contain anything, which is typically a sign
that the VM's resources were exhausted. Similar issues have been
reported against the GitHub Actions runner project, where the logs for
the whole run are lost if the VM's resources are exhausted.

Finally, I notice that the job now makes the jtreg results available. I
downloaded them and checked a few basic things (like whether my test got
executed), and it all looks good. Thank you very much for implementing
this.

[1] https://github.com/jaikiran/jdk/actions/runs/278458879

[2] https://github.com/jaikiran/jdk/runs/1182195606?check_suite_focus=true

[3] https://github.com/jaikiran/jdk/runs/1182059371?check_suite_focus=true

-Jaikiran

On 24/09/20 4:57 pm, Robin Westberg wrote:
> Hi Jaikiran,
>
>> On 24 Sep 2020, at 13:03, Jaikiran Pai <jai.forums2013 at gmail.com> wrote:
>>
>> Hello Robin,
>>
>> On 24/09/20 1:57 pm, Robin Westberg wrote:
>>>
>>>> On a related note, I see that the job generates jtreg and the JDK
>>>> binaries as artifacts. Would it be possible to generate the jtreg test
>>>> data (logs and other test artifacts) as an output of the job? That might
>>>> sometimes help review some of these test runs easily.
>>> Right now the contents of the “test-results” folder generated by jtreg
>>> are published as an artifact only if a test job fails. Do you think it
>>> would be useful to publish them on success as well? We could perhaps make
>>> that configurable.
>>> that configurable.
>>>
>> I think it would be useful to publish them even when the job succeeds.
>> IMO, publishing these results is more useful than publishing the jtreg
>> or the JDK binaries (which I'm not sure anyone downloads often, but I'm
>> a relatively new contributor to the project and haven't used the old
>> "submit" repo enough to know all the use cases).
> I certainly agree that the jtreg and binary test bundles aren’t very useful by themselves, but unfortunately artifacts are the only way to reliably propagate binaries between the different jobs in a workflow. Ideally these artifacts could be removed when the job completes successfully, but the GitHub API does not (yet) allow this; you can only delete artifacts from a previous, completed job. (The submit repo didn’t publish these either.) I’d prefer that a clean run not publish any artifacts at all (perhaps just a single summary of all the results).
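[As an aside, the artifact hand-off between jobs that Robin describes can be sketched roughly like this; the job, artifact, and path names below are made up for illustration and are not taken from the actual jdk workflow file:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # ... checkout and build the JDK here ...
      - name: Persist the JDK bundle so downstream jobs can use it
        uses: actions/upload-artifact@v2
        with:
          name: jdk-bundle        # hypothetical artifact name
          path: build/bundles     # hypothetical path

  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Fetch the JDK bundle produced by the build job
        uses: actions/download-artifact@v2
        with:
          name: jdk-bundle
      # ... run jtreg against the downloaded bundle ...
```

Because each job runs on a fresh VM, the uploaded artifact is the only shared state between them, which is why it shows up in the artifact list even when nobody intends to download it.]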
>
>> For example, in my current PR run for which I triggered the job, I would
>> have liked to easily view some of the details (like the logs and other
>> details that jtreg provides) of the new test that I added as part of the
>> PR. As for making it configurable, wouldn't that require manual
>> intervention between the time you push a commit to a branch and the
>> time the job gets auto-triggered? If the test results don't produce a
>> huge zip file compared to the artifacts we already publish, IMO
>> making it configurable isn't necessary.
> The problem here isn’t really the artifact size, I think, but rather that each of the 21 test steps would need to generate its own artifact (a final step could perhaps combine them, but the intermediate artifacts would still be needed to propagate the results there). So the list of artifacts would become quite long.
>
> Having it configurable would not necessarily mean manual intervention, though: the submit job can already be customized by setting special variables on the repository. For example, we could add an option like `JDK_SUBMIT_ALWAYS_PUBLISH` which would be checked when determining whether or not to publish the test results. It could also be an option when triggering a job manually (which becomes available if the actions file is present on the master branch of the repository).
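[A publish step along the lines Robin suggests could look roughly like this in a GitHub Actions workflow. Only the `JDK_SUBMIT_ALWAYS_PUBLISH` name comes from the discussion above; plumbing it through a repository secret into the `env` context, and the job/path names, are assumptions for illustration:

```yaml
# Hypothetical test job fragment; not the actual jdk workflow.
env:
  # Expose the repository-level setting to step-level `if` expressions
  # (the secrets context can't be used in `if` directly).
  JDK_SUBMIT_ALWAYS_PUBLISH: ${{ secrets.JDK_SUBMIT_ALWAYS_PUBLISH }}

steps:
  # ... build and run the jtreg tests ...
  - name: Publish jtreg test results
    # Publish on failure, or always when the repository opts in.
    if: failure() || env.JDK_SUBMIT_ALWAYS_PUBLISH == 'true'
    uses: actions/upload-artifact@v2
    with:
      name: test-results
      path: test-results
```

With no variable set, this keeps today's behaviour (publish only on failure); setting the variable to 'true' publishes on success as well, with no manual step between pushing a commit and the auto-triggered run.]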
>
> Best regards,
> Robin
>> -Jaikiran
>>
>>



More information about the jdk-dev mailing list