Early Access Build Test Results

Balchandra Vaidya balchandra.vaidya at oracle.com
Fri Jan 11 07:13:43 PST 2013

On 01/10/13 11:57 AM, Rory O'Donnell Oracle, Dublin Ireland wrote:
> Hi Jon, Joe
> Thank you both for your feedback.
> We will look to post summary.txt in the coming weeks.
> Secondly, we will look at archiving a summary.txt per build.
> Balchandra will look at the jtreg options and come back to you on that.
> Will update you on our progress.
> Rgds, Rory
> On 10/01/2013 00:21, Joe Darcy wrote:
>> On 01/09/2013 02:03 PM, Jonathan Gibbons wrote:
>>> Rory,
>>> It is good to see that we are now able to publish Early Access Build 
>>> Test Results.
>>> What is being done to address the test failures that you report? 
>>> Ideally, test failures should have corresponding bugs filed on 
>>> JBS/bugs.sun.com.
>>> It would also be good to see the complete list of tests that did not 
>>> pass for a build. Right now, the numbers under Failed and Error do 
>>> not match your list of "known issues".  How about automatically 
>>> publishing tests listed in JTreport/text/summary.txt that are 
>>> reported as "Failed." or "Error."?
>>> -- Jon
>> Hello,
>> I agree it is very welcome to see the regression test results for 
>> builds.
>> I also agree with Jon that it would be very helpful to see the full 
>> summary.txt output files for the test runs.  Such files would allow 
>> developers to compare the test results of their own builds to the 
>> recent promoted builds.  As a point for comparison, when I was 
>> release manager of OpenJDK 6, I published the summary.txt files as 
>> well as the jtdiff output; for a few examples see:
>> https://blogs.oracle.com/darcy/entry/openjdk_6_b22_regression_test
>> https://blogs.oracle.com/darcy/entry/openjdk_6_b21_regression_test
>> https://blogs.oracle.com/darcy/entry/openjdk_6_b20_regression_test
>> https://blogs.oracle.com/darcy/entry/openjdk_6_b19_regression_test
>> It would be useful to have persistent per-build pages to serve 
>> as an archive of test results over time.
>> Finally, how do the jtreg options used to generate the reported 
>> results compare to the jtreg options used in the "make test" target?

Good question.  Unfortunately, I could not get consistent passes when I ran
the "make jdk_all" or "make jdk_default" targets.  Instead, I ran individual
targets (jdk_security1, jdk_rmi, ...) separately and then merged the results.
However, my chosen targets covered only ~3600 tests.
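For what it is worth, the merging step can be scripted; a minimal sketch, assuming the per-target summary.txt files have already been produced (the function name and file layout below are illustrative, not part of the actual setup):

```shell
#!/bin/sh
# merge_summaries OUT FILE... : concatenate per-target jtreg summary.txt
# files into one sorted summary and report pass/fail counts.
# (Sketch only; real JTreport paths depend on the build layout.)
merge_summaries() {
    out=$1; shift
    cat "$@" | sort > "$out"
    printf 'passed=%s failed=%s\n' \
        "$(grep -c 'Passed\.' "$out")" \
        "$(grep -c 'Failed\.' "$out")"
}
```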

Then I used jtreg directly to run the tests under the following directories;
with that approach, ~4000 tests now pass.
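On Jon's point about publishing the tests that did not pass, those can be pulled straight out of the JTreport/text/summary.txt file; a rough sketch (the helper name is mine, not an existing tool):

```shell
#!/bin/sh
# failed_tests SUMMARY : print only the tests that did not pass,
# i.e. lines jtreg marks "Failed." or "Error." in text/summary.txt.
failed_tests() {
    grep -E 'Failed\.|Error\.' "$1"
}
```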

The above dir.list does not include the awt, 2d, and some swing test
directories, for which I could not get consistent results.

The caveat here (the same as with choosing separate make targets) is that I
may miss tests when a new test directory is added to the testbase!  Any
suggestions on how to avoid that are welcome.

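One way to reduce that risk might be to periodically diff the chosen dir.list against the top-level directories actually present in the testbase; a sketch, with a hypothetical helper and layout:

```shell
#!/bin/sh
# uncovered_dirs TESTBASE DIRLIST : print top-level test directories
# under TESTBASE that are missing from DIRLIST (one name per line).
# (Hypothetical helper; the real testbase layout may differ.)
uncovered_dirs() {
    testbase=$1; dirlist=$2
    for d in "$testbase"/*/; do
        name=$(basename "$d")
        grep -qx "$name" "$dirlist" || echo "$name"
    done
}
```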

>> Thanks,
>> -Joe

More information about the quality-discuss mailing list