Performance patch for AIX system API support of sun.nio.ch.FileChannelImpl.transferTo0()

Deven youdwei at linux.vnet.ibm.com
Wed Dec 14 22:06:54 PST 2011


On 12/15/2011 02:04 PM, Deven wrote:
> On 10/21/2011 01:15 AM, Neil Richards wrote:
>> On Thu, 2011-10-20 at 09:07 +0100, Alan Bateman wrote:
>>> Charles Lee wrote:
>>>> Thanks Alan. Any requirements for performance patches to contribute?
>>>> Test case? Test result?
>>> Speaking generally, when working on a performance fix (or any fix for
>>> that matter) it's always good to understand the test coverage for the
>>> area that you are changing. The math area is a good example where
>>> performance work in JDK 7 required extending the test coverage to give
>>> confidence that performance work didn't cause any correctness issues.
>>> There are examples in other areas too. When fixing bugs then the rule
>>> is that tests should be added (or existing tests updated) to cover the
>>> scenario that is being fixed. However there are cases, including
>>> performance, where it's not feasible or doesn't make sense to write an
>>> automated test. Tests need to run quickly so we wouldn't check in a
>>> test that takes an hour to demonstrate a bug for example. Also it's
>>> worth keeping existing tests in mind to avoid duplicating existing
>>> tests. With performance tests then it's as I was saying in the first
>>> mail, the jdk repository isn't really the best place for them in my
>>> view. It's fine to push tests that print results that we can run
>>> manually in a controlled environment (we have several examples of
>>> this) but we can't have tests that fail if they think that the
>>> performance isn't as expected. At some point it would be great if
>>> OpenJDK could get a repository and framework for micro benchmarks and
>>> maybe larger performance tests but that's a bigger discussion and not
>>> really something for this list.
>>>
>>> -Alan.
>>>
>>>
>> Hi Charles,
>> As Alan says, it can be quite valuable to provide functional tests which
>> enhance the coverage of the existing tests to include the area whose
>> performance is being improved, as this helps to demonstrate that the
>> suggested change does not affect the function.
>>
>> I think providing micro benchmark tests (where possible) can also be
>> valuable to demonstrate the performance benefit of the change (with
>> figures gathered from them given as part of the contribution).
>>
>> I don't think such benchmark tests should have "pass/fail" criteria -
>> they should merely report the result of their measurements (e.g.
>> transactions per second, or whatever it happens to be).
>> This also means it makes no sense to have them run as part of the jtreg
>> functional tests, so they shouldn't use the '@test' tag.
>>
>> Whilst it would be fabulous to have a repository and/or framework for
>> such micro benchmarks to use, as things currently stand, it seems to me
>> that storing such tests somewhere under /test in the jdk repository (for
>> jdk code changes, that is) is about the best that can be done under the
>> existing conventions.
>>
>> (It would be a shame to miss the opportunity to capture the criteria by
>> which a performance change was made merely because an overarching micro
>> benchmark framework and/or repository was not yet available).
>>
>> ----
>>
>> As to the specific change being proposed, for ease of review I've
>> uploaded a webrev [1], marginally modified to address some spacing /
>> alignment issues, and to follow the '#elif defined(<PLATFORM>)'
>> convention that protects the code for the other platforms.
>>
>> In review, the comment at lines 225-226 seems incomplete to me (under
>> what circumstances does send_file() return -1?).
>>
>> Also, I wonder if it's really intended that the method returns
>> IOS_UNSUPPORTED_CASE if the return value is not -1 and 'sf_iobuf.bytes_sent <= 0'?
>>
>> (This behaviour is more apparent with the change from #ifdef/#endif to
>> #elif/#else. It looked a little like it might be accidental, rather than
>> considered behaviour).
>>
>> Also, what's the semantic difference between returning
>> 'IOS_UNSUPPORTED' (line 210) and 'IOS_UNSUPPORTED_CASE' (lines 214, 231,
>> 241) ?
>> (I don't see 'IOS_UNSUPPORTED' being used by the other platforms, so
>> wonder if this is a typo).
>>
>> ----
>>
>> For the testcase code given, I'd be tempted to have the client and
>> server code run as two threads within the same process (unless there's
>> some good reason, which I've missed, why this wouldn't drive the code
>> properly).
>>
>> ----
>>
>> Hope this helps,
>> Regards, Neil
>>
>> [1] http://cr.openjdk.java.net/~ngmr/ojdk-199/webrev.00/index.html
>>
> Hi Neil,
>
> I have refined the patch according to your suggestion and uploaded a 
> webrev [1]. I have also refined the test case to have the server and 
> client run as two threads within the same process.
>
> Please review them.
>
> Thanks a lot!
>
> Best Regards, Deven
>
> [1] http://cr.openjdk.java.net/~luchsh/aix_send_file/

The test case is attached.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: TransferToTest.java
Type: text/x-java
Size: 5283 bytes
Desc: not available
URL: http://mail.openjdk.java.net/pipermail/nio-dev/attachments/20111215/7ac68f53/TransferToTest.java 
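The attached TransferToTest.java is not reproduced inline. For readers without access to the attachment, the following is a rough sketch of the server-and-client-as-two-threads structure discussed above (class name, buffer sizes, and helper method are my own illustrative choices, not the attachment's contents): the server thread reads from a loopback socket while the main thread drives FileChannel.transferTo(), which is the path that reaches transferTo0().

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TransferToThreadsSketch {

    // Sends 'file' over a loopback socket via transferTo(); the server runs
    // as a second thread in the same process. Returns bytes the server read.
    public static long roundTrip(Path file) throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            final long[] received = {0};
            Thread reader = new Thread(() -> {
                try (SocketChannel peer = server.accept()) {
                    ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
                    int n;
                    while ((n = peer.read(buf)) >= 0) {
                        received[0] += n;
                        buf.clear();
                    }
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
            reader.start();
            try (SocketChannel client = SocketChannel.open(server.getLocalAddress());
                 FileChannel in = FileChannel.open(file, StandardOpenOption.READ)) {
                long size = in.size(), sent = 0;
                while (sent < size) {
                    // This call is what exercises the native transferTo0() path.
                    sent += in.transferTo(sent, size - sent, client);
                }
            } // closing the client signals EOF to the reader thread
            reader.join();
            return received[0];
        }
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("xfer", ".dat");
        try {
            Files.write(f, new byte[1 << 20]);   // 1 MB payload
            System.out.println("server received " + roundTrip(f) + " bytes");
        } finally {
            Files.delete(f);
        }
    }
}
```

Running both ends in one process, as Neil suggested, keeps the test self-contained and avoids port/process coordination between separate JVMs.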
