HttpServer - issue in handling of "Expect: 100-continue"
Stewart Gebbie
sgebbie at gethos.net
Fri Mar 27 00:09:11 PDT 2009
Hi,
While I agree that RFC2616 is ambiguous, it seems to me that there is another
way to look at this. The intention of the 100-Continue response was, in a
sense, to split the server response into two parts so that the client could
receive early feedback. My point, however, is that a reasonable interpretation
is that the complete set of headers (i.e. those received with the 100-Continue
status response and those received with the final status response) should be
treated by the client as a single collection of headers when processing the
server response.
If one takes this view and combines it with the RFC2616 notes in section 4.4
on message length, then it seems reasonable that libcURL complains. More
specifically, point 3 in section 4.4 reads:
3. If a Content-Length header field (section 14.13) is present, its decimal
value in OCTETs represents both the entity-length and the transfer-length.
The Content-Length header field MUST NOT be sent if these two lengths are
different (i.e., if a Transfer-Encoding header field is present). If a
message is received with both a Transfer-Encoding header field and a
Content-Length header field, the latter MUST be ignored.
Thus, when viewing the 100-Continue headers together with the final status
headers as a single set split in two, my summary would be that both the
HttpServer and libcURL are possibly incorrect:
- HttpServer: "Content-Length header field MUST NOT be sent if these two
lengths are different"
- libcURL: "If a message is received with both a Transfer-Encoding header
field and a Content-Length header field, the latter MUST be ignored."
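To make that interpretation concrete, a client taking this view would merge the
headers received with the 100-Continue response into the final headers and only
then apply point 3 above. A rough sketch in Java (class, method and variable
names are my own and purely illustrative; this is not libcURL's or the JDK's
actual logic):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative only: treat the interim (100-Continue) headers and the
    // final headers as one collection, then apply RFC2616 section 4.4
    // point 3: if Transfer-Encoding is present, Content-Length is ignored.
    // (Real code would also compare header names case-insensitively.)
    final class ResponseFraming {

        static boolean usesChunkedFraming(Map<String, List<String>> interimHeaders,
                                          Map<String, List<String>> finalHeaders) {
            Map<String, List<String>> merged =
                    new HashMap<String, List<String>>(interimHeaders);
            merged.putAll(finalHeaders); // final-response values win on conflict

            List<String> te = merged.get("Transfer-Encoding");
            // Under this reading the "Content-Length: 0" sent with the interim
            // response is simply ignored once Transfer-Encoding appears.
            return te != null && te.contains("chunked");
        }
    }

Under such a merged view, libcURL would ignore the zero Content-Length rather
than failing with CURLE_PARTIAL_FILE.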
On the server side it might be better to follow the precedent set by other
HTTP servers. In the case of Apache and the Microsoft-HTTPAPI (.NET) server,
just '100-Continue' is sent, with no other headers. I'm not sure how this is
handled by other servers.
So the right thing might be for both the cURL client and the Java HttpServer
to change so that these two components can interoperate.
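For what it's worth, here is a minimal handler that reproduces the scenario from
the report below (the class name, port and response body are hypothetical; this
is only a sketch): passing 0 as the response length to sendResponseHeaders
selects chunked transfer encoding, so a client that sends "Expect: 100-continue"
receives the interim response with "Content-Length: 0" followed by a chunked
final response.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpHandler;
    import com.sun.net.httpserver.HttpServer;

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // Sketch of a server that always replies with a chunked body.
    public class ContinueRepro {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
            server.createContext("/", new HttpHandler() {
                public void handle(HttpExchange ex) throws IOException {
                    // Drain the request body (sent by the client after the
                    // server's 100-Continue) before replying.
                    InputStream in = ex.getRequestBody();
                    while (in.read() != -1) {
                        // discard
                    }
                    // A response length of 0 means "length unknown", so the
                    // final response uses "Transfer-encoding: chunked".
                    ex.sendResponseHeaders(200, 0);
                    OutputStream out = ex.getResponseBody();
                    out.write("hello\n".getBytes("UTF-8"));
                    out.close();
                }
            });
            server.start();
        }
    }

Posting a body with curl to a server like this is what produces the error 18
exchange described in the report below.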
Regards,
Stewart.
--
Stewart Gebbie <sgebbie at gethos.net>
(C) +27 84 738 2899
On Wed, Mar 25, 2009 at 10:42:00AM +0000, Michael McMahon wrote:
> Hi Stewart,
>
> My reading of RFC2616 on this question suggests
> that the server isn't wrong sending a Content-Length: 0
> header with the initial 100-Continue response, regardless
> of whether or not the final response will contain a chunked
> response body.
>
> The 100 Continue is a normal response, structured
> in the same way as other responses, i.e. it can include
> other header fields. In 14.18, the RFC mentions that
> the Date header is optional in 100-Continue responses.
> So, while it's not explicit, this suggests that other headers
> like Content-Length are also allowable. In 14.13, it also
> mentions that a Content-Length value of zero is valid.
>
> Like I said, that is my reading of the spec, at least.
>
> Regards,
> Michael.
>
> Stewart Gebbie wrote:
>> Hi,
>>
>> I would like to report a possible bug in the com.sun.net.httpserver.HttpServer
>> implementation.
>>
>> In short, this relates to the handling of "Expect: 100-continue" in the case
>> where the server code intends to reply using "Transfer-encoding: chunked".
>>
>> ex.sendResponseHeaders(HttpURLConnection.HTTP_OK, 0);
>>
>> The problem arises because the response by the server (as implemented in
>> sun/net/httpserver/ServerImpl.java) results in:
>>
>> HTTP/1.1 100 Continue
>> Content-Length: 0
>>
>> rather than just:
>>
>> HTTP/1.1 100 Continue
>>
>>
>> This becomes an issue when interacting with cURL (curl-7.19.0
>> http://curl.haxx.se/) as the client, since cURL first sees the content
>> length of 0, and then subsequently sees the server requesting chunked
>> transfer encoding. cURL's "curl_easy_perform" function then fails with
>> the error message (see conversation A below):
>>
>> Code 18: "transfer closed with outstanding read data remaining"
>>
>> the libcurl-errors man page explains this error as follows:
>>
>> "CURLE_PARTIAL_FILE (18)
>> A file transfer was shorter or larger than expected. This
>> hap‐
>> pens when the server first reports an expected transfer
>> size,
>> and then delivers data that doesn't match the previously
>> given
>> size."
>>
>> If I instead change the server to use a fixed-length response (see
>> conversation B below):
>>
>> ex.sendResponseHeaders(HttpURLConnection.HTTP_OK,
>>     DEFAULT_RESPONSE.length()); // length should be 0 for chunked transfer
>>
>> cURL is happy enough to continue (even though in this case it receives
>> two "Content-Length" headers, the first with a 0 length and the second
>> with the actual data length).
>>
>> To more clearly see what is happening I have included two abbreviated HTTP
>> conversations below.
>>
>> I realise that the other alternative is that cURL ignore the
>> Content-Length in the case of receiving a "Transfer-encoding" header.
>> This seems to be implied by the HTTP RFC:
>>
>> http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.4
>>
>> "3. If a Content-Length header field (section 14.13) is present, its
>> decimal
>> value in OCTETs represents both the entity-length and the
>> transfer-length.
>> The Content-Length header field MUST NOT be sent if these two lengths are
>> different (i.e., if a Transfer-Encoding header field is present). If a
>> message is received with both a Transfer-Encoding header field and a
>> Content-Length header field, the latter MUST be ignored."
>>
>> I simply do not know the requirements of the HTTP protocol well enough to
>> be sure of what the right resolution is. However, to me, it still seems
>> reasonable that, in either case, the "Content-Length" header should not be
>> included with the "100 Continue" response.
>>
>> Please could somebody look into this issue.
>>
>> Thanks.
>>
>> Regards,
>> Stewart.
>>
>>
>
>
>