RFR: 8371903: HttpClient: improve handling of HTTP/2 GOAWAY frames with error code [v8]
EunHyunsu
duke at openjdk.org
Fri Dec 19 08:43:03 UTC 2025
On Fri, 19 Dec 2025 08:25:47 GMT, Daniel Jeliński <djelinski at openjdk.org> wrote:
>> Thank you @djelinski for finding this. I've added `e.printStackTrace()` to make the exception visible in test output.
>>
>> If I understand correctly, the issue might be a race between `streams.forEach()` in `handleGoAwayWithError()` and other threads adding new streams. If streams 3 and 5 get added while forEach is running (or just after it finishes), they would miss the `closeAsUnprocessed()` call and get closed with the GOAWAY cause instead. Is that the scenario you're thinking of?
>>
>> The printStackTrace should help confirm what's actually happening. Let me know if there's anything else I should add.
>
> Thanks @ehs208 for making the change. That's exactly the scenario I had in mind.
>
> I observed 2 cases where the second failure had the same stack trace as the original one, and one case where the second failure had this stack trace:
>
> java.util.concurrent.CompletionException: java.net.ProtocolException: Received GOAWAY with error code Protocol error (0x1): Test GOAWAY error from server
> at java.base/java.util.concurrent.CompletableFuture.wrapInCompletionException(CompletableFuture.java:323)
> at java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:412)
> at java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:421)
> at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1173)
> at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:531)
> at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1794)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1090)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:614)
> at java.base/java.lang.Thread.run(Thread.java:1516)
> Caused by: java.net.ProtocolException: Received GOAWAY with error code Protocol error (0x1): Test GOAWAY error from server
> at java.net.http/jdk.internal.net.http.Http2TerminationCause$ProtocolError.<init>(Http2TerminationCause.java:247)
> at java.net.http/jdk.internal.net.http.Http2TerminationCause$ProtocolError.<init>(Http2TerminationCause.java:241)
> at java.net.http/jdk.internal.net.http.Http2TerminationCause.forH2Error(Http2TerminationCause.java:159)
> at java.net.http/jdk.internal.net.http.Http2Connection.handleGoAwayWithError(Http2Connection.java:1463)
> <the remainder of the stack trace is the same as the original exception>
>
> It would be good to avoid these exceptions, and mark the relevant exchanges as unprocessed too.
@djelinski Thanks for confirming the issue and providing the stack trace. That matches what I was thinking.
Looking at this, it seems like the root cause is that new streams can be added to a connection that's already processing GOAWAY. I'm wondering what the best approach to fix this would be.
**Option 1: Prevent new streams after GOAWAY**
The cleanest solution might be to prevent new streams from being added to the connection once GOAWAY processing starts:
```java
private void handleGoAwayWithError(final GoAwayFrame frame,
                                   final long lastProcessedStream,
                                   final int errorCode) {
    // ... error message construction ...
    final Http2TerminationCause cause = Http2TerminationCause.forH2Error(errorCode, errorMsg);
    // Mark connection as closing to reject new stream attempts
    // (new requests would automatically use a new connection)
    markAsClosing(); // or similar mechanism
    final AtomicInteger numUnprocessed = new AtomicInteger();
    streams.forEach((id, stream) -> {
        if (id > lastProcessedStream) {
            stream.closeAsUnprocessed();
            numUnprocessed.incrementAndGet();
        }
    });
    // ... debug logging ...
    close(cause);
}
```
**Option 2: Defensive forEach**
Alternatively, we could check for late-arriving streams:
```java
streams.forEach((id, stream) -> {
    if (id > lastProcessedStream) {
        stream.closeAsUnprocessed();
        numUnprocessed.incrementAndGet();
    }
});
// Check again for streams that were added during the forEach
streams.forEach((id, stream) -> {
    if (id > lastProcessedStream) {
        stream.closeAsUnprocessed();
    }
});
close(cause);
```
The first approach seems cleaner since it addresses the root cause: streams shouldn't be added to a connection that's already closing. That said, I may be missing other considerations. What do you think would be the right approach here?
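To make the Option 1 idea more concrete, here is a minimal standalone sketch of how a "closing" flag on the request path could close the race window, assuming the flag and the GOAWAY-time iteration are guarded by the same lock. All names here (`ConnectionSketch`, `tryReserveStream`, `markAsClosingAndDrain`) are hypothetical placeholders for illustration only, not the actual `Http2Connection` internals:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of Option 1: a "closing" flag set by GOAWAY handling and checked
// before a new stream is registered. Placeholder names, not JDK internals.
class ConnectionSketch {
    private final Map<Integer, Object> streams = new ConcurrentHashMap<>();
    private boolean closing; // guarded by "this"

    // Request path: register a new stream only if the connection is still usable.
    synchronized boolean tryReserveStream(int id, Object stream) {
        if (closing) {
            return false; // caller should obtain (or open) a different connection
        }
        streams.put(id, stream);
        return true;
    }

    // GOAWAY path: flip the flag and drain streams under the same lock,
    // so no stream can slip in between the flag check and the iteration.
    synchronized void markAsClosingAndDrain(long lastProcessedStream) {
        closing = true;
        streams.forEach((id, stream) -> {
            if (id > lastProcessedStream) {
                // in the real code: stream.closeAsUnprocessed()
            }
        });
    }
}
```

Whether a lock like this, the existing connection-pool state handling, or something else is the right mechanism inside `Http2Connection` is exactly the part I'm unsure about.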
-------------
PR Comment: https://git.openjdk.org/jdk/pull/28632#issuecomment-3674116727