ProxyConnectConnectionFactoryFilter leaks connection in case of errors #1002

Merged 9 commits on Apr 9, 2020
@@ -1,5 +1,5 @@
/*
* Copyright © 2018-2019 Apple Inc. and the ServiceTalk project authors
* Copyright © 2018-2020 Apple Inc. and the ServiceTalk project authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -244,7 +244,7 @@ private static <U, R> StreamingHttpClient buildStreaming(final HttpClientBuildCo
if (roConfig.hasProxy() && sslContext != null) {
assert roConfig.connectAddress() != null;
connectionFactoryFilter = new ProxyConnectConnectionFactoryFilter<R, FilterableStreamingHttpConnection>(
roConfig.connectAddress(), reqRespFactory).append(connectionFactoryFilter);
roConfig.connectAddress()).append(connectionFactoryFilter);
}

final HttpExecutionStrategy executionStrategy = ctx.executionContext.executionStrategy();
@@ -1,5 +1,5 @@
/*
* Copyright © 2019 Apple Inc. and the ServiceTalk project authors
* Copyright © 2019-2020 Apple Inc. and the ServiceTalk project authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -19,12 +19,11 @@
import io.servicetalk.client.api.ConnectionFactoryFilter;
import io.servicetalk.client.api.DelegatingConnectionFactory;
import io.servicetalk.concurrent.SingleSource;
import io.servicetalk.concurrent.api.ListenableAsyncCloseable;
import io.servicetalk.concurrent.api.Single;
import io.servicetalk.http.api.FilterableStreamingHttpConnection;
import io.servicetalk.http.api.HttpExecutionStrategy;
import io.servicetalk.http.api.HttpExecutionStrategyInfluencer;
import io.servicetalk.http.api.StreamingHttpRequestResponseFactory;
import io.servicetalk.http.api.StreamingHttpResponse;
import io.servicetalk.transport.netty.internal.DeferSslHandler;
import io.servicetalk.transport.netty.internal.NettyConnectionContext;

@@ -47,17 +46,12 @@
* @param <ResolvedAddress> The type of a resolved address that can be used for connecting.
* @param <C> The type of connections created by this factory.
*/
final class ProxyConnectConnectionFactoryFilter<ResolvedAddress, C
extends ListenableAsyncCloseable & FilterableStreamingHttpConnection>
implements ConnectionFactoryFilter<ResolvedAddress, C>,
HttpExecutionStrategyInfluencer {
final class ProxyConnectConnectionFactoryFilter<ResolvedAddress, C extends FilterableStreamingHttpConnection>
implements ConnectionFactoryFilter<ResolvedAddress, C>, HttpExecutionStrategyInfluencer {

private final StreamingHttpRequestResponseFactory reqRespFactory;
private final String connectAddress;

ProxyConnectConnectionFactoryFilter(final CharSequence connectAddress,
final StreamingHttpRequestResponseFactory reqRespFactory) {
this.reqRespFactory = reqRespFactory;
ProxyConnectConnectionFactoryFilter(final CharSequence connectAddress) {
this.connectAddress = connectAddress.toString();
}

@@ -74,51 +68,63 @@ private ProxyFilter(final ConnectionFactory<ResolvedAddress, C> delegate) {

@Override
public Single<C> newConnection(final ResolvedAddress resolvedAddress) {
return delegate().newConnection(resolvedAddress).flatMap(c ->
// We currently only have access to a StreamingHttpRequester, which means we are forced to provide an
// HttpExecutionStrategy. Because we can't be sure if there is any blocking code in the connection
// filters we use the default strategy which should offload everything to be safe.
c.request(defaultStrategy(),
reqRespFactory.connect(connectAddress).addHeader(CONTENT_LENGTH, ZERO))
.flatMap(response -> {
if (SUCCESSFUL_2XX.contains(response.status())) {
final Channel channel = ((NettyConnectionContext) c.connectionContext()).nettyChannel();
final SingleSource.Processor<C, C> processor = newSingleProcessor();
return delegate().newConnection(resolvedAddress).flatMap(c -> {
try {
// We currently only have access to a StreamingHttpRequester, which means we are forced to provide
// an HttpExecutionStrategy. Because we can't be sure if there is any blocking code in the
// connection filters we use the default strategy which should offload everything to be safe.
return c.request(defaultStrategy(), c.connect(connectAddress).addHeader(CONTENT_LENGTH, ZERO))
.flatMap(response -> handleConnectResponse(c, response))
// Close recently created connection in case of any error while it connects to the proxy
// or cancellation:
.recoverWith(t -> c.closeAsync().concat(failed(t)))
.whenCancel(() -> c.closeAsync().subscribe());
Member Author

We really do not know whether the cancel came because the user canceled the operation or because some operator sends a cancel for the previous source when it moves on to the next source (e.g. concat()).

Can any operator cancel after success? IIUC they cancel the previous source only in non-success/non-complete cases.

LMK if I need to revert whenFinally here to prevent closure on a cancel that arrives after success.

Collaborator

Can any operator cancel after success?

Yes, they do. Consider connFactory.newConnection().concat(executor.timer(1, MILLISECONDS)) (e.g. to add a delay before responding to the connect).

concat() uses SequentialCancellable, which cancels the old Cancellable when the new Cancellable is received, which in this case will be after the successful completion of connFactory.newConnection().

More generally, we should not assume anywhere that cancel is only received before success(), as the Cancellable and Subscriber code paths are concurrent.
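
A minimal sketch of that composition, assuming the 2020-era ServiceTalk signatures used elsewhere in this PR (single-argument ConnectionFactory#newConnection, Executor#timer returning a Completable, Single#concat(Completable)); the class and method names are illustrative only:

import static java.util.concurrent.TimeUnit.MILLISECONDS;

import io.servicetalk.client.api.ConnectionFactory;
import io.servicetalk.concurrent.api.Executor;
import io.servicetalk.concurrent.api.Single;
import io.servicetalk.http.api.FilterableStreamingHttpConnection;

final class DelayedConnectSketch {
    // The connection Single succeeds first; the downstream result is only delivered once the 1ms
    // timer completes. A cancel arriving during that delay therefore reaches a connection Single
    // that has already succeeded, which is the "cancel after success" scenario described above.
    static <R> Single<FilterableStreamingHttpConnection> delayedConnect(
            final ConnectionFactory<R, FilterableStreamingHttpConnection> factory,
            final R address, final Executor executor) {
        return factory.newConnection(address)
                .concat(executor.timer(1, MILLISECONDS));
    }
}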

Member Author

Looking at SequentialCancellable, TBH I don't see where it cancels the old Cancellable. When the new Cancellable is received, it may cancel the new one immediately if the oldVal was already canceled via SequentialCancellable#cancel().

More generally, we should not assume anywhere that cancel is only received before success(), as the Cancellable and Subscriber code paths are concurrent.

Agreed. I just thought that it doesn't matter when the proxy filter sees the cancel: whether before or after onSuccess, we should close the connection once we see that no one is interested in the result anymore.

Btw, after #1005, should it be afterCancel or afterFinally?

Member

whenCancel will unconditionally execute the callback when cancel is called, regardless of whether the connection has been delivered downstream. If we have already delivered the connection we shouldn't later close it (regardless of whether someone cancels or not). In addition to this being the expected control flow, the RS spec has some rules which discuss cancel being a no-op after a terminal signal is delivered [1][2].

afterFinally(SingleTerminalSignalConsumer<T> doFinally) happens to enforce "only a single callback will be executed" but may still result in invoking the onCancel() callback and also calling the downstream Subscriber#onSuccess(...) for the following reasons:

  • The Subscription can be invoked on a different thread
  • Data/terminal signals may still be delivered after cancel [3]

So afterFinally is an improvement over afterCancel, but still isn't ideal because we may deliver a closed object (and/or invoke closeAsync() concurrently).

[1] https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.3/README.md#1.6

If a Publisher signals either onError or onComplete on a Subscriber, that Subscriber’s Subscription MUST be considered cancelled.

[2] https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.3/README.md#3.7

After the Subscription is cancelled, additional Subscription.cancel() MUST be NOPs.

[3] https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.3/README.md#2.8

A Subscriber MUST be prepared to receive one or more onNext signals after having called Subscription.cancel() if there are still requested elements pending [see 3.12]. Subscription.cancel() does not guarantee to perform the underlying cleaning operations immediately.
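
For illustration, a sketch of how the afterFinally(SingleTerminalSignalConsumer) overload quoted above could close the connection only when it was not delivered downstream. The callback names (onSuccess/onError/cancel) and the package location of SingleTerminalSignalConsumer are assumed here, so treat this as a sketch of the intent rather than the exact API:

import io.servicetalk.concurrent.api.Single;
import io.servicetalk.concurrent.api.SingleTerminalSignalConsumer;
import io.servicetalk.http.api.FilterableStreamingHttpConnection;

final class CloseUnlessDeliveredSketch {
    static <C extends FilterableStreamingHttpConnection> Single<C> closeUnlessDelivered(
            final Single<C> single, final C connection) {
        return single.afterFinally(new SingleTerminalSignalConsumer<C>() {
            @Override
            public void onSuccess(final C result) {
                // Delivered downstream: the caller now owns the connection, so it is not closed here.
            }

            @Override
            public void onError(final Throwable throwable) {
                connection.closeAsync().subscribe();
            }

            @Override
            public void cancel() {
                // As discussed above, cancel() may still race with onSuccess(...), so this can close
                // a connection the downstream already received.
                connection.closeAsync().subscribe();
            }
        });
    }
}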

Collaborator

TBH don't see where it cancels the old Cancellable.

Aah, you are correct. I misread it under an older assumption that we cancel() the previous Cancellable.

Anyway, for the other reasons Scott and I mention, an unconditional close() upon cancel() isn't correct.

Member Author (@idelpivnitskiy, Apr 8, 2020)

Updated to use whenFinally in d2bf22f, then afterFinally in cfd0117.

Collaborator

So afterFinally is an improvement over afterCancel, but still isn't ideal because we may deliver a closed object (and/or invoke closeAsync() concurrently).

Ok, yes, this seems to be a problem. Can we remove the close-on-cancel part for now?

Connection lifetime is a problem in such situations outside the context of this filter anyway, as mentioned in #1002 (comment).

Let's fix the obvious issue of leaking connections for non-200 responses and then handle the lifecycle on cancel/early termination later.
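
A minimal sketch of that error-only cleanup, using only operators that already appear in this diff (flatMap, recoverWith, closeAsync().concat(failed(t))); the helper and parameter names are illustrative, and close-on-cancel is intentionally left out (tracked separately in #1010):

import io.servicetalk.concurrent.api.Single;
import io.servicetalk.http.api.FilterableStreamingHttpConnection;
import io.servicetalk.http.api.StreamingHttpResponse;

import java.util.function.Function;

import static io.servicetalk.concurrent.api.Single.failed;

final class CloseOnErrorSketch {
    // Any failure between creating the connection and completing the proxy CONNECT flow closes the
    // freshly created connection before the error is propagated; a successfully delivered
    // connection is left alone, and nothing is done on cancel.
    static <C extends FilterableStreamingHttpConnection> Single<C> connectThroughProxy(
            final C connection,
            final Single<StreamingHttpResponse> connectRequest,
            final Function<StreamingHttpResponse, Single<C>> handleConnectResponse) {
        return connectRequest
                .flatMap(handleConnectResponse)
                .recoverWith(t -> connection.closeAsync().concat(failed(t)));
    }
}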

Member Author

Removed in 57e470b and created #1010.

} catch (Throwable t) {
Collaborator

I think we are being overly paranoid here about the calls to c.request() or c.connect() throwing. Any method returning an asynchronous source is not expected to throw. Having said that, it is not a big deal, so it's ok as it is; I will leave it to you to take a call on this.

return c.closeAsync().concat(failed(t));
}
});
}
}

channel.pipeline().addLast(new ChannelInboundHandlerAdapter() {
@Override
public void userEventTriggered(final ChannelHandlerContext ctx, final Object evt) {
if (evt instanceof SslHandshakeCompletionEvent) {
SslHandshakeCompletionEvent event = (SslHandshakeCompletionEvent) evt;
if (event.isSuccess()) {
processor.onSuccess(c);
} else {
processor.onError(event.cause());
}
}
ctx.fireUserEventTriggered(evt);
}
});
private Single<C> handleConnectResponse(final C connection, final StreamingHttpResponse response) {
try {
Member Author

Moved this logic to a separate method because nested try blocks make the code indentation awful.

if (response.status().statusClass() != SUCCESSFUL_2XX) {
return response.payloadBodyAndTrailers().ignoreElements().concat(failed(
new ProxyResponseException("Non-successful response from proxy CONNECT " +
connectAddress, response.status())));
}

DeferSslHandler deferSslHandler = channel.pipeline().get(DeferSslHandler.class);
if (deferSslHandler == null) {
return response.payloadBodyAndTrailers().ignoreElements().concat(failed(
new IllegalStateException("Failed to find a handler of type " +
DeferSslHandler.class + " in channel pipeline.")));
final Channel channel = ((NettyConnectionContext) connection.connectionContext()).nettyChannel();
final SingleSource.Processor<C, C> processor = newSingleProcessor();
channel.pipeline().addLast(new ChannelInboundHandlerAdapter() {
@Override
public void userEventTriggered(final ChannelHandlerContext ctx, final Object evt) {
if (evt instanceof SslHandshakeCompletionEvent) {
SslHandshakeCompletionEvent event = (SslHandshakeCompletionEvent) evt;
if (event.isSuccess()) {
processor.onSuccess(connection);
} else {
processor.onError(event.cause());
}
}
ctx.fireUserEventTriggered(evt);
}
});

deferSslHandler.ready();
final DeferSslHandler deferSslHandler = channel.pipeline().get(DeferSslHandler.class);
if (deferSslHandler == null) {
return response.payloadBodyAndTrailers().ignoreElements().concat(failed(
new IllegalStateException("Failed to find a handler of type " +
DeferSslHandler.class + " in channel pipeline.")));
}
deferSslHandler.ready();

// There is no need to apply offloading explicitly (despite completing `processor` on the
// EventLoop) because `payloadBody()` will be offloaded according to the strategy for the
// request.
return response.payloadBodyAndTrailers().ignoreElements().concat(fromSource(processor));
} else {
return response.payloadBodyAndTrailers().ignoreElements().concat(
failed(new ProxyResponseException("Bad response from proxy CONNECT " + connectAddress,
response.status())));
}
}));
// There is no need to apply offloading explicitly (despite completing `processor` on the
// EventLoop) because `payloadBody()` will be offloaded according to the strategy for the
// request.
return response.payloadBodyAndTrailers().ignoreElements().concat(fromSource(processor));
} catch (Throwable t) {
return response.payloadBodyAndTrailers().ignoreElements().concat(failed(t));
}
}
