Big Table Errors Upgrading gRPC #628

Closed
bbassingthwaite opened this issue May 13, 2017 · 7 comments

@bbassingthwaite

Hi,

We've recently updated google-cloud-go and gRPC to the latest versions. Since then, we've been unable to use the context from our gRPC handler to make calls to Bigtable, and have been forced to temporarily use context.Background() instead.

We get the error 2017/05/13 02:29:53 Retryable error: rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: 1, retrying in 265.091955ms, and our calls to Bigtable go on for a considerable period of time (> 60 seconds).
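
To illustrate, here is a minimal sketch of the failure and the workaround (assuming a *bigtable.Table named tbl and the handler's ctx; the row key is only a placeholder, not from this thread):

// Using the context from the gRPC handler fails after the upgrade:
if _, err := tbl.ReadRow(ctx, "example-row-key"); err != nil {
    log.Printf("read with handler context: %v", err) // RST_STREAM, error code 1
}

// Temporary workaround: drop the handler context entirely.
if _, err := tbl.ReadRow(context.Background(), "example-row-key"); err != nil {
    log.Printf("read with background context: %v", err)
}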

I've tried clearing the incoming and outgoing gRPC metadata, but with no luck. I've run out of ideas and am hoping that either I'm doing something wrong or this is a bug in gRPC or Bigtable.

Thanks!

garye (Contributor) commented May 15, 2017

The CBT client library doesn't interact with the context, so I doubt there's much in that layer that could cause this. Have you tried filing a bug against grpc-go?

@bbassingthwaite (Author)

This only happens when using the Bigtable client. I can use the supplied gRPC context with Spanner, which leads me to believe this is specific to Bigtable.

garye (Contributor) commented May 18, 2017

AFAIK, "error code: 1" indicates PROTOCOL_ERROR (http://httpwg.org/specs/rfc7540.html#rfc.section.7), which sounds like the type of thing we need to get the gRPC folks involved with. Perhaps it's something to do with how metadata is encoded/decoded in the context, but I can only guess...

garye closed this as completed May 18, 2017
garye reopened this May 18, 2017
@bbassingthwaite (Author)

I have service A (on the latest gRPC), which interacts with Bigtable. I also have services B and C, which both call A. All three are gRPC services written in Go. I updated B to the latest version of gRPC, and from my initial findings this lets me use the supplied gRPC context with Bigtable again, but it breaks C. The worst part is that I also have services D through N that will need updating. It's concerning that somewhere along the line gRPC has broken the context between versions, or that something is being propagated through the context that Bigtable isn't happy about. It feels like there are multiple issues here.

dfawley (Contributor) commented May 18, 2017

Also commented in grpc/grpc-go#1247, but here are some more details:

grpc-go was inadvertently propagating metadata from incoming to outgoing gRPC calls. This was a security problem and needed to be addressed. We corrected the issue, but unfortunately the fix required breaking backward compatibility, due to the metadata API itself.

If you are using client-side interceptors to add metadata to a context, you will need to update them to read the original metadata with metadata.FromOutgoingContext instead of metadata.FromContext. If you are sending requests with metadata through an intermediate service, you will have to propagate the metadata manually, with something like this:

// Copy metadata received from the caller onto the outgoing context
// so that it is forwarded with the next outbound RPC.
if md, ok := metadata.FromIncomingContext(ctx); ok {
    ctx = metadata.NewOutgoingContext(ctx, md)
}
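
For the interceptor case, here is a minimal sketch of a unary client interceptor updated for the new API (the package name, interceptor name, and header key are placeholders, not from this thread):

package example

import (
    "context"

    "google.golang.org/grpc"
    "google.golang.org/grpc/metadata"
)

// addHeader appends a header to the outgoing metadata. Note that it reads
// any existing metadata with FromOutgoingContext, not the old FromContext.
func addHeader(ctx context.Context, method string, req, reply interface{},
    cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
    md, _ := metadata.FromOutgoingContext(ctx)
    md = md.Copy() // Copy returns a usable (non-nil) MD even if md was nil
    md["x-example-key"] = append(md["x-example-key"], "example-value")
    return invoker(metadata.NewOutgoingContext(ctx, md), method, req, reply, cc, opts...)
}

The interceptor would be installed at dial time with grpc.WithUnaryInterceptor(addHeader).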

garye (Contributor) commented May 19, 2017

I believe this should be fixed on the bigtable side via dc9545a. Please reopen if not.

garye closed this as completed May 19, 2017
@bbassingthwaite (Author)

Thanks @garye!
