Fix error handling for subgraphs #3328
Conversation
Force-pushed from eee38bf to 94978d8.
Note to reviewers: I think the unit tests need refactoring to make them DRY, but I haven't got the time to do this right now.
Force-pushed from 94978d8 to f7394fa.
The GraphQL spec is lax about what strategy to use for processing responses: https://github.com/graphql/graphql-over-http/blob/main/spec/GraphQLOverHTTP.md#processing-the-response

> If the response uses a non-200 status code and the media type of the response payload is application/json then the client MUST NOT rely on the body to be a well-formed GraphQL response since the source of the response may not be the server but instead some intermediary such as API gateways, proxies, firewalls, etc.

The TL;DR is that it's really asking us to do the best we can with whatever information we have, with some modifications depending on content type. Our goal is to give the user the most relevant information possible in the response errors.

Rules:

1. If the content type of the response is not `application/json` or `application/graphql-response+json` then we won't try to parse the body.
2. If an HTTP status is not 2xx it will always be attached as a GraphQL error.
3. If the response type is `application/json`, the status is not 2xx, and the body is not a valid GraphQL response, then parse errors will be suppressed.

Fixes #3141
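To make the rules above concrete, here's a rough, self-contained sketch of the decision flow. It is illustrative only: the enum, function names, and error strings are assumptions for this example, not the router's actual types or API.

```rust
/// Simplified stand-in for the router's content-type classification.
#[derive(Debug, PartialEq)]
enum ContentType {
    ApplicationJson,
    ApplicationGraphqlResponseJson,
    Other,
}

fn classify(content_type: &str) -> ContentType {
    match content_type {
        "application/json" => ContentType::ApplicationJson,
        "application/graphql-response+json" => ContentType::ApplicationGraphqlResponseJson,
        _ => ContentType::Other,
    }
}

/// Apply the three rules. `body_is_valid_graphql` stands in for an actual
/// attempt to deserialize the body as a GraphQL response.
fn subgraph_errors(status: u16, content_type: &str, body_is_valid_graphql: bool) -> Vec<String> {
    let mut errors = Vec::new();
    let is_success = (200u16..300).contains(&status);

    // Rule 2: a non-2xx status is always surfaced as a GraphQL error.
    if !is_success {
        errors.push(format!("HTTP fetch failed from subgraph: status {status}"));
    }

    match classify(content_type) {
        // Rule 1: unknown content types are never parsed, so no parse error either.
        ContentType::Other => {}
        // Rule 3: application/json + non-2xx + unparseable body: the status error
        // above is enough; a parse error would only add noise.
        ContentType::ApplicationJson if !is_success && !body_is_valid_graphql => {}
        // Otherwise a body that fails to parse is reported.
        _ if !body_is_valid_graphql => {
            errors.push("could not parse subgraph response body".to_string())
        }
        _ => {}
    }

    errors
}

fn main() {
    // A 502 with an unparseable body labelled application/json: only the status error remains.
    println!("{:?}", subgraph_errors(502, "application/json", false));
    // A 200 whose body is not a valid GraphQL response: a parse error is reported.
    println!("{:?}", subgraph_errors(200, "application/graphql-response+json", false));
}
```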
Force-pushed from e22cdda to 397a5b4.
As discussed this afternoon: subgraph errors (no data) when dealing with `@requires`:
Would review from me still be helpful? Have been out since around when this was filed. I'm not clear on what this means in the description:
Is this only for JSON responses? What gets attached as the error?
@glasser I'm going to work on this today and then I think it's worth reviewing from scratch once it's ready (I'm going to take this back to draft). I'll add some extra comments to clarify things.
It'll be for all responses. Basically there will never be a situation where HTTP is not success and there are no errors in the GraphQL response.
The spec says:

> If the data entry in the response is not present, the errors entry in the response must not be empty. It must contain at least one error. The errors it contains should indicate why no data was able to be returned.
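As an illustration of that invariant, here is a tiny hypothetical sketch; the `Response` struct below is a stand-in, not the router's actual response type. A response with no `data` must carry at least one error, which is where the HTTP status error from rule 2 ends up.

```rust
/// Hypothetical stand-in for a GraphQL response; not the router's actual type.
struct Response {
    /// Serialized `data` entry, if any.
    data: Option<String>,
    /// Error messages from the `errors` entry.
    errors: Vec<String>,
}

/// The invariant quoted from the spec: if `data` is absent, `errors` must be non-empty.
fn satisfies_spec(response: &Response) -> bool {
    response.data.is_some() || !response.errors.is_empty()
}

fn main() {
    // A failed subgraph fetch: no data, so the HTTP status is attached as an error.
    let failed = Response {
        data: None,
        errors: vec!["HTTP fetch failed from subgraph: status 503".to_string()],
    };
    assert!(satisfies_spec(&failed));
}
```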
Co-authored-by: Jeremy Lempereur <jeremy.lempereur@iomentum.com>
Refactoring the subgraph service was a good idea, but here it makes the review really hard to do when we look for the actual error handling changes :/
Co-authored-by: Coenen Benjamin <benjamin.coenen@hotmail.com>
Co-authored-by: Coenen Benjamin <benjamin.coenen@hotmail.com>
looking great!
```rust
Some(Ok(Ok(content_type))) if (content_type.ty == APPLICATION && content_type.subty == JSON) => Ok(ContentType::ApplicationJson),
Some(Ok(Ok(content_type))) if (content_type.ty == APPLICATION && content_type.subty == GRAPHQL_RESPONSE && content_type.suffix == Some(JSON)) => Ok(ContentType::ApplicationGraphqlResponseJson),
Some(Ok(Ok(content_type))) => {
```
nit: it may be more readable to match once on `Some(Ok(Ok(content_type)))` and then have a nested match or branches on the `content_type` fields.
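A self-contained sketch of what that could look like, with simplified stand-in types instead of the mime-based ones in the diff (the wrapper is flattened to a single `Option<Result<...>>` layer here, and all names are illustrative):

```rust
#[derive(Debug, PartialEq)]
enum ContentType {
    ApplicationJson,
    ApplicationGraphqlResponseJson,
}

/// Simplified stand-in for a parsed mime type.
struct Mime {
    ty: &'static str,
    subty: &'static str,
    suffix: Option<&'static str>,
}

fn classify(header: Option<Result<Mime, ()>>) -> Result<ContentType, String> {
    match header {
        // Match the wrapper once...
        Some(Ok(content_type)) => {
            // ...then branch on the content-type fields.
            if content_type.ty == "application" && content_type.subty == "json" {
                Ok(ContentType::ApplicationJson)
            } else if content_type.ty == "application"
                && content_type.subty == "graphql-response"
                && content_type.suffix == Some("json")
            {
                Ok(ContentType::ApplicationGraphqlResponseJson)
            } else {
                Err(format!(
                    "unsupported content type: {}/{}",
                    content_type.ty, content_type.subty
                ))
            }
        }
        Some(Err(())) => Err("invalid content-type header".to_string()),
        None => Err("missing content-type header".to_string()),
    }
}

fn main() {
    let json = Mime { ty: "application", subty: "json", suffix: None };
    assert_eq!(classify(Some(Ok(json))), Ok(ContentType::ApplicationJson));
}
```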
This fixes `cargo fmt`, so worth doing: 16de223
The GraphQL spec is lax about what strategy to use for processing responses: https://github.com/graphql/graphql-over-http/blob/main/spec/GraphQLOverHTTP.md#processing-the-response

> If the response uses a non-200 status code and the media type of the response payload is application/json then the client MUST NOT rely on the body to be a well-formed GraphQL response since the source of the response may not be the server but instead some intermediary such as API gateways, proxies, firewalls, etc.

The TL;DR is that it's really asking us to do the best we can with whatever information we have, with some modifications depending on content type. Our goal is to give the user the most relevant information possible in the response errors.

Rules:

1. If the content type of the response is not `application/json` or `application/graphql-response+json` then we won't try to parse the body.
2. If an HTTP status is not 2xx it will always be attached as a GraphQL error.
3. If the response type is `application/json`, the status is not 2xx, and the body is not a valid GraphQL response, then the entire body of the response will be added as an error.

Fixes #3141

**Checklist**

Complete the checklist (and note appropriate exceptions) before a final PR is raised.

- [x] Changes are compatible[^1]
- [ ] Documentation[^2] completed
- [ ] Performance impact assessed and acceptable
- Tests added and passing[^3]
  - [x] Unit Tests
  - [ ] Integration Tests
  - [ ] Manual Tests

**Exceptions**

*Note any exceptions here*

**Notes**

[^1]: It may be appropriate to bring upcoming changes to the attention of other (impacted) groups. Please endeavour to do this before seeking PR approval. The mechanism for doing this will vary considerably, so use your judgement as to how and when to do this.
[^2]: Configuration is an important part of many changes. Where applicable please try to document configuration examples.
[^3]: Tick whichever testing boxes are applicable. If you are adding Manual Tests:
    - please document the manual testing (extensively) in the Exceptions.
    - please raise a separate issue to automate the test and label it (or ask for it to be labeled) as `manual test`

Co-authored-by: bryn <bryn@apollographql.com>
Co-authored-by: Jeremy Lempereur <jeremy.lempereur@iomentum.com>
Co-authored-by: Coenen Benjamin <benjamin.coenen@hotmail.com>
> **Note**
>
> When approved, this PR will merge into **the `1.24.0` branch** which will — upon being approved itself — merge into `main`.
>
> **Things to review in this PR**:
> - Changelog correctness (There is a preview below, but it is not necessarily the most up to date. See the _Files Changed_ for the true reality.)
> - Version bumps
> - That it targets the right release branch (`1.24.0` in this case!).

---

## 🚀 Features

### Add support for delta aggregation to otlp metrics ([PR #3412](#3412))

Add a new configuration option (Temporality) to the otlp metrics configuration.

This may be useful to fix problems with metrics when being processed by datadog which tends to expect Delta, rather than Cumulative, aggregations.

See:
- open-telemetry/opentelemetry-collector-contrib#6129
- DataDog/documentation#15840

for more details.

By [@garypen](https://github.com/garypen) in #3412

## 🐛 Fixes

### Fix error handling for subgraphs ([Issue #3141](#3141))

The GraphQL spec is rather light on what should happen when we process responses from subgraphs. The current behaviour within the Router was inconsistently short circuiting response processing, and this was producing confusing errors.

> #### Processing the response
>
> If the response uses a non-200 status code and the media type of the response payload is application/json then the client MUST NOT rely on the body to be a well-formed GraphQL response since the source of the response may not be the server but instead some intermediary such as API gateways, proxies, firewalls, etc.

The logic has been simplified and made consistent using the following rules:

1. If the content type of the response is not `application/json` or `application/graphql-response+json` then we won't try to parse.
2. If an HTTP status is not 2xx it will always be attached as a GraphQL error.
3. If the response type is `application/json`, the status is not 2xx, and the body is not valid GraphQL, the entire subgraph response will be attached as an error.

By [@BrynCooke](https://github.com/BrynCooke) in #3328

## 🛠 Maintenance

### chore: router-bridge 0.3.0+v2.4.8 -> =0.3.1+2.4.9 ([PR #3407](#3407))

Updates `router-bridge` from ` = "0.3.0+v2.4.8"` to ` = "0.3.1+v2.4.9"`. Note that with this PR, this dependency is now pinned to an exact version. This version update started failing tests because of a minor ordering change and it was not immediately clear why the test was failing. Pinning this dependency (that we own) allows us to only bring in the update at the proper time and will make test failures caused by the update more easily identified.

By [@EverlastingBugstopper](https://github.com/EverlastingBugstopper) in #3407

### remove the compiler from Query ([Issue #3373](#3373))

The `Query` object caches information extracted from the query that is used to format responses. It was carrying an `ApolloCompiler` instance, but now we don't really need it anymore, since it is now cached at the query analysis layer. We also should not carry it in the supergraph request and execution request, because that makes the builders hard to manipulate for plugin authors. Since we are not exposing the compiler in the public API yet, we move it inside the context's private entries, where it will be easily accessible from internal code.

By [@Geal](https://github.com/Geal) in #3367

### move AllowOnlyHttpPostMutationsLayer at the supergraph service level ([PR #3374](#3374), [PR #3410](#3410))

Now that we have access to a compiler in supergraph requests, we don't need to look into the query plan to know if a request contains mutations.

By [@Geal](https://github.com/Geal) in #3374 & #3410

### update opentelemetry to 0.19.0 ([Issue #2878](#2878))

We've updated the following opentelemetry related crates:

```
opentelemetry 0.18.0 -> 0.19.0
opentelemetry-datadog 0.6.0 -> 0.7.0
opentelemetry-http 0.7.0 -> 0.8.0
opentelemetry-jaeger 0.17.0 -> 0.18.0
opentelemetry-otlp 0.11.0 -> 0.12.0
opentelemetry-semantic-conventions 0.10.0 -> 0.11.0
opentelemetry-zipkin 0.16.0 -> 0.17.0
opentelemetry-prometheus 0.11.0 -> 0.12.0
tracing-opentelemetry 0.18.0 -> 0.19.0
```

This allows us to close a number of opentelemetry related issues.

Note: The prometheus specification mandates naming format and, unfortunately, the router had two metrics which weren't compliant. The otel upgrade enforces the specification, so the affected metrics are now renamed (see below).

The two affected metrics in the router were:

- apollo_router_cache_hit_count -> apollo_router_cache_hit_count_total
- apollo_router_cache_miss_count -> apollo_router_cache_miss_count_total

If you are monitoring these metrics via prometheus, please update your dashboards with this name change.

By [@garypen](https://github.com/garypen) in #3421

### Synthesize defer labels without RNG or collisions ([PR #3381](#3381) and [PR #3423](#3423))

The `@defer` directive accepts a `label` argument, but it is optional. To more accurately handle deferred responses, the Router internally rewrites queries to add labels on the `@defer` directive where they are missing. Responses eventually receive the reverse treatment to look as expected by the client.

This was done by generating random strings, handling collisions with existing labels, and maintaining a `HashSet` of which labels had been synthesized. Instead, we now add a prefix to pre-existing labels and generate new labels without it. When processing a response, the absence of that prefix indicates a synthetic label.

By [@SimonSapin](https://github.com/SimonSapin) and [@o0Ignition0o](https://github.com/o0Ignition0o) in #3381 and #3423

### Move subscription event execution at the execution service level ([PR #3395](#3395))

In order to prepare some future integration I moved the execution loop for subscription events to the execution_service level.

By [@bnjjj](https://github.com/bnjjj) in #3395

## 📚 Documentation

### Document claim augmentation via coprocessors ([Issue #3102](#3102))

Claims augmentation is a common use case where user information from the JWT claims is used to look up more context like roles from databases, before sending it to subgraphs. This can be done with coprocessors, but it was not documented yet, and there was confusion on the order in which the plugins were called. This clears the confusion and provides an example configuration.

By [@Geal](https://github.com/Geal) in #3386
The GraphQL spec is lax about what strategy to use for processing responses: https://github.com/graphql/graphql-over-http/blob/main/spec/GraphQLOverHTTP.md#processing-the-response

The TL;DR of this is that it's really asking us to do the best we can with whatever information we have, with some modifications depending on content type.

Our goal is to give the user the most relevant information possible in the response errors.

Rules:

1. If the content type of the response is not `application/json` or `application/graphql-response+json` then we won't try to parse the body.
2. If an HTTP status is not 2xx it will always be attached as a GraphQL error.
3. If the response type is `application/json`, the status is not 2xx, and the body is not valid GraphQL, then parse errors will be suppressed.

Rule #3 is definitely up for debate. Alternatives are that:
Fixes #3141
**Checklist**

Complete the checklist (and note appropriate exceptions) before a final PR is raised.

**Exceptions**

*Note any exceptions here*

**Notes**

[^1]: It may be appropriate to bring upcoming changes to the attention of other (impacted) groups. Please endeavour to do this before seeking PR approval. The mechanism for doing this will vary considerably, so use your judgement as to how and when to do this.
[^2]: Configuration is an important part of many changes. Where applicable please try to document configuration examples.
[^3]: Tick whichever testing boxes are applicable. If you are adding Manual Tests:
    - please document the manual testing (extensively) in the Exceptions.
    - please raise a separate issue to automate the test and label it (or ask for it to be labeled) as `manual test`