docs: update flagd provider specs #1432
Conversation
✅ Deploy Preview for polite-licorice-3db33c ready!
'reference/specifications/rpc-providers.md': 'reference/specifications/providers.md#rpc-providers'
'reference/specifications/in-process-providers.md': 'reference/specifications/providers.md#in-process-providers'
Redirects from old pages.
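These map the old standalone pages to anchors on the consolidated page. Assuming the site uses the `mkdocs-redirects` plugin, the entries would sit under its `redirect_maps` key, roughly:

```yaml
# mkdocs.yml (sketch, assuming the mkdocs-redirects plugin)
plugins:
  - redirects:
      redirect_maps:
        'reference/specifications/rpc-providers.md': 'reference/specifications/providers.md#rpc-providers'
        'reference/specifications/in-process-providers.md': 'reference/specifications/providers.md#in-process-providers'
```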
MD007:
  indent: 4
mkdocs doesn't render nested lists properly unless they are double-indented (4 spaces), so I've added this rule.
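For illustration, a nested list only renders as nested in mkdocs when written like this (example, not from the diff):

```markdown
- parent item
    - nested item (indented 4 spaces, not 2)
```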
| streamDeadlineMs | FLAGD_STREAM_DEADLINE_MS | deadline for streaming calls, useful as an application-layer keepalive | int | 600000 | rpc & in-process |
| retryBackoffMs | FLAGD_RETRY_BACKOFF_MS | initial backoff for stream retry | int | 1000 | rpc & in-process |
| retryBackoffMaxMs | FLAGD_RETRY_BACKOFF_MAX_MS | maximum backoff for stream retry | int | 120000 | rpc & in-process |
| retryGraceAttempts | FLAGD_RETRY_GRACE_ATTEMPTS | number of stream retry attempts before the provider moves from STALE to ERROR state | int | 5 | rpc & in-process |
`retryGraceAttempts` is a new param, not yet implemented in any provider, but I think this is nicer than the "silent first retry" we have in Java, and it solves the same problem in a more sensible way.
Should the default be 5? With exponential backoff this would mean 31s.
Ya, 15-30s seemed correct to me, personally, since I think most services and infra would be able to cycle within that time. 5 was more or less based on that idea.
Open to other arguments here though!
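To make the arithmetic above concrete, a small sketch using the defaults from the table (a doubling backoff is an assumption here; the spec only names the initial and maximum values):

```java
public class BackoffSketch {
    public static void main(String[] args) {
        long backoff = 1_000;      // retryBackoffMs default
        long maxBackoff = 120_000; // retryBackoffMaxMs default
        long totalMs = 0;
        // Assumes the backoff doubles per attempt, capped at the max.
        for (int attempt = 1; attempt <= 5; attempt++) { // retryGraceAttempts = 5
            totalMs += backoff;
            backoff = Math.min(backoff * 2, maxBackoff);
        }
        System.out.println(totalMs + " ms"); // 31000 ms, i.e. ~31s before ERROR
    }
}
```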
In-process flagd providers also inject any properties returned by the [sync-metadata RPC response](./protos.md#getmetadataresponse) into the context.
This allows for static properties defined in flagd to be added to in-process evaluations.
If only a subset of the sync-metadata response should be injected into the evaluation context, you can define a mapping function with the `contextEnricher` option.
We don't yet actually support the addition of arbitrary props into the evaluation context in flagd itself. If this is merged, I will create an issue for that.
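As a rough illustration of the `contextEnricher` idea, a mapping function from the sync-metadata response to the desired subset of context properties (the function shape and names here are assumptions, not taken from any SDK):

```java
import java.util.Map;
import java.util.function.Function;

public class EnricherSketch {
    public static void main(String[] args) {
        // Hypothetical enricher: keep only the properties that should
        // land in the evaluation context.
        Function<Map<String, Object>, Map<String, Object>> contextEnricher =
            syncMetadata -> Map.of("region", syncMetadata.getOrDefault("region", "unknown"));

        Map<String, Object> syncMetadata = Map.of("region", "us-east-1", "internal", true);
        System.out.println(contextEnricher.apply(syncMetadata)); // {region=us-east-1}
    }
}
```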
Thank you @toddbaert for consolidating all the different provider implementations into this spec. 🥇
- RPC mode resolves `STALE` from cache where possible
- in-process mode resolves `STALE` from stored `flag set` rules
- on stream reconnection:
    - emit `PROVIDER_READY` and `PROVIDER_CONFIGURATION_CHANGED`
Should it emit `PROVIDER_CONFIGURATION_CHANGED` if we reconnect and the config did not change in the meantime?
Ya, we don't know if the config has changed, since we could have missed change events, so we fire a change regardless to make sure any change handlers run; change handlers are only a hook to cause additional evaluations, so if no changes have actually happened, it's no problem (flag values will just be the same).
This is how the Java provider currently works. We could run only READY in this case, but IMO that's risky, since a missed change event might never be detected, handlers which re-evaluate flags would never run, and a consumer would stay out of sync.
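A minimal sketch of that reconnect behavior (the `emit` helper is hypothetical; the event names are from the OpenFeature Java SDK):

```java
import dev.openfeature.sdk.ProviderEvent;

public class ReconnectSketch {
    // Hypothetical emitter; a real provider would use its SDK's
    // event-emission mechanism.
    static void emit(ProviderEvent event) {
        System.out.println("emitted: " + event);
    }

    // On reconnect, change events may have been missed, so a change
    // event is fired unconditionally to make handlers re-evaluate.
    static void onStreamReconnected() {
        emit(ProviderEvent.PROVIDER_READY);
        emit(ProviderEvent.PROVIDER_CONFIGURATION_CHANGED);
    }

    public static void main(String[] args) {
        onStreamReconnected();
    }
}
```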
### Custom Name Resolution

Some implementations support [gRPC custom name resolution](https://grpc.io/docs/guides/custom-name-resolution/), and abstractions to introduce additional resolvers.
> Some implementations support...

In the provider spec we should clarify if this feature is optional. But if it's implemented, it should be consistent across implementations.
Ya, the main difficulty is that not all gRPC implementations actually have this feature: https://grpc.io/docs/guides/custom-name-resolution/#language-support
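For the languages that do support it, wiring in a custom resolver looks roughly like this in gRPC-Java (the `envoy` scheme and the stub provider are illustrative, not any provider's actual implementation):

```java
import java.net.URI;
import io.grpc.NameResolver;
import io.grpc.NameResolverProvider;
import io.grpc.NameResolverRegistry;

public class ResolverSketch {
    // Minimal provider stub for a hypothetical "envoy" scheme; a real
    // implementation would return a NameResolver that resolves targets.
    static class EnvoyResolverProvider extends NameResolverProvider {
        @Override protected boolean isAvailable() { return true; }
        @Override protected int priority() { return 5; }
        @Override public String getDefaultScheme() { return "envoy"; }
        @Override
        public NameResolver newNameResolver(URI targetUri, NameResolver.Args args) {
            return null; // placeholder: real code returns a resolver for targetUri
        }
    }

    public static void main(String[] args) {
        // Registering the provider makes the "envoy" scheme usable in
        // channel target strings, e.g. "envoy://host/service".
        NameResolverRegistry.getDefaultRegistry().register(new EnvoyResolverProvider());
    }
}
```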
| keepAliveTime | FLAGD_KEEP_ALIVE_TIME_MS | HTTP/2 keepalive | long | 0 | rpc & in-process |
Do we still have HTTP/2 keepAlive support?
The gRPC keepalive is just an HTTP/2 keepalive.
Though considering how little it helped us, we could consider not adding it at all. WDYT?
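For reference, this is roughly how the option maps onto gRPC's HTTP/2 PING-based keepalive in Java (the address and the 60s value are illustrative):

```java
import java.util.concurrent.TimeUnit;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class KeepAliveSketch {
    public static void main(String[] args) {
        // keepAliveTime corresponds to FLAGD_KEEP_ALIVE_TIME_MS; per the
        // table above, a value of 0 leaves keepalive disabled.
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8013)
            .keepAliveTime(60_000, TimeUnit.MILLISECONDS)
            .usePlaintext()
            .build();
        channel.shutdownNow();
    }
}
```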
Build failure is due to a Trivy rate limit.
@@ -73,18 +35,19 @@ The lifecycle is summarized below:

- if stream connection fails or exceeds the time specified by `deadline`, abort initialization (SDK will emit `PROVIDER_ERROR`), and attempt to [reconnect](#stream-reconnection)
- while connected:
    - flags are resolved according to resolver mode; either by calling evaluation RPCs, or by evaluating the stored `flag set` rules
    - for RPC providers, flags resolved with `reason=STATIC` are [cached](#flag-evaluation-caching)
This is under "while connected" so I think it's fine as is.
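A sketch of that caching rule (the cache and resolution types here are illustrative, not any provider's actual API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheSketch {
    record Resolution(Object value, String reason) {}

    // Illustrative cache keyed by flag key; real providers typically use
    // a bounded LRU cache.
    static final Map<String, Resolution> CACHE = new ConcurrentHashMap<>();

    // While connected, only STATIC results are cached: they only change
    // with the configuration, and configuration-change events invalidate
    // the affected entries.
    static void maybeCache(String flagKey, Resolution resolution) {
        if ("STATIC".equals(resolution.reason())) {
            CACHE.put(flagKey, resolution);
        }
    }
}
```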
This PR contains significant enhancements to the flagd provider specs. If merged, I will open a bunch of issues to implement what's described here, based on recon I've been doing with the existing implementations.
Specifically:

- `retryGraceAttempts` param, which defines the number of stream retry attempts before the provider moves from `STALE` to `ERROR` state
- `contextEnricher` param, which defines a mapping function from sync-metadata to evaluation context for in-process providers (exists already in the Java provider)