
[FEATURE] Evaluating multiple flags as one batch concurrently #961

Closed
yangzhaox opened this issue Oct 10, 2023 · 12 comments
Labels
enhancement New feature or request

Comments

@yangzhaox

Requirements

Our team needs to evaluate multiple flags (e.g. 15 of 30) concurrently as one batch by specifying a list of flag names, ideally with each flag evaluation running in its own goroutine so that the overall latency stays low. Reading the documentation, I could not find such an API; the only relevant one is resolveAll, but it's not what we need. Would you consider supporting this use case? Thanks.
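For illustration, the fan-out pattern described above can be written in user code today by wrapping single-flag evaluations in goroutines. This is a sketch, not an SDK API: `evalBool` stands in for whatever single-flag call your client exposes (e.g. a boolean evaluation method), and its name and signature are assumptions for the example.

```go
package main

import (
	"fmt"
	"sync"
)

// evalBool is a stand-in for a single boolean flag evaluation;
// the name and signature are illustrative, not part of any SDK.
type evalBool func(flag string) (bool, error)

// evaluateBatch fans out one goroutine per flag and collects the
// results, so overall latency is roughly that of the slowest
// evaluation rather than the sum of all of them.
func evaluateBatch(flags []string, eval evalBool) map[string]bool {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results = make(map[string]bool, len(flags))
	)
	for _, name := range flags {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			val, err := eval(name)
			if err != nil {
				return // skip flags that fail to resolve
			}
			mu.Lock()
			results[name] = val
			mu.Unlock()
		}(name)
	}
	wg.Wait()
	return results
}

func main() {
	// Stubbed evaluator for demonstration: flags prefixed with
	// "enable-" resolve to true, everything else to false.
	stub := func(flag string) (bool, error) {
		return len(flag) > 7 && flag[:7] == "enable-", nil
	}
	out := evaluateBatch([]string{"enable-search", "dark-mode"}, stub)
	fmt.Println(out["enable-search"], out["dark-mode"]) // prints "true false"
}
```

With a remote (RPC) provider each goroutine still pays a network round trip, which is why the discussion below steers toward in-process evaluation instead.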

@yangzhaox yangzhaox added enhancement New feature or request Needs Triage This issue needs to be investigated by a maintainer labels Oct 10, 2023
@yangzhaox yangzhaox changed the title [FEATURE] evaluating multiple flags as one batch concurrently [FEATURE] Evaluating multiple flags as one batch concurrently Oct 10, 2023
@toddbaert
Member

toddbaert commented Oct 16, 2023

@yangzhaox Hello! Thanks for your input. I don't have all the details about your use case, so I'll make some assumptions... but firstly I'd like to highlight that flagd is an implementation of an OpenFeature provider, it doesn't have an SDK/API that developers write code for. The resolveAll RPC (and in fact all the RPCs/REST endpoints) in flagd are used internally by the flagd providers for various languages. See here for a bit more background.

If I'm understanding correctly, what you're asking for is a change to the actual OpenFeature Go SDK to support bulk evaluation. This is something we've discussed before but have so far decided not to implement (see here for some of that discussion). We're always open to discussing it further, but we've found that the problem can be solved better in other ways.

In your case, if the concern is latency, you might be interested in the go-in-process provider. This provider actually embeds flagd's evaluation engine directly into the go client process, reducing latency by removing all I/O. You can then comfortably make feature flag evaluations one-at-a-time as needed (hopefully), which would have similar performance characteristics to evaluating them in bulk. You can connect the in-process provider to a flagd instance which will just propagate the flag configuration to the provider(s).

Let me know what you think about that strategy.

@yangzhaox
Author

Hi @toddbaert, thanks for your reply. It's interesting to learn that an in-process provider has been introduced recently. I saw there are Go and Java in-process providers in the docs; do you also have a Node.js in-process provider? Thanks.

@beeme1mr
Member

Hey @yangzhaox, yes, we plan on building a Node.js in-process provider next. @toddbaert, could you provide a link to the issue when you have a moment?

@toddbaert
Member

Hey @beeme1mr and @yangzhaox . Here is the issue. It's a big one and will not be implemented in a single PR.

@yangzhaox
Author

yangzhaox commented Oct 19, 2023

As for getting the flag definition, gRPC is the default protocol; is http(s) also supported for periodic polling? @toddbaert

@toddbaert
Member

As for getting the flag definition, gRPC is the default protocol; is http(s) also supported for periodic polling? @toddbaert

Currently in-process providers support only gRPC sources.

@yangzhaox
Author

Hi @toddbaert, it's great to see that the Node.js in-process provider has been completed. Do you have any docs guiding me on connecting the in-process provider to a flagd instance over gRPC? And how do I configure/start flagd so it acts as a gRPC server streaming its flag definition? I read through the docs but could not figure it out myself.

@toddbaert
Member

toddbaert commented Dec 19, 2023

Hey @yangzhaox !

Hi @toddbaert, it's great to see that the Node.js in-process provider has been completed. Do you have any docs guiding me on connecting the in-process provider to a flagd instance over gRPC? And how do I configure/start flagd so it acts as a gRPC server streaming its flag definition? I read through the docs but could not figure it out myself.

Currently, in-process providers can only get their flags from a compatible gRPC server (a server implementing the sync.proto).

There's only one current open-source implementation of this, which is flagd-proxy. Unlike flagd, flagd-proxy is not general purpose; it was designed expressly to be used as part of the OpenFeature Operator (OFO), which defines flags as CRDs. So basically, flagd-proxy reads flag CRs in Kubernetes and exposes them via gRPC to in-process providers. OFO handles most of this for you.

If you're not using Kubernetes/OFO, you'll need some other implementation of the sync.proto. Some adopters have built this themselves, and also built some associated UI to configure flags, and then their in process providers connect to this custom server.
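For context, the contract such a custom server would implement looks roughly like this. This is a paraphrased sketch of flagd's sync.proto: the service name and streaming RPC follow the flagd repository, but the message fields shown are simplified and should be checked against the authoritative definition.

```protobuf
syntax = "proto3";

package flagd.sync.v1;

// Paraphrased sketch; the authoritative definition lives in the
// flagd repository (sync/v1/sync.proto).
service FlagSyncService {
  // Server-streaming RPC: the server pushes the flag configuration
  // to connected in-process providers whenever it changes.
  rpc SyncFlags(SyncFlagsRequest) returns (stream SyncFlagsResponse);
}

message SyncFlagsRequest {
  // Optional selector used to scope which flags are synced.
  string selector = 2;
}

message SyncFlagsResponse {
  // The flag definition as a JSON document; evaluation happens
  // client-side, inside the in-process provider.
  string flag_configuration = 1;
}
```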

So this leaves you with 2 options:

  • use OFO (requires Kubernetes)
  • build your own custom flag server which exposes a sync.proto implementation (significant effort)

Do either of these work for you?

If these don't satisfy your needs, we can consider 2 improvements to our offerings:

  1. expose a sync.proto implementation on flagd itself, so that you could use flagd to serve "unevaluated" flags to in-process providers (effectively adding flagd-proxy's functionality to flagd) while backing it with flagd's multi-source paradigm (file, HTTP, and other sources)
  2. add features to every in-process provider to enable them to read arbitrary sources, not just gRPC sync.protos (ex: files, HTTP, etc).

At first glance, I think option 2 will be way too much duplicated effort across every in-process provider implementation. Option 1 seems more feasible.

What do you think?

@yangzhaox
Author

yangzhaox commented Dec 19, 2023

Hi @toddbaert, thanks for the comprehensive reply, really appreciate it.

We use k8s with multiple clusters across multiple DCs. OFO sounds like overkill to me, but maybe I don't have a clear picture of how it works. Do you have any diagram or doc for that?

I'm interested in the custom flag server approach, as well as the UI for configuring the flags. Is there any open-source implementation I can refer to?

Of the 2 options, I like option 1 more because it makes flagd more powerful and gives users more choices: one can use RPC evaluation, in-process evaluation, or both. The downside is that flagd becomes a bit more complicated to maintain.

A 3rd option could be providing a separate flag server, something like flagd-proxy but not tied to k8s. It would read flags from different sources and broadcast changes to all connected in-process providers, without doing RPC evaluation.

@toddbaert
Member

toddbaert commented Dec 19, 2023

We use k8s with multiple clusters across multiple DCs. OFO sounds like overkill to me, but maybe I don't have a clear picture of how it works. Do you have any diagram or doc for that?

See the OFO doc here, specifically the "concepts" section. There's also a specific section on how flagd-proxy fits in.

I'm interested in the custom flag server approach, as well as the UI for configuring the flags. Is there any open-source implementation I can refer to?

There's no such open-source implementation (though I know there are closed-source ones). One of the "non-goals" of the project is building a UI for feature-flag or flagd configuration. We have a lot of associated vendors that do a great job with their UIs and are compatible with OpenFeature SDKs. An open-source implementation of such a UI is outside the scope of the project, though that wouldn't stop anyone else from building one.

Of the 2 options, I like option 1 more because it makes flagd more powerful and gives users more choices: one can use RPC evaluation, in-process evaluation, or both. The downside is that flagd becomes a bit more complicated to maintain.

Agreed on all counts. Though there's a bit of additional complexity here, I don't think it would be unmanageable. We've already designed the interfaces and protos, so implementation should be straightforward.

A 3rd option could be providing a separate flag server, something like flagd-proxy but not tied to k8s. It would read flags from different sources and broadcast changes to all connected in-process providers, without doing RPC evaluation.

This feels very much like flagd-proxy, but "bigger": general purpose (in terms of how it can be deployed for feature flagging) and more flexible, which is basically what flagd already is. That's why I keep going back to option 1, especially considering we don't want to build a UI or API for configuring flags.

@toddbaert
Member

toddbaert commented Mar 15, 2024

@yangzhaox with the latest release of flagd, we now serve the sync.proto. In conjunction with our JS, Java, and Go providers, this can be used to evaluate flags in-process, with no network I/O per flag evaluation. flagd essentially just delivers a flag definition to the provider via the new proto.
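As a placeholder until those docs land, starting flagd with the sync service enabled looks roughly like this. Treat it as a sketch: the `--uri` and `--sync-port` flags and the port shown are assumptions to verify against the current flagd CLI reference for your version.

```shell
# Serve a file-based flag definition. Recent flagd releases also
# expose the gRPC sync service (sync.proto) so that in-process
# providers can subscribe to the raw, unevaluated flag definition.
# Flag names and the port below are illustrative; run
# `flagd start --help` to confirm them for your version.
flagd start --uri file:./flags.json --sync-port 8015
```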

Documentation and examples will come shortly.

@toddbaert toddbaert removed the Needs Triage This issue needs to be investigated by a maintainer label Mar 15, 2024
@yangzhaox
Author

Hi @toddbaert awesome news! Thank you all for making this happen, I'll find time to test it out.
