[FEATURE] Evaluating multiple flags as one batch concurrently #961
@yangzhaox Hello! Thanks for your input. I don't have all the details about your use case, so I'll make some assumptions... but first I'd like to highlight that flagd is an implementation of an OpenFeature provider; it doesn't have an SDK/API that developers write code against.

If I'm understanding correctly, what you're asking for is a change to the actual OpenFeature Go SDK to support bulk evaluation. This is something we've discussed before, but have decided not to implement so far (see here for some of that discussion). We're always open to discussing this further, but so far we've found that the problem can be solved better in other ways.

In your case, if the concern is latency, you might be interested in the go-in-process provider. This provider embeds flagd's evaluation engine directly into the Go client process, reducing latency by removing all I/O. You can then comfortably make feature flag evaluations one at a time as needed (hopefully), with performance characteristics similar to evaluating them in bulk. You can connect the in-process provider to a flagd instance, which will just propagate the flag configuration to the provider(s). Let me know what you think about that strategy.
Hi @toddbaert, thanks for your reply. It's interesting to learn that an in-process provider has been introduced recently. I saw there were Go and Java in-process providers in the docs; do you also have a Node.js in-process provider? Thanks.
Hey @yangzhaox, yes, we plan on building a Node.js in-process provider next. @toddbaert could you provide a link to the issue when you have a moment?
Hey @beeme1mr and @yangzhaox. Here is the issue. It's a big one and will not be implemented in a single PR.
As for getting the flag definition: gRPC is the default protocol, but is HTTP(S) also supported for periodic polling? @toddbaert
Currently, in-process providers support only gRPC sources.
Hi @toddbaert, it's great to see that the Node.js in-process provider has been completed. Do you have any docs on connecting the in-process provider to a flagd instance over gRPC? And how do I configure/start flagd so it acts as a gRPC server streaming its flag definition? I read through the docs but could not figure it out myself.
Hey @yangzhaox!
Currently, in-process providers can only get their flags from a compatible gRPC server (a server implementing the sync.proto). There's only one current open source implementation of this: flagd-proxy. Unlike flagd, flagd-proxy is not general purpose; it was designed expressly to be used as part of the OpenFeature Operator (OFO), which defines flags as CRDs. So basically, flagd-proxy reads flag CRs in Kubernetes and exposes them via gRPC to in-process providers; OFO handles most of this for you.

If you're not using Kubernetes/OFO, you'll need some other implementation of the sync.proto. Some adopters have built this themselves, along with an associated UI to configure flags, and their in-process providers connect to that custom server.

So this leaves you with 2 options:
Does either of these work for you? If they don't satisfy your needs, we can consider 2 improvements to our offerings:
At first glance, I think option 2 will be way too much duplicated effort across every in-process provider implementation. Option 1 seems more feasible. What do you think?
Hi @toddbaert, thanks for the comprehensive reply, I really appreciate it. We use Kubernetes with multiple clusters across multiple data centers; OFO sounds a bit like overkill to me, but maybe I don't have a clear picture of how it works. Do you have any diagram or doc for that?

I'm interested in the custom flag server approach, as well as the UI to configure the flags. Is there any open source implementation I can refer to?

Of the 2 options, I like option 1 more because it makes flagd more powerful and gives users more choices: one can use RPC evaluation, in-process evaluation, or both. The downside is that flagd becomes a bit more complicated to maintain.

A 3rd option could be providing a separate flag server, something like flagd-proxy but not tied to Kubernetes. It would read flags from different sources and broadcast changes to all connected in-process providers, without doing RPC evaluation itself.
See the OFO doc here, specifically the "concepts" section. There's also a specific section on how flagd-proxy fits in.
There's no such open-source implementation (though I know there are closed-source ones). One of the project's "non-goals" is building a UI for feature flag or flagd configuration. We have a lot of associated vendors that do a great job with their UIs and are compatible with OpenFeature SDKs, so an open source implementation of such a UI is outside the scope of the project. That wouldn't stop anyone else from building such a solution, though.
Agreed on all counts. Though there's a bit of additional complexity here, I don't think it would be unmanageable. We've already designed the interfaces and protos, so implementation should be straightforward.
This feels very much like flagd-proxy, but "bigger": general purpose (in terms of how it can be deployed for feature flagging) and more flexible. That's basically what flagd already is, which is why I keep coming back to option 1, especially considering we don't want to build a UI or API for configuring flags.
@yangzhaox with the latest release of flagd, we now serve the sync.proto. In conjunction with our JS, Java, and Go providers, this can be used to evaluate flags in-process, with no network I/O per flag evaluation. flagd essentially just delivers a flag definition to the provider via the new proto. Documentation and examples will come shortly.
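Until those docs land, here is a rough sketch of what in-process evaluation against flagd could look like with the Go provider. The option names (`WithInProcessResolver`, `WithHost`, `WithPort`) and the 8015 sync port are assumptions based on the provider's general shape; check the open-feature/go-sdk-contrib flagd provider README for the exact API.

```go
package main

import (
	"context"
	"fmt"

	flagd "github.com/open-feature/go-sdk-contrib/providers/flagd/pkg"
	"github.com/open-feature/go-sdk/openfeature"
)

func main() {
	// Assumed API: the in-process resolver pulls the full flag definition from
	// flagd over gRPC (sync.proto) and evaluates flags locally, so there is no
	// network hop per evaluation. Verify the option names against the provider docs.
	provider := flagd.NewProvider(
		flagd.WithInProcessResolver(),
		flagd.WithHost("localhost"),
		// Assumed default sync port; flagd itself would be started with
		// something like `flagd start --uri file:./flags.flagd.json`.
		flagd.WithPort(8015),
	)
	openfeature.SetProvider(provider)

	client := openfeature.NewClient("my-app")
	enabled, err := client.BooleanValue(
		context.Background(), "new-welcome-banner", false, openfeature.EvaluationContext{},
	)
	fmt.Println(enabled, err)
}
```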
Hi @toddbaert, awesome news! Thank you all for making this happen; I'll find time to test it out.
Requirements
In our team we have a need to evaluate multiple flags, e.g. 15 out of 30, concurrently by specifying a list of flag names, ideally with each flag evaluation running in its own goroutine so the overall latency stays low. Reading through the documentation, I could not find such an API; the only relevant one is resolveAll, but it's not what we need. Would you consider supporting this use case? Thanks.
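For illustration, the kind of helper we have in mind looks roughly like the sketch below, built on the OpenFeature Go SDK's per-flag API. The `evaluateBatch` name and the fall-back-to-default error handling are our own, not an SDK feature.

```go
package batch

import (
	"context"
	"sync"

	"github.com/open-feature/go-sdk/openfeature"
)

// evaluateBatch is a hypothetical helper (not an SDK API): it fans out boolean
// flag evaluations across goroutines and collects the results keyed by flag name.
func evaluateBatch(ctx context.Context, client *openfeature.Client, flags []string, evalCtx openfeature.EvaluationContext) map[string]bool {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results = make(map[string]bool, len(flags))
	)
	for _, flag := range flags {
		wg.Add(1)
		go func(flag string) {
			defer wg.Done()
			// On error the SDK returns the default value (false here),
			// so the error is ignored for brevity.
			value, _ := client.BooleanValue(ctx, flag, false, evalCtx)
			mu.Lock()
			results[flag] = value
			mu.Unlock()
		}(flag)
	}
	wg.Wait()
	return results
}
```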