Is your feature request related to a problem? Please describe.
Currently, to evaluate a flag for an identity in the local-evaluation server-side SDKs, one has to call getIdentityFlags() with the identity identifier (and possibly a trait list). This eagerly evaluates all flags and returns them so they can be checked with methods like isFeatureEnabled.
The problem is that eagerly evaluating the flags can take some time, and that time adds up when this call is made often for different identities and when the number of flags to evaluate is high, especially if some flags might not need to be evaluated at all.
Describe the solution you'd like.
It would be great to have a way to tell Flagsmith to evaluate only a subset of flags, e.g. getIdentityFlags(..., { onlyFlags: ['this_one_is_important', 'so_is_this_one'] }). This call would evaluate only those two feature flags. Checking isFeatureEnabled on any flag other than this_one_is_important and so_is_this_one would then act as if that flag didn't exist.
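A minimal sketch of how the proposed option could behave. The options object on getIdentityFlags is hypothetical (it is the feature being requested), and evaluateFlag is a stand-in for the SDK's real, potentially expensive per-flag evaluation engine:

```typescript
// Sketch of the proposed onlyFlags option. The options object on
// getIdentityFlags is hypothetical, and evaluateFlag stands in for the
// SDK's real (potentially expensive) per-flag evaluation.
const allFlags = ["this_one_is_important", "so_is_this_one", "unrelated_flag"];

function evaluateFlag(name: string, identity: string): boolean {
  return (name.length + identity.length) % 2 === 0; // placeholder logic
}

function getIdentityFlags(identity: string, opts?: { onlyFlags?: string[] }) {
  const wanted = opts?.onlyFlags ?? allFlags;
  const evaluated = new Map<string, boolean>();
  for (const name of wanted) {
    evaluated.set(name, evaluateFlag(name, identity)); // only the requested subset
  }
  return {
    // Flags outside the subset behave as if they did not exist.
    isFeatureEnabled: (name: string) => evaluated.get(name) ?? false,
  };
}

const flags = getIdentityFlags("user-123", {
  onlyFlags: ["this_one_is_important", "so_is_this_one"],
});
console.log(flags.isFeatureEnabled("so_is_this_one"));
console.log(flags.isFeatureEnabled("unrelated_flag")); // false: never evaluated
```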
Describe alternatives you've considered
Alternatively, evaluation could be done lazily, so that the const flags = getIdentityFlags(...) call is not bound by the number of flags. Calling flags.isFeatureEnabled(name) or flags.getFeatureValue(name) would then evaluate name whenever needed (and possibly cache the evaluated result in flags so that double evaluation is avoided).
This approach, however, shifts the performance load, so it might be considered a breaking change. getIdentityFlags would become much faster (it would do little more than return the structure for actually evaluating flags), but the individual isFeatureEnabled and getFeatureValue calls (at least the initial ones) would become slower, since each now has to evaluate its respective flag, possibly causing issues where that is not expected.
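The lazy alternative can be sketched as follows. All names here are illustrative, not the real SDK surface; a counter makes the evaluate-once-then-cache behaviour observable:

```typescript
// Sketch of the lazy alternative: getIdentityFlags returns immediately,
// and each flag is evaluated on first access, then cached. Names are
// illustrative stand-ins, not the real Flagsmith SDK API.
let lazyEvaluations = 0;

function evaluateFlagLazily(name: string, identity: string): boolean {
  lazyEvaluations += 1; // stand-in for the real, potentially expensive work
  return (name.length + identity.length) % 2 === 0;
}

function getIdentityFlagsLazy(identity: string) {
  const cache = new Map<string, boolean>();
  const evaluate = (name: string): boolean => {
    let value = cache.get(name);
    if (value === undefined) {
      value = evaluateFlagLazily(name, identity); // pay the cost only when asked
      cache.set(name, value);
    }
    return value;
  };
  return {
    isFeatureEnabled: (name: string) => evaluate(name),
    getFeatureValue: (name: string) => evaluate(name),
  };
}

const lazyFlags = getIdentityFlagsLazy("user-123"); // cheap: nothing evaluated yet
lazyFlags.isFeatureEnabled("some_flag"); // first access evaluates and caches
lazyFlags.getFeatureValue("some_flag"); // second access hits the cache
console.log(lazyEvaluations); // 1
```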
Additional context
We were trying out a few solutions to the anonymous-user-identities-after-login problem by evaluating flags twice: once with the anonymous identifier while the user is logged out, and again with the user ID once they are logged in. For certain experiments we can then choose whether to switch over to the user-specific value or keep using the "anonymous" evaluated value.
However, in this setup we always evaluate all flags twice, and since we share flags between the FE and BE, we have hundreds of them. Only about half are needed by this specific project (they are declared in it for type safety, so they could be passed into this filter array by default to skip evaluating the rest), and only a handful of those are A/B experiments, so only those would need to be evaluated in the anonymous context, greatly reducing the performance impact.
Something like the current behaviour (out of a total of 300 flags, 5 of them A/B experiments):
- evaluate 300 flags anonymously
- evaluate 300 flags for the identity
- use 3 flags anonymously
- use 97 flags for the identity
With this improvement (out of a total of 300 flags, 5 of them A/B experiments):
- evaluate 5 flags anonymously
- evaluate 100 flags for the identity
- use 3 flags anonymously
- use 97 flags for the identity