feat(redis): enable tuning of caching parameters #129
Conversation
Codecov Report
Attention: Patch coverage is
One blocking comment wrt the env var misspelling; LGTM otherwise.
stringValue := mustGetEnv(envVar)
value, err := strconv.ParseInt(stringValue, 10, 64)
if err != nil {
	panic(fmt.Errorf("parsing env var %s to int: %w", envVar, err))
suggestion: we may want to include the stringValue that was expected to be an int in the error message, although I suspect this is a very rare panic.
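Something along these lines, as a sketch (assuming the helper keeps its current shape and relies on the file's existing mustGetEnv helper and fmt/strconv imports):

```go
// Sketch: include the offending value in the panic so a misconfigured
// env var is easier to spot from the log alone.
func mustGetInt(envVar string) int64 {
	stringValue := mustGetEnv(envVar)
	value, err := strconv.ParseInt(stringValue, 10, 64)
	if err != nil {
		panic(fmt.Errorf("parsing env var %s value %q to int: %w", envVar, stringValue, err))
	}
	return value
}
```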
PROVIDERS_CACHE_EXPIRATION_SECONDS = "${terraform.workspace == "prod" ? 30 * 24 * 60 * 60 : 24 * 60 * 60}"
INDEXES_REDIS_URL                  = aws_elasticache_serverless_cache.cache["indexes"].endpoint[0].address
INDEXES_REDIS_CACHE                = aws_elasticache_serverless_cache.cache["indexes"].name
INDEXES_CACHE_EXPIRATION_SECONDS   = "${terraform.workspace == "prod" ? 24 * 60 * 60 : 60 * 60}"
CLAIMS_REDIS_URL                   = aws_elasticache_serverless_cache.cache["claims"].endpoint[0].address
CLAIMS_REDIS_CACHE                 = aws_elasticache_serverless_cache.cache["claims"].name
CLAIMS_CACHE_EXPIRATION_SECONDS    = "${terraform.workspace == "prod" ? 7 * 24 * 60 * 60 : 24 * 60 * 60}"
✔️ math is math-ing
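For reference, those expressions work out to: providers 30 * 24 * 60 * 60 = 2,592,000 s (30 days) in prod vs 86,400 s (1 day) otherwise; indexes 86,400 s (1 day) vs 3,600 s (1 hour); claims 7 * 24 * 60 * 60 = 604,800 s (7 days) vs 86,400 s (1 day).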
pkg/aws/service.go (Outdated)
LegacyDataBucketURL:             mustGetEnv("LEGACY_DATA_BUCKET_URL"),
HoneycombAPIKey:                 os.Getenv("HONEYCOMB_API_KEY"),
PrincipalMapping:                principalMapping,
ProvidersCacheExpirationSeconds: mustGetInt("PROVIDER_CACHE_EXPIRATION_SECONDS"),
blocking: the env var set in Terraform is PROVIDERS_CACHE_EXPIRATION_SECONDS
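For example, a sketch of the corrected line (assuming nothing else changes):

```go
ProvidersCacheExpirationSeconds: mustGetInt("PROVIDERS_CACHE_EXPIRATION_SECONDS"),
```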
fd6a513 to 875744b
Question: What's our motivation for using Time-Based Expiry rather than Capacity-Based Expiry (LRU or LFU) for this cache? (Or maybe we are using both already?)
Capacity-based expiry is already built into AWS. :) (and it will expire entries with expiries first -- goodness, I hope I configured that right though -- should probably go back and review :P)
"When you set a cache data storage maximum and your cache data storage hits the maximum, ElastiCache will begin evicting data in your cache that has a Time-To-Live (TTL) set, using the LRU logic." The reason for setting a TTL at all is that we want to distinguish between stuff that's already on IPNI and stuff we actually pre-cache on the indexer while it gets to IPNI. we want the pre-cache stuff to have no expiration until we get to IPNI, so it doesn't get deleted. Also our capacity is 10GB but we get charged by the GB for what we actually use, so if our hit ration is high even with expiry, it's better to not use all 10GB at most times |
Goals
Currently, our cache hit rate for some of our Redis instances is really low, because it turns out 1 hour really isn't long enough to hold some of this data. There's no reason not to hold it longer -- we've allocated almost 10GB to these Redis instances and we're currently using only a couple of MB.
Implementation