Package review PR #1
base: test
Conversation
Force-pushed from 61aa1fb to 0e4d4d1
src/commands/bloom.rs (Outdated)
let mut result = Vec::new();
match value {
    Some(bf) => {
        for item in input_args.iter().take(argc).skip(idx) {
            result.push(RedisValue::Integer(bf.add_item(item.as_slice())));
        }
        Ok(RedisValue::Array(result))
    }
    None => {
        if nocreate {
            return Err(RedisError::Str("ERR not found"));
        }
        let mut bf = BloomFilterType::new_reserved(fp_rate, capacity, expansion);
        for item in input_args.iter().take(argc).skip(idx) {
            result.push(RedisValue::Integer(bf.add_item(item.as_slice())));
        }
        match filter_key.set_value(&BLOOM_FILTER_TYPE, bf) {
            Ok(_) => Ok(RedisValue::Array(result)),
            Err(_) => Err(RedisError::Str(ERROR)),
        }
    }
}
Could we check whether the bloom filter exists first and then insert the data in a single flow? I don't like the code duplication around insertion.
I can create a separate function for multi-adds and call it from both flows. I wanted to handle both through the same flow; however, the value would then be out of reference scope and result in a move error.
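A minimal sketch of that shared helper, assuming the argument types used in the diff above (the function name and signature are illustrative, not from the module):

fn handle_item_add(
    bf: &mut BloomFilterType,
    input_args: &[RedisString],
    argc: usize,
    idx: usize,
) -> Vec<RedisValue> {
    // Add every remaining argument to the filter and record one integer
    // result per item, mirroring the loop in the diff above.
    let mut result = Vec::with_capacity(argc.saturating_sub(idx));
    for item in input_args.iter().take(argc).skip(idx) {
        result.push(RedisValue::Integer(bf.add_item(item.as_slice())));
    }
    result
}

Both branches could then call it: the Some branch returns Ok(RedisValue::Array(handle_item_add(bf, input_args, argc, idx))), and the None branch creates the filter, calls the helper with &mut bf, and only afterwards moves bf into filter_key.set_value, which avoids the move error because the mutable borrow ends before the move.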
src/commands/bloom_util.rs (Outdated)
pub struct BloomFilterType {
    pub expansion: u32,
    pub fp_rate: f32,
    pub filters: Vec<BloomFilter>,
Does the underlying library support scalable bloom filters, or do we need to implement this mechanism ourselves?
This particular library does not have auto-scaling: https://docs.rs/bloomfilter/1.0.13/bloomfilter/struct.Bloom.html
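So scaling has to be layered on top of the crate, e.g. by treating the object as a list of sub-filters. A rough sketch (hypothetical method, reusing the struct from the diff above and mirroring the bloom.set call shown below):

impl BloomFilterType {
    // Membership is the OR over all sub-filters; inserts always go to the
    // newest sub-filter, with a larger one created once it fills up.
    fn item_exists(&self, item: &[u8]) -> bool {
        self.filters.iter().any(|filter| filter.bloom.check(item))
    }
}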
src/commands/bloom_util.rs (Outdated)
let new_capacity = filter.capacity * self.expansion;
let mut new_filter = BloomFilter::new(self.fp_rate, new_capacity);
// Add item.
new_filter.bloom.set(item);
new_filter.num_items += 1;
self.filters.push(new_filter);
Shouldn't we fail the request if the expansion is much higher than we can support?
expansion really means expansion_rate; I can rename this variable if that would help clarify it. I don't think there is a limit on the number of sub-filters an object can have. But (if we want) we can define a config for this to either silently fail expansion beyond a limit of X sub-filters per object (while still allowing the set) OR explicitly return an error (without a set). A sketch of that config follows.
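Illustrative only; the constant name, value, and policy flag are not from the module:

// Cap the number of sub-filters per object and apply one of the two
// policies discussed above when the cap is reached.
const MAX_SUB_FILTERS: usize = 32;

fn can_create_sub_filter(current: usize, strict: bool) -> Result<bool, RedisError> {
    if current < MAX_SUB_FILTERS {
        return Ok(true); // expansion allowed
    }
    if strict {
        // Explicitly reject the write (no set happens).
        Err(RedisError::Str("ERR max number of sub filters reached"))
    } else {
        // Allow the set on the current sub-filter, but silently skip the expansion.
        Ok(false)
    }
}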
One other aspect we should handle is checking / rejecting based on the memory overhead of every operation that creates a new BloomFilter object (BF.ADD, BF.MADD, BF.RESERVE, BF.INSERT, RDB load). Before any of these operations, we should probably check the memory usage and reject the operation if there is not sufficient space. We have a mechanism to compute the estimated additional memory overhead per creation.
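A rough shape of such a guard, with hypothetical names and limit plumbing (the module's actual estimation mechanism may differ); it estimates the bytes a new filter of the given capacity and false-positive rate would allocate using the standard sizing formula m = -n * ln(p) / ln(2)^2 bits:

fn validate_new_filter_memory(
    fp_rate: f32,
    capacity: u32,
    memory_limit_bytes: usize,
) -> Result<(), RedisError> {
    // Estimated bit-array size for the requested capacity and fp rate.
    let bits = -(capacity as f64) * (fp_rate as f64).ln() / std::f64::consts::LN_2.powi(2);
    let estimated_bytes = (bits / 8.0).ceil() as usize;
    if estimated_bytes > memory_limit_bytes {
        return Err(RedisError::Str("ERR operation would exceed bloom memory limit"));
    }
    Ok(())
}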
Force-pushed from 055728d to 7027032
Force-pushed from c87fd97 to 869a254
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Karthik Subbarao <karthikrs2021@gmail.com>
… objects. Update RDB load/save Signed-off-by: Karthik Subbarao <karthikrs2021@gmail.com>
* Support for DEBUG DIGEST module data type callback Signed-off-by: Nihal Mehta <nnmehta@amazon.com> * Update test cases Signed-off-by: Nihal Mehta <nnmehta@amazon.com> * Move digest to wrapper Signed-off-by: Nihal Mehta <nnmehta@amazon.com> * Update tests Signed-off-by: Nihal Mehta <nnmehta@amazon.com> * Add more scenarios for debug test Signed-off-by: Nihal Mehta <nnmehta@amazon.com> * Clean code and add scenario for debug test Signed-off-by: Nihal Mehta <nnmehta@amazon.com> --------- Signed-off-by: Nihal Mehta <nnmehta@amazon.com>
Force-pushed from 3f24e34 to 7c25468
…filter dependency to version 3.0 (#23) * Updating how we create BloomFilter from rdb loads. BloomFilter vec now has capacity of filter we are loading from Signed-off-by: zackcam <zackcam@amazon.com> * Updating bloomfilter dependency to version 3, fixing breaking changes as well Signed-off-by: zackcam <zackcam@amazon.com> * Updating the digest changes to follow updated version of bloom. As well as removing unnecessary fields saved in rdb Signed-off-by: zackcam <zackcam@amazon.com> * Update log in src/bloom/data_type.rs Signed-off-by: KarthikSubbarao <karthikrs2021@gmail.com> * Update comment in src/bloom/utils.rs Signed-off-by: KarthikSubbarao <karthikrs2021@gmail.com> * Clippy error in src/bloom/data_type.rs Signed-off-by: KarthikSubbarao <karthikrs2021@gmail.com> --------- Signed-off-by: zackcam <zackcam@amazon.com> Signed-off-by: KarthikSubbarao <karthikrs2021@gmail.com> Co-authored-by: KarthikSubbarao <karthikrs2021@gmail.com>
Signed-off-by: Karthik Subbarao <karthikrs2021@gmail.com>
Force-pushed from 99c6f58 to 1b0d819
Signed-off-by: Nihal Mehta <nnmehta@amazon.com>
…n a non bloom key (#30) Signed-off-by: zackcam <zackcam@amazon.com>
…r structs (#24) * Updating defragmentation to defrag both bloomfiltertype and bloomfilter structs Signed-off-by: zackcam <zackcam@amazon.com> * Draft: Extra debugging and more factors being defragged Signed-off-by: zackcam <zackcam@amazon.com> * Updating defrag and tests to use cursors and make test more robust by getting hits Signed-off-by: zackcam <zackcam@amazon.com> * Fixing merge conflicts Signed-off-by: zackcam <zackcam@amazon.com> * Fixing merge conflicts with random seed. Updating defrag test to use mexists and add_items_till_capacity. Refactored metric tracking as well to reduce code repetition Signed-off-by: zackcam <zackcam@amazon.com> --------- Signed-off-by: zackcam <zackcam@amazon.com>
…commands contain the correct acl categories (#29) Signed-off-by: zackcam <zackcam@amazon.com>
Signed-off-by: VanessaTang <yuetan@amazon.com>
…e properties on replica nodes to match the object on the primary (#32) Signed-off-by: Karthik Subbarao <karthikrs2021@gmail.com>
Force-pushed from 391d947 to 8b94c17
…s get incremented in defrag test (#33) Signed-off-by: zackcam <zackcam@amazon.com>
…fail (#34) * Deflaking tests to remove chance of false positives causing tests to fail Signed-off-by: zackcam <zackcam@amazon.com> * Update tests/test_bloom_command.py Co-authored-by: KarthikSubbarao <karthikrs2021@gmail.com> Signed-off-by: zackcam <zackcam@amazon.com> --------- Signed-off-by: zackcam <zackcam@amazon.com> Co-authored-by: KarthikSubbarao <karthikrs2021@gmail.com>
Signed-off-by: Nihal Mehta <nnmehta@amazon.com>
Signed-off-by: Karthik Subbarao <karthikrs2021@gmail.com>
Force-pushed from fcaad77 to 6a514bf
Signed-off-by: VanessaTang <yuetan@amazon.com>
#37) Signed-off-by: Karthik Subbarao <karthikrs2021@gmail.com>
Force-pushed from 9859dd6 to ab158bb
Signed-off-by: Karthik Subbarao <karthikrs2021@gmail.com>
Force-pushed from d3d7522 to a9b578e
Signed-off-by: KarthikSubbarao <karthikrs2021@gmail.com>
Signed-off-by: zackcam <zackcam@amazon.com>
Signed-off-by: Nihal Mehta <nnmehta@amazon.com>
…om filter can reach the desired size (#41) * Adding optional arg to BF.INSERT to allow users to check if their bloom filter can reach the desired size Signed-off-by: zackcam <zackcam@amazon.com> * Fixing ATLEASTCAPACITY calculation as well as adding MAXCAPACITY functionality for info Signed-off-by: zackcam <zackcam@amazon.com> --------- Signed-off-by: zackcam <zackcam@amazon.com>
…ity calculation matches actual capacity (#43) Signed-off-by: zackcam <zackcam@amazon.com>
…44) file as it can be taken from a dependency and moving metric increments to after creates Signed-off-by: zackcam <zackcam@amazon.com>
Signed-off-by: Karthik Subbarao <karthikrs2021@gmail.com>
Force-pushed from f46179b to 8f1c9f4
This is a PR which merges the current changes in unstable into an empty branch in order to help with a code review of the entire codebase.