[Dataset] Add take, filter API to dataset #16078
Conversation
The new APIs would have quite a bit of overlap with the existing sampler API. An alternative would be a `sample` interface that takes a sampler.

Then the Sampler would need to take a dataset as an input to its constructor, which is quite different from the existing samplers. Do we really want to do that?
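To make the trade-off concrete, here is a minimal sketch (hypothetical class name, not MXNet code) of what a filter implemented as a sampler would look like. Note that, unlike existing samplers which only need a length, its constructor must receive the dataset itself so it can evaluate the predicate:

```python
# Hypothetical sketch: a filter expressed as a sampler. The constructor
# must take the dataset, which is the departure from existing samplers.
class FilterSampler:
    def __init__(self, fn, dataset):
        # Scan the whole dataset up front, keeping indices whose
        # samples satisfy the predicate.
        self._indices = [i for i in range(len(dataset)) if fn(dataset[i])]

    def __iter__(self):
        return iter(self._indices)

    def __len__(self):
        return len(self._indices)

dataset = list(range(10))
sampler = FilterSampler(lambda x: x % 2 == 0, dataset)
# iterating the sampler yields the indices of the even samples
```

The scan in the constructor is also why this design forces eager evaluation of the predicate over the whole dataset.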
This one looks good to me; I just want to confirm it's compatible with a LazyTransform dataset.

Yes. I added more tests with a lazily transformed dataset.
```diff
@@ -40,6 +40,41 @@ def __getitem__(self, idx):
     def __len__(self):
         raise NotImplementedError

+    def filter(self, fn):
```
This will eagerly retrieve all values from a `lazy=True` transformed dataset. That should be documented. (The transformation function will also be applied again in `__getitem__`.)
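The double evaluation is easy to demonstrate with a toy stand-in for a lazily transformed dataset (illustrative classes, not the actual MXNet ones). The transform runs once per element while `filter` tests the predicate, and then again when the kept elements are retrieved through `__getitem__`:

```python
# Toy illustration: filtering a lazily transformed dataset evaluates
# the transform eagerly, and __getitem__ applies it a second time.
calls = []

def transform(x):
    calls.append(x)          # record every invocation of the transform
    return x * 10

class LazyDataset:
    def __init__(self, data, fn):
        self._data, self._fn = data, fn

    def __len__(self):
        return len(self._data)

    def __getitem__(self, idx):
        return self._fn(self._data[idx])   # transform applied lazily

    def filter(self, pred):
        # Eager: pulls (and transforms) every sample to test the predicate.
        keep = [i for i in range(len(self)) if pred(self[i])]
        return [self[i] for i in keep]     # transform runs AGAIN here

ds = LazyDataset([1, 2, 3, 4], transform)
out = ds.filter(lambda v: v >= 20)
# transform ran once per element during filtering, plus once more
# for each of the elements that survived
```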
Yes, I'll document that.
Would it be better in some cases to apply lazy evaluation after `filter`? I.e., `data.filter(xxx).transform` is much better than `data.transform.filter` in some situations.
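The cost difference the comment describes can be sketched with plain Python (no MXNet classes; the counter and names are illustrative): filtering on the raw data first means an expensive transform only runs on the samples that survive, while transforming first pays the cost for every sample.

```python
# Why order matters: count how often an expensive transform runs
# under each ordering. (The two predicates are chosen so both
# orderings keep the same elements.)
expensive_calls = 0

def expensive_transform(x):
    global expensive_calls
    expensive_calls += 1
    return x * x

raw = list(range(100))

# transform-then-filter: the transform runs for all 100 samples
a = [y for y in map(expensive_transform, raw) if y < 25]
calls_transform_first = expensive_calls

# filter-then-transform: the transform runs only for the 5 kept samples
expensive_calls = 0
b = [expensive_transform(x) for x in raw if x < 5]
calls_filter_first = expensive_calls

assert a == b   # same result, very different cost
```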
Both are valid operations, and I don't think we can stop users from calling transform() before filter(), since sometimes we need to filter based on the transformed data. For now I'd resort to better documentation.
* first commit
* fix lint
* add more test for transformed data
* address CR
* Update dataset.py
* use sampler to implement
* Update dataset.py
Description
Add dataset.filter, dataset.take APIs. @zhreshold
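A minimal stand-in for the new surface (illustrative only; the real `Dataset` classes live in `mxnet.gluon.data`, and `SimpleDataset` here is a hypothetical sketch): `filter(fn)` returns a dataset containing only the samples for which `fn` is true, and `take(count)` returns a dataset with at most the first `count` samples, so the two compose naturally.

```python
# Hypothetical sketch of the take/filter API on a list-backed dataset.
class SimpleDataset:
    def __init__(self, data):
        self._data = list(data)

    def __len__(self):
        return len(self._data)

    def __getitem__(self, idx):
        return self._data[idx]

    def take(self, count):
        # Dataset with at most the first `count` samples.
        return SimpleDataset(self._data[:count])

    def filter(self, fn):
        # Dataset with only the samples for which fn returns True.
        return SimpleDataset([x for x in self._data if fn(x)])

ds = SimpleDataset(range(10))
evens = ds.filter(lambda x: x % 2 == 0)   # 0, 2, 4, 6, 8
first3 = evens.take(3)                    # 0, 2, 4
```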