
Dataset types use __nonzero/bool__ method for truthiness #992

Merged: 1 commit into master, Nov 29, 2016

Conversation

philippjfr (Member)

As discussed in #988, this PR implements the __nonzero__ (Py2) and __bool__ (Py3) methods on Dataset, which Python uses for truthiness instead of __len__. This allows the dask interface to implement __len__ correctly, without forcing code throughout HoloViews to compute the length of a large out-of-core dask dataframe every time a Dataset is tested for truthiness.

"""
return 1
def nonzero(cls, dataset):
return True
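For background, a minimal pure-Python sketch (a hypothetical class, not the actual HoloViews code) of the behavior this PR relies on: when __bool__ is defined, Python uses it for truthiness and never falls back to __len__, so `if dataset:` stays cheap on a lazy backend.

```python
class LazyDataset:
    """Toy stand-in for a Dataset backed by an out-of-core source."""

    def __init__(self, rows):
        self._rows = rows
        self.len_calls = 0   # counts how often the expensive path runs

    def __len__(self):
        # Imagine this triggering a full dask compute.
        self.len_calls += 1
        return len(self._rows)

    def __bool__(self):      # __nonzero__ on Py2
        return True          # hardcoded, as in this PR

ds = LazyDataset(range(10**6))
if ds:                       # truthiness uses __bool__, not __len__
    pass
assert ds.len_calls == 0     # the expensive length was never computed
```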

Member

This is a definite improvement! Hardcoding nonzero is vastly better than hardcoding length. Even so, is there no way to determine the actual value of nonzero in a way that doesn't load the entire dataset?

philippjfr (Member Author)

I tried various things such as using:

try:
    next(df.iterrows())
except StopIteration:
    nonzero = False
else:
    nonzero = True

But all of these approaches still take a considerable amount of time.

Member

Maybe worth asking the dask maintainers if there is a quick method for testing nonzero, then.

philippjfr (Member Author)

I've asked, hopefully they'll have a good solution.

philippjfr (Member Author)

Matt Rocklin suggested using .head and checking the length of that, but it's not any faster than my solution above.
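The suggested pattern can be sketched like this (using a hypothetical stand-in class, not the real dask API; dask's actual .head computes only the partitions needed for the requested rows, but those partitions still carry their full task graph):

```python
class FakeLazyFrame:
    """Hypothetical stand-in for dask.dataframe.DataFrame (illustration only)."""

    def __init__(self, rows):
        self._rows = rows

    def head(self, n=5):
        # In dask this would compute just enough partitions for n rows.
        return self._rows[:n]

def is_nonzero(df):
    # The suggested check: materialize at most one row.
    return len(df.head(1)) > 0

assert is_nonzero(FakeLazyFrame([1, 2, 3]))
assert not is_nonzero(FakeLazyFrame([]))
```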

philippjfr (Member Author)

Discussed it properly now, and it's fairly clear that there won't be a cheap general solution here. A dask dataframe can be the result of a chain of operations, all of which have to be evaluated before the length, or even a nonzero length, can be determined.
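To illustrate the point with a pure-Python analogy (not dask internals): once operations are chained lazily, even deciding emptiness forces evaluation of the chain, because a filter step can drop every row.

```python
calls = {"n": 0}

def expensive_predicate(x):
    # Stands in for a real per-row computation in a chained operation.
    calls["n"] += 1
    return x > 100

rows = range(1000)
filtered = (x for x in rows if expensive_predicate(x))  # lazy, like a dask graph

# Even "is there at least one row?" must evaluate the chain until a
# match appears -- or exhaust the data entirely if nothing matches.
nonzero = next(filtered, None) is not None
assert nonzero
assert calls["n"] == 102   # rows 0..100 fail, row 101 succeeds
```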

while i < len(self):
    yield tuple(self.data[i, ...])
    i += 1


Member

Not sure of the implications of deleting this.

philippjfr (Member Author)

This was old and broken because it assumed the data was always an array. There is an existing issue somewhere about adding an iterator method to all the Dataset interfaces.

jbednar (Member) commented Nov 29, 2016

Looks great! Happy to see it merged.

jlstevens (Contributor)

Very happy with this improvement and tests have passed. Merging.

@jlstevens jlstevens merged commit 8c03983 into master Nov 29, 2016
@philippjfr philippjfr deleted the interface_nonzero branch December 10, 2016 23:42

This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 26, 2024