
Fix bogus column and row at edge #318

Merged
merged 3 commits into holoviz:master from jbcrail:off-by-one on Apr 26, 2017

Conversation

@jbcrail jbcrail commented Apr 24, 2017

I applied @jbednar's changes to make the upper bound exclusive.

I also updated the test data to reflect the new upper bound semantics. In particular, the log axis tests required a seemingly magic value, x_max_index. This value results from scaling, translating, and indexing the original x range from (1, 11) to (1, sqrt(11)). To account for precision errors, I calculated the final value using the same formula used by Datashader.
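For reference (this check is not part of the PR itself), the formula reduces to the geometric mean of the range endpoints, i.e. sqrt(11):

import numpy as np

# 10 ** (1 / (2 / log10(11))) == 10 ** (log10(11) / 2) == sqrt(11)
x_max_index = 10 ** (1 / (2 / np.log10(11)))
assert np.isclose(x_max_index, np.sqrt(11))  # ~3.3166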

Fix #259

@jbednar jbednar left a comment

Looks great! Ready to merge?

@@ -152,14 +154,16 @@ def test_multiple_aggregates():


 def test_log_axis():
+    # Upper bound for scale/index of x-axis
+    x_max_index = 10 ** (1 / (2 / np.log10(11)))
     sol = np.array([[5, 5], [5, 5]], dtype='i4')
Member
Thanks for tracking that calculation down!

@@ -70,7 +72,7 @@ def test_sum():


 def test_min():
-    out = xr.DataArray(df.i64.reshape((2, 2, 5)).min(axis=2).astype('f8').T,
+    out = xr.DataArray(df.i64.values.reshape((2, 2, 5)).min(axis=2).astype('f8').T,
                        coords=coords, dims=dims)
Member

What's the implication of this change?

Contributor Author

It removes the deprecation warning about reshape; Series.reshape looks to have been deprecated in pandas 0.19.0.
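For illustration (this frame is a stand-in, not the actual test fixture), reshaping the underlying NumPy array gives the same result without the deprecated Series.reshape:

import numpy as np
import pandas as pd

df = pd.DataFrame({'i64': np.arange(20, dtype='i8')})

# Deprecated since pandas 0.19.0 (and later removed):
#   df.i64.reshape((2, 2, 5))
# Going through the underlying NumPy array avoids the warning:
out = df.i64.values.reshape((2, 2, 5)).min(axis=2).astype('f8').T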

jbcrail commented Apr 26, 2017

This is ready for merging.

@jbednar jbednar merged commit a81c401 into holoviz:master Apr 26, 2017
@jbcrail jbcrail deleted the off-by-one branch April 26, 2017 20:52
philippjfr commented Apr 27, 2017

@jbcrail We've been seeing kernel crashes in the notebook since this PR was merged. I did a git bisect and tracked it down to this commit:

[68d51fe] Fix off-by-one error

You can reliably reproduce this error by executing this code twice in a notebook:

import pandas as pd
import datashader as ds

data = pd.read_csv('crash.csv')

data['k'] = data['k'].astype('category')

cvs = ds.Canvas(plot_width=400,
                plot_height=400,
                x_range=(data['x'].min(), data['x'].max()),
                y_range=(data['y'].min(), data['y'].max()))

cvs.line(data, 'x', 'y', ds.count_cat('k'))

You can download the crash.csv here. I suspect it's because extend_line wasn't updated in this PR.
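For context, here is a rough sketch of the kind of edge-of-range indexing being discussed (a hypothetical mapping, not Datashader's actual code): if a glyph still maps x == x_max to column plot_width, it indexes one slot past the aggregate grid, and in numba-compiled code that is an unchecked out-of-bounds write rather than an IndexError, which would be consistent with a kernel crash.

import numpy as np

def x_to_col(x, x_min, x_max, plot_width):
    # Hypothetical helper: scale x into [0, plot_width) and truncate
    # to an integer column index.
    frac = (x - x_min) / (x_max - x_min)
    return int(frac * plot_width)

agg = np.zeros((400, 400), dtype='i4')
col = x_to_col(1.0, 0.0, 1.0, 400)  # x exactly at x_max -> 400
# agg[0, col] += 1  # one past the last column (valid indices are 0..399)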

jbcrail commented Apr 27, 2017

@philippjfr I think you're right. Thanks for catching this. I'll open an issue and fix it.

jbcrail added a commit to jbcrail/datashader that referenced this pull request Sep 27, 2017
This reverts to exclusive ranges, both manual and auto, for all glyphs
(points/line) without introducing regressions of holoviz#318, holoviz#330, and holoviz#343.

I refactored several tests to make xarray coordinate indices easier to
read and more explicit.