Update to 0.18.1 #39
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (
Will need similar changes for
@amueller @lesteve After successfully backporting scikit-learn/scikit-learn@3933130 we still got the error reported by @jakirkham on OSX:
cc @ocefpaf
I guess nobody has looked at it since. I will try to reproduce it, but I don't have an OSX VM handy right now. For the record, I don't think it is a good idea to merge this PR. If we want 0.18.1 accessible on conda-forge, we should have it exactly the same as the one on PyPI and skip the offending tests just for the purposes of releasing to conda-forge.
I opened an issue for this:
Thanks for this!
```diff
  # detect constant columns
  constant_column = np.var(Q, 0) < 1.e-12
  w[constant_column] = 0  # cancel the regularization for the intercept
- w[v == 0] = 0
```
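As a standalone illustration of the constant-column detection in the diff above, here is a minimal sketch. The arrays `Q` and `w` are toy values, not the actual ridge internals; only the variance-threshold trick itself is taken from the snippet:

```python
import numpy as np

# Toy design matrix: the second column is constant (an intercept column)
Q = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [3.0, 1.0]])
w = np.array([0.5, 0.5])  # illustrative per-column regularization weights

# detect constant columns: per-column variance below a tiny threshold
constant_column = np.var(Q, 0) < 1.e-12

w[constant_column] = 0  # cancel the regularization for the intercept

print(constant_column)  # [False  True]
print(w)                # [0.5 0. ]
```

The threshold `1.e-12` tolerates floating-point noise: a truly constant column has variance exactly zero, but a column that is constant only up to rounding will still be caught.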
Rather than backporting a fix, I would be in favour of skipping the two problematic tests on OSX and merging this PR. This way 0.18.1 will be available on conda-forge while the code changes will be only minor (test changes only) compared to the 0.18.1 release available on PyPI.
Not to duplicate myself, but simply to refresh after a long hiatus: we typically patch in conda-forge to fix something that is broken so that we can release it, especially if a fix is already available. While I understand that is not exactly the same as what was tagged, I would much rather ensure quality products go into the conda-forge channel than relax the quality controls. If what we are saying is that this release isn't up to snuff and we don't want to patch it to be so, maybe we can wait a bit for the dust to settle on the fixes.
You mean you don't want to release it as-is?
Yeah I do remember this conversation.
I completely understand that skipping tests is not something that should be encouraged in general. It is very unfortunate that we did not fix these test failures before releasing 0.18.1, and for the next release we will certainly do our best to publish a release candidate on conda-forge in order to make sure all the tests pass prior to the release. I think it is fair to say that 0.18.1 as released on PyPI is considered a quality product by the scikit-learn developers. Skipping these two tests, only on OSX, before releasing on conda-forge seems the best option as far as I am concerned. I am more than happy to hear other scikit-learn developers' opinions on this, e.g. @amueller @ogrisel @jnothman.
I agree that the OSX test failure is probably revealing a bug (either in sklearn or openblas, it's hard to tell without investigating further), and this bug should be fixed before releasing scikit-learn 0.19. This test failure could not be detected on the upstream scikit-learn CI infrastructure because it does not happen when numpy is linked against OSX Accelerate instead of openblas. But sklearn 0.18.1 is released (source on PyPI), and I think binary distributions such as conda-forge should reflect the matching source releases (possibly disabling this test to avoid noisy reports). Note that this bug was probably already present in 0.17, but we did not test that part of the code at that point. So technically this bug is already present in the current state of conda-forge.
@ogrisel @jakirkham's point is less about the second issue than the first one we encountered, for which a fix is already available (scikit-learn/scikit-learn#8154). IMHO, if it fixes stuff, why not pack it in? That does not mean actively backporting every fix from upstream. Also, it is easier to backport a simple patch than to create a patch ourselves that disables the corresponding test :)
There are probably 20-50 more minor fixes in master that are not in 0.18.1. Why not ship those? Because it will be impossible for us to find out what people have actually installed if the conda-forge version is a custom patch. How do you decide what to ship? I would imagine all projects constantly land bug fixes in master, and you don't synchronize with those.
If different distributions ship different variants of the code with the same version number, that will make upstream maintenance, issue tracking, and user support too complex. One should be able to tell our users "make sure to use version X.Y.Z to have bug XXX fixed".
We don't love patching things either. However, sometimes it is necessary. In a few cases, we have patches that make little sense outside of

In any event, I don't want to be pedantic about this, especially as it seems the consensus is not to patch. If someone could help clarify how we can skip only the troublesome tests, let's do it and move forward.
I am really, really glad we can agree even if we approach the problem from different angles and with different biases. I think the simplest thing is to do something like this for the two test functions that fail:

```python
import sys

from sklearn.utils.testing import SkipTest  # only needed if not already imported in the test module

def test_that_fails():
    if sys.platform == 'darwin':
        raise SkipTest('This test is known to fail on OSX')
    # body of the test function as it was previously
```
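If the failing tests live in a `unittest`-style class rather than as plain functions, the same platform-conditional skip can be expressed with the standard library's decorator. A sketch (the class and test names here are placeholders, not the actual scikit-learn test names):

```python
import sys
import unittest

class TestRidgeOSX(unittest.TestCase):
    # Skip this test only on OSX ('darwin'); it runs normally elsewhere
    @unittest.skipIf(sys.platform == 'darwin',
                     'This test is known to fail on OSX')
    def test_that_fails(self):
        self.assertTrue(True)  # body of the test as it was previously
```

A skipped test is reported as skipped rather than failed, so the test suite stays green on OSX without hiding failures on other platforms.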
Yes, I removed the assert, so the result is the same, unless you want to include it upstream.
Sorry, not very familiar with conda recipes, does the

I think what is left to do is to skip the test rather than patching `sklearn/linear_model/ridge.py`.
Superseded by #40