Saner error bounds on linear algebra tests #67
Comments
Shouldn't all error bounds be in terms of
Our reasoning is wrong on almost all of these bounds; they are either too strict or too lenient (and sometimes both), and the ones for which the error bounds are too strict occasionally break when the randomly sampled matrices change, which causes the test suite to die.
Error update (Xref #5472):
Thanks, exactly what I thought. This is JuliaLang/julia#5526; I just made a comment there.
Okay. The short-term fix is that I install a 32-bit VB and adjust these.
VB will be overkill, see my comment there.
I'll continue the conversation from #70 here.
Can you elaborate on that? Better error bounds would be better, but the tests have actually worked pretty well for a long time. Recently, I included … There are many tests and it will take some time to write correct error bounds for all of them. Hence, as a short-term solution, I don't think it is a problem just to widen the error bounds and get the tests to pass again. The primary reason for the tests is to detect when we break something.
As I commented in JuliaLang/julia#5705, the root problem here is not the error bounds, but rather the fact that the tests use a random matrix. Eventually, you will always hit a badly conditioned matrix by accident. I would just use a deterministic matrix for the tests. Then you know how accurately the solution should be computed, and can set error bounds relative to that once and for all.
I am not sure I understand you; the rng is seeded and will always produce the same matrices in the test.
@mschauer, I didn't notice that the RNG has a deterministic seed in the tests. Nevermind then, that shouldn't be a problem.
I really shouldn't have spent so much time on this, but I've just uploaded an IJulia notebook JIN5075 (nbviewer link) which does some numerical testing to find out how brittle the test case is when applied to many random matrices, and I've also done some testing to adapt the textbook forward and backward error bounds into tests that can be used to replace the magic number test in JuliaLang/julia#5075. I hope the notebook will explain clearly why I am not comfortable with continually adjusting the magic numbers. It is just pushing the issue down the road: what if we one day reorder the tests, or someone inserts a test before it, causing the test to sample a completely new random matrix? cc: @andreasnoackjensen
Thanks for the writeup, @jiahao. I think I understand better now. Also, I put in an …
Thanks, I should remember to do that. |
Well, I'm glad we have a random matrix expert or two around. |
I'm glad I'm useful for something other than supplying snacks for @JeffBezanson and @loladiro.
Thank you for the notebook, @jiahao. You note that changing a magic number like in 5705 "decreases the probability of failure by an essentially negligible amount". That was rather the point of the change from my perspective: it doesn't make the tests more or less brittle, it was simply the smallest change to make the tests on 32-bit systems behave similarly to 64-bit systems until a real solution is found, and such that a variant of JuliaLang/julia#5578 can be merged without breaking everything.
One idea: instead of plain random matrices, one could test with fixed diagonal matrices under a random orthogonal basis transform. The transform preserves the eigenvalues and condition numbers, so it is easier to reason about reasonable error bounds.
@mschauer I didn't mean to criticize the intention of JuliaLang/julia#5705, but rather to point out that the smallest change to make the test pass on your system didn't fundamentally change the state of affairs. I think those of us actively discussing these issues understand this quite well, but the point of the numerical experiment was to quantify exactly what the situation looked like and hopefully give others a sense of why this is important.
@mschauer we could generate matrices from random orthogonal/unitary similarity transformations of diagonal matrices, that is true, but those matrix multiplies don't come for free, and the tests would still have to account for the roundoff induced by those multiplications. I haven't thought through this case yet, but it is certainly plausible that in this particular situation the numerical stability of the rotation matrices would lead to nice behavior.
The idea would be to take the result of the similarity transform "as is", that is, as the random matrix we are testing with, and basically forget about the way it was constructed. That doesn't change much about the integer cases, though.
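As a hedged sketch of the construction being discussed (in Python for testability; the rotation angle and eigenvalues below are my own arbitrary illustrative choices, not values from this thread), one can build a small test matrix with a known spectrum by conjugating a diagonal matrix with a random rotation:

```python
import math
import random

# Eigenvalues we want the test matrix to have (arbitrary illustrative choice).
eigenvalues = [3.0, 0.5]

# Random planar rotation Q = [[c, -s], [s, c]]; Q is orthogonal, so Q^T = Q^{-1}.
theta = random.uniform(0.0, 2.0 * math.pi)
c, s = math.cos(theta), math.sin(theta)
Q = [[c, -s], [s, c]]
Qt = [[c, s], [-s, c]]
D = [[eigenvalues[0], 0.0], [0.0, eigenvalues[1]]]

def matmul(X, Y):
    """Naive 2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A = Q * D * Q^T has the same eigenvalues (hence the same condition number)
# as D, up to the roundoff introduced by the two multiplications.
A = matmul(matmul(Q, D), Qt)

# Trace and determinant are similarity invariants, so they recover the sum
# and product of the chosen eigenvalues to within roundoff.
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
```

The point is that even though `A` looks like "any old matrix", its condition number is known by construction, which makes it possible to write an error bound for a test once and for all.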
@jiahao I have followed your example and spent too much time on making a notebook with a reply. http://nbviewer.ipython.org/gist/andreasnoackjensen/8882742 The answer is also related to JuliaLang/julia#5726
Thanks for the reply, @andreasnoackjensen. A couple of points:
Right now I still think that, where possible, we should use theoretically justified error bounds. But adapting the textbook results has turned out to be somewhat trickier than I had expected. The test derived from the backward error bound is easy to reason about, and I think it's correct. Conversely, the forward error test is tricky; the correct way to implement my version of the forward error bound test ought to account also for the error of the more precise estimator for the solution. I smell an excellent paper about linear algebra testing if we want to pursue this further.
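For concreteness, here is a minimal sketch (in Python; the 2x2 system and the safety factor 10 are my own illustrative assumptions) of the kind of backward-error test described above: instead of a magic absolute tolerance, the computed solution is accepted if its relative residual is below a modest multiple of machine epsilon.

```python
import sys

# Illustrative 2x2 system A x = b (values chosen arbitrarily for the sketch).
A = [[4.0, 1.0], [2.0, 3.0]]
b = [9.0, 13.0]
n = 2

# Solve by Cramer's rule (fine for a 2x2 demonstration).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x = [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
     (A[0][0] * b[1] - b[0] * A[1][0]) / det]

# Residual r = A x - b.
r = [A[i][0] * x[0] + A[i][1] * x[1] - b[i] for i in range(n)]

# Infinity norms of A (max row sum), x, and r.
norm_A = max(sum(abs(v) for v in row) for row in A)
norm_x = max(abs(v) for v in x)
norm_r = max(abs(v) for v in r)

# Backward-error-style acceptance test: relative residual against c * n * eps,
# where c is a small safety factor (10 here, an arbitrary choice).
eps = sys.float_info.epsilon
relative_residual = norm_r / (norm_A * norm_x)
passed = relative_residual <= 10 * n * eps
```

Unlike a fixed threshold such as `N^2*eps()` applied to the forward error, this test does not depend on the condition number of the sampled matrix, which is what makes it easy to reason about.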
Reading real quickly now. Lots of linear algebra error bounds; probably many are pessimistic. I have a preference for eps * (the right condition number) * (# ops/number) * (3, 4, or 10). LAPACK has lots of discussion about testing, good test matrices, etc. Random matrices, I feel, are good for a few tests, but should not be overused. Something I mentioned that often gets quoted: We want to convey that random matrices are very special matrices. It is a mistake to link psychologically a random matrix with the intuitive notion of a 'typical' matrix or the vague concept of 'any old matrix'. In fact, the larger the size of the matrix, the more predictable it becomes.
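The "eps * (the right condition number) * (# ops) * (small constant)" rule of thumb above can be sketched as follows (pure Python; the diagonal matrix, the quadratic operation count, and the constant 4 are my own illustrative assumptions):

```python
import sys

eps = sys.float_info.epsilon

# Illustrative diagonal matrix: its condition number in any p-norm is
# simply max|d_i| / min|d_i|.
d = [100.0, 4.0, 0.25]
cond = max(abs(v) for v in d) / min(abs(v) for v in d)

n = len(d)
ops = n * n        # rough per-entry operation count for an n x n solve
constant = 4       # small safety factor, per the "3, 4, or 10" suggestion

# A forward error below this bound would be considered acceptable.
forward_error_bound = eps * cond * ops * constant
```

The key feature is that the tolerance scales with the condition number of the specific test matrix, rather than being a magic number tuned to whatever matrix the RNG happened to produce.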
Let's read more of the LAPACK manual and the LAWNs and take what's there.
At some level, the distinction between that and the best possible answer that can be computed within floating-point roundoff becomes meaningless. For the dense matrix case, I think |
With the |
In the current linear algebra test suite, many of the tests have error bounds that are heuristically implemented, e.g. something like `N^2*eps()`. This occasionally causes brittleness, in that the linear algebra tests fail for no apparent reason. The purpose of this issue is to track
The long-term hope is that the last action item will close this issue once and for all, but in the meantime we should keep track of any heuristic adjustments to the test suite.
Cross-reference: JuliaLang/julia#5578 JuliaLang/julia#5472 #33