What to do with warning: slow doctest … Test ran for [very long time]?
#39569

Comments
I think the second one (implement a new feature) is a good idea if it's feasible, but I would oppose both of the others, because they each have a serious downside.
Would it be possible, or make sense, to specify the timeout value as a relative value? That is, normalize with respect to some specific doctest, one which is unlikely to change in performance.
It might be possible, but it seems unnecessarily complicated. After all, the goal is just to have a mechanism to avoid too many false positives, and machines are probably only going to get faster, not slower. On the other hand, something that does matter is what we are going to do in the long run. This is not exactly sweeping the problem under the rug, but it is true that at any point in time there aren't many people interested in the feature either…? If someone is interested in seeing that something is too slow, they can just implement a speedup, and it is probably not too difficult to identify the relevant tests to remove the …
The reason I'm asking is that we currently don't have any performance regression tests, and we do have performance regressions from time to time. Also, with some changes in the architecture, for example using categories instead of inheritance, it is next to impossible (for me) to check what they do to performance. I would find it comforting to know that the time relative to some benchmark does not increase too much. But I was told already that good benchmarking is very hard.
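The relative-timeout idea above could be sketched roughly as follows. This is a hypothetical illustration, not anything Sage implements: `relative_time`, the baseline workload, and the ratio check are all made up for the example. The point is that a ratio against a calibration workload is roughly machine-independent, whereas a wall-clock limit is not.

```python
import time

def relative_time(func, baseline, repeats=3):
    """Return func's best-of-N runtime divided by baseline's best-of-N.

    Hypothetical helper: normalizing against a cheap baseline makes a
    slowness threshold roughly independent of how fast the machine is.
    """
    def best(f):
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            f()
            times.append(time.perf_counter() - start)
        return min(times)  # min is the least noisy summary of repeats
    return best(func) / best(baseline)

# Calibration loop (fast) and a workload doing 10x the work (slower).
baseline = lambda: sum(i * i for i in range(10_000))
workload = lambda: sum(i * i for i in range(100_000))

ratio = relative_time(workload, baseline)
# A relative check would then compare `ratio` against a threshold
# chosen per test, instead of comparing wall-clock seconds.
```

As noted in the thread, the hard part is choosing a baseline whose performance is stable across Sage versions; this sketch sidesteps that entirely.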
That's unfortunately true, but… out of scope…? On the other hand, this change won't make the situation much worse, in the sense that if we're not overwhelmed with piles of false positives of "slow doctest", it is probably easier to detect actual regressions.
Currently there are 46 instances of this warning in every pull request, which makes the GitHub annotation highlights annoying to look at. One example: a `test-long` test run. Some of them might be performance regressions (note that https://doc.sagemath.org/html/en/developer/coding_basics.html#special-markup-to-influence-doctests requires any doctest to take less than 30 seconds), and some are intentional, like …

I can see a few ways around it:

- mark them with `# long time (… s)` (but `# long time` also has a limit)
- implement a new feature
- mark them as `# known bug (see :issue:`39569`)` (disadvantage: their correctness is then no longer tested)

See also #38827 #35479
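For context, the kind of check that produces such a warning can be sketched in plain Python. This is only an illustration of the mechanism, warn instead of fail when a test exceeds a time limit; `run_with_slow_warning` and `SLOW_SECONDS` are invented names, and Sage's actual doctest runner is far more involved.

```python
import time
import warnings

SLOW_SECONDS = 30.0  # illustrative threshold, matching the 30 s rule above

def run_with_slow_warning(test, name, limit=SLOW_SECONDS):
    """Run a zero-argument test callable; warn (don't fail) if it is slow.

    Hypothetical sketch of how a "slow doctest" annotation could be
    produced: the test still counts as passing, but a warning records
    how long it ran, which CI can surface as an annotation.
    """
    start = time.perf_counter()
    test()
    elapsed = time.perf_counter() - start
    if elapsed > limit:
        warnings.warn(f"slow doctest: {name}: Test ran for {elapsed:.2f} s")
    return elapsed

# A trivial test finishes well under the limit, so no warning is emitted.
elapsed = run_with_slow_warning(lambda: None, "trivial_test")
```

A warn-only check like this is exactly why the annotations pile up: nothing ever forces the slow tests to be fixed or re-marked, which is the trade-off the options above try to address.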