
Faster roots computation for sparse polynomials over ZZ #16516

Closed
bgrenet opened this issue Jun 22, 2014 · 90 comments

Comments

@bgrenet

bgrenet commented Jun 22, 2014

Algorithms exist for computing the roots of sparse polynomials that are much faster than the "generic" algorithms designed for dense polynomials.

In this ticket, I implement one of these algorithms, for the integer roots of sparse polynomials over ZZ.

Component: commutative algebra

Keywords: roots, sparse polynomial

Author: Bruno Grenet

Branch/Commit: 686d2b3

Reviewer: Vincent Delecroix, Travis Scrimshaw, Jeroen Demeyer

Issue created by migration from https://trac.sagemath.org/ticket/16516

@bgrenet

This comment has been minimized.

@bgrenet bgrenet changed the title Faster roots computation for sparse polynomials over ZZ Faster roots computation for sparse polynomials over ZZ and QQ Jun 23, 2014
@bgrenet
Author

bgrenet commented Jun 24, 2014

@bgrenet
Author

bgrenet commented Jun 24, 2014

Commit: 920c16e

@bgrenet

This comment has been minimized.

@bgrenet bgrenet changed the title Faster roots computation for sparse polynomials over ZZ and QQ Faster roots computation for sparse polynomials over ZZ Jun 24, 2014
@sagetrac-git
Mannequin

sagetrac-git mannequin commented Jun 24, 2014

Changed commit from 920c16e to e725544

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Jun 24, 2014

Branch pushed to git repo; I updated commit sha1. New commits:

e725544  New function for computing roots of sparse polynomials over ZZ

@videlec
Contributor

videlec commented Jun 24, 2014

comment:5

Hi Bruno,

It seems reasonable to have it in integer_ring.pyx. And I think it would be good to allow it for dense polynomials as well. You certainly know what the right measure of sparseness is... you could implement a method .density() or .sparseness_XYZ() on polynomials that would help you find a good threshold for choosing between the different algorithms.
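For instance, a minimal sketch of such a helper (the name density and the normalization by degree + 1 are only an assumption for illustration, not something decided in this ticket):

def density(p):
    """Return the fraction of nonzero coefficients of p, between 0 and 1."""
    if p.is_zero():
        return 0
    # coefficients() returns only the nonzero coefficients
    return len(p.coefficients()) / (p.degree() + 1)

A threshold on density(p) (or directly on the number of nonzero terms) could then decide which algorithm to run.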

For better code, please:

  • remove the trailing whitespace
  • there is no need for ; at the end of each line
  • for list updates, it is better to use .append(x) to add one element and .extend(iterable) to add several elements
  • not ring is None -> ring is not None (is not is a Python operator)

You must also provide timings to see whether your code is faster even for small inputs. You can use the command timeit as in:

sage: R.<T> = ZZ[]
sage: p = R.random_element(degree=10)
sage: timeit("p.roots()")
625 loops, best of 3: 223 µs per loop

It would be nice to make the following work:

sage: R.<T> = PolynomialRing(ZZ, sparse=True)
sage: p = R.random_element(degree=10)
sage: p.roots(QQbar)
Traceback (most recent call last):
...
AttributeError: 'Polynomial_generic_sparse' object has no attribute 'gcd'

It will be a nice thing to have in Sage!

More remarks to come,
Vincent

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Jun 25, 2014

Changed commit from e725544 to f344cdc

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Jun 25, 2014

Branch pushed to git repo; I updated commit sha1. New commits:

f344cdc  Cosmetic changes + add conditions on the constant coefficient

@bgrenet
Author

bgrenet commented Jun 25, 2014

comment:7

Hi Vincent,

Thanks for the comments. I pushed a new version taking them into account, especially for the "better code" part.

For roots over QQbar, I think it is not really related to this ticket; it is related to #15790 (implementation of GCD for sparse polynomials). I'd like to work on that other ticket too.

Now for the most complicated part (to my mind): timings. The issue is the following: whether the new algorithm is better than the previous one is not simply a matter of density. For instance, if you have p = 3 - x + x^7 * q where q is a very high-degree dense polynomial, the integer roots of p are common roots of 3 - x and q. And the same thing may happen in the middle of a polynomial. In a sense, the only way to detect such situations is to run the loop of the algorithm. Of course, for low-degree random polynomials, this makes the algorithm slower.

For the discussion, I refer to the previous algorithm
self._roots_from_factorization(...)
as the dense algorithm, and the new algorithm I implemented as the sparse algorithm.

I ran two tests:

  • In the first one, the roots are computed with the dense algorithm and then with the sparse algorithm, and I compute the ratio of the two times (a sketch of this timing harness is given right after this list). The ratio is > 1 when the sparse algorithm is worse than the dense one. The left column is the degree, and 10 tests are performed with random polynomials of the given degree. As you can see, the sparse algorithm is often worse, by a factor between 1.5 and 3, and is sometimes better by a factor of 10 to 25.
  • In the second one, I added a condition on the sparsity (i.e. the number of nonzero terms) of the polynomial: if p.degree() - p.sparsity() - p.valuation() < 10, I use the dense algorithm. The timings are then almost equal for both algorithms for small degrees, but sometimes (not as often as in the first test) the new algorithm is much better. For large degrees, we get the same kind of results as in the first test.
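A minimal sketch of the timing harness used for these ratios (roots_dense and roots_sparse are placeholders for the two code paths being compared, not actual Sage methods):

from timeit import default_timer as timer

def timing_ratio(p, roots_dense, roots_sparse, repeat=5):
    """Return (time of the sparse algorithm) / (time of the dense algorithm) on p."""
    t0 = timer()
    for _ in range(repeat):
        roots_dense(p)
    t_dense = timer() - t0
    t0 = timer()
    for _ in range(repeat):
        roots_sparse(p)
    t_sparse = timer() - t0
    return t_sparse / t_dense

A ratio above 1 means the sparse algorithm is slower on that input.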

As a consequence of these tests, I do not know how the algorithm should be chosen. Do you have any idea (or any idea of which other tests I should perform to be able to choose)?

Finally, I also use the fact that a nonzero root of an integer polynomial must divide its lowest-degree coefficient (the constant coefficient if the valuation is zero) to speed up the computation. On the one hand, if the absolute value cst_coeff of the constant coefficient is small, the dense algorithm can compute the product q of the (x-i)*(x+i) for 1 <= i <= cst_coeff and compute the roots of the gcd of self and q instead of those of self. My tests show that this really improves the computation time. (Note that because of #15790, this cannot work for the moment with sparse polynomials.) On the other hand, in the sparse algorithm, it is useless to compute the split of self into a list polys when cst_coeff is 1.
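To illustrate the divisor idea only (this is plain Sage code assuming p is a nonzero polynomial in ZZ[x], not the implementation of this ticket):

def integer_roots_by_divisors(p):
    """Return the integer roots of p (without multiplicities) by testing divisors."""
    roots = []
    v = p.valuation()
    if v > 0:
        roots.append(0)
        p = p.shift(-v)                  # strip the factor x^v
    c0 = p.constant_coefficient()        # nonzero after stripping the valuation
    for d in c0.abs().divisors():        # positive divisors of |c0|
        for cand in (d, -d):
            if p(cand) == 0:
                roots.append(cand)
    return roots

Of course, this is only practical when the constant coefficient has few divisors, which is exactly the situation described above.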

Tests

First test:

2 [1.80, 2.15, 2.27, 2.14, 1.80, 1.37, 1.37, 2.10, 2.19, 2.19]
4 [2.17, 1.32, 1.41, 2.12, 2.22, 1.89, 2.01, 1.43, 1.85, 2.11]
6 [2.14, 1.38, 0.968, 1.13, 0.466, 2.12, 1.88, 2.15, 1.87, 1.96]
8 [2.07, 1.93, 2.02, 1.83, 2.08, 1.83, 2.57, 1.76, 1.82, 1.95]
10 [1.77, 2.06, 2.10, 2.13, 2.16, 1.78, 0.709, 2.26, 2.16, 2.02]
12 [1.76, 1.79, 0.671, 1.14, 1.26, 2.12, 2.00, 1.97, 1.78, 2.68]
14 [1.66, 0.651, 1.66, 2.07, 2.31, 0.590, 1.13, 0.930, 2.04, 1.66]
16 [2.13, 1.63, 2.24, 0.586, 0.802, 0.772, 1.71, 1.67, 0.592, 2.06]
18 [0.540, 0.667, 2.11, 1.70, 2.19, 1.99, 1.99, 1.98, 0.758, 1.07]
20 [0.568, 1.64, 1.01, 2.17, 1.64, 0.460, 2.07, 2.81, 1.83, 0.501]
22 [0.392, 2.18, 1.63, 1.50, 1.66, 1.95, 1.77, 0.394, 1.53, 2.08]
24 [0.545, 2.86, 1.64, 1.72, 1.94, 2.19, 1.66, 1.56, 0.351, 0.427]
26 [0.381, 1.51, 0.318, 0.394, 1.99, 0.467, 1.52, 1.89, 1.61, 1.53]
28 [1.55, 1.59, 1.95, 0.320, 2.08, 1.48, 1.99, 1.85, 1.47, 1.51]
30 [2.01, 2.02, 2.04, 1.47, 2.02, 0.267, 1.47, 0.339, 2.04, 2.02]
32 [2.01, 1.99, 1.99, 1.99, 2.01, 2.01, 2.01, 2.01, 0.259, 0.288]
34 [2.01, 2.00, 2.01, 0.287, 2.03, 2.01, 0.398, 0.245, 2.01, 1.98]
36 [0.237, 2.00, 2.02, 1.99, 2.02, 2.01, 2.01, 2.01, 1.94, 0.262]
38 [2.00, 0.256, 0.283, 2.01, 2.00, 0.242, 2.01, 2.01, 2.01, 0.235]
40 [1.99, 2.01, 1.98, 2.00, 2.00, 2.01, 2.00, 2.01, 2.01, 2.00]
42 [2.01, 2.00, 2.00, 2.00, 2.00, 2.00, 2.00, 0.218, 1.99, 2.01]
44 [2.00, 2.00, 0.224, 2.00, 2.00, 2.00, 0.597, 2.00, 2.00, 0.203]
46 [0.174, 2.17, 0.170, 2.22, 0.555, 1.84, 0.645, 1.96, 0.188, 2.01]
48 [1.95, 2.03, 1.98, 2.03, 1.92, 1.94, 2.09, 2.00, 2.00, 2.02]
50 [1.97, 1.97, 2.03, 1.99, 2.00, 0.671, 2.00, 2.00, 0.155, 0.149]
52 [1.99, 0.172, 2.00, 2.01, 1.99, 0.173, 0.141, 2.01, 2.00, 1.99]
54 [1.99, 0.119, 2.00, 2.00, 1.99, 0.125, 0.154, 1.99, 1.99, 2.07]
56 [1.99, 2.00, 0.424, 1.98, 2.01, 1.99, 2.04, 2.01, 2.00, 2.00]
58 [2.00, 1.99, 0.590, 2.02, 0.123, 0.122, 2.01, 1.98, 2.00, 1.99]
60 [2.00, 1.99, 0.100, 2.02, 2.05, 2.02, 1.98, 1.95, 0.117, 1.96]
62 [0.0989, 1.97, 1.98, 0.126, 2.00, 1.98, 0.0992, 0.108, 0.120, 1.99]
64 [1.98, 1.98, 0.0956, 0.142, 0.0896, 2.02, 0.110, 0.106, 2.02, 2.00]
66 [2.01, 2.00, 0.210, 2.00, 2.00, 2.00, 1.98, 1.97, 1.99, 0.115]
68 [2.01, 0.0913, 2.00, 2.00, 2.01, 1.98, 1.99, 0.0813, 2.02, 1.99]
70 [0.0866, 0.0826, 2.00, 2.00, 2.00, 0.0900, 2.00, 1.99, 2.00, 0.105]
72 [2.00, 1.99, 0.268, 2.00, 2.00, 2.00, 2.00, 1.99, 2.00, 2.00]
74 [2.00, 0.430, 2.00, 2.00, 2.00, 0.0803, 2.00, 0.0924, 2.03, 2.02]
76 [1.97, 1.99, 1.98, 0.111, 2.00, 2.00, 2.00, 2.01, 0.0905, 0.0919]
78 [2.00, 2.02, 2.00, 1.99, 0.0716, 2.01, 2.00, 2.00, 2.00, 2.00]
80 [1.99, 0.0748, 2.00, 0.437, 1.99, 2.01, 2.00, 0.0731, 2.00, 2.01]
82 [2.00, 1.99, 0.0549, 0.0575, 2.01, 1.99, 0.0642, 0.0519, 1.99, 2.00]
84 [2.00, 0.0547, 1.99, 2.02, 2.01, 2.01, 1.99, 0.0712, 2.00, 2.01]
86 [1.97, 2.02, 2.00, 2.00, 2.01, 2.00, 2.00, 2.00, 0.0562, 2.01]
88 [2.00, 0.0534, 0.0605, 2.00, 2.00, 0.0567, 0.0709, 1.99, 2.00, 2.00]
90 [0.0553, 0.0496, 2.01, 1.99, 2.00, 2.00, 1.98, 0.0476, 2.01, 0.0541]
92 [1.98, 1.99, 2.00, 0.0545, 2.00, 2.00, 2.00, 1.99, 2.01, 2.01]
94 [1.99, 0.0425, 0.0579, 1.96, 2.01, 0.250, 0.106, 0.0561, 1.97, 2.00]
96 [0.0520, 2.00, 1.96, 2.00, 2.00, 0.0508, 2.00, 0.0480, 1.99, 2.01]
98 [0.0458, 0.0515, 0.0415, 2.01, 1.99, 2.01, 0.0510, 0.0526, 0.0455, 2.00]

Second test

2 [1.02, 1.02, 1.02, 1.02, 1.01, 1.01, 1.02, 1.02, 1.01, 1.02]
4 [1.02, 1.02, 1.02, 1.02, 1.02, 1.02, 1.02, 1.02, 1.02, 1.02]
6 [1.01, 1.03, 1.02, 1.03, 1.02, 1.02, 1.02, 1.02, 1.02, 1.02]
8 [1.01, 1.01, 1.02, 1.01, 1.01, 1.01, 1.02, 1.02, 1.01, 1.02]
10 [1.01, 1.02, 1.01, 1.01, 1.01, 1.01, 1.02, 1.02, 1.01, 1.02]
12 [1.01, 1.01, 1.02, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01]
14 [1.01, 1.01, 1.01, 1.02, 1.01, 1.01, 1.01, 1.02, 1.01, 1.01]
16 [1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01]
18 [1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01]
20 [1.01, 1.01, 1.01, 1.01, 1.00, 1.01, 1.01, 1.01, 1.01, 1.01]
22 [1.00, 1.00, 1.00, 1.01, 1.01, 1.01, 1.01, 1.00, 1.01, 1.00]
24 [1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.01, 1.00, 1.00]
26 [0.996, 0.966, 1.00, 1.01, 1.01, 1.01, 1.00, 1.00, 1.00, 1.01]
28 [1.01, 1.00, 1.00, 1.01, 1.00, 1.01, 1.01, 1.00, 1.00, 1.00]
30 [1.00, 1.01, 1.00, 1.01, 1.01, 1.01, 1.00, 1.01, 1.01, 1.00]
32 [1.00, 1.06, 1.01, 1.00, 1.00, 1.00, 1.01, 1.00, 1.00, 1.00]
34 [1.00, 1.00, 1.01, 1.00, 1.00, 1.01, 1.00, 1.01, 1.00, 1.01]
36 [1.00, 1.01, 1.01, 1.01, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00]
38 [1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 0.358]
40 [1.01, 1.00, 1.00, 2.01, 1.00, 1.00, 1.00, 1.01, 1.00, 1.00]
42 [1.00, 1.00, 1.01, 1.00, 2.01, 1.00, 1.00, 1.00, 1.00, 1.00]
44 [1.00, 1.00, 1.00, 1.01, 1.00, 1.00, 1.00, 1.00, 1.01, 1.00]
46 [0.150, 1.00, 1.00, 1.00, 1.00, 1.00, 0.196, 1.00, 0.188, 2.00]
48 [0.153, 2.01, 1.01, 2.00, 1.01, 2.00, 1.00, 1.00, 1.00, 1.00]
50 [1.00, 1.00, 1.00, 1.00, 2.01, 1.00, 1.00, 0.163, 1.00, 1.99]
52 [1.00, 1.00, 1.00, 1.00, 0.166, 1.02, 1.00, 1.00, 1.01, 1.00]
54 [1.00, 0.166, 1.00, 1.00, 1.00, 2.01, 2.00, 0.153, 2.00, 0.992]
56 [1.00, 2.00, 1.00, 1.00, 2.01, 0.134, 2.00, 2.00, 0.999, 1.00]
58 [0.604, 0.423, 0.132, 2.00, 1.99, 2.00, 1.00, 1.00, 2.01, 0.119]
60 [1.00, 1.00, 0.117, 2.00, 1.00, 2.00, 2.01, 2.00, 0.119, 1.00]
62 [2.00, 0.151, 2.00, 2.00, 1.00, 0.139, 1.00, 2.00, 2.01, 0.106]
64 [1.99, 2.00, 0.103, 1.99, 2.00, 2.00, 0.120, 1.00, 1.00, 1.00]
66 [2.00, 2.00, 1.99, 2.00, 2.00, 2.00, 2.00, 2.01, 2.00, 1.00]
68 [1.99, 1.99, 1.00, 2.00, 2.00, 1.00, 2.00, 2.00, 0.950, 1.99]
70 [1.98, 2.00, 2.01, 1.00, 1.00, 0.0791, 2.00, 1.00, 0.118, 0.0964]
72 [0.429, 2.00, 2.00, 1.99, 1.98, 2.00, 1.99, 2.00, 0.128, 2.01]
74 [0.0890, 0.0795, 1.99, 2.00, 2.01, 2.00, 1.97, 0.0945, 2.00, 1.00]
76 [1.99, 1.99, 2.01, 2.01, 2.02, 2.01, 0.0878, 2.00, 0.0740, 2.01]
78 [1.00, 2.00, 2.03, 2.00, 1.00, 2.00, 0.0617, 0.0812, 2.00, 1.00]
80 [0.0651, 0.0687, 1.98, 2.00, 1.98, 2.00, 2.00, 0.0696, 0.0791, 0.0770]
82 [2.00, 0.0788, 2.00, 1.99, 0.0828, 2.00, 2.02, 2.00, 0.0669, 2.01]
84 [1.02, 1.99, 2.00, 2.01, 0.318, 2.00, 2.00, 1.99, 2.01, 2.01]
86 [2.00, 0.0669, 2.00, 2.00, 2.00, 1.98, 0.0551, 2.00, 2.00, 0.135]
88 [0.0540, 2.00, 2.00, 2.00, 2.00, 2.00, 2.00, 1.99, 2.00, 0.0682]
90 [2.00, 2.02, 1.91, 2.01, 2.01, 2.00, 2.00, 2.00, 2.00, 2.00]
92 [2.00, 1.98, 2.00, 1.99, 1.99, 2.00, 2.00, 2.00, 2.00, 2.00]
94 [1.00, 0.0574, 2.00, 1.00, 2.00, 2.00, 0.0594, 2.00, 2.00, 2.00]
96 [2.00, 2.00, 2.00, 2.00, 0.0420, 2.00, 2.00, 2.00, 2.01, 2.00]
98 [0.0425, 0.0565, 2.01, 2.00, 2.00, 1.99, 1.99, 2.00, 0.0462, 2.00]

@bgrenet
Author

bgrenet commented Jun 26, 2014

Author: Bruno Grenet

@videlec
Contributor

videlec commented Jun 29, 2014

comment:9

Hi,

Global remarks:

  1. You seem to have a global problem with the difference between == and is, so have a look at the python-is-operator question on Stack Overflow (a small illustration is given after these remarks).

  2. Comment your code when it is complicated, like:

if cst_coeff is not ZZ(1):
    i_min=0
    polys=[]

    for i in xrange(1,k):
        if e[i]-e[i-1] > c_max.nbits():
            polys.append(R(p[ e[i_min]:e[i] ].shift(-e[i_min])))
            i_min=i
            c_max=c[i].abs()
        else:
            c_max=max(c[i].abs(),c_max)
    polys.append(R(p[ e[i_min]:1+e[k-1] ].shift(-e[i_min])))

open software should also be readable software

  3. You have a syntax error in the documentation (which will create errors when you try to build the documentation with "make doc"):
``algorithm'' -- the algorithm to use

should be

``algorithm`` -- the algorithm to use

i.e. open and close with backquotes.
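As a minimal illustration of point 1 (== compares values, while is compares object identity):

sage: ZZ(5) == int(5)
True
sage: ZZ(5) is int(5)
False
sage: [1, 2] == [1, 2]
True
sage: [1, 2] is [1, 2]
False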

Specific ones:

  1. In the method sparsity that you implemented in "polynomial_element.pyx", the variable c is not initialized. So
sage: K.<x>=QQ[]
sage: (x^7 + x^3 + 1).sparsity()
32665

Moreover, the following test is not safe at all

if l.pop() is not zero

you cannot assume that zero always occupies the same memory. For instance:

sage: K = QQ['x']
sage: K.zero().constant_coefficient() is QQ.zero()
False

Moreover, using try/except is totally useless in that case... You might be inspired by the implementation of coefficients (see also the sketch after these remarks). Hopefully, I am not the one who taught you programming at school ;-) Please test this method with other base rings (at least QQ, QQbar, ZZ/nZZ, GF(p), GF(p^n)).

  2. Are you sure that the term sparsity is standard? I would rather go for something more explicit like "num_nonzero_coefficients" or something similar. I hoped to find a similar method for matrices but did not find one.

  3. In your function _roots_univariate_polynomial, there is no gain in using xrange instead of range. But there will be a big one if you define k as an int!

  4. Using the gcd from sage.rings.arith (in the line p = gcd(p,q)) is slow compared to p = p.gcd(q) (and do not forget to remove the import).
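Regarding the sparsity method (point 1 of these specific remarks), a minimal sketch of a safe way to count the nonzero coefficients, comparing by value rather than by identity (the name num_nonzero_coefficients is only a suggestion):

def num_nonzero_coefficients(p):
    """Return the number of nonzero coefficients of the polynomial p."""
    # Compare by value (is_zero), never by identity (is), so this works
    # over any base ring (QQ, QQbar, GF(p), ...).
    return sum(1 for c in p.list() if not c.is_zero())

Equivalently, len(p.coefficients()) would do, since coefficients() already skips the zero coefficients.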

Vincent

@videlec
Contributor

videlec commented Jun 29, 2014

comment:10

  1. The method .extend() of lists does not return anything, as you can see:
sage: [0].extend([1])

so there is a problem in the algorithm="dense_with_gcd" case of your function _roots_univariate_polynomial (see the illustration after these remarks).

  2. How can you divide by the content?
sage: R.<x> = PolynomialRing(ZZ, sparse=True)
sage: p = (x^7 + x^3 + 1)
sage: p.content()
Principal ideal (1) of Integer Ring
sage:  (x^7 + x^3 + 1) / p.content()

EDIT: my mistake. This is a bug in the code of polynomials: the method content should either always return an ideal or always return a number, not one or the other depending on the implementation...

Nevertheless, you should test all cases of your code within the documentation!!
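Regarding point 1, a minimal illustration of the in-place behaviour of .extend():

sage: l = [0]
sage: l.extend([1, 2])    # modifies l in place and returns None
sage: l
[0, 1, 2]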

Vincent

@videlec
Contributor

videlec commented Jun 29, 2014

comment:11

For the content issue, I opened a thread on sage-devel.

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Jul 6, 2014

Branch pushed to git repo; I updated commit sha1. New commits:

7fec476  Slightly improved the algorithm + add an algorithm="sparse" option
e89350e  Change sparsity function for dense polys + add doctests
5c4c70f  Declaration of int variables
be6deca  Clean up _roots_univariate_polynomial

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Jul 6, 2014

Changed commit from f344cdc to be6deca

@bgrenet
Author

bgrenet commented Jul 6, 2014

comment:13

Below are my answers to Vincent's comments:

  1. == vs is: Should be OK now

  2. Comment the code: Done

  3. Syntax error in the doc: Done. "make doc" works now.

  4. Method sparsity in "polynomial_element.pyx": I totally changed the method since it was obviously badly written. I wanted to avoid constructing a list of nonzero monomials to get a better complexity, but this was not a good idea! I hope the new (much simpler!) version is fine. I added quite a lot of tests to this function (as well as to others).

  5. Name of the method sparsity: I would say that this name is fairly standard. It may be used more often to speak about "the fact of being sparse or not" rather than "the number of nonzero monomials", but you can still find this term in several papers (cf. for instance http://www.math.purdue.edu/~sbasu/ccc04.ps or http://eccc.hpi-web.de/report/2014/056/download/). Further, I find the term nicer than using a periphrasis, but I'll let you tell me what you think is best. I don't mind changing the term if it appears not to be appropriate. I looked at similar terms in matrix_sparse.pyx and related files. It appears that num_nonzero is used a bit, and sparsity is only used in comments to speak about "the fact of being sparse or not".

  6. range vs xrange, and defining k as int: I defined k (as well as the other variables i and j) as int, but I kept xrange rather than range. I did not notice the promised big gain though.

  7. .extend() does not return anything: noted. Anyway, I removed the case "dense_with_gcd" since I have not been able to build a case where it is faster than the algorithm "dense".

  8. content issue: I keep a distinction between sparse and dense polynomials until the issue is solved.

@sagetrac-vbraun-spam sagetrac-vbraun-spam mannequin modified the milestones: sage-6.3, sage-6.4 Aug 10, 2014
@tscrim
Collaborator

tscrim commented May 13, 2017

comment:56

Minor comments:

  • Mark the docstring with r""" and

    -Returns the root of the univariate polynomial ``p''.
    +Return the root of the univariate polynomial ``p``.
  • `_roots_from_factorization` should either be :meth:`_roots_from_factorization` or ``_roots_from_factorization`` as it is not latex code.

  • ZZ -> `\ZZ`.

  • No semicolons needed, and our error messages start with a lowercase letter to match Python:

    raise ValueError("Roots of 0 are not defined");
    raise ValueError("roots of 0 are not defined")
  • Jeroen can probably explain it better than me, but he has told me you should try to avoid wrapping any Python code in sig_on/sig_off as it is not needed. Since almost all of this looks like Python or Cython files that should be interruptible, I don't think many of them are necessary. You might want to ask him about this though.

  • You should move the reference to the master reference file.

@videlec
Contributor

videlec commented May 15, 2017

comment:57

Replying to @tscrim:

  • Jeroen can probably explain it better than me, but he has told me you should try to avoid wrapping any Python code in sig_on/sig_off as it is not needed. Since almost all of this looks like Python or Cython files that should be interruptible, I don't think many of them are necessary. You might want to ask him about this though.

Even better, RTFM https://readthedocs.org/projects/cysignals/

@sagetrac-git
Mannequin

sagetrac-git mannequin commented May 19, 2017

Branch pushed to git repo; I updated commit sha1. New commits:

1604cc9  16516: Move reference to global reference file
080a617  16516: Better docstring + cosmetic change
6bdb88f  16516: Better use of sig_on/sig_off
2a96eb2  16516: xrange -> range
df27d49  16516: Better primitive part

@sagetrac-git
Mannequin

sagetrac-git mannequin commented May 19, 2017

Changed commit from 48d4fc9 to df27d49

@bgrenet
Author

bgrenet commented May 19, 2017

comment:59

Replying to @tscrim:

Minor comments:

  • Mark the docstring with r""" and

    -Returns the root of the univariate polynomial ``p''.
    +Return the root of the univariate polynomial ``p``.
  • `_roots_from_factorization` should either be :meth:`_roots_from_factorization` or ``_roots_from_factorization`` as it is not latex code.

  • ZZ -> `\ZZ`.

  • No semicolons needed, and our error messages start with a lowercase letter to match Python:

    raise ValueError("Roots of 0 are not defined");
    raise ValueError("roots of 0 are not defined")
  • You should move the reference to the master reference file.

All of these have been taken into account.

  • Jeroen can probably explain it better than me, but he has told me you should try to avoid wrapping any Python code in sig_on/sig_off as it is not needed. Since almost all of this looks like Python or Cython files that should be interruptible, I don't think many of them are necessary. You might want to ask him about this though.

I tried to put sig_on/sig_off only where needed, performing a number of Ctrl-C tests. There is still a fairly large number of them.

@videlec
Contributor

videlec commented May 19, 2017

comment:60

Bruno, your sig_on/sig_off handling is terribly wrong. For example in def _roots_univariate_polynomial you have

sig_on()
roots = p._roots_from_factorization(p.factor(), multiplicities)
sig_off()

There is a lot of Python code between this sig_on and sig_off (here I count 4). Basically, the code between sig_on and sig_off should not involve any Python object (this is not formally true, but it is good advice to follow). Moreover, in this particular case interruption should work perfectly fine without sig_on/sig_off.

The only cases where you should use it are:

  • in a portion of your code where you use only C objects and that can potentially take time
cdef int n
sig_on()
while some_condition:
    n = n + 3
sig_off()
  • a call to a C function that can be long
my_arg1 = my_py_object._my_C_attribute1
my_arg2 = my_py_object._my_C_attribute2
sig_on()
my_c_function(my_arg1, my_arg2)
sig_off()

@tscrim
Collaborator

tscrim commented Jun 5, 2017

Reviewer: Vincent Delecroix, Travis Scrimshaw

@tscrim
Collaborator

tscrim commented Jun 5, 2017

@tscrim
Collaborator

tscrim commented Jun 5, 2017

comment:62

Argh... trac logged me out before I posted my message. Now I don't want to rewrite my long message.

Short version:

  • I removed sig_on/sig_off everywhere because it was calling Python functions (which should be C/Cython functions).
  • I fixed a bug when it was fully dense but had a positive valuation (see added doctest).
  • I reorganized the ordering of the code so special cases are more quickly handled.
  • Got some speed out of the generic polynomial methods.
  • Used self._parent in Polynomial instead of self.parent() because the former is relatively much faster. We can't do this in the special Python methods because of how Cython handles them.

New commits:

f9f4cea  Merge branch 'u/bruno/faster_roots_computation_for_sparse_polynomials_over_zz' of git://trac.sagemath.org/sage into public/polynomials/faster_roots_sparse_zz-16516
16f7c09  It is faster to check hasattr than to catch the exception.
e7c0ad3  Remove sig_on()/sig_off() because calling Python functions.
9b9bd39  Use parent(p) instead of p.parent().
6b7116b  Speed improvements for generic sparse polynomials.
aa6ffb9  Using self._parent instead of self.parent().
2290039  Doing some reorganization and getting some more speed out of the factorizing.

@tscrim
Collaborator

tscrim commented Jun 5, 2017

Changed commit from df27d49 to 2290039

@bgrenet
Author

bgrenet commented Jun 19, 2017

comment:63
-        # The dense algorithm is to compute the roots from the factorization
+        # The dense algorithm is to compute the roots from the factorization.
+        # It is faster to let the factorization take care of the content
         if algorithm == "dense":
             return p._roots_from_factorization(p.factor(), multiplicities)

That is not true! The problem is that the factorization factors the content, which may of course take a very long time. As a consequence, there is a failing test in src/sage/rings/polynomial/polynomial_modn_dense_ntl.pyx: the function small_roots uses root finding over ZZ at some point, and the test at line 475 hangs because of the factorization of the content (which is a 514-bit RSA integer).

By the way, thanks for cleaning my code!

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Jun 22, 2017

Branch pushed to git repo; I updated commit sha1. New commits:

74dd4b2  16516: Remove content before factorization

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Jun 22, 2017

Changed commit from 2290039 to 74dd4b2

@bgrenet
Author

bgrenet commented Jun 22, 2017

comment:65

I added a few lines to remove the content before calling _roots_from_factorization in the dense algorithm. This adds some code duplication but delays the content computation as long as possible. I'll wait for the patchbots' verdicts, but the tests pass AFAICT.
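For illustration, here is a minimal sketch of the idea (assuming p is a nonzero polynomial over ZZ); it is only a sketch, not the actual diff of commit 74dd4b2:

def roots_of_primitive_part(p, multiplicities=True):
    # The content of p in ZZ[x] is the gcd of its (nonzero) coefficients;
    # gcd here is Sage's global gcd applied to a list.  Dividing the content
    # out does not change the roots, and it keeps factor() from having to
    # factor a huge integer such as an RSA modulus.
    cont = gcd(p.coefficients())
    p_prim = p.map_coefficients(lambda c: c // cont)
    return p_prim._roots_from_factorization(p_prim.factor(), multiplicities)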

@tscrim
Collaborator

tscrim commented Jun 22, 2017

comment:66

Then I am ready to set this to a positive review if there are no other objections.

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Jun 24, 2017

Branch pushed to git repo; I updated commit sha1. New commits:

53edcc0  Merge branch 'public/polynomials/faster_roots_sparse_zz-16516' of git://trac.sagemath.org/sage into public/polynomials/faster_roots_sparse_zz-16516
686d2b3  Update comment to reflect the code change.

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Jun 24, 2017

Changed commit from 74dd4b2 to 686d2b3

@tscrim
Collaborator

tscrim commented Jun 24, 2017

Changed reviewer from Vincent Delecroix, Travis Scrimshaw to Vincent Delecroix, Travis Scrimshaw, Jeroen Demeyer

@tscrim
Collaborator

tscrim commented Jun 24, 2017

comment:68

My last commit is a merge plus the removal of the part of the comment that wasn't true. Since the patchbot was (essentially) green on the previous branch and my change was trivial, I'm setting this to positive review, as there were no objections.

@vbraun
Member

vbraun commented Jun 25, 2017

Changed branch from public/polynomials/faster_roots_sparse_zz-16516 to 686d2b3
