
Implement QQ['x'] via Flint ZZ['x'] + denominator #4000

Closed
malb opened this issue Aug 30, 2008 · 150 comments

Comments

malb (Member) commented Aug 30, 2008

Bill Hart wrote on [sage-devel]:

"""
Almost everything over Q should probably be converted to a problem
over Z. I haven't seen any polynomial problems over Q which should not
be dealt with this way so far, but I suppose they may exist.
"""

Further justification:

```
sage: f = R.random_element(2000)
sage: g = R.random_element(2000)
sage: fD = f.denominator()
sage: gD = g.denominator()
sage: fZ = (fD * f).change_ring(ZZ)
sage: gZ = (gD * g).change_ring(ZZ)
sage: %time _ = f*g
CPU times: user 0.63 s, sys: 0.02 s, total: 0.66 s
Wall time: 0.67 s

sage: %time _ = (fZ*gZ)
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s

sage: %time _ = (fZ*gZ)/(fD*gD)
CPU times: user 0.06 s, sys: 0.00 s, total: 0.06 s
Wall time: 0.06 s

sage: fM = magma(f)
sage: gM = magma(g)
sage: t = magma.cputime()
sage: _ = fM*gM
sage: magma.cputime(t)
0.059999999999999998
sage: f = R.random_element(4000)
sage: g = R.random_element(4000)
sage: fD = f.denominator()
sage: gD = g.denominator()
sage: fZ = (fD * f).change_ring(ZZ)
sage: gZ = (gD * g).change_ring(ZZ)
sage: %time _ = f*g
CPU times: user 2.11 s, sys: 0.00 s, total: 2.12 s
Wall time: 2.14 s
sage: %time _ = (fZ*gZ)
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s
Wall time: 0.02 s
sage: %time _ = (fZ*gZ)/(fD*gD)
CPU times: user 0.14 s, sys: 0.01 s, total: 0.15 s
Wall time: 0.15 s
sage: fM = magma(f)
sage: gM = magma(g)
sage: t = magma.cputime()
sage: _ = fM*gM
sage: magma.cputime(t)
0.10000000000000001
```
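
The timings above illustrate the whole idea of the ticket: clear denominators, multiply over ZZ, and divide once at the end. A minimal pure-Python sketch of that strategy (the names here are illustrative, not Sage's or FLINT's API):

```python
from fractions import Fraction
from math import gcd

def clear_denominators(coeffs):
    """Write a list of Fraction coefficients as (integer coefficients, common denominator)."""
    den = 1
    for c in coeffs:
        den = den * c.denominator // gcd(den, c.denominator)  # lcm of denominators
    return [int(c * den) for c in coeffs], den

def mul_int_polys(a, b):
    """Schoolbook product of two integer coefficient lists."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    return res

def mul_rational_polys(f, g):
    """Multiply two rational polynomials by working over ZZ plus one denominator."""
    fz, fd = clear_denominators(f)
    gz, gd = clear_denominators(g)
    return [Fraction(c, fd * gd) for c in mul_int_polys(fz, gz)]
```

In Sage/FLINT the integer multiplication is asymptotically fast, which is where the speedup over coefficient-wise rational arithmetic comes from; the sketch only shows the denominator bookkeeping.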

CC: @burcin @sagetrac-drkirkby @sagetrac-spancratz @mwhansen @malb @jdemeyer @peterjeremy

Component: basic arithmetic

Author: Sebastian Pancratz, Martin Albrecht, William Stein, Jeroen Demeyer, Rob Beezer

Reviewer: John Cremona, Martin Albrecht, Alex Ghitza, Harald Schilly, William Stein, Mitesh Patel

Merged: sage-4.6.alpha3

Issue created by migration from https://trac.sagemath.org/ticket/4000

malb (Member Author) commented Sep 2, 2009

Author: Martin Albrecht

malb (Member Author) commented Sep 2, 2009

comment:2

The attached patch provides the basic skeleton for the proposed new implementation. The following already works with the attached patch:

```
sage: from sage.rings.polynomial.polynomial_rational_flint import Polynomial_rational_dense_flint
sage: P.<t> = QQ[]
sage: a = Polynomial_rational_dense_flint(P,1/2)
sage: b = Polynomial_rational_dense_flint(P,2/1)
sage: t = Polynomial_rational_dense_flint(P,is_gen=True)
sage: a*t
1/2*t
sage: a*t*b
t
sage: a*t*b*b
2*t
sage: a*t*b*(b*t)
2*t^2
sage: a*t*b*(b*t)*a
t^2
```
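
The session above exercises multiplication plus canonicalisation of the common denominator. As a toy model of those semantics (invented names, not the actual Polynomial_rational_dense_flint class), the representation behaves like:

```python
from math import gcd

class QPoly:
    """Toy model of the proposed representation: integer coefficients plus
    one common positive denominator, kept in lowest terms (the gcd of the
    coefficient content and the denominator is divided out)."""
    def __init__(self, num, den=1):
        g = 0
        for c in num:
            g = gcd(g, abs(c))
        g = gcd(g, den) or 1          # zero polynomial: divide by 1
        self.num = [c // g for c in num]
        self.den = den // g

    def __mul__(self, other):
        # multiply numerators over ZZ, multiply denominators, re-canonicalise
        res = [0] * (len(self.num) + len(other.num) - 1)
        for i, a in enumerate(self.num):
            for j, b in enumerate(other.num):
                res[i + j] += a * b
        return QPoly(res, self.den * other.den)

    def __eq__(self, other):
        return self.num == other.num and self.den == other.den
```

For example QPoly([1], 2) models 1/2 and QPoly([0, 1]) models t, mirroring the products in the session.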

sagetrac-spancratz (mannequin) commented Sep 7, 2009

comment:3

I've now implemented most methods in the prototype from the previously uploaded patch, and sent a message to sage-devel in the thread "Improving QQ['x']" with some questions.

Sebastian

malb (Member Author) commented Sep 8, 2009

comment:4

Some remarks

  • the patch uses the old-style docstring format, cf. http://wiki.sagemath.org/combinat/HelpOnTheDoc
  • you should claim copyright
  • cdef inline int _celement_canonicalise: why not void?
  • all celement_foo functions should have doctests, which call the QQ[x] methods, which in turn call the celement_ functions
  • have you tried switching the default to this implementation to see how many doctests fail?

sagetrac-spancratz (mannequin) commented Sep 9, 2009

comment:5

Hi Martin,

I've now implemented all methods in the prototype. (I'll attach a new patch in a few minutes.) All of your above comments make sense and I can go through this tomorrow. One thing I could not get done is to make Sage use this implementation by default. In the file polynomial_ring.py, I tried replacing line 1185 with

```
from sage.rings.polynomial.polynomial_rational_flint import Polynomial_rational_dense_flint
element_class = Polynomial_rational_dense_flint
```

and while building still works, executing ./sage results in a whole screen full of output, ending with

```
b37bb0/home/suser/sage-4.1.2.alpha0/local/bin/sage-sage: line 199: 16297 Aborted                 sage-ipython "$@" -i
```

Am I making a mistake in the way I am trying to switch the default, or is this due to problems in the actual new implementation? I've got no idea how to fix this. Of course, I am then happy to do lots of testing.

Kind regards,

Sebastian

malb (Member Author) commented Sep 9, 2009

comment:6

I can take a look tomorrow to debug this.

malb (Member Author) commented Sep 10, 2009

comment:7

I fixed the startup crash. I suggest you take a look at fmpq.diff to see what I changed. If you want to debug these kinds of issues, start Sage using sage -gdb or sage -valgrind (you will need to install the optional Valgrind SPKG for the latter to work). Note that there is still some conversion code missing in polynomial_rational_flint.pyx.

```
sage: P.<x> = QQ[]
sage: f = P.random_element(2000)
sage: g = P.random_element(2000)
sage: %time _ = f*g
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.02 s
sage: P.<x> = PolynomialRing(QQ,'x',implementation='pari')
sage: f = P.random_element(2000)
sage: g = P.random_element(2000)
sage: %time _ = f*g
CPU times: user 0.59 s, sys: 0.00 s, total: 0.59 s
Wall time: 0.59 s
sage: P.<x> = QQ[]
sage: f = P.random_element(5000)
sage: g = P.random_element(5000)
sage: %time _ = f*g
CPU times: user 0.03 s, sys: 0.00 s, total: 0.03 s
Wall time: 0.04 s
sage: fM = magma(f)
sage: gM = magma(g)
sage: t = magma.cputime()
sage: _ = fM*gM
sage: magma.cputime(t)
0.12
```

sagetrac-spancratz (mannequin) commented Sep 10, 2009

comment:8

Thanks for the quick debugging and making the code accessible in SAGE for testing! I'll upload a new version of the patch later. Here are a few (unsorted) remarks:

  • The printing isn't "nice" yet: Rationals are still printed in the form r/s even if s divides r. In fact, all coefficients are printed with the common denominator of the polynomial. I've got an idea of how to fix that, but I am not sure it'll work; I'll give it a try later.

  • Say we set up two polynomial rings R[x] and S[y], one using the generic implementation and one using FLINT. Then sometimes (usually?) coercion like "f_flint = S(f_generic)" or "f_generic = R(f_flint)" works, but sometimes it ends in a segfault. For two random polynomials f and d, of two successive calls "q, s, t = xgcd(f, d)" the first succeeded and the second ended in a segfault. This seems very strange to me!

  • We achieve a performance gain except in the cases of addition and subtraction. (See below.)

  • The method xgcd doesn't give the right result yet, I'll look into that later.

  • I have no idea what you mean by "Note that there is still some conversion code missing in polynomial_rational_flint.pyx." Are there any examples of this in other files?

  • I'll write up the doctests later. Regarding your comments on using the "old" documentation style, I don't quite understand this.

Here is a complete (except for XGCD) list of performance comparisons, using the local installation of SAGE 4.1.2.alpha0 plus this patch on my laptop (Ubuntu 8.10, Intel Core 2 Duo). The first few tests, from comparison through to power, involve random polynomials f and g of degree 2000; the division tests use random polynomials f and d of degrees 800 and 560; and for the GCD test f and d have degrees 60 and 42. In each case, the first output line is for the generic implementation, the second output line is for the new implementation using FLINT.

```
Comparison: f == g
1 loops, best of 3: 10 µs per loop
1 loops, best of 3: 954 ns per loop
Comparison: f < g
1 loops, best of 3: 11 µs per loop
1 loops, best of 3: 1.91 µs per loop
Addition: f + g
1 loops, best of 3: 373 µs per loop
1 loops, best of 3: 2.26 ms per loop
Subtraction: f - g
1 loops, best of 3: 474 µs per loop
1 loops, best of 3: 2.23 ms per loop
Negation: -f
1 loops, best of 3: 12.9 ms per loop
1 loops, best of 3: 39.8 µs per loop
Multiplication: f * g
1 loops, best of 3: 549 ms per loop
1 loops, best of 3: 15.9 ms per loop
Power: f ** 4
1 loops, best of 3: 15.1 s per loop
1 loops, best of 3: 63.7 ms per loop
Division: q, r = f.quo_rem(d)
1 loops, best of 3: 2.42 s per loop
1 loops, best of 3: 177 ms per loop
Division: q = f // d
1 loops, best of 3: 2.43 s per loop
1 loops, best of 3: 63.9 ms per loop
Division: r = f % d
1 loops, best of 3: 2.43 s per loop
1 loops, best of 3: 193 ms per loop
GCD
1 loops, best of 3: 1.81 s per loop
1 loops, best of 3: 88.9 µs per loop
```

Sebastian

malb (Member Author) commented Sep 10, 2009

comment:9

Replying to @sagetrac-spancratz:

  • Say we set up two polynomial rings R[x] and S[y], one using the generic implementation and one using FLINT. Then sometimes (usually?) coercion like "f_flint = S(f_generic)" or "f_generic = R(f_flint)" works, but sometimes it ends in a segfault. For two random polynomials f and d, of two successive calls "q, s, t = xgcd(f, d)" the first succeeded and the second ended in a segfault. This seems very strange to me!

Try running Sage with sage -gdb and/or sage -valgrind. The latter requires the optional Valgrind SPKG. The output of valgrind is incredibly useful and can be found in ~/.sage/valgrind. If you don't get anywhere, I can take a look. But learning Valgrind is well worth it :)

  • We achieve a performance gain except in the cases of addition and subtraction. (See below.)

We should think about how to make it more efficient, e.g. by multiplying each polynomial only by the factor needed to reach the LCM of the two denominators. It seems Magma can do this faster than we can.

  • The method xgcd doesn't give the right result yet, I'll look into that later.

  • I have no idea what you mean by "Note that there is still some conversion code missing in polynomial_rational_flint.pyx." Are there any examples of this in other files?

You are right, the overflow I was expecting doesn't happen (I think this is handled correctly in the base ring). We should consider making x + 1 (with 1 either an int or an Integer) fast though, by writing special code similar to the Rational code in the __init__ function of polynomial_rational_flint.pyx. Also, construction from a list, P([1,2,3,4]), should be made faster, cf. the zmod_poly implementation.

  • I'll write up the doctests later. Regarding your comments on using the "old" documentation style, I don't quite understand this.

You wrote e.g. \code{foo}, which is the old LaTeX style. It should use the Sphinx markup now.

Here is a complete (except for XGCD) list of performance comparisons, using the local installation of SAGE 4.1.2.alpha0 plus this patch on my laptop (Ubuntu 8.10, Intel Core 2 Duo). The first few tests, from comparison through to power, involve random polynomials f and g of degree 2000; the division tests use random polynomials f and d of degrees 800 and 560; and for the GCD test f and d have degrees 60 and 42. In each case, the first output line is for the generic implementation, the second output line is for the new implementation using FLINT.

This is encouraging!

malb (Member Author) commented Sep 11, 2009

comment:10

Also, I think our design for .den is flawed. It shouldn't be preallocated, because this forces an extra call to realloc, i.e. we have two allocation calls instead of one. This is quite expensive.

sagetrac-spancratz (mannequin) commented Sep 11, 2009

comment:11

Actually, I am not quite sure about this. When working with a random polynomial of degree 2000, which will have lots of non-zero entries all of type fmpz_t, it shouldn't really matter whether we manually initialise a few more for the denominators.

I've tried implementing the denominator with the convention that it is either NULL (which should be interpreted as one) or initialised to a positive integer. But this didn't really change the performance.

Another idea, which will sometimes help to keep numbers small, is instead to represent the polynomial over the rationals as (num / den) * prim, where num / den is a rational number in reduced form and prim is a primitive integer polynomial with positive leading coefficient. Obviously, this change vastly improved the performance of negation (which then only operates on the rational number and leaves the integer polynomial part alone), but it didn't change much apart from that. Anyway, given that we need to compute the content of the numerator anyway to ensure that it is coprime to the denominator, we might as well store it separately. I'll implement this throughout the patch and upload a new version later today.
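
Assuming the representation described here, the canonicalisation step could look like the following sketch (the names are invented for illustration, not taken from the patch):

```python
from math import gcd

def to_content_primitive(coeffs, den):
    """Rewrite (coeffs, den), meaning (1/den) * sum(coeffs[i] * x^i) with
    den > 0, as (num, den2, prim): the reduced rational num/den2 times a
    primitive integer polynomial prim with positive leading coefficient."""
    c = 0
    for a in coeffs:
        c = gcd(c, abs(a))            # content of the integer part
    if c == 0:
        return 0, 1, []               # the zero polynomial
    if coeffs[-1] < 0:
        c = -c                        # push the sign into the rational factor
    prim = [a // c for a in coeffs]   # exact division, primitive, positive lead
    g = gcd(abs(c), den)              # reduce the rational factor
    return c // g, den // g, prim
```

For instance (1/12)*(-6 - 9x) canonicalises to (-1/4)*(2 + 3x).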

This still leaves the problem: How can we speed up addition?

At the moment, I don't have any further ideas. In fact, it might be the case that we simply can't, since in this kind of implementation we have to do at least a few polynomial scalar multiplications (and perhaps polynomial scalar divisions as well as integer gcd computations to maintain the form of the representation) plus all the coefficient additions. In contrast, an implementation storing polynomials as an array of rational coefficients only has to do the (rational!) coefficient additions.

So the next things I'll do are

  • Change the fmpq_poly_t data type
  • Change all the methods accordingly
  • Write docstrings

Does anyone have any ideas about the addition?

Sebastian

malb (Member Author) commented Sep 11, 2009

comment:12

Replying to @sagetrac-spancratz:

Actually, I am not quite sure about this. When working with a random polynomial of degree 2000, which will have lots of non-zero entries all of type fmpz_t, it shouldn't really matter whether we manually initialise a few more for the denominators.

We shouldn't forget about small polynomials; they should be fast too. Two allocation calls instead of one sounds rather expensive to me for basic arithmetic.

I've tried implementing the denominator with the convention that it is either NULL (which should be interpreted as one) or initialised to a positive integer. But this didn't really change the performance.

Did you try small examples? Also, how much does the realloc trick you implemented give you?

Another idea, which will sometimes help to keep numbers small, is instead to represent the polynomial over the rationals as (num / den) * prim, where num / den is a rational number in reduced form and prim is a primitive integer polynomial with positive leading coefficient. Obviously, this change vastly improved the performance of negation (which then only operates on the rational number and leaves the integer polynomial part alone), but it didn't change much apart from that. Anyway, given that we need to compute the content of the numerator anyway to ensure that it is coprime to the denominator, we might as well store it separately. I'll implement this throughout the patch and upload a new version later today.

This still leaves the problem: How can we speed up addition?

Did you try the LCM idea? Rationale:

```
sage: P.<x> = QQ[]
sage: f = P.random_element(3000)
sage: g = P.random_element(3000)
sage: fD = f.denominator()
sage: gD = g.denominator()
sage: (fD*gD).nbits()
320
sage: (fD.lcm(gD)).nbits()
228
```
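
The point of the measurement above is that to add A/r + B/s one can scale by t/r and t/s with t = lcm(r, s), which keeps coefficients smaller than scaling by s and r. A sketch of addition along those lines (illustrative, not FLINT's code):

```python
from math import gcd

def add_with_lcm(a, r, b, s):
    """Add (1/r)*A + (1/s)*B for integer coefficient lists a, b and positive
    denominators r, s, scaling by t//r and t//s where t = lcm(r, s).
    The result (coeffs, t) is not reduced to lowest terms."""
    t = r * s // gcd(r, s)            # lcm of the denominators
    u, v = t // r, t // s             # the two scaling factors
    n = max(len(a), len(b))
    coeffs = [(a[i] if i < len(a) else 0) * u +
              (b[i] if i < len(b) else 0) * v
              for i in range(n)]
    return coeffs, t
```

For example (1 + 2x)/4 + 3/6 gives (9 + 6x)/12 before any final reduction.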

At the moment, I don't have any further ideas. In fact, it might be the case that we simply can't, since in this kind of implementation we have to do at least a few polynomial scalar multiplications (and perhaps polynomial scalar divisions as well as integer gcd computations to maintain the form of the representation) plus all the coefficient additions. In contrast, an implementation storing polynomials as an array of rational coefficients only has to do the (rational!) coefficient additions.

Well, that other implementation would have to do quite a few rational additions, each of which has to deal with denominators. I am not convinced yet that it has to be this slow. You could also ask on [sage-devel]; I am sure that, e.g., Bill Hart (the main author of FLINT) would have some cool ideas.

sagetrac-spancratz (mannequin) commented Sep 12, 2009

comment:13

I am sorry for the delay in working on this. Rather than trying the approach of writing the polynomial as r A / s, I've tried again to write it as A / s only, as you laid out initially, this time trying really hard to avoid allocating anything new. Not everything is working yet; I still need to rewrite the three division functions, exponentiation and the two gcd functions. The upside is that everything seems to be a lot faster now :).

Hopefully I'll be able to upload something useful tomorrow.

Sebastian

sagetrac-spancratz (mannequin) commented Sep 14, 2009

comment:14

The switch to include NULL denominators still isn't quite done. However, I think addition and multiplication are bug-free already and show another massive improvement in speed. There are definitely still bugs in the division method, and the modular exponentiation as well as the gcd methods aren't implemented yet. I should be able to look into this tomorrow.

Sebastian

malb (Member Author) commented Sep 14, 2009

comment:15

I can't test the most current patch (on geom.math):

```
sage: P.<x> = PolynomialRing(QQ)
sage: f = P.random_element(2000)
...
__celement_den_fit_limbs
Error: division by zero!
/scratch/malb/sage-4.1.2.alpha1/local/bin/sage-sage: line 199: 12195 Aborted                 sage-ipython "$@" -i
```

sagetrac-spancratz (mannequin) commented Sep 14, 2009

comment:16

Hi Martin,

I am sorry for that. Last night and this morning I fixed another couple of bugs. In a few minutes, I'll upload a new patch (or rather, again the difference from the 20090911 patch). By the way, for debugging purposes all methods in the linkage file now begin by printing the method's name (although in all but the floordiv method, that line is commented out with '#').

Random (performance!, not correctness...) tests ran fine last night for polynomials of degree 2000 for the methods ==, <, +, -, neg, *, ^. I thought the three division methods should work fine now, until I stumbled across the following segfault:

```
sage: S.<y> = QQ[]
sage: f = -3 * y^10 - 4 * y^9
sage: g = (1/2) * y^6 + 3 * y^5 + 160930 * y^4
sage: f // g
celement_floordiv
Perform pseudo division -3*x^10-4*x^9 x^6+6*x^5+321860*x^4


------------------------------------------------------------
Unhandled SIGSEGV: A segmentation fault occured in SAGE.
This probably occured because a *compiled* component
of SAGE has a bug in it (typically accessing invalid memory)
or is not properly wrapped with _sig_on, _sig_off.
You might want to run SAGE under gdb with 'sage -gdb' to debug this.
SAGE will now terminate (sorry).
------------------------------------------------------------
```

This strikes me as very odd because the segfault seems to occur in the call fmpz_poly_pseudo_div(q.num, &m, a.num, b.num) with a.num the polynomial -3*x^10 - 4*x^9 and b.num the polynomial x^6 + 6*x^5 + 321860*x^4. Perhaps you could have a look at this one?
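
For reference, pseudo-division is the variant of polynomial division that stays inside ZZ by pre-multiplying with a power of the divisor's leading coefficient. A plain-Python sketch of the textbook algorithm (not FLINT's implementation) shows the intended semantics of such a call:

```python
def trim(p):
    """Drop trailing zero coefficients in place."""
    while p and p[-1] == 0:
        p.pop()
    return p

def pseudo_divrem(a, b):
    """Pseudo-division over ZZ: for integer coefficient lists a, b with
    deg(a) >= deg(b) and b nonzero, return (q, r) such that
    lb^d * a == q*b + r with deg(r) < deg(b), where lb = lead(b) and
    d = deg(a) - deg(b) + 1.  No rational arithmetic is needed."""
    a, b = trim(list(a)), trim(list(b))
    lb = b[-1]
    e = len(a) - len(b) + 1           # remaining power of lb to apply
    q, r = [], list(a)
    while trim(r) and len(r) >= len(b):
        k = len(r) - len(b)           # degree gap
        s = r[-1]                     # leading coefficient of r
        q = [lb * c for c in q]       # q <- lb*q + s*x^k
        q += [0] * (k + 1 - len(q))
        q[k] += s
        r = [lb * c for c in r]       # r <- lb*r - s*x^k*b (leading term cancels)
        for i, bc in enumerate(b):
            r[i + k] -= s * bc
        e -= 1
    f = lb ** e                       # top up to the full factor lb^d
    return [f * c for c in q], [f * c for c in r]
```

E.g. dividing x^2 + 1 by 2x + 1 gives 2^2 * (x^2 + 1) = (2x - 1)(2x + 1) + 5.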

I haven't looked at the two gcd methods yet, but I'll do that later today or tomorrow.

As a last question about the implementation, I noticed that polynomials over QQ in SAGE have the method denominator, which this implementation clearly should override. At which level / in which file should this be done?

Finally, here are the performance timings, in each case for ten random polynomials of degree 2000, first the time for the generic implementation and then the time for this implementation with FLINT:

  • == - 20µs, 1µs
  • < - 20µs, 1µs
  • + - 400µs, 100µs
  • - - 400µs, 100µs
  • neg - 20ms, 20µs
  • * - 500ms, 1ms
  • ^ (to the 4th power) - 15s, 30µs

Kind regards,

Sebastian

sagetrac-spancratz (mannequin) commented Sep 14, 2009

comment:17

Hi Martin,

I just started to look at the gcd methods again and I also looked at the logic in polynomial_template.pxi. Here's the question:

Since the gcd of two polynomials is only defined up to multiplication by rationals, what's the right way of dealing with this? I think one can make a good argument for always returning the same normalisation. This would also mean that we do not necessarily have gcd(a,0) == a, which is how it is currently handled in the file polynomial_template.pxi. If we want to normalise the gcd, in which way should this be done? If it's non-zero:

  • Monic rational polynomial
  • Primitive integer polynomial with positive leading coefficient

Of course, there are lots more but I think these two might be the most sensible choices.

The first one has the advantage that it generalises to all polynomial rings (over commutative rings with 1, at least). Upon adding a method returning the monic scalar multiple of a polynomial to the template file, one can still handle the cases of gcd(a,0) and gcd(0,b) in the template file.

Personally though, I am more in favour of the second option, since this might lead to faster code when working with QQ[]. In this case, we should remove the handling of the above two cases from the template file and always pass the call on to celement_gcd. This would mean that we leave the normalisation up to the actual implementation of the polynomial ring, rather than enforcing it across all base rings using the template file. We would then also have to make sure that all celement_gcd methods are happy to deal with zero arguments.
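
The two candidate normalisations can be sketched on rational coefficient lists (illustrative helpers, not a proposed API):

```python
from fractions import Fraction
from math import gcd

def monic_normalisation(coeffs):
    """Option 1: divide a nonzero rational polynomial by its leading coefficient."""
    lead = coeffs[-1]
    return [c / lead for c in coeffs]

def primitive_normalisation(coeffs):
    """Option 2: the primitive integer polynomial with positive leading
    coefficient that is a rational multiple of the (nonzero) input."""
    den = 1
    for c in coeffs:
        den = den * c.denominator // gcd(den, c.denominator)  # common denominator
    ints = [int(c * den) for c in coeffs]
    g = 0
    for a in ints:
        g = gcd(g, abs(a))            # content
    if ints[-1] < 0:
        g = -g                        # force a positive leading coefficient
    return [a // g for a in ints]
```

For 1/2 - 3/4 x, option 1 yields -2/3 + x while option 2 yields -2 + 3x; both are canonical representatives of the same class.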

What do you think?

Sebastian

malb (Member Author) commented Sep 14, 2009

comment:18

Replying to @sagetrac-spancratz:

Perhaps you could have a look at this one?

I will (hopefully) take a look later this week.

As a last question about the implementation, I noticed that polynomials over QQ in SAGE have the method denominator, which this implementation clearly should override. At which level / in which file should this be done?

You would add a method denominator() to Polynomial_rational_dense_flint.

Finally, here are the performance timings, in each case for ten random polynomials of degree 2000, first the time for the generic implementation and then the time for this implementation with FLINT:

If I understand this correctly, then addition is 20x faster than the previous implementation just because you avoid a realloc?

malb (Member Author) commented Sep 14, 2009

comment:19

Replying to @sagetrac-spancratz:

Since the gcd of two polynomials is only defined up to multiplication by rationals, what's the right way of dealing with this? I think one can make a good argument for always returning the same normalisation. This would also mean that we do not necessarily have gcd(a,0) == a. This is currently the way it's handled in the file polynomial_template.pxi. If we want to normalise the gcd, in which way should this be done? If it's non-zero..

I think we should have gcd(a,0) = 1 because this is what gcd(1/2,0) returns. I would like to avoid putting this logic in the celement_gcd implementations, but if we have to, then ... well, we have to :)

Personally though, I am more in favour of the second option, since this might lead to faster code when working with QQ[]. In this case, we should remove the handling of the above two cases from the template file and always pass the call on to celement_gcd. This would mean that we leave the normalisation up to the actual implementation of the polynomial ring, rather than enforcing it across all base rings using the template file. We would then also have to make sure that all celement_gcd methods are happy to deal with zero arguments.

This might be worth raising on [sage-devel] where people care much more about this than I do, i.e. I guess it is a relevant corner case for number theory and thus people might have strong feelings about it?

sagetrac-spancratz (mannequin) commented Sep 14, 2009

comment:20

As a last question about the implementation, I noticed that polynomials over QQ in SAGE have the method denominator, which this implementation clearly should override. At which level / in which file should this be done?

You would add a method denominator() to Polynomial_rational_dense_flint.

OK, I'll do that.

Finally, here are the performance timings, in each case for ten random polynomials of degree 2000, first the time for the generic implementation and then the time for this implementation with FLINT:

If I understand this correctly, then addition is 20x faster than the previous implementation just because you avoid a remalloc?

Yes. Actually, throughout I am now trying very hard to re-use variables rather than allocating new variables all over the place. It makes the code quite ugly... but definitely faster, which this is about, right? :)

sagetrac-spancratz (mannequin) commented Sep 14, 2009

comment:21

Replying to @malb:

Replying to @sagetrac-spancratz:
I think we should have gcd(a,0) = 1 because this is what gcd(1/2,0) returns. I would like to avoid to put this logic in the celement_gcd implementations but if we have to then ... well we have to :)

I didn't mean the above for rational numbers a, but for rational polynomials a. Your example above highlights that gcd doesn't necessarily guarantee gcd(a, 0) == a. The behaviour of gcd for integers suggests the method should return the monic normalisation. However, the current logic in template_polynomial.pxi doesn't do this, for example:

```
sage: R.<t> = PolynomialRing(IntegerModRing(3), 't')
sage: f = 2*t + 1
sage: type(f)
<type 'sage.rings.polynomial.polynomial_zmod_flint.Polynomial_zmod_flint'>
sage: gcd(f, R(0))
2*t + 1
```

In the above case, the monic version would be t + 2.

This might be worth raising on [sage-devel] where people care much more about this than I do, i.e. I guess it is a relevant corner case for number theory and thus people might have strong feelings about it?

OK, I'll do this.

Sebastian

sagetrac-spancratz (mannequin) commented Sep 16, 2009

comment:22

I have now opened another ticket, #6941, to change the template implementation, pushing all the logic into the celement_foo methods rather than handling the cases gcd(a,0) and gcd(0,b) at a higher level. The patch is very short: it only makes the small expected changes to the template file and the GF2X and ZMOD linkage files, plus one other file in the hyperelliptic curve module where the current behaviour of xgcd is used.

Martin, would you be happy to review that patch?

Sebastian

sagetrac-spancratz (mannequin) commented Sep 19, 2009

comment:23

I've now implemented almost all functionality from the former generic implementation, most of it massively improved through FLINT. There are also some new methods. All of this is in the patch I uploaded just now, still as a difference to the 20090911 patch. Running make test still results in many failures, although fewer than last week.

Sebastian

sagetrac-spancratz (mannequin) commented Sep 21, 2009

comment:24

Apart from a bad indentation in polynomial_quotient_ring_element, for which I didn't want to re-upload a patch, I am now down to the following doctest failures:

```
sage -t  "devel/sage/sage/schemes/elliptic_curves/padic_lseries.py"
sage -t  "devel/sage/sage/schemes/elliptic_curves/ell_generic.py"
sage -t  "devel/sage/sage/schemes/elliptic_curves/padics.py"
sage -t  "devel/sage/sage/schemes/elliptic_curves/ell_curve_isogeny.py"
sage -t  "devel/sage/sage/tests/book_stein_modform.py"
sage -t  "devel/sage/sage/rings/qqbar.py"
sage -t  "devel/sage/sage/rings/number_field/number_field_element.pyx"
sage -t  "devel/sage/sage/modular/modform/element.py"
sage -t  "devel/sage/sage/modular/overconvergent/genus0.py"
sage -t  "devel/sage/sage/modular/hecke/submodule.py"
sage -t  "devel/sage/sage/structure/sage_object.pyx"
```

All but one of them are memory problems, either in mpq_canonicalize (called in the __getitem__ method) or in fmpz_poly_mul called in celement_mul. At the moment, I do not know how to resolve these.

Sebastian

sagetrac-spancratz (mannequin) commented Sep 23, 2009

comment:25

Almost all of the memory problems are now resolved. They were arising because I wrongly assumed that fmpz_ methods (not fmpz_poly_; those work just fine) supported aliasing of the inputs and outputs. Apart from the method fmpz_neg, I think I have fixed this everywhere it is used, except for the square-free decomposition in polynomial_rational_flint.pyx. I'll rewrite that, too, but I've already checked that this method does not get called in the following last two remaining doctest failures:

```
The following tests failed:

    sage -t  "devel/sage/sage/rings/qqbar.py"
    sage -t  "devel/sage/sage/structure/sage_object.pyx"
```

The test in qqbar.py that seems to break down is the following piece of code:

```
sage: x = polygen(AA)
sage: r = QQbar.polynomial_root(x^5 - x - 1, CIF(RIF(0.1, 0.2), RIF(1.0, 1.1))); r
sage: r.real().minpoly()
```

The test in sage_object.pyx that breaks has to do with unpickling, and it's triggered by the following two lines:

```
sage: std = os.environ['SAGE_DATA'] + '/extcode/pickle_jar/pickle_jar.tar.bz2'
sage: sage.structure.sage_object.unpickle_all(std)
```

I will try to chase down the first problem a little further than the minpoly() function, perhaps I can resolve it myself. But any help with the second problem in particular would be very much appreciated.

Sebastian

sagetrac-spancratz (mannequin) commented Oct 6, 2010

comment:120

I think this would be the appropriate way to handle this. Sebastian

qed777 (mannequin) commented Oct 7, 2010

comment:121

Applying attachment: trac4000_0.patch to 4.6.alpha2, I get a failed "hunk":

```diff
--- polynomial_ring.py
+++ polynomial_ring.py
@@ -1220,28 +1220,34 @@
             sage: type(R.gen())
             <class 'sage.rings.polynomial.polynomial_element_generic.Polynomial_generic_dense_field'>
         """
-        if implementation is None: implementation="NTL"
         from sage.rings.finite_rings.finite_field_base import is_FiniteField
+        from sage.rings.rational_field import QQ
+        from sage.rings.polynomial.polynomial_singular_interface import can_convert_to_singular
+        if implementation is None:
+            implementation = "NTL"
+
         if implementation == "NTL" and is_FiniteField(base_ring):
-            p=base_ring.characteristic()
             from sage.libs.ntl.ntl_ZZ_pEContext import ntl_ZZ_pEContext
             from sage.libs.ntl.ntl_ZZ_pX import ntl_ZZ_pX
+            from sage.rings.polynomial.polynomial_zz_pex import Polynomial_ZZ_pEX
+
+            p = base_ring.characteristic()
             self._modulus = ntl_ZZ_pEContext(ntl_ZZ_pX(list(base_ring.polynomial()), p))
-            from sage.rings.polynomial.polynomial_zz_pex import Polynomial_ZZ_pEX
-            element_class=Polynomial_ZZ_pEX
+            element_class = Polynomial_ZZ_pEX
 
         if not element_class:
             if sparse:
                 element_class = polynomial_element_generic.Polynomial_generic_sparse_field
             elif isinstance(base_ring, rational_field.RationalField):
-                element_class = polynomial_element_generic.Polynomial_rational_dense
+                from sage.rings.polynomial.polynomial_rational_flint import Polynomial_rational_flint
+                element_class = Polynomial_rational_flint
             elif is_RealField(base_ring):
                 element_class = PolynomialRealDense
             else:
                 element_class = polynomial_element_generic.Polynomial_generic_dense_field
+
         PolynomialRing_integral_domain.__init__(self, base_ring, name=name, sparse=sparse, element_class=element_class)
 
-        from sage.rings.polynomial.polynomial_singular_interface import can_convert_to_singular
         self._has_singular = can_convert_to_singular(self)
 
     def divided_difference(self, points, full_table=False):
```

Could someone who knows this code please rebase the patch?

qed777 mannequin commented Oct 8, 2010

Main patch rebased against 4.6.alpha3

qed777 mannequin commented Oct 8, 2010

Attachment: trac4000_0.2.patch.gz

Attachment: trac_4000-sunos_workaround.patch.gz

Solaris workaround


qed777 added the s: needs review label and removed the s: needs work label Oct 8, 2010
qed777 mannequin commented Oct 8, 2010

Changed work issues from Solaris build error, doctest error to none

qed777 mannequin commented Oct 8, 2010

comment:123

The tests pass with a trial 4.6.alpha3 (which is probably the same as alpha2 for this ticket) on sage.math, except for

```
sage -t -long -force_lib "devel/sage/sage/rings/number_field/number_field_ideal.py"
**********************************************************************
File "/mnt/usb1/scratch/mpatel/apps/sage-4.6.a3/devel/sage/sage/rings/number_field/number_field_ideal.py", line 194:
    sage: NumberField(x^2 + 1, 'a').ideal(7).__hash__()
Expected:
    -9223372036854775779
Got:
    -288230376151711715
```

On David Kirkby's OpenSolaris machine hawk, I get

```
sage -t -long -force_lib "devel/sage/sage/rings/number_field/number_field_ideal.py"
**********************************************************************
File "/export/home/buildbot/build/sage/hawk-1/hawk_full/build/sage-4.6.alpha3/devel/sage/sage/rings/number_field/number_field_ideal.py", line 194:
    sage: NumberField(x^2 + 1, 'a').ideal(7).__hash__()
Expected:
    -2147483619
Got:
    -67108835
```

I'm inclined to merge this into 4.6.alpha3. We can open a new ticket for the new error, unless it indicates a serious problem. I'd like to release 4.6.alpha3 in a day or so, so please let me know as soon as possible.
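A side observation (my own arithmetic, not something stated in the ticket): all four reported hashes, old and new, are of the form 29 - 2**k, so the mismatch looks like the hash modulus shrinking (from 2**63 to 2**58 on 64-bit, and from 2**31 to 2**26 on 32-bit) rather than a corrupted value. A quick check in plain Python:

```python
# Old/new 64-bit and old/new 32-bit hashes from the failing doctests above.
hashes = [-9223372036854775779, -288230376151711715,
          -2147483619, -67108835]

for h in hashes:
    # 29 - h should be an exact power of two, i.e. h == 29 - 2**k
    k = (29 - h).bit_length() - 1
    assert h == 29 - 2**k, (h, k)
    print("%d == 29 - 2**%d" % (h, k))
# k comes out as 63, 58, 31 and 26, respectively.
```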

qed777 mannequin commented Oct 8, 2010

Combined patch that replaces all of the others

qed777 mannequin commented Oct 8, 2010

comment:124

Attachment: trac_4000-combined.patch.gz

I've also attached a combined patch that replaces all of the others.

sagetrac-drkirkby mannequin commented Oct 8, 2010

comment:125

Replying to @qed777:

The tests pass with a trial 4.6.alpha3 (which is probably the same as alpha2 for this ticket) on sage.math, except for

```
sage -t -long -force_lib "devel/sage/sage/rings/number_field/number_field_ideal.py"
**********************************************************************
File "/mnt/usb1/scratch/mpatel/apps/sage-4.6.a3/devel/sage/sage/rings/number_field/number_field_ideal.py", line 194:
    sage: NumberField(x^2 + 1, 'a').ideal(7).__hash__()
Expected:
    -9223372036854775779
Got:
    -288230376151711715
```

On David Kirkby's OpenSolaris machine hawk, I get

```
sage -t -long -force_lib "devel/sage/sage/rings/number_field/number_field_ideal.py"
**********************************************************************
File "/export/home/buildbot/build/sage/hawk-1/hawk_full/build/sage-4.6.alpha3/devel/sage/sage/rings/number_field/number_field_ideal.py", line 194:
    sage: NumberField(x^2 + 1, 'a').ideal(7).__hash__()
Expected:
    -2147483619
Got:
    -67108835
```

I'm inclined to merge this into 4.6.alpha3. We can open a new ticket for the new error, unless it indicates a serious problem. I'd like to release 4.6.alpha3 in a day or so, so please let me know as soon as possible.

Personally, I think it would be best to fix it first. Otherwise, it reminds me of this comment by Peter Jeremy in #6456 comment:67:


> I am very concerned at this "release it now, we'll make it work later" mentality.


If it is on the strict understanding that it does not get into a release until fixed, then I'm OK with it. That is the purpose of alphas. But I thought the intention was to have a feature freeze after this alpha. Merging this could be a dangerous thing to do.

The ticket has been open two years - I would have thought those working on it would have had time to check it!

Dave

sagetrac-drkirkby mannequin commented Oct 8, 2010

comment:126

As a matter of interest, what is the rationale for making a ticket a blocker, when it has been open for two years? If we have lived without it for two years, I find the 'blocker' status a bit unnecessary.

qed777 mannequin commented Oct 8, 2010

comment:127

On bsd.math, I get

```
sage -t -long  -force_lib devel/sage/sage/rings/number_field/number_field_ideal.py
**********************************************************************
File "/Users/buildbot/build/sage/bsd-1/bsd_full/build/sage-4.6.alpha3/devel/sage-main/sage/rings/number_field/number_field_ideal.py", line 194:
    sage: NumberField(x^2 + 1, 'a').ideal(7).__hash__()
Expected:
    -9223372036854775779
Got:
    -288230376151711715
```

so it seems we can just update the example at line 194 in number_field_ideal.py, which currently reads:

```
            sage: NumberField(x^2 + 1, 'a').ideal(7).__hash__()
            -9223372036854775779                 # 64-bit
            -2147483619                          # 32-bit
```

Is this OK?
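Concretely, the updated example would then presumably become (new expected values taken from the failure reports above):

```
            sage: NumberField(x^2 + 1, 'a').ideal(7).__hash__()
            -288230376151711715                  # 64-bit
            -67108835                            # 32-bit
```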

sagetrac-drkirkby mannequin commented Oct 8, 2010

comment:128

For the record, here is a slightly longer quote of what Peter said:

> I am very concerned at this "release it now, we'll make it work later" mentality. If Sage is going to be a viable alternative to the M's, it needs to be trustworthy - complaints of "feature X is missing" are easily rectified, claims of "Sage gave me wrong answers" can quickly turn into "you can't trust the output from Sage" and are far more difficult to refute.

qed777 mannequin commented Oct 8, 2010

Fix 32/64-bit number_field_ideal doctest. Apply only this patch.

qed777 mannequin commented Oct 8, 2010

comment:129

Attachment: trac_4000-combined.2.patch.gz

I made this ticket a 4.6 blocker three weeks ago. The most recent doctest error appeared because of a recent change, probably in 4.6.alpha2. Yes, I meant to say that I'd make the new ticket a 4.6 blocker. 4.6.alpha3 is not out yet and we are not yet in feature freeze.

I've added V2 of the combined patch, which adjusts the number_field_ideal.py example as I suggest above.

This ticket still needs a final review, and if it's positively reviewed by the time #10097 is merged, I'll merge it into 4.6.alpha3.

qed777 mannequin commented Oct 8, 2010

Changed author from Sebastian Pancratz, Martin Albrecht to Sebastian Pancratz, Martin Albrecht, William Stein, Jeroen Demeyer, Robert Beezer

qed777 mannequin commented Oct 8, 2010

Changed reviewer from John Cremona, Martin Albrecht, Alex Ghitza, Harald Schilly to John Cremona, Martin Albrecht, Alex Ghitza, Harald Schilly, William Stein, Mitesh Patel

qed777 mannequin commented Oct 8, 2010

Merged: sage-4.6.alpha3

qed777 removed the s: positive review label Oct 8, 2010
qed777 closed this as completed Oct 8, 2010
rlmill mannequin commented Oct 11, 2010

comment:133

And there was much rejoicing.

nexttime mannequin commented Sep 5, 2011

comment:134

For the record:

There's a bug in fmpq_poly_xgcd() that can lead to severe heap corruption, which in turn can cause almost any kind of failure.

See #11771 for details.

(Unfortunately FLINT 2.2, which ships its own, newer version of fmpq_poly, doesn't yet provide xgcd() for rational polynomials.)
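For background, the representation at the heart of this ticket, and of FLINT's fmpq_poly, stores a polynomial over Q as an integer-coefficient polynomial together with a single common denominator, so most arithmetic reduces to fast arithmetic over ZZ['x']. A minimal Python sketch of the idea (my own illustration; the function names are hypothetical, not FLINT's API, and the naive convolution stands in for FLINT's asymptotically fast multiplication):

```python
from fractions import Fraction
from math import gcd

def to_zz_plus_den(coeffs):
    """Clear denominators: rational coefficients -> (integer coefficients, common denominator)."""
    den = 1
    for c in coeffs:
        den = den * c.denominator // gcd(den, c.denominator)  # lcm of denominators
    return [int(c * den) for c in coeffs], den

def mul(f, g):
    """Multiply two rational polynomials (low-to-high coefficient lists) via ZZ['x']."""
    (fz, fd), (gz, gd) = to_zz_plus_den(f), to_zz_plus_den(g)
    prod = [0] * (len(fz) + len(gz) - 1)
    for i, a in enumerate(fz):      # plain integer convolution; FLINT would
        for j, b in enumerate(gz):  # dispatch to asymptotically fast code here
            prod[i + j] += a * b
    den = fd * gd
    return [Fraction(c, den) for c in prod]

# (1/2 + x) * (1/3 + x) = 1/6 + 5/6*x + x^2
print(mul([Fraction(1, 2), Fraction(1)], [Fraction(1, 3), Fraction(1)]))
# -> [Fraction(1, 6), Fraction(5, 6), Fraction(1, 1)]
```

This is exactly the speedup demonstrated in the ticket description: the integer multiplication dominates, and recombining with the product of the two denominators is cheap.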

nexttime mannequin commented Sep 5, 2011

comment:135

Replying to @nexttime:

There's a bug in fmpq_poly_xgcd() that can lead to severe heap corruption, which in turn can cause almost any kind of failure.

See #11771 for details.

The patch is up there.

It would be nice if one of the many reviewers here could review my patch on that ticket. IMHO, the reviewer should also take a look at the sizes used for other memory (re)allocations and other variables, at least in fmpq_poly_xgcd().

@fchapoton

Changed author from Sebastian Pancratz, Martin Albrecht, William Stein, Jeroen Demeyer, Robert Beezer to Sebastian Pancratz, Martin Albrecht, William Stein, Jeroen Demeyer, Rob Beezer
