Improve accuracy of builtin sum() for float inputs #100425

Closed
rhettinger opened this issue Dec 22, 2022 · 24 comments
Labels: type-feature (A feature request or enhancement)

@rhettinger (Contributor) commented Dec 22, 2022

Currently sum() makes no effort to improve accuracy over a simple running total. We do have math.fsum(), which makes extreme efforts to be almost perfect; however, that function isn't well known, it runs 10x slower than regular sum(), and it always coerces to a float.

I suggest switching the builtin sum() handling of float inputs to Arnold Neumaier's improved variation of compensated summation. Per his paper, this algorithm has excellent error bounds (though not as perfect as fsum()):

|s - š| ≤ ε|s| + ε²((3/4)n² + n)·Σ|aᵢ|                 (IV,12)
|s - š| ≤ ε|s| + ε²((1/4)n³ + (5/2)n² + n)·max|aᵢ|     (IV,13)
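
For reference, here is a minimal Python sketch of Neumaier's variant (illustrative only; the PR implements the same idea in C inside the builtin sum()):

def neumaier_sum(xs, start=0.0):
    """Neumaier's variant of compensated (Kahan) summation -- a sketch."""
    total = start
    comp = 0.0                       # running compensation for lost low-order bits
    for x in xs:
        t = total + x
        if abs(total) >= abs(x):
            comp += (total - t) + x  # low-order bits of x were lost
        else:
            comp += (x - t) + total  # low-order bits of total were lost
        total = t
    return total + comp              # apply the compensation once, at the end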

The compensation tracking runs in parallel to the main accumulation. And except for pathological cases, the branch is predictable, making the test essentially free. Thanks to the magic of instruction level parallelism and branch prediction, this improvement has zero cost on my Apple M1 build. Timed with:

./python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'

N.B. Numpy switched from a simple running total to partial pairwise summation. That isn't as accurate as what is being proposed here, but it made more sense for them because the extra work of Neumaier summation isn't masked by the overhead of fetching values from an iterator as we do here. Also with an iterator, we can't do pairwise summation without using auxiliary memory.


@mdickinson (Member) commented Dec 22, 2022

Very nice! It's surprising how little code it takes to make this work.

A few questions:

  • If this went in, would it be considered a CPython implementation detail, or would other implementations be expected to follow suit?
  • I'd expect at least some breakage in downstream unit tests (and perhaps doctests) from the changed numbers. People shouldn't be relying on down-to-the-last-bit reproducibility from sum in unit tests, but I'd like to bet that at least some are, and in practice our current sum does effectively give reproducible results across platforms, especially now that x87 FPUs and the associated double-rounding issues are fading into obscurity. What level of breakage would we consider acceptable here?
  • Did you try timing the variant that uses 2Sum in place of Fast2Sum for computing the individual errors? It involves more floating-point operations, but eliminates the branch in the absolute value test. (Both building blocks are sketched just after this list.)
  • Would we need to keep a way to get the naive, uncompensated sum? It's a need that I've occasionally had, and I've written code that relies on sum doing the obvious thing. I'll admit that it's likely not a common use-case, and people who need it can still do functools.reduce(operator.add, values, start).
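
For readers unfamiliar with the two building blocks named above, here are illustrative Python sketches (the function names are conventional, not taken from the PR):

def fast_two_sum(a, b):
    """Dekker's Fast2Sum: exact rounding error of a + b, valid when |a| >= |b|."""
    s = a + b
    err = b - (s - a)      # low-order bits of b lost in the addition
    return s, err

def two_sum(a, b):
    """Knuth's 2Sum: exact rounding error of a + b, with no magnitude test."""
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    err = (a - a_virtual) + (b - b_virtual)
    return s, err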

On performance, I'm not seeing the "zero cost" on macOS / Intel. Here are some timings on my machine (comparing commit af7613e on this branch - "cpython" below - with commit e0b4d96 on main - "cpython-compare" below); both clones built with --enable-optimizations. tl;dr - this is around 25% slower on my machine.

mdickinson@lovelace python % ./cpython/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 44.9 usec per loop
mdickinson@lovelace python % ./cpython/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 47.6 usec per loop
mdickinson@lovelace python % ./cpython/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 45 usec per loop
mdickinson@lovelace python % ./cpython-compare/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
10000 loops, best of 5: 35.3 usec per loop
mdickinson@lovelace python % ./cpython-compare/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
10000 loops, best of 5: 34.5 usec per loop
mdickinson@lovelace python % ./cpython-compare/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
10000 loops, best of 5: 34.4 usec per loop

Here's another test case that involves a decent amount of cancellation - we're computing (the imaginary part of) the Gauss sum for the prime 609359; mathematically, it should give exactly √609359. Here's the code:

import fractions
import math
import sys
import timeit

# Test case: Gauss sum for the prime 609359; mathematically, result
# should be exactly sqrt(p).
p = 609359
terms = [math.sin(2*math.pi*r*r / p) for r in range(p)]

# Exact sum of the actual terms we're using, and a way to compute the
# ulps error.
true_sum = sum(map(fractions.Fraction, terms))

def ulps_error(x: float) -> float:
    return (fractions.Fraction(x) - true_sum) / math.ulp(true_sum)

# Error incurred by sum.
error = ulps_error(sum(terms))

# Timing
number = 1000
time = timeit.timeit(stmt="sum(terms)", globals=globals(), number=number)

print(sys.version)
print(f"error: {ulps_error(sum(terms))} ulps")
print(f"time: {1000.0 * time/number:.3f} ms per run")

Results on my machine show around a 20% slowdown:

mdickinson@lovelace python % ./cpython-compare/python.exe ~/Desktop/compensated.py
3.12.0a3+ (tags/v3.12.0a3-114-ge0b4d966a8:e0b4d966a8, Dec 22 2022, 13:23:36) [Clang 14.0.0 (clang-1400.0.29.202)]
error: 58.52302999049425 ulps
time: 3.066 ms per run
mdickinson@lovelace python % ./cpython/python.exe ~/Desktop/compensated.py        
3.12.0a3+ (heads/compensated_sum:af7613ebcf, Dec 22 2022, 13:34:38) [Clang 14.0.0 (clang-1400.0.29.202)]
error: -0.47697000950574875 ulps
time: 3.799 ms per run
mdickinson@lovelace python % ./cpython-compare/python.exe ~/Desktop/compensated.py
3.12.0a3+ (tags/v3.12.0a3-114-ge0b4d966a8:e0b4d966a8, Dec 22 2022, 13:23:36) [Clang 14.0.0 (clang-1400.0.29.202)]
error: 58.52302999049425 ulps
time: 3.063 ms per run
mdickinson@lovelace python % ./cpython/python.exe ~/Desktop/compensated.py        
3.12.0a3+ (heads/compensated_sum:af7613ebcf, Dec 22 2022, 13:34:38) [Clang 14.0.0 (clang-1400.0.29.202)]
error: -0.47697000950574875 ulps
time: 3.789 ms per run

On pairwise summation: agreed that this (i.e., compensated summation) seems superior for our use-case, though it might be interesting to code up pairwise summation just to see what the performance tradeoffs look like. It doesn't need much in the way of auxiliary memory: at most a few hundred bytes for maintaining a stack of partial values of size O(log_2(length)). E.g., in Python:

def _ruler_seq():
    """Sequence 0, 1, 0, 2, 0, 1, 0, 3, ... (A007814)"""
    i = 0
    while True:
        # Count of trailing one-bits in i, i.e. the 2-adic valuation of i + 1.
        yield (~(i + 1) & i).bit_length()
        i += 1


def sum_pairwise(iterable, /, start=0.0):
    """Sum of an iterable of floats, via pairwise summation."""
    partials = [start]
    for x, pop_count in zip(iterable, _ruler_seq()):
        # The ruler sequence says how many completed subtrees of equal
        # size are waiting on the stack to be merged with this element.
        for _ in range(pop_count):
            x += partials.pop()
        partials.append(x)
    return sum(reversed(partials))

(The _ruler_seq logic would of course be much more streamlined in C.)
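
As a quick illustrative check (output values are typical, not guaranteed bit-for-bit):

vals = [0.1] * 100_000
print(sum_pairwise(vals))   # error grows like O(log n), so this stays much
                            # closer to the true sum than a naive running total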

@rhettinger (Contributor, Author) commented Dec 22, 2022

If this went in, would it be considered a CPython implementation detail, or would other implementations be expected to follow suit?

What are your thoughts on this? I was thinking that it has to be an implementation detail because it is so easily destroyed by double rounding environments or faulty compilers that optimize away the compensation. That said, I would hope that other implementations would do this or something similar. It doesn't take much effort.

What level of breakage would we consider acceptable here?

Fortunately, we have a precedent here. When numpy switched to pairwise summation, the world was net better off and seemed happy to fix a few broken tests in exchange for better accuracy. Another precedent occurred when the floating point repr changed to display the shortest decimal representation from the equivalence class. Again, the world was net happier for the problems that it no longer had and was happy to fix a few broken tests.

FWIW the PR also causes "unbreakage". Code that is normalizing a list (i.e. converting values to percentages) will work better without the user having to make changes: t = sum(data); percentages = [x * 100.0 / t for x in data]. Commutativity problems will fade, etc. In general, it is nice to have numbers add up correctly most of the time. The notion of "worse is better" doesn't apply here. Part of Matlab's success is that so many subtle numeric issues were handled under-the-hood in sophisticated ways.
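
To make the "unbreakage" concrete, a small illustration (the [0.1] * 10 behaviour is confirmed later in this thread):

data = [0.1] * 10
t = sum(data)   # 1.0 with compensated summation; 0.9999999999999999 before
percentages = [x * 100.0 / t for x in data]
# With t exactly 1.0, the normalization adds no error beyond the x * 100.0 rounding.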

Results on my machine show around a 20% slowdown

Thanks for running a timing on Intel silicon. That is about what I had expected, and it seems like a reasonable price. The Apple M1 timings were super fast but perplexing. The sum of 10k random values took only 25.2 usec with the PR and 28.2 usec without the PR. Presumably, we're working right at the limit of optimizations and parallelization, where any change can knock it in and out of local minima.

Did you try timing the variant that uses 2Sum in place of Fast2Sum for computing the individual errors?

I didn't but will look at it. I first tried straight Kahan summation and then the Neumaier variant. When the latter proved to be both faster (because it broke up the data dependency chain) and more accurate, I was pleasantly surprised, took the win, and stopped exploring.

Also, I looked at the generated assembly and there isn't much fat to be cut. The fabs is a single fast instruction. The "branch" is just a predicated instruction, so we're effectively branchless. The additions and subtractions for the compensated error calculation aren't on the critical path because they don't have an immediate data dependency (the compensation gets applied outside the loop). As long as the processor has more than one floating-point port, that compensation calculation happens in parallel and is basically free. Our only expensive steps are the PyIter_Next() and a _Py_DECREF_SPECIALIZED call. The actual addition is the cheapest part of the overall operation. So I didn't feel the need to do further algorithmic explorations. That said, the story may be a bit different on Intel silicon.

LBB61_38:                       ;   in Loop: Header=BB61_22 Depth=1             
	ldr	d0, [x21, #16]  ; ./Include/cpython/floatobject.h:16:31   <-- Float unboxing runs in parallel with previous float computation
	fadd	d10, d8, d0     ; Python/bltinmodule.c:2553:30      <-- Only this is sequentially dependent between loops
	fabs	d1, d8          ; Python/bltinmodule.c:2554:21      <-- 2 cycle latency; 4 cycle throughput
	fabs	d2, d0                 
	fsub	d3, d8, d10     ; Python/bltinmodule.c:2554:21
	fadd	d3, d0, d3
	fsub	d0, d0, d10
	fadd	d0, d8, d0
	fcmp	d1, d2
	fcsel	d0, d0, d3, lt   ; <-- predicated instruction instead of a branch
	fadd	d9, d9, d0
	;DEBUG_VALUE: _Py_DECREF_SPECIALIZED:op <- $x21            
	ldr	x8, [x21]        ; The decref step runs in parallel with float computations and is not dependent on their outcome
	subs	x8, x8, #1
	str	x8, [x21]

Would we need to keep a way to get the naive, uncompensated sum? It's a need that I've occasionally had, and I've written code that relies on sum doing the obvious thing. I'll admit that it's likely not a common use-case, and people who need it can still do functools.reduce(operator.add, values, start).

I can mention the reduce alternative in the docs, so a user won't have to think of this on their own. That said, the numpy precedent tells us that this isn't a big concern — we don't see people asking how to bypass np.sum(), nor have they had need to document how to run consecutive additions in a loop.
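
For concreteness, the escape hatch mentioned above might look like this (a sketch; values and start are placeholders):

import functools
import operator

values = [1e308, 1e308, -1e308]   # placeholder data
start = 0.0

# Plain left-to-right addition with no compensation, matching the
# uncompensated behaviour of today's sum().
naive_total = functools.reduce(operator.add, values, start)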

@mdickinson (Member)

Thanks for the responses and the code updates for infinity and overflow. The code LGTM, and I really like the feature.

Again, the world was net happier for the problems that it no longer had and was happy to fix a few broken tests.

Agreed; this seems manageable.

The only remaining question for me is whether the extra accuracy is worth the performance hit. It would be useful to get some numbers on Linux/x64 and Windows/x64 (and maybe ARM, too).

I'm +1 on making this trade-off. (I'm personally not too bothered by Python's sum performance for large lists of floats, since I'd almost always be using NumPy instead of Python in cases where performance is likely to matter.) But I don't know how others might feel. Worth getting opinions from the faster-cpython folks and the scientific community?

@mdickinson (Member)

I was thinking that it has to be an implementation detail [...]

SGTM; I'd expect that PyPy at least would want to match the new behaviour, but that's up to them. I'd also want to leave the door open for implementations to use some other flavour of compensated summation if that works better for them, for whatever reason.

@mdickinson (Member)

I'd also want to leave the door open for implementations to use some other flavour of compensated summation if that works better for them

As a data point, it looks as though Java's DoubleSummaryStatistics currently implements plain old Kahan compensated summation (though the docs don't commit to any particular method being used), so a JVM-based version of Python could potentially want to use that.

@pochmann (Contributor) commented Dec 23, 2022

If I'm not mistaken, you're not using c if the floats/ints are followed by some other type, e.g., complex. Is that intentional?

I mean here:

result = PyFloat_FromDouble(f_result);

@rhettinger (Contributor, Author)

@mdickinson Okay. Marked as an implementation detail. We and other implementations are free to switch methodologies at any time.

@pochmann I'm not sure that it matters, but for consistency, I added c on the other path as well. Thanks for noticing.

@CAM-Gerlach (Member) commented Dec 23, 2022

FWIW, testing on Windows on an Ivy Bridge i7-3630QM 4c/8t laptop downclocked to 2.3 GHz, I get the following results with MSC v.1916 x64 (the Python under the test worktree is this branch, while main is the tip of the current main branch, which I assume is essentially the merge base here). I built with the default PCbuild/build.bat in x64 Release configuration; I didn't see any other optimization options besides PGO.

Testing the first set of @mdickinson's timeit calls, I get a 33.6%, 33.7% and 29.2% slowdown with this PR vs main:

$ PCbuild/amd64/python.exe -c "import sys; print(sys.version)"
3.12.0a3+ (heads/main-dirty:1ecfd1ebf1, Dec 23 2022, 16:48:43) [MSC v.1916 64 bit (AMD64)]
$ PCbuild/amd64/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 53.6 usec per loop
$ PCbuild/amd64/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 53.1 usec per loop
$ PCbuild/amd64/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 53.4 usec per loop
$ test/PCbuild/amd64/python.exe -c "import sys; print(sys.version)"
3.12.0a3+ (heads/compensated_sum-dirty:3cf9536ad7, Dec 23 2022, 16:51:47) [MSC v.1916 64 bit (AMD64)]
$ test/PCbuild/amd64/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 71.6 usec per loop
$ test/PCbuild/amd64/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 71 usec per loop
$ test/PCbuild/amd64/python.exe -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 69 usec per loop

And testing @mdickinson's compensated.py script, I get a 35% slowdown:

$ PCbuild/amd64/python.exe compensated.py
3.12.0a3+ (heads/main-dirty:1ecfd1ebf1, Dec 23 2022, 16:48:43) [MSC v.1916 64 bit (AMD64)]
error: 64.80813372135162 ulps
time: 3.568 ms per run
$ test/PCbuild/amd64/python.exe compensated.py
3.12.0a3+ (heads/compensated_sum-dirty:3cf9536ad7, Dec 23 2022, 16:51:47) [MSC v.1916 64 bit (AMD64)]
error: -0.19186627864837646 ulps
time: 4.804 ms per run

For what it's worth, on Python 3.11.0 and the latest NumPy 1.24.0 and dependencies (via Conda-Forge), np.sum is around 2.7x faster on a NumPy array vs the builtin sum on a list, both on 10 000 elements generated as above (with sum having very similar results to the above, and the two also being similar on Python 3.9 and older NumPy), and the difference grows to around 5x on 1 000 000 elements.

$ python -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 52.8 usec per loop
$ python -m timeit -s 'from random import expovariate as r' -s 'import numpy as np' -s 'd=np.array([r(1.0) for i in range(10_000)])' 'np.sum(d)'
20000 loops, best of 5: 19.8 usec per loop
$ python -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(1_000_000)]' 'sum(d)'
50 loops, best of 5: 5.71 msec per loop
$ python -m timeit -s 'from random import expovariate as r' -s 'import numpy as np' -s 'd=np.array([r(1.0) for i in range(1_000_000)])' 'np.sum(d)'
200 loops, best of 5: 1.16 msec per loop

In general, as a member of the scientific Python community (but merely one single datapoint), I would likewise typically be using NumPy arrays (or data structures built from them, e.g. DataFrames, xarrays, tensors, etc.) for at least the great majority of cases that would likely matter for performance when summing many elements. But I certainly can't speak for everyone in the general case, and it seems worth considering more datapoints (both benchmark results and use cases) before introducing a potential ≈20-35% regression to the performance of a key builtin on a fundamental datatype.

@hauntsaninja (Contributor) commented Dec 24, 2022

Mark had asked for more performance numbers. On the microbenchmark, I also see about a 30% slowdown on Linux with an AMD Zen 2 chip, with --enable-optimizations:

~/cpython 3e46f9fe05 λ ./python -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
5000 loops, best of 5: 55 usec per loop
# Ran 5 times, saw min 55, max 55.4, avg 55.2
~/cpython 1ecfd1ebf1 λ ./python -m timeit -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'sum(d)'
10000 loops, best of 5: 37.6 usec per loop
# Ran 5 times, saw min 37.3, max 37.8, avg 37.6

(I don't have an opinion on the tradeoff. The scientific computing I do at work is mostly not on CPUs. It looks like Raymond already considered the partial pairwise thing that numpy does)

@gpshead (Member) commented Dec 27, 2022

FWIW I think the trade-off here is fine, people who really want sum performance on a huge quantity of floats should be using numpy.

iritkatriel added a commit to iritkatriel/cpython that referenced this issue Dec 28, 2022

@CAM-Gerlach (Member) commented Jan 4, 2023

Just for reference, results on the same machine with math.fsum(), demonstrating it's nearly an order of magnitude (close to 10x, prior to this change) slower than the sum() builtin on the same microbenchmark as above:

$ PCbuild/amd64/python.exe -c "import sys; print(sys.version)"
3.12.0a3+ (heads/main-dirty:1ecfd1ebf1, Dec 23 2022, 16:48:43) [MSC v.1916 64 bit (AMD64)]
$ PCbuild/amd64/python.exe -m timeit -s 'from math import fsum' -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'fsum(d)'
500 loops, best of 5: 559 usec per loop
$ PCbuild/amd64/python.exe -m timeit -s 'from math import fsum' -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'fsum(d)'
500 loops, best of 5: 470 usec per loop
$ PCbuild/amd64/python.exe -m timeit -s 'from math import fsum' -s 'from random import expovariate as r' -s 'd=[r(1.0) for i in range(10_000)]' 'fsum(d)'
500 loops, best of 5: 464 usec per loop

carljm added a commit to carljm/cpython that referenced this issue Feb 20, 2023
carljm added a commit to carljm/cpython that referenced this issue Feb 22, 2023
miss-islington pushed a commit to miss-islington/cpython that referenced this issue Aug 8, 2023

@tim-one (Member) commented Oct 29, 2023

There is no negative exponent here. It's 1e16, not 1e-16. The question is which one of these two adjacent representable floats the sum rounds to:

>>> float(10**16 + 2).hex() # single rounding
'0x1.1c37937e08001p+53'
>>> float(10**16 + 4).hex() # double rounding
'0x1.1c37937e08002p+53'

@franklinvp commented Jun 28, 2024

This didn't need to go into the implementation of a built-in function applied to a basic type.

To me this is giving people what they want (not being surprised that floats are not the real numbers) instead of what they need (learning it, and, if they need operations with more precision, or as if there were more precision, or with correction, then searching for a function that explicitly does that).

Now sum() on floats is no longer adding them with the addition they come with.

@franklinvp commented Jun 29, 2024

Some consequences of the implementation

class R(float): pass
sum([0.0, 0.0, float(2**53), 1.0, 1.0, 0.0, 0.0])  # -> 9007199254740994.0 All the sums used the correction
sum([R(0.0), 0.0, float(2**53), 1.0, 1.0, 0.0, 0.0])  # -> 9007199254740994.0 Essentially all the sums used the correction
sum([0.0, R(0.0), float(2**53), 1.0, 1.0, 0.0, 0.0])  # -> 9007199254740992.0 Ordinary sum of float for pretty much the entire sum
sum([0.0, 0.0, float(2**53), 1.0, 1.0, 0.0, R(0.0)])  # -> 9007199254740994.0 Nearly all sums used the correction

Could the built-in function sum be kept with a simple definition, "adding from left to right with the addition the items define", and leave specialized algorithms to specialized functions?

The "however, that function isn't well known" regarding fsum can be solved by a note in the documentation of sum suggesting to use fsum, numpy.sum, or a function implementing this Kahan-like compensation.

This, as opposed to having sum's documentation only have this note

Changed in version 3.12: Summation of floats switched to an algorithm that gives higher accuracy on most builds.

That hides a lot of detail of what it really does, and making the description accurate would add quite a bit of complexity to the explanation: "Adds floats from left to right, using a compensated sum if the items from the beginning (except possibly the first) are floats. If there is a non-float (beyond the first item), the sum from that point on is the one the items define. If the sum of the first item and start is not a float, the compensated sum also stops; but if that sum is a float, then the sum is the compensated one."

Also, from a didactic point of view, letting people face the peculiarities of adding floats forces them to learn them. Then, if it is their choice to use a better algorithm, having it in a different function makes the code explicit about it.

@skirpichev (Member)

@franklinvp, I think this issue has enough arguments for this feature of sum(). See also #111933. If you really want to revive the discussion, you should probably start a discourse thread (on https://discuss.python.org/).

BTW, it's not just "adding from left to right with the addition the items define". Here is the docstring:

    Return the sum of a 'start' value (default: 0) plus an iterable of numbers

    When the iterable is empty, return the start value.
    This function is intended specifically for use with numeric values and may
    reject non-numeric types.

Sphinx docs:

Sums start and the items of an iterable from left to right and returns the total.
The iterable’s items are normally numbers, and the start value is not
allowed to be a string.

Neither specifies how to compute the sum of numbers, i.e. this shouldn't be just an equivalent of functools.reduce(operator.add, xs): these details are (and should be!) under the hood. If addition is associative, this difference doesn't matter. But for floating-point numbers it's not: we should choose a useful definition for the sum of numbers, not just pick one random order of parentheses.

@pochmann3 (Contributor)

If addition is associative, this difference doesn't matter

Associativity isn't enough; think for example of string addition, which is associative but "1" + "2" differs from "2" + "1". Or a case with floats where (a+b)+c = a+(b+c) but (a+c)+b differs:

a = c = 1e308
b = -a
print((a + b) + c)
print(a + (b + c))
print((a + c) + b)

Output (Attempt This Online!):

1e+308
1e+308
inf

@franklinvp commented Jun 30, 2024

@pochmann3 Be careful

with floats where (a+b)+c = a+(b+c)

that is not always satisfied.

(float(2**53) + float(1)) + float(1) == float(2**53) + (float(1) + float(1))  # False

@skirpichev #111933 is an example of what the new sum caters to: people who see float arithmetic as defective. That ticket just had the wrong expectations of what sum was (they expected associativity), which doesn't have to be expected of the new sum.

The documentation of sum does say "from left to right" explicitly.

And the explanation that it has for float is not adequate.

It is true that the documentation does not specify which sum is done from left to right. It is bizarre that for floats the new sum sometimes uses one kind of addition and sometimes another.

class R(float): pass
# After `R(0.0)` it is the ordinary sum of `float`; before it, it is a compensated sum.
sum([float(2**53), float(1.0), float(1.0), R(0.0), float(2**53), float(1.0), float(1.0)])

@skirpichev (Member)

think for example of string addition, which is associative but "1" + "2" differs from "2" + "1".

Well, I'm assuming numeric values ("This function is intended specifically for use with numeric values and may reject non-numeric types."), i.e. commutative addition. Strings are banned by sum(), but there is a way:

>>> sum([[1], [2]], start=[3])
[3, 1, 2]
>>> sum([[2], [1]], start=[3])
[3, 2, 1]

a case with floats where (a+b)+c = a+(b+c) but (a+c)+b differs

That's just another example of missing associativity: a+(b+c) is equal (by commutativity) to a+(c+b), which in general differs from (a+c)+b.

(float(2**53) + float(1)) + float(1)

Let's take this example. sum() now returns 9007199254740994.0 regardless of the order of the elements, which is the exact value of sum([2**53, 1, 1]), while (float(2**53) + 1.) + 1. is 9007199254740992.0 (the ulps error is 1, not zero).
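
A quick illustrative check of the order-independence claim on a 3.12+ build:

xs = [float(2**53), 1.0, 1.0]
print(sum(xs))        # 9007199254740994.0 (compensated)
print(sum(xs[::-1]))  # 9007199254740994.0 as well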

The documentation of sum does say "from left to right" explicitly.

Looking at the implementation, I don't see that it's violated. The code does sum "from left to right"; it's just not a simple result + item.

# After R(0.0) it is the ordinary sum of float; before it, it is a compensated sum.

Strictly speaking, R(0.0) is not a float, but an instance of its subclass.

Specialized code uses Py*_CheckExact() tests (roughly type(foo) is float); maybe that's a historical artifact and it's now safe to use just Py*_Check(). Such checks are more costly, however, especially for negative results.
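
A rough Python rendering of the difference between the two C-level checks (illustrative only; the subclass R mirrors the earlier examples):

class R(float):
    pass

x = R(0.0)
print(type(x) is float)      # False -- roughly what PyFloat_CheckExact tests
print(isinstance(x, float))  # True  -- roughly what PyFloat_Check tests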

@franklinvp

Strictly speaking, R(0.0) is not a float, but an instance of its subclass.

The items after it are floats and are added with the ordinary float addition. The items before it are also floats, but are added with a different sum.

The reason for not implementing sum like this is simple. Some users are missing a feature, and there is a way for no one to miss anything. Among the multiple ways Python is used, two stand out: replication of results from other languages, and education, since it is so simple looking and easy to read.

sum used to provide a terse, easy-to-read way to get the sum of floats associated from left to right, implemented in a lower-level language. The alternatives now, the for loop and functools.reduce, don't have those properties.

sum now partially hides some of the peculiarities of float arithmetic. Fewer people complaining about commutativity, I suppose, but a loss for education, since fewer people asking means fewer people learning.

Putting this compensated sum in some other function, nobody loses.

@skirpichev (Member)

The items after it are floats and are added with the ordinary float addition. The items before it are also floats, but are added with a different sum.

In short, the appearance of R(0.0) cancels the float-specific algorithm. This happened before too, as can be demonstrated by a benchmark:

sk@note:~ $ python3.11 -m timeit -s 'class F(float): pass' -s 'xs=[.1]*10' 'sum(xs)'
500000 loops, best of 5: 416 nsec per loop
sk@note:~ $ python3.11 -m timeit -s 'class F(float): pass' -s 'xs=[.1]*10;xs[3]=F(.1)' 'sum(xs)'
500000 loops, best of 5: 688 nsec per loop

But now the results also differ:

sk@note:~ $ python3.12 -c 'xs=[.1]*10;print(sum(xs))'
1.0
sk@note:~ $ python3.12 -c 'xs=[0.1]*10;xs[3]=type("F", (float,), {})(0.1);print(sum(xs))'
0.9999999999999999

As I said above, perhaps the specialization could be extended to subclasses of builtin types. But it's another issue.

I don't see how the current behaviour violates the docs. It's your interpretation that sum() should behave like reduce(add, xs), but that's not what is actually documented.

The alternatives now, the for loop and functools.reduce, don't have those properties.

I don't see why someone would want a fast but inaccurate sum.

Putting this compensated sum in some other function, nobody loses.

I think we are repeating already-given arguments. In the issue referenced above, #111933, several core devs rejected the idea of restoring the pre-3.12 behaviour.

If you wish to change that, try opening a discourse thread. BTW, now this would be a backward-compatibility break, sorry.

@franklinvp commented Jul 2, 2024

This happened before too

Timing does not prove anything regarding which operation is used to add an F (a float subclass) to a float. A proof requires looking at the implementation.

But now the results also differ:

Known. It is you who is repeating things.

perhaps the specialization could be extended to subclasses of builtin types

That makes the cons I described above worse. I hope they really don't do that.
At least now there is a hacky way to get sum to behave like a fast sum of floats using their own addition.

backward compatibility break

They clearly did not care about that when they introduced this change. They just got overenthusiastic about how easy it was to add this compensated sum to the existing implementation.

Anyway, I don't wish for anything. People do whatever they want. They can point out a thousand reasons why this is a good change, and I agree with probably all of them. I am only pointing out that this change affects some uses of Python, when it is possible to add this algorithm without affecting anyone.

@skirpichev (Member)

Timing does not prove anything [...] A proof requires looking at the implementation.

In this case the difference is too significant to be ignored. And of course you can verify this conclusion by looking at the sources.

At least now there is a hacky way to get sum to behave like a fast sum of floats using their own addition.

I'm afraid it's not:

$ python3.11 -m timeit -s 'class F(float): pass' -s 'f=F(0.1); xs=[f]*10' 'sum(xs)'
500000 loops, best of 5: 788 nsec per loop

They clearly did not care about that when they introduced this change.

I don't think so. The first comment by Mark raised the question of breaking backward compatibility. A fast alternative for the uncompensated sum was also discussed.

Anyway, I don't wish for anything.

Then what you are trying to do here?

@franklinvp

From 3.11

if (PyFloat_CheckExact(item)) {
    f_result += PyFloat_AS_DOUBLE(item);
    _Py_DECREF_SPECIALIZED(item, _PyFloat_ExactDealloc);
    continue;
}

@gpshead (Member) commented Jul 2, 2024

A long closed and implemented issue (this has already shipped) is not the place to be having a discussion. I'm going to lock the conversation here. If you want to discuss a feature to be implemented or believe there is a bug, bring it up on discuss.python.org or file a new issue.

@python python locked as resolved and limited conversation to collaborators Jul 2, 2024