Convoy effect with I/O bound threads and New GIL #52194
Background

Observed Behavior of I/O Operations

For system calls that complete immediately, a thread quickly releases and then immediately reacquires the GIL around the call.

Behavior of the new GIL

Although this scheme solves the problem of CPU-bound threads thrashing on the GIL, it heavily penalizes threads whose I/O operations complete without blocking. It should be noted that the behavior described also occurs in Python 2, but the new GIL makes it much worse.

Example

```python
# iotest.py
import time
import threading
from socket import *

# CPU-bound thread (just hammers the CPU)
def spin():
    while True:
        pass

# I/O-bound thread (an echo TCP server)
def echo_server():
    s = socket(AF_INET, SOCK_STREAM)
    s.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    s.bind(("", 15000))
    s.listen(1)
    while True:
        c, a = s.accept()
        while True:
            data = c.recv(8192)
            if not data:
                break
            c.sendall(data)
        c.close()
    s.close()

# Launch the CPU-bound thread
t1 = threading.Thread(target=spin)
t1.daemon = True
t1.start()

# Run the I/O server
echo_server()
```

Here is a benchmark program that runs as a client for the echo_server():
```python
# echoclient.py
from socket import *
import time

CHUNKSIZE = 16384
NUMMESSAGES = 640      # Total of 10MB

# Dummy message
msg = b"x" * CHUNKSIZE

# Connect and send messages
s = socket(AF_INET, SOCK_STREAM)
s.connect(("", 15000))
start = time.time()
for n in range(NUMMESSAGES):
    s.sendall(msg)
    bytes_recv = len(msg)
    # Get the response back
    while bytes_recv > 0:
        data = s.recv(bytes_recv)
        bytes_recv -= len(data)
s.close()
end = time.time()
print("%0.3f seconds (%0.3f bytes/sec)" % (end - start,
      (CHUNKSIZE * NUMMESSAGES) / (end - start)))
```

Performance Results

If you run the iotest.py program using Python 2.6.4 and execute the client with % python echoclient.py, then switch iotest.py to Python 3.2 and rerun, you will notice that there is a factor 12 performance difference between the two runs.

Now modify the iotest.py program so that there are 2 CPU-bound threads spinning:

```python
t2 = threading.Thread(target=spin)
t2.daemon = True
t2.start()
```
Now, repeat the above tests. For Python 2.6.4, the performance actually improves! (Yes, it improves. That's left as an exercise for the reader to explain.)

Now, switch the iotest.py server to Python 3.2 and retry. Notice how the addition of one CPU-bound thread makes the time go up dramatically.

Now, disable all but one of the CPU cores and try the test again. Here, you see that it runs about 500 times faster than with two cores.

What's causing this behavior?

Look at the server's inner loop:

```python
while True:
    data = c.recv(8192)
    if not data:
        break
    c.sendall(data)
```

The I/O operations recv() and sendall() always release the GIL when they execute. However, when these calls complete immediately (the common case here, because the operating system has already buffered the data), the thread doing the I/O must fight a running CPU-bound thread to get the GIL back, and under the new GIL it can wait a full switch interval for every single operation.

Is it worth fixing?

In heavily loaded I/O bound applications such as servers with many simultaneous connections, this overhead adds up quickly.

How to fix?

The effect can be minimized by setting the switch interval to a really small value, but that simply trades the I/O penalty for the old GIL-contention overhead on CPU-bound threads. |
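For anyone who wants to experiment with the switch-interval workaround mentioned above, the knob is exposed as sys.setswitchinterval() in 3.2; a quick sketch (the 0.0005 value is an arbitrary example, not a recommendation):

```python
import sys

print(sys.getswitchinterval())  # 0.005 by default: a waiting thread can stall 5 ms
sys.setswitchinterval(0.0005)   # shorter interval: better I/O latency, but more
                                # GIL-handoff overhead for CPU-bound threads
```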
Would the idea of priority-GIL requests that Antoine had in his original patch solve this issue? |
Just a quick test under Linux (on a dual quad core machine).
As already said, the "spinning endlessly" loop is a best case for thread switching latency in 2.x, because the opcodes are very short. If each opcode in the loop has an average duration of 20 ns, and with the default check interval of 100, the GIL gets speculatively released every 2 us (yes, microseconds). That's why I suggested trying more "realistic" workloads, as in ccbench. Also, as I told you, there might also be interactions with the various timing heuristics the TCP stack of the kernel applies. It would be nice to test with UDP. That said, the observations are interesting. |
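For concreteness, Antoine's back-of-the-envelope arithmetic for the 2.x behavior (the 20 ns/opcode figure is his assumption):

```python
opcode_ns = 20        # assumed average opcode duration in the spin loop
check_interval = 100  # Python 2.x default, cf. sys.getcheckinterval()
print("%d ns between speculative GIL releases" % (opcode_ns * check_interval))
# -> 2000 ns, i.e. the GIL is offered up every 2 microseconds
```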
The comment on the CPU-bound workload is valid--it is definitely true that Python 2.6 results will degrade as the workload of each tick is increased. Maybe a better way to interpret those results is as a baseline of what kind of I/O performance is possible if there is a quick I/O response time. However, ignoring that and the comparison between Python 2.6 and 3.2, there is still a serious performance issue with I/O in 3.2. For example, the dramatic decrease in I/O performance as there are more CPU-bound threads competing, and the fact that there is a huge performance gain when all but one CPU core is disabled. I tried the test using UDP packets and get virtually the exact same behavior described. For instance, echoing 10MB (sent in 8k UDP packets) takes about 0.6s in Python 2.6 and 12.0s in Python 3.2. The time shoots up to more than 40s if there are two CPU-bound threads. The problem being described really doesn't have anything to do with TCP vs. UDP or any part of the network stack. It has everything to do with how the operating system buffers I/O requests and how I/O operations such as sends and receives complete immediately without blocking depending on system buffer characteristics (e.g., if there is space in the send buffer, a send will return immediately without blocking). The fact that the GIL is released when it's not necessary in these cases is really the source of the problem. |
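For reference, a minimal sketch of what the UDP flavor of the echo server looks like (a hypothetical reconstruction; Dave's actual benchmark files are attached further down):

```python
# UDP analogue of iotest.py's echo server
from socket import socket, AF_INET, SOCK_DGRAM

s = socket(AF_INET, SOCK_DGRAM)
s.bind(("", 15000))
while True:
    data, addr = s.recvfrom(8192)  # returns immediately once a datagram is queued
    s.sendto(data, addr)           # returns immediately while buffer space remains
```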
We could try not to release the GIL when socket methods are called on a non-blocking socket. Regardless, I've re-run the tests on the Linux machine, with two spinning threads.
(and as someone mentioned, the "priority requests" mechanism which was in the original new GIL patch might improve things. It's not an ideal time for me to test, right now :-)) |
I'm attaching Dave's new UDP-based benchmarks, which eliminate the dependency on the TCP stack's behaviour. |
And here is an experimental patch which enables the "priority requests" mechanism which was in the original new GIL patch. "Experimental" because it only enables them on a couple of socket methods (enough to affect the benchmarks). Here are the UDP benchmark results (with 2 background threads), and for patched py3k with 8 background threads (benchmarks run on an 8-core Linux machine). |
See also bpo-7993 for a patch adding a similar bandwidth benchmark to ccbench. |
Here is an improved version of the priority requests patch. |
I posted some details about the priority GIL modifications I showed during my PyCON open-space session here: http://www.dabeaz.com/blog/2010/02/revisiting-thread-priorities-and-new.html I am attaching the .tar.gz file with modifications if anyone wants to look at them. Note: I would not consider this to be solid enough to be any kind of official patch. People should only look at it for the purposes of experimentation and for coming up with something better. |
Here is another patch based on a slightly different approach. Instead of being explicitly triggered in I/O methods, priority requests are decided based on the computed "interactiveness" of a thread. Interactiveness itself is a simple saturated counter (incremented when the GIL is dropped without request, decremented when the GIL is dropped after a request). Benchmark numbers are basically the same as with gilprio2.patch. |
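A toy rendering of the interactiveness bookkeeping described above (a simplification for illustration; the real logic lives in the C-level GIL code of the gilinter patch, and the saturation bound here is invented):

```python
MAX_SCORE = 10  # saturation bound (illustrative value)

def update_interactiveness(score, drop_requested):
    # GIL dropped voluntarily (e.g. entering blocking I/O): more interactive.
    # GIL dropped because another thread requested it: more CPU-bound.
    if drop_requested:
        return max(score - 1, 0)
    return min(score + 1, MAX_SCORE)
```

A thread whose score stays high would then be treated as interactive and granted a priority request when it asks for the GIL.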
The UDP benchmarks look even better on Windows, comparing vanilla py3k, gilprio2-patched py3k, and gilinter-patched py3k. |
Here's a short benchmark for everyone who thinks that my original benchmark was somehow related to TCP behavior. This one doesn't even involve sockets:

```python
from threading import Thread
import time

def writenums(f, n):
    start = time.time()
    for x in range(n):
        f.write("%d\n" % x)
    end = time.time()
    print(end - start)

def spin():
    while True:
        pass

t1 = Thread(target=spin)
t1.daemon = True
# Uncomment to add a thread
#t1.start()

writenums(open("/tmp/nums", "w"), 1000000)
```

If I run this on my Macbook with no threads, it takes about 1.05 seconds. If the one spinning thread is turned on, the time jumps to about 4.5 seconds. What you're seeing is that the spinning thread unfairly hogs the CPU. If I use my own patched version (new GIL with priorities), the threaded version drops back down to about 1.10 seconds. I have not tried it with Antoine's latest patch, but would expect similar results as he is also using priorities. Just to be clear, this issue is not specific to sockets or TCP. |
On some platforms the difference is not so important. With Python 3.2a0 (py3k:78982M, Mar 15 2010, 15:40:42): 0.67s without the spinning thread. |
With line buffering, I see the issue.

```python
# Modified version of the test case, with bufsize=1
from threading import Thread
import time

def writenums(f, n):
    start = time.time()
    for x in range(n):
        f.write("%d\n" % x)
    end = time.time()
    print(end - start)

def spin():
    while True:
        pass

t1 = Thread(target=spin)
t1.daemon = True
# Uncomment to add a thread
#t1.start()

# With line buffering
writenums(open("/tmp/nums", "w", 1), 1000000)
```
|
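The reason bufsize matters so much: the GIL is only released around the actual OS write(), and line buffering forces one of those per line instead of one per ~8 KB block, multiplying the number of GIL handoffs by orders of magnitude. A rough illustration (not from the thread):

```python
# Default block buffering: one flush -- and one GIL release -- every ~8 KB.
f_buffered = open("/tmp/nums", "w")
# Line buffering: one flush, and one chance to lose the GIL fight, per line.
f_linebuf = open("/tmp/nums", "w", 1)
```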
Whoa, that's pretty diabolically evil with bufsize=1. On my machine, doing that just absolutely kills the performance (13 seconds without the spinning thread versus 557 seconds with the thread!). Or, put another way, the writing performance drops from about 0.5 Mbyte/sec down to 12 Kbytes/sec with the thread. With my priority GIL, the time is about 29 seconds with the thread (consistent with your experiment using the gilinter patch). One thing to think about with this example is the proper priority of I/O handling generally. What if, instead of a file, this example code was writing on a pipe to another process? For that, you would probably want that I/O thread to be able to blast its data to the receiver as fast as it reasonably can so that it can be done with it and get back to other work. In terms of practical relevance, this test again represents a simple situation where computation is overlapped with I/O processing. Perhaps the program has just computed a big result which is now being streamed somewhere else by a background thread. In the meantime, the program is now working on computing the next result (the spinning thread). Think queues, actors, or any number of similar things---there are programs that try to operate like this. |
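A sketch of the overlapped compute-and-I/O pattern Dave describes (hypothetical shape, names invented; one thread streams out the previous result while the main thread computes the next one):

```python
import threading
import queue

out_q = queue.Queue()

def writer(f):
    # Background I/O thread: drains results as they are produced
    while True:
        chunk = out_q.get()
        if chunk is None:      # sentinel: no more results
            break
        f.write(chunk)         # competes with the compute thread for the GIL

with open("/tmp/results", "w") as f:
    t = threading.Thread(target=writer, args=(f,))
    t.start()
    for n in range(3):
        result = "%d\n" % sum(range(1_000_000))  # "compute the next result"
        out_q.put(result)
    out_q.put(None)
    t.join()
```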
Almost forgot--if I turn off one of the CPU cores, the time drops from 557 seconds to 32 seconds. Gotta love it! |
We should be careful with statements such as "you would probably want that I/O thread to be able to blast its data as fast as it reasonably can". While IO responsiveness and throughput can be an important measure of performance, they are not the only one; the right tradeoff depends on the situation. |
Oh the situation definitely matters. Although, in the big picture, most programmers would probably prefer to have fast I/O performance over slow I/O performance :-). |
Yes, of course. But that's not the point. We could try to improve GIL behaviour in every conceivable situation, but it's not clear the added complexity would be worth it. |
I absolutely agree 100% that it is not worth trying to fix the GIL for every conceivable situation (although if you could, I wouldn't complain). To me, there are really only two scenarios worth worrying about.
As for everything else, it's probably not worth worrying about so much. If someone is only doing I/O (e.g., a web server), their code is going to behave about the same as before (although maybe slightly better under heavy load if there's less GIL contention). Situations where someone intentionally sets up multiple long-running CPU-bound threads seem pretty unlikely---the very presence of the GIL wouldn't make that kind of programming model attractive in the first place, so why would they do it? |
You know, I almost wonder whether this whole issue could be fixed by just adding a user-callable function to optionally set a thread priority number. For example: sys.setpriority(n) Modify the new GIL code so that it checks the priority of the currently running thread against the priority of the thread that wants the GIL. If the running thread has lower priority, it immediately drops the GIL. Other than having this added preemption, do nothing else---just throw it all back to the user to come up with the proper "priorities." If there was something like this, it would completely fix the overlapped compute and I/O problem I mentioned. I'd just set a higher priority on the background I/O threads and be done with it. Problem solved. Ok, it's only a thought. |
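To make the proposal concrete, here is how the suggested API might look from user code (hypothetical throughout: sys.setpriority() was never added to CPython):

```python
import sys

def io_worker(sock):
    sys.setpriority(10)        # hypothetical: boost this I/O thread
    while True:
        data = sock.recv(8192)
        if not data:
            break
        sock.sendall(data)     # a lower-priority GIL holder yields promptly

def cruncher():
    sys.setpriority(0)         # hypothetical: background/default priority
    while True:
        pass
```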
I tried Florent's modification to the write test and did not see the effect on my machine with an updated revision of Python 3.2. I am running Ubuntu Karmic 64 bit. According to the pthreads documentation, the libc condition variable wakes threads according to scheduling policy, not FIFO order. I upload a quick and dirty patch (linux-7946.patch) to the new GIL just to reflect this by avoiding the timed waits. On my machine it behaves reasonably both with the TCP server and with the write test, but so does unpatched Python 3.2. I noticed a high context switching rate with Dave's priority GIL - with both tests it goes above 40K/s context switches. |
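For readers following along, a toy model of the timed-wait handoff in the new GIL that this patch tinkers with (heavily simplified; the real code is C, in the ceval/GIL machinery):

```python
import threading

SWITCH_INTERVAL = 0.005            # new GIL default: 5 ms

gil = threading.Lock()
drop_request = threading.Event()

def take_gil():
    # A thread returning from I/O lands here. If a CPU-bound thread holds
    # the GIL, acquire() times out and we set the drop request -- so the
    # waiter pays up to a full interval per I/O call: the convoy effect.
    while not gil.acquire(timeout=SWITCH_INTERVAL):
        drop_request.set()
    drop_request.clear()

def eval_loop_checkpoint():
    # The running thread polls between opcodes and yields only when asked.
    if drop_request.is_set():
        gil.release()
        take_gil()
```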
I updated the patch with a small fix and increased the ticks countdown-to-release considerably. This seems to help the OS classify CPU bound threads as such and actually improves IO performance. |
I upload bfs.patch, to be applied to an updated Python 3.2 source tree. The patch replaces the GIL with a scheduler. The scheduler is a simplified implementation of the recent kernel Brain F**k Scheduler by the Linux hacker Con Kolivas: http://ck.kolivas.org/patches/bfs/sched-BFS.txt Con Kolivas is the hacker whose work inspired the current CFS scheduler of the Linux kernel. On my Core 2 Duo laptop it performs as follows compared to the other patches.
The cpued test spins 3 threads, 2 of them pure Python and the 3rd doing time.sleep(0) every ~1ms:

```python
import threading
import time

def foo(n):
    while n > 0:
        'y' in 'x' * n
        n -= 1

def bar(sleep, name):
    for i in range(100):
        print(name, i, sleep)
        for j in range(300):
            foo(1500)
        if sleep:
            time.sleep(0)

t0 = threading.Thread(target=bar, args=(False, 't0'))
t1 = threading.Thread(target=bar, args=(False, 't1'))
t2 = threading.Thread(target=bar, args=(True, 't2-interactive'))
list(map(threading.Thread.start, [t0, t1, t2]))
list(map(threading.Thread.join, [t0, t1, t2]))
```

The patch is still work in progress.
The scheduler is very simple, straight forward and flexible, and it addresses the tuning problems discussed recently. I think it can be a good replacement to the GIL, since Python really needs a scheduler, not a lock. |
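To make the scheduling idea concrete, a toy model of earliest-virtual-deadline scheduling in the spirit of BFS (an illustration only: the bookkeeping is simplified and all names are invented, nothing here is taken from bfs.patch):

```python
import heapq
import itertools

SLICE_NS = 5_000_000   # nominal timeslice, mirroring the 5 ms switch interval
_tiebreak = itertools.count()
_queue = []            # heap of (virtual_deadline, tiebreak, thread_name)
_clock = 0

def enqueue(name, interactive):
    # Threads that used little of their slice (e.g. just woke from I/O)
    # receive a nearer virtual deadline, so they are picked first.
    deadline = _clock + (SLICE_NS // 10 if interactive else SLICE_NS)
    heapq.heappush(_queue, (deadline, next(_tiebreak), name))

def pick():
    global _clock
    deadline, _, name = heapq.heappop(_queue)
    _clock = max(_clock, deadline)
    return name

enqueue("cpu-hog", interactive=False)
enqueue("io-thread", interactive=True)
print(pick())  # -> io-thread: earliest deadline wins despite arriving later
```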
Attached is ccbench-osx.log, made today on OS X on the latest svn checkout. Hope it helps. |
Updated bfs.patch with BSD license and copyright notice. Current version patches cleanly and builds with Python revision svn r81201. bpo-7946 and proposed patches were put on hold indefinitely following this python-dev discussion: http://mail.python.org/pipermail/python-dev/2010-May/100115.html I would like to thank the Python developer community and in particular David and Antoine for a most interesting ride. Any party interested in sponsoring further development or porting the patch to Python 2.x is welcome to contact me directly at nir@winpdb.org Nir |
Thanks for all your work Nir! I personally think the BFS approach is the best we've seen yet for this problem! Having read the thread you linked to in full (ignoring the tangents, bikeshedding and mudslinging that went on there), it sounds like the general consensus is that we should take thread scheduling changes slowly and let the existing new implementation bake in the 3.2 release. That puts this issue as a possibility for 3.3 if users demonstrate real world application problems in 3.2. (Personally, I'd say it is already obvious that there are problems and we should go ahead with your BFS-based approach, but realistically we're still better off in 3.2 than we were in 3.1 and 2.x as is.) |
The issue bpo-12822 asks to use monotonic clocks when available. |
What happened to this bug and patch? |
Not much :) The patch is complex and the issue hasn't proved to be critical in practice.

On 15/07/2014 09:52, Dima Tisnek wrote: |
Celery 5 is going async, and in order to isolate the main event loop from task execution, the tasks are going to be executed in a different thread with its own event loop. This thread may or may not be CPU bound. This patch should help a lot. I like Nir's approach a lot (although I haven't looked into the patch itself yet). It's pretty novel. I'm willing to help. |
Note that PyPy has implemented a GIL which does not suffer from this problem, possibly using a simpler approach than the patches here do. The idea is described and implemented here: https://bitbucket.org/pypy/pypy/src/default/rpython/translator/c/src/thread_gil.c |
FYI I can verify that the original benchmark is still valid on Python 3.7.3. I'll need somebody to decide how we're going to fix this problem. |
I suggest: Simultaneously, it'd also be interesting to see someone create an alternate PR using a PyPy-inspired GIL implementation, as that could prove to be a lot easier to maintain. Let's make a data driven decision here. People lost interest in actually landing a fix to this issue in the past because it wasn't impacting their daily lives or applications (or if it was, they already adopted a workaround). Someone being interested enough to do the work to justify it going in is all it should take to move forward. |
(unassigning as it doesn't make sense to assign to anyone unless they're actually working on it) |
FWIW: I think David's cited behavior proves that the GIL is de facto a scheduler. And, in case you missed it, scheduling is a hard problem, and not a solved problem. There are increasingly complicated schedulers with new approaches and heuristics. They're getting better and better... as well as more and more complex. BFS is an example of that trend from ten years ago. But the Linux kernel has been shy about merging it, not sure why (technical deficiency? licensing problem? personality conflict? the name?). I think Python's current thread scheduling approach is almost certainly too simple. My suspicion is that we should have a much more elaborate scheduler--which hopefully would fix most (but not all!) of these sorts of pathological performance regressions. But that's going to be a big patch, and it's going to need a champion, and that champion would have to be more educated about it than I am, so I don't think it's gonna be me. |
About nine years ago, I stood in front of a room of Python developers, including many core developers, and gave a talk about the problem described in this issue. It included some live demos and discussion of a possible fix. https://www.youtube.com/watch?v=fwzPF2JLoeU Based on subsequent interest, I think it's safe to say that this issue will never be fixed. Probably best to close this issue. |
It's a known issue and has been outlined very well and still comes up from time to time in real world applications, which tend to see this issue and Dave's presentation and just work around it in any way possible for their system and move on with life. Keeping it open even if nobody is actively working on it makes sense to me as it is still a known issue that could be resolved should someone have the motivation to complete the work. |
My 2c as a Python user: Back in 2010, I used multithreading extensively, both for concurrency and performance. Others used multiprocessing or just shelled out. People talked about using **the other** core, or sometimes the other socket on a server. Now in 2020, I'm using asyncio exclusively. Some colleagues occasionally still shell out 🙈. No one talks about using all cores on a single machine; rather, we'd spin up dozens of identical containers, which are randomly distributed across N machines, and the synchronisation is offloaded to some database (e.g. atomic ops in redis; transactions in sql). In my imagination, I see future Python as single-threaded (from the user's point of view, that is, without a multithreading API) that features speculative out-of-order async task execution (using hardware threads, maybe pinned) that's invisible to the user. |
If someone wants to close this issue, I suggest writing a short section in the Python documentation to give some highlights on the available options and strategies to maximize performance and list the drawbacks of each method: for example, asyncio, threads, multiprocessing, and distributed computing.
These architectures are not exclusive. asyncio can use multiple threads and be distributed in multiple processes. It would be bad to go too deep into the technical details, but I think that we can describe some advantages and drawbacks which are common to all platforms. |
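One of the strategies such a section could show: keeping CPU-bound work off the event-loop thread via an executor (a minimal sketch; a ProcessPoolExecutor sidesteps the GIL entirely, at the cost of pickling):

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # CPU-bound work that would otherwise starve the event loop's I/O
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        print(await loop.run_in_executor(pool, crunch, 10_000_000))

if __name__ == "__main__":
    asyncio.run(main())
```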
Catching up on the comments on this, it seems like nobody has enough certainty to say it will work well enough. In Linux, the scheduler is pluggable, which lets other non-default schedulers be shipped and tried in the real world.
Similarly I think this needs more testing than it will get living here as a bug. If, like Linux the scheduler was pluggable, it could be shipped and enabled by real users that were brave enough and data could be collected. |
In case someone finds it useful, I've written a blog post on how to visualize the GIL; there is further discussion in the comments (or at maartenbreddels/fastblog#3). |
Please see also faster-cpython/ideas#328 for a proposal for a simple (much simpler than BFS) GIL scheduler, only allocating the GIL between runnable O/S threads waiting to have ownership of the GIL, and using the O/S scheduler for scheduling the threads. |
I think that we should focus our efforts on removing the GIL, now that we have a concrete proposal for doing so. This issue would probably hurt Celery, since some users run it with a thread-based task pool. Instead, I suggest we document this with a warning in the relevant place so that users are aware of the problem.

On Mon, Mar 21, 2022, 20:32 Guido van Rossum <report@bugs.python.org> wrote: |
Is this really a thing? Something that is definitely happening in a reasonable timescale? Or are there some big compatibility issues likely to rear up and at best create delays, and at worst derail it completely? Can someone give me some links about this please? |
Start here: https://docs.google.com/document/d/18CXhDb1ygxg-YXNBJNzfzZsDFosB5e6BfnXLlejd9l0/edit AFAICT the SC hasn't made up their minds about this. |
Taking all of the above points together, I think that there is still merit in considering the pros and cons of a GIL scheduler. |