Release GIL while calling WebPAnimDecoderGetNext
#7782
Conversation
Testing with

    import timeit

    from PIL import Image

    im = Image.open('Tests/images/iss634.webp')

    def decode():
        im._decoder.reset()
        for i in range(im.n_frames):
            im._decoder.get_next()

    print(timeit.timeit(decode, number=1000))

I find it hard to say that this is definitively faster than main.
Hi, I don't expect it to be faster than main when running single-threaded; the benefit of releasing the GIL is that other threads can run while the decode call executes.
I think you'll want a test with code like

    import concurrent.futures
    import timeit

    from PIL import Image

    images = [Image.open('Tests/images/iss634.webp') for _ in range(100)]

    def decode(im):
        im._decoder.reset()
        for i in range(im.n_frames):
            im._decoder.get_next()

    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
        pool.map(decode, images)

(untested but hopefully you get the idea)
You're right, that test code does demonstrate a substantial speed increase.
Co-authored-by: Andrew Murray <3112309+radarhere@users.noreply.github.com>
Updated with your code suggestion... I think you will have to re-approve the workflow run.
Please could you post a summary of the speed increase? And let's include this in the release notes.
I've pushed a commit with release notes, based on https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html#release-gil-when-converting-images-using-matrix-operations
WebPAnimDecoderGetNext is a relatively expensive pure-C call that currently holds the Python GIL. Release it! (Haven't tested this branch but it seemed like a straightforward change.)
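For context, the change amounts to wrapping the libwebp call in CPython's GIL-release macros. A minimal sketch of that pattern is below; it is not the actual _webp.c patch, and the function and variable names (get_next_frame, decoder, frame_bytes) are assumptions for illustration.

    /* Sketch only: release the GIL around the pure-C libwebp call so other
     * Python threads can run while a frame is being decoded. */
    #include <Python.h>
    #include <webp/demux.h>

    static PyObject *
    get_next_frame(WebPAnimDecoder *decoder) {   /* hypothetical helper */
        uint8_t *frame_bytes = NULL;
        int timestamp = 0;
        int ok;

        /* WebPAnimDecoderGetNext does not touch Python objects or interpreter
         * state, so the GIL can be dropped for the duration of the call. */
        Py_BEGIN_ALLOW_THREADS
        ok = WebPAnimDecoderGetNext(decoder, &frame_bytes, &timestamp);
        Py_END_ALLOW_THREADS

        if (!ok) {
            PyErr_SetString(PyExc_OSError, "failed to decode next WebP frame");
            return NULL;
        }
        /* Real code would copy frame_bytes into a Python object here;
         * returning the timestamp keeps the sketch self-contained. */
        return PyLong_FromLong(timestamp);
    }

With the GIL released around the call, the multi-threaded benchmark above can decode several images concurrently, which is where the reported speed increase comes from.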