Example no-site flag for subinterpreters module #34

Open
wants to merge 3 commits into base: per-interpreter-gil-new
Conversation

@tonybaloney commented Apr 19, 2023

This is just example code, but with a no-site flag propagating down to the interpreter config, subinterpreters are 50% faster to create, because init_import_site() takes 50% of the execution time of _Py_InterpreterNewFromConfig.

[screenshot 2023-04-19 at 19:07:36]

For cases where users want site imported in the main interpreter but don't need it in the subinterpreters, having a flag would be very helpful.

[screenshot 2023-04-19 at 19:54:07]
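To make the intended use concrete, here is a minimal sketch (the site= keyword is the flag proposed here; create, run_string and destroy are the existing _xxsubinterpreters functions):

import _xxsubinterpreters as subinterpreters

# The main interpreter keeps its normal site setup; the worker
# interpreter skips init_import_site() via the proposed site=False flag.
sid = subinterpreters.create(site=False)
subinterpreters.run_string(sid, "result = sum(range(10))")
subinterpreters.destroy(sid)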

I've created a benchmark to compare subinterpreters, threading and multiprocessing (skip the other two to see just the subinterpreter comparison).

If you like this idea, I'm happy to submit a proper PR with tests.

import pyperf
from multiprocessing import Process
from threading import Thread
import _xxsubinterpreters as subinterpreters

def f():
    ...


def bench_threading(n):
    # Code to launch specific model
    for _ in range(n):
        t = Thread(target=f)
        t.start()
        t.join()

def bench_subinterpreters(n, site=True):
    # Code to launch specific model
    for _ in range(n):
        sid = subinterpreters.create(site=site)
        subinterpreters.run_string(sid, "")

def bench_multiprocessing(n):
    # Code to launch specific model
    for _ in range(n):
        t = Process(target=f)
        t.start()
        t.join()

if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.metadata['description'] = "Benchmark execution models"
    n = 100
    runner.bench_func('threading', bench_threading, n)
    runner.bench_func('subinterpreters', bench_subinterpreters, n)
    runner.bench_func('subinterpreters_nosite', bench_subinterpreters, n, False)
    runner.bench_func('multiprocessing', bench_multiprocessing, n)
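
To compare two builds, the script can be run with pyperf's standard options (e.g. python bench_interp.py -o results.json, where the file and script names are just placeholders) and the JSON outputs compared with python -m pyperf compare_to.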
