
[Windows] speedup connections #1679

Merged
merged 5 commits into master from win-cons-speedup on Feb 5, 2020

Conversation

giampaolo
Owner

...a variant of #1676.

@giampaolo
Owner Author

giampaolo commented Feb 3, 2020

@mlabonte I am benchmarking this PR with:

import psutil, socket, time

# open 5000 listening sockets so the system connections table is non-trivial
ls = []
for x in range(5000):
    s = socket.socket()
    s.bind(('127.0.0.1', 0))
    s.listen(4)
    ls.append(s)

# time 100 consecutive net_connections() calls
t = time.time()
for x in range(100):
    psutil.net_connections('tcp')
print(time.time() - t)

for s in ls:
    s.close()

The results I get are worse than #1676's: without the patch I get 3.35 secs, with the patch 3.00 secs, which is around a 10% speedup.

@mlabonte

mlabonte commented Feb 3, 2020

I don't understand it; I did the same in my workspace and 1679 is slightly faster than 1676. I even found one case where it's definitively faster, but I still have no idea why, as both PRs do the same operations.

(pr1679) $ python -m timeit "import psutil; psutil.Process().connections(kind='tcp')"
1000 loops, best of 3: 439 usec per loop

(pr1676) $ python -m timeit "import psutil; psutil.Process().connections(kind='tcp')"
1000 loops, best of 3: 636 usec per loop

Maybe the second call is faster than the first in the case of IPv6.
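(A hedged sketch of one way to rule out that first-call effect: warm up once before timing, then measure steady-state cost. The warm-up call and the timeit wrapper below are illustrative, not part of either PR.)

import timeit
import psutil

p = psutil.Process()
p.connections(kind='tcp')   # warm-up call: pay any one-time setup cost here

# time the steady-state cost of repeated calls
elapsed = timeit.timeit(lambda: p.connections(kind='tcp'), number=1000)
print('%.0f usec per loop' % (elapsed / 1000 * 1e6))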

@giampaolo
Owner Author

giampaolo commented Feb 3, 2020

Can you get results similar to #1676 (comment)? This one:

Worst case (end of a quick ramp-up from 0 K to 15 K connections):
$ python -m timeit -n 1 -r 1 "import psutil; psutil.Process().connections(kind='tcp4')"
1 loops, best of 1: 25.5 sec per loop <--- psutil==5.6.7
1 loops, best of 1: 259 msec per loop <--- pull request

...that looks like too big a speedup. Are you sure you were measuring the right thing?

@mlabonte

mlabonte commented Feb 4, 2020

Yes, I get similar results comparing master to 1679. It's a particular case: starting from approx. 9 K connections, if the rate of new connections is high enough, the table is always bigger than it was on the previous call, so we stay stuck in the retry loop. I could get a single call to block for minutes while ramping up to 60 K connections.
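(The pattern being described is the size-then-fetch retry loop around the Windows connections table; psutil's real code is C calling GetExtendedTcpTable(). The Python helpers below, query_size and read_table, are purely illustrative, and the bounded variant shows one common mitigation, over-allocating with head-room and capping retries, which is not necessarily this PR's exact change.)

# Illustrative only: models the size/retry contract of GetExtendedTcpTable().
def fetch_table_unbounded(query_size, read_table):
    size = query_size()              # bytes needed at this instant
    while True:
        buf = bytearray(size)
        needed = read_table(buf)     # 0 on success, or the new larger size
        if needed == 0:              # if the table grew while we allocated
            return buf
        size = needed                # under a fast ramp-up this can retry
                                     # for minutes, as reported above

def fetch_table_bounded(query_size, read_table, slack=2.0, retries=10):
    # one common mitigation (not necessarily this PR's exact change):
    # over-allocate with head-room and cap the number of retries
    size = int(query_size() * slack)
    for _ in range(retries):
        buf = bytearray(size)
        needed = read_table(buf)
        if needed == 0:
            return buf
        size = int(needed * slack)
    raise OSError("connections table kept growing; giving up")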

@giampaolo
Owner Author

Whatever measurement I try, I always get around a 10% speedup. Anyway, it's fine. Merging.

@giampaolo giampaolo merged commit 3d29963 into master Feb 5, 2020
@giampaolo giampaolo deleted the win-cons-speedup branch February 5, 2020 16:24
@mlabonte

mlabonte commented Feb 7, 2020

Thanks for merging this! :-) Do you have an ETA for releasing it on PyPI?
