
Slow encoding/decoding by comparison with libsnmp #44

Open
philmayers opened this issue Mar 12, 2017 · 4 comments

philmayers commented Mar 12, 2017

We have a set of bespoke applications for polling thousands of devices in parallel, using Twisted and the old/unmaintained libsnmp.

We recently looked at porting this to Python 3 and moving to pysnmp, but performance was not acceptable for our needs. Pretty much all the time seems to be spent in encoding/decoding when I inspect using profile.
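That inspection can be done with nothing more than the stdlib profiler pointed at the decode loop; a minimal sketch along those lines (run_decode_benchmark is a hypothetical stand-in for whatever drives the application's decode path, not a real function from our code):

```python
# Minimal profiling sketch: run the decode workload under cProfile and show
# the functions where cumulative time is spent. run_decode_benchmark() is a
# hypothetical stand-in for the application's decode loop.
import cProfile
import pstats

cProfile.run('run_decode_benchmark()', 'decode.prof')
stats = pstats.Stats('decode.prof')
stats.sort_stats('cumulative').print_stats(20)
```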

https://gist.github.com/philmayers/67b9300d8fb7282481a1a6af5ed45818

The above gist is an example script which just decodes the same fake SNMPv2c PDU in a tight loop, measuring with timeit. Example results I get on my desktop under CPython:

timeit for libsnmp 10000 iterations per-call 0.336565ms
timeit for pysnmp 10000 iterations per-call 1.943247ms

I see similar differences under pypy although both times are obviously much better:

timeit for libsnmp 10000 iterations per-call 0.091585ms
timeit for pysnmp 10000 iterations per-call 0.306807ms

Under pypy with a larger iteration count, libsnmp improves dramatically (~3x faster) whereas pysnmp improves far less (~1.5x faster):

timeit for libsnmp 100000 iterations per-call 0.027657ms
timeit for pysnmp 100000 iterations per-call 0.200674ms
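For reference, the pysnmp side of such a benchmark could look roughly like the sketch below. This is a reconstruction rather than the gist script itself: it builds one message with pysnmp's low-level v1arch api module, serializes it with pyasn1's BER encoder, then times repeated decoding; the OID and community string are arbitrary placeholders.

```python
# Rough reconstruction of the decode benchmark (not the gist script): build one
# fake SNMPv2c message with pysnmp's low-level v1arch API, BER-encode it once,
# then time decoding of the serialized bytes in a tight loop.
import timeit

from pyasn1.codec.ber import decoder, encoder
from pysnmp.proto import api

pMod = api.protoModules[api.protoVersion2c]

# Build the message once; the exact PDU contents are placeholders.
pdu = pMod.GetRequestPDU()
pMod.apiPDU.setDefaults(pdu)
pMod.apiPDU.setVarBinds(pdu, [('1.3.6.1.2.1.1.1.0', pMod.Null(''))])

msg = pMod.Message()
pMod.apiMessage.setDefaults(msg)
pMod.apiMessage.setCommunity(msg, 'public')
pMod.apiMessage.setPDU(msg, pdu)

wire = encoder.encode(msg)  # serialized bytes, decoded repeatedly below

def decode_once():
    decoder.decode(wire, asn1Spec=pMod.Message())

iterations = 10000
total = timeit.timeit(decode_once, number=iterations)
print('timeit for pysnmp %d iterations per-call %fms'
      % (iterations, total / iterations * 1000))
```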

I suspect the performance difference here comes from libsnmp having a much more primitive ASN.1 implementation and being far more monolithic - fewer function calls, less flexibility - so it's simply doing a lot less work, and it's also easier for pypy to optimise as a hotspot.

For the time being, we'll probably fork libsnmp and try moving that to Python 3, or remain on 2.7 with libsnmp, but I thought this info might be of interest.

Many thanks for your hard work on pysnmp!

etingof (Owner) commented Mar 12, 2017

Thank you for raising this concern! The good news is that this is being addressed by the ongoing overhaul of pyasn1 -- the library that powers the ASN.1 de/serialization used by pysnmp.

Which pyasn1 version are you running the benchmarks with?

Could you run your comparison with pysnmp on top of the latest pyasn1 taken from master? Is there any difference?

philmayers (Author) commented

Interesting - sounds like this is already being worked on; good news.

The tests I did were with pyasn1 0.2.3 - pip freeze output:

appdirs==1.4.3
libsnmp==2.0.5
packaging==16.8
ply==3.10
pyasn1==0.2.3
pycryptodome==3.4.5
pyparsing==2.2.0
pysmi==0.0.7
pysnmp==4.3.4
six==1.10.0

I'll try to re-test with master pyasn1 and report back.
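One quick way to make sure the git checkout is the copy actually being imported, rather than the released package, is a check like the one below (generic stdlib/setuptools calls, not part of the gist):

```python
# Sanity check before re-running the benchmark: print which pyasn1 the
# interpreter imports and which installed version it reports.
import pyasn1
import pkg_resources

print(pyasn1.__file__)                                   # import path actually in use
print(pkg_resources.get_distribution('pyasn1').version)  # installed distribution version
```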

philmayers (Author) commented

Slightly better using pyasn1 dc3f323 on CPython 2.7:

timeit for libsnmp 10000 iterations per-call 0.341169ms
timeit for pysnmp 10000 iterations per-call 1.629884ms

...versus the released 0.2.3

timeit for libsnmp 10000 iterations per-call 0.334658ms
timeit for pysnmp 10000 iterations per-call 1.929743ms

Will test with pypy

philmayers (Author) commented

The difference is more pronounced under pypy, but both pypy and cpython still seem to have a lot of room for improvement.

pyasn1 master on pypy

timeit for libsnmp 10000 iterations per-call 0.067455ms
timeit for pysnmp 10000 iterations per-call 0.190661ms

versus pyasn1 released on pypy

timeit for libsnmp 10000 iterations per-call 0.066620ms
timeit for pysnmp 10000 iterations per-call 0.289979ms
