Quadratic destruction complexity introduced in #1436 (issue #1837)
Comments
Great benchmarking job! I didn't know that the Google Benchmark library can measure and report computational complexity too. That comes in handy. I agree that we should remove the reserve calls.
Can you explain why these reserve calls are hurting performance?
@jaredgrubb Wild guess, maybe calculating the object/array size is not a constant-time operation?
No, vector and map have constant-time size functions. I think the Benchmark tool is not detecting complexity correctly. In all the iteration cases, the runtime goes down by a constant 14% ... if your patch were improving complexity, then we would expect the improvement to scale with the number of elements.

At a high level, these reserve calls look like they should help, or at least be harmless. The fact that they're actually slowing things down is very interesting! But we should try to explain why this is occurring before we accept this change. I would hate to see, for example, that we're working around a GCC 9.2/libstdc++ bug, and that when this test runs on other targets it actually gets worse (as "common wisdom" suggests it should).
@jaredgrubb If you reserve in a loop with a slowly increasing size, you'll get an allocation pattern like this: 1, 2, 3, 4, 5, 6, 7, 8, ... (a reallocation on nearly every iteration, because reserve typically allocates exactly the requested capacity).

If you don't reserve, then you'll get an allocation pattern like this, assuming a growth factor of 4: 1, 4, 16, 64, 256, ...
Each time you call reserve(), you have an allocation, several default constructors (which may be trivial), several copies, several destructors (which may be trivial), and a deallocation. The only time you should reserve inside a loop is when you know the final size up front, or you know that it's a few items of large size, so you'll get a better allocation pattern than the built-in growth factor.
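A minimal sketch that makes this concrete (not the library's code; the exact counts depend on the standard library implementation) by counting reallocations and element moves for the two patterns:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Count reallocations and total element moves while appending n ints,
// either calling reserve(size() + 1) before every push_back or not.
static void measure(std::size_t n, bool reserve_each_iteration) {
    std::vector<int> v;
    std::size_t reallocations = 0;
    std::size_t elements_moved = 0;
    for (std::size_t i = 0; i < n; ++i) {
        const std::size_t old_capacity = v.capacity();
        if (reserve_each_iteration) {
            // Typical implementations reallocate to exactly the requested
            // capacity, which defeats the geometric growth of push_back.
            v.reserve(v.size() + 1);
        }
        v.push_back(static_cast<int>(i));
        if (v.capacity() != old_capacity) {
            ++reallocations;
            elements_moved += i;  // all i existing elements move to the new buffer
        }
    }
    std::cout << (reserve_each_iteration ? "reserve in loop:  " : "plain push_back: ")
              << reallocations << " reallocations, "
              << elements_moved << " element moves\n";
}

int main() {
    measure(10000, true);   // roughly n reallocations and n^2/2 moves: quadratic
    measure(10000, false);  // roughly log(n) reallocations and n moves: linear
}
```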
Here is a comment from STL, who is the maintainer of the Visual C++ Standard Library.
Further elaboration of the above: https://www.reddit.com/r/cpp/comments/1qhloy/fun_and_danger_with_stdvectorreserve/cdd7dcx/
Ah! That makes a lot of sense, thanks for sharing that. I learned something. (Also, I looked back at the numbers to see what I missed. I made a mistake in my "14%..." claim, as I was comparing the last two benchmark sets, which are clearly both marked as linear. Looking at the first two, the improvement is clearly scaling as claimed ... so I take all that back.)
Original issue report

Recently merged PR #1436 introduced quadratic complexity in the destruction of a json object due to the use of std::vector::reserve in a loop.

Benchmark code:
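A minimal sketch of this kind of measurement, assuming Google Benchmark and nlohmann::json; the value shape (a flat object of N integer members) and the size ranges here are illustrative, not the exact code from the report. It builds the object outside the timed region, times only its destruction, and lets the library fit a complexity curve:

```cpp
#include <benchmark/benchmark.h>
#include <nlohmann/json.hpp>
#include <string>

// Build a json object with state.range(0) members outside the timed region
// and measure only its destruction.
static void BM_JsonDestroy(benchmark::State& state) {
    const auto n = static_cast<std::size_t>(state.range(0));
    for (auto _ : state) {
        state.PauseTiming();
        auto* j = new nlohmann::json(nlohmann::json::object());
        for (std::size_t i = 0; i < n; ++i) {
            (*j)[std::to_string(i)] = i;
        }
        state.ResumeTiming();
        delete j;  // only the destructor (plus one deallocation) is timed
    }
    state.SetComplexityN(state.range(0));
}
BENCHMARK(BM_JsonDestroy)->RangeMultiplier(2)->Range(1 << 10, 1 << 17)->Complexity();

BENCHMARK_MAIN();
```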
Expected behavior: linear complexity.

Actual behavior: quadratic complexity with 3.7.2. The same version with the reserve calls removed exhibits linear complexity, as does the previous release, 3.7.1.

Compiler and operating system: Ubuntu 19.10 with GCC 9.2.

Library version: release 3.7.2 (not the develop branch).