
Improve performance with many ships in one system #5166

Merged 5 commits into pioneerspacesim:master on May 1, 2021

Conversation

@sturnclaw (Member) commented Apr 29, 2021

This is primarily merging in the work I did to make the tradeships PR (#4984) feasible from a performance perspective. I've added in a few other commits from unmerged branches that made sense and cleaned up the codebase a little bit.

This prevents model animation interpolation from being run when active animations aren't being played, which was the number one performance hog with many ships in a system. It also cleans up a fair amount of C++-to-Lua overhead that was being run every StaticUpdate tick.

This PR also removes unneeded code in the TabView and remembers the last tab the user was on in the Comms and Info windows.

- Only tick animations that are marked active. More work needs to be done in Ship.cpp to support this, but it should enable us to have far more ships in one system.
- Add better profiling to identify frame slowdowns when we have lots of ships (2000+).
- Cache equipment capabilities between ship equipment updates.
- Improve Ship::StaticUpdate from 70mc to 40mc with 2500 ships. Animation::Interpolate was getting called for all ships, and was responsible for 1/3rd of the frame with 2500 ships. Also stop profiling Frame::GetFrame, as the function is an order of magnitude faster than the profiling overhead.
- Flatten the Scenegraph LoadNodes tree to avoid unneeded callgraph depth.
- Remove several useless (and counterproductive) profiling points in tiny Lua bindings.
- Add wall-clock time to profiler reports (a sketch follows this list).
- Fix bug in debug-compilation of Frame.cpp.
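
For reference, a minimal sketch of what reporting wall-clock time for a profiled zone can look like; this is illustrative only and assumes a hypothetical `ScopedZoneTimer`, not the actual Pioneer profiler code:

```cpp
// Illustrative sketch, not the actual Pioneer profiler: measure wall-clock time
// for a profiled zone with std::chrono so reports can show real elapsed time
// next to whatever cycle/sample counters the profiler already collects.
#include <chrono>
#include <cstdio>

struct ScopedZoneTimer {
	const char *name;
	std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();

	~ScopedZoneTimer()
	{
		using namespace std::chrono;
		const double ms = duration<double, std::milli>(steady_clock::now() - start).count();
		std::printf("%s: %.3f ms (wall clock)\n", name, ms);
	}
};

void StaticUpdateAllShips()
{
	ScopedZoneTimer timer{ "Ship::StaticUpdate (all ships)" };
	// ... per-ship update work would run here ...
}

int main()
{
	StaticUpdateAllShips();
}
```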
@Gliese852 (Contributor) commented
Did you mean #4984?

@fluffyfreak (Contributor) commented Apr 30, 2021

I was going to ask about this as I was looking at it (your branch) the other day 👍

@sturnclaw (Member, Author) commented

> Did you mean #4984?

I do in fact! Sorry about that, was a tired typo.

@fluffyfreak thanks for being willing to review this - it should be fairly straightforward. I've added a bitvector of active animation indices to replace the naive for-all loop when calling UpdateAnimations (even though technically there's no blending between multiple animations, I wanted to preserve the ability to have multiple animations active at once), and the landing gear animation is now aggressively deactivated.
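
A minimal sketch of the shape of that change; the names here are hypothetical and the real SceneGraph Model/Animation classes differ:

```cpp
// Hypothetical sketch of a "bitvector of active animations": ships with no
// active animations skip interpolation entirely. Assumes at most 32 animations
// per model so a single uint32_t mask suffices.
#include <cstdint>
#include <vector>

struct Animation {
	double progress = 0.0;
	double duration = 1.0;
	void Interpolate() { /* expensive keyframe blending over the node tree */ }
	bool Finished() const { return progress >= duration; }
};

class Model {
public:
	std::vector<Animation> animations;
	uint32_t activeMask = 0; // bit i set means animations[i] needs ticking

	void SetAnimationActive(size_t i, bool active)
	{
		if (active) activeMask |= (1u << i);
		else activeMask &= ~(1u << i);
	}

	// Replaces the old "interpolate every animation on every ship" loop.
	void UpdateAnimations()
	{
		if (activeMask == 0) return; // idle ships pay almost nothing
		for (size_t i = 0; i < animations.size(); ++i) {
			if (!(activeMask & (1u << i))) continue;
			animations[i].Interpolate();
			// "Aggressively deactivate": e.g. the landing-gear animation is
			// switched off as soon as it reaches its end state.
			if (animations[i].Finished()) SetAnimationActive(i, false);
		}
	}
};
```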

The other major change is moving a number of Properties calls from running every static update to only running when equipment is modified on a ship, with the values cached between calls. This made a fairly major difference in frame time, especially for larger numbers of ships.
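
A hedged sketch of that caching pattern, with illustrative names rather than Pioneer's actual Ship/Properties API: capability totals are recomputed (and would be published to the Lua property system) only when equipment changes, while the StaticUpdate hot path just reads cached values:

```cpp
// Illustrative only: derive capability values from equipment once, on change,
// instead of recomputing them and crossing into Lua every StaticUpdate tick.
#include <vector>

struct Equipment {
	double thrusterBonus = 0.0;
	double cargoCapacity = 0.0;
};

class Ship {
public:
	void AddEquipment(const Equipment &e)
	{
		m_equipment.push_back(e);
		UpdateEquipmentStats(); // recompute (and re-publish) only here
	}

	void StaticUpdate(float /*timeStep*/)
	{
		// Hot path: read cached values; no property lookups or Lua calls here.
		ApplyThrust(m_cachedThrusterBonus);
	}

private:
	void UpdateEquipmentStats()
	{
		m_cachedThrusterBonus = 0.0;
		m_cachedCargoCapacity = 0.0;
		for (const Equipment &e : m_equipment) {
			m_cachedThrusterBonus += e.thrusterBonus;
			m_cachedCargoCapacity += e.cargoCapacity;
		}
		// This is also where the values would be pushed to the Lua property
		// system, once per equipment change rather than once per tick.
	}

	void ApplyThrust(double /*bonus*/) {}

	std::vector<Equipment> m_equipment;
	double m_cachedThrusterBonus = 0.0;
	double m_cachedCargoCapacity = 0.0;
};
```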

There might be a small speedup from using an intrinsic to retrieve the next active bit index from the list of active animations; however, since I was unable to find a decent cross-platform solution without pulling in additional libraries, I've left it as a simple for-loop. Given that we currently have 0..1 animations (active or otherwise) per model, it's an extremely minor expense compared to the cost of running animation interpolation.
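
To illustrate the trade-off being weighed (assumed helper functions, not code from this PR): a portable loop over the mask versus a count-trailing-zeros intrinsic, which is exactly where the cross-platform wrinkle comes in:

```cpp
// Sketch of the two options for "find the next active animation index".
// With only 0..1 animations per model, the plain loop is perfectly adequate.
// C++20's <bit> header would also offer a portable std::countr_zero.
#include <cstdint>

// Portable: scan for the lowest set bit.
inline int NextActiveIndexLoop(uint32_t mask)
{
	for (int i = 0; i < 32; ++i)
		if (mask & (1u << i)) return i;
	return -1;
}

// Compiler intrinsic (GCC/Clang); MSVC would need _BitScanForward instead,
// which is the cross-platform inconvenience mentioned above.
inline int NextActiveIndexIntrinsic(uint32_t mask)
{
#if defined(__GNUC__) || defined(__clang__)
	return mask ? __builtin_ctz(mask) : -1;
#else
	return NextActiveIndexLoop(mask);
#endif
}
```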

@fluffyfreak (Contributor) left a review:

Good looking code as always 👍
Tested on Windows using VS2019, saving & loading, hyperspacing etc, all works good.

@sturnclaw merged commit 68cfc2e into pioneerspacesim:master on May 1, 2021
@sturnclaw deleted the shipfest branch on May 1, 2021 at 21:50