
faster damage processing - bandwidth constraints handling #1700

Closed
totaam opened this issue Nov 24, 2017 · 6 comments

totaam commented Nov 24, 2017

Follow-up from #999. These classes are becoming complicated and slow.

TODO:

  • run profiling again
  • merge video source? (we never use window source on its own anyway)
  • support multiple video regions?
  • cythonize, use strongly typed and faster deque / ring buffers instead of plain Python / NumPy structures
  • pre-calculate more values, like an ECU "engine map"
  • more gradual refresh when under bandwidth constraints and at low quality: the jump from lossy to lossless can use up too much bandwidth, so maybe refresh first at 80% quality before doing a true lossless pass
  • use more bandwidth? (the macOS client could use more quality?)
  • slowly updating windows should be penalized less
  • don't queue more frames for encoding after a congestion event (ok already?)
  • maybe keep track of the refresh compressed size?

See also #920: some things could be made faster on the GPU.
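The "more gradual refresh" item above can be sketched as a small quality ladder. This is a hypothetical illustration, not xpra's actual code: the function name and the step heuristic are invented, and the 80% threshold comes from the TODO item.

```python
# Hypothetical sketch of the "more gradual refresh" idea: instead of jumping
# straight from a lossy frame to a lossless refresh, step through intermediate
# quality levels so that each refresh packet stays small.

def next_refresh_quality(current_quality: int, lossless_threshold: int = 80) -> int:
    """Return the quality to use for the next refresh pass.

    Below the threshold, step up by roughly half the remaining gap
    (at least 5); once at or above it, go fully lossless (100).
    """
    if current_quality >= lossless_threshold:
        return 100
    step = max(5, (lossless_threshold - current_quality) // 2)
    return min(lossless_threshold, current_quality + step)

# Example: starting from a low-quality lossy frame at 30,
# the refresh quality climbs gradually instead of jumping to 100:
q = 30
ladder = []
while q < 100:
    q = next_refresh_quality(q)
    ladder.append(q)
# ladder == [55, 67, 73, 78, 80, 100]
```

Each intermediate pass re-encodes the same region at a slightly higher quality, so the bandwidth cost of reaching lossless is spread over several smaller updates instead of one large one.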


totaam commented Nov 26, 2017

See also #1761


totaam commented Feb 17, 2018

See also #1769 comment:1: maybe we should round up all screen updates to ensure we can always use color subsampling and video encoders? Or only do so past a certain size, to limit the cost?
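Video encoders using 4:2:0 chroma subsampling generally need even widths and heights, so "rounding up" a screen update means expanding the damage rectangle to an even-aligned one. The sketch below is illustrative only: the function name and signature are invented, not xpra's API.

```python
# Hypothetical illustration of rounding a damage rectangle outwards so that
# encoders with 4:2:0 chroma subsampling (which need even dimensions) can
# always be used. The rectangle is expanded, never shrunk, then clamped
# to the window / screen bounds.

def round_rect_for_subsampling(x: int, y: int, w: int, h: int,
                               max_w: int, max_h: int,
                               align: int = 2) -> tuple:
    """Expand (x, y, w, h) outwards to the given alignment, clamped to bounds."""
    # move the origin back to the nearest aligned coordinate
    rx = x - (x % align)
    ry = y - (y % align)
    # grow the size to still cover the original area, then round up to alignment
    rw = w + (x - rx)
    rh = h + (y - ry)
    rw += (-rw) % align
    rh += (-rh) % align
    # never exceed the window / screen bounds
    rw = min(rw, max_w - rx)
    rh = min(rh, max_h - ry)
    return rx, ry, rw, rh

# An odd-sized update at an odd offset becomes an even-aligned one:
# round_rect_for_subsampling(101, 50, 33, 17, 1920, 1080) -> (100, 50, 34, 18)
```

The "only past a certain size" variant would simply skip this rounding for small rectangles, where the relative cost of re-encoding a few extra pixel rows is highest.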


totaam commented Mar 8, 2018

2018-03-08 17:59:55: antoine uploaded file encoding-selection.png (236.2 KiB)

profiling encoding selection (encoding-selection.png)


totaam commented Mar 8, 2018

After much profiling, it turns out that encoding selection is actually pretty fast already (see above).
So we are better off spending extra time choosing the correct encoding rather than trying to save time there: r18669.
Other micro-improvements: r18667, r18668.

See also #1299 comment:6: we seem to be processing the damage events fast enough (~0.25ms for do_damage), but maybe we're scheduling things too slowly when we get those damage storms?


totaam commented Mar 10, 2018

For the record, I've used this command to generate the call graphs:

python2 ./tests/scripts/pycallgraph -i damage -- start --start-child="xterm -ls" --no-daemon
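For comparison, the same kind of name-filtered timing can be approximated with the standard library's cProfile. This is a generic, self-contained sketch (the `damage_hotspot` function is a stand-in, not xpra code), mirroring pycallgraph's `-i damage` include filter with a `print_stats` restriction:

```python
import cProfile
import io
import pstats

def damage_hotspot(n: int = 10000) -> int:
    # stand-in for the real damage-processing code being profiled
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
damage_hotspot()
profiler.disable()

# print only the entries whose name matches "damage",
# similar to pycallgraph's "-i damage" include filter
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats("damage")
report = buf.getvalue()
```

Unlike pycallgraph this produces a flat text report rather than a call graph image, but it needs no third-party dependencies.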

Minor related fix: r18685.

Re-scheduling, as the profiling has shown that this is not a huge overhead after all.


totaam commented Aug 28, 2023

Closing in favour of #3978

totaam closed this as completed Aug 28, 2023