
Add RTSP features to cudacodec::VideoReader #3247

Merged — 5 commits merged into opencv:4.x on Jun 2, 2022

Conversation

@cudawarped (Contributor) commented May 6, 2022

Add the two features below to cudacodec::VideoReader to help when streaming from live sources.

  1. Lift the limit on the maximum number of unparsed packets if streaming from a source over UDP, see cv2.cudacodec.createVideoReader error when input is a video stream #3225 (comment)
  2. Internally drop frames if the rate at which frames are requested by nextFrame()/grab() is less than the source FPS. Although this is something a user would eventually accommodate themselves (by reading at the appropriate rate and choosing which packets to discard), it would be useful for this to be "automatic" until they have implemented that functionality.
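The frame-dropping behavior in (2) can be illustrated with a hypothetical sketch (this is a plain-Python simulation, not the actual cudacodec C++ implementation; the class and flag names are illustrative): when the consumer drains frames more slowly than the source produces them, the oldest undelivered frames are discarded so the consumer always sees recent frames instead of falling ever further behind a live stream.

```python
from collections import deque

class DroppingFrameQueue:
    """Bounded queue that discards the oldest frame when full,
    mimicking an allow-frame-drop policy for live sources."""
    def __init__(self, capacity, allow_frame_drop=True):
        self.capacity = capacity
        self.allow_frame_drop = allow_frame_drop
        self._frames = deque()

    def enqueue(self, frame):
        if len(self._frames) >= self.capacity:
            if not self.allow_frame_drop:
                return False  # producer would have to wait (blocking) instead
            self._frames.popleft()  # drop the oldest frame to keep up
        self._frames.append(frame)
        return True

    def dequeue(self):
        return self._frames.popleft() if self._frames else None

# The source produces 10 frames before the slow consumer drains any;
# with capacity 4, frames 0..5 are dropped and only the latest survive.
q = DroppingFrameQueue(capacity=4)
for frame_idx in range(10):
    q.enqueue(frame_idx)
delivered = [q.dequeue() for _ in range(4)]
print(delivered)  # [6, 7, 8, 9]
```

With `allow_frame_drop=False` the queue instead refuses new frames when full, which is the behavior a user would want when decoding a file, where no frame should ever be skipped.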

Testing - currently the additional tests only check that the new parameters have been set inside VideoReader and do not verify the functionality of (1) and (2). This is because testing (1) requires a new test video file (I am not sure if that is overkill for this small feature) and testing (2) requires a live RTSP source. I can easily include an extra test file if required, but I am not sure how I can simulate a live RTSP source?

  • I agree to contribute to the project under Apache 2 License.
  • To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
  • The PR is proposed to the proper branch
  • There is a reference to the original bug report and related work
  • There is accuracy test, performance test and test data in opencv_extra repository, if applicable
    Patch to opencv_extra has the same branch name.
  • The feature is well documented and sample code can be built with the project CMake

Review threads (resolved):
  modules/cudacodec/test/test_video.cpp
  modules/cudacodec/include/opencv2/cudacodec.hpp (outdated)
  modules/cudacodec/src/frame_queue.cpp (outdated)
@asmorkalov (Contributor) left a comment


Looks good to me in general, apart from the force flag in init; I propose removing it entirely.
Tested on Ubuntu 18.04 with CUDA 10.2, NVIDIA Video Codec SDK 11.1.

Review thread (resolved): modules/cudacodec/src/frame_queue.cpp (outdated)
@cudawarped (Contributor, Author)

No, you are correct. Force is not used and can be removed, because it could be dangerous to have the facility to initialize a frame queue object twice. If required in the future, it may be better to have a specific reinitialization routine which performs safety checks, in case the size of the queue needs to be altered mid-decode.
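The safety concern above can be sketched as follows (a hypothetical Python simulation, not the actual C++ frame queue; `FrameQueue` and `init` are illustrative names): instead of taking a force flag, the queue simply refuses a second initialization, so a future reinitialization routine would be the only sanctioned path to resizing it.

```python
class FrameQueue:
    """Queue whose backing storage may only be set up once."""
    def __init__(self):
        self._size = 0
        self._initialized = False

    def init(self, size):
        if self._initialized:
            # Reinitializing mid-decode could invalidate frames that
            # another thread or object is still using, so fail loudly.
            raise RuntimeError("FrameQueue is already initialized")
        self._size = size
        self._initialized = True

q = FrameQueue()
q.init(32)       # first initialization succeeds
try:
    q.init(64)   # second initialization is rejected
except RuntimeError as e:
    print(e)     # FrameQueue is already initialized
```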

@cudawarped force-pushed the videoreader_add_rtsp_feature branch from 9a2ad10 to 95fd837 on May 28, 2022 10:03
@alalek merged commit b2904b9 into opencv:4.x on Jun 2, 2022
hakaboom pushed a commit to hakaboom/opencv_contrib that referenced this pull request Jul 1, 2022
…eature

Add RTSP features to cudacodec::VideoReader

* Add live video source enhancements, e.g. RTSP from IP cameras.
  Add error logs.

* Fix type.

* Change badly named flag.

* Alter live source flag everywhere to indicate what it does, not what it is for, which should be left up to the documentation.

* Prevent frame queue object from being reinitialized, which could be unsafe if another thread and/or object is using it.
@alalek mentioned this pull request Aug 21, 2022
3 participants