DVR: On-demand recording, dynamic recording, recording and slicing based on conditions. #1577
I went on to try two methods, ingest and the publish-stream event exec, with the following results:

ingest
This method pulls the specified 720p stream and pushes it to the recording SRS. It works, but it cannot be configured dynamically or take templated parameters to cover all streams; each stream has to be configured separately, so it is not practical.

exec
This method pulls the 720p variant of each stream and pushes it to the recording SRS, and it works. However, in origin-cluster mode errors are reported even though the push itself succeeds. The error message is as follows:
Method 2 seems feasible but needs refinement: can the errors caused by inter-cluster communication be avoided? The test above was run on docker 3.0-a8, publishing the /streams/123 stream and transcoding it to 1080p and 720p.
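For reference, a rough sketch of the two configurations described above, based on SRS's documented ingest and exec directives. The recording server address rtmp://record-srs and the _720p stream-name suffix are placeholders made up for illustration, not taken from the actual setup; treat this as a sketch to adapt, not a tested configuration.

```
# Method 1: ingest -- static, one block per stream, no way to template the stream name.
vhost __defaultVhost__ {
    ingest record_123 {
        enabled     on;
        input {
            type    stream;
            url     rtmp://127.0.0.1:1935/streams/123_720p;
        }
        ffmpeg      ./objs/ffmpeg/bin/ffmpeg;
        engine {
            enabled off;    # no re-encode, just copy to the recording server
            output  rtmp://record-srs:1935/streams/123_720p;
        }
    }

    # Method 2: exec -- forked on publish; the [vhost]/[app]/[stream] variables
    # cover all streams, but note it fires for every publish on this vhost,
    # including the transcoded outputs, so a wrapper script may need to filter.
    exec {
        enabled     on;
        publish     ./objs/ffmpeg/bin/ffmpeg -f flv -i rtmp://127.0.0.1:[port]/[app]?vhost=[vhost]/[stream]_720p -c copy -f flv rtmp://record-srs:1935/[app]/[stream]_720p;
    }
}
```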
|
Dynamic recording: decide whether to record via callbacks, and it would be best to pass the stream's parameters along as well.
|
What does this mean? Custom recording that pulls streams based on the stream's on_publish callback?
|
Please refer to the FAQ for more information.
On-demand DVR, similar to on-demand Forward, is a hook that handles streams on demand; you can refer to the solution in #1342 (comment), or implement something similar yourself. However, DVR is more complex than Forward: Forward is stateless, while DVR has to deal with issues such as stream re-publishing and segment concatenation. Given that, it is better implemented with callbacks in conjunction with the business system, and is not suitable for SRS to support directly.
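As a rough illustration of the callback-driven approach, here is a minimal sketch using SRS's http_hooks directives; the backend URLs are hypothetical. The idea is that SRS only reports publish/unpublish events, and the business backend decides per stream whether to start a pull-and-record job (for example an ffmpeg process that pulls only the 720p rendition), handling re-publish and concatenation on its own side.

```
vhost __defaultVhost__ {
    http_hooks {
        enabled      on;
        # Fired when a stream is published; the JSON body carries vhost/app/stream
        # (and in recent versions the publish query string), so the backend can
        # decide whether and which rendition to record.
        on_publish   http://127.0.0.1:8085/api/v1/record/decide;
        # Fired when the stream stops, so the backend can stop or finalize the recording.
        on_unpublish http://127.0.0.1:8085/api/v1/record/stop;
    }
}
```

The publisher could then push to something like rtmp://origin/streams/123?record=720p and the backend would start recording only when the parameter asks for it; whether the query string is passed through to the hook depends on the SRS version, so that part needs verifying.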
|
Demand scenario:
After a user's live stream is transcoded into multiple resolutions, only one or a few specified resolutions need to be recorded and sliced. For example, there is no need to slice the ultra-high-definition rendition or to record the low-resolution one; recording a single high-resolution or transcoded stream is enough. Currently, if recording and slicing are done directly on the origin, every resolution is recorded and sliced according to the transcode configuration, which does not meet the requirement. Forwarding to another machine does not help either: the forwarded stream also contains all resolutions and there is no way to forward only a specific one. In other words, forward does not operate at the level of an individual transcode engine.
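To make the problem concrete, here is roughly what the origin looks like in this setup (engine names, bitrates and the omitted encoder parameters are placeholders): each transcode engine republishes its output back to the vhost, and dvr has no per-engine filter, so the source stream and every rendition get recorded.

```
vhost __defaultVhost__ {
    transcode {
        enabled     on;
        ffmpeg      ./objs/ffmpeg/bin/ffmpeg;
        engine 1080p {
            enabled   on;
            vcodec    libx264;
            vbitrate  2500;
            # ...other encoder params (vfps, vwidth, vheight, audio) omitted
            output    rtmp://127.0.0.1:[port]/[app]?vhost=[vhost]/[stream]_1080p;
        }
        engine 720p {
            enabled   on;
            vcodec    libx264;
            vbitrate  1200;
            # ...other encoder params omitted
            output    rtmp://127.0.0.1:[port]/[app]?vhost=[vhost]/[stream]_720p;
        }
    }
    dvr {
        enabled     on;
        # applies to every stream on the vhost: the source and both renditions above
        dvr_path    ./objs/nginx/html/[app]/[stream].[timestamp].flv;
    }
}
```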
My current crude solution is to transcode one additional stream and output it to another SRS instance that is configured only for recording; that instance then records only the desired resolution, and slicing works the same way.
This wastes CPU on a redundant encode just to produce the extra stream, and I am not sure whether it has any impact on the cluster, so I do not know what the correct approach is here.
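For comparison, the workaround just described might look roughly like the following; the record-srs host is again a made-up placeholder, and the duplicate encode in the extra engine is exactly the wasted CPU mentioned above.

```
# --- origin srs.conf: one extra engine pushes only the rendition to be recorded ---
vhost __defaultVhost__ {
    transcode {
        enabled     on;
        ffmpeg      ./objs/ffmpeg/bin/ffmpeg;
        engine rec720p {
            enabled   on;
            vcodec    libx264;
            vbitrate  1200;
            # ...other encoder params omitted; this duplicates the 720p encode
            output    rtmp://record-srs:1935/[app]/[stream]_720p;
        }
    }
}

# --- record-srs srs.conf: records and slices everything pushed to it ---
vhost __defaultVhost__ {
    dvr {
        enabled     on;
        dvr_path    ./objs/nginx/html/[app]/[stream].[timestamp].flv;
    }
    hls {
        enabled     on;
        hls_path    ./objs/nginx/html;
    }
}
```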
Expectation:
I would like to know whether the above requirement can be met: if it is not currently supported, what approach would achieve it better, or whether there is already an existing way to do this that I have overlooked. Any input or guidance would be greatly appreciated.