Receiving multiple remote tracks/streams #144
Yes, this is by design due to lack of resources: for v1.0 we did the simple thing of supporting a single remote track per peer connection. For v2.0 we will support multi-track for all media (audio and video, local and remote). Currently on …
Aah okay, it's good to know the timeline for when multi-track will be available for remotes. In the meantime I will try using separate peer connections for each new remote connected after the first one.

One thing which could be a problem for the additional peer connections is that the SDP would need to be set to receive-only, i.e. I only want to receive video from the remote; I don't want to duplicate sending my local video out multiple times. I don't see any way in the current API to do that. I can access the raw local SDP string when the LocalSdpReadytoSend event is fired and modify the raw SDP, but it looks like the SetLocalDescription call is done internally and thus won't use my modified raw SDP string.

Just a sidenote that on the project's GitHub README.md one of the bullets is:

Which led me to believe that remote multi-track is supported. Friendly suggestion to maybe add a note that it will be available in v2.0 :)

P.S. As this SDK gets more traction, which I really think it will, there will definitely be more engagement from the community, so I hope you get a whole bunch more support/resources ;)
Thinking about the above workaround: if I create a new peer connection for each additional remote track and I don't call …
Yes, that should work.
Hi Jerome,

Just leaving a quick update that I've implemented the workaround, i.e. for every remote peer except the first one, I create a new peer connection instance to receive the remote stream. The key was to not call …

Cheers,
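For other readers, here is a minimal sketch of that workaround against the v1.0 C# API. The `config` object and the rendering logic are placeholders; the exact signaling wiring (LocalSdpReadytoSend, IceCandidateReadytoSend) depends on your own signaling solution:

```csharp
// Sketch only: one full send/receive connection for the first remote peer,
// and one receive-only connection for each additional remote peer.

// First remote peer: send and receive.
var mainPc = new PeerConnection();
await mainPc.InitializeAsync(config); // 'config' is your PeerConnectionConfiguration
await mainPc.AddLocalAudioTrackAsync();
await mainPc.AddLocalVideoTrackAsync();

// Each additional remote peer: receive-only.
// The key is to NOT add any local track, so the generated SDP
// only negotiates receiving media from that remote peer.
var recvOnlyPc = new PeerConnection();
await recvOnlyPc.InitializeAsync(config);
recvOnlyPc.I420RemoteVideoFrameReady += frame =>
{
    // Render the extra remote peer's video feed here.
};
// Hook up signaling as usual, but never call
// AddLocalVideoTrackAsync()/AddLocalAudioTrackAsync() on this instance.
```

This keeps the local webcam attached to a single connection while the extra connections only consume remote media.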
Hi @drejx, glad to hear that worked, and thanks for the feedback on how you did it, I am sure this can help other users! Hopefully we can land proper multi-track support soon, although that proved more difficult than expected and I am still working through technical issues and understanding the details of how Unified Plan works. I'll post an update when it's ready. |
Hi @drejx, I am basically trying to achieve the same as you: a multi-peer conference. My current structure looks like this:

The problem I faced is that for the local … For the remote … I have managed once to get the state to "Connected" and "Completed" when initiating the WebRTC workflow for the local … The …

Do you maybe have any idea why the IceConnectionState doesn't change for me in the described case, or is there a logical mistake?

FYI: On the backend side I use Kurento.NET, which opens up the WebRTC endpoints for each user and generates the remote ICE candidates and the answers for the offers I send out client-side.
Hi @sesd777, sorry for the late reply! One thing I'd like to highlight in your case: if user A and user B want to connect and see each other, then both their peer connections should be set to sendrecv. But if you have more than 2 users, say users A, B and C, then my setup is like this:
Of course the above doesn't show that the users don't really have a direct connection to each other; rather, everything goes through a TURN server (that is the way I have it set up). As for why you don't get an IceConnection Completed event, it might be similar to what I was seeing. I'm working with a custom TURN implementation, but maybe you're running into the same issue with Kurento: for each new peer connection instance that I created (e.g. user A recv on user C), I also had to open a connection to the signaling server (wss://) and make a request to subscribe/connect/attach to the system. If I didn't do that, I would get rejected by the WebRTC layer (security failure) and it would result in an IceConnection Failed event. Hope it helps, good luck!
Hi @drejx, thank you for your answer. For me it looks like the internal ICE agent struggles to find a corresponding pair of ICE candidates, although I'm using several STUN/TURN servers and have the appropriate ports open.
The only other thing I can think of that could be an issue is if the PeerConnectionConfiguration is not set to the expected values, for example, to connect to a TURN server:
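A sketch of such a configuration with the v1.0 C# API (the server URL and credentials are placeholders for your own TURN deployment):

```csharp
// Configure ICE servers before initializing the peer connection.
var config = new PeerConnectionConfiguration
{
    IceServers = new List<IceServer>
    {
        new IceServer
        {
            Urls = new List<string> { "turn:turn.example.com:3478" }, // placeholder
            TurnUserName = "username",                                // placeholder
            TurnPassword = "password"                                 // placeholder
        }
    },
    // Match the SDP semantic that your media server expects.
    SdpSemantic = SdpSemantic.UnifiedPlan // or SdpSemantic.PlanB
};

var pc = new PeerConnection();
await pc.InitializeAsync(config);
```

If the TURN credentials or URLs don't match what the server expects, candidate gathering can silently produce no usable relay pair, which would look exactly like an ICE agent that never completes.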
And double-check whether the required SDP format is Plan B or Unified Plan. Best of luck!
@drejx Hello, I would like to ask whether the current release 1.0 supports multi-party communication. I tried to build a three-person communication feature: there are two PeerConnection instances in my scene. I found that IsLocalVideoTrackEnabled() on the peer connection initialized first becomes true; however, on the second peer connection it always stays false after AddLocalVideoTrackAsync(), and that second peer connection then never receives the remote video. So I would like to ask you how I can establish three-person video communication.
For C++ and C#, you need 2 peer connections per person. In …
@djee-ms Yes, I'm using Unity. The problem is that the second initialized peer connection seems unable to add a local video track: its IsLocalVideoTrackEnabled() logs false all the time, even though that peer connection's IsConnected property is true. So how can I send my local video over the other two peer connections?
Ah right, I just realized that won't work. The first peer connection will open the webcam, so the second one won't be able to access it. There is currently no solution for that, because the webcam is always opened with exclusive access. One possible but very tedious workaround is to open and manage the webcam yourself, but this has several limitations:
So this is a lot of work and I wouldn't recommend it. Normally the typical way to support multiple peers (>2) is to use a media server, which handles duplicating the video feed to all participants, but that's a larger setup. It avoids the network mesh topology (1 connection per pair of participants) and replaces it with a star topology. Another way would be to separate the video track from its source, and allow a single source (the webcam) to be shared by multiple tracks. This is what we already do for external video tracks in …
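To illustrate the "self-managed source shared by multiple tracks" idea, here is a rough sketch using the external video track source from the v1.0 C# API. This is not a complete implementation: the webcam capture pipeline itself is up to you, and the exact delegate and method names should be checked against the API reference:

```csharp
// Sketch only: one self-managed external video source shared by
// several peer connections, so the webcam is opened exactly once
// (by your own capture code, not by the SDK).
var source = ExternalVideoTrackSource.CreateFromI420ACallback(request =>
{
    // Fill an I420AVideoFrame from your own webcam capture pipeline,
    // then complete the request with it. 'latestFrame' is a placeholder
    // for whatever frame buffer your capture code maintains.
    request.CompleteRequest(in latestFrame);
});

// The same source can feed a custom local video track on each
// peer connection, one per remote participant.
pc1.AddCustomLocalVideoTrack("video_to_peer_b", source);
pc2.AddCustomLocalVideoTrack("video_to_peer_c", source);
```

The point of the design is that the exclusive webcam handle lives in your capture code, while the SDK only sees frame callbacks, so the "first connection owns the camera" restriction disappears.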
That's exactly what I wanted to know, helps a lot, thanks!
@djee-ms Yes, I'll open a new issue later. It seems like sharing the webcam video source between multiple tracks is a viable solution for me; I'll try it later.
Add support for multiple audio and video tracks per peer connection.

This change introduces a new transceiver API closely related to the same-named concept in the WebRTC 1.0 standard. The transceiver API allows the user to add audio and video transceivers to a peer connection, then optionally associate a local track to send media through those transceivers and/or receive media from the remote peer via a remote track associated with the transceiver. The new API is more flexible and more powerful, offering more control to the user and opening up many use cases not previously available, at the expense of some moderately increased complexity. In particular it allows:

- multiple audio and video tracks per peer connection, limited only by the bandwidth
- hot-swapping one track for another without a renegotiation, dramatically reducing the delay of replacing a track (e.g. switching to "on hold" music instead of voice chat)
- "track warm-up" (getting a track's transport ready even before the track media is available) and more aggressive media transport establishment (the callee peer can start sending media to the caller peer earlier than before)

This change constitutes a major API breaking change, which moves track management away from the PeerConnection object to the new Transceiver object. Users are encouraged to read the migration guide associated with this change to migrate existing v1.0-style applications. In particular, understanding the functioning of transceivers and the asymmetrical roles of the caller peer (the peer initiating a session with a new offer) and the callee peer (the peer responding to a remote offer) is critical to using this new API correctly.

Bug: #144, #152
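As a rough sketch of what multi-track usage could look like with the new transceiver API (names are based on this change and may differ in the final release; `webcamTrack` and `screenShareTrack` are placeholder local tracks):

```csharp
// Sketch of the transceiver-based v2.0 API.
var pc = new PeerConnection();
await pc.InitializeAsync(config); // 'config' is your PeerConnectionConfiguration

// One transceiver per media "slot"; multiple video transceivers
// on the same peer connection are now allowed.
var videoTransceiver1 = pc.AddTransceiver(MediaKind.Video);
var videoTransceiver2 = pc.AddTransceiver(MediaKind.Video);

// Attach local tracks to send. A track can later be hot-swapped
// for another one without an SDP renegotiation.
videoTransceiver1.LocalVideoTrack = webcamTrack;
videoTransceiver2.LocalVideoTrack = screenShareTrack;

// Remote tracks are paired with transceivers after negotiation;
// one event fires per remote track instead of a single implicit track.
pc.VideoTrackAdded += (RemoteVideoTrack track) =>
{
    // track.Transceiver identifies which slot this remote track belongs to.
};
```

This directly addresses the original question in this issue: remote tracks become first-class objects associated with a transceiver, rather than a single implicit remote track per peer connection.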
Hi,
I've been integrating the MixedReality-WebRTC SDK, and as I've been going along I've hit a stumbling block I hope someone can help with, namely: how are you able to receive multiple remote tracks? All the track-related logic appears to only ever consider one remote track. For example, neither of these events contains a track/stream ID:
Are multiple remote tracks on the same peer connection possible with the SDK? Or is there an expectation to use a new peer connection for each desired remote track? I imagine that would use way more resources than a single peer connection with multiple remote tracks.
To give an idea of my use case:
Disclaimer: I started a few weeks ago with the webrtc-uwp-sdk SDK and have the above scenario working with that SDK, because it supports multiple remote tracks. It includes an OnTrack event with the following signature:
```csharp
public delegate void RTCPeerConnection_OnTrackDelegate(IRTCTrackEvent Event);
```
where IRTCTrackEvent contains:
I'm surprised I don't see the same setup in the MixedReality-WebRTC SDK and would appreciate any insight anyone can send my way. So far I find MixedReality-WebRTC much easier to use than webrtc-uwp-sdk, and a big bonus is that it offers more than just the UWP platform (e.g. Unity integration and desktop apps).
Thanks!
Andrej