
Add support for shareable media sources #412

Merged
merged 11 commits on Jun 16, 2020

Conversation

@djee-ms (Member) commented on Jun 16, 2020

Separate the audio and video sources from the tracks they provide media frames
to, and allow those sources to be reused with any number of media tracks,
including tracks from different peer connections.

Bug: #336, #382
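
As a rough illustration of the intended usage, here is a minimal C# sketch of
sharing a single webcam source between two peer connections. The type and
member names (`DeviceVideoTrackSource.CreateAsync`, `LocalVideoTrack.CreateFromSource`,
the init-config fields) follow the pattern described in this PR but should be
treated as assumptions rather than the authoritative API.

```csharp
using System.Threading.Tasks;
using Microsoft.MixedReality.WebRTC;

public static class SharedWebcamExample
{
    public static async Task RunAsync()
    {
        // Open the video capture device once; the source is not tied to any peer connection.
        var webcamSource = await DeviceVideoTrackSource.CreateAsync();

        var pc1 = new PeerConnection();
        var pc2 = new PeerConnection();
        await pc1.InitializeAsync(new PeerConnectionConfiguration());
        await pc2.InitializeAsync(new PeerConnectionConfiguration());

        // Each local track is a slim bridge between the shared source and one peer connection.
        var track1 = LocalVideoTrack.CreateFromSource(webcamSource,
            new LocalVideoTrackInitConfig { trackName = "webcam_pc1" });
        var track2 = LocalVideoTrack.CreateFromSource(webcamSource,
            new LocalVideoTrackInitConfig { trackName = "webcam_pc2" });

        // Attach each track to a video transceiver of its own peer connection.
        pc1.AddTransceiver(MediaKind.Video).LocalVideoTrack = track1;
        pc2.AddTransceiver(MediaKind.Video).LocalVideoTrack = track2;

        // ... SDP negotiation for both peer connections proceeds as usual ...
    }
}
```
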

djee-ms added 11 commits on May 26, 2020
Split the audio track source from the audio track, to allow sharing
across peer connections.

This change splits the former local audio track into 2 separate
components:
- The audio track source, which generates the raw audio frames,
  typically (but not necessarily) from a local audio capture device
  such as a microphone; at this time, only device-based sources are
  supported. The audio track source is not associated with any
  particular peer connection, and can be shared among any number of
  local audio tracks, whether they belong to the same peer connection
  or not.
- The local audio track, which is associated with a particular peer
  connection, and which is a slim bridge between an audio track source
  and the audio transceiver of the peer connection.

This change adds support to the native and C# libraries only.
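
For illustration, a hedged C# sketch of what this split looks like on the audio
side, with one microphone-backed source feeding two local audio tracks on the
same peer connection. The `DeviceAudioTrackSource` / `LocalAudioTrack.CreateFromSource`
names mirror the description above and are assumptions, not verified API.

```csharp
using System.Threading.Tasks;
using Microsoft.MixedReality.WebRTC;

public static class SharedMicrophoneExample
{
    public static async Task RunAsync(PeerConnection pc)
    {
        // The source captures from the default microphone and is independent of any peer connection.
        var micSource = await DeviceAudioTrackSource.CreateAsync();

        // Two slim local tracks bridge the same source to two audio transceivers of one connection.
        var trackA = LocalAudioTrack.CreateFromSource(micSource,
            new LocalAudioTrackInitConfig { trackName = "mic_a" });
        var trackB = LocalAudioTrack.CreateFromSource(micSource,
            new LocalAudioTrackInitConfig { trackName = "mic_b" });

        pc.AddTransceiver(MediaKind.Audio).LocalAudioTrack = trackA;
        pc.AddTransceiver(MediaKind.Audio).LocalAudioTrack = trackB;
    }
}
```
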
Split the video track source from the video track, to allow sharing
across peer connections.

This change splits the former local video track into 2 separate
components:
- The video track source, which generates the raw video frames,
  typically (but not necessarily) from a local video capture device
  such as a webcam. The video track source is not associated with any
  particular peer connection, and can be shared among any number of
  local video tracks, whether they belong to the same peer connection
  or not.
- The local video track, which is associated with a particular peer
  connection, and which is a slim bridge between a video track source
  and the video transceiver of the peer connection.

This change does not yet touch the external video track, which should
eventually be converted to a video track source instead, as that is a
more natural fit.
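
Since the source now outlives any individual track, one consequence worth
illustrating is lifetime independence: a track can be detached and disposed
while the source keeps running, and the same source can back a new track later.
A hedged C# sketch, using the same assumed API names as above:

```csharp
using Microsoft.MixedReality.WebRTC;

public static class TrackLifetimeExample
{
    public static void SwapTrack(PeerConnection pc, DeviceVideoTrackSource webcamSource)
    {
        var videoTransceiver = pc.AddTransceiver(MediaKind.Video);

        var firstTrack = LocalVideoTrack.CreateFromSource(webcamSource,
            new LocalVideoTrackInitConfig { trackName = "webcam_take1" });
        videoTransceiver.LocalVideoTrack = firstTrack;

        // Later: detach and dispose the track; the webcam source itself stays alive.
        videoTransceiver.LocalVideoTrack = null;
        firstTrack.Dispose();

        // The same source can immediately back a new track, here or on another peer connection.
        var secondTrack = LocalVideoTrack.CreateFromSource(webcamSource,
            new LocalVideoTrackInitConfig { trackName = "webcam_take2" });
        videoTransceiver.LocalVideoTrack = secondTrack;
    }
}
```
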
Remove the internal abstraction of the ExternalVideoTrackSource class,
which used to exhibit a clean C++ interface but no longer has a reason
to exist now that only a pure C interop API is exported. This
simplifies the internal implementation by removing a level of
indirection.

Introduce some polymorphism into the API by having all objects derive
from a base class which contains some common functionality, and by
exposing a single set of API functions to manipulate those objects.
This avoids duplicating code (e.g. for setting an object name) for
each individual type of API object; see the sketch after the list
below.

- mrsObject is the base class for all objects; it stores the object
  name and user data, and exposes the API to manipulate them.
- mrsRefCountedObject is the base class for the subset of objects
  which are reference-counted through the API, that is, objects the
  user owns, as opposed to objects owned by another object.
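
The sketch below illustrates the shape of that hierarchy. It is written in C#
purely for readability; the actual API is a flat C interop surface operating on
`mrsObject` / `mrsRefCountedObject` handles, so the class and member names here
are illustrative only.

```csharp
using System;
using System.Threading;

// Illustrative model of the object hierarchy described above (not the real interop API).
public abstract class MrsObjectModel
{
    // Functionality common to every API object: a name and an opaque user-data slot,
    // manipulated through a single set of API functions instead of per-type duplicates.
    public string Name { get; set; }
    public IntPtr UserData { get; set; }
}

public abstract class MrsRefCountedObjectModel : MrsObjectModel
{
    // Only the objects the user owns are reference-counted through the API;
    // objects owned by another object are not.
    private int _refCount = 1;

    public void AddRef() => Interlocked.Increment(ref _refCount);

    public void RemoveRef()
    {
        if (Interlocked.Decrement(ref _refCount) == 0)
        {
            Destroy();
        }
    }

    protected abstract void Destroy();
}
```
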
Ensure that the concrete class always comes first in the inheritance
list, with the pure interfaces following it, so that the base and
derived pointers have the same value when crossing the API surface.
Change ExternalVideoTrackSource to derive from the same base class as
the device-based video track source, which is renamed to
DeviceVideoTrackSource for clarity. The base class is VideoTrackSource.
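
With `ExternalVideoTrackSource` deriving from the same `VideoTrackSource` base
as `DeviceVideoTrackSource`, a programmatically generated source can be plugged
into a local video track exactly like a webcam source. A hedged C# sketch; the
`CreateFromI420ACallback` factory and frame-request types follow the existing
external-source API and are assumed to be unchanged here.

```csharp
using Microsoft.MixedReality.WebRTC;

public static class GeneratedVideoExample
{
    // Invoked by the source each time a new video frame is needed.
    private static void OnFrameRequested(in FrameRequest request)
    {
        // Produce an I420A frame and complete the request (frame generation omitted in this sketch).
    }

    public static void AttachGeneratedVideo(PeerConnection pc)
    {
        // The external source is just another VideoTrackSource, so the track-creation path is identical.
        var externalSource = ExternalVideoTrackSource.CreateFromI420ACallback(OnFrameRequested);

        var track = LocalVideoTrack.CreateFromSource(externalSource,
            new LocalVideoTrackInitConfig { trackName = "generated_video" });
        pc.AddTransceiver(MediaKind.Video).LocalVideoTrack = track;
    }
}
```
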
Convert `WebcamSource` and `MicrophoneSource` to source objects shareable among
multiple peer connections.

Because the `SceneVideoSender` component uses the ARGB32 callback, also
re-add that callback, and add a TODO to remove all ARGB32 callbacks in
favor of explicit conversion utility functions, to make the performance
implications of the conversion clear.

This change also splits the `VideoChatDemo` and `SceneCaptureDemo` for
clarity, leaving each demo with a single audio and video track per peer
connection. The `SceneCaptureDemo` now has a placeholder showing the
capture camera position. Multiple tracks per peer connection are
demonstrated in the `StandaloneDemo`.

All demos are also updated to expose their essential components as
separate Unity objects in the scene, instead of stacking several of
them on the same `GameObject`. This helps with discoverability.
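
For context, a hedged sketch of what sharing a single `WebcamSource` component
between two peer connections can look like from script in the Unity
integration. The `MediaLine`-based wiring shown here reflects the 2.0-era Unity
API and is typically set up in the Inspector instead; treat the exact member
names as assumptions.

```csharp
using Microsoft.MixedReality.WebRTC;
using Microsoft.MixedReality.WebRTC.Unity;
using UnityEngine;

// One WebcamSource component in the scene, referenced by media lines on two peer connections.
public class SharedWebcamSetup : MonoBehaviour
{
    public WebcamSource webcamSource;
    public Microsoft.MixedReality.WebRTC.Unity.PeerConnection peerConnectionA;
    public Microsoft.MixedReality.WebRTC.Unity.PeerConnection peerConnectionB;

    private void Awake()
    {
        // Each peer connection gets its own video media line, but both lines point at the same source.
        var lineA = peerConnectionA.AddMediaLine(MediaKind.Video);
        lineA.Source = webcamSource;

        var lineB = peerConnectionB.AddMediaLine(MediaKind.Video);
        lineB.Source = webcamSource;
    }
}
```
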
djee-ms added the enhancement (New feature or request) and breaking
change (This change breaks the current API and requires migration)
labels on Jun 16, 2020
djee-ms requested a review from fibann on Jun 16, 2020 at 15:25
djee-ms self-assigned this on Jun 16, 2020