By centedge

Using getUserMedia effectively in production video conferencing applications



This is an advanced post on using getUserMedia effectively in real-world use cases while building production-grade video conferencing applications. If you are a beginner, please read this post first before continuing.


While developing video conferencing applications, the getUserMedia browser API provides the capability to capture audio and video streams, which are then sent to the receiving parties with the help of RTCPeerConnection. In this article, we will discuss the twin concepts of capabilities and constraints to understand how the browser captures media streams under the constraints we apply.


Here is how the process works.


  • Call MediaDevices.getSupportedConstraints() (if needed) to get the list of supported constraints, which tells you what constrainable properties the browser knows about. This isn't always necessary, since any that aren't known will simply be ignored when you specify them—but if you have any that you can't get by without, you can start by checking to be sure they're on the list.


  • Once the script knows whether the property or properties it wishes to use are supported, it can then check the capabilities of the API and its implementation by examining the object returned by the track's getCapabilities() method; this object lists each supported constraint and the values or range of values which are supported.


  • Finally, the track's applyConstraints() method is called to configure the API as desired by specifying the values or ranges of values it wishes to use for any of the constrainable properties about which it has a preference.


  • The track's getConstraints() method returns the set of constraints passed into the most recent call to applyConstraints(). This may not represent the actual current state of the track, due to properties whose requested values had to be adjusted and because platform default values aren't represented. For a complete representation of the track's current configuration, use getSettings().
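The four steps above can be sketched as a single function. This is a hedged sketch, not a complete implementation: the function name `configureTrack` and the choice of `width` as the property of interest are mine, and the `track` argument is any live MediaStreamTrack (for example, `stream.getVideoTracks()[0]`).

```javascript
// Sketch of the full four-step flow described above, for a single
// numeric property (width). Runs in a browser context.
async function configureTrack(track, desiredWidth) {
  // Step 1: make sure the browser knows about the property we rely on.
  const supported = navigator.mediaDevices.getSupportedConstraints();
  if (!supported.width) {
    throw new Error("width is not a constrainable property in this browser");
  }

  // Step 2: ask the track what range of values it can actually deliver.
  const caps = track.getCapabilities();
  const maxWidth = caps.width ? caps.width.max : desiredWidth;

  // Step 3: apply a constraint that stays inside the reported range.
  await track.applyConstraints({
    width: { ideal: Math.min(desiredWidth, maxWidth) }
  });

  // Step 4: read back what the track is really doing now.
  return track.getSettings();
}
```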

Defining Constraints


A single constraint is an object whose name matches the constrainable property whose desired value or range of values is being specified. This object contains zero or more individual constraints, as well as an optional property named advanced, which holds an array of additional constraint sets which the user agent must satisfy if at all possible. The user agent attempts to satisfy constraints in the order specified in the constraint set.
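For example, a constraint object with an advanced list might look like the following. The specific values are illustrative only; the structure (basic constraints at the top level, an array of extra constraint sets under `advanced`) is what matters.

```javascript
// Illustrative constraint object. The top-level members are the basic
// constraints; each entry in `advanced` is tried in order and dropped
// individually if the user agent cannot satisfy it.
const constraints = {
  width: { min: 640, ideal: 1280 },
  frameRate: { ideal: 30 },
  advanced: [
    { width: 1920, height: 1080 },   // preferred: full HD if available...
    { aspectRatio: 1.7777777778 }    // ...otherwise at least a 16:9 shape
  ]
};
```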


We also need to check first whether the constraints we are going to apply are supported by the user agent (browser). The code below first checks that the constraints are supported and then applies them.



let supports = navigator.mediaDevices.getSupportedConstraints();

if (!supports["width"] || !supports["height"] || !supports["frameRate"] || !supports["facingMode"]) {
  // We're missing needed properties, so handle that error.
} else {
  let constraints = {
    width: { min: 640, ideal: 1920, max: 1920 },
    height: { min: 400, ideal: 1080 },
    aspectRatio: 1.777777778,
    frameRate: { max: 30 },
    facingMode: { exact: "user" }
  };

  // myTrack is an existing MediaStreamTrack, e.g. obtained from getUserMedia()
  myTrack.applyConstraints(constraints).then(() => {
    /* do stuff if constraints applied successfully */
  }).catch((reason) => {
    /* failed to apply constraints; reason is why */
  });
} 

Here, after ensuring that the constrainable properties for which matches must be found are supported (width, height, frameRate, and facingMode), we set up constraints which request a width no smaller than 640 and no larger than 1920 (but preferably 1920), a height no smaller than 400 (but ideally 1080), an aspect ratio of 16:9 (1.777777778), and a frame rate no greater than 30 frames per second. In addition, the only acceptable input device is a camera facing the user (a "selfie cam"). If the width, height, frameRate, or facingMode constraints can't be met, the promise returned by applyConstraints() will be rejected.
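As a rough mental model of how a `{ min, ideal, max }` constraint resolves: the browser picks, from what the hardware offers, the value closest to ideal that stays inside the bounds. The toy function below illustrates only that intuition; it is not the spec's actual fitness-distance algorithm, and the function name is mine.

```javascript
// Toy model of resolving a { min, ideal, max } constraint against the
// values the hardware offers. Real user agents use the Media Capture
// spec's fitness-distance algorithm; this only captures the intuition
// of "closest to ideal, never outside [min, max]".
function resolveRangeConstraint({ min = -Infinity, ideal, max = Infinity }, hardwareValues) {
  const inBounds = hardwareValues.filter(v => v >= min && v <= max);
  if (inBounds.length === 0) return null; // would reject (OverconstrainedError)
  if (ideal === undefined) return inBounds[0];
  return inBounds.reduce((best, v) =>
    Math.abs(v - ideal) < Math.abs(best - ideal) ? v : best);
}
```

With a camera offering widths of 320, 640, and 1280, the constraint `{ min: 640, ideal: 1920, max: 1920 }` would resolve to 1280: the closest in-bounds value to the ideal.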


MediaStreamTrack.getCapabilities() is used to get a list of all of the supported capabilities and the values or ranges of values which each one accepts on the current platform and user agent. This function returns a MediaTrackCapabilities object which lists each constrainable property supported by the browser and a value or range of values which are supported for each one of those properties.
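One practical use of the capabilities object is to clamp a requested value into the range the track reports before applying it, so applyConstraints() doesn't reject. The helper below is a sketch; the `{ min, max }` shape matches what getCapabilities() returns for numeric properties such as width, and the helper name is mine.

```javascript
// Clamp a requested numeric value into the range a track reports as
// supported. `capabilityRange` has the { min, max } shape returned by
// getCapabilities() for numeric properties such as width or height.
function clampToCapability(requested, capabilityRange) {
  if (!capabilityRange) return requested; // property not reported: pass through
  const { min = -Infinity, max = Infinity } = capabilityRange;
  return Math.min(Math.max(requested, min), max);
}

// Typical browser-side use with a live video track:
// const caps = videoTrack.getCapabilities();
// videoTrack.applyConstraints({ width: clampToCapability(1920, caps.width) });
```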


Example


The most common way of using constraints is to pass them to getUserMedia() when capturing the streams.

navigator.mediaDevices.getUserMedia({
  video: {
    width: { min: 640, ideal: 1920 },
    height: { min: 400, ideal: 1080 },
    aspectRatio: { ideal: 1.7777777778 }
  },
  audio: {
    sampleSize: 16,
    channelCount: 2
  }
}).then(stream => {
  videoElement.srcObject = stream;
}).catch(handleError);

In this example, constraints are applied at getUserMedia() time, asking for an ideal set of options with fallbacks for the video.


The constraints of an existing MediaStreamTrack can also be changed on the fly, by calling the track's applyConstraints() method, passing into it an object representing the constraints you wish to apply to the track.


videoTrack.applyConstraints({
  width: 1920,
  height: 1080
}).catch(handleError); // applyConstraints returns a promise, so handle rejection

Retrieving current constraints and settings


It's important to remember the difference between constraints and settings. Constraints are a way to specify what values you need, want, and are willing to accept for the various constrainable properties, while settings are the actual values of each constrainable property at the current time.


If at any time we need to fetch the set of constraints that are currently applied to the media, we can get that information by calling MediaStreamTrack.getConstraints(), as shown in the example below.


function switchCameras(track, camera) {
  let constraints = track.getConstraints();
  constraints.facingMode = camera;
  track.applyConstraints(constraints);
}

This function accepts a MediaStreamTrack and a string indicating the camera facing mode to use, fetches the current constraints, sets the value of the MediaTrackConstraints.facingMode to the specified value, then applies the updated constraint set.
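Note that applyConstraints() is asynchronous and the example above ignores the promise it returns. In a production application it is worth awaiting the result and handling failure (for example, an OverconstrainedError when the requested facing mode isn't available on the device). The variant below is a sketch of that idea; the function name is mine.

```javascript
// Variant of switchCameras that waits for applyConstraints to settle
// and reports failure instead of silently ignoring it.
async function switchCamerasSafely(track, camera) {
  const constraints = track.getConstraints();
  constraints.facingMode = camera;
  try {
    await track.applyConstraints(constraints);
    return true;
  } catch (err) {
    // e.g. OverconstrainedError if this device has no such camera
    console.warn(`Could not switch to ${camera} camera:`, err.name);
    return false;
  }
}
```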


Unless we only use exact constraints (which is pretty restrictive, so be sure we mean it!), there's no guarantee exactly what we are going to actually get after the constraints are applied. The values of the constrainable properties as they actually are in the resulting media are referred to as the settings. If we need to know the true format and other properties of the media, we can obtain those settings by calling MediaStreamTrack.getSettings(). This returns an object based on the dictionary MediaTrackSettings. For example:


function whichCamera(track) {
  return track.getSettings().facingMode;
}

This function uses getSettings() to obtain the track's currently in-use values for the constrainable properties and returns the value of facingMode.


In case you are looking for any specific help with your production video conferencing application related to camera quality issues, do let us know at hello@centedge.io. We will be delighted to help.

