WebRTC getUserMedia: The secret for using media devices in the browser

Updated: Oct 4, 2022



The getUserMedia() API in WebRTC is primarily responsible for capturing the media streams currently available. The WebRTC standard provides this API for accessing cameras and microphones connected to the computer or smartphone. These devices are commonly referred to as Media Devices and can be accessed with JavaScript through the navigator.mediaDevices object, which implements the MediaDevices interface. From this object we can enumerate all connected devices, listen for device changes (when a device is connected or disconnected), and open a device to retrieve a Media Stream.

The most common way this is used is through the function getUserMedia(), which returns a promise that will resolve to a MediaStream for the matching media devices. This function takes a single MediaStreamConstraints object that specifies the requirements that we have. For instance, to simply open the default microphone and camera, we would do the following.

const openMediaDevices = async (constraints) => {
    return await navigator.mediaDevices.getUserMedia(constraints);
}

try {
    const stream = await openMediaDevices({'video': true, 'audio': true});
    console.log('Got MediaStream:', stream);
} catch (error) {
    console.error('Error while trying to access media devices.', error);
}

The call to getUserMedia() will trigger a permissions request. If the user accepts the permission, the promise is resolved with a MediaStream containing one video and one audio track. If the permission is denied, the promise is rejected with a NotAllowedError. If no matching device is connected, the promise is rejected with a NotFoundError.
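The two rejection cases described above can be told apart by the error's name property. A minimal sketch (openDefaultDevices is an illustrative name, not part of the API):

```javascript
// Open the default camera and microphone, branching on the error's
// name property to distinguish a denied permission from a missing
// device. (openDefaultDevices is an illustrative helper name.)
async function openDefaultDevices() {
    try {
        return await navigator.mediaDevices.getUserMedia({'video': true, 'audio': true});
    } catch (error) {
        if (error.name === 'NotAllowedError') {
            console.error('The user denied permission to use the devices.');
        } else if (error.name === 'NotFoundError') {
            console.error('No matching camera or microphone is connected.');
        } else {
            console.error('Unexpected error from getUserMedia:', error);
        }
        return null;
    }
}
```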


Media Constraints


As seen in the snippet above, we pass media constraints when calling the getUserMedia() API to access the video and audio streams (camera and mic) available to the browser. The constraints object, which must implement the MediaStreamConstraints interface, allows us to open a media device that matches a certain requirement. This requirement can be very loosely defined (audio and/or video) or very specific (a minimum camera resolution or an exact device ID). It is recommended that applications using the getUserMedia() API first check the existing devices and then specify a constraint that matches the exact device using the deviceId constraint. Devices will also, if possible, be configured according to the constraints: we can enable echo cancellation on microphones, or set a specific or minimum width, height and frame rate for the video from the camera. Below is a brief example of how to use media constraints in a more advanced way.


async function findConnectedDevices(type) {
    const devices = await navigator.mediaDevices.enumerateDevices();
    return devices.filter(device => device.kind === type);
}

async function openCamera(cameraId, minWidth, minHeight) {
    const constraints = {
        'audio': {'echoCancellation': true},
        'video': {
            'deviceId': cameraId,
            'width': {'min': minWidth},
            'height': {'min': minHeight},
            'frameRate': {'min': 10},
        }
    };

    return await navigator.mediaDevices.getUserMedia(constraints);
}

const cameras = await findConnectedDevices('videoinput');
if (cameras && cameras.length > 0) {
    const stream = await openCamera(cameras[0].deviceId, 1280, 720);
}

The full documentation for the MediaStreamConstraints interface can be found on the MDN web docs.
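To make the deviceId constraint mandatory rather than a preference, it can be wrapped in an 'exact' clause, in which case getUserMedia() rejects instead of silently falling back to another camera. A small sketch of building such a constraints object (buildCameraConstraints and the 'ideal' values are illustrative choices, not part of the API):

```javascript
// Build a MediaStreamConstraints object that pins capture to one
// specific camera. {'exact': cameraId} makes the deviceId mandatory:
// getUserMedia() will reject with OverconstrainedError rather than
// open a different camera. (buildCameraConstraints is an illustrative
// helper name.)
function buildCameraConstraints(cameraId, minWidth, minHeight) {
    return {
        'audio': {'echoCancellation': true},
        'video': {
            'deviceId': {'exact': cameraId},
            'width': {'min': minWidth, 'ideal': 1280},
            'height': {'min': minHeight, 'ideal': 720},
        }
    };
}
```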


Playing the video locally


Once a media device has been opened and we have a MediaStream available, we can assign it to a video or audio element to play the stream locally. The HTML for a typical video element used with getUserMedia() will usually have the attributes autoplay and playsinline. The autoplay attribute causes new streams assigned to the element to play automatically. The playsinline attribute allows video to play inline, instead of only in full screen, on certain mobile browsers. For live streams it is also recommended to omit the controls attribute entirely (controls is a boolean attribute, so controls="false" would still enable the controls), unless the user should be able to pause the stream.


async function playVideoFromCamera() {
    try {
        const constraints = {'video': true, 'audio': true};
        const stream = await navigator.mediaDevices.getUserMedia(constraints);
        const videoElement = document.querySelector('video#localVideo');
        videoElement.srcObject = stream;
    } catch(error) {
        console.error('Error opening video camera.', error);
    }
}
<html>
<head><title>Local video playback</title></head>
<body>
    <video id="localVideo" autoplay playsinline></video>
</body>
</html>
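A stream opened this way keeps the camera and microphone in use (and the camera indicator light on) until its tracks are stopped. A minimal cleanup sketch (stopStream is a hypothetical helper, not part of the getUserMedia API):

```javascript
// Release the camera and microphone by stopping every track on the
// stream attached to a video element, then detaching the stream.
// (stopStream is a hypothetical helper name.)
function stopStream(videoElement) {
    const stream = videoElement.srcObject;
    if (stream) {
        stream.getTracks().forEach(track => track.stop());
        videoElement.srcObject = null;
    }
}
```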

This post briefly describes how to use the getUserMedia API, currently available in all modern browsers. To explore this API further and understand the advanced settings and configurations needed for creating production-grade video conferencing applications, refer to this blog post.


If you are planning to build a simple P2P (peer-to-peer) video conferencing application, also check this blog post to understand another important API, RTCPeerConnection. One should be able to build a p2p video conferencing app using these two important APIs.


If you have any questions related to getUserMedia or WebRTC as a whole, you can ask them and get prompt answers in this dedicated WebRTC forum.


If you want to learn WebRTC and build a sound understanding of it, along with all the technologies in its protocol stack such as ICE, STUN, TURN, DTLS, SRTP, SCTP etc., then check out our live online/onsite instructor-led WebRTC training programs here. If you wish to register for one of our upcoming training programs, you can do so using the registration form link provided there.


Here is an example public GitHub repo for a simple p2p video conferencing app built using these two APIs, with Node.js as the signalling server. Feel free to download the example and play around to understand the basics of WebRTC.


If you want to build a serious production-grade video conferencing application, check out this open source GitHub repo for a production-grade WebRTC signalling server built using Node.js, which you can use to build and deploy your video calling app to any cloud. If this repo does not fulfil your need, feel free to visit our services page to learn more about our services.


If you want a custom video application without having to go through the pain of building it yourself, check out our products page to learn more about our scalable, customizable and fully managed video conferencing / live streaming application as a service, along with custom branding.


Feel free to reach out to us at hello@centedge.io for any kind of help and support with WebRTC.

