Implementing P2P video streaming using WebRTC

Preface

Web Real-Time Communication (WebRTC) is an open standard that enables real-time communication between web applications and websites without plugins or additional software. It is also available as libraries for iOS and Android apps that provide the same functionality as the browser standard.

WebRTC works on any operating system and can be used in all modern browsers, including Google Chrome, Mozilla Firefox, and Safari. Some of the major projects using WebRTC include Google Meet and Hangouts, WhatsApp, Amazon Chime, Facebook Messenger, Snapchat, and Discord.

In this article, we'll cover one of the main use cases for WebRTC: peer-to-peer (P2P) audio and video streaming from one system to another. This functionality is similar to live streaming services like Twitch, but on a smaller and simpler scale.

Core WebRTC concepts to understand

In this section, I will review a few basic concepts you should know to understand how web applications that use WebRTC work: peer-to-peer communication, Signal servers, and the ICE protocol.

Peer-to-peer communication

In this guide, we will be using WebRTC's RTCPeerConnection object, which is primarily concerned with connecting two applications and allowing them to communicate using a peer-to-peer protocol.

In a decentralized network, peer-to-peer communication is a direct link between computer systems (peers) without an intermediary such as a server. WebRTC cannot always connect peers directly, but the ICE protocol and Signal servers it relies on produce similar behavior. You will find more information about both below.

Signal Server

For a pair of peers in a WebRTC application to start communicating, they must perform a "handshake" made up of an offer and an answer. One peer generates the offer and shares it with the other peer, and the other peer generates the answer and shares it with the first peer.

For the handshake to succeed, each peer needs a way to share its offer or answer with the other. This is where the Signal server comes in.

The main job of the Signal server is to initiate communication between peers. One peer uses the Signal server to share its offer or answer with the other peer, and the other peer uses the Signal server to send its offer or answer back.
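
To make this concrete, here is a rough sketch of what that handshake looks like in code. This is not the project code yet; sendToOtherPeer() is a hypothetical stand-in for whatever transport the Signal server provides, and both peers are shown side by side for illustration only:

// A rough sketch of the offer/answer handshake (not the project code).
// sendToOtherPeer() is a hypothetical stand-in for the Signal server transport.

// Peer A creates the offer and shares it
const peerA = new RTCPeerConnection();
async function makeOffer() {
    const offer = await peerA.createOffer();
    await peerA.setLocalDescription(offer);
    sendToOtherPeer({ type: "offer", sdp: offer.sdp });
}

// Peer B receives the offer and replies with an answer
const peerB = new RTCPeerConnection();
async function handleOffer(offer) {
    await peerB.setRemoteDescription(offer);
    const answer = await peerB.createAnswer();
    await peerB.setLocalDescription(answer);
    sendToOtherPeer({ type: "answer", sdp: answer.sdp });
}

// Peer A completes the handshake when the answer arrives
async function handleAnswer(answer) {
    await peerA.setRemoteDescription(answer);
}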

ICE Protocol

In certain situations, such as when the devices involved are not on the same local network, it can be difficult for WebRTC applications to establish peer connections, because direct socket connections between peers are not always possible outside a shared local network.

To establish peer-to-peer connections across different networks, you need the Interactive Connectivity Establishment (ICE) protocol, which is used to connect peers over the Internet. ICE servers use the protocol to establish connections and relay information between peers.

An ICE setup can use the Session Traversal Utilities for NAT (STUN) protocol, the Traversal Using Relays around NAT (TURN) protocol, or a combination of the two.
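
In practice, you tell a peer connection about STUN or TURN servers through the configuration object you pass to RTCPeerConnection. The sketch below is only illustrative: the STUN entry points at a well-known public Google server, while the TURN URL and credentials are placeholders you would replace with your own server's details:

// Illustrative ICE configuration; the TURN URL and credentials are placeholders.
const config = {
    iceServers: [
        { urls: "stun:stun.l.google.com:19302" },  // public STUN server
        {
            urls: "turn:turn.example.com:3478",    // your own TURN server
            username: "user",
            credential: "secret",
        },
    ],
};

const peerConnection = new RTCPeerConnection(config);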

In this tutorial, we will not cover the practical aspects of the ICE protocol due to the complexity involved in building a server, getting it working, and testing it. However, it is helpful to understand the limitations of WebRTC applications and where the ICE protocol can address these limitations.

Getting Started with WebRTC P2P Video Streaming

Now that we have the concepts out of the way, it's time for the hands-on part. In the next sections, we'll build a video streaming project step by step. You can also see a live demo of the project here.

Before we get started, I have a GitHub repository https://github.com/GhoulKingR/webrtc-project that you can clone to follow along with this article. This repository has a start-tutorial folder organized by the steps you will take in the next section, as well as a copy of the code at the end of each step. While it is not required to use the repo, it is helpful.

The folder we will work on in the repo is called start-tutorial. It contains three folders: step-1, step-2, and step-3. These three folders correspond to the steps in the next section.

Run the video streaming project

Now, let's start building the project. I've divided the process into three steps, and each step produces a project that we can run, test, and use.

These steps include:

  • Video streaming within a web page
  • Streaming between browser tabs and windows using BroadcastChannel
  • Streaming between different browsers on the same device using a Signal server

Video streaming within a web page

In this step, we only need an index.html file. If you are working in the repo, you can use the start-tutorial/step-1/index.html file.

Now, let's paste this code into it:

<body>
    <video id="local" autoplay muted></video>
    <video id="remote" autoplay></video>

    <button onclick="start(this)">start video</button>
    <button id="stream" onclick="stream(this)" disabled>stream video</button>

    <script>
        // get video elements
        const local = document.querySelector("video#local");
        const remote = document.querySelector("video#remote");

        function start(e) {
            e.disabled = true;
            navigator.mediaDevices.getUserMedia({ audio: true, video: true })
                .then((stream) => {
                    local.srcObject = stream;
                    document.getElementById("stream").disabled = false; // enable the stream button
                })
                .catch(() => e.disabled = false);
        }

        function stream(e) {
            // disable the stream button
            e.disabled = true;

            const config = {};
            const localPeerConnection = new RTCPeerConnection(config);  // local peer
            const remotePeerConnection = new RTCPeerConnection(config); // remote peer

            // if an icecandidate event is triggered in a peer, add the ice candidate to the other peer
            localPeerConnection.addEventListener("icecandidate", e => remotePeerConnection.addIceCandidate(e.candidate));
            remotePeerConnection.addEventListener("icecandidate", e => localPeerConnection.addIceCandidate(e.candidate));

            // if the remote peer detects a track in the connection, it forwards it to the remote video element
            remotePeerConnection.addEventListener("track", e => remote.srcObject = e.streams[0]);

            // get camera and microphone source tracks and add them to the local peer
            local.srcObject.getTracks()
                .forEach(track => localPeerConnection.addTrack(track, local.srcObject));

            // Start the handshake process
            localPeerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
                .then(async offer => {
                    await localPeerConnection.setLocalDescription(offer);
                    await remotePeerConnection.setRemoteDescription(offer);
                    console.log("Created offer");
                })
                .then(() => remotePeerConnection.createAnswer())
                .then(async answer => {
                    await remotePeerConnection.setLocalDescription(answer);
                    await localPeerConnection.setRemoteDescription(answer);
                    console.log("Created answer");
                });
        }
    </script>
</body>

It will give you something that looks like this:

picture

Now, let's see what's going on.

To build the project, we need two video elements. We will use one to capture the user's camera and microphone. Afterwards, we will use WebRTC's RTCPeerConnection object to feed the audio and video streams of this element to another video element:

<video id="local" autoplay muted></video>
<video id="remote" autoplay></video>

The RTCPeerConnection object is the main object for establishing direct peer-to-peer connections between web browsers or devices.

Then we need two buttons. One to activate the user's webcam and microphone, and another to stream the contents of the first video element to the second:

<button onclick="start(this)">start video</button>
<button id="stream" onclick="stream(this)" disabled>stream video</button>

The "Start Video" button runs the start function when clicked. The "Stream Video" button runs the stream function when clicked.

Let's first look at the start function:

function start(e) {
    e.disabled = true;
    navigator.mediaDevices.getUserMedia({ audio: true, video: true })
        .then((stream) => {
            local.srcObject = stream;
            document.getElementById("stream").disabled = false; // enable the stream button
        })
        .catch(() => e.disabled = false);
}

When the start function runs, it first makes the start button unclickable. Then, it requests the user's permission to use their webcam and microphone through the navigator.mediaDevices.getUserMedia method.

If the user grants permission, the start function feeds the video and audio streams into the first video element via its srcObject field and enables the stream button. If getting permission fails or the user denies it, the function re-enables the start button.

Now, let's look at the stream function:

function stream(e) {
    // disable the stream button
    e.disabled = true;

    const config = {};
    const localPeerConnection = new RTCPeerConnection(config);  // local peer
    const remotePeerConnection = new RTCPeerConnection(config); // remote peer

    // if an icecandidate event is triggered in a peer, add the ice candidate to the other peer
    localPeerConnection.addEventListener("icecandidate", e => remotePeerConnection.addIceCandidate(e.candidate));
    remotePeerConnection.addEventListener("icecandidate", e => localPeerConnection.addIceCandidate(e.candidate));

    // if the remote peer receives tracks from the connection, it feeds them to the remote video element
    remotePeerConnection.addEventListener("track", e => remote.srcObject = e.streams[0]);

    // get camera and microphone tracks, then feed them to the local peer
    local.srcObject.getTracks()
        .forEach(track => localPeerConnection.addTrack(track, local.srcObject));

    // Start the handshake process
    localPeerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
        .then(async offer => {
            await localPeerConnection.setLocalDescription(offer);
            await remotePeerConnection.setRemoteDescription(offer);
            console.log("Created offer");
        })
        .then(() => remotePeerConnection.createAnswer())
        .then(async answer => {
            await remotePeerConnection.setLocalDescription(answer);
            await localPeerConnection.setRemoteDescription(answer);
            console.log("Created answer");
        });
}

I added comments to outline the process in the stream function. However, the handshake process and the ICE candidate event listeners are important parts that deserve a closer look.

During the handshake process, each peer sets its local and remote descriptions based on the offer and answer the two peers create:

  • The peer that generates the offer sets it as its local description, then sends a copy of the offer to the second peer to set as its remote description.
  • Likewise, the peer that generates the answer sets it as its local description and sends a copy to the first peer to set as its remote description.

After completing this process, the peers immediately begin communicating with each other.

ICE candidates are the addresses (IP, port, and other related information) of peers. RTCPeerConnection objects use ICE candidates to find and communicate with each other. The icecandidate event in the RTCPeerConnection object is fired when the object generates an ICE candidate.

The goal of the event listener we set up is to pass ICE candidates from one peer to another.

Between browser tabs and windows with BroadcastChannel

One of the challenges of setting up a peer-to-peer application using WebRTC is getting it to work across different application instances or websites. In this section, we will use the Broadcast Channel API to allow our project to work outside of a single web page but within the context of a browser.

Create the necessary files

We will start by creating two files, streamer.html and index.html. In the repo, these files are located in the start-tutorial/step-2 folder. The streamer.html page will allow the user to create a live stream from their camera, while the index.html page will enable the user to watch these live streams.

Now, let's paste the following code blocks into these files. Then, we'll look at them in more depth.

First, in the streamer.html file, paste the following code:

<body>
    <video id="local" autoplay muted></video>

    <button onclick="start(this)">start video</button>
    <button id="stream" onclick="stream(this)" disabled>stream video</button>

    <script>
        // get video elements
        const local = document.querySelector("video#local");
        let peerConnection;

        const channel = new BroadcastChannel("stream-video");
        channel.onmessage = e => {
            if (e.data.type === "icecandidate") {
                peerConnection?.addIceCandidate(e.data.candidate);
            } else if (e.data.type === "answer") {
                console.log("Received answer")
                peerConnection?.setRemoteDescription(e.data);
            }
        }

        // function to ask for camera and microphone permission
        // and stream to #local video element
        function start(e) {
            e.disabled = true;
            document.getElementById("stream").disabled = false; // enable the stream button
            navigator.mediaDevices.getUserMedia({ audio: true, video: true })
                .then((stream) => local.srcObject = stream);
        }

        function stream(e) {
            e.disabled = true;

            const config = {};
            peerConnection = new RTCPeerConnection(config); // local peer connection

            // add ice candidate event listener
            peerConnection.addEventListener("icecandidate", e => {
                let candidate = null;

                // prepare a candidate object that can be passed through browser channel
                if (e.candidate !== null) {
                    candidate = {
                        candidate: e.candidate.candidate,
                        sdpMid: e.candidate.sdpMid,
                        sdpMLineIndex: e.candidate.sdpMLineIndex,
                    };
                }

                channel.postMessage({ type: "icecandidate", candidate });
            });

            // add media tracks to the peer connection
            local.srcObject.getTracks()
                .forEach(track => peerConnection.addTrack(track, local.srcObject));

            // Create offer and send through the browser channel
            peerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
                .then(async offer => {
                    await peerConnection.setLocalDescription(offer);
                    console.log("Created offer, sending...");
                    channel.postMessage({ type: "offer", sdp: offer.sdp });
                });
        }
    </script>
</body>

Then, in your index.html file, paste the following code:

<body>
    <video id="remote" controls></video>

    <script>
        // get video elements
        const remote = document.querySelector("video#remote");
        let peerConnection;

        const channel = new BroadcastChannel("stream-video");
        channel.onmessage = e => {
            if (e.data.type === "icecandidate") {
                peerConnection?.addIceCandidate(e.data.candidate)
            } else if (e.data.type === "offer") {
                console.log("Received offer")
                handleOffer(e.data)
            }
        }

        function handleOffer(offer) {
            const config = {};
            peerConnection = new RTCPeerConnection(config);

            peerConnection.addEventListener("track", e => remote.srcObject = e.streams[0]);
            peerConnection.addEventListener("icecandidate", e => {
                let candidate = null;

                if (e.candidate !== null) {
                    candidate = {
                        candidate: e.candidate.candidate,
                        sdpMid: e.candidate.sdpMid,
                        sdpMLineIndex: e.candidate.sdpMLineIndex,
                    }
                }

                channel.postMessage({ type: "icecandidate", candidate })
            });

            peerConnection.setRemoteDescription(offer)
                .then(() => peerConnection.createAnswer())
                .then(async answer => {
                    await peerConnection.setLocalDescription(answer);
                    console.log("Created answer, sending...")
                    channel.postMessage({
                        type: "answer",
                        sdp: answer.sdp,
                    });
                });
        }
    </script>
</body>

In your browser, the page will look and function similar to the following animation:

picture

Detailed breakdown of the streamer.html file

Now, let's explore these two pages in more detail. We'll start with the streamer.html page. This page only needs a video and two button elements:

<video id="local" autoplay muted></video>

<button onclick="start(this)">start video</button>
<button id="stream" onclick="stream(this)" disabled>stream video</button>

The "Start Video" button works the same way as the previous step: it asks the user for permission to use their camera and microphone and feeds the stream to the video element. The "Stream Video" button then initializes the peer connection and feeds the video stream to the peer.

Since this step involves two web pages, we are using the Broadcast Channel API. In our index.html and streamer.html files, we must initialize a BroadcastChannel object with the same name on each page to allow them to communicate.

A BroadcastChannel object allows you to pass basic information between browsing contexts (such as windows or tabs) that have the same URL origin.

When you initialize a BroadcastChannel object, you must give it a name. You can think of this name as the name of a chat room. If you initialize two BroadcastChannel objects with the same name, they can talk to each other as if they were in a chat room. But if they have different names, they can't communicate because they are not in the same chat room.

I say "chatroom" because you can have multiple BroadcastChannel objects with the same name, and they can all communicate with each other at the same time.

Since we are dealing with two pages, each with peer connections, we have to use BroadcastChannel objects to pass offers and answers back and forth between the two pages. We also have to pass ICE candidates from one peer connection to another. So, let's see how it's done.

It all starts with the stream function:

// streamer.html -> script element
function stream(e) {
    e.disabled = true;

    const config = {};
    peerConnection = new RTCPeerConnection(config); // local peer connection

    // add ice candidate event listener
    peerConnection.addEventListener("icecandidate", e => {
        let candidate = null;

        // prepare a candidate object that can be passed through browser channel
        if (e.candidate !== null) {
            candidate = {
                candidate: e.candidate.candidate,
                sdpMid: e.candidate.sdpMid,
                sdpMLineIndex: e.candidate.sdpMLineIndex,
            };
        }

        channel.postMessage({ type: "icecandidate", candidate });
    });

    // add media tracks to the peer connection
    local.srcObject.getTracks()
        .forEach(track => peerConnection.addTrack(track, local.srcObject));

    // Create offer and send through the browser channel
    peerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
        .then(async offer => {
            await peerConnection.setLocalDescription(offer);
            console.log("Created offer, sending...");
            channel.postMessage({ type: "offer", sdp: offer.sdp });
        });
}

There are two areas in the function that interact with the BroadcastChannel object. The first is the ICE candidate event listener:

 peerConnection.addEventListener("icecandidate", e => { let candidate = null; // prepare a candidate object that can be passed through browser channel if (e.candidate !== null) { candidate = { candidate: e.candidate.candidate, sdpMid: e.candidate.sdpMid, sdpMLineIndex: e.candidate.sdpMLineIndex, }; } channel.postMessage({ type: "icecandidate", candidate }); });

The other is after the offer is generated:

peerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
    .then(async offer => {
        await peerConnection.setLocalDescription(offer);
        console.log("Created offer, sending...");
        channel.postMessage({ type: "offer", sdp: offer.sdp });
    });

Let's look at the ICE candidate event listener first. If you pass the e.candidate object directly to the BroadcastChannel object, you will receive a DataCloneError: object can not be cloned error message in the console.

This error occurs because the BroadcastChannel object cannot handle e.candidate directly. You need to create an object from e.candidate containing the required details to send to the BroadcastChannel object. We have to do the same thing to send an offer.

To send a message through the BroadcastChannel, you call its postMessage method. When this method is called, the BroadcastChannel object on the other web page triggers its onmessage event listener. Take a look at this code from the index.html page:

channel.onmessage = e => {
    if (e.data.type === "icecandidate") {
        peerConnection?.addIceCandidate(e.data.candidate)
    } else if (e.data.type === "offer") {
        console.log("Received offer")
        handleOffer(e.data)
    }
}

As you can see, we have conditional statements that check the type of message coming into the BroadcastChannel object. The contents of the message can be read via e.data. e.data.type corresponds to the type field of the object we sent via channel.postMessage:

// from the ICE candidate event listener
channel.postMessage({ type: "icecandidate", candidate });

// from generating an offer
channel.postMessage({ type: "offer", sdp: offer.sdp });

Now, let's take a look at the index.html file that handles the incoming offers.

Detailed breakdown of the index.html file

The heart of the index.html file is the handleOffer function:

function handleOffer(offer) {
    const config = {};
    peerConnection = new RTCPeerConnection(config);

    peerConnection.addEventListener("track", e => remote.srcObject = e.streams[0]);
    peerConnection.addEventListener("icecandidate", e => {
        let candidate = null;

        if (e.candidate !== null) {
            candidate = {
                candidate: e.candidate.candidate,
                sdpMid: e.candidate.sdpMid,
                sdpMLineIndex: e.candidate.sdpMLineIndex,
            }
        }

        channel.postMessage({ type: "icecandidate", candidate })
    });

    peerConnection.setRemoteDescription(offer)
        .then(() => peerConnection.createAnswer())
        .then(async answer => {
            await peerConnection.setLocalDescription(answer);
            console.log("Created answer, sending...")
            channel.postMessage({
                type: "answer",
                sdp: answer.sdp,
            });
        });
}

When triggered, this method creates a peer connection and sends any ICE candidates it generates to the other peer. It then continues the handshake process by setting the streamer's offer to its remote description, generating an answer, setting that answer to its local description, and sending that answer to the streamer using a BroadcastChannel object.

Like the BroadcastChannel object in the index.html file, the BroadcastChannel object in the streamer.html file needs an onmessage event listener to receive ICE candidates and answers from the index.html file:

channel.onmessage = e => {
    if (e.data.type === "icecandidate") {
        peerConnection?.addIceCandidate(e.data.candidate);
    } else if (e.data.type === "answer") {
        console.log("Received answer")
        peerConnection?.setRemoteDescription(e.data);
    }
}

If you're wondering about the question mark (?.) after peerConnection, it's optional chaining: if peerConnection is null or undefined, the call is simply skipped instead of throwing an error. It is roughly shorthand for:

if (peerConnection) {
    peerConnection.setRemoteDescription(e.data);
}

Replace the BroadcastChannel with our Signal server

BroadcastChannel is limited to the browser context. In this step, we will overcome this limitation by using a simple Signal server, which we will build using Node.js. As in the previous steps, I will first give you the code to paste and then explain what is going on in it.

So, let's get started. This step requires four files: index.html, streamer.html, signalserverclass.js, and server/index.js.

We will start with the signalserverclass.js file:

class SignalServer {
    constructor(channel) {
        this.socket = new WebSocket("ws://localhost:80");

        this.socket.addEventListener("open", () => {
            this.postMessage({ type: "join-channel", channel });
        });

        this.socket.addEventListener("message", (e) => {
            const object = JSON.parse(e.data);

            if (object.type === "connection-established") console.log("connection established");
            else if (object.type === "joined-channel") console.log("Joined channel: " + object.channel);
            else this.onmessage({ data: object });
        });
    }

    onmessage(e) {}

    postMessage(data) {
        this.socket.send( JSON.stringify(data) );
    }
}

Next, let's update the index.html and streamer.html files. The only changes to these files are the script tags where we initialize the BroadcastChannel object and import the signalserverclass.js script.

Here is the updated index.html file:

<body>
    <video id="remote" controls></video>

    <script src="signalserverclass.js"></script> <!-- new change -->
    <script>
        const remote = document.querySelector("video#remote");
        let peerConnection;

        const channel = new SignalServer("stream-video"); // <- new change
        channel.onmessage = e => {
            if (e.data.type === "icecandidate") {
                peerConnection?.addIceCandidate(e.data.candidate);
            } else if (e.data.type === "offer") {
                console.log("Received offer");
                handleOffer(e.data);
            }
        }

        function handleOffer(offer) {
            const config = {};
            peerConnection = new RTCPeerConnection(config);

            peerConnection.addEventListener("track", e => remote.srcObject = e.streams[0]);
            peerConnection.addEventListener("icecandidate", e => {
                let candidate = null;

                if (e.candidate !== null) {
                    candidate = {
                        candidate: e.candidate.candidate,
                        sdpMid: e.candidate.sdpMid,
                        sdpMLineIndex: e.candidate.sdpMLineIndex,
                    };
                }

                channel.postMessage({ type: "icecandidate", candidate });
            });

            peerConnection.setRemoteDescription(offer)
                .then(() => peerConnection.createAnswer())
                .then(async answer => {
                    await peerConnection.setLocalDescription(answer);
                    console.log("Created answer, sending...");
                    channel.postMessage({
                        type: "answer",
                        sdp: answer.sdp,
                    });
                });
        }
    </script>
</body>

Here is the updated streamer.html file:

<body>
    <video id="local" autoplay muted></video>

    <button onclick="start(this)">start video</button>
    <button id="stream" onclick="stream(this)" disabled>stream video</button>

    <script src="signalserverclass.js"></script> <!-- new change -->
    <script>
        const local = document.querySelector("video#local");
        let peerConnection;

        const channel = new SignalServer("stream-video"); // <- new change
        channel.onmessage = e => {
            if (e.data.type === "icecandidate") {
                peerConnection?.addIceCandidate(e.data.candidate);
            } else if (e.data.type === "answer") {
                console.log("Received answer");
                peerConnection?.setRemoteDescription(e.data);
            }
        }

        // function to ask for camera and microphone permission
        // and stream to #local video element
        function start(e) {
            e.disabled = true;
            document.getElementById("stream").disabled = false; // enable the stream button
            navigator.mediaDevices.getUserMedia({ audio: true, video: true })
                .then((stream) => local.srcObject = stream);
        }

        function stream(e) {
            e.disabled = true;

            const config = {};
            peerConnection = new RTCPeerConnection(config); // local peer connection

            peerConnection.addEventListener("icecandidate", e => {
                let candidate = null;

                if (e.candidate !== null) {
                    candidate = {
                        candidate: e.candidate.candidate,
                        sdpMid: e.candidate.sdpMid,
                        sdpMLineIndex: e.candidate.sdpMLineIndex,
                    };
                }

                channel.postMessage({ type: "icecandidate", candidate });
            });

            local.srcObject.getTracks()
                .forEach(track => peerConnection.addTrack(track, local.srcObject));

            peerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
                .then(async offer => {
                    await peerConnection.setLocalDescription(offer);
                    console.log("Created offer, sending...");
                    channel.postMessage({ type: "offer", sdp: offer.sdp });
                });
        }
    </script>
</body>

Finally, here is the content of the server/index.js file:

const { WebSocketServer } = require("ws");

const channels = {};
const server = new WebSocketServer({ port: 80 });
server.on("connection", handleConnection);

function handleConnection(ws) {
    console.log('New connection');
    ws.send( JSON.stringify({ type: 'connection-established' }) );

    let id;
    let channel = "";

    ws.on("error", () => console.log('websocket error'));

    ws.on('message', message => {
        const object = JSON.parse(message);

        if (object.type === "join-channel") {
            channel = object.channel;
            if (channels[channel] === undefined) channels[channel] = [];
            id = channels[channel].length || 0;
            channels[channel].push(ws);
            ws.send(JSON.stringify({ type: 'joined-channel', channel }));
        } else {
            // forward the message to other channel members
            channels[channel]?.filter((_, i) => i !== id).forEach((member) => {
                member.send(message.toString());
            });
        }
    });

    ws.on('close', () => {
        console.log('Client has disconnected!');
        if (channel !== "") {
            channels[channel] = channels[channel].filter((_, i) => i !== id);
        }
    });
}


To get the server running, you need to open the server folder in your terminal, initialize the folder as a Node project, install the ws package, and then run the index.js file. These steps can be completed using the following commands:

# initialize the project directory
npm init -y

# install the `ws` package
npm install ws

# run the `index.js` file
node index.js

Now, let's look at the files. To reduce the amount of code that needs to change after swapping the BroadcastChannel constructor for the SignalServer constructor, I made the SignalServer class mimic the BroadcastChannel interface, at least for our use case:

class SignalServer {
    constructor(channel) {
        // what the constructor does
    }

    onmessage(e) {}

    postMessage(data) {
        // what postMessage does
    }
}

This class has a constructor that joins the channel upon initialization. It also has a postMessage function to allow messages to be sent and an onmessage method that is called when a message is received from another SignalServer object.

Another purpose of the SignalServer class is to abstract away our backend process. Our signal server is a WebSocket server, because WebSockets give us event-based, two-way communication between the server and the client, which makes them a good fit for signaling.

The SignalServer class begins its operation with its constructor:

constructor(channel) {
    this.socket = new WebSocket("ws://localhost:80");

    this.socket.addEventListener("open", () => {
        this.postMessage({ type: "join-channel", channel });
    });

    this.socket.addEventListener("message", (e) => {
        const object = JSON.parse(e.data);

        if (object.type === "connection-established") console.log("connection established");
        else if (object.type === "joined-channel") console.log("Joined channel: " + object.channel);
        else this.onmessage({ data: object });
    });
}

It first initializes a connection to the backend. When the connection becomes active, it sends an object to the server that we use as a join-channel request:

 this.socket.addEventListener("open", () => { this.postMessage({ type: "join-channel", channel }); });

Now, let's take a look at our WebSocket server:

const { WebSocketServer } = require("ws");

const channels = {};
const server = new WebSocketServer({ port: 80 });
server.on("connection", handleConnection);

function handleConnection(ws) {
    // I cut out the details because it's not in focus right now
}

This is a pretty standard WebSocket server. We have server initialization and event listeners for when a new client connects to the server. The only new functionality is the channels variable, which we use to store the channels that each SignalServer object has joined.

If an object wants to join a channel that does not exist yet, the server creates an empty array for that channel, stores it in the channels object under the channel name, and pushes the WebSocket connection into it as the first element.
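
To picture that bookkeeping, here is a minimal, runnable sketch of the join logic, with plain objects standing in for real WebSocket connections:

// Minimal sketch of the channel bookkeeping, with plain objects standing in
// for real WebSocket connections.
const channels = {};

function join(channelName, ws) {
    if (channels[channelName] === undefined) channels[channelName] = [];
    const id = channels[channelName].length; // a member's position doubles as its id
    channels[channelName].push(ws);
    return id;
}

const streamerId = join("stream-video", { name: "streamer" }); // 0
const viewerId = join("stream-video", { name: "viewer" });     // 1
console.log(channels); // { "stream-video": [ { name: "streamer" }, { name: "viewer" } ] }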

You can see this in action in the message event listener below. The code looks a bit complicated, but the explanation above is a general overview of what the code does:

// ... the code before this

ws.on('message', message => {
    const object = JSON.parse(message);

    if (object.type === "join-channel") {
        channel = object.channel;
        if (channels[channel] === undefined) channels[channel] = [];
        id = channels[channel].length || 0;
        channels[channel].push(ws);
        ws.send(JSON.stringify({ type: 'joined-channel', channel }));

// ... the rest of the code

The event listener then sends a joined-channel message to the SignalServer object, telling it that the request to join the channel was successful.

As for the rest of the event listener, it sends any messages that are not of type join-channel to the other SignalServer objects in the channel:

    // rest of the event listener
    } else {
        // forward the message to other channel members
        channels[channel]?.filter((_, i) => i !== id).forEach((member) => {
            member.send(message.toString());
        });
    }
});

In the handleConnection function, the id and channel variables store, respectively, the position of the SignalServer object's WebSocket connection within the channel and the name of the channel that connection belongs to:

let id;
let channel = "";

These variables are set when a SignalServer object joins a channel. They help pass messages from one SignalServer object to other objects in the channel, as you can see in the else block. They also help remove SignalServer objects from the channel when they disconnect for any reason:

ws.on('close', () => {
    console.log('Client has disconnected!');
    if (channel !== "") {
        channels[channel] = channels[channel].filter((_, i) => i !== id);
    }
});

Finally, back to the SignalServer class in the signalserverclass.js file. Let's take a look at the part that receives messages from the WebSocket server:

 this.socket.addEventListener("message", (e) => { const object = JSON.parse(e.data); if (object.type === "connection-established") console.log("connection established"); else if (object.type === "joined-channel") console.log("Joined channel: " + object.channel); else this.onmessage({ data: object }); });

If you look at the WebSocket server's handleConnection function, there are two message types that the server sends directly to the SignalServer object: joined-channel and connection-established. These two message types are handled directly by this event listener.

Conclusion

In this article, we covered how to use WebRTC to build a P2P video streaming application — one of its main use cases.

We started by creating a peer connection in a single page to get a simple understanding of how a WebRTC application works without having to worry about signaling. Then, we talked about signaling using the Broadcast Channel API. Finally, we built our own signal server.

Original text: https://blog.logrocket.com/webrtc-video-streaming/

By Oduah Chigozie
