Part 4 - Adaptive Streaming


In the previous article we created a Media Source Extensions player that could buffer parts of the video on demand. We're now going to extend it in two ways: first, to switch between different renditions of the video on demand; and second, to record how long each cluster takes to download, aggregating those timings to estimate the download rate and compare it to the bitrate of the upcoming clusters. This will enable us to switch the video rendition on the fly depending on the network conditions of the client, entirely client-side and with no reliance on streaming servers.

The code we're using in this example can be found at our Git repository.

Preparing our data

To facilitate changing the video rendition on the fly and recording how long each cluster takes to download, we need to add four fields to our Cluster object: fileUrl, rendition, requestedTime, and queuedTime. This means our Cluster definition is now:

```javascript
function Cluster(fileUrl, rendition, byteStart, byteEnd, isInitCluster, timeStart, timeEnd) {
    this.byteStart = byteStart;                  //byte range start inclusive
    this.byteEnd = byteEnd;                      //byte range end exclusive
    this.timeStart = timeStart ? timeStart : -1; //timecode start inclusive
    this.timeEnd = timeEnd ? timeEnd : -1;       //timecode end exclusive
    this.requested = false;                      //cluster download has started
    this.isInitCluster = isInitCluster;          //is an init cluster
    this.queued = false;                         //downloaded and queued to be appended to the source buffer
    this.buffered = false;                       //added to the source buffer = null;                            //cluster data from the video file
    this.fileUrl = fileUrl;
    this.rendition = rendition;
    this.requestedTime = null;                   //when the download started
    this.queuedTime = null;                      //when the download completed
}
```

We then also need to store a list of possible renditions and the current rendition in the Player object. In this example we're going to be using just two: 1080 and 180, to demonstrate the difference between renditions most effectively.

```javascript
self.renditions = ["180", "1080"];
self.rendition = "1080";
```

The cluster downloading function and cluster creation functions need to be changed to populate the new data, including setting the file url.

In this example the video URL is in the format example<rendition>.webm
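As a minimal sketch of that naming scheme (these helper function names are hypothetical; the player itself builds the strings inline from self.clusterFile and self.sourceFile):

```javascript
//Sketch of the per-rendition URL scheme used in this example.
function clusterMetadataUrl(baseName, rendition) {
    return baseName + rendition + '.json'; //per-rendition cluster metadata
}

function videoUrl(baseName, rendition) {
    return baseName + rendition + '.webm'; //per-rendition video file
}

videoUrl('example', '1080');           //'example1080.webm'
clusterMetadataUrl('example', '180');  //'example180.json'
```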

Example renditions are available for experimentation at:

```javascript
this.downloadClusterData = function (callback) {
    var totalRenditions = self.renditions.length;
    var renditionsDone = 0;
    _.each(self.renditions, function (rendition) {
        var xhr = new XMLHttpRequest();
        var url = self.clusterFile + rendition + '.json';'GET', url, true);
        xhr.responseType = 'json';
        xhr.send();
        xhr.onload = function (e) {
            self.createClusters(xhr.response, rendition);
            renditionsDone++;
            if (renditionsDone === totalRenditions) {
                callback();
            }
        };
    });
};

this.createClusters = function (rslt, rendition) {
    //init cluster for this rendition
    self.clusters.push(new Cluster(
        self.sourceFile + rendition + '.webm',
        rendition,
        rslt.init.offset,
        rslt.init.size - 1,
        true
    ));
    //media clusters for this rendition
    for (var i = 0; i <; i++) {
        self.clusters.push(new Cluster(
            self.sourceFile + rendition + '.webm',
            rendition,
  [i].offset,
  [i].offset +[i].size - 1,
            false,
  [i].timecode,
            (i === - 1) ? parseFloat(rslt.duration / 1000) :[i + 1].timecode
        ));
    }
};
```

Then we need to record the requestedTime in the download method:

```javascript = function (callback) {
    this.requested = true;
    this.requestedTime = new Date().getTime();
    this._getClusterData(function () {
        self.flushBufferQueue();
        if (callback) {
            callback();
        }
    });
};
```

We also record the queuedTime in the onload handler of Cluster._getClusterData:

```javascript
xhr.onload = function (e) {
    if (xhr.status != 206) {
        console.error("media: Unexpected status code " + xhr.status);
        return false;
    } = new Uint8Array(xhr.response);
    cluster.queued = true;
    cluster.queuedTime = new Date().getTime();
    callback();
};
```

Appending video data from different renditions

When appending cluster data from different video sources there is an important caveat: you cannot change the video data of the cluster which is currently being displayed. Doing so breaks playback for the duration of that cluster. When switching between renditions we therefore only download and append upcoming clusters.
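The rule amounts to a simple predicate. This standalone sketch (a stand-in for the underscore-based player code, with field names mirroring our Cluster object and made-up sample data) keeps only clusters that start strictly after the current playback time:

```javascript
//Keep only clusters of the target rendition that start strictly after the
//current playback time, so the cluster on screen is never replaced.
function strictlyUpcoming(clusters, currentTime, rendition) {
    return clusters.filter(function (cluster) {
        return cluster.rendition === rendition && cluster.timeStart > currentTime;
    });
}

var sampleClusters = [
    {rendition: "180", timeStart: 0, timeEnd: 5},   //currently playing
    {rendition: "180", timeStart: 5, timeEnd: 10},
    {rendition: "1080", timeStart: 5, timeEnd: 10}  //wrong rendition
];
var next = strictlyUpcoming(sampleClusters, 2, "180");
//only the 180 cluster starting at timecode 5 qualifies
```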

To facilitate this we will change the downloadUpcomingClusters method to download only strictly upcoming clusters, and create a new downloadCurrentCluster method which is called once, after the initial downloadInitCluster call. We also need to change all of our cluster methods to operate only on clusters of the current, active rendition:

```javascript
this.downloadInitCluster = function (callback) {
    _.findWhere(self.clusters, {isInitCluster: true, rendition: self.rendition}).download(callback);
};

this.downloadCurrentCluster = function () {
    var currentClusters = _.filter(self.clusters, function (cluster) {
        return (cluster.rendition === self.rendition
            && cluster.timeStart <= self.videoElement.currentTime
            && cluster.timeEnd > self.videoElement.currentTime);
    });
    if (currentClusters.length === 1) {
        currentClusters[0].download();
    } else {
        console.error("Something went wrong with download current cluster");
    }
};

this.downloadUpcomingClusters = function () {
    var nextClusters = _.filter(self.clusters, function (cluster) {
        return (cluster.requested === false
            && cluster.rendition === self.rendition
            && cluster.timeStart > self.videoElement.currentTime
            && cluster.timeStart <= self.videoElement.currentTime + 5);
    });
    if (nextClusters.length) {
        _.each(nextClusters, function (nextCluster) {
    ;
        });
    } else {
        if (_.filter(self.clusters, function (cluster) {
            return (cluster.requested === false);
        }).length === 0) {
            self.setState("finished buffering whole video");
        } else {
            self.finished = true;
            self.setState("finished buffering ahead rendition " + self.rendition);
        }
    }
};
```

In our createSourceBuffer method (shown in full further below) we make sure the initial downloadInitCluster call then triggers the download of the current cluster of the initial rendition.

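The sequencing can be sketched with stubs standing in for the real download methods; the point is simply that downloadCurrentCluster runs as the callback of the init-cluster download:

```javascript
//Stubbed sketch of the call order (the real methods issue XHRs instead).
var order = [];

function downloadInitCluster(callback) {
    order.push('init');    //init cluster downloads first
    if (callback) {
        callback();        //then its callback fires
    }
}

function downloadCurrentCluster() {
    order.push('current'); //current cluster download starts
}

downloadInitCluster(downloadCurrentCluster);
//order is now ['init', 'current']
```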
Finally, the flushBufferQueue method must be changed to use only the clusters associated with the active rendition, while being sure to still append the current rendition's initialization cluster first.

```javascript
this.flushBufferQueue = function () {
    if (!self.sourceBuffer.updating) {
        var initCluster = _.findWhere(self.clusters, {isInitCluster: true, rendition: self.rendition});
        if (initCluster.queued || initCluster.buffered) {
            var bufferQueue = _.filter(self.clusters, function (cluster) {
                return (cluster.queued === true
                    && cluster.isInitCluster === false
                    && cluster.rendition === self.rendition);
            });
            if (!initCluster.buffered) {
                bufferQueue.unshift(initCluster);
            }
            if (bufferQueue.length) {
                var concatData = self.concatClusterData(bufferQueue);
                _.each(bufferQueue, function (bufferedCluster) {
                    bufferedCluster.queued = false;
                    bufferedCluster.buffered = true;
                });
                self.sourceBuffer.appendBuffer(concatData);
            }
        }
    }
};
```

Adapting the stream dynamically

To adapt the stream dynamically we need to know how fast the video data has been downloading and compare that to the bitrate of the upcoming cluster in the current rendition. We're already recording this data, so now we just need to aggregate it.

We're using a simple map-reduce model to achieve this, making use of UnderscoreJS's memoize function to avoid doing unnecessary calculations:

```javascript
//map-reduce function to get the download time per byte
this.downloadTimeMR = _.memoize(
    function (downloadedClusters) {
        return _.chain(downloadedClusters)
            .map(function (cluster) {
                return {
                    size: cluster.byteEnd - cluster.byteStart,
                    time: cluster.queuedTime - cluster.requestedTime
                };
            })
            .reduce(function (memo, datum) {
                return {
                    size: memo.size + datum.size,
                    time: memo.time + datum.time
                };
            }, {size: 0, time: 0})
            .value();
    },
    function (downloadedClusters) {
        //hash function is the length of the downloaded clusters list,
        //as it is strictly increasing
        return downloadedClusters.length;
    }
);

this.getNextCluster = function () {
    var unRequestedUpcomingClusters = _.chain(self.clusters)
        .filter(function (cluster) {
            return (!cluster.requested
                && cluster.timeStart >= self.videoElement.currentTime
                && cluster.rendition === self.rendition);
        })
        .sortBy(function (cluster) {
            return cluster.byteStart;
        })
        .value();
    if (unRequestedUpcomingClusters.length) {
        return unRequestedUpcomingClusters[0];
    } else {
        throw new Error("No more upcoming clusters");
    }
};

this.getDownloadTimePerByte = function () {
    //seconds per byte
    var mapOut = this.downloadTimeMR(_.filter(self.clusters, function (cluster) {
        return (cluster.queued || cluster.buffered);
    }));
    return (mapOut.time / 1000) / mapOut.size;
};

this.checkBufferingSpeed = function () {
    var secondsToDownloadPerByte = self.getDownloadTimePerByte();
    var nextCluster = self.getNextCluster();
    var upcomingBytesPerSecond = (nextCluster.byteEnd - nextCluster.byteStart) / (nextCluster.timeEnd - nextCluster.timeStart);
    var estimatedSecondsToDownloadPerSecondOfPlayback = secondsToDownloadPerByte * upcomingBytesPerSecond;
    $('#rate-display').html(estimatedSecondsToDownloadPerSecondOfPlayback);
    if (estimatedSecondsToDownloadPerSecondOfPlayback > 0.8) {
        if (self.rendition !== "180") {
            self.switchRendition("180");
        }
    } else {
        if (self.rendition !== "1080") {
            self.switchRendition("1080");
        }
    }
};
```

Our hash is the length of the downloaded-clusters list because it is strictly increasing: there are no circumstances where the average download rate can change without the number of downloaded clusters increasing.
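To illustrate why this hash works, here's a tiny stand-in for _.memoize with a custom hash (sample data made up for illustration): the expensive reduction only re-runs when the hash, i.e. the list length, changes.

```javascript
//Minimal stand-in for _.memoize with a custom hash function.
function memoize(fn, hash) {
    var cache = {};
    return function (arg) {
        var key = hash(arg);
        if (!(key in cache)) {
            cache[key] = fn(arg);
        }
        return cache[key];
    };
}

var calls = 0;
var totalDownloadTime = memoize(function (clusters) {
    calls++; //count how often the reduction actually runs
    return clusters.reduce(function (sum, c) { return sum + c.time; }, 0);
}, function (clusters) {
    return clusters.length; //strictly increasing as downloads complete
});

var downloaded = [{time: 120}, {time: 80}];
totalDownloadTime(downloaded); //computed: calls === 1
totalDownloadTime(downloaded); //cached: calls is still 1
downloaded.push({time: 95});
totalDownloadTime(downloaded); //length changed, recomputed: calls === 2
```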

We now need to implement a very simple switchRendition function which changes the current rendition variable. We'll also call the checkBufferingSpeed function on every timeupdate event, to check how the buffering is coming along and switch the bitrate as the video plays if there's a problem:

```javascript
this.createSourceBuffer = function () {
    self.sourceBuffer = self.mediaSource.addSourceBuffer('video/webm; codecs="vp8,vorbis"');
    self.sourceBuffer.addEventListener('updateend', function () {
        self.flushBufferQueue();
    }, false);
    self.setState("Downloading clusters");
    self.downloadInitCluster(self.downloadCurrentCluster);
    self.videoElement.addEventListener('timeupdate', function () {
        self.downloadUpcomingClusters();
        self.checkBufferingSpeed();
    }, false);
};

this.switchRendition = function (rendition) {
    self.rendition = rendition;
    self.downloadInitCluster();
    self.downloadUpcomingClusters();
};
```

This example does not include further functionality you could (and should) implement, such as:

  • seeking to a point in the video which has not yet been buffered
  • stall detection using our knowledge of which clusters have been added to the source buffer
  • using the cluster data to determine the bitrate of each rendition, to select the correct one when dynamically changing the stream.
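As an example of the second point, one possible (hypothetical) approach to stall detection: if the playhead sits inside a media cluster of the active rendition that has not reached the source buffer, a stall is imminent. Field names mirror our Cluster object; this is a sketch, not part of the example player.

```javascript
//Hypothetical stall check using our per-cluster buffered flag.
function isStallImminent(clusters, currentTime, rendition) {
    return clusters.some(function (cluster) {
        return cluster.rendition === rendition
            && !cluster.isInitCluster
            && cluster.timeStart <= currentTime
            && cluster.timeEnd > currentTime
            && !cluster.buffered;
    });
}

var demoClusters = [
    {rendition: "180", isInitCluster: false, timeStart: 0, timeEnd: 5, buffered: true},
    {rendition: "180", isInitCluster: false, timeStart: 5, timeEnd: 10, buffered: false}
];
isStallImminent(demoClusters, 2, "180"); //false: current cluster is buffered
isStallImminent(demoClusters, 6, "180"); //true: playhead is in an unbuffered cluster
```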

Here's one we made earlier

This video begins using the 1080 rendition; when you click the Simulate Network Slowdown button it switches to the lower 180 rendition. The change becomes visible at the next cluster, which allows the current cluster to continue playing at the same resolution so that video playback is not interrupted.

The current download rate is also displayed, expressed as the estimated seconds of download time needed per second of playback.
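As a worked sketch of that ratio with made-up numbers:

```javascript
//Worked example of the checkBufferingSpeed ratio.
//500,000 bytes downloaded in 2 seconds:
var secondsToDownloadPerByte = 2 / 500000;                     //0.000004 s/byte
//Upcoming cluster: 400,000 bytes covering 4 seconds of playback:
var upcomingBytesPerSecond = 400000 / 4;                       //100,000 bytes/s
var ratio = secondsToDownloadPerByte * upcomingBytesPerSecond; //0.4
//0.4 is below the 0.8 threshold, so the player would stay on
//(or switch back up to) the 1080 rendition.
```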

All of the code for this example and the previous examples is available in our Git repository.