
Growing Pains: Lessons Learned from the Steep Ascent of Streaming Video

When technology catches fire the way video streaming has over the last ten-plus years, it can be hard to pin down the turning points. But as we know, there are invaluable lessons to be learned in those moments when a pivot leads to pitfall or progress. The history of streaming video is full of them, including a missed opportunity that presaged the beginning of the end for Blockbuster. And given that IP video is projected to account for 82% of global Internet traffic by 2021 (a 26% CAGR from 2016 to 2021), there are more stories, lessons, and disruptions to come. The OTT video market is still shaking out in the US, let alone in the rest of the world. And as always, disruption is rule number one.

We could mark progress by the switch from DVD to download to stream, video for online advertising and social platforms, the upstart success of Netflix’s original content, or the advent of livestreaming video. There are many exciting moments to consider, but sometimes it’s important to take a closer look behind the scenes.

One useful way I’ve found to capture the lessons of video streaming’s ascendance is to look at the orchestration level. We mostly know how the video-making works (lights, camera, action!), but what about the streaming part? When most companies (the pure plays like Netflix aside) started doing streaming video, they didn’t know whether it would produce revenue. Not wanting to invest in expensive infrastructure, they went outside to CDNs and online video platform providers. When traffic grew, they added more CDNs — when the primary one was taxed, they offloaded to another.

Now that we have the answer to the question of whether video streaming is here to stay — yes, everyone loves video everywhere, all the time, on all the devices, for any purpose you can think of — many providers are looking sideways at the CDN pay-as-you-go model. The larger audiences get, the more the CDN bill gets in the way of profit-making. How to control costs? It’s time to fire up your own metal.

Especially in big media market regions, it’s starting to make a lot more sense to leverage your own POPs (your data center, co-lo, or virtual hosting provider) first, with CDNs as a backup. Solutions like Cedexis’ Openmix provide the essential orchestration layer – the intelligent, automated balancing and routing that ensures the video stream gets delivered to the end user along the route that yields an optimal mix of quality and cost control. Openmix’s job is to do what CDNs do — work out how to get that video delivered quickly — but Openmix can also figure out how to do it at the lowest cost for a pre-defined target level of quality.
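To make that concrete, here is a minimal sketch of the kind of decision an orchestration layer makes. This is not the actual Openmix API (the platform names, quality scores, and prices are invented), but it shows the core logic: filter delivery sources by a quality floor, then pick the cheapest one that qualifies.

```python
# Illustrative sketch only, not the real Openmix API. Platform names,
# quality scores, and per-GB prices are invented for this example.

def choose_source(platforms, min_quality_score):
    """Filter candidate platforms by a quality floor, then pick the cheapest."""
    qualified = [p for p in platforms if p["quality_score"] >= min_quality_score]
    if not qualified:
        # Nothing clears the quality bar: fall back to best available quality.
        return max(platforms, key=lambda p: p["quality_score"])
    return min(qualified, key=lambda p: p["cost_per_gb"])

platforms = [
    {"name": "own-pop-frankfurt", "quality_score": 92, "cost_per_gb": 0.002},
    {"name": "cdn-a",             "quality_score": 95, "cost_per_gb": 0.030},
    {"name": "cdn-b",             "quality_score": 88, "cost_per_gb": 0.022},
]

print(choose_source(platforms, min_quality_score=90)["name"])  # own-pop-frankfurt
```

With numbers like these, the in-house POP wins whenever it clears the quality bar, and the CDNs remain on standby as the backup described above.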

There are, after all, many more uses for video now and many different types of audiences and consumption preferences. Different kinds of content can take different encodings — there’s no reason to stream the Peppa Pig cartoon the same way you stream Star Wars: The Last Jedi.  In order to compete, content distributors have to stop overpaying for delivery and make more efficient use of their bandwidth.

The more regionally focused a service is, the easier it is for it to build its own infrastructure. That’s why Europe is a step or two ahead — in smaller countries, country-based providers can more easily serve all their media markets from POPs they control. In the US, medium-sized providers can get creative — they’re not big enough to have a bajillion POPs and cache boxes at the local ISPs, but they need to deliver content nationwide. An independent TV channel, for example, could identify its top media markets and place cache units there while using CDNs for everything else. Openmix would figure out the best location to serve from, using stipulated quality, available sources, and cost limits to choose the optimal route.

Finally, there are the companies for whom delivering video is not a primary business. If you’re trying to make money off of video ads, you don’t want to spend too much on serving them up. The consumer isn’t paying for quality, as they are with Netflix, but they also won’t engage with your video ad if it is slow or skippy. If you can reduce the cost, complexity, and pain of delivering video ads, your entire business model makes more sense.

The key is creating a good experience while not breaking the bank. Only the biggest players, like Netflix and Amazon, can spend crazy bank now in order to buy their slice of future markets — and even so, they’d rather spend it on making award-winning shows and movies, not on fiber and bare metal. To be in the game for the video-saturated present and future, pay attention to what’s going on behind the scenes and look to Cedexis and Openmix for intelligent orchestration and optimized, automated control.

Re-Writing Streaming Video Economics


The majority of Americans – make that the vast majority of American Millennials – stream video. Every day, in every way. From Netflix to Hulu, YouTube to Twitch, CBS to HBO, there is no TV experience that isn’t being accessed on a mobile phone, a tablet, a PC, or some kind of streaming device attached to a genuine, honest-to-goodness television.

The trouble is, we aren’t really paying for it: just 9% of a household’s video budget goes to streaming services, while the rest goes to all the usual suspects: cable companies, satellite providers, DVD distributors, and so forth. This can make turning a profit a tricky proposition – Netflix has only just started to churn out ‘material profits’, Hulu is suspected to be losing money, and Amazon is unlikely ever to break out the profitability of its Prime video service from the other benefits of the program.

The challenge is there are really only so many levers that can be pulled to make streaming video profitable:

  1. Charge (more) for subscriptions: except that when the cost goes up, adoption goes down, and decelerating growth is anathema to a start-up business
  2. Spend less on (licensing/making/acquiring) content: except that if the content quality misses, audience growth will follow it
  3. Spend less on delivering the content: except that if the quality goes down, audiences will depart, never to be seen again

One and two are tricky, and rely upon the subjective skills of pricing and content acquisition experts. Number three though…maybe there’s something there that is available to everyone.

And indeed, there is. Most video traffic these days travels across Content Delivery Networks (CDNs), which do yeoman work caching popular content around the globe, and much of the heavy lifting in working out the quickest way to get content from publisher to consumer. Over the years, these vital members of the infrastructure have gradually improved and refined their craft, to the point where they are about as reliable as they can be.

That said, no Ops team ever likes to have a single point of failure, which is why almost all large-scale internet outfits contract with at least two – if not more – CDNs. And that’s where the opportunity arises: it’s almost a guarantee that with two contracts, there will be differences in pricing for particular circumstances. Perhaps there is a pre-commit with one, or a time-of-day discount on the other; perhaps they simply offer different per-GB pricing in return for varying feature sets.

With Openmix, you can actually build an algorithm that doesn’t just eliminate outages, and doesn’t just ensure consistent quality: you can make decisions on where to send the traffic based on financial parameters, once you’ve ensured that the quality isn’t going to drop.
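As a hedged illustration (the contract shapes and rates below are invented, not drawn from any real CDN price list), the financial side of such an algorithm can be as simple as computing an effective per-GB price for each contract at decision time: traffic that still fits under a pre-commit is already paid for, and a time-of-day discount shifts which provider is cheapest over the course of a day.

```python
# Hypothetical sketch: two CDN contracts with different pricing quirks.
# All names and rates are invented; the point is that "cheapest" changes
# with the hour and with how much pre-committed volume remains.

from datetime import datetime, timezone

def effective_price(contract, when, gb_left_on_commit):
    if gb_left_on_commit > 0:
        return 0.0                      # still inside the pre-commit: already paid
    rate = contract["per_gb"]
    if contract.get("off_peak_discount") and when.hour in contract["off_peak_hours"]:
        rate *= 1 - contract["off_peak_discount"]
    return rate

cdn_a = {"name": "cdn-a", "per_gb": 0.030}
cdn_b = {"name": "cdn-b", "per_gb": 0.035,
         "off_peak_discount": 0.4, "off_peak_hours": range(0, 7)}

now = datetime.now(timezone.utc)
commit_gb_left = {"cdn-a": 120.0, "cdn-b": 0.0}

prices = {c["name"]: effective_price(c, now, commit_gb_left[c["name"]])
          for c in (cdn_a, cdn_b)}
print(prices, "-> route to", min(prices, key=prices.get))
```

Layer a quality check on top, as the paragraph above insists, and you have the skeleton of a cost-aware, quality-safe routing decision.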

All of a sudden you have a way to pull the third lever – without triggering the nasty side effects that make each lever a mixed blessing. You can reduce your cost without putting your quality at risk – it’s a win/win.

We’d love to show you more about this, so if you’re at NAB this week, do stop by.

Caching at The Edge: The Secret Accelerator

Think about how much data has to move between a publisher and a whole audience of eager viewers, especially when that content is either being streamed live, or is a highly-anticipated season premiere (yes, we’re all getting excited for the return of GoT). Now ask yourself where there is useless repetition, and an opportunity to make the whole process more efficient for everyone involved.

Do so, and you come up with the Streaming Video Alliance-backed concept of Open Caching.

The short explanation is this: popular video content is detected and cached by ISPs at the edge; then, when consumers want to watch that content, they are served from local caches, instead of forcing everyone to pass a net-new version from origin to CDN to ISP. The amazing thing is how much of a win/win/win it really is:

  • Publishers and CDNs don’t have to deliver as much traffic to serve geographically-centered audiences
  • ISPs don’t have to pull multiple identical streams from publishers and CDNs
  • Consumers get their video more quickly and reliably, as it is served from a source that is much closer to them
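To make the short explanation above concrete, here is a toy sketch of the caching behavior (the popularity threshold and content IDs are invented; real open-caching deployments coordinate this between publisher, CDN, and ISP): a title is pulled from upstream until it proves popular, after which it is served from the local edge cache.

```python
# Toy sketch of the open-caching idea: an ISP-side cache that starts
# serving a title locally once it has been requested "enough" times.
# The threshold, IDs, and payloads are invented for illustration.

class EdgeCache:
    def __init__(self, popularity_threshold=3):
        self.threshold = popularity_threshold
        self.request_counts = {}
        self.cache = {}

    def fetch(self, content_id, pull_from_upstream):
        if content_id in self.cache:
            return self.cache[content_id], "served from edge cache"
        self.request_counts[content_id] = self.request_counts.get(content_id, 0) + 1
        payload = pull_from_upstream(content_id)    # origin -> CDN -> ISP
        if self.request_counts[content_id] >= self.threshold:
            self.cache[content_id] = payload        # now popular: keep it local
        return payload, "pulled from upstream"

edge = EdgeCache()
for _ in range(5):
    _, source = edge.fetch("got-s07e01", lambda cid: f"<bytes of {cid}>")
    print(source)   # upstream three times, then the edge cache takes over
```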

A set of trials opened up in January, featuring some of the biggest names in streaming video: ViaSat, Viacom, Charter, Verizon, Yahoo, Limelight Networks, MLBAM, and Qwilt.

If this feels a bit familiar, it should: Netflix has essentially built exactly this (they call it Netflix Open Connect), by placing hardware within IXPs and ISPs around the world – some British researchers have mapped it, and it’s fascinating. And, indeed, they recently doubled down in India, deploying cached versions of their catalog (or at least the most used elements of it) all around that country. The bottom line is that the largest streaming video provider (accounting for as much as 37% of all US Internet traffic) understands that the best experience is delivered by having the content closer to the consumer.

As it turns out, ISPs are flocking to this technology for all the reasons one might expect: it gives them back some control over their networks, and provides the opportunity to get off the backhaul treadmill. By pulling, say, a live event one time, caching it at the edge, then delivering from that edge cache, they can substantially reduce their network volume and make end customers happy.


And yet – most publishers are only vaguely aware that this is happening (if you’re all up to speed on ISP caching, consider yourself ahead of the curve). Part of the reason is that when ISPs cache content that has traveled their way through a CDN, they preserve the headers – so the traffic isn’t necessarily identifiable as having been cached. And, indeed, if you have video monitoring at the client, those headers are being used, potentially making the performance of a given CDN look even better than it already is, because content is being served at the edge by the ISP. The ISP, in other words, is making not only the publisher look good, with excellent QoE – they’re also making the CDN look like a rock star!

To summarize: the caching that is happening at the ISP level is like a double-super-secret accelerator for your content, whose impact is currently difficult to measure.

It’s also, however, pretty easy to break. Publishers who opt to secure all their traffic essentially eliminate the opportunity for the ISP to cache their content, because the caching intelligence can’t identify what the file is or whether it needs caching. Now, that’s not to say the challenge is insurmountable – APIs and integrations exist that allow the ISP to re-enter the fray, decrypt that secure transmission, and get back to work making everyone look good by delivering quickly and effectively to end consumers.

So if you aren’t yet up to speed on open caching, now is the time to do a little research. Pop over to the Streaming Video Alliance online and learn more about their Open Caching working group today – there’s nothing like finding out you deployed a secret weapon, without even knowing you did it.

 

Better OTT Quality At Lower Cost? That Would Be Video Voodoo

According to the CTA, streaming video now claims as many subscribers as traditional Pay TV. Another study, from the Leichtman Research Group, proposed that more households have streaming video than have a DVR. However accurate – or wonkily constructed – these statistics may be, what’s not up for grabs is that more people than ever are getting a big chunk of their video entertainment over the Web. Given the infamous AWS outage, this means that providers are constantly at risk of seeing their best-laid plans laid low by someone else’s poor typing skills.

Resiliency isn’t a nice-to-have, it’s a necessity. Services that were knocked out last week owing to AWS’ challenges were, to some degree, lucky: they may have lost out on direct revenue, but their reputations took no real hit, because the core outage was so broadly reported. In other words, everyone knew the culprit was AWS. But it turns out that outages happen all the time – smaller, shorter, more localized ones, which don’t draw the attention of the global media, and which don’t supply a scapegoat. In those circumstances, a CDN glitch is invisible to the consumer, and is therefore not considered: when the consumer’s video doesn’t work, only the publisher is available to take the blame.

It’s for this reason that many video publishers that are Cedexis customers first start to look at breaking from the one-CDN-to-rule-them-all strategy, and look to diversify their delivery infrastructure. As often as not, this starts as simply adding a second provider: not so much as an equal partner, but as a safety outlet and backup. Openmix intelligently directs traffic, using a combination of community data (the 6 billion measurements we collect from web users around the world each day) and synthetic data (e.g. New Relic and CDN records). All of a sudden, even though outages don’t stop happening, they do stop being noticeable, because they are simply routed around. Ops teams stop getting woken up in the middle of the night, Support teams stop getting sudden call spikes that overload the circuits, and PR teams stop having to work damage control.

But a funny thing happens once the outage distractions stop: there’s time to catch a breath, and realize there’s more to this multi-CDN strategy than just solving a pain. When a video publisher can seamlessly route between more than one CDN, based on each one’s ability to serve customers at an acceptable quality level, there is a natural economic opportunity to choose the best-cost option – in real time. Publishers can balance traffic based simply on per-GB pricing; ensure that commits are met, but not exceeded until every bit of pre-paid bandwidth throughout the network is exhausted; and distribute sudden spikes to avoid surge pricing. Openmix users have reported cost savings reaching low to mid double-digit percentages – while delivering a superior, more consistent, more reliable service to their users.
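A minimal sketch of that commit-aware logic, with invented volumes and rates (and with the quality filtering discussed above omitted for brevity): traffic goes to pre-paid capacity first, and only once every commit is exhausted does the allocator start comparing overage rates.

```python
# Illustrative only: commit-aware balancing across two CDNs. Figures are
# invented; real routing would also filter candidates on live QoE data.

cdns = [
    {"name": "cdn-a", "commit_gb_left": 500.0, "overage_per_gb": 0.040},
    {"name": "cdn-b", "commit_gb_left": 200.0, "overage_per_gb": 0.025},
]

def route_chunk(cdns, chunk_gb):
    """Send traffic to pre-paid capacity first, else to the cheapest overage."""
    with_commit = [c for c in cdns if c["commit_gb_left"] >= chunk_gb]
    if with_commit:
        target = max(with_commit, key=lambda c: c["commit_gb_left"])
        target["commit_gb_left"] -= chunk_gb
        return target["name"], 0.0
    target = min(cdns, key=lambda c: c["overage_per_gb"])
    return target["name"], chunk_gb * target["overage_per_gb"]

name, cost = route_chunk(cdns, chunk_gb=50.0)
print(name, f"${cost:.2f}")   # cdn-a $0.00 -- still inside its commit
```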

Call it Video Voodoo: it shouldn’t be possible to improve service reliability and reduce the cost of delivery…and yet, there it is. It turns out that eliminating a single point of failure introduces multiple points of efficiency. And, indeed, we’ve seen great results for companies that already have multiple CDN providers: simply avoiding overages on each CDN until all the commits are met can deliver returns that fundamentally change the economics of a streaming video service.

And changing the economics of streaming is fundamental to the next round of evolution in the industry. Netflix, the 800-pound gorilla, has turned over more than $20 billion in revenue over the last three years, yet generated less than half a billion in net income – a margin in the low single digits; Hulu (privately- and closely-held) is rumored to have racked up $1.8B in losses so far and still to be generating red ink on some $2B in revenues. The bottom line is that delivering streaming video is expensive, for any number of reasons. Any engine that can measurably, predictably, and reliably eliminate cost is not just intriguing for streaming publishers – it is mandatory to at least explore.

Mobile Video is Devouring the Internet

In late 2009 – fully two years after the introduction of the extraordinary Apple iPhone – mobile was barely discernible on any measurement of total Internet traffic. By late 2016, it finally exceeded desktop traffic volume. In a terrifyingly short period of time, mobile Internet consumption moved from an also-ran to a behemoth, leaving behind the husks of marketing recommendations to “move to Web 2.0” and to “design for Mobile First”. And along the way, Apple encouraged us to buy into the concept that the future (of TV at least) is apps.

Unsurprisingly, the key driver of all this traffic is – as it always is – video. One in every three mobile device owners watches videos of at least 5 minutes’ duration, which is generally considered the point at which the user has moved from short-form, likely user-generated, content to premium video (think: TV shows and movies). And once viewers pass the 5-minute mark, it’s a tiny step to full-length, studio-developed content, which is a crazy bandwidth hog. Consider that video is expected to represent fully 75% of all mobile traffic by 2020 – when it was just 55% in 2015.


As consumers get more interested in video, producers aren’t slowing down. By 2020, it is estimated that it would take an individual fully 5 million years to watch the video being published and made available in just a month. And while consumer demand varies around the world – 72% of Thailand’s mobile traffic is video, for instance, versus just 41% in the United States – the reality is that, without some help, the mobile Web is going to be straining under the weight of near-unlimited video consumption.

What we know is that, hungry as they are for content, streaming video consumers are fickle and impatient. Akamai demonstrated years ago the 2-second rule: if a requested piece of content isn’t available in under 2 seconds, Internet users simply move on to the next thing. And numerous studies have shown definitively that when re-buffering (the dreaded pause in playback while the viewing device downloads the next section of the video) exceeds just 1% of viewing time, audience engagement collapses, resulting in dwindling opportunities to monetize content that was expensive to acquire, and can be equally costly to deliver.

How big of a problem is network congestion? It’s true that big, public, embarrassing outages across CDNs or ISPs are now quite rare. However, when we studied the network patterns of one of our customers, we found that what we call micro-outages (outages lasting 5 minutes or less) happen literally hundreds to thousands of times a day. That single customer was looking at some 600,000 minutes of direct lost viewing time per month – and when you consider how long each customer might have stayed, and their decreased inclination to return in the future, that number likely translates to several million minutes of indirect losses.

While mobile viewers are more likely to watch their content through an app (48% of all mobile Internet users) than a browser (18%), they still receive the content through the chaotic maelstrom of a network that is the Internet. As such, providers have to work out the best pathways to use to get the content there, and to ensure that the stream will have consistency over time so that it doesn’t fall prey to the buffering bug.

Most providers use stats and analysis to work out the right pathways – they can look at how various CDN/ISP combos are performing, and pick the one that is delivering the best experience. Strikingly, though, they often have to make routing decisions for audience members in geographical locations that aren’t currently in play, which means choosing a pathway without any recent input on which will perform best – literally gambling with the experience of each viewer. What is needed is something predictive: something that helps the provider know the right pathway the first time they have to choose.

This is where the Radar Community comes in: by monitoring, tracking, and analyzing the activity of billions of Internet interactions every day, the community knows which pathways are at peak health, and which need a bit of a breather before getting back to full speed. So, when using Openmix to intelligently route traffic, the Radar community data provides the confidence that every decision is based on real-time, real-user data – even when, for a given provider, they are delivering to a location that has been sitting dormant.
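Here’s a simplified sketch of that fallback logic. The data layout and staleness window are invented for illustration (this is not how Radar and Openmix are actually wired together): prefer fresh first-party measurements for a geography, and fall back to community data when your own last sample has gone stale.

```python
# Invented data layout: measurements are {cdn: {"latency_ms": ..., "ts": ...}}
# keyed by geography. Real systems blend many metrics, not just latency.

import time

STALE_AFTER_SECONDS = 600

def best_cdn(geo, own_data, community_data, now=None):
    now = now or time.time()
    fresh = {cdn: m for cdn, m in own_data.get(geo, {}).items()
             if now - m["ts"] < STALE_AFTER_SECONDS}
    source = fresh if fresh else community_data.get(geo, {})
    if not source:
        raise LookupError(f"no measurements at all for {geo}")
    return min(source, key=lambda cdn: source[cdn]["latency_ms"])

own = {"br-sao-paulo": {"cdn-a": {"latency_ms": 80, "ts": time.time() - 3600}}}
community = {"br-sao-paulo": {"cdn-a": {"latency_ms": 95, "ts": time.time()},
                              "cdn-b": {"latency_ms": 60, "ts": time.time()}}}

# Our own sample is an hour old, so the community's fresher view wins.
print(best_cdn("br-sao-paulo", own, community))   # cdn-b
```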

Mobile video is devouring the Web, and will continue to do so, as consumers prefer their content to move, dance, and sing. Predictively re-routing traffic in real-time so that it circumvents the thousands of micro-outages that plague the Internet every day means never gambling with the experience of users, staying ahead of the challenges that congestion can bring, and building the sustainable businesses that will dominate the new world of streaming video.

Tracking Video QoS Just Got A Whole Lot Easier

If you follow this blog, you know we’ve mentioned before that we’ve been working with innovative customers on new ways to track video Quality of Service (QoS) metrics and make sense of them.

It’s exciting therefore to share that now anyone and everyone can track video QoS in Radar.

Video is fundamentally different from a lot of other online content: not only is it huge (projections are that in the next four or five years video will make up as much as 80% of Internet traffic), it is inherently synchronous. Put another way, your customer might not notice if a page takes an extra second or two to load, but they surely notice if their favorite prime-time show keeps stalling out and showing the re-buffering spinner. So our new Performance Report focuses on the key elements that matter to viewers (a sketch of how these can be computed from raw playback events follows the list), specifically:

  • Response Time: how long it takes the content source to respond to a request from the intended viewer. Longer is worse!
  • Re-Buffering Ratio: the share of viewing time spent with the content stalled, the viewer frustrated, and the player trying to catch up. Lower is better!
  • Throughput: the speed at which chunks of the video are being delivered to the player after request. Faster is better!
  • Video Start Time: how long it takes for the video to start after viewer request. Shorter is better!
  • Video Start Failures: the percentage of requested video playbacks that simply never start. Lower is better!
  • Bitrate: the actual bitrate experienced by the viewer (bitrate is a pretty solid proxy for picture quality, as the larger the bitrate, the higher the likely resolution of the video). In this case, higher or lower may be better, depending on your KPIs.

Once you enable the tag for your account and add it to your video-bearing pages (see below), you’ll be able to track all these for your site. And, as with all Radar reports, you can slice and dice the results in all sorts of different ways to get a solid picture of how your service is doing, video-wise. Analyses might include:

  • How do my CDNs compare at different times of day, in different locations, or on different kinds of device?
  • What is the statistical distribution of service provided through my clouds? Does general consistency hide big peaks and valleys, or is service generally within a tight boundary?
  • What is the impact of throughput fluctuations to bitrates, video start times, or re-buffering ratios? What should I be focused on to improve my service for my unique audience?

In no time, you’ll have a deep and clear sense of what’s going on with video delivered through your HTML5 player, and be able to extrapolate this to make key decisions on CDN partnering, cloud distribution, and global server load balancing solutions. The ability to really dig down into things like device type and OS – as well as the more expected geography, time, delivery platform, and so forth – means you’ll be able to isolate issues that are not, in fact, delivery-related: for instance, it is possible to see a dip in quality and assume it’s cloud-related, only to discover, in drilling down, that the drop occurs on only one particular device/OS combination, and thus uncover a hiccup in a new product release.
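A tiny illustration of that kind of drill-down, with invented sample data: grouping re-buffering ratios by device/OS combination makes a regression that hides in the overall average jump out immediately.

```python
# Invented measurements: the overall average looks mediocre, but the
# grouping shows the problem is confined to one device/OS combination.

from collections import defaultdict

samples = [
    {"device": "phone",  "os": "ios",     "rebuffer_ratio": 0.004},
    {"device": "phone",  "os": "android", "rebuffer_ratio": 0.005},
    {"device": "tablet", "os": "android", "rebuffer_ratio": 0.048},
    {"device": "tablet", "os": "android", "rebuffer_ratio": 0.051},
    {"device": "pc",     "os": "windows", "rebuffer_ratio": 0.003},
]

by_combo = defaultdict(list)
for s in samples:
    by_combo[(s["device"], s["os"])].append(s["rebuffer_ratio"])

for combo, ratios in sorted(by_combo.items()):
    print(combo, f"{sum(ratios) / len(ratios):.1%}")
# ('tablet', 'android') stands out -- a product hiccup, not a delivery problem
```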

So here’s the scoop. Collecting these QoS metrics isn’t just easy – it’s free, just like our other Radar real user measurements. With the video QoS, you’ll be tracking your own visitors’ experiences, and be able to compare them over time.

The tag works with HTML5 players running in a browser, and it unsurprisingly takes a bit more planning to implement than our standard tag, so you’ll likely want to drop us a line to get started. We’ll be delighted to help you get this up and running – just contact us by going to your Portal and navigating to Impact -> Video Playback Data, then clicking the Contact button.

Fixing The Real OTT Challenge: Monetization


HBO remains a jewel in the crown of Time Warner, while also being a bit of a problem. One of the issues is the slowdown in growth of HBO Direct: its subscriber count has rather publicly sat at give-or-take-a-million since shortly after launch, and stubbornly refuses to keep moving. There are a million reasons for this (not least of which is that the hundred million or so cable and satellite subscribers can simply buy it for the TV, then use HBO Go on the move), but it’s illustrative of what will likely be the number one issue of 2017 for streaming video: monetization. And when we say monetization, we don’t just mean attracting revenue – we mean doing so profitably.

You can logically start at the top: what’s the story on Netflix? The last quarter reported (Q3 2016) shows revenue of $2.3B, net income of $51M, and negative cash flow of $461M. Now let’s not beat them up – they have a solid record of being mildly profitable (the numbers above imply a 2.2% net profit) – and there are signs that 2017 could very much be the year they start printing money. The thing is, though – Netflix is the 800-pound gorilla in the space, consuming over a third of total Internet bandwidth during peak hours. If they are not cash flow positive (with, it has to be said, some high expectations for Q4 numbers), what is everyone else to do?

We see plenty of news about how many subscribers have, or have not, signed up for a service – but that is only one side of the balance sheet. Profit, after all, is made up of what is left after all the outgoing bills have been paid with the incoming revenue. So while subscriber growth is essential to ramping up the revenue side, there is a corresponding need to focus on the cost side of the ledger. There are a number of real challenges here:

  • The cost of content. Our friends over at StreamingMedia have an important piece about the cost of content, noting that even regular Pay TV companies can end up losing money on particularly expensive content. The simple reality is that, with so many renegade start-ups, the price for good content is being bid up to the point where it is hard, if not impossible, to make money. Netflix has content obligations into the future valued at $13B (heck, Westworld cost a reported $100M).
  • The inconsistency of the market. Think about it: Netflix, with its whole library, comes in at $8 to $12 a month; HBO Direct costs $15. The smallest package from SlingTV can be had for a cool $20, where PlayStation Vue weighs in at $40 – and offers lots of options to build up an account whose price can rival any cable bill. Acorn TV, which offers British content, is $5 a month, which makes it kind of expensive next to Netflix, but kind of cheap next to HBO. Consumers have no real way to compare and contrast models, which makes the economics of the market chaotic, complex, and confusing.
  • The dizzying delivery system. Getting video from origin to consumer is as complex an endeavor as the human race has ever chosen to undertake. One of the key enablers has been the CDNs, who distribute points of presence (POPs, also known as edges) around the world in an effort to bring content closer to consumers. CDNs, though, charge for each byte of information that flows through them, meaning that each new consumer creates new cost.

Content costs will, eventually, stabilize, as supply and demand meet, and as the flow of investment money for start-ups slows a little and returns to a level at which content can be provided profitably. Similarly, market forces will eventually create an equilibrium, as consumers signal through their actions the acceptable cost for services. Each of these two elements, however, relies upon market activity that can be only partially influenced by any individual organization.

By contrast, the cost of delivery can be managed by any organization, by rationalizing the way in which traffic is assigned to delivery partners. Organizations around the world are even now building out Hybrid CDN architectures, in which some combination of CDNs and private clouds is used to route traffic across pathways that can deliver content at high quality, but at the lowest total cost. For instance, consumers in close proximity to the origin may be served by internally-managed caching servers; those in geographically dispersed areas with relatively high consumer concentration may be served by servers controlled within a private cloud; and others in further-flung regions may be served by locally-focused CDNs. Using intelligent algorithms to ensure quality of experience (QoE) meets expectations, while selecting the cheapest route that can do it, can drive overall delivery costs down by economically meaningful levels.
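Sketched as code (the regions, tiers, and names are all invented), the policy just described boils down to a lookup plus an escalation path for when the cheap tier can’t hold quality:

```python
# Invented tiers and regions illustrating the hybrid-CDN policy above.
# A real system would decide per request, using live QoE measurements.

DELIVERY_TIERS = {
    "near_origin":        "internal-cache",   # same metro as the origin
    "high_concentration": "private-cloud",    # dense, dispersed audiences
    "far_flung":          "regional-cdn",     # everywhere else
}

REGION_TIER = {
    "us-east":    "near_origin",
    "us-west":    "high_concentration",
    "eu-west":    "high_concentration",
    "apac-south": "far_flung",
}

def delivery_target(region, qoe_ok=lambda target: True):
    target = DELIVERY_TIERS[REGION_TIER.get(region, "far_flung")]
    # If the cheap tier can't hold QoE, escalate to the regional CDN.
    return target if qoe_ok(target) else DELIVERY_TIERS["far_flung"]

for region in ("us-east", "apac-south", "unmapped-region"):
    print(region, "->", delivery_target(region))
```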

Ultimately, profitability in the streaming video space will be determined by the ability of service providers to attract and retain audiences, and their ability to control whatever costs they can. Starting with delivery costs makes the most sense, as it can be undertaken today, and without the cooperation of other market participants.

Want to know more about monetization trends in the OTT space? Join us at Digital Entertainment World on February 1 and 2 and visit us at our booth. And don’t miss our VP of Marketing, Rob Malnati, as he discusses this very topic with other experts from companies like OWNZONES, Pluto.tv, ITV Alliance, and Plex at 3:30pm on February 1st.

Cedexis Buffer Killer Wins TVTechnology Best Product Innovations Award

When we released the Buffer Killer, we knew it was going to have a very real impact on the industry. For the first time, real user measurements (RUM) could be used to track and analyze video quality of experience (QoE), and to re-direct traffic to the right CDNs or Origins through an easily-configured interface. We’ve been working with a group of extraordinary early adopters, who have helped us build the product into a powerhouse for guaranteeing world-class QoE, while managing for cost.

It was a real honor, therefore, for us to be notified this week that the Buffer Killer had been awarded a Best Product Innovations 2016 award by TVTechnology. Finding ourselves standing alongside industry powerhouses like IBM and Verizon was humbling, but exciting. It is a clear sign that we’re onto something special here.

If you haven’t already looked at our Buffer Killer solution, you can learn all about it here. The short version is that we are now collecting in Radar not only the community RUM data you already know, but also video-specific metrics, like video start success, bitrates and re-buffering ratios. These then feed Openmix to drive real-time traffic decisions, moving between CDNs, origins, and private clouds, to deliver problem-free streaming for consumers, even when the Internet is at its busiest.

We suspect what the judges liked about the Buffer Killer was its flexibility. Using an easily-accessible UI, it can be configured to make really sophisticated decisions – like weighting decisions based on video start success rates or monthly purchasing commitments, so that consumers get not just the best QoE, but rather the best QoE that makes financial sense. The visualization of all the data follows the high standard of Radar Live, and makes it easy to spot the correlations between different metrics.  For instance, this graph clearly shows the relationship between throughput and bitrate:

[Graph: Buffer Killer report showing the relationship between throughput and bitrate]

Slicing and dicing to see how things are going in a particular ISP, CDN, or geography is a breeze, and we’re hearing good things all around.

If you’re interested in learning more about the (now award-winning!) Buffer Killer, drop us a line, or catch us at our upcoming conference and trade show events – we’ll be at CES and would love to connect there.

Cedexis Improves Internet Video Quality of Experience for Millions of Users of Popular Websites


Leverages Cloud Platforms to minimize buffering and ensure fast video starts


Portland, Oregon & Paris, France — September 14, 2015 — Cedexis, the leading provider of Internet measurement and real-time performance-optimization solutions, announces the world’s first crowd-sourced video-optimization service, bringing improved Internet video delivery to millions of online viewers for a growing list of popular websites and mobile applications.

Cedexis’ video solution has been referred to by several early-adopter customers as the “Buffer Killer,” and indeed, that is what it does. On average, Cedexis video customers have experienced a 45% improvement in buffering, an 18% improvement in video start time, and a decrease in video failures of up to 44%. A study by TubeMogul states that “4 out of 5 people will leave a video if it pauses to buffer.”


Cedexis customers, such as recently added ViewLift – the over-the-top (OTT) technology hub for SnagFilms, Funny for Free, and other popular OTT sites – take a best-of-breed approach to delivering video with high QoS for ever more demanding end users.

“Video performance is key to user retention.  We use Cedexis Openmix to ensure our users are directed to the best-performing infrastructure in real time, every time,” said Manik Bambha, Chief Digital Officer and CTO at ViewLift.  “We’ve seen a 65% decrease in buffering incidents since deploying Cedexis Openmix.  These QoS improvements deliver better quality to the end user, resulting in incredibly low viewer-abandonment rates.”

PBS, another great customer, has this to say about their experience using Cedexis for their video deployment:  “For PBS Digital, having 100% uptime and great performing video is critical. We implemented Cedexis specifically to improve our Video Quality of Service and it’s been a great decision,” said Mike Norton, Senior Director of Technical Operations at PBS Digital.

Critical to this solution is the real-time performance monitoring provided by the free Cedexis Radar community of 800+ enterprises and every major Cloud and CDN provider in the world.  Collecting billions of Internet performance metrics a day provides a real-time map of where the Internet is humming along nicely, and where interconnections between ISPs and cloud service providers, who host and distribute the world’s content, are congested or disrupted.

As video encoding moves from 720p to 1080p and now 4K, the demands on Internet Service Providers, Content Delivery Networks (CDNs) and Data Center/Cloud provider networks can become significant bottlenecks that interrupt the seamless delivery of high-resolution content. Cedexis customers have embraced a strategy of using multiple CDNs and/or cloud providers to avoid single points of failure or congestion. These strategies are commonly referred to as “Multi-CDN” or “Hybrid-CDN” architectures, and bring together the best that each partner has to offer through real-time, data-driven, global traffic management between the public providers or private data centers.

“With video representing two thirds of Internet traffic today and growing, it is clear that new strategies are needed to deliver the quality of experience end users have become used to from broadcast TV services. Delivering a world-class video experience over the shared Internet requires a depth of Internet performance visibility and real-time, data-driven traffic management that only Cedexis brings together,” said Robert Malnati, VP of Marketing & Business Development at Cedexis.

Cedexis customers can now easily benefit from the unique insight provided by the Cedexis Radar real-user Internet performance monitoring community, by having their video player client software – be it from a website, mobile app, smart TV or connected gaming console – “talk” directly to the Cedexis global platform.  Alternatively, enterprises can connect their Content Management Systems (CMS) to Cedexis’ platform to make them aware of Internet traffic conditions and outages.

Cedexis’ expertise in Internet video delivery is well recognized by leading industry associations in both the US and EU.

  • Streaming Media 100 most influential OTT Video solution award
  • Online Video Trophy NETINEO award
  • Standards body participant of the Streaming Video Alliance

Learn more about the Cedexis Buffer Killer solution for streaming OTT video.

About Cedexis

Cedexis provides Web-scale, end-user-experience monitoring and real-time traffic routing across multiple clouds and networks.   Cedexis Radar crowd sources billions of real user measurements (RUM) a day from a community of over 800 enterprises.  Radar data provides real-time visibility into how cloud/network performance is impacting the experience of Web and mobile application users.  Cedexis Impact provides the correlation of end-user performance to business KPIs, enabling enterprises to maximize Web performance investments.  Openmix uses this insight to route traffic for best performance, or availability, or cost, or any mix of the three.  Cedexis is trusted by over 800 global brands including Accor Hotels, Airbus, Cartier, Comcast, LinkedIn, Mozilla, Nissan and Shutterstock.   Cedexis is headquartered in Portland, Oregon with offices in Paris, France, San Francisco, CA, Brooklyn, NY and London, UK.

# # #

Press contact (USA-Canada):
Frances Mann-Craik
Addison Marketing
Mail: frances@addisonmarketing.com
Tel: +1 408 868-9577

Cedexis announces new video solution: let's take a deeper dive.

Last week, we introduced a new video solution that has been dubbed the “Buffer Killer”.

Today, we take a deeper dive into the real value that the new solution provides.

There are two really interesting use cases that have driven the Cedexis “Buffer Killer” solution. The first is player-driven mid-stream switching, and the second is CMS ingest. Before we dive directly into those, let’s take a second to understand where this solution falls in the Cedexis product family.


Openmix DNS is the traditional Global Traffic Management solution that Cedexis has always provided. The most common method for managing traffic, Openmix DNS uses DNS records to direct traffic. A Canonical Name record, or CNAME, is used to route users to the best-performing CDN or cloud infrastructure element in the vendor’s portfolio.

The second method (and what is really new here) is Openmix Web Services. This new release exposes a RESTful API that allows customers maximum control with regard to delivery. The new service uses the HTTP interface at the video player or CMS level to request the best-performing CDN, given a set of configured priorities. The Cedexis Openmix HTTP interface provides several important advantages over the CNAME method. First, it allows for a tightly-coupled video player integration while letting Cedexis make routing decisions out-of-band. Second, it allows for greater video client intelligence by considering multiple metrics, such as device and browser type. Finally, the HTTP interface provides greater levels of flexibility in decision making (a sketch of such a call follows the list below).

The Openmix HTTP API provides:

  • Query per ASN across multiple platforms
  • The query returns the best-performing CDN
  • JSON responses from HTTP Openmix allow for the player or CMS to dynamically alter behavior to achieve top performance
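In practice, a call might look something like this sketch. To be clear, the endpoint URL and JSON fields below are placeholders that show the flow, not the documented Openmix Web Services interface.

```python
# Hypothetical client-side call: the URL and response shape are invented
# placeholders, not the documented Openmix Web Services interface.

import json
from urllib.request import urlopen

def best_cdn_for_session(app_id, fallback="cdn-a"):
    url = f"https://openmix.example.invalid/decision/{app_id}"   # placeholder
    try:
        with urlopen(url, timeout=2) as resp:    # keep it fast: this sits on
            decision = json.load(resp)           # the playback start path
        # Imagined response: {"candidates": [{"provider": "cdn-a", "score": 97}, ...]}
        ranked = sorted(decision["candidates"], key=lambda c: -c["score"])
        return ranked[0]["provider"]
    except OSError:
        return fallback    # never block playback on a failed decision call
```

A player would make this kind of call before building the manifest URL, and could re-query mid-stream if performance degrades; a CMS could do the same server-side at session start.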

Now, let’s see a few details about how this looks in practice.

Mid-Stream Switching

Mid-stream CDN switching is empowered by segmented Adaptive Bitrate (ABR) streaming over HTTP. With this technology, the source video is encoded at several different bitrates. Each distinct bitrate asset is then broken into segments that last from two to ten seconds. The video player switches between the different bitrate segments, using the user’s bandwidth, buffer queues, and other factors to determine the best bitrate for the situation. In contrast, the outdated progressive download method receives an entire file or stream at the same bitrate from the same origin, regardless of changes in performance, availability, and bandwidth.

[Diagram: video chunks served from various CDNs]
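The player-side choice can be sketched in a few lines (the bitrate ladder, safety factor, and buffer threshold are invented): pick the highest encoded bitrate that the measured bandwidth, discounted for safety, and the current buffer level can sustain.

```python
# Toy version of the player-side ABR decision described above.
# Ladder values, safety factor, and buffer threshold are invented.

BITRATE_LADDER_KBPS = [400, 800, 1600, 3200, 6000]

def pick_bitrate(measured_bandwidth_kbps, buffer_seconds,
                 safety=0.8, low_buffer=5.0):
    budget = measured_bandwidth_kbps * safety
    if buffer_seconds < low_buffer:
        budget *= 0.5                 # buffer nearly dry: get conservative
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

print(pick_bitrate(4000, buffer_seconds=12))   # 3200
print(pick_bitrate(4000, buffer_seconds=2))    # 1600
```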

The implementation is quite straightforward. The player initiates a request (using Web Services) to Openmix, asking which CDN is performing best. A JSON package is returned with the scoring. Other player-centric factors can also be considered in the routing decision, and routing to the chosen CDN is done from within the player.


Slurping Openmix Scorings into a Content Management System

Customers have additionally found value in the new Openmix HTTP API by making calls directly from their Content Management System (CMS). These CDN or cloud scores can then be utilized along with other data to make intelligent decisions about traffic routing. The player typically makes a request to the CMS at the beginning of a session. The CMS can, in turn, make a request to Openmix and get a scoring of the best content-source destination. The manifest is then returned to the player, and subsequent calls are made to the CDN directly.

Let’s see how that is done.

[Diagram: the CMS requesting Openmix scores and returning a manifest to the player]

This was a brief overview – for more details on how the new Cedexis RESTful API can improve your online video solution, check out our Video Delivery Whitepaper. It would not surprise us in the least if you and your team come up with new and innovative ways to use this API. If so, please share them with the community!
