Cedexis (Real User Monitoring) Community – It’s business as usual

As you’ve already seen, we are happy to announce that Cedexis is now part of the Citrix family. This naturally raises the question: what happens to the Cedexis RUM community and customers?

Over years of managing projects like the Xen hypervisor and Apache CloudStack, Citrix has learned the value of a great community. We recognize the importance of the RUM community and what it means to Cedexis customers, so first and foremost we want to extend a BIG THANK YOU to all of our contributing RUM community members.

This community is important to us, and we want to continue to build on what Cedexis and you all have spent years putting together. To that end, all current members of the community have our assurance that we will not be changing anything about how you work with the Cedexis team.

So what’s next? To be honest, we’re just starting that conversation. As we get clarity, we’ll be sure to share our plans. In the meantime, it is business as usual and you can rest assured that we are monitoring the community to ensure there is minimal disruption.

If you have any comments, concerns, or just thoughts on the matter, please don’t hesitate to share on the discussion for this blog or tweet us @NetScaler.

Steve Shah
VP of Product Management at Citrix.

Self-Sourced Data Won’t Get You Where You Want to Go

Crowdsourced intelligence is a powerful concept, brought to reality and fueled by the Internet and mobile devices. Without the communications technology and vast but addressable audience of smartphone addicts, marketing projects dependent on gathering data or pooling microtasks from an undefined, unrelated audience would remain mostly theoretical. Of course, companies outsourced tasks and polled customers for data before crowdsourcing came along, but soliciting a thousand survey responses takes a surprising amount of work – and, importantly, time –  without the mobile Internet. Massive data projects like the Census or NIH-funded longitudinal studies were carried out long before the Internet, but required complex coordination, significant resources, and a long timeline. But ever since the term “crowdsource” was coined more than a decade ago, the rapid, large-scale assembling of ideas, opinions, funds, collaborators, computing power and labor online has become commonplace. In its myriad, still evolving forms, crowdsourcing has proven to be one of the most groundbreaking tools of the mobile era.

Crowdsourcing technology has moved well beyond market research and fundraising. One of the most popular examples is Waze, where driver experience reports, construction plans, emergency dispatches, and traffic metrics are combined with algorithms to produce real-time guidance for drivers. You may drive around the Bay Area a lot and have a strong sense of traffic patterns based on experience. But what happens if there’s an unexpected event – let’s say a tornado in Palo Alto – or you have two hours to get to a meeting near Sacramento, where you’ve never driven before? Do you want to rely on your individual experience, or do you want to consult the Waze app and figure out how to avoid gridlock? (Bonus points for having enough time for a side trip to Starbucks.)

Amazingly, we’re now able to apply the power of crowdsourcing to networks, endpoints, and bytes, just as we do to people and cars.

In the Zettabyte era, community-powered intelligence is a fundamental need if we’re really going to move 20,000 GBps of global Internet traffic without constant outages, slowdowns, and the inefficient yet miraculous workarounds we too often demand from Ops teams — while also controlling costs. Unless you are in the very highest echelon of traffic delivery, your service simply won’t create enough data to truly map the Internet. Heck, even if you got 100 million hits a day, the chances are good they’d all be clustered in a handful of major regions. If you expand into a new market, or your app suddenly becomes popular in Saskatchewan, or for any number of reasons traffic from a previously quiet region surges suddenly (consider the stochastic outcomes of natural disasters, widespread cyber attacks, and political and celebrity happenings), you won’t have enough visibility or intelligence on hand to manage your traffic in that particular –  suddenly vital! –  corner of the Internet. You need to crowdsource the data that will give you the comprehensive view of the Internet you need to avoid outages, ensure high quality of experience (QoE) for users, and make efficient use of your resources.

Let’s use Radar, Cedexis’ real user monitoring community, to illustrate the challenge you’re up against. Our data sets are based on Real User Monitoring (RUM) of Internet performance at every step between the client and the clouds, data centers, or CDNs hosting your application. To source this data, we have created the world’s largest user experience community, in which publishers large and small deposit network observations from their own services, then share the aggregate total, so they can benefit from one another’s presence.

Banding these services together makes it possible to see essentially all the networks in the world each day. We’re talking about all the “little Internets” – the autonomous systems (ASNs) that together make up the global network. Radar collects data from more than 50,000 of these networks daily. More than 130 major service providers feed metrics into the system each day. All told, hundreds of millions of clients generate over 14 billion RUM data points every day. That’s quite a crowd – and one that basically no service could pull together on its own.

Community doesn’t just give you breadth, of course, but also depth: we get at least 10,000 individual data points each day from over 13,000 of those ASNs. You simply can’t glean that kind of traffic intelligence from your own system. Do the math: even if you have 100 million visitors each day, likely less than half are coming from outside the major pockets (the most concentrated ASNs), leaving you very few data points spread across the thousands of remaining ASNs. So when the first visitor of the day turns up from Des Moines, Iowa, or Ulan Bator, or Paris, France, you’d have no data handy to make intelligent traffic decisions.
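
To make the “do the math” point concrete, here is a minimal sketch of the arithmetic. The split between big and small networks and the ASN counts are illustrative assumptions, not Radar figures:

```typescript
// Back-of-the-envelope sketch of the sparsity problem for self-sourced data.
const dailyVisitors = 100_000_000;      // a very successful self-sourced service
const shareOutsideMajorPockets = 0.5;   // assume under half come from smaller ASNs
const longTailAsnCount = 45_000;        // ~50,000 daily networks minus the big pockets

const longTailVisitors = dailyVisitors * shareOutsideMajorPockets;
const avgSamplesPerAsn = longTailVisitors / longTailAsnCount;

console.log(Math.round(avgSamplesPerAsn)); // ≈ 1,111 samples per small ASN per day
// ...and since traffic is heavily skewed, many of those ASNs will see only a
// handful of visitors, or none at all, on any given day.
```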

Everyone needs community intelligence – not just for the newest, least understood pockets of users on the edges of your network. Crowdsourced intelligence drawn from those big, messy pools of real-user data provides the early warning system every Operations team needs to keep the wheels turning, and the users using.

Many countries have thousands of ISPs. In Brazil, the Radar community supplies 10,000 daily data points from 1,595 different ASNs. Russia, Canada, Australia, and Argentina also have enormous diversity of ISPs, especially in relation to their populations. These locations are likely to be central to your business success and global content delivery needs. Having user experience data of this breadth and depth is particularly important where there are so many individual peering agreements and technical relationships, representing countless causes for variable performance. When there are countless ways for application delivery to go wrong, you need granular data to feed into intelligent delivery algorithms to ensure that, in the end, everything goes right.

When you’re trying to manage application and content delivery globally, you need visibility into thousands of ASNs on a continuous basis. Unless you’re Google, you’re not going to touch most of these networks at the depth you need. So unless you have a really cool crystal ball, you have no idea how they are performing. You’d have to generate an enormous amount of traffic to gain a comprehensive, real-time view on your own.

Instead, rely on community-based telemetry to produce the actionable intelligence and predictive power you need to serve your end-users, no matter where they pop up.

With crowdsourced data sets from the global Internet community, you already have instant access to intelligence about QoE, bottlenecks, and slowdowns — before you even know you need it.

Why CapEx Is Making A Comeback

The meteoric rise of both the public cloud and SaaS has brought along a strong preference for OpEx over CapEx. To recap: OpEx means you stop paying for a thing up front and instead pay as you go. If you’ve bought almost any business software lately, you know the drill: you walk away with a monthly or annual subscription rather than a DVD-ROM and a perpetual or volume license.

But the funny thing about business trends is the frequency with which they simply turn upside down and make the conventional wisdom obsolete.

Recently, we have started seeing interest in moving away from pay-as-you-go (rather unimaginatively shortened to PAYGO) as a model, and back toward making upfront purchases and then holding on for the ride as capital items are amortized.

Why? It’s all about economies of scale.

Imagine, if you will, that you are able to rent an office building for $10 a square foot, then rent out the space for $15 a square foot. Seems like a decent deal with a 50% markup; but of course you’re also on the hook for servicing the customers, the space, and so forth. You’ll get a certain amount of relief as you share janitorial services across the space, of course, but your economic ceiling is stuck at that 50%.

Now imagine that you purchase that whole building for $10M and rent out the space for $15M. Your debt payment may cut into profits for a few years, but at some point you’re paid off – and every year’s worth of rent thereafter is essentially all profit.

The first scenario puts an artificial boundary on both risk and reward: you’re on the hook for a fixed amount of rental cost, and can generate revenues only up to 150% of your outlay. You know how much you can lose, and how much you can gain. By contrast, in the second scenario, neither risk nor reward is bounded: with ownership comes risk (finding asbestos in the walls, say), as well as unlimited potential (raise rental prices and increase the profit curve).
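
Putting rough numbers on the second scenario shows where the inflection point sits. This is a minimal sketch of the break-even arithmetic; the rental horizon and operating costs are assumptions, since the example above doesn’t specify them:

```typescript
// Hypothetical figures: buy the building for $10M, collect $1.5M in rent per
// year (the $15M from the example spread over an assumed 10-year horizon), and
// spend $0.3M a year servicing tenants and the space.
function yearsToBreakEven(purchasePrice: number, annualRent: number, annualCosts: number): number {
  return purchasePrice / (annualRent - annualCosts);
}

console.log(yearsToBreakEven(10_000_000, 1_500_000, 300_000).toFixed(1)); // ≈ 8.3 years
// After the break-even point, each additional year of rent is essentially all
// profit – which is exactly the unbounded upside the CapEx scenario buys you.
```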

This basic model applies to many cloud services – and to no small degree explains why so many companies are able to pop up: their growth scales with the services they provision.

If you were to fire up a new streaming video service that showed only the oeuvre of, say, Nicolas Cage, you’d want to have a fairly clear limit on your risk: maybe millions of people will sign up, but then again maybe they won’t. In order to be sure you’ve maximized the opportunity, though, you’ll need a rock-solid infrastructure to ensure your early adopters get everything they expect: quick video start times, low re-buffering ratios, and excellent picture resolution. It doesn’t make sense to build all that out anew: you’re best off popping storage onto a cloud, maybe outsourcing CMS and encoding to an Online Video Platform (OVP), and delegating delivery to a global content delivery network (CDN). In this way you can have a world-class service without having to pony up for servers, encoders, points of presence (POPs), load balancers, and all the other myriad elements necessary to compete.

In the first few months, this would be great – your financial risk is relatively low as you target your demand generation at the self-proclaimed “total Cage-heads”. But as you reach a wider and wider audience, and start to build a real revenue stream, you realize: the ongoing cost of all those outsourced, OpEx-based services is flattening the curve that could bring you to profitability. By contrast, spinning up a set of machines to store, compute, and deliver your content could establish a relatively fixed cost that, as you add viewers, would allow you to realize economies of scale and unbounded profit.

We know that this is a real business consideration because Netflix already did it. Actually, they did it some time ago: while they do much (if not most) of their computation through cloud services, they decided in 2012 to move away from commercial CDNs in favor of their own Open Connect, and announced in 2016 that all of their content delivery needs were now covered by their own network. Not only did this reduce their monthly OpEx bill, it also gave them control over the technology they use to guarantee an excellent quality of experience (QoE) for their users.

So for businesses nearing this OpEx-versus-CapEx inflection point, the time really has arrived to put pencil to paper and calculate the cost of going it alone. The technology is relatively easy to acquire and manage, from servers, to local load balancers and cache servers, on up to global server load balancers. You can see a little bit more about how to actually build your own CDN here.

OpEx solutions are absolutely indispensable for getting new services off the starting line; but it’s always worth keeping an eye on the economics, because with a large enough audience, going it alone can become the smarter play.

Whither Net Neutrality

In case you missed it during what we shall carefully call a busy news week, the FCC voted to overturn net neutrality rules. While this doesn’t mean net neutrality is dead – there’s now a long period of public comments, and any number of remaining bureaucratic hoops to jump through – it does mean that it’s at best on life support.

So what does this mean anyway? And does it actually matter, or is it the esoteric nattering of a bunch of technocrats fighting over the number of angels that can dance on the head of a CAT-5 cable?

Let’s start with a level set: what is net neutrality? Wikipedia tells us that it is

“the principle that Internet service providers and governments regulating the Internet should treat all data on the Internet the same, not discriminating or charging differentially by user, content, website, platform, application, type of attached equipment, or mode of communication.”

Thought of another way, the idea is that ISPs should simply provide a pipe through which content flows, and have neither opinion nor business interest in what that content is – just as a phone company doesn’t care who calls you or what they’re talking about.

The core argument for net neutrality is that, if ISPs can favor one content provider over another, they will create insurmountable barriers to entry for feisty, innovative new market entrants: Facebook, for example, could pay Comcast to make its social network run twice as smoothly as a newly-minted competitor’s. Meanwhile, in an unregulated environment, that ISP could accept payment to favor the advertisements of one political candidate over another, or simply block access to material for any purpose at all.

On the other side, there are all the core arguments of de-regulation: demonstrating adherence to regulations siphons off productive dollars; regulations stifle innovation and discourage investment; regulations are a response to a problem that hasn’t even happened yet, and may never occur (there’s a nice layout of these arguments at TechCrunch here). Additionally, classic market economics suggests that, if ISPs provide a service that doesn’t match consumers’ needs, then those consumers will simply take their business elsewhere.

It doesn’t much matter which side of the argument you find yourself on: as long as there is an argument to be had, it is going to be important to have control over your own traffic as it traverses the Internet. Radar’s ability to track, monitor, and analyze Internet traffic will be vital whether net neutrality is the law of the land or not. Consider these opposing situations – in both, the underlying check is the same, and a minimal sketch of it follows the list:

  • Net neutrality is the rule. Tracking the average latency and throughput for your content, then comparing it to the numbers for the community at large, will swiftly alert you if an ISP is treating your competition preferentially.
  • Net neutrality is not the rule. Tracking the average latency and throughput for your content, then comparing it to the numbers for the community at large, will swiftly alert you if an ISP is not providing the level of preference for which you have paid (or that someone else has upped the ante and taken your leading spot).
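
Here is a minimal sketch of that check under either rule. The data shapes and the 25% tolerance are illustrative assumptions, not Radar’s implementation:

```typescript
// Compare your own latency per ISP against the community-wide number for the
// same ISP, and flag ISPs where your traffic is markedly slower.
interface LatencySample { isp: string; latencyMs: number; }

function meanByIsp(samples: LatencySample[]): Map<string, number> {
  const sums = new Map<string, { total: number; count: number }>();
  for (const s of samples) {
    const agg = sums.get(s.isp) ?? { total: 0, count: 0 };
    agg.total += s.latencyMs;
    agg.count += 1;
    sums.set(s.isp, agg);
  }
  return new Map([...sums].map(([isp, a]) => [isp, a.total / a.count] as [string, number]));
}

function flagOutliers(yours: LatencySample[], community: LatencySample[],
                      toleranceRatio = 1.25): string[] {
  const mine = meanByIsp(yours);
  const baseline = meanByIsp(community);
  // Flag an ISP when your mean latency exceeds the community mean by the tolerance.
  return [...mine]
    .filter(([isp, ms]) => (baseline.get(isp) ?? Infinity) * toleranceRatio < ms)
    .map(([isp]) => isp);
}
```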

Activists from both sides will doubtless be wrestling with how to regulate (or not!) the ISP market for years to come. Throughout, and even after the conclusion of that fight, it’s vital for every company that delivers content across the public Internet to be, and remain, aware of the service their users are receiving – and who to call when things aren’t going according to plan.

Find out more about becoming a part of the Radar community by clicking here to create an account.

New Feature: Apply Filters

The Cedexis UI team spends considerable time looking for ways to make our products both useful and efficient. One of the areas they’ve been concentrating on is improving the experience of applying several sets of filters to a report, which historically triggered a reload of the report every time a user changed the filter list.

So we are excited to be rolling out a new reporting feature today called Apply Filters. With the focus on improved usability and efficiency, this new feature allows you to select (or deselect) your desired filters first and then click the Apply Filters button to re-run the report. By selecting all your filters at once, you will save time and eliminate the confusion of trying to remember which filters you selected while the report continuously refreshes itself.
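
Under the hood, the pattern is simple: hold filter changes in a pending set and only re-run the report on an explicit apply. Here is a simplified sketch of that idea (illustrative names, not the portal’s actual code):

```typescript
// Filter changes accumulate in a pending selection; the report only re-runs
// when the user explicitly clicks Apply Filters.
type FilterSelection = Record<string, string[]>;

class FilterPanel {
  private pending: FilterSelection = {};
  private applied: FilterSelection = {};

  constructor(private runReport: (filters: FilterSelection) => void) {}

  // Selecting or deselecting a filter no longer reloads the report...
  setFilter(dimension: string, values: string[]): void {
    this.pending = { ...this.pending, [dimension]: values };
  }

  // ...only an explicit "Apply Filters" click does.
  applyFilters(): void {
    this.applied = { ...this.pending };
    this.runReport(this.applied);
  }

  // Drives the button's off-state/on-state described below (stringify comparison
  // is good enough for a sketch).
  hasUnappliedChanges(): boolean {
    return JSON.stringify(this.pending) !== JSON.stringify(this.applied);
  }
}
```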

The Apply Filters button has an off-state and an on-state. The off-state is a lighter green version that you will see before any filter selections are made; the on-state becomes enabled once a filter selection has been made. Once you click Apply Filters and the report has finished re-running with the selected filters, the button returns to the off-state.

We have also placed the Apply Filters button at both the top and bottom of the Filters area. The larger button at the bottom is fixed in place, so no matter how many filter options you have open, it will always be easily accessible.


We hope you’ll agree this makes reports easier to use, and will save you time as you slice-and-dice your way to a deep and broad understanding of how traffic is making its way across the public internet.

Want to check out the new filters feature, but don’t have a portal account? Sign up here for free!

Together, we’re making a better internet. For everyone, by everyone.

Which Is The Best Cloud or CDN?

Oh no, you’re not tricking us into answering that directly – it’s probably the question we hear more often than any other. The answer we always provide: it depends.

Unsatisfying? Fair enough. Rather than handing you a fish, let us show you how to go haul in a load of bluefin tuna.

What a lot of people don’t know is that, for free, you can answer this sort of question all by yourself on the Cedexis portal. Just create an account, click through on the email we send, and you’re off to the races (go on – go do it now, we’ll wait… it’s easier to follow along when you have your own account).

The first thing you’ll want to do is find the place where you get all this graphical statistical goodness: click Radar, then select Performance Report, as shown below.

With this surprisingly versatile (and did we mention free) tool, you can answer all the questions you ever had about traffic delivery around the world. For instance, say you’re interested in working out which continent has the best and worst availability. Simply change the drop-down at the top left to show ‘Continent’ instead of ‘Platform’, and voila – an entirely unsurprising result:

Now that’s a pretty broad brush. Perhaps you’d like to know how a different group of countries or states looks relative to one another – simply select those countries or states from the Location section on the right-hand side of the screen and you’re off to the races. Do the same with Platforms (that’s the cloud providers and CDNs), and adjust your view from Availability to Throughput or Latency to see how the various providers are doing when they are available.

So, if you’re comparing a couple of providers, in a couple of states, you might end up with something that looks like this:

Be careful though – across 30 days, measured day to day, it looks like there’s not much difference to be seen, nor much improvement to be found by using multiple providers. Make sure you dig in a little deeper – maybe to the last 7 days, 48 hours, or even 24 hours. Look what can happen when you focus in on, for instance, a 48-hour period:

There are periods there where having both providers in your virtual infrastructure would mean the difference between serving your audience really well and being, to all intents and purposes, unavailable for business.
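
The arithmetic behind that claim is worth spelling out. A minimal sketch, assuming the two providers’ outages are independent (real incidents are sometimes correlated) and using illustrative availability figures:

```typescript
// Probability that at least one provider is up at any given moment.
function combinedAvailability(availabilities: number[]): number {
  const allDown = availabilities.reduce((p, a) => p * (1 - a), 1);
  return 1 - allDown;
}

// Two providers that are each 99.5% available over a rough 48-hour window:
console.log(combinedAvailability([0.995, 0.995])); // 0.999975
// Alone, 0.5% downtime is ~14.4 minutes out of 48 hours; paired (with a load
// balancer able to switch between them), the expected overlap drops to ~4 seconds.
```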

If you’ve never thought about using multiple traffic delivery partners in your infrastructure – or have considered it, but rejected it in the absence of solid data – today would be a great day to go poke around. More and more operations teams are coming to the realization that they can eliminate outages, guarantee consistent customer quality, and take control over the execution and cost of their traffic delivery by committing to a Hybrid Cloud/CDN strategy.

And did we mention that all this data is free for you to access?

 

Live and Generally Available: Impact Resource Timing

We are very excited to be officially launching Impact Resource Timing (IRT) for general availability.

IRT is Impact’s powerful window into the performance of the different sources of content that make up the pages of your website. For instance, you may want to compare the performance of your origin servers against cloud sources or advertising partners, and by doing so establish with confidence where any delays stem from. From there, you can dive into Resource Timing data sliced by various measurements over time, as well as through a statistical distribution view.

What is Resource Timing? Broadly speaking, resource timing measures latency within an application (i.e., the browser). It uses JavaScript as the primary mechanism to instrument various time-based metrics of all the resources requested and downloaded for a single website page by an end user. Individual resources are objects such as JS, CSS, images, and other files that a website page requests. The faster the resources are requested and loaded on the page, the better the quality of experience (QoE) for users. By contrast, resources that cause longer latency can produce a negative QoE. By analyzing resource timing measurements, you can isolate the resources that may be causing degradation issues for your organization to fix.

Resource Timing Process:

Cedexis IRT makes it easy for you to track resources from identified sources – normally identified by domain (*.myDomain.com), by sub-domain (e.g. images.myDomain.com), or by the provider serving your content. In this way, you can quickly group together types of content and identify the source of any latency. For instance, you might find that origin-located content is being delivered swiftly, while cloud-hosted images are slowing down the load time of your page; in such a situation, you would now be in a position to consider a range of solutions, including adding a secondary cloud provider and a global server load balancer to protect QoE for your users.
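
For the curious, the browser primitive underneath all of this is the standard W3C Resource Timing API. A minimal sketch of grouping its entries by hostname – illustrative only, not the Impact tag itself – looks like this:

```typescript
// Read per-resource metrics from the browser and group them by hostname to see
// which class of content is slowing a page down.
const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

const byHost = new Map<string, { count: number; totalDurationMs: number }>();
for (const e of entries) {
  const host = new URL(e.name).hostname;   // e.g. images.myDomain.com
  const agg = byHost.get(host) ?? { count: 0, totalDurationMs: 0 };
  agg.count += 1;
  agg.totalDurationMs += e.duration;       // start of fetch through response end
  byHost.set(host, agg);
}

for (const [host, { count, totalDurationMs }] of byHost) {
  console.log(`${host}: ${count} resources, avg ${(totalDurationMs / count).toFixed(1)} ms`);
}
```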

Some benefits of tracking Resource Timing:

  • See which hostnames – and thus which classes of content – are slowing down your site.
  • Determine which resources impact your overall user experience.
  • Correlate resource performance with user experience.

Impact Resource Timing from Cedexis allows you to see how content sources are performing across various measurement types such as Duration, TCP Connection Time, and Round Trip Time. IRT reports also give you the ability to drill down further by Service Provider, Location, ISP, User Agent (device, browser, OS), and other filters.

Check out our User Guide to learn more about our Measurement Type calculations.

There are two primary reports in this release of Impact Resource Timing: the Performance report, which gives you a trending view of resource timing over time, and the Statistical Distribution report, which presents Resource Timing data as a statistical distribution. Both reports have dynamic reporting capabilities that allow you to easily pinpoint resource-related issues for further analysis.


Using the Performance report, you can isolate which grouped resources are causing potential end-user experience issues by hostname, page, or service provider, and see when the issue happened. Drill down even further to see whether it was a global issue, localized to a specific location, or limited to certain user devices or browsers.

IRT is now available for all in the Radar portal – take it for a spin and let us know your experiences!

Why The Web Is So Congested

If you live in a major city like London, Tokyo, or San Francisco, you learn one thing early: driving your car through the city center is about the slowest possible way to get around. Which is ironic, when you think about it, as cars only became popular because they made it possible to get around more quickly. There is, it seems, an inverse relationship between efficiency and popularity, at least when it comes to goods that pass through a public commons like roads.

Or like the Internet.

Think about all that lovely 4K video you could be consuming if there was nothing between you and your favorite VOD provider but a totally clear fiber optic cable. But unless you live in a highly over-provisioned location, that’s not what’s going on at all; rather, you’re lucky to get a full HD picture, and even luckier if it stays at 1080p, without buffering, all the way through. Why? Because you’re sharing a public commons – the Internet – and its efficiency is being chewed away by popularity.

Let’s do some math to illustrate this:

  • Between 2013 and January 2017, the number of web users increased by 1.4 billion people to just over 3.7 billion. Today Internet penetration stands at about 50% (or, put another way, half the world isn’t online yet).
  • In 2013, the average amount of Internet data per person was 7.9 GB per month; by 2015 it was 9.9 GB, and Cisco expects it to exceed 25 GB by 2020 – so assume something in the range of 15-17 GB for 2017.
  • Logically, then, in 2013 web traffic would have been around 2.3B users * 7.9 GB per month (roughly 18.2 exabytes); by 2017 it would have been about 3.7B * 17 GB per month (roughly 62.9 exabytes).
  • If we assume another billion Internet users by 2020, we’re looking at 4.7B * 25 GB per month – or a full 117.5 exabytes.

In just seven years, the monthly web traffic will have grown 600% (based on the math, anyway: Cisco is estimating closer to 200 exabytes monthly by 2020).
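
For those who want to check the arithmetic, the calculation in the bullets above is straightforward: billions of users times gigabytes per user equals exabytes. The 2017 per-user figure is the assumption noted above:

```typescript
// Billions of users * GB each = billions of GB = exabytes per month.
function monthlyTrafficExabytes(usersBillions: number, gbPerUserPerMonth: number): number {
  return usersBillions * gbPerUserPerMonth;
}

console.log(monthlyTrafficExabytes(2.3, 7.9));  // 2013: ≈ 18.2 EB/month
console.log(monthlyTrafficExabytes(3.7, 17));   // 2017: ≈ 62.9 EB/month
console.log(monthlyTrafficExabytes(4.7, 25));   // 2020: ≈ 117.5 EB/month
```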

And that is why the web is so busy.

But it doesn’t explain why the web is congested. Congestion happens when there is more traffic than transit capacity – which is why, as cities get larger and more populous, governments add lanes to major thoroughfares, meeting automobile demand with road supply.

Unfortunately, unlike cars on roads, Internet traffic doesn’t travel in straight lines from point to point. So even though infrastructure providers have been building out capacity at a madcap pace, it’s not always connected in such a way that makes transit efficient. And, unlike roads, digital connections are not built out of concrete, and often become unavailable – sometimes for a long time that causes consternation and PR challenges, and sometimes just for a minute or so, stymying a relative handful of customers.

For information to get from A to B, it has to traverse any number of interconnected infrastructures, from ISPs to the backbone to CDNs, and beyond. Each is independently managed, meaning that no individual network administrator can guarantee smooth passage from beginning to end. And with all the traffic that has been – and will continue to be – added to the Internet, it has become essentially a guarantee that some portion of content requests will bump into transit problems along the way.

Let’s also note that the modern Internet is characterized less by cat memes, and more by the delivery of information, functionality, and ultimately, knowledge. Put another way, the Internet today is all about applications: whether represented as a tile on a smart phone home screen, or as a web interface, applications deliver the intelligence to take the sum total of all human knowledge that is somewhere on the web and turn it into something we can use. When you open social media, the app knows who you want to know about; when you consult your sports app, it knows which teams you want to know about first; when you check your financial app, it knows how to log you in from a fingerprint and which account details to show first. Every time that every app is asked to deliver any piece of knowledge, it is making requests across the Internet – and often multiple requests of multiple sources. Traffic congestion doesn’t just endanger the bitrate of your favorite sci fi series – it threatens the value of every app you use.

Which is why real-time predictive traffic routing is becoming a topic that web-native businesses are digging deeper into. Think of it as Application Delivery for the web – a traffic cop that spots congestion and directs content around it, so that it’s as though it never happened. This is the only way to solve for efficient routing across a network of networks that has no central administrator: assume that there will be periodic roadblocks, and simply prepare to take a different route.
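
What might that traffic cop look like in its simplest form? A toy sketch – not Openmix itself, and with illustrative types and thresholds – of choosing a delivery platform from recent per-network measurements:

```typescript
// Given recent measurements of each delivery platform as seen from a client's
// network, pick the platform that is healthy and currently fastest.
interface PlatformHealth {
  platform: string;        // e.g. a CDN or cloud region
  availabilityPct: number; // recent availability seen from this client network
  medianLatencyMs: number; // recent median latency from this client network
}

function choosePlatform(candidates: PlatformHealth[], minAvailability = 99): string | null {
  const healthy = candidates.filter(c => c.availabilityPct >= minAvailability);
  if (healthy.length === 0) return null; // nothing healthy: fall back to a default
  return healthy.reduce((best, c) => (c.medianLatencyMs < best.medianLatencyMs ? c : best)).platform;
}

// Example: route around a platform that is degraded on this particular network.
console.log(choosePlatform([
  { platform: "cdn-a", availabilityPct: 97.2, medianLatencyMs: 45 },
  { platform: "cdn-b", availabilityPct: 99.9, medianLatencyMs: 62 },
])); // "cdn-b"
```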

The Internet is increasingly congested. But by re-directing traffic to the pathways that are fully available, it is possible to get around all those traffic jams. And, actually, it’s possible to do today.

Find out more by reading the story of how Rosetta Stone improved performance for over 60% of their worldwide customers.

 

Amazon Outage: The Aftermath

Amazon’s AWS S3 storage service had a major, widely reported, multi-hour outage yesterday in its US-East-1 region. S3 in this particular region was one of the very first services Amazon launched when it introduced cloud computing to the world more than 10 years ago. It has grown exponentially since – storing over a trillion objects and servicing a million requests per second in support of thousands of web properties (this article alone lists over 100 well-known properties that were impacted by this outage).

Amazon has today published a description of what happened. The summary is that this was caused by human error. One operator, following a published runbook procedure, mistyped a command parameter, setting a sequence of failure events in motion. The outage started at 9:37 am PST. A nearly complete S3 service outage lasted more than three hours, and full recovery of other S3-dependent AWS services took several hours more.

A few months ago, Dyn taught the industry that single-sourcing your authoritative DNS creates the risk the military sums up as “two is one, one is none.” This S3 incident underscores the same lesson for object storage. No service tier is immune. If a website, content, service, or application is important, redundant alternative capability at every layer is essential – along with the capabilities to monitor and manage that redundancy. After all, failover capacity is only as good as the system’s ability to detect the need to fail over, and then to actually do it. This has been at the heart of Cedexis’ vision since the beginning, and as we continue to expand our focus on streaming/video content and application delivery, it will remain an important and valuable theme as we seek to improve the Internet experience of every user around the world.
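
To make the point about detection concrete, here is a bare-bones sketch of the monitoring half of redundancy – hypothetical endpoints, and vastly simpler than a production DNS or GSLB failover setup:

```typescript
// Probe each origin and fail over when the primary stops answering.
async function isHealthy(url: string, timeoutMs = 2000): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { method: "HEAD", signal: controller.signal });
    return res.ok;
  } catch {
    return false; // network error, timeout, or abort: treat as down
  } finally {
    clearTimeout(timer);
  }
}

async function pickOrigin(primary: string, secondary: string): Promise<string> {
  return (await isHealthy(primary)) ? primary : secondary;
}

// e.g. pickOrigin("https://assets-primary.example.com/health",
//                 "https://assets-secondary.example.com/health")
```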

Even the very best, most experienced services can fail. And as architectures are increasingly decomposed into nested services, the dependencies between them may not always be apparent. (In this case, for example, the AWS status website had an underlying dependency on S3 and thus incorrectly reported the service at 100% health during most of the outage.)

We are dedicated to delivering data-driven, intelligent traffic management for redundant infrastructure of any type. Incidents like this should continue to remind the digital world that redundancy, automated failover, and a focus on the customer experience are fundamental to the task of delivering on the continued promise of the Internet.

Mobile Video is Devouring the Internet

In late 2009 – fully two years after the introduction of the extraordinary Apple iPhone – mobile was barely discernible on any measurement of total Internet traffic. By late 2016, it finally exceeded desktop traffic volume. In a terrifyingly short period of time, mobile Internet consumption moved from an also-ran to a behemoth, leaving behind the husks of marketing recommendations to “move to Web 2.0” and to “design for Mobile First”. And along the way, Apple encouraged us to buy into the concept that the future (of TV at least) is apps.

Unsurprisingly, the key driver of all this traffic is – as it always is – video. One in every three mobile device owners watches videos of at least 5 minutes’ duration, which is generally considered the point at which a user has moved from short-form, likely user-generated content to premium video (think: TV shows and movies). And once viewers pass the 5-minute mark, it’s a tiny step to full-length, studio-developed content, which is a crazy bandwidth hog. Consider that video is expected to represent fully 75% of all mobile traffic by 2020 – up from just 55% in 2015.


As consumers get more interested in video, producers aren’t slowing down. By 2020, it is estimated that it would take an individual fully 5 million years to watch all the video published in a single month. And while consumer demand varies around the world – 72% of Thailand’s mobile traffic is video, for instance, versus just 41% in the United States – the reality is that, without some help, the mobile Web is going to be straining under the weight of near-unlimited video consumption.

What we know is that, hungry as they are for content, streaming video consumers are fickle and impatient. Akamai demonstrated years ago the 2-second rule: if a requested piece of content isn’t available in under 2 seconds, Internet users simply move on to the next thing. And numerous studies have shown definitively that when re-buffering (the dreaded pause in playback while the viewing device downloads the next section of the video) exceeds just 1% of viewing time, audience engagement collapses, resulting in dwindling opportunities to monetize content that was expensive to acquire, and can be equally costly to deliver.
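
As a quick illustration of that 1% threshold, here is how a rebuffering ratio is typically computed (the session numbers below are made up):

```typescript
// Rebuffering ratio: stall time as a share of total session time.
function rebufferRatio(stallMs: number, watchMs: number): number {
  return stallMs / (watchMs + stallMs);
}

// 4 seconds of stalls in a 10-minute session:
const ratio = rebufferRatio(4_000, 10 * 60 * 1000);
console.log((ratio * 100).toFixed(2) + "%"); // ≈ 0.66% – just under the 1% engagement cliff
```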

How big of a problem is network congestion? It’s true that big, public, embarrassing outages across CDNs or ISPs are now quite rare. However, when we studied the network patterns of one of our customers, we found that what we call micro-outages (outages lasting 5 minutes or less) happen literally hundreds to thousands of times a day. That single customer was looking at some 600,000 minutes of directly lost viewing time per month – and when you consider how long each viewer might have stayed, and their decreased inclination to return in the future, that number likely translates into several million minutes of indirect losses.

While mobile viewers are more likely to watch their content through an app (48% of all mobile Internet users) than a browser (18%), they still receive the content through the chaotic maelstrom of a network that is the Internet. As such, providers have to work out the best pathways to use to get the content there, and to ensure that the stream will have consistency over time so that it doesn’t fall prey to the buffering bug.

Most providers use stats and analysis to work out the right pathways – they look at how various CDN/ISP combinations are performing and pick the one that is delivering the best experience. Strikingly, though, they often have to make routing decisions for audience members in geographical locations that aren’t currently in play, which means choosing a pathway without any recent input on which one will perform best – literally gambling with the experience of each viewer. What is needed is something predictive: something that helps the provider pick the right pathway the first time they have to choose.

This is where the Radar Community comes in: by monitoring, tracking, and analyzing the activity of billions of Internet interactions every day, the community knows which pathways are at peak health, and which need a bit of a breather before getting back to full speed. So, when using Openmix to intelligently route traffic, the Radar community data provides the confidence that every decision is based on real-time, real-user data – even when, for a given provider, they are delivering to a location that has been sitting dormant.

Mobile video is devouring the Web, and will continue to do so, as consumers prefer their content to move, dance, and sing. Predictively re-routing traffic in real-time so that it circumvents the thousands of micro-outages that plague the Internet every day means never gambling with the experience of users, staying ahead of the challenges that congestion can bring, and building the sustainable businesses that will dominate the new world of streaming video.