
Announcing Cedexis Netscope: Advanced Network Performance and Benchmarking Analysis

The Cedexis Radar community collects tens of billions of real user monitoring data points each day, giving Cedexis users unparalleled insight into how applications, videos, websites, and large file downloads are actually being experienced by their users. We’re excited to announce a product that offers a new lens into the Radar community’s dynamic data set: Cedexis Netscope.

Know how your service stacks up, down to the IP subnet
Metrics like network throughput, availability, and latency don’t tell the whole story of how your service is performing, because they are network-centric, not user-centric: however comprehensively you track network operations, what matters is the experience at the point of consumption. Cedexis Netscope provides you with additional user-centric context to assess your service, namely the ability to compare your service’s performance to the results of the “best” provider in your market. With up-to-date Anonymous Best comparative data, you’ll have a data-driven benchmark to use for network planning, marketing, and competitive analysis.

Highlight your service performance:

  • Relative to peers in your markets
  • In specific geographies
  • Compared with specific ISPs
  • Down to the IP subnet
  • Including both IPv4 and IPv6 addresses
  • With comprehensive data on latency and throughput
  • Covering both static and dynamic delivery

Actionable insights
Netscope provides detailed performance data that can be used to improve your service for end users. IT Ops teams can use automated or custom reports to view performance from your ASN versus peer groups in the geographies you serve. This lets you fully understand how you stack up versus the “best” service provider, using the same criteria. Real-time logs organized by ASN can be used to inform instant service repairs or longer-term planning.

Powered by the world’s largest user experience community
Real User Monitoring (RUM) means fully understanding how internet performance impacts customer satisfaction and engagement. Cedexis gathers RUM data from each step between the client and any of the clouds, data centers, and CDNs hosting your applications to build a holistic picture of internet health. Every request creates more data, continuously updating this unique real-time virtual map of the web.

Data and alerts, your way
To effectively evaluate your service and enable real-time troubleshooting, Netscope lets you roll up data to the ASN, country, region, or state level. You can also zoom in on a specific ASN at the IP subnet level, to dissect the data any way your business requires. This data is stored in the cloud on an ongoing basis. Netscope also lets you easily set up flexible network alerts for performance and latency deviations.
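
To make the alerting idea concrete, here’s a minimal sketch of a deviation rule of the kind just described. The function, thresholds, and sample numbers are illustrative assumptions, not part of the product:

```python
# A minimal sketch of a latency-deviation alert rule, of the kind
# described above. Names, thresholds, and data are illustrative only.
from statistics import mean, stdev

def latency_alert(recent_ms, baseline_ms, threshold_sigmas=3.0):
    """Fire when recent latency sits more than `threshold_sigmas`
    standard deviations above the historical baseline."""
    mu, sigma = mean(baseline_ms), stdev(baseline_ms)
    deviation = (mean(recent_ms) - mu) / sigma if sigma else 0.0
    return deviation > threshold_sigmas, deviation

# Hypothetical per-ASN history vs. the last few minutes of samples.
baseline = [42, 44, 41, 43, 45, 42, 44, 43]   # ms
recent = [88, 91, 85]                          # ms
fired, sigmas = latency_alert(recent, baseline)
print(f"alert={fired}, {sigmas:.1f} sigmas above baseline")
```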

Netscope helps ISP Product Managers and Marketers better understand:

  • How well users connect to the major content distributors
  • How well users/businesses connect to public clouds (AWS, Google Cloud, Azure, etc.)
  • When, where, and how often outages and throughput issues happen
  • What happens during different times of day
  • Where the risks lie during big events (the FIFA World Cup, live events, video/content releases)
  • How service on mobile looks versus web
  • How the ISP stacks up versus “the best” ISP in the region

Bring advanced network analysis to your network
Netscope provides the critical data set you need for network planning and enhancement. With its real-time understanding of worldwide network health, Netscope gives you the context and actionable data you need to delight customers and increase your market share.

Ready to use this data with your team?

Set up a demo today


Why CapEx Is Making A Comeback

The meteoric rise of both the public cloud and SaaS has brought along a strong preference for OpEx over CapEx. To recap: OpEx means you stop paying for a thing up front, and instead just pay as you go. If you’ve bought almost any business software lately, you know the drill: you walk away with a monthly or annual subscription, rather than a DVD-ROM and a permanent or volume license.

But the funny thing about business trends is the frequency with which they simply turn upside down and make the conventional wisdom obsolete.

Recently, we have started seeing interest in moving away from pay-as-you-go (often shortened, rather unimaginatively, to PAYGO) as a model, and back toward making upfront purchases, then holding on for the ride as capital items are amortized.

Why? It’s all about economies of scale.

Imagine, if you will, that you are able to rent an office building for $10 a square foot, then rent out the space for $15 a square foot. Seems like a decent deal at a 50% markup; but of course you’re also on the hook for servicing the customers, the space, and so forth. You’ll get a certain amount of relief as you share janitorial services across the space, of course, but your economic ceiling is stuck at 50%.

Now imagine that you purchase that whole building for $10M and rent out the space for $15M. Your debt payment may cut into profits for a few years, but at some point you’re paid off – and every year’s worth of rent thereafter is essentially all profit.

The first scenario puts an artificial boundary on both risk and reward: you’re on the hook for a fixed amount of rental cost, and can generate revenues only up to 150% of your outlay. You know how much you can lose, and how much you can gain. By contrast, in the second scenario, neither risk nor reward is bounded: with ownership comes risk (finding asbestos in the walls, say), as well as unlimited potential (raise rental prices and increase the profit curve).
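
To put numbers on it, here’s the arithmetic in a few lines; the financing and upkeep figures are invented purely for illustration:

```python
# The rent-vs-buy arithmetic above, worked through. All figures are
# illustrative; financing and upkeep details are invented.

# Scenario 1: rent at $10/sqft, sublet at $15/sqft.
markup_ceiling = (15 - 10) / 10          # 50% markup, capped forever

# Scenario 2: buy the building outright.
purchase_price = 10_000_000
annual_rent = 1_500_000                  # assumed yearly rental income
annual_upkeep = 300_000                  # assumed servicing costs

payback_years = purchase_price / (annual_rent - annual_upkeep)
print(f"markup ceiling when renting: {markup_ceiling:.0%}")
print(f"payback when buying: ~{payback_years:.1f} years")   # ~8.3 years
# After payback, every year's (rent - upkeep) is essentially all profit,
# with no ceiling other than what the market will bear.
```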

This basic model applies to many cloud services – and to no small degree explains why so many companies are able to pop up: their growth scales with provisioned services.

If you were to fire up a new streaming video service that showed, say, only the oeuvre of Nicolas Cage, you’d want a fairly clear limit on your risk: maybe millions of people will sign up, but then again maybe they won’t. To be sure you’ve maximized the opportunity, though, you’ll need rock-solid infrastructure to ensure your early adopters get everything they expect: quick video start times, low re-buffering ratios, and excellent picture resolution. It doesn’t make sense to build all that out anew: you’re best off popping storage onto a cloud, maybe outsourcing CMS and encoding to an Online Video Platform (OVP), and delegating delivery to a global content delivery network (CDN). In this way you can have a world-class service without having to pony up for servers, encoders, points of presence (PoPs), load balancers, and all the other myriad elements necessary to compete.

In the first few months, this would be great – your financial risk is relatively low as you target your demand generation at the self-proclaimed “total Cage-heads”. But as you reach a wider and wider audience, and start to build a real revenue stream, you realize: the ongoing cost of all those outsourced, OpEx-based services is flattening the curve that could bring you to profitability. By contrast, spinning up a set of machines to store, compute, and deliver your content could set a relatively fixed cost that, as you add viewers, would allow you to realize economies of scale and unbounded profit.

We know that this is a real business consideration because Netflix already did it. Actually, they did it some time ago: while they do much (if not most) of their computation through cloud services, they decided in 2012 to move away from commercial CDNs in favor of their own Open Connect, and announced in 2016 that all of their content delivery needs were covered by their own network. Not only did this reduce their monthly OpEx bill, it also gave them control over the technology they use to guarantee an excellent quality of experience (QoE) for their users.

So for businesses nearing this OpEx-versus-CapEx inflection point, the time really has arrived to put pencil to paper and calculate the cost of going it alone. The technology is relatively easy to acquire and manage, from server machines, to local load balancers and cache servers, on up to global server load balancers. You can see a little bit more about how to actually build your own CDN here.

OpEx solutions are absolutely indispensable in getting new services off the starting line; but it’s always worth keeping an eye on the economics, because with a large enough audience, owning the infrastructure becomes the way to go.

Don’t Be Afraid of Microservices!

Architectural trends are to be expected in technology. From the original all-in-one-place COBOL behemoths half the world just learned existed because of Hidden Figures, to three-tiered architecture, to hyper-tier architecture, to Service-Oriented Architecture… really, it’s enough to give anyone a headache.

And now we’re in a time of what Gartner very snappily calls Mesh App and Service Architecture (or MASA). Whether everyone else is going for that particular nomenclature is less relevant than the reality that we’ve moved on from web services and SOA toward containerization, de-coupling, and the broadest possible use of microservices.

Microservices sound slightly disturbing, as though they’re very, very small components, of which one would need dozens if not hundreds to do anything. Chris Richardson of Eventuate, though, recently begged us not to assume that just because of the name these units are tiny. In fact, it makes more sense to think of them as ‘hyper-targeted’ or ‘self-contained’ services: their purpose should be to execute a discrete set of logic, which can exist in isolation, and simply provide easily-accessed public interfaces. So, for instance, one could imagine a microservice whose sole purpose was to find the best match from a video library for a given user: requesting code would provide details on the user, the service would return the recommendation. Enormous amounts of sophistication may go into ingesting the user-identifying data, relating it to metadata, analyzing past results, and coming up with that one shining, perfect recommendation…but from the perspective of the team using the service, they just need to send a properly-formed request, and receive a properly-formed response.
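
To make the shape of that contract concrete, here’s a toy sketch of such a recommendation microservice; the route, payload shape, and “matching logic” are invented for illustration:

```python
# A toy version of the recommendation microservice described above.
# The route, payload shape, and matching logic are invented for
# illustration; the point is the narrow, self-contained contract.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CATALOG = ["Raising Arizona", "Adaptation", "Moonstruck"]  # toy library

class RecommendHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/recommend":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        user = json.loads(self.rfile.read(length) or b"{}")
        # A real service would weigh viewing history, metadata, and
        # past results; the caller neither knows nor cares.
        user_id = str(user.get("user_id", ""))
        pick = CATALOG[sum(map(ord, user_id)) % len(CATALOG)]
        body = json.dumps({"recommendation": pick}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RecommendHandler).serve_forever()
```

From the consuming team’s side, the whole integration is a POST with a user ID, and a JSON recommendation back.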

The apps we all rely on, running on those tiny computers we carry around in our pocketbooks or pockets (i.e., smartphones), fundamentally rely on microservices, whether or not their developers thought to describe them that way. That’s why they sometimes wake up and spring to life instantly… and sometimes seem to drag, or even fail to get going. They rely upon a variety of microservices – not always based at their own home location – and it’s the availability of all those microservices that dictates the user experience. If one microservice fails, and the failure is not handled elegantly by the code, the experience becomes unsatisfactory.

If that feels daunting, it shouldn’t – one company managed to build the whole back end of a bank on this architecture.

Clearly, the one point of greatest risk is the link to the microservice – the API call, if you will. If the code calls a static endpoint, the risk is that the endpoint isn’t available for some reason, or at least isn’t responding at an acceptable speed. This is why there are any number of solutions for trying to ensure the microservice is available, often spread between authoritative DNS services (which essentially take all the calls for a given location and then assign them to backend resources based on availability) and application delivery controllers (generally physical devices that perform the same service). Of course, if either is down, life gets tricky quickly.

In fact, the trick to planning for highly available microservices is to call endpoints that are managed by a cloud-based application delivery service. In other words, as the microservice is required, a call goes out to a location that combines both synthetic and real-user measurements to determine the most performant source and redirect the traffic there. This compounds the benefits of the microservice architecture: not only can the microservice itself be maintained and updated independently of the apps that use it, but the network and infrastructure necessary to its smooth and efficient delivery can also be tweaked without affecting existing users.
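
From the calling side, the pattern might look like the sketch below: the client works down a best-first endpoint list, which a measurement-driven delivery service would keep refreshed, and fails over without bothering the user. The hostnames and failover policy are assumptions:

```python
# Client-side sketch: call a microservice through a ranked endpoint
# list and fail over transparently. Hostnames are placeholders; the
# ranking would come from the measurement-driven delivery service
# described above.
import urllib.request
from urllib.error import URLError

RANKED_ENDPOINTS = [
    "https://us-east.api.example.com/recommend",   # currently fastest
    "https://eu-west.api.example.com/recommend",   # healthy fallback
]

def call_with_failover(endpoints, timeout=2.0):
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (URLError, OSError):
            continue    # endpoint down or slow: move to the next one
    raise RuntimeError("all endpoints failed")
```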

Microservices are the future. To make the most of them, first ensure that they independently address discrete purposes; then make sure that their delivery is similarly defined and flexible, without recourse to updating the apps that use them; then settle back and watch performance meet innovation.

Better OTT Quality At Lower Cost? That Would Be Video Voodoo

According to the CTA, streaming video now claims as many subscribers as traditional Pay TV. Another study, from the Leichtman Research Group, proposed that more households have streaming video than have a DVR. However accurate – or wonkily constructed – these statistics, what’s not up for grabs is that more people than ever are getting a big chunk of their video entertainment over the Web. Given the infamous AWS outage, this means that providers are constantly at risk of seeing their best-laid plans laid low by someone else’s poor typing skills.

Resiliency isn’t a nice-to-have, it’s a necessity. Services that were knocked out last week owing to AWS’ challenges were, to some degree, lucky: they may have lost out on direct revenue, but their reputations took no real hit, because the core outage was so broadly reported. In other words, everyone knew the culprit was AWS. But it turns out that outages happen all the time – smaller, shorter, more localized ones, which don’t draw the attention of the global media, and which don’t supply a scapegoat. In those circumstances, a CDN glitch is invisible to the consumer, and is therefore not considered: when the consumer’s video doesn’t work, only the publisher is available to take the blame.

It’s for this reason that many video publishers that are Cedexis customers first start to break from the one-CDN-to-rule-them-all strategy and diversify their delivery infrastructure. As often as not, this starts as simply adding a second provider: not so much as an equal partner, but as a safety outlet and backup. Openmix intelligently directs traffic, using a combination of community data (the 6 billion measurements we collect from web users around the world each day) and synthetic data (e.g. New Relic and CDN records). All of a sudden, even though outages don’t stop happening, they do stop being noticeable because they are simply routed around. Ops teams stop getting woken up in the middle of the night, Support teams stop getting sudden call spikes that overload the circuits, and PR teams stop having to work damage control.

But a funny thing happens once the outage distractions stop: there’s time to catch a breath, and realize there’s more to this multi-CDN strategy than just solving a pain. When a video publisher can seamlessly route between more than one CDN, based on each one’s ability to serve customers at an acceptable quality level, there is a natural economic opportunity to choose the best-cost option – in real time. Publishers can balance traffic based simply on per-Gig pricing; ensure that commits are met, but not exceeded until every bit of pre-paid bandwidth throughout the network is exhausted; and distribute sudden spikes to avoid surge pricing. Openmix users have reported seeing cost savings that reach low to mid double-digit percentages – while they are delivering a superior, more consistent, more reliable service to their users.
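
As a simplified sketch of one such policy, here is commit-aware selection among CDNs that have already passed a quality check; the prices, commits, and names are invented:

```python
# Sketch of commit-aware CDN selection among providers that have
# already passed a quality check. Prices, commits, and names are
# invented for illustration.
CDNS = [
    # (name, per-GB overage price, monthly commit GB, GB used so far)
    ("cdn-a", 0.040, 500_000, 480_000),
    ("cdn-b", 0.055, 200_000, 210_000),   # already past its commit
]

def pick_cdn(cdns):
    # Burn pre-paid commit bandwidth first; only then pick the
    # cheapest overage rate. This meets commits without exceeding
    # them until every pre-paid gigabyte is exhausted.
    under_commit = [c for c in cdns if c[3] < c[2]]
    pool = under_commit or cdns
    return min(pool, key=lambda c: c[1])[0]

print(pick_cdn(CDNS))   # -> cdn-a, which still has commit to use
```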

Call it Video Voodoo: it shouldn’t be possible to improve service reliability and reduce the cost of delivery…and yet, there it is. It turns out that eliminating a single point of failure introduces multiple points of efficiency. And, indeed, we’ve seen great results for companies that already have multiple CDN providers: simply avoiding overages on each CDN until all the commits are met can deliver returns that fundamentally change the economics of a streaming video service.

And changing the economics of streaming is fundamental to the next round of evolution in the industry. Netflix, the 800-pound gorilla, has turned over more than $20 billion in revenue over the last three years, yet generated less than half a billion in net income – a margin under 2.5%; Hulu (privately and closely held) is rumored to have racked up $1.8B in losses so far, and to still be generating red ink on some $2B in revenues. The bottom line is that delivering streaming video is expensive, for any number of reasons. Any engine that can measurably, predictably, and reliably eliminate cost is not just intriguing for streaming publishers – it is mandatory to at least explore.

How To Deliver Content for Free!

OK, fine, not for free per se, but using bandwidth that you’ve already paid for.

Now, the uninitiated might ask what’s the big deal – isn’t bandwidth essentially free at this point? And they’d have a point – the cost per Gigabyte of traffic moved across the Internet has dropped like a rock, consistently, for as long as anyone can remember. In fact, Dan Rayburn reported in 2016 seeing prices as low as ¼ of a penny per gigabyte. Sounds like a negligible cost, right?

As it turns out, no. As time has passed, the amount of traffic passing through the Internet has grown. This is particularly true for those delivering streaming video: consumers now turn up their noses at sub-broadcast resolutions, and expect at least an HD stream. To put this into context, moving from HD as a standard to 4K (which keeps threatening to take over) would quadruple the amount of traffic. So while CDN prices per gigabyte might drop 25% or so each year, a publisher delivering four times the traffic is still looking at an increasingly large delivery bill.
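
The arithmetic in that last sentence is worth seeing in one place (figures illustrative):

```python
# The last sentence above, as arithmetic. Figures are illustrative.
bill = 100_000           # today's monthly delivery bill, in dollars
price_drop = 0.25        # per-GB pricing falls ~25% in a year
traffic_growth = 4.0     # HD -> 4K roughly quadruples bytes moved

new_bill = bill * (1 - price_drop) * traffic_growth
print(f"${new_bill:,.0f} per month")   # $300,000: triple today's bill
```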

It’s also worth pointing out that the cost of delivery, relative to delivering video through a traditional network such as cable or satellite, is surprisingly high. An analysis by Redshift for the BBC clearly identifies the likely reality that, regardless of the ongoing reduction in per-terabyte pricing, “IP service development spend is likely to increase as [the BBC] faces pressure to innovate”, meaning that online viewers will be consuming more than their fair share of the pie.

Take back control of your content…and your costs

So, the price of delivery is out of alignment with viewership, and is increasing in practical terms. What’s a streaming video provider to do?

Allow us to introduce Varnish Extend, a solution combining the powerful Varnish caching engine, which is already part of delivering 25% of the world’s websites, with Openmix, the real-time, user-driven predictive load balancing system that uses billions of user measurements a day to direct traffic to the best pathway.

Cedexis and Varnish have both found that the move to the cloud left a lot of broadcasters, as well as OTT providers, with unused bandwidth available on premise. By making it easy to transform an existing data center into a private CDN Point of Presence (PoP), Varnish Extend empowers companies to make the most of all the bandwidth they have already paid for, by setting up Varnish nodes on premise, or on cloud instances that offer lower operational costs than CDN bandwidth.

This is especially valuable for broadcasters and service providers whose service is limited to one country: the global coverage of a CDN may be overkill when the same quality of experience can be delivered by simply establishing PoPs in strategic locations in-country.

Unlike committing to an all-CDN environment, using a private CDN infrastructure like Varnish Extend supports scaling to meet business needs – costs are based on server instances and decisions, not on the amount of traffic delivered. So as consumer demands grow, pushing for greater quality, the additional traffic doesn’t push delivery costs over the edge of sanity.

A global server load balancer like Openmix automatically checks available bandwidth on each Varnish node as well as each CDN, along with each platform’s performance in real-time. Openmix also uses information from the Radar real user measurement community to understand the state of the Internet worldwide and make smart routing decisions.

Your own private CDN – in a matter of hours

Understanding the health of both the private CDN and the broader Internet makes it a snap to dynamically switch end-users between Varnish nodes and CDNs, ensuring that cost containment doesn’t come at the expense of customer experience – simply establish a baseline of acceptable quality, then allow Openmix to direct traffic to the most cost-effective route that will still deliver on quality.

Implementing Varnish Extend is surprisingly simple (some customers have implemented their private CDN in as little as four hours):

  1. Deploy Varnish Plus nodes within an existing data center or on a public cloud.
  2. Configure Cedexis Openmix to leverage these nodes as well as existing CDNs.
  3. Result: end-users are automatically routed to the best delivery node based on performance, costs, and so on – as sketched below.
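
The decision in step 3 can be pictured as follows; node names, loads, and capacities are invented for illustration:

```python
# A sketch of the routing decision in step 3: use pre-paid on-premise
# Varnish capacity first, overflow to a CDN when headroom runs out.
# Node names, loads, and capacities are invented for illustration.
NODES = [
    # (name, current Mbps, capacity Mbps, healthy)
    ("varnish-paris", 800, 1_000, True),
    ("varnish-lyon", 990, 1_000, True),
]
CDN_FALLBACK = "cdn-a"

def route(nodes, headroom=0.9):
    for name, load, capacity, healthy in nodes:
        if healthy and load < capacity * headroom:
            return name          # bandwidth already paid for
    return CDN_FALLBACK          # per-GB CDN only on overflow

print(route(NODES))   # -> varnish-paris
```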

Learn in detail how to implement Varnish Extend

Sign up for Varnish Software – Cedexis Summit in NYC


Mobile Video is Devouring the Internet

In late 2009 – fully two years after the introduction of the extraordinary Apple iPhone – mobile was barely discernible on any measurement of total Internet traffic. By late 2016, it finally exceeded desktop traffic volume. In a terrifyingly short period of time, mobile Internet consumption moved from an also-ran to a behemoth, leaving behind the husks of marketing recommendations to “move to Web 2.0” and to “design for Mobile First”. And along the way, Apple encouraged us to buy into the concept that the future (of TV at least) is apps.

Unsurprisingly, the key driver of all this traffic is – as it always is – video. One in every three mobile device owners watches videos of at least 5 minutes’ duration, which is generally considered the point at which the user has moved from short-form, likely user-generated, content to premium video (think: TV shows and movies). And once viewers pass the 5-minute mark, it’s a tiny step to full-length, studio-developed content, which is a crazy bandwidth hog. Consider that video is expected to represent fully 75% of all mobile traffic by 2020 – up from just 55% in 2015.


As consumers get more interested in video, producers aren’t slowing down. By 2020, it is estimated, it would take an individual fully 5 million years to watch all the video published and made available in just a single month. And while consumer demand varies around the world – 72% of Thailand’s mobile traffic is video, for instance, versus just 41% in the United States – the reality is that, without some help, the mobile Web is going to be straining under the weight of near-unlimited video consumption.

What we know is that, hungry as they are for content, streaming video consumers are fickle and impatient. Akamai demonstrated years ago the 2-second rule: if a requested piece of content isn’t available in under 2 seconds, Internet users simply move on to the next thing. And numerous studies have shown definitively that when re-buffering (the dreaded pause in playback while the viewing device downloads the next section of the video) exceeds just 1% of viewing time, audience engagement collapses, resulting in dwindling opportunities to monetize content that was expensive to acquire, and can be equally costly to deliver.

How big a problem is network congestion? It’s true that big, public, embarrassing outages across CDNs or ISPs are now quite rare. However, when we studied the network patterns of one of our customers, we found that what we call micro-outages (outages lasting 5 minutes or less) happen literally hundreds to thousands of times a day. That single customer was looking at some 600,000 minutes of directly lost viewing time per month – and when you consider how long each viewer might have stayed, and their decreased inclination to return in the future, that number likely translates to several million indirectly lost minutes.
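
One way that arithmetic can stack up, with an outage count assumed from within the “hundreds to thousands” range above:

```python
# One way the micro-outage arithmetic can stack up. The outage count
# is an assumption within the "hundreds to thousands" range above.
outages_per_day = 4_000
minutes_per_outage = 5      # micro-outages last 5 minutes or less
days_per_month = 30

lost_minutes = outages_per_day * minutes_per_outage * days_per_month
print(f"{lost_minutes:,} minutes/month")   # 600,000 minutes/month
```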

While mobile viewers are more likely to watch their content through an app (48% of all mobile Internet users) than a browser (18%), they still receive the content through the chaotic maelstrom of a network that is the Internet. As such, providers have to work out the best pathways to use to get the content there, and to ensure that the stream will have consistency over time so that it doesn’t fall prey to the buffering bug.

Most providers use stats and analysis to work out the right pathways – so they can look at how various CDN/ISP combos are working and pick the one that is delivering the best experience. Strikingly, though, they often have to make routing decisions for audience members in geographical locations that aren’t currently in play, which means choosing a pathway without any recent input on which will perform best – gambling with the experience of each viewer. What is needed is something predictive: something that helps the provider choose the right pathway the first time.

This is where the Radar Community comes in: by monitoring, tracking, and analyzing the activity of billions of Internet interactions every day, the community knows which pathways are at peak health, and which need a bit of a breather before getting back to full speed. So, when using Openmix to intelligently route traffic, the Radar community data provides the confidence that every decision is based on real-time, real-user data – even when, for a given provider, they are delivering to a location that has been sitting dormant.

Mobile video is devouring the Web, and will continue to do so, as consumers prefer their content to move, dance, and sing. Predictively re-routing traffic in real-time so that it circumvents the thousands of micro-outages that plague the Internet every day means never gambling with the experience of users, staying ahead of the challenges that congestion can bring, and building the sustainable businesses that will dominate the new world of streaming video.

How to Make Cloud Pay Its Own Way

RightScale came out with a wonderful report on the state of the cloud industry, and we learned some important new things:

  • 77% of organizations are at least exploring private cloud implementations
  • 82% of enterprises are executing a hybrid cloud strategy
  • 26% of respondents now list cost as a significant challenge – ironically, given the importance of cost-cutting in the early growth of cloud services

The growth in hybrid cloud adoption is particularly striking: by RightScale’s count, only 6% of companies are looking exclusively at private cloud and 18% exclusively at public cloud, while a full 71% have a toe dipped in each pool.

Meanwhile, Cisco estimates that two thirds of all Internet traffic will traverse at least one content delivery network by 2020 – which tends to imply that most organizations are, right now, invested in getting the most out of some combination of private cloud, public cloud, CDN, and, presumably, physically-managed data center.

Fundamentally, there are a few core ways that we see organizations using this market basket of delivery pathways – and, naturally, our Openmix global server load balancer – to better serve their customers, and to protect their economics as demand grows, apparently insatiable. The core strategies are:

  1. Balance CDNs, offload to origin. For web-centric businesses, delivering content across the Internet is fundamental to their success (possibly their survival), so they tend to rely upon one or more CDNs to get content to their users effectively. Over time, they tend to expand the number of CDN relationships, in order to improve quality across geographies, and to make the most of pricing differences between providers. Once they get this set to equilibrium, they discover that there is unused capacity at origin (or within a private or public cloud instance) to which they can offload traffic, maximizing the return they get on committed capacity, and minimizing unnecessary spend.
  2. Balance clouds, offload to CDN. For businesses that are highly geographically-focused, it is often more effective to create what is essentially a self-managed CDN, establishing PoPs through cloud providers in population centers where their customers actually originate. Even the most robust internally-managed system, however, is subject to traffic spikes that are way beyond expectations (and committed throughput limits), and so these companies build relationships with CDNs in which excess traffic is offloaded at peak times.
  3. Balance Hybrid Cloud. Organizations at the far right of Rightscale’s cloud maturity scale (in their words, the Cloud Explorers and Cloud Focused) are starting to view each of the delivery options not as wildly distinct options, but merely as similar-if-different-looking cogs in the machine. As such, they look at load and cost balancing through a pragmatic prism, in which each user is simply served through the lowest cost provider, so long as it can pass a pre-defined quality bar (a specified latency rate, for instance, or a throughput level). By shifting the mindset away from ‘primary’ and ‘offload’ networks, organizations are able to build strategies that optimize for both cost and quality.

Of course, balancing traffic across a heterogeneous set of delivery networks (and provider types), while adjusting for a combination of economic and quality-of-service metrics, requires three things:

  1. Real-time visibility of the state of the Internet beyond the view of the individual publisher, in order to be able to evaluate Quality of Service levels prior to selecting a delivery provider
  2. Real-time visibility into the current economic situation with each contracted provider: which offers the lowest cost option, based on unit pricing, contract commitments, and so forth
  3. Real-time traffic routing, which takes the data inputs, compares them to the unique requirements of the requesting publisher, and seamlessly directs traffic along the right pathway
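
Put together, the three ingredients reduce to a routing function shaped something like this sketch; the provider data and quality bar are illustrative:

```python
# Sketch combining the three ingredients above: real-time quality
# data, per-provider economics, and a routing decision. The provider
# list and quality bar are illustrative.
PROVIDERS = [
    # (name, measured latency ms, effective cost per GB)
    ("public-cloud-east", 38, 0.050),
    ("cdn-a", 41, 0.032),
    ("private-pop", 95, 0.010),   # cheapest, but currently slow
]

def route(providers, latency_bar_ms=60):
    # Keep only providers that clear the quality bar, then take the
    # cheapest of those; fall back to best-performing if none qualify.
    qualified = [p for p in providers if p[1] <= latency_bar_ms]
    if not qualified:
        return min(providers, key=lambda p: p[1])[0]
    return min(qualified, key=lambda p: p[2])[0]

print(route(PROVIDERS))   # -> cdn-a: clears the bar at lower cost
```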

Not an easy recipe, perhaps, but when mastered it creates the opportunity to apply sophisticated algorithms to delivery – in effect, to exercise a Wall Street-level arbitrage approach that results in a combination of delighted customers and reduced infrastructure costs.

Or, put another way, the opportunity to make your hybrid cloud strategy pay for itself – and more.

To find out more about real-time predictive traffic routing, please take a look around our Openmix pages, read about how to deliver 100% availability with a Hybrid CDN architecture, and visit our Github repository to see how easy it is to build your own real-time load balancing algorithm.

Make Mobile Video Stunning with Smart Load Balancing

If there’s one thing about which there is never an argument, it’s this: streaming video consumers never want to be reminded that they’re on the Internet. They want their content to start quickly, play smoothly and uninterrupted, and be visually indistinguishable from traditional TV and movies. Meanwhile, the majority of consumers in the USA (and likely a similar proportion worldwide) prefer to consume their video on mobile devices. And as if that wasn’t challenging enough, there are now suggestions that live video consumption will grow – according to Variety, by as much as 39 times! That seems crazy until you consider that Cisco predicted video would represent 82% of all consumer Internet traffic by 2020.

It’s no surprise that congestion can result in diminished viewing quality, leading over 50% of all consumers to, at some point, experience buffer rage from the frustration of not being able to play their show.

Here’s what’s crazy: there’s tons of bandwidth out there – but it’s stunningly hard to control.

The Internet is a best-effort environment, over which even the most effective Ops teams can wield only so much control, because so much of it is either resident with another team or simply somewhere in the amorphous ‘cloud’. While many savvy teams have sought to solve the problem by working with a Content Delivery Network (CDN), the sheer growth in traffic has meant that some CDNs are now dealing with as much traffic as the whole Internet transferred just a few years ago… and are themselves now subject to their own congestion and outage challenges. For this reason, plenty of organizations now contract with multiple CDNs, as well as placing their own virtual caching servers in public clouds, and even deploying their own bare-metal CDNs in data centers where their audiences are centered.

With all these great options for delivering content, Ops teams must make real-time decisions on how to balance the traffic across them all. The classic approaches to load balancing have been (with many thanks to Nginx):

  • Availability – Any servers that cannot be reached are automatically removed from the list of options (this prevents total link failure).
  • Round Robin – Requests are distributed across the group of servers sequentially.
  • Least Connections – A new request is sent to the server with the fewest current connections to clients. The relative computing capacity of each server is factored into determining which one has the least connections.
  • IP Hash – The IP address of the client is used to determine which server receives the request.
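
For concreteness, here are minimal sketches of two of these policies; the server pool and connection counts are invented:

```python
# Minimal sketches of two of the classic policies listed above, to
# make the contrast with experience-driven balancing concrete.
import itertools

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round robin: rotate through the pool, one request at a time.
_rotation = itertools.cycle(SERVERS)
def round_robin():
    return next(_rotation)

# Least connections: fewest open connections, adjusted for capacity.
def least_connections(open_conns, relative_capacity):
    return min(SERVERS, key=lambda s: open_conns[s] / relative_capacity[s])

open_conns = {"10.0.0.1": 12, "10.0.0.2": 4, "10.0.0.3": 9}
relative_capacity = {"10.0.0.1": 1.0, "10.0.0.2": 1.0, "10.0.0.3": 2.0}
print(round_robin(), least_connections(open_conns, relative_capacity))
# Note: neither policy knows anything about the end user's experience.
```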

You might notice something each of those has in common: they all focus on the health of the system, not on the quality of the experience actually being had by the end user. Anything that balances based on availability tends to be driven by what is known as synthetic monitoring, which is essentially one computer checking that another computer is available.

But we all know that just because a service is available doesn’t mean that it is performing to consumer expectations.

That’s why the new generation of Global Server Load Balancer (GSLB) solutions goes a step further. Today’s GSLB uses a range of inputs, including:

  • Synthetic monitoring – to ensure servers are still up and running
  • Community Real User Measurements – a range of inputs from actual customers of a broad range of providers, aggregated, and used to create a virtual map of the Internet
  • Local Real User Measurements – inputs from actual customers of the provider’s own service
  • Integrated 3rd party measurements – including cost bases and total traffic delivered for individual delivery partners, used to balance traffic based not just on quality, but also on cost

Combined, these data sources allow video streaming companies not only to guarantee availability, but also to tune their total network for quality, and to optimize within that for cost. Or put another way – streaming video providers can now confidently deliver the quality of experience consumers expect and demand, without breaking the bank to do it.

When you know that you are running across the delivery pathway with the highest quality metrics, at the lowest cost, based on the actual experience of your users – that’s a stunning result. And it’s only possible with smart load balancing, combining traditional synthetic monitoring with the real-time feedback of users around the world, and the 3rd party data you use to run your business.

If you’d like to find out more about smart load balancing, keep looking around our site. And if you’re going to be at Mobile World Congress at the end of the month, make an appointment to meet with us there so we can show you smart load balancing in real life.

Tracking Video QoS Just Got A Whole Lot Easier

If you follow this blog, you know we’ve mentioned before that we’ve been working with innovative customers to create a new way to track video Quality of Service (QoS) metrics and make sense of them.

It’s exciting therefore to share that now anyone and everyone can track video QoS in Radar.

Video is fundamentally different from a lot of other online content: not only is it huge (projections are that in the next four or five years video will make up as much as 80% of Internet traffic), it is inherently synchronous. Put another way, your customer might not notice if a page takes an extra second or two to load, but they surely notice if their favorite prime-time show keeps stalling out and showing the re-buffering spinner. So our new Performance Report focuses on the key elements that matter to viewers, specifically:

  • Response Time: how long it takes the content source to respond to a request from the intended viewer. Longer is worse!
  • Re-Buffering Ratio: the share of viewing time spent with the content stalled, the viewer frustrated, and the player trying to catch up. Lower is better!
  • Throughput: the speed at which chunks of the video are being delivered to the player after request. Faster is better!
  • Video Start Time: how long it takes for the video to start after viewer request. Shorter is better!
  • Video Start Failures: the percentage of requested video playbacks that simply never start. Lower is better!
  • Bitrate: the actual bitrate experienced by the viewer (bitrate is a pretty solid proxy for picture quality, as the larger the bitrate, the higher the likely resolution of the video). In this case, higher or lower may be better, depending on your KPIs.
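
As an illustration of how two of these metrics fall out of raw player events (a sketch, not the Radar tag’s actual implementation):

```python
# How two of these metrics fall out of raw player events. This is an
# illustrative sketch, not the Radar tag's actual implementation.
def rebuffering_ratio(stall_seconds, session_seconds):
    """Share of the viewing session spent stalled. Lower is better."""
    return stall_seconds / session_seconds if session_seconds else 0.0

def video_start_time(play_clicked_at, first_frame_at):
    """Seconds from the viewer's request to the first frame."""
    return first_frame_at - play_clicked_at

print(f"{rebuffering_ratio(9.0, 600.0):.1%}")            # 1.5% of playtime
print(f"{video_start_time(12.0, 14.2):.1f} s to start")  # 2.2 s
```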

Once you enable the tag for your account and add it to your video-bearing pages (see below), you’ll be able to track all these for your site. And, as with all Radar reports, you can slice and dice the results in all sorts of different ways to get a solid picture of how your service is doing, video-wise. Analyses might include:

  • How do my CDNs compare at different times of day, in different locations, or on different kinds of device?
  • What is the statistical distribution of service provided through my clouds? Does general consistency hide big peaks and valleys, or is service generally within a tight boundary?
  • What is the impact of throughput fluctuations to bitrates, video start times, or re-buffering ratios? What should I be focused on to improve my service for my unique audience?

In no time, you’ll have a deep and clear sense of what’s going on with video delivered through your HTML5 player, and be able to extrapolate this to make key decisions on CDN partnering, cloud distribution, and global server load balancing solutions. The ability to really dig down into things like device type and OS – as well as the more expected geography, time, delivery platform, and so forth – means you’ll be able to isolate issues that are not, in fact, delivery-related: for instance, it is possible to see a dip in quality and assume it’s cloud-related, only to discover, in drilling down, that the drop occurs on only one particular device/OS combination, and thus uncover a hiccup in a new product release.

So here’s the scoop. Collecting these QoS metrics isn’t just easy – it’s free, just like our other Radar real user measurements. With video QoS, you’ll be tracking your own visitors’ experiences, and be able to compare them over time.

The tag works with HTML5 players running in a browser, and it unsurprisingly takes a bit more planning to implement than our standard tag, so you’ll likely want to drop us a line to get started. We’ll be delighted to help you get this up and running – just contact us by going to your Portal and navigating to Impact -> Video Playback Data, then clicking the Contact button.

Much Like The Rain Across America, Video Is Streaming Everywhere!


For those of you outside the US – and not addicted to your weather feeds – you may feel a certain schadenfreude to know that in the first week of 2017 fully 49 out of 50 states had snow. And that California is, even now, being drenched by something called an ‘atmospheric river’, which, based on the pictures you can dig up almost anywhere, is exactly what it sounds like.

Thank goodness, then, for streaming – or Over the Top (OTT) – video, which entertains all of us as we huddle inside waiting for spring to appear.

And yet, perhaps these services are under attack in ways we’ve not noticed. According to CIO, sales taxes on streaming services are on their way (who knew Philadelphia already charged one?). On the other hand, is it that surprising? Netflix, Hulu, and Amazon Prime killed off the neighborhood Blockbuster, and that tax revenue has to be replaced by something – and given the speed at which streaming is growing (a 22.6% increase in subscription revenue in 2016), it’s a tempting little nest egg for any self-respecting taxman. In fact, in 2016, for the first time, streaming revenues exceeded revenues for physical media like DVDs.

One of the biggest stories of 2017 is likely to be the growth (or otherwise) of Internet-only streaming TV services. The category was kicked off by Dish with Sling TV and by Sony with PlayStation Vue; we’ll be keeping an eye on AT&T’s DirecTV Now, which is apparently keeping its low price of $35 for the time being. Nobody in the industry is really willing to place a bet on where this will go (HBO Go seems to be stuck at a million subscribers, which is nothing to sniff at, but growth is proving tricky) – but if it proves popular, all bets are off as to how future investments will be made in proprietary versus Internet delivery.

The biggest challenge for these new (and newly taxable!) services, of course, will really kick in when they become as ubiquitous as today’s rather more popular cable or satellite subscription. Because TV providers – whether the company that brings signal to your house, or the one that creates the channels you like to watch – are surprisingly robust. Ask yourself when the last time was that your favorite channel just plain stopped playing; or when your cable service stopped working (and no, you can’t count that time the electricity went out because you were undergoing an atmospheric river).

Now ask yourself how those streaming services are doing. Pop over to the Cedexis CDN and Cloud Performance reports and see how CDNs are doing – you’ll notice that, while you can get close to 99.9% availability by combining a handful of them and hooking them together with Openmix, it’s near-impossible for any single provider to reach 99%. Why? Because there are so many more moving parts in the Internet than in a closed, proprietary cable network. The fantastic news is that we can clearly see that, working together, CDNs can put together the sort of results that will be necessary to make streaming a credible challenger to the status quo.
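
The lift from combining providers is straightforward probability: if failures are (idealistically) independent, and a balancer like Openmix actually reroutes around outages, downtime probabilities multiply. A sketch, with made-up availability figures:

```python
# Why combining CDNs lifts availability: assuming independent failures
# and a balancer that reroutes around outages, downtimes multiply.
# The availability figures are made up for illustration.
def combined_availability(availabilities):
    all_down = 1.0
    for a in availabilities:
        all_down *= (1.0 - a)    # chance every provider is down at once
    return 1.0 - all_down

print(f"{combined_availability([0.985]):.2%}")          # one CDN: 98.50%
print(f"{combined_availability([0.985, 0.98]):.2%}")    # two CDNs: 99.97%
```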

And perhaps that’s the news of the month: working together. At Cedexis, we’re working with clients all around the world to create ideal delivery networks, from fortifying their origins, to implementing Varnish caching servers, to structuring robust multi-CDN architectures – then applying real user measurements (RUM) and advanced global traffic management algorithms to make sure that consumers get a great experience. If there are two things we’ll need to see this year to turbo-charge growth in the OTT space, they are (1) collaboration, to bring about (2) broadcast-quality online video, despite the Internet’s notoriously chaotic weather patterns.

Have questions about delivering broadcast-quality video online? Don’t miss our webinar with Level 3, this Thursday, January 12th, at 3pm GMT / 11am PST.