Posts

Announcing Cedexis Netscope: Advanced Network Performance and Benchmarking Analysis

The Cedexis Radar community collects tens of billions of real user monitoring data points each day, giving Cedexis users unparalleled insight into how applications, videos, websites, and large file downloads are actually being experienced by their users. We’re excited to announce a product that offers a new lens into the Radar community’s dynamic data set: Cedexis Netscope.

Know how your service stacks up, down to the IP subnet
Metrics like network throughput, availability, and latency don’t tell the whole story of how your service is performing, because they are network-centric, not user-centric: however comprehensively you track network operations, what matters is the experience at the point of consumption. Cedexis Netscope provides you with additional user-centric context to assess your service, namely the ability to compare your service’s performance to the results of the “best” provider in your market. With up-to-date Anonymous Best comparative data, you’ll have a data-driven benchmark to use for network planning, marketing, and competitive analysis.

Highlight your Service Performance:

  • Relative to peers in your markets
  • In specific geographies
  • Compared with specific ISPs
  • Down to the IP subnet
  • Including both IPv4 and IPv6 addresses
  • With comprehensive data on latency and throughput
  • Covering both static and dynamic delivery

Actionable insights
Netscope provides detailed performance data that can be used to improve your service for end users. IT Ops teams can use automated or custom reports to view performance from your ASN versus peer groups in the geographies you serve. This lets you fully understand how you stack up versus the “best” service provider, using the same criteria. Real-time logs organized by ASN can be used to inform instant service repairs or for longer-term planning.

Powered by: the world’s largest user experience community
Real User Monitoring (RUM) is the key to fully understanding how internet performance impacts customer satisfaction and engagement. Cedexis gathers RUM data from each step between the client and any of the clouds, data centers, and CDNs hosting your applications to build a holistic picture of internet health. Every request creates more data, continuously updating this unique real-time virtual map of the web.

Data and alerts, your way
To effectively evaluate your service and enable real-time troubleshooting, Netscope lets you roll up data at the ASN, country, region, or state level. You can also zoom in on a specific ASN down to the IP subnet level to dissect the data in any way your business requires. This data is stored in the cloud on an ongoing basis. In addition, Netscope allows users to easily set up flexible network alerts for performance and latency deviations.

Netscope helps ISP Product Managers and Marketers better understand:

  • How well users connect to the major content distributors
  • How well users and businesses connect to public clouds (AWS, Google Cloud, Azure, etc.)
  • When, where, and how often outages and throughput issues happen
  • What happens during different times of day
  • Where the risks lie during big events (FIFA World Cup, live events, video/content releases)
  • How service on mobile looks versus web
  • How the ISP stacks up versus “the best” ISP in the region

Bring advanced network analysis to your network
Netscope provides the critical data set you need for network planning and enhancement. With its real-time understanding of worldwide network health, Netscope gives you the context and actionable data you need to delight customers and increase your market share.

Ready to use this data with your team?

Set up a demo today


New Feature: Reason Code Reporting

Cedexis’ Global Load Balancing solution Openmix makes over 2.5 billion real-time delivery decisions every day. These routing decisions are based on a combination of the Radar community’s 14 billion daily real user measurements and our customers’ defined business logic.

One thing we hear time and time again is: “It’s great that you are making all these decisions, but it would be very valuable to understand why you are switching pathways.” The “why” is hugely valuable in understanding the “what” (Decisions) and “when” (Time) of the Openmix decision-routing engine.

And so, we bring you: Reason Codes.

Reason Codes in Openmix applications are used to log and identify the decisions being made, so you can easily establish why users were routed to certain providers or geographic locations. Reason Codes reflect factors such as Geo overrides, Best Round Trip Time, Data Problems, Preferred Provider Availability, or whatever other logic is built into your Openmix applications. Being able to see which Reason Codes (the “why”) drove which decisions lets you see clearly where problems are arising in your delivery network, and make adjustments where necessary.
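
If you write your own Openmix application logic, the pattern is easy to picture. The sketch below is illustrative only: the provider aliases, probe shapes, thresholds, and single-letter codes are assumptions for the example rather than a copy of any production application, but it shows the idea of attaching a Reason Code at the exact moment a routing decision is made.

```javascript
// Illustrative sketch of an Openmix-style routing handler that records a
// Reason Code alongside each decision. Aliases, probe shapes, and thresholds
// are assumptions for the example, not production code.
var providers = ['cdn_a', 'cdn_b'];

function onRequest(request, response) {
    // Availability and round-trip-time measurements derived from Radar
    // (simplified data shapes assumed here).
    var avail = request.getProbe('avail');
    var rtt = request.getProbe('http_rtt');

    // Keep only the providers above an availability floor.
    var healthy = providers.filter(function (p) {
        return avail[p] && avail[p].avail >= 90;
    });

    if (healthy.length === 0) {
        // Nothing passes the floor: fall back to the first provider and log why.
        response.respond(providers[0], providers[0] + '.example.com');
        response.setReasonCode('C'); // e.g. "Data Problems"
        return;
    }

    // Pick the lowest measured round-trip time among the healthy providers.
    healthy.sort(function (a, b) { return rtt[a].http_rtt - rtt[b].http_rtt; });
    response.respond(healthy[0], healthy[0] + '.example.com');

    // Record *why*: availability forced the choice, or RTT won on its merits.
    response.setReasonCode(healthy.length < providers.length
        ? 'A'   // e.g. "Routed based on Availability data"
        : 'B'); // e.g. "Best Round Trip Time"
}
```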

Providing these types of insights is core to Cedexis’ DNA, so we are pleased to announce the general availability of Reason Codes as part of the Decision Report.  You can now view Reason Codes as both Primary and Secondary Dimensions, as well as through a specific filter for Reason Codes.

As a Cedexis Openmix user, you’ll want to get in on this right away. Being able to see that Openmix routed users from your preferred cloud or CDN provider to another one because of a certain event (perhaps a data outage in the UK) allows you to understand what transpired over a specific time period. No more second-guessing why decisions spiked in a certain country or network. Using Reason Codes, you can now easily see which applications are over- and under-performing, and why.

Here is an example of how you can start gaining insights.

You will notice in the first screenshot below that for a period of time, there was a spike in the number of decisions that Openmix made for two of the applications.

Now all you have to do is switch the view from Application as your primary dimension to Reason Code, and you can quickly see that “Routed based on Availability data” was the main reason for Openmix re-routing users.

Drilling down further, you can add Country as your Secondary Dimension and you can see that this was happening primarily in the United States.

All of a sudden, you’re in the know: there wasn’t just ‘something going on’ – there was a major Availability event in the US. Now it’s time to hunt down your rep from that provider and find out what happened, what the plan is to prevent it in the future, and how you can adjust your network to ensure continued excellent service for all your users.

Cloud-First + DevOps-First = 81% Competitive Advantage

We recently ran across a fascinating article by Jason Bloomberg, a recognized expert on agile digital transformation, that examines the interplay between Cloud-First and DevOps-First models. That article led us, in turn, to an infographic centered on some remarkable findings from a CA Technologies survey of 900-plus IT pros from around the world. The survey set out to explore the synergies between Cloud and DevOps, specifically in regards to software delivery. You can probably guess why we snapped to attention.

The study found that 20 percent of the organizations represented identified themselves as being strongly committed to both Cloud and DevOps, and their software delivery outperformed other categories (Cloud only, DevOps only, and slow adopters) by 81 percent. This group earned the label “Delivery Disruptors” for their outlying success at maximizing agility and velocity on software projects. On factors of predictability, quality, user experience, and cost control, the Disruptor organizations soared above those employing traditional methods, as well as Cloud-only and DevOps-only methods, by large percentages. For example, Delivery Disruptors were 117 percent better at cost control than Slow Movers, and 75 percent better in this category than the DevOps-only companies.

These findings, among others, got us to thinking about the potential benefits and advantages such Delivery Disruptors can gain from adding Cedexis solutions into their powerful mix. Say, for example, you have agile dev teams working on new products and apps and you want to shorten the execution time for new cloud projects. To let your developers focus on writing code, you need an app delivery control layer that supports multiple teams and architectures. With the Cedexis application delivery platform, you can support agile processes, deliver frequent releases, control cloud and CDN costs, guarantee uptime and performance, and optimize hybrid infrastructure. Your teams get to work their way, in their specific environment, without worrying about delivery issues looming around every corner.

Application development is constantly changing thanks to advances like containerization and microservice architecture – not to mention escalating consumer demand for seamless functionality and instant rich media content. And in a hybrid, multi-cloud era, infrastructure is so complex and abstracted that delivery intelligence has to be embedded in the application (you can read more about what our Architect, Josh Gray, has to say about delivery-as-code here).

To ensure that an app performs as designed, and end users have a high quality experience, agile teams need to automate and optimize with software-defined delivery. Agile teams can achieve new levels of delivery disruption by bringing together global and local traffic management data (for instance, RUM, synthetic monitoring results, and local load balancer health), application performance management, and cost considerations to ensure the optimal path through datacenters, clouds, and CDNs.

Imagine the agility and speed a team can contribute to digital transformation initiatives with fully automated app delivery based on business rules, actionable data from real user and synthetic testing, and self-healing network optimizations. Incorporating these capabilities with a maturing Cloud-first and DevOps-first approach will likely put the top performers so far ahead of the rest of the pack, they’ll barely be on the same racetrack.



Optimizing for Resources and Consumers Alike

One of the genuinely difficult decisions being made by DevOps, Ops, and even straight-up developers today is how to ensure outstanding quality of experience (QoE) for their users. Do you balance the hardware (physical, virtual, or otherwise) for optimal load? Or track quality of service (QoS) metrics – like throughput, latency, video start time, and so forth – and use those as the primary guide?

It’s really not a great choice, which is why we’re happy to say, the right answer to the question of whether to use local or global traffic management is: both.

It hasn’t been a great choice in the past because, while synthetic and real user measurements (RUM) overlap pretty broadly, neither is a subset of the other. For instance, RUM might be telling you that users are getting great QoE from a cluster of virtual servers in Northern Virginia – but it doesn’t tell you if those servers are near their capacity limits, and could do with some help to prevent overloading. Conversely, synthetic data can tell you where the most abundant resources are to complete a computational, storage, or delivery task – but it generally can’t tell you whether the experience at the point of consumption will be one of swift execution, or of fluctuating network service that causes a video to constantly sputter and pause as the user’s client tries to buffer the next chunk.

Today, though, you can combine the best of both worlds, as Cedexis has partnered with NGINX and their NGINX+ product line to produce a unique application delivery optimization solution. Think of it as a marriage of local traffic management (LTM) and global traffic management (GTM). LTM takes care of routing the traffic that arrives at a (virtual or physical) location efficiently between individual resources, ensuring that resources don’t get overloaded (and, of course, spinning up new instances as needed); GTM takes care of working out which location gets the request in the first place. Historically, LTM has been essentially blind to user experience, and GTM has been limited to relatively basic local network data (simple “is-it-working” synthetic monitoring for the most part).

Application delivery optimization demands not just real-time knowledge of what’s happening at both ends, but real-time routing decisions that ensure the end user is getting the best experience. Combining LTM and GTM makes it simple to:

  1. Improve on Round Robin or Geo-based balancing. For sure, physical proximity is a leading indicator of superior experience (all else being equal, data that has to travel shorter distances will arrive more quickly). By adding awareness of QoE at the point of consumption, however, Ops teams can ensure that geographically-bounded congestion or obstructions (say, for instance, a peering issue between a data center and an ISP) can be avoided by re-routing traffic to a higher-performing, if more geographically distant, option. In its simplest iteration, the algorithm simply says “so long as we can get a certain level of quality, choose the closest source, but never use any source that dips below that quality floor” (a minimal sketch of this selection logic follows this list).
  2. Re-route around unavailable server instances. Each data center or cloud may combine a cluster of server instances, balanced by NGINX+. When one of those instances becomes unavailable, however (whether through catastrophic collapse, or simply scheduled maintenance), the LTM can let the GTM know of its reduced capacity, and start the process of routing traffic to other alternatives before any server instance becomes overloaded. In essence, here the LTM is telling the GTM not to get too carried away with QoE – but to check that future experiences have a good chance of mirroring those being delivered in the present.
  3. Avoid application problems. NGINX+ lets Openmix know the health of the application in a given node in real time. So if, for instance, an application update is made to a subset of application servers, and it starts to throw an unusual number of 5xx errors, the GTM can start to route around that instance, and alert DevOps of an application problem. In this way, app updates can be distributed to some (but not all) locations throughout the network, then automatically de-provisioned if they turn out not to be functioning as expected.
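
To make the first two of these patterns concrete, here is a minimal, purely illustrative sketch of the quality-floor rule from point 1, extended with the capacity signal from point 2. The data shapes and thresholds are invented for the example; they are not the actual Openmix or NGINX+ interfaces.

```javascript
// Illustrative GTM selection rule: prefer the closest location, but never one
// whose measured quality (QoE) or remaining capacity falls below a floor.
// All fields and thresholds are assumptions for the example.
var QUALITY_FLOOR = 80;     // minimum acceptable QoE score (0-100, RUM-derived)
var CAPACITY_FLOOR = 0.15;  // minimum headroom fraction reported by the LTM

function chooseLocation(locations) {
    // locations: [{ name, distanceKm, qoeScore, headroom }]
    var eligible = locations.filter(function (loc) {
        return loc.qoeScore >= QUALITY_FLOOR &&   // quality at the point of consumption
               loc.headroom >= CAPACITY_FLOOR;    // capacity signal from the LTM
    });

    if (eligible.length === 0) {
        // Everything is degraded: fall back to the best quality available.
        return locations.slice().sort(function (a, b) {
            return b.qoeScore - a.qoeScore;
        })[0];
    }

    // Among acceptable locations, keep the geographic preference.
    return eligible.sort(function (a, b) {
        return a.distanceKm - b.distanceKm;
    })[0];
}

// Example: the closer site fails the quality floor, so the farther one wins.
// chooseLocation([
//   { name: 'us-east', distanceKm: 300,  qoeScore: 72, headroom: 0.40 },
//   { name: 'us-west', distanceKm: 3900, qoeScore: 91, headroom: 0.35 }
// ]); // -> us-west
```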

Combining the power of real user measurements, hardware health, and application health will mean expanding the ability of every team to deliver a high QoE to every customer. At no point will users’ requests be sent to servers approaching full use; nor will they be sent to sprightly resources that can’t actually deliver QoE owing to network congestion that is beyond their control.

It also, of course, will create a new standard: once a critical mass of providers is managing application delivery in this capacity-aware, consumer-responsive, application-tuned way, a rush will develop among those who have not yet reached this point to catch up. So take a moment now to explore how combining the LTM and GTM capabilities of NGINX+ and Cedexis might make sense for your environment – and get a step up on your competition.

The Cloud Is Coming

Still think the cloud (or should that be The Cloud?) is a possible-but-not-definite trend? Take a look at IDC’s projection of IT deployment types:

Credit: Forbes

So much to unpack! What really jumps out is that

  • Traditional data centers drop in share, but hang in there around 50%: self-managed hardware will be a fact of life as far out as we can see
  • Public cloud will double by 2021, but it isn’t devouring everything, because in the final analysis no Operations team wants to give up all control
  • Private cloud expands rapidly, as the skills to use the technology become more widespread
  • But most importantly…in the very near future, almost every shop will likely be running a hybrid network, which combines traditional data centers, private cloud deployments, public clouds for storage and computation, and CDNs for delivery (don’t forget that Cisco famously predicted over half of all Internet traffic would traverse a CDN by the year after next)

It’s a brave new world, indeed, that has so many options in it.

If it is true, though, that cloud computing will be a $162B a year business by 2020 (per Gartner), and that 74% of technology CFOs say cloud computing will have the most measurable impact on their business in 2017, that means this year will end up having been one of upheaval, and of transformation. As ever more complex permutations of public/private infrastructure hit the market, the challenges of keeping everything straight will rapidly multiply: can one truly be said to be optimizing if one cannot centralize the tracking and traffic management for all resources, regardless of whether they’re in your own NOC, under Amazon’s tender care in Virginia, or located at some unidentified POP somewhere in Western Europe?

The truth is that, as with all transformations, this move to hybrid networks will be marked by the classic Hype Cycle:

We are fast approaching the Peak of Inflated Expectations; the sudden fall into the Trough of Disillusionment will be precipitated by the realization that there are now so many different sources of computation in the mix that nobody is quite sure where the savings are. Perhaps we’re saving money by using different CDNs in different geographies – but it’s hard to tell if we’re balancing for economic benefit; perhaps we’re making the right move by storing all our images on a global cloud, but it’s hard to tell whether adding a second (with the inevitable growth in storage fees) would result in faster audience growth; perhaps we’re right to avoid sending content requests back to origin, but at the same time, that seems like a lot of resources to not use.

The Slope of Enlightenment will hit when the tools come along to put all the metrics of all the elements of the hybrid network onto a single pane: balancing between nodes that are, at an abstract level at least, equally measurable, configurable, and tunable will start us down the path to the Plateau of Productivity.

The Cloud is coming; how long we spend in the Trough of Disillusionment trying to figure out how to make it hum like a well-oiled machine is assuredly on us.

Caching at The Edge: The Secret Accelerator

Think about how much data has to move between a publisher and a whole audience of eager viewers, especially when that content is either being streamed live, or is a highly-anticipated season premiere (yes, we’re all getting excited for the return of GoT). Now ask yourself where there is useless repetition, and an opportunity to make the whole process more efficient for everyone involved.

Do so, and you come up with the Streaming Video Alliance-backed concept of Open Caching.

The short explanation is this: popular video content is detected and cached by ISPs at the edge; then, when consumers want to watch that content, they are served from local caches, instead of forcing everyone to pass a net-new version from origin to CDN to ISP. The amazing thing is how much of a win/win/win it really is:

  • Publishers and CDNs don’t have to deliver as much traffic to serve geographically-centered audiences
  • ISPs don’t have to pull multiple identical streams from publishers and CDNs
  • Consumers get their video more quickly and reliably, as it is served from a source that is much closer to them

A set of trials opened up in January, featuring some of the biggest names in streaming video: ViaSat, Viacom, Charter, Verizon, Yahoo, Limelight Networks, MLBAM, and Qwilt.

If this feels a bit familiar, it should: Netflix have essentially built exactly this (they call it Netflix Open Connect), by placing hardware within IXPs and ISPs around the world – some British researchers have mapped it, and it’s fascinating. And, indeed, they recently doubled down in India, deploying cached versions of their catalog (or at least the most used elements of it) all around that country.  The bottom line is that the largest streaming video provider (accounting for as much as 37% of all US Internet traffic) understands that the best experience is delivered by having the content closer to the consumer.

As it turns out, ISPs are flocking to this technology for all the reasons one might expect: it gives them back some control over their networks, and provides the opportunity to get off the backhaul treadmill. By pulling, say, a live event one time, caching it at the edge, then delivering from that edge cache, they can substantially reduce their network volume and make end customers happy.


And yet – most publishers are only vaguely aware that this is happening (if you’re all up to speed on ISP caching, consider yourself ahead of the curve). Part of the reason is that when ISPs cache content that has traveled their way through a CDN, they preserve the headers – so the traffic isn’t necessarily identifiable as having been cached. And, indeed, if you have video monitoring at the client, those headers are being used, potentially making the performance of a given CDN look even better than it already is, because content is being served at the edge by the ISP. The ISP, in other words, is making not only the publisher look good, with excellent QoE – they’re also making the CDN look like a rock star!

To summarize: the caching that is happening at the ISP level is like a double-super-secret accelerator for your content, whose impact is currently difficult to measure.

It’s also, however, pretty easy to break. Publishers who opt to secure all their traffic essentially eliminate the opportunity for the ISP to cache their content, because the caching intelligence can’t identify what the file is or whether it needs caching. Now, that’s not to say the challenge is insurmountable – APIs and integrations exist that allow the ISP to re-enter the fray, decrypt that secure transmission, and get back to work making everyone look good by delivering quickly and effectively to end consumers.

So if you aren’t yet up to speed on open caching, now is the time to do a little research. Pop over to the Streaming Video Alliance online and learn more about their Open Caching working group today – there’s nothing like finding out you deployed a secret weapon, without even knowing you did it.


Don’t Be Afraid of Microservices!

Architectural trends are to be expected in technology. From the original all-in-one-place Cobol behemoths half the world just learned existed because of Hidden Figures, to three-tiered architecture, to hyper-tier architecture, to Service Oriented Architecture….really, it’s enough to give anyone a headache.

And now we’re in a time of what Gartner very snappily calls Mesh App and Service Architecture (or MASA). Whether everyone else is going for that particular nomenclature is less relevant than the reality that we’ve moved on from web services and SOA toward containerization, de-coupling, and the broadest possible use of microservices.

Microservices sound slightly disturbing, as though they’re very, very small components, of which one would need dozens if not hundreds to do anything. Chris Richardson of Eventuate, though, recently begged us not to assume that, just because of the name, these units are tiny. In fact, it makes more sense to think of them as ‘hyper-targeted’ or ‘self-contained’ services: their purpose should be to execute a discrete set of logic, which can exist in isolation, and simply provide easily-accessed public interfaces. So, for instance, one could imagine a microservice whose sole purpose was to find the best match from a video library for a given user: the requesting code would provide details on the user, and the service would return the recommendation. Enormous amounts of sophistication may go into ingesting the user-identifying data, relating it to metadata, analyzing past results, and coming up with that one shining, perfect recommendation…but from the perspective of the team using the service, they just need to send a properly-formed request and receive a properly-formed response.
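
To picture the “hyper-targeted, self-contained” idea in code, here is a deliberately tiny sketch of such a recommendation service exposed as a single HTTP endpoint. Everything in it (the route, the payload shape, the scoring) is hypothetical; the point is simply that callers see one well-defined request and response, however much sophistication eventually hides behind it.

```javascript
// Hypothetical "recommendation" microservice: one discrete job, one public
// interface. Uses only Node's built-in http module so it is self-contained.
const http = require('http');

// Stand-in for the sophisticated part: metadata analysis, viewing history, etc.
function recommend(user) {
    const catalog = [
        { id: 'v1', title: 'Space Drama', tags: ['scifi', 'drama'] },
        { id: 'v2', title: 'Cooking Live', tags: ['food', 'live'] }
    ];
    // Trivial scoring: count overlapping interest tags.
    const scored = catalog.map(item => ({
        item,
        score: item.tags.filter(t => (user.interests || []).includes(t)).length
    }));
    scored.sort((a, b) => b.score - a.score);
    return scored[0].item;
}

http.createServer((req, res) => {
    if (req.method === 'POST' && req.url === '/recommendation') {
        let body = '';
        req.on('data', chunk => { body += chunk; });
        req.on('end', () => {
            const user = JSON.parse(body || '{}');
            res.writeHead(200, { 'Content-Type': 'application/json' });
            res.end(JSON.stringify(recommend(user)));
        });
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(8080);

// A caller only needs to send a properly-formed request, e.g.
//   POST /recommendation  {"userId": "123", "interests": ["scifi"]}
// and gets back one properly-formed recommendation object.
```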

The apps we all rely upon on those tiny little computers we carry around in our pocketbooks or pockets (i.e. smart phones) fundamentally rely on microservices, whether or not their developers thought to describe them that way. That’s why they sometimes wake up and spring to life with goodness…and sometimes seem to drag, or even fail to get going. They rely upon a variety of microservices – not always based at their own home location – and it’s the availability of all those microservices that dictates the user experience. If one microservice fails, and is not dealt with elegantly by the code, the experience becomes unsatisfactory.

If that feels daunting, it shouldn’t – one company managed to build the whole back end of a bank on this architecture.

Clearly, the one point of greatest risk is the link to the microservice – the API call, if you will. If the code calls a static endpoint, the risk is that that endpoint isn’t available for some reason, or at least isn’t available at an acceptable speed. This is why there are any number of solutions for trying to ensure the microservice is available, often split between authoritative DNS services (which essentially take all the calls for a given location and then assign them to backend resources based on availability) and application delivery controllers (generally physical devices that perform the same service). Of course, if either is down, life gets tricky quickly.

In fact, the trick to planning for highly available microservices is to call endpoints that are managed by a cloud-based application delivery service. In other words, as the microservice is required, a call goes out to a location that combines both synthetic and real-user measurements to determine the most performant source and redirect the traffic there. This compounds the benefits of the microservice architecture: not only can the microservice itself be maintained and updated independently of the apps that use it, but the network and infrastructure necessary to its smooth and efficient delivery can also be tweaked without affecting existing users.

Microservices are the future. To make the most of them, first ensure that they independently address discrete purposes; then make sure that their delivery is similarly defined and flexible, without recourse to updating the apps that use them; then settle back and watch performance meet innovation.

Live and Generally Available: Impact Resource Timing

We are very excited to be officially launching Impact Resource Timing (IRT) for general availability.

IRT is Impact’s powerful window into the performance of different sources of content for the pages in your website. For instance, you may want to distinguish the performance of your origin servers relative to cloud sources, or advertising partners; and by doing so, establish with confidence where any delays stem from. From here, you can dive into Resource Timing data sliced by various measurements over time, as well as through a statistical distribution view.

What is Resource Timing? Broadly speaking, resource timing measures latency within an application (i.e. the browser). It uses JavaScript as the primary mechanism to instrument various time-based metrics for all the resources requested and downloaded for a single website page by an end user. Individual resources are objects such as JS, CSS, images, and other files that the website page requests. The faster the resources are requested and loaded on the page, the better the quality of experience (QoE) for users. By contrast, resources that cause longer latency can produce a negative QoE for users. By analyzing resource timing measurements, you can isolate the resources that may be causing degradation, so your organization can fix them.
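
Under the hood, this relies on the standard Resource Timing API that modern browsers expose to JavaScript. The snippet below is a generic illustration of that browser API rather than the Cedexis collection tag itself: it logs each resource the page loaded along with a few of the time-based metrics discussed here.

```javascript
// Generic use of the browser's Resource Timing API: list every resource the
// page fetched, with a few derived, time-based metrics per resource.
window.addEventListener('load', function () {
    performance.getEntriesByType('resource').forEach(function (entry) {
        console.log({
            resource: entry.name,                                 // URL of the JS/CSS/image/etc.
            durationMs: entry.duration,                           // total fetch duration
            tcpConnectMs: entry.connectEnd - entry.connectStart,  // TCP connection time
            ttfbMs: entry.responseStart - entry.requestStart      // time to first byte
        });
    });
});
```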

Resource Timing Process:

Cedexis IRT makes it easy for you to track resources from identified sources – normally grouped by domain (*.myDomain.com), by sub-domain (e.g. images.myDomain.com), and by the provider serving your content. In this way, you can quickly group together types of content and identify the source of any latency. For instance, you might find that origin-located content is being delivered swiftly, while cloud-hosted images are slowing down the load time of your page; in such a situation, you would now be in a position to consider a range of solutions, including adding a secondary cloud provider and a global server load balancer to protect QoE for your users.
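
As a rough illustration of that grouping idea (again using the generic browser API rather than the IRT product itself), you could bucket resource entries by hostname and compare average durations to see which class of content is dragging the page down:

```javascript
// Sketch: group Resource Timing entries by hostname and compare average load
// duration per source (e.g. origin vs. images CDN vs. advertising partner).
function durationByHostname() {
    var buckets = {};
    performance.getEntriesByType('resource').forEach(function (entry) {
        var host = new URL(entry.name).hostname;   // e.g. images.myDomain.com
        (buckets[host] = buckets[host] || []).push(entry.duration);
    });
    return Object.keys(buckets).map(function (host) {
        var samples = buckets[host];
        var total = samples.reduce(function (a, b) { return a + b; }, 0);
        return {
            hostname: host,
            requests: samples.length,
            avgDurationMs: total / samples.length
        };
    }).sort(function (a, b) { return b.avgDurationMs - a.avgDurationMs; });
}
```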

Some benefits of tracking Resource Timing:

  • See which hostnames – and thus which classes of content – are slowing down your site.
  • Determine which resources impact your overall user experience.
  • Correlate resource performance with user experience.

Impact Resource Timing from Cedexis allows you to see how content sources are performing across various measurement types such as Duration, TCP Connection Time, and Round Trip Time. IRT reports also give you the ability to drill down further by Service Providers, Locations, ISPs, User Agent (device, browsers, OS) and other filters.

Check out our User Guide to learn more about our Measurement Type calculations.

There are two primary reports in this release of Impact Resource Timing: the Performance report, which gives you a trending view of resource timing over time, and the Statistical Distribution report, which presents Resource Timing data through a statistical distribution view. Both reports have very dynamic reporting capabilities that allow you to easily pinpoint resource-related issues for further analysis.


Using the Performance report, you can isolate which grouped resources are causing potential end user experience issues – by hostname, page, or service provider – and when the issue happened. Drill down even further to see whether it was a global issue, localized to a specific location, or specific to certain user devices or browsers.

IRT is now available for all in the Radar portal – take it for a spin and let us know your experiences!

Why The Web Is So Congested

If you live in a major city like London, Tokyo, or San Francisco, you learn one thing early: driving your car through the city center is about the slowest possible way to get around. Which is ironic, when you think about it, as cars only became popular because they made it possible to get around more quickly. There is, it seems, an inverse relationship between efficiency and popularity, at least when it comes to goods that pass through a public commons like roads.

Or like the Internet.

Think about all that lovely 4K video you could be consuming if there was nothing between you and your favorite VOD provider but a totally clear fiber optic cable. But unless you live in a highly over-provisioned location, that’s exactly what’s not going on; rather, you’re lucky to get a full HD picture, and even luckier if it stays at 1080p, without buffering, all the way through. Why? Because you’re sharing a public commons – the Internet – and its efficiency is being chewed away by popularity.

Let’s do some math to illustrate this:

  • Between 2013 and January 2017 the number of web users increased by 1.4 billion people to just over 3.7 billion. Today Internet penetration is at 50% (or put another way – half the world isn’t online yet)
  • In 2013, the average amount of Internet data per person was 7.9GB per month; by 2015 it was 9.9GB, with Cisco expecting it to reach over 25GB by 2020 – so assume something in the range of 15–17GB by 2017.
  • Logically, then, in 2013 web traffic would have been around 2.3B users * 7.9GB per month (roughly 18.1 exabytes); by 2017 it would have been around 3.7B * 17GB per month (roughly 62.9 exabytes)
  • If we assume another billion Internet users by 2020, we’re looking at 4.7B * 25GB per month – or a full 117.5 exabytes (the quick calculation after this list reruns these numbers)
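
The back-of-the-envelope arithmetic is easy to rerun: billions of users multiplied by gigabytes per user per month gives billions of gigabytes, i.e. exabytes, per month.

```javascript
// Back-of-the-envelope check of the traffic estimates above.
// (billions of users) * (GB per user per month) = billions of GB = exabytes/month
function monthlyExabytes(usersBillions, gbPerUserPerMonth) {
    return usersBillions * gbPerUserPerMonth;
}

console.log(monthlyExabytes(2.3, 7.9));  // 2013: ~18 EB per month
console.log(monthlyExabytes(3.7, 17));   // 2017: ~63 EB per month
console.log(monthlyExabytes(4.7, 25));   // 2020: ~118 EB per month
```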

In just seven years, then, monthly web traffic will have grown more than sixfold (based on this back-of-the-envelope math, anyway: Cisco estimates closer to 200 exabytes monthly by 2020).

And that is why the web is so busy.

But it doesn’t describe why the web is congested. Congestion happens when there is more traffic than transit space – which is why, as cities get larger and more populous, governments add lanes to major thoroughfares, meeting the automobile demand with road supply.

Unfortunately, unlike cars on roads, Internet traffic doesn’t travel in straight lines from point to point. So even though infrastructure providers have been building out capacity at a madcap pace, it’s not always connected in such a way that makes transit efficient. And, unlike roads, digital connections are not built out of concrete, and often become unavailable – sometimes for a long time that causes consternation and PR challenges, and sometimes just for a minute or so, stymying a relative handful of customers.

For information to get from A to B, it has to traverse any number of interconnected infrastructures, from ISPs to the backbone to CDNs, and beyond. Each is independently managed, meaning that no individual network administrator can guarantee smooth passage from beginning to end. And with all the traffic that has been – and will continue to be – added to the Internet, it has become essentially a guarantee that some portion of content requests will bump into transit problems along the way.

Let’s also note that the modern Internet is characterized less by cat memes, and more by the delivery of information, functionality, and ultimately, knowledge. Put another way, the Internet today is all about applications: whether represented as a tile on a smart phone home screen, or as a web interface, applications deliver the intelligence to take the sum total of all human knowledge that is somewhere on the web and turn it into something we can use. When you open social media, the app knows who you want to know about; when you consult your sports app, it knows which teams you want to know about first; when you check your financial app, it knows how to log you in from a fingerprint and which account details to show first. Every time that every app is asked to deliver any piece of knowledge, it is making requests across the Internet – and often multiple requests of multiple sources. Traffic congestion doesn’t just endanger the bitrate of your favorite sci fi series – it threatens the value of every app you use.

Which is why real-time predictive traffic routing is becoming a topic that web native businesses are digging deeper into. Think of it as Application Delivery for the web – a traffic cop that spots congestion and directs content around it, so that it’s as though it never happened. This is the only way to solve for efficient routing around a network of networks without a central administrator: assume that there will be periodic roadblocks, and simply prepare to take a different route.

The Internet is increasingly congested. But by re-directing traffic to the pathways that are fully available, it is possible to get around all those traffic jams. And, actually, it’s possible to do today.

Find out more by reading the story of how Rosetta Stone improved performance for over 60% of their worldwide customers.


How To Deliver Content for Free!

OK, fine, not for free per se, but using bandwidth that you’ve already paid for.

Now, the uninitiated might ask what’s the big deal – isn’t bandwidth essentially free at this point? And they’d have a point – the cost per Gigabyte of traffic moved across the Internet has dropped like a rock, consistently, for as long as anyone can remember. In fact, Dan Rayburn reported in 2016 seeing prices as low as ¼ of a penny per gigabyte. Sounds like a negligible cost, right?

As it turns out, no. As time has passed, the amount of traffic passing through the Internet has grown. This is particularly true for those delivering streaming video: consumers now turn up their noses at sub-broadcast-quality resolutions, and expect at least an HD stream. To put this into context, moving from HD as a standard to 4K (which keeps threatening to take over) would quadruple the amount of traffic. So while CDN prices per gigabyte might drop 25% or so each year, a publisher delivering four times the traffic is still looking at an increasingly large delivery bill.

It’s also worth pointing out that the cost of online delivery, relative to delivering video through a traditional network such as cable or satellite, is surprisingly high. An analysis by Redshift for the BBC clearly identifies the likely reality that, regardless of the ongoing reduction in per-terabyte pricing, “IP service development spend is likely to increase as [the BBC] faces pressure to innovate”, meaning that online viewers will be consuming more than their fair share of the pie.

Take back control of your content…and your costs

So, the price of delivery is out of alignment with viewership, and is increasing in practical terms. What’s a streaming video provider to do?

Allow us to introduce Varnish Extend, a solution combining the powerful Varnish caching engine, which already helps deliver 25% of the world’s websites, with Openmix, the real-time, user-driven predictive load balancing system that uses billions of user measurements a day to direct traffic to the best pathway.

Cedexis and Varnish have both found that the move to the cloud left a lot of broadcasters, as well as OTT providers, with unused bandwidth available on premises. By making it easy to transform an existing data center into a private CDN Point of Presence (PoP), Varnish Extend empowers companies to make the most of all the bandwidth they have already paid for, by setting up Varnish nodes on premises, or on cloud instances that offer lower operational costs than CDN bandwidth.

This is especially valuable for broadcasters and service providers whose service is limited to one country: the global coverage of a CDN may be overkill when the same quality of experience can be delivered by simply establishing PoPs in strategic locations in-country.

Unlike committing to an all-CDN environment, using a private CDN infrastructure like Varnish Extend supports scaling to meet business needs – costs are based on server instances and decisions, not on the amount of traffic delivered. So as consumer demands grow, pushing for greater quality, the additional traffic doesn’t push delivery costs over the edge of sanity.

A global server load balancer like Openmix automatically checks available bandwidth on each Varnish node as well as each CDN, along with each platform’s performance in real-time. Openmix also uses information from the Radar real user measurement community to understand the state of the Internet worldwide and make smart routing decisions.

Your own private CDN – in a matter of hours

Understanding the health of both the private CDN and the broader Internet makes it a snap to dynamically switch end-users between Varnish nodes and CDNs, ensuring that cost containment doesn’t come at the expense of customer experience – simply establish a baseline of acceptable quality, then allow Openmix to direct traffic to the most cost-effective route that will still deliver on quality.
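
As a thought experiment, that decision logic can be pictured roughly as follows; the field names and thresholds are invented for the illustration and are not the actual Varnish Extend or Openmix integration.

```javascript
// Illustrative only: prefer the private CDN (Varnish) capacity you have
// already paid for whenever it meets the quality baseline, and spill over to
// commercial CDNs otherwise. Field names and thresholds are invented.
var QUALITY_BASELINE_MS = 400;   // maximum acceptable response time
var MIN_HEADROOM = 0.10;         // keep 10% spare bandwidth on private PoPs

function pickDeliveryNode(privatePops, cdns) {
    // Private PoPs cost nothing extra per GB: use the fastest one that is
    // healthy and not close to saturating its bandwidth.
    var usablePop = privatePops
        .filter(function (p) {
            return p.responseTimeMs <= QUALITY_BASELINE_MS &&
                   p.bandwidthHeadroom >= MIN_HEADROOM;
        })
        .sort(function (a, b) { return a.responseTimeMs - b.responseTimeMs; })[0];

    if (usablePop) {
        return usablePop;
    }

    // Otherwise pay for CDN delivery, but still pick the best-performing CDN.
    return cdns.slice().sort(function (a, b) {
        return a.responseTimeMs - b.responseTimeMs;
    })[0];
}
```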

Implementing Varnish Extend is surprisingly simple (some customers have implemented their private CDN in as little as four hours):

  1. Deploy Varnish Plus nodes within an existing data center or on a public cloud.
  2. Configure Cedexis Openmix to leverage these nodes as well as existing CDNs.
  3. Result: End-users are automatically routed to the best delivery node based on performance, costs, etc.

Learn in detail how to implement Varnish Extend

Sign up for Varnish Software – Cedexis Summit in NYC

References/Recommended Reading: