Posts

Announcing Cedexis Netscope: Advanced Network Performance and Benchmarking Analysis

The Cedexis Radar community collects tens of billions of real user monitoring data points each day, giving Cedexis users unparalleled insight into how applications, videos, websites, and large file downloads are actually being experienced by their users. We’re excited to announce a product that offers a new lens into the Radar community dynamic data set: Cedexis Netscope.

Know how your service stacks up, down to the IP subnet
Metrics like network throughput, availability, and latency don’t tell the whole story of how your service is performing, because they are network-centric, not user-centric: however comprehensively you track network operations, what matters is the experience at the point of consumption. Cedexis Netscope provides you with additional user-centric context to assess your service, namely the ability to compare your service’s performance to the results of the “best” provider in your market. With up-to-date Anonymous Best comparative data, you’ll have a data-driven benchmark to use for network planning, marketing, and competitive analysis.

Highlight your Service Performance:

  • Relative to peers in your markets
  • In specific geographies
  • Compared with specific ISPs
  • Down to the IP subnet
  • Including both IPv4 and IPv6 addresses
  • With comprehensive data on latency and throughput
  • Covering both static and dynamic delivery

Actionable insights
Netscope provides detailed performance data that can be used to improve your service for end users. IT Ops teams can use automated or custom reports to view performance from your ASN versus peer groups in the geographies you serve. This lets you fully understand how you stack up versus the “best” service provider, using the same criteria. Real-time logs organized by ASN can be used to inform instant service repairs or for longer-term planning.

Powered by: the world’s largest user experience community
Real User Monitoring (RUM) means fully understanding how internet performance impacts customer satisfaction and engagement. Cedexis gathers RUM data from each step between the client and any of the clouds, data centers, and CDNs hosting your applications to build a holistic picture of internet health. Every request creates more data, continuously updating this unique real-time virtual map of the web.

Data and alerts, your way
To effectively evaluate your service and enable real-time troubleshooting, Netscope lets you roll up data at the ASN, country, region, or state level. You can also zoom in on a specific ASN down to the IP subnet level to dissect the data in any way your business requires. This data is stored in the cloud on an ongoing basis. Netscope also lets you easily set up flexible network alerts for performance and latency deviations.
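As an illustration of the kind of deviation alert this automates, here is a minimal sketch in Python; the data shapes and thresholds are hypothetical, not part of the Netscope API.

```python
from statistics import mean

def latency_deviation_alerts(samples_by_asn, baseline_ms, tolerance=0.25):
    """Flag any ASN whose mean latency exceeds the baseline by more than
    `tolerance` (25% by default). `samples_by_asn` maps an ASN to recent
    latency measurements in milliseconds."""
    alerts = []
    for asn, samples in samples_by_asn.items():
        if not samples:
            continue
        observed = mean(samples)
        if observed > baseline_ms * (1 + tolerance):
            alerts.append((asn, observed))
    return alerts

# Hypothetical rollup: samples grouped per ASN over the last few minutes.
recent = {"AS7922": [48, 52, 61], "AS3320": [95, 110, 102]}
for asn, observed in latency_deviation_alerts(recent, baseline_ms=70):
    print(f"ALERT {asn}: mean latency {observed:.0f} ms exceeds baseline")
```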

Netscope helps ISP Product Managers and Marketers better understand:

  • How well users connect to the major content distributors
  • How well users/businesses connect to public clouds (AWS, Google Cloud, Azure, etc.)
  • When, where, and how often outages and throughput issues happen
  • What happens during different times of day
  • Where the risks are during big events (FIFA World Cup, live events, video/content releases)
  • How service on mobile looks versus web
  • How the ISP stacks up vs. “the best” ISP in the region

Bring advanced network analysis to your network
Netscope provides a critical data set you need for your network planning and enhancement. With its real-time understanding of worldwide network health, Netscope gives you the context and actionable data you need to delight customers and increase your market share.

Ready to use this data with your team?

Set up a demo today

 

Whither Net Neutrality

In case you missed it during what we shall carefully call a busy news week, the FCC voted to overturn net neutrality rules. While this doesn’t mean net neutrality is dead – there’s now a long period of public comments, and any number of remaining bureaucratic hoops to jump through – it does mean that it’s at best on life support.

So what does this mean anyway? And does it actually matter, or is it the esoteric nattering of a bunch of technocrats fighting over the number of angels that can dance on the head of a CAT-5 cable?

Let’s start with a level set: what is net neutrality? Wikipedia tells us that it is

“the principle that Internet service providers and governments regulating the Internet should treat all data on the Internet the same, not discriminating or charging differentially by user, content, website, platform, application, type of attached equipment, or mode of communication.”

Thought of a different way, the idea is that ISPs should simply provide a pipe through which content flows, and have neither opinion nor business interest in what that content is – just as a phone company doesn’t care who calls you or what they’re talking about.

The core argument for net neutrality is that, if the ISPs can favor one content provider over another, then it will create insurmountable barriers to entry for feisty, innovative new market entrants: Facebook, for example, could pay Comcast to make their social network run twice as smoothly as a newly-minted competitor. Meanwhile, that ISP could, in an unregulated environment, accept payment to favor the advertisements of one political candidate over another, or simply to block access to material for any purpose.

On the other side, there are all the core arguments of de-regulation: demonstrating adherence to regulations siphons off productive dollars; regulations stifle innovation and discourage investment; regulations are a response to a problem that hasn’t even happened yet, and may never occur (there’s a nice layout of these arguments at TechCrunch here). Additionally, classic market economics suggests that, if ISPs provide a service that doesn’t match consumers’ needs, then those consumers will simply take their business elsewhere.

It doesn’t much matter which side of the argument you find yourself on: as long as there is an argument to be had, it is going to be important to have control over one’s traffic as it traverses the Internet. Radar’s ability to track, monitor, and analyze Internet traffic will be vital whether net neutrality is the law of the land or not. Consider these opposing situations; a sketch of the underlying check follows the list:

  • Net neutrality is the rule. Tracking the average latency and throughput for your content, then comparing it to the numbers for the community at large, will swiftly alert you if an ISP is treating your competition preferentially.
  • Net neutrality is not the rule. Tracking the average latency and throughput for your content, then comparing it to the numbers for the community at large, will swiftly alert you if an ISP is not providing the level of preference for which you have paid (or that someone else has upped the ante and taken your leading spot).
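Either way, the underlying check is the same: compare your own numbers against the community baseline and flag ISPs where the gap is unusually large. A minimal sketch, assuming you can export per-ISP median latencies from Radar (the data shapes here are illustrative, not an actual API):

```python
def find_outlier_isps(own_latency_ms, community_latency_ms, threshold=1.5):
    """Return ISPs where our median latency is more than `threshold` times
    the community median: candidates for preferential (or punitive)
    treatment that are worth investigating."""
    outliers = {}
    for isp, community in community_latency_ms.items():
        own = own_latency_ms.get(isp)
        if own is None or community <= 0:
            continue
        ratio = own / community
        if ratio > threshold:
            outliers[isp] = round(ratio, 2)
    return outliers

# Illustrative numbers only.
ours = {"ISP-A": 180, "ISP-B": 65}
community = {"ISP-A": 70, "ISP-B": 60}
print(find_outlier_isps(ours, community))  # {'ISP-A': 2.57}
```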

Activists from both sides will doubtless be wrestling with how to regulate (or not!) the ISP market for years to come. Throughout, and even after the conclusion of that fight, it’s vital for every company that delivers content across the public Internet to be, and remain, aware of the service their users are receiving – and who to call when things aren’t going according to plan.

Find out more about becoming a part of the Radar community by clicking here to create an account.

New Feature: Apply Filters

The Cedexis UI team spends considerable time looking for ways to help make our products both useful and efficient. One of the areas they’ve been concentrating on is improving the experience of applying several sets of filters to a report, which historically has led to a reload of the report every time a user has changed the filter list.

So we are excited to be rolling out a new reporting feature today called Apply Filters. With the focus on improved usability and efficiency, this new feature allows you to select (or deselect) your desired filters first and then click the Apply Filters button to re-run the report. By selecting all your filters at once, you will save time and avoid the confusion of trying to remember which filters you selected while the report continuously refreshes itself.

The Apply Filters button has an off-state and an on-state. The off-state button is a lighter green version that you will see before any filter selections are made; the button becomes enabled (on-state) once a filter selection has been made. After you click Apply Filters and the report finishes re-running with the selected filters, the button returns to the off-state.

We have also placed the Apply Filters button at both the top and bottom of the Filters area. The larger button at the bottom stays in a fixed position, so no matter how many filter options you have open, it will always be easily accessible.


We hope you’ll agree this makes reports easier to use, and will save you time as you slice-and-dice your way to a deep and broad understanding of how traffic is making its way across the public internet.

Want to check out the new filters feature, but don’t have a portal account? Sign up here for free!

Together, we’re making a better internet. For everyone, by everyone.

Re-Writing Streaming Video Economics


The majority of Americans – make that the vast majority of American Millennials – stream video. Every day, in every way. From Netflix to Hulu, YouTube to Twitch, CBS to HBO, there is no TV experience that isn’t being accessed on a mobile phone, a tablet, a PC, or some kind of streaming device attached to a genuine, honest-to-goodness television.

The trouble is, we aren’t really paying for it: just 9% of a household’s video budget goes to streaming services, while the rest goes to all the usual suspects – cable companies, satellite providers, DVD distributors, and so forth. This can make turning a profit a tricky proposition: Netflix has only just started to churn out ‘material profits’, Hulu is suspected to be losing money, and Amazon is unlikely ever to break out the profitability of its Prime video service from the other benefits of the program.

The challenge is there are really only so many levers that can be pulled to make streaming video profitable:

  1. Charge (more) for subscriptions: except that when the cost goes up, adoption goes down, and decelerating growth is anathema to a start-up business
  2. Spend less on (licensing/making/acquiring) content: except that if the content quality misses, audience growth will follow it
  3. Spend less on delivering the content: except that if the quality goes down, audiences will depart, never to be seen again

One and two are tricky, and rely upon the subjective skills of pricing and content acquisition experts. Number three though…maybe there’s something there that is available to everyone.

And indeed, there is. Most video traffic these days travels across Content Delivery Networks (CDNs), which do yeoman’s work caching popular content around the globe and do much of the heavy lifting in working out the quickest way to get content from publisher to consumer. Over the years, these vital members of the infrastructure have gradually improved and refined their craft, to the point where they are about as reliable as they can be.

That said, no Ops team ever likes to have a single point of failure, which is why almost all large-scale internet outfits contract with at least two – if not more – CDNs. And that’s where the opportunity arises: it’s almost a guarantee that with two contracts, there will be differences in pricing for particular circumstances. Perhaps there is a pre-commit with one, or a time-of-day discount on the other; perhaps they simply offer different per-Gb pricing in return for varying feature sets.

With Openmix, you can build an algorithm that doesn’t just eliminate outages, and doesn’t just ensure consistent quality: once you have ensured that quality isn’t going to drop, you can decide where to send traffic based on financial parameters.
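As a rough illustration (not the actual Openmix application API), the decision logic can be as simple as this: keep only the platforms that currently meet your quality bar, then pick the cheapest of those.

```python
def choose_platform(candidates, max_latency_ms=100, min_availability=99.0):
    """candidates: list of dicts with measured quality and contracted cost.
    Filter out anything below the quality bar, then route to the cheapest
    platform that remains."""
    eligible = [
        c for c in candidates
        if c["latency_ms"] <= max_latency_ms and c["availability"] >= min_availability
    ]
    if not eligible:
        # Nothing meets the bar: fall back to the best-performing option.
        return min(candidates, key=lambda c: c["latency_ms"])
    return min(eligible, key=lambda c: c["cost_per_gb"])

cdns = [
    {"name": "CDN-1", "latency_ms": 62, "availability": 99.9, "cost_per_gb": 0.045},
    {"name": "CDN-2", "latency_ms": 71, "availability": 99.7, "cost_per_gb": 0.028},
]
print(choose_platform(cdns)["name"])  # CDN-2: both meet the bar, it is cheaper
```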

All of a sudden you have access to pull one of the three levers – without triggering the nasty side effects that make each one a mixed blessing. You can reduce your cost, without putting your quality at risk – it’s a win/win.

We’d love to show you more about this, so if you’re at NAB this week, do stop by.

How To Deliver Content for Free!

OK, fine, not for free per se, but using bandwidth that you’ve already paid for.

Now, the uninitiated might ask what’s the big deal – isn’t bandwidth essentially free at this point? And they’d have a point – the cost per Gigabyte of traffic moved across the Internet has dropped like a rock, consistently, for as long as anyone can remember. In fact, Dan Rayburn reported in 2016 seeing prices as low as ¼ of a penny per gigabyte. Sounds like a negligible cost, right?

As it turns out, no. As time has passed, the amount of traffic passing through the Internet has grown. This is particularly true for those delivering streaming video: consumers now turn up their noses at sub-broadcast-quality resolutions and expect at least an HD stream. To put this into context, moving from HD as a standard to 4K (which keeps threatening to take over) would result in the amount of traffic quadrupling. So while CDN prices per gigabyte might drop 25% or so each year, a publisher delivering four times the traffic is still looking at an increasingly large delivery bill.
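The arithmetic is easy to check: even a steady 25% annual drop in per-gigabyte pricing is overwhelmed by a 4x jump in bits delivered.

```python
# Normalized units: today's price and traffic are both 1.0.
price_per_gb = 1.00
traffic_gb = 1.00

new_price = price_per_gb * 0.75   # 25% cheaper next year
new_traffic = traffic_gb * 4      # HD -> 4K roughly quadruples the bits delivered

print(new_price * new_traffic)    # 3.0: the delivery bill still triples
```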

It’s also worth pointing out that the cost of delivery, relative to delivering video through a traditional network such as cable or satellite, is surprisingly high. An analysis by Redshift for the BBC clearly identifies the likely reality that, regardless of the ongoing reduction in per-terabyte pricing, “IP service development spend is likely to increase as [the BBC] faces pressure to innovate”, meaning that online viewers will be consuming more than their fair share of the pie.

Take back control of your content…and your costs

So, the price of delivery is out of alignment with viewership, and is increasing in practical terms. What’s a streaming video provider to do?

Allow us to introduce Varnish Extend, a solution that combines the powerful Varnish caching engine, which already helps deliver 25% of the world’s websites, with Openmix, the real-time, user-driven predictive load balancing system that uses billions of user measurements a day to direct traffic to the best pathway.

Cedexis and Varnish have both found that the move to the cloud has left a lot of broadcasters, as well as OTT providers, with unused bandwidth available on premises. By making it easy to transform an existing data center into a private CDN Point of Presence (PoP), Varnish Extend empowers companies to make the most of all the bandwidth they have already paid for, by setting up Varnish nodes on premises or on cloud instances that offer lower operational costs than CDN bandwidth.

This is especially valuable for broadcasters/service providers whose service is limited to one country: the global coverage of a CDN may be overkill, when the same quality of experience can be delivered by simply establishing POPs in strategic locations in-country.

Unlike committing to an all-CDN environment, using a private CDN infrastructure like Varnish Extend supports scaling to meet business needs – costs are based on server instances and decisions, not on the amount of traffic delivered. So as consumer demands grow, pushing for greater quality, the additional traffic doesn’t push delivery costs over the edge of sanity.

A global server load balancer like Openmix automatically checks available bandwidth on each Varnish node as well as each CDN, along with each platform’s performance in real-time. Openmix also uses information from the Radar real user measurement community to understand the state of the Internet worldwide and make smart routing decisions.

Your own private CDN – in a matter of hours

Understanding the health of both the private CDN and the broader Internet makes it a snap to dynamically switch end-users between Varnish nodes and CDNs, ensuring that cost containment doesn’t come at the expense of customer experience – simply establish a baseline of acceptable quality, then allow Openmix to direct traffic to the most cost-effective route that will still deliver on quality.

Implementing Varnish Extend is surprisingly simple (some customers have implemented their private CDN in as little as four hours); a sketch of the routing idea follows the steps:

  1. Deploy Varnish Plus nodes within an existing data center or on a public cloud.
  2. Configure Cedexis Openmix to leverage these nodes as well as existing CDNs.
  3. Result: End-users are automatically routed to the best delivery node based on performance, costs, etc.
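A minimal sketch of the routing idea, using hypothetical health data for each platform (the real decision runs inside Openmix, which has its own application framework): prefer a healthy private PoP with spare capacity, and overflow to a CDN when the PoP is saturated or degraded.

```python
def route_request(platforms, max_latency_ms=120):
    """platforms: list of dicts describing private Varnish PoPs and CDNs.
    Prefer a healthy private PoP with spare bandwidth (already paid for);
    otherwise overflow to the best-performing CDN."""
    def healthy(p):
        return p["available"] and p["latency_ms"] <= max_latency_ms

    pops = [p for p in platforms
            if p["type"] == "varnish-pop" and healthy(p) and p["utilization"] < 0.85]
    if pops:
        return min(pops, key=lambda p: p["latency_ms"])

    cdns = [p for p in platforms if p["type"] == "cdn" and healthy(p)]
    if cdns:
        return min(cdns, key=lambda p: p["latency_ms"])
    return min(platforms, key=lambda p: p["latency_ms"])  # last resort

platforms = [
    {"name": "pop-paris", "type": "varnish-pop", "available": True,
     "latency_ms": 38, "utilization": 0.60},
    {"name": "cdn-global", "type": "cdn", "available": True,
     "latency_ms": 55, "utilization": 0.0},
]
print(route_request(platforms)["name"])  # pop-paris: healthy, with headroom
```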

Learn in detail how to implement Varnish Extend

Sign up for Varnish Software – Cedexis Summit in NYC


Make Mobile Video Stunning with Smart Load Balancing

If there’s one thing about which there is never an argument it’s this: streaming video consumers never want to be reminded that they’re on the Internet. They want their content to start quickly, play smoothly and uninterrupted, and be visually indistinguishable from traditional TV and movies. Meanwhile, the majority of consumers in the USA (and likely a similar proportion worldwide) prefer to consume their video on mobile devices. And as if that wasn’t challenging enough, there are now suggestions that live video consumption will grow – according to Variety by as much as 39 times! That seems crazy until you consider that Cisco predicted video would represent 82% of all consumer Internet traffic by 2020.

It’s no surprise that congestion can result in diminished viewing quality, leading over 50% of all consumers to, at some point, experience buffer rage from the frustration of not being able to play their show.

Here’s what’s crazy: there’s tons of bandwidth out there – but it’s stunningly hard to control.

The Internet is a best-efforts environment, over which even the most effective Ops teams can wield only so much control, because so much of it is either resident with another team, or is simply somewhere in the amorphous ‘cloud’.  While many savvy teams have sought to solve the problem by working with a Content Delivery Network (CDN), the sheer growth in traffic has meant that some CDNs are now dealing with as much traffic as the whole Internet transferred just a few years ago…and are themselves now subject to their own congestion and outage challenges. For this reason, plenty of organizations now contract with multiple CDNs, as well as placing their own virtual caching servers in public clouds, and even deploying their own bare-metal CDNs in data centers where their audiences are centered.

With all these great options for delivering content, Ops teams must make real-time decisions on how to balance the traffic across them all. The classic approaches to load balancing have been (with many thanks to Nginx; two of them are sketched in code after the list):

  • Availability – Any servers that cannot be reached are automatically removed from the list of options (this prevents total link failure).
  • Round Robin – Requests are distributed across the group of servers sequentially.
  • Least Connections – A new request is sent to the server with the fewest current connections to clients. The relative computing capacity of each server is factored into determining which one has the least connections.
  • IP Hash – The IP address of the client is used to determine which server receives the request.
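For concreteness, here is a minimal Python sketch of two of these policies (illustrative only; production balancers such as Nginx implement them far more robustly):

```python
import itertools

class RoundRobin:
    """Hand out servers in a fixed rotation, regardless of load."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Send each new request to the server with the fewest open connections,
    scaled by its relative capacity (weight)."""
    def __init__(self, capacities):
        self.capacities = dict(capacities)       # {name: relative capacity}
        self.active = {name: 0 for name in capacities}

    def pick(self):
        name = min(self.active, key=lambda s: self.active[s] / self.capacities[s])
        self.active[name] += 1
        return name

    def release(self, name):
        self.active[name] -= 1

rr = RoundRobin(["a", "b", "c"])
print([rr.pick() for _ in range(4)])     # ['a', 'b', 'c', 'a']

lc = LeastConnections({"a": 1, "b": 2})  # b has twice the capacity of a
print([lc.pick() for _ in range(3)])     # ['a', 'b', 'b']
```

Note that neither policy knows anything about what the end user is actually experiencing, which is exactly the point made next.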

You might notice something each of those has in common: they all focus on the health of the system, not on the quality of the experience actually being had by the end user. Anything that balances based on availability tends to be driven by what is known as synthetic monitoring, which is essentially one computer checking that another computer is available.

But we all know that just because a service is available doesn’t mean that it is performing to consumer expectations.

That’s why the new generation of Global Server Load Balancer (GSLB) solutions goes a step further. Today’s GSLB uses a range of inputs, including:

  • Synthetic monitoring – to ensure servers are still up and running
  • Community Real User Measurements – a range of inputs from actual customers of a broad range of providers, aggregated, and used to create a virtual map of the Internet
  • Local Real User Measurements – inputs from actual customers of the provider’s own service
  • Integrated 3rd party measurements – including cost bases and total traffic delivered for individual delivery partners, used to balance traffic based not just on quality, but also on cost

Combined, these data sources allow video streaming companies not only to guarantee availability, but also to tune their total network for quality, and to optimize within that for cost. Or put another way – streaming video providers can now confidently deliver the quality of experience consumers expect and demand, without breaking the bank to do it.
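A minimal sketch of how those inputs might be combined, using hypothetical field names (the real logic lives in the load balancer’s own configuration): drop anything that synthetic checks report as down, keep candidates inside a latency budget derived from blended community and local RUM, then pick the cheapest.

```python
def smart_gslb(candidates, quality_budget_ms=80):
    """Each candidate carries synthetic health, community RUM latency,
    local RUM latency, and a contracted cost per GB."""
    up = [c for c in candidates if c["synthetic_up"]]
    # Blend community and local RUM: local data is sparser but more specific.
    for c in up:
        c["blended_ms"] = 0.4 * c["community_rum_ms"] + 0.6 * c["local_rum_ms"]
    within_budget = [c for c in up if c["blended_ms"] <= quality_budget_ms]
    if within_budget:
        return min(within_budget, key=lambda c: c["cost_per_gb"])
    return min(up, key=lambda c: c["blended_ms"])  # nothing fits: take the fastest

candidates = [
    {"name": "cdn-a", "synthetic_up": True, "community_rum_ms": 70,
     "local_rum_ms": 66, "cost_per_gb": 0.05},
    {"name": "cdn-b", "synthetic_up": True, "community_rum_ms": 78,
     "local_rum_ms": 74, "cost_per_gb": 0.03},
]
print(smart_gslb(candidates)["name"])  # cdn-b: both fit the budget, it is cheaper
```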

When you know that you are running across the delivery pathway with the highest quality metrics, at the lowest cost, based on the actual experience of your users – that’s a stunning result. And it’s only possible with smart load balancing, combining traditional synthetic monitoring with the real-time feedback of users around the world, and the 3rd party data you use to run your business.

If you’d like to find out more about smart load balancing, keep looking around our site. And if you’re going to be at Mobile World Congress at the end of the month, make an appointment to meet with us there so we can show you smart load balancing in real life.

What Can Metrics Tell Us About Internet Video Delivery?

Over the last year or so, we’ve been working with some innovative streaming video leaders to collect and analyze the Quality of Experience (QoE) their consumers have been receiving. Using the results of several billion streams, we can start to see some fascinating trends emerge.

The data was collected in Q4 2016 through an updated (and still free!) Radar community tag, which gathers video-specific QoE metrics from HTML5 player elements; it covers 10 video service providers serving both live and video-on-demand (VOD) assets to audiences all around the world.

Let’s start with a thoroughly unsurprising result: higher throughput is distinctly correlated with higher bitrates:

[Figure: throughput vs. bitrate]

That said, we can also say that the return for getting from below 10K kbps to above that line is significantly greater than getting from below 30K to above. Importantly, we can also see that the largest clusters of chunks occur below and around 10K, so focusing on improvement here will have the most significant impact on customer viewing.

We see a not-dissimilar result when we compare throughput with video start failures (VSF). More throughput is very highly correlated with low video start failures:

[Figure: throughput vs. video start failures]

Once again, getting to above 10K kbps brings the greatest benefit, dropping VSF from a peak of 9% to a more manageable 4%. Doubling the throughput roughly halves the VSF, though the benefits are more modest as speeds exceed 30K.

Less obvious is the degree to which using multiple CDNs can measurably impact the QoE of users. Take a look at the following graph, which compares the Latency of two CDNs across a 7-day period:

[Figure: latency of two CDNs over a 7-day period]

CDN1 (in red) shows a very consistent series of results, with only a couple of spikes that really catch the eye. By contrast, CDN2 (in green) shows way more spikes, a couple of which are quite striking, and a clear pattern of higher latency. Based on this very high level view, one might conclude that the incremental benefit of distributing traffic across the two providers would be relatively low. However, look what happens when we double-click and look at a single day:

[Figure: latency of two CDNs over a single day]

From midnight to around 5am, CDN2 is by far the superior option – and, tantalizingly, appears to become so again right around 11pm. This might be the perfect example of a situation in which some time-based traffic distribution could deliver QoE improvements. And, assuming the CDNs bear different cost structures, there may very well be an opportunity here to arbitrage some costs and improve margins.  Finally, let’s dig into what happens during a single, rather troublesome hour:

[Figure: latency of two CDNs over a single hour]

Note that for this particular hour, CDN2 outperforms CDN1 for about 50 minutes, meaning that from a pure QoE perspective, we would probably prefer traffic to be sent via CDN2 rather than CDN1. This is something that would be effectively impossible to spot at the 7-day level, but by digging in deeply, it becomes clear that distributing our traffic across these two CDNs would result in detectable differences for users.
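The same drill-down can be automated: group latency samples into time windows and see which CDN wins each window. A minimal sketch, assuming you have exported (timestamp, cdn, latency) tuples from your measurement data:

```python
from collections import defaultdict
from statistics import mean

def winners_by_window(samples, window_s=600):
    """samples: iterable of (epoch_seconds, cdn_name, latency_ms) tuples.
    Returns {window_start: cdn_with_lowest_mean_latency}."""
    buckets = defaultdict(lambda: defaultdict(list))
    for ts, cdn, latency in samples:
        buckets[ts - ts % window_s][cdn].append(latency)
    return {
        start: min(by_cdn, key=lambda cdn: mean(by_cdn[cdn]))
        for start, by_cdn in sorted(buckets.items())
    }

# Illustrative data: CDN2 wins the first ten-minute window, CDN1 the second.
samples = [(0, "CDN1", 90), (10, "CDN2", 60), (620, "CDN1", 55), (630, "CDN2", 80)]
print(winners_by_window(samples))  # {0: 'CDN2', 600: 'CDN1'}
```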

And what would that bring us? Using one more graph, we can see the relationship between latency and video start time (VST):

[Figure: latency vs. video start time]

Unsurprisingly, lower latency results in lower VST. Conversely, higher VST contributes to higher VSF – which, in more direct terms, means fewer people consuming video, and therefore fewer ads seen and a dwindling likelihood of subscription renewals.

Real User Measurements (RUM) that are tracked through the Cedexis Radar Community provide a powerful set of signposts for how to deliver traffic most effectively and efficiently through the Internet. Adding video-specific metrics helps ensure that the right decisions are being made for a sparkling video experience.

To find out more about adding video metrics for free to your Radar account, drop us a line at <sales@cedexis.com>.

How Much Money Is Your Website Performance Costing You?

Hello friends. We are pleased to announce that Cedexis will be partnering with SOASTA to bring you a roundtable discussion covering recent research into the costs of poor website performance and how the expectations of the modern web visitor have evolved.

If you’re interested in learning about:

  • How even slight web delays can lead to lost revenue
  • How to use Real User Measurements to pinpoint what’s not working and make that data actionable
  • What leading enterprises are doing to build high-performance sites

…then this is the event for you. Pete Mastin, Cedexis Product Evangelist, and Tammy Everts, SOASTA Director of Content, will be reviewing strategies and technologies to boost website performance, and show you how business KPIs can be used to ensure your site is contributing as much as possible to the bottom line.

This online event takes place Tuesday, Sept 20th and will be hosted by Aberdeen – the research group, not the Scottish city (though any region housing Balgownie Links is a place I’d like to visit). Jim Rapoza, Sr. Research Analyst at Aberdeen, will be moderating. Presentations will last approximately 30 minutes and there will be plenty of time for Q&A at the end.

This webinar is free as always. For more details and to register, click here. And if you want to learn more about Cedexis Openmix, which uses real-time data for global traffic management, click here. Thanks for reading! Enjoy the roundtable.

 

 

Matt Radochonski is Cedexis’ Director of Demand Generation & Marketing Operations. He can be reached on Twitter at @MattAtCedexis or via email, and feel free to leave a comment below.

Microsoft retiring support for all Internet Explorer browsers prior to Version 11: Does this pose security issues in places like China?

On January 12, 2016, Microsoft stopped support for all Internet Explorer browsers except IE 11. What does this mean? It means that security updates no longer get distributed for earlier browser versions. In the announcement that can be found here, Microsoft states clearly:

Security updates patch vulnerabilities that may be exploited by malware, helping to keep users and their data safer. Regular security updates help protect computers from malicious attacks, so upgrading and staying current is important.

So, how big of a browser security problem is this? It turns out it is a pretty big one. Here at Cedexis, we are fortunate to have the Radar Community, with over 900 enterprise contributors. This community generates billions of Real User Measurements (RUM) every day from every country in the world and every browser type and computer type. So, we see what people are using out there in the world. The results are not always pretty.

When we take 24 hours of measurements and limit it to just Microsoft IE, here is what we see for the entire world.

[Figure: IE versions in use worldwide]

Whoa! 48% of the IE world is pre-Internet Explorer 11! And 1% of the world that is using IE is still on IE 7, for goodness’ sake!

To make the point more clear: malware that infects your computer is often used to generate cyber attacks. As of January 12, 2016, Microsoft no longer provides security updates for 48% of the world’s Internet Explorer instances. This means the likelihood of these instances getting infected is close to 100%, creating an army of infected machines around the world.
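If you want to reproduce this kind of breakdown from your own access logs, the idea is simply to parse the IE version token out of each user-agent string and tally the shares; a rough sketch (note that IE 11 identifies itself with a Trident/7.0 token rather than an MSIE token):

```python
import re
from collections import Counter

def ie_version(user_agent):
    """Return the IE major version from a user-agent string, or None."""
    m = re.search(r"MSIE (\d+)", user_agent)
    if m:
        return int(m.group(1))
    if "Trident/7.0" in user_agent and "rv:11" in user_agent:
        return 11  # IE 11 dropped the MSIE token
    return None

def version_shares(user_agents):
    counts = Counter(v for v in map(ie_version, user_agents) if v is not None)
    total = sum(counts.values())
    return {v: round(100 * n / total, 1) for v, n in sorted(counts.items())}

agents = [
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
    "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko",
    "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko",
]
print(version_shares(agents))  # {7: 33.3, 11: 66.7}
```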

So, where do we typically see these cyber attacks coming from? This Government Technology Report identified these countries as the biggest offenders:

[Figure: top 10 hacking countries in the world]

Let’s take a look at the breakdown of Internet Explorer in China, shall we?

[Figure: IE versions in use in China]

Shiver me browsers! 63% of China’s instances of Internet Explorer are no longer receiving security updates. Over 10% of all the IE in China is on Version 7 – a full four versions back!

Let’s contrast that with the US for a sanity check (and since the US is #2 on places where cyber attacks originate from).

[Figure: IE versions in use in the USA]

While the US fares somewhat better than China, they are certainly not in great shape either. With only 42% of the IE instances in the US on Version 11, that leaves the majority of IE exposed.

What about Russia? How does the Northern Bear do in keeping its citizens up to date on their browsers?

[Figure: IE versions in use in Russia]

In some ways, Russia is much worse off than China and in some ways much better off. It’s worse in the obvious way – with only 31% of the IE in Russia on Version 11, that leaves roughly 69% on previous versions. Russia is better off because they have 63% on Version 10 (the directly previous version). This is a much easier upgrade path than moving from Version 7 to Version 11.

For comparison, how does Poland do?

[Figure: IE versions in use in Poland]

As you can see, Poland is doing a fantastic job of browser version maintenance compared to the three superpowers. With over 80% of their IE users on the current supported version of Internet Explorer, it may be no accident that they are not on the top-ten list of countries where attacks originate.

As a last point, we will look under the covers of a troubled country that is in the news today.

[Figure: IE versions in use in the Syrian Arab Republic]

While not as version compliant as the admirable Poles, the Syrians have a majority of their IE on the current version, and this is certainly better than China, the US and Russia.

How important is this security risk? How many people are still using Internet Explorer anyway? This is certainly a valid question. Depending on who you believe, it is anywhere from 10% to 24% of total browser usage.

[Figure: browser usage share estimates]

The Cedexis Radar Community data corresponds very closely to these numbers, and that puts IE in the top four (along with Firefox, Safari and Chrome). While usage of Internet Explorer has dropped from roughly 40% of the browser market in 2009 to whatever it is today, it is important to understand that this still represents many millions of copies of this potentially infectable software out in the wild.

Learn how to identify and solve issues for your company like security risks from outdated Internet Explorer browsers by joining the free Radar Community, accessing Real User Measurements and using our free traffic analysis tools to solve real world web traffic problems.

Improving Website Performance using Global Traffic Management Requires Sufficient Real User Measurements

At Cedexis, we often talk about our Radar community and the vast number of Real User Measurements (RUM) we take daily. Billions every day. Is that enough? Too many? How many measurements are sufficient? These are valid questions. As with many questions, the answer is “it depends”. It depends on what you are doing with the RUM measurements. Many companies that deploy RUM use it to analyze a website’s performance using Navigation Timing and Resource Timing. Cedexis does this, too, with its Impact product. This type of analysis may not require billions of measurements a day.

However, making the RUM data actionable by utilizing the data for Global Traffic Management is another matter. To do this, it is incredibly important to have data from as many of the networks that make up the Internet as is possible.  If the objective is to have a strong representative sample of the “last mile” then it turns out you need a pretty large number. Let’s take a closer look at perhaps how many.

The Internet is a network of networks. There are around 51k networks established that make up what we call the Internet today. These networks are named, or at least numbered, by a designator called an ASN, or Autonomous System Number. Each ASN is really a set of unified routing policies. As our friend Wikipedia states:

“Within the Internet, an autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain that presents a common, clearly defined routing policy to the Internet.”

Every ISP has one or more ASNs – usually more. There are 51,468 ASNs in the world as of August 2015. How does that look when you distribute it over whatever number of RUM measurements you can obtain? A perfect monitoring solution should tell you, for each network, whether your users are experiencing something bad – for instance, high latency from the network they are using.

If you are able to spread the measurements out to cover each network evenly (which you cannot) then you get something like the graph below.

[Figure: expected measurements per network, by daily RUM volume]

In the left-hand column, you see the number of RUM measurements you get per day, and the labels on the bars show the number of measurements per network you can expect.

So, if you distributed your RUM measurements over all the networks in the world, and you only had 100,000 page visits a day, you would get two measurements per network per day. This is abysmal from a monitoring perspective.
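The arithmetic behind that chart is straightforward; here is a quick sketch of the (optimistic) even-distribution assumption:

```python
TOTAL_ASNS = 51_468  # ASNs in existence as of August 2015

for daily_measurements in (100_000, 1_000_000, 10_000_000, 1_000_000_000):
    per_asn_per_day = daily_measurements / TOTAL_ASNS
    print(f"{daily_measurements:>13,} RUM/day -> "
          f"{per_asn_per_day:,.1f} measurements per network per day")
```

At 100,000 page views a day that works out to roughly two measurements per network; even ten million a day yields fewer than 200 per network, or about one every seven minutes.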

With so many ASNs, it’s easy to see why using synthetic measurements is hopeless. Even if you were to have 200 locations for your synthetic measurements and three networks per location, that would only give you 600 ASN/Geo pairings. Cedexis dynamically monitors over seven million ASN/Geo pairings every day.

One issue, however, is that RUM measurements are not distributed equally. We have been assuming that given your 51k networks you can spread those measurements over them equally, but that’s not the way RUM works. Rather, RUM works by taking the measurements from where they actually come from. It turns out that any given site has a more limited view of the ASNs we have been discussing. To understand this better, let’s look at a real example.

Assume you have a site that generates over 130 million page views a day. The data is from a Cedexis client and was culled over a 24-hour period in October 2015.

134 million is a pretty good number, and you’re a smart technologist who implemented your own RUM tag – you are tracking information about your users, so you can improve the site. You also use your RUM to monitor your site for availability. Your site has significant users in Europe and North and South America, so you’re only really tracking the RUM data from those locations for now. So, what is the spread of where your measurements come from?

Of the roughly 51k ASNs in the world, your site can expect measurements from approximately 1,800 different networks on any given day (specifically 1,810 on this day for this site).

[Figure: measurements per minute by ISP/ASN]

In the diagram above, you see a breakdown of the ISPs and ASNs that participated in the monitoring on this day – the size of each circle shows the number of measurements per minute. At the high end are Comcast and Orange S.A., with 4,457 and 6,377 measurements per minute, respectively. The last 108 networks (those with the fewest measurements) each garnered less than one measurement every two minutes. Again, that’s with 134 million page views a day.

The disparity between the top measurement-producing networks and the bottom is very high. As you can see in the table below, almost 30% of your measurements come from only 10 networks while the bottom 1,000 networks produce 2% of the measurements.

[Figure: share of measurements by network rank]

What is the moral here? Basically, RUM obtains measurements from the networks where the people are, and not so much from networks where there are fewer folks. And every site has a different demographic, meaning that the networks users come from for Site A are not necessarily the same mix of networks as for Site B. Any one site that deploys a RUM tag will not be able to get enough measurements from enough networks to make an intelligent decision about how to route traffic. It just will not have enough data.

This is the value of the Cedexis Radar community. By taking measurements from many sites (over 800 and rising), the Cedexis Radar community is able to combine these measurements and get a complete map of how ALL the ASN/Geo pairings are performing – over seven million ASN/Geo pairings a day – and our clients can utilize this shared data for their route optimization using Cedexis Openmix. This is what we mean when we say “Making the Internet Better for Everyone, By Everyone”. The community measurements allow every individual site (which may only see 1,800 of the 51k networks) to actually see them all!

The Radar community is free and open to anyone.  If you are not already a member, we urge you to sign up for a free account and see the most complete view of the Internet available today. While you are at it, you can go check out Radar Live, our live view of Internet traffic outages in real time.