How to Make Cloud Pay Its Own Way

Rightscale came out with a wonderful report on the state of the cloud industry, and we learned some important new things:

  • 77% of organizations are at least exploring private cloud implementations
  • 82% of enterprises are executing a hybrid cloud strategy
  • 26% of respondents are now listing cost as a significant challenge – ironically, given the importance of cost-cutting in the early growth of cloud services

The growth in hybrid cloud adoption is particularly striking: by Rightscale’s count, only 6% of companies are exclusively looking at private cloud, 18% are exclusively looking at public cloud, while a full 71% have a toe dipped into each pool.

Meanwhile, Cisco estimates that two thirds of all Internet traffic will traverse at least one content delivery network by 2020 – which tends to imply that most organizations are, right now, invested in getting the most out of some combination of private cloud, public cloud, CDN, and, presumably, physically-managed data center.

Fundamentally, there are a few core ways that we see organizations using this market basket of delivery pathways – and, naturally, our Openmix global server load balancer – to better serve their customers, and to protect their economics as demand grows, apparently insatiable. The core strategies are:

  1. Balance CDNs, offload to origin. For web-centric businesses, delivering content across the Internet is fundamental to their success (possibly their survival), so they tend to rely upon one or more CDNs to get content to their users effectively. Over time, they tend to expand the number of CDN relationships, in order to improve quality across geographies, and to make the most of pricing differences between providers. Once they get this set to equilibrium, they discover that there is unused capacity at origin (or within a private or public cloud instance) to which they can offload traffic, maximizing the return they get on committed capacity, and minimizing unnecessary spend.
  2. Balance clouds, offload to CDN. For businesses that are highly geographically-focused, it is often more effective to create what is essentially a self-managed CDN, establishing PoPs through cloud providers in population centers where their customers actually originate. Even the most robust internally-managed system, however, is subject to traffic spikes that are way beyond expectations (and committed throughput limits), and so these companies build relationships with CDNs in which excess traffic is offloaded at peak times.
  3. Balance Hybrid Cloud. Organizations at the far right of Rightscale’s cloud maturity scale (in their words, the Cloud Explorers and Cloud Focused) are starting to view each of the delivery options not as wildly distinct options, but merely as similar-if-different-looking cogs in the machine. As such, they look at load and cost balancing through a pragmatic prism, in which each user is simply served through the lowest cost provider, so long as it can pass a pre-defined quality bar (a specified latency rate, for instance, or a throughput level). By shifting the mindset away from ‘primary’ and ‘offload’ networks, organizations are able to build strategies that optimize for both cost and quality.
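The third strategy can be sketched in a few lines of code. This is a minimal illustration – the provider names, prices, and quality numbers below are invented, and a real deployment would draw them from live measurements:

```python
# Sketch of cost/quality balancing: serve each user through the lowest-cost
# provider that clears a pre-defined quality bar. All values are illustrative.

def pick_provider(providers, max_latency_ms, min_throughput_kbps):
    """Return the lowest-cost provider that passes the quality bar."""
    eligible = [
        p for p in providers
        if p["latency_ms"] <= max_latency_ms
        and p["throughput_kbps"] >= min_throughput_kbps
    ]
    if not eligible:
        # No provider clears the bar: fall back to best quality regardless of cost.
        return min(providers, key=lambda p: p["latency_ms"])
    return min(eligible, key=lambda p: p["cost_per_gb"])

providers = [
    {"name": "cdn_a",  "cost_per_gb": 0.08, "latency_ms": 40, "throughput_kbps": 25000},
    {"name": "cdn_b",  "cost_per_gb": 0.05, "latency_ms": 70, "throughput_kbps": 18000},
    {"name": "origin", "cost_per_gb": 0.02, "latency_ms": 95, "throughput_kbps": 12000},
]

choice = pick_provider(providers, max_latency_ms=80, min_throughput_kbps=15000)
print(choice["name"])  # cdn_b: the cheapest option that still passes the bar
```

Note that there is no notion of ‘primary’ or ‘offload’ here – every provider competes on equal terms for every request.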

Of course, balancing traffic across a heterogeneous set of delivery networks (and provider types), while adjusting for a combination of economic and quality-of-service metrics, requires three things:

  1. Real-time visibility of the state of the Internet beyond the view of the individual publisher, in order to be able to evaluate Quality of Service levels prior to selecting a delivery provider
  2. Real-time visibility into the current economic situation with each contracted provider: which offers the lowest cost option, based on unit pricing, contract commitments, and so forth
  3. Real-time traffic routing, which takes the data inputs, compares them to the unique requirements of the requesting publisher, and seamlessly directs traffic along the right pathway

Not an easy recipe, perhaps, but once in place it creates the opportunity to apply sophisticated algorithms to delivery – in effect, to exercise a Wall Street-level arbitrage approach, resulting in a combination of delighted customers and reduced infrastructure costs.

Or, put another way, the opportunity to make your hybrid cloud strategy pay for itself – and more.

To find out more about real-time predictive traffic routing, please take a look around our Openmix pages, read about how to deliver 100% availability with a Hybrid CDN architecture, and visit our GitHub repository to see how easy it is to build your own real-time load balancing algorithm.

Make Mobile Video Stunning with Smart Load Balancing

If there’s one thing about which there is never an argument it’s this: streaming video consumers never want to be reminded that they’re on the Internet. They want their content to start quickly, play smoothly and uninterrupted, and be visually indistinguishable from traditional TV and movies. Meanwhile, the majority of consumers in the USA (and likely a similar proportion worldwide) prefer to consume their video on mobile devices. And as if that wasn’t challenging enough, there are now suggestions that live video consumption will grow – according to Variety by as much as 39 times! That seems crazy until you consider that Cisco predicted video would represent 82% of all consumer Internet traffic by 2020.

It’s no surprise that congestion can result in diminished viewing quality, leading over 50% of all consumers to, at some point, experience buffer rage from the frustration of not being able to play their show.

Here’s what’s crazy: there’s tons of bandwidth out there – but it’s stunningly hard to control.

The Internet is a best-effort environment, over which even the most effective Ops teams can wield only so much control, because so much of it is either resident with another team, or is simply somewhere in the amorphous ‘cloud’. While many savvy teams have sought to solve the problem by working with a Content Delivery Network (CDN), the sheer growth in traffic has meant that some CDNs are now dealing with as much traffic as the whole Internet transferred just a few years ago… and are themselves now subject to their own congestion and outage challenges. For this reason, plenty of organizations now contract with multiple CDNs, as well as placing their own virtual caching servers in public clouds, and even deploying their own bare-metal CDNs in data centers where their audiences are centered.

With all these great options for delivering content, Ops teams must make real-time decisions on how to balance the traffic across them all. The classic approaches to load balancing have been (with many thanks to Nginx):

  • Availability – Any servers that cannot be reached are automatically removed from the list of options (this prevents total link failure).
  • Round Robin – Requests are distributed across the group of servers sequentially.
  • Least Connections – A new request is sent to the server with the fewest current connections to clients. The relative computing capacity of each server is factored into determining which one has the least connections.
  • IP Hash – The IP address of the client is used to determine which server receives the request.
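For illustration, the classic methods above can be sketched roughly as follows (server names, connection counts, and capacities are placeholders, and this is not Nginx’s actual implementation):

```python
import itertools
import hashlib

servers = ["s1", "s2", "s3"]

# Round Robin: cycle through the servers sequentially.
rr = itertools.cycle(servers)
def round_robin():
    return next(rr)

# Least Connections: pick the server with the fewest active connections,
# weighted by each server's relative computing capacity.
connections = {"s1": 10, "s2": 4, "s3": 7}
capacity = {"s1": 1.0, "s2": 0.5, "s3": 1.0}
def least_connections():
    return min(servers, key=lambda s: connections[s] / capacity[s])

# IP Hash: a stable hash of the client IP pins each client to one server.
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(round_robin())        # s1
print(least_connections())  # s3 (7/1.0 beats 10/1.0 and 4/0.5)
```

Availability-based removal simply filters unreachable servers out of the `servers` list before any of these run.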

You might notice something each of those has in common: they all focus on the health of the system, not on the quality of the experience actually being had by the end user. Anything that balances based on availability tends to be driven by what is known as synthetic monitoring, which is essentially one computer checking that another computer is available.

But we all know that just because a service is available doesn’t mean that it is performing to consumer expectations.

That’s why the new generation of Global Server Load Balancer (GSLB) solutions goes a step further. Today’s GSLB uses a range of inputs, including:

  • Synthetic monitoring – to ensure servers are still up and running
  • Community Real User Measurements – a range of inputs from actual customers of a broad range of providers, aggregated, and used to create a virtual map of the Internet
  • Local Real User Measurements – inputs from actual customers of the provider’s own service
  • Integrated 3rd party measurements – including cost bases and total traffic delivered for individual delivery partners, used to balance traffic based not just on quality, but also on cost
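One plausible way a GSLB might blend these inputs is a weighted score per platform; the field names, weights, and formula below are invented for illustration, not the actual Openmix API:

```python
# Sketch of a GSLB scoring function: synthetic monitoring acts as a hard
# availability gate, RUM data drives quality, and 3rd-party cost data is
# subtracted so cheaper platforms win ties. All numbers are made up.

def score(platform, weights):
    if not platform["synthetic_up"]:  # synthetic check failed: never route here
        return float("-inf")
    return (
        weights["rum"] * platform["community_rum_score"]    # community RUM (0-100)
        + weights["local"] * platform["local_rum_score"]    # local RUM (0-100)
        - weights["cost"] * platform["cost_per_gb"] * 100   # integrated cost data
    )

platforms = [
    {"name": "cdn_a", "synthetic_up": True,  "community_rum_score": 90, "local_rum_score": 85, "cost_per_gb": 0.08},
    {"name": "cdn_b", "synthetic_up": True,  "community_rum_score": 80, "local_rum_score": 88, "cost_per_gb": 0.04},
    {"name": "cdn_c", "synthetic_up": False, "community_rum_score": 95, "local_rum_score": 95, "cost_per_gb": 0.03},
]

weights = {"rum": 0.5, "local": 0.3, "cost": 0.2}
best = max(platforms, key=lambda p: score(p, weights))
print(best["name"])  # cdn_a: best blended quality among the available platforms
```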

Combined, these data sources allow video streaming companies not only to guarantee availability, but also to tune their total network for quality, and to optimize within that for cost. Or put another way – streaming video providers can now confidently deliver the quality of experience consumers expect and demand, without breaking the bank to do it.

When you know that you are running across the delivery pathway with the highest quality metrics, at the lowest cost, based on the actual experience of your users – that’s a stunning result. And it’s only possible with smart load balancing, combining traditional synthetic monitoring with the real-time feedback of users around the world, and the 3rd party data you use to run your business.

If you’d like to find out more about smart load balancing, keep looking around our site. And if you’re going to be at Mobile World Congress at the end of the month, make an appointment to meet with us there so we can show you smart load balancing in real life.

Tracking Video QoS Just Got A Whole Lot Easier

If you follow this blog, you know we’ve mentioned before that we have been working with innovative customers to create a new way to track video Quality of Service (QoS) metrics and make sense of them.

It’s exciting therefore to share that now anyone and everyone can track video QoS in Radar.

Video is fundamentally different from a lot of other online content: not only is it huge (projections are that in the next four or five years video will make up as much as 80% of Internet traffic), it is inherently synchronous. Put another way, your customer might not notice if a page takes an extra second or two to load, but they surely notice if their favorite prime time show keeps stalling out and showing the re-buffering spinner. So our new Performance Report focuses on the key elements that matter to viewers, specifically:

  • Response Time: how long it takes the content source to respond to a request from the intended viewer. Longer is worse!
  • Re-Buffering Ratio: the share of viewing time spent with the content stalled, the viewer frustrated, and the player trying to catch up. Lower is better!
  • Throughput: the speed at which chunks of the video are being delivered to the player after request. Faster is better!
  • Video Start Time: how long it takes for the video to start after viewer request. Shorter is better!
  • Video Start Failures: the percentage of requested video playbacks that simply never start. Lower is better!
  • Bitrate: the actual bitrate experienced by the viewer (bitrate is a pretty solid proxy for picture quality, as the larger the bitrate, the higher the likely resolution of the video). In this case, higher or lower may be better, depending on your KPIs.
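As a rough sketch of how metrics like these might be derived from raw playback sessions (the session fields here are invented, not the actual Radar tag’s data model):

```python
# Derive summary QoS metrics from per-session playback data.
# Each session records whether playback started, watch and stall time
# in seconds, and how long the video took to start.

def qos_summary(sessions):
    starts = [s for s in sessions if s["started"]]
    total_watch = sum(s["watch_s"] for s in starts)
    total_stall = sum(s["stall_s"] for s in starts)
    return {
        # Share of requested playbacks that never started.
        "video_start_failures_pct": 100 * (len(sessions) - len(starts)) / len(sessions),
        # Share of viewing time spent stalled (re-buffering ratio).
        "rebuffer_ratio_pct": 100 * total_stall / (total_watch + total_stall),
        # Average time from request to first frame.
        "avg_video_start_time_s": sum(s["start_time_s"] for s in starts) / len(starts),
    }

sessions = [
    {"started": True,  "watch_s": 600, "stall_s": 6,  "start_time_s": 1.2},
    {"started": True,  "watch_s": 300, "stall_s": 12, "start_time_s": 2.8},
    {"started": False, "watch_s": 0,   "stall_s": 0,  "start_time_s": 0},
]

print(qos_summary(sessions))
```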

Once you enable the tag for your account and add it to your video-bearing pages (see below), you’ll be able to track all these for your site. And, as with all Radar reports, you can slice and dice the results in all sorts of different ways to get a solid picture of how your service is doing, video-wise. Analyses might include:

  • How do my CDNs compare at different times of day, in different locations, or on different kinds of device?
  • What is the statistical distribution of service provided through my clouds? Does general consistency hide big peaks and valleys, or is service generally within a tight boundary?
  • What is the impact of throughput fluctuations to bitrates, video start times, or re-buffering ratios? What should I be focused on to improve my service for my unique audience?

In no time, you’ll have a deep and clear sense of what’s going on with video delivered through your HTML5 player, and be able to extrapolate this to make key decisions on CDN partnering, cloud distribution, and global server load balancing solutions. The ability to really dig down into things like device type and OS – as well as the more expected geography, time, delivery platform, and so forth – means you’ll be able to isolate issues that are not, in fact, delivery-related: for instance, it is possible to see a dip in quality and assume it’s cloud-related, only to discover, in drilling down, that the drop occurs on only one particular device/OS combination, and thus uncover a hiccup in a new product release.

So here’s the scoop. Collecting these QoS metrics isn’t just easy – it’s free, just like our other Radar real user measurements. With the video QoS, you’ll be tracking your own visitors’ experiences, and be able to compare them over time.

The tag works with HTML5 players running in a browser, and it unsurprisingly takes a bit more planning to implement than our standard tag, so you’ll likely want to drop us a line to get started. We’ll be delighted to help you get this up and running – just contact us by going to your Portal and navigating to Impact -> Video Playback Data, then clicking the Contact button.

Fixing The Real OTT Challenge: Monetization


HBO remains a jewel in the crown of Time Warner, while also being a bit of a problem. One of the issues is the slowdown in growth of HBO Direct: its subscriber count has rather publicly sat at give-or-take-a-million since shortly after launch, and stubbornly refuses to move. There are a million reasons for this (not least of which is that the hundred million or so cable and satellite subscribers can simply buy it for the TV, then use HBO Go on the move), but it’s illustrative of what will likely be the number one issue of 2017 for streaming video: monetization. And when we say monetization, we don’t just mean attracting revenue – we mean doing so profitably.

You can logically start at the top: what’s the story on Netflix? The last quarter reported (Q3 2016) shows revenue of $2.3B, net income of $51M, and a negative cash flow of $461M. Now let’s not beat them up – they have a solid record of being mildly profitable (the numbers above imply a 2.2% net profit) – and there are signs that 2017 could very much be the year they start printing money. The thing is, though – Netflix is the 800lb gorilla in the space, consuming over a third of total Internet bandwidth during peak hours. If they are not cash flow positive (with, it has to be said, some high expectations for Q4 numbers), what is everyone else to do?

We see plenty of news about how many subscribers have, or have not, signed up for a service – but that is only one side of the balance sheet. Profit, after all, is made up of what is left after all the outgoing bills have been paid with the incoming revenue. So while subscriber growth is essential to ramping up the revenue side, there is a corresponding need to focus on the cost side of the ledger. There are a number of real challenges here:

  • The cost of content. Our friends over at StreamingMedia have an important piece about the cost of content, noting that even regular Pay TV companies can end up losing money on particularly expensive content. The simple reality is that, with so many renegade start-ups, the price for good content is being bid up to the point where it is hard, if not impossible, to make money. Netflix has content obligations into the future valued at $13B (heck, Westworld cost a reported $100M).
  • The inconsistency of the market. Think about it: Netflix, with its whole library, comes in at $8 to $12 a month; HBO Direct costs $15. The smallest package from SlingTV can be had for a cool $20, where PlayStation Vue weighs in at $40 – and offers lots of options to build up an account whose price can rival any cable bill. Acorn TV, which offers British content, is $5 a month, which makes it kind of expensive next to Netflix, but kind of cheap next to HBO. Consumers have no real way to compare and contrast models, which makes the economics of the market chaotic, complex, and confusing.
  • The dizzying delivery system. Getting video from origin to consumer is as complex an endeavor as the human race has ever chosen to undertake. One of the key enablers has been the CDNs, who distribute points of presence (POPs, also known as edges) around the world in an effort to bring content closer to consumers. CDNs, though, charge for each byte of information that flows through them, meaning that each new consumer creates new cost.

Content costs will, eventually, stabilize, as supply and demand meet, and the amount of investment money for start-ups slows a little and returns to a level at which content can be provided profitably. Similarly, market forces will eventually create an equilibrium, as consumers signal through their actions the acceptable cost for services. Each of these two elements, however, relies upon market activity that can be only partially influenced by any individual organization.

By contrast, the cost of delivery can be managed by any organization, by rationalizing the way in which traffic is assigned to delivery partners. Organizations around the world are even now building out Hybrid CDN architectures, in which some combination of CDNs and private clouds are used to route traffic across pathways that can deliver content at high quality, but at the lowest total cost. For instance, consumers in close proximity to the origin may be served by internally-managed caching servers; those in geographically-dispersed, but relatively high consumer concentration, areas may be served by servers controlled within a private cloud; and others in further-flung regions may be served by locally-focused CDNs. Using intelligent algorithms to ensure quality of experience (QoE) meets expectations, while selecting for the cheapest route that can do it, can drive overall delivery costs down by economically-meaningful levels.
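The tiered routing described above might be sketched like this (region names and tier labels are made up for illustration):

```python
# Sketch of a Hybrid CDN tiering decision: consumers near the origin hit
# internally-managed caches, high-density markets hit self-managed cloud
# PoPs, and everyone else is served by a locally-focused CDN.

def delivery_tier(region, origin_region):
    private_cloud_pops = {"us-west", "eu-west"}  # assumed high-density markets
    if region == origin_region:
        return "origin-cache"        # internally-managed caching servers
    if region in private_cloud_pops:
        return "private-cloud-pop"   # self-managed PoP within a private cloud
    return "regional-cdn"            # locally-focused CDN for further-flung regions

print(delivery_tier("us-east", origin_region="us-east"))   # origin-cache
print(delivery_tier("eu-west", origin_region="us-east"))   # private-cloud-pop
print(delivery_tier("ap-south", origin_region="us-east"))  # regional-cdn
```

In practice, a QoE check would sit in front of this table, falling through to the next tier whenever the cheapest one cannot meet the quality bar.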

Ultimately, profitability in the streaming video space will be determined by the ability of service providers to attract and retain audiences, and their ability to control whatever costs they can. Starting with delivery costs makes the most sense, as it can be undertaken today, and without the cooperation of other market participants.

Want to know more about monetization trends in the OTT space? Join us at Digital Entertainment World on February 1 and 2 and visit us at our booth. And don’t miss our VP of Marketing, Rob Malnati, as he discusses this very topic with other experts from companies like OWNZONES, ITV Alliance, and Plex at 3:30pm on February 1st.

Much Like The Rain Across America, Video Is Streaming Everywhere!


Those outside the US – and not addicted to weather feeds – may feel a certain schadenfreude in learning that in the first week of 2017 fully 49 out of 50 states had snow. And that California is, even now, being drenched by something called an ‘atmospheric river’, which, based on the pictures you can dig up almost anywhere, is exactly what it sounds like.

Thank goodness for streaming, or Over the Top (OTT), video, then, which entertains all of us as we huddle inside waiting for Spring to appear.

And yet, perhaps these streaming services are under attack in ways we’ve not noticed. According to CIO, sales taxes on streaming services are on their way (who knew Philadelphia already charged one?). On the other hand, is it that surprising? Netflix, Hulu and Amazon Prime killed off the neighborhood Blockbuster, and that tax revenue has to be replaced by something – and given the speed at which streaming is growing (a 22.6% increase in subscription revenue in 2016), that’s a tempting little nest egg for any self-respecting taxman. In fact, 2016 was the first year in which streaming revenues exceeded revenues for physical media like DVDs.

One of the biggest stories for 2017 is likely to be the growth (or otherwise) of Internet-only streaming TV services. Kicked off by Dish with Sling TV, and by Sony with PlayStation Vue, we’re going to be keeping an eye on AT&T’s DirecTV Now, which is apparently keeping its low price of $35 for the time being. Nobody in the industry is really willing to place a bet on where this will go (HBO Go seems to be stuck at a million subscribers, which is nothing to sniff at, but growth is proving tricky) – but if it proves popular, all bets are off as to how future investments will be made on proprietary versus Internet delivery.

The biggest challenge for these new (and newly taxable!) services, of course, will really kick in when they become as ubiquitous as today’s rather more popular cable or satellite subscription. Because TV providers – whether the company that brings signal to your house, or the one that creates the channels you like to watch – are surprisingly robust. Ask yourself when the last time was that your favorite channel just plain stopped playing; or when your cable service stopped working (and no, you can’t count that time the electricity went out because you were undergoing an atmospheric river).

Now ask yourself how those streaming services are doing.  Pop over to the Cedexis CDN and Cloud Performance reports, then see how CDNs are doing – you’ll notice that, while you can get close to 99.9% availability by combining a handful of them and hooking them together with Openmix, it’s near-impossible for any single provider to reach 99%. Why? Because there are so many more moving parts in the Internet than there are in a closed, proprietary cable network. The fantastic news is that we can clearly see here that, working together, CDNs can put together the sort of results that are going to be necessary in order to make streaming a credible challenger to the status quo.

And perhaps that’s the news of the month: working together. At Cedexis, we’re working with clients all around the world to create the ideal delivery networks, from fortifying their origins, to implementing Varnish caching servers, to structuring robust Multi-CDN architectures – then applying real user measurements (RUM) and advanced global traffic management algorithms to make sure that consumers get a great experience. If there are two things we’ll need to see this year to turbo-charge growth in the OTT space, they are (1) collaboration, to bring about (2) broadcast-quality delivery online, despite the Internet’s notoriously chaotic weather patterns.

Have questions about delivering broadcast-quality video online? Don’t miss our webinar, with Level 3, this Thursday, January 12th, at 3pm GMT, and 11am PST

What Can Metrics Tell Us About Internet Video Delivery?

Over the last year or so, we’ve been working with some innovative streaming video leaders to collect and analyze the Quality of Experience (QoE) their consumers have been receiving. Using the results of several billion streams, we can start to see some fascinating trends emerge.

This data was collected through an updated (and still free!) Radar Community tag, which gathered video-specific QoE metrics from HTML5 player elements across 10 video service providers in Q4 of 2016, serving both live and video-on-demand (VOD) assets to audiences all around the world.

Let’s start with a thoroughly unsurprising result: higher throughput is distinctly correlated with higher bitrates:


That said, we can also say that the return for getting from below 10K kbps to above that line is significantly greater than getting from below 30K to above. Importantly, we can also see that the largest clusters of chunks occur below and around 10K, so focusing on improvement here will have the most significant impact on customer viewing.

We see a not-dissimilar result when we compare throughput with video start failures (VSF). More throughput is very highly correlated with low video start failures:


Once again, getting above 10K kbps brings the greatest benefit, dropping VSF from a peak of 9% to a more manageable 4%. Doubling the throughput roughly halves the VSF, though the benefits are more modest as speeds exceed 30K.
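The sort of bucketed analysis behind these observations might look like this (the sample data is invented, chosen only to mirror the trend described above):

```python
# Group (throughput, failed) samples into throughput buckets and compute
# the video-start-failure percentage per bucket.

def vsf_by_bucket(samples, edges):
    """edges are bucket lower bounds in kbps; returns edge -> VSF percent."""
    buckets = {e: [0, 0] for e in edges}  # edge -> [failures, total]
    for throughput_kbps, failed in samples:
        for edge in sorted(edges, reverse=True):
            if throughput_kbps >= edge:
                buckets[edge][0] += failed
                buckets[edge][1] += 1
                break
    return {e: 100 * f / t for e, (f, t) in buckets.items() if t}

# (throughput_kbps, video_start_failed) pairs - illustrative only.
samples = [
    (5000, 1), (8000, 1), (9000, 0),     # below 10K: highest failure rate
    (12000, 0), (15000, 1), (20000, 0),  # 10K-30K: improved
    (35000, 0), (40000, 0),              # above 30K: best
]

print(vsf_by_bucket(samples, edges=[0, 10000, 30000]))
```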

Less obvious is the degree to which using multiple CDNs can measurably impact the QoE of users. Take a look at the following graph, which compares the Latency of two CDNs across a 7-day period:


CDN1 (in red) shows a very consistent series of results, with only a couple of spikes that really catch the eye. By contrast, CDN2 (in green) shows way more spikes, a couple of which are quite striking, and a clear pattern of higher latency. Based on this very high level view, one might conclude that the incremental benefit of distributing traffic across the two providers would be relatively low. However, look what happens when we double-click and look at a single day:


From midnight to around 5am, CDN2 is by far the superior option – and, tantalizingly, appears to become so again right around 11pm. This might be the perfect example of a situation in which some time-based traffic distribution could deliver QoE improvements. And, assuming the CDNs bear different cost structures, there may very well be an opportunity here to arbitrage some costs and improve margins.  Finally, let’s dig into what happens during a single, rather troublesome hour:


Note that for this particular hour, CDN2 is outperforming CDN1 for around about 50 minutes, meaning that from a pure QoE perspective, we would probably prefer traffic to be sent via CDN2 than CDN1. This is something that would be effectively impossible to spot at the 7-day level, but by digging in deeply, it becomes clear that distributing our traffic across these two CDNs would result in detectable differences for users.
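That drill-down can be mimicked with a simple per-minute comparison; the latency series below is synthetic, shaped to match the hour described above:

```python
# Given per-minute latency samples for two CDNs, count the minutes in
# which each one is the better (lower-latency) choice.

def preferred_minutes(cdn1_latency, cdn2_latency):
    wins = {"cdn1": 0, "cdn2": 0}
    for a, b in zip(cdn1_latency, cdn2_latency):
        wins["cdn1" if a <= b else "cdn2"] += 1
    return wins

# Synthetic hour: CDN2 beats CDN1 for the first 50 minutes, then degrades.
cdn1 = [60] * 60
cdn2 = [45] * 50 + [80] * 10

print(preferred_minutes(cdn1, cdn2))  # {'cdn1': 10, 'cdn2': 50}
```

A GSLB making this comparison in real time would simply re-route minute by minute, rather than tallying wins after the fact.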

And what would that bring us? Using one more graph, we can see the relationship between latency and video start time (VST):


Unsurprisingly, lower latency results in lower VST – which, you can be sure, will in turn contribute to lower VSF. Or, in more direct terms, higher latency will mean fewer people consuming video, and therefore seeing fewer ads, or becoming increasingly less likely to renew a subscription.

Real User Measurements (RUM) that are tracked through the Cedexis Radar Community provide a powerful set of signposts for how to deliver traffic most effectively and efficiently through the Internet. Adding video-specific metrics helps ensure that the right decisions are being made for a sparkling video experience.

To find out more about adding video metrics for free to your Radar account, drop us a line at <>.

O’Reilly Media & Cedexis Present: Free RUM eBook

O’Reilly Media is the world’s leading advocate for Web Technology and Web Performance education and conversation, through their fantastic Velocity, Software Architecture and OSCON shows and communities. We are excited to announce that when O’Reilly went looking for an expert to author a book about the hot topic of Real User Measurement (RUM), they turned to our very own Evangelist & Strategist, Pete Mastin.

Download your free copy today.

Pete has been around the web performance, CDN and hosting business for some time now, including product and technology roles at InterNAP CDN and early Internet video pioneer Multicast Media. He brings a wealth of knowledge and his easy-talking style to the topic in a way that educates and entertains.

The book is an easy and complete read, explaining the value and appropriate uses of RUM metrics as they relate to evaluating and optimizing Web sites and applications. Importantly, Pete spends time comparing and contrasting RUM with other forms of performance monitoring, so readers know when best to use synthetic, passive or active monitoring solutions.



Cedexis Portal Update: New Menus and Reports

As part of our ongoing efforts to surface powerful data to help our customers discover, analyze and improve the performance and availability of their digital assets, we’ve made an update to our award-winning Cedexis Portal to consolidate our powerful analytics reporting under our Impact product umbrella.

Our Impact menu now exposes Navigation Timing Data (previously called “Page Load Time” in the Portal), our brand new Impact Resource Timing reports (currently in Beta), and our Impact Business Analytics data, which provides incredible details of how site performance impacts business metrics and KPIs.

If you aren’t currently collecting data for any of these reports, we now provide a “sample report” showing what the report looks like, along with a description of the report features. The additional reporting data generally comes from advanced features within the Radar tag. Check it out, and contact us if you have any questions or would like to see more about these powerful features.

For those customers who have been using the Cedexis Portal already, the previous menu “Page Load Time” can now be found under the Impact “Navigation Timing Data” menu as seen below:


If you are new to Cedexis, and haven’t seen the powerful analytics available in our Portal, or how easy it is to configure our Openmix solution to deliver optimal RUM-based, global load balancing across CDNs, Clouds or your data centers, please sign up for a free Portal account now and dive in!

Alerting from the last mile with RUM


Alerts are a core component of an operations team (or ‘DevOps’ team, if we want to be fashionable). The ability to provide precise, immediate issue notification and to enable forensic investigation is a key ingredient in resolving things in a timely manner, and those alerts must reflect what the end user is experiencing as closely as possible. Otherwise, they lead to false positives or missed real outages. This is one of the areas in which Cedexis RUM (Real User Monitoring) distinguishes itself.

Understanding the data down to the minute level for historical trending and comparison analysis is crucial. Once an issue is discovered, the granularity of the data is key to understanding what is happening (or what happened). Likewise, data retention is important to understand what happened in previous time periods (this time last month, for instance).

The new release of Cedexis Alerts provides intelligent, configurable, RUM (Real User Monitoring) performance alerts on any content delivery network (CDN), public or private cloud or data center and the ability to delve into the issue with great precision.


Let’s walk through an example setup of Cedexis Alerts to give you an idea of the power of the RUM alerting platform.

Setting up an alert for a cloud

The first thing to do is set up a platform. This is easily done: click on the ‘Platform’ link in the right-hand menu.


Simply name the platform whatever you want, then select one of the public platforms (we monitor all major CDNs and clouds) to set up a private name to which you can refer in your alerts. You can accept all the other defaults for now.


The next step is to actually set up the alert on the AWS cloud.

This is achieved by clicking on the Alerts menu item on the left column and clicking on the “+” sign in the upper right to add a new alert.

(BTW – If you don’t see the “Alerts” menu option in your account, contact your Customer Success Manager to get set up).


You can name the alert anything you want (mine is named “Tell me when AWS is having issues”). You can select RUM (Radar) or synthetic measurements (Sonar). For now, select Radar. Select the platform we just named (“AWS East” in this case).

That last item above is super interesting. By selecting the ‘peers’ of AWS East (such as SoftLayer, Rackspace, Azure, or any of the many other clouds we monitor), you can get a report on how those clouds were performing during any outage or performance degradation AWS may have experienced. For now, we will leave it blank, but expect to hear more about it in future posts.

Next, we need to limit the scope of our alerts to the geography and networks we care about. You can select the entire world and all networks, but your alerts will be very noisy. It is much better to get precise feedback from the geographies and networks that matter to your users. In this example, I chose the US market and nine networks I really care about: Comcast, AT&T, Verizon, Charter, Cox, CenturyLink, Cablevision, and two Time Warner networks.


My next set of configurations for this alert is very important. Basically: what is the KPI I want to alert on? The choices are Availability, Throughput, or Response Time. For this example, I’ll choose Response Time and set the threshold to 45 milliseconds. This simply means that if measurements from any of the selected networks (within the US) exceed a 45-millisecond response time, an alert will be triggered.
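The evaluation just described can be sketched in a few lines: filter RUM measurements down to the configured geography and networks, then compare the chosen KPI against the threshold. This is purely illustrative pseudologic, not the Cedexis API; all the names and data shapes are hypothetical.

```python
# Hypothetical sketch of the alert scope + threshold check described above.
# These names and the measurement shape are illustrative assumptions,
# not the actual Cedexis alerting internals.

THRESHOLD_MS = 45          # fire if response time exceeds 45 ms
SCOPE_COUNTRY = "US"
SCOPE_NETWORKS = {"Comcast", "AT&T", "Verizon", "Charter", "Cox"}

def should_alert(measurements):
    """Return the in-scope networks whose response time breaches the threshold."""
    breaching = set()
    for m in measurements:
        if m["country"] != SCOPE_COUNTRY or m["network"] not in SCOPE_NETWORKS:
            continue  # outside the alert's configured geography/network scope
        if m["response_time_ms"] > THRESHOLD_MS:
            breaching.add(m["network"])
    return breaching

sample = [
    {"country": "US", "network": "Comcast", "response_time_ms": 38},
    {"country": "US", "network": "Verizon", "response_time_ms": 52},
    {"country": "DE", "network": "Telekom", "response_time_ms": 90},  # out of scope
]
print(should_alert(sample))  # → {'Verizon'}
```

Note how the German measurement is ignored entirely: scoping first keeps the alert quiet for traffic you don’t care about, which is exactly why a tight geography/network selection beats alerting on the whole world.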


The last items to complete are the frequency with which you want to receive alerts and the contact information of the person receiving them.


Now your alert is complete. Let’s look at some reports!

Alert Reporting

Now that we have an alert, let’s see what it looks like once triggered.

By clicking on the alert you created, you can see the details of that alert.


By clicking on View Report, you can see the alerts that have been generated from this alert definition.


As you can see, in just the short time I have been writing this post, six alerts have fired from six of the networks we specified.

CDN Alerts

Another example is a performance alert on a CDN. CDNs perform better or worse depending on where a user is coming from and what network they are on, so Cedexis RUM is a perfect fit for understanding the user experience on various CDNs. In this next screenshot, we have a CDN having a bad performance moment. In this case, the alert was set to fire if response time rose above 150 ms in the US. As you can see, it went a good bit higher and stayed there for 16 minutes.
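With minute-level data, a sustained breach like the one above is easy to quantify: scan the series for the longest consecutive run above the threshold. A minimal sketch, assuming a simple list of per-minute response times (not the Cedexis reporting API):

```python
# Illustrative only: measure how long a metric stayed above an alert
# threshold, given minute-by-minute samples. Names are hypothetical.

def longest_breach_minutes(series_ms, threshold=150):
    """Length in minutes of the longest consecutive run above the threshold."""
    longest = current = 0
    for value in series_ms:
        current = current + 1 if value > threshold else 0  # extend or reset the run
        longest = max(longest, current)
    return longest

# 16 consecutive minutes above 150 ms, bracketed by healthy readings
series = [120, 130] + [180] * 16 + [140]
print(longest_breach_minutes(series))  # → 16
```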


So, now that you have been alerted, what do you do?

You know there is an issue with your cloud or CDN. You know that your users are suffering. What can you do? This is where having greater visibility into the network is key. Fortunately for you, Cedexis has just increased BOTH its data retention and its data granularity significantly!

From a data granularity perspective, you can now drop to the one-minute level for a full 48 hours, and down to the one-hour level for an entire month. This is huge. With this level of historical visibility, you can drill into a problem and determine EXACTLY which network and geography are experiencing the issue.
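The retention rule above amounts to a simple lookup: the finest resolution available depends on how far back you want to look. A hedged sketch of that rule (the function name and return values are illustrative assumptions, not product API):

```python
# Illustrative sketch of the granularity/retention rule described above:
# one-minute resolution for lookbacks up to 48 hours, one-hour resolution
# for lookbacks up to a month. Purely hypothetical helper, not Cedexis code.

from datetime import timedelta

def best_resolution(lookback: timedelta) -> str:
    """Pick the finest data resolution available for a given lookback window."""
    if lookback <= timedelta(hours=48):
        return "1 minute"
    if lookback <= timedelta(days=30):
        return "1 hour"
    return "coarser than 1 hour"

print(best_resolution(timedelta(hours=6)))   # → 1 minute
print(best_resolution(timedelta(days=14)))   # → 1 hour
```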

For more information on our alerting, sign up for a free Radar account and ask your sales representative for a free trial of Cedexis Alerts. Every Radar member gets to take advantage of the increased granularity and data retention! Learn more about Radar here.