Microsoft retiring support for all Internet Explorer browsers prior to Version 11: Does this pose security issues in places like China?

On January 12, 2016, Microsoft ended support for all versions of Internet Explorer except IE 11. What does this mean? It means that security updates are no longer distributed for earlier browser versions. In the announcement, which can be found here, Microsoft states clearly:

Security updates patch vulnerabilities that may be exploited by malware, helping to keep users and their data safer. Regular security updates help protect computers from malicious attacks, so upgrading and staying current is important.

So, how big of a browser security problem is this? It turns out it is a pretty big one. Here at Cedexis, we are fortunate to have the Radar Community, with over 900 enterprise contributors. This community generates billions of Real User Measurements (RUM) every day from every country in the world and every browser type and computer type. So, we see what people are using out there in the world. The results are not always pretty.

When we take 24 hours of measurements and limit it to just Microsoft IE, here is what we see for the entire world.

IE-Versions-being-used-in-Entire-World1

Whoa! 48% of the IE world is pre-Internet Explorer 11! 1% of the world that is using IE is still on IE 7, for goodness’ sake!

To make the point clearer: malware that infects your computer is often used to launch cyber attacks. As of January 12, 2016, Microsoft no longer provides security updates for 48% of the world’s Internet Explorer instances. This means the likelihood of these Internet Explorer instances becoming infected is almost 100% – creating an army of infected machines around the world.
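For a sense of how such a breakdown can be produced from raw RUM data, here is a minimal sketch that classifies Internet Explorer versions from User-Agent strings; the sample strings are illustrative, not actual Radar data:

```python
import re
from collections import Counter

def ie_version(user_agent):
    """Return the IE major version from a User-Agent string, or None."""
    # IE 11 dropped the "MSIE" token; it identifies via Trident/7.0 and rv:11.
    if re.search(r"Trident/7\.0.*rv:11", user_agent):
        return 11
    match = re.search(r"MSIE (\d+)", user_agent)
    return int(match.group(1)) if match else None

# Illustrative sample; Radar works with billions of measurements.
user_agents = [
    "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)",
    "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)",
]

versions = Counter(v for ua in user_agents if (v := ie_version(ua)) is not None)
unsupported = sum(n for v, n in versions.items() if v < 11)
print(f"pre-IE11 share: {unsupported / sum(versions.values()):.0%}")  # 75% here
```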

So, where do we typically see these cyber attacks coming from? This Government Technology Report identified these countries as the biggest offenders:

Top 10 Hacking Countries in the World

Let’s take a look at the breakdown of Internet Explorer in China, shall we?

IE-Versions-being-used-In-China

Shiver me browsers! 63% of China’s Internet Explorer instances are no longer receiving security updates. Over 10% of all the IE in China is on Version 7 – a full four versions back!

Let’s contrast that with the US for a sanity check (and because the US is #2 on the list of places where cyber attacks originate).

IE-Versions-being-used-In-USA

While the US fares somewhat better than China, they are certainly not in great shape either. With only 42% of the IE instances in the US on Version 11, that leaves the majority of IE exposed.

What about Russia? How does the Northern Bear do in keeping its citizens up to date on their browsers?

IE-Versions-being-used-In-Russia

In some ways, Russia is much worse off than China and in some ways much better off. It’s worse in the obvious way – with only 31% of the IE in Russia on Version 11, that leaves roughly 69% on previous versions. Russia is better off because they have 63% on Version 10 (the directly previous version). This is a much easier upgrade path than moving from Version 7 to Version 11.

For comparison, let’s see how Poland does.

IE-Versions-being-used-In-Poland

As you can see, Poland is doing a fantastic job of browser version maintenance compared to the three superpowers. With over 80% of its IE users on the current supported version of Internet Explorer, it may be no accident that Poland is not on the top-ten list of countries where attacks originate.

As a last point, we will look under the covers of a troubled country that is in the news today.

IE-Versions-being-used-in-Syrian-Arab-Republic

While not as version compliant as the admirable Poles, the Syrians have a majority of their IE on the current version, and this is certainly better than China, the US and Russia.

How important is this security risk? How many people still use Internet Explorer anyway? This is certainly a valid question. Depending on who you believe, it is anywhere from 10% to 24% of total browser usage.

Screen-Shot-2016-01-15-at-10.50.02-AM

The Cedexis Radar Community data corresponds very closely to these numbers, and that puts IE in the top four (along with Firefox, Safari and Chrome). While usage of Internet Explorer has dropped from roughly 40% of the browser market in 2009 to whatever it is today, it is important to understand that this still represents many millions of copies of this potentially infectable software out in the wild.

Learn how to identify and solve issues like these for your company – such as security risks from outdated Internet Explorer browsers – by joining the free Radar Community, accessing Real User Measurements and using our free traffic analysis tools to solve real-world web traffic problems.

Improving Website Performance using Global Traffic Management Requires Sufficient Real User Measurements

At Cedexis, we often talk about our Radar community and the vast number of Real User Measurements (RUM) we take daily. Billions every day. Is that enough? Too many? How many measurements are sufficient? These are valid questions. As with many questions, the answer is “it depends”. It depends on what you are doing with the RUM measurements. Many companies that deploy RUM use it to analyze a website’s performance using Navigation Timing and Resource Timing. Cedexis does this, too, with its Impact product. This type of analysis may not require billions of measurements a day.
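This kind of Navigation Timing analysis boils down to simple deltas between browser-reported timestamps. Here is a minimal sketch, assuming a beacon has already collected the raw W3C Navigation Timing fields (the timestamps below are invented for illustration):

```python
# Raw W3C Navigation Timing timestamps (ms since epoch), as a RUM beacon
# might report them; the values are invented for illustration.
timing = {
    "navigationStart": 1_446_000_000_000,
    "responseStart": 1_446_000_000_180,             # first byte arrives
    "domContentLoadedEventEnd": 1_446_000_000_950,
    "loadEventEnd": 1_446_000_002_100,              # page fully loaded
}

start = timing["navigationStart"]
metrics = {
    "ttfb_ms": timing["responseStart"] - start,
    "dom_content_loaded_ms": timing["domContentLoadedEventEnd"] - start,
    "page_load_ms": timing["loadEventEnd"] - start,
}
print(metrics)
# {'ttfb_ms': 180, 'dom_content_loaded_ms': 950, 'page_load_ms': 2100}
```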

However, making RUM data actionable by using it for Global Traffic Management is another matter. For that, it is incredibly important to have data from as many of the networks that make up the Internet as possible. If the objective is a strong representative sample of the “last mile”, it turns out you need a pretty large number. Let’s take a closer look at how many.

The Internet is a network of networks. Around 51,000 distinct networks make up what we call the Internet today. These networks are named, or at least numbered, by a designator called an ASN, or Autonomous System Number. Each ASN really represents a set of unified routing policies. As our friend Wikipedia states:

“Within the Internet, an autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain that presents a common, clearly defined routing policy to the Internet.”

Every ISP has one or more ASNs – usually more. There were 51,468 ASNs in the world as of August 2015. How does that look when you distribute whatever number of RUM measurements you can obtain across them? A perfect monitoring solution should tell you, for each network, whether your users are experiencing something bad – for instance, high latency on the network they are using.

If you were able to spread the measurements out to cover each network evenly (which you cannot), you would get something like the graph below.

Screen-Shot-2015-11-05-at-5.02.07-AM

The left-hand column shows the number of RUM measurements you get per day, and the labels on the bars show the number of measurements per network you can expect.

So, if you distributed your RUM measurements over all the networks in the world and you only had 100,000 page visits a day, you would get two measurements per network per day. This is abysmal from a monitoring perspective.

With so many ASNs, it’s easy to see why synthetic measurements are hopeless. Even if you had 200 synthetic measurement locations and three networks per location, that would give you only 600 ASN/geo pairings. Cedexis dynamically monitors over seven million ASN/geo pairings every day.
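The arithmetic behind both comparisons is simple enough to sketch; the daily volumes below are examples, with the ASN count from August 2015:

```python
ASN_COUNT = 51_468  # ASNs worldwide as of August 2015

# If RUM measurements could be spread evenly over every network
# (which they cannot), daily coverage per network would be:
for daily_rum in (100_000, 1_000_000, 1_000_000_000):
    print(f"{daily_rum:>13,} RUM/day -> {daily_rum / ASN_COUNT:,.1f} per network")

# Synthetic monitoring coverage, for contrast:
locations, networks_per_location = 200, 3
print(f"synthetic ASN/geo pairings: {locations * networks_per_location}")
```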

One issue, however, is that RUM measurements are not distributed equally. We have been assuming that, given your 51k networks, you can spread the measurements over them evenly, but that’s not the way RUM works. Rather, RUM takes measurements from wherever traffic actually comes from. It turns out that any given site sees a much more limited set of the ASNs we have been discussing. To understand this better, let’s look at a real example.

Assume you have a site that generates about 134 million page views a day. The data below is from a Cedexis client, collected over a 24-hour period in October 2015.

134 million is a pretty good number, and you’re a smart technologist who implemented your own RUM tag – you are tracking information about your users so you can improve the site. You also use your RUM to monitor your site for availability. Your site has significant user bases in Europe and North and South America, so you’re only tracking RUM data from those locations for now. So, what is the spread of where your measurements come from?

Of the roughly 51k ASNs in the world, your site can expect measurements from approximately 1,800 different networks on any given day (specifically 1,810 on this day for this site).

ISP Real User Monitoring

In the diagram above, you see a breakdown of the ISPs and ASNs that participated in the monitoring on this day – the size of each circle shows the number of measurements per minute. At the high end are Comcast and Orange S.A., with 4,457 and 6,377 measurements per minute, respectively. The 108 networks with the fewest measurements each garnered less than one measurement every two minutes. Again, that’s with 134 million page views a day.

The disparity between the top measurement-producing networks and the bottom is very high. As you can see in the table below, almost 30% of your measurements come from only 10 networks, while the bottom 1,000 networks produce 2% of the measurements.

Screen-Shot-2015-11-05-at-5.15.03-AM
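The concentration in that table can be computed directly from per-ASN counts. A sketch, using a synthetic, heavily skewed (Zipf-like) distribution as a stand-in for the real per-ASN counts shown above:

```python
# Synthetic Zipf-like per-ASN measurement counts for ~1,810 networks;
# a stand-in for the real per-ASN Radar counts shown in the table.
counts = sorted((1_000_000 // rank for rank in range(1, 1811)), reverse=True)

total = sum(counts)
print(f"top 10 networks:       {sum(counts[:10]) / total:.0%} of measurements")
print(f"bottom 1,000 networks: {sum(counts[-1000:]) / total:.0%} of measurements")
```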

What is the moral here? RUM obtains measurements from the networks where the people are, and far fewer from networks where there are fewer folks. And every site has a different demographic, meaning the mix of networks users come from for Site A is not necessarily the mix for Site B. Any one site that deploys a RUM tag will not get enough measurements from enough networks to make an intelligent decision about how to route traffic. It simply will not have enough data.

This is the value of the Cedexis Radar community. By taking measurements from many sites (over 800 and rising), the Cedexis Radar community combines these measurements into a complete map of how ALL the ASN/geo pairings are performing – over seven million ASN/geo pairings a day – and our clients can use this shared data for route optimization with Cedexis Openmix. This is what we mean when we say “Making the Internet Better for Everyone, By Everyone”. The community measurements allow every individual site (which may only see 1,800 of the 51k networks) to effectively see them all!

The Radar community is free and open to anyone. If you are not already a member, we urge you to sign up for a free account and see the most complete view of the Internet available today. While you are at it, check out Radar Live, our live view of Internet traffic outages in real time.


Cedexis Openmix, an alternative to Route 53 load balancing

Congratulations on taking your company to the cloud! Now, are you ready for it to fail?

Broken-Road-sign-

You see, in the cloud, you have to build for failure.

At the very least, geodiversity is required to ensure high availability. That means using your cloud’s other geographical regions to protect yourself from availability issues.

When you deploy in a second or third cloud region, how do you direct traffic to the correct region? How do you make the internet users going to your site or mobile app go to a different region when your East Coast cloud is out of commission?

There are a few tools out there to do just that. One of them is owned and operated by one of the major cloud providers. It’s called Route 53.

Route 53 started out as an authoritative DNS service for AWS users. Amazon has more recently introduced features that let you push traffic between different AWS regions. Today, we want to take a look at how that product stacks up.

Route 53 lets users set up conditional routing trees that establish rules for failing over to your other cloud instances in different regions. These conditional routing trees also depend on health checks that you establish. Sound like a lot of work? It is.
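To make the effort concrete, here is roughly what a single primary/secondary failover pair looks like when configured through boto3. This is a sketch only – the hosted zone ID, health check ID, record name and IPs are placeholders:

```python
import boto3

route53 = boto3.client("route53")

def failover_change(identifier, role, ip, health_check_id=None):
    """Build one half of a Route 53 failover pair (role: 'PRIMARY' or 'SECONDARY')."""
    record = {
        "Name": "www.example.com.",          # placeholder record name
        "Type": "A",
        "SetIdentifier": identifier,
        "Failover": role,
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:                      # the health check drives failover
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="Z_PLACEHOLDER",            # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        failover_change("east", "PRIMARY", "203.0.113.10", "hc-placeholder-id"),
        failover_change("west", "SECONDARY", "203.0.113.20"),
    ]},
)
```

Multiply this by every record, region and health check you operate, and the amount of work becomes clear.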

Cedexis offers a smarter alternative to Route 53 load balancing:

  • The Cedexis Radar community collects billions of latency and availability measurements of every AWS instance every day.
  • Radar is real-time data. Route 53 is a daily score. So, if your cloud is having an issue, your users will get routed elsewhere tomorrow. Not helpful.
  • Radar provides performance data on non-AWS endpoints, public and private.
  • Cedexis has off-the-shelf APM integrations (New Relic, AppDynamics, CloudWatch, Catchpoint and more).
  • Openmix is application defined, allowing custom algorithms (see the sketch after this list). Route 53 allows for nested policies, which are complex and restrictive.
  • Openmix can be queried via DNS and HTTP (think video players, games, mobile apps, etc.).
  • Cedexis is separate from primary DNS. You can keep using AWS, or any other DNS solution, for authoritative DNS.
  • Only Cedexis has the capabilities that allow you to:
    • Deliver performance-based load balancing between AWS and your private data center
    • Optimize video stream delivery from your player or CMS
    • Automate traffic management based on APM data (or any other top synthetic monitoring tool, for that matter)
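Openmix applications are written against Openmix’s own scripting API, so the following is only an illustration of what “application defined” means: a hypothetical decision function over per-platform availability and latency, with invented names, thresholds and measurements:

```python
def choose_platform(candidates, min_availability=90.0):
    """Pick the lowest-latency platform among those healthy enough to serve.

    `candidates` maps platform name -> {"availability": %, "latency_ms": ms},
    the way community measurements might summarize the last few minutes.
    """
    healthy = {name: m for name, m in candidates.items()
               if m["availability"] >= min_availability}
    pool = healthy or candidates  # if everything looks down, degrade gracefully
    return min(pool, key=lambda name: pool[name]["latency_ms"])

# Invented measurements: the primary region is impaired, so traffic shifts.
print(choose_platform({
    "aws-us-east-1": {"availability": 72.0, "latency_ms": 41},
    "aws-us-west-2": {"availability": 99.8, "latency_ms": 88},
    "private-dc":    {"availability": 99.9, "latency_ms": 95},
}))  # -> aws-us-west-2
```

Because the algorithm is just code, you can fold in cost, capacity or APM data however you like – exactly the flexibility that nested policies make hard.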

route53_v_om

Join us for a free webinar on Wednesday, October 28 at 10:00am PT, where we will take a deeper dive on the benefits of Openmix over Route 53. Register now!

To learn more about how we can help companies with GSLB, read our Tango Case Study.

AWS re:Invent: What we learned about Multi-Cloud vs. Multi-Region

Cedexis had a great AWS re:Invent. Thank you to everyone who paid us a visit.

Amazon is very busy and continues to provide leadership in the cloud space, but one thing we noticed was what was missing from all the announcements.

Multi-Cloud vs. Multi-Region deployments

As we spoke with customers and prospects at this important event, one thing became abundantly clear: while many of these customers are not ready to make the ‘multi-vendor’ leap, they are acutely interested in real-time active-active load balancing between AWS regions. This supports efforts to ensure 100% availability in the face of AWS availability zone outages, and to maintain performance in the face of peering congestion or outright interruptions between AWS and ISPs. Both of these things can (and do) happen, and the seasoned IT professional always plans for failure. Using a load balancer like Cedexis Openmix that can detect outages instantly (thanks to the billions of measurements a day provided by the Radar community) is a best practice. We expect this time next year to see more and more companies climbing their way up the Cloud Maturity Model. To learn more about how to make your cloud infrastructure indestructible, read our GSLB whitepaper.

Cloud-Maturity-Model-1


When Amazon falters: June 30th, 2015

Screen-Shot-2015-07-01-at-11.25.08-AM

What you see above are Real User Measurements taken from the US and Canada across most of the major networks, aimed at AWS East, West and Oregon. Each of the “strings” is a network or ISP. As you can see, there was a pretty significant “blip” that lasted about 45 minutes. A large number of major ISPs in North America dropped to roughly 40% availability or lower in their connectivity to AWS. That means about 6 out of 10 requests were failing. In many cases, there was effectively no connectivity.

In this second diagram, you can see that one of the networks went to 0% availability after the initial hit. This diagram shows just the US over the same time frame.

Screen-Shot-2015-07-01-at-11.46.35-AM

According to this Mashable article:

Netflix, Slack, Pinterest and thousands of other websites and services, appeared to suffer a widespread outage Tuesday

The reasons Amazon gave:

Between 5:25 PM and 6:07 PM PDT we experienced an Internet connectivity issue with a provider outside of our network which affected traffic from some end-user networks. The issue has been resolved and the service is operating normally.

The root cause of this issue was an external Internet service provider incorrectly accepting a set of routes for some AWS addresses from a third-party who inadvertently advertised these routes. Providers should normally reject these routes by policy, but in this case the routes were accepted and propagated to other ISPs affecting some end-user’s ability to access AWS resources. Once we identified the provider and third-party network, we took action to route traffic around this incorrect routing configuration. We have worked with this external Internet service provider to ensure that this does not reoccur.

This was not Amazon’s fault. They could not have done anything to avoid it. However, the enterprise can.

An appropriate tweet from @joshhinman:

Screen-Shot-2015-07-01-at-12.42.07-PM

What this illustrates so well is that a multi-cloud implementation built on a single vendor is a flawed strategy. All of AWS in North America was impacted, not just one instance. If you had simply spread your instances across multiple vendors (like many of our customers), your users would not have noticed this AWS outage at all. A multi-cloud, multi-vendor strategy is the best practice.

For more on best practices around using multiple clouds and multiple cloud vendors, read our GSLB whitepaper.

UPDATE:

Apparently, not only was there a significant BGP route leak behind this – there was also a fiber cable cut!

For more details on the BGP route leak, check out this blog post from our friends at ThousandEyes.

Good stuff!

South of the border: When is it faster to serve South America from Dallas?

Today, I am interested in exploring the performance impact of serving your North American customers from a South American cloud, or vice versa: serving your South American customers from a North American cloud. We will use AWS EC2 in Sao Paulo as our South American cloud and IBM/SoftLayer Dallas as our North American cloud. There was no particular reason I chose these two; they are just good examples of clouds that many of our customers use.

Restricting measurements to North America and South America, and analyzing our Radar community latency data over the last 48 hours, we get an interesting result (this is latency, so lower is better).

Screen-Shot-2015-07-01-at-11.25.08-AM

So, this already seems a little surprising. While the measurements of the two platforms from North America meet most people’s expectations, the ones from South America are a bit harder to understand. Softlayer Dallas, as measured from North America, is pretty fast at 87ms for the entire 48 hour period (on average) while AWS in Sao Paulo is pretty darn slow (as measured from North America) at 226ms for the same 48 hour period.

However, when we look at the measurements from South America, we see that while the latency from South America to the North American cloud is roughly the same as the latency from North America to the South American cloud, the latency from South America to its own South American cloud is roughly twice the latency from North America to its own North American cloud!

Let’s look at the 25th percentile vs. the 95th percentile from the two geographies to get some perspective.

The-Americas-and-2-Clouds-

As you can see, the latency at the 25th percentile in the two directions is roughly the same for the “local cloud” and the “remote cloud”, but at the 95th percentile they are vastly different. The measurements from South America to its local cloud (AWS Sao Paulo) are much worse at the 95th percentile than the North American measurements to its local cloud.

This means there are some really bad outliers somewhere between South America and AWS Sao Paulo.

Even at the 75th percentile, the latency from South America to Sao Paulo AWS is 166ms – a terrible time.
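Percentiles are what expose outliers like these. A quick sketch with synthetic latency samples (the long tail is exaggerated for illustration):

```python
import random
import statistics

random.seed(1)
# Synthetic samples: a healthy median plus a heavy tail of bad outliers.
samples = ([random.gauss(95, 15) for _ in range(900)]
           + [random.gauss(450, 80) for _ in range(100)])

for p in (25, 75, 95):
    # quantiles(..., n=100) returns the 1st..99th percentile cut points.
    print(f"p{p}: {statistics.quantiles(samples, n=100)[p - 1]:.0f} ms")
# The p25 looks fine; the p95 gives the outliers away.
```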

Some further analysis helps us see the problem. Let’s drop down to the country level to see what is going on (remember, lower is better).

Latency-to-2-Clouds-from-multiple-SA-Countries-Shaded

While Argentina, Brazil, Uruguay and Chile all have results that meet expectations – i.e., the lowest latency is to the cloud in Brazil – Colombia, Peru and Venezuela all see faster performance to Dallas than to Sao Paulo.

This means that users in (at least) Colombia, Peru and Venezuela would generally be better served by having their traffic routed to Dallas!
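A latency-based router makes this decision mechanically: per country, send traffic to whichever endpoint measures fastest. A sketch with illustrative per-country means shaped to match the pattern in the chart above (not the actual Radar figures):

```python
# Mean latency (ms) per country to each cloud; illustrative values only.
latency_ms = {
    "Brazil":    {"aws-sao-paulo": 60,  "softlayer-dallas": 140},
    "Argentina": {"aws-sao-paulo": 75,  "softlayer-dallas": 150},
    "Chile":     {"aws-sao-paulo": 85,  "softlayer-dallas": 145},
    "Colombia":  {"aws-sao-paulo": 170, "softlayer-dallas": 110},
    "Peru":      {"aws-sao-paulo": 180, "softlayer-dallas": 120},
    "Venezuela": {"aws-sao-paulo": 190, "softlayer-dallas": 105},
}

for country, clouds in latency_ms.items():
    best = min(clouds, key=clouds.get)
    print(f"{country:<10} -> {best} ({clouds[best]} ms)")
# Geo-routing would send all six countries to Sao Paulo; latency-based
# routing sends Colombia, Peru and Venezuela to Dallas instead.
```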

So much for geo-routing. Geo-based load balancing would have sent these users to Sao Paulo. We know from previous work that just routing traffic based on geography is not optimal. As one of my professors used to say, this is just a specific instance of the general theorem. If you care to read the previous work, you can find it here. If you already are a believer and you want to figure out what you can do about it, go read our “How to guide” on using Latency Based Traffic Management.

Hybrid Cloud – Adoption is imminent

A recent article by David Linthicum was on point. He states:

Hybrid clouds are desirable because they can deliver the best of both the private and public cloud worlds by letting you move workloads back and forth between the two platforms. You can also partition applications so that components can reside on both the public and private cloud.
– David Linthicum
http://www.cloudtp.com/2015/06/04/how-to-modernize-legacy-apps-for-hybrid-cloud/

Many companies that are adopting a straight cloud model are doing so to attain agility, scalability and cost models they can only get from shared cloud infrastructure.

Screen-Shot-2015-06-08-at-11.51.08-AM

Unfortunately, a wholesale move to the cloud can result in a loss of the visibility and control they need from bare metal and company-owned infrastructure.

This loss in visibility and control can be catastrophic to a business that relies on its online presence – and what business today does not?

Screen-Shot-2015-06-08-at-12.00.34-PM

But what about performance? How does it win or lose in the cloud? The answer is (as always, it seems) “it depends”. On the one hand, the cloud provides many more locations to distribute your application to, making it more performant for users near those locations. From that perspective, a multi-cloud strategy stands to be much more performant than most companies’ ability to scale their own data centers.

However, in a public cloud, companies have the issue of shared resources. This, of course, is part of the classic “noisy neighbor” problem and continues to cause many issues for many companies.

To understand the real pros and cons of the multi-cloud vs. hybrid-cloud models, we have developed the Hybrid-Cloud Maturity Model. This model helps companies understand the relative merits of moving their infrastructure around.

Hybrid-Cloud-Maturity-Model

By adopting a Latency-Based Global Traffic Management strategy, companies can have the best of all worlds when it comes to performance, visibility, scalability, cost and control.

99999

Another great quote from Mr. Linthicum:

With a hybrid cloud, enterprises can host the applications and the data on the platform that delivers the optimal mix of cost efficiency, security, and performance.
– David Linthicum
http://www.cloudtp.com/2015/06/04/how-to-modernize-legacy-apps-for-hybrid-cloud/

To see specifically how a Multi-Cloud / Hybrid-Cloud strategy can benefit a top social-mobile company, see how Tango uses Cedexis in this Case Study.

SSL – Not your granny's secure sockets anymore.

SSL has come a long way. Starting out as mostly a way to encrypt the payment page on e-commerce sites, its use has grown to the point where many sites use it exclusively (as opposed to raw HTTP) as the delivery channel. Google has been pushing it as a mechanism to ensure privacy and has even started using it as a ranking signal in its search engine scoring. This means that your site gets a slight boost in search engine scores if it uses SSL. If you want proof that Google supports the web’s move to SSL, check out this video (after you read my blog post, of course).

One of the downsides of SSL is that it can be slower; to offset this, many CDNs offer SSL acceleration from the edge. This is a great service and can really speed up an SSL-based site.

Here at Cedexis, we have seen enormous growth in SSL traffic. At this point, over 35% of the traffic we direct across all CDNs goes to the SSL maps of those CDNs. This is a big change from a few years ago, and the number is growing. What is less well understood is that CDNs delivering this traffic maintain different maps for SSL and non-SSL traffic. Historically, the far larger footprint was for straight HTTP (non-SSL) traffic. As SSL traffic has grown within their user bases, CDNs have expanded those maps; nevertheless, the SSL maps of any given CDN typically have less capacity and a smaller footprint than its HTTP maps.

Akamai, Limelight, Level 3 and Edgecast – the SSL challenge
As an example, let’s compare the SSL maps and non-SSL maps of four of the larger global CDNs – Akamai, Limelight, Level 3 and Edgecast. To avoid creating any angry CDN partners, I have renamed the four CDNs after Greek gods, so you won’t know which is which. It really does not matter for the point I am making here. (This data is all freely available on our public portal – so if you are curious about which ones perform best, go take a look!)

Let’s start with an aggregate view of latency across five continents: SSL traffic vs. non-SSL traffic. In other words, what is the mean latency across these four CDNs (over a 24-hour period) for SSL vs. non-SSL? (Because this is latency, lower numbers are better.)

SSL-vs-HTTP-per-Continent1
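The aggregation behind a chart like this is a simple group-by over the raw measurements. A sketch with a handful of invented rows standing in for the billions Radar collects:

```python
from collections import defaultdict

# Each row: (continent, protocol, latency_ms); values invented for illustration.
rows = [
    ("Europe", "ssl", 120), ("Europe", "http", 70),
    ("Europe", "ssl", 130), ("Europe", "http", 75),
    ("Asia",   "ssl", 180), ("Asia",   "http", 150),
]

totals = defaultdict(lambda: [0, 0])   # (continent, protocol) -> [sum, count]
for continent, protocol, latency in rows:
    totals[(continent, protocol)][0] += latency
    totals[(continent, protocol)][1] += 1

for (continent, protocol), (s, n) in sorted(totals.items()):
    print(f"{continent:<8} {protocol:<5} mean latency: {s / n:.1f} ms")
```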

So, in aggregate, you can see that from a latency perspective the SSL map is quite different from the HTTP map. But before making any broad generalizations, let’s see the data for each of the four CDNs.

Total-View-CDN-+-Continent

Viewed from this perspective, you can see that some of the CDNs show huge differences. For instance, looking at the performance of the “Prometheus” CDN in Europe, there is a dramatic performance cost to SSL delivery (if you were using only that CDN in Europe, for this 24-hour period). Likewise, in North America, “Prometheus” has a mean SSL latency of 108ms while its HTTP map clocks in at 68ms for the same period.

The “Zeus” CDN shares this pattern of HTTPS delivery being generally slower than HTTP. As you can see, in every continent “Zeus” is significantly slower for SSL.

On the other hand, two of the CDNs show a different pattern. “Apollo” in particular has lower latency on its SSL maps in every region but Africa. “Poseidon” does poorly in Asia (SSL around 90ms slower), but in other regions its SSL and HTTP latencies are much closer, with SSL actually faster in North America. These results are outstanding if you are on one of these two CDNs and running your site over SSL for Google’s sake.

But latency is not the whole picture. As whole sites and downloads are increasingly delivered over SSL, throughput matters for SSL delivery as well. Let’s take a look at how these four CDNs handle throughput over SSL.

First, let’s take a world view: same four CDNs, same 24-hour period. Which has the best throughput across its SSL map? (Because this is throughput, higher is better.)

World-Throughput-SSL-

As you can see, “Zeus” has the lowest throughput on its SSL map. Significantly lower. If you are using “Zeus” for a game download, you are under-serving your customers. Perhaps this reflects some outliers in remote areas of the world. Let’s look at just the US, shall we?

Throughput-North-America-SSL-2

As you can see, “Zeus” did not improve when we focus on just the US. “Poseidon”, however, did – suggesting that it had some tough outliers in remote locations.

Generally, SSL is being adopted as the delivery layer for the entire web. While Google is pushing this, they are not alone, and adoption seems brisk. As long as Google scores SSL traffic higher for search, architects and commerce companies will continue to evolve their sites in that direction. Knowing which CDNs will provide the best SSL performance is important. As you can see above, no one CDN is optimal in every major region. From a throughput perspective, “Prometheus” and “Apollo” traded the top spot a couple of times in the US within a 24-hour period.

From a latency perspective, looking at just the three regions of Asia, Europe and North America: “Apollo” had the lowest mean SSL latency in North America, “Poseidon” in Europe, and “Zeus” in Asia. Three regions, three different CDNs’ SSL maps.

3-Continent-View
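Picking the best map per region from data like this is the essence of latency-based multi-CDN routing. A sketch, with invented means shaped to match the pattern just described (only Prometheus’s 108ms figure comes from the text above):

```python
# Mean SSL latency (ms) per region per CDN; invented except where noted.
mean_ssl_latency_ms = {
    "North America": {"Apollo": 70, "Poseidon": 95, "Zeus": 112, "Prometheus": 108},
    "Europe":        {"Apollo": 90, "Poseidon": 65, "Zeus": 120, "Prometheus": 140},
    "Asia":          {"Apollo": 150, "Poseidon": 180, "Zeus": 95, "Prometheus": 160},
}

for region, cdns in mean_ssl_latency_ms.items():
    best = min(cdns, key=cdns.get)
    print(f"{region}: route SSL traffic to {best} ({cdns[best]} ms)")
# -> Apollo in North America, Poseidon in Europe, Zeus in Asia.
```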

If you drill down into these regions, you find even more variation. This is the important point of a multi-CDN strategy. For more on how to implement a latency-based traffic management solution (SSL or otherwise), check out our Multi-CDN solution brief.


Cedexis Snowpiercer: Getting to the top of Mont Blanc. Faster.

Committed to ensuring customers the best possible experience, Cedexis has decided to launch Snowpiercer (yes, like this one). The new product will help managers climb Mont Blanc, the tallest mountain in the Alps, faster on their holidays.

Nah, we’re just kidding. However, it’s not entirely a joke. A few weeks ago, we got in touch with Filippo, an Italian engineer with a mountain of photographs. Filippo was working on a crazy project called In2White: he wanted to let people navigate to the top of Mont Blanc via a zoomable gigapixel picture.

Filippo and his team spent two weeks, between October and November 2014, shooting photos from the top of the mountain. 70,000 pictures, with only the equipment they could carry in their backpacks. “It was a bit of a challenge,” says Filippo. “We had to wait for perfect weather conditions; even the wind could have been a problem. What’s more, we could not shoot longer than six hours a day, because at this altitude, morning and evening shadows can hide a lot of the relief.”

MontBlanc2

One thing is certain: with In2White, you will get to the top of Mont Blanc faster than Filippo did.

The other challenge reflects the breadth of Filippo’s project: for navigation of the final picture to be fluid, the photographers had to plan for a 50% overlap between pictures, so every shot had to capture 1° in height and 1.5° in width. The result is hard to imagine: because Photoshop cannot handle files bigger than 120 GB, Filippo and his team had to work with 14 different files.

Using technology similar to the one behind Google Maps’ Street View, they had to produce 8 million files, each a single small tile in the big picture. The total size of the gigapixel photo is more than 60 GB.
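Why does one picture become 8 million files? Zoomable viewers cut the image into a pyramid of fixed-size tiles, one set per zoom level, so the browser only ever fetches the squares in view. A back-of-the-envelope sketch – the resolution below is a hypothetical stand-in, not In2White’s actual pixel dimensions:

```python
import math

def pyramid_tile_count(width_px, height_px, tile=256):
    """Total tiles in a zoomable-image pyramid that halves each level."""
    total = 0
    while width_px > tile or height_px > tile:
        total += math.ceil(width_px / tile) * math.ceil(height_px / tile)
        width_px, height_px = (width_px + 1) // 2, (height_px + 1) // 2
    return total + 1  # the final single-tile overview level

# A hypothetical image in the hundreds of gigapixels:
print(f"{pyramid_tile_count(850_000, 430_000):,} tiles")
# ~7.4 million tiles – the same order of magnitude as Filippo's 8 million files.
```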

Which brings us to Cedexis’s part. When Filippo came to us, he had identified his main challenges: the huge size of the content users would navigate, and potential traffic peaks in the first weeks after launch, especially if the project got noticed by the media.

Openmix goes for a trek!

MontBlanc1

One question remains: will the Openmix traffic peak curves be bigger than Mont Blanc?

Well, guess what? This is exactly the kind of problem we love to solve here at Cedexis: large content delivery and traffic peaks. Plus, everyone was very enthusiastic about Filippo’s project. This is why we love the Internet: the only limit is your imagination, and Filippo showed us once again that humans have a lot of imagination. So we decided to sponsor it.

Being an engineer, Filippo was already aware of content delivery problems, which means he had already selected several CDNs. Good choice, especially considering that Filippo expected more than 1 million visitors in the first two days, which would, because of the size of the content, generate about 200 TB of traffic. The first figures came in: he got 1.7 million users in 48 hours! Cedexis truly wanted to support the project, which is at once creative and a technical challenge, so we offered to monitor and load balance the traffic between the CDNs Filippo selected.

Using Radar for monitoring and Openmix for load balancing ensures 100% availability, real-time balancing of traffic to the best-performing provider and, in the end, a user experience that matches the wow factor of the project. One challenge for you now: go navigate the website. Play with it, enjoy. We are glad to be your Snowpiercer!

Already played with it? Now learn more about multi-CDN strategies to optimize your web performance with our free whitepaper.