Avg. response time with one region: Germany 261 ms, Japan 161 ms, Canada 143 ms
With one AWS Region, performance is hostage to distance and network congestion.
Avg. response time by region (two regions): Germany 261 ms / 383 ms, Japan 161 ms / 71 ms, Canada 143 ms / 275 ms
With two AWS Regions, availability improves, and end users can see better performance thanks to shorter distances, provided the closer cloud region is performing well.
Avg. response time by region (three regions): Germany 261 ms / 383 ms / 111 ms, Japan 161 ms / 71 ms / 354 ms, Canada 143 ms / 275 ms / 172 ms
OpenMix monitors all AWS regions in real time and routes end users to the best-performing instance at any given moment, avoiding congestion and “busy” hours around the world.
Data courtesy of Cedexis Radar, the independent authority on web performance.
Availability is a key driver for an N+1 architecture. The advent of cloud solutions has rewritten the HA/DR handbook, favoring multiple low-cost instances over expensive “five-nines” solutions. At Cedexis, we have learned that effectively exploiting multiple cloud or data center locations can deliver unexpected performance gains when combined with real-time end-user telemetry.
To the left is an example analysis we perform for interested customers using actual Radar data, showing the expected performance gains from load balancing across multiple clouds. Note that the different colors show the relative performance of three traffic-routing methodologies: round robin, historical latency data, and real-time latency data.
Round-robin routing is fine for improving availability by maintaining active-active instances, but it does little to improve performance. Latency routing based on historical data yields only limited benefit, as users of the daily latency data behind AWS Route 53 can attest. Significant gains come from real-time data, which provides meaningful latency reduction for your end users.
Real-time data lets you avoid the numerous availability gaps and performance lags that occur throughout the day on any cloud, or within any data center’s Internet connectivity. By combining two or more clouds or data centers, your customers can “ride the low-latency curve” of optimized performance.
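The difference between the routing methodologies above can be sketched in a few lines of Python. This is purely illustrative, not the Openmix application API: the provider names and latency figures are hypothetical, standing in for the kind of real-time telemetry Radar collects.

```python
# Hypothetical real-time latency measurements (ms) for three cloud regions,
# as Radar-style telemetry might report them at a given moment.
latency_ms = {"us-east": 143, "eu-central": 111, "ap-northeast": 354}

def round_robin(providers, counter):
    """Ignore latency entirely; rotate through providers in order."""
    return providers[counter % len(providers)]

def real_time_latency(latencies):
    """Route to whichever provider currently reports the lowest latency."""
    return min(latencies, key=latencies.get)

providers = list(latency_ms)
print(round_robin(providers, 3))      # first provider again, regardless of speed
print(real_time_latency(latency_ms))  # "eu-central" (111 ms right now)
```

Round robin keeps every instance active but sends a third of users to the 354 ms region; the real-time decision re-evaluates on every request, so when a region degrades, traffic shifts away immediately.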
Control & Flexibility. A powerful aspect of the Cedexis load balancing solution is that you control both the data used to make traffic-routing decisions and the weighting of the decision criteria. Use our decision application wizard, pull a sample from our Developers’ community, or use PHP to define your own custom traffic-shaping application. Learn more about Cedexis Openmix.
Cedexis customers have used a range of data sources to influence decisions:
Cost information, to steer traffic to lower-cost infrastructure when performance measures are within a prescribed threshold.
Application performance monitoring data, to anticipate applications trending toward “busy” thresholds.
Monthly aggregate usage on CloudFront or other CDNs, to manage “bursting” costs.
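The first item in the list above, a cost-aware decision, can be sketched as follows. This is a minimal illustration, not Cedexis code: the provider names, prices, and the 50 ms threshold are all assumptions chosen for the example.

```python
# Acceptable latency penalty (ms) for choosing cheaper capacity; hypothetical.
THRESHOLD_MS = 50

def choose(candidates):
    """Pick the cheapest provider whose latency is within THRESHOLD_MS of
    the fastest; every candidate is at least within-threshold of itself,
    so the eligible list is never empty."""
    fastest = min(c["latency_ms"] for c in candidates)
    eligible = [c for c in candidates if c["latency_ms"] - fastest <= THRESHOLD_MS]
    return min(eligible, key=lambda c: c["cost_per_gb"])

# Illustrative candidates: a fast premium CDN vs. a cheaper, slower cloud.
candidates = [
    {"name": "premium-cdn", "latency_ms": 90, "cost_per_gb": 0.085},
    {"name": "budget-cloud", "latency_ms": 120, "cost_per_gb": 0.020},
]
print(choose(candidates)["name"])  # budget-cloud: within 50 ms of the fastest
```

If the budget provider’s latency drifted beyond the threshold, the same logic would fall back to the premium option, which is the trade-off the bullet describes: spend less whenever performance stays within bounds.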
Watch performance problems in real time. Discover how noisy the internet is.