[How-to] DNS-based session persistence with latency-based routing

by Chris Haag

Act Locally

Many applications require that each HTTP request from a particular client be directed to the same server or cloud instance. This is called session persistence, and it is common with web applications that do not share application “state” between back-end servers or data stores.

In the past, this has been solved using local load balancers such as those provided by F5 or Brocade, or more recently with the Elastic Load Balancer from Amazon. However, as application architects look to serve a global audience and deploy their applications to multiple data centers, it becomes necessary to ensure session persistence at both the global and local level.

Think Globally

In a previous post we discussed the trend toward “active-active” application design. Active-active applications can serve any request, from any visitor, from any data center at any time, and end users may be directed to a new data center mid-session. However, some applications rely on real-time personalization data that may be slow to synchronize across data centers. In this case, while it is possible to bounce a user between data centers mid-session, the end user may temporarily see slower performance or have a less personalized experience while the back-ends transfer data.

Examples of this type of application include ad networks, which use real-time data on our web-surfing habits to determine which impression will be most relevant to us. Certain gaming applications may similarly use recent interactions to enhance gameplay.

Switching Costs

In essence, this type of application pays a “switching cost” any time a particular end user requests a response from a new data center. Obviously, end users for whom one of the available choices is clearly faster should be directed to the “best” data center. But what if performance from their ISP to the original data center starts to degrade?

Ideally, we’d want a threshold based on each application’s unique business needs. This threshold would ensure we only switch data centers if the performance data is very compelling or the initial data center is unable to process the request.
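To make the idea concrete, here is a minimal sketch of threshold-based switching in Python. Everything in it is an assumption for illustration: the function names, the 20 ms threshold, and the RTT and health inputs are hypothetical, not the Openmix API.

```python
# A minimal sketch of threshold-based switching. All names and inputs
# (per-data-center RTT measurements, health checks) are hypothetical
# assumptions for illustration -- this is not the Openmix API.

SWITCH_THRESHOLD_MS = 20  # tune to each application's "switching cost"

def choose_data_center(current, rtt_ms, available):
    """Pick a data center for a client already pinned to `current` (or None).

    rtt_ms    -- dict of data center name -> measured RTT in milliseconds
    available -- dict of data center name -> True if healthy
    """
    healthy = [dc for dc in rtt_ms if available.get(dc)]
    fastest = min(healthy, key=lambda dc: rtt_ms[dc])

    # No session yet, or the pinned data center is down: take the fastest.
    if current is None or not available.get(current):
        return fastest

    # Otherwise stay put unless the alternative is compellingly faster.
    if rtt_ms[current] - rtt_ms[fastest] > SWITCH_THRESHOLD_MS:
        return fastest
    return current
```

The key design choice is that the pinned data center wins all close calls, so a session only moves when staying put is demonstrably costly or impossible.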

Tie Breakers

Of greater concern are end users for whom two or more data center choices are tied from a Radar performance perspective. The worst thing a GSLB could do would be to “flap”: sending the end user to multiple destinations over the course of a single visitor session. What we need is a way to break ties consistently while still ensuring an even distribution across data centers.

Consistent Hashing through Application Defined Load Balancing

Cedexis Openmix is flexible enough to handle this use case by combining Radar performance data with the consistent hashing algorithm devised at MIT in the 1990s. Customers can let Radar determine which of their data centers will be least latent for visitors from each ISP. For example, let’s imagine a customer has two data centers: one in Chicago and another in Dallas.

Some web visitors will clearly be better off going to Chicago while others will be routed to Dallas. This is easily accomplished with our standard “Optimal Round Trip Time” Openmix application. But what about ISPs located halfway (in an Internet sense) between those locations? Using a variance threshold, customers can define exactly what constitutes a “tie.” By implementing one of the many consistent hashing implementations available, Openmix can ensure ties are always broken the same way. This provides the benefits of Radar performance routing while maintaining session persistence.
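Here is a minimal sketch in Python of how a variance threshold and consistent hashing could fit together. It is an illustration under assumed names and numbers (the 10% threshold, the client key, the RTT inputs are all hypothetical), not the Openmix implementation or our example app.

```python
# Illustrative sketch only: a variance threshold defines a "tie", and a
# consistent-hash ring breaks the tie the same way every time for a given
# client. Names and thresholds are assumptions, not the Openmix API.
import bisect
import hashlib

VARIANCE_THRESHOLD = 0.10  # RTTs within 10% of the fastest count as a tie

class HashRing:
    """Minimal consistent-hash ring with virtual nodes for even spread."""

    def __init__(self, data_centers, vnodes=100):
        # Place each data center at many pseudo-random points on the ring.
        self._ring = sorted(
            (self._hash(f"{dc}#{i}"), dc)
            for dc in data_centers
            for i in range(vnodes)
        )
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def lookup(self, key):
        """Walk clockwise from the key's hash to the next ring point."""
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

def route(client_key, rtt_ms):
    """Fastest data center wins outright; ties go to the hash ring.

    client_key -- stable identifier for the client's ISP or resolver
    rtt_ms     -- dict of data center name -> measured RTT in milliseconds
    """
    best_rtt = min(rtt_ms.values())
    tied = [dc for dc, rtt in rtt_ms.items()
            if rtt <= best_rtt * (1 + VARIANCE_THRESHOLD)]
    if len(tied) == 1:
        return tied[0]
    # Sort so the ring is built identically regardless of input order.
    return HashRing(sorted(tied)).lookup(client_key)

# Example: an ISP roughly halfway (in latency terms) between two data centers
# always gets the same answer, request after request.
print(route("resolver-203.0.113.1", {"chicago": 42, "dallas": 44}))
```

Because a given client key always hashes to the same point on the ring, a tied client sticks to one data center across requests, while different clients hash to different points and spread evenly across the tied choices.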

We’ve created an example implementation. Give it a whirl!
