Automobile congestion is a scourge in most cities around the world. The negative impacts are varied: personal time wasted sitting in traffic, dangerous accidents, delayed arrival of emergency vehicles, environmental damage from stop-and-go driving, inefficient delivery logistics… the list goes on. Stop signs and simple on-off timed signals were sufficient to direct traffic in the early years; in recent decades, camera- and sensor-triggered traffic lights became necessary to address mounting congestion and protect pedestrians. With congestion and pollution problems still growing, Lighthouse Cities like Cologne, Barcelona, and Stockholm are piloting smart traffic management systems. These innovative combinations of software controllers and connected sensors in traffic lights and vehicles will optimize travel speeds, delivery routes, and public transportation links to prevent accidents, untangle bottlenecks, and balance the ratio of car drivers to rail riders.
Building Towards a Future of Sustainable Internet Traffic
Internet traffic has followed similar patterns on an accelerated timeline. Local load balancers were once sufficient to divvy up incoming requests among a handful of servers in the same location. With the explosion of Internet data use and the rise of cloud and virtual infrastructure, local load balancers (LLBs) have proliferated and now need to be managed like endpoints themselves. Application Delivery Controllers address this issue for data centers, but aren’t up to the task in a hybrid IT world. To optimize modern Internet infrastructure — increasingly a combination of data centers, CDNs, co-los, and regional cloud resources like AWS — we need an innovative smart traffic management system, too. Like the street traffic control systems in widespread use today, basic Global Load Balancers (GLBs) are an essential start, but we need something more dynamically data-driven, real-time responsive, and configurable to specific scenarios. This is especially true for DevOps organizations.
Murphy’s Law Applies: All the Things that Can Go Wrong…
When LLBs are not intelligent and dynamically controlled, quality of experience (QoE) degrades, provisioning economics go out of whack, and outages occur. Even with failover mechanisms in place, if traffic gets sent to an LLB cluster that is down, switching over often takes an unacceptable amount of time. Sometimes a location is working fine but starting to bump up against resource limits; a shift or failure elsewhere could then push so much traffic onto the near-capacity resource that it breaks, resulting in an entirely avoidable outage (and another fun post-midnight emergency intervention). In multi-cloud and hybrid infrastructure for application and media delivery, the sought-after advantages of scalability, agility, and affordability are lost when there is no overarching control layer. Without intelligent global traffic control, one location ends up overused and in danger of failing if something unexpected happens, while another sits underused. Data center resources are already paid for as CapEx, as opposed to OpEx cloud spot instances; intelligent, configurable GLBs can maintain the difficult balance between low-cost and high-cost locations in your specific resource mix.
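To make the CapEx/OpEx balancing act concrete, here is a minimal, hypothetical sketch (the location names, the 80% spill threshold, and the cost figures are all illustrative, not a real GLB API): route to the cheapest healthy location that still has headroom, spilling over to on-demand cloud capacity before any location is pushed to breaking point.

```python
# Illustrative sketch: capacity-aware global routing that prefers
# already-paid-for (CapEx) data-center capacity and spills excess
# traffic to on-demand (OpEx) cloud locations before anything breaks.
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    healthy: bool        # from real-time health checks
    utilization: float   # 0.0-1.0, from LLB/cloud monitoring
    cost_per_req: float  # ~0 for CapEx data centers, >0 for cloud

SPILL_THRESHOLD = 0.80  # hypothetical: stop adding load at 80% utilization

def pick_location(locations):
    """Cheapest healthy location with headroom; never route to one
    that is down or already near capacity."""
    candidates = [l for l in locations
                  if l.healthy and l.utilization < SPILL_THRESHOLD]
    if not candidates:
        raise RuntimeError("no healthy location with headroom -- alert ops")
    return min(candidates, key=lambda l: (l.cost_per_req, l.utilization))

fleet = [
    Location("dc-frankfurt", healthy=True,  utilization=0.78, cost_per_req=0.0),
    Location("dc-virginia",  healthy=False, utilization=0.10, cost_per_req=0.0),
    Location("cloud-west",   healthy=True,  utilization=0.35, cost_per_req=0.002),
]

print(pick_location(fleet).name)  # dc-frankfurt: free capacity, still under threshold
```

Nudge `dc-frankfurt` past the threshold and the same logic quietly shifts new traffic to `cloud-west` instead of letting the near-capacity data center break.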
Outages are Unnecessary Evils
Everybody hates outages – users, developers, sysadmins, sales teams, and executives. You might be able to keep your cloud budget woes to yourself, but outages big and small chip away at your brand, your reputation, and your app or site’s popularity. Dynamic GLBs use real-time health checks to detect potential traffic or resource problems, route around them, and send an alert before failure occurs so that you can address the root cause (during normal work hours, imagine that). Even with a failover plan, LLBs left to run without real-time intelligence are susceptible to slowdowns, micro-outages, and cascading failures, especially when hit with a DDoS attack or an unexpected surge. There are times when it’s necessary to shift your standard resource model: updates, repairs, natural disasters, and app or service launches. Without scriptable load balancing, you have to dedicate significant time to shifting resources around — and problems mount quickly if someone takes down a resource but forgets to make the proper notifications and preparations ahead of time.
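The detect-reroute-alert loop can be sketched in a few lines. This is a hypothetical illustration (the region names and the 90% warning threshold are assumptions, not any vendor's API): down locations are routed around immediately, and near-capacity locations are drained of new traffic and flagged *before* they fail.

```python
# Illustrative sketch of a dynamic GLB evaluation pass: pull health-check
# data, decide where traffic may flow, and raise warnings early enough
# that the root cause can be fixed during normal work hours.

def evaluate(health):
    """health: {location: {"up": bool, "utilization": float}}
    Returns (locations safe to route to, warnings to alert on)."""
    routable, warnings = [], []
    for loc, h in health.items():
        if not h["up"]:
            warnings.append(f"{loc} is DOWN -- traffic rerouted")
            continue
        if h["utilization"] > 0.90:  # hypothetical warning threshold
            warnings.append(f"{loc} above 90% -- investigate before it fails")
            continue  # stop sending new traffic; no outage has occurred yet
        routable.append(loc)
    return routable, warnings

routable, warnings = evaluate({
    "us-east": {"up": True,  "utilization": 0.55},
    "eu-west": {"up": True,  "utilization": 0.93},
    "ap-se":   {"up": False, "utilization": 0.00},
})
print(routable)  # ['us-east']
print(warnings)  # one alert for eu-west nearing capacity, one for ap-se down
```

In a real deployment this pass would run continuously against live health-check feeds, and the warnings would go to your alerting pipeline rather than stdout.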
Intelligent GLBs are Here to Save the Day
The direct benefits of implementing a scriptable, user-configurable, data-driven global load balancing platform for hybrid architectures are threefold: performance, economics, and control (including the ability to account for region-specific regulatory requirements). Feeding high-quality, real-time datasets — LLB and cloud monitoring, server availability, real user measurements, and other resource health checks — into traffic decision engines automates the entire delivery path: application availability and latency are optimized for consistently high QoE, congestion and outages are resolved swiftly, and resource use and cost come under fine-tuned control. This automated approach to software-defined application delivery is essential for DevOps innovation; continuous integration and delivery can’t wait for hardware changes, and the growing use of microservices and containers requires application-level control that developers can understand and configure. Moreover, software-defined global load balancers are designed to work the same way on all platforms, so smooth deployment is possible no matter your resource mix.
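What "scriptable and data-driven" means in practice can be shown with a toy decision engine. Everything here is a hypothetical sketch — the field names, weights, and scoring formula are assumptions for illustration, not a real platform's API — but it captures the idea: real user measurements, health checks, and cost feed one score, and the weights are configuration that an operator (or a CI/CD script) can change per application.

```python
# Hypothetical sketch of a scriptable traffic-decision engine: RUM latency,
# health checks, and cost roll up into a single score per endpoint, with
# operator-tunable weights standing in for per-application configuration.

WEIGHTS = {"latency": 0.6, "cost": 0.3, "load": 0.1}  # illustrative defaults

def score(endpoint, weights=WEIGHTS):
    """Lower is better; unhealthy endpoints are never eligible."""
    if not endpoint["healthy"]:
        return float("inf")
    return (weights["latency"] * endpoint["rum_latency_ms"]
            + weights["cost"] * endpoint["cost_index"] * 100
            + weights["load"] * endpoint["utilization"] * 100)

def decide(endpoints, weights=WEIGHTS):
    return min(endpoints, key=lambda e: score(e, weights))

endpoints = [
    {"name": "cdn-a",   "healthy": True,  "rum_latency_ms": 42, "cost_index": 0.8, "utilization": 0.40},
    {"name": "dc-b",    "healthy": True,  "rum_latency_ms": 65, "cost_index": 0.1, "utilization": 0.70},
    {"name": "cloud-c", "healthy": False, "rum_latency_ms": 30, "cost_index": 0.9, "utilization": 0.20},
]

print(decide(endpoints)["name"])  # dc-b: slower, but far cheaper under these weights
```

Shift the weights to pure latency and the same data picks the fast CDN instead — that reweighting, done in configuration rather than hardware, is the control a DevOps team actually wants.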
It may be hard to imagine now, especially if you are stuck in the transportation grind of a major urban area, but we may one day live in cities where cars and public transportation flow and connect seamlessly, air pollution is minimal, and road rage is an embarrassing relic of the past. When it comes to Internet traffic, we can start living the dream much sooner. After all, nobody in 1995 would have believed predictions about binge-watching entire sitcom seasons on a handheld wireless computer while going about your daily routine. With intelligent, dynamic global load balancing, we’re well on our way to yet another step change in application and content delivery.