Cedexis, the leader in content and application delivery optimization for clouds, CDNs, and data centers, today announced the general availability of its application delivery optimization solution. The complete, enterprise-ready application delivery platform integrates with NGINX Plus, which extends open source NGINX with advanced features and award-winning support. For the first time, DevOps teams can optimize application availability and latency in real time using monitoring data from the entire delivery path, from the end-user audience through to NGINX Plus load balancers and application servers.
“DevOps teams want their cloud networking to be as agile and flexible as compute and storage have become, to enable rapid and intelligent end-to-end scaling,” said Rob Malnati, Cedexis SVP of Business & Corporate Development. “By partnering with NGINX, our customers get a comprehensive view of application quality, from server to end user, and automation to optimize application delivery in real time for high availability, low latency, and optimal resource use.”
“Businesses today are expected to deliver high-performing, scalable applications, or risk losing out to the next competitor who does,” said Paul Oh, Head of Business Development at NGINX. “By making traffic routing decisions closest to the application with the advanced metrics available in NGINX Plus, Cedexis greatly improves an organization’s ability to deliver the best possible experience to its customers, no matter where they are located.”
The Cedexis application delivery platform addresses a range of issues associated with modern, cloud-native application delivery by providing:
- A single end-to-end integration of local and global load balancing, usable across any combination of public clouds, private clouds, and private data centers
- The first real-time, automated solution that leverages both application and user performance monitoring
- A single “pane of glass” for analysis of all traffic routing decisions
- Intelligent auto-scaling that optimizes both cloud application server capacity and performance, actively managing cost-for-performance tradeoffs
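To illustrate the kind of server-side metrics such an integration could draw on, the sketch below shows a minimal NGINX Plus configuration that load-balances an upstream group and exposes live performance data via the NGINX Plus REST API. The upstream name, addresses, and zone sizes are hypothetical placeholders, and this is an illustrative example rather than Cedexis's actual integration configuration.

```nginx
# Hypothetical upstream group of application servers
upstream app_servers {
    zone app_servers 64k;    # shared memory zone; enables live upstream metrics
    least_time header;       # NGINX Plus method: prefer the fastest-responding server
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
        status_zone app;     # NGINX Plus: collect per-location request metrics
    }

    location /api {
        api;                 # NGINX Plus REST API exposing the collected metrics
    }
}
```

A monitoring layer polling the `/api` endpoint can observe per-server response times and error counts alongside end-user measurements, which is the class of data the platform described above uses for routing and scaling decisions.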