Citrix Intelligent Traffic Management announcement at Citrix Synergy

Citrix announced the new Citrix Intelligent Traffic Management at Citrix Synergy last week, the result of its recent acquisition and integration of Cedexis.

As content and data elements move to multiple clouds and CDNs, providing the best user experience requires dynamically optimizing the flow of traffic across ISPs, CDNs and public clouds worldwide.

Watch the presentation session on demand on Citrix Synergy TV:

“Deliver the best user experience for your customers and users with Intelligent Traffic Management”


Raj Gulani, Citrix Sr. Director, Product Management

Steven Lyons, Citrix Principal Product Manager for Cedexis/Intelligent Traffic Management



Watch it Now!


Cedexis Openmix Powered by Datadog Monitoring


Datadog is a popular monitoring and analytics platform that is used widely across most corners of the business world. It provides a great way to collect and report on operational performance, and, perhaps most importantly, to create and operationalize alerts, which automatically sound the alarm when an application or service is not functioning correctly.

Cedexis Openmix manages global application and content delivery – it is, essentially, a control plane for delivery that is integrated with many ecosystem services. It uses data inputs and Big Data algorithms to make traffic routing and load balancing decisions that are communicated to applications via DNS or an HTTP API. Openmix optimizes global delivery based on the real-time health of infrastructure endpoints, which optimizes user experience, maximizes performance, and gives organizations granular control according to their unique business rules.

Consider a customer with multiple cloud regions, including Azure regions in the US, one on the East Coast and another on the West Coast. Operational monitoring data is sent to Datadog; exception handling for the development and operations teams is driven by the alerts configured in that system. The customer would like to use that data to automate their global routing – it would be a wonderful thing if their operations team no longer needed to be woken up in the middle of the night to make manual changes every time an issue popped up.

Radar provides ongoing, last-mile real user measurements from a global community of websites and applications. Openmix uses that real-time performance data as a key input to dynamically route traffic along the optimal pathway. However, there are many other things that can go wrong with application and content delivery. It may, in theory, be fastest to send traffic from a specific data center or cloud region, but if that location isn’t functioning correctly, the reality may not be a great user experience. Ensuring that systems are operating at full power is exactly why services are monitored by Datadog. And it is exactly why we integrated with Datadog: to add valuable real-time infrastructure data that enhances traffic steering.

It is easy to set up. Simply add a Fusion connector to your Cedexis account, point your existing Datadog alert at the webhook callback, and update your Openmix routing application logic. When the Datadog alert fires, Openmix is notified, and traffic decisions are immediately informed by the facts on the ground, automatically favoring endpoints that are in full working order.
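The routing logic itself is whatever you make it. As a minimal sketch of the idea (in Python, with invented names; real Openmix applications are written against the Openmix platform itself, so the alert map and endpoint aliases below are hypothetical, not the actual API):

```python
# Illustrative sketch only: the alert map and endpoint names are invented
# to show the shape of the decision, not the actual Openmix API.

def choose_endpoint(endpoints, fusion_alerts, radar_latency_ms):
    """Pick the lowest-latency endpoint that has no active Datadog alert.

    endpoints        -- candidate platform aliases, e.g. ["azure-us-east", "azure-us-west"]
    fusion_alerts    -- {alias: bool}, flipped by the Datadog webhook via Fusion
    radar_latency_ms -- {alias: latency}, from Radar real user measurements
    """
    healthy = [e for e in endpoints if not fusion_alerts.get(e, False)]
    candidates = healthy or endpoints  # if everything is alerting, fail open
    return min(candidates, key=lambda e: radar_latency_ms.get(e, float("inf")))
```

So if the East Coast region trips its Datadog alert, traffic shifts to the West Coast region even while the East Coast remains the faster path on paper.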

Openmix integrates with your existing operational environment, whatever it may be. It provides the benefits of intelligent, automated routing infrastructure with minimal changes necessary. Cedexis provides all the tools necessary to monitor and make global routing decisions – but the system is built to ingest and use vital data from substantially any third-party system (beyond Datadog, you can quickly and easily integrate sources like New Relic, CDN metrics, NGINX health checks, and many more). Whatever infrastructure monitoring you have up and running already, Openmix can integrate with it, automate your steering, and help you thrill your users, make the most of your systems – and stop getting up in the middle of the night to manually re-route traffic.

To find out more, give us a call, or create your own free Radar account.

Cedexis 2017 Highlights

2017 has been a thrilling year for our whole company.

Read our infographic “2017 in a Nutshell” to get a 360-degree view of the year’s biggest highlights.

Click the text inside the infographic for direct access to the related content.





10 Ways to Make Your Outage Emergency Room Fun

Originally published on the DEVOPSdigest website. By Andrew Marshall, Director of Product Marketing at Cedexis


It’s 3:47am. You and the rest of the Ops team have been summoned from your peaceful slumber to mitigate an application delivery outage. As you catch up on the frantic emails, Slack chats, and text messages from everyone on your international sales team (Every. Single. One.), your mind races as you switch to problem solving mode. It’s time to start thinking about how to make this mitigation FUN!


No need to rub it in, but … if you had turned on a software-defined application delivery platform for your hybrid infrastructure, you’d all be sound asleep right now. Automated real-time delivery decisions and failover would be nice, right? Just sayin’.


Your coworkers’ opinions on gaming consoles, Spiderman movies, and music are different than yours! The perfect animated GIF will remind them of your erudite tastes, while you have their attention. Extra credit if you can work in some low-key shade about not listening to your equally sophisticated opinions on optimized outage mitigation.


Oh hey, look at that! Your cloud provider’s health dashboard page says everything is fine…because it’s powered by the services that went down. Help your team vent their creative energy (and frustration) with some fun customized MS Paint updates to the offending page. Bonus points for art that reminds everyone of the value of a multi-cloud strategy powered by a programmable application delivery platform.


Tired of “learning lessons” from these emergency room drills? You depend on NGINX for your local load balancing (LLB), but don’t have a way to use those LLB health metrics and data to automate global delivery. Disjointed delivery intelligence means you don’t know how your apps will land with users. You need an end user-centric approach to app delivery that automates the best delivery path and ensures failover is in place at all times. Micro-outages often fly under your passive monitoring radar, but that doesn’t mean your users don’t notice them. An active, integrated app delivery approach re-routes automatically before you lose business. Post-mortems are fun…but so is making sure your apps survive the last mile. Arguably more so?


“Sales needs to sell more stuff.” You’ll feel better.


Sure, your Mode 1 ADC hardware was a sunk cost, so you’re stuck with it for a while. But you’re one unnecessary emergency closer to having a fully software-defined application delivery platform for your hybrid cloud. And now you’re even closer. And closer … Tick. Tock.


Probably best to do this during normal work hours. User experience data from around the world can detect degrading, sluggish resources in real-time, and user-centric app delivery logic powered by RUM can make quick re-routing decisions automatically. No more getting woken up after the application crashes. While you’re wishing you had RUM on your side, you can look up some fun facts from the countries experiencing app outages. Did you know Luxembourgish is an official language?


You too can be the weird colleague who sends emails with crazy, middle-of-the-night time stamps.


Browse around to see what you could have purchased with the money you were just forced to spend on unplanned cloud instance provisioning in order to keep your app running. That desktop Nerf missile launcher (or 700 of them) would have been pretty nice.


You’ve just proven it’s not that much fun after all. Don’t just dump everything onto one cloud and call it done. Clouds go down for so many reasons. Use an application delivery platform control layer to build in the capability to auto-switch to an available resource, while you sleep soundly. Running on multi-cloud without an abstracted control layer removes most of the value of the cloud. Swear off the game of chance. Out loud. Right now.



More Than Science Fiction: Why We Need AI for Global Data Traffic Management

Originally published on the Product Design & Development website

by Josh Gray, Chief Architect, Cedexis








Blade Runner 2049 struck a deep chord with science fiction fans. Maybe it’s because there’s so much talk these days of artificial intelligence and automation — some of it doom and gloom, some of it utopian, with every shade of promise and peril in between. Many of us, having witnessed the Internet revolution first hand — and still in awe of the wholesale transformation of commerce, industry, and daily life — find ourselves pondering the shape of the future. How can the current pace of change and expansion be sustainable? Will there be a breaking point? What will it be: cyber (in)security, the death of net neutrality, or intractable bandwidth saturation?

Only one thing is certain: there will never be enough bandwidth. Our collectively insatiable need for streaming video, digital music and gaming, social media connectivity, plus all the cool stuff we haven’t even invented yet, will fill up whatever additional capacity we create. The reality is that there will always be buffering on video – we could run fiber everywhere, and we’d still find a way to fill it up with HD, then 4K, then 8K, and whatever comes next.

Just like we need smart traffic signals and smart cars in smart cities to handle the debilitating and dangerous growth of automobile traffic, we need intelligent apps and networks and management platforms to address the unrelenting surge of global Internet traffic. To keep up, global traffic management has to get smarter, even as capacity keeps growing.

Fortunately, we have Big Data metrics, crowd-sourced telemetry, algorithms, and machine learning to save us from breaking the Internet with our binge watching habits. But, as Isaac Asimov pointed out in his story Runaround, robots must be governed. Otherwise, we end up with a rogue operator like HAL, an overlord like Skynet, or (more realistically) the gibberish intelligence of the experimental Facebook chatbots. In the case of the chatbots, the researchers learned a valuable lesson about the importance of guiding and limiting parameters: they had neglected to specify use of recognizable language, so the independent bots invented their own.

In other words, AI is exciting and brimming over with possibilities, but needs guardrails if it is to maximize returns and minimize risks. We want it to work out all the best ways to improve our world (short of realizing that removing the human race could be the most effective pathway to extending the life expectancy of the rest of Nature).

It’s easy to get carried away by grand futuristic visions when we talk about AI. After all, some of our greatest innovators are actively debating the enormous dangers and possibilities. But let’s come back down to earth and talk about how AI can at least make the Internet work for the betterment of viewers and publishers alike, now and in the future.

We are already using basic AI to bring more control to the increasingly abstract and complex world of hybrid IT, multi-cloud, and advanced app and content delivery. What we need to focus on now is building better guardrails and establishing meaningful parameters that will reliably get our applications, content, and data where we want them to go without outages, slowdowns, or unexpected costs. Remember, AI doesn’t run in glorious isolation, unerring in the absence of continual course adjustment: this is a common misconception that leads to wasted effort and disappointing or possibly disastrous results. Even Amazon seems to have fallen prey to the set-it-and-forget-it mentality: ask yourself, how often does their shopping algorithm suggest the exact same item you purchased yesterday? Their AI parameters may need periodic adjustment to reliably suggest related or supplementary items instead.

For AI to be practically applied, we have to be sure we understand the intended consequences. This is essential from many perspectives:  marketing, operations, finance, compliance, and business strategy. For instance, we almost certainly don’t want automated load balancing to always route traffic for the best user experience possible — that could be prohibitively expensive. Similarly, sometimes we need to route traffic from or through certain geographic regions in order to stay compliant with regulations. And we don’t want to simply send all the traffic to the closest, most available servers when users are already reporting that quality of experience (QoE) there is poor.
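As a hedged illustration of such business guardrails (the platform records, field names, and weight below are invented for this sketch and do not reflect any real Openmix schema), a cost- and compliance-aware decision might look like:

```python
# Hypothetical sketch: field names and the weight constant are invented.

COST_WEIGHT = 50.0  # how many "penalty milliseconds" one $/GB of cost is worth

def route(platforms, user_country):
    """Blend performance and cost, after a hard compliance filter:
    never serve a user from a region their regulations disallow."""
    allowed = [p for p in platforms
               if user_country not in p.get("blocked_countries", ())]
    # Lower combined score wins: fast-but-expensive can lose to
    # slightly-slower-but-cheap, which is the point of a business guardrail.
    return min(allowed,
               key=lambda p: p["latency_ms"] + COST_WEIGHT * p["cost_per_gb"])["name"]
```

Note that compliance is a hard filter while cost is a soft trade-off against latency; that distinction is exactly the kind of intended consequence that has to be decided up front.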

When it comes right down to it, the thing that makes global traffic management work is our ability to program the parameters and rules for decision-making — as it were, to build the guardrails that force the right outcomes. And those rules are entirely reliant upon the data that flows in.  To get this right, systems need access to a troika of guardrails: real-time comprehensive metrics for server health, user experience health, and business health.

System Guardrails

Real-time systems health checks are the first element of the guardrail troika for intelligent traffic routing. Accurate, low-latency, geographically dispersed synthetic monitoring answers the essential server availability question reliably and in real time: is the server up and running at all?

Going beyond ‘On/Off’ confidence, we need to know the current health of those available servers. A system that is working fine right now may be approaching resource limits, and a simple On/Off measurement won’t know this. Without knowing the current state of resource usage, a system can cause so much traffic to flow to this near-capacity resource that it goes down, potentially setting off a cascading effect that takes down other working resources.
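A toy classification makes the difference between a binary check and a capacity-aware one concrete (the thresholds here are arbitrary, chosen purely for illustration):

```python
# Toy classification; the 85% CPU and 90% connection thresholds are arbitrary.

def endpoint_status(is_up, cpu_pct, active_conns, max_conns):
    """Distinguish 'down' from 'degraded' (up, but nearing resource limits)
    so a load balancer can shed traffic before a failure cascades."""
    if not is_up:
        return "down"
    if cpu_pct > 85 or active_conns / max_conns > 0.9:
        return "degraded"  # keep serving, but steer new traffic elsewhere
    return "healthy"
```

A simple On/Off probe would report the second and third cases below as fully healthy, inviting exactly the cascading overload described above.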

Without scriptable load balancing, you have to dedicate significant time to shifting resources around in the event of DDoS attacks, unexpected surges, launches, repairs, etc. — and problems mount quickly if someone takes down a resource for maintenance but forgets to make the proper notifications and preparations ahead of time. Dynamic global server load balancers (GSLBs) use real-time system health checks to detect potential problems, route around them, and send an alert before failure occurs so that you can address the root cause before it gets messy.

Experience Guardrails

The next input to the guardrail troika is Real User Measurements (RUM), which provide information about Internet performance at every step between the client and the clouds, data centers, or CDNs hosting applications and content. Simply put, RUM is the critical measurement of the experience each user is having. As they say, the customer is always right, even when Ops says the server is working just fine. To develop true traffic intelligence, you have to go beyond your own system. This data should be crowd-sourced by collecting metrics from thousands of Autonomous System Numbers, delivering billions of RUM data points each day.

Community-sourced intelligence is necessary to see what’s really happening at both the edges of the network as well as in the big messy pools of users where your own visibility may be limited (e.g. countries with thousands of ISPs like Brazil, Russia, Canada, and Australia). Granular, timely, real user experience data is particularly important at a time when there are so many individual peering agreements and technical relationships, all of which could be the source of unpredictable performance and quality.

Business Guardrails

Together, system and experience data inform intelligent, automated decisions so that traffic is routed to servers that are up and running, demonstrably providing great service to end users, and not in danger of maxing out or failing. As long as everything is up and running and users are happy, we’re at least halfway home.

We’re also at the critical divide where careful planning to avoid unintended consequences comes into play. We absolutely must have the third element of the troika: business guardrails.

After all, we are running businesses. We have to consider more than bandwidth and raw performance: we need to optimize the AI parameters to take care of our bottom line and other obligations as well. If you can’t feed cost and resource usage data into your global load balancer, you won’t get traffic routing decisions that are as good for profit margins as they are for QoE. As happy as your customers may be today, their joy is likely to be short-lived if your business exhausts its capital reserves and resorts to cutting corners.

Beyond cost control, automated intelligence is increasingly being leveraged in business decisions around product life cycle optimization, resource planning, responsible energy use, and cloud vendor management. It’s time to put all your Big Data streams (e.g., software platforms, APM, NGINX, cloud monitoring, SLAs, and CDN APIs) to work producing stronger business results. Third party data, when combined with real-time systems and user measurements, creates boundless possibilities for delivering a powerful decisioning tool that can achieve almost any goal.


Decisions made out of context rarely produce optimal results, and then only by sheer luck. Most companies have developed their own special blend of business and performance priorities (and anyone who hasn’t, probably should). Automating an added control layer provides comprehensive, up-to-the-minute visibility and control, which helps any Ops team achieve cloud agility, performance, and scale, while staying in line with business objectives and budget constraints.

Simply find the GSLB with the right decisioning capabilities, as well as the capacity to ingest and use System, Experience, and Business data in real-time, then build the guardrails that optimize your environment for your unique needs.

When it comes to practical applications of AI, global traffic management is a great place to start. We have the data, we have the DevOps expertise, and we are developing the ability to set and fine-tune the parameters. Without it, we might break the Internet. That’s a doomsday scenario we all want to avoid, even those of us who love the darkest of dystopian science fiction.

About Josh Gray: Josh Gray has been a leader at startups as well as at large enterprises such as Microsoft, where he was awarded multiple patents. As VP of Engineering for Home Comfort Zone, his team designed and developed systems that were featured in Popular Science, HGTV, and Ask This Old House, and that won the #1 Cool Product award at introduction at the Pacific Coast Builders Show. Josh has been a part of many other startups and built on his success by becoming an angel investor in the Portland community. He continues his run of success as Chief Architect at Cedexis. LinkedIn profile


Growing Pains: Lessons Learned from the Steep Ascent of Streaming Video

When technology catches fire the way video streaming has over the last ten-plus years, it can be hard to pin down the turning points. But as we know, there are invaluable lessons to be learned in those moments when a pivot leads to pitfall or progress. The history of streaming video is full of them, including a missed opportunity that presaged the beginning of the end for Blockbuster. And given that IP video will account for 82% of global Internet traffic by 2021 (a 26% CAGR from 2016 to 2021), there are more stories, lessons, and disruptions to come. The OTT video market is still shaking out in the US, let alone in the rest of the world. And as always, disruption is rule number one.

We could mark progress by the switch from DVD to download to stream, video for online advertising and social platforms, the upstart success of Netflix’s original content, or the advent of livestreaming video. There are many exciting moments to consider, but sometimes it’s important to take a closer look behind the scenes.

I’ve found one useful way to capture the lessons of video streaming’s ascendance is at the orchestration level. We mostly know how the video making works (lights, camera, action!), but what about the streaming part? When most companies (not the pure plays like Netflix) started doing streaming video, they didn’t know if it would produce revenue. These companies didn’t want to invest in expensive infrastructure, so they went outside to CDNs and online video platform providers. When the traffic grew, they added more CDNs — when the primary one was taxed, they offloaded to another.

Now that we have the answer to the question of whether video streaming is here to stay — yes, everyone loves video everywhere all the time on all the devices for any purpose you can think of — many providers are looking sideways at the CDN pay-as-you-go model. The larger audiences get, the more the CDN gets in the way of profit making. How to control costs? It’s time to fire up your own metal.

Especially for big media market regions, it’s starting to make a lot more sense to leverage your own POPs (your data center, co-lo, or virtual hosting provider) first, with CDNs as a backup. Solutions like Cedexis’ Openmix provide the essential orchestration layer – the intelligent, automated balancing and routing that ensures the video stream gets delivered to the end user along the route that enables an optimal mix of quality and cost control. Openmix’s job is to do what CDNs do – work out how to get that video delivered quickly – but Openmix can also figure out how to do it for the lowest cost at a pre-defined target level of quality.

There are, after all, many more uses for video now and many different types of audiences and consumption preferences. Different kinds of content can take different encodings — there’s no reason to stream the Peppa Pig cartoon the same way you stream Star Wars: The Last Jedi.  In order to compete, content distributors have to stop overpaying for delivery and make more efficient use of their bandwidth.

The more regionally focused a service is, the easier it is to build its own infrastructure. That’s why Europe is a step or two ahead – in smaller countries, country-based providers can more easily serve all their media markets from POPs they control. In the US, medium-sized providers can get creative – they’re not big enough to have a bajillion POPs and cache boxes at the local ISPs, but they need to deliver content nationwide. An independent TV channel, for example, could identify its top media markets and create cache units there while using CDNs for everything else. Openmix would figure out the best location to serve from, using stipulated quality, available sources, and cost limits to choose the optimal route.
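The "own POPs first, CDNs for everything else" rule can be sketched in a few lines; the market names, fields, and quality floor here are hypothetical, invented to show the pattern rather than any real configuration:

```python
# Sketch of the "own POPs first, CDN as backup" rule; all names are invented.

def pick_source(market, own_pops, fallback_cdn, quality_floor_kbps):
    """Serve from an owned POP in the user's market when its measured
    throughput clears the quality floor; otherwise fall back to a CDN."""
    pop = own_pops.get(market)
    if pop and pop["throughput_kbps"] >= quality_floor_kbps:
        return pop["name"]
    return fallback_cdn
```

The key design choice is that the owned POP is preferred but never mandatory: quality below the stipulated floor, or a market with no POP at all, silently falls through to the CDN.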

Finally, there are the companies for whom delivering video is not a primary business. If you’re trying to make money off of video ads, you don’t want to spend too much on serving them up. The consumer isn’t paying for quality, as they are with Netflix, but they also won’t engage with your video ad if it is slow or skippy. If you can reduce the cost, complexity, and pain of delivering video ads, your entire business model makes more sense.

The key is creating a good experience while not breaking the bank. Only the biggest players like Netflix and Amazon can spend crazy bank now in order to buy their slice of future markets – and even so, they’d rather spend it on making award-winning shows and movies, not on fiber and bare metal. To be in the game for the video-saturated present and future, pay attention to what’s going on behind the scenes and look to Cedexis and Openmix for intelligent orchestration and optimized, automated control.

Local and Global Server Load Balancing with NGINX Plus and Cedexis

This blog post also appears on the NGINX blog

Today’s web applications run at a scale and speed that was hard to imagine just a few years ago. To support them, organizations host digital assets in many environments: regions in a single public cloud, multiple public and private clouds, content distribution networks (CDNs), company‑owned and leased data centers, and others. They move digital assets between hosts to meet business needs relating to delivery speed, reliability, and cost.

NGINX works on these problems from the bottom up. From individual web servers to load balancers to clusters, NGINX users create aggregations of servers to deliver services in specific regions and around the world. Cedexis works on the same problems from the top down. Cedexis helps customers manage their online assets to support service delivery on a global scale.

Cedexis and NGINX have now announced the NGINX Plus & Cedexis Local + Global Server Load Balancing (L+GSLB) solution. With this solution, you can fully automate the delivery of your full application stack. Your stack becomes responsive to a complete set of metrics. NGINX Plus provides health‑check data for local load balancers. Cedexis gathers customer‑centric real user monitoring data and synthetic testing data, the NGINX-provided health‑check data, plus any other data feeds, and uses them as input to application delivery logic incorporating your business rules.

With the NGINX Plus & Cedexis L+GSLB solution, when one or more servers go down within a single region or globally, traffic can automatically be routed around impacted servers and locations. When the servers come back up, previous traffic patterns can be automatically restored. Traffic can be made similarly responsive to business rules that address cost, response time, availability, and other metrics. You can try it yourself with the interactive NGINX Plus & Cedexis L+GSLB demo.

Ensuring Low Latency and High Availability for NGINX Plus Users

The software‑defined Cedexis Global Server Load Balancer (GSLB) platform is a control plane and abstraction layer for DevOps and IT Operations. It provides automated, predictive, and cost‑optimal routing of your apps, video, and web content.

The platform is powered by both real user monitoring (RUM), which leverages the world’s largest community of live user‑experience data, and synthetic monitoring. The Cedexis GSLB platform can also ingest data feeds from many other data sources like application performance monitoring (APM), clouds, and CDNs – and now, critically, NGINX Plus local load balancer health checks.

NGINX Plus users can now access the power of the Cedexis GSLB platform, and Cedexis users can now see what’s going on inside the data center. This helps a great deal when load balancing across a combination of data centers, clouds, and CDNs (or within any one of them).

The left side of the following graphic gives you an idea of the massive amount of activity monitoring data that is available to Cedexis GTM from NGINX Plus, as opposed to open source NGINX (the right side of the graphic).

Implementing the NGINX Plus & Cedexis Local + Global Server Load Balancing Solution

Inside a single public cloud region (such as AWS West‑Oregon), you can set up a high availability (HA) solution. An example is the NGINX Plus and AWS Network Load Balancer solution. The NGINX Plus configuration provides best‑in‑class, HA load balancing inside that particular cloud region. But most cloud‑based apps reside in more than one public cloud zone or region, or in a hybrid‑cloud infrastructure including at least one data center, as recommended by public cloud providers like AWS. The NGINX Plus & Cedexis L+GSLB solution automatically extends to the second, third, and additional regions as they’re added, with no further setup required.

Inside a public cloud region and data center, such as US West Azure and US East AWS in the figure below, each NGINX Plus instance continuously collects data about the health of the resources where it is installed. That data is then transmitted to Cedexis, where it’s used to make better global traffic management (GTM) routing decisions.

NGINX Plus generates real‑time activity data that provides DevOps teams with both load and performance metrics. Cedexis is able to ingest this data through a RESTful JSON API and incorporate it into the GTM algorithm. DevOps teams can use this data any way they want to inform the routing of apps.
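To give a feel for what ingesting that feed involves, here is a hedged Python sketch that condenses the peer states reported by the NGINX Plus REST API (`GET /api/<version>/http/upstreams`, whose responses include per-upstream `peers` with a `state` field) into healthy-peer ratios. The summarization itself is illustrative, not the actual Cedexis ingestion code:

```python
import json

def upstream_health(api_payload):
    """Summarize NGINX Plus upstream peer state into healthy-peer ratios,
    the kind of compact health signal a GSLB data feed can consume.

    api_payload -- JSON text as returned by the NGINX Plus REST API
                   (GET /api/<version>/http/upstreams).
    """
    summary = {}
    for name, upstream in json.loads(api_payload).items():
        peers = upstream.get("peers", [])
        up = sum(1 for p in peers if p.get("state") == "up")
        summary[name] = {"up": up, "total": len(peers),
                         "ratio": up / len(peers) if peers else 0.0}
    return summary
```

A global routing layer can then treat an upstream whose ratio drops below some threshold the same way it treats a failing synthetic health check.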

How It Works: Real-Time Decisions Based on Cedexis Radar and NGINX Plus

To walk you through how the integrated NGINX Plus & Cedexis solution works, let’s say a SaaS company uses one data center and two public clouds across the globe to deliver services to their worldwide customer base. It’s likely they are set up to use traditional geo‑based routing only. Essentially, this means that app data is routed to the data center or cloud closest to the end user. When things are working well, this is probably OK. However, when problems arise, this simple setup can make things worse.

The Cedexis Radar service continually collects traffic data from all over the world, related to hundreds of millions of web users, not just those using any one customer’s data center(s) and cloud(s). The Cedexis GTM platform uses this data to route traffic to the data center or cloud that offers the fastest‑responding servers for end users. This means that content may be delivered from a source that is not the closest geographically, if that provides the best customer experience.

As an example, consider the setup in the graphic below, with NGINX Plus running in two public cloud regions in the US and in a data center in the UK. If Cedexis Radar detects network traffic issues between continental Europe and the UK, Cedexis can use that data to route European traffic to the two US cloud regions, because they are now closer (in terms of user experience) than the UK data center. “Micro‑outages” of this type, often undetected by Ops teams until a forensic analysis takes place, are dealt with automatically, without any adverse impact on end users.
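“Closer in terms of user experience” simply means lowest measured round-trip time. A minimal sketch of that selection (region names and sample values are invented):

```python
from statistics import median

# "Closest" is defined by measured round-trip time, not by geography.

def nearest_by_experience(rtt_samples_ms):
    """Return the region with the lowest median RTT across recent
    community measurements (a dict of region -> list of RTT samples)."""
    return min(rtt_samples_ms, key=lambda region: median(rtt_samples_ms[region]))
```

In the scenario above, degraded UK connectivity inflates the UK data center’s measured RTTs, so the US regions win the comparison for European users despite the greater physical distance.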

Each NGINX Plus instance collects data about the health of the resources where it is installed, which is automatically ingested by Cedexis to inform GTM. Suppose, for example, that a data center has several servers that are dropping connections, as shown in the following graph.

Cedexis automatically adjusts and readjusts its decisions, in real time, to route traffic around the impacted servers, as depicted in this graph.

When things return to normal, or a mitigation is implemented, the traffic is automatically restored to the original resource. DevOps teams can sleep soundly, knowing this is taking place even when they are out of the office.

Check out this demo to see for yourself how NGINX Plus and Cedexis deliver integrated, full‑stack load balancing. For details, see our Solution Brief or talk to a solutions expert at either Cedexis or NGINX.

Announcing Cedexis Netscope: Advanced Network Performance and Benchmarking Analysis

The Cedexis Radar community collects tens of billions of real user monitoring data points each day, giving Cedexis users unparalleled insight into how applications, videos, websites, and large file downloads are actually being experienced by their users. We’re excited to announce a product that offers a new lens into the Radar community dynamic data set: Cedexis Netscope.

Know how your service stacks up, down to the IP subnet
Metrics like network throughput, availability, and latency don’t tell the whole story of how your service is performing, because they are network-centric, not user-centric: however comprehensively you track network operations, what matters is the experience at the point of consumption. Cedexis Netscope provides you with additional user-centric context to assess your service, namely the ability to compare your service’s performance to the results of the “best” provider in your market. With up-to-date Anonymous Best comparative data, you’ll have a data-driven benchmark to use for network planning, marketing, and competitive analysis.

Highlight your service performance:

  • Relative to peers in your markets
  • In specific geographies
  • Compared with specific ISPs
  • Down to the IP subnet
  • Including both IPv4 and IPv6 addresses
  • With comprehensive data on latency and throughput
  • Covering both static and dynamic delivery

Actionable insights
Netscope provides detailed performance data that can be used to improve your service for end users. IT Ops teams can use automated or custom reports to view performance from your ASN versus peer groups in the geographies you serve. This lets you fully understand how you stack up versus the “best” service provider, using the same criteria. Real-time logs organized by ASN can be used to inform instant service repairs or for longer-term planning.

Powered by: the world’s largest user experience community
Real User Monitoring (RUM) reveals how internet performance impacts customer satisfaction and engagement. Cedexis gathers RUM data from each step between the client and any of the clouds, data centers, and CDNs hosting your applications to build a holistic picture of internet health. Every request creates more data, continuously updating this unique real-time virtual map of the web.

Data and alerts, your way
To effectively evaluate your service and enable real-time troubleshooting, Netscope lets you roll up data to the ASN, country, region, or state level. You can also zoom in on a specific ASN at the IP subnet level to dissect the data any way your business requires. Data is retained in the cloud on an ongoing basis. Netscope also lets users easily set up flexible network alerts for performance and latency deviations.
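The roll-up-and-alert pattern described above can be sketched as follows. This is not Netscope itself: the field names, sample values, and the 25%-over-baseline alert rule are invented for illustration.

```python
# Hypothetical sketch: roll up raw latency samples by ASN, then flag
# ASNs whose median latency deviates from a baseline by more than a
# tolerance. A real product would work on far richer RUM records.
from collections import defaultdict
from statistics import median

def rollup_by_asn(measurements):
    """Group (asn, latency_ms) samples and return median latency per ASN."""
    groups = defaultdict(list)
    for asn, latency_ms in measurements:
        groups[asn].append(latency_ms)
    return {asn: median(vals) for asn, vals in groups.items()}

def latency_alerts(rollup, baseline_ms, tolerance=0.25):
    """Return ASNs whose median latency exceeds baseline by more than tolerance."""
    return sorted(asn for asn, med in rollup.items()
                  if med > baseline_ms * (1 + tolerance))

samples = [(7922, 48), (7922, 52), (3320, 95), (3320, 110), (701, 40)]
rollup = rollup_by_asn(samples)
print(latency_alerts(rollup, baseline_ms=60))
```

The same grouping step generalizes to country, region, or state roll-ups by swapping the grouping key.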

Netscope helps ISP Product Managers and Marketers better understand:

  • How well users connect to the major content distributors
  • How well users and businesses connect to public clouds (AWS, Google Cloud, Azure, etc.)
  • When, where, and how often outages and throughput issues happen
  • What happens during different times of day
  • Where the risks lie during big events (FIFA World Cup, live events, video/content releases)
  • How service on mobile compares with service on the web
  • How the ISP stacks up against “the best” ISP in the region

Bring advanced network analysis to your network
Netscope provides the critical data set you need for network planning and enhancement. With its real-time understanding of worldwide network health, Netscope gives you the context and actionable data you need to delight customers and increase your market share.

Ready to use this data with your team?

Set up a demo today


With a Multi-cloud Infrastructure, Control is Key

By Andrew Marshall, Cedexis Director of Product Marketing

Ask any developer or DevOps manager about their first experiences with the public cloud and it’s likely they’ll happily share some memories of quickly provisioning a few compute instances for a small project or new app. For (seemingly) a few pennies, you could take advantage of the full suite of public cloud services, as well as the scalability, elasticity, security, and pay-as-you-go pricing model. All of this made it easy for teams to get started with the cloud, saving both IT budget and infrastructure setup time. Public cloud providers AWS, Azure, Google Cloud, Rackspace, and others made it easy to innovate.

Fast forward several years and the early promise of the cloud is still relevant: services have expanded, costs have (in some cases) been reduced, and DevOps teams have adapted by spinning up compute instances whenever they’re needed. But for many companies, the realities of their hybrid-IT infrastructure necessitate support for more than one public cloud provider. To make this work, IT Ops needs a control layer that sits on top of their infrastructure and can deliver applications to customers over any architecture, including multi-cloud. This is true no matter why teams need to support multi-cloud environments.

Prepare for the Worst

As any IT Ops manager (or anyone who has ever lost access to a web app) knows, outages and cloud service degradation happen. Modern Ops teams need a plan in place for when they do. Many companies choose to use multiple public cloud providers to ensure their application is always available to worldwide customers, even during an outage. The process of manually re-routing traffic to a second public cloud in the event of an outage is cumbersome, to say the least. Adding an app delivery control plane on top of your infrastructure allows companies to seamlessly and automatically deliver applications over multiple clouds, factoring in real-time availability and performance.
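The manual re-routing pain described above is, at its core, a control loop that anyone can picture. Here is a deliberately tiny sketch of the idea, not a real control plane: the hostnames and the `/healthz` probe are hypothetical, and a production platform would decide from community RUM data rather than a single HTTP check.

```python
# Illustrative sketch: probe two cloud origins and decide which hostname
# DNS should answer with, preferring the primary when it is healthy.
import urllib.request

ORIGINS = ["us-east.app.example.com", "us-west.app.example.com"]

def is_healthy(host, timeout=2.0):
    """Hypothetical probe: HTTP 200 from /healthz means the origin is up."""
    try:
        with urllib.request.urlopen(f"https://{host}/healthz", timeout=timeout) as r:
            return r.status == 200
    except OSError:
        return False

def dns_answer(health_check=is_healthy):
    """Return the first healthy origin, in priority order."""
    for host in ORIGINS:
        if health_check(host):
            return host
    return ORIGINS[0]  # everything looks down: answer the primary anyway
```

Run on a schedule (or on every DNS query), this is what turns a 3 a.m. page into a non-event: the answer simply changes when the primary stops responding.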

Support Cloud-driven Innovation

Ops teams often support many different agile development teams, often in multiple countries or from new acquisitions. When this is the case, it’s likely that the various teams are using many architectures on more than one public cloud. Asking some dev teams to switch cloud vendors is not very DevOps-y. A better option is to control app delivery automation with a cloud-agnostic control plane that sits on top of any cloud, data center, or CDN architecture. This allows dev teams to work in their preferred cloud environment, without worrying about delivery.

Avoid Cloud Vendor Lock-in

Public cloud vendors such as Amazon Web Services or Microsoft Azure aren’t just infrastructure-as-a-service (IaaS) vendors; they sell (or resell) products and services that could very well compete with your company’s offering. In the beginning, using a few cloud instances didn’t seem like such a big deal. But now that you’re in full production and depend on one of these cloud providers for your mission-critical app, this no longer feels like a great strategy. Adding a second public cloud to your infrastructure lessens your dependence on a single cloud vendor you may be in “coopetition” with.

Multiple-vendor sourcing is a proven business strategy in many other areas of IT, giving you more options during price and SLA negotiations. The same is true for IaaS. Cloud services change often, as new services are added or removed, and price structures change. Taking control over these changes in public cloud service offerings, pricing models and SLAs is another powerful motivator for Ops teams to move to a multi-cloud architecture. An application delivery automation platform that can ingest and act on cloud service pricing data is essential.
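To make the last point concrete, here is a hedged sketch of what "acting on cloud service pricing data" could look like: a score that weighs a performance metric against a unit price. The weights, prices, and performance scores are all invented; a real platform would ingest live pricing and real-user measurements.

```python
# Hypothetical price-aware ranking: higher performance is better,
# higher price is worse. All numbers below are made up for illustration.

def rank_clouds(clouds, perf_weight=0.7):
    """Rank clouds by a blended score of performance and price.

    `clouds` maps a provider name to (performance_score_0_to_100,
    price_per_gb_usd). Price is normalized against the most expensive
    option so the two terms share a 0-100 scale.
    """
    max_price = max(price for _, price in clouds.values())
    def score(entry):
        perf, price = entry
        return perf_weight * perf - (1 - perf_weight) * 100 * (price / max_price)
    return sorted(clouds, key=lambda name: score(clouds[name]), reverse=True)

ranked = rank_clouds({
    "cloud-a": (92, 0.085),  # fast but pricier
    "cloud-b": (88, 0.050),  # slightly slower, noticeably cheaper
})
print(ranked[0])
```

Tuning `perf_weight` is exactly the kind of business rule a multi-cloud team would want to control: shift it toward 1.0 for latency-sensitive apps, lower it when egress costs dominate.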

Apps (and How They’re Delivered) Have Changed

Monolithic apps are out. Modern, distributed apps that are powered by microservices are in. Similarly, older application delivery controllers (ADCs) were built for a static infrastructure world, before the cloud (and SaaS) were commonly used by businesses. Using an ADC for application delivery requires a significant upfront capital expense, limits your ability to rapidly scale and hinders the flexibility to support dynamic (i.e. cloud) infrastructure. Using ADCs for multiple cloud environments compounds these issues exponentially. A software-defined application delivery control layer eliminates the need for older ADC technology and scales directly with your business and infrastructure.

Regain Control

Fully supporting multi-cloud in production may sound daunting. After all, Ops teams already have plenty to worry about daily. Adding a second cloud vendor requires a significant ramp-up period before production-level delivery, along with new protocols, alerts, competencies, and other things to think about. You can’t be knee-deep in the details of each cloud and still manage infrastructure. Delivering applications over multiple clouds adds complexity, but much less so if you use a SaaS-based application delivery platform. With multi-cloud infrastructure, control is key.

Learn more about our solutions for multi-cloud architectures and discover our application delivery platform.

You can also download our latest ebook, “Hybrid Cloud, the New Normal,” for free here.