China Tightens Its Grip on Website Delivery

Reports surfaced recently that the Chinese government is enforcing pre-existing telecom regulations more tightly.

Beijing is said to have ordered state-run telecommunication firms (including China Mobile, China Telecom and China Unicom) to prevent people from using VPNs, in keeping with President Xi Jinping’s ‘cyber sovereignty’ campaign. Until now, VPNs have operated in a legal gray area, used by individuals and corporations alike to access international websites.

But the new crackdown on illegal website delivery is said to affect far more services than VPNs alone (which have grown accustomed to being regularly blocked by the Ministry of Industry and Information Technology, or MIIT). Now local Chinese CDNs have started sending alarming messages to customers, warning of the risk of having their websites blocked in China for lack of a proper license.

In the past, delivering a website’s content in China could be achieved through either local or international hosting. To be hosted locally, however, publishers must apply for a state-issued registration number (also known as an ICP license) at the local branch of the MIIT in the province where their business is registered, then display that ICP number in the website’s footer.

In November 2016, the National People’s Congress passed a new China Cybersecurity Law, which required network operators in critical sectors to store data inside China. Most notably, business information and data on Chinese citizens gathered within China would have to be stored on domestic servers for Chinese authorities to spot-check when needed.

Effectively, that meant most foreign businesses and their websites now required an ICP license in order to keep operating in China. Although that law came into effect on June 1st, 2017, it looks like internet regulators gave it a more aggressive push ahead of the 19th National Congress of the Communist Party of China (mid-October). That means more scrutiny of whether websites possess the mandatory ICP license – and more heat directed at the network operators that allow these ‘non-compliant’ websites to be delivered in China.

Until this weekend, a website that did not possess an ICP license could be hosted outside of mainland China (say, in Hong Kong) and delivered from there. Most Chinese and international CDNs even offered some type of ‘Near-China Delivery’ option, whereby they would host content on their own servers near China (Singapore, Hong Kong, etc.) and use their private lines to carry that content through the Great Firewall for more efficient delivery to end users in China.

And this is where the new Cybersecurity Law comes into play: VPNs are, in essence, private lines operated to usher content in and out of China through the Great Firewall, and CDNs’ private networks fall squarely into the same category. China Telecom, China Unicom and China Mobile have each now issued a notice to foreign businesses in China requiring these companies to obtain proper private-line licensing from the government, and to block usage of those lines to connect to anything other than the company’s overseas headquarters.

Looking ahead, it is safe to assume Chinese network operators and CDNs will start enforcing the new Cybersecurity Law strictly: verifying the ICP licenses of all websites delivered in China through their services is likely just the tip of the iceberg. If you don’t have an ICP license yet, apply for it now or risk having your website blocked in China (as Lamborghini and Ferrari were this weekend) within the next few weeks.

Try Radar Without Risking an Install

Would you like to try the Radar tag on your website, so you can see how your delivery performance is impacting customer experience? But also want to ensure there is no impact on your production site before committing to installing a new tag?

That’s not just possible, it’s easy!

In two minutes you can execute Radar on your website from your favorite browser, no muss, no fuss.

The first step is to install the Tampermonkey extension in your browser. Tampermonkey is a browser extension (available for Chrome, Firefox, and others) that lets you execute additional JavaScript on a website. Visit http://tampermonkey.net/ for details about the plugin. Once it is installed, creating a script from the Tampermonkey dashboard is a piece of cake. Copy-paste the one below and replace the http://mywebsite.com/path/file URL and the customerId with your own! (If you don’t have a Radar account yet, simply click here and grab one – it’s quick and free!)

sample tampermonkey script
// ==UserScript==
// @name         radar_test
// @namespace    http://tampermonkey.net/
// @version      0.1
// @description  test the Cedexis Radar on my website
// @author       you
// REPLACE THE LINE BELOW WITH YOUR WEBSITE URL
// @match        http://mywebsite.com/path/file
// @grant        none
// ==/UserScript==
(function() {
    'use strict';
    // REPLACE THE CUSTOMER ID BELOW WITH YOUR OWN
    var customerId = 99999;
    function radar(zid, cid) {
        // Asynchronously inject the Radar tag for zone zid and customer cid
        var elem = document.createElement('script');
        elem.type = 'text/javascript';
        elem.async = true;
        elem.src = '//radar.cedexis.com/' + zid + '/' + cid + '/radar.js';
        document.body.appendChild(elem);
    }
    radar(1, customerId);
})();

Then save your script and enable it.

The next step is to open the network debugger in a new browser tab and visit your website. If you see calls to Cedexis (radar.js, r20.gif), it means you have executed our Radar tag on your site.
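If you prefer the console to the network tab, one quick alternative check (assuming your browser supports the standard Resource Timing API) is to list the resources the page fetched and filter for Cedexis:

sample console check
// Paste into the browser console after the page loads; prints the URLs of
// any resources fetched from Cedexis (radar.js, r20.gif, etc.)
performance.getEntriesByType('resource')
    .map(function (r) { return r.name; })
    .filter(function (url) { return url.indexOf('cedexis') !== -1; });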

Easy right?


Cedexis and Mux Announce Joint Data-Driven Traffic Management Solution Based on Consumers’ Video Quality of Experience

Combining real-time OTT video analytics with the leading global server load balancer, this new solution ensures the highest quality experience for consumers at the lowest possible cost.

PORTLAND, Ore. and SAN FRANCISCO, Aug. 31, 2017 – Cedexis, the leader in content and application delivery optimization for clouds, CDNs and data centers, and Mux, the most accurate video analytics platform on the market, today announced a joint solution to cost-effectively drive superior video Quality of Experience (QoE) through data-driven global traffic management. This partnership enables video publishers to gain clear visibility over their users’ video experiences, while using unique algorithms to guarantee optimal QoE at the lowest possible cost.

Mux’s simple-yet-powerful video monitoring system, which can be implemented in just minutes, will be integrated with Cedexis Openmix, the global server load balancer used by the world’s leading companies, including A&E, Hudl and Sky. By ingesting accurate, timely, and comprehensive QoE metrics worldwide from Mux, the Cedexis application delivery platform will deliver swift and accurate real-time, predictive traffic routing decisions that eliminate outages, ensure consistent QoE, and keep delivery costs to a minimum.

“For years streaming video providers have wrestled with the competing challenges of providing broadcast quality experiences to their consumers, while dealing with the very real costs of traffic delivery,” said Ryan Windham, Cedexis CEO. “By partnering with Mux to gain access to comprehensive streaming video monitoring data and analysis, Cedexis is helping publishers around the world to thrill their viewers, while protecting their economic models with powerful, real-time traffic delivery decisioning.”

The Cedexis Radar community currently tracks the status of Internet delivery through 14 billion real user measurements every day. Mux collects, processes, and analyzes streaming video events from the consumer’s video player to quickly identify QoE events. Both sets of data are then used by the Cedexis global server load balancer to make real-time predictive traffic routing decisions, improving audience growth and retention through consistent quality for viewers, and the swift resolution of congestion and downstream outages.

Openmix algorithms – which can be adjusted and executed in just minutes by customers using simple JavaScript code – can also take into account data ingested from other sources to make optimal economic decisions. This data may include synthetic monitoring details, contracts with cloud providers, usage tracking from CDNs, performance data from PLM services, and others.

“We are excited to combine Cedexis’ powerful, data-driven policy engine with Mux’s QoE analytics to provide the most comprehensive traffic management solution for video,” said Jon Dahl, CEO of Mux. “The intelligence derived from the consumer’s video playing experience is a critical resource for delivering the most effective optimization decisions for global server load balancing.”

To find out more about Cedexis application delivery solutions, please visit www.cedexis.com.

To find out more about Mux’s award-winning measurement and analytics solutions, please visit www.mux.com.

Meet us at IBC Show Amsterdam, September 15-19 to learn more about our joint solution.

Schedule a meeting now

Metrics That Beat Murphy’s Law

Automobile congestion is a scourge in most cities around the world. The negative impacts are varied: personal time wasted sitting in traffic, dangerous accidents, delayed arrival of emergency vehicles, environmental damage from stop-and-go driving, inefficient delivery logistics…the list goes on. Stop signs and simple on-off timed signals were sufficient to direct traffic in the early years; in recent decades, camera and sensor-triggered traffic lights became necessary to address mounting congestion and protect pedestrians. With congestion and pollution problems mounting, Lighthouse Cities like Cologne, Barcelona and Stockholm are piloting smart traffic management systems. These innovative combinations of software controllers and connected sensors in traffic lights and vehicles will optimize travel speeds, delivery routes, and public transportation links to prevent accidents, untangle bottlenecks, and balance the ratio of car drivers and rail riders.

Building Towards a Future of Sustainable Internet Traffic
Internet traffic has followed similar patterns on an accelerated timeline. Local load balancers were sufficient to divvy up incoming requests between a handful of servers sitting in the same place. With the explosion of Internet data use and development of cloud and virtual infrastructure, local load balancers (LLBs) have proliferated and need to be managed like endpoints themselves. Application Delivery Controllers address this issue for data centers, but aren’t up to the task in a hybrid IT world. To optimize modern Internet infrastructure — increasingly a combination of data centers, CDNs, co-los, and regional AWS resources — we need an innovative smart traffic management system, too.  Like the street traffic control systems in widespread use today, basic Global Load Balancers (GLBs) are an essential start, but we need something more dynamically data-driven, real-time responsive, and configurable to specific scenarios. This is especially true for DevOps organizations.

Murphy’s Law Applies: All the Things that Can Go Wrong…
When LLBs are not intelligent and dynamically controlled, quality of experience (QoE) degrades, provisioning economics are out of whack, and outages occur. Even with failover mechanisms in place, if traffic gets sent to an LLB cluster that is down, it often takes an unacceptable amount of time to switch over. Sometimes a location is working fine but starting to bump up against resource limits. A shift or failure in another location could cause so much traffic to flow to the near-capacity resource that it breaks, resulting in an entirely avoidable outage (and another fun post-midnight emergency intervention). In multi-cloud and hybrid infrastructure for application and media delivery, the sought-after advantages of scalability, agility, and affordability are lost when there is no overarching control layer. Without intelligent global traffic control, one location will be overused and in danger of failing if something unexpected happens, while another will be underused. Data center resources are already paid for under CapEx, as opposed to OpEx cloud spot instances; intelligent, configurable GLBs can maintain the difficult balance between low-cost and high-cost locations in your specific resource mix.

Outages are Unnecessary Evils
Everybody hates outages – users, developers, sys admins, sales teams, and executives. You might be able to keep your cloud budget woes to yourself, but outages big and small chip away at your brand, reputation, and app or site popularity. Dynamic GLBs use real-time health checks to detect potential traffic or resource problems, route around them, and send an alert before failure occurs so that you can address the root cause (during normal work hours, imagine that). Even with a failover plan, LLBs allowed to run without real-time intelligence are susceptible to slowdowns, micro-outages, and cascading failures, especially if hit with a DDoS attack or unexpected surge. There are times when it’s necessary to shift your standard resource model: updates, repairs, natural disasters, and app or service launches. Without scriptable load balancing, you have to dedicate significant time to shifting resources around — and problems mount quickly if someone takes down a resource but forgets to make the proper notifications and preparations ahead of time.

Intelligent GLBs are Here to Save the Day
The direct benefits of implementing a scriptable, user-configurable, data-driven global load balancing platform for hybrid architectures are three-fold: performance, economics, and control (including the ability to account for region-specific regulatory requirements). Feeding high quality, real-time datasets (LLB and cloud monitoring, server availability, real user measurements, and other resource health checks) into traffic decision engines automates the entire delivery path, optimizing application availability and latency for consistently high QoE, swift resolution of congestion and outages, and fine-tuned control over resource use and cost. This automated approach to software-defined application delivery is essential for DevOps innovation; continuous integration and delivery methods can’t wait for hardware changes, and the growing use of microservices and containers requires application-level control that developers can understand and configure. Moreover, software-defined global load balancers are designed to work the same way on all platforms, so smooth deployment is possible, no matter your resource mix.
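To make this concrete, here is a minimal sketch of the kind of decision such a platform automates, in plain JavaScript. The location names, thresholds, and data shapes are hypothetical, not any particular product’s API:

sample capacity-aware routing sketch
// Route each request to a live location with spare headroom, so that a
// failure elsewhere cannot push a near-capacity location over the edge.
var MAX_UTILIZATION = 0.85; // leave headroom for surges (assumed threshold)

function routeRequest(locations) {
    // locations: [{ name, up, utilization }] – illustrative shape
    var live = locations.filter(function (l) { return l.up; });
    if (live.length === 0) {
        return null; // total outage: nothing to route to
    }
    // Prefer live locations that still have spare capacity
    var healthy = live.filter(function (l) {
        return l.utilization < MAX_UTILIZATION;
    });
    var candidates = healthy.length > 0 ? healthy : live;
    // Send the request to the least-loaded candidate
    return candidates.reduce(function (best, l) {
        return l.utilization < best.utilization ? l : best;
    }).name;
}

routeRequest([
    { name: 'dc_east',    up: true,  utilization: 0.92 }, // near capacity
    { name: 'cloud_west', up: true,  utilization: 0.41 },
    { name: 'dc_europe',  up: false, utilization: 0.00 }  // failed health check
]); // -> 'cloud_west'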

It may be hard to imagine now, especially if you are stuck in the transportation grind of a major urban area, but we may one day live in cities where cars and public transportation flow and connect seamlessly, air pollution is minimal, and road rage is an embarrassing relic of the past. When it comes to Internet traffic, we can start living the dream much sooner. After all, nobody in 1995 would have believed predictions about binge watching entire sitcom seasons on your handheld wireless computer while going about your daily routine. With intelligent, dynamic global load balancing, we’re well on our way to yet another step change in application and content delivery.

Why CapEx Is Making A Comeback

The meteoric rise of both the public cloud and SaaS has brought along a strong preference for OpEx over CapEx. To recap: OpEx means you stop paying for a thing up front, and instead pay as you go. If you’ve bought almost any business software lately, you know the drill: you walk away with a monthly or annual subscription, rather than a DVD-ROM and a permanent or volume license.

But the funny thing about business trends is the frequency with which they simply turn upside down and make the conventional wisdom obsolete.

Recently, we have started seeing interest in getting out of pay-as-you-go (often, rather unimaginatively, shortened to PAYGO) as a model, and moving back toward making upfront purchases, then holding on for the ride as capital items are amortized.

Why? It’s all about economies of scale.

Imagine, if you will, that you are able to rent an office building for $10 a square foot, then rent out the space for $15 a square foot. Seems like a decent deal at a 50% markup; but of course you’re also on the hook for servicing the customers, the space, and so forth. You’ll get a certain amount of relief as you share janitorial services across the space, of course, but your economic ceiling is stuck at 50%.

Now imagine that you purchase that whole building for $10M and rent out the space for $15M. Your debt payment may cut into profits for a few years, but at some point you’re paid off – and every year’s worth of rent thereafter is essentially all profit.

The first scenario puts an artificial boundary on both risk and reward: you’re on the hook for a fixed amount of rental cost, and can generate revenues only up to 150% of your outlay. You know how much you can lose, and how much you can gain. By contrast, in the second scenario, neither risk nor reward is bounded: with ownership comes risk (finding asbestos in the walls, say), as well as unlimited potential (raise rental prices and increase the profit curve).
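To put rough numbers on the buy scenario (purely illustrative figures, assumed for the sake of the arithmetic):

sample break-even arithmetic
// Hypothetical rent-vs-buy break-even calculation for the building example.
var purchasePrice = 10000000; // buy the building for $10M
var annualRent    = 1500000;  // collect $1.5M in rent per year (assumed)
var annualCosts   = 500000;   // servicing, maintenance, etc. (assumed)

var netPerYear     = annualRent - annualCosts;   // $1.0M per year
var breakEvenYears = purchasePrice / netPerYear; // 10 years

// After year 10, each additional year of rent is essentially all profit:
// unbounded reward, in exchange for carrying unbounded ownership risk.
console.log('Break-even after ' + breakEvenYears + ' years');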

This basic model applies to many cloud services, and to no small degree explains why so many companies are able to pop up: their growth scales with provisioned services.

If you decided to fire up a new streaming video service that showed only the oeuvre of, say, Nicolas Cage, you’d want a fairly clear limit on your risk: maybe millions of people will sign up, but then again maybe they won’t. In order to be sure you’ve maximized the opportunity, though, you’ll need rock-solid infrastructure to ensure your early adopters get everything they expect: quick video start times, low re-buffering ratios, and excellent picture resolution. It doesn’t make sense to build all that out anew: you’re best off popping storage onto a cloud, maybe outsourcing CMS and encoding to an Online Video Platform (OVP), and delegating delivery to a global content delivery network (CDN). In this way you can have a world-class service without having to pony up for servers, encoders, points of presence (POPs), load balancers, and all the other myriad elements necessary to compete.

In the first few months, this would be great – your financial risk is relatively low as you target your demand generation at the self-proclaimed “total Cage-heads”. But as you reach a wider and wider audience, and start to build a real revenue stream, you realize: the ongoing cost of all those outsourced, OpEx-based services is flattening the curve that could bring you to profitability. By contrast, spinning up a set of machines to store, compute, and deliver your content could set a relatively fixed cost that, as you add viewers, would allow you to realize economies of scale and unbounded profit.

We know that this is a real business consideration because Netflix already did it. Actually, they did it some time ago: while they run much (if not most) of their computation through cloud services, they decided in 2012 to move away from commercial CDNs in favor of their own Open Connect network, and announced in 2016 that all of their content delivery needs were now covered by their own network. Not only did this reduce their monthly OpEx bill, it also gave them control over the technology they use to guarantee an excellent quality of experience (QoE) for their users.

So for businesses nearing this OpEx-vs-CapEx inflection point, the time really has arrived to put pencil to paper and calculate the cost of going it alone. The technology is relatively easy to acquire and manage, from server machines, to local load balancers and cache servers, on up to global server load balancers. You can see a little more about how to actually build your own CDN here.

OpEx solutions are absolutely indispensable in getting new services off the starting line; but it’s always worth keeping an eye on the economics, because with a large enough audience, going it alone is the way to go.

Optimizing for Resources and Consumers Alike

One of the genuinely difficult decisions being made by DevOps, Ops, and even straight-up developers, today is how to ensure outstanding quality of experience (QoE) for their users. Do you balance the hardware (physical, virtual, or otherwise) for optimal load? Or track quality of service (QoS) metrics – like throughput, latency, video start time, and so forth – and use those as the primary guide?

It’s really not a great choice, which is why we’re happy to say the right answer to the question of whether to use local or global traffic management is: both.

It hasn’t been a great choice in the past because, while synthetic and real user measurements (RUM) overlap pretty broadly, neither is a subset of the other. For instance, RUM might be telling you that users are getting great QoE from a cluster of virtual servers in Northern Virginia – but it doesn’t tell you whether those servers are near their capacity limits, and could do with some help to prevent overloading. Conversely, synthetic data can tell you where the most abundant resources are to complete a computational, storage, or delivery task – but it generally can’t tell you whether the experience at the point of consumption will be one of swift execution, or of fluctuating network service that causes a video to constantly sputter and pause as the user’s client tries to buffer the next chunk.

Today, though, you can combine the best of both worlds, as Cedexis has partnered with NGINX and their NGINX Plus product line to produce a unique application delivery optimization solution. Think of it as a marriage of local traffic management (LTM) and global traffic management (GTM). LTM takes care of efficiently routing traffic that arrives at a (virtual or physical) location between individual resources, ensuring that resources don’t get overloaded (and, of course, spinning up new instances when needed); GTM takes care of working out which location gets the request in the first place. Historically, LTM has been essentially blind to user experience, and GTM has been limited to relatively basic local network data (simple “is-it-working” synthetic monitoring for the most part).

Application delivery optimization demands not just real-time knowledge of what’s happening at both ends, but real-time routing decisions that ensure the end user is getting the best experience. Combining LTM and GTM makes it simple to:

  1. Improve on Round Robin or Geo-based balancing. For sure, physical proximity is a leading indicator of superior experience (all else being equal, data that has to travel shorter distances will arrive more quickly). By adding awareness of QoE at the point of consumption, however, Ops teams can ensure that geographically-bounded congestion or obstructions (say, for instance, peering between a data center and an ISP) can be avoided by re-routing traffic to a higher-performing, if more geographically distant, option. In its simplest iteration, the algorithm simply says “so long as we can get a certain level of quality, choose the closest source, but never use any source that dips below that quality floor” (this rule is sketched in code after this list).
  2. Re-route around unavailable server instances. Each data center or cloud may contain a cluster of server instances, balanced by NGINX Plus. When one of those instances becomes unavailable, however (whether through catastrophic collapse, or simply scheduled maintenance), the LTM can let the GTM know of its reduced capacity, and start the process of routing traffic to other alternatives before any server instance becomes overloaded. In essence, here the LTM is telling the GTM not to get too carried away with QoE – but to check that future experiences have a good chance of mirroring those being delivered in the present.
  3. Avoid application problems. NGINX Plus lets Openmix know the health of the application on a given node in real time. So if, for instance, an application update is made to a subset of application servers and starts to throw an unusual number of 500 errors, the GTM can start to route around those instances, and alert DevOps of an application problem. In this way, app updates can be distributed to some (but not all) locations throughout the network, then automatically de-provisioned if they turn out not to be functioning as expected.
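As an illustration, the quality-floor rule from the first item above might look something like this in plain JavaScript. This is a minimal sketch with hypothetical names, units, and data shapes, not actual Openmix or NGINX Plus code:

sample quality-floor routing sketch
var QUALITY_FLOOR = 0.9; // hypothetical minimum acceptable QoE score

function chooseSource(sources) {
    // sources: [{ name, distanceKm, qualityScore }] – illustrative shape
    var acceptable = sources.filter(function (s) {
        return s.qualityScore >= QUALITY_FLOOR;
    });
    if (acceptable.length === 0) {
        return null; // nothing meets the floor; escalate rather than degrade QoE
    }
    // Among acceptable sources, the nearest one wins
    return acceptable.reduce(function (best, s) {
        return s.distanceKm < best.distanceKm ? s : best;
    }).name;
}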

Combining the power of real user measurements, hardware health, and application health will mean expanding the ability of every team to deliver high QoE to every customer. At no point will users’ requests be sent to servers approaching full use; nor will they be sent to sprightly resources that can’t actually deliver QoE owing to network congestion beyond their control.

It also, of course, will create a new standard: once a critical mass of providers is managing its application delivery in this capacity-aware, consumer-responsive, application-tuned way, a rush will develop among those who have not yet reached this point to catch up. So take a moment now to explore how combining the LTM and GTM capabilities of NGINX Plus and Cedexis might make sense for your environment – and get a step up on your competition.

New Feature: Apply Filters

The Cedexis UI team spends considerable time looking for ways to make our products both useful and efficient. One of the areas they’ve been concentrating on is improving the experience of applying several sets of filters to a report, which historically has led to a reload of the report every time a user changed the filter list.

So we are excited to be rolling out a new reporting feature today called Apply Filters. With the focus on improved usability and efficiency, this new feature allows you to select (or deselect) your desired filters first, and then click the Apply Filters button to re-run the report. By selecting all your filters at once, you will save time and eliminate the confusion of trying to remember which filters you selected while the report continuously refreshes itself.

The Apply Filters button appears in two states: off and on. The off-state button is a lighter green version that you will see before any filter selections are made. The on-state becomes enabled once a filter selection has been made. Once you run Apply Filters and the report has finished re-running with the selected filters, the button returns to the off-state.

We have also placed the Apply Filters button at both the top and bottom of the Filters area. The larger button at the bottom is fixed in place, so no matter how many filter options you have open, it will always be easily accessible.


We hope you’ll agree this makes reports easier to use, and will save you time as you slice-and-dice your way to a deep and broad understanding of how traffic is making its way across the public internet.

Want to check out the new filters feature, but don’t have a portal account? Sign up here for free!

Together, we’re making a better internet. For everyone, by everyone.

Mobile Video is Devouring the Internet

In late 2009 – fully two years after the introduction of the extraordinary Apple iPhone – mobile was barely discernible on any measurement of total Internet traffic. By late 2016, it finally exceeded desktop traffic volume. In a terrifyingly short period of time, mobile Internet consumption moved from an also-ran to a behemoth, leaving behind the husks of marketing recommendations to “move to Web 2.0” and to “design for Mobile First”. And along the way, Apple encouraged us to buy into the concept that the future (of TV at least) is apps.

Unsurprisingly, the key driver of all this traffic is – as it always is – video. One in every three mobile device owners watches videos of at least 5 minutes’ duration, which is generally considered the point at which the user has moved from short-form, likely user-generated, content to premium video (think: TV shows and movies). And once viewers pass the 5-minute mark, it’s a tiny step to full-length, studio-developed content, which is a crazy bandwidth hog. Consider that video is expected to represent fully 75% of all mobile traffic by 2020 – when it was just 55% in 2015.


As consumers get more interested in video, producers aren’t slowing down. By 2020, it is estimated that it would take an individual fully 5 million years to watch the video being published and made available in just a month. And while consumer demand varies around the world – 72% of Thailand’s mobile traffic is video, for instance, versus just 41% in the United States – the reality is that, without some help, the mobile Web is going to be straining under the weight of near-unlimited video consumption.

What we know is that, hungry as they are for content, streaming video consumers are fickle and impatient. Akamai demonstrated years ago the 2-second rule: if a requested piece of content isn’t available in under 2 seconds, Internet users simply move on to the next thing. And numerous studies have shown definitively that when re-buffering (the dreaded pause in playback while the viewing device downloads the next section of the video) exceeds just 1% of viewing time, audience engagement collapses, resulting in dwindling opportunities to monetize content that was expensive to acquire, and can be equally costly to deliver.

How big of a problem is network congestion? It’s true that big, public, embarrassing outages across CDNs or ISPs are now quite rare. However, when we studied the network patterns of one of our customers, we found that what we call micro-outages (outages lasting 5 minutes or less) happen literally hundreds to thousands of times a day. That single customer was looking at some 600,000 minutes of direct lost viewing time per month (on the order of 4,000 five-minute micro-outages a day) – and when you consider how long each customer might have stayed, and their decreased inclination to return in the future, that number likely translates to several million minutes of indirectly lost viewing time.

While mobile viewers are more likely to watch their content through an app (48% of all mobile Internet users) than a browser (18%), they still receive the content through the chaotic maelstrom of a network that is the Internet. As such, providers have to work out the best pathways to use to get the content there, and to ensure that the stream will have consistency over time so that it doesn’t fall prey to the buffering bug.

Most providers use stats and analysis to work out the right pathways – they look at how various CDN/ISP combos are performing, and pick the one that is delivering the best experience. Strikingly, though, they often have to make routing decisions for audience members in geographical locations that aren’t currently in play, which means choosing a pathway without any recent input on which one will perform best – literally gambling with the experience of each viewer. What is needed is something predictive: something that helps the provider choose the right pathway the first time.

This is where the Radar Community comes in: by monitoring, tracking, and analyzing the activity of billions of Internet interactions every day, the community knows which pathways are at peak health, and which need a bit of a breather before getting back to full speed. So, when using Openmix to intelligently route traffic, the Radar community data provides the confidence that every decision is based on real-time, real-user data – even when, for a given provider, they are delivering to a location that has been sitting dormant.

Mobile video is devouring the Web, and will continue to do so, as consumers prefer their content to move, dance, and sing. Predictively re-routing traffic in real-time so that it circumvents the thousands of micro-outages that plague the Internet every day means never gambling with the experience of users, staying ahead of the challenges that congestion can bring, and building the sustainable businesses that will dominate the new world of streaming video.

Cedexis confirms its position as potential future French unicorn with the 2017 Tech Tour Growth 50

Tech Tour has announced its 2017 Tech Tour Growth 50 to shine a light on the next generation of Europe’s fastest-growing, equity-backed tech businesses: Europe’s 50 future unicorns. Cedexis, a leader in internet performance monitoring and optimization, has been distinguished for the second consecutive year as one of the 50 European companies with the highest growth, thus establishing its position as a European leader.

Tech Tour, a platform that helps high-tech growth companies develop strategic relations with investors, together with Silverpeak Investment Bank and a selection committee of international investors, researched and evaluated over 275 European private tech companies valued at under one billion US dollars.

The selection committee, chaired by Jean-Michel Deligny of Silverpeak, was composed of 18 international venture capital firms, advisers and experts, who judged the companies based on their achievement, impact and momentum.

The CEOs of the companies will gather at the annual Tech Tour Growth Forum in Geneva, March 30th–31st, where two companies will be announced as the winners of the 2017 Tech Tour Growth Award and the Tech Tour Innovation Award.

“Thanks to Tech Tour, we finalized our $22.8 million fundraising last year in just 3 weeks. For the second consecutive year, this is a recognition of the work we have accomplished, which has allowed us to drive Cedexis where we wished: to be recognized as a high-growth European company. This allows us to continue our mission and go even further: to make the fastest Internet for all,” said Julien Coulon, Cedexis founder.

William Stevens, Managing Director, Tech Tour, commented: “The billion-dollar ‘unicorn’ successes that grab our attention are just the tip of the iceberg. The Tech Tour Growth 50 puts the next layer of Europe’s high-tech, high-growth businesses in the limelight. These are Europe’s future potential unicorns – the companies with the most promise to have a global impact. The Tech Tour Growth 50 companies have created over 9,000 high-tech jobs, raised over $3.7 billion of investment from 309 investors and have an estimated average valuation of $338 million. This is a clear demonstration of Europe’s strength and competitiveness in scaling up tech businesses.”

Cedexis is proud to be one of the companies transforming the Internet age, an area where Europe has an obvious competitive advantage.

More information on TECH TOUR GROWTH 50: www.techtourgrowth50.com.

Make Mobile Video Stunning with Smart Load Balancing

If there’s one thing about which there is never an argument, it’s this: streaming video consumers never want to be reminded that they’re on the Internet. They want their content to start quickly, play smoothly and without interruption, and be visually indistinguishable from traditional TV and movies. Meanwhile, the majority of consumers in the USA (and likely a similar proportion worldwide) prefer to consume their video on mobile devices. And as if that weren’t challenging enough, there are now suggestions that live video consumption will grow – according to Variety, by as much as 39 times! That seems crazy until you consider that Cisco predicted video would represent 82% of all consumer Internet traffic by 2020.

It’s no surprise that congestion can result in diminished viewing quality, leading over 50% of all consumers to, at some point, experience buffer rage from the frustration of not being able to play their show.

Here’s what’s crazy: there’s tons of bandwidth out there – but it’s stunningly hard to control.

The Internet is a best-effort environment, over which even the most effective Ops teams can wield only so much control, because so much of it is either resident with another team, or simply somewhere in the amorphous ‘cloud’. While many savvy teams have sought to solve the problem by working with a Content Delivery Network (CDN), the sheer growth in traffic has meant that some CDNs are now dealing with as much traffic as the whole Internet transferred just a few years ago… and are themselves now subject to their own congestion and outage challenges. For this reason, plenty of organizations now contract with multiple CDNs, as well as placing their own virtual caching servers in public clouds, and even deploying their own bare-metal CDNs in data centers where their audiences are centered.

With all these great options for delivering content, Ops teams must make real-time decisions on how to balance the traffic across them all. The classic approaches to load balancing have been (with many thanks to NGINX; two of them are sketched in code after this list):

  • Availability – Any servers that cannot be reached are automatically removed from the list of options (this prevents total link failure).
  • Round Robin – Requests are distributed across the group of servers sequentially.
  • Least Connections – A new request is sent to the server with the fewest current connections to clients. The relative computing capacity of each server is factored into determining which one has the least connections.
  • IP Hash – The IP address of the client is used to determine which server receives the request.

You might notice something each of those approaches has in common: they all focus on the health of the system, not on the quality of the experience actually being had by the end user. Anything that balances based on availability tends to be driven by what is known as synthetic monitoring, which is essentially one computer checking that another computer is available.

But we all know that just because a service is available doesn’t mean that it is performing to consumer expectations.

That’s why the new generation of Global Server Load Balancer (GSLB) solutions goes a step further. Today’s GSLB uses a range of inputs (blended together in the sketch after this list), including

  • Synthetic monitoring – to ensure servers are still up and running
  • Community Real User Measurements – a range of inputs from actual customers of a broad range of providers, aggregated, and used to create a virtual map of the Internet
  • Local Real User Measurements – inputs from actual customers of the provider’s own service
  • Integrated 3rd party measurements – including cost bases and total traffic delivered for individual delivery partners, used to balance traffic based not just on quality, but also on cost
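Blended together, those inputs might drive a decision along these lines – a minimal sketch in plain JavaScript, with made-up weights, field names, and data, not any particular vendor’s scoring model:

sample multi-input gslb sketch
var COST_WEIGHT = 2.0; // how aggressively to trade quality for cost (assumed)

function scoreProvider(p) {
    if (!p.syntheticUp) {
        return -Infinity; // hard availability gate from synthetic monitoring
    }
    var quality = 0.7 * p.communityRumScore  // community real user measurements
                + 0.3 * p.localRumScore;     // this service's own users
    return quality - COST_WEIGHT * p.pricePerGb; // fold in 3rd-party cost data
}

function pickProvider(providers) {
    return providers.reduce(function (best, p) {
        return scoreProvider(p) > scoreProvider(best) ? p : best;
    }).name;
}

pickProvider([
    { name: 'cdn_a', syntheticUp: true,  communityRumScore: 0.95, localRumScore: 0.90, pricePerGb: 0.08 },
    { name: 'cdn_b', syntheticUp: true,  communityRumScore: 0.88, localRumScore: 0.92, pricePerGb: 0.03 },
    { name: 'cdn_c', syntheticUp: false, communityRumScore: 0.99, localRumScore: 0.99, pricePerGb: 0.05 }
]); // -> 'cdn_b' (healthy, nearly as fast, far cheaper)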

Combined, these data sources allow video streaming companies not only to guarantee availability, but also to tune their total network for quality, and to optimize within that for cost. Or put another way – streaming video providers can now confidently deliver the quality of experience consumers expect and demand, without breaking the bank to do it.

When you know that you are running across the delivery pathway with the highest quality metrics, at the lowest cost, based on the actual experience of your users – that’s a stunning result. And it’s only possible with smart load balancing, combining traditional synthetic monitoring with the real-time feedback of users around the world, and the 3rd party data you use to run your business.

If you’d like to find out more about smart load balancing, keep looking around our site. And if you’re going to be at Mobile World Congress at the end of the month, make an appointment to meet with us there so we can show you smart load balancing in real life.