Posts

New Feature: Reason Code reporting

Cedexis’ Global Load Balancing solution Openmix makes over 2.5 billion real-time delivery decisions every day. These routing decisions are based on a combination of Radar community’s 14 billion daily real user measurements and our customers’ defined business logic.

One thing we hear time and time again is “It’s great that you are making all these decisions, but it would be very valuable to have insight into why you are switching pathways.” The “why” is hugely valuable in understanding the “what” (Decisions) and “when” (Time) of the Openmix decision-routing engine.

And so, we bring you: Reason Codes.

Reason Codes in Openmix applications are used to log and identify the decisions being made, so you can easily establish why users were routed to certain providers or geographical locations. Reason Codes reflect factors such as Geo overrides, Best Round Trip Time, Data Problems, Preferred Provider Availability, or whatever other logic is built into your Openmix applications. Being able to see which Reason Codes (the “why”) drove which decisions lets you see clearly where problems are arising in your delivery network, and make adjustments where necessary.
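To make that concrete, here is a simplified sketch of how an Openmix-style application can attach a Reason Code to each decision. The structure and method names follow published Openmix application examples; the provider aliases, CNAMEs, thresholds, and reason strings are invented for illustration.

```javascript
// Hypothetical provider aliases and CNAMEs, for illustration only
var providers = {
    'cdn_a': 'www.example-a.cdn.net',
    'cdn_b': 'www.example-b.cdn.net'
};

function init(config) {
    for (var alias in providers) {
        config.requireProvider(alias);
    }
}

function onRequest(request, response) {
    var avail = request.getProbe('avail');    // availability, keyed by provider
    var rtt = request.getProbe('http_rtt');   // round-trip time, keyed by provider
    var best = null;

    // Default logic: lowest round-trip time among sufficiently available providers
    for (var alias in providers) {
        if (avail[alias] && avail[alias].avail >= 90 && rtt[alias]
                && (best === null || rtt[alias].http_rtt < rtt[best].http_rtt)) {
            best = alias;
        }
    }

    if (best !== null) {
        response.respond(best, providers[best]);
        response.setReasonCode('Best Round Trip Time');
    } else {
        // No provider met the availability floor: fall back, and log why
        response.respond('cdn_a', providers['cdn_a']);
        response.setReasonCode('Routed based on Availability data');
    }
    response.setTTL(20);
}
```

Every decision thus carries its own “why” – and that is exactly what the new report dimension surfaces.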

Providing these types of insights is core to Cedexis’ DNA, so we are pleased to announce the general availability of Reason Codes as part of the Decision Report.  You can now view Reason Codes as both Primary and Secondary Dimensions, as well as through a specific filter for Reason Codes.

As a Cedexis Openmix user, you’ll want to get in on this right away. Being able to see what caused Openmix to route users from your preferred Cloud or CDN provider to another because of a certain event (perhaps a data outage in the UK) allows you to understand what transpired over a specific time period. No more second-guessing why decisions spiked in a certain country or network. Using Reason Codes, you can now easily see which applications are over- and under-performing, and why.

Here is an example of how you can start gaining insights.

You will notice in the first screenshot below that for a period of time, there was a spike in the number of decisions that Openmix made for two of the applications.

Now all you have to do is switch the view from Application as your primary dimension to Reason Code, and you can quickly see that “Routed based on Availability data” was the main reason for Openmix re-routing users.

Drilling down further, you can add Country as your Secondary Dimension and you can see that this was happening primarily in the United States.

All of a sudden, you’re in the know: there wasn’t just ‘something going on’ – there was a major Availability event in the US. Now it’s time to hunt down your rep from that provider and find out what happened, what the plan is to prevent it in the future, and how you can adjust your network to ensure continued excellent service for all your users.

Cedexis Announces Integration with NGINX Plus for Cloud Application Delivery Optimization

Cedexis, the leader in content and application delivery optimization for clouds, CDNs and data centers, today announced the general availability of its application delivery optimization solution. The complete, enterprise-ready application delivery platform solution integrates with NGINX Plus, which extends open source NGINX with advanced features and award-winning support. For the first time, DevOps teams can now optimize application availability and latency in real time using monitoring data from the entire delivery path, from end-user audience through to NGINX Plus load balancers and application servers.

“DevOps teams want their cloud networking to be agile and flexible, as we’ve seen compute and storage become, to enable rapid and intelligent end-to-end scaling,” said Rob Malnati, Cedexis SVP of Business & Corporate Development. “By partnering with NGINX, our customers have a comprehensive view of application quality, from server to end-user, and automation to optimize application delivery in real time for high-availability, low-latency and optimal resource use.”

The complete solution combines industry-leading NGINX Plus local load balancers with Cedexis global traffic management to provide end-to-end, software-configurable, real-time, data-driven application delivery optimization. The solution also ensures consistent application quality and the swift resolution of any congestion or outages. Openmix algorithms, which customers can easily adjust and execute using simple JavaScript code, are augmented with NGINX Plus monitoring data to enable optimal decision-making.
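As a hedged illustration of what “augmented with NGINX Plus monitoring data” can mean in practice, the sketch below scores candidate datacenters by combining a Radar-style latency measurement with the kind of load metric NGINX Plus exposes (active versus maximum connections). The field names and thresholds are assumptions for illustration, not the shipped integration.

```javascript
// candidates: [{ name, rttMs, activeConnections, maxConnections, healthy }]
// rttMs would come from real-user measurements; the connection counts from
// the local load balancer's status reporting. All shapes here are illustrative.
function pickDatacenter(candidates) {
    var best = null;
    candidates.forEach(function (c) {
        if (!c.healthy) return;                           // skip failed upstreams
        var load = c.activeConnections / c.maxConnections;
        if (load > 0.85) return;                          // nearly saturated: avoid
        var score = c.rttMs * (1 + load);                 // latency, penalized by load
        if (best === null || score < best.score) {
            best = { name: c.name, score: score };
        }
    });
    return best ? best.name : null;                       // null: let a fallback rule decide
}
```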

“Businesses today are expected to deliver high-performing, scalable applications, or risk losing out to the next competitor who is doing so,” said Paul Oh, Head of Business Development at NGINX. “By being able to make traffic routing decisions closest to the application with the advanced metrics available in NGINX Plus, Cedexis greatly improves an organization’s ability to deliver the best possible experience to their customers no matter where they are located.”

The Cedexis application delivery platform addresses a range of issues associated with modern, cloud-native application delivery by providing:

  • A single end-to-end integration of local and global load balancing that is usable in any combination of public or private clouds, and private data centers
  • The first, real-time, automated solution that leverages both application and user performance monitoring
  • A single “pane of glass” for analysis of all traffic routing decisions
  • Intelligent auto-scaling that optimizes for both cloud application server capacity and performance, actively managing “costs for performance” tradeoffs.

To find out more about Cedexis application delivery solutions, please visit cedexis.com. To find out more about NGINX Plus please visit www.nginx.com.

The Cloud Is Coming

Still think the cloud (or should that be The Cloud?) is a possible-but-not-definite trend? Take a look at IDC’s projection of IT deployment types:

Credit: Forbes

So much to unpack! What really jumps out is that

  • Traditional data centers drop in share, but hang in there around 50%: self-managed hardware will be a fact of life as far out as we can see
  • Public cloud will double by 2021, but it isn’t devouring everything, because in the final analysis no Operations team wants to give up all control
  • Private cloud expands rapidly, as the skills to use the technology become more widespread
  • But most importantly…in the very near future, almost every shop will likely be running a hybrid network, which combines traditional data centers, private cloud deployments, public clouds for storage and computation, and CDNs for delivery (don’t forget that Cisco famously predicted that over half of all Internet traffic would traverse a CDN by the year after next)

It’s a brave new world, indeed, that has so many options in it.

If it is true, though, that cloud computing will be a $162B a year business by 2020 (per Gartner), and that 74% of technology CFOs say cloud computing will have the most measurable impact on their business in 2017, that means this year will end up having been one of upheaval, and of transformation. As ever more complex permutations of public/private infrastructure hit the market, the challenges of keeping everything straight will rapidly multiply: can one truly be said to be optimizing if one cannot centralize the tracking and traffic management for all resources, regardless of whether they’re in your own NOC, under Amazon’s tender care in Virginia, or located at some unidentified POP somewhere in Western Europe?

The truth is that, as with all transformations, this move to hybrid networks will be marked by the classic Hype Cycle:

We are fast approaching the Peak of Inflated Expectations; the sudden fall into the Trough of Disillusionment will be precipitated by the realization that there are now so many different sources of computation in the mix that nobody is quite sure where the savings are. Perhaps we’re saving money by using different CDNs in different geographies – but it’s hard to tell if we’re balancing for economic benefit; perhaps we’re making the right move by storing all our images on a global cloud, but it’s hard to tell whether adding a second (with the inevitable growth in storage fees) would result in faster audience growth; perhaps we’re right to avoid sending content requests back to origin, but at the same time, that seems like a lot of resources to not use.

The Slope of Enlightenment will hit when the tools come along to put all the metrics of all the elements of the hybrid network onto a single pane: balancing between nodes that are, at an abstract level at least, equally measurable, configurable, and tunable will start us down the path to the Plateau of Productivity.

The Cloud is coming; how long we spend in the Trough of Disillusionment trying to figure out how to make it hum like a well-oiled machine is assuredly on us.

New Feature: Apply Filters

The Cedexis UI team spends considerable time looking for ways to help make our products both useful and efficient. One of the areas they’ve been concentrating on is improving the experience of applying several sets of filters to a report, which historically has led to a reload of the report every time a user has changed the filter list.

So we are excited to be rolling out a new reporting feature today called Apply Filters. With the focus on improved usability and efficiency, this new feature allows you to select (or deselect) your desired filters first and then click the Apply Filters button to re-run the report. By selecting all your filters at once, you will save time and avoid the confusion of trying to remember which filters you selected while the report continuously refreshes itself.

The Apply Filters button has two states: off and on. The off-state button is a lighter green, and is what you will see before any filter selections are made; the button switches to the on-state once a filter selection has been made. Once you click Apply Filters and the report has finished re-running with the selected filters, the button returns to the off-state.

We have also placed the Apply Filters button at both the top and bottom of the Filters area. The larger button at the bottom is fixed in place, so no matter how many filter options you have open, it will always be easily accessible.


We hope you’ll agree this makes reports easier to use, and will save you time as you slice-and-dice your way to a deep and broad understanding of how traffic is making its way across the public internet.

Want to check out the new filters feature, but don’t have a portal account? Sign up here for free!

Together, we’re making a better internet. For everyone, by everyone.

Which Is The Best Cloud or CDN?

Oh no, you’re not tricking us into answering that directly – it’s probably the question we hear more often than any other. The answer we always provide: it depends.

Unsatisfying? Fair enough. Rather than handing you a fish, let us show you how to go haul in a load of bluefin tuna.

What a lot of people don’t know is that, for free, you can answer this sort of question all by yourself on the Cedexis portal. Just create an account, click through on the email we send, and you’re off to the races (go on – go do it now, we’ll wait…it’s easier to follow along when you have your own account).

The first thing you’ll want to do is find the place where you get all this graphical statistical goodness: click Radar then select Performance Report, as shown below

With this surprisingly versatile (and did we mention free?) tool, you can answer all the questions you ever had about traffic delivery around the world. For instance, suppose you’re interested in working out which continent has the best and worst availability. Simply change the drop-down near the top left to show ‘Continent’ instead of ‘Platform’, and voilà – an entirely unsurprising result:

Now that’s a pretty broad brush. Perhaps you’d like to know how a particular group of countries or states looks relative to one another – simply select those countries or states from the Location section on the right-hand side of the screen and you’re off to the races. Do the same with Platforms (that is, the cloud providers and CDNs), and switch your view from Availability to Throughput or Latency to see how the various providers are doing when they are available.

So, if you’re comparing a couple of providers, in a couple of states, you might end up with something that looks like this:

Be careful though – across 30 days, measured day to day, it looks like there’s not much difference to be seen, nor much improvement to be found by using multiple providers. Make sure you dig in a little deeper – maybe to the last 7 days, 48 hours, or even 24 hours. Look what can happen when you focus in on, for instance, a 48-hour period:

There are periods there where having both providers in your virtual infrastructure would mean the difference between serving your audience really well, and being to all intents and purposes unavailable for business.

If you’ve never thought about using multiple traffic delivery partners in your infrastructure – or have considered it, but rejected it in the absence of solid data – today would be a great day to go poke around. More and more operations teams are coming to the realization that they can eliminate outages, guarantee consistent customer quality, and take control over the execution and cost of their traffic delivery by committing to a Hybrid Cloud/CDN strategy.

And did we mention that all this data is free for you to access?


Re-Writing Streaming Video Economics


The majority of Americans – make that the vast majority of American Millennials – stream video. Every day, in every way. From Netflix to Hulu, YouTube to Twitch, CBS to HBO, there is no TV experience that isn’t being accessed on a mobile phone, a tablet, a PC, or some kind of streaming device attached to a genuine, honest-to-goodness television.

The trouble is, we aren’t really paying for it: just 9% of a household’s video budget goes to streaming services, while the rest goes to all the usual suspects: cable companies, satellite providers, DVD distributors, and so forth. This can make turning a profit a tricky proposition – Netflix has only just started to churn out ‘material profits’, Hulu is suspected to be losing money, and Amazon is unlikely ever to break out the profitability of its Prime video service from the other benefits of the program.

The challenge is that there are really only so many levers that can be pulled to make streaming video profitable:

  1. Charge (more) for subscriptions: except that when the cost goes up, adoption goes down, and decelerating growth is anathema to a start-up business
  2. Spend less on (licensing/making/acquiring) content: except that if the content quality misses, audience growth will follow it
  3. Spend less on delivering the content: except that if the quality goes down, audiences will depart, never to be seen again

Levers one and two are tricky, and rely upon the subjective skills of pricing and content-acquisition experts. Number three, though…maybe there’s something there that is available to everyone.

And indeed, there is. Most video traffic these days travels across Content Delivery Networks (CDNs), which do yeoman work caching popular content around the globe and handle much of the heavy lifting in working out the quickest way to get content from publisher to consumer. Over the years, these vital members of the infrastructure have gradually improved and refined their craft, to the point where they are about as reliable as they can be.

That said, no Ops team ever likes to have a single point of failure, which is why almost all large-scale internet outfits contract with at least two – if not more – CDNs. And that’s where the opportunity arises: it’s almost a guarantee that with two contracts, there will be differences in pricing for particular circumstances. Perhaps there is a pre-commit with one, or a time-of-day discount on the other; perhaps they simply offer different per-Gb pricing in return for varying feature sets.

With Openmix, you can actually build an algorithm that doesn’t just eliminate outages, and doesn’t just ensure consistent quality: you can make decisions about where to send traffic based on financial parameters, once you have ensured that quality isn’t going to drop.
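A minimal sketch of that idea, assuming invented provider names, contract prices, and quality thresholds: enforce the quality floor first, then let price break the tie.

```javascript
// Assumed per-GB contract rates; not real prices
var pricePerGB = { 'cdn_a': 0.045, 'cdn_b': 0.032 };

// measurements: { cdn_a: { avail: 99.9, throughputKbps: 12000 }, ... }
function chooseProvider(measurements) {
    // Step 1: quality floor - only providers that won't hurt the viewer qualify
    var eligible = Object.keys(measurements).filter(function (p) {
        var m = measurements[p];
        return m.avail >= 99.5 && m.throughputKbps >= 5000;
    });
    // Nobody meets the bar: degrade gracefully to a default provider
    if (eligible.length === 0) {
        return 'cdn_a';
    }
    // Step 2: among qualifying providers, the cheapest wins
    eligible.sort(function (a, b) { return pricePerGB[a] - pricePerGB[b]; });
    return eligible[0];
}
```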

All of a sudden, you can pull one of the three levers – without triggering the nasty side effects that make each a mixed blessing. You can reduce your cost without putting your quality at risk – it’s a win/win.

We’d love to show you more about this, so if you’re at NAB this week, do stop by.

Don’t Be Afraid of Microservices!

Architectural trends are to be expected in technology. From the original all-in-one-place COBOL behemoths half the world just learned existed because of Hidden Figures, to three-tiered architecture, to hyper-tier architecture, to Service Oriented Architecture…really, it’s enough to give anyone a headache.

And now we’re in a time of what Gartner very snappily calls Mesh App and Service Architecture (or MASA). Whether everyone else is going for that particular nomenclature is less relevant than the reality that we’ve moved on from web services and SOA toward containerization, de-coupling, and the broadest possible use of microservices.

Microservices sound slightly disturbing, as though they’re very, very small components, of which one would need dozens if not hundreds to do anything. Chris Richardson of Eventuate, though, recently begged us not to assume that just because of the name these units are tiny. In fact, it makes more sense to think of them as ‘hyper-targeted’ or ‘self-contained’ services: their purpose should be to execute a discrete set of logic, which can exist in isolation, and simply provide easily-accessed public interfaces. So, for instance, one could imagine a microservice whose sole purpose was to find the best match from a video library for a given user: requesting code would provide details on the user, the service would return the recommendation. Enormous amounts of sophistication may go into ingesting the user-identifying data, relating it to metadata, analyzing past results, and coming up with that one shining, perfect recommendation…but from the perspective of the team using the service, they just need to send a properly-formed request, and receive a properly-formed response.
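The contract is the whole point, and it is small. Here is a hedged sketch of how narrow that recommendation service’s public surface can be; the service shape, port, and field names are invented for illustration.

```javascript
const http = require('http');

// Stand-in for the real logic: metadata joins, history analysis, models...
// All of that can grow arbitrarily sophisticated behind this one function.
function recommendFor(user) {
    return { videoId: 'v-12345', title: 'Example Feature', confidence: 0.87 };
}

http.createServer((req, res) => {
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
        let user = {};
        try { user = JSON.parse(body); } catch (e) { /* malformed request: use defaults */ }
        res.setHeader('Content-Type', 'application/json');
        res.end(JSON.stringify(recommendFor(user)));   // well-formed response out
    });
}).listen(8080);
```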

The apps we all rely upon on those tiny little computers we carry around in our pocketbooks or pockets (i.e. smartphones) fundamentally rely on microservices, whether or not their developers thought to describe them that way. That’s why they sometimes wake up and spring to life with goodness…and sometimes seem to drag, or even fail to get going. They rely upon a variety of microservices – not always based at their own home location – and it’s the availability of all those microservices that dictates the user experience. If one microservice fails, and the failure is not handled elegantly by the code, the experience becomes unsatisfactory.

If that feels daunting, it shouldn’t – one company managed to build the whole back end of a bank on this architecture.

Clearly, the point of greatest risk is the link to the microservice – the API call, if you will. If the code calls a static endpoint, the risk is that the endpoint isn’t available for some reason, or at least doesn’t respond at an acceptable speed. This is why there are any number of solutions for trying to ensure the microservice is available, often split between authoritative DNS services (which essentially take all the calls for a given location and assign them to backend resources based on availability) and application delivery controllers (generally physical devices that perform the same service). Of course, if either is down, life gets tricky quickly.
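The client-side half of that risk can be softened with something as simple as an ordered failover, sketched below with invented endpoint URLs and an assumed timeout; DNS services and delivery controllers do the same job at other layers.

```javascript
// Try endpoints in order; abort any attempt that exceeds the timeout.
const endpoints = [
    'https://recs-us-east.example.com/recommend',   // hypothetical primary
    'https://recs-eu-west.example.com/recommend'    // hypothetical backup
];

async function callWithFailover(payload, timeoutMs = 500) {
    for (const url of endpoints) {
        const controller = new AbortController();
        const timer = setTimeout(() => controller.abort(), timeoutMs);
        try {
            const res = await fetch(url, {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify(payload),
                signal: controller.signal
            });
            if (res.ok) return await res.json();    // healthy endpoint: done
        } catch (e) {
            // unreachable or too slow: fall through to the next endpoint
        } finally {
            clearTimeout(timer);
        }
    }
    throw new Error('all endpoints unavailable');
}
```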

In fact, the trick to planning for highly available microservices is to use endpoints that are managed by a cloud-based application delivery service. In other words, as the microservice is required, a call goes out to a location that combines both synthetic and real-user measurements to determine the most performant source and redirect the traffic there. This compounds the benefits of the microservice architecture: not only can the microservice itself be maintained and updated independently of the apps that use it, but the network and infrastructure necessary to its smooth and efficient delivery can also be tweaked without affecting existing users.

Microservices are the future. To make the most of them, first ensure that they independently address discrete purposes; then make sure that their delivery is similarly self-contained and flexible, without recourse to updating the apps that use them; then settle back and watch performance meet innovation.

Live and Generally Available: Impact Resource Timing

We are very excited to be officially launching Impact Resource Timing (IRT) for general availability.

IRT is Impact’s powerful window into the performance of different sources of content for the pages in your website. For instance, you may want to distinguish the performance of your origin servers relative to cloud sources, or advertising partners; and by doing so, establish with confidence where any delays stem from. From here, you can dive into Resource Timing data sliced by various measurements over time, as well as through a statistical distribution view.

What is Resource Timing? Broadly speaking, Resource Timing measures latency within an application (i.e. the browser). It uses JavaScript as the primary mechanism to instrument various time-based metrics for all the resources requested and downloaded for a single website page by an end user. Individual resources are objects such as JS, CSS, images and other files that the website page requests. The faster the resources are requested and loaded on the page, the better the quality of experience (QoE) for users. By contrast, resources that cause longer latency produce a negative QoE. By analyzing Resource Timing measurements, you can isolate the resources that may be causing degradation, so your organization can fix them.
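The raw material here is the browser’s standard Resource Timing API, which any modern browser exposes. The snippet below is plain browser JavaScript (no Cedexis-specific code): it lists every resource the current page fetched, along with two of the timings IRT aggregates.

```javascript
// Run in a browser console on any page you own.
performance.getEntriesByType('resource').forEach((e) => {
    console.log(
        e.name,                                    // URL of the JS/CSS/image/etc.
        e.initiatorType,                           // script, link, img, fetch...
        Math.round(e.duration) + 'ms total',       // full fetch duration
        Math.round(e.connectEnd - e.connectStart) + 'ms TCP connect'
    );
});
// Note: cross-origin resources report detailed timings only when the
// server sends a Timing-Allow-Origin header; otherwise most fields are 0.
```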

Resource Timing Process:

Cedexis IRT makes it easy for you to track resources from identified sources – normally identified by domain (*.myDomain.com), by sub-domain (e.g. images.myDomain.com), or by the provider serving your content. In this way, you can quickly group together types of content and identify the source of any latency. For instance, you might find that origin-located content is being delivered swiftly, while cloud-hosted images are slowing down the load time of your page; in such a situation, you would now be in a position to consider a range of solutions, including adding a secondary cloud provider and a global server load balancer to protect QoE for your users.
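A toy version of that grouping, again using only the standard browser API (the hostnames that appear will be your own): bucket each resource by hostname, then compare average durations to see which class of content is dragging.

```javascript
const byHost = {};
performance.getEntriesByType('resource').forEach((e) => {
    const host = new URL(e.name).hostname;          // e.g. images.myDomain.com
    (byHost[host] = byHost[host] || []).push(e.duration);
});

Object.keys(byHost).forEach((host) => {
    const ds = byHost[host];
    const avg = ds.reduce((a, b) => a + b, 0) / ds.length;
    console.log(host + ': ' + ds.length + ' resources, ' + Math.round(avg) + 'ms avg');
});
```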

Some benefits of tracking Resource Timing:

  • See which hostnames – and thus which classes of content – are slowing down your site.
  • Determine which resources impact your overall user experience.
  • Correlate resource performance with user experience.

Impact Resource Timing from Cedexis allows you to see how content sources are performing across various measurement types such as Duration, TCP Connection Time, and Round Trip Time. IRT reports also give you the ability to drill down further by Service Providers, Locations, ISPs, User Agent (device, browsers, OS) and other filters.

Check out our User Guide to learn more about our Measurement Type calculations.

There are two primary reports in this release of Impact Resource Timing: the Performance report, which gives you a trending view of resource timing over time, and the Statistical Distribution report, which presents Resource Timing data through a statistical distribution view. Both reports have dynamic reporting capabilities that allow you to easily pinpoint resource-related issues for further analysis.


Using the Performance report, you can isolate which grouped resources are causing potential end-user experience issues – by hostname, page or service provider – and when the issue happened. Drill down even further to see whether an issue was global, localized to a specific location, or confined to certain user devices or browsers.

IRT is now available for all in the Radar portal – take it for a spin and let us know your experiences!

Why The Web Is So Congested

If you live in a major city like London, Tokyo, or San Francisco, you learn one thing early: driving your car through the city center is about the slowest possible way to get around. Which is ironic, when you think about it, as cars only became popular because they made it possible to get around more quickly. There is, it seems, an inverse relationship between efficiency and popularity, at least when it comes to goods that pass through a public commons like roads.

Or like the Internet.

Think about all that lovely 4K video you could be consuming if there were nothing between you and your favorite VOD provider but a totally clear fiber-optic cable. But unless you live in a highly over-provisioned location, that’s exactly what’s not going on; rather, you’re lucky to get a full HD picture, and even luckier if it stays at 1080p, without buffering, all the way through. Why? Because you’re sharing a public commons – the Internet – and its efficiency is being chewed away by popularity.

Let’s do some math to illustrate this:

  • Between 2013 and January 2017, the number of web users increased by 1.4 billion people to just over 3.7 billion. Today, Internet penetration stands at 50% (or, put another way, half the world isn’t online yet)
  • In 2013, the average amount of Internet data per person was 7.9GB per month; by 2015 it was 9.9GB, and Cisco expects it to reach over 25GB by 2020 – so assume something in the range of 15–17GB for 2017
  • Logically, then, in 2013 web traffic would have been around 2.3B × 7.9GB per month (roughly 18.2 exabytes); by 2017 it would have been 3.7B × 17GB per month (62.9 exabytes)
  • If we assume another billion Internet users by 2020, we’re looking at 4.7B × 25GB per month – or a full 117.5 exabytes

In just seven years, monthly web traffic will have grown more than sixfold (based on this math, anyway: Cisco’s own estimate is closer to 200 exabytes monthly by 2020).
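For the skeptical, the arithmetic above is easy to check (decimal units, with 1 exabyte = 10^9 GB):

```javascript
const EB = 1e9;                          // gigabytes per exabyte (decimal)

const traffic2013 = 2.3e9 * 7.9 / EB;    // ≈ 18.2 EB per month
const traffic2017 = 3.7e9 * 17  / EB;    // ≈ 62.9 EB per month
const traffic2020 = 4.7e9 * 25  / EB;    // ≈ 117.5 EB per month

console.log(traffic2013.toFixed(1), traffic2017.toFixed(1), traffic2020.toFixed(1));
console.log('2013 to 2020: ' + (traffic2020 / traffic2013).toFixed(1) + 'x');  // ≈ 6.5x
```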

And that is why the web is so busy.

But it doesn’t describe why the web is congested. Congestion happens when there is more traffic than transit space – which is why, as cities get larger and more populous, governments add lanes to major thoroughfares, meeting the automobile demand with road supply.

Unfortunately, unlike cars on roads, Internet traffic doesn’t travel in straight lines from point to point. So even though infrastructure providers have been building out capacity at a madcap pace, it’s not always connected in a way that makes transit efficient. And, unlike roads, digital connections are not built out of concrete; they often become unavailable – sometimes for long enough to cause consternation and PR challenges, and sometimes just for a minute or so, stymying a relative handful of customers.

For information to get from A to B, it has to traverse any number of interconnected infrastructures, from ISPs to the backbone to CDNs, and beyond. Each is independently managed, meaning that no individual network administrator can guarantee smooth passage from beginning to end. And with all the traffic that has been – and will continue to be – added to the Internet, it has become essentially a guarantee that some portion of content requests will bump into transit problems along the way.

Let’s also note that the modern Internet is characterized less by cat memes, and more by the delivery of information, functionality, and ultimately, knowledge. Put another way, the Internet today is all about applications: whether represented as a tile on a smartphone home screen, or as a web interface, applications deliver the intelligence to take the sum total of all human knowledge that is somewhere on the web and turn it into something we can use. When you open social media, the app knows who you want to know about; when you consult your sports app, it knows which teams you want to know about first; when you check your financial app, it knows how to log you in from a fingerprint and which account details to show first. Every time that every app is asked to deliver any piece of knowledge, it is making requests across the Internet – and often multiple requests of multiple sources. Traffic congestion doesn’t just endanger the bitrate of your favorite sci-fi series – it threatens the value of every app you use.

Which is why real-time predictive traffic routing is becoming a topic that web native businesses are digging deeper into. Think of it as Application Delivery for the web – a traffic cop that spots congestion and directs content around it, so that it’s as though it never happened. This is the only way to solve for efficient routing around a network of networks without a central administrator: assume that there will be periodic roadblocks, and simply prepare to take a different route.
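At its simplest, the “traffic cop” amounts to keeping fresh measurements per path and never marrying any one of them. A toy sketch, with invented path names and window size:

```javascript
// Keep a sliding window of recent latency samples per delivery path.
var samples = { 'path_a': [], 'path_b': [], 'path_c': [] };

function record(path, ms) {
    samples[path].push(ms);
    if (samples[path].length > 50) samples[path].shift();   // drop oldest
}

// Steer each new request to whichever path currently looks fastest.
function bestPath() {
    var best = null, bestAvg = Infinity;
    Object.keys(samples).forEach(function (p) {
        var s = samples[p];
        if (s.length === 0) return;                          // no data yet
        var avg = s.reduce(function (a, b) { return a + b; }, 0) / s.length;
        if (avg < bestAvg) { bestAvg = avg; best = p; }
    });
    return best || 'path_a';                                 // cold start: default
}
```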

The Internet is increasingly congested. But by re-directing traffic to the pathways that are fully available, it is possible to get around all those traffic jams. And, actually, it’s possible to do today.

Find out more by reading the story of how Rosetta Stone improved performance for over 60% of their worldwide customers.


Better OTT Quality At Lower Cost? That Would Be Video Voodoo

According to the CTA, streaming video now claims as many subscribers as traditional Pay TV. Another study, from the Leichtman Research Group, proposes that more households have streaming video than have a DVR. However accurate – or wonkily constructed – these statistics, what’s not up for grabs is that more people than ever are getting a big chunk of their video entertainment over the Web. As the infamous AWS outage demonstrated, this means that providers are constantly at risk of seeing their best-laid plans laid low by someone else’s poor typing skills.

Resiliency isn’t a nice-to-have, it’s a necessity. Services that were knocked out last week owing to AWS’ challenges were, to some degree, lucky: they may have lost out on direct revenue, but their reputations took no real hit, because the core outage was so broadly reported. In other words, everyone knew the culprit was AWS. But it turns out that outages happen all the time – smaller, shorter, more localized ones, which don’t draw the attention of the global media, and which don’t supply a scapegoat. In those circumstances, a CDN glitch is invisible to the consumer, and is therefore not considered: when the consumer’s video doesn’t work, only the publisher is available to take the blame.

It’s for this reason that many video publishers that are Cedexis customers first start to look at breaking from the one-CDN-to-rule-them-all strategy, and look to diversify their delivery infrastructure. As often as not, this starts as simply adding a second provider: not so much as an equal partner, but as a safety outlet and backup. Openmix intelligently directs traffic, using a combination of community data (the 6 billion measurements we collect from web users around the world each day) and synthetic data (e.g. New Relic and CDN records). All of a sudden, even though outages don’t stop happening, they do stop being noticeable, because they are simply routed around. Ops teams stop getting woken up in the middle of the night, Support teams stop getting sudden call spikes that overload the circuits, and PR teams stop having to work damage control.

But a funny thing happens once the outage distractions stop: there’s time to catch a breath, and realize there’s more to this multi-CDN strategy than just solving a pain. When a video publisher can seamlessly route between more than one CDN, based on each one’s ability to serve customers at an acceptable quality level, there is a natural economic opportunity to choose the best-cost option – in real time. Publishers can balance traffic based simply on per-Gig pricing; ensure that commits are met, but not exceeded, until every bit of pre-paid bandwidth throughout the network is exhausted; and distribute sudden spikes to avoid surge pricing. Openmix users have reported cost savings reaching low to mid double-digit percentages – while delivering a superior, more consistent, more reliable service to their users.
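A hedged sketch of the commit-aware piece (the CDN names, commit sizes, and overage rates are invented): spend pre-paid bandwidth first, and only fall back to overage pricing once every commit is exhausted.

```javascript
var cdns = [
    { name: 'cdn_a', commitGB: 500000, usedGB: 310000, overagePerGB: 0.050 },
    { name: 'cdn_b', commitGB: 200000, usedGB: 198000, overagePerGB: 0.038 }
];

function nextProvider(healthy) {
    var candidates = cdns.filter(function (c) { return healthy[c.name]; });
    // Prefer providers with pre-paid headroom, spending the largest gap first
    var underCommit = candidates.filter(function (c) { return c.usedGB < c.commitGB; });
    if (underCommit.length > 0) {
        underCommit.sort(function (a, b) {
            return (b.commitGB - b.usedGB) - (a.commitGB - a.usedGB);
        });
        return underCommit[0].name;
    }
    // Every commit is met: cheapest overage rate wins
    candidates.sort(function (a, b) { return a.overagePerGB - b.overagePerGB; });
    return candidates.length ? candidates[0].name : null;
}
```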

Call it Video Voodoo: it shouldn’t be possible to improve service reliability and reduce the cost of delivery…and yet, there it is. It turns out that eliminating a single point of failure introduces multiple points of efficiency. And, indeed, we’ve seen great results for companies that already have multiple CDN providers: simply avoiding overages on each CDN until all the commits are met can deliver returns that fundamentally change the economics of a streaming video service.

And changing the economics of streaming is fundamental to the next round of evolution in the industry. Netflix, the 800-pound gorilla, has turned over more than $20 billion in revenue over the last three years, yet generated less than half a billion in net income – about a 5% margin; Hulu (privately and closely held) is rumored to have racked up $1.8B in losses so far, and to still be generating red ink on some $2B in revenues. The bottom line is that delivering streaming video is expensive, for any number of reasons. Any engine that can measurably, predictably, and reliably take out cost is not just intriguing for streaming publishers – it is mandatory to at least explore.