Real user monitoring, or RUM, is a method for collecting metrics from end-users of a website or web-based application using some kind of passive monitoring software. The quality of a digital experience, including response time, reflects powerfully on brands. Understandably, the measurements and the insights RUM analysis can provide are a hot topic in the content provider industry. But do you know what brand of RUM you are really getting?
The most advanced website monitoring tools today typically rely on either real users or a synthetic analog, and it's important to understand the difference. Synthetic testing runs or emulates a web browser. This software can be quite sophisticated and usually does a good job of behaving exactly like the application it imitates.
Using either custom-built scripts or an analysis engine, the synthetic testing software generates multitudes of clicks and events on the website. Thousands of measurements can be quickly gathered this way, and analysis of the resulting data set can yield a great deal of useful information. However, though this method can efficiently collect excellent results in a short time, it has two very important deficiencies compared to our favorite RUM, Cedexis Radar Real User Measurements: first, it is only a simulation of users; second, it does not account for the critical last mile of the network.
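To make the mechanics concrete, here is a minimal sketch of how a scripted synthetic probe might hammer a single URL and roll up the timings. The function names (`summarize`, `run_probe`) and the percentile choices are illustrative assumptions, not any particular vendor's implementation:

```python
import statistics

def summarize(timings_ms):
    """Roll up one batch of response-time samples (milliseconds)."""
    return {
        "count": len(timings_ms),
        "median_ms": statistics.median(timings_ms),
        "p95_ms": statistics.quantiles(timings_ms, n=20)[-1],  # ~95th percentile
    }

def run_probe(fetch, url, samples=50):
    """Drive `fetch` (a caller-supplied function that times one request,
    standing in for a real HTTP client) repeatedly against one URL,
    the way a scripted synthetic agent generates traffic."""
    return summarize([fetch(url) for _ in range(samples)])
```

The key point is that every sample comes from the same script, on the same machine, over the same network path, which is exactly why the resulting distribution can look deceptively uniform.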
Simulated Users and Confirmation Bias
Synthetic users are not real users. Manually scripted synthetic sessions are subject to the subtle confirmation bias that plagues test engineers, subconsciously discouraging them from triggering the odd outlying bugs actual users always seem to find. Classic confirmation-bias problems will hamper the site: edge cases are not captured, real end-user practices are overlooked, and results cannot be correlated to actual end-user experience.
The Vital Last Leg
Synthetic user monitoring lacks the important last leg of network traffic essential to measuring the response time a real end-user experiences. Many synthetic environments run in major Internet hubs or behind firewalls, and thus do not suffer the latency and connectivity problems many end-users deal with. The high-speed, low-latency connections on which synthetic testing platforms live can therefore miss important speed issues, such as the impact of narrow-band mobile connections, congested ISP peering, or rendering delays caused by large, poorly constructed HTML. This diminishes the utility of synthetic testing wherever real user experience matters.
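A back-of-the-envelope calculation shows how much the last mile can dominate. The page size and link speeds below are hypothetical round numbers, and the model deliberately ignores real-world effects like TCP slow start, RTT, and protocol overhead:

```python
def transfer_seconds(page_bytes, link_bps):
    """Idealized transfer time: payload size divided by raw link bandwidth.
    Ignores TCP slow start, round-trip time, and protocol overhead."""
    return page_bytes * 8 / link_bps

PAGE = 2 * 1024 * 1024  # a hypothetical 2 MB page

datacenter_s = transfer_seconds(PAGE, 1e9)  # 1 Gbps data-center link: ~0.017 s
mobile_s = transfer_seconds(PAGE, 1e6)      # 1 Mbps narrow-band mobile: ~16.8 s
```

Even in this simplified model, the same page takes roughly a thousand times longer over the narrow-band last-mile link, a gap a probe sitting in a well-connected hub will never observe.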
Synthetic testing does have its place: before an application is released, it allows evaluation throughout development, and it is often the most effective way to stress test. Cedexis Radar RUM, by contrast, requires an established user base before it becomes effective, and it would be irresponsible to wait for that base to form before performing real-time testing of an application.
Apply the Right Tool
The primary difference between Cedexis Radar RUM and synthetic testing is that RUM is crowdsourced data collected from real users, and synthetic testing is performed by software agents attempting to emulate real users.
While synthetic testing can be launched from dozens or even hundreds of locations, it cannot effectively simulate the true diversity of RUM, which draws measurements from potentially millions of endpoints in every country and across tens of thousands of ISPs. The other differences expose shortcomings in the metrics synthetic testing gathers, which is why understanding the distinction between the two approaches matters.
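That diversity is precisely what RUM analysis slices along. A minimal sketch of how crowdsourced measurements might be aggregated per country and ISP (the field names and sample values are illustrative, not Cedexis's actual schema):

```python
from collections import defaultdict
from statistics import median

def aggregate(measurements):
    """Group RUM samples by (country, ISP) and compute each group's
    median response time. Field names are illustrative assumptions."""
    groups = defaultdict(list)
    for m in measurements:
        groups[(m["country"], m["isp"])].append(m["response_ms"])
    return {key: median(vals) for key, vals in groups.items()}

samples = [
    {"country": "US", "isp": "ISP-A", "response_ms": 120},
    {"country": "US", "isp": "ISP-A", "response_ms": 180},
    {"country": "DE", "isp": "ISP-B", "response_ms": 95},
]
```

With millions of real endpoints feeding in, each (country, ISP) bucket reflects conditions a fixed set of synthetic vantage points simply cannot reach.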
In my next post, we will explore the best ways to use these tools in cloud or content delivery environments.