Synthetic testing: A definition and how it compares to Real User Monitoring


Performance monitoring is critical for a healthy software application. No matter how much you test, once your code is in production, things will happen that you didn’t anticipate. If you don’t have synthetic testing or real user monitoring in place, opportunities for optimizations and fixes will slip through the cracks.

The two main types of application monitoring are Real User Monitoring (RUM) and synthetic testing (or synthetic monitoring). It’s important to know the differences and to understand which will deliver the results you need.

Synthetic testing vs Real User Monitoring (RUM)

Both real user monitoring and synthetic testing allow you to check and analyze the frontend performance and user experience of a website or application.

However, while synthetic monitoring is both a testing and monitoring technique that can be used before and after deployment, RUM is solely used after deployment, during the ‘operations and maintenance’ phase of the software development lifecycle (SDLC).

That said, neither technique is better or worse; they simply serve different purposes in frontend application monitoring. Essentially, while synthetic testing is active monitoring performed on simulated users, real user monitoring is passive monitoring carried out on real users.

Note, however, that here, we’re solely speaking about frontend application monitoring that focuses on user experience. There’s also application performance monitoring (APM), a backend monitoring technique that gives insight into the performance of your application server and server-side code, which is outside the scope of this article.

Now, let’s see the differences, use cases, advantages, and disadvantages of synthetic testing and real user monitoring in detail.

What is synthetic testing?

Synthetic testing is a method of understanding how your users experience your application by predicting their behavior. The main goal of synthetic testing is to prevent performance-related, functional, and other issues before real users encounter them — this is why it’s also called active or proactive monitoring.

You can perform synthetic tests manually or use a synthetic monitoring tool that automates the process. These tools connect to test servers (usually in different locations around the world) and use behavioral scripts to simulate the typical actions of end-users.

You might reproduce user logins, newsletter signups, form submissions, shopping cart operations, checkout processes, and other user journeys. In addition to testing the performance and user experience of critical business transactions and common navigation paths, synthetic testing tools can also provide information on uptime, response times, regional issues, the availability of API endpoints, and more.
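To make the idea of a behavioral script concrete, here is a minimal sketch of a check runner in JavaScript. Everything here is illustrative, not the API of any particular tool: real synthetic monitors drive actual browsers from test servers in multiple regions, whereas each "step" below is just a function that simulates one user action and throws on failure.

```javascript
// Minimal sketch of a scripted synthetic check runner (illustrative only).
// Each step is a [label, fn] pair; fn simulates one user action and
// throws when that action fails.
function runSyntheticCheck(name, steps) {
  const results = [];
  for (const [label, step] of steps) {
    const start = Date.now();
    try {
      step();
      results.push({ label, ok: true, ms: Date.now() - start });
    } catch (err) {
      results.push({ label, ok: false, ms: Date.now() - start, error: String(err) });
      break; // later steps in a user journey depend on the earlier ones
    }
  }
  return { name, passed: results.every((r) => r.ok), results };
}

// Example: a three-step login journey where the second step fails.
const report = runSyntheticCheck('login journey', [
  ['load login page', () => {}],
  ['submit credentials', () => { throw new Error('HTTP 500'); }],
  ['see dashboard', () => {}],
]);
```

In this example, `report.passed` is `false` and the third step never runs, mirroring how a real multi-step transaction check stops at the first broken action.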

Examples of synthetic testing use cases

Having little to no traffic

As synthetic testing is a proactive method, you can use it during the development phase before your application goes into production, or right after deployment, while you still have very little traffic and passive monitoring tools won’t give much insight.

With the help of simulated tests, you can make sure that your critical business transactions work properly, find performance bottlenecks, and detect user experience issues before they affect your real users.

Simulating specific segments or new markets

As synthetic monitoring tools let you set up different conditions, you can test your website or application for specific geographic locations, network request types, web browsers, API endpoints, and more. This can be useful when you are targeting a specific user segment or before you launch your application in a new market.

Monitoring uptime and third-party APIs

Another use case of synthetic testing tools is monitoring the availability of your service, including third-party APIs, so you’ll be notified when part of your application degrades or becomes entirely unavailable. As you can automate the entire process and run synthetic tests at pre-defined intervals, your team can be alerted as soon as a problem occurs.
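The alerting logic behind such interval-based checks can be sketched simply. The functions and threshold below are illustrative assumptions, not taken from any specific monitoring product: the idea is to compute availability from a history of pass/fail results and to alert only after several consecutive failures, so a single flaky run doesn’t page the team.

```javascript
// `history` is an array of booleans (true = check passed), newest last.

// Availability as a percentage of passed checks.
function availability(history) {
  if (history.length === 0) return 100;
  const up = history.filter(Boolean).length;
  return (100 * up) / history.length;
}

// Alert only after `threshold` consecutive failures, to avoid
// alerting on a single flaky run.
function shouldAlert(history, threshold = 3) {
  if (history.length < threshold) return false;
  return history.slice(-threshold).every((ok) => !ok);
}
```

For example, `availability([true, true, false, true])` is `75`, and `shouldAlert([true, false, false, false])` is `true` because the three most recent checks all failed.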

Improving Continuous Integration and Delivery (CI/CD)

If your team uses iterative development and frequently deploys new code, it can be a good idea to integrate synthetic testing into your CI/CD pipeline. If you test your code for different conditions before deployment, you minimize the risk of post-deployment errors and performance issues and prevent the accumulation of technical debt.
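One common way to wire synthetic results into a pipeline is a performance budget gate: the build fails if any measured metric exceeds its budget. The metric names and limits below are hypothetical examples, not the configuration format of any real CI tool.

```javascript
// Sketch of a CI/CD performance budget gate (names and limits illustrative).
// Returns a list of human-readable violations; a non-empty list would
// typically fail the build, e.g. via process.exit(1).
function checkBudgets(metrics, budgets) {
  const violations = [];
  for (const [name, limit] of Object.entries(budgets)) {
    if (metrics[name] !== undefined && metrics[name] > limit) {
      violations.push(`${name}: ${metrics[name]} exceeds budget of ${limit}`);
    }
  }
  return violations;
}

// Example: the page load time blows its budget, the bundle size does not.
const violations = checkBudgets(
  { pageLoadMs: 3200, bundleKb: 240 },
  { pageLoadMs: 3000, bundleKb: 250 }
);
```

Here `violations` contains a single entry for `pageLoadMs`, which the pipeline can surface in the build log before failing the deploy.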

Pros of synthetic testing

  • Create synthetic tests either manually or with an automated tool.
  • Proactively test complex, multi-step business transactions and user journeys for several sets of specific conditions.
  • Detect problems caused by third-party scripts and API endpoints.
  • Monitor critical database queries for availability and performance during low-traffic periods.
  • Rapidly alert your team to outages and performance problems.
  • Create a baseline for performance trends across countries.
  • Modern synthetic testing tools give you access to graphical user interfaces, code-free tests, and web recorders.

Cons of synthetic testing

  • Setting up and scripting synthetic tests manually requires coding knowledge and proper testing infrastructure.
  • Synthetic tools can’t track the actions of real users. Even if your tests come out fine, your real users may still experience issues not anticipated in test conditions.
  • Smaller UI changes can break testing scripts, which can lead to false alarms.
  • Difficult to maintain at scale.
  • Synthetic tests only produce lab data; you can’t measure Core Web Vitals in the field, which limits your insight into the metrics that affect search engine rankings.

How is real user monitoring different?

In contrast to synthetic testing, real user monitoring is a passive monitoring technique. Instead of using independent test servers and pre-written scripts, RUM tools collect data via a small script that silently runs in each user’s browser and captures data while they’re navigating your website or application.

The main goal of real user monitoring is to surface performance bottlenecks, such as slow-loading pages and poor UX, that directly impact your users and can negatively affect your bottom line. As RUM scripts load asynchronously and collect data in the background, they have minimal impact on the performance of your application.

Advanced RUM tools give you insight into both individual user sessions and aggregated data that you can further filter for time, geo-location, user device, browser, and more. They provide you with a range of performance metrics — for example, page load times, first paint, AJAX response time, and Google’s Core Web Vitals scores, depending on the platform.

Some real user monitoring tools, such as Raygun, further break down page load times into individual actions, such as DNS resolution, server latency, SSL handshake, and others, so that you can locate the culprit of each performance issue.
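A sketch of how such a breakdown can be derived in the browser from the standard Navigation Timing API (in a page, the entry comes from `performance.getEntriesByType('navigation')[0]`). The attribute names follow the W3C Navigation Timing Level 2 specification; the phase names in the returned object are illustrative, and Raygun’s own breakdown may differ.

```javascript
// Break a page load into phases from a PerformanceNavigationTiming-like
// object. All timestamps are in milliseconds.
function loadBreakdown(t) {
  return {
    dnsMs: t.domainLookupEnd - t.domainLookupStart,        // DNS resolution
    tcpMs: t.connectEnd - t.connectStart,                   // connection (incl. TLS)
    tlsMs: t.secureConnectionStart
      ? t.connectEnd - t.secureConnectionStart              // SSL/TLS handshake
      : 0,                                                  // 0 for plain HTTP
    ttfbMs: t.responseStart - t.requestStart,               // server latency
    downloadMs: t.responseEnd - t.responseStart,            // response download
    domProcessingMs: t.domContentLoadedEventStart - t.responseEnd, // DOM parsing
  };
}

// Example with a mock timing entry (values in ms since navigation start).
const breakdown = loadBreakdown({
  domainLookupStart: 5, domainLookupEnd: 25,
  connectStart: 25, connectEnd: 80, secureConnectionStart: 40,
  requestStart: 80, responseStart: 200, responseEnd: 260,
  domContentLoadedEventStart: 500,
});
```

With the mock entry above, DNS took 20 ms, the TLS handshake 40 ms, and the server took 120 ms to start responding, which is exactly the kind of per-phase attribution that helps locate the culprit of a slow page load.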

Examples of real user monitoring use cases

Capturing unreported performance issues

Users tend not to report the issues they encounter when using a website or application. You might get feedback or poor reviews complaining about slow load times or a sluggish interface, but this kind of report rarely gives you much context. This is where real user data can help enormously.

RUM tools not only locate the issues and provide insight into the corresponding user sessions, but can also send you alerts when specific conditions are met so that you can react in a timely manner. If you fix performance issues right after the first users encounter them, you can save a lot of time for your entire customer base.

Improving SEO and Core Web Vitals

Core Web Vitals are Google’s user-focused performance metrics that are factored into its search ranking algorithm. Passing the minimum thresholds for all three Core Web Vitals on both desktop and mobile helps your web pages rank higher in Google, and boosts crucial on-page metrics like engagement and conversion rates. This is where a good real user monitoring tool can help.

Even though you can create synthetic tests to estimate Core Web Vitals scores in different browsers, this is just lab data that can be very different from field data collected on real users. The best real user monitoring platforms, including Raygun, give you real data on the three Core Web Vitals, namely Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS), so that you know when you don’t meet the requirements and can understand the underlying issues.

Note: As of March 2024, Google has replaced FID (First Input Delay), the original interactivity metric, with INP (Interaction to Next Paint).
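Google publishes fixed thresholds for rating each Core Web Vital, evaluated at the 75th percentile of page views. The small helper below encodes those published thresholds; the function and object names are illustrative, not part of any library.

```javascript
// Google's published Core Web Vitals thresholds: [good, poor] boundaries.
// A value <= good is "good"; <= poor is "needs improvement"; else "poor".
const THRESHOLDS = {
  lcp: [2500, 4000], // Largest Contentful Paint, ms
  inp: [200, 500],   // Interaction to Next Paint, ms (replaced FID in 2024)
  cls: [0.1, 0.25],  // Cumulative Layout Shift, unitless score
};

function rate(metric, value) {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs improvement';
  return 'poor';
}
```

For instance, an LCP of 1,800 ms rates as "good", an INP of 350 ms as "needs improvement", and a CLS of 0.3 as "poor"; a RUM tool applies this rating to the 75th-percentile field value of each metric.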

[Image: Core Web Vitals in Raygun’s RUM tool]

Correcting the inaccuracies of synthetic testing

With a real user monitoring tool, you can also validate the assumptions behind your synthetic testing workflow. First, performance issues can be introduced during deployment that your synthetic monitoring tool won’t report, as you’ve run your tests before deployment in a simulated environment. Second, there can be variables across user journeys, transactions, setups, and locations that you just didn’t think of before seeing field data from real users.

If you compare your simulated tests with real user behavior, you can find the inaccuracies, discrepancies, and redundancies between the two and adjust your synthetic tests to better simulate the behavior of your real users.

Facilitating browser testing and the adoption of new features

Getting your website or application to run fast and without issues in different browsers across multiple devices is one of the most difficult tasks in frontend development. Browser monitoring is an important feature that a good RUM tool can provide.

You can create performance reports for different web browsers so that you can improve your code on a browser-to-browser basis, collect browser stats, and track and compare the performance of your most popular browsers. Browser monitoring also makes it more straightforward to test new frontend features that only have partial browser support and need vendor prefixes, polyfills, or fallback methods in some browsers.

Pros of real user monitoring

  • Tracks all user data and interactions, providing greater accuracy than synthetic testing.
  • Easily locate high-priority and critical pages and focus on them.
  • Reproduce the sessions of individual users in detail.
  • Collect field data for Core Web Vitals to help maintain and improve search rankings.
  • Gather historical data and compare the performance of your application at different times.
  • Gives you the context you don’t get from simulated tests and can be used to adjust your priorities and development goals.
  • Once the tracking script is added to your website, real user monitoring can be run without any coding knowledge.

Cons of real user monitoring

  • Only works after deployment, so you need real traffic to track user sessions.
  • Unlike synthetic testing, you can’t perform it manually, so you’ll need to invest in a RUM tool.
  • Doesn’t examine your competitors’ websites or applications, which is possible with synthetic testing.
  • Can generate overwhelming volumes of data, which can be hard to manage if not presented accessibly.


Real user monitoring and synthetic testing are complementary tools that approach application monitoring from different angles. Together, they create a comprehensive monitoring picture. Where synthetic monitoring falls short, real user monitoring fills in the gaps. It’s worth noting that Google has even recommended supplementing its own synthetic testing tools with real user monitoring.

Raygun’s Real User Monitoring offers detailed insights into end-users and even a weekly digest of your web and mobile application’s performance. You can fix performance problems quickly, seeing with complete clarity what went wrong and the root cause. To get unrivalled visibility into the actual experiences of your real users, grab a free trial of Raygun RUM.