Monitor your campaign experiments

After you’ve started running an experiment, it’s helpful to understand how to monitor its performance. By understanding how your experiment is performing in comparison to the original campaign, you can make an informed decision about whether to end your experiment, apply it to the original campaign, or use it to create a new campaign.

This article explains how to monitor and understand your experiments’ performance.


The new Google Ads experience is now the exclusive way for most users to manage their accounts. Note: automatic targeting is available only in the new Google Ads experience.

View your experiment’s performance

  1. Sign in to your Google Ads account.
  2. Expand the menu on the left, then under the All experiments header, click the name of the experiment whose performance you'd like to see. You'll be taken to a page that shows your experiment's information and a comparison of key metrics between your experiment and its original campaign.
  3. To adjust the date range for this data, use the date drop-down in the top right corner. Note that you can only see data between your experiment's start and end dates.
  4. To see this comparison at the ad group level, click the Ad groups tab below this table. Then, click the ad group you’d like to see data for.

How to interpret this data

In the table near the top, you'll see a comparison of the experiment's key metrics with those of the original campaign, along with arrows next to each metric.

Icon | Statistical significance ((1-p) value)
No icon | Not enough data
Diamond | <95%
One arrow (up or down) | 95%–99%
Two arrows (up or down) | 99%–99.9%
Three arrows (up or down) | >99.9%
  • The direction of the arrows indicates whether the experiment's values are higher or lower than the original campaign's.
  • The number of arrows indicates statistical significance, or the likelihood that a difference isn't due to chance. As many as three arrows can appear in the same direction, and the more arrows that appear, the more certain it is that the difference isn't due to chance.
  • A diamond indicates that the results aren't statistically significant. These are some reasons your results may not be statistically significant:
    • Your experiment hasn’t had enough time to run.
    • Your campaign doesn’t receive enough traffic.
    • Your traffic split was too small and your experiment isn’t receiving enough traffic.
    • The changes you’ve made haven’t resulted in a statistically significant performance difference.
  • Experiments with more statistically significant results are more likely to continue performing with similar results after they’re converted to a campaign.
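The arrow tiers above can be thought of as bands of confidence, where confidence is (1-p). As an illustration only (the exact statistical test Google Ads uses isn't documented here), the sketch below maps a p-value to an arrow count using the tiers from the table, and uses a standard two-proportion z-test to approximate a p-value for a difference in click-through rates between an experiment and its original campaign. The function names and the z-test choice are assumptions for illustration, not the actual Google Ads implementation.

```python
import math

def arrow_count(p_value):
    """Map a p-value to the number of arrows shown, using the
    confidence tiers from the table above (illustrative only)."""
    confidence = 1 - p_value
    if confidence >= 0.999:
        return 3  # >99.9%: three arrows
    if confidence >= 0.99:
        return 2  # 99%-99.9%: two arrows
    if confidence >= 0.95:
        return 1  # 95%-99%: one arrow
    return 0      # <95%: diamond, not statistically significant

def two_proportion_p_value(clicks_a, impressions_a, clicks_b, impressions_b):
    """Approximate two-sided p-value for a difference in CTRs using a
    two-proportion z-test (normal approximation); an assumption for
    illustration, not the test Google Ads actually runs."""
    rate_a = clicks_a / impressions_a
    rate_b = clicks_b / impressions_b
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / impressions_a + 1 / impressions_b))
    if se == 0:
        return 1.0
    z = abs(rate_a - rate_b) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
```

For example, an experiment CTR of 10% versus an original-campaign CTR of 5% over 1,000 impressions each yields a very small p-value, which `arrow_count` would render as three arrows; the same CTRs over only 100 impressions each may land in the diamond band, matching the "not enough traffic" reasons listed above.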