Monitor your campaign experiments

After you’ve started running an experiment, it’s helpful to understand how to monitor its performance. By understanding how your experiment is performing compared against the original campaign, you can make an informed decision about whether to end your experiment, apply it to the original campaign, or use it to create a new campaign.

This article explains how to monitor and understand the performance of your experiments.

Instructions


View your experiment’s performance

  1. Sign in to your AdWords account.
  2. Expand the menu on the left-hand side, then click the name of the experiment that you’d like to see performance for under the All experiments header. You’ll be taken to a page that shows information on your experiment and a comparison of key metrics for your experiment and its original campaign.
  3. To adjust the date range for this data, use the date drop-down menu in the top right-hand corner. Note that you can only see data between your experiment's start and end dates.
  4. To see this comparison at the ad group level, click the Ad groups tab under this table. Then, click the ad group that you’d like to see data for.

How to interpret this data

In the table near the top, you’ll see a comparison of the experiment’s key metrics against those of the original campaign, along with arrows next to each metric.

Icon | Statistical significance ((1-p) value)
No icon | Not enough data
Blank diamond (◇) | <95%
One arrow (↑ or ↓) | 95%-99%
Two arrows (↑↑ or ↓↓) | 99%-99.9%
Three arrows (↑↑↑ or ↓↓↓) | >99.9%
  • The direction of the arrows indicates whether the experiment’s values are higher or lower than those of the original campaign.
  • The number of arrows indicates statistical significance, or how likely it is that the difference isn’t due to chance. Up to three arrows (↑↑↑ or ↓↓↓) can appear in the same direction, and the more arrows that appear, the more certain it is that the difference isn’t due to chance (see the sketch after this list).
  • A diamond (◇) indicates that the results aren’t statistically significant. There are several reasons why your results may not be statistically significant:
    • Your experiment hasn’t run for long enough.
    • Your campaign doesn’t receive enough traffic.
    • Your traffic split is too small, so your experiment isn’t receiving enough traffic.
    • The changes that you’ve made haven’t resulted in a statistically significant difference in performance.
  • Experiments with more statistically significant results are more likely to continue performing with similar results after they’ve been converted to a campaign.
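To make the icon scheme concrete, here is a minimal sketch of how a (1-p) value could map to the diamond and arrow icons described above. AdWords doesn’t document the exact statistical test it uses, so this sketch assumes a two-sided two-proportion z-test on click-through rate; the function names and the sample click and impression counts are hypothetical.

```python
# Illustrative sketch only: AdWords does not disclose its exact test.
# Assumes a two-sided two-proportion z-test on click-through rate, then
# maps the resulting (1 - p) value to the diamond/arrow scheme above.
from math import sqrt, erf

def two_proportion_p_value(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-sided p-value for the difference in CTR between two campaigns."""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def significance_icon(p_value, experiment_higher):
    """Map (1 - p) to the diamond/arrow scheme (ignores the 'not enough data' case)."""
    confidence = 1 - p_value
    if confidence < 0.95:
        return "◇ (not statistically significant)"
    if confidence < 0.99:
        arrows = 1
    elif confidence < 0.999:
        arrows = 2
    else:
        arrows = 3
    return ("↑" if experiment_higher else "↓") * arrows

# Hypothetical example: experiment CTR 5.2% vs. original campaign CTR 4.5%.
p = two_proportion_p_value(clicks_a=520, impressions_a=10_000,   # experiment
                           clicks_b=450, impressions_b=10_000)   # original
print(significance_icon(p, experiment_higher=True))  # prints "↑" (95%-99%)
```

In this hypothetical example, the (1-p) value lands between 95% and 99%, so a single upward arrow is shown; with more traffic or a larger difference, the same comparison would earn two or three arrows.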