Monitor your campaign experiments
This article explains how to monitor and understand your experiments’ performance.
The new Google Ads experience is now the exclusive way for most users to manage their accounts. If you’re still using the previous AdWords experience, choose Previous below.
View your experiment’s performance
- Sign in to your AdWords account.
- Expand the page menu on the left, then, under the All experiments header, click the name of the experiment whose performance you’d like to view. You’ll be taken to a page that shows your experiment’s information and a comparison of key metrics between your experiment and its original campaign.
- To adjust the date range for this data, use the date drop-down in the top right corner. Note that you can only see data between your experiment’s start and end dates.
- To see this comparison at the ad group level, click the Ad groups tab below this table. Then, click the ad group you’d like to see data for.
How to interpret this data
In the table near the top, you’ll see a comparison of the experiment’s key metrics with those of the original campaign, along with arrows next to each metric.
| Icon | Statistical significance ((1-p) value) |
| --- | --- |
| No icon | Not enough data |
- The direction of the arrows indicates whether the experiment’s values are higher or lower than the original campaign’s.
- The number of arrows indicates statistical significance, or the likelihood that the differences aren’t due to chance. As many as three arrows can appear in the same direction; the more arrows that appear, the more certain it is that the difference isn’t due to chance.
- A diamond indicates that the results aren’t statistically significant. These are some reasons your results may not be statistically significant:
- Your experiment hasn’t had enough time to run.
- Your campaign doesn’t receive enough traffic.
- Your traffic split was too small and your experiment isn’t receiving enough traffic.
- The changes you’ve made haven’t resulted in a statistically significant performance difference.
- Experiments with more statistically significant results are more likely to continue performing with similar results after they’re converted to a campaign.
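To build intuition for how arrows relate to statistical significance, here is a minimal sketch in Python. It uses a standard two-proportion z-test and maps the resulting (1-p) confidence to an arrow count. Google doesn’t publish the exact test or thresholds it uses, so the test choice and the 95% / 99% / 99.9% cutoffs here are illustrative assumptions, not the actual product logic.

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test.

    conv_a/n_a: conversions and clicks for the original campaign.
    conv_b/n_b: conversions and clicks for the experiment.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0  # no variation at all; nothing to test
    z = (p_b - p_a) / se
    # Standard normal CDF via erf; two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def arrows(p_value, direction_up, thresholds=(0.95, 0.99, 0.999)):
    """Map (1-p) confidence to 0-3 arrows.

    The thresholds are hypothetical placeholders for this sketch.
    Returns a diamond when no threshold is met (not significant).
    """
    confidence = 1 - p_value
    count = sum(confidence >= t for t in thresholds)
    if count == 0:
        return "◆"
    return ("▲" if direction_up else "▼") * count

# Example: 2.0% vs 2.6% conversion rate over 10,000 clicks each.
p = two_proportion_p_value(200, 10_000, 260, 10_000)
print(arrows(p, direction_up=True))
```

In this example the difference clears the 95% and 99% cutoffs but not 99.9%, so two arrows appear; with only a few hundred clicks per arm, the same rate difference would show a diamond instead, which mirrors the “not enough traffic” reasons listed above.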