Monitor your campaign experiments
AdWords Campaign Experiments (ACE) are no longer supported
On February 1, 2017, AdWords Campaign Experiments was replaced by campaign drafts and experiments to give you a more powerful way to test changes to your AdWords campaigns, measure results, and apply the changes that are working well for your business.
After you’ve started running an experiment, it’s helpful to understand how to monitor its performance. By understanding how your experiment is performing in comparison to the original campaign, you can make an informed decision about whether to end your experiment, apply it to the original campaign, or use it to create a new campaign.
This article explains how to monitor and understand your experiments’ performance.
Before you begin
If you haven’t yet created an experiment, read Set up a campaign experiment.
How to view your experiment's performance
- Expand the menu on the left, then under the All experiments header, click the name of the experiment you'd like to see performance for. You'll be taken to a page that shows your experiment's information and a comparison of key metrics for your experiment and its original campaign.
- To adjust the date range for this data, use the date range drop-down in the top right corner. Note that you can only see data between your experiment's start and end dates.
- To see this comparison at the ad group level, click the Ad groups tab below this table, then click the ad group you'd like to see data for. (A programmatic way to pull the same comparison follows these steps.)
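The steps above use the web interface. If you'd rather pull the same comparison programmatically, here is a minimal sketch using the Google Ads API Python client (the reporting successor to the AdWords API). It is illustrative only: the customer ID, campaign IDs, and date range are placeholders, and it assumes a configured google-ads.yaml and an experiment that runs as its own campaign alongside the original.

```python
from google.ads.googleads.client import GoogleAdsClient

# Placeholder credentials file; see the google-ads Python client docs.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

# Key metrics for the original campaign and the experiment campaign,
# restricted to the experiment's date range (data outside the start and
# end dates isn't comparable). Both campaign IDs are placeholders.
query = """
    SELECT
      campaign.name,
      metrics.impressions,
      metrics.clicks,
      metrics.ctr,
      metrics.conversions
    FROM campaign
    WHERE campaign.id IN (1111111111, 2222222222)
      AND segments.date BETWEEN '2017-02-01' AND '2017-02-28'
"""

for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        m = row.metrics
        print(f"{row.campaign.name}: {m.impressions} impressions, "
              f"{m.clicks} clicks, CTR {m.ctr:.2%}, {m.conversions:.1f} conversions")
```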
How to interpret this data
In the table near the top, you'll see a comparison of the experiment's key metrics with those of the original campaign, along with arrows next to each metric. Each comparison carries a statistical significance, shown as a (1-p) value: for example, a value of 0.95 means there's only a 5% likelihood that the observed difference is due to chance.
|Icon|Statistical significance ((1-p) value)|
|---|---|
|One to three arrows (▲ or ▼)|Statistically significant; more arrows mean a higher (1-p) value|
|Diamond (◆)|Not statistically significant|
|No icon|Not enough data|
- The direction of the arrows indicates whether the experiment's values are higher or lower than the original campaign's.
- The number of arrows indicates statistical significance, or the likelihood that the difference isn't due to chance. As many as three arrows (▲▲▲ or ▼▼▼) can appear in the same direction; the more arrows that appear, the more certain it is that the difference isn't due to chance (see the sketch after this list).
- A diamond (◆) indicates that the results aren't statistically significant. These are some reasons your results may not be statistically significant:
- Your experiment hasn’t had enough time to run.
- Your campaign doesn’t receive enough traffic.
- Your traffic split was too small and your experiment isn’t receiving enough traffic.
- The changes you’ve made haven’t resulted in a statistically significant performance difference.
- Experiments with more statistically significant results are more likely to keep performing similarly after they're applied to the original campaign or converted into a new campaign.
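For intuition about the (1-p) values behind the arrows, here is a minimal sketch of a two-proportion z-test on click-through rate, a standard way to compare an experiment arm against its control. The article doesn't document the exact test AdWords runs, so treat this as illustrative; the function name and all traffic figures are invented for the example.

```python
import math
from scipy.stats import norm

def significance(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-sided two-proportion z-test on CTR.

    Returns (1 - p): the closer to 1, the less likely the CTR
    difference between the two campaigns is due to chance.
    """
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    # Pooled CTR under the null hypothesis that both campaigns perform equally.
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / impressions_a + 1 / impressions_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return 1 - p_value

# Original at 2.0% CTR, experiment at 2.3% CTR, 50,000 impressions each:
print(significance(1000, 50_000, 1150, 50_000))  # ~0.999, clearly significant

# Same CTRs with a tenth of the traffic (e.g. a small traffic split):
print(significance(100, 5_000, 115, 5_000))      # ~0.70, not significant
```

Note how the second call, with identical click-through rates, fails to reach significance purely because there is less traffic; this is why a small traffic split or a short run, two of the reasons listed above, can leave results inconclusive.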