After you’ve started running an experiment, it’s helpful to understand how to monitor its performance. By understanding how your experiment is performing in comparison to the original campaign, you can make an informed decision about whether to end your experiment, apply it to the original campaign, or use it to create a new campaign.
This article explains how to monitor and understand your experiments’ performance.
Instructions
Note: The instructions below are part of a new Google Ads user experience that will launch for all advertisers in 2024. If you’re still using the previous version of Google Ads, review the Quick reference map or use the Search bar in the top navigation panel of Google Ads to find the page you’re searching for.
View your experiment’s performance
- In your Google Ads account, click the Campaigns icon.
- Click the Campaigns drop-down in the section menu.
- Click Experiments.
- Find and click the experiment that you want to check performance for.
- You’ll see the “Experiment summary” table and a scorecard.
- You can choose to Apply experiment or End experiment.
What the scorecard shows
- Performance comparison: This shows the dates over which your experiment’s performance is compared to the original campaign’s performance. Only full days that fall both within your experiment’s start and end dates and within the date range you’ve selected for the table below are included. If there’s no overlap, the full days between your experiment’s start and end dates are used for “Performance comparison.”
- By default, you’ll see performance data for Clicks, CTR, Cost, Impressions, and All conversions, but you can select the performance metrics you want to view by clicking the down arrow next to the metric name. You’ll be able to choose from the following:
- Clicks
- CTR
- Cost
- Avg. CPC
- Impr
- All conv. (available only if you’ve set up conversion tracking)
- Conv. rate (available only if you’ve set up conversion tracking)
- Conversions (available only if you’ve set up conversion tracking)
- Cost / conv. (available only if you’ve set up conversion tracking)
- View-through conv. (available only if you’ve set up conversion tracking)
- The first line below each metric name shows your experiment’s data for that metric. For example, if you see 4K below “Clicks,” that means your experiment’s ads have received 4,000 clicks since it began running.
- The second line shows an estimated performance difference between the experiment and the campaign.
- The first value shows the performance difference your experiment saw for that metric when compared to the original campaign. For example, if you see +10% for Clicks, it’s estimated that your experiment received 10% more clicks than the original campaign. If there’s not enough data available yet for the original campaign and/or the experiment, you’ll see “‑‑”.
- The second value shows the possible range for the performance difference between the experiment and the original campaign, based on your chosen confidence level. For example, with a 95% confidence interval, [+8%, +12%] means the experiment may be seeing anywhere from an 8% to a 12% increase for that metric compared to the original campaign. If there’s not enough data available yet for the original campaign and/or the experiment, you’ll see “‑‑”. You can pick your own confidence level (80% is the default) to better understand your experiment metrics with dynamic confidence reporting. For one way a range like this can be computed, see the sketch after this list.
- If your result is statistically significant, you’ll also see a blue asterisk.
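Google Ads doesn’t publish the exact statistical model behind the scorecard, but a rough sense of how a range like [+8%, +12%] can arise may help when reading it. The hypothetical Python sketch below builds a normal-approximation confidence interval for the relative CTR difference between the experiment and the original campaign; the function name, the input counts, and the simplification of ignoring uncertainty in the original campaign’s CTR are assumptions made purely for illustration.

```python
# Illustrative only: Google Ads does not document its exact methodology.
# A minimal sketch of a normal-approximation confidence interval for the
# relative difference ("lift") in CTR between an experiment and its
# original campaign. All numbers below are hypothetical.
from math import sqrt
from statistics import NormalDist

def relative_lift_ci(clicks_exp, impr_exp, clicks_orig, impr_orig, confidence=0.80):
    """Return (lift, lower, upper) for the relative CTR difference.

    Uses a two-proportion normal approximation for the absolute difference,
    then divides by the original CTR; this simplification ignores the
    uncertainty in the original CTR itself.
    """
    p_exp = clicks_exp / impr_exp
    p_orig = clicks_orig / impr_orig
    diff = p_exp - p_orig
    se = sqrt(p_exp * (1 - p_exp) / impr_exp + p_orig * (1 - p_orig) / impr_orig)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)   # ~1.28 for 80%, ~1.96 for 95%
    lower, upper = diff - z * se, diff + z * se
    return diff / p_orig, lower / p_orig, upper / p_orig

# Hypothetical data: experiment arm vs. original campaign arm.
lift, low, high = relative_lift_ci(4_000, 80_000, 3_600, 80_000, confidence=0.95)
print(f"Estimated lift: {lift:+.0%}, 95% CI: [{low:+.0%}, {high:+.0%}]")
```

With these made-up counts the sketch prints an estimated lift of about +11% with a 95% interval of roughly [+6%, +16%], mirroring the format the scorecard uses.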
Tip
Point your cursor over this second line for a more detailed explanation of what you’re seeing. You'll be able to view the following information:
- Statistical significance: You’ll see whether your data is statistically significant.
- Statistically significant: This means your p-value is less than or equal to 5%. In other words, your data is likely not due to chance, and your experiment is more likely to continue performing with similar results if it’s converted to a campaign.
- Not statistically significant: This means your p-value is greater than 5%. Here are some possible reasons why your data might not be statistically significant:
- Your experiment hasn’t had enough time to run.
- Your campaign doesn’t receive enough traffic.
- Your traffic split was too small and your experiment isn’t receiving enough traffic.
- The changes you’ve made haven’t resulted in a statistically significant performance difference.
- Whether or not your data is statistically significant, you’ll see an explanation of how likely it is that the performance difference was due to random chance, such as: “There's a 0.2% (p-value) chance of getting this performance (or a larger performance difference) due to randomness. The smaller the p-value, the more significant the result.” For one way a p-value like this can be computed, see the sketch after this tip.
- Confidence interval: You’ll also see more details about the confidence interval for the performance difference with an explanation like the following: “There's a 95% chance that your experiment sees a +10% to +20% difference for this metric when compared to the original campaign.”
- Finally, you’ll see the actual data for that metric for the experiment and the original campaign.
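The explanations above quote a p-value without showing where such a number comes from. Purely as an illustration (Google Ads doesn’t document the test it uses), the sketch below applies a standard two-sided, two-proportion z-test to hypothetical click and impression counts; with these made-up figures it happens to land near the 0.2% p-value used in the example text.

```python
# Illustrative only: Google Ads does not document its exact test.
# A hedged sketch of a two-sided, two-proportion z-test, one common way to
# obtain a p-value for a CTR difference. All inputs are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(clicks_exp, impr_exp, clicks_orig, impr_orig):
    """Two-sided p-value for the difference in CTR between two arms."""
    p_exp = clicks_exp / impr_exp
    p_orig = clicks_orig / impr_orig
    pooled = (clicks_exp + clicks_orig) / (impr_exp + impr_orig)
    se = sqrt(pooled * (1 - pooled) * (1 / impr_exp + 1 / impr_orig))
    z = (p_exp - p_orig) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical data: experiment arm vs. original campaign arm.
p = two_proportion_p_value(1_750, 35_000, 1_575, 35_000)
print(f"p-value: {p:.1%}")  # chance of a difference at least this large arising by randomness
print("Statistically significant" if p <= 0.05 else "Not statistically significant")
```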
What you can do in the scorecard
- In the scorecard, you can change the metrics you see by using the drop-down next to each metric name.
- To see the scorecard for an ad group in the experiment, click that ad group in the table below.