Monitor your experiments

After you’ve started running an experiment, it’s helpful to understand how to monitor its performance. By understanding how your experiment is performing in comparison to the original campaign, you can make an informed decision about whether to end your experiment, apply it to the original campaign, or use it to create a new campaign.

This article explains how to monitor and understand your experiments’ performance.

Instructions

Note: The instructions below are part of the new design for the Google Ads user experience. To use the previous design, click the "Appearance" icon, and select Use previous design. If you're using the previous version of Google Ads, review the Quick reference map or use the Search bar in the top navigation panel of Google Ads to find the page you’re searching for.

View your experiment’s performance

  1. In your Google Ads account, click the Campaigns icon.
  2. Click the Campaigns drop-down in the section menu.
  3. Click Experiments.
  4. Find and click the experiment that you want to check performance for.
  5. Review the “Experiment summary” table and scorecard, then choose to Apply experiment or End experiment.

What the scorecard shows

  • Performance comparison: This shows the dates for which your experiment's performance is being compared to the original campaign's performance. Only full days that fall between your experiment's start and end dates and the date range you've selected for the table below will show. If there is no overlap, the full days between your experiment's start and end dates will be used for "Performance comparison."
  • By default, you’ll notice performance data for Clicks, CTR, Cost, Impressions, and All conversions, but you can select the performance metrics you want to view by clicking the down arrow next to the metric name.
  • The first line below each metric name shows your experiment’s data for that metric. For example, if you notice 4K below “Clicks,” that means your experiment’s ads have received 4,000 clicks since it began running.
  • The second line shows an estimated performance difference between the experiment and the campaign.
    • The first value shows the performance difference your experiment saw for that metric when compared to the original campaign. For example, if you notice +10% for Clicks, it’s estimated that your experiment received 10% more clicks than the original campaign. If there’s not enough data available yet for the original campaign and/or the experiment, you’ll notice “‑‑”.
    • The second value shows the confidence interval: the possible range for the performance difference between the experiment and the original campaign. For example, with a 95% confidence interval, [+8%, +12%] means the experiment may have seen anywhere from an 8% to a 12% increase for that metric when compared to the original campaign. If there’s not enough data available yet for the original campaign or the experiment, you’ll notice “‑‑”. You can pick your own confidence interval (80% is the default) to better understand your experiment metrics with dynamic confidence reporting.
    • If your result is statistically significant, you’ll also find a blue asterisk.
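Google doesn’t publish the exact methodology behind the scorecard, but the first value on the second line can be read as a simple relative difference. The sketch below is illustrative only (the example click counts are made up, and it assumes an equal traffic split so the raw counts are directly comparable):

```python
# Illustrative sketch only -- not Google Ads' actual methodology.
# Shows how a value like "+10%" on the scorecard's second line can be read:
# (experiment metric - original campaign metric) / original campaign metric.

def relative_difference(experiment_value: float, control_value: float) -> float:
    """Percent difference of the experiment vs. the original campaign."""
    return (experiment_value - control_value) / control_value * 100

# Hypothetical example: experiment received 4,400 clicks, the original
# campaign 4,000, with an equal traffic split between the two.
diff = relative_difference(4400, 4000)
print(f"{diff:+.0f}%")  # -> +10%
```

A confidence interval such as [+8%, +12%] then brackets this point estimate with the range of differences consistent with the observed data.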

Tip

Point your cursor over this second line for a more detailed explanation of what you’re reviewing. You'll be able to view the following information:

Statistical significance: You’ll find whether your data is statistically significant.
  • Statistically significant: This means your p-value is less than or equal to 5%. In other words, your data is likely not due to chance, and your experiment is more likely to continue performing with similar results if it’s converted to a campaign.
  • Not statistically significant: This means your p-value is greater than 5%. Here are some possible reasons your data may not be statistically significant:
    • Your experiment hasn’t had enough time to run.
    • Your campaign doesn’t receive enough traffic.
    • Your traffic split was too small and your experiment isn’t receiving enough traffic.
    • The changes you’ve made haven’t resulted in a statistically significant performance difference.
  • Whether or not your data was shown to be statistically significant, you’ll notice an explanation like the following to show the level of likelihood that the performance data was due to random chance: “There's a 0.2% (p-value) chance of getting this performance (or a larger performance difference) due to randomness. The smaller the p-value, the more significant the result.”
  • Confidence interval: You’ll also find more details about the confidence interval for the performance difference with an explanation like the following: “There's a 95% chance that your experiment saw a +10% to +20% difference for this metric when compared to the original campaign.”
  • Finally, you’ll find the actual data for that metric for the experiment and the original campaign.
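Google Ads computes significance internally, so the following is only a simplified sketch of what a p-value expresses for a rate metric like CTR. It uses a standard two-proportion z-test on hypothetical click and impression counts; the real scorecard may use a different test:

```python
# Illustrative sketch only -- Google Ads' internal test is not published.
# A two-proportion z-test shows what "p-value" means for a metric like CTR:
# the chance of seeing a difference at least this large due to randomness.
from statistics import NormalDist

def ctr_p_value(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int) -> float:
    """Two-sided p-value for the difference in CTR between two arms."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    # Pooled CTR under the null hypothesis of "no real difference".
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = (p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: experiment 4,400 clicks / 100,000 impressions,
# original campaign 4,000 clicks / 100,000 impressions.
p = ctr_p_value(4400, 100_000, 4000, 100_000)
print(f"p-value: {p:.6f}")  # well below 0.05 -> "statistically significant"
```

The smaller the p-value, the less plausible it is that the observed difference is random noise, which is why a threshold of 5% is used to label a result statistically significant.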

What you can do in the scorecard

  • In the scorecard, you can change the metric shown using the drop-down next to the metric name.
  • To review the scorecard for an ad group in the experiment, click that ad group in the table below.
  • To view details like the name and budget of a campaign, hover over the corresponding cell in the table.

Understand the time series chart

The time series chart displays the performance of up to 2 metrics in your experiment and shows how they've changed over time in both treated and control campaigns. With this chart, you can compare the effects your experiments have on a particular metric and learn more about how it performs over time.

Apply or end an experiment

To apply an experiment on a campaign or end an experiment for any reason, click the "Apply" or "End" button in the lower right corner of the "Experiment summary" card above the time series chart.
