Monitor your ad variations

After you’ve started running ad variations, you can monitor their performance. Understanding how your ad variations perform helps you make an informed decision about whether to replace your original ads with the better-performing versions.

This article explains how to monitor and understand the performance of your ad variations.

Before you begin

If you haven’t yet created an ad variation, see Set up an ad variation.

Instructions

Note: The instructions below are part of the new design for the Google Ads user experience. To use the previous design, click the "Appearance" icon and select Use previous design. If you're using the previous version of Google Ads, review the Quick reference map or use the Search bar in the top navigation panel of Google Ads to find the page you're looking for.
  1. In your Google Ads account, click the Campaigns icon.
  2. Click the Campaigns drop-down in the section menu.
  3. Click Experiments.
  4. Click Ad variations. You’ll see a table of ad variations you’ve created, along with information about each variation.

The number of affected ads, clicks, and impressions is listed for each ad variation. When you click an ad variation, you’ll see a comparison of its performance metrics with those of the original ad.

About performance metrics in ad variations

The performance metrics you’ll see for your ad variations include four numbers:

  1. The first number is the value of each performance metric for the ad variation alone. For example, your ad variation got X clicks.
  2. The percentage outside the brackets "[]" is the difference between your ad variation and the original. For example, your variation got Y% more clicks than the original.
  3. The two numbers inside the brackets give an expected range at your chosen confidence level. For example, if you picked an 80% confidence interval, there’s an 80% chance that the true difference falls between A% and B% (see the sketch after this list).
Note: You can pick your own confidence interval (80% is the default) to better understand your experiment metrics with dynamic confidence reporting.
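
Google doesn’t publish the exact formula behind these numbers, but the way the point estimate and the bracketed range fit together can be illustrated with standard statistics. The Python sketch below is not Google’s implementation: it compares the click-through rates of a variation and its original under a binomial assumption and reports the relative difference with a log relative-risk confidence interval. All names and figures are made up for the example.

# Illustrative sketch only -- not how Google Ads computes its reporting.
# Assumes clicks are binomial and uses a standard log relative-risk interval.
from statistics import NormalDist
import math

def lift_with_ci(var_clicks, var_impressions, orig_clicks, orig_impressions,
                 confidence=0.80):
    """Return the variation's % lift in CTR over the original, plus a CI."""
    p_var = var_clicks / var_impressions      # variation click-through rate
    p_orig = orig_clicks / orig_impressions   # original click-through rate

    # Point estimate: relative difference, e.g. +8.5 means 8.5% more clicks
    # per impression than the original.
    lift = (p_var / p_orig - 1) * 100

    # Standard error of log(p_var / p_orig) under the binomial model.
    se = math.sqrt((1 - p_var) / var_clicks + (1 - p_orig) / orig_clicks)

    z = NormalDist().inv_cdf(0.5 + confidence / 2)   # ~1.28 for an 80% CI
    log_ratio = math.log(p_var / p_orig)
    lower = (math.exp(log_ratio - z * se) - 1) * 100
    upper = (math.exp(log_ratio + z * se) - 1) * 100
    return lift, (lower, upper)

# Made-up numbers: 1,300 clicks on 50,000 impressions for the variation
# vs. 1,150 clicks on 48,000 impressions for the original.
lift, (lo, hi) = lift_with_ci(1300, 50_000, 1150, 48_000, confidence=0.80)
print(f"{lift:+.1f}% clicks vs. original [{lo:+.1f}%, {hi:+.1f}%]")

With these made-up inputs the script prints "+8.5% clicks vs. original [+3.1%, +14.2%]", which mirrors how a row in the ad variations comparison reads: the point estimate outside the brackets and the 80% range inside them.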

When a metric is marked with a blue asterisk "*", it’s statistically significant: it’s at least 95% likely that the impact on performance resulted from the change you made rather than from random chance.

Generally speaking, significance is affected by three factors:

  • The difference in performance between the original ads and the modified ads. Larger differences tend to increase significance.
  • The variability in performance. A campaign where clicks vary by 50% from day to day has more variability than one where clicks vary by 2%. Large variability tends to decrease significance.
  • The total number of impressions for the ad variation. The more impressions, the higher the statistical significance may be (illustrated in the sketch below).
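
To see why impression volume matters, here’s a second illustrative sketch (again, not Google’s internal test): a standard two-proportion z-test applied to the same relative lift in click-through rate at two traffic levels. With made-up numbers, an identical 10% lift is nowhere near significant at low volume but clears the 95% bar comfortably once there are enough impressions.

# Illustrative sketch only -- a textbook two-proportion z-test, not the
# test Google Ads runs internally. Numbers are made up.
from statistics import NormalDist
import math

def p_value(var_clicks, var_impr, orig_clicks, orig_impr):
    """Two-sided two-proportion z-test on CTR, using a pooled standard error."""
    p_var, p_orig = var_clicks / var_impr, orig_clicks / orig_impr
    pooled = (var_clicks + orig_clicks) / (var_impr + orig_impr)
    se = math.sqrt(pooled * (1 - pooled) * (1 / var_impr + 1 / orig_impr))
    z = (p_var - p_orig) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# The same 10% relative lift in CTR (2.2% vs. 2.0%) at two traffic levels.
for impressions in (10_000, 200_000):
    p = p_value(int(0.022 * impressions), impressions,
                int(0.020 * impressions), impressions)
    verdict = "significant at 95%" if p < 0.05 else "not significant"
    print(f"{impressions:>7} impressions per arm: p = {p:.3f} -> {verdict}")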
