Monitor the performance of a video experiment

After you set up a video experiment, you can monitor its performance in Google Ads and find the best-performing video ads in the experiment arms. By understanding which ad performs better in the experiment, you can make an informed decision about which campaign to continue using and where to allocate more budget.

This article explains how to monitor and understand the performance of a video experiment.

Instructions

Note: The instructions below are part of the new design for the Google Ads user experience. To use the previous design, click the "Appearance" icon, and select Use previous design. If you're using the previous version of Google Ads, review the Quick reference map or use the Search bar in the top navigation panel of Google Ads to find the page you’re searching for.
  1. In your Google Ads account, click the Campaigns icon.
  2. Click the Campaigns drop-down in the section menu.
  3. Click Experiments, then click Video experiments.
  4. Select an experiment to view its results, either while it's still in progress (directional) or after it has completed (conclusive).

    Screenshot: UI dashboard for monitoring creative experiments for video campaigns
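If you'd rather pull the underlying campaign metrics for the experiment arms outside the UI, the sketch below uses the Google Ads API Python client with a GAQL query. It's only an illustration under stated assumptions: the google-ads package is installed, credentials live in a google-ads.yaml file, and the customer ID and the two arm campaign IDs are placeholders you replace with your own. The experiment-level confidence callouts described in this article appear only in the Google Ads UI; this query retrieves just the raw per-campaign metrics behind them.

# Sketch: fetch raw metrics for the two experiment-arm campaigns (illustrative only).
from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"                     # placeholder account ID (no dashes)
ARM_CAMPAIGN_IDS = (1111111111, 2222222222)    # placeholder campaign IDs of the arms

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = f"""
    SELECT campaign.id, campaign.name, metrics.impressions, metrics.clicks,
           metrics.conversions, metrics.cost_micros
    FROM campaign
    WHERE campaign.id IN ({', '.join(str(i) for i in ARM_CAMPAIGN_IDS)})
      AND segments.date DURING LAST_30_DAYS
"""

for batch in ga_service.search_stream(customer_id=CUSTOMER_ID, query=query):
    for row in batch.results:
        m = row.metrics
        print(row.campaign.name, m.impressions, m.clicks, m.conversions, m.cost_micros)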

Directional vs. conclusive results

You can view the likely winner among your experiment arms while the video experiment is still running and gathering data, which gives you directional results earlier. We recommend waiting until the experiment finishes for conclusive results. However, if you're short on time and comfortable acting on directional results, you can view them as soon as the experiment reaches a 70% confidence threshold. You can also wait until the experiment reaches 80% confidence (still considered directional) or until it completes at 95% confidence (considered conclusive).

A 70% confidence threshold (sometimes described in terms of a "confidence interval") means that if you were to repeat this experiment, you would expect to see the same result about 70% of the time. If you choose to wait for conclusive results, those are reported at a 95% confidence level.
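To make these thresholds concrete, here is a minimal sketch using a standard two-proportion z-test on made-up conversion numbers for a trial arm and a control arm. This is not Google's internal methodology; it only illustrates how the same observed difference can clear the 70% and 80% (directional) bars while falling short of the 95% (conclusive) bar.

# Illustration only: two-proportion z-test at different confidence levels.
from statistics import NormalDist

def z_statistic(conv_a, clicks_a, conv_b, clicks_b):
    # Compare the conversion rates of two arms using a pooled standard error.
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = (pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b)) ** 0.5
    return (p_a - p_b) / se

def significant(z, confidence):
    # True if the difference is significant at the given two-sided confidence level.
    critical = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return abs(z) > critical

# Hypothetical arms: trial with 230 conversions from 20,000 clicks
# vs. control with 200 conversions from 20,000 clicks.
z = z_statistic(230, 20_000, 200, 20_000)
for level in (0.70, 0.80, 0.95):
    status = "significant" if significant(z, level) else "not significant"
    print(f"{level:.0%}: {status}")
# Significant at 70% and 80% (directional), but not at 95% (conclusive).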

Interpreting your results

  • When evaluating your results, focus on the green and red callouts for your success metric, as well as for the other metrics in the reporting table; these callouts highlight significant differences in performance among the experiment arms.
  • To start calculating the differences between the experimental arms and the control arm for each conversion metric (except for Absolute Brand Lift), you need at least 100 conversions. For other metrics related to clicks or views, there is no minimum number of actions required.
Metric | Actions threshold | What this means
Click-through rate (CTR) | No requirement | Arm with the highest click-through rate
Conversion rate | ≥ 100 conversions | Arm with the highest conversion rate
Conversions | ≥ 100 conversions | Arm with the highest number of conversions
Cost-per-click (CPC) | No requirement | Arm with the lowest cost-per-click
Cost-per-thousand (CPM) | No requirement | Arm with the lowest cost per thousand impressions
Cost-per-view (CPV) | No requirement | Arm with the lowest cost-per-view
Video view rate | No requirement | Arm with the highest view rate

For all metrics

  • Results begin to appear as soon as the difference from the control arm becomes statistically significant at your chosen confidence level (70%, 80%, or 95%). In the reporting table, you may find (see the sketch after this list):
    • “Similar performance”: At this confidence level, there is no statistical evidence that the arm is performing better or worse than the control arm for this metric.
    • A green value: At this confidence level, there is statistical evidence that the arm is performing better than the control arm for this metric.
    • A red value: At this confidence level, there is statistical evidence that the arm is performing worse than the control arm for this metric.
  • For conversion metrics: If you have selected a conversion metric and a campaign in your experiment hasn’t received at least 100 conversions, a message about “collecting data” (if the experiment is still running) or “not enough data” (if the experiment has ended) will appear.
  • Note: If the budget is split evenly between experiment arms, but one campaign receives more impressions than the other, this means that the campaigns are entering different auctions and winning at different bids. The campaign that wins more auctions at a lower cost will have the most impressions. An experiment only ensures that the users in one experiment arm don’t overlap with users in another experiment arm.
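As a way to see how these rules fit together, the sketch below expresses the callout logic in code. It is purely illustrative and not Google's implementation: it assumes you have already computed a confidence interval for the difference between an arm and the control at your chosen confidence level, and it reuses the metric directions from the table above (lower is better for the cost metrics).

# Illustration only: map a metric comparison to the reporting-table callout.
LOWER_IS_BETTER = {"CPC", "CPM", "CPV"}                  # cost metrics from the table above
CONVERSION_METRICS = {"Conversions", "Conversion rate"}  # metrics with the 100-conversion minimum

def call_out(metric, ci_low, ci_high, conversions=None, experiment_running=False):
    # ci_low and ci_high bound the difference (arm value minus control value)
    # at the chosen confidence level.
    if metric in CONVERSION_METRICS and (conversions is None or conversions < 100):
        return "collecting data" if experiment_running else "not enough data"
    if ci_low <= 0 <= ci_high:
        return "similar performance"          # no significant difference either way
    arm_is_higher = ci_low > 0
    better = arm_is_higher != (metric in LOWER_IS_BETTER)
    return "green (better)" if better else "red (worse)"

# Hypothetical examples:
print(call_out("CTR", 0.002, 0.005))   # significantly higher CTR -> green (better)
print(call_out("CPV", 0.01, 0.03))     # significantly higher CPV -> red (worse)
print(call_out("Conversion rate", 0.001, 0.004, conversions=80, experiment_running=True))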

Best practices

  • Take action on your results: If you find statistically significant results in an experiment arm, you can maximize the impact by pausing the other experiment arms and shifting all of the budget to the better-performing arm.
  • Build on past learning: For example, if you find that customized video assets for different audience segments perform better than showing the same generic asset to all audiences, use this insight to inform the development of future video assets.
  • Inconclusive results can also be insightful: For example, you may have 2 creatives that perform equally well in the experiment, but one of the creatives may be cheaper to produce than the other.
