Create A/B experiments for Demand Gen campaigns

Experiments let you propose and test changes to your Demand Gen campaigns. You can measure your results and understand the impact of your changes before you apply them to a campaign.

This article explains how Demand Gen experiments work. When you’re ready, set up a Demand Gen campaign.


Before you begin

  • You need at least 2 Demand Gen campaigns to start an experiment. Both campaigns should be ready but not currently running.
  • Choose campaigns that differ in only one variable to help you better understand and draw conclusions from the experiment results.
  • Make all changes to campaign setup before saving the experiment.

Features unique to Demand Gen A/B experiments

  • Demand Gen experiments run on all inventory: Discover, Gmail, and YouTube.
  • Demand Gen experiments let advertisers test all variations of image and video campaigns.
  • Experiments support testing creatives, audiences, and product feeds. We don’t recommend testing other variables, such as bidding and budget, at this time.
  • We recommend creating new campaigns with the same start date in order to run the experiment. Experiments can only use Demand Gen campaigns.

Instructions

Set up a Custom Experiment

  1. Go to Experiments within the Campaigns menu.
  2. Select the plus button at the top of the “All Experiments” table, then select Demand Gen experiment.
    • If you’re A/B testing creative as a single variable, select A/B test assets and proceed to “Set up an Asset A/B Experiment”.
    • If you’re testing audiences, bidding strategies, formats, or creative with more than 2 experiment arms, continue by selecting “Custom Experiments”.
  3. Label the experiment arms. There are 2 experiment arms by default, and you can add up to 10 if needed.
    • In “Traffic split”, input the percentage by which you want to split your experiment. We recommend using 50% to provide the best comparison between the original and experiment campaigns.
    • Assign campaigns to each experiment group. A campaign can’t be in more than one experiment group at the same time, but an experiment group may have several campaigns if necessary.
  4. Select the primary success metric to measure the outcome of the experiment.
    • Metrics include: Clickthrough rate (CTR), Conversion rate, Cost-per-conversion, and Cost-per-click (CPC).
  5. Enter your experiment’s name and description. Your experiment shouldn’t share the same name as your campaigns or other experiments.
  6. Click Save to finish creating the experiments. Your experiment is now ready to run.
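The success metrics in step 4 are simple ratios of raw campaign counts. As a rough illustration (the function and field names below are hypothetical, not part of any Google Ads API), they can be computed like this:

```python
# Illustrative only: how the four success metrics are derived from raw
# campaign counts. Names here are hypothetical, not Google Ads API fields.

def success_metrics(impressions, clicks, conversions, cost):
    """Return the four Demand Gen experiment success metrics."""
    return {
        "ctr": clicks / impressions,          # Clickthrough rate (CTR)
        "conv_rate": conversions / clicks,    # Conversion rate
        "cost_per_conv": cost / conversions,  # Cost-per-conversion
        "cpc": cost / clicks,                 # Cost-per-click (CPC)
    }

metrics = success_metrics(impressions=100_000, clicks=2_000,
                          conversions=100, cost=500.0)
print(metrics)  # ctr=0.02, conv_rate=0.05, cost_per_conv=5.0, cpc=0.25
```

Whichever metric you pick as primary, the others remain visible in the reporting table, so the choice only determines which comparison drives the experiment’s headline result.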

Set up an Asset A/B Experiment

  1. Go to Experiments within the Campaigns menu.
  2. Select the plus button at the top of the “All Experiments table” and then select Demand Gen experiment.
    • If you’re A/B testing creative as a single variable, continue by selecting “A/B test assets”.
    • If you’re testing audiences, bidding strategies, formats, or creative with more than 2 experiment arms, select “Custom Experiments” and proceed to “Set up a Custom Experiment”.
  3. Click the metric drop-down and select one success metric:
    • Average cost-per-click (Avg. CPC)
    • Cost per conversion (Cost/conv)
    • Conversion rate (Conv. rate)
    • Clickthrough rate (CTR)
  4. Click Select control campaign and select the campaign from the campaign list.
    • The control campaign can be a campaign that is currently live or a new campaign.
  5. Click Create treatment campaign.
    • Your control campaign will be duplicated. The daily budget of the campaign will be the same as your control campaign.
  6. On the Ad card, click +Add videos to add videos to your treatment arm, then select the desired videos from the “Asset library”.
  7. Enter your experiment’s name and description. Your experiment shouldn’t share the same name as your campaigns or other experiments.
  8. Click Save.
Any changes made to the control campaign are automatically reflected in the treatment campaign. For example, turning off enhancements or enabling optimized targeting in the control campaign will be reflected in the treatment campaign. Note that turning on enhancements may not always create new videos. Learn more about video enhancements.

Evaluate your experiment results

As your experiment runs, you can evaluate and compare its performance against your original campaign. If you’d like, you can end your experiment early. You can find 3 components in the experiment report:

  • Confidence level dropdown: Select the confidence level at which you want to view the results. This affects both the top card and the reporting table. A lower confidence level surfaces results faster; a higher level takes longer but offers more certainty:
    • 70% (default): Directional results, aligns with Lift Measurement’s lowest CL.
    • 80%: Directional results, a balance between speed and certainty.
    • 95%: Conclusive results, for users who strive for high certainty for big decisions.
  • Top card: View the result of your experiment for the success metric you chose. The status on the card provides useful information, such as:
    • Collecting data: The experiment needs more data to start calculating results. For conversion related metrics, you need to collect at least 100 data points to start seeing results.
    • Similar performance: There is no significant difference between the different arms at the chosen confidence level. You can wait longer to see if the difference becomes significant with more data points.
    • One arm is better: There is a significant difference between the different arms at the chosen confidence level.
  • Reporting table: Find more comprehensive results for your success metric and all other available metrics. Columns identify the control arm and the experiment arm, the status of each arm’s performance, and general performance metrics.
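To see how the confidence level setting interacts with significance, here is a minimal sketch of a two-proportion z-test on conversion counts. This is an assumption for illustration only, not Google’s actual statistical model, and the function name is hypothetical:

```python
# Illustration only: a two-proportion z-test deciding "one arm is better"
# at a chosen confidence level. Not Google's actual statistical model.
from statistics import NormalDist

def arm_comparison(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Return True if the two arms differ significantly at `confidence`."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(p_a - p_b) / se
    # Two-sided critical value for the chosen confidence level
    z_crit = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return z > z_crit

# The same data can be significant at 70% but not at 95%:
print(arm_comparison(55, 1000, 70, 1000, confidence=0.70))  # True
print(arm_comparison(55, 1000, 70, 1000, confidence=0.95))  # False
```

This mirrors the behavior described above: the 70% setting turns "similar performance" into "one arm is better" sooner, at the cost of a higher chance of a false positive.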

End your experiment

Make sure to end your experiment after results come in and before you take action on the original campaign. To end your experiment, go to the Experiments page, hover over the experiment, and click End experiment.

If you don’t proactively end the experiment, unpaused campaigns in it may continue to serve on restricted traffic, even if the campaigns in the other arm were paused.

Best practices

  • When using conversion-based bidding strategies, Demand Gen experiments require a minimum of 50 conversions per arm to surface results. To achieve this, we recommend using Target CPA or Maximize conversions bidding, optimizing toward shallow conversions like add to cart or page view.
  • Create experiments with campaigns where only 1 variable differs. For example, run a creative experiment with different kinds of creatives, but keep them in the same format and targeting the same audience. The creative variable changes, while the format and audience variables stay the same.
  • Take action on your results: If you find statistically significant results in an experiment arm, you can maximize the impact by pausing other experiment arms and shifting all the budget to the experiment arm with the more significant results.
  • Build on past learning: For example, if you find out that customized video assets for different audience segments perform better than showing the same generic asset to all the audiences, then use this to inform the development of future video assets.
  • Inconclusive results can also be insightful: An experiment that does not yield a winner might mean the creative variation you are testing is not a substantial one. You could test other asset types or test more significant variation on your next experiment.
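Building on the 50-conversions-per-arm minimum above, a rough planning sketch (an assumption for illustration, not an official formula) can estimate how long an experiment needs to run before results can surface:

```python
# Rough planning aid, not an official Google Ads formula: estimate how many
# days until the smaller arm reaches the 50-conversion minimum, given the
# campaigns' combined daily conversions and the experiment's traffic split.
import math

def days_to_min_conversions(daily_conversions, split=0.5, minimum=50):
    """Days until the smaller arm collects `minimum` conversions."""
    smaller_share = min(split, 1 - split)
    return math.ceil(minimum / (daily_conversions * smaller_share))

print(days_to_min_conversions(daily_conversions=20, split=0.5))  # 5 days
print(days_to_min_conversions(daily_conversions=20, split=0.8))  # 13 days
```

A 50/50 split reaches the threshold fastest, which is another reason the recommended even split tends to produce results sooner than a lopsided one.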
