About maximise conversion value experiments for search campaigns

Campaign experiments allow you to test maximise conversion value (with an optional target ROAS) against your existing bid strategy. Experiments are the best way to test value-based bidding since they allow you to isolate your new value-based strategy within the experiment (trial arm) and compare it against your campaign’s current bid strategy (base arm) while keeping all other variables constant. This helps ensure that you're measuring impact in a reliable way.

Only test one variable at a time to isolate the uplift of value bidding. In this instance, that variable would be the bid strategy. If you’d like to change the conversion action that you’re optimising towards prior to testing value-based bidding, follow the steps outlined in changing biddable conversion actions before proceeding with an experiment.

To ensure that your experiment delivers meaningful results, be sure to follow the best practices below:

Set up your experiments

Note: The simplest way to set up an experiment for tROAS bidding is through a one-click experiment. You can review suggested campaigns in the 'Recommendations' section of your account.
  1. Pick campaigns with the right settings for your value-based bidding experiment.
    • Conversion value: The campaign should already be measuring conversion value before testing value-based bidding (2 or more unique, non-zero values are required).
    • Budgets: The campaign should not be budget-constrained if you are testing maximise conversion value with target ROAS bidding. You can keep a capped budget when testing maximise conversion value without a target ROAS, since that strategy works to maximise value within your budget.
    • Biddable conversion goal: The campaign should be bidding to a conversion action that is set as a primary conversion action. This conversion action can be either the account default, which is ideal for simplicity, or specific to this particular campaign using campaign-specific conversion goals.
    • Conversion volume: To ensure sufficient volume in the base and trial arms, the campaign should have a conversion volume of at least 50 conversions in the last 30 days. Note that this is not a minimum conversion requirement to opt in to value-based bidding more generally.
  2. Choose the correct experiment settings.
    • Ensure it’s a 'clean' test of one bid strategy against another: Only test one variable at a time. In this instance, that would be the bid strategy. Do not make other changes between the base and trial arms.
      • For example, you can compare maximise conversion value with a target ROAS against a bid strategy that is not value-based (such as maximise conversions with a target CPA, maximise clicks or target impression share), but we do not recommend changing other parameters like biddable conversion goals.
    • Same conversion actions: Never test different conversion actions against one another as part of the experiment. Results will not be meaningful as Smart Bidding will train across all reported conversions in the 'Conversions' column regardless of how the experiment is set up.
    • Even split: Split the base and trial arms 50/50 (it can be either a cookie or traffic split).
    • Sync changes: Enable 'experiment sync' before you start the experiment so any changes made will be consistent across the base and trial arms. Even with 'experiment sync' on, avoid making large changes during the experiment (like major creative changes or adding many new keywords).
    • Set fair targets: The best way to ensure that value-based bidding has adequate opportunity to bid on traffic is to test maximise conversions against maximise conversion value bidding without an ROAS target. If you instead want to test target CPA against target ROAS, make sure that the targets in both arms are comparable: your ROAS target should be at or below the ROAS that the CPA campaign has achieved in the past 4 weeks. If you'd like to drive additional traffic in the trial arm, lower your ROAS target over the course of the experiment.
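
As a back-of-the-envelope sketch of the 'set fair targets' guidance, a trial arm's ROAS target can be derived from the base campaign's last-4-weeks figures. The function below is a hypothetical illustration, not part of any Google Ads API; `conversion_value` and `cost` stand in for figures you would pull from your own reporting.

```python
def fair_roas_target(conversion_value: float, cost: float) -> float:
    """Return the highest 'fair' ROAS target for the trial arm.

    conversion_value and cost should cover the past 4 weeks of the
    base campaign (the one currently bidding to a target CPA).
    """
    if cost <= 0:
        raise ValueError("cost must be positive to compute ROAS")
    # Historical ROAS = conversion value / cost. The trial arm's target
    # should sit at or below this so value-based bidding has adequate
    # opportunity to compete for traffic.
    return conversion_value / cost

# Example: £12,000 of conversion value on £3,000 of spend gives a
# historical ROAS of 4.0 (400%), so set the trial target at 4.0 or lower.
print(fair_roas_target(12_000, 3_000))  # -> 4.0
```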

Monitoring your experiments and evaluating success

  1. Choose the right success metrics.
    • Focus on conversion value and ROAS as your success metrics. In a value-based bidding experiment, the trial arm bidding to maximise conversion value with an optional target ROAS is expected to deliver higher conversion value within your budget. Movement in secondary metrics like CPA and clicks is incidental and should not be part of your evaluation.
  2. Follow the recommended experiment timeline.
    • Day 1: Launch the experiment following the best practices above. The trial arm is testing maximise conversion value (with an optional target ROAS) and the base arm is using the existing bid strategy.
    • Days 1–14: Give the experiment time to ramp up. This could be 2 weeks or 3 conversion cycles, whichever is longer. Always exclude this period of time when evaluating performance.
    • Days 14–44: Let the experiment run uninterrupted for at least 30 days.
    • Account for conversion lag: When evaluating results, account for conversion delay by excluding from your assessment any recent days where fewer than 90% of your conversions have been reported.
    • Evaluate performance: Compare the value metrics between the base and trial arms. Conversion value in the trial arm should be higher than in the base arm at your desired ROAS target or better. If it isn't, consult your account team or file a troubleshooting ticket.
    • Promote the experiment to full traffic: Consider scaling value-based bidding across other campaigns in the account.
    • Note: The experiment interface may determine that results are statistically significant before this timeline has been completed. Please defer to the best practices above when running a value-based bidding experiment.
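
The timeline above reduces to a simple date-window calculation. This is an illustrative Python sketch, not a Google Ads feature; the default `ramp_up_days` and `lag_days` values are assumptions you would replace with your own conversion-cycle length and 'days until ~90% of conversions are reported' figure.

```python
from datetime import date, timedelta

def evaluation_window(start: date, today: date,
                      ramp_up_days: int = 14,
                      lag_days: int = 7) -> tuple[date, date]:
    """Return the (first, last) dates to include when evaluating results.

    Excludes the ramp-up period at the start of the experiment and the
    most recent days, where fewer than ~90% of conversions may have
    been reported yet.
    """
    first = start + timedelta(days=ramp_up_days)
    last = today - timedelta(days=lag_days)
    if (last - first).days < 30:
        raise ValueError("fewer than 30 clean days; let the experiment run longer")
    return first, last

# Example: an experiment launched on 1 March, evaluated on 30 April.
print(evaluation_window(date(2024, 3, 1), date(2024, 4, 30)))
# -> (datetime.date(2024, 3, 15), datetime.date(2024, 4, 23))
```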

Testing multiple campaigns? Consider the multi-campaign experiments beta.

  1. You may want to test more than one campaign at a time to help generate meaningful results more quickly, particularly if you have limited conversion volume. The multi-campaign experiments beta allows you to execute one of these 2 setups. Ask your account representative or contact support to participate.
    • The base and trial arms each have a different portfolio bid strategy, for example:
      • Base arm: Includes all campaigns within an existing CPA portfolio bid strategy
      • Trial arm: Includes all campaigns within a new tROAS portfolio bid strategy
    • Best practices:
      • Campaigns can be grouped into portfolio bid strategies for the base and trial arms. Note that shared budgets are not compatible with experiments.
      • The experiment should only test a single variable. For example, testing a portfolio ROAS strategy in the trial arm against individual campaign-level CPA strategies in the base arm is not recommended, as you would be testing 2 variables at once (portfolios and bid strategy).
      • Create experiments with the same start and end dates, cookie split and split percentage (i.e. 50%). This will ensure that audiences will only be exposed to either the base scenario or the trial scenario, making for cleaner tests.
      • When evaluating results, focus on the value driven in the trial arm compared to the base arm.
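
With campaigns pooled into portfolios, that comparison reduces to total conversion value in the trial arm versus the base arm. A minimal sketch, assuming both lists are already restricted to the clean evaluation window (ramp-up and conversion-lag days excluded); the per-campaign figures are hypothetical:

```python
def portfolio_value_uplift(trial_values: list[float],
                           base_values: list[float]) -> float:
    """Percentage uplift in total conversion value, trial vs base portfolio.

    Each list holds one conversion-value figure per campaign in that arm.
    """
    base_total = sum(base_values)
    if base_total <= 0:
        raise ValueError("base arm has no conversion value to compare against")
    return (sum(trial_values) - base_total) / base_total * 100

# Example: a tROAS trial portfolio vs a CPA base portfolio.
uplift = portfolio_value_uplift([5_200, 3_100], [4_800, 2_900])
print(round(uplift, 1))  # -> 7.8
```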
