Campaign Mix Experiments allow you to test multiple campaign types, budgets, and settings across campaigns in a single experiment. Using this feature, you can run one experiment across campaigns of various types and identify the most effective strategies for your business goals.
Benefits of Campaign Mix Experiments
- Test across campaign types: Combine Search, Performance Max, Shopping, Demand Gen, Video, and App campaigns in a single experiment with multiple arms, and test the effectiveness of different campaign strategies.
- Optimize budget allocation: Understand how to allocate your budget (across different campaigns/campaign types) to best maximize the return on investment through experimentation.
- Test features and settings: Compare the performance of different bidding strategies, targeting options, and other campaign settings.
- Gain deeper insights: Analyze granular performance data and identify winning strategies.
- Improve campaign performance: Gain a deeper understanding of how different campaign types interact and contribute to your overall business goals.
Key features
Campaign Mix Experiments allow you to select existing campaigns and assign them to different experiment arms (up to 5 total). This approach offers flexibility in testing various scenarios, including:
- Account structure testing: Evaluate which campaign combinations are ideal for your business objectives.
- Campaign consolidation: Test the impact of consolidating multiple campaigns into a single campaign.
- Cross-campaign budget optimization: Assess different ways to optimally allocate your budget across campaign types to identify the most efficient strategy.
- Feature adoption: Compare the performance of different features or settings across campaigns.
- Note: For testing specific individual features, like Broad Match, Google recommends using the dedicated feature experiment (for example, Broad Match Experiments) instead of Campaign Mix Experiments.
Before you begin
- Campaign Mix Experiments allow you to create up to 5 experiment arms.
- You can add any number of campaigns per arm.
- It is available for all campaign types except Hotel campaigns.
- It supports custom and even traffic splits across arms. Keep in mind that the minimum traffic split is 1%, and traffic split percentages can't include decimals (a quick validation sketch follows this list).
- The same campaign can be added to multiple experiment arms, and its traffic will be split according to the percentages chosen. However, no two experiment arms can contain exactly the same set of campaigns.
- Google recommends planning what you want to test before starting the experiment. For example, aim to use the same budget amounts in each experiment arm, unless budget is specifically what you're testing.
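The traffic-split rules above are easy to check before you build the experiment. Below is a minimal, illustrative Python sketch (not part of Google Ads) that validates a planned split against these constraints; the 2-arm minimum is an assumption, since this article only states the 5-arm maximum.

```python
def validate_traffic_split(splits):
    """Check a planned traffic split against the documented constraints:
    whole-number percentages of at least 1% that sum to 100%, across at
    most 5 arms (a 2-arm minimum is assumed here, for a comparison)."""
    if not 2 <= len(splits) <= 5:
        raise ValueError("expected between 2 and 5 arms")
    for pct in splits:
        if not isinstance(pct, int):
            raise ValueError(f"split {pct} has decimals; whole numbers only")
        if pct < 1:
            raise ValueError(f"split {pct}% is below the 1% minimum")
    if sum(splits) != 100:
        raise ValueError(f"splits sum to {sum(splits)}%, expected 100%")

validate_traffic_split([20, 30, 10, 40])  # passes silently
```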
Set up an experiment
Follow the instructions below to create a mixed campaign type experiment:
- Go to Experiments within the Campaigns menu.
- Select the plus button at the top of the "All experiments" table, then choose Custom experiment.
- Select the “Mixed campaign types” campaign type.
- Select Next.
- In the “Experiment arms” section, enter the name of the arm in the “Name” field. The arm card title automatically updates every time you rename the arm.
- Choose “Select campaigns” from the dropdown in the “Campaigns” field. Select the campaigns you’d like to test from the explorer. You can select multiple campaign types for each arm.
- Note: The same campaign can be placed in multiple experiment arms.
- Select Confirm.
- Enter the desired traffic split percentage per arm in the “Traffic split” field.
- Note: By default, traffic is split evenly among the arms. You can adjust the split, but whenever an arm is added or removed, the traffic redistributes evenly. Splits use whole numbers only, not decimals; for example, an experiment with 3 arms defaults to a 33%-33%-34% distribution (see the sketch after these steps).
- Select the + Add arm button to add more arms and fill in all the fields in each arm.
- In the “Experiment dates” section, select the experiment’s start and end dates.
- Review your experiment settings and name, then select Schedule.
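To illustrate the whole-number behavior described in the traffic-split note above, here's a small illustrative Python sketch of one way an even split could be computed. It reproduces the 33%-33%-34% example, but which arm receives the leftover percentage point in the actual product is an assumption.

```python
def even_split(num_arms):
    """Distribute 100% across arms in whole percentages.

    Each arm gets the floor of 100 / num_arms, and the leftover
    percentage points go to the last arm(s) so the total is exactly
    100 (e.g., 3 arms -> 33-33-34).
    """
    base, remainder = divmod(100, num_arms)
    return [base + 1 if i >= num_arms - remainder else base
            for i in range(num_arms)]

print(even_split(3))  # [33, 33, 34]
print(even_split(5))  # [20, 20, 20, 20, 20]
```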
Reporting
You can review reporting metrics on the "Experiment summary" page. From there, you can view how your arms are performing across metrics like Cost / Conv., Avg. CPM, and Conv. rate, along with details on how the arms compare to each other.
Campaign-level reporting
You can view campaign-level reporting from the "Campaigns" page. On the "Reporting" page, you can select and update the confidence interval and primary metrics used for experiment evaluation. You'll have the option to select from 3 confidence intervals (95%, 80%, or 70%) and to choose primary metrics from the options below:
- Conversion Value
- Conversions
- ROAS
- CPA
- Clicks
- Impressions
- CPC
If the experiment has an uneven traffic split, the reporting page data is scaled down to the lowest split. For example, if traffic is allocated to four arms at 20%, 30%, 10%, and 40%, metrics are reported as if each arm had received 10% of traffic.
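Here's an illustrative Python sketch of that scaling as I read it: additive metrics are multiplied by the ratio of the lowest split to the arm's split. The article doesn't specify exactly how each metric is treated, so the per-metric details here are assumptions.

```python
def scale_to_lowest_split(arm_metrics, arm_splits):
    """Scale each arm's additive metrics down to the lowest traffic split.

    arm_metrics: one dict of additive metrics (clicks, cost, ...) per arm
    arm_splits:  matching whole-number traffic percentages
    Rate metrics (e.g., CPC, Conv. rate) would presumably be unaffected,
    since their numerator and denominator shrink together.
    """
    lowest = min(arm_splits)
    return [
        {name: value * lowest / split for name, value in metrics.items()}
        for metrics, split in zip(arm_metrics, arm_splits)
    ]

# The 40% arm's 400 clicks are reported as if the arm ran at 10%: 100 clicks.
print(scale_to_lowest_split(
    [{"clicks": 200}, {"clicks": 300}, {"clicks": 100}, {"clicks": 400}],
    [20, 30, 10, 40],
))
```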
Best practices
Setting up your experiment
For more accurate and reliable results, follow these best practices when setting up your experiment:
- Ensure that the experiment arms are similar and differ only in one variable (for example, asset, bidding strategy, targeting match type, and so on).
- Ensure that the experiment arms differ in some way; identical arms aren't allowed.
- Ensure that the total daily budget (the sum of the daily budgets of all campaigns in an experiment arm) is similar across all experiment arms, unless budget is specifically what you're testing (see the sketch after this list).
- Campaigns in an arm should be either individual campaigns or part of the same portfolio bid strategy. If they don't meet this criterion, we recommend removing them from their current portfolios or creating a new portfolio specifically for the campaigns in that arm.
- We recommend not using shared budgets across different experiment arms; give each campaign its own budget. This minimizes additional noise in the experiment results.
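A quick way to sanity-check the budget guidance above is to total each arm's daily budgets and compare the totals. The following illustrative Python sketch does that; the arm names, budget figures, and 5% tolerance are all hypothetical, not values from this article.

```python
def arm_daily_budgets(arms):
    """Sum the daily budgets of all campaigns in each arm.

    arms: {arm_name: [daily budgets of that arm's campaigns]}
    """
    return {name: sum(budgets) for name, budgets in arms.items()}

def budgets_similar(arms, tolerance=0.05):
    """Flag arms whose total daily budget deviates from the mean by more
    than `tolerance` (the 5% threshold is illustrative, not documented)."""
    totals = arm_daily_budgets(arms)
    mean = sum(totals.values()) / len(totals)
    return {name: abs(total - mean) / mean <= tolerance
            for name, total in totals.items()}

arms = {"Arm A": [50.0, 30.0], "Arm B": [45.0, 36.0]}
print(arm_daily_budgets(arms))  # {'Arm A': 80.0, 'Arm B': 81.0}
print(budgets_similar(arms))    # {'Arm A': True, 'Arm B': True}
```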
Experiment duration
While the minimum duration can vary depending on the experiment type and settings, aim to run your experiment for at least 6 to 8 weeks to collect enough data for reliable results.
Guidance while the experiment is running
We recommend making minimal or no changes to the campaigns in the experiment while it's running, to reduce noise or unintended impact on the results. This includes major changes to budgets, bidding targets or strategies, creatives, targeting settings, and more. If significant changes like these are made, keep them in mind when reviewing the experiment's metric results.
