Frequently asked questions about the "Experiments" page (formerly drafts and experiments)
Experiment settings
- When will the experiment changes be applied?
- I encountered the “Failed to Create” error, what should I do?
- Can I opt out from the auto-apply?
Experiment results
- Will an experiment be applied if the experiment ended manually?
- How do I know if an experiment is favorable and will be directly applied?
- How long should I run an experiment?
- My Experiment status shows results are inconclusive or statistically not significant. How long does it take to get conclusive or statistically significant results?
- Why is the initial 7 days of data missing from some of my Experiment results?
Experiment settings
1. When will the experiment changes be applied?
After your experiment reaches its end date, we’ll identify whether your experiment results were favorable, using the definition above. If we determine that they were favorable, we’ll apply the experiment changes to the control campaign.
2. I encountered the “Failed to Create” error, what should I do?
This often happens when your campaign contains deprecated or ineligible ad types. Remove all ads with deprecated ad types before restarting the creation.
Also check whether you have similar audience lists, because campaigns with similar audiences, including those with removed similar audiences, aren’t supported. Learn more about changes in similar audiences.
3. Can I opt out from the auto-apply?
By default, this feature will be enabled when creating a new experiment, but you can choose to opt out before the experiment ends.
Experiment results
1. Will an experiment be applied if the experiment ended manually?
No, we don't apply any experiments that were ended manually. We only apply experiments that end on the end date you define.
2. How do I know if an experiment is favorable and will be directly applied?
- If you’re using Max conversions with target cost per action (CPA) bidding, your experiment will be directly applied if conversions in your treatment arm are higher than in your control arm and CPA is lower.
- If you’re using Max conversion value with target return on ad spend (ROAS) bidding, your experiment will be directly applied if the conversion value in your treatment arm is higher than in your control arm and ROAS is higher.
- If you’re using Max conversions or Max conversion value bidding, your experiment will be directly applied if either the conversions or the conversion value in your treatment arm is higher than in your control arm.
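The three rules above can be summarized as a small decision function. This is an illustrative sketch only, not Google's actual implementation; the strategy names and metric keys are assumptions for the example:

```python
def is_favorable(bidding: str, trial: dict, control: dict) -> bool:
    """Sketch of the favorability rules above.

    `trial` and `control` hold the aggregated metrics for each arm,
    e.g. {"conversions": 120, "cpa": 9.5, "conv_value": 1500.0, "roas": 4.2}.
    The `bidding` labels and dict keys are hypothetical names for illustration.
    """
    if bidding == "max_conversions_with_target_cpa":
        # More conversions at a lower cost per action.
        return (trial["conversions"] > control["conversions"]
                and trial["cpa"] < control["cpa"])
    if bidding == "max_conversion_value_with_target_roas":
        # More conversion value at a higher return on ad spend.
        return (trial["conv_value"] > control["conv_value"]
                and trial["roas"] > control["roas"])
    if bidding in ("max_conversions", "max_conversion_value"):
        # Either conversions or conversion value improved.
        return (trial["conversions"] > control["conversions"]
                or trial["conv_value"] > control["conv_value"])
    raise ValueError(f"Unknown bidding strategy: {bidding}")
```

Note that the first two strategies require both metrics to improve, while the last rule only needs one of the two to improve.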
3. How long should I run an experiment?
It’s recommended that you run the experiment for at least 4-6 weeks or longer if you have a long conversion delay. We recommend you wait for 1-2 conversion cycles. Learn how long it takes for your customers to convert.
4. My Experiment status shows results are inconclusive or statistically not significant. How long does it take to get conclusive or statistically significant results?
To maximize experiment power, pick campaigns with high volumes and run experiments for longer. We recommend running the experiment for at least 4-6 weeks. Keep in mind that if the experiment includes many comparable campaigns with significantly higher budgets relative to the Performance Max campaign, the noise from those campaigns may overshadow the uplift from running Performance Max. Choose comparable campaigns thoughtfully to get detectable results.
5. Why is the initial 7 days of data missing from some of my Experiment results?
It’s recommended that experiments run for at least 4-6 weeks, and the first 7 days of data are discarded to account for the experiment ramp-up time. This ensures that you’re evaluating both arms fairly. For example, if your experiment start date is December 1 and the end date is December 31, you’ll only see data for December 8-31 on the Experiment results page. However, you should be able to view stats for all campaigns in the main campaigns table for your desired date range.
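The reporting window described above comes down to simple date arithmetic. A minimal sketch, assuming the 7-day ramp-up discard stated in the text:

```python
from datetime import date, timedelta

def experiment_reporting_window(start: date, end: date,
                                ramp_up_days: int = 7) -> tuple[date, date]:
    """Return the date range shown on the Experiment results page.

    The first `ramp_up_days` days are discarded to account for ramp-up,
    so reporting starts `ramp_up_days` after the experiment start date.
    """
    report_start = start + timedelta(days=ramp_up_days)
    if report_start > end:
        raise ValueError("Experiment too short to report any data")
    return report_start, end

# Example from the text: a December 1 - December 31 experiment
# reports data for December 8 - December 31.
window = experiment_reporting_window(date(2023, 12, 1), date(2023, 12, 31))
```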
Frequently asked questions about Performance Max experiments
Experiment settings
- What are comparable campaigns?
- How are comparable campaigns selected?
- Can I edit the comparable campaign selections?
- Will existing campaigns be impacted by running the experiment?
- Can I change the traffic split between base and trial?
- Can Performance Max experiments run alongside other ongoing experiments in the account (for example, Ad Variations, Drafts, and Experiments)?
- Can I change the budgets for my Performance Max or comparable campaigns while an experiment is running?
- Do changes to the base arm affect the experiment arm?
- Are comparable campaigns expected to change throughout the experiment?
- What’s the effect on my existing campaigns when they’re part of an experiment?
- How much budget should I use for Performance Max campaigns?
- Should I double my Performance Max campaign budget since it’ll only serve on 50% of the eligible traffic?
- How and when does the user split happen between the 2 arms?
- Are Uplift experiments available for all MOs?
- Will Uplift experiments work with Performance Max when it has SA360 Floodlight support?
Experiment results FAQ
Experiment settings
1. What are comparable campaigns?
Comparable campaigns are campaigns that are similar to the Performance Max campaign and may serve on the same inventory as Performance Max campaigns. They’re included in the control and trial groups of your experiment.
2. How are comparable campaigns selected?
Comparable campaigns are automatically selected for your experiment based on factors such as:
- Matching domain names
- At least one overlapping conversion goal
- Overlapping locations
This is necessary to ensure the correct experiment setup.
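The selection criteria above can be sketched as a simple filter. This is a hypothetical illustration; the field names (`domain`, `conversion_goals`, `locations`) are assumptions, not actual Google Ads identifiers:

```python
def is_comparable(candidate: dict, pmax: dict) -> bool:
    """Sketch of the comparable-campaign criteria listed above.

    A candidate qualifies if it has a matching domain, at least one
    overlapping conversion goal, and at least one overlapping location.
    All field names here are hypothetical, for illustration only.
    """
    return (candidate["domain"] == pmax["domain"]
            and bool(set(candidate["conversion_goals"])
                     & set(pmax["conversion_goals"]))
            and bool(set(candidate["locations"])
                     & set(pmax["locations"])))
```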
3. Can I edit the comparable campaign selections?
It takes one day after the experiment starts to populate the list of comparable campaigns that were automatically chosen for the experiment. After the list is populated, you can edit the comparable campaign selections for up to 7 days after the experiment start date. To make the changes, follow the steps below.
- Click on the campaigns in the “Comparable campaigns” column for the respective experiment.
- Click Edit.
- Select the comparable campaigns you want to add or remove.
- Click Done.
4. Will existing campaigns be impacted by running the experiment?
Performance for existing campaigns won’t be negatively impacted by experiments. Putting them into base and trial simply tags which ones had Performance Max traffic alongside them and which ones didn’t. For the existing campaign traffic in the trial arm, Google will measure what happens when it runs alongside Performance Max. This should capture any effects of traffic shifting as well.
Make sure to target the same products in both the Standard Shopping campaign and the Performance Max campaign in order to accurately test the performance of both campaigns. Also, ensure that the products targeted in the 2 campaigns aren't targeted by any other campaigns outside of the experiment. This helps ensure that the experiment won't interfere with existing campaigns in the account.
5. Can I change the traffic split between base and trial?
Traffic split options are available for Shopping versus Performance Max campaigns, but unavailable for non-GMC (Google Merchant Center) Uplift Experiments. Once an experiment is scheduled, the traffic split cannot be changed.
6. Can Performance Max experiments run alongside other ongoing experiments in the account (for example, Ad Variations, Drafts, and Experiments)?
Yes, it’s technically possible to run other types of experiments in the same account. However, it's recommended you minimize these if possible.
7. Can I change the budgets for my Performance Max or comparable campaigns while an experiment is running?
Yes, you can. However, it’s generally recommended to make as few changes as possible while an experiment is in progress.
8. Do changes to the base arm affect the experiment arm?
Yes. If you make changes to campaigns, Google will automatically pick them up for comparable campaigns and apply them to both the base and trial arms. However, making changes while an experiment runs isn't recommended.
9. Are comparable campaigns expected to change throughout the experiment?
Yes, this can happen if changes are made to a campaign's conversion goals, domain, or location. If campaigns are added or removed, or if those changes are made to existing campaigns, comparable campaigns may be added or removed.
10. What’s the effect on my existing campaigns when they’re part of an experiment?
Existing campaign settings aren’t affected by being in the experiment. Keep in mind:
- Any existing Performance Max campaign that’s part of an experiment may notice a decrease in traffic because the campaign will only serve to 50% of the eligible traffic. When the experiment ends, Performance Max traffic should recover to pre-experiment levels if you launch the Performance Max campaign.
- Any new Performance Max campaign created as part of this experiment will notice an increase in traffic if it’s launched to 100%.
11. How much budget should I use for Performance Max campaigns?
The higher your Performance Max budget and spend is compared to the total spend in your account, the higher your chances are of noticing statistically significant results.
12. Should I double my Performance Max campaign budget since it’ll only serve on 50% of the eligible traffic?
Set a budget you’re comfortable spending for the experiment despite the traffic suppression. Even if ads serve on only 50% of the eligible traffic, you might still use the entire budget. Remember, as with standalone campaigns, daily spend can reach up to twice your average daily budget.
13. How and when does the user split happen between the 2 arms?
The user split occurs at the start of the experiment, and Google’s systems try to ensure fairly balanced arms. While the experiment isn’t limited to signed-in users, signed-in users make a clean split easier. For signed-out users, no guarantees are provided, but the distribution should be similar in both arms.
14. Are Uplift experiments available for all MOs?
Uplift experiments are currently only available to advertisers using the Online Sales (Non-feed), Store goals (Offline), and Lead Gen MOs. Uplift experiments don't support Performance Max with a GMC feed.
15. Will Uplift experiments work with Performance Max when it has SA360 Floodlight support?
The Uplift experiments tool is currently only available in Google Ads. Advertisers need to create, manage, and view summary reporting on experiments in the Google Ads interface.
Experiment results
1. Am I able to view how many conversions or how much conversion value my comparable campaigns drove in the trial arm?
No. You’ll only be able to view aggregated conversions, conversion value, CPA, ROAS, and spend for the groups. The goal of the experiment is to show you how many more conversions or how much more conversion value the Performance Max campaign is driving for the account as a whole.
Here are best practices for responding to experiment results:
| Experiment results | Conclusion and recommendations |
| --- | --- |
| The trial arm drove more conversions or conversion value at the same or better CPA or ROAS compared to the control arm. | Running Performance Max alongside comparable campaigns can bring you additional conversions at a comparable ROI. Recommendation: Launch the Performance Max campaign and scale budgets to get more coverage and efficient conversions at that ROI. |
| The trial arm drove more conversions or conversion value at a worse CPA or ROAS than the control arm. | If the Performance Max campaign had a target CPA or ROAS set, evaluate whether they are comparable to the targets for other performance campaigns. If the Performance Max campaign didn’t have a target CPA or ROAS set but comparable campaigns did, performance for the trial arm can seem worse. |