Performance Max experiments FAQs

This article addresses frequently asked questions about Performance Max experiments in the Experiments page.

Experiment settings

  1. What are comparable campaigns?
  2. How are comparable campaigns selected?
  3. Can I edit the comparable campaign selections?
  4. Will existing campaigns be impacted by running the experiment?
  5. Can I change the traffic split between base and trial?
  6. Can Performance Max experiments run alongside other ongoing experiments in the account (for example, Ad Variations, Drafts, and Experiments)?
  7. Can I change the budgets for my Performance Max or comparable campaigns while an experiment is running?
  8. Do changes to the base arm affect the experiment arm?
  9. Are comparable campaigns expected to change throughout the experiment?
  10. In a child account’s “Labs” tab, the Uplift Experiments feature is visible to a user who logs in directly to the child account, but isn’t shown to a user drilling down from an MCC to the child account. Is the Performance Max Uplift Experiment feature available for MCC users?
  11. What’s the effect on my existing campaigns when they’re part of an experiment?
  12. How much budget should I use for Performance Max campaigns?
  13. Should I double my Performance Max campaign budget since it’ll only serve on 50% of the eligible traffic?
  14. How and when does the user split happen between the two arms?

Technical considerations

  1. Are Uplift experiments available for all MOs?
  2. Will Uplift experiments work with Performance Max when it has SA360 Floodlight support?

Experiment results

  1. My Experiment status shows results are inconclusive. How long does it take to get conclusive results?
  2. Why don’t my Experiment results seem to include the first 7 days of data?
  3. Am I able to view how many conversions or how much conversion value my comparable campaigns drove in the trial arm?

Experiment settings

What are comparable campaigns?

Comparable campaigns are campaigns that are similar to the Performance Max campaign and may serve on the same inventory as Performance Max campaigns. They’re included in the control and trial groups of your experiment.

How are comparable campaigns selected?

Comparable campaigns are automatically selected for your experiment based on factors such as:

  • Matching domain names
  • At least one overlapping conversion goal
  • Overlapping locations

This is necessary to ensure the correct experiment setup.

Can I edit the comparable campaign selections?

The list of comparable campaigns that were automatically chosen for the experiment is populated one day after the experiment starts. After the list is populated, you’ll have the option to edit the comparable campaign selections for up to 7 days after the experiment start date. To make changes, follow the steps below.

  1. Click on the campaigns in the “Comparable campaigns” column for the respective experiment.
  2. Click Edit.
  3. Select the comparable campaigns you want to add or remove.
  4. Click Done.

Will existing campaigns be impacted by running the experiment?

Performance for existing campaigns won’t be negatively impacted by experiments. Putting them into base and trial simply tags which ones had Performance Max traffic alongside them and which ones didn’t. For the existing campaign traffic in the trial arm, Google will measure what happens when it runs alongside Performance Max. This should capture any effects of traffic shifting as well.

Make sure to target the same products in both the Standard Shopping campaign and the Performance Max campaign in order to accurately test the performance of both campaigns. Also, ensure that the products targeted in the two campaigns aren't targeted by any other campaigns outside of the experiment. This helps ensure that the experiment won't interfere with existing campaigns in the account.

Can I change the traffic split between base and trial?

Traffic split options are available for Shopping versus Performance Max campaigns, but unavailable for non-GMC (Google Merchant Center) Uplift Experiments.

Can Performance Max experiments run alongside other ongoing experiments in the account (for example, Ad Variations, Drafts, and Experiments)?

Yes, it’s technically possible to run other types of experiments in the same account. However, it's recommended you minimize these if possible.

Can I change the budgets for my Performance Max or comparable campaigns while an experiment is running?

Yes, you can. However, it’s generally recommended to make as few changes as possible while an experiment is in progress.

Do changes to the base arm affect the experiment arm?

Yes. When you make changes to campaigns, Google automatically determines whether they qualify as comparable campaigns and applies the changes to both the base and trial arms. However, making changes while an experiment runs isn't recommended.

Are comparable campaigns expected to change throughout the experiment?

Yes, this can happen if a campaign's conversion goals, domain, or locations change. If campaigns are added or removed, or those changes are made to existing campaigns, comparable campaigns can be added or removed accordingly.

In a child account’s “Labs” tab, the Uplift Experiments feature is visible to a user who logs in directly to the child account, but isn’t shown to a user drilling down from an MCC to the child account. Is the Performance Max Uplift Experiment feature available for MCC users?

If an MCC user wants to access the tool, the MCC must be allowlisted. Otherwise, they'll only be able to access it at the child account level.

What’s the effect on my existing campaigns when they’re part of an experiment?

Existing campaign settings aren’t affected by being in the experiment. Keep in mind:

  • Any existing Performance Max campaign that’s part of an experiment may see a decrease in traffic because the campaign will only serve to 50% of the eligible traffic. When the experiment ends, Performance Max traffic should recover to pre-experiment levels if you launch the Performance Max campaign.
  • Any new Performance Max campaign created as part of this experiment will see an increase in traffic if it’s launched to 100%.

How much budget should I use for Performance Max campaigns?

The higher your Performance Max budget and spend is compared to the total spend in your account, the higher your chances are of noticing statistically significant results.

Should I double my Performance Max campaign budget since it’ll only serve on 50% of the eligible traffic?

Set a budget you’re comfortable spending for the experiment despite the traffic suppression. Even if ads serve to only 50% of the eligible traffic, you might end up using the entire budget. Remember, daily spend can reach up to twice your average daily budget, just as with standalone campaigns.

How and when does the user split happen between the two arms?

A user split occurs at the start of the experiment, and Google’s systems try to ensure fairly balanced arms. While the experiment isn’t limited to signed-in users, signed-in users make a clean split easier. For signed-out users, there are no guarantees, but the distribution should be similar in both arms.


Technical considerations

Are Uplift experiments available for all MOs?

Uplift experiments are currently only available to advertisers using the Online Sales (Non-feed), Store goals (Offline), and Lead Gen MOs. Uplift experiments don't support Performance Max with a GMC feed.

Will Uplift experiments work with Performance Max when it has SA360 Floodlight support?

The Uplift experiments tool is currently only available in Google Ads. Advertisers need to create, manage, and view summary reporting on experiments in the Google Ads interface.


Experiment results

My Experiment status shows results are inconclusive. How long does it take to get conclusive results?

It’s recommended that you run the experiment for at least 4-6 weeks.

Why don’t my Experiment results seem to include the first 7 days of data?

It’s recommended that experiments run for at least 4-6 weeks, and the first 7 days of data is discarded to account for the experiment’s ramp-up time. This ensures that you’re evaluating both arms fairly. For example, if your experiment start date is December 1 and the end date is December 31, you’ll only view data for December 8-31 in the Experiment results page. However, you should be able to view stats for all campaigns in the main campaigns table for your desired date range.

Am I able to view how many conversions or how much conversion value my comparable campaigns drove in the trial arm?

No. You’ll only be able to view aggregated conversions, conversion value, CPA, ROAS, and spend for the groups. The goal of the experiment is to show you how many more conversions or how much more conversion value the Performance Max campaign is driving for the account as a whole.

Here are best practices for responding to experiment results:

Experiment result: The trial arm drove more conversions or conversion value at the same or better CPA or ROAS compared to the control arm.

Conclusion: Running Performance Max alongside comparable campaigns can bring you additional conversions at a comparable ROI.

  • Recommendation: Launch the Performance Max campaign and scale budgets to get more coverage and efficient conversions at that ROI.

Experiment result: The trial arm drove more conversions or conversion value at a CPA or ROAS worse than the control arm.

Conclusion: If the Performance Max campaign had a target CPA or ROAS set, evaluate whether the targets are comparable to targets for other performance campaigns.

  • If targets aren’t comparable, the test should be adjusted. Performance Max automation will always try to achieve the set CPA or ROAS targets within the available budget.
    • Recommendation: Continue testing after adjusting CPA or ROAS targets.
  • If targets are comparable, this result could be because Performance Max was driving additional conversions, which may have used additional budget.
    • Recommendation: Continue to run Performance Max campaigns as part of your overall cross-channel strategy.

Conclusion: If the Performance Max campaign didn’t have a target CPA or ROAS set but comparable campaigns did, performance for the trial arm can seem worse.

  • Recommendation: Re-run the experiment with a target set (you can start with the account average CPA or ROAS).
