A manual experiment is an experiment you define based on your own criteria and schedule. The experiment runs on a percentage of your network’s actual traffic to test how applying those settings would impact revenue. When you run an experiment, it appears on the "Experiments" page.
To see a list of all available manual experiment types and view your active experiments, click Optimization, then Experiments.
Get notified about your experiments
You can get notifications about your experiments through email or directly in Ad Manager. To start receiving the notifications:
- Sign in to Google Ad Manager.
- For email notifications, click Optimization, then Experiments, then Subscribe for email updates.
- For notifications in Ad Manager, click Notifications, then Settings, then Experiments notifications.
Manual experiments overview
To run a manual experiment:
- Select the experiment type and criteria you want to use.
- Define an experiment trial and let the trial run for a specified amount of time.
- Edit conditions to automatically pause the experiment, or disable auto-pause for the experiment.
- Compare the impression traffic allocated to the "variation" group with the traffic allocated to the "control" group to see which performed better during the experiment.
- Run more trials as needed.
- Decide whether to apply the experiment settings to your Ad Manager network.
Run an experiment
Complete the following steps to run a manual experiment.
Note: The steps differ slightly for each experiment type. For more details, go to Choose a manual experiment type below.
- Sign in to Google Ad Manager.
- Click Optimization, then Experiments.
- On the card for the experiment type you want to use, click New experiment.
- Name your experiment so you can refer to results more easily.
- Enter the settings that are specific to your experiment type.
- Under "Experiment period," set a start date and an end date for the experiment trial.
- Start date: Each trial needs to run for at least 7 days to improve the chance of reaching conclusive results. You can schedule a trial to start immediately, or choose a later date. All trials start at 12:00 AM and end at 11:59 PM on the scheduled dates in your local time zone. Data is refreshed daily. If you set the start date to the current day, the trial will start within the next hour.
- End date: Each experiment trial can run up to 31 days total. When the trial ends, you can review the results to decide if you want to apply it as an opportunity, run another trial, or end the experiment and keep the original settings.
- Under "Traffic allocation," set the percentage of impression traffic to allocate to the experiment.
- Under "Auto-pause experiment," set up to 10 auto-pause conditions, selecting Cumulative or Daily for each condition:
- Cumulative: Pauses the experiment when revenue loss across the entire trial so far reaches the specified amount.
- Daily: Pauses the experiment when revenue loss in the last full day of data reaches the specified amount.
Note: Auto-pause will check that results meet the conditions specified once per day. To ensure experiments aren’t paused before they have had a chance to collect data, trials won’t be paused within the first 24 hours of starting. To avoid sampling errors, trials will only be paused based on statistically significant results. For example, trials will be paused when the lower bound of a 95% confidence interval meets the necessary threshold.
- Click Save.
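The auto-pause behavior described in the note can be sketched as a small decision function. This is an illustrative sketch under stated assumptions, not Ad Manager's implementation: the 1.96 z-score, the error model, and every name and figure here are invented for the example.

```python
# Illustrative sketch of the auto-pause rule described above; all names,
# numbers, and the error model are assumptions, not Ad Manager internals.

Z_95 = 1.96  # z-score for a two-sided 95% confidence interval

def should_auto_pause(loss_estimate, loss_std_error, threshold, hours_running):
    """Pause only on a statistically significant revenue loss."""
    if hours_running < 24:
        # Trials are never paused within their first 24 hours.
        return False
    # Lower bound of the 95% confidence interval on the estimated loss.
    lower_bound = loss_estimate - Z_95 * loss_std_error
    return lower_bound >= threshold

# Estimated $500 loss with a $120 standard error against a $250 condition:
print(should_auto_pause(500, 120, 250, hours_running=48))  # True
# The same loss checked within the first day never triggers a pause:
print(should_auto_pause(500, 120, 250, hours_running=12))  # False
```

Requiring the lower confidence bound (rather than the point estimate) to cross the threshold is what prevents pausing on noisy early results.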
Best practices for manual experiments
Sometimes applying changes to your settings may impact the way buyers and other market participants behave, such as changing buying patterns. To get the most benefit out of manual experiments you run and to capture their potential impact on market behavior, we recommend the following best practices.
Ramp up experiments using trials
Experiments applied to lower percentages of traffic are less risky, but they’re also less likely to encourage behavior change from other market participants.
Once performance reaches an acceptable level at a lower traffic allocation, you can start a second trial with the same settings at a higher allocation to better understand the impact of applying those settings to all of your network’s traffic. This is especially true for pricing changes, which are more likely to elicit responses from buyers.
It’s important to consider that experiments with lower traffic allocation percentages may not have strong enough results to change behavior. As you ramp up to higher traffic allocations you have a better chance of changing market behavior and measuring the effects of that behavior change.
Run longer experiments
It’s important to run experiments over a long enough period of time for behaviors to change and for the impact of those changes to be measured by the experiment. It often takes 7 or more days for behavior changes to influence revenue.
Consider total revenue in addition to comparisons between different settings
Experiment settings may impact behavior in both the variation group and the control group. You should verify that the expected revenue in the variation group is in line with your expectations when considered against your network as a whole.
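One way to do this sanity check is to project the variation group's per-impression revenue onto your full network before trusting a head-to-head win. A minimal sketch, with every figure invented for illustration:

```python
# Hypothetical sanity check; every figure below is invented for illustration.

def project_network_revenue(group_revenue, group_impressions, network_impressions):
    """Scale a group's per-impression revenue up to network-wide volume."""
    return group_revenue / group_impressions * network_impressions

projected = project_network_revenue(
    group_revenue=1_250.0,        # variation revenue during the trial
    group_impressions=1_000_000,  # impressions served to the variation group
    network_impressions=10_000_000,
)
print(projected)  # 12500.0, to compare against current network revenue
```

If the projected figure is out of line with what the network currently earns, the variation's win over the control may not generalize.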
Choose a manual experiment type
The experiment type you choose determines the traffic allocation and criteria used to run a manual experiment.
Unblock categories

Unblocking categories allows more advertisers and buyers to compete for your inventory, which increases coverage and helps you maximize your revenue.
Experiment criteria
- Protection: Select the protection to which you want to apply this experiment.
- Unblock the following category: Select the category you want to unblock during the experiment.
- Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
- Traffic allocation: The percent of estimated impressions you want to allocate to the experiment style during the experiment. The rest go to the original style. For example, if you allocate 60% of impressions to the experiment style, the original style gets the remaining 40%. Keep this allocation in mind when analyzing the experiment results.
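Because the two groups receive different shares of traffic, raw revenue totals are not directly comparable. A minimal sketch, with made-up numbers, of normalizing each group before comparing:

```python
# Made-up numbers illustrating why a 60/40 split requires normalization.

def revenue_per_thousand(revenue, impressions):
    """eCPM-style metric: revenue per 1,000 impressions."""
    return revenue / impressions * 1000

variation_ecpm = revenue_per_thousand(revenue=720.0, impressions=600_000)  # 60% share
control_ecpm = revenue_per_thousand(revenue=500.0, impressions=400_000)    # 40% share

print(f"{variation_ecpm:.2f}")  # 1.20
print(f"{control_ecpm:.2f}")    # 1.25
# The variation earned more in total, yet underperforms per impression.
```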
Unified pricing rules

Unified pricing rule experiments allow publishers to run manual experiments that change the floor price on any unified pricing rule. You can experiment with raising or lowering the rule’s floor price and compare the results.
Overlapping pricing rules: Unified pricing rule experiments evaluate performance across all ad requests that match the targeting criteria of the rule, including ad requests associated with overlapping targeting on other pricing options. This allows Ad Manager to account for pricing changes that shift the balance of impressions and revenue onto other pricing rules or pricing options.
The CPM displayed is an average across all ad requests that matched the targeting criteria. As a result, the CPM may be lower than the floor in some unified pricing rule experiments if a large share of the matching traffic is not subject to the pricing option you selected and falls only under a lower-priced rule.
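A small worked example, with a made-up traffic mix, of how the averaged CPM can land below the experiment floor when most matching traffic falls under a lower-priced rule:

```python
# Made-up traffic mix: the displayed CPM averages over all matching requests.

def blended_cpm(segments):
    """Impression-weighted average CPM across (impressions, cpm) segments."""
    total_impressions = sum(impressions for impressions, _ in segments)
    total_revenue = sum(impressions * cpm / 1000 for impressions, cpm in segments)
    return total_revenue / total_impressions * 1000

segments = [
    (300_000, 2.00),  # requests priced at the $2.00 experiment floor
    (700_000, 0.80),  # requests only subject to a lower-priced rule
]
print(round(blended_cpm(segments), 2))  # 1.16, below the $2.00 floor
```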
Experiment criteria
- Unified pricing rule: Select the unified pricing rule that determines the traffic on which this experiment runs.
- Pricing option: Select pricing options that are used for the duration of this experiment.
- Experiment price: The price you want to be applied to traffic in the variation group.
- Affected remnant line items: The estimated number of remnant line items that are affected by changing the experiment floor price relative to the original price.
- Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
- Traffic allocation: The percent of estimated impressions you want to allocate to the experiment style during the experiment. The rest go to the original style. For example, if you allocate 60% of impressions to the experiment style, the original style gets the remaining 40%. Keep this allocation in mind when analyzing the experiment results.
Native styles

The original style that you want to test against a new design is the “control.” The new design is the “experiment” style. You update the experiment’s settings in an attempt to improve performance compared to the control style. You can then analyze the two styles’ performance and determine which settings you want to keep.
Native experiments can only compare two native styles on existing native placements. You can’t compare banner and native ads in the same ad placement.
If an experiment targets a control native style that mixes both programmatic and traditional traffic, your reservation traffic will be affected.
Experiment criteria
- Native style: Select the native style with the format and deal eligibility on which you want to run an experiment.
- Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
- Traffic allocation: The percent of estimated impressions you want to allocate to the experiment style during the experiment. The rest go to the original style. For example, if you allocate 60% of impressions to the experiment style, the original style gets the remaining 40%. Keep this allocation in mind when analyzing the experiment results.
- Add targeting: Select targeting for the experiment. Targeting must match line item targeting to serve successfully.
Ad formats

You can test new ad formats to understand their impact on your inventory. Currently, the web interstitial and anchor formats are available for testing, letting you try them on your web inventory before committing developer resources to tagging.
Experiment criteria
- Format: Select the format for which you want to run an experiment.
- Experimental ad unit: Select the ad unit to be used for trafficking and for understanding experimental performance in reporting. Note that the ad unit is not used for targeting, meaning the format serves on any request that matches the experiment’s targeting. We recommend setting up a new ad unit.
- Target: Define where you want the experiment to run.
- Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
- Traffic allocation: The percent of estimated impressions you want to allocate to the experimental format during the experiment. The rest go to other ads. For example, if you allocate 60% of impressions to the experimental format, other ads get the remaining 40%. Keep this allocation in mind when analyzing the experiment results.
Yield groups (Beta)

Features in Beta phase might not be available in your network. Watch the release notes for when this feature becomes generally available.
Experiment criteria
- Yield group settings: Select the yield group to use for the experiment. A yield group must be set as "Inactive" before it can be used in an experiment (the experiment works by activating it). When used in an experiment, the yield group’s status is identified as "Experimenting." When you end the experiment by applying or declining it, the yield group’s status changes to "Active" or "Inactive," accordingly.
- Experiment period: The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
- Traffic allocation: The percent of impressions you want to allocate to the yield group to compete for during the experiment. Only the impressions targeted by the yield group are affected. The rest are filled as if the yield group were inactive. For example, if you allocate 60% of impressions to the experimental yield group, 40% are unaffected. Keep this allocation in mind when analyzing the experiment results.