To see a list of all available manual experiment types and view your active experiments, click Home, then Reporting, then Experiments.
To run a manual experiment:
- Select the experiment type and criteria you want to use.
- Define an experiment trial and let the trial run for a specified amount of time.
- Edit conditions to automatically pause the experiment, or disable auto-pause for the experiment.
- Compare how the impression traffic allocated to the "experiment" group performed against the traffic allocated to the "control" group during the experiment.
- Run more trials as needed.
- Decide whether to apply the experiment settings to your Ad Manager network.
Run an experiment
Complete the following steps to run a manual experiment.
Note: The steps differ slightly for each experiment type. For more details, go to Choose a manual experiment type below.
- Sign in to Google Ad Manager.
- Click Reporting, then Experiments.
- Click New experiment in the card for the experiment type you want to use.
- Name your experiment so you can refer to results more easily.
- Set a start date and an end date for the experiment trial.
- Start date: Each trial needs to run for at least 7 days to improve the chance of reaching conclusive results. You can schedule a trial to start immediately, or choose a later date. All trials start at 12:00 AM and end at 11:59 PM on the scheduled dates in your local time zone. Data is refreshed daily. If you set the start date to the current day, the trial will start within the next hour.
- End date: Each experiment trial can run up to 31 days total. When the trial ends, you can review the results to decide if you want to apply it as an opportunity, run another trial, or end the experiment and keep the original settings.
- Set the percentage of impression traffic to allocate to the experiment.
- Set up to 10 auto-pause conditions, selecting Cumulative or Daily for each condition:
- Cumulative: The amount of revenue loss, accumulated over the duration of the trial, that will pause the experiment.
- Daily: The amount of revenue loss within the last full day of data that will pause the experiment.
Note: Auto-pause checks once per day whether results meet the specified conditions. To ensure experiments aren’t paused before they have had a chance to collect data, trials won’t be paused within the first 24 hours of starting. To avoid sampling errors, trials are only paused based on statistically significant results: for example, a trial is paused when the lower bound of a 95% confidence interval meets the necessary threshold.
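Ad Manager doesn’t document the exact statistical test it uses, but the idea of pausing only when the lower bound of a confidence interval clears the threshold can be sketched. The function below is purely illustrative; its name, inputs, and normal-approximation math are assumptions, not Ad Manager’s implementation:

```python
import statistics

def should_auto_pause(control_daily, experiment_daily, loss_threshold, z=1.96):
    """Illustrative cumulative auto-pause check (not Ad Manager's actual logic).

    control_daily / experiment_daily: daily revenue figures for each group,
    already normalized so the two groups are directly comparable.
    Pauses only when the *lower* bound of a 95% confidence interval on the
    loss still exceeds the threshold, i.e. the loss is unlikely to be noise.
    """
    losses = [c - e for c, e in zip(control_daily, experiment_daily)]
    if len(losses) < 2:
        return False  # too little data: mirrors the 24-hour grace period
    mean_loss = statistics.mean(losses)
    sem = statistics.stdev(losses) / len(losses) ** 0.5
    ci_lower = mean_loss - z * sem
    # Cumulative loss implied by the conservative (lower) bound
    return ci_lower * len(losses) > loss_threshold

# A consistent ~$30/day loss over 7 days is significant and triggers a pause;
# a noisy series with a small average loss does not.
print(should_auto_pause([100] * 7, [70] * 7, loss_threshold=100))
```

The key point the sketch captures is the last clause: a large but noisy observed loss won’t trigger a pause, because the lower bound of its confidence interval stays below the threshold.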
- Click Save.
Best practices for manual experiments
Applying changes to your settings can affect how buyers and other market participants behave, for example by shifting their buying patterns. To get the most benefit from the manual experiments you run, and to capture their potential impact on market behavior, we recommend the following best practices.
Ramp up experiments using trials
Experiments applied to lower percentages of traffic are less risky, but they’re also less likely to encourage behavior change from other market participants.
Once performance reaches an acceptable level at a lower traffic allocation, you can start a second trial with the same settings at a higher allocation to better understand the impact of applying those settings to all of your network’s traffic. This is especially true for pricing changes, which are more likely to elicit responses from buyers.
It’s important to consider that experiments with lower traffic allocation percentages may not have strong enough results to change behavior. As you ramp up to higher traffic allocations you have a better chance of changing market behavior and measuring the effects of that behavior change.
Run longer experiments
It’s important to run experiments over a long enough period of time for behaviors to change and for the impact of those changes to be measured by the experiment. It often takes 7 or more days for behavior changes to influence revenue.
Consider total revenue in addition to comparisons between different settings
Experiment settings may affect behavior in both the experiment group and the control group. Verify that revenue in the experiment group is in line with your expectations when considered against your network as a whole.
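One rough way to put experiment-group revenue in network context is to scale each group’s revenue by its traffic share before comparing. This sketch assumes naive linear scaling, which real market behavior may not follow; the function name and all figures are invented for illustration:

```python
def network_lift(exp_revenue, ctl_revenue, allocation):
    """Estimated relative revenue lift if the experiment settings were
    applied to all traffic, under a naive linear-scaling assumption."""
    exp_rate = exp_revenue / allocation          # revenue per unit of traffic
    ctl_rate = ctl_revenue / (1 - allocation)
    return exp_rate / ctl_rate - 1

# Hypothetical trial: 10% experiment allocation earned $130
# while the other 90% of traffic earned $1,080.
print(f"{network_lift(130.0, 1080.0, 0.10):+.1%}")  # prints +8.3%
```

Treat a projection like this as a sanity check only: as noted above, behavior at 10% allocation may not persist when the settings reach 100% of traffic.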
Choose a manual experiment type
The experiment type you choose determines the traffic allocation and criteria used to run a manual experiment.
Native ad styles
The original style that you want to test against a new design is the “control.” The new design is the “experiment” style. You update the experiment’s settings in an attempt to improve performance compared to the control style. You can then analyze the two styles’ performance and determine which settings you want to keep.
Native experiments can only compare two native styles on existing native placements. You can’t compare banner and native ads in the same ad placement.
If an experiment targets a control native style that mixes both programmatic and traditional traffic, your reservation traffic will be affected.
Experiment criteria
- Native style: Select the native style with the format and deal eligibility on which you want to run an experiment.
- Experiment period: All experiments must start at least 24 hours in the future. The earliest available start date will be displayed in the date range picker. Scheduled experiments will start on the selected date.
- Traffic allocation: The percent of estimated impressions you want to allocate to the experiment style during the experiment. The rest will go to the original style. For example, if you allocate 60% of impressions to the experiment style, the original style will get the remaining 40%. Keep this allocation in mind when analyzing the experiment results.
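Because the two styles receive different shares of traffic, raw revenue totals aren’t directly comparable. One simple, hypothetical way to normalize is revenue per 1,000 impressions (eCPM); the function name and all numbers below are invented for illustration:

```python
def per_mille_rate(revenue, impressions):
    """Revenue per 1,000 impressions (eCPM), so groups with
    different traffic allocations can be compared fairly."""
    return 1000 * revenue / impressions

# Hypothetical trial with 60% of impressions on the experiment style:
exp_ecpm = per_mille_rate(revenue=540.0, impressions=600_000)  # 0.90
ctl_ecpm = per_mille_rate(revenue=420.0, impressions=400_000)  # 1.05
# The experiment style earned more in total only because it got more
# traffic; per impression, the original style performed better here.
```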
- Add targeting: Select targeting for the experiment. Targeting must match line item targeting to serve successfully.
Unified pricing rules
Experiment criteria
- Unified pricing rule: Select the unified pricing rule that will determine the traffic on which this experiment will run.
- Pricing option: Select pricing options that will be used for the duration of this experiment.
- Experiment price: The price you want to be applied to traffic in the experiment group.
- Affected remnant line items: The estimated number of remnant line items that will be affected by changing the experiment floor price relative to the original price.
- Experiment period: All experiments must start at least 24 hours in the future. The earliest available start date will be displayed in the date range picker. Scheduled experiments will start on the selected date.
- Traffic allocation: The percent of estimated impressions you want to allocate to the experiment group during the experiment. The rest will use the original pricing. For example, if you allocate 60% of impressions to the experiment group, the original pricing will apply to the remaining 40%. Keep this allocation in mind when analyzing the experiment results.
Unblock categories
Experiment criteria
- Protection: Select the protection to which you want to apply this experiment.
- Unblock the following category: Select the category you want to unblock during the experiment.
- Experiment period: All experiments must start at least 24 hours in the future. The earliest available start date is displayed in the date range picker. Scheduled experiments start on the selected date.
- Traffic allocation: The percent of estimated impressions you want to allocate to the experiment group during the experiment. The rest keep the original protection settings. For example, if you allocate 60% of impressions to the experiment group, the original settings apply to the remaining 40%. Keep this allocation in mind when analyzing the experiment results.