Run a manual experiment (Beta)

Experiment using your own criteria to learn how changes may impact your network
This feature is in Beta
Features in the Beta phase might not be available in your network. Watch the release notes for when this feature becomes generally available.

A manual experiment is an experiment you define based on your own criteria and schedule. The experiment runs on a percentage of your network’s actual traffic to test how applying the experimental settings would impact revenue. Experiments you run appear on the "Experiments" page.

To see a list of all of the available manual experiment types and view your active experiments, click Home and then Reporting and then Experiments.

Up to 100 active experiments can exist in your Ad Manager network at any given time. Active experiments include experiments that are running or paused, as well as completed experiments that are waiting for you to take action.

To run a manual experiment, you will:

  • Select the experiment type and criteria you want to use.
  • Define an experiment trial and let the trial run for a specified amount of time.
  • Compare the impression traffic allocated to the "experiment" group with the traffic allocated to the "control" group to see which performed better during the experiment.
  • Run additional trials, if needed.
  • Decide whether to apply the experiment settings to your Ad Manager network.
You can also run an experiment from an opportunity suggested by Ad Manager.

Run an experiment

Complete the following steps to run a manual experiment:

  1. Sign in to Google Ad Manager.
  2. Click Reporting and then Experiments.
  3. Click New experiment in the card for the experiment type you want to use.
  4. Edit the name of the experiment, or use the default name.
  5. Set a start date and an end date for the experiment trial.
    • Start date: Each trial needs to run for at least 7 days to improve the chance of reaching conclusive results. You can schedule a trial to start immediately, or specify a later date. All trials start at 12:00 am and end at 11:59 pm on the scheduled dates in your local time zone, and data is refreshed daily. If you set the start date to the current day, the trial will start within the next hour.
    • End date: Each experiment trial can run up to 31 days total. When the trial ends, you can review and evaluate the results to decide whether you want to apply it as an opportunity, run another trial, or end the experiment and keep the original settings.
  6. Set the percentage of impression traffic to allocate to the experiment.
  7. Click Save.

Best practices for manual experiments

Applying changes to your settings can sometimes affect the way buyers and other market participants behave, such as by changing their buying patterns. To get the most benefit out of the manual experiments you run and capture their potential impact on market behavior, we recommend the following best practices.

Ramp up experiments using trials

Experiments applied to lower percentages of traffic are less risky, but they’re also less likely to encourage behavior change from other market participants.

Once performance reaches an acceptable level at a lower traffic allocation, you can start a second trial with the same settings at a higher allocation to better understand the impact of applying those settings to all of your network’s traffic. This ramp-up is especially important for pricing changes, which are more likely to elicit responses from buyers.

Keep in mind that experiments with lower traffic allocation percentages may not be large enough to change market behavior. As you ramp up to higher traffic allocations, you have a better chance of changing market behavior and measuring the effects of that change.

Run experiments for longer durations

It’s important to run experiments over a long enough period of time for behaviors to change and for the impact of those changes to be measured by the experiment. It often takes 7 or more days for behavior changes to influence revenue.

Consider total revenue in addition to comparisons between different settings

Experiment settings may impact behavior in both the experiment group and the control group. Verify that the revenue in the experiment group is in line with your expectations when considered against your network as a whole.
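
One simple way to do this check is to scale the experiment group’s revenue up by its traffic share and compare the result with what your whole network typically earns over the same period. The sketch below shows the idea; all of the revenue and allocation figures are hypothetical placeholders, not real Ad Manager data.

# Illustrative sketch only; the figures below are made up for the example.

experiment_share = 0.10               # 10% of impressions allocated to the experiment group
experiment_revenue = 1_250.00         # revenue reported for the experiment group (USD)
network_revenue_baseline = 12_000.00  # typical whole-network revenue for the same period (USD)

# Scale the experiment group's revenue up to a whole-network estimate.
projected_network_revenue = experiment_revenue / experiment_share

change = (projected_network_revenue - network_revenue_baseline) / network_revenue_baseline
print(f"Projected network revenue: ${projected_network_revenue:,.2f}")
print(f"Change vs. baseline: {change:+.1%}")
# A projection far above or below the baseline is a signal to investigate
# before applying the experiment settings to your whole network.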

Choose a manual experiment type

The experiment type you choose determines the traffic allocation and criteria used to run a manual experiment.

Native ad styles

Run an A/B test between two native ad styles that differ in visual elements or other settings. Compare the results to determine which would perform better in your network.

The original style that you want to test against a new design is the “control.” The new design is the “experiment” style. You update the experiment’s settings in an attempt to improve performance compared to the control style. You can then analyze the two styles’ performance and determine which settings you want to keep.

Native experiments can only compare two native styles on existing native placements. You can’t compare banner and native ads in the same ad placement.

If an experiment targets a control native style that mixes both programmatic and traditional traffic, your reservation traffic will be affected.

Experiment criteria

  • Native style: Select the native style with the format and deal eligibility on which you want to run an experiment.
  • Experiment period: All experiments must start at least 24 hours in the future. The earliest available start date will be displayed in the date range picker. Scheduled experiments will start on the selected date.
  • Traffic allocation: The percentage of estimated impressions you want to allocate to the experiment style during the experiment. The rest will go to the original style. For example, if you allocate 60% of impressions to the experiment style, the original style will get the remaining 40%. Keep this allocation in mind when analyzing the experiment results (see the example after this list).
  • Add targeting: Select targeting for the experiment. Targeting must match line item targeting to serve successfully.
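
Because the split between the experiment and original styles can be uneven, raw revenue totals alone can be misleading. A quick sketch with hypothetical numbers shows one way to compare the two styles by revenue per thousand impressions (eCPM) instead:

# Illustrative only; impression and revenue figures are hypothetical.

experiment = {"impressions": 600_000, "revenue": 720.00}  # 60% allocation
control = {"impressions": 400_000, "revenue": 500.00}     # 40% allocation

def ecpm(group):
    """Revenue per 1,000 impressions."""
    return group["revenue"] / group["impressions"] * 1_000

print(f"Experiment eCPM: ${ecpm(experiment):.2f}")  # $1.20
print(f"Control eCPM:    ${ecpm(control):.2f}")     # $1.25
# The experiment style earned more total revenue only because it received
# more traffic; per impression, the control style performed better here.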

Unified pricing rules

Unified pricing rule experiments allow publishers to run manual experiments that change the floor price on any unified pricing rule. You can experiment with raising or lowering the floor price of the rule and compare the results.

Overlapping pricing rules: Unified pricing rule experiments evaluate performance across all ad requests that match the targeting criteria of the rule, including ad requests associated with overlapping targeting on other pricing options. This allows Ad Manager to account for pricing changes that shift the balance of impressions and revenue onto other pricing rules or pricing options.

The CPM displayed is an average across all ad requests that matched the targeting criteria. This means the CPM may be lower than the floor on some unified pricing rule experiments if a large amount of the traffic that matched the targeting criteria is not subject to the pricing option you selected and is only subject to a lower-priced rule.
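
As a hypothetical illustration of how the displayed CPM can end up below the experiment floor, suppose most of the matching traffic clears under a lower-priced rule. The shares and CPM values below are made up for the example:

# Hypothetical numbers for illustration only.

# Segments of traffic matching the rule's targeting criteria:
# (share of matching impressions, average CPM that traffic clears at)
segments = [
    (0.30, 2.00),  # traffic subject to the selected pricing option ($2.00 experiment floor)
    (0.70, 0.50),  # overlapping traffic only subject to a lower-priced rule
]

displayed_cpm = sum(share * cpm for share, cpm in segments)
print(f"Displayed average CPM: ${displayed_cpm:.2f}")  # $0.95, below the $2.00 floor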

Experiment criteria

  • Unified pricing rule: Select the unified pricing rule that will determine the traffic on which this experiment will run.
  • Pricing option: Select pricing options that will be used for the duration of this experiment.
  • Experiment price: The price you want to be applied to traffic in the experiment group.
  • Affected remnant line items: The estimated number of remnant line items that will be affected by changing the experiment floor price relative to the original price.
  • Experiment period: All experiments must start at least 24 hours in the future. The earliest available start date will be displayed in the date range picker. Scheduled experiments will start on the selected date.
  • Traffic allocation: The percentage of estimated impressions you want to allocate to the experiment group during the experiment. The rest will go to the control group. For example, if you allocate 60% of impressions to the experiment group, the control group will get the remaining 40%. Keep this allocation in mind when analyzing the experiment results.

Unblock categories

Unblocking categories allows more advertisers and buyers to compete for your inventory, which increases coverage and helps you maximize your revenue.

Experiment criteria

  • Protection: Select the protection to which you want to apply this experiment.
  • Unblock the following category: Select the category you want to unblock during the experiment.
  • Experiment period: All experiments must start at least 24 hours in the future. The earliest available start date will be displayed in the date range picker. Scheduled experiments will start on the selected date.
  • Traffic allocation: The percentage of estimated impressions you want to allocate to the experiment group during the experiment. The rest will go to the control group. For example, if you allocate 60% of impressions to the experiment group, the control group will get the remaining 40%. Keep this allocation in mind when analyzing the experiment results.