Running an experiment allows you to compare one of your ad settings against a variation of that setting to see which performs better. Experiments work by splitting your site's traffic between the original ad setting and the variation so that their performance can be measured side by side. Experiments help you make informed decisions about how to configure your ad settings and can increase your earnings.
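AdSense performs the traffic split for you on its servers, but the underlying idea can be sketched as a deterministic, hash-based assignment. The function name, visitor IDs, and 50/50 share below are illustrative assumptions, not AdSense's actual implementation:

```python
import hashlib

def assign_arm(visitor_id: str, variation_share: float = 0.5) -> str:
    """Deterministically assign a visitor to an experiment arm.

    Hashing the visitor ID yields a stable, roughly uniform bucket in
    [0, 1), so each visitor always sees the same arm and traffic is
    split in approximately the requested proportion.
    """
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000
    return "variation" if bucket < variation_share else "original"

# Simulate 10,000 visitors and check the split is close to 50/50.
arms = [assign_arm(f"visitor-{i}") for i in range(10_000)]
share = arms.count("variation") / len(arms)
print(f"variation share: {share:.1%}")
```

Because assignment is deterministic, a returning visitor sees the same arm on every visit, which keeps the two groups' measurements comparable.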
To view your "Experiments" page:
- Sign in to your AdSense account, and click Optimization > Experiments.
Create your experiment
When you create an experiment you:
- Select the original ad setting that you want to compare the experiment variation against.
- Select which settings you’d like to change for the variation.
- Depending on the experiment type, choose whether you'd like Google to automatically apply the winning setting for you after your experiment has finished.
Tip: Selecting this option can save you time if, for example, you're planning to run lots of experiments.
You can create the following types of experiments:
Monitor your experiment
On the "Experiments" page you can see an overview of your experiments which shows their current status and progress, and highlights any experiments that are "Result ready" (which is when we recommend you choose a winner).
| Status | What it means |
|---|---|
| Running | Your experiment is in progress and collecting data. |
| Result ready | Your experiment has collected sufficient data and is now ready for you to choose a winner. Learn how to choose the winner of an experiment. |
| Complete | You’ve chosen the winner of your experiment and your experiment is finished. |
Choose the winner of your experiment
When your experiment has collected sufficient data, you can choose a winner. We recommend that you wait until your experiment is marked "Result ready" before you choose a setting as the winner.
- If you choose the original as the winner, then your original settings are retained.
- If you choose the variation as the winner, then we apply the settings of the variation to your account.
In either case, we stop splitting your traffic, and your experiment ends.
- If you've opted to let Google choose the winner of your experiment, the best-performing settings will be automatically applied for you. For more information, see Understanding your experiment results.
- If your experiment hasn't collected sufficient data by the time limit (21 days for search style experiments, or 90 days for all other experiment types, unless you set a shorter time), we'll automatically stop it. If you've opted to let Google choose the winner, we'll revert your settings to the way they were before the experiment started. Otherwise, you'll have another 30 days to choose the winner of the experiment or, in the case of search style experiments, until you edit the style, whichever happens first.
- If an Auto optimize experiment can't confidently determine that a change improves performance within the experiment's time limit, we'll retain your original settings.
If you have further questions about experiments, see the Experiments FAQ.
Understanding your experiment results
Experiments evaluate the performance of the variation using the change in revenue attributed to the variation. In an experiment's results card we show the following metrics:
- The estimated monthly earnings scaled to 100% of your traffic.
Note: This metric is an estimate and doesn't necessarily reflect the amount you will ultimately be paid.
- The revenue for traffic tested with the original and the variation.
- The percentage of revenue uplift.
- A confidence interval indicating the range within which the revenue uplift is likely to fall, calculated by AdSense at a 95% confidence level. To see the confidence interval for your experiment, click the icon next to the revenue percentage.
- If your experiment shows there's a high probability that either one setting outperforms the other or there's little difference in performance between the two settings, you'll see the probability that the recommended setting is better than the other setting. For example, "80% chance the variation will perform better than the original". Note that this score is likely to be less accurate if you stopped your experiment early.
Note that older experiments may show different metrics.
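As a rough illustration of the metrics above (uplift, confidence interval, the "chance the variation performs better" score, and earnings scaled to 100% of traffic), here is a sketch using made-up daily revenue figures and a paired bootstrap. AdSense's actual calculation method isn't documented here, so treat every figure and formula below as an assumption for illustration only:

```python
import random

random.seed(42)

# Hypothetical daily revenue (USD) for each arm of a 50/50 split.
# Real figures come from your experiment's results card.
original_daily = [12.1, 11.8, 13.0, 12.4, 11.5, 12.9, 12.2]
variation_daily = [13.4, 12.9, 14.1, 13.2, 12.8, 13.9, 13.5]

def uplift_pct(orig, var):
    """Percentage revenue uplift of the variation over the original."""
    return (sum(var) - sum(orig)) / sum(orig) * 100

point_estimate = uplift_pct(original_daily, variation_daily)

# Approximate a 95% confidence interval by resampling days (keeping
# each day's original/variation figures paired).
days = len(original_daily)
samples = []
for _ in range(10_000):
    idx = [random.randrange(days) for _ in range(days)]
    samples.append(uplift_pct([original_daily[i] for i in idx],
                              [variation_daily[i] for i in idx]))
samples.sort()
ci_low, ci_high = samples[250], samples[9_749]

# Share of resamples with positive uplift: a rough analogue of the
# "X% chance the variation will perform better" score.
p_better = sum(s > 0 for s in samples) / len(samples)

# Earnings scaled to 100% of traffic: each arm saw ~50% of traffic,
# so double its daily average, then project over a 30-day month
# (both the split share and the month length are assumptions here).
monthly_estimate = sum(variation_daily) / days * 2 * 30

print(f"Uplift: {point_estimate:+.1f}% "
      f"(95% CI {ci_low:+.1f}% to {ci_high:+.1f}%)")
print(f"Chance variation beats original: {p_better:.0%}")
print(f"Estimated monthly earnings at 100% traffic: ${monthly_estimate:.2f}")
```

This also shows why stopping early makes the probability score less reliable: with fewer days of data, the resampled uplifts spread much more widely, so the interval and the probability are both less stable.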