To monitor a running experiment or see the results of a concluded experiment, click the Reporting tab at the top of the experiment detail page. You can also see the input data and raw computations on those input data in Google Analytics.
Bayesian inference
Optimize uses Bayesian inference to generate its reports. Bayesian inference is a method of statistical analysis that allows Optimize to continually refine results as more data is gathered. While computationally intensive, Bayesian inference offers four key benefits over more traditional approaches:
- Allows Optimize to compute probabilities directly to better answer questions like, "What's the probability that the new variant is better than the original?"
- Avoids the common misreading of p-values as probabilities, allowing Optimize to provide relevant, actionable data.
- Allows Optimize to determine the probability of any one variant to be the best overall, without the problems associated with hypothesis-testing approaches.
- Allows you to end an experiment as soon as Optimize finds there isn't much more to learn by continuing to run the experiment.
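To make this concrete, here is a minimal sketch of the Bayesian idea behind questions like "what's the probability that the variant beats the original?" It is illustrative only and not Optimize's production model: each arm's conversion rate gets a Beta posterior (Beta(1, 1) prior plus observed data), and the probability is estimated by sampling both posteriors. The conversion counts are invented for the example.

```python
import random

# Illustrative Bayesian A/B sketch (not Optimize's actual implementation).
# Each arm's conversion rate is modeled with a Beta posterior; the chance
# that the variant beats the original is estimated by Monte Carlo sampling.

def beta_posterior(conversions, sessions):
    # Beta(1 + conversions, 1 + non-conversions)
    return 1 + conversions, 1 + (sessions - conversions)

def prob_beats_original(variant, original, draws=100_000, seed=42):
    rng = random.Random(seed)
    a_v, b_v = beta_posterior(*variant)
    a_o, b_o = beta_posterior(*original)
    wins = sum(rng.betavariate(a_v, b_v) > rng.betavariate(a_o, b_o)
               for _ in range(draws))
    return wins / draws

# Invented example data: (conversions, sessions) per arm.
variant = (120, 2400)
original = (100, 2400)
print(f"P(variant beats original) ~= {prob_beats_original(variant, original):.2f}")
```

Because the output is a direct probability rather than a p-value, it can be read exactly as the question asks.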
Learn more about Bayesian inference and the Optimize statistical methodology.
Elements of an Optimize report
Optimize reports contain a wealth of data about your experience, including its status and how your variants performed against your objective. The report includes a summary at the top with key information about its status and actionable data in a series of charts and tables.
Summary header
The summary header displays actionable data about your experiment’s primary objective – including its status, sessions, and recommendation – right at the top of the report. In this example, you see your experiment's status (running), the number of experiment sessions (54k), recommendation (keep it running), and the start and end times.
Status messages
The summary header contains a prominent status message and submessage which provide important information about your experiment. Following are some of the status messages that you may see in the summary header of your experiment's report.
Waiting for data
Optimize needs time to process data provided by Analytics, so this message is normal. Data collection and processing schedules vary by product and you should expect a slight delay between when data is collected and when it gets processed by Optimize.
Allow 1-2 days for the first results to show up. You can verify that your experiment is receiving visits by checking the "active visitors" column in the experiment list, which shows the number of active visitors to your experiment in real time, seconds after you click Start.
No experiment sessions
No experiment sessions were received, indicating that something is wrong. To troubleshoot this message:
- Check Optimize's installation diagnostics to ensure there are no mistakes in your installation. For example, make sure you installed the Optimize snippet on your site and that you linked Optimize to the correct Google Analytics property.
- Check your page targeting rules. For example, "equals" is very rigid and omits visitors with query parameters in the URL. Try the more flexible "matches" rule, which allows query parameters, mixed case, and http/https (see the sketch after this list).
- Ensure that there are visitors on your site that match any audience targeting rules.
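For intuition only, the sketch below shows why an exact "equals" comparison can miss real visits while a looser host-and-path comparison does not. The helper names and URL are made up for illustration; this is not how Optimize implements its matching internally.

```python
from urllib.parse import urlsplit

TARGET = "https://www.example.com/landing"

def equals_rule(url: str) -> bool:
    # Exact string comparison: any query string, case, or scheme change is a miss.
    return url == TARGET

def matches_rule(url: str) -> bool:
    # Compare host and path only, ignoring scheme, case, and query parameters.
    target, visited = urlsplit(TARGET), urlsplit(url)
    return (visited.netloc.lower() == target.netloc.lower()
            and visited.path == target.path)

print(equals_rule("https://www.example.com/landing?utm_source=ads"))   # False
print(matches_rule("https://www.example.com/landing?utm_source=ads"))  # True
print(matches_rule("http://WWW.EXAMPLE.COM/landing"))                  # True
```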
Not enough experiment sessions
Your experiment has sessions, but not enough across all variants to make the data useful. Optimize needs a minimum of one experiment session per day, per variant. Either there is a targeting issue on some of your pages, or you have used variant weights that don't allow enough traffic to each variant – for example, assigning 100% of the traffic to your variant and 0% to your original. Consider adjusting your variant weights to resolve the problem.
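As a rough illustration of the weights problem (invented numbers, not Optimize code), simulating visitor assignment shows that a 100/0 split leaves the original with no sessions at all, so there is nothing to compare:

```python
import random

def assign_sessions(weights, n_visitors=1000, seed=0):
    # Assign each simulated visitor to a variant according to its weight.
    rng = random.Random(seed)
    names = list(weights)
    counts = {name: 0 for name in names}
    for _ in range(n_visitors):
        chosen = rng.choices(names, weights=[weights[n] for n in names])[0]
        counts[chosen] += 1
    return counts

print(assign_sessions({"original": 0, "variant": 100}))  # original never gets a session
print(assign_sessions({"original": 50, "variant": 50}))  # roughly even split
```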
Keep your experiment running
You'll see this message under two scenarios:
- Optimize won't declare an outcome unless an experiment has received data for at least two weeks. You must leave your experiment running long enough to account for cyclical variations in web traffic during the week – even for high-traffic sites.
- Collected data doesn't provide enough confidence to declare an outcome yet; more data is needed. If the experiment is still running, you may want to leave it running longer. Run your experiment until at least one variant has a 95 percent Probability to beat Original (PBO).
No leader found
There is enough data to conclude that there is no leader (neither a variant nor the original). This means that if you decide to deploy or implement a variant, you won't do worse than with your original. It's safe to end the experiment now since the outcome is unlikely to change and it would be better to try a new experiment.
At least one variant is better than the original
One or more variants already seem to be beating the original, but there isn't enough data to conclude which is best. You can either (1) deploy the one with the highest probability to be best, or if the experiment is still running, (2) wait longer to find the absolute best.
The original is the leader
All of your variants perform worse than the original. It's better to keep the original and not deploy any of the variants. It's safe to end the experiment now since the outcome is unlikely to change and it would be better to try a new experiment.
One or more leaders found
There is enough data to conclude that some of your variants perform better than the original. There is also enough data to conclude which of them is the absolute best. You can deploy the variant with the highest probability to be best (PBB) or any of the other leading variants, because they all perform better than the current original. It's safe to end the experiment now since the outcome is unlikely to change.
A variant is the leader
There is enough data to conclude that only one of your variants performs better than the original. Deploy or implement the leading variant on your site. It's safe to end the experiment now since the outcome is unlikely to change.
Objective card
The objective card displays a report on the current status of your experiment for a given objective. The example below shows the results for the Clicked 'Claim offer' objective, where Optimize concludes that the Original has an 86% probability to be best (PBB). The objective card includes two types of data, Observed data and Optimize analysis.
Observed data
On the left of the page is Observed data, which comes from the linked Google Analytics property, as of the date in the timestamp in the upper right.
Optimize analysis
On the right of the page is the Optimize analysis, which includes probability to be best (PBB), probability to beat original (PBO), modeled conversion rate, and modeled improvement. A Google Analytics timestamp and a View in Analytics link are in the upper right of the report. The View in Analytics option is only available for Universal Analytics properties.
In this example, you can see that the "special offer" variant is underperforming the original.
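For illustration, probability to be best can be estimated the same way as probability to beat original: sample every arm's Beta posterior and count how often each arm produces the highest draw. The counts below are invented and the sketch is only a simplified stand-in for Optimize's analysis.

```python
import random

def prob_to_be_best(observed, draws=100_000, seed=7):
    # observed maps arm name -> (conversions, sessions); Beta(1, 1) prior per arm.
    rng = random.Random(seed)
    posteriors = {name: (1 + conv, 1 + sess - conv)
                  for name, (conv, sess) in observed.items()}
    wins = dict.fromkeys(observed, 0)
    for _ in range(draws):
        samples = {name: rng.betavariate(a, b) for name, (a, b) in posteriors.items()}
        wins[max(samples, key=samples.get)] += 1
    return {name: count / draws for name, count in wins.items()}

# Invented data roughly matching the example: the original leads the variant.
print(prob_to_be_best({"Original": (100, 2400), "Special offer": (88, 2400)}))
```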
Modeled conversion rate chart
Below your experiment's data and analysis is a chart showing each variant's modeled conversion rate as more data is collected over time. This chart graphs how your variants have performed to date against an objective picked from the drop-down menu in the upper left.
The colored areas in the graph represent the performance range that your original and variant(s) are likely to fall into 95% of the time. The line in the middle of a range shows the median value of the range.
At the start of an experiment there's greater uncertainty of each variant's performance, resulting in wider intervals (taller shaded areas). In most cases, each variant's conversion rate range will narrow over time as the Optimize model takes more data into account, allowing for a better determination of how the variant is performing against the original.
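The shaded band corresponds to a 95% credible interval and the center line to the median of the modeled conversion rate. Under the same simplified Beta-posterior assumption used in the sketches above (illustrative only, not Optimize's model), the band and center line for one arm could be computed like this with SciPy:

```python
from scipy.stats import beta

def modeled_rate_band(conversions, sessions, level=0.95):
    # Beta(1, 1) prior updated with observed data; return (lower, median, upper).
    a, b = 1 + conversions, 1 + (sessions - conversions)
    lower, upper = beta.interval(level, a, b)
    return lower, beta.median(a, b), upper

# With little data the band is wide; with more data it narrows.
print(modeled_rate_band(5, 100))      # early in the experiment
print(modeled_rate_band(120, 2400))   # later, with more sessions
```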
Analytics view filters
Universal Analytics only
The data used in Optimize reports is subject to any filters applied to the linked Analytics view. However, view filters are not taken into account when determining whether to serve an experiment. For example, if the experiment is linked to an Analytics view that excludes internal IP addresses, internal users will be excluded from your Optimize experiment data. However, those users may still see the experiment execute if they meet the targeting requirements of the experiment. Generally, it’s a good idea to link the experiment to a view with as few filters as possible.
No sampling
Optimize data is pulled from the underlying Google Analytics data tables and is not sampled, and Optimize does not impose any sampling in the Optimize interface. This means that any data you see in Optimize is unsampled, regardless of whether you use Optimize or Optimize 360, Analytics or Analytics 360.
Learn more about experiment dimensions and accessing your Optimize data in Google Analytics.