Test with confidence using Google Ads drafts and experiments
Analyze results and choose experiment winners
You’re testing to find out what works. Once you analyze the test results of a Google Ads experiment, update your strategy and continue building on that insight.
Wait for enough data to be confident in your results.
When you’re testing in a Google Ads experiment, the results bar will give you clear indications of that experiment’s progress.
The little arrows next to your experiment results tell you how confident you can be in those results. Three arrows indicate the highest confidence (greater than 99.9% statistical significance), while no icon simply means there hasn’t been enough data yet. Monitor the metric that matters most to you, and wait for significant data.
Why does this matter? You can expect your campaign to perform similarly in the future if you are confident in an experiment’s outcome. More data now means more confidence going forward that you’re making the right decisions.
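To make those confidence thresholds concrete, here’s a minimal sketch of the standard two-proportion z-test, which is the kind of calculation behind a statistical-significance readout. Google Ads computes this for you internally, and its exact method isn’t public; the function, the conversion counts, and the click counts below are illustrative assumptions.

```python
# Illustrative sketch only: Google Ads computes experiment confidence
# internally. This shows the idea behind "statistical significance"
# using a standard two-proportion z-test.
from math import sqrt, erf

def significance(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided confidence that the two conversion rates truly differ."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = abs(p_a - p_b) / std_err
    return erf(z / sqrt(2))  # standard-normal two-sided confidence

# A 4% vs. 6% conversion rate on 1,000 clicks per arm is suggestive,
# but falls short of the 99.9% "three arrows" threshold.
low_volume = significance(40, 1000, 60, 1000)

# The same rates at 10x the click volume clear it comfortably.
high_volume = significance(400, 10000, 600, 10000)
```

The point of the comparison: identical conversion rates can fall either side of the confidence bar depending purely on how much data has accumulated, which is why waiting matters.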
Check for outliers within your top-level experiment outcomes.
Data in a Google Ads experiment, as in many other types of testing, is summarized at a high level. Don’t stop there, though: high-volume ad groups or keywords can skew results for an entire campaign.
This check often takes a matter of seconds. Scroll through your ad groups and see whether their performance aligns with the top-level results. If there aren’t any outliers, you’re done. If some areas performed differently than the headline result, drill in to understand why. Experiments can help inform your top-level strategy, but when analyzing Google Ads results, don’t overlook outliers that may be obscured by the overall numbers.
Implement what you’ve learned in your future campaigns.
The most critical step of any experiment is updating your tactics based on what you’ve learned. You’re not testing for fun; you’re testing to improve performance. Far too many experiments gather data and then conclude with no action taken. Be sure to take the right action after your experiment.
In a Google Ads experiment you have the option to update your original campaign or convert the experiment to a new campaign. When you’ve run a successful experiment, this choice largely comes down to preference. Most of the time, though, you’ll probably want to update your original campaign: doing so ports all changes over to your current campaign, preserving that campaign’s history.
Converting to a new campaign, on the other hand, pauses your original campaign and continues your experiment as a “normal” campaign with the same dates and budget as your original campaign. However, it will add another campaign to your account. While that new campaign won’t have the original campaign’s history, you’ll retain any data from when the experiment was running. Consider new campaigns if you’re testing a new campaign structure, or if you want to preserve findings from a learning period on an experiment with a new bidding strategy.
Note, however, that if you update your original campaign with a successful experiment, you’ll lose the data that accumulated over the course of the experiment.
Keep records of your experiments
As you continue testing in your account, prioritizing what to test will be easier if you keep records of tests you’ve performed in the past. You can do this either via the names of your experiments in Google Ads, or even through a simple spreadsheet that tracks progress across any number of tests. Well-documented results allow you to return for insights long after any experiments have ended.
Your future self will thank you if your previous experiments are named logically and consistently. Dates and the elements tested are particularly important to include. Make it easy to scroll through and filter your results.
Testing in Google Ads is crucial when optimizing your account. It helps you learn lessons and make decisions based on data. As you experiment with your own campaigns, take full advantage of Google Ads campaign drafts and experiments to stay organized and consistent. And don’t forget to implement those changes: you test to see better performance, and no test is complete without an executed change in the real world.