Test with confidence using Google Ads drafts and experiments
Create experiments that produce clear results
Testing changes to your campaigns is important, and it’s just as important to design Google Ads experiments that deliver insights as quickly as possible. A few steps will keep your experiments fast and insightful.
Focus your tests on one variable at a time
Decide what you’re testing before the experiment starts, and test only one variable at a time. If an experiment changes multiple elements at once, it’s impossible to isolate the effect of any single change.
Consider this scenario: you switch to Target CPA bidding and, at the same time, update all of your messaging to include a new offer. Conversions rise 15% during the test period. That’s a great result, but what should you attribute the 15% lift to: the new bidding strategy or the new messaging?
Plan your Google Ads experiments in stages if you’re unsure whether the changes will help. First try out a new bidding strategy. See how performance changes, then move on to the next change. By creating separate experiments for separate variables, you can understand the effect of each one and use those results to inform future tests.
Design tests to reach statistical significance as quickly as possible
When you’re creating a Google Ads experiment, your choices will dictate how quickly tests conclude. A 50/50 split is typically the fastest way to significant results. If you’re unsure about a change and worried about hurting performance, you can reduce the percentage of traffic allocated to your experiment. An uneven split may make your test take longer to reach significance, but it lets you control how much any single test affects your overall campaign performance.
The overall volume driven by the element you’re testing can also help you decide what split to use. If a certain element already drives a lot of impressions or clicks, you might not require an even split to move quickly.
Your traffic split is important, but another consideration is the severity of the changes that you’re making. Subtle changes to your ads or bids are going to take longer to show a difference in performance than more noticeable changes. Only test what you’re comfortable testing, but recognize that if you’re being conservative in the changes you’re making, it’ll probably take longer to identify performance differences.
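To make these tradeoffs concrete, here’s a rough sketch using the standard normal-approximation sample-size formula for comparing two conversion rates. All of the numbers (1,000 clicks per day, a 5% baseline conversion rate, the lifts being tested) are hypothetical illustrations, and this is not how Google Ads itself computes significance — it’s only meant to show why an uneven split and subtle changes both stretch out a test:

```python
import math

def required_sample_per_arm(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate clicks needed per arm to detect a relative lift in
    conversion rate (two-sided alpha=0.05, power=0.80; illustrative
    normal approximation, not Google Ads' internal calculation)."""
    p_test = p_base * (1 + lift)
    p_bar = (p_base + p_test) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p_base * (1 - p_base)
                               + p_test * (1 - p_test))) ** 2
         / (p_base - p_test) ** 2)
    return math.ceil(n)

def days_to_significance(daily_clicks, experiment_share, p_base, lift):
    """Days until the *smaller* arm accumulates the required sample.
    The smaller arm is the bottleneck in an uneven split."""
    n = required_sample_per_arm(p_base, lift)
    smaller_share = min(experiment_share, 1 - experiment_share)
    return math.ceil(n / (daily_clicks * smaller_share))

# Hypothetical campaign: 1,000 clicks/day, 5% baseline conversion rate.
# A 50/50 split finishes faster than a 90/10 split, and a bold change
# (20% lift) is detected far sooner than a subtle one (5% lift).
for share in (0.5, 0.1):
    for lift in (0.05, 0.20):
        d = days_to_significance(1000, share, 0.05, lift)
        print(f"split {share:.0%}/{1 - share:.0%}, lift {lift:.0%}: ~{d} days")
```

The pattern the printout shows is the point of the section: halving the smaller arm’s traffic roughly doubles the wait, and shrinking the expected effect grows the required sample much faster than that.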
Pick one metric to gauge the success of your tests
As you decide the traffic allocation for a test, you’ll also want to decide on a metric to gauge success. This will most often be the main goal for your account, such as total sales or cost per acquisition, but that won’t always be the right choice; for certain tests a secondary metric may be the crucial one.
It’s important to pick that metric before a test even begins. Each experiment produces a large amount of data, and you’re the one who has to sort through it. Trying to balance multiple metrics makes it difficult to pick a winner. When the time comes to end a test, let your chosen metric decide. Other secondary metrics are still helpful for supplementary insights, but be consistent in how you choose a winner.
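As an illustration of letting one metric decide, here’s a simple two-proportion z-test on conversion rate. The figures (400 conversions from 10,000 clicks in the original, 460 from 10,000 in the experiment) are hypothetical, and this standalone test is only a sketch of the idea — Google Ads reports statistical significance for experiments on its own:

```python
import math

def conversion_rate_z_test(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided two-proportion z-test on a single chosen metric
    (conversion rate). Returns (relative lift of B over A, p-value).
    Illustrative sketch, not Google Ads' own significance reporting."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    # p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b / p_a - 1, p_value

# Hypothetical results: original vs. experiment campaign.
lift, p = conversion_rate_z_test(400, 10_000, 460, 10_000)
print(f"lift: {lift:.1%}, p-value: {p:.3f}")
if p < 0.05:
    print("Experiment wins on the chosen metric.")
else:
    print("No significant difference yet; keep the test running.")
```

The decision rule is deliberately one-dimensional: one pre-chosen metric, one threshold, one verdict. Secondary metrics can explain *why* the winner won, but they don’t get a vote.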
Avoid changing campaigns while experiments are running
It’s technically possible to make changes to the original and experiment campaigns while an experiment is active, but it’s a bad idea. Mid-experiment changes can skew your results, and unless you’re exceedingly careful and organized about them, they’ll probably bias the entire test. If you must make changes, mirror them across the original campaign and your experiment so that you can still trust the lessons from your test.