Finding success with Smart Bidding

Test your bid strategies


Keep each bidding test simple and only change one variable at a time

Google Ads experiments allow you to clearly evaluate the results of your test. For example, you can run an experiment to compare Target CPA automated bidding to manual bid changes or your third-party bidding solution.

Tip

Keep things simple! You want your tests to reveal clearly whether automated bidding is working well, so resist the urge to also test new ads or landing pages at the same time.

Choose the largest campaign that you’re comfortable experimenting with

  • Pick the largest campaign that you’re comfortable experimenting with. In testing, more data means more confidence.
  • Test campaigns that have been running for a while. Automated bidding relies on account history, so it performs better in accounts with more past data to draw on.
  • Keep your experiment split at 50% when possible. This will help you generate statistically significant results as quickly as possible.
  • Aim for campaigns and experiment splits that would give you at least 30 conversions in the experiment arm over the last 30 days. To check, multiply your experiment split by the campaign's monthly conversion volume and make sure the result is at least 30. For example, if you put only 10% of campaign traffic into your experiment, you'll want a campaign that generates 300 conversions per month. An experiment with a 50% split, by contrast, would only need 60 conversions. (The sketch after this list works through the same arithmetic.)
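
To make the volume check concrete, here's a minimal sketch in plain Python (not a Google Ads API call; the function name and threshold constant are purely illustrative) of the arithmetic described above:

```python
import math

# From the guidance above: aim for at least 30 conversions
# in the experiment arm over 30 days.
MIN_EXPERIMENT_CONVERSIONS = 30

def min_campaign_conversions(experiment_split: float) -> int:
    """Monthly conversions the whole campaign needs so that the
    experiment arm alone sees at least 30 conversions."""
    if not 0 < experiment_split <= 1:
        raise ValueError("experiment_split must be in (0, 1]")
    return math.ceil(MIN_EXPERIMENT_CONVERSIONS / experiment_split)

print(min_campaign_conversions(0.10))  # 300, matching the 10% example
print(min_campaign_conversions(0.50))  # 60, matching the 50% example
```

A larger split lowers the conversion volume a campaign needs, which is another reason to keep the split at 50% when you can.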

Tip

You can also find campaigns that are a good fit for Smart Bidding on the Recommendations page.

Start with targets that align with your historical CPA or ROAS

When you start your test, use your historical average CPA (or ROAS, for value-based strategies) for that campaign as your performance target. This gives you the cleanest possible performance comparison with your current bidding strategy.
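
As a minimal sketch of deriving those starting targets (plain arithmetic with hypothetical numbers, not a Google Ads API call):

```python
# Hypothetical lookback-window totals for the campaign under test.
cost = 5_000.00               # ad spend
conversions = 125             # conversions
conversion_value = 15_000.00  # total conversion value

historical_cpa = cost / conversions        # 40.00 -> starting Target CPA
historical_roas = conversion_value / cost  # 3.0 -> starting Target ROAS

print(f"Starting Target CPA: {historical_cpa:.2f}")
# Google Ads expresses Target ROAS as a percentage (here, 300%).
print(f"Starting Target ROAS: {historical_roas:.0%}")
```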

Tip

Account for conversion delay when assessing performance, and exclude the still-incomplete period from your analysis. For example, if it typically takes seven days for users to convert after an ad click, don't include the most recent week of performance when evaluating tests; your data will still be missing conversions from those seven days.

As you get your tests up and running, plan for a one-week window while automated bidding learns about your account. Following this learning period, your actual test should last a few weeks. Once your test concludes, exclude that first week of data from your analysis when evaluating the success of your bid strategy, and wait until your standard conversion delay has passed before picking a winner or adjusting what you're doing. Experiments will let you know when a test has reached statistical significance. (The sketch below shows one way to mark out the clean evaluation window.)
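
Here's a minimal sketch (a hypothetical helper, not part of Google Ads) of trimming the learning period and the conversion delay off a test's date range before evaluating it:

```python
from datetime import date, timedelta

def evaluation_window(test_start: date, test_end: date,
                      learning_days: int = 7,
                      conversion_delay_days: int = 7) -> tuple[date, date]:
    """Dates to include when judging the test: skip the learning
    period at the start and the conversion-delay window at the end."""
    start = test_start + timedelta(days=learning_days)
    end = test_end - timedelta(days=conversion_delay_days)
    if start >= end:
        raise ValueError("Test too short to leave a clean evaluation window")
    return start, end

# Example: a five-week test leaves three weeks of clean data.
print(evaluation_window(date(2024, 3, 1), date(2024, 4, 5)))
# (datetime.date(2024, 3, 8), datetime.date(2024, 3, 29))
```

Adjust conversion_delay_days to match your own typical click-to-conversion lag.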

If your automated bidding test was a success, scale the winning strategy to other campaigns.

Next: Evaluate your automated bid strategy’s performance
Sign up for the Best Practices newsletter to get advanced Google Ads tips and updates right to your inbox.