Compare bid optimization systems
To compare different bid optimization systems, you can do the following:
Identify campaigns with similar historical performance to include in a randomized A/B test:
Select campaigns with comparable historical performance (such as similar conversion rates and click volumes). A DS bid strategy performs best when the campaigns it manages share similar performance characteristics.
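If you have a performance export handy, a short script can shortlist candidate campaigns. Below is a minimal sketch, assuming a CSV export with hypothetical "campaign", "clicks", and "conversions" columns; the tolerance values are placeholders to tune for your account.

```python
import csv

def load_campaigns(path):
    # Read the export and derive a conversion rate per campaign.
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for r in rows:
        r["clicks"] = int(r["clicks"])
        r["conv_rate"] = int(r["conversions"]) / r["clicks"] if r["clicks"] else 0.0
    return rows

def similar_campaigns(rows, ref, rate_tol=0.25, click_tol=0.5):
    """Keep campaigns whose conversion rate and click volume fall within
    a relative tolerance of the reference campaign's."""
    out = []
    for r in rows:
        if r["campaign"] == ref["campaign"]:
            continue
        rate_ok = abs(r["conv_rate"] - ref["conv_rate"]) <= rate_tol * ref["conv_rate"]
        clicks_ok = abs(r["clicks"] - ref["clicks"]) <= click_tol * ref["clicks"]
        if rate_ok and clicks_ok:
            out.append(r["campaign"])
    return out
```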
Before/after comparison during stable periods:
During stable periods, when there aren’t large seasonal changes (e.g., not around Valentine’s Day), compare data from before the bid optimization system was implemented with data from at least two weeks after it was applied.
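As an illustration, here is one way such a before/after comparison might be scripted. The file name, column names ("date", "conversions", "cost"), and cutover date are all assumptions for this sketch.

```python
import pandas as pd

# Daily performance export (hypothetical schema).
df = pd.read_csv("daily_performance.csv", parse_dates=["date"])

cutover = pd.Timestamp("2017-03-01")  # day the new bid system took over
before = df[(df["date"] >= cutover - pd.Timedelta(days=14)) & (df["date"] < cutover)]
# Skip the first two weeks after cutover while the system learns,
# then compare a stable two-week window.
after = df[(df["date"] >= cutover + pd.Timedelta(days=14)) &
           (df["date"] < cutover + pd.Timedelta(days=28))]

for name, window in [("before", before), ("after", after)]:
    cpa = window["cost"].sum() / window["conversions"].sum()
    print(f"{name}: conversions={window['conversions'].sum()}, CPA=${cpa:.2f}")
```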
Use AdWords campaign drafts and experiments to create copies of the campaigns and run an A/B test:
AdWords campaign drafts and experiments allows you to split your campaign traffic into two randomized arms. You can put one arm in the DS Performance Bidding Suite and the other in a different system, then compare the A/B test results.
Learn how:
It's recommended that you allow a DS bid strategy two weeks to learn from the campaigns' performance before you compare the results.
- In AdWords, create a draft of each of the campaigns you want to test.
It's recommended that you follow these do's and don'ts.
- Do include the word "Experiment" or something similar in the experiment's name to help you keep track of which campaigns are experiments.
- Do allocate 50% of the campaign's traffic to the experiment to prevent bias in the test. The traffic will be evenly distributed between the campaigns that are managed by the different bid optimization systems.
- Do plan to apply a DS bid strategy to the original campaigns and the other optimization system to the experiments, as described later in these steps. If you're happy with the DS bid strategy's performance when you end the test, you won't need to make any changes to the original campaigns.
- Don't change anything in the campaign drafts. The only differences between the original campaign and the experiment should be the name and the bid optimization system that you will apply later in DS.
- Don't set an end date for the experiment. This gives the test enough time to collect the data needed for statistically meaningful results (one way to check is sketched after these steps). You can stop the experiment at any time.
- In DS, sync the account.
Experiments look like regular campaigns in DS. Don't make any changes to the campaigns or experiments during the A/B test other than applying bid optimization systems.
- Create a DS bid strategy and then apply it to a portfolio of original campaigns.
Note that instead of applying a DS bid strategy to the original campaigns, you could apply the DS bid strategy to the experiments. If you want to keep using the DS bid strategy after you end the test, you'll need to remove the DS bid strategy from the experiments and apply it to the original campaigns.
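Once both arms have accumulated data, you can check whether the difference between them is statistically meaningful, as mentioned in the do's and don'ts above. Below is a minimal sketch of a standard two-proportion z-test on conversion rate; the click and conversion counts are placeholders to replace with numbers from your DS reports.

```python
import math

def conversion_rate_ztest(clicks_a, conv_a, clicks_b, conv_b):
    """Two-proportion z-test comparing conversion rates of arm A and arm B."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    return p_a, p_b, (p_b - p_a) / se

# Placeholder counts: arm A = original campaigns, arm B = experiments.
p_a, p_b, z = conversion_rate_ztest(clicks_a=12000, conv_a=360,
                                    clicks_b=11800, conv_b=401)
print(f"arm A rate={p_a:.3%}, arm B rate={p_b:.3%}, z={z:.2f}")
print("significant at 95%" if abs(z) > 1.96
      else "not yet significant; keep collecting data")
```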
When running a comparison, please keep the following in mind:
Test one change at a time:
During a test, don’t change creatives, landing pages, campaign settings, and so on. You want to minimize the number of variables that change during the test period. For example, if you switch bid management from manual to the Performance Bidding Suite and change your landing pages at the same time, you won’t know whether conversion volume changed because of the new landing pages or because the Performance Bidding Suite took over your bids.
Set realistic goals:
Following up on the previous tip, “Test one change at a time”, use historical performance as a guide when setting goals.
Learn more:
- For example, if you set goals in terms of cost per action (CPA), first look at the historical CPA for the campaigns you’re about to put into a DS bid strategy. If you see the CPA has been $100 over the last month, it’s a good idea to enter $100 as the goal. This way you’re testing one change at a time: keeping the CPA constant while changing the agent that manages the bids, so you can monitor the impact on volume.
- However, in some cases you may need to set a different CPA goal if your intent is to lower the CPA. In this case, make sure the new goal is realistic. You could reduce the CPA by 10%, observe the impact, and repeat as necessary. For example, if the historical CPA is $100 and the historical average position is 5.3, reduce the CPA by 10% to $90. If that CPA change doesn’t produce the required results, reduce it by another 10% to $81, observe the results, etc.
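To make the step-down schedule concrete, here is the arithmetic from the example above as a tiny sketch:

```python
# Step the CPA target down by 10% per iteration, observing results between steps.
cpa = 100.0
for step in range(3):
    cpa *= 0.9
    print(f"after step {step + 1}: target CPA = ${cpa:.2f}")
# after step 1: target CPA = $90.00
# after step 2: target CPA = $81.00
# after step 3: target CPA = $72.90
```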
Make sure the bid strategies are not constrained:
You need to give the Performance Bidding Suite room to adjust the bids. If the bid strategy is too constrained by bid or position limits, the system will not be able to optimize effectively. Check the Bid strategy health column to ensure the bid strategy is in a healthy state.
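The Bid strategy health column is the authoritative signal in DS, but if you export keyword-level data you can also sanity-check for pinned bids yourself. A rough sketch, where the "keyword", "bid", "min_bid", and "max_bid" column names are assumptions about your export:

```python
import csv

with open("keyword_bids.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# A bid sitting at its floor or ceiling suggests the strategy is constrained.
pinned = [r["keyword"] for r in rows
          if float(r["bid"]) <= float(r["min_bid"]) or
             float(r["bid"]) >= float(r["max_bid"])]

share = len(pinned) / len(rows)
print(f"{share:.0%} of keywords are at a bid limit")
if share > 0.2:  # arbitrary threshold, for illustration only
    print("Consider widening bid limits so the system can optimize.")
```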
Make sure the test isn’t too small:
The Performance Bidding Suite algorithms optimize better in the presence of more data and more keywords. Only run tests on bid strategies that have a reasonable amount of clicks (for click-based bid strategies) and conversions (for conversion-based bid strategies) to allow full optimization.
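To get a feel for whether a test is big enough, you can run a rough power calculation. The sketch below uses the standard two-proportion sample-size approximation (95% confidence, 80% power); the baseline conversion rate and the lift you want to detect are placeholders.

```python
import math

def clicks_needed(base_rate, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate clicks per arm to detect a relative lift in conversion rate."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar)) +
           z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p2 - p1) ** 2)

# Detecting a 10% lift on a 3% conversion rate takes tens of thousands
# of clicks per arm, which is why small tests rarely produce clear answers.
print(clicks_needed(base_rate=0.03, relative_lift=0.10))
```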
Wait at least two weeks before evaluating the results:
Wait a minimum of two weeks to evaluate the results (while confirming the bid strategy is not constrained during that period). For some advertisers, there can be a considerable delay from click to conversion. In those cases, the first few days of data should be considered noisy or incomplete: conversions recorded just after the Performance Bidding Suite takes over are largely the result of clicks and bids placed before the switch.
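If you're unsure how long your click-to-conversion delay is, you can measure it from a conversion export. A small sketch, assuming hypothetical "click_date" and "conversion_date" columns:

```python
import pandas as pd

conv = pd.read_csv("conversions.csv", parse_dates=["click_date", "conversion_date"])
lag_days = (conv["conversion_date"] - conv["click_date"]).dt.days

print(f"median lag: {lag_days.median():.0f} days, "
      f"90th percentile: {lag_days.quantile(0.9):.0f} days")
# Wait at least two weeks plus roughly the 90th-percentile lag
# before judging results.
```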
How to evaluate success:
When evaluating the success of an optimization system, look for the following:
- Accuracy to target: If you set a target CPA, how close is the actual CPA to the target CPA?
- Volume changes: Did the KPI you chose to optimize for (e.g., actions or revenue) increase or decrease? This should be considered in conjunction with the targets you have set. If you decided to set a target that’s lower than previous averages, the volume could go down as a result of the new target value.
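These two checks reduce to simple arithmetic once you have the report numbers. A sketch with placeholder values:

```python
# Placeholder report numbers; substitute your own.
target_cpa, actual_cpa = 90.0, 93.50
conversions_before, conversions_after = 1200, 1150

cpa_miss = abs(actual_cpa - target_cpa) / target_cpa
volume_change = (conversions_after - conversions_before) / conversions_before

print(f"CPA miss vs. target: {cpa_miss:.1%}")             # 3.9%
print(f"conversion volume change: {volume_change:+.1%}")  # -4.2%
# A lower target CPA can legitimately reduce volume; read both numbers together.
```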