When a website provides a great user experience (UX), users are delighted and return to your site, becoming loyal customers! Great UX comes from user-centric research, but a common challenge is that the research often produces a long list of ideas for what to fix.
To help you decide where to begin, I recommend adopting a prioritization process that your entire company can use for A/B tests. With a structured framework, your company spends more time running actionable tests and less time debating which tests to run.
So which prioritization process do CRO experts recommend? In this episode of The Optimize CRO Series, we share what CRO experts do to prioritize their everyday work:
James Flory, Senior Experimentation Strategist at WiderFunnel
“In my world, prioritization is primarily done using the PIE framework (shown below). PIE was developed by WiderFunnel and stands for Potential, Importance, and Ease:
Potential: How much improvement can be made? (Think of how large the barrier to conversion you’re solving really is. Is the thing you’re solving really stopping people from converting? Do you have strong evidence to support the change? Have you seen this tactic work elsewhere?)
Importance: How much impact can these improvements have on our primary business goal? (Think traffic volume and conversion rates affected. Is this a high-traffic page that is worth your time and resources? Are you making changes to key landing pages that receive a ton of paid traffic, or are you changing a single blog page that’s lower on the totem pole?)
Ease: How easy would it be to actually make these improvements? (Think both technically and politically.)
Then you give each hypothesis a score in each of the three areas, as shown below, and the combined scores tell you the order in which to run your tests.”
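The scoring step is easy to mechanize. Here is a minimal sketch of PIE ranking in Python; the hypothesis names, the 1–10 scores, and the equal weighting of the three dimensions are illustrative assumptions, not WiderFunnel's actual data:

```python
# Minimal PIE scoring sketch: each hypothesis gets a 1-10 score for
# Potential, Importance, and Ease; the PIE score is their average.
# All names and numbers below are illustrative, not real test data.

def pie_score(potential, importance, ease):
    """Average the three PIE dimensions into one priority score."""
    return (potential + importance + ease) / 3

hypotheses = [
    ("Shorten checkout form", 8, 9, 4),
    ("Rewrite homepage headline", 6, 7, 9),
    ("Add trust badges to cart", 5, 8, 8),
]

# Rank hypotheses from highest to lowest PIE score.
ranked = sorted(hypotheses, key=lambda h: pie_score(*h[1:]), reverse=True)

for name, p, i, e in ranked:
    print(f"{pie_score(p, i, e):.1f}  {name}")
```

In this toy example the bold checkout change scores high on Potential and Importance but low on Ease, so the easier headline rewrite edges ahead, which is exactly the trade-off the framework is designed to surface.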
Lorenzo Carreri, Optimization and Growth at CXL Agency
“I use the PXL framework (shown below) which CXL created a few years ago. It brings more objectivity to elements that are key to evaluate test ideas, such as potential, impact and ease of implementation.
I then customise the PXL for each client based on their business, their goals and KPIs in the next months as well as the way their team is organized and structured.
We have also developed another framework called “Testing Bandwidth” that helps you prioritize which types of pages to run tests on, based on the ability to detect an effect (if there is an actual lift to be detected). Why spend time running a test on a page with a very high minimum detectable effect, where the chances of finding a difference between the control and variation are very small? This has allowed me to focus testing on the pages where there is a higher chance of increasing revenue and profits for my clients.”
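The minimum detectable effect (MDE) mentioned above can be estimated directly from a page's traffic and baseline conversion rate. A rough sketch using the standard normal-approximation formula for a two-proportion test; the z-values correspond to 95% significance and 80% power, and the traffic numbers are illustrative assumptions:

```python
import math

def relative_mde(baseline_rate, visitors_per_variant,
                 z_alpha=1.96, z_beta=0.84):
    """Approximate relative minimum detectable effect for an A/B test
    at 95% significance and 80% power (normal approximation)."""
    p = baseline_rate
    se = math.sqrt(2 * p * (1 - p) / visitors_per_variant)
    return (z_alpha + z_beta) * se / p

# Illustrative comparison: a high-traffic page vs. a low-traffic page,
# both converting at 3%.
for name, visitors in [("High-traffic page", 50_000),
                       ("Low-traffic page", 2_000)]:
    mde = relative_mde(0.03, visitors)
    print(f"{name}: smallest detectable lift is about {mde:.0%}")
```

With these assumed numbers, the high-traffic page can detect a lift of roughly 10%, while the low-traffic page would need a lift of roughly 50% to show up at all, which is the argument for spending testing bandwidth on the former.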
The CXL framework for testing bandwidth
Daniel Ingamells, Digital Optimization Director at Optisights
“The ability to prioritize effectively is one of the fundamental pillars of a successful optimization program. Prioritization has to be a collaborative process; it cannot be done in isolation. The key is to merge technical expertise and business expertise in order to prioritize holistically.
As optimization experts, it’s our responsibility to facilitate this process by presenting the qualitative and quantitative data as well as factoring in other considerations such as technical difficulty and strategic relevance.
Using a simple scoring system, such as the one below, allows large quantities of hypotheses to be ranked, and re-ranked after further research.”
Optisights' prioritization framework
Melanie Bowles, Specialist in Optimization and Insights at InfoTrust
“Whenever possible I work with multifunctional teams to get multiple viewpoints on the priority of a test. In terms of taking a list of tests and finding a consensus across teams on which tests to run next, I have had success with the PIE framework. It allows us to quickly evaluate different aspects of a test including data-based evidence, traffic, ROI and how easy it would be to build and launch the test.”
Andrew Garberson, VP of Marketing Services at Bounteous
“We remind clients and training attendees to always follow the money. Whether optimizing landing pages or attempting to correct a cart conversion cliff, prioritizing tests around financial impact will excite stakeholders across the organization. Money is typically a common denominator.
Keep in mind that your biggest wins may come early in your testing career. Optimize, as a verb, should not have a past participle. Can you imagine the Amazon testing team saying something like, “Good news, folks. The shopping cart experience is fully optimized.” No way. While you should prioritize your testing around anticipated financial ROI, you should also begin to set expectations around how you will define success and failure. We love this quote from Jesse Nichols: ‘Our test success rate is about 10 percent. But we learn something from all our tests.’”
Ayat Shukairy, co-founder and CCO at Invesp
“We have 18 different factors that we consider when prioritizing tests—some related to the actual problem uncovered, while the remaining factors are more focused on the solution. A few of the factors we evaluate are:
What form of research uncovered the problem?
Is the problem above the fold?
Where is the problem on the page?
Where is the problem at a site level?
Is it an element, page, or process level change?
Is it a politically easy change?
What potential impact will fixing this problem have?
The factors for the solution include:
What is the Level of Effort (LOE)?
Is it an addition, removal or replacement of an element?
Does the solution increase trust?
Does it reduce FUDs (fears, uncertainties, and doubts)?
What percentage of pageviews is affected?
Type of page?”
Patrik Matell, Conversion Optimization Specialist at Conversionista
“For bigger companies, where there is a large volume of ideas to work through, we do initial prioritization based on a short online form (primarily a battery of yes-or-no questions to reduce subjectivity). The form results then feed into a scoring model that combines the answers with web analytics data as well as past test data, in order to provide an initial estimate.
From that initial estimate, or from all the active ideas if it’s a smaller set, we select a promising focus area. A bit of extra data digging follows for the relevant area and ideas, which is the final step before the real secret sauce: the HEAT meet, a small cross-functional hypothesis exploration meeting. In the meet we go through the data and a set of exploration steps to further polish the ideas. This part of the process ensures that the final hypotheses and resulting tests are as good as possible.”
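A scoring model like the one described, combining binary form answers with analytics and past test data, can be sketched in a few lines. The questions, weights, and traffic normalization below are illustrative assumptions, not Conversionista's actual model:

```python
# Sketch of an initial-estimate score combining yes/no form answers
# with web analytics data and past test history. Questions, weights,
# and the traffic cap are illustrative assumptions only.

def initial_estimate(answers, monthly_pageviews, past_win_rate):
    """Combine binary form answers, traffic volume, and historical
    test performance into a single rough priority score."""
    form_score = sum(answers.values())                 # one point per "yes"
    traffic_score = min(monthly_pageviews / 100_000, 1.0) * 3  # capped
    history_score = past_win_rate * 2                  # win rate in 0.0-1.0
    return form_score + traffic_score + history_score

idea = {
    "backed_by_user_research": True,
    "on_key_conversion_path": True,
    "easy_to_implement": False,
}
score = initial_estimate(idea, monthly_pageviews=250_000, past_win_rate=0.3)
print(f"Initial estimate: {score:.1f}")
```

The point of such a model is not precision but consistency: every idea passes through the same questions and the same weights, so the initial ranking is comparable across submitters.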
Dr Karl Blanks and Ben Jesson, founders of Conversion Rate Experts and authors of Making Websites Win
“Take all of the ideas you’ve generated from the research and prioritize those big, bold, targeted ones that will grow your business in the shortest time. Bold changes give you more profit, and you get quicker, larger returns (larger effects reach statistical significance faster). After collating all your ideas, prioritize them based on three simple metrics:
- How likely is it to double your conversion rate? Asking this question helps to ensure that you’re prioritizing the big opportunities. Bigger, bolder tests should be given a higher priority while meek tweaks need to be demoted.
- How easy is it to implement the test? Look for the quick wins with the biggest financial impact, so changes that are easy to implement are given a higher priority.
- Has this idea worked before? Once you’re testing, you’ll quickly start learning what your visitors respond to. Every test we develop is documented so that we can review and prioritize ideas that are inspired by winning tests.”
To find a prioritization process that fits your company’s needs, you might need to test out a few. One suggestion is to start with a simple version (such as the PIE framework) and then progress to more granular frameworks to see if they bring better test results. In the long run, it all comes down to how fast your company moves forward with tests that bring impact.
By Lina Hansson