The key to impactful A/B tests is research! But companies often skip this important step and instead test whatever topics happen to be discussed internally – which unfortunately leads to tests that aren't user centric and don't address the biggest friction points.
So how do CRO experts tackle the challenge and find the big opportunities? Hear from nine CRO experts:
Lorenzo Carreri, Optimization and Growth at CXL Agency
“I’m a big fan of qualitative research, in particular customer surveys sent to the most recent first-time buyers. Depending on the goal, my favorite type of question is open ended, because it gives me direct access to the voice of the customer, which I often use to tweak the copy of an A/B test.
The reason I love surveys so much is that I can always learn about what I consider the two most important levers for optimizing a site: reducing users’ friction and increasing users’ motivation. Knowing what motivates users allows me to run tests that put more emphasis on it. A lot of the time it’s about making something that is important to users in the decision-making process more visible and prominent. At the same time, knowing users’ points of friction allows me to come up with treatments that answer those objections in the user’s mind or alleviate them.”
Andrew Garberson, VP of Marketing Services at Bounteous
“Every test begins with a hypothesis. As data nerds, we love to talk about regression or cluster analysis, and we’ll use website analytics to help support our efforts. But conversion optimizers need questions to form hypotheses. And the best way to craft questions is qualitative research.
For some organizations, that might mean large-scale unmoderated evaluations with a qualitative tool like Google Surveys. For others, maybe it is a DIY moderated session at a coffee shop. Either way, the value of talking to people outside the organization cannot be overstated.”
Gino Arendsz, Growth and E-commerce Manager at Helloprint
“We base a lot of our A/B tests on a combination of qualitative and quantitative research. For instance, we might stumble upon an idea through a Hotjar screen recording. We will then use a poll to get quantitative data on that improvement.
In addition to external tools, we also collect user feedback through user testing, where we observe random participants as they try to order with us. Last but not least, we do a lot of data analysis. We have our complete customer journey mapped onto our version of the AARRR framework (Acquisition, Activation, Retention, Referral, Revenue). We do data deep dives to uncover why people drop off in each phase, or what long-term retention effects we see for customers who had an unhappy first experience.”
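The kind of funnel deep dive Arendsz describes can be sketched as a simple drop-off calculation over the AARRR stages. The stage definitions and visitor counts below are invented example data, not Helloprint's actual numbers:

```python
# Illustrative sketch: drop-off between consecutive AARRR funnel stages.
# Stage names and counts are made-up example data for demonstration only.
funnel = [
    ("Acquisition", 100_000),  # visitors landing on the site
    ("Activation", 24_000),    # visitors who start configuring a product
    ("Revenue", 6_000),        # visitors who complete an order
    ("Retention", 1_800),      # customers who order again
    ("Referral", 300),         # customers who refer someone else
]

for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_count / count
    print(f"{stage} -> {next_stage}: {drop_off:.0%} drop-off")
```

A table like this immediately highlights which phase loses the largest share of users and therefore where qualitative follow-up (recordings, polls) is most likely to pay off.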
James Flory, Senior Experimentation Strategist at WiderFunnel
“At WiderFunnel, we leverage the Infinity Optimization Process, specifically the Explore phase (user research, digital analytics, business context, psychological principles, and an experiment archive).
If I’m starting with a fresh page or website that I’ve never run an experiment on before, I’ll leverage initial passive user research tools such as heatmaps, scroll maps, and move maps to get high level insight into how users are behaving. That data, coupled with digital analytics data to get a semblance of the customer journey through the page, will serve as a solid foundation.
Initial questions I’d ask of the analytics data include:
What is the traffic volume and conversion rate through this page?
How are people navigating to and from this page? Where does it sit in the larger journey?
What are the traffic sources? Is it primarily paid or organic traffic? What do those acquisition touch points look like?
This information helps me understand the user’s context and imagine myself in their shoes when I browse the site.
Armed with that preliminary analysis, I’ll often perform a heuristic analysis, which we call a LIFT analysis (improve value proposition, clarity, relevance, urgency, and reduce anxiety and distraction) to begin to identify key areas of opportunity. I’ll then begin to think of preliminary solutions to some of the LIFT points that could be translated into an experiment hypothesis.”
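Flory's first and third analytics questions – conversion rate through the page, and the traffic-source mix – boil down to simple aggregations over session data. This sketch uses an invented in-memory session log rather than any particular analytics API:

```python
from collections import Counter

# Hypothetical session records for the page under analysis:
# (traffic_source, converted) per session. Example data, not a real export.
sessions = [
    ("organic", True), ("organic", False), ("paid", False),
    ("paid", True), ("organic", False), ("referral", False),
]

# Conversion rate through the page.
total = len(sessions)
conversions = sum(1 for _, converted in sessions if converted)
print(f"Sessions: {total}, conversion rate: {conversions / total:.1%}")

# Traffic-source mix: is it primarily paid or organic?
mix = Counter(source for source, _ in sessions)
for source, count in mix.most_common():
    print(f"{source}: {count / total:.0%} of sessions")
```

In practice these numbers would come from an analytics export, but the questions being asked of the data are the same.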
Daniel Ingamells, Digital Optimization Director at Optisights
“I believe research methods need to form a synergy and are more beneficial as a collective (ideally integrated) than as standalone data sources. The best example of this is analytics data. This data is great at telling us what the problem is, e.g. “we have a 70% bounce rate on the homepage”, but it doesn’t answer the question of why it is happening. When it is combined with a qualitative data source like customer feedback or session recordings, we can add the why, and at that point the problem statement from analytics can be turned into a testable hypothesis.”
Melanie Bowles, Specialist in Optimization and Insights at InfoTrust
“I try to take advantage of any available data source when researching test ideas. I start with web analytics to look for key metrics that are out of range for certain user segments or pages. The metrics I look at are first e-commerce metrics such as conversion rate, average order value, add to cart and product detail page views, as well as engagement metrics like session duration, bounce rate, pages per session and form completions. This gives me ideas of audiences to target and elements to optimize.
If any usability tests or user surveys have been conducted, I will review the customer feedback for trends that can be developed into an A/B test. Reviewing user recordings and heatmaps is also an important part of the process: it shows how customers interact with a site even when they don’t click on anything.”
Ayat Shukairy, co-founder and CCO at Invesp
“At Invesp we developed the SHIP methodology which is an acronym for Scrutinize, Hypothesize, Implement and Propagate.
We spend 70% of our time on the Scrutinize phase, which includes all forms of qualitative and quantitative research to uncover the problems on a site. Qualitative research methods range from usability tests, heuristic evaluations, customer interviews, polling and surveying, to reviews analysis and more. Quantitative research focuses more on the numbers from BI, analytics, heatmaps and video recordings, and aggregated marketing data.”
Patrik Matell, Conversion Optimization Specialist at Conversionista
“As is often the case, choosing which research method to utilize is a context dependent decision:
Are you deciding on an initial battery of tests for a site that is still being developed? In that case, hypotheses generated from an iterative usability testing program, backed up with web analytics data from a previous site, might be your best bet.
Are you running an ongoing test program for a site that acts as a sales platform for an app? In that case, a cross platform cohort analysis of previous tests in order to address churn might be a better fit.
The one constant I can think of is that it’s often desirable to triangulate findings using multiple research methods. This is a useful approach (especially when answering high-stakes questions) since it lowers the impact of the intrinsic biases connected to the different research methods.”
Dr Karl Blanks and Ben Jesson, founders of Conversion Rate Experts and authors of Making Websites Win
“Don’t guess what the blockages are. Find out. The key question is “Why aren’t visitors converting?” The answer typically comes from research in the following core areas:
- Understanding different visitor types and intentions: This typically revolves around your web analytics platform and customer database. Seek to understand your different traffic sources and how they behave. Consider new visitors vs. repeat visitors, which referring sources of traffic convert best, and if you have distinct visitor types based on the visitors’ situations, their past experiences, or their intentions.
- Identifying user experience problems: All websites have visitors who don’t convert simply because something prevents them from doing so—they’re willing but unable. There are many tools and techniques for identifying user experience issues. We find user testing, co-browsing and exit surveys to be the most fruitful.
- Gathering and understanding visitors’ objections: Capturing the voice of the customer is more difficult with the web, but it can be done. Start by implementing appropriate feedback mechanisms (like live chat and exit surveys) for capturing the most common objections. We also get valuable insights from method marketing, interviewing salespeople and customer surveys.”
According to a study by the Nielsen Norman Group, 85% of usability problems can be discovered by doing user testing with as few as five users, yet only 51% of companies perform usability testing. The pattern that emerges when interviewing CRO experts is that user testing and qualitative methods are consistently cited among the most important research methods, which suggests they may be a key component of these experts’ success. Qualitative user testing should therefore complement the regular quantitative data analysis that companies usually do. Check out this article on the Google Optimize Resource Hub on how to perform quick user testing – it’s actually much easier than you might think!
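The five-user figure comes from Nielsen and Landauer's problem-discovery model: the share of problems found with n users is 1 − (1 − λ)ⁿ, where λ is the probability that a single user surfaces a given problem (about 0.31 in their data). A quick sketch of the arithmetic:

```python
# Nielsen/Landauer problem-discovery model: share of usability problems
# found with n test users, assuming each user independently reveals a
# given problem with probability lam (~0.31 in their published estimate).
def problems_found(n: int, lam: float = 0.31) -> float:
    return 1 - (1 - lam) ** n

for n in (1, 3, 5, 10):
    print(f"{n} users: {problems_found(n):.0%} of problems found")
```

Five users land at roughly 84–85% of problems found, which is why adding more participants to a single round yields diminishing returns; running several small rounds is usually the better investment.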
Another interesting finding is the experts’ devotion to uncovering the thoughts and feelings of the website’s visitors – Ayat Shukairy, for instance, states that Invesp spends 70% of its time on research. Many of the experts also broaden their understanding of the visitor beyond the on-page experience to include the channels visitors arrive from, and what they therefore might want when entering the website.
One thing is clear—the learning from CRO experts is that great A/B test results come from great research.
By Lina Hansson, Conversion Specialist at Google
- Learn the step by step approach to a CRO process in the Optimize Resource Hub