Failing tests to learn from

Hear the tricks of the trade from CRO experts in The Optimize CRO Series.

Are you frustrated by A/B tests on your website that don’t bring good results? Studies show that only one out of seven tests succeeds, but this can improve to one out of three when a conversion rate optimization agency follows a proper CRO process. But don’t get discouraged if your tests have a negative outcome: it’s actually an opportunity to learn!

In this final episode of The Optimize CRO series, we get to learn from the CRO experts and their “failing” tests. The insights from a losing test can often be as useful as those from a winning one...

Dr Karl Blanks and Ben Jesson, founders of Conversion Rate Experts and authors of Making Websites Win
United States

“A/B testing is like a compass: It tells you which direction to move in. One of our clients, a company in the telecoms industry, was debating whether to lower the price of its top-selling phone. The phone was already the lowest priced in the marketplace. To measure how price sensitive the company’s visitors were, we A/B tested the existing price against zero dollars (completely free of charge). To everyone’s surprise, the zero-dollar offer didn’t sell more phones. Our research revealed that users were concerned that the free-phone deal was ‘too good to be true’.

Concluding that the visitors weren’t sensitive to the price of the handset, we went in the other direction by A/B testing higher prices. The winning page featured two higher-priced premium versions of the phone alongside the standard product. We then obtained a further win by offering optional upsells including accessories, insurance, call credit, and 24-hour customer support.

So not only did A/B testing save the company from pointlessly destroying its margins, but it revealed an unexpected opportunity for growing the profits.”

James Flory, Senior Experimentation Strategist at WiderFunnel
United States

“For one of our clients, a well-known financial services provider, we explored injecting social proof into their email capture pages to try to lift conversion rates. This organization has a ton of satisfied paying members who say fantastic things about their product, so we thought: why not leverage these testimonials to help alleviate any anxiety users may have about submitting their email? The experiments tanked.

Over the course of a few failed experiments, we uncovered a delicate threshold in the efficacy of social proof in that particular funnel. This client’s users responded to that sort of information in a unique way. High up in the funnel, it turned out to be a distraction: users who were drawn to this organization’s investing service actually valued exclusivity at that stage. Deeper in the funnel, though, authentic social proof was very important. It was an anxiety-reducing element that reaffirmed the user’s choice to purchase.

Read more about these experiments (and find screenshots) here.”

Lorenzo Carreri, Optimization and Growth at CXL Agency
United Kingdom

“I recently ran a test for a client in the education space. The purpose of this test was to understand whether introducing a brand-new product into their funnel would have a negative impact on the conversion rate of their main money-making product. It was basically a non-inferiority test.

The new product was a killer deal for the user. I can’t go into too much detail, but it was a highly discounted offer and definitely a no-brainer for the user; I would even have bought it myself if I’d needed it.

We went ahead and launched the test for both desktop and mobile and agreed on running it for two weeks.

After less than 24 hours, the conversion rate of the main product in the variation was down 12% on desktop and 22% on mobile. We first thought it was a technical issue, so we ran an additional round of quality assurance and also watched over 100 session recordings to see what was going on, but didn’t find any bug.

A few days later, both tests were still losing, so together with the client we decided to stop and call the tests losers.

Here’s why I love testing: if we hadn’t run this test and my client had gone ahead and rolled out the new product anyway, they would have had a major revenue loss.”

Ayat Shukairy, co-founder and CCO at Invesp
United States

“One of our clients sells high-end furniture. For people who struggle with interior design, being able to get all the pieces for a full room can be helpful. This site provided matching pieces, but they were not accessible through the product pages. Our test was focused on bringing that experience to the product pages, allowing visitors to design the room and upselling them on more products. We estimated the test would provide an 8% uplift in revenue based on the increased AOV. The test did beautifully on mobile, with an almost 15% increase. Desktop, however, surprisingly came in at -1.25%. While we believe the hypothesis was likely sound, the test will be revisited with an updated solution that better meets the needs of desktop users in particular.”

Daniel Ingamells, Digital Optimization Director at Optisights
United Kingdom

“The only way an A/B test fails is if you don’t answer the question posed in the hypothesis. This can only happen if you didn’t reach significance for either an uplift or a decrease in conversion, or if a problem occurred with the technical implementation of the test.

Both of these issues can be easily mitigated by looking closely at the visitor data and forecasting sample size and test length. A well-defined QA program should catch 99% of all implementation and technical errors. And if you don’t have the traffic, you shouldn’t be running the test.”
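
As a rough guide to that kind of forecasting, a standard two-proportion power calculation can estimate how many visitors a test needs before you launch it. The Python sketch below is an illustrative example, not part of any expert’s quote; the baseline conversion rate, minimum detectable effect, significance level, and power are assumed values you would replace with your own.

```python
# A rough pre-test sample-size estimate for a two-proportion A/B test
# (two-sided z-test). All input values below are illustrative assumptions.
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float,
                            relative_lift: float,
                            alpha: float = 0.05,
                            power: float = 0.80) -> int:
    """Visitors needed in each variant to detect a relative lift over the
    baseline conversion rate at the given significance level and power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: detecting a 10% relative lift on a 3% baseline conversion rate
# requires roughly 53,000 visitors per variant; divide by your daily
# traffic per variant to forecast how long the test has to run.
print(sample_size_per_variant(0.03, 0.10))
```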

Melanie Bowles, Specialist in Optimization and Insights at InfoTrust
United States

“There are a lot of case studies that say carousels reduce conversion rates, so this is a popular element to test across industries. The team believed the evidence suggested that removing carousels throughout the site would result in big gains in conversion rates across the board. The results showed quite the opposite: overall engagement dropped, along with e-commerce metrics. But the success of the test was that it proved case studies can be great for ideas, but they need to be tested because they don’t always work for every site.”

Gino Arendsz, Growth and E-commerce Manager at Helloprint
Netherlands

“To sign up for an account, you need to fill in thirteen form fields, excluding the consent and newsletter checkboxes. Every time I see this page it hurts deep down in my heart, and that’s why we decided to get rid of a few fields. Ideally we would end up with a single field, the email address, but that was quite a technical challenge.

So we limited it to nine fields and pre-filled the fields we could. This experiment had a negative result on both sign-ups and transactions. Since we thought it would be successful, we ran it again, this time on more shops to gather more data, and we also took out our hyperlink-dense footer. Again we didn’t hit the jackpot: the second run of the experiment also had a negative impact on both sign-ups and transactions.”

How to approach failing tests

Two key takeaways are clear from the “failing” tests of the CRO experts: a losing test still teaches you something valuable about your visitors that you couldn’t have learned otherwise, and testing a change before rolling it out protects you from shipping something that would have hurt conversions or revenue.

Have you missed the previous episodes of The Optimize CRO series, where experts give more of their insights? Check them out here!

By Lina Hansson
