Many marketing gurus talk about conversion rate optimization (CRO), which includes anything from A/B tests (most obvious) to design and usability changes. You probably agree that testing is a great thing to do. You might have even run a few experiments. But did you get the results you had hoped for?
Since testing requires resources, how can you conduct it for the maximum benefit to your organization?
After running dozens of A/B tests on AmsterdamPrinting.com over the last two years, I've gathered eight of the most important lessons about conducting a successful A/B testing program.
1. Get Your Buy-In Before Starting to Test
A/B testing might seem like a no-brainer, but switching from gut-feeling decisions to data-driven decisions might be harder than you think. The ultimate goal of CRO—to gain a "culture of testing" within a company—is a big leap; you need a step-by-step approach.
Setting up tests requires additional work and resources, so you need support from the top. Involve your testing team in all of the big decisions about your website, so the team can identify strong candidates for testing.
2. Involve More People Once You Have Management Buy-in
To have a successful testing program, you'll need to involve key players. Having a designer and a developer on your team will help ensure tests are implemented on time.
Involve the rest of the organization by sharing test results, asking for feedback, or even taking votes on the winning test recipes. When we ran a call-to-action test (and got a testing award), we had seven versions of the button text, all suggested by our co-workers. As a fun twist, we ran an internal contest to predict the Top 3 versions of the test, rewarding the winner with an Amazon gift card. Naturally, more people participated and were excited to check the results.
3. Define Hypotheses and Success Metrics Before Starting Each Test
When launching a new test, you can easily be biased in its favor. Often, you'll believe that the new version is better. To avoid that self-fulfilling prophecy, define your hypotheses and success metrics before running the test.
For example, if you're testing three call-to-action buttons (including a control version), define the following:
Your Hypotheses
- Why might the second and third versions of the button be better than the control?
- If version two or version three wins, what would you attribute it to?
Your Success Metrics
What event (e.g., click, download, purchase) or metric (e.g., time on page, bounce rate, or average order value) will be compared against the control?
A single success metric is what will ultimately define whether your test is successful.
Be careful with your assumptions: More clicks do not mean more purchases, just like more purchases might not mean higher sales totals (what if the average order value dropped?).
Align your success metrics with your business goals.
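To make this concrete, here is a minimal sketch (in Python, with made-up numbers) of why a single, business-aligned success metric keeps you honest: the challenger wins on raw conversion rate but loses on revenue per visitor once its lower average order value is factored in. The figures and field names are purely illustrative.

```python
# Hypothetical results for a call-to-action test: the challenger converts
# better, but a lower average order value wipes out the revenue gain.
variants = {
    "control":   {"visitors": 10_000, "purchases": 300, "revenue": 24_000.00},
    "version_2": {"visitors": 10_000, "purchases": 330, "revenue": 23_100.00},
}

for name, v in variants.items():
    conversion_rate = v["purchases"] / v["visitors"]      # purchases per visitor
    avg_order_value = v["revenue"] / v["purchases"]       # dollars per purchase
    revenue_per_visitor = v["revenue"] / v["visitors"]    # metric tied to the business goal
    print(f"{name}: CR={conversion_rate:.2%}  AOV=${avg_order_value:.2f}  "
          f"RPV=${revenue_per_visitor:.2f}")
```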
4. Get Enough Conversions for Statistical Significance
So, you've launched a test and... no traffic. The best candidates for testing are high-traffic pages closer to the bottom of the conversion funnel. (You can usually make a greater impact testing something toward the end of a transaction, such as a product details page or the checkout process.)
The rule of thumb is to test for at least two full weeks and receive at least 100 conversions (on your success metric) for each recipe in the test.
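Your testing tool will normally report significance for you, but if you want to sanity-check the numbers, here is a minimal sketch of a simple two-proportion z-test (Python standard library only; the conversion counts are made up):

```python
from statistics import NormalDist

def two_proportion_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for a difference in conversion rates between two recipes."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = (p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / std_err
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical: 120 vs. 150 conversions on 5,000 visitors per recipe
p = two_proportion_p_value(120, 5000, 150, 5000)
print(f"p-value: {p:.3f}")  # ~0.06 here -- not yet significant at the common 0.05 threshold
```

A p-value above 0.05 usually means you need more conversions before declaring a winner.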
5. Segment Your Traffic
Segments matter because different types of visitors might react differently to the versions you're testing. Loyal customers (direct or "branded" traffic) will behave differently than those who landed on your site for the first time.
Other segments could include the following:
- Visitor types (customers, prospects, new, returning)
- Browsers (Internet Explorer, Firefox, Chrome, Safari)
- Operating systems (Mac, Windows, Linux)
- Screen resolutions (what visitors will likely see above the fold)
- Traffic sources (direct, search engines, and campaigns, such as email, PPC, display, remarketing, etc.)
- Landing pages: Was the test page the first page the person saw when she landed?
Creating segments can improve the conclusiveness of your A/B tests. Again, align your analysis with your business goals (will a 4% drop in conversion of current customers be offset by a 7% increase in conversion of new visitors?). Evaluate different success metrics for different segments, too!
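As a rough illustration, here is a minimal sketch of breaking results down by segment, assuming you can export per-visit data (segment, recipe, converted or not) from your testing or analytics tool. The segment names and rows are invented.

```python
from collections import defaultdict

# Hypothetical per-visit export: (segment, recipe, converted)
visits = [
    ("new",       "control",    0), ("new",       "challenger", 1),
    ("returning", "control",    1), ("returning", "challenger", 0),
    # ... thousands more rows exported from your testing or analytics tool
]

totals = defaultdict(lambda: {"visits": 0, "conversions": 0})
for segment, recipe, converted in visits:
    totals[(segment, recipe)]["visits"] += 1
    totals[(segment, recipe)]["conversions"] += converted

for (segment, recipe), t in sorted(totals.items()):
    rate = t["conversions"] / t["visits"]
    print(f"{segment:<10} {recipe:<10} CR={rate:.1%} ({t['conversions']}/{t['visits']})")
```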
6. Unsuccessful Tests Don't Exist
Even if your test is inconclusive (producing no statistically significant lift or drop), it is still valuable. Don't let results like those disappoint you. You did learn, for example, that the particular element you tested is not that important and does not affect your success metrics.
(Note: Brian Massey wrote a piece for Search Engine Land titled "Four Things You Can Do With Inconclusive Split Tests." It's worth checking out.)
Remember, keeping detailed records of all your tests, conclusive or not, will help with future decisions to test or change your website.
7. Big Changes Equal Big Impact
We've all heard success stories about a small change that resulted in a big positive impact. In reality, though, that is rarely the case. Big changes, which usually carry a high perceived "risk" within the company, result in a big impact, good or bad. Even if the impact is negative, look back at item No. 6: you've learned that the tested element really matters!
8. Beware of Hidden Variables (Confounds) Affecting the Test
Even with the perfect setup, your testing tool will not account for the "hidden" variables. Beware of them, and minimize risk by doing the following:
- Do not run simultaneous tests that target the same group of visitors.
- Run a test for a longer timeframe (3-4 weeks), because seasonal traffic changes can affect the outcome (e.g., a direct-mail catalog drop will bring in qualified traffic, skewing your test results). A rough way to estimate the timeframe is sketched after this list.
- Test one thing at a time. If you're testing a landing page, do not have different versions that mix and match layout changes, design changes, and calls to action. Approach that kind of test in a few phases, starting with the layout test, then the design, and only then the call to action. Multivariate tests (MVT) are a good alternative, but they take much longer to complete unless you have tens of thousands of visitors.
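Here is the back-of-the-envelope estimate referred to above: roughly how many weeks a test needs to reach the 100-conversions-per-recipe rule of thumb from lesson 4. All the inputs are assumptions you would replace with your own analytics numbers.

```python
# Back-of-the-envelope test-duration estimate (all inputs are assumptions).
weekly_visitors = 5_000          # visitors reaching the test page each week
baseline_conversion = 0.02       # 2% conversion on the chosen success metric
recipes = 3                      # control plus two challengers
target_per_recipe = 100          # rule-of-thumb minimum conversions per recipe

weekly_conversions_per_recipe = weekly_visitors * baseline_conversion / recipes
weeks_needed = target_per_recipe / weekly_conversions_per_recipe
print(f"Plan for at least {max(2.0, weeks_needed):.1f} weeks")  # never less than two full weeks
```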
The Long Process
Following the lessons in this article will make your conversion rate optimization activities more effective. Our goal here, again, is to introduce your company to the culture of testing. As you have more winning A/B tests, you can start questioning the "foundational" elements of your site (e.g., navigation, layout, features, even design).
Joanna Lord, director of acquisition at SEOmoz, calls these elements the "all truths" and says they are the hardest ones to push through. But improving/testing the "all truths" can potentially make the biggest impact across the entire site.
As you perform more testing and learn how your audience reacts to the changes, communicate your positive results and, more important, put them in perspective for the next year. For example, a 4% conversion-rate lift? That's $236,000 in additional revenue per year. Doing so will help get your executives to pay attention to A/B testing and conversion rate optimization!
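If it helps to show the arithmetic behind a projection like that, a minimal sketch follows; the baseline revenue figure is an assumption implied by the example above, and it presumes the conversion lift carries through to revenue proportionally.

```python
# Projecting a conversion-rate lift into annual revenue (illustrative assumptions).
annual_revenue = 5_900_000   # baseline yearly revenue through the tested flow (assumed)
conversion_lift = 0.04       # 4% relative lift, applied proportionally to revenue

additional_revenue = annual_revenue * conversion_lift
print(f"Projected additional revenue: ${additional_revenue:,.0f} per year")  # $236,000
```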
New Ideas and Sharing
To get the ideas flowing, frequently check sites such as WhichTestWon.com or ABtests.com. You'll get a look at what others are testing and a chance to guess which version had a higher lift. Both sites accept test submissions, so feel free to share your successes while bringing attention to your brand.
Good luck to you and your CRO team!