Conversion Rate Optimization
29 Oct 2020

How To Create Better A/B Tests With UX Data


Introduction

A/B testing sits at the center of optimization efforts for some organizations and remains purely aspirational for others. We all want a steady stream of game-changing test ideas. For most teams, the reality falls somewhere between consistent success and testing that does little more than create confusion.

A common theme among testing that doesn’t deliver great results is a lack of research and intentional design. In this article, we’ll take a look at the anatomy of great (and bad) A/B tests and talk through how even a small amount of user experience (UX) research can help you grow your Shopify store’s revenue.

Bad A/B Testing

When we say “bad” in this context, we’re not talking about the control (Version A) beating the test idea (Version B). In fact, you could argue that there’s no such thing as a “bad” A/B test, since every test teaches us something about our business.

A bad test usually misses the mark in one of a few areas.

First, tests shouldn’t be based on a hunch, no matter where in the company it comes from. For example, spotting a unique feature on a related website isn’t cause for immediate testing. Instead, the team should research whether that feature would even work well on their Shopify store.

Second, test to solve an issue, not to better understand an issue. The idea here is to avoid testing against variables you don’t quite understand. Simply put, be sure you’re testing the right thing. Testing without clear variables can lead to an endless cycle of variations that don’t really change anything.

As an example, if you’ve discovered a large number of visitors dropping off after viewing your shipping policies, what should you test?

What do you need to learn next to be able to solve the issue? In this case, you need to figure out which part of the experience is driving visitors away. It could be something obvious, like shipping costs that are too high or delivery times that are too long. It could also be a less obvious factor, such as a rendering issue on mobile or broken form styling.

The goal here is to take a step back and say, “do I need to do more research before I test?”

Lastly, use the right tools for the job. If you’re investing time and effort (and budget) into testing, be sure your tools are delivering the variations as intended. You need to be able to trust the data you get and know that your test is being shown properly across all browsers and devices.

The Perfect A/B Test


You’ve just pushed a test live. Everyone involved understands what you’re testing, what you’re measuring, and the target audience. You go live and know that in a short amount of time you’ll have meaningful results that help you better understand the wants and needs of your store visitors.

Sounds nice, doesn’t it? So, how do we get there?

No matter your team size, the responsibilities leading up to a great A/B test are very much the same. These responsibilities can, of course, be handled by one person or be spread across a full team of experts. Here’s a typical end-to-end process for developing and executing a great test.

Do The UX Research

If you take away nothing else from this article, it should be this: UX research is simply figuring out where your website gets in the way of positive customer interactions. That includes broken links and forms, of course, but also clunky design and navigation that isn’t as intuitive as it should be.

A great starting point for UX research is identifying friction points. Where on your site should visitors be exploring, engaging, and converting freely, but aren’t? A conversion funnel tool will help you monitor traffic as it advances from one page to another, highlighting areas where the drop-off is higher than normal.
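If you’d like to sanity-check the numbers yourself, the math behind a funnel report is straightforward. Below is a minimal sketch in Python; the step names and visitor counts are hypothetical, and in practice your funnel tool does this for you.

    # Minimal sketch of the drop-off math behind a conversion funnel report.
    # The step names and visitor counts below are hypothetical.
    funnel = [
        ("Product page", 10_000),
        ("Cart", 3_200),
        ("Shipping policy", 2_900),
        ("Checkout", 1_100),
        ("Purchase", 850),
    ]

    for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
        drop_off = 1 - next_count / count
        print(f"{step} -> {next_step}: {drop_off:.1%} drop-off")

A step with an unusually large drop-off is where your session recordings and deeper research should start.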

By doing this exercise, you help narrow down your focus. Finding the page in your funnel that’s causing visitors to leave your site lets you dive deeper into what’s happening. The next step is to watch session recordings of visitors who leave from that page. Take notes as you watch.

Is the visitor wandering all over the page? Are they clicking repeatedly on something that isn’t clickable? Are they struggling with a form, or missing information you intended them to see? These signs of struggle can help pinpoint areas of the page that are ripe for optimization and testing.

Again, using the right tools is key. Beyond conversion funnels and session recordings, we recommend regularly surveying your audience to refine personas and segmentation. Ask what visitors are trying to accomplish on your website, whether they’re able to do it, and, if not, what’s getting in their way. You can even offer promo codes or free monthly subscriptions in return for a UX research phone call.

We also recommend consistently evaluating heatmaps for key pages. By focusing on the effective fold (the depth that 50% of your visitors scroll to), you’ll be able to ensure that high-priority content is seen by the average visitor.
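If your heatmap tool exposes per-session scroll depths, the effective fold is easy to compute yourself: it’s simply the median of how far visitors scroll. A quick sketch with made-up numbers:

    import statistics

    # Hypothetical per-session scroll depths for one page, measured as a
    # percentage of total page height reached before the visitor left.
    scroll_depths = [22, 35, 48, 50, 55, 61, 70, 74, 82, 95]

    # The effective fold is the depth that half of your visitors reach,
    # i.e. the median of the observed scroll depths.
    effective_fold = statistics.median(scroll_depths)
    print(f"Effective fold: {effective_fold}% of page height")

Anything that sits below that line is, by definition, missed by half your audience.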


Another step towards great A/B testing is understanding anomalies and unexpected variables. Finding outliers, isolating them, and accounting for them in your research helps you avoid going down a testing path based on bad data.

For example, if customer support recently changed where they send customers for answers, the behavior may change on your website. Or if you’ve changed which fields are required or which legal language is displayed on your forms, your conversion rate may drop.

Understanding what’s happening in every area of your business that touches the website is key to properly interpreting your data. Before you chase an idea, try to determine whether you’re simply looking at an anomaly.
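One lightweight way to catch anomalies before they skew your test ideas is to flag days where a metric falls well outside its usual range. Here’s a rough sketch using a two-standard-deviation rule; the daily conversion rates are invented for illustration.

    from statistics import mean, stdev

    # Hypothetical daily conversion rates (%) for the past two weeks.
    daily_rates = [2.1, 2.3, 2.0, 2.2, 2.4, 2.1, 2.2,
                   1.2, 2.3, 2.0, 2.2, 2.1, 2.3, 2.2]

    avg, sd = mean(daily_rates), stdev(daily_rates)

    # Flag days more than two standard deviations from the mean; investigate
    # those (a support change? a broken form?) before treating them as a trend.
    for day, rate in enumerate(daily_rates, start=1):
        if abs(rate - avg) > 2 * sd:
            print(f"Day {day}: {rate}% looks like an anomaly "
                  f"(mean {avg:.2f}%, sd {sd:.2f}%)")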

Finally, you need to isolate your target testing area. This means developing a hypothesis statement around a particular page, feature, or set of pages that you believe will increase conversions and deliver on a specific key performance metric. If your organization and testing tools are more complex, you may also be testing a specific audience segment such as traffic from a referral source or only new visitors.

Form A Hypothesis

Once you’ve isolated the pages, elements, or audience segment you’d like to A/B test, it’s time to create your hypothesis. At a basic level, this simply includes the problem you’ve identified, your proposed solution, and the expected outcome. Here’s what that looks like in practice:


After identifying your issue in UX research, you’ll want to develop a concise proposed solution and then state your expected outcome. This outcome should combine the metric you’ll be monitoring with the expected direction of change (up or down). We don’t recommend stating a specific number (for example, “increase conversions by 2%”), as this distracts from the goal of the test: isolating a problem area and testing a solution.

Here Are A Few Examples Of Properly Stated Testing Hypotheses:

(Issue) We are seeing return visitors reading our about us page more than average. (Solution) We believe that by adding a promo code popup to this page (Result) we will see an increase in repeat purchases.

(Issue) We are seeing fewer email signups than usual on our blogs. (Solution) We believe that by adding more descriptive content around the email signup form (Result) we will see an increase in email signups.

(Issue) We are seeing form abandonment happening more with European visitors. (Solution) We believe that by adding a currency conversion widget to our checkout page, (Result) we will increase purchases from European shoppers.
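If it helps your team keep hypotheses consistent, the issue/solution/result structure is easy to capture as a simple record. The sketch below is our own illustration, not a feature of any particular testing tool.

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        """The issue -> solution -> result structure of an A/B test."""
        issue: str     # what your UX research uncovered
        solution: str  # the change you'll ship as Version B
        result: str    # the metric and expected direction, with no hard number

    h = Hypothesis(
        issue="Form abandonment is higher among European visitors.",
        solution="Add a currency conversion widget to the checkout page.",
        result="An increase in purchases from European shoppers.",
    )
    print(f"Issue: {h.issue}\nSolution: {h.solution}\nExpected: {h.result}")

Writing hypotheses this way also makes the results log in the next section easier to keep.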

What To Do With Test Results

Once you’re in the groove with testing research and are writing solid test hypotheses, it’s time to figure out what to do with your results. Start by documenting them. This can be as simple as an ongoing spreadsheet with the following columns (see the sketch after this list):

  • Testing dates
  • URLs included in the test
  • Elements, pages, or audience segments tested (With as much detail as necessary, including screenshots)
  • Hypothesis
  • Metrics tracked and outcome
  • Key takeaways (Include ideas for future tests)
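If a spreadsheet app feels heavy, the same log works as a plain CSV file. A minimal sketch follows; the column names and the example entry are hypothetical.

    import csv

    COLUMNS = ["test_dates", "urls", "elements_tested",
               "hypothesis", "metrics_and_outcome", "key_takeaways"]

    # A hypothetical entry, for illustration only.
    row = {
        "test_dates": "2020-09-01 to 2020-09-21",
        "urls": "/checkout",
        "elements_tested": "Currency conversion widget (EU segment)",
        "hypothesis": "A currency widget will lift EU purchases",
        "metrics_and_outcome": "EU purchase rate, up",
        "key_takeaways": "Try localizing prices earlier in the funnel",
    }

    with open("ab_test_log.csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if f.tell() == 0:  # write the header row once, when the file is new
            writer.writeheader()
        writer.writerow(row)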

When Is An A/B Test Over?

We generally recommend letting tests run until the results reach statistical significance, which may require a large sample size. Some tests need only a day; others take more than a month to show you something noteworthy. Trust your process and your testing tools, and know that the results will come.
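Your testing tool should report significance for you, but it helps to know roughly what’s happening under the hood. For a simple conversion-rate comparison, that’s typically a two-proportion z-test; here’s a rough sketch with invented visitor counts.

    from math import sqrt
    from statistics import NormalDist

    # Hypothetical results: (conversions, visitors) for each variation.
    conv_a, n_a = 180, 4_000   # control (Version A)
    conv_b, n_b = 225, 4_000   # variation (Version B)

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)

    # Standard error under the pooled null hypothesis (no real difference).
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se

    # Two-sided p-value; p < 0.05 is the conventional significance bar.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")

If the p-value stays above the bar, keep the test running (or accept that the variation simply didn’t move the needle).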

Conclusion

Taking small steps toward a more mature testing program can pay off massively in your conversion rate optimization efforts. Better testing is a more efficient path to variations that outperform expectations and unlock new levels of business performance. Do the research, form the hypothesis, and use the right tools to evolve your website into a user-centered experience.

Sean
Author


Sean is the Senior Content Marketing Manager at Lucky Orange, a leading CRO toolkit serving over 250,000 websites around the world. He helps businesses of all sizes better understand the role of a great website in the customer journey.
