
Your Comprehensive Guide to A/B Testing: Expert Tips from Google, HubSpot, and More

Whether you are an experienced entrepreneur or just starting out, there’s a good chance you have come across many articles and resources on A/B testing. You may have already conducted A/B testing on your email subject lines or your social media posts.

Although much has been said about A/B testing in the marketing field, many people still get it wrong. The result? People make major business decisions based on inaccurate results from improper testing.

A/B testing is often oversimplified, especially in content written for store owners. Below, you will find everything you need to know to get started with different types of A/B testing for e-commerce, explained in the simplest possible terms. A/B testing can be a pivotal factor in choosing the right product direction, increasing conversion rates on landing pages, and much more.

What is A/B Testing?

A/B testing, sometimes referred to as split testing, is a process that compares two versions of the same webpage, email, or other digital asset to determine which one performs better based on user behavior. It is a useful tool for improving the performance of a marketing campaign and for better understanding what makes your target audience convert.

This process allows you to answer important business questions, helps you generate more revenue from the traffic you already have, and provides the foundation for a data-driven marketing strategy.

How Does A/B Testing Work?

When using A/B testing in a marketing context, version A of the asset (let’s call it “the control”) is shown to 50% of visitors, and version B (let’s call it “the variant”) is shown to the other 50% of visitors.

The version that leads to the highest conversion rate wins. For example, let’s say that version B resulted in the higher conversion rate. You would declare it the winner and direct 100% of visitors to version B.

Then, version B becomes the new control, and you need to design a new variant.
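
To make the mechanics concrete, here is a minimal sketch, in Python with hypothetical names, of how a testing tool might split traffic. Hashing the visitor ID rather than flipping a coin on every page view keeps each visitor in the same group for the whole test:

```python
import hashlib

def assign_version(visitor_id: str, experiment: str = "homepage-test") -> str:
    """Deterministically assign a visitor to the control or the variant.

    Hashing the visitor ID (instead of picking randomly on each visit)
    guarantees a visitor always sees the same version of the page.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "control" if bucket < 50 else "variant"  # 50/50 split

print(assign_version("visitor-42"))  # same output every time for this visitor
```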

It’s worth noting that the conversion rate from A/B testing can often be an inaccurate measure of success.

For example, if you priced an item at $50 on one page and offered it for free on another, the free version would "win" without telling you anything useful. Like any tool or strategy you use in your business, A/B testing needs to be applied with strategy.

That’s why you should track conversion value all the way to the final sale.
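
To make that concrete, the sketch below (with made-up numbers) compares two versions on revenue per visitor rather than raw conversion rate; a version that converts more but earns less per visitor can still be the worse choice:

```python
# Hypothetical results: the variant converts better, but at a lower price point.
control = {"visitors": 5_000, "orders": 150, "revenue": 150 * 50.0}  # $50 item
variant = {"visitors": 5_000, "orders": 400, "revenue": 400 * 10.0}  # discounted to $10

for name, v in [("control", control), ("variant", variant)]:
    conversion_rate = v["orders"] / v["visitors"]
    revenue_per_visitor = v["revenue"] / v["visitors"]
    print(f'{name}: {conversion_rate:.1%} conversion, '
          f'${revenue_per_visitor:.2f} revenue per visitor')

# The variant "wins" on conversion rate (8.0% vs 3.0%) but loses on
# revenue per visitor ($0.80 vs $1.50) -- track value through to the sale.
```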

How to Set Up A/B Testing

Let’s look at some tools and basic information for setting up A/B testing:

Choosing an A/B Testing Tool

There are many tools available for conducting A/B testing, such as Google Optimize, Optimizely, and VWO. You can choose the tool that fits your needs and budget.

Formulating a Strong Hypothesis

Before you start testing anything, you should have a strong hypothesis. For example, “If I reduce shipping costs, conversion rates will increase.” The hypothesis should be measurable, aimed at solving a specific conversion problem, and focused on data-driven insights instead of gut feelings.

Defining the Goal and Designing the Test

After formulating the hypothesis, you need to define the goal you want to improve and design the test accordingly. You should have a control version and a variant version to test.

Implementing the Test and Analyzing Results

After setting up the test, you need to implement it and gather the necessary data. Once the test is concluded, analyze the results and determine the winner. Don’t forget to analyze the losers as well to learn what you can from them.
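
A common way to decide the winner once the data is in is a two-proportion z-test on the conversion counts. Here is a self-contained sketch with made-up numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))         # two-sided
    return p_a, p_b, p_value

p_a, p_b, p = two_proportion_z_test(conv_a=420, n_a=10_000, conv_b=495, n_b=10_000)
print(f"control {p_a:.2%} vs variant {p_b:.2%}, p-value = {p:.3f}")
# A p-value below 0.05 is the conventional bar for calling a winner.
```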

Documenting Results and Learning from Them

It’s important to document past A/B testing results and keep them organized. This documentation will help you utilize insights and experiences from the past in the future. Excel spreadsheets or specialized tools like Effective Experiments can be optimal ways to document results.
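
If you go the spreadsheet route, even a simple append-only log works. A minimal sketch (the columns are just one possible layout):

```python
import csv
from datetime import date

# Append one row per finished test to a running log anyone can open in Excel.
with open("ab_test_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([
        date.today().isoformat(),
        "PDP shipping-cost test",                 # test name
        "Lower shipping fee lifts orders",        # hypothesis
        "variant",                                # winner
        "+12% conversions, p = 0.02",             # headline result
        "Shipping cost matters most on mobile",   # key learning
    ])
```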

How to Analyze A/B Test Results

When analyzing the results of an A/B test, you should focus on insights and analyses rather than just knowing the winner and the loser. You should leverage the data even if the test is a failure; it may provide valuable insights that can be used in future experiments and in other areas of your business.

One of the most important things here is the need to segment the data. The test may be a loser overall, but it may have performed well with a certain category of audience. You should break down the data to uncover hidden insights beneath the surface.

There is always something to learn and analyze – do not ignore the losers!
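
One quick way to do that breakdown is to group the raw test log by segment. A sketch using pandas with made-up data:

```python
import pandas as pd

# Hypothetical per-visitor log: which version was shown, the visitor's
# device segment, and whether the visit converted.
df = pd.DataFrame({
    "version":   ["control", "variant"] * 4,
    "segment":   ["mobile", "mobile", "desktop", "desktop"] * 2,
    "converted": [0, 1, 1, 0, 0, 1, 1, 0],
})

# Conversion rate and sample size per segment and version: a variant that
# loses overall can still win clearly inside one segment.
print(df.groupby(["segment", "version"])["converted"].agg(["mean", "count"]))
```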

A/B Testing Processes for Professionals

Now that you’ve gone through a standard educational course on A/B testing, let’s take a look at the processes of professionals from companies like Google and HubSpot.

Krista Seiden

The A/B testing process I champion for web and apps centers on analysis – in my opinion, this is the fundamental essence of any good testing program. During the analysis phase, the goal is to examine your analytics data, survey data, user-experience data, or any other customer insights you have in order to understand your improvement opportunities.

Once you have a good pipeline of ideas from the analysis phase, you can move on to forming hypotheses about what may be wrong and how those areas can be improved.

Next, it’s time to build and run your tests. Be sure to run them for a reasonable period (I prefer a two-week period to ensure I account for weekly changes or variations), and when you have enough data, analyze the results to determine the winner.
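
"Enough data" can be estimated before the test starts. Below is a sketch of a common normal-approximation sample-size formula (the baseline rate and lift are example values, not a recommendation):

```python
from math import ceil
from statistics import NormalDist

def visitors_per_version(baseline: float, lift: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per version to detect an absolute
    `lift` over a `baseline` conversion rate (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p = baseline + lift / 2  # average rate under the alternative
    return ceil(2 * p * (1 - p) * ((z_alpha + z_beta) / lift) ** 2)

# e.g. detecting a 4% -> 5% improvement needs roughly this many visitors per version:
print(visitors_per_version(baseline=0.04, lift=0.01))
```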

It’s also important to take some time at this stage to analyze the losers as well – what can you learn from these differences?

Finally, and you may only reach this stage after spending time laying the groundwork for a strong optimization program, it’s time to consider personalization. This does not necessarily require sophisticated tools; it can be a result of the data you have about your specific users.

Marketing personalization can be as simple as targeting the right content to the right locations, or as complex as targeting based on individual user actions. Do not jump into everything at once with personalization; make sure you have spent enough time getting the basics right first.

Alex Birkett

At a high level, I try to follow this process:

1. Collect data and ensure analytics are implemented accurately.
2. Analyze the data and uncover insights.
3. Turn insights into hypotheses.
4. Prioritize based on impact and ease, and make the most of resource allocation (especially technical resources).
5. Run the test (using the best statistical practices to the best of my knowledge and ability).
6. Analyze the results and implement the change, or not, based on what they show.
7. Iterate on what you learned and repeat.

In simpler terms: research, test, analyze, iterate.
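
Birkett does not name a scoring framework here, but "impact and ease" prioritization is often formalized with a scheme such as ICE (impact, confidence, ease). The sketch below is a hypothetical illustration, not his exact method:

```python
# Hypothetical test ideas scored 1-10 on Impact, Confidence, and Ease (ICE).
ideas = [
    {"idea": "Simplify the checkout form",   "impact": 8, "confidence": 6, "ease": 4},
    {"idea": "New CTA copy on blog posts",   "impact": 3, "confidence": 7, "ease": 9},
    {"idea": "Free-shipping threshold test", "impact": 9, "confidence": 5, "ease": 3},
]

def ice(idea: dict) -> float:
    """Average of the three 1-10 scores."""
    return (idea["impact"] + idea["confidence"] + idea["ease"]) / 3

# Highest score first: run these tests before the rest.
for idea in sorted(ideas, key=ice, reverse=True):
    print(f'{ice(idea):.1f}  {idea["idea"]}')
```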

Although this process can change based on context (am I testing a critical product feature for the business? Is it a CTA for a blog post? What is the risk profile and the balance of innovation versus risk exposure?), it applies to almost any size or type of company.

The point here is that this process is flexible, but it bakes in enough data, whether qualitative customer feedback or quantitative analysis, to provide better testing insights and set the right priorities so that you can drive traffic to your online store.

Ton Wesseling

The first question we answer when we want to optimize a customer journey is: where does this product or service fit in the ROAR model we created at Online Dialogue? Are you still in the risk phase, where we can do a lot of research but cannot yet validate our findings through online experiments (fewer than 1,000 conversions per month), or are you in the optimization phase? Or even beyond?

The phases:

Risk: lots of research, which will translate into everything from business-model changes to brand-new designs and value propositions.

Optimization: large experiments that validate the value proposition and business model.

Improvement: small experiments to validate user-behavior hypotheses, which build the knowledge needed for larger design changes.

Automation: you still have experimentation power (visitors) left, so there is no need to use all of it to validate the user journey. What remains should be used for exploitation and faster growth now (without focusing on long-term learning). This can be done by running bandit tests / using algorithms.

Re-think: you stop adding heavy research unless there is a pivot to something new.
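
The "bandit tests / algorithms" Wesseling mentions in the automation phase usually means multi-armed bandits, which shift traffic toward better-performing versions automatically. A minimal epsilon-greedy sketch with made-up numbers:

```python
import random

def pick_version(stats: dict, epsilon: float = 0.1) -> str:
    """Epsilon-greedy bandit: mostly exploit the best-performing version,
    but keep exploring a small fraction of the time."""
    if random.random() < epsilon:
        return random.choice(list(stats))  # explore
    # exploit: highest observed conversion rate so far
    return max(stats, key=lambda v: stats[v]["conversions"] / max(stats[v]["shown"], 1))

# Running tallies for each version (hypothetical).
stats = {
    "control": {"shown": 900, "conversions": 36},
    "variant": {"shown": 900, "conversions": 45},
}

version = pick_version(stats)
stats[version]["shown"] += 1  # ...then record a conversion if one happens
```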

So, web or app A/B testing only becomes a big deal from the optimization phase of ROAR onward (up to re-think).

Our approach to running experiments follows our FACT & ACT model, and the research we conduct is based on our 5V model.

We gather all these insights to form a key research hypothesis, which will lead to sub-hypotheses that will be prioritized based on the data collected from A/B tests on desktop or mobile. The stronger the hypothesis, the higher its ranking.

Once we know whether our hypothesis is true or false, we can start combining the findings and taking bigger steps, redesigning or reordering larger parts of the customer journey. At some point, however, the winning implementations will only take you to a local maximum; then you need to take a bigger step to reach a potential global maximum.

Of course, the key discoveries will spread throughout the company, leading to all kinds of broader improvements and innovations based on reliable first-party insights.

Julia Starostinko

The purpose of the experiment is to verify that making changes to the current webpage will have a positive impact on the business.

Before starting, it is important to determine whether running an experiment is truly necessary. Consider the following scenario: there is a button with a very low click-through rate. It would be almost impossible to make this button perform any worse, so verifying the effectiveness of the proposed change (i.e., running an experiment) is unnecessary.

Similarly, if the proposed change to the button is small, it is likely not worth spending time setting up, executing, and dismantling an experiment. In this case, the changes should be applied to everyone, and the button’s performance can be monitored.

If it has been determined that running an experiment would actually be beneficial, the next step is to identify the business metrics that should be improved (e.g., increasing the conversion rate of the button). Then we ensure adequate data collection is in place.

Once that is done, visitors are randomly split into two groups: one group is shown the current version of the button, while the other receives the new version. The conversion rate of each group is monitored, and once statistical significance is reached, the result of the experiment is determined.

Peep Laja

A/B testing is part of a bigger picture for conversion optimization. In my opinion, 80% of it is about research and only 20% is about testing. Conversion research will help you determine what to test in the first place.

My testing process usually looks like this (simplified summary):

1. Conduct conversion research using a framework such as ResearchXL to identify issues on your site.
2. Pick a high-priority problem (one that affects a large number of users and is acute) and brainstorm as many solutions as possible, feeding the ideation with insights from the conversion research.
3. Decide which device to test on (mobile A/B tests should run separately from desktop tests).
4. Determine how many variations you can test (based on your traffic and transaction volume), then select the best one or two ideas to test against the control.
5. Wireframe the precise changes (copywriting, design changes, etc.). Depending on the scope, you may also need a designer to create new elements.
6. Implement the changes in your testing tool, with the help of a front-end developer.
7. Set up the necessary integrations (e.g., Google Analytics) and define appropriate goals.
8. QA the test (broken tests are the biggest killer of A/B testing) to ensure it works across all browsers and devices.
9. Run the test!
10. Once the test is complete, perform post-test analysis. Depending on the results, implement the winner, iterate on the process, or move on and test something else.

Optimizing A/B Testing for Your Business

You have the process, you have the power! So go ahead: pick the A/B testing tool that fits you best and start testing your store. Before you know it, those insights will be putting extra money in the bank.

If you want to continue learning about optimization, take advantage of a free course, such as the A/B Testing course by Google on Udacity. You can learn more about A/B testing for websites and applications to enhance your optimization skills.

Are you ready to create your first business? Start a free trial with Shopify – no credit card required.

Source: https://shopify.com/blog/the-complete-guide-to-ab-testing

