11 A/B Testing Mistakes That Even Experts Make

Watch out for these 11 pitfalls of A/B Testing.


Have you ever run an A/B test? A/B testing is far more complex than many people realize, and if you are not careful, you can easily undermine your own results. Testing under the wrong conditions can cause more harm than good. Below are the 11 A/B testing mistakes you must avoid to run a successful testing campaign.

1. Ending A/B tests too early

One of the most common mistakes marketers make is ending their A/B tests too early. Statistically speaking, the larger the sample size, the more reliable the results. When you conduct an A/B test, the results are easy to misinterpret. Statistical significance is what tells you whether variation A really outperforms variation B, but the sample size has to be large enough for that verdict to be trustworthy.

Imagine that you are running an A/B test on a page that normally gets 40,000 unique visits per month. Variation A gets 40 conversions out of 400 visits, while variation B gets 24 conversions out of 367 visits. In this case, it is not wise to conclude that variation A is better than variation B, because the sample size is far too small, as the quick calculation below shows.
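Here is a minimal sketch of that check in Python, using a standard two-proportion z-test on the numbers above (the function and its name are just illustrative):

```python
# Minimal two-proportion z-test on the article's example numbers:
# 40/400 conversions for variation A vs. 24/367 for variation B.
from math import sqrt, erf

def two_proportion_z_test(conv_a, visits_a, conv_b, visits_b):
    """Return (z statistic, two-sided p-value) for a difference in rates."""
    rate_a = conv_a / visits_a
    rate_b = conv_b / visits_b
    # Pooled rate under the null hypothesis that both variations convert equally.
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (rate_a - rate_b) / std_err
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z_test(40, 400, 24, 367)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = 1.73, p = 0.083
```

Even though variation A's 10% conversion rate looks roughly 50% better than variation B's 6.5%, the p-value of about 0.08 fails the conventional 0.05 significance threshold: with samples this small, the difference could easily be noise.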

To get the most reliable result, you should base your A/B test on a sample size as close to that 40,000 mark as possible, which obviously means not stopping the test too early. The rule of thumb is that if you run a site that deals with seasonal products, you should keep the test running through at least the next season. Another useful ballpark is to collect at least 100 conversions before you terminate the test, though the right number ultimately hinges on the size of your traffic. A standard sample-size formula can make that ballpark concrete, as sketched below.
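Here is that formula sketched in Python. The 5% significance level, 80% power, and the baseline and target rates are illustrative assumptions, not figures from the article:

```python
# Rough per-variation sample size needed to detect a lift between two
# conversion rates at a 5% significance level with 80% power.
from math import ceil

Z_ALPHA = 1.96  # two-sided significance level of 0.05
Z_BETA = 0.84   # statistical power of 0.80

def sample_size_per_variation(baseline_rate, target_rate):
    """Visitors needed in EACH variation to detect baseline -> target."""
    variance = baseline_rate * (1 - baseline_rate) + target_rate * (1 - target_rate)
    delta = target_rate - baseline_rate
    return ceil((Z_ALPHA + Z_BETA) ** 2 * variance / delta ** 2)

# Detecting a lift from a 10% to a 12% conversion rate:
print(sample_size_per_variation(0.10, 0.12))  # 3834 visitors per variation
```

The smaller the lift you want to detect, the more visitors you need, which is why low-traffic pages must run their tests for much longer.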

2. Believing what you read

Everyone talks about their A/B testing experiences. As we have already mentioned, it is entirely possible that someone ran a test, called it too early, and/or used a very small sample size. They then end up evangelizing how a certain change boosted their conversion rate, without realizing they are making premature pronouncements based on inconclusive results. Plenty of CRO testers implement test results published by others, only for those changes to end up hurting their own conversions. To be on the safe side, run your own tests, under the right conditions, and see how they work out.

3. Influencing traffic

Influencing visitor behavior or artificially inflating traffic to the site so that you can conclude your A/B test as soon as possible is a bad idea. Let things run their natural course. Influencing traffic in order to test your changes is like adding more cars to the road so you can measure the frequency of accidents: you can be sure of getting distorted results.

4. Running tests without a hypothesis

Too often, companies take a rather random approach to their split tests. They seem to believe that A/B testing means throwing random ideas at the wall and seeing which ones stick. The thing is, you cannot run a successful A/B test on a random idea. You need a hypothesis to test. Simply stated, a hypothesis is a proposed explanation based on limited evidence, which serves as the starting point for further experimentation or study, as illustrated in the sketch below.
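As an illustration, one way to enforce that discipline is to require every test to be written down as a structured record before it runs. The fields and the example below are hypothetical:

```python
# A hypothetical record that forces every A/B test to state its
# hypothesis before it is allowed to run.
from dataclasses import dataclass

@dataclass
class TestHypothesis:
    observation: str  # the evidence that motivated the test
    change: str       # the single change being tested
    prediction: str   # the outcome the change is expected to produce
    metric: str       # the measurable number that decides success

checkout_test = TestHypothesis(
    observation="Analytics show 70% of visitors abandon the 3-step checkout",
    change="Collapse the checkout into a single page",
    prediction="Fewer abandonments and more completed purchases",
    metric="checkout completion rate",
)
```

If you cannot fill in the observation field, you do not have a hypothesis yet; you have a random idea.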

5. Giving up after the first test fails

Do not give up and move on to another page after your first test fails. The better approach is to learn from the failed test, refine your customer theory and hypothesis, and then run a follow-up test. If that one fails too, refine the hypothesis again and run another test, and so on until you succeed.

6. Assessing the past instead of getting insights for the future

One mistake even experts make is staying stuck in the past instead of finding insights to shape the future. Too often, marketers spend their time trying to figure out why something that worked in the past is no longer working, instead of using their A/B tests to generate insights for the future.

7. Testing too many variables or elements at a time

“One of the top A/B testing mistakes is to test too many variables at a time,” says Matt Gershoff of Conductrics. It is easy to fall into the trap of running too many tests at once. The success of one A/B test may tempt you to run many concurrent tests, but it is not advisable.

The reason is simple. First, researching, running a test, analyzing and measuring the results, and making a decision all take time, so if you run several tests at once you may end up overwhelmed and confused. Second, every A/B test you run exposes you to a possible drop in revenue for the duration of the test; run too many at once and you risk a significant drop. There is also a statistical cost, as the quick calculation below shows. It is much better to run one well-researched, hypothesis-driven test at a time.
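Assuming each test uses the usual 5% significance threshold, the chance that at least one of several concurrent tests declares a spurious winner grows quickly:

```python
# With a 5% false-positive rate per test, the probability that at least
# one of k concurrent tests "wins" by pure chance grows quickly with k.
ALPHA = 0.05  # per-test false-positive rate

for k in (1, 3, 5, 10):
    family_wise = 1 - (1 - ALPHA) ** k
    print(f"{k:2d} tests -> {family_wise:.0%} chance of a spurious winner")
# 1 test -> 5%, 3 tests -> 14%, 5 tests -> 23%, 10 tests -> 40%
```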

8. Testing insignificant elements

Not everything should be subjected to an A/B test, because not everything meaningfully affects conversions. For instance, testing the background color is usually a waste of time, because no single color suits everyone; what matters is that your overall color scheme is harmonious and uses contrast well. Test only the elements that can significantly impact your conversion rate.

9. Letting gut feelings overrule the results

The first lesson you learn in A/B testing is that you will not always like the results. Do not let your gut feelings overrule your test results. For instance, a test might tell you that a red button produces higher conversions. Even if you think the red button clashes with the rest of the page, do not reject the result. Your goal is to convert, not to beautify the website.

10. Ignoring small gains

Most marketers run A/B tests hoping to hit the jackpot. Unfortunately, that rarely happens. Sometimes a test yields only a 1% improvement. Do not ignore such results, because that is what you will get most of the time. Unless your site is in very bad shape, you will seldom see lifts like 10%, 20%, 35%, or 50%; far more common are numbers like 1%, 1.5%, or 3.23%. Remember, even a 1% improvement in your conversion rate can translate into thousands or even millions in additional revenue, as the back-of-the-envelope calculation below illustrates.
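All of the traffic, conversion, and order-value figures in this sketch are assumptions chosen for illustration:

```python
# Annualized revenue impact of small relative conversion lifts.
# Traffic, baseline rate, and order value are illustrative assumptions.
monthly_visits = 40_000   # reusing the article's earlier traffic figure
baseline_rate = 0.02      # 2% of visitors currently buy
avg_order_value = 50.0    # dollars per order

for lift in (0.01, 0.015, 0.0323):  # the small lifts mentioned above
    extra_orders_per_month = monthly_visits * baseline_rate * lift
    extra_revenue_per_year = extra_orders_per_month * avg_order_value * 12
    print(f"{lift:.2%} lift -> ${extra_revenue_per_year:,.0f} per year")
# 1.00% -> $4,800; 1.50% -> $7,200; 3.23% -> $15,504
```

Even on this modest site, a 1% lift is worth thousands of dollars a year; scale the same percentages up to a large store's traffic and the figures run into the millions.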

11. Not running tests all the time

Conducting A/B tests is not a one-off event; it is a continuous process. Many elements and variables need to be tested before you arrive at the combination that produces optimal results. Most people run a couple of tests and stop there, forgetting that dozens of other elements and variables deserve testing as well.
