How to Determine if Your A/B Test is Statistically Significant

Vertical Measure’s Conversion Rate Optimization (CRO) team uses A/B testing to determine which recommended changes should be fully implemented on our website. We’ve developed our very own statistical significance calculator to determine whether or not an A/B test has reached statistical significance.

And we know what you’re thinking: “That’s cool. But, what the hell does all that mean?”

To help explain, let’s start with a quick refresher…

What is CRO?

CRO is the continuous process of improving a site’s user experience in order to derive the most value (conversions) from all traffic. Since Vertical Measures values data-driven solutions, our CRO team recommends ongoing website changes based on data gained from continuous A/B testing.

What is A/B Testing?

A/B testing determines whether a given change positively or negatively impacts a site’s performance. Testing involves a control page (the page without the change) and a variant page (the page with the change). A/B testing software, such as Convert or AB Tasty, then splits traffic evenly between the two and tracks the conversion rate of each. An A/B test can be concluded, and the winning change implemented, once it reaches statistical significance.

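To make that concrete, here is a tiny, hypothetical sketch of how a 50/50 traffic split can work under the hood. This is only an illustration of the idea, not how Convert or AB Tasty actually assign traffic:

    import hashlib

    def assign_variation(user_id: str, test_name: str) -> str:
        """Deterministically bucket a user into 'control' or 'variant' (50/50 split)."""
        # Hashing the user id together with the test name means the same user always
        # sees the same version of this test, while assignments differ across tests.
        digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return "control" if bucket < 50 else "variant"

    print(assign_variation("user-1234", "homepage-cta-test"))  # 'control' or 'variant'

Because the assignment is deterministic, a returning visitor keeps seeing the same version for the life of the test.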

What is statistical significance?

Statistical significance expresses the level of certainty that the results of a given test are not due to sampling error. Statistical significance is used across many different industries and testing environments, including:

  • Academic research
  • Psychology tests
  • Medical tests
  • And many more

Within the realm of A/B testing, the data sets in question are typically the number of users and the number of conversions for each variation. Statistical significance is what tells you whether an A/B test was genuinely successful or unsuccessful.
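
To see why sampling error matters, here is a quick, hypothetical simulation (separate from our calculator) of an “A/A test,” where both pages have the exact same true conversion rate. Any “lift” it finds is pure chance:

    import random

    # "A/A test": both pages share the SAME true conversion rate (2%),
    # so any difference we observe is pure sampling error.
    random.seed(42)
    true_rate, visitors_per_page, trials = 0.02, 1_000, 2_000

    big_fake_lifts = 0
    for _ in range(trials):
        control_conv = sum(random.random() < true_rate for _ in range(visitors_per_page))
        variant_conv = sum(random.random() < true_rate for _ in range(visitors_per_page))
        if control_conv and variant_conv / control_conv >= 1.25:  # a 25%+ "lift"
            big_fake_lifts += 1

    print(f"{big_fake_lifts / trials:.0%} of A/A tests showed a 25%+ 'lift' by chance alone")

With small samples like these, a surprising share of identical pages will look like winners, which is exactly what a significance test guards against.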

Ideally, all A/B tests reach 95% statistical significance, or 90% at the very least. Reaching above 90% gives you confidence that the change really did impact the site’s performance, positively or negatively, rather than the result being a fluke of sampling. The best way to reach statistical significance is to test pages with a high amount of traffic or a high conversion rate. The ideal test length falls anywhere between 2 and 8 weeks.

However, sometimes a test will never reach statistical significance due to low traffic or low conversion volume. We recommend running most tests for a maximum of 2 months due to cookies being reset or deleted.

You don’t want users included in multiple versions of a test while it’s running, because that will skew the data. If a test isn’t reaching statistical significance, conclude the test and consider making a more drastic change, or moving the test to an area of your site with higher traffic or a higher conversion rate. Don’t be like Rose; don’t wait.
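
Before you even launch a test, you can get a rough sense of whether a page has enough traffic to reach significance inside that 2-month window. The sketch below uses a standard two-proportion sample size formula (95% confidence, 80% power); it’s a rule-of-thumb estimate, not the math inside our calculator, and the traffic figure is just an example:

    import math

    def visitors_needed_per_variation(baseline_rate: float, expected_lift: float) -> int:
        """Rough visitors needed per variation for 95% significance and 80% power."""
        p1 = baseline_rate
        p2 = baseline_rate * (1 + expected_lift)
        p_bar = (p1 + p2) / 2
        z_alpha, z_beta = 1.96, 0.84  # 95% confidence level, 80% power
        numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                     + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return math.ceil(numerator / (p1 - p2) ** 2)

    # Example: a page converting at 2%, hoping the change produces a 20% relative lift.
    n = visitors_needed_per_variation(0.02, 0.20)   # roughly 21,000 per variation
    weekly_traffic = 5_000                          # hypothetical traffic to the page
    print(f"~{math.ceil(2 * n / weekly_traffic)} weeks to finish the test")

If the estimate comes out well past 8 weeks, that’s a strong hint to test a bigger change or a higher-traffic page instead.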

How to use VM’s statistical significance calculator

Our statistical significance calculator requires only 4 data points to determine a test’s statistical significance: control visitors, control conversions, variant visitors, and variant conversions. To use the calculator and get results, simply enter this data into the left-hand side of the calculator (underlined for your convenience).

The results section displays 3 pieces of important information:

The first tells you whether the variant won or lost, that is, whether the control or the variant has the higher conversion rate. But don’t celebrate too quickly: a win or loss is only worth implementing if it’s statistically significant.

The second result is the percentage change in conversion rate. The greater the difference between the two conversion rates, the faster a test will reach statistical significance. For example, moving from a 2.0% to a 2.5% conversion rate is a 25% relative lift. This result also quickly quantifies the potential conversion rate lift if the variant is applied to the site, securing your bragging rights.

The third and last piece of information is the test’s statistical significance. As stated earlier, you want above 95% (green text) or at least above 90% (orange text). If you have a truly winning test, all three results will be displayed in GREEN!
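
The calculator handles the math for you, but if you’re curious, here’s a minimal sketch of one common way to produce all three results from those four inputs, using a pooled two-proportion z-test (shown purely for illustration; the calculator may use a different formulation under the hood):

    from math import erf, sqrt

    def ab_test_results(control_visitors, control_conversions,
                        variant_visitors, variant_conversions):
        """Winner, relative % change, and confidence level from the four data points."""
        cr_control = control_conversions / control_visitors
        cr_variant = variant_conversions / variant_visitors

        # Result 1: did the variant win or lose?
        winner = "variant" if cr_variant > cr_control else "control"

        # Result 2: relative % change in conversion rate.
        pct_change = (cr_variant - cr_control) / cr_control * 100

        # Result 3: significance via a pooled two-proportion z-test.
        pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
        se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
        z = (cr_variant - cr_control) / se
        confidence = erf(abs(z) / sqrt(2)) * 100  # two-sided confidence level, in %
        return winner, pct_change, confidence

    winner, pct_change, confidence = ab_test_results(10_000, 200, 10_000, 260)
    print(f"{winner} wins, {pct_change:+.1f}% change, {confidence:.1f}% significance")
    # variant wins, +30.0% change, ~99.5% significance -- green across the board

Feed it the same four numbers you’d enter in the calculator and compare: anything at 95% or above is the green zone, 90-95% is orange, and below that the test simply hasn’t proven anything yet.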

Why should you use the statistical significance calculator?

You should use the statistical significance calculator to determine the success or failure of an A/B test. If you only look at the change in conversion rate, it’s impossible to be sure whether a test’s results are due to the change you made or to sampling error.

At Vertical Measures, we only recommend implementing data-driven solutions. Overall, our CRO tests win with statistical significance over 66% of the time and see an average conversion rate increase of 22%. By using our statistical significance calculator, you can ensure that your changes are data-driven.

Is your A/B test statistically significant?

Click the link below and head over to our Statistical Significance Calculator to find out just how well your split tests are performing.

Take me to the calculator now!

Zach Bramwell: Zach is a total web dork. He's worked across a wide range of the web, from UX design to backend web development. Zach loves data-driven design and usability solutions -- he considers himself to be a user’s advocate. As noted by the team at Vertical Measures and his fans, "His work is almost as fantastic as his hair".
