How to Run A/B Tests with Low Website Traffic or Small Sample Sizes

Time and time again, A/B testing has been proven to be a simple, yet powerful tool for improving a website’s performance. But what happens when you don’t have a ton of traffic? For many businesses, low traffic A/B testing presents a unique challenge.

If your site gets fewer than 100,000 visitors a month or has low conversion numbers, traditional A/B testing methods might feel a bit useless. This is because low traffic makes it harder to achieve statistical significance—but that doesn’t mean you should skip A/B testing altogether.

In fact, with the right strategies, you can still run meaningful experiments, gather valuable insights, and optimize your website for success. It’s all about working smarter—not harder. 

Let’s break it down step by step.

What is the minimum sample size for an A/B test?

While some industry experts suggest a minimum of 100 conversions per variation to determine statistical significance, there's no universal magic number. The minimum sample size for A/B testing depends on two key factors:

Your current traffic levels

The expected “uplift” (difference between variations A and B)

To calculate the required sample size, you’ll need to know:

Your current conversion rate

The minimum detectable effect (the smallest improvement you’re looking for)

Your desired confidence level (typically 95%)

How to calculate it:

The major metrics you’ll be evaluating while estimating sample size are:

  • Baseline conversion rate: The predicted conversion rate for the control group.
  • Minimum detectable effect (MDE): This is the amount of improvement that the experiment should detect as compared to the baseline. 

Assume you have a website with a baseline conversion rate of 20% and you wish to detect a lift of 2 percentage points (from 20% to 22%). Relative to the baseline, that makes the MDE 10%, calculated with the following formula:

MDE (10%) = Desired Conversion Rate Lift (2%) / Baseline Conversion Rate (20%) * 100% 

Next, determine the sample size for your test. To do so, use a sample size calculator, such as this one from SurveyMonkey.
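If you'd rather see the arithmetic behind those calculators, here's a minimal sketch in Python using the standard two-proportion z-test approximation. The function name and default values (95% confidence, 80% power) are illustrative, not from any particular tool:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde_relative, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test.

    baseline     -- control conversion rate, e.g. 0.20 for 20%
    mde_relative -- relative minimum detectable effect, e.g. 0.10 for +10%
    """
    p1 = baseline
    p2 = baseline * (1 + mde_relative)          # variant rate if the lift is real
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# The example above: 20% baseline, 10% relative MDE (20% -> 22%)
print(sample_size_per_variant(0.20, 0.10))
```

Running this gives roughly 6,500 visitors per variant, which shows why a small lift on a low-traffic site can take months to confirm.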

Why does sample size matter? Without enough data, your A/B test results won’t be statistically significant. This means you can’t confidently say whether version A or B is better—or if the difference is just random noise.

For low-traffic websites, this can be tricky. If your site only gets a few hundred visitors a week, reaching the required sample size could take months.

How to split traffic for A/B testing

Splitting traffic is simple in theory but requires careful execution, especially on low-traffic sites. Before we get into the nitty-gritty of how to do it, what exactly is it?

Traffic splitting is the process of dividing your website visitors into groups—usually 50/50—for A/B testing. One group (Group A) sees the original version, while the other (Group B) sees the variation. This ensures a fair comparison between the variations and prevents external factors like holidays or marketing campaigns from biasing the results.
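Under the hood, testing tools typically do this with deterministic hashing, so a returning visitor always lands in the same group. A minimal sketch of the idea (the experiment name and split parameter here are illustrative):

```python
import hashlib

def assign_variant(visitor_id, experiment="homepage_cta", split=0.5):
    """Deterministically bucket a visitor: the same ID always gets the same variant."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to a uniform value in [0, 1]
    return "A" if bucket < split else "B"

print(assign_variant("visitor-123"))
```

Because the assignment is a pure function of the visitor ID, no server-side state is needed to keep the experience consistent across visits.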

To split your traffic correctly:

  • Use an A/B testing tool
Tools like Optimizely or VWO can handle traffic splitting for you (Google Optimize, another popular option, was discontinued in 2023). These tools ensure visitors are evenly distributed and consistently see the same version during the test. You choose the split ratio, and the tool randomizes visitors into groups.
  • Stick to equal splits (50/50)
    For low-traffic websites, it’s tempting to give more traffic to the variation you think will perform better. Don’t do this. Uneven splits can skew the data and make your test results unreliable.
  • Avoid overlapping tests
    Running multiple A/B tests at the same time on the same audience can create data contamination. Stick to one test at a time, especially if your traffic is limited.
  • Exceptions to the 50/50 rule:

If testing a high-risk change (e.g., a pricing page redesign), start with a 90/10 split to minimize potential revenue loss.

Use multi-armed bandit testing to dynamically allocate more traffic to the winning variation.
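Multi-armed bandit allocation can be sketched with Thompson sampling: each variant's conversion rate gets a Beta posterior, and each incoming visitor goes to whichever variant wins a random draw from those posteriors. A toy illustration with hypothetical running totals:

```python
import random

def thompson_pick(stats):
    """Pick the variant with the highest sampled conversion rate.

    stats maps variant name -> (conversions, failures); Beta(1 + conv, 1 + fail)
    is the posterior over that variant's true conversion rate.
    """
    draws = {v: random.betavariate(1 + c, 1 + f) for v, (c, f) in stats.items()}
    return max(draws, key=draws.get)

# Hypothetical running totals: B is converting better so far
stats = {"A": (30, 970), "B": (55, 945)}
print(thompson_pick(stats))  # usually "B", but A still gets occasional traffic
```

The appeal for low-traffic sites is that a clearly losing variation stops wasting visitors, while the randomness keeps some exploration alive in case early results were a fluke.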


8 strategies for A/B testing with low website traffic

1. Test on high-traffic pages

Focus your efforts where they’ll have the most impact. Identify your top-performing pages and run tests there. Homepages, product pages, or landing pages with the most traffic are great candidates.

2. Optimize for micro-conversions

Instead of testing for big outcomes like revenue increases, start smaller. Test for micro-conversions—like clicks on a button, form submissions, or scroll depth.

Why? Micro-conversions happen more frequently, which means you can gather data faster. For example:

Test whether changing a CTA button color increases clicks.

See if tweaking a headline improves user engagement.

3. Limit variations

The more variations you test, the more traffic you’ll need to reach statistical significance. For small sample sizes, stick to two variations (A and B). Save multivariate tests for when your website grows.

4. Extend the test duration

Low traffic naturally means longer test durations. Let your tests run for at least 2-4 weeks, even if you’re impatient for results. Use a duration calculator to estimate how long your test should run based on your traffic and conversion rates.

5. Combine quantitative and qualitative data

Quantitative data (like conversion rates) is essential, but it’s not the whole picture. Use qualitative data to gain deeper insights:

Session recordings show how users interact with your site.

Surveys reveal why users behave the way they do.

Heatmaps identify where users click—or don’t click.

This approach is especially useful for low-traffic sites, where statistical significance may be harder to achieve.

6. Test big, bold changes

Small changes (like swapping a single word) may not show significant results with low traffic. Test bold, noticeable changes to increase the chances of seeing a measurable uplift.

7. Run sequential tests

If you can’t split traffic evenly due to low visitor numbers, try running sequential tests. Show version A to all visitors for 2 weeks, then version B for the next 2 weeks. Keep in mind that sequential tests are more exposed to time-based factors like seasonality or marketing campaigns, so compare similar periods and treat the results with extra caution.

8. Be patient

Low-traffic A/B testing requires more time and discipline. Don’t rush the process or stop a test too early.

To sum up, A/B testing on low-traffic websites isn’t impossible. It just requires a strategic approach, patience, and a willingness to test smarter, not harder.

Focus on high-traffic pages, test for micro-conversions, and use tools to calculate sample sizes and test durations. And remember: Even if a test isn’t statistically significant, qualitative insights can still provide valuable takeaways for improving your website.

Low traffic isn’t an excuse to skip testing. It’s an opportunity to dig deeper and get creative with your optimization efforts.
