Split test
A split test (also known as an A/B test) is a method of comparing two versions of a page, element or campaign to find out which one performs better. It is the most reliable way to optimize your online store.
What is a split test?
In a split test (also called A/B testing), you create two versions of something - for example, a product page, an ad or an email. Half of your visitors see version A and the other half see version B. By measuring which version brings more conversions, you can make data-driven decisions instead of guessing.
What can you split test?
- Headlines and product titles: Test different wording - e.g. "Ergonomic office chair" vs. "Office chair with lumbar support".
- Product images: Test lifestyle images vs. product images on a white background, or zoom vs. full view.
- Prices and offers: Test different pricing strategies (e.g. "499 kr incl. shipping" vs. "459 kr + 40 kr shipping").
- CTA buttons: Test color, text and placement of the "Buy now" button.
- Checkout flow: Test number of steps, form fields and payment options.
- Emails: Test subject lines, send time, content and design.
- Ads: Test ad creatives, texts and audiences in Google Ads or social media.
- Shipping options: Test free shipping on all orders vs. free shipping above a low order threshold.
How to run a split test
- Formulate a hypothesis: "I believe that [change X] will lead to [outcome Y] because [reason Z]."
- Create two variants: Version A (the control) and version B (the variant with the change).
- Distribute traffic: Send 50% of visitors to each version, randomly distributed (see the sketch after this list).
- Measure the result: Define a primary metric (e.g. conversion rate, average order value).
- Wait for significance: Let the test run until you have enough data for a reliable conclusion.
- Implement the winner: Roll out the winning variant to all visitors.
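The random distribution in step 3 is typically done with a deterministic hash rather than a per-request coin flip, so a returning visitor always sees the same variant. Here is a minimal Python sketch, assuming each visitor has some stable identifier; the visitor ID and test name below are purely illustrative:

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str) -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing the visitor ID together with the test name gives a
    stable 50/50 split: the same visitor always sees the same
    variant, and different tests split the audience independently.
    """
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Example: a visitor keeps the same variant on every visit.
print(assign_variant("visitor-12345", "cta-button-color"))
```

Because the assignment is a pure function of the ID, you can recompute it anywhere (server, email campaign, analytics) without storing which variant each visitor was given.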
Statistical significance
It's crucial to run the test long enough to achieve statistical significance - typically at least 95% confidence. This means that there is less than a 5% probability that the result is due to chance.
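With a conversion goal and two variants, a common way to check significance is a two-proportion z-test. Below is a minimal, self-contained sketch; the visitor and conversion counts are made up for illustration:

```python
from math import sqrt, erf

def significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    Returns the p-value; below 0.05 corresponds to the
    95% confidence level mentioned above.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Illustrative numbers: 10,000 visitors per variant,
# 200 conversions (2.0%) for A vs. 250 (2.5%) for B.
print(f"p-value: {significance(200, 10_000, 250, 10_000):.4f}")
```

With these illustrative numbers the p-value lands around 0.017, so the difference would count as significant at the 95% level.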
Factors that affect how long the test should run:
- Traffic volume: More traffic gives faster results.
- Conversion rate: Lower conversion rate requires more traffic.
- Size of the difference: Large differences require less data than small differences.
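To see how these factors interact, here is a rough estimate of the visitors needed per variant, a sketch using the standard normal-approximation sample size formula and assuming the conventional 95% confidence and 80% power:

```python
from math import ceil

def sample_size_per_variant(baseline_rate, expected_rate,
                            z_alpha=1.96, z_beta=0.84):
    """Rough visitors needed per variant (normal approximation).

    z_alpha=1.96 ~ 95% confidence, z_beta=0.84 ~ 80% power.
    """
    p1, p2 = baseline_rate, expected_rate
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# A big lift is cheap to detect, a small lift is expensive:
print(sample_size_per_variant(0.02, 0.03))   # 2.0% -> 3.0%
print(sample_size_per_variant(0.02, 0.022))  # 2.0% -> 2.2%
```

The first call needs roughly 4,000 visitors per variant, the second over 80,000, which is exactly why small differences on low-traffic shops can take months to verify.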
Common mistakes in split testing
- Stopping too early: Drawing conclusions on data that is not yet statistically significant. Early results can be reversed (see the simulation after this list).
- Too many changes: Test only one thing at a time, or you won't know what caused the difference (multivariate tests are the exception).
- Too little traffic: With very little traffic, it can take weeks or months to reach significance.
- Seasonality: Don't run tests over periods of abnormal traffic (e.g. Black Friday) unless you intend to.
- Looking at the wrong metric: A higher click-through rate is not necessarily better if the conversion rate drops.
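To see why early stopping misleads, here is a small A/A simulation: both variants convert at exactly the same rate, so any "winner" is a false positive. Peeking at the p-value after every batch of visitors and stopping at the first significant result finds false winners far more often than the nominal 5% (all numbers are illustrative):

```python
import random
from math import sqrt, erf

def p_value(c_a, n_a, c_b, n_b):
    """Same two-proportion z-test as in the sketch above."""
    p_pool = (c_a + c_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0  # no conversions yet, nothing to conclude
    z = (c_b / n_b - c_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def peeking_experiment(trials=500, total=5_000, step=250, rate=0.02):
    """A/A simulation: both variants convert at the SAME rate, so
    every 'significant' result is a false positive. We peek after
    every `step` visitors and stop at the first p < 0.05."""
    false_wins = 0
    for _ in range(trials):
        c_a = c_b = 0
        for n in range(step, total + 1, step):
            c_a += sum(random.random() < rate for _ in range(step))
            c_b += sum(random.random() < rate for _ in range(step))
            if p_value(c_a, n, c_b, n) < 0.05:
                false_wins += 1
                break
    return false_wins / trials

# Typically prints well above the 5% a single fixed-size test would give.
print(f"False winners with constant peeking: {peeking_experiment():.0%}")
```

Decide the sample size up front (for example with the estimate in the previous section) and only evaluate the test once that target is reached.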
Split testing tools
There are dedicated tools for split testing websites:
- VWO (Visual Website Optimizer): Popular tool with visual editor.
- Optimizely: Enterprise solution with advanced features.
- AB Tasty: Easy-to-use tool with personalization.
- Convert: Privacy-friendly alternative to the larger tools.
Split testing in Shoporama
Shoporama's newsletter system has built-in A/B testing functionality. You can test two variants of a newsletter - for example, different subject lines, content or design - and send the winning variant to the rest of the list. This lets you optimize your newsletters in a data-driven way without external tools.
We know online marketing at Shoporama
We've been working with online marketing ourselves for decades. As the only shop system in Denmark, we have spoken multiple times at conferences such as Marketingcamp, SEOday, Shopcamp, Digital Marketing, E-commerce Manager, Ecommerce Day, Web Analytics Wednesday and many more.