What is a split test?
A split test (also called A/B testing) is a simple way to find out what works best in your shop. You create two or more variants of the same page, for example your front page, and let Shoporama show each visitor one of the variants at random. After a while, you can see which variant generated the most sales, the biggest baskets, or the highest conversion rate. Instead of guessing what your customers like best, you let them vote with their behavior.
The beauty of a split test is that it's fair. Both variants are seen by visitors at the same time and under the same conditions, so neither the weather, the season, nor a running campaign skews the result. When one variant wins, it's because it actually performs better.
A concrete example: a green "Add to cart" button that converts 8% better than the red one sounds small, but over thousands of visits it's real money.
Two types of split tests in Shoporama
You can test two different types of changes against each other:
- Page layouts. Two or more variants of the same page in the Page Designer. For example, test two different front page layouts, two versions of your category page, or two product page layouts. You can change images, sections, text, order - everything in the Page Designer.
- Stylesheets. Two or more CSS variants against each other. Test button colors, font sizes, layout adjustments or entire design moods. If you know how to write a little CSS (or have a helper who does), this is where you'll find the surprise winners. Small visual changes can have a big impact on conversion.
Both types work the same way technically and use the same statistics. You decide which type of test makes sense for what you want to test.
Bonus: time management without testing
The variant system behind the split test can also be used to schedule pages on specific dates, e.g. a Black Friday front page that turns itself on and off.
Time management - also without split testing
Classic example: Black Friday. Instead of remembering to log in on November 28 at 00:00 and change your front page, you create a Black Friday front page in advance and tell the system: "Show this from November 28 to December 1". Shoporama takes care of the rest.
It works for all the major seasons: summer sales, Christmas, Father's Day, Singles Day, Valentine's Day. You can have multiple timed variants at the same time, e.g. a regular front page all year round, a summer version June-August, and a Black Friday version for that week. The system automatically switches between them on the right dates.
If you only want the variant to appear during a specific period, leave the "Online" checkbox unchecked and fill in only the dates. It will then activate automatically during the interval and disappear again once the end date has passed.
How to get started
First, you need two variants to test against each other:
- Go to Page Designer (or Stylesheet if you want to test CSS), find the page you want to test and click the copy icon (the two sheets of paper) next to the page name. You'll get a copy that starts offline so you can safely edit it.
- Customize the copy. Change the image, correct text, rearrange sections. This is what you want your customers to see as an alternative to the original.
- Put the copy online. You now have two online versions of the same page. If you had simply put both online without testing, the one with the highest priority would win and be shown to everyone. This is where the split test comes in.
- Go to Split test, create a new test. Choose what you want to test (the page or stylesheets), give the test a name ("Home Hero A vs B"), add the two variants, choose the traffic split (start with 50/50) and click Start. Now your visitors are automatically distributed and each user is assigned one variant that follows them on repeat visits.
How do we find the winner?
Behind the scenes, Shoporama uses a modern statistical method (Bayesian inference) to calculate how likely each variant is to be the best. You don't need to understand the math, but unlike classic p-values, the Bayesian approach gives you answers you can actually use: "variant B has an 87% probability of being the best". When that probability reaches 95% and there is enough data, we declare that variant the winner.
We do NOT pick a winner too quickly. Even if variant B looks better after two days, it could be chance or a short campaign that pumps up sales. To avoid false positive winners we require:
- At least 5,000 impressions per variant
- At least 100 total orders across variants
- At least 14 days duration (so we cover a full week cycle twice)
- 95% Bayesian probability that the winner is the best
If the criteria are not met, we show you which variant is ahead right now, but clearly mark that the test is not yet settled, so you know whether to wait or whether the result is already reliable.
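Shoporama does not publish its exact model, but a common way to compute a Bayesian win probability like the one described is Monte Carlo sampling from Beta posteriors over each variant's conversion rate. This is a minimal sketch of that general technique, not Shoporama's actual code, and the numbers are purely illustrative:

```python
import random

def win_probability(variants, samples=20000):
    """Estimate each variant's probability of having the highest true
    conversion rate, using Beta(orders + 1, views - orders + 1) posteriors."""
    wins = {name: 0 for name in variants}
    for _ in range(samples):
        # Draw one plausible conversion rate per variant from its posterior.
        draws = {
            name: random.betavariate(orders + 1, views - orders + 1)
            for name, (views, orders) in variants.items()
        }
        # The variant with the highest draw "wins" this simulated world.
        wins[max(draws, key=draws.get)] += 1
    return {name: wins[name] / samples for name in variants}

# Illustrative figures: A converts at 2.0%, B at 2.5%, 6,000 views each.
probs = win_probability({"A": (6000, 120), "B": (6000, 150)})
print(probs)  # B wins with roughly 97% probability on these numbers
```

With numbers like these, the method would report B as almost certainly best, which is exactly the kind of "variant B has an X% probability of being best" statement the article describes.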
Automation - set it and forget it
You don't have to sit and watch your test every day. Two independent settings allow the system to handle it itself:
- Stop automatically when there is a clear winner. As soon as the criteria are met and a variant has 95%+ probability, the test stops automatically. You will be notified in admin.
- Use the winner as the new default when the test stops. The winner is put online with high priority and the other variants are put offline. You still have them all in the system, so you can always roll back manually from the variant list. This works both when auto-stopping AND when you manually stop the test yourself.
With both options enabled, you can create a test, click Start, and forget about it. Once it ends, your shop is already upgraded to the winning version and you have a notice in admin telling you what happened.
The statistics you get
For each variant, Shoporama shows:
- Views - number of page views of the variant (bots excluded)
- Orders - number of completed orders from visitors who viewed the variant
- Revenue - total revenue per variant including VAT and shipping
- Conversion rate - orders divided by views, shown as a percentage
- Revenue per visitor (RPV) - the most important KPI in e-commerce because it captures both conversion and basket size in one number
- Win probability - Bayesian percentage of the variant being the best, shown as a colored bar
The leading variant is highlighted with a green background, and when a winner is declared with 95%+ probability, it receives a trophy badge. The numbers remain stable between updates, so you can safely reload the page without seeing the percentages jump.
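The conversion rate and RPV above are simple ratios of the raw counts in the list. A quick sketch with purely illustrative figures (the function and the numbers are not from Shoporama):

```python
def kpis(views, orders, revenue):
    """Conversion rate (as a percentage) and revenue per visitor
    from the raw per-variant counts."""
    conversion_rate = orders / views * 100  # orders divided by views
    rpv = revenue / views                   # revenue per visitor (RPV)
    return conversion_rate, rpv

# Hypothetical variant: 5,000 views, 110 orders, 82,500 in revenue.
conv, rpv = kpis(views=5000, orders=110, revenue=82500.0)
print(f"{conv:.2f}% conversion, {rpv:.2f} per visitor")
# 2.20% conversion, 16.50 per visitor
```

Note how RPV folds both numbers into one: a variant with a slightly lower conversion rate can still win on RPV if its baskets are bigger.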
What happens behind the scenes?
When a visitor hits your shop for the first time during a running test, we assign them a variant based on the traffic distribution. The assignment is stored in a small cookie called pb_v so they see the same variant on repeat visits. This is crucial: if a customer saw variant A yesterday and variant B today, the comparison would no longer be valid.
When the customer adds an item to the cart, we save the variant assignment on the cart itself. When the order is completed, it is tagged with which variant the customer saw. This means we can measure real revenue per variant, not just estimated clicks. Every night a script runs that updates the statistics and auto-stops tests that have reached significance.
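The sticky, weighted assignment described above can be sketched in a few lines. The function below is hypothetical (Shoporama's real implementation is server-side and not public), but it shows the core idea: honor an existing cookie value, otherwise draw a variant according to the traffic weights:

```python
import random

def assign_variant(cookie_value, variants):
    """Return the visitor's variant: reuse the cookie value if present,
    otherwise pick one at random according to the traffic weights."""
    if cookie_value in variants:
        return cookie_value  # sticky: same variant on repeat visits
    names = list(variants)
    weights = [variants[name] for name in names]
    choice = random.choices(names, weights=weights, k=1)[0]
    # In the real system, this choice would be written to the pb_v cookie.
    return choice

weights = {"A": 50, "B": 50}          # a 50/50 traffic split
first = assign_variant(None, weights)  # new visitor: weighted random pick
repeat = assign_variant(first, weights)  # returning visitor: same variant
print(first, repeat)
```

A 90/10 split, as mentioned in the tips below, is just different weights: `{"A": 90, "B": 10}`.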
Cookie and GDPR
You don't have to do anything for split testing to comply with GDPR. It is a functional cookie that does not share data with third parties and is not used for marketing.
The cookie only contains a random visitor ID and a mapping to which variant the user is viewing. It cannot be read by JavaScript on the page, is only sent on first-party navigation, and expires after 30 days. If visitors reject cookies via your cookie banner, the cookie is not set. Instead, we use a session-based fallback that follows the user for the rest of the session without writing to disk.
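For the technically curious, a cookie with the properties just described (not readable by JavaScript, sent only on first-party navigation, 30-day lifetime) would look roughly like this HTTP response header. The attribute values are an illustration derived from that description, not Shoporama's actual header, and the cookie value is a placeholder:

```http
Set-Cookie: pb_v=<visitor-id>:<variant-id>; Max-Age=2592000; Path=/; Secure; HttpOnly; SameSite=Lax
```

Here HttpOnly is what keeps page scripts from reading the cookie, SameSite=Lax restricts it to first-party navigation, and Max-Age=2592000 is 30 days expressed in seconds.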
You should add the cookie to your cookie policy as a functional/preference cookie. We also display a discreet note in the Split-Test administration reminding you of this. The Danish Data Protection Agency considers sticky A/B test cookies functional cookies that do not require explicit marketing consent.
Bots and search engines
Googlebot, Bingbot and other crawlers always get the same variant (the one with the highest traffic weight in the test). They do not get a cookie and are not included in the statistics. This is neither cloaking nor an SEO issue, because crawlers see a variant that real visitors also see - we never serve content exclusively to bots. The canonical URL is stable, so Google can rank the page without interference.
When the test is complete
If you have enabled "Use winner as new default", it happens automatically: the winner is activated, the losers are taken offline, you get a notice. If not, you can click into the test yourself to see which variant won and manually update your shop to that version.
The statistics are preserved no matter what. You can always go back and view historical tests and their results. Over time, this gives you a library of knowledge about what works for your store. You'll plan the next test smarter because you understand your customers better.
Tips for a good test
- Test ONE thing at a time. Change only the button color, or only the header, or only the image. If you change five things at once and B wins, you won't know what made the difference.
- Choose changes that can make a real difference. A change from gray to dark gray button rarely moves anything. Test major visual or textual changes first and geek out on the details later when the big stuff is optimized.
- Let the test run for at least 14 days. There is typically a difference between weekdays and weekends, and a full week cycle twice gives a more balanced picture.
- Don't stop a test early because it "looks good". This is the classic trap that leads to false positives. If you have significance stopping enabled, the system will automatically wait until there is enough data.
- Have a hypothesis. Write down what you expect BEFORE the test starts ("I think green button converts 5% better because it stands out more"). Even if you're wrong, you'll learn something about your customers.
- 50/50 is fastest. If you're cautious and only want to expose a new variant to 10% of traffic, run 90/10 - but accept that it will take longer before there's enough data for a conclusion.
Concrete test ideas to start with
- For fashion/lifestyle: Test the "New for Spring" headline against "New items in store" on your front page, or test a model image against a flat-lay image.
- For homeware: Test "Best sellers" section against "Latest products" as the first block.
- For specialty items: Test whether a short product description or a long technical spec converts best.
- Universal: Test your primary button color. Classic CRO tests show that small color changes can shift conversion by 3-10%.
Combine with time management and AI
Time management and split testing can be combined. For example, you can create a Black Friday front page, put it online for that week, and at the same time split-test two variants of it (red hero vs green hero) to find out what works best for Black Friday traffic. Seasonal data will be worth its weight in gold next year.
If you have AI control enabled, you can also create timed pages directly by chatting with your AI client: "Create a Black Friday landing page that automatically goes online on November 28 and goes offline on December 1" or "Take my summer landing page offline now". The split-test orchestration itself (create test, select variants, start/stop) is deliberately not automated via AI. These are choices you have to make yourself so that no one accidentally starts a test.
Frequently asked questions
Do I need to be technical to use split testing?
No, you don't. For page layouts (front pages, category pages, etc.), click through the Page Designer as always and copy with one click. For stylesheets, you or a helper will need to be able to write some CSS, but the rest of the split testing flow is the same: select variants, start, wait for the result.
How long does it take before I have a winner?
It depends on your traffic. A shop with 5,000 visits per day can be finished in the minimum 14 days. A smaller shop may need 4-8 weeks to gather enough data. If you don't reach 5,000 views per variant plus 100 total orders within a reasonable timeframe, the significance stop will never trigger, and you'll have to assess manually based on the numbers you have. For very small shops, split testing can still be a good tool for catching obvious differences (e.g. a variant that converts twice as well).
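You can estimate this with simple arithmetic. The sketch below is illustrative and only covers the impression threshold and the 14-day minimum from the criteria described earlier; the 100-order requirement depends on your conversion rate and is left out:

```python
import math

def estimated_days(daily_views, variant_share,
                   min_impressions=5000, min_days=14):
    """Rough estimate of how many days a test must run before the
    slowest variant reaches the impression threshold."""
    views_per_day = daily_views * variant_share
    days_for_data = math.ceil(min_impressions / views_per_day)
    # The 14-day minimum duration always applies, even with heavy traffic.
    return max(days_for_data, min_days)

print(estimated_days(5000, 0.5))  # 14 - a big shop hits the minimum duration
print(estimated_days(300, 0.5))   # 34 - a smaller shop needs about 5 weeks
```

On a 90/10 split, plug in the smaller share (0.1) - the test only finishes when the slowest variant has enough data.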
What happens if I delete a variant in the middle of a test?
If the test falls below two variants due to deletion, it will stop automatically. Collected data is retained as history on tagged orders, but the test cannot be continued. We do not select a "winner" in this situation because a test with only one variant does not make sense.
Does it affect the speed of my eCommerce store?
Hardly at all. The variant selection happens on the server before the HTML is rendered, so there is no flicker or delay in the browser. The only extra cost is a quick file check and one database lookup per visit during an active test. When no tests are running, the overhead is minimal. Note, however, that the cache is disabled while a test is running, and also for users who have participated in a test (up to 30 days after they were assigned a variant), so they are always served their assigned version without risk of cache contamination.
How do I know if I should trust the result?
The win probability is your guideline. Below 70% - the test is too close, wait for more data. 70-94% - there is a clear leader but not yet statistically certain. 95%+ with minimum requirements met - you can safely choose the winner by default. We show colored bars and badges that make it visually easy to see how far you are.
Can I run multiple split tests at the same time?
Yes, as long as they are not competing for the same target. You can have a front page split test AND a stylesheet split test at the same time - they test different things. But you can't have two different front page split tests running in parallel, because they would pollute each other's data. The system enforces this rule automatically and will notify you if you try.
What happens to the variants that lose?
They don't disappear. If you have auto-apply winner enabled, they are taken offline. But they are still there in the system with their content intact. You can always manually put them back online or use them as a starting point for a new variant. The statistics from the completed test are also preserved.
What if more visitors end up on one variant than the other?
We check daily for a skewed traffic distribution. If the traffic is clearly distributed differently than your weights (e.g. 70/30 instead of 50/50), you will get a warning in admin. This is typically due to a cache misconfiguration or an unusual bot pattern, and the result should be treated with caution until the issue is resolved.
When is Revenue per Visitor (RPV) displayed?
RPV is calculated as soon as there are orders for a variant, but we require a minimum of 30 orders per variant before we can give a statistically reliable RPV-based answer. Until then, you can still see the raw numbers and use the conversion rate as the primary KPI.
Does split testing send data to Google Analytics?
No, it doesn't. All statistics are kept internally in Shoporama. You don't need to configure anything external to get the numbers.
Does split testing cost extra?
No, it doesn't. It's included in your Shoporama solution at no extra charge, no limit on the number of tests, no per-visitor charge. You can run as many tests as you want for as long as you want.
Get started
Find Design → Split testing in your Shoporama admin and create your first test. If you're not sure what to test, start with your front page. It's the page most of your customers see, and even small improvements here will pay big dividends. For example, test two different hero images or two different headlines against each other.
If you have any questions, feel free to contact us at support@shoporama.dk.