This is why you should NOT A/B-test in December

Elin Linde

Let us save you from A/B-testing in December! A/B-testing is one of the most powerful tools to optimize your website, but there are several things to consider in order to succeed, or at least not to fail completely.

Dear Optimizer,

How lucky you are to be reading this; it will make you a hero at the office. You will, in fact, save your business from lost time, wrong decisions and lost customers. Great, isn’t it?

We are using December as a clear example of divergent user behavior, especially for e-commerce, but the same mindset applies to everyone.

Note: You can A/B-test in December, but read this blog post first.

The reason why it’s a bad idea to test in December:

The traffic IS different

Let’s look at two traffic reports, in this case the Audience Overview and E-Commerce Overview. It doesn’t matter whose Google Analytics account we peeked into; big Swedish e-retailers should be reference enough.

Look at the traffic in December:
[Screenshot: Google Analytics traffic report with a December spike]

Okay, a stronger case: look at that hump:

[Screenshot: Google Analytics traffic report with a December hump]

Another one, check this out:

[Screenshot: Google Analytics traffic report with a December peak]

Side note: if you use GA in Swedish, switch to English right now. When you need to google your problems, you will get (a hundred times) more help in English.

Isn’t more traffic a good thing?

No, not under these circumstances.
Let us explain.

If you run an e-commerce site, you most likely have a traffic and conversion peak in December that is unlike any other time of the year. And you love this peak.

BOOM how much you sell.
You think: “I want this for the rest of the year too – let’s learn from this period, test the users and see what makes them convert!”

No. No, no, no.
For this to apply to the rest of the year, the following should also be true:

Your visitors represent your target audience
Your visitors’ behavior is representative of the rest of the year
No external factors are influencing your visitors’ behavior

More traffic will most likely give you statistically significant test results, but those results are only valid for THAT period and THAT specific user behavior.
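To see why the extra traffic makes results look so convincing, here is a minimal sketch (not from the original post; all numbers are made up) of a two-proportion z-test: the same 0.2 percentage-point lift that is inconclusive on normal traffic volumes comes out as significant on December-sized traffic, yet it only tells you about December visitors.

```python
# Minimal sketch with made-up numbers: the same small lift that is inconclusive
# on normal traffic becomes "significant" on December-sized traffic.
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, erfc(abs(z) / sqrt(2))  # (z-score, two-sided p-value)

# A normal month (hypothetical): 10,000 visitors per variant, 2.0 % vs 2.2 % conversion.
print(two_proportion_z_test(200, 10_000, 220, 10_000))      # p is roughly 0.32, inconclusive

# December (hypothetical): five times the traffic, exactly the same conversion rates.
print(two_proportion_z_test(1_000, 50_000, 1_100, 50_000))  # p is roughly 0.03, "significant"
```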

An example:
Your company runs 3 for 2 (which, by the way, converts better than a flat 33 % discount) on selected products, and to give it a push the company also runs a big TV campaign. At the same time, your IT department has just implemented a new payment method. And then you also find out that your colleague in marketing is running a remarketing campaign on Facebook.

You A/B-test button color (which, by the way, is a silly but simple test; it is really about contrast) and get a significant result: the orange one wins! Yay! Then the TV campaign ends, your AdWords budget runs out midway through the test, and there you are with an orange button that no longer converts as well. It turns out that the visitors’ intentions and behavior were totally different from normal.

The result is true, but only under these specific conditions.

What does this mean for your optimization?

Your visitors behave differently during the Christmas rush. Tests in December will give you insight into the Christmas campaigns, but do you really want to generalize your findings and implement them after Christmas? Probably not.

So it’s very likely that you need to rerun your test in January anyway.

Or if you sell gym memberships online: same story, but the peak will be in January.
Trips abroad, that rainy week in the fall.
And so on and so forth. You get what we’re saying.

Lesson learned until next Christmas?

Now you’re thinking: “Ah, the things I discovered this Christmas will be useful next year”. Not impossible at all, but not so fast: your business and industry may look quite different when that time comes.

There is a reason why an A/B test shouldn’t run for a longer period

Online behavior changes over time and is influenced by external factors. For this reason, optimization and A/B-testing are a continuous process, not single events. In the same way, the version that wins today may not be winning in 1, 2 or 3 years.

A real case

We increased sales by 6% in a project with Swedoffice, the result of several tests, persistence and careful analysis.

The first test started in November; we got a valid result but no uplift. We adjusted the hypothesis based on an important insight, which led to a really elaborate hypothesis.

The second test started in December and finished in January. The traffic had increased, but the result was not significant and showed no difference: sales stood still in both the original and the test version.

In our analysis we took into account that the test ran during the weeks around Christmas and therefore captured abnormal behavior.

Eureka! It was B2B e-commerce. And it was Christmas time.
When we removed the Christmas weeks’ traffic from the test and only analysed the traffic from before and after Christmas, THEN we could identify a behavioral change.
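As a rough illustration of that kind of analysis (the column names and dates below are hypothetical, not Swedoffice’s actual data), the idea is simply to drop the Christmas weeks before comparing the variants:

```python
# Hypothetical session export with columns: date, variant, converted.
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2014-12-10", "2014-12-24", "2015-01-08", "2015-01-09"]),
    "variant": ["A", "B", "A", "B"],
    "converted": [0, 1, 1, 0],
})

# Exclude the abnormal Christmas weeks (the exact cut-off is a judgment call).
christmas = df["date"].between("2014-12-20", "2015-01-02")
normal_traffic = df[~christmas]

# Compare the variants on "normal" traffic only.
print(normal_traffic.groupby("variant")["converted"].agg(["count", "mean"]))
```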

Your visitors represent your target audience
Your visitors’ behavior is representative of the rest of the year
No external factors are influencing your visitors’ behavior

Ok, you get it. But what should you do instead?

Read this blog post: Mobil e-handel: Julens just f*ign do it.

And if you are testing in December, be sure to have your test plan ready; take a look at this wonderful spreadsheet! “Regular” tests run until December 1, then the “Christmas tests” take over.

[Screenshot: A/B test plan spreadsheet]
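If you would rather keep that plan in code than in a spreadsheet, a rough sketch could look like this (test names and dates are invented for illustration):

```python
# Hypothetical test plan: "regular" tests end before December 1,
# then dedicated "Christmas tests" cover the abnormal weeks.
from datetime import date

test_plan = [
    {"name": "Checkout copy (regular)",             "start": date(2015, 11, 2), "end": date(2015, 11, 30)},
    {"name": "Delivery-promise banner (Christmas)", "start": date(2015, 12, 1), "end": date(2015, 12, 23)},
    {"name": "Checkout copy rerun (regular)",       "start": date(2016, 1, 7),  "end": date(2016, 1, 31)},
]

for test in test_plan:
    print(f'{test["name"]:38} {test["start"]} to {test["end"]}')
```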

Delivery before the 24th?

Last year we did a study on Swedish e-retailers and how well they optimized for the days before Christmas, especially for the critical question of “delivery before Christmas”.

Optimized communication: How you respond to concerns about delivery before Christmas
[Image: Christmas package delivery] Photo credit: Postnord – E-barometern Q3 2014

Get better at A/B-testing

Study to become a Conversion Manager and learn CRO.

 

Read also

Conversionista is open for business in The Netherlands. Read more.