A/B testing

If you send e-newsletters regularly, you probably find yourself wondering how you can increase the number of people reading them. Here's how: split testing.

What is split testing?

Split testing involves sending variants of your e-newsletters to some of your mailing list, monitoring the performance of each, and sending the 'best' version to the remainder of your list. It generally involves four main steps:

  1. You create two or more versions of your e-newsletter.
  2. You send these different versions to a percentage of email addresses on your mailing list.
  3. You compare how each version of your e-newsletter performs (in terms of either opens or click-throughs).
  4. You roll out the best performing version to the remaining email addresses on your list.
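
To make the mechanics concrete, here is a minimal Python sketch of step 2 – carving a mailing list into equal random samples (one per version) plus a remainder that receives the winner later. The function name, the 20% test fraction and the example addresses are illustrative assumptions, not features of any particular e-marketing tool.

```python
import random

def split_for_test(addresses, n_variants=2, test_fraction=0.2, seed=42):
    """Split a mailing list into equal random test samples (one per
    version of the e-newsletter) plus a remainder that will receive
    the winning version once the test is over."""
    shuffled = addresses[:]                    # don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)      # randomise to avoid ordering bias
    test_size = int(len(shuffled) * test_fraction)
    per_variant = test_size // n_variants
    samples = [shuffled[i * per_variant:(i + 1) * per_variant]
               for i in range(n_variants)]
    remainder = shuffled[n_variants * per_variant:]
    return samples, remainder

# Example: test versions A and B on 20% of a 1,000-address list
addresses = ["subscriber%d@example.com" % i for i in range(1000)]
(sample_a, sample_b), remainder = split_for_test(addresses)
print(len(sample_a), len(sample_b), len(remainder))  # 100 100 800
```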

A/B testing and multivariate testing

Strictly speaking, there are two types of split testing: ‘A/B testing’ and ‘multivariate testing’. A/B testing pits just two versions of an e-newsletter against each other; multivariate testing (as the name suggests) involves several.

What sort of things can I test?

There are a variety of things you can test:

  1. Subject header – the title of the email that recipients see in their inbox (does including the recipient’s name in it help? Is a longer or shorter subject header better?)
  2. Sender – the person who the email is coming from (open rates may vary, for example, depending on whether you send your email using a company name or an individual’s)
  3. Content – different text or images in the body of your email may elicit different responses to your message, and consequently influence the number of click-throughs.
  4. Time of day / week – you can test different send times to see which generate the most opens and click-throughs.

Whichever variable you test, you will need to decide whether to pick the winning e-newsletter based on open rate or click-through rate. Open rates are generally used to determine the winner of subject header, sender and timing tests; click-through rates tend to be the measure of success when establishing which sort of content works best in an e-newsletter.

If you want to be clever about things, you can run sequential tests – for example, carry out a subject header test, pick a winner, and then run a content-based test using three emails with that subject header but different copy in them. The more complex your tests become, however, the more time-consuming it all gets – you may need to start segmenting lists, spend a lot of time on copywriting and so on.
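
To illustrate the open-rate-versus-click-rate decision, here's a short hypothetical sketch of picking a winner by either metric; the results structure, the figures and the function name are made-up assumptions (in practice your e-marketing tool does this for you, as described below).

```python
def pick_winner(results, metric="open_rate"):
    """Return the variant with the best rate.  `results` maps a variant
    name to raw counts; `metric` is "open_rate" (for subject header,
    sender and timing tests) or "click_rate" (for content tests)."""
    def rate(counts):
        numerator = counts["opens"] if metric == "open_rate" else counts["clicks"]
        return numerator / counts["sent"]
    return max(results, key=lambda variant: rate(results[variant]))

results = {
    "A": {"sent": 100, "opens": 32, "clicks": 7},
    "B": {"sent": 100, "opens": 41, "clicks": 5},
}
print(pick_winner(results, metric="open_rate"))   # B – higher open rate
print(pick_winner(results, metric="click_rate"))  # A – higher click-through rate
```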

How do I carry out a split test?

Most popular e-marketing solutions – such as Getresponse, Campaign Monitor, Aweber and Mailchimp – come with split testing functionality built in. These tools let you create different versions of your e-newsletter, choose sample sizes and specify whether success should be measured by open rate or click-through rate; they then handle the rest of the test, automatically sending the best performing version to the remaining email addresses on your list.

Split testing and statistical significance

The key thing worth remembering about split tests is that the results have to be statistically significant – otherwise you can’t have confidence in using them.

This means

  • using a mailing list that contains quite a lot of records (Aweber suggest only split testing when you are dealing with a list containing more than 100 email addresses)
  • testing with sample sizes that are large enough to deliver meaningful results – see the sketch below
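
To show what a significance check actually involves, here is a sketch of a two-proportion z-test in Python (using the normal approximation, with made-up figures). It also shows why an apparently decisive winner can be a mirage: a 41% open rate beating 32% on samples of 100 is not significant at the usual 5% level.

```python
from math import sqrt, erf

def is_significant(opens_a, sent_a, opens_b, sent_b, alpha=0.05):
    """Two-proportion z-test (normal approximation): is the difference
    between two open rates statistically significant at level alpha?"""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)      # combined open rate
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = abs(p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))      # two-sided p-value
    return p_value, p_value < alpha

# 41 opens out of 100 vs 32 out of 100 looks decisive, but...
p_value, significant = is_significant(32, 100, 41, 100)
print(round(p_value, 3), significant)  # 0.186 False – could easily be chance
```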

The maths of split testing is surprisingly complicated, and it is quite easy to run split tests that seemingly produce winners but aren’t actually statistically significant. Working out correct sample sizes for simple A/B tests is relatively straightforward – Campaign Monitor have a good guide to A/B sample sizes – but working out the best approach to samples for multivariate tests is trickier; for a primer on the latter, Lucidview’s article on split testing samples is worth a read. As a rule of thumb, though, using larger percentages of your data in tests and running longer tests will deliver more reliable results.
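
For the simple A/B case, the standard normal-approximation formula gives a rough minimum sample size per version. The sketch below is an illustration under assumed figures (a 20% baseline open rate and a 5-point lift worth detecting), not a substitute for the guides mentioned above.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.8):
    """Approximate sample size per variant for a two-proportion A/B
    test (normal approximation).  `baseline` is the expected open rate
    of version A; `lift` is the smallest improvement worth detecting."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    p1, p2 = baseline, baseline + lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 5-point lift on a 20% open rate needs ~1,091 per variant
print(sample_size_per_variant(baseline=0.20, lift=0.05))
```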

Which tool is best for split testing?

When reviewing the most popular e-marketing apps, we’ve found Getresponse to have the best split testing functionality (it lets you test more variants of your e-newsletter against each other than its key competitors); Mailchimp and Aweber are very good too. Campaign Monitor’s split testing functionality is fairly basic, in that only two versions of your e-newsletter can be tested against each other, and Mad Mimi doesn’t currently offer split testing at all.

