More than 50 years ago, David Ogilvy, the father of advertising, said: “Never stop testing, and your advertising will never stop improving.” It was true then, and it’s still true today, probably more than ever.

Testing different variations of an ad or a marketing message was always a part of the process.

Fifty years ago, marketers used focus groups and then analysed the responses. The testing was done behind closed doors, and after the campaign launched, little could be done.

Today we have the technology and the tools to test campaigns, marketing messages and other elements in real-time. We’re collecting immediate feedback from the actual users, the website visitors.

We now have the opportunity to test anything and optimise performance and conversions accordingly.

Most of the time, we refer to this process as A/B testing.

How does A/B testing work?

Let’s assume that we want to test the copy on a call-to-action button. The current copy is “Start Your Trial”, and we call it the control version.

Step 1 – Make a hypothesis

Making a hypothesis requires that you think of what to test and how you’re going to evaluate the results.

For example, if I change the call-to-action copy, how will it affect the number of visitors who sign up for a trial?

This is a valid hypothesis because it defines what we’re testing (the call-to-action copy) and how we’ll evaluate the results (the number of signups each variation generates).

Step 2 – Create the variations

The next step is to create the variations that you’re going to test. For the example we’re using, the variations could be:

  • Control version: Start Your Trial
  • Variation #1: Start Your Free Trial
  • Variation #2: Start Your Free Trial Now
  • Variation #3: Start My Free Trial

Step 3 – Run the experiment

Time to put your hypothesis to the test. You want to randomly serve each of the 4 variations to approximately 25% of your visitors and track the results.
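To make the mechanics concrete, here’s a minimal sketch in Python of how a testing tool might split traffic evenly across the four variations. The variation list comes from Step 2, but the visitor-ID hashing and the assign_variation helper are assumptions made for illustration, not any particular tool’s API.

    import hashlib

    # The four button copies from Step 2 (the control plus three variations).
    VARIATIONS = [
        "Start Your Trial",           # control
        "Start Your Free Trial",      # variation #1
        "Start Your Free Trial Now",  # variation #2
        "Start My Free Trial",        # variation #3
    ]

    def assign_variation(visitor_id: str) -> str:
        """Assign a visitor to one of the variations (~25% of traffic each)."""
        # Hashing the visitor ID (e.g. a cookie value) keeps the split random
        # across visitors but stable for the same visitor on repeat visits.
        digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
        bucket = int(digest, 16) % len(VARIATIONS)
        return VARIATIONS[bucket]

    # Serve the assigned copy, then record a conversion whenever that visitor signs up.
    button_copy = assign_variation("visitor-cookie-123")

Hashing by visitor ID rather than drawing a fresh random number on every page view means a returning visitor always sees the same variation, which keeps the signup counts for each variation clean.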

But before you start the experiment, you need to know how many visitors it will take to make the results statistically significant. In simpler words, you need to know when the results stop being random.

Luckily, you don’t need a degree in statistics for that. Here’s an A/B test duration calculator that you can use to determine the number of visitors and the number of days you should run the experiment for.
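If you’d like to see the arithmetic such a calculator is built on, here’s a rough Python sketch of the standard sample-size formula for comparing two proportions. The function name, the 5% baseline signup rate and the 1-percentage-point lift are made-up example values, and real calculators may apply slightly different corrections.

    from math import ceil
    from statistics import NormalDist

    def visitors_per_variation(baseline_rate: float, minimum_detectable_effect: float,
                               alpha: float = 0.05, power: float = 0.80) -> int:
        """Rough visitors needed per variation to detect an absolute lift in conversion rate."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
        z_power = NormalDist().inv_cdf(power)          # chance of detecting a real lift
        p1 = baseline_rate
        p2 = baseline_rate + minimum_detectable_effect
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return ceil((z_alpha + z_power) ** 2 * variance / minimum_detectable_effect ** 2)

    # Example: a 5% baseline signup rate and a hoped-for lift of 1 percentage point
    # comes out to roughly 8,200 visitors per variation, so over 32,000 for a four-way test.
    print(visitors_per_variation(0.05, 0.01))

Divide that total by your daily traffic and you get a rough idea of how many days the experiment needs to run.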

Step 4 – Pick the winner and start a new experiment

Once you’ve reached statistical significance, you’ll know which variation performed better than the rest. If you used an A/B testing tool, you’ll see something like the image below.

This graph is from an email campaign that tested two variations of the subject line, sent to 20% of the recipients on our mailing list. Variation B had a 25.2% open rate, compared to just 11.4% for variation A. After the test, we sent the email with variation B as the subject line to the remaining 80% of our subscribers.

[Image: email marketing A/B testing line graph]
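For the curious, here’s the kind of significance check a tool runs behind a result like this, sketched in Python as a two-proportion z-test. The recipient counts below are hypothetical, chosen only to match the open rates shown, and your tool’s exact method may differ.

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(successes_a: int, total_a: int,
                              successes_b: int, total_b: int):
        """Two-sided z-test for the difference between two rates (opens, signups, ...)."""
        p_a = successes_a / total_a
        p_b = successes_b / total_b
        pooled = (successes_a + successes_b) / (total_a + total_b)
        standard_error = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
        z = (p_b - p_a) / standard_error
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value

    # Hypothetical counts matching the 11.4% and 25.2% open rates above.
    z, p_value = two_proportion_z_test(57, 500, 126, 500)
    print(f"z = {z:.2f}, p = {p_value:.6f}")  # p is far below 0.05, so B's lift is significant

Here the gap between the two subject lines is large enough that the result is clear-cut; with smaller differences, this check is exactly why you wait until you’ve reached the sample size calculated in Step 3.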

It’s time to make a new hypothesis and plan your next experiment.

What should I be testing?

The short answer is everything. You can test every step of your marketing campaign, from the banner ads and the email subject line to the title on your landing page and the colour of your call-to-action button. You should probably start, though, with the elements that are likely to have the greatest impact on user behaviour.

There are so many ideas on what to test that they deserve a dedicated post. Use the form below to subscribe and I’ll make sure you get it as soon as it’s published.

 

