Move Beyond the Subject Line: A/B Testing Your Email Campaigns

If you are not testing, you are missing out on revenue. A/B testing your email campaigns is not only a great way to improve the quality of your content; it also gives you a chance to know your customers better and understand which content, images, and CTAs work best. Testing gives teams clear direction, replaces opinion-driven decisions with evidence, and helps ensure a satisfying customer experience overall.

However, surveys show that only 31% of marketers test their emails regularly, while 43% don’t know what to test.

The purpose of email campaigns is to maximize open and click-through rates while generating as many leads and sales as possible. But over-saturated inboxes often reduce the chances of prospects opening the emails at all.

MailChimp estimates that open rates vary from 18% to 28% depending on the industry. While that’s not too bad, it still means that roughly 72%–82% of emails remain unopened.

Truth be told, there is no magic formula for getting people to open your emails. The success of an email campaign depends on uncovering the mix of layout, copy, and design that clicks with your target audience, and A/B testing different versions is how you find it.

So what is A/B testing?

A/B testing is a way to compare two (or more) versions of an email to understand which one performs better.

Let’s look at it from another perspective. Say you own a bakery; call it Flour Butter Sugar. After a few months of flat revenue, you decide to redesign the menu to see if it has an impact on sales. You make one single change: instead of plain text, the new menu includes pictures of the bakery’s most popular items. You print 30 copies of the redesigned menu alongside 30 copies of the old one, then split the bakery down the middle: patrons on one side get the new menu, while everyone else gets the old one.

After five days, you tally the revenue coming in from the two sides and find that the menu with pictures increased the average table ticket by 20%. That is your clear winner.

This is an A/B test. Now let’s look at how it works in email marketing. A/B testing an email campaign involves tweaking one part of the email to ascertain which version generates more opens, clicks, and conversions. For instance, does a red CTA button generate more clicks than a white one?

To run a test successfully, marketers need to come up with a hypothesis and a goal for the test, typically tied to a single metric such as click-through rate.
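
To make the hypothesis-and-metric idea concrete, here is a minimal Python sketch of one common way a winner can be declared: a two-proportion z-test on click-through rates. The numbers and function name are hypothetical, and most email platforms run this kind of calculation for you.

```python
from math import sqrt

def two_proportion_z(clicks_a, sends_a, clicks_b, sends_b):
    """Two-proportion z-test: is variant B's click rate significantly
    different from variant A's?"""
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)         # pooled rate
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_b - p_a) / se                                      # z-score

# Hypothetical results: red CTA button (B) vs white CTA button (A)
z = two_proportion_z(clicks_a=52, sends_a=500, clicks_b=81, sends_b=500)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 95% level
```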

Apart from A/B testing, there are other methods as well.

Multivariate testing- This method tests multiple variations at once to determine which combination of elements is the most effective. It’s a good alternative to A/B testing; however, be mindful to avoid testing multiple KPI success metrics and focus on a single one. It’s a great way to optimize winning elements from previous tests.
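
To see why multivariate tests demand much larger lists, here is a short sketch (with hypothetical elements) of how quickly the combinations multiply:

```python
from itertools import product

# Hypothetical elements to combine in a multivariate test
subject_lines = ["Hurry, only 2 left!", "Your items are waiting"]
cta_colors = ["red", "white"]
hero_images = ["product photo", "lifestyle banner"]

# Every combination becomes one variant: 2 x 2 x 2 = 8 emails to test,
# which is why multivariate tests need much larger lists than A/B tests.
variants = list(product(subject_lines, cta_colors, hero_images))
for i, (subject, color, image) in enumerate(variants, start=1):
    print(f"Variant {i}: subject={subject!r}, cta={color}, image={image}")
```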


Champion vs Challenger- This is a long-term testing plan: each winning variation becomes the control, and every new idea is run against it as a challenger. Build in breaks between rounds to implement what you learn. This test is best for trying new ideas, but is best avoided during holiday seasons, as seasonal behavior may skew results.
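
For illustration only, a minimal sketch of the champion vs challenger loop; `send_and_measure` is a hypothetical stand-in for running one campaign and measuring its click-through rate, not a real API:

```python
def run_challenger_cycle(champion, challengers, send_and_measure):
    """Run each challenger against the current champion; the winner of
    each round becomes the control for the next."""
    for challenger in challengers:
        champion_ctr = send_and_measure(champion)
        challenger_ctr = send_and_measure(challenger)
        if challenger_ctr > champion_ctr:
            champion = challenger  # winner becomes the new control
        # ...pause here to implement learnings before the next round...
    return champion
```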


Hold-out- This means excluding a group of contacts from an email campaign entirely, then comparing the purchases of the two groups to understand what impact the campaign had. Compared to other tests, this method is especially effective for understanding impact on ROI. However, it should be a last resort, used on a very small group, to avoid leaving money on the table.
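
A minimal sketch of how a hold-out split might look in practice, assuming contacts are identified by simple IDs; the hold-out percentage and seed are arbitrary choices:

```python
import random

def holdout_split(contacts, holdout_rate=0.05, seed=42):
    """Split a contact list into a send group and a small hold-out group
    that receives no email at all."""
    shuffled = contacts[:]                    # don't mutate the original
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * holdout_rate)
    return shuffled[cut:], shuffled[:cut]     # (send group, hold-out group)

send_group, holdout = holdout_split(list(range(10_000)))
print(len(send_group), len(holdout))          # -> 9500 500
# Later, compare average revenue per contact in each group: the difference
# is the incremental revenue the campaign actually drove.
```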

Now that we have looked into the different kinds of tests, let’s understand the best way to determine the sample size and the best time to send the test emails.

In theory, to determine the winner between Variations A and B, there needs to be a wait period to collect the stats. How long depends on the company size, sample size, and testing method: results can arrive within a couple of hours, or sometimes after a couple of weeks. Herein lies the problem. While it’s fine to wait out a month to gather results when testing the headline copy of a landing page or the CTA of a blog post, waiting that long on email test results is not the best option, for the following reasons:

· Each email campaign has a finite audience- Unlike landing pages, where marketers can continue to gather new audience members over time, once an email test is sent, it’s not possible to ‘add’ more people to that list. Hence the best option is to send an A/B test to the smallest portion of the list necessary to gauge the results, and then send the winning variation to the rest.

· Typically, multiple email programs run simultaneously- In any organization, multiple email programs (monthly newsletters, nurture tracks) run at once. If marketers spend too much time collecting results, they may miss the window for sending the next email, which has far worse consequences than sending the winning variation to only a segment of the database.

· Email sends are timely- Marketing emails are optimized for delivery at a certain time of day and week, so waiting too long to collect statistically significant data can mean missing that window. Hence, email A/B tests have a built-in ‘timing’ setting: at the end of the test period, even if there is no statistical winner, a pre-decided variation is sent to the list. This keeps the email schedule on track while still reaping the benefits of the test (a minimal sketch of this fallback logic follows this list).
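
Here is that fallback logic as a minimal sketch, assuming the z-score comes from a significance test like the one sketched earlier; real email platforms implement this internally:

```python
def pick_variant_to_send(z_score, variant_a, variant_b, default):
    """At the end of the test window, send the statistical winner if there
    is one; otherwise fall back to the pre-decided default variant."""
    if z_score > 1.96:        # B significantly better at the 95% level
        return variant_b
    if z_score < -1.96:       # A significantly better
        return variant_a
    return default            # no winner in time: keep the schedule on track
```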

Hence, the next question is: how do marketers determine sample size and send time?

· Begin by assessing the size of the contact list- To run the test successfully, the sample list should be decently large, at least 1,000 contacts to begin with. If it’s smaller than that, the proportion of the list that must be tested becomes comparatively large: almost 85%–90% of the contact base might need to be tested, leaving an untested remainder so small that the results barely matter.

· Open a sample size calculator- Free sample size calculators are available online; the steps below assume a standard one.

· Enter the Confidence Level, Confidence Interval, and Population into the tool- Let’s define these terms:

o Population- The sample represents a larger group of your segment base, known as the population. In an email campaign, the population is the number of people to whom the email was actually delivered, not the number it was sent to.

o Confidence interval- Also called the margin of error, this is the range within which the true result of the A/B test is likely to fall.

o Confidence level- This tells you how sure you can be that the true result lies within the confidence interval. The higher the percentage, the surer you can be of the results.

Example: Suppose we run an A/B test with a list of 1,000 contacts and a deliverability rate of 95%, and we want to be 95% sure that the winning email’s metrics fall within five points of the population’s metrics:

Population- 950

Confidence Level- 95%

Confidence Interval- 5

· Click on ‘Calculate’.

· The sample size in our case is 274. This is the size of one variation list; if there is one variation and one control, this number needs to be doubled.

· Calculate the sample size as a percentage of the whole email list- Depending on the platform being used, marketers may need to enter a percentage of contacts rather than the raw sample size. In this case it is 27.4% (274/1,000) per variation, as reproduced in the sketch below.
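
For those who prefer code to an online calculator, here is a minimal sketch that reproduces the calculation above, using the standard Cochran formula with a finite population correction:

```python
from math import ceil

def sample_size(population, confidence_interval=5, z=1.96, p=0.5):
    """Sample size for one variation list. z=1.96 corresponds to a 95%
    confidence level; p=0.5 is the most conservative assumption about
    the response rate."""
    e = confidence_interval / 100             # interval as a proportion
    n0 = (z ** 2) * p * (1 - p) / e ** 2      # infinite-population size
    return ceil(n0 / (1 + (n0 - 1) / population))  # finite correction

n = sample_size(population=950)               # the worked example above
print(n)                                      # -> 274
print(f"{n / 1000:.1%} of the 1,000-contact list per variation")  # -> 27.4%
```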

Now that the list is split, it’s time to determine the window for collecting test results. For this, you need to know when opens and clicks begin to drop off. For instance, if 80% of clicks arrive within the first 24 hours of sending an email, and only 5% on each day after that, it makes sense to capture data from the first 24 hours in the A/B test stats.
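
A minimal sketch of how that cutoff might be computed from historical data; `clicks_per_hour` is a hypothetical drop-off curve:

```python
def hours_to_capture(clicks_per_hour, share=0.80):
    """Return the hour by which `share` of all clicks have arrived."""
    total = sum(clicks_per_hour)
    running = 0
    for hour, clicks in enumerate(clicks_per_hour, start=1):
        running += clicks
        if running / total >= share:
            return hour

# Hypothetical drop-off curve: clicks observed each hour after a send
clicks = [40, 25, 15, 8, 5, 3, 2, 1, 1]
print(hours_to_capture(clicks))   # -> 3 (80% of clicks arrive by hour 3)
```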

What can you test in an email campaign?

There are several things you can test in a campaign, but here is a list of four elements which, when tested, can yield positive results.

Subject lines and pre-headers: All your hard work will go in vain if your target audience does not open your emails. Subject lines and pre-headers are the only two touch points a reader sees before an email is opened, and hence require extensive attention. Optimal subject lines are between 60 and 70 characters, and several parameters can be tweaked to find the perfect fit:

· Tone (neutral, provocative, mysterious, friendly)

· Length (shorter or longer)

· Word order (try reversing the word order to create impact)

· Personalization (for example, including the recipient’s first name)

As for pre-headers, they are typically pulled from the first line of your email. But as you delve deeper into this art, you can invest in tools that help you craft intentional pre-headers, adding relevant words and extra information that can trigger action from prospects.

Images and visuals: The brain engages more readily with pictures, and adding engaging visuals to emails can be a useful way to generate engagement. Banners, product pictures, videos, and GIFs can all increase user engagement.

However, visuals will not have the same effect on every audience across industries. In fact, too many visuals can distract readers from the email’s call-to-action. To get a clear idea of whether visuals are having a positive impact, run an A/B test where version A has no visuals (but the exact same copy, subject line, and CTA) while version B contains the visuals, and see which performs better.

Copywriting: There is no secret to the perfect copy. If you have different theories about the expectations of your target audience, create different copies based on the expected behaviors, then send the versions to randomly split segments of the same mailing list and see which outperforms the other. You can try different wordings, placements, and text lengths, while staying focused on the key message being conveyed.

Call-to-actions and buttons: Be it buttons, hypertext, or images, the CTA’s copy and design go a long way in persuading readers to click. The best way to conduct an in-depth test on CTAs is to try different formats, shapes, and texts to see which has the highest click-through rate. If this does not yield definitive results, test the effectiveness of the CTA’s value proposition, that is, the offer behind the CTA.

Other things that can be tested include levels of personalization that trigger engagement, and the time and day of the week that has maximum open and click-through rates.

Here are five kinds of emails that you can test:

1. Abandoned cart emails- In this email, there are several subject line variations that can be tested:

· Test with a sense of urgency- for example, “Your items are waiting for you” vs “Hurry, only 2 left in stock”.

· Test the language- for example, “You forgot something in your cart” vs “Your items are waiting for you”.

· Product name against no product name- for example, “You forgot your Adidas sneakers” vs “You forgot something in your cart”.

For content, A/B testing can pit product-focused against brand-focused abandoned cart emails. In the first type, the central focus is the abandoned items; in the latter, importance is given to copy and brand philosophy.

Once the foundational structure of the abandoned cart email is decided, variations can be added to the tests. For instance, recommendations of similar products can be included in the email, taking care not to divert attention from the item left in the cart. This way, customers can see other products that might interest them, in case they are not sold on the item they originally selected.

A sense of urgency in subject lines, and added incentives like discounts and free shipping offers, can also be tested to gauge audience engagement.

2. Browse abandonment emails- These are similar to abandoned cart emails. The only difference is that browse abandonment emails are triggered by viewing history, as opposed to abandoned cart emails, which are sent when people leave items in their carts. Browse abandonment emails also call for a bit more personalization.

Content tests for this type of email can include product recommendations, bestsellers, or simply a manually curated list of similar items that might spark interest. Then the results from the different collections can be tallied.

3. Win-back emails- The effectiveness of discounts is the main thing to test in win-back emails. Marketers can experiment with fixed discounts or percentage discounts, and can also test other incentives like promotions (buy 2, get 1 free) and free shipping. It’s also worth testing the timeline allotted to redeem discounts; for instance, test a 2-day coupon redemption window against a week-long one and see which converts better.

4. Welcome emails- Since the main goal of the welcome email is to guide subscribers to their first purchase, you can test different incentives in these emails, like discounts or free shipping. If, on the other hand, the main objective is to introduce new subscribers and customers to your brand, then testing the content is a good idea, for instance, tracking whether an eBook gets better click-through rates than a curated list of blog posts.

5. Post-purchase emails- The three main types of post-purchase email are ‘product recommendation’, ‘product review’, and ‘thank you’ emails. Typically, it’s possible to test up to five variations simultaneously. The timing of these emails matters as well: it’s best to send them five days after the purchase was made, and then increase the frequency to gauge the reaction.

With all this said, let’s now dive into the best practices for setting up A/B tests for emails.

Step 1: Set a goal

Having a well-defined goal before running an A/B test is a great time-saver, and it can be identified by looking at data from previous campaigns. For instance:

· If the goal is to increase open rates: Then the focus falls mainly on pre-headers and the subject line.

· If the goal is to increase click-through rate, subscriptions, and downloads: Then the focus is on increasing engagement by testing body content such as tone, copy, visuals, and call-to-action, as these elements trigger clicks and conversions.

Step 2: One variable vs multiple variables

Adding multiple variables to an A/B test means that marketers must keep increasing the sample size in order to get statistically significant results.

Moreover, comparing two versions that differ in multiple elements will not lead to definitive results, because it is hard to pin down which element triggered the desired action.

Hence the best practice is to test one variable at a time.

Step 3: Testing at the same time vs different times

Although it’s a good idea to A/B test send hours and days, it’s best to avoid sending variants at different times otherwise, since it becomes difficult to tell whether the resulting actions are due to the email’s timing or its content.

Step 4: Track results and build on them

There’s no point running an A/B test without tracking the results. The four main metrics that should be tracked to measure success include:

· Open Rate

· Click Through Rate

· Response Rate

· Subsequent Conversion Rate

For most campaigns, open rates and click-through rates are the basic performance indicators. For campaigns geared towards lead generation or e-commerce promotional offers, the conversion rate associated with the call-to-action can be tracked to measure the outcome. Put simply, tracking the number of form completions or sales in your email analytics platform gives a good sense of ROI. In these scenarios, marketers can track real conversions instead of open rates and work with much more tangible marketing data.
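
As a concrete illustration, here is a minimal sketch of the four calculations with hypothetical campaign numbers; note that definitions vary by platform (for example, some compute click-through rate against opens rather than deliveries):

```python
# Hypothetical campaign counts
delivered, opens, clicks, replies, conversions = 10_000, 2_200, 310, 45, 28

open_rate = opens / delivered * 100            # 22.0%
click_through_rate = clicks / delivered * 100  # 3.1%
response_rate = replies / delivered * 100      # 0.45% (one common definition)
conversion_rate = conversions / clicks * 100   # 9.0% of clickers convert

print(f"Open: {open_rate:.1f}%  CTR: {click_through_rate:.1f}%  "
      f"Response: {response_rate:.2f}%  Conversion: {conversion_rate:.1f}%")
```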

I am a marketing expert with experience handling digital campaigns, nurture programs, webinars, and events.