18 May 2018
How To Split-Test Emails Like a Pro
Email marketing is truly a beautiful thing . . . when it’s done right. The good news is that you — yes, you! — have 100% control of your entire email campaign. From the subject line to the copy, and even the audience itself — it’s all in your hands! The bad news? Well, most campaigns yield lackluster results because most people don’t take the time to test and split-test to improve their campaign’s performance.
Ah, but fear not, comrades. We’ve done the dirty work for you and wrote this primer on split-testing so that you can optimize, optimize, optimize! And of course, make more sales and increase your followers when the dust settles.
What is Split-Testing?
Glad you asked. Split testing, also called A/B testing, is when you isolate different elements of your marketing (whether it be an email subject line, body content, Call-to-Action, etc.) and test different variations to see what gets the best results. OK, and now in plain English. . . Let’s say I’m seeing a 25% open rate on my emails. To improve this to a healthy 50% open rate, I would write a different subject line and test this new “variant” against the original subject line, or “control.”
Now, if I send each email to 500 recipients and the new subject line beats out the original in open rates, then eureka! I toss the original, march on with the new and improved subject line, and continue to split-test until I hit 60%, 70%, or even 80-90% open rates!
Like a shark with laser-beams attached to its forehead, split testing is very powerful. It eliminates the guesswork and allows us to make decisions based on objective statistics. And we can split test lots of things — everything from landing pages to squeeze pages, to whole websites and sales letters, and on and on. And it’s ridiculously fun. You get to play mad scientist and speculate on theories, present hypotheses, and concoct new tactics and strategies.
So should you incorporate split-testing into your marketing? Don’t answer that — it’s a rhetorical question. Of course you should! And here’s how. . .
Best Practices for Split-Testing Emails
Automatic over manual: Most email service providers (Mailchimp, Aweber, ConvertKit, etc.) have a split test or A/B test function built right into the service. You’ll want to use this and avoid the tedium of setting it up manually and trying to manage it old school.
Look at the big picture: Ask yourself where your prospects are getting stuck in the funnel (yes, an email campaign is a funnel). For example, if you’re getting a 70% open rate but only a 4% click-through rate (CTR), then you know the subject line isn’t the problem; rather, the body copy or CTA needs a little TLC. First, look at the big picture to diagnose the problem, and then decide exactly what you want to split test.
Split test only one variable at a time: This is crucial. If you’re looking to improve your response rate, do not tweak the email body copy and the CTA in the same test. If you do, any improvement in performance can’t be attributed to either change. In other words, are people clicking because you sold them in the body copy, or was your new CTA doing the heavy lifting? Or both?! Unfortunately, you’ll never know. For this reason, test only one variable at a time to inform future efforts.
The more the merrier with split testing: You’ll want to split test an adequate sample size so that your test is “statistically significant” (*pushes glasses onto bridge of nose*). Obviously, you can’t send the email to 10 people and expect an accurate performance statistic. Like any study, you’ll need a large enough sample size (250+ recipients per version, but the bigger the better) to account for outliers, anomalies, etc. And of course, randomly split your list so that you’re not sending to two completely different audiences, like dog-lovers and cat-lovers. That would be ruff.
Time of day matters: Be sure to send both the control and the variant at the same time of day. Even a 30-minute difference could skew results. For instance, fewer people are opening emails at 5pm on Friday than, say, 4:30pm on Friday. Because happy hour. . . it’s a fact of life. You’ll want this control to remain the same. Unless, of course, ‘time of day’ is the variable you’re testing for.
Track the numbers early and often: Set aside 10-20 minutes of your day to review the performance statistics. This might seem like common sense, but too many marketers and business owners set up a split-test campaign with the best intentions, only to get lost in the daily grind and completely forget to look at the numbers. If you don’t track the stats, you’ll never know what’s working and what’s not.
What variables should you test for? The list is limited only by your imagination. But test things like subject line, opening line, length of email (short and sweet is typically best), testimonials, CTA, text size and color, trust badges, voice and tone of copy, personalized copy, and on and on. Experiment, try creative variants, and use the empirical data to do more of what works and less of what doesn’t.
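For the more numbers-inclined among you: if you’d rather sanity-check “statistical significance” yourself than take your email provider’s dashboard at its word, the standard tool is a two-proportion z-test. Here’s a rough sketch in Python — the function name and the example numbers are ours, purely for illustration:

```python
import math

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """Compare the open rates of a control (A) and a variant (B).

    Returns the z statistic and a two-tailed p-value. A p-value
    below 0.05 is the conventional bar for calling the difference
    "real" rather than random chance.
    """
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the assumption of no real difference
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value via the normal CDF (built from math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 500 recipients per version; the control was opened
# 125 times (25%), the variant 160 times (32%).
z, p = two_proportion_z_test(125, 500, 160, 500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the p-value comes in under 0.05, so you’d keep the variant. If it hadn’t, the honest move is to keep sending until the sample size is big enough to call it either way.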
And remember, it’s OK to make mistakes. It’s better to try, fail, and pivot than it is to sit around just thinking about optimizing your campaigns but never actually taking the necessary action. Take the action — trust us, your bottom line will be better for it.
But we understand, maybe split-testing email campaigns doesn’t exactly put the fire in your loins like it does for us. Or maybe it does. Either way, you know it’s crucial for growth. So if you’re unwilling or unable to split-test and optimize your marketing in-house, please get in touch with your friends here at Coachella Media. And let our digital marketing experts take the reins.
Until next time, keep on Coachellin’!