A/B Testing Is Important
As a marketer, you have been told that you should always test something in your campaigns in order to collect key learnings from any initiative worth undertaking. Depending on the volume of sends you conduct, this can feel like a tedious task, and you may struggle to find variables worth experimenting with. You may also find yourself stretching to interpret results and extract information you can actually use in future campaigns. At that point, many marketers "water down" the practice to items with minimal impact: slight differences in Subject Lines, minor changes in Hero Images, or small adjustments to Call To Action messaging. This nearly defeats the purpose of running A/B tests at all, producing lifts too small to drive incremental growth.
Don't Get Overwhelmed, Start With the Basics
It doesn't matter if you are a brand-new brand trying to establish itself in a competitive space or a seasoned veteran and category leader; it is never too late to build the habits that drive iterative improvement in your campaigns over time. There are several things every marketer should consider in order to position themselves for success. Remember that the key goal for any marketer is to drive engagement, and Retention Science has a platform that will handle all the variables you throw at it, performing optimizations that vary send distributions to maximize the immediate impact on your bottom line. The most important piece of the puzzle is testing different variables in different combinations, helping you identify the best way to mix and match content to perfect your game.
- Define the length of your test early, and don't be afraid to pull the plug on underperforming variables
- Don't wait for results to become obvious. Going into each test, you should determine that "This is a test I want to run for the month of February," or "I want to look at the numbers after I've sent a volume of 150,000 across this stage."
- If one subject line or template is falling behind a clear winner, don't keep it running just to reach the timeframe you established earlier. Be smart about how you interpret your results, understand which factors may be contributing to the outcome, and build on those learnings.
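To take the guesswork out of "falling behind a clear winner," you can check whether the gap between two variants is statistically meaningful before pulling the plug. The sketch below is a standard two-proportion z-test applied to open rates; the send and open counts are hypothetical, chosen to mirror the 150,000-send example above, and are not from any real campaign.

```python
import math

def open_rate_z_test(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test comparing the open rates of two variants.

    Returns (z, p_value); a two-sided p-value below 0.05 suggests the
    difference is unlikely to be random noise.
    """
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    # Pooled open rate under the null hypothesis of no real difference
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 75,000 sends per arm out of a planned 150,000 total
z, p = open_rate_z_test(opens_a=13500, sends_a=75000,   # 18.0% open rate
                        opens_b=12750, sends_b=75000)   # 17.0% open rate
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value is well below your threshold before the planned volume is reached, you have a principled reason to stop the losing arm early rather than letting it run out the clock.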
- Subject Line Testing Notes
- Try playing around with the capitalization of letters.
- Try subject lines of different lengths.
- Use emojis to see if your list responds favorably.
- Mix things up between complete sentences, shorter statements, or even incomplete sentences.
- Template Testing Notes
- Layout is the single most important piece of your tests. Vary the way that you present content blocks to your users. You have a very short window of time to pique the interest of your user once they open your message. Many may not even scroll through the entire email, so try to load your content above the fold to maximize what a user sees immediately.
- Vary the voice you use when writing your calls to action. Does creating a sense of urgency or highlighting limited stock generate more clicks? Should you speak authoritatively or pose compelling questions?
- Does educational content that builds brand identity work more effectively than messages that focus on the promotional nature of the products you sell?
- What is the proper number of item recommendations to show in order to increase discovery and investigation?
- Header layouts can create a more user-friendly experience. But when you give your customers an opportunity to navigate to certain areas or categories, are you cannibalizing your opportunity to show them new or different things?
- Incentive Testing Notes
- Think beyond simply looking at things from a "monetized value" perspective. Do dollars off or percentage off create different perceptions of value to your customers?
- Does putting expiration dates on your coupon codes create a sense of urgency that drives impulse purchases?
- How frequently you include incentives in your promotions can impact buying behavior. If you frequently run 20%-off promotions, users may hold off on purchasing until that offer comes around again.
- Can you use "hurdling" to increase order value? Specifically, if you have a group of users with an average order value (AOV) of $50, can offering them 10% off a purchase of $60 or more raise that average?
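The hurdling math from the last bullet is worth working through explicitly. The sketch below compares net revenue per order with and without a hurdle offer; the $50 AOV, 10% discount, and $60 threshold come from the example above, and the $55 case is a hypothetical shopper who doesn't clear the hurdle.

```python
def net_revenue_per_order(order_value, discount_rate=0.0, threshold=0.0):
    """Net revenue after a percentage discount that only applies
    when the order value meets the hurdle threshold."""
    if order_value >= threshold:
        return order_value * (1 - discount_rate)
    return order_value

baseline = net_revenue_per_order(50.00)  # no offer: the $50 AOV stands
hurdled = net_revenue_per_order(60.00, discount_rate=0.10, threshold=60.00)
below = net_revenue_per_order(55.00, discount_rate=0.10, threshold=60.00)

print(f"baseline ${baseline:.2f}, cleared hurdle ${hurdled:.2f}, "
      f"missed hurdle ${below:.2f}")
```

A shopper who stretches from $50 to $60 to clear the hurdle nets you $54 after the discount, so the offer pays for itself as long as enough users actually increase their basket size; your A/B test is what tells you whether they do.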
Monitor Your Results!
While Cortex will interpret these results and vary send volume to optimize distribution and keep your numbers healthy, a human element is still required to control your tests and drive improvement. The reality is that most marketers judge everything by conversion rate, but there is much more to this process than watching your bottom line. Put yourself in the role of a consumer. After you open an email and look at its content, can you even remember the subject line that got you to open it? Have you ever read a subject line and thought to yourself, "I really need to spend money with this brand because that caught my attention"?
Look at each step of the shopping process as something that drives a subsequent action. Your goal as a marketer should be to judge subject lines only on the rate at which they generate opens. View a template or promotion as a gauge of the percentage of users in whom you've built enough interest to click through your message and visit your webpage. Does an incentive attached to a promotion make your users want to actually purchase once they hit the site?
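The step-by-step view above can be sketched as a simple funnel, where each campaign element is scored on the one transition it directly drives. The campaign numbers below are hypothetical, purely for illustration.

```python
def funnel_rates(sends, opens, clicks, purchases):
    """Score each campaign element on the funnel step it directly drives."""
    return {
        "open_rate": opens / sends,             # the subject line's job
        "click_rate": clicks / opens,           # the template/content's job
        "conversion_rate": purchases / clicks,  # the incentive's and site's job
    }

# Hypothetical campaign: 100k sends, 18k opens, 2,700 clicks, 270 purchases
rates = funnel_rates(sends=100_000, opens=18_000, clicks=2_700, purchases=270)
for step, rate in rates.items():
    print(f"{step}: {rate:.1%}")
```

Scoring each element against its own step keeps a weak subject line from masking a strong template, or vice versa, when you only look at end-to-end conversion.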
Look closely at all of these items, and use the results to inform your creative process as you move from month to month and from promotion to promotion. If you can figure out the right structure for subject lines that increase opens, and combine that with a layout that presents your brand and products in their best light and improves your click rate, it's only logical that conversions and revenue will follow. Keep track of how your incentives, and the frequency with which you distribute them, affect the buying patterns of your users.
If you follow this process correctly, you can position yourself to improve your bottom line by some percentage with each iteration of tests you run. Create new baselines, and think creatively about how you can raise the bar in the next round of testing. Are there certain things that consistently work, or does a certain voice resonate with certain audiences at different stages of the customer lifecycle?
At Retention Science, we are constantly looking at what our partners are doing, and are committed to sharing those tips and tricks with those in our User Community. Do you have a testing idea that worked really well for you in the past? Would you like to share your expertise?