Can you identify how Retention Science has made your life easier in the email channel?
Partnering with Retention Science was one of the better decisions we've made as a company, because it's given me not only a platform that can handle multivariate testing, but also a concept that has helped me improve as a marketer. I always knew A/B testing was essential, but I never realized there were so many layers to it. The concept of Hypothesis Testing has forced me to think deeply about my brand and about the best ways to drive success through simple, straightforward tests that deliver statistically meaningful improvements and valuable insights.
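To make the hypothesis-testing idea concrete, here is a minimal sketch of the kind of significance check behind an email A/B test: a two-proportion z-test comparing conversion rates between two variants. All numbers are hypothetical, and this is a generic illustration, not how Retention Science's platform computes results.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 10,000 sends per subject line,
# 520 conversions for A vs. 610 for B.
z, p = two_proportion_z_test(520, 10_000, 610, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the usual 0.05 threshold would suggest the difference between variants is unlikely to be noise, which is the kind of "statistically meaningful improvement" referred to above.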
What kind of tests are you running, and what sort of results have you seen?
We have been live on ReSci for a little over five months now, and so far we've activated 14 stages while still running our Promo and Smart Blasts throughout the week. Within those 14 stages, we've tested a total of 136 different templates, all with different layouts, messaging styles, recommendation counts, and incentives. We've also tested almost 200 different subject lines across these templates, varying length, style, capitalization, emojis, and punctuation to cover as much ground as possible. With Cortex optimizing the distribution of these variables, we're never afraid to try things outside of standard practice, because the models send them out safely, scaling up volume on what's working and suppressing what isn't. We've been able to identify that in certain stages more recommendations seem to work better, while in others focusing on 2 or 3 recommendations personalizes the user experience more effectively.
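Cortex's internals aren't public, but the "scale up what works, suppress what doesn't" behavior described here resembles a multi-armed bandit. A minimal Thompson-sampling sketch of that general idea, with entirely hypothetical variant names and numbers:

```python
import random

def choose_variant(stats):
    """Thompson sampling over Beta posteriors: sample a plausible
    conversion rate for each variant and pick the highest draw.
    `stats` maps variant name to (successes, failures) observed so far."""
    draws = {
        name: random.betavariate(wins + 1, losses + 1)  # Beta(1, 1) prior
        for name, (wins, losses) in stats.items()
    }
    return max(draws, key=draws.get)

# Hypothetical per-subject-line results so far: (opens, non-opens).
stats = {
    "emoji_short": (320, 1680),
    "plain_long": (250, 1750),
    "question_caps": (290, 1710),
}

# Simulate allocating the next 1,000 sends: the better-performing
# variant naturally receives most of the volume, while weaker ones
# still get occasional exploratory sends rather than a hard cutoff.
counts = {name: 0 for name in stats}
for _ in range(1000):
    counts[choose_variant(stats)] += 1
```

Because allocation is probabilistic rather than winner-take-all, an underperforming variant is throttled gradually, which matches the "safely send these out" behavior described above.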
Is there anything that you would recommend other users do when trying to get started with this methodology?
One thing I think is most important is coming up with a guideline for naming your templates. Because we were testing so many different things across so many different stages, I was overwhelmed at first when trying to interpret the results. Once I settled on consistent naming conventions, reviewing performance became much easier. I recommend being as descriptive as possible when naming your templates: start with the name of the stage, then use keywords so you can tell which template is which without having to open them individually. I've included some screenshots of how I name some of my items, and I think anyone using this tool should adopt something similar!
For example, here are some of the things I'm currently testing in the New To Your Brand Stage:
And here are some of the things I'm testing in Cart Abandon (there are more, I just couldn't fit them all in one screenshot):
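The stage-first naming convention described above can be sketched as a small helper. The format shown here is a hypothetical illustration of the pattern, not Breazy's actual scheme:

```python
def template_name(stage, *keywords):
    """Build a descriptive template name: stage first, then keywords,
    joined with underscores so names sort and scan easily in a list.
    (Hypothetical format, for illustration only.)"""
    parts = [stage] + list(keywords)
    return "_".join(p.strip().lower().replace(" ", "-") for p in parts)

# Hypothetical examples following the stage-first convention:
print(template_name("CartAbandon", "3 recs", "10% off", "hero image"))
# cartabandon_3-recs_10%-off_hero-image
print(template_name("NewToBrand", "6 recs", "no incentive"))
# newtobrand_6-recs_no-incentive
```

Starting every name with the stage keeps all of a stage's templates grouped together when sorted alphabetically, which is what makes scanning performance reports easier.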
This seems like a ton of work to set up, and probably more time than most marketers have available for tests this elaborate. Do you have any advice for time-strapped marketers on getting the most out of Cortex?
First, if you're going to be good at your job, you need to commit to putting in the work required to drive results. That said, with Cortex and the Drag and Drop Editor, this probably took much less time to set up than you're thinking. We're a pretty small, lean team, and where we really scaled was when we started leveraging the "clone" feature, which let us take an existing template, make a copy, and make a slight change, such as the number of recommendations or the positioning of certain items. We tasked ourselves with creating or modifying a certain number per week, and held ourselves accountable to deliver what we promised.
Sofia Fuller is the Head of Marketing at Breazy.com and has more than five years of marketing experience. She is excited about marketing and interested in learning other tips, tricks, and best practices for driving success in the email channel.