Broadcast A/B testing
Project details
Role: Product Designer
Design timeline: 6 weeks
Team: PD, PM, 6 Engineers
Background
Emotive, an SMS marketing automation platform, has a product called Broadcasts which allows brands to send one-way SMS blasts to any segment of their customers.
The ability to A/B test these messages was one of our top-voted feature requests. Since one of our company OKRs was to increase our users’ ROI with the platform, we decided to work on this feature, which would allow users to optimize their SMS strategy and therefore increase their ROI. Additionally, all of the top competitors in the space offer this capability, and building it would allow us to develop our own internal knowledge of SMS best practices.
How might we help users test and learn from different Broadcast variations?
Discovery
Competitor review
Our team conducted a review of our competitors’ A/B testing offerings to get a sense of expectations in the space and to inform which features we would include. We had limited development resources, and our goal was to build an MVP that met the core needs of our brands while offering one or two points of differentiation. From this review we found:
Common/expected features:
Ability to test different content
Ability to preview variants
Duplicate the first variant when creating the A/B test
Ability to send the test to only a percentage of the audience (determined by the user)
Determine the winner by highest open rate (email) or click-through rate
Opportunities for differentiation:
Determine the winner by a variety of metrics (e.g., lowest opt-out rate, highest revenue, highest conversion rate)
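To make the idea of a configurable winning metric concrete, here is a minimal sketch of how a winner could be chosen, assuming illustrative metric names and a simple per-variant stats dictionary; this is not Emotive’s actual schema or implementation.

```python
# Illustrative only: the metric names and per-variant stats below are assumptions,
# not Emotive's actual data model.
LOWER_IS_BETTER = {"opt_out_rate"}

def pick_winner(variants: dict, metric: str) -> str:
    """Return the key of the variant that wins on the chosen metric."""
    if metric in LOWER_IS_BETTER:
        return min(variants, key=lambda v: variants[v][metric])
    return max(variants, key=lambda v: variants[v][metric])

stats = {
    "A": {"click_through_rate": 0.042, "opt_out_rate": 0.010, "revenue": 1800.0},
    "B": {"click_through_rate": 0.051, "opt_out_rate": 0.013, "revenue": 2100.0},
}
print(pick_winner(stats, "click_through_rate"))  # -> B
print(pick_winner(stats, "opt_out_rate"))        # -> A
```

The only real nuance this surfaces is that some metrics, like opt-out rate, are minimized rather than maximized when picking a winner.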
Past research review
We had a good amount of past research notes on this topic, so I collected the insights from those conversations and surveys to develop a list of proposed requirements based on our brands’ needs. I was happy to find that the list was similar to the competitor review list but offered more detail, such as the specific “content” users wanted to test (copy, discount code, image) and how they liked to review test results. For example, our existing Broadcasts “sent” tab displayed the name of the Broadcast and its performance metrics in a table row. For A/B testing, brands wanted to test different images against each other, so being able to see the image that accompanied each Broadcast in the same row would help them easily see which image performed best.
From the past research I gathered these recommended requirements:
Test different images
Test different copy (length, content, tone, with or without a coupon)
Test time sent*
Test with a percentage of group first, and then send the winner
Set test time frame
Test more than 2 options**
See results of both versions and overall test grouped together
See images of each version next to each other
*Though this was a popular request, our team planned to soon implement the ability to send a message in the customers’ timezone. We decided to wait to implement time-based A/B testing until the timezone feature was available, since any findings from time-based tests without it might not hold true after its release.
**De-scoped to shorten dev time
Interviews with brands and Customer Success Managers
Finally, I wanted to confirm the proposed requirements and get a sense of the priority of each in case we needed to scale down the project. I met with two brands and two CSMs to talk through their expectations and get their thoughts on the list of features I had in mind. I learned that testing content, the 50/50 split test, and determining a winner based on click-through rate and conversion rate were the must-haves for this project. Using a smaller test group to determine a winner and then sending the winning variant to the remaining customers was a nice-to-have. Testing more than two options was one of the lower-priority items for this group, so we removed it from our list.
Explorations
Initiating the A/B test
I explored different options for the user to initiate the A/B test option.
In the first exploration, the user would choose whether they wanted to start an A/B test as the first step in creating their Broadcast. I decided not to go with this option because it would not allow users to duplicate the first variant when creating the second, which could introduce errors. I also could not find a name for the non-A/B variant that was clear to users. Ultimately, users thought of A/B testing more as a setting to turn on than as a completely different kind of Broadcast.
In the next exploration, I placed the A/B test button below the “next” button on the first step. This way, it would not clutter the normal user flow but would still be available to users who wanted it.
In the third exploration, I created a toggle to turn on the A/B test. I was also testing the verbiage “multi-variant test”, which ended up being less recognizable and became irrelevant once we de-scoped to two variants. The issue with this option was that the toggle would need to persist when turned on, and users then expected the second variant to appear below the toggle. This ultimately did not work with the presentation I chose for the variants, which collapses both to keep the page from becoming too long.
Previewing the variants
I explored different ways to view & toggle between the two variants, including tabs, a side or top bar, collapsed cards, and message previews beside the main phone frame. The collapsed cards were preferred by the design team and several CSMs because they provided a small amount of context on each variant (image and the beginning of the copy) while not overwhelming the page with a full preview of each version. For faster development, we opted to toggle between the variants in the phone frame using the variant cards, rather than the carousel-like design in the bottom right.
Adjusting test settings
I explored a few options to let the user choose between testing with a 50/50 split of the entire audience and testing with a smaller group first, then sending the winning variant to the remainder. In the approach on the left, I separated these two decisions: users would first decide whether they wanted to send to a test group, then use the slider to determine how large that group should be. The language around this became confusing (e.g., a “test group” for your A/B “test”). In the second iteration, I combined the two concepts. Users choose their A/B test size, which is reflected in the slider. By default, the test size is set to 20%, based on what we learned was the brands’ expectation. If users want to test with the entire audience, they can simply change the range to 100% by typing the value or dragging the slider. I also added labeling of the variants to clarify how the test percentage would split between them.
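As a rough illustration of the split arithmetic described above (a minimal sketch with assumed names and rounding behavior, not Emotive’s implementation), the 20% default on a 10,000-person segment sends each variant to 1,000 customers and holds back 8,000 to receive the winner, while a 100% test size splits the whole audience 50/50 with no remainder.

```python
import math

def split_audience(audience_size: int, test_percent: int) -> dict:
    """Split an audience for an A/B test at the given test-size percentage.

    At 100%, the whole audience is split 50/50 with no remainder; at anything
    lower, the remainder is held back to receive the winning variant later.
    """
    test_size = math.floor(audience_size * test_percent / 100)
    variant_a = test_size // 2             # half of the test group gets variant A
    variant_b = test_size - variant_a      # the other half gets variant B
    remainder = audience_size - test_size  # held back for the winning variant
    return {"variant_a": variant_a, "variant_b": variant_b, "remainder": remainder}

# The 20% default applied to a 10,000-person segment
print(split_audience(10_000, 20))
# {'variant_a': 1000, 'variant_b': 1000, 'remainder': 8000}
```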
Testing and iterations
I user-tested a prototype with 5 brands and 4 CSMs, which resulted in improvements to the editing/toggling behavior of variants and some clarification around the test size slider.
Solution
Integrating A/B testing within the existing flow
The A/B testing feature allows users to start an A/B test by duplicating their existing message, adjust test settings on a new third step in the Broadcast wizard, and review the performance of each message in their A/B test, grouped together, on the Broadcast List page.
Outcomes
Overview
The A/B testing feature was rolled out during June and July 2022 and reached 40% adoption as of August 2022. We plan to look into the data three months post-launch to see whether brands that have adopted A/B testing have increased their Broadcast ROI.
Learnings
We received feedback from the initial launch that resulted in updates to this feature when I redesigned Broadcasts:
The A/B test button was often missed since it sat below the CTA; it has now been moved below the messaging field.
It was still not quite clear to users how to edit the variants, so the cards now expand on the first click instead of the second.