Emotive Account Analytics

Project details
Role: Product Designer
Design timeline: ~2 months
Team: PD, PM, 5 Engineers

Background
Emotive, an SMS marketing automation platform, was seeing a high percentage of churn due to a lack of trust in the platform’s value. Additionally, the company sold an ROI guarantee, but a lack of transparency and discrepancies in the data made it difficult for customers to understand their ROI. Our team was tasked with increasing users’ trust in the platform and the value it provides.

How might we increase trust in the value of Emotive?


Discovery


In-app survey

I conducted a small in-app survey to get baseline metrics for users’ trust in Emotive.

  • How satisfied are you with the data available in Emotive? 4.48/10

  • I feel confident that Emotive is delivering measurable value. 5.16/10

 

User interviews

I conducted discovery interviews with 12 brands to understand how they analyze and report on marketing tools. I needed to learn:

  • What are marketers’ goals, processes, and timelines?

  • What data do they need to draw insights?

  • How do they evaluate the success of a marketing channel?

  • What are their expectations on what Emotive should provide?

  • Where is Emotive not living up to that expectation?

 

Tables from two different areas of the platform show performance of individual campaigns across varying timeframes

Problem 1: Data is disparate
Sales and other metrics are spread across the platform. It is difficult to get an overall understanding of the money going into Emotive and whether the marketer is seeing a return or getting engagement.

 
A screenshot of the Broadcast page shows campaigns sent in chronological order

From the Broadcast page, users can see SMS sent in chronological order, but it is difficult to get a macro look at what kind of messages perform best over time.

Problem 2: It’s difficult to optimize
Because marketers can’t see which campaigns are working, they can’t improve. Broadcasts (one-way blasts) are listed in chronological order which makes it hard to understand which types of messages perform best overall. Experiences (automated flows) are listed with all-time metrics only, which makes it difficult to tell if certain flows are performing well this month or are dying off. These limitations make it difficult to replicate and iterate on successful campaigns.

 
Attribution model diagram shows that there are many ways for a sale to be attributed and it gets quite complicated

SMS attribution is not straightforward and, like all marketing channels, is unique to the platform. Emotive’s attribution model looks at multiple factors including time, type of engagement, and order information.

Problem 3: Lack of transparency
ROI and the Emotive attribution model are not surfaced in the platform. ROI is manually calculated by Customer Success Managers. Attribution for SMS is a new concept, and customers are wary of the factors used to attribute their sales to Emotive. These factors are discussed briefly during onboarding but are otherwise opaque.
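
To make the opacity concrete, here is a minimal sketch of a windowed, engagement-based attribution check. The types, windows, and priority order below are invented for illustration; Emotive’s actual model weighs more factors than this.

```python
from datetime import datetime, timedelta

# Hypothetical attribution types and windows, strongest first.
# Emotive's real model considers more factors (time, engagement
# type, order information) than this sketch.
ATTRIBUTION_WINDOWS = [
    ("clicked", timedelta(days=7)),    # tapped a link in the SMS
    ("replied", timedelta(days=7)),    # replied to the SMS
    ("delivered", timedelta(days=1)),  # received it, no engagement
]

def attribute_sale(order_time, engagements):
    """Return the attribution type for a sale, or None if unattributed.

    engagements: list of (engagement_type, timestamp) pairs for the
    customer. Stronger engagement types take priority.
    """
    for attribution_type, window in ATTRIBUTION_WINDOWS:
        for engagement_type, engaged_at in engagements:
            if (engagement_type == attribution_type
                    and engaged_at <= order_time <= engaged_at + window):
                return attribution_type
    return None

# A purchase two days after a click is attributed to "clicked",
# even though the message was also delivered earlier.
print(attribute_sale(
    datetime(2022, 5, 3),
    [("delivered", datetime(2022, 4, 30)), ("clicked", datetime(2022, 5, 1))],
))  # -> "clicked"
```

The point of surfacing something like this in-product is that customers can see why a given sale was counted, rather than taking the number on faith.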

 
Diagram shows three different definitions of conversion rate, each one calculated from a starting point further down the funnel

An example of three different definitions for conversion rate, each one calculated from a starting point further down the funnel. Competitors that calculate CVR from further down in the funnel have better-looking conversion rates.

Problem 4: Conflicting definitions
Emotive, marketers, and competitors calculate common metrics differently. At times this can make Emotive’s metrics (for example, conversion rate) appear lower than competitors’. It is also difficult for marketers to celebrate positive metrics when they have no transparency into the formulas used to define them.
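
A worked example with invented funnel numbers shows how much the starting point matters:

```python
# Invented funnel numbers for one hypothetical campaign.
delivered = 10_000   # messages delivered
clicked   = 800      # recipients who clicked through
checkout  = 120      # recipients who reached checkout
purchased = 50       # recipients who purchased

# The same 50 purchases produce three very different "conversion
# rates" depending on where in the funnel the denominator starts.
print(f"CVR from delivered: {purchased / delivered:.1%}")  # 0.5%
print(f"CVR from clicks:    {purchased / clicked:.1%}")    # 6.2%
print(f"CVR from checkout:  {purchased / checkout:.1%}")   # 41.7%
```

Measured from checkout, the same campaign looks over 80× better than measured from delivery, which is why an apples-to-apples definition matters.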

 
Shrugging emoji overlaid on top of data

Problem 5: So what?
Given the potential data we can give brands, what should they do with it? Brands reported that, since they have little experience with SMS, more data would help them test and learn, but they still needed direction on which indicators were good or bad and what to do with the information they receive.

 

Design jams

To further narrow our focus and ideate some early concepts, I held a design thinking jam with members of our go-to-market team who were closest to this problem. I set the stage by introducing the problems we had identified so far and explaining that, for this project, we would focus on fixing near-term, immediate trust issues rather than providing very detailed strategic analytics insights.

First, I led the team through an Empathy Map brainstorm where we identified the problems they were hearing from customers, and I contributed my previous learnings.

Next, we used the top-voted problems to create How Might We statements, which we then ideated solutions for.

I took some time to use all of this discovery info to create an initial concept for an Analytics Dashboard, which I brought back to the next jam to crowd-source feedback. I used this feedback to iterate on the first version used in user testing.

Screenshots of empathy map and How Might We brainstorming session artifacts

Empathy maps for our “Doubter” persona helped us prioritize the most important pain points to solve for. We created three How Might We statements from these top pain points and then brainstormed solutions that needed to make it into the V1.

Screenshots of initial wireframes for analytics dashboard

Based on the jams, I created these initial concepts around an Analytics dashboard. Go-to-market teams used sticky notes to leave feedback and questions on these designs. This feedback was incorporated into the first version used for user testing.

 
 

Testing and iterations


 

I tested multiple variations of the designs with 7 users. Below are examples of some of the iterations considered or tested for the attribution, insights, and calendar features.

Three iterations for explaining attribution which range from a flyout to four cards on the dashboard, using various graphs and micro-copy.

Explaining attribution, as well as the timeline of sales, was one of the more difficult concepts to communicate. I started with a flyout detail with lines showing the percentage of attribution for each attribution type. The flyout proved hard to find, and I didn’t want users to have to search for the information. I also found that there were other factors besides attribution type that told the story of where sales were coming from. I shifted the design to four tiles on the dashboard, using pie charts to show the relative percentages of attribution, and added graphs to show more detail on how quickly sales were made after the campaign was sent.

 
Iterations on an insights tile, which shows how the brand is performing against others in their industry, and on a calendar component

My initial concept for an insights tile communicated too much information at once and therefore could not be specific enough about the metrics shown. Brands were confused about how lookalikes were calculated and what the benchmark up/down numbers related to. I ended up simplifying this tile to communicate how the most important metric, the brand’s revenue, compared to others in the industry. I also provided a definition of how this was calculated, a visual of how well they were doing, and a link to learn how to improve this number.

To time-slice the data on the page, I tested an iteration with tabs at the top to let the user easily toggle between timeframes. I learned that there were a variety of timeframes the user might want to view, the volume of which did not lend itself to tabs. Also, something as simple as “month” could mean different things (past 30 days, month to date), so it was important for marketers to see a visual of the resolved timeframe as well. I iterated on this design to provide a calendar dropdown with some common timeframes available for easy use.
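
For instance, the two common readings of “month” resolve to different date ranges, which is why showing the resolved range mattered. A minimal sketch, with an invented example date:

```python
from datetime import date, timedelta

today = date(2022, 5, 18)  # illustrative "today"

# "Past 30 days": a rolling window ending today.
rolling_start = today - timedelta(days=30)  # 2022-04-18

# "Month to date": the 1st of the current month through today.
mtd_start = today.replace(day=1)            # 2022-05-01

print(f"Past 30 days:  {rolling_start} to {today}")
print(f"Month to date: {mtd_start} to {today}")
```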

 
 

Solution


I designed a brand-new Analytics dashboard for Emotive, addressing all 5 of the pain points found in discovery.

 
 

Overall metrics shown on the analytics dashboard condense data from multiple areas of the platform.

Problem 1: Data is disparate
The dashboard compiles data from multiple sources to give an overall picture of how the brand’s account is performing. The KPI numbers at the top of the dashboard were identified as the top metrics needed to gauge overall performance. Below them, a table compares key campaign metrics across different campaign types.

 
Bar graphs of top broadcasts and experiences compare top performing campaigns for the timeframe

Bar graphs of top Broadcasts and Experiences compare top performing campaigns for the timeframe selected.

Problem 2: It’s difficult to optimize
Two charts rank the top-performing campaigns across different campaign types. Users can look at the top campaigns from the past month, year, or custom time frame to find patterns they may want to repeat or iterate on.

 

Sales are broken down by attribution type (with definitions) and purchase timeframe.

Problem 3: Lack of transparency
This section breaks down sales by attribution type, timeframe, and more. Here users can find definitions of each attribution type and see which is the most common among their customers. They can also see how quickly users typically purchase after receiving their campaign, which helps dispel fears that the attribution windows are too long and are over-attributing.

 
Definitions provided throughout the platform describe the formulas used for each metric and link to models when needed.

Definitions for each metric are provided throughout the platform.

Problem 4: Conflicting definitions
Each metric used on the dashboard includes a popover explaining how Emotive defines it. Popovers include the formula used and any extra information, such as links to more detailed information on the Emotive attribution model.
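
As a sketch of the kind of content each popover might carry (hypothetical copy and formulas, not Emotive’s actual definitions):

```python
# Hypothetical popover content keyed by metric; not Emotive's actual copy.
METRIC_POPOVERS = {
    "conversion_rate": {
        "definition": "Share of delivered messages that led to an attributed order.",
        "formula": "attributed orders / messages delivered",
        "learn_more": "link to the Emotive attribution model",
    },
    "roi": {
        "definition": "Return on the brand's SMS spend.",
        "formula": "attributed revenue / total spend",
        "learn_more": "link to how sales are attributed",
    },
}
```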

 
Screenshot of brainstorming session titled “How can brands increase their revenue?”

Screenshot of a brainstorming session I led with Customer Success Managers to identify strategies brands can use to improve their metrics.

Problem 5: So what?
I identified areas in the dashboard where users had questions on how to improve their numbers, and then held a brainstorming session with Customer Success Managers around strategies the brand could implement in each of these areas. I provided this information to our Product Marketers and Copywriters to create blog articles, which I linked under the relevant data.

New analytics dashboard on desktop
 
New analytics design on mobile
 
 

Outcomes


Post-release survey
35% increase in satisfaction with the data available in Emotive.
14.7% increase in confidence that Emotive is delivering measurable value.

Learnings

  • Best practices around charting (pie vs. donut charts; preferring fidelity over soft corners in line graphs)

  • In new fields, analytics must be paired with education

  • Analytics needs are more subjective than I first assumed

  • Need to focus research strategy on our ideal customer profile (ICP); in the early stages of Emotive we worked mostly with brands who volunteered

Future considerations

  • Add usage analytics to the page to understand whether page length is an issue.

  • Now that SMS as a channel is understood, how might we communicate Emotive’s unique value in the market?