Example screens of a flow builder with analytics shown throughout

 

Flow Builder Performance Analytics

Project details
Role: Product Designer
Design timeline: 6 weeks
Team: PD, PM, 6 Engineers


Existing Experiences flow builder

Upvoted Canny feature request, with 58 upvotes, to add analytics at the message level

Background
Emotive, an SMS marketing automation platform, has a product called Experiences, which allows brands to craft an automated SMS conversation with their customers. Brands craft outgoing questions and then prepare different responses for the customer based on the customer’s reply. This response/reply/message tree is managed via a flow builder. Before this project, Experiences could only display overall metrics: for example, of everyone who entered this conversation, 20% clicked a link at some point, 10% purchased at some point, etc. Brands were not able to see the performance of an individual message within the conversation, which made it difficult to tell what was working and what was not. This feature was #1 in our feature request platform.

How might we help users optimize their automated flows with data insights?


Discovery


Identifying the problems

By the time we were able to tackle this feature, the topic had come up in many user calls and research efforts, and the team felt confident in our understanding of the problem. Because of this, the discovery effort for this project was shorter than usual and mostly involved synthesizing notes from the Product Research repository I stood up, which houses the product team’s notes from user touchpoints like discovery calls, usability tests, NPS surveys, and churn calls. I reviewed these to document the main asks for this feature. Rather than simply building the feature that was requested, I wanted to make sure we understood what the user’s end goal was and why they were asking for this feature. I collected feedback from:

  • Past research calls

  • NPS surveys

  • Similar feature requests recorded in Canny

  • Churn calls

  • Competitors

 

Questions to answer

From reviewing our notes I defined the requirements for this project in terms of questions our solution needed to answer.

  • Are customers responding and what are the most common responses? How often do my replies capture their responses?

  • Are customers getting to the “sale” in my Experience?

  • Which messages are getting the most traffic? Which messages are highest converting?

  • When are customers converting?

 
 

Explorations


States explorations

Explorations around separating editing and view states for the flow, with the existing state in the forefront.

Step 1: Separating states

The existing flow had only an editing state, which caused saving errors and accidental changes, and would have made it visually difficult to communicate analytics at the same time. The first step in the design was to separate the viewing/analytics and editing views so that the flow controls change depending on the user’s use case.

 

Step 2: Introducing analytics

With the view state determined, I explored different ways to answer the user’s analytics questions within the flow. I explored visually showing the flow of users between nodes with a Sankey-like diagram and different options for expanding and collapsing data on each node, and I played with where to display the overall metrics, which were previously in their own tab but which I wanted to move into the new view state.

(Row 1) I explored different methods of revealing data for each node so that the amount of data shown would not be overwhelming, but users could still easily compare data across nodes.
(Row 2) I explored what it might look like to have data persistently in view for all nodes, as well as adding spend and usage data into the overall metrics panel at the PM’s request.
(Row 3) After receiving some positive feedback on integrating the data within the node (rather than a flyout), I explored a grid layout as well as how spend could be incorporated per node.
(Row 4) I considered show/hide-all-metrics toggles as well as node-level toggles, and made sure this design direction would work with the new UI style I hope to introduce to Experiences in the future. After getting some early user feedback, I introduced benchmark data and a graph to help users see trends in the flow’s performance, plus a Sankey-like visual to communicate the path of users through the flow. I also separated the data into two parts: the most important metric, pulled out at the top of the node (‘sent to’ for outgoing messages and ‘% responded’ for incoming replies), and the more detailed metrics in the expanded section.

 
 

Testing and iterations


I tested the drafting/editing features and analytics features separately with 10 users and 6 CSMs. Task-based questions included:

  • What is the most common response?

  • Which message is performing best?

  • Are customers getting to the discount or sale in this Experience?

  • What changes would you make based on this data, if any?

Design that was tested, with some of the feedback received (listed below)

The majority of customers were able to complete the tasks given, but some usability issues slowed down the process of finding and interpreting data. Example feedback from the design shown above:

  • “People entered” was not a clear label to users.

  • Users were overwhelmed by seeing all of the detailed data for every node at once and had no use case that required it; they preferred to expand the details individually.

  • The breakdown of SMS/MMS messages in the usage section, despite a tooltip, was difficult to understand and not needed by most customers.

  • The icon to expand analytics was not a quick read.

  • The Sankey-like flow diagram was visually overwhelming and made it hard to tell what to focus on first, and its meaning was not immediately clear. I decided to remove it and consider a future view dedicated to this diagram.


Solution


Clear viewing and editing with discoverable data insights

With the view and edit states separated, users can learn about their Experience’s performance without fear of making edits. Three levels of data allow users to easily get a high-level understanding of the Experience’s performance and dive deeper when needed to optimize the flow.

Are customers responding and what are the most common responses? How often do my replies capture their responses?

At the top of each response option, users can see what percent of customers replied with that option. There is also a no-reply branch, which shows customers who did not respond within the specified timeframe. I added another branch, visible in view mode only, that shows the percent of customers who responded but did not fit any of the pre-determined response options; these customers are sent to the brand to respond to manually. Brands can now see whether they need to ask more engaging questions, avoid asking questions altogether, or simply build out their response options to capture more customers.

 

Are customers getting to the “sale” in my Experience?

At the top of each outgoing message, users can see how many people the message was sent to, which tells them whether traffic is even making its way to that message. At the bottom of the node, they can see click-through and conversion rates, the top metrics for gauging how a message itself is performing. Clicking the details reveals more information, like response and opt-out rates, sales and cost information, and the total counts that correspond to the percentages shown on the node (e.g., a 10% CVR here translates to 3 conversions).

 

Which messages are getting the most traffic? Which messages are highest converting?

Allowing users to see how many people each message was sent to, alongside its conversion rate, lets them easily compare the performance of messages and make changes. In the example to the right, the first message received more traffic but has a lower conversion rate than the second. The brand may want to adjust their Experience to send more people to the second message. Or, it may be that the kind of people who reach the second message in the flow (based on their response) are more likely to purchase. Both are useful insights to help optimize the brand’s SMS strategy.

 

When are customers converting?

As discussed, outgoing messages show conversion rates per node. Users can now also view the overall conversion rate of the entire flow from the side panel and easily compare it to the conversion rate of a single message to determine whether the flow is adding value. For example, on the right, if the customer finds that the first message in the flow collects the majority of the conversions, they may avoid building out long, complicated question/response flows in the future, since those don’t have a large impact on conversion rates.

 
Inline analytics desktop prototype preview

Outcomes


Overview

The view/edit states were the first part of this project to be released. Analytics within the flow is currently wrapping up development and will be released to brands one at a time due to the time needed to process historical data.

Since we released the view/edit states, there have been no tickets related to misunderstood states or saving/drafting errors (previously several per week).

Learnings

  • Prioritizing data allowed me to create a design that displayed only as much information as the user needed at the moment, without overwhelming them.

  • Changing the shape of items within a flow can be difficult from a dev perspective and potentially jarring for the user. We opted for a flyout with data details rather than expanding the node itself and re-organizing the flow.

  • To avoid communicating too much information at once, different views can be used depending on how the user wants the data displayed. We tabled the Sankey diagram for another day, when we may add the option to view the flow with only the Sankey lines.

  • Time-slicing data can become tricky with complicated attribution models. We had to make some calls about which timeframes data would appear in, taking into account that the attribution window may have been up to 14 days and the Experience itself can last for days or months.

Future considerations

  • Deeper data around the replies: which users responded with which replies? What were the responses that got categorized as “other”? We could suggest common keywords to add into the flow.

  • Deeper data around conversions: which products were purchased at each node? Are people purchasing the product advertised or something else? What are the most popular products among these customers, so brands can introduce and educate customers about them sooner?

  • Data around missed opportunities: for Experiences overall, what kinds of triggers and templates could the brand be using, and how much revenue is missed out on by not having these?