Thomas Essl

E22 · How I use data and analytics to power product development

Transcript

[00:00:00] Product research and discovery is not just about a designer conducting interviews. It's about driving product strategy and making sure that the whole team knows what to focus on to get to the desired outcomes.

[00:00:30] Hello, and welcome to Product Nuggets. I'm Thomas, and today I want to talk about the role of data in product development and how I generate insights to improve outcomes. I'll be talking about which metrics I care about and why, and the role that analytics plays in my research and insight-gathering strategy.

[00:00:50] And I'll share five slightly less widely used sources of user insight that you can deploy pretty easily. Firstly, what kinds of information am I looking for? I think about this as strategic versus operational and experience metrics. It's important not to conflate the two metric types and to be clear that you'll need both.

[00:01:12] Together with qualitative user research, these help me answer: how well are we doing, what performance issues are driving these results, and why are we seeing the results we're seeing? So, strategic metrics first, just really briefly. When I talk about strategic metrics, I mean those that are representative or indicative of the outcomes I'm looking to optimize with my product.

[00:01:35] In other words: how good of a job are my team and I doing? These are often what I communicate to stakeholders, and they might be high-level metrics such as monthly active users, conversion rates, satisfaction scores, or actual net promoter scores, and so on, and they might be wrapped up in objectives and key results.

[00:01:56] In episode one, I give a bunch of tips on those specifically, so go back and check that out if you want. These types of metrics might be high-level numbers I pull from a web analytics tool, or the response rate to key questions in a survey I send out every quarter. Often I'll be looking to drive an improvement.

[00:02:16] Sometimes I gather these metrics for the first time at the start of a project, say, and they don't seem terribly useful. That's because I'm looking to establish a baseline measurement that I can then work to improve over time. And this is why it's important to think about metrics that will stay relevant for a long period of time, to give you a chance to track those changes when it comes to reporting.

[00:02:37] Especially once I can see a change emerge, I'll add some context, like the operational metrics that have led to this change, or maybe some qualitative user insights or user quotes to emphasize certain points. But I try not to overdo it. I find that often it's more important to have something really simple in place,

[00:02:56] something that's easy for anyone to understand, rather than painting a complete picture. This also means that I try not to engineer aggregate or vanity metrics, things that are the results of equations over certain user actions per time period in given segments, et cetera. I try to come up with ones that are really easy to understand for anyone I'm talking to.

[00:03:20] Now, operational or experience metrics tell me how well each piece of a product is serving its purpose in order to drive my strategic goals, and why results might look the way they do. They show me what does or doesn't work for users: where do they get stuck, what do they ignore, and so on. It's what I use to drive daily decision-making and to shape the rest of my design and research approach.

[00:03:44] Now, what's the role of these operational metrics during design and development? One famous quote I like to bring up in this context is: if you can't measure it, don't do it. This is because I look at everything we're doing on a product team as an experiment, regardless of whether that is a prototype that we're testing, some kind of survey,

[00:04:07] or a feature that we're building. It means that I always start with a question and form a hypothesis, and after running an experiment, I look at a specific set of metrics to analyze the outcomes of that experiment. It helps me verify my hypothesis and come up with a plan for how to respond to these results.

[00:04:24] Rather than thinking of this as a circular process, I look at it as a spiral, as each iteration should deepen my level of understanding. Some research questions require me to start broad, to home in on a problem, and then go deeper to understand it better. Here's an example. I might start with a hypothesis for a feature that solves a user problem.

[00:04:44] Then I create designs or assets that I can test with users. Maybe I'll have one-to-one user testing sessions or interviews, and based on what comes back, I iterate a few times. Once I'm happy with my iterations, I develop the feature with my team and ship it to a small audience, or on an opt-in basis for everybody.

[00:05:03] I might survey users who have used that feature to get some perception of how things worked out, and these surveys might identify one particular issue, like a certain piece of content on the website being hard to find, for example. Then I'll dig into the analytics to find the specific sticking points: where are drop-offs happening, which filters are used the least? And then I form a hypothesis about

[00:05:28] how we might fix that issue, and it all starts over again. What I'm hoping to illustrate with this is that I really try to get this kind of full coverage of the entire process. And depending on which phase I'm in, I choose different research and insight-gathering methods and different metrics to help me track improvements.

[00:05:46] But fundamentally it's always the same in that I launch an experiment and I gather information to evaluate the outcomes of that experiment, and that helps me develop new experiments. It's really as easy as that. The point of these operational or experience metrics is that, together with qualitative research, I want to be able to go through a complete journey of discovery

[00:06:08] like the one I've just described, without any blind spots. And I set this up by key user journey or job-to-be-done. I start by thinking about the key things I need my users to be able to do in a product, and how I can guarantee research coverage of how well this is going and why.

[00:06:37] In order to get to this full coverage, I employ a whole range of tools, and as promised in the introduction, I want to go into five different ones that I think are a little bit underused. The first one is session recordings. On the qualitative research side, there are of course one-to-one user interviews and testing sessions, but I can also use my product to give me some of that insight

[00:07:00] by looking at session recordings of actual users. If you've never heard of this before, it might seem a bit odd or creepy, but any website can actually record a video of what is happening in your browser window and store this recording. Of course, this is completely anonymized. But if you and your product team notice issues with the conversion funnel, say through such an analytics tool, you can go back and actually watch a recording of exactly what a user might have struggled with.

[00:07:30] Maybe they didn't struggle at all. Maybe the cursor was just sitting in place for five minutes while they went to get a coffee. But these recordings really help you get that context around how exactly things happened.
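A minimal sketch of how such a recording can work under the hood, using the open-source rrweb library as one example (the episode doesn't prescribe a specific tool); the /analytics/session-recording endpoint is hypothetical:

```typescript
import { record } from "rrweb";

// Buffer of DOM snapshot/mutation events that together replay a session.
const events: unknown[] = [];

record({
  emit(event) {
    events.push(event);
  },
  maskAllInputs: true, // anonymize anything users type
});

// Periodically flush the buffer to a hypothetical storage endpoint.
setInterval(() => {
  if (events.length === 0) return;
  fetch("/analytics/session-recording", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(events.splice(0, events.length)),
  });
}, 10_000);
```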

[00:07:48] Number two is event tracking. Of course I use web analytics tools for basic things like conversion funnels, but what I'm really interested in here is how users move, not just between pages, but between individual interactions. Which kind of user clicks on one product over another, for example? What did someone interact with before searching for help? Again, it's important that this kind of advanced event tracking is set up across my product, but I conduct the analysis along specific user journeys.

[00:08:15] It is only in the context of a series of interactions that individual numbers become meaningful. So while there is a temptation to just put all the data that you're gathering on a dashboard, think about what is actually meaningful and what actually helps you drive decisions.
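To make this concrete, here is a minimal sketch of journey-aware event tracking; the track() helper, the /analytics/events endpoint, and the journey label are all made up for illustration:

```typescript
// Hypothetical journey-aware event tracking.
type EventProperties = Record<string, string | number | boolean>;

function track(name: string, properties: EventProperties = {}): void {
  const payload = {
    event: name,
    journey: "checkout", // tag events so they can be read as a sequence
    timestamp: Date.now(),
    ...properties,
  };
  // Fire-and-forget delivery that survives page unloads.
  navigator.sendBeacon("/analytics/events", JSON.stringify(payload));
}

// Instrument individual interactions, not just page views:
document.querySelector("#compare-products")?.addEventListener("click", () => {
  track("product_compared", { source: "listing_page" });
});
```

Tagging each event with the journey it belongs to is what lets you later read the numbers as a sequence rather than as isolated counts on a dashboard.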

[00:08:33] Number three is search terms. This is another type of event I'm interested in with most products, because most products will have some area where users can search for stuff. Some tools let you log what is typed into a search field, and this lets me look for patterns and guides me on, for example, whether some content does exist but needs to be more easily discoverable, or whether users are searching for something that I'm currently not offering,

[00:08:58] and then I need to go and create that content. It's also important to me that I can set all of this up without needing any engineers to help me set up those events. This way, setting up analytics won't compete with improving the product. Heap Analytics is one such tool that I use, for example, but I'm sure there are plenty of others out there.
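If you do wire this up in code rather than through a no-code tool, it can be as small as the sketch below, which assumes the hypothetical track() helper from the earlier event-tracking example and a search field with the id site-search:

```typescript
// Assumes the hypothetical track() helper sketched earlier.
declare function track(name: string, props?: Record<string, unknown>): void;

const searchField = document.querySelector<HTMLInputElement>("#site-search");
if (searchField) {
  searchField.addEventListener("change", () => {
    // Normalize queries so the same intent groups together in analysis.
    const query = searchField.value.trim().toLowerCase();
    if (query.length > 0) {
      track("search_performed", { query });
    }
  });
}
```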

[00:09:18] The next thing that I think is incredibly helpful is called intercepts. One set of data points I love sits somewhere between qualitative and quantitative information, and intercepts and in-context feedback are such pieces. They have some commonalities with surveys, and I've talked about surveys in previous episodes,

[00:09:38] so check those out later. How does this work? Well, for example, if I want to learn more about how users feel about a user journey after they've completed it, I can automatically trigger a micro-survey to gather feedback on it. This information is gold, and much more reliable than email surveys, because it is gathered exactly in the context of a user having just performed the action that I'm interested in. But of course, you've got to be respectful of your users' attention,

[00:10:05] and don't ask too much too often. It's a bit of a balancing act: intercepts are a really powerful tool, but don't get overexcited and bombard users with lots and lots of them. Use them wisely. A less intrusive way is to ask one question, not in a modal window like an intercept, but as part of your actual interface. You see this sometimes when you are served a search result, for example, and there is some text with a button asking you whether this piece was helpful.
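The trigger logic for such an intercept might look like the sketch below; showMicroSurvey() stands in for whatever survey widget you use, track() is the hypothetical helper from earlier, and the one-month cooldown is just an illustrative choice:

```typescript
// Hypothetical intercept trigger: ask for feedback right after a journey
// completes, but rate-limit it so users aren't bombarded.
declare function track(name: string, props?: Record<string, unknown>): void;
declare function showMicroSurvey(opts: {
  question: string;
  onAnswer: (rating: number) => void;
}): void;

const COOLDOWN_MS = 30 * 24 * 60 * 60 * 1000; // at most one intercept a month

function maybeShowIntercept(journey: string): void {
  const lastShown = Number(localStorage.getItem("interceptLastShown") ?? "0");
  if (Date.now() - lastShown < COOLDOWN_MS) return; // respect users' attention

  localStorage.setItem("interceptLastShown", String(Date.now()));
  showMicroSurvey({
    question: "How easy was that to do?",
    onAnswer: (rating) => track("intercept_answered", { journey, rating }),
  });
}

// Call right after a user completes the journey you care about:
maybeShowIntercept("account_setup");
```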

[00:10:33] I'm sure not everyone, in fact hardly anyone, engages with surveys, intercepts, or in-context feedback. And this is why I look at any feedback mechanism I might deploy just like the product itself, in that I optimize it over time. How long did it take to complete a survey? What was the response rate to intercepts?

[00:10:52] Do these things work better with some segments than others? Of course, all of this should be user-tested too, so I can be sure that it isn't causing too much disruption and that it is crystal clear what I'm actually asking for. Yes, these are not quick and easy things to do, but they offer some degree of automation of insight gathering once they're up and running, and they establish a regular cadence of insights coming in.

[00:11:17] Number four is some kind of content-focused tool. If you run a product or a site that is primarily content-focused, like a blog or a video-sharing site, you'll want to know not only how well your interface is performing, but also how your content is doing. Tools like Chartbeat give you insight into exactly that, to help you work out which pieces you should feature more heavily,

[00:11:39] which you might want to share more or less on social media, and you can even do things like run A/B tests of headlines for a specific piece of content. Number five is click confetti and scroll heatmaps, so visualizing interactions on your website. If your site or product has long scrolling pages, especially something like the popular single-page scrolling layout that we see on many company websites,

[00:12:05] for example, you'll want to know more about users' mouse actions and scroll behavior. Click confetti maps show where on a page users clicked. This is helpful, for example, to visualize whether users attempt to click an element they mistake for being interactive when it actually isn't. Scroll heatmaps show where on the page users actually stop scrolling to engage with the content.

[00:12:29] Both of these things would not be picked up by standard analytics, as those mostly track interactions with, well, interactive elements, and not things that aren't meant to be interacted with. So this gives you a little bit more granularity about what's not working.
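Dedicated heatmap tools do this for you, but the raw capture behind a confetti map can be as simple as this sketch, which again assumes the hypothetical track() helper:

```typescript
// Hypothetical raw-click capture for a confetti map. Unlike standard
// analytics, this records clicks on non-interactive elements too.
declare function track(name: string, props?: Record<string, unknown>): void;

document.addEventListener("click", (e) => {
  const target = e.target as HTMLElement;
  track("raw_click", {
    x: Math.round(e.pageX),
    y: Math.round(e.pageY),
    element: target.tagName,
    // Flag whether the click landed on something actually interactive.
    interactive: target.closest("a, button, input, select") !== null,
  });
});
```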

[00:12:54] As you can see, there is a lot of information you could gather, and if you gather it all at once, you probably won't find it that helpful. Plus, you'll actually have to pay for these tools to be able to do all of this. This is why it's important to think about what your key user journeys are and what your insight strategy is to get the greatest research coverage of those user journeys at minimum effort and expense. Before you set up tracking for some metric, put it to the "so what" test. Ask yourself: if we track this metric and it goes significantly up or down,

[00:13:18] so what? Would this influence your decision-making, or is it just interesting? If it's the latter, don't waste another thought on it. This also varies with the part of the product you are focusing on, with the maturity of your product itself, and also of your user base. So be sure to continue to iterate on this over time.

[00:13:42] If you enjoy this podcast, I've got another thing that you might be interested in: my newsletter, called Seven Things. You can sign up at seventhings.thomasessl.com, or you can also just find it on my website at thomasessl.com. This episode has been produced by myself, and the music is from Blue Dot Sessions.

[00:14:01] Any opinions expressed are my own. Thanks so much for listening, and until next time.