Thomas Essl

E19 · User testing (User research essentials for anyone #3)

User testing helps you ensure that you're building your product in the right way before you actually go and do it, thus de-risking your work and avoiding wasted effort. In this way, it is one of those activities that seem like a time sink but actually save you a ton of time. And as always, it doesn't have to be too complicated. Anyone can do it.

Transcript

[00:00:00] When we create concepts, prototypes, or new parts of a product, it is imperative to make sure that these things actually work for the people they are intended for. And when I say work, I don't just mean technical issues, but whether it feels right, whether it makes sense to people. This is where user testing comes in.

[00:00:25] Hello, and welcome to Product Nuggets. My name is Thomas Essl, here to take away your fears and skepticism of user research. One of the most surprising and enlightening parts of product development, user testing, also referred to as usability testing, is focused on putting something you and your team made in front of actual end users to get actionable feedback on it.

[00:00:46] The magic here lies in the realization that it doesn't matter how much experience you have or how many products you've worked on: you are not the user, and you are certainly not representative of all users. You're also far too deep in with your head. You know your product in and out, have been with the idea since its first day, and so have much more, and very different, contextual information than anyone else.

[00:01:11] When you interact with your product, your mindset and experience are fundamentally different from those of your users. And because of this, you're probably the least suitable candidate to judge the quality of interaction with your product. This is especially true if you happen to have much in common with your users.

[00:01:28] I once worked with medical professionals who were developing a product for other members of that professional group. Sure, they had a lot of experience with the subject matter, which helped them choose the right problem to solve. But they were convinced that they knew exactly what their users, people just like them, needed and how it had to work.

[00:01:46] When they tested their idea with users and it didn't work, they got frustrated. They'd say things like: well, they just don't get it, we know what's right for them. Only once that stubborn belief was broken did a useful and successful product emerge, based on listening to actual end users in their real-life context and responding to the feedback. They needed to open their minds and accept that they didn't hold all the right answers.

[00:02:10] It's a really hard shift of mindset to pull off, but a vital one. So the first step to starting off with successful user testing is this realization that you are not the user. That's why user tests are so important: if done right, they always end up surprising you. Beyond that, doing them regularly, as in every week or every two weeks, helps you and your team build empathy for your users and also develop relationships with them.

[00:02:36] And when is it time for user testing? This is where the mantra of failing early, fast, and often comes into its own. I want my designs to fail before anyone, including myself, has wasted too much effort on them. Contrary to common belief, what you are testing doesn't have to be a finished product or working prototype. More often than not,

[00:02:58] I test mere sketches with people, the kind where words are represented by squiggly lines, to test concepts, the arrangement of content, and whether a sequence of pages flows well from one to the next. My test material doesn't even need to be a representation of the product at all, as I also test process diagrams of complex scenarios that I'm trying to improve with a product or feature down the line.

[00:03:24] You can probably guess what this means for the timing of interviews. To me, it's always a good time for testing. I once worked on a really fast-paced project where we conducted a dozen user interviews each week. That said, most teams won't have the luxury to do that. I generally take a common-sense approach, weighing up the complexity of what I'm working on with the timing of my tests.

[00:03:47] I say timing and not need, because eventually I want to test everything. But some simpler things, like, say, the usability of standard components in the user interface, I might put to the side for a bit and test all at once later. On the other hand, when I work on a new idea or feature, I test it as soon as there is something to show to people, even just a paper sketch, and keep testing it at each stage of refinement until it's in the app and beyond.

[00:04:15] And when I say testing, really I'm talking about courses or batches of tests, as I run each test with, say, four to six people to make sure that the responses I get aren't outliers. Now that you know why user tests are critical and when to conduct them, basically all the time, let's look at how they actually work.

[00:04:35] Like I've said in previous episodes, I like to combine them with discovery interviews in a one-hour session. That's because the discovery interview at the start serves nicely to set up the context for the usability test coming down the line. You're basically saving yourself from giving an extra introduction. But more importantly, since the two sessions relate to each other, at the end of the entire session users can reflect on both of them, and they can reflect on their responses to the discovery interview in the context of whatever it is that

[00:05:09] you tested with them. And so you'll get a greater variety of responses if you're doing both in one session. One comment I'll also make is that you are testing user journeys, not features. So, for example, you want to test things like: use this prototype to find a flight from London to New York on Saturday, and then book it.

[00:05:31] What you don't want to do is ask questions like: take a look at this prototype and tell me how well the filters are working. That's because people don't tend to think in those kinds of terms. We think in terms of what we are trying to do with a product or service, and you're trying to replicate people's

[00:05:49] genuine, realistic experience in your tests, observe them, and get their feedback on that, not on them pretending to be an expert on filters. You also want to make sure that you don't direct them too much, but use what you show them as a prompt to gather their response. You want to write down an interview guide before the session to make sure that

[00:06:11] you cover everything that you wanted to cover and that you reduce bias in the way in which you frame your questions. If you're interested in the topic of bias, I recommend checking out the previous episode. And again, you want to recruit users who actually represent your target user audience, and not friends or family or colleagues, et cetera.

[00:06:32] So make an effort to get real users who are likely to actually experience your product in the future, and conduct the test with them. Let's get to the setup. You, the facilitator, are sitting next to the participant or around the corner of a table, so you can see what they are doing on their screen as well as talk to them.

[00:06:54] If you're doing this remotely, as you'll probably have to these days, the same applies: you're trying to see the users as well as what they're doing with your product. That just means you need to make sure that your IT setup is sorted beforehand to enable you to do that, and that users can actually access your prototype and share their screen as they complete the tasks that you set them.

[00:07:35] All right, and with that, let's dive into the session. I generally split them into four sections. First, you'll have an introduction. This is where, again, you're telling users that this is not a test of them and that there is nothing that they can do wrong. In fact, you need them to be as critical as they possibly can be

[00:07:51] to point out where your product is still going wrong and where you can make improvements before those improvements end up being too costly. I really stress this with users, because they tend to try to be friendly, and I want them to be mean to the design I'm testing with them. Tell them that you need them to think out loud to allow you to get an insight into how they're thinking as they interact with your product.

[00:08:15] And tell them that you might not respond if they talk to you or ask you a question. This is because you're trying to keep the scenario that you're testing as realistic as possible. And if they interact with your product in the wild, you won't be there to guide them. So if they ask you questions, note them down for the discussion afterwards, but really try not to answer them.

[00:08:36] And that's what you'll tell them upfront. And of course, it's always a good idea to record these sessions. This is much easier now with things like Zoom, and recordings will help you with note-taking, or with convincing stakeholders using snippets of recordings down the line. You can check out more on that in episode five.

[00:08:54] Anyways, if you record, be sure to ask permission during the introduction or even before, and remind your participants that you're starting a recording. Also note that you may need written permission for this too. So if you haven't done so yet, now is a good time to get that permission. Now, step two: the actual test. This is where you expose users to your work.

[00:09:16] And if you're testing a prototype, I recommend giving them a task to solve with it. For example: complete this checkout process, or, like I said before, book a flight. You want to use this to go through an actual user journey. And then it's rock and roll: observe your users as they go through using your prototype to complete their task.

[00:09:36] Take note of any positive, negative, or confused reactions or statements, and any questions they ask, and, importantly, observe them. Do they go back and forth a lot? How long do they take to work out a particular action? It's not just what they do, but how they perform the task. If you're planning to hold many sessions like these, you might express the success of a prototype in metrics, for example task success and completion time. So: were they able to finish the task, and how long did it take them to do so?

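If you go down that route, tracking those two metrics needs nothing more than a few lines of code. Here's a minimal sketch in Python; the participant labels, flags, and timings are invented for illustration:

```python
from statistics import median

# Hypothetical session log: (participant, task completed?, seconds to complete).
# Completion time is None when the participant gave up on the task.
sessions = [
    ("P1", True, 95),
    ("P2", True, 140),
    ("P3", False, None),
    ("P4", True, 110),
    ("P5", True, 88),
]

# Keep the timings of successful attempts only.
times = [secs for _, done, secs in sessions if done]
success_rate = len(times) / len(sessions)

print(f"Task success: {success_rate:.0%} ({len(times)}/{len(sessions)})")
print(f"Median completion time: {median(times)}s")
```

Comparing these numbers between rounds of testing gives you a rough quantitative signal of whether an iteration actually improved the journey.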
[00:10:04] The third step is the debrief and discussion. At the end of the first run-through, I typically ask for first impressions: what they liked, what they disliked, whether anything was confusing, et cetera, just to get their reactions as if they had just completed this sort of process with the real deal.

[00:10:28] And then in the fourth section, I'll do a detailed walkthrough of every step of the prototype. I take them back right to the beginning and go through everything again, step by step, in detail. But this time I'm not silent; we're discussing each element on each screen. I might ask them what was confusing here, what was unnecessary, what was missing, and what could be improved.

[00:10:50] This is also the one time where you can actually ask slightly more leading questions, like pointing at a particular element and asking: what do you think this would do? What do you think this icon represents? Those sorts of things. You can discuss any moments you noted down earlier, where they were confused or where they asked you a question, and then move on to the next screen or part of your prototype.

[00:11:12] And that pretty much concludes the session. At the end, you might ask if they have any more questions or thoughts about what you've shown them overall, and then thank them for their time and close the session. Now, once you're done, you're going to have to take your notes and synthesize them, aggregate them somehow.

[00:11:29] And as with everything else, I like to keep this step really simple and efficient. I do this by using the thing I tested itself to document the feedback. So if I'm testing a prototype, I'll have a screenshot or a printout of each screen and make annotations on them. If you really must generate a report for stakeholders, I recommend doing the same thing,

[00:11:50] but prettier, with one screen per slide and notes on the various elements that were loved or caused offence. Most comments will relate directly to what you tested because of the nature of the conversation, so you should be covered for most of the feedback. But anything that doesn't fit what was tested goes into the bucket of discovery.

[00:12:11] So listen to the previous episode to hear about how I deal with those comments. After I've completed all my interviews, I look for commonalities and apply common sense to work out what needs to change for the next iteration of my prototype. Those changes are logged as tickets and go into our product backlog, so we can actually work on them one by one. Once that's done, my notes can be destroyed and I can call it a day.
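Finding those commonalities doesn't need special tooling either. As a rough sketch, assuming you've transcribed your per-screen annotations into (screen, issue) pairs (the data below is made up for illustration), a simple tally surfaces the recurring problems that deserve tickets:

```python
from collections import Counter

# Hypothetical annotations collected across all participants: (screen, issue).
annotations = [
    ("checkout", "unclear button label"),
    ("search", "filters hard to find"),
    ("checkout", "unclear button label"),
    ("checkout", "too many form fields"),
    ("search", "filters hard to find"),
]

# Issues raised by several participants are unlikely to be outliers.
for (screen, issue), count in Counter(annotations).most_common():
    print(f"{count}x {screen}: {issue}")
```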

[00:12:41] I hope you enjoyed this episode. If you did, please pass it on to your friends or colleagues. I'd also really love your feedback, good or bad, via Twitter at @thomas_essl, or you can send me an email to hello@thomassl.com. This show is produced by me, and the music is from Blue Dot Sessions. Any opinions expressed are my own.

[00:13:02] Thanks for listening.