Thomas Essl

Ep 9 - Surveys #2 - Design surveys that don't suck

Transcript

What makes a good question, and what makes a bad one? And how do you get people to answer them? Today, we're back talking about surveys.

Hello, and welcome to Product Nuggets. My name is Thomas Essl, and today we pick up where we left off last time, talking about surveys and getting into the more granular aspects of how to design your survey in the best possible way, one question at a time. If you're interested in a bit more background on what makes a great survey, I recommend going back and listening to the previous episode. It is very brief and, I think, very helpful.

All right, now let's get down to the detail and look at the survey design itself. How can you make your questions more effective?

One way is to avoid using terminology in your questions that also appears in your response options. This one is a little tricky to explain, but imagine giving people three options in three words, say glass, cup, and jar, and repeating one or several of those words, in a certain order, in the question you are asking. You might bias your respondents to select one option over another. So try to ask your questions in a neutral way, one that is agnostic of whatever your response options are.

Next, avoid mixing explainer text with the questions. If you say something like, "We need more information about so-and-so, and when was the last time you used it, and why?" and then give people a chance to respond, there is a lot of information they have to hold in their heads. It goes back to this point of reducing cognitive load: you want them to hold as little information in their heads as possible, so they can concentrate on what the actual question is about. In this example, you can do that by having some kind of text introduction that doesn't include a question at all, just to set the context if you feel that's needed, and then ask one question at a time. Avoid double-barreled questions like "When is the last time you did something, and why?" Those are two questions, and could be even more depending on what kind of information you want, so try to avoid that. Keep it really simple. "When is the last time you used X?" might be a good question. It might be a bad one too, it really depends, but try to keep it as simple as possible.

Another way to do that is to convert free-text questions into multiple-choice ones wherever possible. Free-text questions are a leading reason for survey abandonment, or for respondents just skimming through without really thinking about their responses. If you really must ask them, do so at the end, as respondents will have already invested time into the survey at that point and are less likely to abandon it. If you put them at the beginning, there is a very high chance people will just immediately give up. Another way to think about this is to not ask respondents to list things that you could have listed for them. If you struggle to come up with a list of options for a multiple-choice question, just think about how hard it is going to be for them to think of all the possible options and produce some kind of answer; the cognitive load is incredibly high in that scenario. So try as hard as you can to make a multiple-choice question, or something similarly simple, out of anything you might have asked as a free-text question.

One point that goes to quality of information: don't ask respondents to anticipate their own behavior, attitudes, or feelings in the future. Fortunately, this mistake is really easy to detect. If you start a question with the phrase "Would you...", then it falls into this area of trying to get people to think about something that hasn't actually happened yet. People are notoriously unreliable at answering these kinds of questions, and most likely you will get a lot of wrong information. That is not on purpose.
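To make that free-text-to-multiple-choice conversion concrete, here is a minimal sketch in TypeScript. The Question shape and field names are my own illustration, not any real survey library's API; the point is just the two versions side by side, with the question wording kept neutral so it doesn't repeat any of the option words.

```typescript
// Illustrative question model; these names are assumptions,
// not a real survey tool's API.
type QuestionType = "multipleChoice" | "freeText";

interface Question {
  id: string;
  text: string;
  type: QuestionType;
  options?: string[]; // only used for multipleChoice
}

// Asking respondents to list things you could have listed for them:
const freeTextVersion: Question = {
  id: "containers",
  text: "Which containers do you drink from at home?",
  type: "freeText",
};

// Listing the options for them instead, with an escape hatch.
// Note the question text stays neutral: it doesn't repeat
// "glass", "cup", or "jar" and so doesn't nudge a choice.
const multipleChoiceVersion: Question = {
  id: "containers",
  text: "Which of these do you drink from at home?",
  type: "multipleChoice",
  options: ["Glass", "Cup", "Jar", "Other (please specify)"],
};
```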
They're not trying to deceive you; humans are just inherently bad at doing that. As an alternative, see if you can ask about past behavior. In this example, rather than "Would you do such and such?", you can ask, "In the past week..." no, actually, that's a bad one: "In the past seven days, how often have you done this action?"

Which comes back to what I was saying about week versus days. This is another point where the term "week" can be quite ambiguous. If I ask what you did in the past week, and today is Friday, you are not quite sure whether I'm asking about the particular week starting Monday, or about a week starting the previous Saturday. So be really clear about those sorts of definitions, and make them as simple to answer as possible.

Another mistake I see quite a lot is that folks force their participants to give an answer. They either make a question that might be quite tricky to answer mandatory, or they don't provide a "not sure", "doesn't apply", or "other" option. It's like: here is a five-point rating scale, pick one; or here are three options, pick one. That assumes everybody is perfectly able to answer the question, and more often than not, that is not the case. There is always going to be somebody who is not able to answer, and if you force them to provide an answer anyway, that can be frustrating for them, because again, it increases the cognitive load of thinking about how to answer it somehow. Or it pollutes your survey data, because they end up picking any option, even one that doesn't apply, just so they can carry on with the survey. So think really hard about when you really need to force an answer.

Another really small but important detail I often pick up with mandatory questions: if you have ever designed a survey, you will have noticed the little star icon on mandatory questions, and survey creators just assume that this little star will immediately inform everybody that a question is mandatory. A couple of thoughts on this. First of all, my belief is that by making that distinction, you are telling people that some questions are more important than others. I would argue that if a question isn't all that important to be answered, then don't include it in the survey. Why would you take up somebody's time if you don't really care all that much whether they are going to give a response? If you cut out everything that you previously considered optional, you will increase the likelihood of completion, and respondent happiness, by a lot. If you are someone who tends to have a few mandatory questions and then a whole bunch of optional ones, think about just getting rid of all the optional ones, and then not making the mandatory ones strictly mandatory, to the extent that participants can actually proceed without answering a question.

If you really do have questions that are optional, free-text entries are a great contender for this. If a question is one you really want the answer to, and it's phrased in an easy way, a quick multiple-choice question that is really easy to answer, then you can just leave it as it is. And if something is optional, you can simply say so: "This question is optional. If you want to provide some additional information, please do; we'd really appreciate it."
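Here is a small sketch of that "nudge, don't force" idea, again with illustrative names rather than a real survey tool's API: validation that prompts on a skipped question but never blocks the respondent from moving on. It also shows the "past 7 days" wording from above.

```typescript
// Illustrative answer record; undefined means the respondent skipped it.
interface Answer {
  questionId: string;
  value?: string;
}

// "In the past 7 days" pins the window down; "in the past week" is
// ambiguous (calendar week vs. the last seven days).
const frequencyQuestion = {
  id: "featureUse",
  text: "In the past 7 days, how often did you use this feature?",
  options: ["Not at all", "1-2 times", "3-5 times", "More than 5 times", "Not sure"],
};

// Nudge, but never block: a skipped question returns a gentle prompt,
// not a hard error, so respondents never pick a wrong option just to
// get past a gate.
function validate(answer: Answer): string | null {
  if (answer.value === undefined) {
    return "Skipping is fine, but an answer here really helps us.";
  }
  return null;
}
```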
But you don't have to, so to speak. That makes it easier for people to be comfortable with the whole situation, knowing that their responses are still going to be valued even if they don't fill in that text box. So, against common wisdom, I would actually flip that dynamic around. Don't mark anything as mandatory. Consider everything important, but don't make it mandatory in the sense that somebody cannot proceed without answering the question. Then highlight whatever is optional, so that people can move on if they really want to.

Just a few more pieces of advice on reducing cognitive load. One way to do that is to bundle questions together. Rather than asking a whole bunch of complicated questions in a sequence, you could follow the format of saying, "How much do you agree with the following statements?", and then list a couple of questions below each other with the same rating scale. The reason this works so well is that when participants go through a survey, they don't only have to think about what their answer to a question is going to be; they also have to work out how to convey that information in your survey format. By keeping the format the same for a few questions in a sequence, you cut out that step, that additional cognitive load. So if you can do that in a way that doesn't destroy the actual insight you're trying to get, I strongly encourage it; it makes filling out surveys so much easier.

I just briefly mentioned rating scales. It is really hard to generalize what the right kind of scale is in any given context, but you can use this as a guide: don't go by the kind of data you would like to have, but think about what the result would mean to the respondent and what it would change for you. What I mean by that is this: of course we would always like the richest, most granular data, rating points from zero to a hundred, so that every nuance of an answer is captured. But even on a one-to-ten scale, what does the difference between a seven and a six actually mean? Does it have any actionable consequence? If it doesn't, maybe you can reduce the scale. And if you're asking yourself what difference a seven versus a six would actually make, that is something a respondent might ask themselves as well, or at least subconsciously struggle with as they answer. By choosing a more applicable rating scale, you're not only making the question easier, faster, and more reliable to answer for your participant; you're also making it easier for yourself when you get the data back and have to interpret it and figure out what actions to take. If you think about which insight will lead to which action before you construct the survey, and build it accordingly, that will cut down analysis time on your end as well.

Another mistake I see quite often is a rating scale that doesn't make any sense in the context of the question it is attached to. That often has to do with positive and negative values. For example, if you ask a question like "How likely are you to do so-and-so?", whatever your question is, and then offer a scale from minus five to plus five: that doesn't make sense.
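As a quick illustration of the bundling format, here is a hypothetical statement matrix: one shared, labeled scale for several statements. The structure and names are assumptions of mine, not any survey tool's standard.

```typescript
// One shared, labeled scale for a block of statements: respondents
// learn the response format once, then focus only on their answers.
const agreementScale = [
  "Strongly disagree",
  "Disagree",
  "Neither agree nor disagree",
  "Agree",
  "Strongly agree",
] as const;

const statementMatrix = {
  prompt: "How much do you agree with the following statements?",
  scale: agreementScale,
  statements: [
    "I can find what I need quickly.",
    "The product feels reliable.",
    "The product is worth what I pay for it.",
  ],
};

// Five labeled points keep every step meaningful. On a 0-100 slider,
// the gap between 63 and 64 carries no actionable signal, for you or
// for the respondent.
```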
Like, I can't be less likely than zero, right? And what is the difference between zero and minus five in this context? This sounds really obvious, but I see it quite frequently. That is an extreme example, but the point is: think about what rating scale makes the most sense for your question. That can also come in different formats; sometimes it's a number, sometimes it should be word choices. There is a whole science behind which to use, and plenty of material on the internet if you Google "good rating scales". But all this is to say: think about it.

Finally, I want to talk a little bit about the length of a survey, the main point of course being: keep it short. Keep it really, really short. I have heard so many stories of surveys that were abandoned at three questions long. Of course, that depends on what kinds of questions they were, but even with easy questions, survey abandonment can still happen, so you are doing yourself a favor by keeping it short. Five minutes doesn't sound long to us, but it is really long to spend on a survey when you have other things to do. If you do have a very long survey, there is a whole range of things you can do to adjust for that. You can break it up into sections that make sense. You can allow participants to either complete it later or submit it as an incomplete survey, so that you get at least some of the data before they give up. And you can show progress. Especially with long surveys, but even with short ones, it is quite frustrating not to see how much you still have left to go, and you don't want people to abandon your survey one or two questions short of the end of it. So make sure you are showing that progress as well.

And that is all my survey wisdom for you today. If you liked this episode, please do subscribe and rate it on iTunes, Spotify, or wherever you listen to podcasts. It is not a survey, strictly speaking, but it does really help me do better, and it also helps others find the show. I'd also love to hear from you directly if you have any thoughts on the show. You can get in touch with me on Twitter at @thomas_essl, or email me at hello@thomasessl.com. Product Nuggets is produced by myself. The theme song is Aeronaut by Blue Dot Sessions. Any opinions expressed in this episode are always my own and do not necessarily reflect those of any current or previous employers. Thank you so much for listening, and I can't wait to talk to you next time. Goodbye.

Transcribed by https://otter.ai