How do we ask questions during the most common types of usability testing studies? In this article, I will briefly touch on some general practices that I prefer.
Usability Testing
In concise, standard terms, usability testing is a research activity where specific users use a product to perform specific tasks in a specific context of use. The goal is to determine which parts of the product work and which don't, by assessing the users' effectiveness, efficiency, and satisfaction as they experience the product.
When and What to Ask
Usability testing sessions usually follow a similar structure, offering multiple opportunities for questions. With very rare exceptions, usability testing focuses on the experience of a single user at a time.
When and what questions are asked depends largely on the type of test conducted: moderated vs. unmoderated, and formative (early in the process) vs. summative (at the end). Across all these types of usability testing, the following are the usual phases in a session and the types of questions asked in each:
1. Introduction
Goal: to make sure the user understands the plan for the session.
To start the session, we welcome the user and explain the goals and rules for the session. The participant can ask questions to make sure they understand what will happen during the session.
Usually, the only question we ask the user during the intro is:
“Do you have any questions before we start?”
- It is a clarifying question.
- If we are going to ask the user many questions throughout the session, the introduction is the best moment to tell them that we are not testing their intelligence or savviness, and that we ask questions because we want to understand clearly why some things might or might not be working, so that we can better inform future steps.
- Concluding the introduction, we make sure that all required equipment is ready, and we start gathering data.
2. Pre-test
Goal: to understand what type of user is doing the testing.
- Very often, before the user starts experiencing the product, a brief pre-test interview is conducted. Sometimes this is done through a questionnaire before we meet the user in the session. This interview helps us understand the user's attitudes, goals, and behaviour related to the business or to the product under testing. If persona research has been conducted before (the ideal case), these questions tell us which persona each user matches.
- Most of the time, we would ask about the user’s demographics (e.g. age, education, family composition), psychographics (e.g. domain expertise, tech savviness, attitudes towards the product or brand, expectations), behaviours (e.g. common tasks) and context of use (when, how, devices and other resources used).
- Normally, a few of these questions have already been asked during the recruiting phase, where the recruiter makes sure the participant matches the eligibility criteria for the study. So, if you have a lot of tasks to get the user through and little time per session (this part takes around 10 minutes), you could skip the pre-test interview.
- If the sessions are moderated and the sample size matches a qualitative research approach, these questions are usually asked in an open way using probes. For example:
To understand demographics:
Q1. Can you tell me briefly about yourself?
Probe 1. What do you do?
Probe 2. What is your household composition?
To understand psychographics:
Q2. On a scale of 1 to 5, where 1 is Nothing at all, and 5 is Very much, how would you rate your confidence/comfort using your mobile phone for ___ [one or several tech savviness-related activities. E.g. transfer money from your accounts, shop online, etc.]?
Why?
Q3. On the same scale, how much do you know about ___ [domain-specific concepts. E.g. investment products, home insurance, etc.]?
Why?
Q4. What do you like or dislike about … [tasks related to the system or domain]?
To understand behaviour and context of use:
Q5. How often do you ___ [general description of a set of tasks related to the business or product under testing. E.g. do research about insurance online]?
Probe 1. When was the last time you did it?
Probe 2. What tasks did you do?
Probe 3. What devices do/did you use for that?
Probe 4. How was your experience then? Did you encounter any difficulties? Which ones?
- If the sessions are unmoderated and/or the study matches a quantitative approach, closed-question versions of these are usually used, providing ranges and multiple-choice options (a sketch of tallying such answers follows the example below). E.g.
Q3 (closed version). Which of the following statements best matches how much you know about ___ [domain-specific concepts]?
I know nothing about that.
I know a bit, but I could definitely learn more about it.
I know a lot, there are very few things I don’t know well.
I am an expert. I manage these concepts daily.
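In unmoderated, quantitative studies, answers like these are tallied rather than probed. As a minimal sketch (the option labels and responses below are hypothetical, not from any real study), here is how such multiple-choice answers could be counted to segment participants by self-reported domain knowledge:

```python
from collections import Counter

# Hypothetical closed pre-test question: each option maps to a
# self-reported knowledge level we can use to segment participants.
OPTIONS = {
    "A": "I know nothing about that.",
    "B": "I know a bit, but I could definitely learn more about it.",
    "C": "I know a lot, there are very few things I don't know well.",
    "D": "I am an expert. I manage these concepts daily.",
}

# Hypothetical responses collected from an unmoderated study.
responses = ["B", "B", "A", "C", "B", "D", "A", "C", "B", "B"]

counts = Counter(responses)
total = len(responses)
for key, label in OPTIONS.items():
    n = counts.get(key, 0)
    print(f"{label}: {n} ({n / total:.0%})")
```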
3. Test/Tasks
Goal: To understand how the user uses the product to complete a set of tasks, why they use it that way, and what they experience as they do so.
- In this phase, the user performs each planned task using the product. Approaches vary from a highly conversational activity led by a moderator to an independent exercise where users complete tasks on their own, without interacting with a moderator. The approach depends largely on the maturity of the product under testing and on which data is most important to collect.
- For products in early stages of development, it is more common to have a moderator lead the session and ask questions about the experience, to ensure there is rich qualitative data about why an approach might or might not work. For a mature or finalized product, there is more interest in performance metrics such as time to complete the task (a sketch of summarizing such metrics appears at the end of this section). In that case, the moderator must not engage the user in conversation before they complete the task, so that their attention stays solely on completing the task in a natural way, without distractions.
- With this in mind, questions can be asked when an event of interest happens (e.g. a click on a specific UI element) or held until the user is done with the task (or until we call the task done after realizing they will never get there).
- These questions focus on understanding why and how the user did or tried to do something with the product, and what expectations they had after a specific set of actions. In moderated sessions, we usually ask the user questions based on our observations of their experience completing the tasks. E.g.:
Question: I noticed you ___ [did something at a specific moment]. Can you please describe what happened at that moment?
Probe 1: What exactly did you do?
Probe 2: Why did you take that approach?
Question: I noticed you tried interacting with [specific UI element] a couple of times, but nothing happened as a result. Can you tell me what your expectations were at that moment?
- Even if sessions go in an unexpected direction, or if some behaviours or events are not observed, a few common questions like the following can be asked after users complete each task:
Question 1. How did you find the experience of using the system to complete this task?
Probe 1. How did you find the language used?
Probe 2. How did you find the navigation (or search functionalities)?
Probe 3. How did you find the layout of the content?
Probe 4. How did you find the amount of scrolling you had to do on your phone to complete the task?
- Whatever approach is taken here (asking during or after the task), you need to make sure you are not leading the user, that your words don't sound like you are judging them, and that you are not revealing information the user needs to discover on their own to complete another task.
- Examples of ‘not ideal’ questions:
Question 1: Why did you go to ___ [page A] instead of ___ [page B]?
Question 2: Why did you not click on this icon?
Both questions sound judgemental.
Question 3: Would you agree that ___ [way A of completing the task] is better than ___ [way B]?
This question leads the user toward favouring way A.
Question 4: Did you notice you could get to that page using this ___ [menu, icon, link, search]?
This could reveal a way of navigating that might be needed in another task.
- Better ways to ask these questions are:
Question 1: Did you notice whether there was any other way to ___ [complete a specific step/task]?
Probe: What do you think of this [other approach]?
Question 2: Can you tell me what you think about these icons?
Probe: What do you expect to happen if you interact with this icon? Why?
Question 3: Which of these two approaches/options do you find better? Why?
Question 4: If we want to ask about things participants did not seem to notice, the question can be rephrased and held until all tasks have been tested (post-test). See below.
- One final question, common at the end of usability test tasks, is about the user's satisfaction with the product in the context of the task.
Satisfaction rating: On a scale of 1 to 5, where 1 is not at all easy and 5 is very easy, how easy was it to complete the task using ___ [the product]?
Why that rating and not some other?
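Both the performance metrics mentioned earlier (completion, time on task) and this post-task ease rating are numeric, so they can be summarized across participants. A minimal sketch of that arithmetic, assuming hypothetical per-participant records for one task (real studies often also report confidence intervals, or use geometric means for time data):

```python
from statistics import mean, median

# Hypothetical per-participant results for one task:
# (completed?, time on task in seconds, ease rating 1-5).
results = [
    (True, 95, 4),
    (True, 140, 3),
    (False, 300, 2),  # gave up; time capped at the task limit
    (True, 80, 5),
    (True, 210, 3),
]

completion_rate = sum(1 for done, _, _ in results if done) / len(results)

# Time on task is usually reported for successful attempts only.
times = [t for done, t, _ in results if done]
ease = [e for _, _, e in results]

print(f"Completion rate: {completion_rate:.0%}")
print(f"Time on task: mean {mean(times):.0f}s, median {median(times):.0f}s")
print(f"Mean ease rating: {mean(ease):.1f} / 5")
```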
4. Post-test
Goal: to understand the overall user experience and get feedback on any additional elements not covered with the tasks.
- To wrap up the usability testing session, we usually find a custom post-test interview, followed by widely known and validated instruments like the System Usability Scale (SUS) or similar.
- The post-test interview can be a retrospective conversation about the system across all the tasks, and it can sometimes include requests for the user to go back to the product and explore additional features.
- Common questions asked here are:
Question 1. How would you describe your overall experience with ___ [the product] now that you used it to complete a few tasks?
Question 2. What did you like the most about ___ [the product]? Why?
Question 3. What did you like the least? Why?
Probe: Did you encounter any difficulties as you used ___ [the product]? Which ones?
- When we hold back questions during the tasks to avoid giving hints to the user, the post-test is also a good moment to ask them. Common questions look like the following:
Question 4: As you performed the tasks, did you notice these ___ [captions, headings, menu options, icons, links] in any of the screens you saw?
Probe: What do you understand they relate to?
Probe: Do you think they could help you do some of the tasks you did today? How?
- And finally, validated questionnaires such as the SUS usually come as a set of closed statements to which the user responds on a Likert scale (a set of symmetrical answer options specifying the user's level of agreement, from Strongly disagree to Strongly agree). These instruments are mainly quantitative, and their output allows comparing the product against earlier versions of itself, or against other products in the same domain, as a benchmark.
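SUS scoring itself is a small, well-defined calculation: each of the ten items is answered on a 1-to-5 scale; odd-numbered (positively worded) items contribute (answer - 1), even-numbered (negatively worded) items contribute (5 - answer), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch, with one hypothetical participant's answers:

```python
def sus_score(answers):
    """Compute the System Usability Scale score (0-100) from the ten
    item answers, each on a 1 (Strongly disagree) to 5 (Strongly
    agree) scale."""
    if len(answers) != 10:
        raise ValueError("SUS requires exactly 10 item answers")
    contributions = [
        a - 1 if i % 2 == 0 else 5 - a  # items 1,3,5,7,9 vs. 2,4,6,8,10
        for i, a in enumerate(answers)
    ]
    return sum(contributions) * 2.5

# Hypothetical answers from one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```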
Informing Behaviour Rather Than Intention
One thing to keep in mind when asking questions in a usability test is that our questions are meant to shed light on the actual experience of using the product, rather than on the intention to use the product or certain features in certain ways. So we focus on understanding why users did something, rather than asking them to imagine a potential usage scenario that might never actually happen.
To avoid gathering inaccurate or false information that could steer future design efforts in the wrong direction, usability testing avoids intention questions like “Would you use this ___ [feature or product]?”. Instead of asking such questions, we give users tasks and observe whether they use the features of interest, and under what scenarios. Then we ask about how they used them.
Intention questions are more common in market research exercises; they intend to capture interest in a value proposition rather than to surface usability issues in a system. Questions about product purpose or intention should have been answered long before the product exists, using methods such as focus groups, storyboarding, ethnography, diary studies, etc.
Summary
When doing usability testing, remember that the main idea is to understand how users use our product or service and why they use it that way. As we collect information about their patterns of use and the context of that use, we can start discovering which components work or don't work, and therefore where to focus our efforts to improve the experience.
It is important to identify the type of questions that better suit the goals of your usability study, and the best wording and moment to ask these questions.
It is also important to have a skilled moderator observing the testing session, so they can quickly identify the appropriate moment for a key question, phrase it appropriately, and probe for more information.
As with other UX research activities, it is a good idea to run a pilot testing session to assess whether your testing instruments are ready to elicit the most important data, allowing you to draw the most useful insights for your future design efforts.