
Learn how to conduct usability tests in 10 min

source link: https://uxplanet.org/learn-how-to-conduct-usability-tests-in-10-min-95dd3e7c458a

A summary of a four-week university course on user testing.

Evaluating Designs with Users is a course from the University of Michigan that belongs to its recently launched User Experience (UX) Design and Research program, and it covers user-testing methodologies with a focus on formative testing.

Even though I still highly recommend this course, I honestly found that it did not follow a logical order, which made it hard to keep pace. I hope this restructured, quicker summary helps more people learn on the go.


1. Types of User testing

Summative Tests

  • Goal: to prove a point and answer a hypothesis, for example: “Is A better than B on measurement x?” (where options A and B usually differ only minimally, in specific design aspects)
  • Tests: requires a controlled experiment, and the outcomes should be measurable (example: A/B testing)
  • Requires: Between 10 and 20+ users
  • Quantitative Methodology
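Since summative tests are quantitative, A/B outcomes like these are typically compared with a significance test. As a minimal sketch (not from the course; the function name and the sample numbers are illustrative), a two-proportion z-test using only the Python standard library:

```python
# Sketch: two-proportion z-test for a summative A/B comparison.
# All numbers below are made up for illustration.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) comparing success rates of A and B."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal approximation to the two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example: 48/200 task successes with design A vs 30/200 with design B.
z, p = two_proportion_z(success_a=48, n_a=200, success_b=30, n_b=200)
print(f"z = {z:.2f}, p = {p:.3f}")  # reject H0 at alpha = 0.05 if p < 0.05
```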

Formative Tests

  • Goal: identify problems to fix, and which parts of the design cause them
  • Tests are task-oriented
  • Focus on what the user is saying, doing and struggling with.
  • Qualitative methodology

2. Who should you recruit?

The general purpose of a usability test is to see if user group X (users with similar characteristics, behaviours or attitudes) can use system Y to do activity Z. So you have to be very clear about who your users are and what it is that you’re interested in seeing them do in your test.

Before you start recruiting users for a usability test, you should consider the following traits: Expertise, Behaviour, Characteristics, Attitudes.

Expertise: their level of expertise with computers in general or with the digital product.

Behaviour: their experience level with the digital product you are testing.

Characteristics: the user’s background (for example, if you are testing a health app, you cannot expect a patient to have the same level of knowledge as a doctor).

Attitudes: some features of the digital product might trigger the user, so their preconceived opinions on certain matters (such as privacy issues) should be researched.


3. Designing the Tasks

The tasks are the activities you give your participants in order to have them try to accomplish some goal using the digital product. So ask yourself: what is the purpose of this test? What are we trying to find out in the first place?

This starts with a question statement that should include the user group, the digital product, and the activity it supports, for example:

Can experienced online shoppers use eBay to find and purchase Decoration items?

  • User group: Experienced online shoppers
  • System: eBay
  • Activity: purchasing Decoration item

After you have identified the user group and activity, start brainstorming specific actions that your users would perform within the scope of that activity.

Characteristics of a well-oriented task

In sum, a task should combine the following characteristics:

  • Be relevant to the testing goals
  • Be realistic and verifiable
  • Should not be accompanied by instructions

Refine the tasks, and don’t fall into the trap of referring only to the end result. For example, for a task framed as “buying a new book”, first ask “why does the user want to buy that book?”; the reason behind it will uncover the purpose for buying it: “the user wants a book to learn how to cook”.

Example User task setting

Purpose: To see if beginner online learners can effectively interact with FutureLearn to access course content

Primary Task: You are looking for courses related to User Experience that you can enrol in. The course must have no prerequisites and must be at the beginner-intermediate level. You want a course that you can work on for 3 hours per week. It should run for only 6 weeks, and preferably you can start it as soon as possible.

Task Set

  • Task 1: Create an account on FutureLearn
  • Task 2: Look for a list of courses related to IT & Computer Science
  • Task 3: Choose a course that has no prerequisites, that you can preferably start right away, that is doable in 2 hours per week, and that runs for only 2 weeks
  • Task 4: Check if your course has been successfully added to your account/profile
  • Task 5: Find a micro-credential running no more than 2 weeks, at postgraduate level, and costing less than 800 euros

4. Questionnaires and Interviews

Pre-test Questionnaires

This is the phase where you learn about the dimensions of diversity of your participants; ask only what is relevant to your analysis. This information will help you interpret what you see while the tasks are being conducted. This step can be skipped if the researcher doesn’t expect differences among users that could impact performance.

Post-test Questionnaires

Applied after a participant has finished the tasks. Free-text responses should be avoided in a post-test questionnaire; instead, use questions with numbered scales that indicate the level of agreement, for example, “It was easy to learn to use this system”: Strongly disagree — — — Strongly agree.

Measurable responses:

  • Perceived usability: “How usable did you feel the system was?”
  • Perceived usefulness: “How useful do you think this system would be for the things that you actually need to do?”
  • Preference or desirability: “How aesthetically appealing did you feel the system was? What is your preference relative to competitive products?”
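Scale responses like these can be summarised numerically across participants. A minimal sketch (the question keys and scores are made up; 1 = strongly disagree, 5 = strongly agree):

```python
# Sketch: summarising post-test Likert responses per question.
# Question keys and scores are illustrative, not real data.
from statistics import mean, stdev

responses = {
    "easy_to_learn":        [4, 5, 3, 4, 5],   # one score per participant
    "perceived_usefulness": [3, 4, 4, 2, 4],
    "visual_appeal":        [5, 4, 4, 5, 3],
}

for question, scores in responses.items():
    print(f"{question}: mean={mean(scores):.1f}, sd={stdev(scores):.2f}")
```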

Pre-test Interview

A pre-test interview can be conducted instead of a questionnaire, asking quantifiable questions verbally. It is advantageous for building rapport and getting more detail beyond the quantified measures; however, it can take longer.

For example: “Let’s say I want to know how many online purchases my participants have made, so that I can characterize them according to their expertise with online shopping. I can then ask verbally: how many online purchases would you estimate you’ve made in the past month?”

Post-Test Interviews

You can flag points for follow-up by using the “tag-team debriefing” technique. A debrief might include:

  • Follow up on tasks
  • Places they got stuck
  • Wrong turns
  • Errors, including places where they might not even have noticed they made errors
  • Questions asked
  • Replaying the tasks and going through the questions they had during each task

General Questions:

  • What do you think the system does well?
  • Where do you think the system most needs improvement?
  • What, if anything, would you use this system for? (perceived usefulness)
  • Who do you think this system would be most valuable for?
  • If you had to explain to someone what the system does, what would you say?
  • Do they actually understand what the system is trying to do and what it’s supposed to be for?
  • Have you used any systems that do similar things to what this system does, and how would you compare them?

Above all, “why” should be the core question to ask when conducting this interview.

What could go wrong?

Watch out for demand characteristics (the tendency of participants to give you what you want), acquiescence bias (the tendency to give positive feedback) and confirmation bias (selectively ignoring what disconfirms our beliefs). Also, don’t expect everyone to give you valid answers.

How to resolve:

Demand characteristics and acquiescence bias: ask for honest feedback, pay attention to unnatural answers, and state the purpose of the test.

Confirmation bias: conduct the tests with unbiased third parties, and don’t share your interest in the outcome of the test.


5. Conducting the usability test

1. Pick a representative set of tasks

2. Pilot testing: run through the tasks yourself to figure out what it looks like to succeed at each task and how easily it can be done

3. Clean state: if required, remove data from past participants (reset the system, clear the cache and search history, undo user actions in the system)

4. Organise the tasks from easiest to hardest

Average time: 30–45 minutes to complete all of the tasks

How should the moderator act

Start by introducing yourself and the other people present. It is important to set the tone by establishing a relationship of trust with the participants: be enthusiastic, show them how much you value their contribution, and be clear about the goals of the usability test. Explain your role, how you are there to facilitate the evaluation and observation, and how crucial honest feedback is. Their role is to voluntarily help you evaluate the system; make sure they understand that they are not being tested, that they can stop at any time, and that they are not forced to answer every question. If possible, give a reward and say “thank you for your time”. Also, don’t lead the user to accomplish the task by giving instructions; Newman (2020) suggests the following phrasing to use beforehand:

“I want you to try to do this task. I’m here if you get stuck and you really need help. But I want to really try to let you go through this to the best of your ability, and we’ll talk about questions or problems that you had afterwards.”

How should the participants act

Participants should say out loud everything they are thinking and whatever comes to their mind, for example:

  • Looking for something
  • Reading a text out loud
  • Hypothesizing how the system might work
  • How they are making sense of what they are doing
  • Interpreting system feedback
  • Explaining and reasoning about their decisions
  • How they are feeling

Why, you may ask? Because you hear them thinking through the tasks, you learn what they notice, and you hear how they interpret their options.

Informed Consent

If needed, participants might be asked to sign an informed consent form, for example:

Informed Consent example from the University of Michigan (Newman, 2020)

How to moderate

  • Plan before the test so you are prepared to face problems
  • Choose a quiet and private place
  • Record your test sessions (capturing video and audio will allow you to analyse everything later)
  • Have a logger keep track of the participant’s moves and progress using a logging sheet

Logging sheets let you keep the task analysis organized and focus only on relevant observations during the test:

(Logging sheet example. Source: University of Michigan, 2020)

6. Collect statistics and Analyse the Results

Going back to the goal statement “Can user X use system Y to do activity Z” you need to:

  • Collect the following statistics: task success/failure; errors; timing
  • Review critical incidents: Where did breakdowns occur? Why did they occur?
  • Interpret debrief responses
Image for post
Image for post
“Task Sucess Non-binary outcomes” Font: (University of Michigan, 2020)
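These statistics can be tallied directly from the logging sheets. A minimal sketch, assuming each sheet is transcribed into one record per participant and task (all names and numbers are illustrative):

```python
# Sketch: aggregating logging-sheet records into per-task statistics:
# success rate, error count and average completion time.
# The records below are made-up examples.
from statistics import mean

log = [
    {"participant": "P1", "task": 1, "success": True,  "errors": 0, "seconds": 42},
    {"participant": "P1", "task": 2, "success": False, "errors": 3, "seconds": 180},
    {"participant": "P2", "task": 1, "success": True,  "errors": 1, "seconds": 55},
    {"participant": "P2", "task": 2, "success": True,  "errors": 2, "seconds": 130},
]

for t in sorted({r["task"] for r in log}):
    rows = [r for r in log if r["task"] == t]
    rate = sum(r["success"] for r in rows) / len(rows)
    print(f"Task {t}: success={rate:.0%}, "
          f"errors={sum(r['errors'] for r in rows)}, "
          f"avg time={mean(r['seconds'] for r in rows):.0f}s")
```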

At the end of the user tests, you should have collected:

  • 5–7 test sessions
  • Around 25 to 50 attempted tasks
  • 10 hours of video
  • A pile of logging sheets
  • 5–14 questionnaires

Identify Severe problems (Key findings)

In the final report, the most severe problems, in other words the key findings, should be highlighted. To do this:

1. Describe the problem: “On what screen or page, or during what interaction, does the problem occur? Are there particular conditions under which it occurs, and how often?”

2. Provide evidence: indicate the critical incidents that revealed the problem (e.g. task failure, extended time).

3. Suggest a course of action: give suggestions, or find suitable examples from other digital products, a design principle, or a heuristic (a general principle of good usability practice) that would apply to this situation, to understand how the problem could be solved and whether additional research is needed.

For the less severe problems:

  • Describe where they were found, with a description and a severity rating
  • Provide an appendix
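One simple way to split key findings from appendix material is a numeric severity rating. A minimal sketch, assuming a hypothetical 1–4 scale (4 = blocks task completion) and made-up findings:

```python
# Sketch: prioritising findings by severity. The scale (1-4) and the
# findings themselves are illustrative examples, not from the course.
findings = [
    {"issue": "Checkout button hidden below the fold", "severity": 4},
    {"issue": "Search filter labels unclear",          "severity": 2},
    {"issue": "Inconsistent icon styles",              "severity": 1},
    {"issue": "Error message gives no recovery hint",  "severity": 3},
]

# Severe problems go in the report body; the rest go in an appendix.
key_findings = [f for f in findings if f["severity"] >= 3]
appendix     = [f for f in findings if f["severity"] < 3]

for f in sorted(key_findings, key=lambda f: -f["severity"]):
    print(f"[{f['severity']}] {f['issue']}")
```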

Explain the outcomes

Summarize your results, review incidents, and look for patterns. Why did the problems happen, what usability principles were violated, and is there a root cause?

Reporting

  • The format should be determined by the audience and purpose
  • If it’s a personal project, the prioritized list is enough; but if you are working with a team, the report should include the list of key findings, the less-severe list, and the evidence
  • If it’s for external stakeholders, write a formal report and emphasise the method and how you came up with the tasks, so they can understand where the results came from

“Nobody will benefit from your hard work if nobody understands what you found except for you.” (Newman, 2020)

Bibliography

Limpitsouni, K., 2020. unDraw. [online] Available at: <https://undraw.co/> [Accessed 2 January 2021].

Newman, M., 2020. User Experience (UX) Design and Research. [online] FutureLearn. Available at: <https://www.futurelearn.com/programs/ux-design-and-research> [Accessed 2 January 2021].

