source link: https://uxdesign.cc/should-you-fix-a-design-error-in-the-middle-of-user-testing-571d782e1a50

Should you fix a design error in the middle of user testing?

What to consider if you want to change your design prototype mid-test

A woman using a phone while two people beside her take notes; one appears to be the facilitator, the other a note-taker. Photo by Adam Wilson on Unsplash

It is always painful when you realize a design error during user testing. When building a large-scale prototype, we had a small oversight: two buttons were labeled the same but did different things. The previous design had been a modal window, so you’d click the “Add Hospital” button, which would bring up the modal to fill in some basic information. You’d then click the “Add Hospital” button in the modal window to add it to the record.

A secondary button, Add hospital, is at the bottom. An arrow points to a modal window, where the user would input information and then click a blue “Add Hospital” button.
Modal window button

Our team couldn’t agree on better wording, so we figured we’d see if users ran into issues with it. But when the design shifted from a modal to a second page, we hadn’t considered the impact.

There are two screens. The first screen, on the left, has an “Add Hospital” button in blue, with an arrow leading to a second page with a number of blanks and choices to fill in. That second screen, on the right, has “Add Hospital” as another blue primary button at the bottom.

Something we thought only a few users might comment on was causing participants to fail the more significant task. Users had already clicked “Add Hospital” once, so they assumed clicking “Add Hospital” on the second page would add yet another hospital. It was such a big deal that we had to abandon the task with several participants, so we decided to change the button’s label after 3 participants.

It was a minor change that we made for the rest of our test participants after a few users ran into the problem. But it opened up a larger question: should we change our test prototype when there are flaws that every user encounters?

The answer is, it depends.

Arguments for and against changing prototypes

There are good arguments for and against changing prototypes. One of the primary arguments for changing a prototype is that we’re not running a strict quantitative test. As a result, not all test conditions have to be the same.

If you’re gathering qualitative insights, there’s a chance you don’t want to hear the same comments about something minor. Instead, you’d rather hear the different viewpoints users have so you can address the problem in detail. And sometimes the problem is so severe that you don’t need to see many people failing the task. After all, it only takes one person burning their hand before you reconsider putting the handle of a deep fryer next to the heating element.

Besides, it may be more beneficial to get some feedback about possible design alternatives you’re already considering. Changing things mid-test gives you a glimpse of how those alternatives might perform, which helps you iterate faster. But there are also two good arguments for why you shouldn’t. The first is if any UX metrics are involved: if a change alters your metrics substantially (like time on task shrinking by a minute), it throws out any semblance of valid measurement.

This change can also be an issue, even if you’re not super focused on metrics. For example, if the first couple of users expressed frustration overall with the product, and then the following users love your product because of changes, how do you talk about general user impressions? It’s not a mixed bag; it’s a bag that was largely negative until you decided to make some (often hasty) changes to the prototype.

This also becomes important when considering how you’re going to talk with stakeholders. For example, it can be hard to discuss whether this is a severe problem if only 2 of 8 users ran into it (because you changed it mid-test). As a side note, if you’re doing rapid prototyping, you might never change anything mid-test because your next round of testing is next week. But if your next round of usability testing is a month or two away, you might want to consider it.
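
One lightweight safeguard, if you do change a prototype mid-test, is to note which prototype version each participant saw and report findings per version rather than pooled across everyone. Below is a minimal sketch of that bookkeeping in Python; the session data and field names are entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical session log: which prototype version each participant saw and
# whether the duplicate "Add Hospital" label tripped them up.
sessions = [
    {"participant": "P1", "version": "v1", "hit_issue": True},
    {"participant": "P2", "version": "v1", "hit_issue": True},
    {"participant": "P3", "version": "v1", "hit_issue": True},
    {"participant": "P4", "version": "v2", "hit_issue": False},  # after the relabel
    {"participant": "P5", "version": "v2", "hit_issue": False},
    {"participant": "P6", "version": "v2", "hit_issue": False},
    {"participant": "P7", "version": "v2", "hit_issue": False},
    {"participant": "P8", "version": "v2", "hit_issue": False},
]

# Pooled across everyone, the issue looks minor ("3 of 8 users hit it").
pooled_hits = sum(s["hit_issue"] for s in sessions)
print(f"pooled: {pooled_hits}/{len(sessions)} hit the issue")

# Split by prototype version, it is clear the original design failed for
# every participant who actually saw it.
by_version = defaultdict(list)
for s in sessions:
    by_version[s["version"]].append(s["hit_issue"])

for version, hits in sorted(by_version.items()):
    print(f"{version}: {sum(hits)}/{len(hits)} hit the issue")
```

Reporting “3 of 3 on the original design” keeps the severity visible to stakeholders, even though only 3 of 8 total participants encountered the problem.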

However, the main argument against changing things is that sometimes the cure can be worse than the disease. Note that you’ve often spent weeks thinking about and re-designing something before testing. Making a sudden change after a day of testing can sometimes screw up the results worse than the existing prototype would have. Why? Because you might succeed.

You might be forced to include a half-baked design idea in future iterations: because you didn’t capture the problem in detail, and the changes you made seem to work, you can’t do much else. So there are valid arguments both for and against changing things. But weighing those arguments isn’t really how to answer this. Instead, the question is: is this change worth spending my users’ mental energy on?

Thinking about changes

When you spend some time thinking about it, a user’s ignorance is precious. We’re getting users who may have some knowledge or experience of a subject to go into our prototype blind and play around with it for the first time. We’ll never quite re-capture their first time playing with the website, whether it’s things that they don’t know or mistakes that they make. Once they’re familiar with it, later testing will be faster and usually less error-prone. We use two different terms, learnability and memorability, to talk about these experiences.

The definition of learnability from NN/g. The text reads “How to measure learnability of a user interface.” The definition says “Learnability considers how easy it is for users to accomplish a task the first time they encounter the interface and how many repetitions it takes for them to become efficient at that task.”
https://www.nngroup.com/articles/measure-learnability/

So ask yourself: is the detail that they’re getting tripped up on worth their mental energy? If the answer to that question is not immediately obvious, consider asking yourself these two questions:

  1. Is the solution obvious?
  2. Will it negatively impact your analysis/metrics?

Is the solution obvious?

The previous example had a pretty obvious solution: change the button’s name to something more understandable. Participants also gave suggestions on what we could use instead, making this even easier to address. If the solution is this obvious, you might as well go ahead and change it. However, if the solution will need a team to figure it out (or at least some deep thought), perhaps you want to hold off on changing things mid-test.

Will it negatively impact your analysis/metrics?

Will the change affect the larger picture or your effort to do an analysis? Thinking about your findings/metrics across all participants may help you decide whether you should change anything. For example, if your first users felt negative about the product but later users loved it after you made changes, it can be hard to summarize the overall findings for your stakeholders. Or your task completion/time on task may look better, but only because the new design makes it easier to skip optional (but recommended) steps. In that case, you may need to hold off on making any changes. But these two questions may not give you the complete answer. So here are some general guidelines on what you should (and shouldn’t) change mid-test.

What to change mid-test

I will describe issues in the broadest terms because these are general rules of thumb. Depending on your situation, you may or may not want to make some of these changes, and it’s up to you to interpret them for your project. However, most issues tend to fall into three main categories: OK to change, design alternatives, and don’t change.

OK to change: Prototype issues, technical issues, button/container wording issues, order of pages.

The first category of issues is the minor fixes that users should probably never devote mental energy to. We should remember that we’re testing the design of our product with a prototype, not the technical capabilities of the prototype itself. If things missing from the prototype negatively impact your testing, it’s OK to fix them. For example, if people expect to navigate back to the home page by clicking the logo, making the logo clickable won’t distract them from the task, so it’s OK to fix it.

Other common prototype issues include:

  • Pop-up windows not being centered,
  • Problems with the prototype when magnified (look at your prototype zoomed in 125–150%),
  • Buttons not working.

Tech issues are also an easy fix. For example, if you’re testing remotely and there’s a bit of lag, fixing an issue like a double-click creating duplicate form fields is a no-brainer.

However, the other two categories require just a bit more thought. Poorly worded buttons, headers, and containers are easy for your users to spot. Sometimes you’ll get a lot of feedback or suggestions for what to change it to.

Several users suggested the same thing for the button’s wording (“Save Hospital Information”).

The solution is obvious, but before making the change, think about any meetings (or discussions) around button labels. It may be that you can change it without a problem, but sometimes site-wide consistency takes precedence over the individual name of a button.

This also applies to changing the order of the pages for user testing. You can often get a huge difference in user feedback by switching pages (or excluding pages) to match users’ mental models and avoid priming them for specific questions. For example, suppose people expect the first page of an application to be where they enter their own information, but you start with the emergency contact page. In that case, they might make many errors by accidentally entering their own info there. Or you might start with a page of instructions that primes users to stop moving forward (because it seems like they need to find documents before they can continue) when they don’t need to. Switching or excluding pages can be an easy fix, as long as there haven’t been previous meetings deciding the page order.

The main thing to watch here is information dependencies: if any information carries over to the next page (or to different parts of the application), things will break when you switch the order around.

This brings us to the next category of changes, often considered the middle ground: design alternatives.

Design alternatives: Design elements, button placement, informational/error notices

Sometimes, you get a lot of feedback about some design aspects that aren’t as crystal clear. Perhaps users complain a lot that you used radio buttons on a list because they want the ability to select multiple. Or they expected the order of buttons to be swapped, such as “Checkout” and “Continue Shopping.”

Or the solution to the wording issue isn’t crystal clear, such as with informational notices (“Before you begin, you need to…”).

In that case, it’s still good to collect feedback about the current design: swapping things out on the fly may harm your testing more than help it. But what you may want to do is create a design alternative that you can ask your users about during the test.

The best way to think about this is “informal” A/B testing: you go into your prototyping software, duplicate the page, and change one specific thing on that page.

After gathering the user’s impressions on a particular page (or after you’re done with all of your tasks), you can show them the alternative design and gather their impressions.

Remember, in A/B testing you only change one particular feature: it’s not helpful to pile all of the design alternatives you want onto a single page. But generating these alternative designs is not only straightforward; it also lets you gather feedback about possible alternatives that you might have tested in the next round of testing anyway. If the overall sentiment around radio buttons was negative, trying drop-down lists as an alternative allows you to iterate on an existing problem and gather feedback on a possible solution. But you shouldn’t always create alternatives for everything: sometimes it’s best not to touch the design.

Don’t change: Navigation issues, complex wording, contested features, large-scale issues.

Lastly, there is a category of issues you should probably never touch, even if changing them might yield good research data. That tends to be for one of three reasons:

  • You had a lot of meetings discussing different possible solutions to this before settling on something
  • This is a significant change that would probably result in multiple things changing (and making your analysis harder)
  • This touches on things outside your project, so you would need to discuss it with other teams first.

Many of these issues touch on more than one of these reasons. For example, navigation issues are not only large-scale changes but also touch on things that may be outside of your project, such as site-wide consistency or another department’s responsibilities.

If people are getting tripped up with menus or can’t find something, you might consider designing alternatives for button labels in a menu, but you still need to gather data about the current design. Or, you might have to either move on to the next task or nudge them to look at something and get their impressions.

Complex wording, such as paragraphs explaining instructions or definitions of terminology, has likely also had many meetings focused on it and may have considerations outside the project (like legal or organizational wording).

So please don’t change it: you’re not necessarily helping users (since your re-design might not even be feasible), and you’re stepping on a lot of toes by doing it.

Lastly, there may be other large-scale issues that you might not want to touch, even if users consistently suggest them. For example, if they recommend a searchable table for ease of use, you might not want to take that on mid-test if you have six tables throughout the process.

The reason is, which tables do we apply it to? Do we make all tables searchable (even the ones with only a few results)? Do we only apply it to some? Again, that’s something you need discussions around, rather than quickly slapping together a re-design.

Just as a reminder, these three categories of issues exist only as guidelines: sometimes you might find it beneficial to deviate from them. But all of them are based around a single idea: users’ ignorance is a precious resource.

User ignorance is a precious resource.

I’ve been fortunate enough to have been taught the value of users’ ignorance early on. Since my background was in Healthcare UX, I often tested with the same pool of Subject Matter Experts through many user tests. It was a luxury to sit down with a new medical professional and pick their brain, and sometimes they gave valuable insights we had never considered.

That’s translated to the way I work now. With iterative design testing, for example, it’s often more important to capture as much data as possible, even if that means changing things mid-test. Users will see your prototype for the first time only once. Devoting their mental energy to gathering those insights, problems, and alternatives is an excellent use of their time. So don’t be afraid to change something mid-test, especially if it gets you closer to what you want to learn.

Kai Wong is a UX Specialist, Author, and Data Visualization advocate. His latest book, Data Persuasion, talks about learning Data Visualization from a Designer’s perspective and how UX can benefit Data Visualization.

