
“You won't believe what they did...” [Part 2 of UX Research Case Study]

Tomás Correia Marques

Hi there! This is the second part of a UX Research Case Study; if you haven't read the first part yet, you can see it here.

So, at this point we had collected the concerns and needs of our users, and it was time to convert them into a UI for the website.
After a couple of iterations using the client's feedback, we agreed it was good to start building...
Wait, what‽ How do we know this is what our users need, and that they'll be able and willing to use it instead of their current solutions?
That's where Usability Testing comes in.

In this post, I'll explain the process we took to validate the UI Mockups with real users, how we analysed the feedback we got, and how that was converted into real changes in the website.

Creating the Usability Test plan

The first step towards a test plan is to define what you actually want to learn from the test. Is it finding out what slows users down? What confuses them? What mistakes they make? Whether they notice and use a new feature?

Tip #1. Have a very clear understanding of what exactly you want to find out with the Usability Sessions.

After that, you need to establish who the users you need to find are (they may match the user profile from previous research sessions or not), how many users you need (take a look at part 1 for more info on this), and how you'll reward users for taking part in the test.

Depending on the type of findings you are looking for, different types of tests may be more suited than others. A good standard base is to do a task-based test. If you have more than one user profile, it is very likely that you'll need a version for each.

When writing down the task instructions, it is very important not to use the same names that are present in the UI, since you want users to interpret the task and not simply search for the words in the interface. Try to create real (believable) tasks for users to complete as they are easier to understand and fulfil.
Avoid writing the task as step-by-step instructions; opt for just giving the goal and let users figure out how to achieve it.

Tip #2. Don't use the same labels from the UI in the task instruction, and try to create plausible tasks users would face in their day-to-day.

Finally, it is also relevant to determine what hardware and software you'll use for the sessions: will it be remote or in person? Do you need to schedule a meeting room? Do you need a mobile phone, a laptop, or a monitor?

It's important to note here that this method works both in person and remotely via video-call (with screen sharing); I've tried both approaches, and you can get similar results.

A neat template for formalising all of this information is the Usability Test Plan Dashboard by David Travis. If you're feeling hipster, you can even print it and post it on a wall for everyone to comment and share ideas.

Preparing the Usability Session materials

Now that you have a plan, it's time to create or update the screens you'll need for users to complete the tasks. Maybe you already have everything ready, but it is always good to go over the main flows and possible forks and check if any screens are missing.
When possible, use real data in the interface; lorem ipsum is very confusing for users. I was once told: "What does this say? I can't read Latin!"

Tip #3. Try to use real data, names, images, ... in your mockups. It's much easier for users to understand your UI.

A side note: you can also use this method with the actual website / app / system you are building, if you want to evaluate the usability of an existing product.

At the time of this project, Figma was just out, and Sketch did not have prototyping yet, so I opted for a plugin that created HTML pages from the artboards, with links based on the layer names (it's actually pretty cool and very scalable). You can take a look at it here.

Nowadays, both Figma and Sketch have prototyping built in, so that plugin might be unnecessary extra work. The downside of the built-in prototyping tools is that they don't let you show the current task to the user (with the HTML prototype, it is straightforward to add a banner on top). A good way to achieve this is to use an app like macOS's Stickies and set the note as a floating window (meaning it will always stay on top of all other windows).

Showing the task is very important: the user can refer back to it without having to ask you to repeat the question, and if your task involves specific names or numbers, it's good to have a written version easily accessible at the top of the screen. Just make sure it is styled distinctly from the system, so it isn't confused with it.

Tip #4. Having the task instruction always in sight is super helpful for users. If you are doing the session in person, you can even stick a real post-it on the top of the monitor.

Another thing you might consider adding is a "task completed" screen at the end of each task flow; this gives users a sense of fulfilment and motivates them to continue.

Try to do a pilot test with someone you trust to be brutally honest, to see if the system is working as it is supposed to, if the instructions are clear, if the test duration is appropriate, and, most importantly, if the tasks provide the insight you need.

Conducting the Usability Testing Sessions

At this point, you should have a plan, the materials, the users and the location for the sessions.
When conducting the sessions, there are some important aspects to keep in mind. Before starting, explain to users what the test consists of; inform them that they can withdraw at any point if they feel uncomfortable, that you are testing the system and not them, and that there is no right or wrong way to complete each task: you just want to know how users interact with your system.

Tip #5. Before starting the session, explain to your users what they'll be doing. Most likely, this is the first time they are doing something like this. Make them feel comfortable.

It's also helpful to ask users to think out loud; repeat this if users are very quiet during the test, as it helps them vocalise the issues or confusion they might be experiencing.

The key here is that you don't want to ask specific questions or give instructions; you want to encourage users to continue and make them feel you are paying attention to what they are saying and doing. I personally quite like the "mm-hmm" expression, because you signal that you heard what they said without agreeing or disagreeing with what they did.
Once the test is finished, you can ask for the reason behind some of the decisions users made that you found intriguing.

If you are conducting the usability sessions alone, don't try to take notes; record the session and analyse it later. Even if you have someone taking notes, it is always good to record at least the sessions' audio.
Don't forget to ask for permission before you start recording. Oftentimes a verbal agreement is enough, but you can also ask users to sign a Consent Form (these forms may include an NDA-type clause too).

Analysing the results

After the sessions, you'll hopefully have a lot of notes and hours of recordings; now it is time to extract the important notes and comments users made. Don't just take note of the issues: save some positive comments too, for when you need to cheer up ;) and to share with the team.

If you made recordings of the screen, measure the time it took to perform each task and the ease of navigating the interface (the number of clicks, for example). These two points combined are very useful for detecting issues and excluding non-issues. For example, a user might have taken a long time to complete a task but made the same number of clicks as a faster user, meaning that user might just have been reading the interface more carefully.
Establish a minimum and an acceptable worst case for each task and compare them with your results in order to identify the problematic areas.
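The threshold comparison above can be sketched in a few lines of Python. The task names, measurements, and acceptable limits below are purely hypothetical; the point is combining time and clicks so a slow-but-normal-click run isn't flagged as a problem.

```python
# Hypothetical per-task measurements taken from the session recordings:
# (user, task, seconds to complete, number of clicks)
observations = [
    ("user1", "create invoice", 45, 6),
    ("user2", "create invoice", 120, 6),   # slow but same clicks: careful reading
    ("user1", "export report", 200, 22),
    ("user2", "export report", 180, 19),
]

# Acceptable worst case per task (seconds, clicks), decided during the pilot test.
acceptable = {"create invoice": (150, 10), "export report": (90, 12)}

problem_areas = set()
for user, task, seconds, clicks in observations:
    max_seconds, max_clicks = acceptable[task]
    # Flag the task only when BOTH time and clicks exceed the limit:
    # a slow run with a normal click count may just be careful reading.
    if seconds > max_seconds and clicks > max_clicks:
        problem_areas.add(task)

print(sorted(problem_areas))  # the tasks worth investigating first
```

With these made-up numbers, only the second task trips both limits; the slow "create invoice" run is correctly excluded as a non-issue.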

Sort the issues by the number of occurrences and classify them by severity. I quite like the Dumas & Redish (1993) Severity Scale, as it is very clear and makes issues easy to classify. Check out this article by MeasuringU on other scales and how they compare.
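A minimal sketch of this counting-and-ranking step, with entirely hypothetical issue names and severity values (lower number = more severe, loosely in the spirit of a 4-level scale like Dumas & Redish's):

```python
from collections import Counter

# One entry per observed occurrence across all sessions (hypothetical examples).
raw_issues = [
    "missed the save button",
    "confused by filter labels",
    "missed the save button",
    "missed the save button",
    "confused by filter labels",
    "could not find logout",
]

# Severity assigned manually per issue (1 = prevents task completion,
# 4 = minor annoyance). These values are illustrative only.
severity = {
    "missed the save button": 1,
    "confused by filter labels": 3,
    "could not find logout": 4,
}

counts = Counter(raw_issues)
# Most severe first; within the same severity, most frequent first.
ranked = sorted(counts.items(), key=lambda kv: (severity[kv[0]], -kv[1]))
for issue, n in ranked:
    print(f"severity {severity[issue]} x {n}: {issue}")
```

The sort key does the work: severity drives the primary order, and negating the count breaks ties so frequent issues surface first within each severity level.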

Tip #6. Detect hesitations, extract comments, summarise insights, sort and classify issues. It's a lot of work, I know..., but the reward is great.

Go over each issue and try to find a way to fix it; these solutions might not be the ones you end up using, but often it helps to understand the issue better.

If you've tested with a lot of users and have a lot of data, it might be useful to create some charts to better visualise the areas that need the most improvement and to communicate that to the client.
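Before charting, you usually need to aggregate the raw outcomes into one number per task. A small sketch, with made-up task names and outcomes; the resulting rates can be fed into any charting tool (a simple bar chart already tells the story):

```python
# Hypothetical task outcomes, one boolean per user:
# True = completed the task without help.
results = {
    "create invoice": [True, True, False, True, True],
    "export report": [True, False, False, False, True],
}

# Per-task completion rate, the value you'd plot on a bar chart.
completion_rate = {
    task: sum(outcomes) / len(outcomes) for task, outcomes in results.items()
}

# Print worst task first, so the biggest problem area leads the report.
for task, rate in sorted(completion_rate.items(), key=lambda kv: kv[1]):
    print(f"{task}: {rate:.0%} completed")
```

Sorting ascending by rate puts the weakest area at the top, which is also a sensible ordering for the chart itself.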

This process was conducted with UI mockups before development started, and again in the middle of the development cycle, to confirm that the product we were building was what users needed and wanted to use.

I hope you found this article interesting and learned something from it. Let us know @whitesmithco if you like this sort of article and what other topics you'd like us to write about!

See you soon, Tomás



Cover photo credits: The Commons

