So you want to make a great product? How can you be sure the solution you have is the right one for your users? You test it.
At its most basic, user research is the process of understanding user behavior, needs, and motivations.
User research places people (the end users) at the center of the design process and your products. It inspires your design and evaluates your solutions. This matters for a few reasons. First, the team (everyone involved in making the product: client, stakeholders, vendors, partners, etc.) needs to be aligned on a clearly defined goal. Second, the team should define what success will look like and how it will be measured. Third, the team would be wise to validate existing assumptions about the problem and the users. For some reason, this last one always seems to be the trickiest for teams to put into practice, even though it is no less critical to the success of the product. Far too often, teams create products as if they were the user.
TIP: If you want to be on the lookout for this behavior in your organization or team, it sounds like this: “I have a problem and I would solve it in this way. Let’s build a product that represents my solution and sell it to others with the same problem.”
This line of thinking isn’t necessarily wrong, but it is absolutely limiting. You are not the user, and the biggest problem with thinking you are is that it introduces bias. To create the best product, you need the best solution, which means you’ll have to understand all the people who experience the problem you want to solve and stay open to the solution being different (and often greater) than you imagined.
There are different methods and techniques teams use to discover user behavior, needs, and motivations. Which of these methods your team employs will largely depend on what you are trying to accomplish, what resources you have available, and where you are in the process. User research is part of a cycle. It should never be one-and-done, and in theory it could go on forever, continuing to inform improvements to products already on the market. The cycle looks like this:
Research > Design > Build > Test
Testing is itself a form of research, and it starts the cycle over again.
Let’s bring this cycle to life with a recent case study from our design team here at Digital Scientists. We believe in working together with our clients to hypothesize, experiment, and analyze with the end goal of creating simplicity for our users.
We worked with our client to define the following:
- What is the problem to be solved? (opportunity)
- Who are the people affected by the problem? (users)
- What type of product should we build to solve the problem? (goal)
We began with a deep dive into the landscape this product would enter. We identified current products in the market and in adjacent spaces and conducted a competitive analysis. This helped us identify patterns and gaps in what was currently being offered.
Based on those findings, our team began iterating, designing, and refining several concept solutions for our client. Eventually we landed on three strong concepts that independently satisfied the goal we set out to achieve. Where the three concepts differed was in how users would approach decision making on their path to a specific end result.
- Would they prefer a product that provided the fastest path to the end result?
- Or a product that offered many options to determine the best path to the end result?
- Or perhaps a product that was somewhere in between?
We were missing key information about what motivates our users in their decision-making process, so we were unable to determine which of these three concepts was the right solution. The only way to find out was to run a test.
We created a clickable prototype for each concept so we could run a concept test with live users and gather feedback on which solution was right. We reached out to our network with a screening survey to find people who fit the profile of our users. We identified ten people and offered them an incentive to come into our office and test our three prototypes. We used an online service called Lookback to record the sessions so we could watch them again and share them internally and with our client. Video evidence is a powerful tool for delivering insights, and hearing directly from test users helps clients make decisions with more confidence.
During the tests, we worked from a script containing necessary contextual information, a set of tasks for each concept prototype, and questions to investigate the behavior and approach around those tasks. We also added some deeper debriefing questions to really get to the core of what motivated these test users’ decisions on the path toward that defined end goal. Having a good script with the right questions is an integral part of the process. Poorly framed or leading questions return answers that limit and bias the research process and could ultimately lead to outcomes that don’t represent the user accurately.
After all the interviews had been conducted, it was time to sort through the data. We believe it is important to re-watch user interviews even when we conducted them in person. Watching a session a second time and taking a second set of notes can surface things that were missed the first time. We jotted down interesting observations, comments, questions, stated assumptions, etc. on Post-it Notes. Next, we dove into an affinitizing exercise, looking for patterns and themes. This process turned our nearly 150 Post-it Notes into four high-level feedback themes and a clear concept winner. This type of exercise ensures that all the feedback informs the recommendations and insights that follow.
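For teams that digitize their notes, the mechanics of this grouping step are simple to sketch. Here’s a minimal Python example with entirely hypothetical note text and theme tags (none of it comes from this study), showing how tagged feedback collapses into a handful of ranked themes:

```python
from collections import defaultdict

# Hypothetical digitized "Post-it Notes": (note text, theme tag).
# The tags come from the affinitizing pass, not from the users.
notes = [
    ("Wanted to compare plans side by side", "comparison"),
    ("Skipped the tutorial entirely", "onboarding"),
    ("Asked how results were ranked", "trust"),
    ("Re-sorted the options twice before choosing", "comparison"),
    ("Hesitated at the final confirmation step", "trust"),
]

# Group each note under its theme.
themes = defaultdict(list)
for text, tag in notes:
    themes[tag].append(text)

# Review the largest themes first.
for tag, grouped in sorted(themes.items(), key=lambda kv: -len(kv[1])):
    print(f"{tag} ({len(grouped)} notes)")
    for text in grouped:
        print(f"  - {text}")
```

Of course, the hard part is the human judgment of choosing the tags; the grouping itself is trivial once the tagging is done.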
To present these findings back to our client, we created a highlight reel of all the interesting and positive things test users had said about the product. Then we announced the winning concept. Since we came away with plenty of other insights related to the four high-level feedback themes, we created a deck of recommendations to address each of them. We were prepared not only to choose a concept to move forward with, but also to provide insights on ways to continually enhance that concept into an even better product.
We achieved our goal. We discovered that the right solution was the concept that offered many options to determine the best path to the end result. In the process, we learned why that was the right solution: comparing options when making a decision was directly tied to the user’s level of confidence in that decision as they pursued the end result. This was a massive insight for both our design team and our client, and one we wouldn’t have gained had we not run a test.
INSIGHT: This debunked a key stakeholder assumption that users were going to prefer a product that provided the fastest path to the end result. The assumption might have been rooted in logic, but it was still an assumption. The concept test proved otherwise.
Design is an iterative process. These insights informed an almost entirely new design for the product. We started with the winning concept from the test, then combined it with client-approved recommendations drawn from the high-level feedback themes. Additionally, we created features inside the product to support the insight that comparing options gives users a higher level of confidence in their decisions. Everyone was pleased with the new product design. What do we do next? We are now ready to build a functional prototype and conduct a usability test to validate the MVP design.
This process left us wondering:
- What would have happened if we’d never asked the question that led to the concept test?
- What would have happened if we’d asked the question sooner?
- How might the product be affected in either case?
We don’t have to live in a world of “what ifs” when we can use research to discover and validate user behavior, needs, and motivations. What we can do is continue to learn and improve just as the cycle suggests. If you or your team find yourselves facing “what ifs” on the road to creating a great product, know that the best next step is to test it. You’ll pass with flying colors.
Want to learn more about how we approach research at Digital Scientists? Or want to work with us to leverage insights on your product? Check this out.