Overview
Every client and organization brings its own definition of user research, shaped by experience and maturity. This guide maps out the full range of testing methods I draw from, so we have a shared vocabulary before deciding how to investigate.
The right tool for the task at hand
Expert-Based Evaluation
Heuristic Evaluation
A systematic inspection of the user interface design against recognized usability principles (heuristics). Experts evaluate the interface independently and compare findings. Measures compliance with established design principles and identifies potential usability problems.
Cognitive Walkthrough
Evaluators work through a series of tasks from a user's perspective, focusing on cognitive processes required to complete each task. Measures learnability and identifies areas where users might struggle with task completion.
The method involves the evaluator asking four simple questions at each step of a specific user journey:
- Will the user try to achieve the right result?
- Will the user notice that the correct action is available?
- Will the user associate the correct action with the effect they're trying to achieve?
- After the action is performed, will the user see that progress is being made toward that goal?
User-Based Testing
Usability Testing
Observation of real users completing specific tasks while thinking aloud. Measures ease of use, task completion rates, time on task, and user satisfaction. Can be conducted as:
- Moderated — Conducted with a facilitator present
- Unmoderated — Users complete tasks independently
- Remote — Conducted online
- In-person — Conducted in a lab or specific location
Eye Tracking
Tracks users' eye movements while they interact with the interface. Measures visual attention, scanning patterns, and areas of interest — helping understand what users actually look at and for how long.
First Click Testing
Users indicate where they would click first to complete a given task. Measures intuitive navigation and information architecture effectiveness.
Preference Testing
A quick research method used to determine which design option users prefer when presented with two or more variations. It primarily focuses on subjective appeal, aesthetics, and initial impressions.
Information Architecture Testing
Card Sorting
Users organize content items into groups that make sense to them. Can be:
- Open Sort — Users create and name their own categories
- Closed Sort — Users sort items into predefined categories
Measures how users expect content to be organized and labeled.
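
Open-sort results are commonly aggregated into a co-occurrence (similarity) matrix showing how often participants placed two items in the same group. A minimal sketch of that aggregation, with hypothetical sort data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical open-sort results: each participant's groupings
sorts = [
    [{"Pricing", "Plans"}, {"Docs", "Tutorials", "API reference"}],
    [{"Pricing", "Plans", "Docs"}, {"Tutorials", "API reference"}],
    [{"Pricing", "Plans"}, {"Docs", "API reference"}, {"Tutorials"}],
]

# Count how often each pair of items landed in the same group
pairs = Counter()
for participant in sorts:
    for group in participant:
        for pair in combinations(sorted(group), 2):
            pairs[pair] += 1

for (a, b), count in pairs.most_common(3):
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```

Pairs that most participants group together are strong candidates for living under the same navigation category.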
Tree Testing
Users attempt to locate items in a stripped-down site structure. Measures the effectiveness of your site's navigation hierarchy without visual design influence.
Quantitative Testing
A/B Testing
Compares two versions of a page element to see which performs better. Measures specific metrics like conversion rates, click-through rates, or time on page.
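
Whether a difference between variants is real or just noise is a statistics question. Below is a minimal sketch of one common approach, a two-proportion z-test on conversion counts, using only the standard library; the traffic and conversion figures are hypothetical:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Test whether variant B's conversion rate differs from A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: 120/2400 conversions (A) vs 151/2380 (B)
z, p = two_proportion_z_test(120, 2400, 151, 2380)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests a real difference
```

In practice, experimentation platforms handle this for you, along with sample-size planning; the sketch just shows what "performs better" means statistically.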
Multivariate Testing
Tests multiple variables simultaneously to determine optimal combinations. Measures how different elements interact with each other to influence user behavior.
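
One practical consequence worth illustrating: the number of variants grows multiplicatively, so multivariate tests need far more traffic than A/B tests to reach significance. A quick sketch, with all content names hypothetical:

```python
from itertools import product

# Hypothetical test: 3 headlines x 2 hero images x 2 CTA labels
headlines = ["Save time", "Save money", "Work smarter"]
images = ["team.jpg", "product.jpg"]
ctas = ["Start free trial", "Book a demo"]

variants = list(product(headlines, images, ctas))
# 3 * 2 * 2 = 12 variants, each needing enough traffic on its own
print(f"{len(variants)} combinations to test")
for headline, image, cta in variants[:3]:
    print(headline, "|", image, "|", cta)
```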
Web Analytics
Collection and analysis of website usage data, including:
- Traffic patterns
- User flows
- Page performance
- User behavior
- Conversion funnels
Measures various aspects of user behavior and website performance at scale.
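
As a small illustration of the kind of analysis a conversion funnel enables, here is a sketch computing step-to-step drop-off from per-step user counts; all numbers are hypothetical:

```python
# Hypothetical funnel: step name -> unique users who reached it
funnel = [
    ("Landing page", 10_000),
    ("Product page", 4_200),
    ("Add to cart", 900),
    ("Checkout", 410),
    ("Purchase", 290),
]

# Step-to-step conversion highlights where users drop off
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.1%} continue")

print(f"Overall conversion: {funnel[-1][1] / funnel[0][1]:.2%}")
```

The sharpest drop (here, product page to cart) is where qualitative methods like usability testing are most worth pointing.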
Heat Mapping
Visual representation of where users click, move, and scroll on your pages. Measures user engagement and interaction patterns across your website.
Technical Testing
Performance Testing
Evaluation of website speed, responsiveness, and stability under various conditions (a minimal timing sketch follows the list). Measures:
- Load times
- Server response times
- Resource usage
- Scalability
- Stability under stress
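
To make "load time" concrete, here is a minimal sketch timing a single request with the standard library; the URL is a placeholder, and real performance testing relies on dedicated tooling with many samples under controlled load:

```python
import time
import urllib.request

URL = "https://example.com/"  # placeholder target

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    body = response.read()
elapsed = time.perf_counter() - start

print(f"HTTP {response.status}: {len(body)} bytes in {elapsed:.3f}s")
```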
Cross-Browser / Device Testing
Testing website functionality and appearance across different browsers, devices, and screen sizes. Measures consistency of user experience across platforms.
Accessibility Testing
Evaluation of website usability for people with disabilities. Measures compliance with accessibility standards (WCAG) and actual usability for users with various disabilities.
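
Automated checks catch only a fraction of accessibility issues, but they make a cheap first pass before testing with real assistive-technology users. A minimal sketch of one narrow check, flagging images that fail WCAG 1.1.1 by omitting alt text, assuming the beautifulsoup4 package is available:

```python
from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed

# Hypothetical markup; alt="" is valid for decorative images,
# but a missing alt attribute fails WCAG 1.1.1 (Non-text Content)
html = """
<img src="chart.png">
<img src="logo.png" alt="Acme Corp logo">
<img src="spacer.gif" alt="">
"""

soup = BeautifulSoup(html, "html.parser")
for img in soup.find_all("img"):
    if img.get("alt") is None:
        print("Missing alt attribute:", img.get("src"))
```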
Feedback-Based Methods
Surveys and Questionnaires
Structured collection of user feedback through various question types. Measures user satisfaction, preferences, and demographic information.
User Interviews
In-depth conversations with users about their experiences, needs, and pain points. Measures qualitative aspects of user experience and gathers detailed feedback.
Contextual Inquiry / Ethnographic Studies
Qualitative research where researchers observe users performing tasks within their natural environment — workplace, home — to gain a deep understanding of behaviors, context, challenges, workflow, and motivations as they naturally occur. It often involves observation combined with targeted questions asked "in context."
Focus Groups
Moderated discussions with groups of users about their experiences and expectations. Measures collective user opinions and generates ideas for improvement.
I've included this method because it's frequently requested, but it carries several challenges worth understanding before you reach for it.
Focus Group Caveats
- Groupthink — Participants may conform to dominant opinions or avoid dissent.
- Moderator influence — Poor facilitation can bias responses.
- Limited generalizability — Small, non-random samples don't represent broader populations.
- Social desirability bias — People may give answers they think are acceptable or expected.
- Dominant voices — Some participants may overshadow quieter ones.
Common Biases
- Conformity bias — Tendency to agree with the group or influential members.
- Moderator bias — Leading questions or reactions sway participants' responses.
- Sampling bias — The group may not represent the target population.
- Response bias — Participants may withhold honest feedback to avoid judgment.