Contextual inquiry is a UX research method where you shadow people as they do their job (or leisure tasks), allowing you to ask questions in context. This video provides advice on overcoming the main challenges with this method.
"Test early and often" is a key recommendation for UX research. Dora Brune shares her approach, including regular Open Test Labs to engage more product teams and make user research more agile. Kinder Eggs make for a nice warmup task, even in remote tests. (Recorded at a participant panel at the UX Conference.)
Good UX design requires understanding the context and patterns of human behavior, especially in new products or features that solve real needs. The 5 steps to rapid corporate ethnography lead you to these discoveries.
Know the inherent biases in your recruiting process and avoid them in order to recruit study participants that are representative of your target audience.
In the early stages of a UX-design project, recruit enough people to gain an in-depth understanding of users’ experiences and needs. The number of people needed for an interview study is often smaller than you think.
Unsure where to start? Use this collection of links to our articles and videos to learn about ethnographic methods like field studies and diary studies — methods that help you learn about your user’s context.
Communicating UX work and findings to the full team, stakeholders, and leadership requires engaging deliverables. Amanda Gulley shared her experience improving the design and usability of UX deliverables at a UX Conference participant panel.
Two user research methods allow you to quickly test a large number of design alternatives, thus accelerating UX innovation. Rapid iterative design and within-subjects testing of multiple alternate designs aren't for every project, but are great when they do apply.
Improve design decisions by looking at the problem from multiple points of view: combine multiple types of data or data from several UX research methods.
Unsure where to start? Use this collection of links to our articles and videos to learn about quant research, quant usability testing, analytics, and analyzing data.
For each research or design method you employ, create a document that defines this method and can be used to educate other team members on UX activities.
To gain a holistic picture of your users, exchange data with the non-UX teams in your company who are collecting other forms of customer data, besides the user research you do yourself. You gain; they gain.
We compare the budgets needed for different kinds of qualitative user research: in-person usability testing vs. remote studies run by software (unmoderated) or run by a human moderator.
Qualitative usability testing aims to identify issues in an interface, while quantitative usability testing is meant to provide metrics that capture the behavior of your whole user population.
Usability testing can yield valuable insights about your content. Make sure you test with the correct users, carefully craft the tasks, and ask the right follow-up questions.
Qualitative and quantitative are both useful types of user research, but involve different methods and answer different questions for your UX design process. Use both!
Asking users to keep a diary over a fairly long period is a great way to research customer journeys or other bigger-scope issues in user experience that go beyond a single interaction.
What is the difference between a field study, an ethnographic study, and a contextual inquiry in a user experience design project? Not much: the important distinction is between field methods and lab-based user research.
Locating features or content on a website or in an app happens in two different ways: finding (users look for the item) and discovering (users come across the item). Both are important, but require different user research techniques to evaluate.
Learn how to run a remote moderated usability test. This second video covers how to actually facilitate the session with the participant and how to end with debrief, incentive, and initial analysis with your team.
In remote usability studies, it's hard to identify test participants who should not be in the study because they don't fit the profile or don't attempt the task seriously. This is even harder in unmoderated studies, but it can (and should) be done.
Focus groups and surveys study users' opinions, not actual behavior, so they are misleading for the design of interactive systems like websites. Automated usability measures are just as misleading.
How to collect usability data from site users, using a historical archive as the case study. Keep surveys simple, collect data from real-world usage, and get feedback from friends of the site.
Website usage must be tracked to plan server capacity needs and future business models. Examples show use of regression statistics to predict future traffic patterns.
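The regression forecasting the blurb mentions can be sketched in a few lines. The monthly page-view figures below are invented for illustration; because web traffic often grows exponentially, the sketch fits an ordinary-least-squares line to the logarithm of the counts and extrapolates forward.

```python
import math

# Hypothetical monthly page-view counts (in thousands); illustrative data only
months = list(range(1, 13))
views = [20, 24, 27, 33, 38, 45, 51, 60, 68, 79, 90, 104]

# Fit log(views) = intercept + slope * month with ordinary least squares,
# which models exponential growth in the raw counts
ys = [math.log(v) for v in views]
n = len(months)
x_mean, y_mean = sum(months) / n, sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(months, ys))
         / sum((x - x_mean) ** 2 for x in months))
intercept = y_mean - slope * x_mean

# Extrapolate six months ahead to plan server capacity
forecast = [math.exp(intercept + slope * m) for m in range(13, 19)]
print([round(f, 1) for f in forecast])
```

A log-linear fit like this is only one possible model; real capacity planning would also check the residuals and seasonality before trusting the extrapolation.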
Focus groups can be a powerful tool in system development, but they should not be the only source of information about user behavior. In interactive systems development, the proper role of focus groups is not to assess interaction styles or design usability, but to discover what users want from the system.
Discount usability engineering is our only hope. We must evangelize methods simple enough that departments can do their own usability work, fast enough that people will take the time, and cheap enough that it's still worth doing. The methods that can accomplish this are simplified user testing with one or two users per design and heuristic evaluation.
Participants in a course on usability inspection methods were surveyed 7-8 months after the course. Factors which influenced adoption were cost, rated benefit of the method, relevance to current projects, and whether the methods had active evangelists.
Extensive usability testing was conducted to guide the 1995 design of Sun Microsystems' website. This series of articles describes in detail the methods and findings of the design team.
Heuristic evaluation is a good method of identifying both major and minor problems with an interface, but the lists of usability problems found by heuristic evaluation will tend to be dominated by minor problems, which is one reason severity ratings form a useful supplement to the method.
Usability inspection is the generic name for a set of methods that are all based on having evaluators inspect a user interface. Typically, usability inspection is aimed at finding usability problems in the design, though some methods also address issues like the severity of the usability problems and the overall usability of an entire system.
Rating usability problems according to their severity facilitates the allocation of resources to fix the most serious problems. Severity ratings are a combination of frequency, impact, and persistence.
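The blurb names three contributing factors but not how they combine, so the 0–4 scale, the equal weighting, and the example problems below are purely illustrative assumptions, not the article's actual formula.

```python
def severity(frequency: int, impact: int, persistence: int) -> float:
    """Assumed combination: average of three 0-4 ratings; higher = fix sooner."""
    for name, value in [("frequency", frequency), ("impact", impact),
                        ("persistence", persistence)]:
        if not 0 <= value <= 4:
            raise ValueError(f"{name} must be in 0..4, got {value}")
    return (frequency + impact + persistence) / 3

# Invented example problems to show how scores drive prioritization
problems = {
    "label overlaps input field": severity(4, 1, 1),  # common but cosmetic
    "data lost on session timeout": severity(1, 4, 4),  # rare but severe
}
for name, score in sorted(problems.items(), key=lambda p: -p[1]):
    print(f"{name}: {score:.1f}")
```

Sorting by the combined score puts the rare-but-catastrophic problem ahead of the frequent-but-cosmetic one, which is the resource-allocation point the summary makes.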
A summary of statistics for 13 usability laboratories in 1994, an introduction to the main uses of usability laboratories in usability engineering, and a survey of some of the issues related to practical use of user testing and computer-aided usability engineering.
This essay describes a technique for extending a task analysis based on the principle of goal composition. Basically, goal composition starts by considering each primary goal that the user may have when using the system. A list of possible additional features is then generated by combining each of these goals with a set of general meta-goals that extend the primary goals.
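The generative step of goal composition is essentially a cross product: every primary goal is paired with every meta-goal, and each pairing prompts a candidate feature. The specific goals and meta-goals below are invented for illustration.

```python
from itertools import product

# Hypothetical primary goals a user brings to the system
primary_goals = ["write a document", "send a message"]

# Hypothetical general meta-goals that extend any primary goal
meta_goals = ["do it faster next time", "undo a mistake",
              "repeat it for many items"]

# Goal composition: each (primary, meta) pair suggests a feature question,
# e.g. "write a document + undo a mistake" suggests version history
feature_prompts = [f"{goal} + {meta}"
                   for goal, meta in product(primary_goals, meta_goals)]
for prompt in feature_prompts:
    print(prompt)
```

The value of the technique is that the 2 × 3 = 6 combinations are exhaustive over the chosen lists, so the team considers extensions it might otherwise overlook.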