Usability testing


Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system. It is concerned chiefly with the intuitiveness of the product's design and is carried out with users who have had no prior exposure to it. Such testing is paramount to the success of an end product, since a fully functioning application that confuses its users will not last for long. This contrasts with usability inspection methods, in which experts use various techniques to evaluate a user interface without involving users.
Usability testing focuses on measuring a human-made product's capacity to meet its intended purposes. Examples of products that commonly benefit from usability testing are food, consumer products, websites or web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human–computer interaction studies attempt to formulate universal principles.

What it is not

Simply gathering opinions on an object or a document is market research or qualitative research rather than usability testing. Usability testing usually involves systematic observation under controlled conditions to determine how well people can use the product. However, often both qualitative research and usability testing are used in combination, to better understand users' motivations/perceptions, in addition to their actions.
Rather than showing users a rough draft and asking, "Do you understand this?", usability testing involves watching people try to use something for its intended purpose. For example, when testing instructions for assembling a toy, the test subjects should be given the instructions and a box of parts and asked to assemble the toy, rather than merely commenting on the parts or materials. Instruction phrasing, illustration quality, and the toy's design all affect the assembly process.

History

Usability testing did not start with websites and apps. It first emerged through the study of how people used machines in the 1940s, such as airplane controls during World War II. Later, in the 1980s, when personal computers became the norm, a new field known as Human-Computer Interaction established usability testing as a standard practice in technology design.
In the 1990s, people began setting up special labs where they could observe individuals using computers and identify areas where they were experiencing difficulties. As the internet and smartphones gained popularity in the 2000s and 2010s, usability testing expanded to websites and apps. It began to occur online, allowing companies to test with people from anywhere.
Usability testing has become a standard part of product design, particularly in fast-moving technology environments. New tools, remote testing, and emerging technologies, such as AI and virtual reality, make the process faster and more sophisticated. Still, the objective remains the same: to make technology easier for real people.

Usability labs

Usability labs, in contrast to field testing or remote testing, are spaces designed specifically for conducting usability testing. They provide conditions conducive to testing through access to resources, testing equipment, and a dedicated space which can be formatted and upgraded according to the specifications of the usability test. A typical usability lab often includes a desk, chair, and computer, along with any additional elements specific to the test.

Methods

Setting up a usability test involves carefully creating a scenario, or realistic situation, in which the person performs a list of tasks using the product being tested while observers watch and take notes. Usability testing follows a structured process that allows researchers to observe how users interact with a product while performing tasks. In addition to direct observation, researchers may use several other test instruments, such as scripted instructions, paper prototypes, and pre- and post-test questionnaires, to gather feedback on the product being tested. For example, to test the attachment function of an e-mail program, a scenario would describe a situation in which a person needs to send an e-mail attachment, and the participant would be asked to undertake this task. The aim is to observe how people function in a realistic manner, so that developers can identify the problem areas and fix them. Techniques popularly used to gather data during a usability test include the think aloud protocol, co-discovery learning, and eye tracking.
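The task-based structure described above can be sketched in code. The following is an illustrative sketch only, not a standard tool: names such as `SessionRecorder` and `TaskResult` are hypothetical, and the metrics (task completion and time on task) are two of the measures observers commonly record.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    """One task outcome for one participant (hypothetical structure)."""
    task: str
    completed: bool
    seconds: float
    notes: list = field(default_factory=list)

class SessionRecorder:
    """Records per-task outcomes during a moderated usability session."""

    def __init__(self, participant_id):
        self.participant_id = participant_id
        self.results = []
        self._task = None
        self._start = None

    def start_task(self, task):
        # Mark the moment the participant begins the scenario task.
        self._task = task
        self._start = time.monotonic()

    def end_task(self, completed, notes=None):
        # Record elapsed time and whether the task was completed.
        elapsed = time.monotonic() - self._start
        self.results.append(TaskResult(self._task, completed, elapsed, notes or []))

    def completion_rate(self):
        # Fraction of attempted tasks the participant completed.
        if not self.results:
            return 0.0
        return sum(r.completed for r in self.results) / len(self.results)
```

In the e-mail attachment example, an observer would call `start_task("Send an e-mail with an attachment")`, watch the participant work, and then call `end_task(...)` with the outcome and any observation notes.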

Guerrilla usability testing

Guerrilla usability testing, also known as hallway testing or pop-up research, is a quick and cheap method of usability testing that consists of short, informal interviews in public spaces frequented by the people most likely to use the product or service.
This unorthodox method is primarily used in the early stages of a design process to receive direct and immediate feedback from a wide cross-section of the general public; significantly cutting the cost and testing time required in traditional testing. Guerrilla testing can help designers to identify core usability problems with the product and target "specific user groups that may be difficult to reach - for example, care home residents, homeless people or A level students."
This type of testing is an example of convenience sampling, and the results are therefore potentially biased. Limitations of this method include incomplete data, a lack of willing participants, and the need to pair it with other methods of usability testing to produce more detailed results.

Remote usability testing

In a scenario where usability evaluators, developers and prospective users are located in different countries and time zones, conducting a traditional lab usability evaluation creates challenges both from the cost and logistical perspectives. These concerns led to research on remote usability evaluation, with the user and the evaluators separated over space and time. Remote testing, which facilitates evaluations being done in the context of the user's other tasks and technology, can be either synchronous or asynchronous. The former involves real-time one-on-one communication between the evaluator and the user, while the latter involves the evaluator and user working separately. The increasing need for remote testing stems from its capacity to improve accessibility to essential services and communication for individuals with limited mobility, due to factors such as susceptibility to illness, disability, or limited transportation resources. Numerous tools are available to address the needs of both these approaches.
Synchronous usability testing methodologies involve video conferencing or employ remote application sharing tools such as WebEx, a commonly used technology for conducting a synchronous remote usability test. This form of remote testing allows for real-time communication between moderators and participants, which is valuable to older adults or individuals who are homebound due to health, mobility, or environmental conditions. Unlike traditional usability testing, remote testing is able to reach participants who face these complications. As dependency on remote services such as telemedicine, online shopping, and remote banking continues to grow, moderated remote usability testing plays a crucial role in ensuring these technologies meet the needs of high-risk populations while being cost-efficient.
However, synchronous remote testing may lack the immediacy and sense of "presence" desired to support a collaborative testing process. Moreover, managing interpersonal dynamics across cultural and linguistic barriers may require approaches sensitive to the cultures involved. Other disadvantages include having reduced control over the testing environment and the distractions and interruptions experienced by the participants in their native environment. One of the newer methods developed for conducting a synchronous remote usability test is by using virtual worlds.
Asynchronous methodologies include automatic collection of users' click streams, user logs of critical incidents that occur while interacting with the application, and subjective feedback on the interface by users. Similar to an in-lab study, an asynchronous remote usability test is task-based, and the platform allows researchers to capture data automatically by auto-logging, which collects pages visited, time spent on each page, and interface actions. For many large companies, this allows researchers to better understand visitors' intents when visiting a website or mobile site. The tests are carried out in the user's own environment, further simulating real-life scenario testing. By eliminating the need to conduct individual sessions, asynchronous remote testing can include a larger number of participants, making it more flexible and cost-effective than traditional lab-based studies. Conducting usability testing asynchronously has also become prevalent, as it allows testers to provide feedback in their free time and from the comfort of their own home.
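The auto-logging described above can be illustrated with a minimal sketch, assuming the platform records a chronological click stream of (timestamp, page) events per visit; the function name `time_per_page` and the session-end marker are assumptions for illustration, not part of any particular tool.

```python
from collections import defaultdict

def time_per_page(events):
    """Derive time spent on each page from a chronological click stream.

    events: list of (timestamp_seconds, page_url) pairs for one visit,
    ending with a session-end marker (timestamp, None) so the last
    page view can be closed.
    """
    totals = defaultdict(float)
    # Each page's dwell time is the gap until the next event.
    for (t0, page), (t1, _next_page) in zip(events, events[1:]):
        totals[page] += t1 - t0
    return dict(totals)
```

For example, a visit logged as `[(0.0, "/home"), (12.5, "/search"), (40.0, "/product/42"), (55.0, None)]` yields 12.5 s on `/home`, 27.5 s on `/search`, and 15.0 s on `/product/42`.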

Expert review

Expert review is another general method of usability testing. As the name suggests, this method relies on bringing in experts with experience in the field to evaluate the usability of a product.
A heuristic evaluation or usability audit is an evaluation of an interface by one or more human factors experts. Evaluators measure the usability, efficiency, and effectiveness of the interface based on usability principles, such as the 10 usability heuristics originally defined by Jakob Nielsen in 1994.
Nielsen's usability heuristics, which have continued to evolve in response to user research and new devices, include:
  • Visibility of system status
  • Match between system and the real world
  • User control and freedom
  • Consistency and standards
  • Error prevention
  • Recognition rather than recall
  • Flexibility and efficiency of use
  • Aesthetic and minimalist design
  • Help users recognize, diagnose, and recover from errors
  • Help and documentation
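A heuristic evaluation typically produces a list of findings, each tagged with a violated heuristic and a severity rating; Nielsen's commonly used severity scale runs from 0 (not a problem) to 4 (usability catastrophe). The sketch below, with the assumed function name `summarize_findings`, shows one way evaluators' findings might be tallied and ranked; it is an illustration, not a standard tool.

```python
def summarize_findings(findings):
    """Rank heuristics by their worst reported severity.

    findings: (heuristic, severity, description) triples collected by
    one or more evaluators; severity follows Nielsen's 0-4 scale.
    """
    worst = {}
    for heuristic, severity, _description in findings:
        # Keep the most severe rating reported for each heuristic.
        worst[heuristic] = max(worst.get(heuristic, 0), severity)
    # Most severe heuristics first, so fixes can be prioritized.
    return sorted(worst.items(), key=lambda kv: kv[1], reverse=True)
```

Ranking by worst severity (rather than, say, counting findings) reflects the usual goal of a usability audit: surfacing the problems most likely to block users.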