The recommended methods were selected as generally applicable and
easy to learn and apply, even in organisations with limited usability expertise.
The recommended methods assume that the requirements are well-understood.
If this is not the case, additional methods such as focus
groups, observing users in field studies,
task analysis and task
allocation should be used.
Where appropriate skills are available, user-based evaluation should
be complemented by expert and heuristic evaluation.
These and other more specialised methods are described below.
This technique is a means of defining and managing the user-centred
design activities that will take place during the development of
a product or system. For each activity a task manager is appointed,
an appropriate technique is selected and a schedule is specified.
The usability plan is a living document, and undergoes regular reviews
as the project progresses.
A focus group brings together a cross-section of stakeholders in
the context of a facilitated but informal discussion group. Views
are elicited by the facilitator on topics of relevance to the software
product being evaluated. Focus groups are often used to identify
initial requirements, but they can also serve as a means of collecting
feedback once a system has been in use or has been placed on field
trials for some time. Focus groups give more limited information
than a field study or user-based evaluation, and focus
groups should not be used to replace evaluation by individual users.
Whenever possible, the design team should arrange a field study
to observe how users currently work. This can provide an in-depth
understanding of the users' needs and working environment, and provide
a solid foundation for design. This information is often difficult
or impossible to obtain by any other means.
A summary of the technique can be found in the RESPECT
Handbook. For more detail, the book
on Contextual Design by Beyer and Holtzblatt is recommended.
A functionality matrix can be used to specify the system functions
that each user will require for the different tasks that they perform.
The most critical task functions are identified so that more attention
can be paid to them during usability testing later in the design
process. This method is useful for systems where the number of possible
functions is high (e.g. in a generic software package) and where
the range of tasks that the user will perform is well specified.
In these situations, the functionality matrix can be used to trade-off
different functions, or to add and remove functions depending on
their value for supporting specific tasks. It is also useful for
multi-user systems to ensure that the tasks of each user type are supported.
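A functionality matrix can be sketched as a simple table of functions against tasks. The sketch below is purely illustrative (the tasks, functions and ratings are hypothetical, not from the source); it shows how such a matrix supports the trade-offs described above: finding the functions most critical for usability testing, and the functions no task needs.

```python
# Illustrative functionality matrix: rows are system functions, columns
# are user tasks, each cell records the function's value for that task.
CRITICAL, USEFUL, UNUSED = 2, 1, 0

matrix = {
    "spell check":   {"draft report": CRITICAL, "review report": USEFUL,   "archive": UNUSED},
    "track changes": {"draft report": USEFUL,   "review report": CRITICAL, "archive": UNUSED},
    "export to PDF": {"draft report": UNUSED,   "review report": USEFUL,   "archive": CRITICAL},
}

def critical_functions(matrix):
    """Functions rated critical for at least one task:
    prioritise these during usability testing."""
    return sorted(f for f, tasks in matrix.items()
                  if any(v == CRITICAL for v in tasks.values()))

def unused_functions(matrix):
    """Functions valuable for no task: candidates for removal."""
    return sorted(f for f, tasks in matrix.items()
                  if all(v == UNUSED for v in tasks.values()))

print(critical_functions(matrix))  # ['export to PDF', 'spell check', 'track changes']
print(unused_functions(matrix))    # []
```

For a generic software package the matrix would have many more rows; the same two queries then directly yield the test plan priorities and the removal candidates.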
Storyboards are sequences of images which demonstrate the relationship
between individual events (e.g. screen outputs) and actions within
a system. A typical storyboard will contain a number of images depicting
features such as menus, dialogue boxes and windows. The formation
of these screen representations into a sequence conveys further
information regarding the possible structures, functionality and
navigation options available. The storyboard can be shown to colleagues
in a design team as well as potential users, allowing others to
visualise the composition and scope of possible interfaces.
Task analysis is used to identify what a user is required to do
in terms of actions and/or cognitive processes to achieve a task.
A detailed task analysis can be conducted to understand the current
system and the information flows within it. These information flows
are important to the maintenance of the existing system and must
be incorporated or substituted in any new system. Task analysis
makes it possible to design and allocate tasks appropriately within
the new system. The functions to be included within the system and
the user interface can then be accurately specified. For simple
systems, tasks can be identified by questioning users, and tasks
can be sorted and grouped using Post-it notes. For more complex
systems a more structured method may be beneficial.
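For a structured representation, a task hierarchy can be recorded as nested subtasks whose leaves are observable user actions. The example below is a hypothetical analysis of "send an email" (illustrative only), showing how the hierarchy flattens into the action sequence a user must perform:

```python
# Hypothetical hierarchical task analysis: each task is a name plus an
# ordered list of subtasks; a leaf (empty subtask list) is a user action.
task = ("send an email", [
    ("compose message", [
        ("open new-message window", []),
        ("enter recipient", []),
        ("write body text", []),
    ]),
    ("send message", [
        ("press send button", []),
    ]),
])

def leaf_actions(task):
    """Flatten the hierarchy into the ordered sequence of user actions."""
    name, subtasks = task
    if not subtasks:
        return [name]
    actions = []
    for sub in subtasks:
        actions.extend(leaf_actions(sub))
    return actions

print(leaf_actions(task))
# ['open new-message window', 'enter recipient', 'write body text', 'press send button']
```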
A successful system depends on the effective allocation of tasks
between the system and the users. Different task allocation options
may need to be considered before specifying a clear system boundary.
A range of options is explored to identify the optimal division
of labour, providing job satisfaction and efficient operation of
the whole work process. The approach is most useful for systems
which affect whole work processes rather than single-user, single-task applications.
Expert evaluation involves a usability expert inspecting a system
to identify any usability problems. Further information on appropriate
methods such as heuristic evaluation can be found in the INUSE Handbook.
This is a participatory technique in which designers attend a workshop
with analysts and HCI specialists (who act as facilitators) to examine
the ergonomic issues associated with the system being developed
and scope the work required to develop solutions based on the contents
of the ISO 9241 standard. This standard contains the best and most
widely agreed body of software ergonomics advice. In particular
the processes recommended in Part 1 and Annex 1 of parts 12-17 of
the standard ensure a systematic evaluation of each clause to check
its applicability to the particular system(s) under consideration.
The combination of these processes and recommendations is used to
ensure that the principles of software ergonomics have been considered
in the development of a system. This approach supports (and may
supersede) the use of style guides.
The method ensures that a product conforms to ISO 9241 and thus
embodies good ergonomic principles. A software product is assessed
for conformance to the relevant requirements as detailed in the
ISO 9241 standard: Ergonomic Requirements for Office Work with Visual
Display Terminals (VDTs). Developers provide documentary evidence regarding
their processes and one or more auditors examine these documents
and interview relevant staff.
This method allows designers to create a video-based simulation
of interface functionality using simple materials and equipment.
As with paper-prototyping, interface elements are created using
paper, pens, acetates etc. Video equipment is then used to film
the functionality of the interface. For example, a start state for
the interface is recorded using a standard camcorder. The movements
of a mouse pointer over menus may then be simulated by stopping
and starting the camcorder as interface elements are moved, taken
away and added. Users do not directly interact with the prototype
in this approach; however, they can view and comment on the completed
video-based simulation. This variant on paper-prototyping is particularly
suited for simulating the dynamic character of a simple interface
mock-up and can be used during the early stages of the design cycle
to demonstrate design options and concepts to an audience.
This variant of computer-based prototyping involves a user interacting
with a computer system which is actually operated by a hidden developer
- referred to as the 'wizard'. The wizard processes input from a
user and simulates system output. During this process the user is
led to believe that they are interacting directly with the system.
This form of prototyping is beneficial early on in the design cycle
and provides a means of studying a user's expectations and requirements.
The approach is particularly suited to exploring design possibilities
in systems which are demanding to implement such as those that feature
intelligent interfaces incorporating agents, advisors and/or natural language interaction.
It is often helpful to develop possible system concepts with a
parallel process in which several different designers work out possible
designs. The aim is to develop and evaluate different system ideas
before settling on a single approach as a basis for the system.
When designers have completed their designs, it is likely that they
will have approached the problem in radically different ways, giving
rise to very different designs. It is then possible to
combine designs and take the best features from each. Parallel design
is most useful for novel systems where there are no established guidelines
for how best the system should operate. Although parallel design
might at first seem like an expensive approach, since many ideas
are generated without implementing them, it is a very cheap way
of exploring the range of possible system concepts and selecting
the probable optimum.
The measurement of cognitive workload involves assessing how much
mental effort a user expends whilst using a product to accomplish
a task. This information can be obtained by a number of means, such
as the Subjective Workload Assessment Technique (SWAT), which is based
on three rating-scale dimensions: time load, mental effort load and
psychological stress load. There are also questionnaires for evaluating
subjective perceptions of effort. Cognitive workload complements
other subjective measures, and is particularly useful information
when the user is expected to be over- or under-loaded.
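The three SWAT dimensions can be combined into a single workload figure. The sketch below is deliberately simplified: the real technique derives an interval scale through conjoint analysis, whereas this illustration just averages the three ratings (1 = low, 2 = medium, 3 = high) and rescales to 0-100.

```python
# Simplified SWAT-style composite: equal-weight average of the three
# ratings, rescaled so that all-low maps to 0 and all-high maps to 100.
# (The real SWAT scale is derived by conjoint analysis, not a plain mean.)
def workload_score(time_load, mental_effort_load, psychological_stress_load):
    for r in (time_load, mental_effort_load, psychological_stress_load):
        if r not in (1, 2, 3):
            raise ValueError("each rating must be 1 (low), 2 (medium) or 3 (high)")
    mean = (time_load + mental_effort_load + psychological_stress_load) / 3
    return round((mean - 1) / 2 * 100)  # 1 -> 0, 3 -> 100

print(workload_score(1, 1, 1))  # 0: minimal workload
print(workload_score(3, 2, 3))  # 83: heavily loaded user
```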
Measuring the Usability of Systems in Context
The European MUSiC project developed usability
context analysis and usability metrics including the User Performance
Measurement method (see below) and the SUMI attitude questionnaire.
User Performance Measurement method
This is a detailed procedure for usability
testing that provides reliable metrics for effectiveness and
efficiency as defined in ISO
9241-11. It includes a procedure for scoring effectiveness based
on the impact of errors and omissions. An overview is contained
in the paper The MUSiC Performance Measurement Method.
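The relationship between the metrics can be illustrated with the formulae commonly cited for the MUSiC method (stated here as an assumption for illustration, not as the method's full scoring procedure): task effectiveness combines the quantity of the task completed with the quality of the output, and efficiency relates that effectiveness to task time.

```python
# Illustrative MUSiC-style metrics per ISO 9241-11's effectiveness and
# efficiency concepts. Inputs are percentages and minutes; the figures
# in the comments are hypothetical examples.
def task_effectiveness(quantity_pct, quality_pct):
    """Effectiveness in %: proportion of the task completed, weighted by
    the quality of the result (e.g. 80% done at 90% quality -> 72%)."""
    return quantity_pct * quality_pct / 100

def user_efficiency(effectiveness_pct, task_time_min):
    """Effectiveness achieved per minute of task time."""
    return effectiveness_pct / task_time_min

def relative_user_efficiency(user_eff, expert_eff):
    """A user's efficiency as a percentage of an expert's on the same task."""
    return 100 * user_eff / expert_eff

eff = task_effectiveness(80, 90)            # 72.0 %
print(user_efficiency(eff, 12))             # 6.0 (% per minute)
print(relative_user_efficiency(6.0, 10.0))  # 60.0 %
```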
(Note that the DRUM tool for video analysis is no longer available,
but similar commercial tools are now available from suppliers such