Contents of this page
Inland Revenue used the following methods:
Context Analysis
Evaluate the existing system
Setting a usability requirement
How to run better facilitated workshops
Task scenarios
Paper prototyping
Style guide
Performance Measurement
SUMI live survey
Context Analysis
Business aims
- Ensure all factors that relate to use of the system are identified
before design starts.
- Provide a basis for producing task scenarios, setting usability
requirements and designing usability tests.
Lifecycle stage
Method
Hold a workshop with key stakeholders and use a context checklist,
which covers the main aspects of the system. This technique was
already well established in the testing area, but we applied it earlier
in development, before design started, and produced the checklist as
a joint exercise between requirements and testing.
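To give a feel for what the checklist output covers, the sketch below shows how the context-of-use information for one user group might be recorded; the field names and values are our own hypothetical illustration, not the actual IR checklist.

```python
# Hypothetical sketch of a context-of-use record for one user group.
# Field names and values are illustrative, not the actual IR checklist.
context_of_use = {
    "user_group": "Compliance caseworker",
    "skills": {
        "domain_experience": "high",        # knows the core tax processes well
        "it_experience": "moderate",        # occasional Windows user
        "training_available": "half-day introduction",
    },
    "key_tasks": [
        {"name": "Record case details", "frequency": "daily", "criticality": "high"},
        {"name": "Produce standard letter", "frequency": "weekly", "criticality": "medium"},
    ],
    "environment": {
        "location": "local office network",
        "equipment": "standard desktop PC",
        "interruptions": "frequent (telephone and counter enquiries)",
    },
}
```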
Lessons Learned
The users' skills, tasks and working environment were
defined. The value of documenting this corporate knowledge should
not be underestimated. Our IT supplier does not have staff with
an intimate knowledge of our core processes or organisational culture.
Before TRUMP there was a feeling that we had this knowledge "in
our bones" and could pass this on to the IT supplier when requested.
Context analysis proved there was a better way to spread that knowledge
around and the document has been used time and again by all involved
to act as a reminder of what and whom we were trying to design for.
It was also the key input to setting the usability requirement
and to both the diagnostic and performance measurement usability
evaluations.
How to do it
Instructions for context analysis.
Evaluate Usability of Existing System
Business benefits
- Identify problems to be avoided in the design of the new system.
- Provide usability measures that can be used as a baseline
for the new system.
Lifecycle stage
Method
A usability analyst and seven users evaluated the existing system
out in the local office network. Each user was given a short introduction
and then observed using the system to do the same key tasks. The
usability analyst captured comments, which generated a problem list,
and produced a report that was fed into the development team
before design of the new system began.
Lessons learned
Reviewing the output from this work, it is clear that the users
should also have been asked to fill out Software Usability Measurement
Inventory (SUMI) questionnaires, and that we should have used the opportunity
to gain efficiency and effectiveness figures for later use. Apart
from that, the time invested was well worth it.
How to do it
Instructions for evaluation of an existing system.
Setting a Usability Requirement
Business Aims
- Identify the most important strategic effectiveness, efficiency
and satisfaction targets for the new system.
- Highlight the importance of usability early in development.
- Allow those targets to be tracked during design and measured
during testing.
Lifecycle stage
Method
- Context analysis to define the users' skills, tasks and
the working environment.
- Four-hour workshop involving requirements managers, end user
team representatives, project sponsors and a usability analyst to
decide:
- which task(s) and user type(s) need a usability requirement;
- for each chosen task and user type, estimate acceptable and
optimum task times, agree acceptable and optimum effectiveness
targets based around likely errors, and use the Software Usability
Measurement Inventory (SUMI) to set a satisfaction target (the
sketch after this list illustrates the kind of targets that result).
- Data trapping in a small sample of field offices to verify the
workshop estimates against similar tasks being completed on the
existing system.
- Collation into an agreed requirement.
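To make the idea of an agreed requirement concrete, the sketch below illustrates the kind of per-task targets the workshop produces and how measured results might be checked against them. The task, figures and function names are hypothetical; the real requirement was an agreed project document, not code.

```python
# Hypothetical illustration of a usability requirement for one task/user type.
# The figures are invented; the real requirement was an agreed project document.
usability_requirement = {
    "task": "Set up a new case",
    "user_type": "Experienced caseworker",
    "task_time_minutes": {"acceptable": 15, "optimum": 10},
    "effectiveness_percent": {"acceptable": 85, "optimum": 95},  # based around likely errors
    "sumi_global": {"acceptable": 50, "optimum": 60},            # satisfaction target
}

def meets_requirement(measured: dict, requirement: dict) -> bool:
    """Check measured results against the 'acceptable' level of each target."""
    return (
        measured["task_time_minutes"] <= requirement["task_time_minutes"]["acceptable"]
        and measured["effectiveness_percent"] >= requirement["effectiveness_percent"]["acceptable"]
        and measured["sumi_global"] >= requirement["sumi_global"]["acceptable"]
    )

# Example check against invented measurements from a test session.
measured = {"task_time_minutes": 12, "effectiveness_percent": 90, "sumi_global": 54}
print(meets_requirement(measured, usability_requirement))  # True
```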
Lessons learned
We had never set a detailed usability requirement early in the
lifecycle before (though we had set less detailed ones later in
the lifecycle) and needed a trained facilitator to guide us through
the process. It was also difficult in a commercial development environment
used to setting hard and fast business requirements based on legislation
and/or clear operational needs to adapt to a less exact science.
We needed to change that mindset and acquire an understanding that
the usability requirement had to have "blurry edges" because the
new system would not have the same tasks as the old one, only similarities.
Our estimates, however well-meaning and well-researched, might therefore
prove (and indeed did prove) to have missed the mark somewhat
when the new system was tested. A trained facilitator
was invaluable in helping us get to grips with all of that, as was
reference to resources on the web and publications such as Nielsen's
Usability Engineering.
The initial workshop proved troublesome because of the different
user types and tasks involved, but even when these were pared down
the various estimates for effectiveness and efficiency had to be
agreed and then verified out in local offices on the existing system.
Once the data had been collected it had to be processed and
a document produced for the project to agree. An amount of rework
was necessary before all parties were content with the measures
set, and only then could it be published. Finally, we made the mistake
of not growing and refining the requirement sufficiently as our
understanding of the system matured, which meant that when we came
to do the final performance measurement test, adjustments had to
be made to ensure it reflected the latest views of the business.
In hindsight it is clear, however, that the advantages outweigh
the time spent. All parts of the project team had a clear, common
understanding of what an acceptable standard for the usability
of the system was, and we were able to evaluate whether that benchmark was
being met, helping to improve and control the quality of the system.
The skills necessary to set a requirement were easily transferred
from the facilitator to the business and are already being applied
on other projects.
To find out more
Instructions for usability requirements.
How to run better facilitated workshops
Business Aims
- Provide a level of engineering for the design process by providing
workshop participants with information for designing and verifying
the IT functions.
- Ensure a common view on design priorities.
- Maintain a business focus in the workshops.
Lifecycle stage
Method
We adopted a two-phased approach to ensure workshops were effective
and efficient:
Phase 1 - prior to workshops
- Use Context Analysis to scope who will use the system, what
tasks they will undertake and how the workplace is organised;
- Produce task scenarios to cover all the main tasks;
- Set usability requirements for those tasks;
- Produce a preparation pack for each function that collates the
context analysis, task scenarios, IT requirements and design thoughts,
so the business shares a common view of what it needs to deliver
from the JAD (Joint Application Design) session.
Phase 2 - during the workshops
- Use paper mock-ups to design windows;
- Employ corporate style guides to ensure a consistent look and
feel to those windows;
- Test the paper mock-ups using the task scenarios.
Lessons learned
Task Scenarios
We all took this technique to our hearts. It was relatively simple
to pick up as it involved the business people (the end users) documenting
what they did on a daily basis back in the office. This knowledge
could then be captured before every function design workshop and
used not only to focus what the IT was being developed for, but also,
in conjunction with other techniques such as task analysis and paper
prototyping, to verify that the emerging design was meeting the needs
of users, and then used again to validate that the final IT prototype
was correct.
This was one of the undoubted successes of the whole project. After an initial
workshop the end users were able to produce task scenarios on
demand and both formally document them in the appropriate preparation
pack and produce them during the JAD sessions when presented with
a proposed change to the design.
Preparation Pack
An aid for participants so they have a clear idea of what the workshop
is expected to achieve and how they can contribute effectively.
The packs were put together by the requirements analysts responsible
for the function to be designed. The packs always included:
- Task Scenarios;
- Roles & Access Permissions;
- Process Inputs/Transformations/Outputs;
- Process Validation Rules;
- Core Process Exception Handling Controls;
- Help Controls;
- Data Items/Associated Volumes;
- Proposed Strawman Screen for JAD;
- Proposed Navigation;
- Previous JAD Minutes;
- Baselined requirement definition;
- Current System Screen-Prints.
In our view an absolutely invaluable briefing aid for all participants
in the workshops that became integral to the success of each session.
The cost of 4 hours' preparation was repaid by the fact that each
workshop usually consisted of 8 people, and once packs had been introduced
business could usually be wrapped up in a single day rather than
spilling over into the following day, which had previously been the norm.
This proved to both IR and EDS that prior preparation prevents poor
performance.
Paper Prototyping
Paper, a whiteboard or the COOL:Gen CASE tool was used during
the workshops to produce draft windows based on the business requirement
and the straw-man contained in the preparation pack. These prototypes
were then tested using the task scenarios, fresh designs were produced
and tested again until agreement was reached, and the initial
straw-man design turned into a tin-man one that could be coded.
This technique was already widely used on other projects but was
formalised for the trial project and (this is probably the most
important part) linked to the preparation activities before the
workshop and the use of task scenarios during it. As a technique
it was easily picked up by the analysts and end users but really
proved its worth when linked to the task scenarios. The two go hand
in hand.
Style Guides
Our usual practice had been to leave Graphical User Interface standards
to individual projects, which meant applications were delivered
to the business with a different look and feel. An attempt had
previously been made to provide a corporate style guide, but this had rapidly
become out of date, was unwieldy to use and had no sense of ownership
amongst the IT developers.
A corporate style guide and an overview of the chosen user interface
style were provided to the development team and a number of major
national projects that were starting at the same time and followed
religiously throughout design. The benefits of this approach were
immediately apparent:
- Consistent look and feel both within trial application functions
and across other developing systems;
- Windows met all the technical constraints imposed by the CASE
tool;
- Gave a common template to start from, speeding up the design
process (less debate about placement of data or what a button
or menu item should be called);
- Gave a benchmark to measure against during unit, system and
business testing.
We also learnt that style guides need to be at the right level
to ensure they can be easily read and understood by both a technical
and a business audience, and should not be so prescriptive that they take
the imagination and innovation out of design. To address the concern
about too much control, a waiver system was designed for cases where standards
affected the efficiency of the compliance business.
Developers involved in previous projects commented on how much
less pointless discussion was spent on names and placement of controls.
The one failure in this area was the third leg of the style guide
set, a list of usability guidelines, designed to give business developers
a few rules of thumb to complement the interface style guides. Though
a set of guidelines has now been agreed by all parties, it took
a number of versions to get all three sets to fit together, complement
rather than contradict each other, and add value rather than say
the same thing in a different way. This delay meant that the usability
guidelines were not used during design, though they can be used
during testing and will be used on other projects, as
the style guide set has already been accepted as the way forward
for IR/EDS.
How to do it
Instructions for task scenarios, paper prototyping and style guides.
Performance Measurement
Business benefits
- Identify usability problems.
- Provide measures on efficiency, effectiveness and satisfaction
against the pre-set usability requirement.
- Inform the acceptance process for the project.
Lifecycle stage
Method
- Get the analyst running the exercise onto a training course for
the chosen method, or buy in trained resource to transfer their
skills. You won't do this successfully without that sort of outside
help.
- Use the context analysis and usability requirement to identify
what you need to measure.
- Produce an evaluation plan defining scope, timetable, technical
and business needs. You not only need to create a realistic business
environment, you need stable code with realistic data (our advice
is to use the system test code and data, which should be reasonably
stable and realistic by this time in the lifecycle; if it isn't,
an advantage of the method is that you have found out early enough to
address the issues) and the same operating platform and system
architecture as for live running. Add to that the fact that you
will need to either video or stopwatch the users and recruit at
least 8 of them for statistically reliable results, and the absolute necessity
of careful planning and of communicating your needs to the different
parties involved becomes apparent.
- Secure users who meet the profile in the context analysis and send out
preparatory material.
- Obtain training material and supporting business products such
as guidance manuals as identified in context analysis.
- Prepare work triggers.
- Prepare a scoring plan based on agreed output for each task. You
need to have a clear idea of task goals and sub-goals and make
sure the key business players share that view. Once you have
that, you can agree the inputs and outputs for each and which outputs
you will score against. Once you know those items, you can agree
what constitutes the quality indicators for each of them and derive
a scoring system that assigns values for different grades of completion
(a sketch of such a calculation follows this list).
- Prepare the technical environment or get someone to do it for
you.
- Dry run and revise.
- Hold training.
- Observe users executing tasks, administer SUMI and hold individual
debriefs.
- Score and analyse output.
- Report.
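The sketch below illustrates one common way of turning graded task outputs into effectiveness and efficiency figures for comparison with the usability requirement. The grades, weights and example figures are invented for illustration and are not necessarily the scheme used on the project.

```python
# Hypothetical scoring sketch: effectiveness from graded task outputs,
# efficiency as effectiveness achieved per unit of task time.
# The grades and weights below are invented for illustration.
QUALITY_GRADES = {"complete": 1.0, "minor_errors": 0.75, "major_errors": 0.4, "not_done": 0.0}

def effectiveness(scored_outputs: list) -> float:
    """Percentage effectiveness for one user's task, from graded outputs."""
    if not scored_outputs:
        return 0.0
    return 100.0 * sum(QUALITY_GRADES[g] for g in scored_outputs) / len(scored_outputs)

def efficiency(effectiveness_pct: float, task_time_minutes: float) -> float:
    """Effectiveness achieved per minute of task time."""
    return effectiveness_pct / task_time_minutes

# Example: one user produced two complete outputs and one with minor
# errors, taking 12 minutes in total.
eff = effectiveness(["complete", "complete", "minor_errors"])  # about 91.7%
print(eff, efficiency(eff, 12.0))
```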
Lessons learned
We have been using performance measurement for a number of years
but this time introduced scoring against a detailed usability requirement
for all the main business tasks. Previously we had compared testers
against a perfect user to see how far up the learning curve they
were or constructed a cheap and cheerful requirement based around
one or two tasks as part of the preparation activities. The use
of a detailed requirement that formed part of the wider business
requirement, and that had the buy-in of the whole project, meant the
results of the exercise carried much more credibility and empowered
the usability analysts in their discussions about resolution of
the problems that had been discovered.
Please be under no misunderstanding about how complex these exercises
are. You need a usability analyst trained in the method, and it typically
takes 30 days of work over a two-month period to prepare, execute,
analyse and report. When you add to that the technical costs plus
securing the right users, the cash register is kept ringing. That
said, the results justify the trouble and expense. You get an
end-of-lifecycle activity that objectively measures the system for its
quality in use just as the many thousands of users will do when
they switch it on from day one. You get advance notification as
to the impact of the system on the efficiency and effectiveness
of key business processes. And that type of data is priceless to
any business not only because it gives you the opportunity to manage
issues and expectations but because it's a much clearer signal to
acceptance than whether the code meets a business requirement set
in the dim and distant past and usually centred around functionality
rather than task.
How to do it
Instructions for performance measurement.
SUMI Live Survey
Business aims
- Track user satisfaction with the product into daily use.
- Feed comments gained back to system developers and process improvers.
Lifecycle stage
Live running
Method
- Revisit the context analysis report to identify main users and
tasks.
- Decide on target users, tasks and sample size.
- Get buy-in from the operational managers of the staff who will
be surveyed.
- Prepare a covering note explaining the objectives of the exercise
and what tasks you want people to do before completing SUMI.
- Receive the returns and process the data.
- Analyse the results.
- Report.
Lessons Learned
We have used the Software Usability Measurement Inventory (SUMI)
tool for a number of years in development. It is an accepted industry
standard way of assessing users' perception of the usability
of software and provides six scaled scores: a Global usability figure
and five sub-scales of Efficiency, Affect, Helpfulness, Control
and Learnability. The Global figure is probably the most useful
as it is a highly reliable and valid metric which gives a very accurate
general view of the user perception of usability. It can be used
for "before and after" studies in the development process or in
making direct comparisons between products. The global figure can
be viewed as a "temperature gauge" of usability as perceived by
the users and is valuable in development to feed into your acceptance
process for the system.
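As a hedged illustration of such a "before and after" comparison, the sketch below aggregates Global scores from a baseline survey and a live-running survey and checks them against a satisfaction target. Individual questionnaire scoring is done by the SUMI analysis service; the figures and target here are invented.

```python
# Hypothetical before/after comparison of SUMI Global scores.
# Individual questionnaire scoring is done by the SUMI service; the
# per-respondent Global figures below are invented for illustration.
from statistics import mean

baseline_global = [48, 52, 45, 50, 47, 53, 49, 51]   # existing system
live_global     = [55, 58, 52, 60, 54, 57, 56, 59]   # new system in live running
SATISFACTION_TARGET = 55                              # from the usability requirement (invented)

before, after = mean(baseline_global), mean(live_global)
print(f"Global usability: {before:.1f} -> {after:.1f} "
      f"(target {SATISFACTION_TARGET}: {'met' if after >= SATISFACTION_TARGET else 'not met'})")
```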
What we learnt from administering SUMI in live running is that
it's not a good idea to wave goodbye to a system once the code has
gone live. That policy not only encourages a culture of delivery
to time and cost rather than delivery of a useful, usable product,
but also means you miss out on information that helps you improve the
product (and others still in development) and verify whether
your testing is rigorous enough to replicate live conditions and
give you results that you can confidently rely on.
The other thing to be aware of with SUMI is the natural tendency
of managers to believe you can get valid results by just sending
out the questionnaire, without bothering with the more laborious aspects
of completing a context analysis or administering it only after staff have
done real work using the software whose level of satisfaction you
want to measure. As with most things, if you cut corners you'll have an accident.
If SUMI follows the correct steps you can then have confidence
in the results; if you don't want to follow the correct steps, forget
it. You'll be better off using focus groups, straw polls or not
talking to your customer.
To find out more
Information on SUMI.
Last updated 11-Oct-00.