
Critical Incident Technique Analysis

Summary

End users are asked to identify specific incidents that they experienced personally and that had an important effect on the final outcome. The emphasis is on concrete incidents rather than vague opinions. The context of each incident may also be elicited. Data from many users are collected and analysed.

Benefits

The CIT is an open-ended retrospective method for finding out what users feel are the critical features of the software being evaluated. It is more flexible than a questionnaire or survey, and it is recommended in situations where the only alternative would be to develop a questionnaire or survey from scratch. Because it focuses on user behaviour, it can be used where video recording is not practicable, provided the inherent bias of retrospective judgement is understood.

Method

The CIT is a method for getting a subjective report while minimising interference from stereotypical reactions or received opinions. The user is asked to focus on one or more critical incidents which they experienced personally in the field of activity being analysed. A critical incident is defined as one which had an important effect on the final outcome. Critical incidents can only be recognised retrospectively.

CIT analysis uses a method known as Content Analysis in order to summarise the experiences of many users or many experiences of the same user.

What do you test

Define the activity you intend to study, and get access to the users as soon as possible after the activity has finished. In the case of a lab study this should be after the testing has finished but before any debriefing takes place; in the case of a naturalistic study, it should be soon after the user has used the software under investigation, and if possible in the same environment.

How do you test it

You can carry out the CIT either by interviewing users or by having them fill out a paper form. The user is asked to work through the three stages described below, in that order:

  • focus on an incident which had a strong positive influence on the result of the interaction and describe the incident
  • describe what led up to the incident
  • describe how the incident helped the successful completion of the interaction

It is usual to request two or three such incidents, but at least one should be elicited. When this has been done, the procedure is repeated, but now the user is asked to focus on incidents that had a strong negative influence on the result of the interaction and to follow the same three stages to place each incident in context. The number of positive and negative incidents reported will vary from user to user.

It is usual to start with a positive incident in order to set a constructive tone with the user.

If the context is well understood, or time is short, the method may be stripped down so that the user completes only the first stage: describing the positive and negative critical incidents.
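If you record incidents electronically rather than on paper, a simple structured record keeps the three stages separate for later content analysis. The sketch below is one possible layout, not a prescribed format; all field names and the example data are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CriticalIncident:
    """One elicited incident, following the three stages described above.

    All field names are hypothetical; adapt them to your own study.
    """
    user_id: str       # anonymised participant identifier
    valence: str       # "positive" or "negative"
    description: str   # stage 1: the incident itself
    lead_up: str       # stage 2: what led up to the incident
    effect: str        # stage 3: how it affected the outcome of the interaction

# Example: one positive incident as it might be recorded in an interview.
incident = CriticalIncident(
    user_id="P01",
    valence="positive",
    description="The undo command restored a paragraph the user had deleted.",
    lead_up="The user accidentally selected and deleted a paragraph.",
    effect="The document was recovered without any re-typing.",
)
```

Keeping the stages in separate fields makes it easy to spot respondents who gave a generality instead of a specific incident: a vague description with no concrete lead-up is a sign the instructions were not followed.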

In an interview situation the user can be corrected if they attempt to reply with generalities, not tying themselves to a specific incident. This is more difficult to control if you are employing a written form, so ensure that the introductory instructions are clear.

Analysis and Reporting

When you have gathered a sufficient quantity of data, you should be able to categorise the incidents and produce a relative importance weighting for each category: some incidents will happen frequently and some less frequently.

For a summative evaluation, collect enough critical incidents to enable you to make statements such as "x percent of the users found feature y in context z helpful or unhelpful."
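A summative statement of this kind is just a tally over categorised incidents. The sketch below assumes the incidents have already been assigned categories by content analysis; the data, category names, and tuple layout are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical categorised incidents as (user_id, category, valence) triples,
# as they might come out of a content analysis. All data are invented.
incidents = [
    ("P01", "undo command", "positive"),
    ("P02", "undo command", "positive"),
    ("P03", "save dialog", "negative"),
    ("P04", "undo command", "positive"),
    ("P04", "save dialog", "negative"),
]

users = {user for user, _, _ in incidents}

# Count the distinct users who reported each (category, valence) pair.
reports = defaultdict(set)
for user, category, valence in incidents:
    reports[(category, valence)].add(user)

# Express each count as a percentage of all users surveyed.
for (category, valence), who in sorted(reports.items()):
    pct = 100 * len(who) / len(users)
    print(f"{pct:.0f}% of users reported a {valence} incident "
          f"involving the {category}")
```

Counting distinct users rather than raw incidents avoids letting one talkative participant inflate a category's weighting.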

For a formative evaluation, you should collect enough contextual data around each incident so that the designers can place the critical incidents in scenarios or use cases.

More Information

Critical incidents technique (EMMUS)

http://www.ul.ie/~infopolis/methods/incident.html

http://medir.ohsu.edu/~carpentj/cit.html

Alternative Methods

Alternative methods of acquiring this kind of data are subjective questionnaires or surveys. The CIT is sometimes used as a precursor to designing a questionnaire. Insofar as the behaviour of users is being examined, video records may also be used as alternatives.

Next Steps

CIT is usually employed to supplement other less subjective methods of measuring user behaviour. The next step is to integrate the findings from the CIT analysis with other usability data to produce a full usability test report.

If you are going to do the same kind of evaluation a number of times, you may find that you can summarise the CIT categories you have extracted into a checklist to present to users in subsequent evaluations. The checklist items may function as memory probes, but you risk contaminating the data through the implicit suggestions of the checklist items: the essence of the CIT is that users report their spontaneous experiences.

You will need to be familiar with Content Analysis to analyse the data.

Case studies

Carlisle, K. E. (1986) . Analyzing Jobs and Tasks. Englewood Cliffs, NJ: Educational Technology Publications, Inc.

Background Reading

The original article is:

Flanagan JC (1954) The Critical Incident Technique. Psychological Bulletin, 51.4, 327-359

See also:

Fivars, G. (Ed) (1980) Critical Incident Technique, American Institutes for Research.


©UsabilityNet 2006. Reproduction permitted provided the source is acknowledged.