
User survey for design

Summary

User surveys are a means of finding out how a piece of software or a web site is likely to be used, and who its users are likely to be.  The answers a survey provides must be relevant to the issues that matter to the design team.  User surveys are traditionally carried out by post, but the internet is increasingly used for this purpose.

Benefits

User surveys deliver benefits only if they answer the questions that the design team actually raises.  Because a large number of users can be surveyed, usage profiles drawn from user surveys can be relied upon, provided the correct methodology has been used.  User surveys are often analysed statistically, and this gives moderately hard, objective data.  However, there are many sources of bias, and a poorly designed survey can do more harm than good.

Method

The method of carrying out a survey is described differently in different sources.  However, the following stages form a common core.  It is easy to get lost in the technicalities of surveying, so the best advice is to learn from experience, guided by the sources recommended below.

Focusing the survey

Find out what the major decision points, or areas of uncertainty, are in the design team’s thinking about how the product will be used.  Focus on those areas and establish what needs to be discovered.  Involve the decision makers in the development of the survey, find out when they need the information, and establish which organisational contexts are likely to be affected by the presentation of the results.  See Patton’s book for involving decision makers, and the book by Alkin et al. for the analysis of decision points.

It is useful to employ the Context of Use analysis method as a way of discovering what aspects of the Context of Use are understood and what aspects pose problems or raise uncertainties.

Consider whether some (or perhaps all) of these issues might be resolved more easily by methods other than surveys (e.g. by placing hit counters on a web site, or by looking at patterns of sales).
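
For instance, a few lines of code can often answer a usage question directly from existing web server logs.  The sketch below is illustrative only: the filename access.log and the Apache-style log format (with the requested path as the seventh whitespace-separated field) are assumptions, not part of any particular site.

    # Minimal sketch: counting page hits from a web server access log as an
    # alternative to asking usage questions in a survey.  Assumes a
    # hypothetical Apache-style 'access.log' in which the requested path is
    # the seventh whitespace-separated field.
    from collections import Counter

    hits = Counter()
    with open("access.log") as log:
        for line in log:
            fields = line.split()
            if len(fields) > 6:
                hits[fields[6]] += 1           # fields[6] is the request path

    for path, count in hits.most_common(10):   # the ten most visited pages
        print(f"{count:6d}  {path}")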

Creating the survey instrument

There are many guidelines on how to formulate questions and how to lay out surveys to make them ‘respondent friendly.’  Remember that answers about ‘how often’ or ‘how much’ are often coloured in respondents’ minds, and that once your questions start probing respondents’ attitudes, the results become difficult to interpret.  Reliable questions focus on simple things that occur relatively infrequently, and on preferences chosen from a fixed set of alternatives.  An important element of designing a survey is to develop trust between yourself and the respondents.  Dillman’s book is a good guide to this.

Use open-ended questions sparingly, but always include an ‘Other (please specify):’ option at the end of a list of choices.

Testing the survey

It is absolutely essential to test the survey before sending it out or releasing it.  A survey test must be done in conditions as close to the real thing as possible, down to the question of whether to include a stamp on the return envelope.  Another useful technique is to ‘walk through’ the survey with a small number of typical respondents, asking them what they understand by each question as they go.

Conducting the survey

The sampling frame must be established: everyone from whom you require information must have an equal chance of being included in the sample.  Sampling theory is complex, and is best left to a statistician, but in essence you have to state how you define the total population from whom you want information, and then how you take an unbiased sample from that population.
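
As a minimal sketch of the simplest case (a simple random sample), the following assumes the sampling frame is held in a hypothetical file sampling_frame.txt, one potential respondent per line; stratified or more sophisticated designs remain a matter for the statistician.

    # Minimal sketch of drawing a simple random sample.  Assumes the sampling
    # frame is a hypothetical file 'sampling_frame.txt' with one potential
    # respondent per line.
    import random

    with open("sampling_frame.txt") as f:
        population = [line.strip() for line in f if line.strip()]

    random.seed(42)                   # fixed seed so the draw can be audited
    # Every member of the frame has an equal chance of selection.
    # (random.sample raises ValueError if the frame holds fewer than k names.)
    sample = random.sample(population, k=200)
    for address in sample:
        print(address)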

A good survey method will target the sample by sending advance notice, then the survey itself, and then following up non-responders with reminders or second copies of the survey.  As far as possible, send surveys to named individuals rather than anonymously or to roles such as ‘managing director’ or ‘secretary.’  Some kind of reward may also be associated with filling out the survey (for instance, a charity donation).  Surveys that follow this kind of methodology can usually bring in much more than the 20% response rate usually expected from single-mailshot surveys: response rates of up to 70% have been noted.
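
A worked example of the arithmetic, with purely illustrative figures, shows how the response accumulates over that contact sequence:

    # Worked example with made-up figures: response accumulating over the
    # contact sequence of first mailing, reminder card, and second copy.
    mailed = 500
    returns_by_wave = {
        "first mailing": 110,
        "reminder card": 95,
        "second copy of survey": 60,
    }

    total = sum(returns_by_wave.values())
    for wave, n in returns_by_wave.items():
        print(f"{wave}: {n} returns ({n / mailed:.0%})")
    print(f"overall response rate: {total / mailed:.0%}")  # 53% in this example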

Analysing the results

How the results should be analysed should be clear from the outset, from the focusing activities.  Coding and tabulating the data should be as automatic as possible, so as to rule out opportunities for random error or bias to creep in.  A spreadsheet is a very useful tool for keeping raw survey results, which can then be exported to a statistics package such as SPSS.  For large surveys (more than 1,000 responses) a database may be more suitable.
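
As a sketch of how that hand-off might look, the following assumes the raw results have been keyed into a hypothetical file results.csv, one row per returned survey, and uses Python’s pandas library:

    # Minimal sketch: holding raw survey results in a flat file and exporting
    # them for a statistics package.  'results.csv' is a hypothetical file
    # with one row per returned survey (e.g. respondent_id, q1, q2, ...).
    import pandas as pd

    raw = pd.read_csv("results.csv")
    print(raw.describe(include="all"))      # quick sanity check on the coding
    raw.to_csv("for_stats_package.csv", index=False)  # flat file for import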

The two most useful statistical procedures to use when analysing survey results are:

  • counting of frequencies of response to options;
  • cross-tabulating responses to one series of questions or options against another series of questions or options.
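
Both procedures are built into most statistics tools; as an illustration, the sketch below shows them in Python’s pandas library, using a small made-up table of responses:

    # Frequencies and cross-tabulation in pandas, using made-up responses.
    import pandas as pd

    raw = pd.DataFrame({
        "role":        ["manager", "clerk", "clerk", "manager", "clerk"],
        "uses_search": ["daily", "weekly", "daily", "weekly", "daily"],
    })

    print(raw["uses_search"].value_counts())             # response frequencies
    print(pd.crosstab(raw["role"], raw["uses_search"]))  # cross-tabulation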

Presenting the results

If you have taken care to include the decision makers in the focusing and design of the survey, you will generally find that there is pressure on you to deliver the results.  When presenting results, always give the headline news first; then follow that up with a detailed analysis of how you got there; and finish with a conclusion based on the data.  You will naturally form your own opinions or biases as you work with the data.  These are important, and you should present them, clearly marking them as your extrapolations from the data so as not to confuse objective fact with subjective opinion.

More Information

Alkin, MC, R Daillak and P White: Using evaluations: does evaluation make a difference?  Sage: Newbury Park CA, 1979.

Patton, MQ: Utilization-focused evaluation.  Sage: Newbury Park CA, 1986.

Dillman, DA: Mail and internet surveys: the tailored design method (2nd Ed).  Wiley, 2000.

Alternative Methods

The ethnographic method is the only viable alternative to user surveys; however, it is time-consuming, laborious, and does not easily yield quantitative information.

Next Steps

The user survey in design has the purpose of reducing the amount of uncertainty in the design team’s minds.  Most of the information from the survey should find its way into the Context of Use analysis.

Case studies

The book by Dillman contains many case studies and anecdotes relating to the practice of carrying out surveys.  Patton also provides anecdotes of a rather more gritty nature.

Background Reading

Oppenheim, AN: Questionnaire design, interviewing, and attitude measurement (New Edition).  Continuum: London and New York, 1992.

See also the page on User Surveys on this site for another view of how to conduct post-release surveys; the two accounts are complementary.

