In the development of any new technology, the last frontier always seems to be the encounter with the end user of the product. The challenge is to develop a product that can be easily and efficiently used to accomplish its intended purpose. The key element is to design an interface between the user and the machine that allows for the proper flow of control between human and machine processes. This is particularly true of computer-based systems. The development cycle of computers is now well past the point where the use of computers is exclusively by computer scientists. The general work force and the public at large are already using or beginning to use the computer in one form or another. It is at this point that careful consideration must be given to how to lay out the architecture and the building code in each new community of users. Without such forethought, users are likely to become prematurely locked into an arbitrary and archaic structure.
Among the new architectures of the human/computer interface that specifically deal with flow of control is the design of menu selection systems. Figure 1.1 illustrates several such systems. Users are presented with a list of options from which they can choose and some mechanism by which to indicate their choice. The characteristics of menu selection are that (a) the interaction is, in part, guided by the computer; (b) the user does not have to recall commands from memory; and (c) user response input is generally straightforward. The four examples shown in Figure 1.1 highlight the variety of such menu systems.
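Although the systems in Figure 1.1 differ widely, the mechanics they share can be captured in a few lines of code. The following is a minimal sketch, in Python, of a text menu loop that exhibits the three characteristics just listed; the option labels are hypothetical and are not drawn from Figure 1.1.

    # A minimal text-menu loop: the computer guides the interaction by listing
    # options, the user need not recall commands, and the response is a single
    # number. Option names are hypothetical.

    OPTIONS = ["Create a document", "Edit a document", "Print a document", "Quit"]

    def select_from_menu(options):
        """Display the options and return the index of the user's choice."""
        while True:
            for number, label in enumerate(options, start=1):
                print(f"{number}. {label}")
            reply = input("Enter the number of your choice: ").strip()
            # Input is restricted to the set of legal options; anything else
            # simply redisplays the menu rather than producing an error state.
            if reply.isdigit() and 1 <= int(reply) <= len(options):
                return int(reply) - 1
            print("Please enter a number from the list.\n")

    if __name__ == "__main__":
        choice = select_from_menu(OPTIONS)
        print(f"You selected: {OPTIONS[choice]}")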
In the early days of interactive systems, menu selection seemed to be an intuitively simple solution to the problem of user control. The unfortunate result was that without additional thought, poorly designed systems proliferated. A number of basic questions about menu selection were overlooked. For example, when is menu selection preferable to other forms of interaction such as a command language? How should menu selection systems be designed? How does the structure of menu selection change the process of control in the mind of the user? How should menu selection systems differ for use by novice and expert users? The answers to these questions and many others are crucial in helping to determine how to design a usable and efficient human/computer interface.
Menu selection is emphasized in this book as a principal mode of control used in conjunction with other modes such as form fill-in, command languages, natural language, and direct manipulation. It is felt by the author that in terms of flow of control, menu selection is emerging as the mode of choice. Other modes come into play to handle different demands on the human/computer interface. Switching from one mode to another is often necessary and must be done gracefully with clear expectation on the part of the user. Specialized modes, their integration with menu selection, and the problems of switching between modes will be discussed as appropriate.
In this chapter we will consider the human/computer interface in terms of flow of control. The user has certain tasks to accomplish and, consequently, wants to direct the computer to perform a subset of those tasks. The problem from the user's perspective is knowing what the computer can do and knowing how to direct it to do those tasks. The problem from the system designer's point of view is knowing what functions to implement and how the computer should inform the user about the availability and invocation of these functions. In doing so, the computer must be assigned a certain measure of control over the flow of operations; but ultimately, control rests with the user.
In the next section the process of designing the human/computer interface and its importance will be discussed. Following this, a cognitive model of the human/computer interface is proposed that helps to define and specify the issues of how humans work with machines and how machines must be designed so that they are usable by humans. Finally, the use of experimental methods in the design process will be discussed. Empirical research is the mainstay of human factors and ergonomics, and it is important to understand how and why experiments are conducted.
1.1 Research and Design of the Human/Computer Interface
One of the major steps in the software development life cycle is the translation of system requirements into an integrated set of task specifications that define the functional capabilities of the software and hardware (Jensen & Tonis, 1979). The problem is that such requirements are often dictated by analysts with little thought about the end user. Consequently, users' needs and limitations are often not factored into the equation. No matter how well implemented other parts of the system may be, if the human/computer interface is intractable, the system will falter and ultimately fail. Ledgard, Singer, and Whiteside (1981) point out the incredible cost of poor human engineering in the design of interactive systems. They identify four costs:
1. The direct costs of poor design are observed in wasted time and excessive errors. The system itself may have been designed for maximum efficiency in terms of memory management, input/output, and computation, but it may sit idly by as the user ponders an error message or thumbs through documentation looking for the correct command. Menu selection systems may alleviate such wasted time by making the system self-documenting and may reduce errors by limiting user input to the set of legal options.
2. Indirect costs are incurred in the time to learn a system. A novice user often has a lot to learn about a system. This typically involves reading manuals, working through tutorials, memorization of commands, and development of performance skills. The time to learn such systems can be a great cost that must be borne by the employee, the employer, or both. For the employer, the cost of training employees is particularly important when employee turnover is high, which is often itself an indirect result of poor human engineering and leads to a third cost.
3. A psychological cost is paid when users are frustrated and irritated by a poorly designed system. This cost can be great and can result in a number of spin-off problems such as low morale, decreased productivity, and a high employee turnover rate. It has been said that systems should be fun to use rather than frustrating. They should encourage exploration rather than intimidate users and inhibit use.
4. Finally, this leads to the cost of limited use or lack of use. Poorly designed systems, no matter how powerful, will simply not be used. Users will tend to employ only those components that are of use to them. It is typically the case that for systems with 40-plus commands, only about seven are used with any frequency. Limited use is one of the greatest costs because it negates the benefit side of the equation.
Since the turn of the century, psychologists and engineers have been interested in studying the generic man/machine problem in order to reduce costs such as these. Research has been conducted under a number of different names conveying the varied intent of the researchers. The term "human engineering," for example, conveys the idea that the user's or operator's limitations and capabilities may be engineered into the system. He or she is a part of the mechanism. The early time and motion analysts realized that both man and machine could be retrained or redesigned for greater efficiency and compatibility. The term "human factors" arose from World War II when it was realized that equipment was not performing at its rated levels. There was a human factor that introduced error into the system. Other terms such as "biomechanics," "ergonomics," and "psychotechnology" emphasize the physiological and psychological aspects. The more generic term "applied experimental psychology" indicates the importance of systematic practical research methodology but, unfortunately, de-emphasizes theoretical development. For the present discussion, the term "ergonomics" will be used to convey the importance of both theory and research, as well as the idea that we are dealing with systems.
Today a multidisciplinary approach to the human/computer problem is being taken. Research teams include cognitive psychologists, computer scientists, specialists in subject domains such as management, library and information science, medicine, etc., as well as a new breed of specialists in the emerging discipline of human/computer interaction. The multidisciplinary approach is extremely important insofar as menu selection is concerned in that designs generally attempt to implement a task domain on a computer system while satisfying the cognitive constraints of the users. The team approach allows each set of concerns to be represented and voiced.
1.1.1 Issues in Design. In this book we are interested in the issue of user control over automatic processes. The control mechanism that we will consider is that of menu selection. Users select one or several options, levels, or settings on the computer. Communication between the user and the computer takes place via a finite, well-defined set of tokens rather than via an open-ended command language. In the evaluation of the human/computer interface as a means of communication and control, a number of global factors may be generated. Some of these are listed in Table 1.1. The acceptable levels of such ergonomic factors and the trade-offs between them depend, of course, on the particular task and the user community.
Table 1.1
Factors to be Considered in the Design of Human/Computer Systems
________________________________________________________________________________
* System Productivity
Applicability of system to task
Number of tasks completed
Quality of output
* Human Performance
Speed of performance
Rate and type of errors
Quality of solutions to problems
* Training Time and Effectiveness
Time to learn how to use the system
Frequency of reference to documentation
Human retention of commands over time
Transfer of training
* Cognitive Processes
Appropriateness of the mental model
Degree of mental effort
* Subjective Satisfaction
Satisfaction with self
Satisfaction with system
Satisfaction with performance
________________________________________________________________________________
From a managerial perspective, productivity is the bottom line. The software must first and foremost be applicable to the task at hand. When this is the case, one may assess the factors of quantity and quality of work performed. Much has been written about productivity and its measurement in the business and management literature; however, in this book we are not primarily interested in productivity at the global level but in the factors that ultimately contribute to this end.
Human performance is an observable factor that is most often used in assessing the ergonomics of a system. Performance is a function of three basic variables: speed, accuracy, and quality. Well-designed systems reduce the time that it takes for the user to perform a task. At the same time, error rate should be kept at a minimum. The severity of errors depends on their type. Some are easily correctable; others are devastating. Finally, a well-designed system should promote optimal solutions to problems. This is the ultimate goal of many systems that involve planning, decision making, design, and information retrieval.
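As a simple illustration of how the speed and accuracy components of performance might be quantified, the sketch below computes mean completion time and errors per task from a session log; the log format and its values are hypothetical, and the quality of solutions would require task-specific measures not shown here.

    # A sketch of computing the speed and accuracy components of performance
    # from a session log. The log format and field names are hypothetical.

    from statistics import mean

    # Each record: (task_id, seconds_to_complete, number_of_errors)
    session_log = [
        ("task-1", 42.0, 0),
        ("task-2", 65.5, 2),
        ("task-3", 38.2, 1),
    ]

    def performance_summary(log):
        """Return mean completion time and errors per task for a session."""
        times = [seconds for _, seconds, _ in log]
        errors = [count for _, _, count in log]
        return {
            "mean_time_per_task": mean(times),
            "errors_per_task": mean(errors),
            "tasks_completed": len(log),
        }

    print(performance_summary(session_log))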
Training time and effectiveness is also an extremely important factor. How long does it take for the user to come up to speed? This involves the acquisition of knowledge about the system, skill in using it, and the ability to accommodate changes. Rumelhart and Norman (1978) refer to these three components as accretion (the acquisition of knowledge or accumulation of information), tuning (the modification of categories or adjustment), and restructuring (reinterpreting, reorganizing, or gaining a new understanding of information). Training may emphasize one of these to the exclusion of the others so that users may learn rote tasks but not be able to figure out how to perform other functions. Furthermore, systems may be designed so as to minimize the need for training. Designers may want a system that brings the novice up to speed very rapidly by reducing the need for accretion even though it may limit the speed of asymptotic performance. Menu selection systems have often served this purpose by using menus that list and explain all of the options. Speed is reduced due to transmission and display time of text as well as reading time. Alternatively, one may want to maximize the speed of performance of experienced users even though training time is greatly lengthened. Command languages as well as highly abbreviated menus have accomplished this purpose. However, to make use of command languages, users must spend a considerable amount of time learning the commands and options.
The amount of documentation for the user varies greatly from system to system and depends on the level of user. More documentation is not necessarily better, and certainly a high frequency of reference to the documentation during use is detrimental to performance. Menu systems have been used to drastically reduce the need to refer to documentation although in older systems with slow response time it has not been without cost in terms of speed of performance. This may not be a factor in newer systems allowing near instantaneous access to pull-down or pop-up menu displays.
An often overlooked factor in the use of computers is the human retention of commands over time. Once a user has learned a command or a function to a specified level, is it retained over time? A command that has been learned, but infrequently used, may not be remembered. Each time the user desires to use it, he must consult the documentation. On the other hand, in a menu selection system an infrequently selected option is not lost. However, another retention problem often occurs. The user may forget where an item is located in a complex menu tree and spend an inordinate amount of time searching for it.
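The cost of forgetting where an item is located can be thought of as a search through the menu tree. The sketch below uses a hypothetical tree and counts the menu frames visited before an option is found; it is only an illustration of the retention problem, not a model of actual user search behavior.

    # A sketch of searching a menu tree for a forgotten option. The menu
    # structure below is hypothetical.

    from collections import deque

    menu_tree = {
        "Main": ["File", "Records", "Reports"],
        "File": ["Open", "Save", "Archive"],
        "Records": ["Find by name", "Update", "Delete"],
        "Reports": ["Monthly summary", "Print"],
    }

    def frames_visited(tree, target, root="Main"):
        """Count how many menu frames a breadth-first search visits before
        the target option is found; a rough proxy for search effort."""
        queue, visited = deque([root]), 0
        while queue:
            frame = queue.popleft()
            visited += 1
            options = tree.get(frame, [])
            if target in options:
                return visited
            queue.extend(option for option in options if option in tree)
        return None  # the option does not exist anywhere in the tree

    print(frames_visited(menu_tree, "Update"))   # found while visiting the third frame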
Transfer of training has always been an important factor in ergonomics. The question is whether training on one system will reduce the amount of training necessary to learn another system. If it does, there is positive transfer. If it doesn't, there is no transfer. Positive transfer depends on the degree to which one system shares common elements with another system and the degree to which the user is able to restructure the knowledge gained about one system into knowledge about the new system. Negative transfer depends on the degree to which one system has elements that conflict with another. "Integrated" software packages for office automation share many concepts and commands across specific applications. Incompatible packages may include similar commands or menu options that perform different functions or similar functions that are invoked by different commands or options. Observing the user learning a new system, coming back to a system after time, and switching between systems tells us a lot about the design of the software.
The effectiveness of human/computer interface design may also be assessed on the grounds of cognitive processes on the part of the user. It has been suggested that well-designed systems promote an effective mental model for the user of how the system operates. The user possessing this model gains a high-level understanding of how things work and is able to perform tasks and solve problems faster and more effectively than without such a model.
Another cognitive factor has to do with the mental effort expended by the user in performing a task. The designer's goal should be to allow the user to concentrate on higher-level processing rather than on mundane, low-level tasks. For example, a word processor should reduce the mental effort required for scrolling, cutting and pasting, and other mundane functions so that the writer may concentrate on composing the text.
A factor that is increasing in importance is user satisfaction. Satisfaction takes on a number of dimensions itself (Norman & Anderson, 1982). Users are able to assess their own performance and mastery of the system. Users' satisfaction with self is often independent of satisfaction with the system. Many users take pride in the fact that they have mastered a very poorly human-factored system and implemented work-arounds. Users may also assess satisfaction with system attributes that are independent of productivity. Users are often impressed with bells, whistles, and brand names that have nothing to do with usability and performance.
These first two factors aside, users' ratings of subjective satisfaction can be directed to assess human/computer performance and productivity as a whole. Often such ratings are directly related to objective measures of performance. In addition, measures of satisfaction tap factors that cannot be objectively measured. The user may be viewed as a sensitive monitoring instrument of the system. Much care must be taken in assessing user satisfaction and many problems surround such measurements. Nevertheless, user satisfaction is being seen as a key to unlocking the complex interactions at the human/computer interface.
1.1.2 Three Paradigms of Design. It has been pointed out by Anderson and Olson (1985), Sterling (1974), and Holt and Steveson (1977) that human factors considerations must be integrated into the design process from the beginning and as it progresses. Unfortunately, very few systems have been designed with human factors as a high-priority item. The reason for this may be traced to the historical development of machines, and it will prove instructive to review this design process.
Historically, the design of man/machine systems proceeded first with the power unit, the drive mechanism, the mill, and finally control as shown in the top panel of Figure 1.2. The early automobile with its internal combustion engine had a very unfriendly requirement. The driver or an assistant had to get out of the car to crank the engine to get it going. Only later was the starter motor added to give the driver a simple control to start the thing. Consideration of the characteristics and the convenience of the operator came in only as an afterthought. The problem with this method is that fixes are often expensive or impossible. Consequently, the user must often contend with an intractable control interface. This same paradigm unfortunately continues in many areas of hardware and software development today.
A somewhat better paradigm is to establish ergonomic guidelines first as shown in the middle panel of Figure 1.2. They become a set of standard specifications. Software development must then proceed from the guidelines inward to the machine through the control interface. This can also lead to some problems. It is not truly possible to codify ergonomics in a dynamic environment, nor can software development afford to wait for the publication of such handbooks. Consequently, this approach generally leads to standardization on the obsolete and suboptimal.
The paradigm that is advocated here is to develop the ergonomics and the engineering together in a parallel-interacting process as illustrated in the bottom panel of Figure 1.2. This paradigm is being employed by more and more of the industry. Teams of human factors specialists are being linked up with software development teams to provide initial analysis of tasks and user characteristics, to conduct research on prototypes, and to conduct user acceptance tests on the final product.
This approach is particularly important for systems that are heavily dependent on the flow of control at the human/computer interface. The work of human/computer interaction specialists in design rests on two major components: theoretical models of the man/machine interface and methods of applied experimental research. These are discussed in the next two sections.
1.2 A Model of the Human/Computer Interface
Models of the human/computer interface depend heavily on cognitive psychology. The psychological processes of attention, memory, information processing, decision making, and problem solving must be taken into account. One of the most important features in such models is the flow and feedback of information through the interface. The user needs information from the computer and the computer cannot function without information from the user. A major component of this interaction is the flow of control information. The computer gives information to prompt the user for input and the user supplies input that directs the subsequent operations. Smooth operation of the system requires a timely flow of information that is relatively free of error states in the machine and in the user. Error states in the machine can be well defined. Generally machine errors at the interface occur when (a) input values fall outside an allowable range or disagree with the required type, (b) required resources are not available, or (c) a call is made to a non-existent function or location. Error states also occur in the user. In contrast to machine errors, user error states are not well defined, since they arise from subjective states in which, for a given computer output or prompt, the user does not know what to do. User error states are characterized by confusion, lack of understanding, and lack of knowledge of what to do next.
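Because machine error states are well defined, they can be checked mechanically at the interface. The following sketch tests a single request against the three classes of machine error listed above; the function table, range limits, and file name are hypothetical.

    # A sketch of checking the three classes of machine error state at the
    # interface. The function table and range limits are hypothetical.

    import os

    FUNCTIONS = {"find_by_name", "update", "delete"}   # callable operations
    LEGAL_RANGE = range(1, 33)                         # allowable numeric input

    def check_request(function_name, numeric_value, data_file):
        """Return a list of machine error states raised by one request."""
        errors = []
        # (a) input outside the allowable range or of the wrong type
        if not isinstance(numeric_value, int) or numeric_value not in LEGAL_RANGE:
            errors.append("input out of range or wrong type")
        # (b) a required resource is not available
        if not os.path.exists(data_file):
            errors.append("required resource not available")
        # (c) a call to a non-existent function
        if function_name not in FUNCTIONS:
            errors.append("call to non-existent function")
        return errors

    print(check_request("updat", 40, "records.dat"))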
Figure 1.3 shows a schematic model of the flow of information and control at the human/computer interface that has been adapted from Norman et al. (1981). The model first emphasizes that the system is embedded within a task situation. The user, for example, may be monitoring an industrial process, or he may be engaged in information retrieval. The task determines a number of overriding factors, such as the cost of errors, importance of speed, and considerations that define the successful completion of the task. Both the human (represented by a circle) and the machine (represented by a rectangle) reside in environments that provide information, constraints, and contexts. It must be remembered that the user interacts not only with the machine but more fundamentally with the environment. He or she attends to information coming in and generates information going out. A similar picture is shown for the machine which may monitor an environment and retrieve information as well as generate output to the environment.
The non-overlapped area within the circle represents cognitive processes involved in the task that are not directly related to the human/computer interface. The non-overlapped area within the rectangle similarly represents the computer procedures involved in the task that are not directly related to the interface. The overlapping area represents processes that pertain to the interface. These include the mapping of information through keying or other input devices to machine representation of data and the mapping of machine representations of data to information presented on the screen or other output device.
At each point, U-shaped arrows are used to indicate the feedback cycles and reverberating characteristics of information flow through interfaces. We may think of this as handshaking, error checking, and synchronization on the part of the computer or as eye-hand coordination, verification, and timing on the part of the human.
The most important area from the present perspective is the overlapping area of the human/computer interface. It is in this area that flow of control is passed back and forth between the user and the machine. Flow of control will be defined as a sequence of steps in a process that is determined by a set of rules. Control is passed from one step to another such that the next step in the sequence is determined by the outcome of the current step and the rules of control. Within the computer, the program residing in memory determines the flow of control in conjunction with the processor. Within the human, flow of control is thought of as a sequence of mental operations determined by cognitive processes. Although we are only beginning to understand the processes of mental operations, we can observe the products of thought and generate theories about the likelihood of particular responses on the part of the user. In human/computer interaction, flow of control is shared and at times passed back and forth between the user and the machine. In light of our limited knowledge about user thought processes and the added complexity incurred by shared control, it is no wonder that the design of the human/computer interface is no easy matter and that it remains a major area of concern in computer science, cognitive psychology, and ergonomics.
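Flow of control as defined here can be made concrete as a table of rules in which the outcome of the current step selects the next step. The sketch below is a hypothetical fragment of such a rule table for a menu-driven dialogue; the states and outcomes are illustrative only, and in shared control some steps belong to the user (a selection) and some to the machine (a computation or display).

    # A sketch of flow of control as a rule table: the outcome of the current
    # step determines the next step. States and outcomes are hypothetical.

    RULES = {
        ("show_menu", "option_selected"): "run_function",
        ("show_menu", "help_requested"):  "show_help",
        ("show_help", "done"):            "show_menu",
        ("run_function", "success"):      "show_results",
        ("run_function", "failure"):      "show_error",
        ("show_error", "acknowledged"):   "show_menu",
        ("show_results", "acknowledged"): "show_menu",
    }

    def next_step(current_step, outcome):
        """Apply the control rules; control stays put if no rule matches."""
        return RULES.get((current_step, outcome), current_step)

    step = "show_menu"
    for outcome in ["option_selected", "failure", "acknowledged"]:
        step = next_step(step, outcome)
        print(step)   # run_function, show_error, show_menu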
Although the model in its present form is largely conceptual in nature, it helps to delineate the concerns that an ergonomics specialist should have when helping to design the interface. These include the four areas shown in Figure 1.3: (a) the characteristics of the task and the environment, (b) the characteristics of the human cognitive processing, (c) the specifications for the computer processing, and (d) the implementation of the human/computer interface.
1.2.1 Characteristics of Tasks and Environments. It is obvious that different tasks and different environments impose different needs and constraints on human/computer interaction. What is not so obvious is how to meet these requirements. An analysis of the task and environment is the first step. Take, for example, the task of balancing one's checkbook. The task requires data input to the computer, verification of records, and a series of computational steps. The environment includes information about current balance, canceled checks, etc., as well as time and resource constraints that affect the motivation and/or frustration level of the user.
Tasks may be characterized along the following dimensions:
* Simple--Complex. Simple tasks involve few steps with little demand on the user or on the computer. Complex tasks involve many steps and impose high demands on the user and possibly the computer as well. Of course, the task as implemented may allocate simple or complex parts to the human or computer. Writing a novel on a word processor is a complex task for the human and a comparatively straightforward one for the computer. On the other hand, an information retrieval problem may be rather straightforward to the user but comparatively complex for the system.
* Structured--Unstructured. Structured tasks have a preplanned course, whereas unstructured tasks may involve creative planning and redirection. In structured tasks, such as checkbook balancing, the flow of control may be relegated to the computer. On the other hand, for unstructured tasks, such as writing a novel, the user maintains control over most aspects of the task except the mundane operations of computer housekeeping.
Tasks may be characterized by many other dimensions such as the degree to which input vs. output predominates or the extent to which the user is an active vs. passive participant. Chapter 3 will discuss task analysis in greater detail; however, at this point it is sufficient to be aware of the fact that task characteristics impose certain demands on the user and on the system.
Environments are often linked to tasks, but we may also define two characteristics of environments as follows:
* Time Critical--Resource Limited. In many situations the environment imposes a time constraint. For example, the information must be retrieved before the Senate Subcommittee hearing at 10:00 am; the decision to deploy the torpedo must be made in 30 seconds. Such environments have a psychological impact on the user and require thought as to how to implement the human/computer interface to achieve acceptable levels of performance.
* Controllable--Immutable. The user and the system may be able to alter the environment. This may be an inherent part of the task in industrial control. Control of resources adds an additional level of concern to the user and needs to be considered in terms of the cognitive demands on the user. On the other hand, the environment may be immutable in the sense that neither the user nor the computer can have an effect on it.
Environments may also be characterized by a number of other dimensions such as whether they are information rich vs. information scarce, safe vs. hazardous, etc. These characteristics will prove to be important when it comes to the design and evaluation of the user interface particularly as it relates to flow of control.
1.2.2 Characteristics of the Human User. Two views of the user exist. The user is either an extension of the system or the system is an extension of the user. In the first case, the user may, unfortunately, be viewed merely as an input device not unlike an optical scanner or an analog-to-digital input channel. In the second case, the system is seen as providing enhanced memory, processing, and communication abilities to the user. The particular view adopted has strong implications concerning the flow of control between the user and the system and the particular mental operations involved in the cognitive processing of the user.
Eight components of cognitive processing are specified in the model and are indicated by the arrows into and within the circle shown in Figure 1.3. The first arrow at the top left indicates the user's attention to certain input from the task environment such as instructions, data, documentation, etc. The second arrow moving to the right represents problem solving which may involve planning and information processing that occurs before the user inputs information to the computer. The third arrow across the top represents the user's intention for input to the computer. It may involve the formulation of a command or a plan for menu search. The fourth arrow represents the actual response production that transfers information to the computer. The user may type a command, select an option, or point to a screen location.
Take a situation in which the user directs his or her attention to a memo requesting that John Smith's telephone number be changed to 454-6333. The user conceives of the solution to the problem as follows: Find John Smith's file, locate the telephone number in his file, and then change the number. The intended solution must then be implemented in terms of the human/computer interface. The user may find John Smith's file by selecting an option, "find by name" and then typing in the name. To accomplish this the user must generate the response productions to select this option and to type the name. Given that the file is located, the user may select the options "update" and "telephone number" and finally type the new number.
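The same sequence can be expressed as the chain of response productions the user must generate. The sketch below follows the example step by step against a toy record store; the record structure and helper functions are hypothetical and stand in for the actual interface options.

    # A sketch of the interaction sequence described above. The record
    # structure and option-like helper functions are hypothetical.

    records = {"John Smith": {"telephone": "454-1212"}}

    def find_by_name(name):
        return records.get(name)

    def update_field(record, field, new_value):
        record[field] = new_value

    # The user's intended solution, step by step:
    record = find_by_name("John Smith")       # select "find by name", type the name
    if record is None:
        print("not found")                    # e.g., a typographic error in the name
    else:
        update_field(record, "telephone", "454-6333")   # select "update", then "telephone number"
        print("telephone number changed")     # the acknowledgement answers the memo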
The bottom chain of arrows in Figure 1.3 starting from the rightmost arrow into the circle indicates the reverse processing of information originating from the computer and ultimately altering the task environment. The first arrow at the right represents the display of information. Typically, this is information displayed on the screen but may also include displays in other modalities. The next arrow indicates the user's encoding and interpretation of that information. The next arrow moving left represents internal evaluation and cognitive processing of the computer output, and the last arrow shows the final result of the process in supplying an answer to the task environment.
At each stage of interaction a different display is generally shown. In the example above, the first display may list a set of functions. The user interprets this display by encoding the information as meaningful options or messages. The option "find by name" is encoded and then evaluated as the desired function. Feedback is also encoded and evaluated. For example, if the message is "John Smoth not found," the evaluation is that a typographic error was made. Finally, when the change of number is acknowledged, the user may produce an overt answer to the memo indicating that the task was completed.
User characteristics affect the processing at each of these points as they depend on perceptual skills, attention, memory, and motor skills. User characteristics may be grouped into three types:
* Knowledge Characteristics. Users vary in terms of their knowledge about the system. In general, we can no longer say that novice users have little or no knowledge of system operation and that expert users do. Amount of knowledge cannot be considered as a unidimensional attribute. Instead, we must consider more carefully what the user knows about (a) the task domain in terms of semantic and procedural knowledge, (b) the representation of the task domain on the computer, and (c) the computer in terms of semantic and syntactic knowledge.
* Cognitive Characteristics. Users vary in their ability to solve problems, make decisions, and perform mental tasks. The assessment of information-processing capacities and their relationship to performance has been the subject of much work in cognitive psychology (e.g., Hunt, 1978; Sternberg, 1977). An analysis of the particular cognitive components involved in a task should prove useful in system design.
* Skill Characteristics. Users vary in their ability to read and type text, draw graphic images, point at objects, and track moving targets. These skills may be of varying importance in using a system and in many cases require considerable training and practice. It should also be mentioned that new skills are developed by users through extensive practice.
1.2.3 Computer Processing. An analogous set of arrows is shown in the box in Figure 1.3 representing the computer. These pertain to input/output to the environment that is not necessarily a part of the human/computer interface and to internal processing. The arrows that point into and out of the intersection between the human and the computer will be discussed in the next section. In terms of the characteristics of the computer, designers must take into consideration its speed, memory, and processing capacities. However, in the present context we are only interested in how these characteristics manifest themselves at the human/computer interface.
1.2.4 The Human/Computer Interface. The intersection of the circle denoting the human and the box denoting the computer in Figure 1.3 represents the human/computer interface. In the present conceptualization, the interface is an area. The reason for this is to capture the idea that the interface is not merely a surface through which information travels, but rather it is a shared area that includes the user's cognitive model of the system and the system's model of the user. The idea of a shared area is particularly important when it comes to modeling flow of control.
Interactive systems have been designed to provide users with different types of control over the operating system. We can characterize these types in terms of the amount and complexity of information transmitted by the computer in prompting the user for input and the amount and complexity of information transmitted to the computer by the user in directing the next action.
Menu selection provides a highly interactive style of control by listing available options. Menus can convey much information to the user and aid novice users as well as more experienced users. In general, little or no training may be required at the onset depending on how self-explanatory the menus are. However, as the user works with the system, he gains knowledge about the alternatives, the structure, and capability of the program. Menus require some type of selection process that may or may not involve the keyboard. Interaction is structured so that there are rarely points at which the user is functionally locked out of the system. Unfortunately, menus can appear, and in fact be, restrictive in the capabilities offered to the user.
Command languages have always been seen as offering the user the most powerful and flexible control over the system. They allow for complex input on the part of the user but provide little in the way of prompting the user. The two problems with command languages are that substantial training is required and that it is difficult to provide user aids. Attempts have been made to take into consideration the wealth of natural language knowledge of the user and provide natural language command and query systems. These tend to reduce but do not eliminate the need for extensive training.
Software designers working with human/computer interface designers need to determine the optimal level of complexity and flow of control that will best serve the needs of the user. Furthermore, it is often the case that one needs to change the levels of complexity and amount of information flow at different points in the interaction and with different levels of experience. A number of guidelines have been generated as to when one mode or another is appropriate and how to implement a particular mode. Some are based on research and others are often speculative, sometimes based on extant theory in cognitive psychology and otherwise based on the opinion of the writer. In this book, we emphasize empirical research as directed by cognitive theory.
1.3 Research Methods
No matter how reasonable and intuitive a guideline may be, if it is not supported by data, it is mere conjecture. The rational approach to design suffers from two fallacies. First, what's rational to the designer may be idiotic to the user. The designer views the system from a different perspective. A certain feature or attribute may make sense in light of the whole system, but to the user that feature (or lack thereof) makes no sense for the specific task at hand. The second fallacy is that designers are rarely objective about their design. Having spent a number of man-years developing a system, designers may have no end of ego-involvement in it. Empirical studies provide the objective proving ground for claims between competing systems and design features. Fortunately, empirical research does more than settle arguments; it also reveals the importance of design issues. Some aspects of design may have a critical impact on performance, while others may be irrelevant.
Three basic paradigms exist for research on ergonomic issues. These are discussed briefly in the next sections and in greater detail in a subsequent chapter.
1.3.1 Observational Studies. Observational studies are the easiest to conduct. Unfortunately, they are also the most unreliable and open to interpretation. In an observational study, one or several systems may be selected and researchers observe users interacting with the system. Verbal protocols may be elicited in which the user explains what he is doing and why. Time, productivity, and error data may be collected. Analysis may be at a purely verbal descriptive level or based on quantitative measures. Conclusions are tentative at best since researchers have little or no control over conditions. For example, one may conclude that users took longer to perform tasks on System A than on System B; but the tasks may not be comparable, system response time may be different, the groups of users may not be equated, etc. Investigators must thoroughly weigh and rule out alternative explanations.
The major strength of observational studies is their ability to generate hypotheses about design features that can be studied in more controlled environments not open to multiple interpretation.
1.3.2 Survey Studies. Questionnaires provide a structured approach in which the user assesses factors related to the human/computer interface. Users may record objective events or subjective evaluations. Objective events include the number of times they have used a particular system, the number of times a system crash occurred while using the system, and number of tasks completed. In each case, since the events are objective, the researcher could also record these events and compare that record with the user assessments. Although user assessments bear a strong correlational relationship to the actual measure, they are not perfect. User assessments are introspective and, therefore, are subject to the properties of human memory and to biases in reporting. Because of this, one cannot assume that user assessments reflect the true values of the measures. On the other hand, it is quite possible to establish, through empirical verification, the reliability and validity of user responses. Furthermore, when comparisons are between different groups of subjects rather than with an absolute criterion, it may not matter that responses are biased as long as the sources of bias are constant among the groups.
Users may also be asked to make subjective evaluations of system attributes. For example, they may be asked to check statements that apply to the system use (e.g., "Documentation is adequate" or "I do not understand program operation.") or rate system attributes on a 10-point scale (e.g., ease of use, speed, tendency to make errors). When users make subjective evaluations, there is generally no way to compare their responses with objective measures of those attributes. However, evaluations such as overall satisfaction with the system may be statistically related to objective system attributes such as system response time or screen resolution.
The key to measuring subjective evaluations is to ensure reliability and internal consistency of the ratings. By reliability we mean that users display little error in making ratings and that if they rated the same attributes a second time, there would be a relatively strong relationship between the two ratings. By internal consistency we mean that the ratings follow a logical relationship. For example, if subjects are asked to compare three systems, A, B, and C, and they rate A superior to B and B superior to C, then they must rate A superior to C in order to be internally consistent. Without internal consistency, the meaningfulness of the results is in question.
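The transitivity requirement can be checked mechanically once the pairwise judgments are recorded. The sketch below tests a hypothetical set of preference judgments among systems A, B, and C for internal consistency.

    # A sketch of the internal-consistency (transitivity) check described
    # above. The preference data are hypothetical.

    from itertools import permutations

    # prefers[(X, Y)] is True if the subject rated X superior to Y
    prefers = {
        ("A", "B"): True, ("B", "A"): False,
        ("B", "C"): True, ("C", "B"): False,
        ("A", "C"): True, ("C", "A"): False,
    }

    def is_transitive(prefers, systems=("A", "B", "C")):
        """Return True if no triple of judgments violates transitivity."""
        for x, y, z in permutations(systems, 3):
            if prefers.get((x, y)) and prefers.get((y, z)) and not prefers.get((x, z)):
                return False
        return True

    print(is_transitive(prefers))   # True for this set of judgments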
Survey data is useful in describing systems, for detecting strong and weak points, and for suggesting improvements. For example, in a study by Norman (1986) students rated the overall ease of use of hypothetical systems described by lists of positive and negative attributes. The impact of each attribute was scaled relative to all others. Figure 1.4 lists the descriptions of the attributes and graphs the impact of each attribute on the ratings. One of the most important positive attributes, at least for this group of students, was the ability of the system to adapt to different types of users. The most telling negative attribute was having a display described as confusing and difficult to read. Such ratings help to set design priorities.
Survey questionnaires should include demographic data on the user. Typically we want to know the age, sex, work experience, and training of the user. We may also want to include psychometric measures to assess intellectual skills, cognitive functions, and knowledge. Analysis of the data may look for interrelationships among the variables. For example, we may be interested in whether there is a relationship between memory ability and preference for a particular type of software, or between the rated number of errors and rated overall satisfaction with the system. Questionnaires may also be effectively used in conjunction with controlled experimental studies.
1.3.3 Experimental Studies. The major strength of the experimental study is its ability to localize unambiguously an effect in a particular design factor. Experimental design is used to control all variables except those that are being tested. The steps are as follows:
First, sample participants for the study from the population of interest. To the extent that users are a diverse group, it is important to assess the individual differences of the users. For example, one might need to know (a) the level of experience with computers, terminals, etc.; (b) familiarity with the generic task such as accounting, information retrieval, or programming; (c) demographic variables of age and sex; and (d) cognitive measures of analytical skills, verbal and visual memory, reaction time, etc.
Second, one or several design features are selected for study. These are systematically varied in such a way that their impact on performance can be unambiguously assessed. For example, we may be interested in comparing alphabetic vs. random ordering of options in a menu as well as the number of options (e.g., 4, 8, 16, 32); a sketch of such a factorial design follows these steps. Software must be written or altered so that the features are implemented at each level.
Finally, we must select one or several variables to measure. Table 1.1 lists some of the types of variables that may be of interest. The variables must be defined in such a way as to allow valid and reliable measurements to be taken. Typically additional software must be written to capture these measurements.
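The sketch below, referred to in the second step above, enumerates the conditions of such a design: menu ordering (alphabetical vs. random) crossed with the number of options (4, 8, 16, 32). The option labels and menu-building code are hypothetical placeholders for the experimental software, which would also have to log the dependent measures listed in Table 1.1.

    # A sketch of the 2 x 4 factorial manipulation: ordering crossed with
    # menu size. Option labels are hypothetical placeholders.

    import random
    from itertools import product

    ORDERINGS = ("alphabetical", "random")
    MENU_SIZES = (4, 8, 16, 32)

    def build_menu(ordering, size, seed=0):
        """Return one experimental menu for the given condition."""
        options = [f"option-{i:02d}" for i in range(size)]
        if ordering == "random":
            random.Random(seed).shuffle(options)
        else:
            options.sort()
        return options

    # Enumerate the 2 x 4 = 8 conditions of the design.
    for ordering, size in product(ORDERINGS, MENU_SIZES):
        menu = build_menu(ordering, size)
        print(f"{ordering:12s} {size:2d} options, first item: {menu[0]}")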
The experiment must be carefully monitored as it progresses in order to detect flaws in the design and methodology. Once it is completed, statistical analysis is used to sort out the results and give evidence as to the reliability of the findings.
The experimental approach is not without its drawbacks. Experiments are costly and often overrestrictive. The results may not generalize beyond the artificial conditions set in the lab. However, to the extent that the investigator establishes realistic conditions and assesses appropriate dependent measures, the results gain validity.
1.4 Summary
The architecture of the human/computer interface is of concern to designers of computer-based systems. Menu selection as a mode of communication and control is playing an increasing role as part of that architecture. The relationship of ergonomic research and the design process is fundamental in developing guidelines in the use of menu selection. The type of ergonomic research advocated here for design involves a parallel interaction of system designers and researchers as development progresses. A number of ergonomic issues were discussed relating to trade-offs between factors such as speed of performance and error rate, and between the amount of training necessary and asymptotic performance.
A conceptual model of the human/computer interface was advanced that helps to delineate the ergonomic factors into (a) the task environment, (b) the cognitive processing characteristics of the user, (c) the characteristics of the computer, and (d) the flow of information through the human/computer interface. Menu selection as a mode of control involves planning and decision processes on the part of the user. These processes may be facilitated or restricted depending on whether the implementation matches the needs and expectations of the user and the task environment.
In the next chapter we take an analytic look at the various types of menu selection systems. A number of features and design characteristics will be outlined. Many of these are factors that have been the subject of empirical research and are discussed in subsequent chapters.
Among the new architectures of the human/computer interface that specifically deal with flow of control is the design of menu selection systems. Figure 1.1 illustrates several such systems. Users are presented with a list of options from which they can choose and some mechanism by which to indicate their choice. The characteristics of menu selection are that (a) the interaction is, in part, guided by the computer; (b) the user does not have to recall commands from memory, and (c) user response input is generally straight forward. The four examples shown in Figure 1.1 highlight the variety of such menu systems.
In the early days of interactive systems, menu selection seemed to be an intuitively simple solution to the problem of user control. The unfortunate result was that without additional thought, poorly designed systems proliferated. A number of basic questions about menu selection were overlooked. For example, when is menu selection preferable over other forms of interaction such as a command language? How should the menu selection systems be designed? How does the structure of menu selection change the process of control in the mind of the user? How should menu selection systems differ for use by novice and expert users? The answers to these questions and many others are crucial in helping to determine how to design a usable and efficient human/computer interface.
Menu selection is emphasized in this book as a principle mode of control used in conjunction with other modes such as form fill-in, command languages, natural language, and direct manipulation. It is felt by the author that in terms of flow of control, menu selection is emerging as the mode of choice. Other modes come into play to handle different demands on the human/computer interface. Switching from one mode to another is often necessary and must be done gracefully with clear expectation on the part of the user. Specialized modes, their integration with menu selection, and the problems of switching between modes will be discussed as it is appropriate.
In this chapter we will consider the human/computer interface in terms of flow of control. The user has certain tasks to accomplish and, consequently, wants to direct the computer to perform a subset of those tasks. The problem from the user's perspective is knowing what the computer can do and knowing how to direct it to do those tasks. The problem from the system designer's point of view is knowing what functions to implement and how the computer should inform the user about the availability and invocation of these functions. In doing so, the computer must be assigned a certain measure of control over the flow of operations; but ultimately, control rests with the user.
In the next section the process of designing the human/computer interface and its importance will be discussed. Following this a cognitive model of the human/computer interface is proposed that helps to define and specify the issues of how humans work with machines and how machines must be designed so that they are usable by humans. Finally, the use of experimental methods in the design process will be discussed. Empirical research is the mainstay of human factors and ergonomics, and it is important to understand how and why experiments are conducted.
1.1 Research and Design of the Human/Computer Interface
One of the major steps in the software development life cycle is the translation of system requirements into an integrated set of task specifications that define the functional capabilities of the software and hardware (Jensen & Tonis, 1979). The problem is that such requirements are often dictated by analysts with little thought about the end user. Consequently, user's needs and limitations are often not factored into the equation. No matter how well implemented other parts of the system may be, if the human/computer interface is intractable, the system will falter and ultimately fail. Ledgard, Singer, and Whiteside (1981) point out the incredible cost of poor human engineering in the design of interactive systems. They identify four costs:
1. The direct costs of poor design are observed in wasted time and excessive errors. The system itself may have been designed for maximum efficiency in terms of memory management, input/output, and computation but, it may sit idly by as the user ponders an error message or thumbs through documentation looking for the correct command. Menu selection systems may alleviate such wasted time by self-documenting the system and reduce errors by limiting user input to only the set of legal options.
2. Indirect costs are incurred in the time to learn a system. A novice user often has a lot to learn about a system. This typically involves reading manuals, working through tutorials, memorization of commands, and development of performance skills. The time to learn such systems can be a great cost that must be born by either the employee or the employer or both. For the employer, cost of training employees is particularly important when employee turnover is high, often an indirect result itself of poor human engineering due to a third cost.
3. A cost that is paid when users are frustrated and irritated by a poorly designed system. The psychological cost can be great and can result in a number of spin off problems such as low morale, decreased productivity, and high employee turnover rate. It has been said that systems should be fun to use rather than frustrating. They should encourage exploration rather than intimidate users and inhibit use.
4. Finally, this leads to the cost of limited or lack of use. Poorly designed systems, no matter how powerful, will simply not be used. Users will tend to employ only those components that are of use to them. It is typically the case that for systems with 40 plus commands, only about 7 commands show any frequency of use. Limited use is one of the greatest costs because it negates the benefit side of the equation.
Since the turn of the century psychologists and engineers have been interested in studying the generic man/machine problem in order to reduce costs such as these. Research has been conducted under a number of different names conveying the varied intent of the researchers. The term "human engineering," for example, conveys the idea that the user or operator limitations and capabilities may be engineered into the system. He or she is a part of the mechanism. The early time and motion analysts realized that both man and machine could be retrained or redesigned for greater efficiency and compatibility. The term "human factors" arose from World War II when it was realized that equipment was not performing at its rated levels. There was a human factor that introduced error into the system. Other terms such as "biomechanics," "ergonomics," and "psychotechnology" emphasize the physiological and psychological aspects. The more generic term "applied experimental psychology" indicates the importance of systematic practical research methodology but, unfortunately, de-emphasizes theoretical development. For the present discussion, the term "ergonomics" will be used to convey the importance of both theory and research, as well as the idea that we are dealing with systems.
Today a multidisciplinary approach to the human/computer problem is being taken. Research teams include cognitive psychologists, computer scientists, specialists in subject domains such as management, library and information science, medicine, etc., as well as a new breed of specialists in the emerging discipline of human/computer interaction. The multidisciplinary approach is extremely important insofar as menu selection is concerned in that designs generally attempt to implement a task domain on a computer system while satisfying the cognitive constraints of the users. The team approach allows each set of concerns to be represented and voiced.
1.1.1 Issues in Design. In this book we are interested in the issue of user control over automatic processes. The control mechanism that we will consider is that of menu selection. Users select one or several options, levels, or settings on the computer. Communication between the user and the computer takes place via a finite, well-defined set of tokens rather than via an open-ended command language. In the evaluation of the human/computer interface as means of communication and control, a number of global factors may be generated. Some of these are listed in Table 1.1. The acceptable levels of such ergonomic factors and the trade-offs between them depend, of course, on the particular task and the user community.
Table 1.1
Factors to be Considered in the Design of Human/Computer Systems
________________________________________________________________________________
* System Productivity
Applicability of system to task
Number of tasks completed
Quality of output
* Human Performance
Speed of performance
Rate and type of errors
Quality of solutions to problems
* Training time and effectiveness
Time to learn how to use the system
Frequency of reference to documentation
Human retention of commands over time
Transfer of training
* Cognitive Processes
Appropriateness of the mental model
Degree of mental effort
* Subjective satisfaction
Satisfaction with self
Satisfaction with system
Satisfaction with performance
________________________________________________________________________________
From a managerial perspective, productivity is the bottom line. The software must first and foremost be applicable to the task at hand. When this is the case, one may assess the factors of quantity and quality of work performed. Much has been written about productivity and its measurement in the business and management literature; however, in this book we are not primarily interested in productivity at the global level but in the factors that ultimately contribute to this end.
Human performance is an observable factor that is most often used in assessing the ergonomics of a system. Performance is a function of three basic variables: speed, accuracy, and quality. Well-designed systems reduce the time that it takes for the user to perform a task. At the same time, error rate should be kept to a minimum. The severity of errors depends on their type. Some are easily correctable; others are devastating. Finally, a well-designed system should promote optimal solutions to problems. This is the ultimate goal of many systems that involve planning, decision making, design, and information retrieval.
Training time and effectiveness are also extremely important factors. How long does it take for the user to come up to speed? Coming up to speed involves the acquisition of knowledge about the system, skill in using it, and the ability to accommodate changes. Rumelhart and Norman (1978) refer to these three components as accretion (the acquisition of knowledge or accumulation of information), tuning (the modification of categories or adjustment), and restructuring (reinterpreting, reorganizing, or gaining a new understanding of information). Training may emphasize one of these to the exclusion of the others, so that users may learn rote tasks but not be able to figure out how to perform other functions. Furthermore, systems may be designed so as to minimize the need for training. Designers may want a system that brings the novice up to speed very rapidly by reducing the need for accretion, even though it may limit the speed of asymptotic performance. Menu selection systems have often served this purpose by using menus that list and explain all of the options. Speed is reduced due to the transmission and display time of text as well as reading time. Alternatively, one may want to maximize the speed of performance of experienced users even though training time is greatly lengthened. Command languages as well as highly abbreviated menus have accomplished this purpose. However, to make use of command languages, users must spend a considerable amount of time learning the commands and options.
The amount of documentation for the user varies greatly from system to system and depends on the level of the user. More documentation is not necessarily better, and certainly a high frequency of reference to the documentation during use is detrimental to performance. Menu systems have been used to drastically reduce the need to refer to documentation, although in older systems with slow response time this has not been without cost in terms of speed of performance. This may not be a factor in newer systems that allow near-instantaneous access to pull-down or pop-up menu displays.
An often overlooked factor in the use of computers is the human retention of commands over time. Once a user has learned a command or a function to a specified level, is it retained over time? A command that has been learned, but infrequently used, may not be remembered. Each time the user desires to use it, he must consult the documentation. On the other hand, in a menu selection system an infrequently selected option is not lost. However, another retention problem often occurs. The user may forget where an item is located in a complex menu tree and spend an inordinate amount of time searching for it.
Transfer of training has always been an important factor in ergonomics. The question is whether training on one system will reduce the amount of training necessary to learn another system. If it does, there is positive transfer. If it doesn't, there is no transfer. Positive transfer depends on the degree to which one system shares common elements with another system and the degree to which the user is able to restructure the knowledge gained about one system into knowledge about the new system. Negative transfer depends on the degree to which one system has elements that conflict with another. "Integrated" software packages for office automation share many concepts and commands across specific applications. Incompatible packages may include similar commands or menu options that perform different functions or similar functions that are invoked by different commands or options. Observing the user learning a new system, coming back to a system after time away, and switching between systems tells us a lot about the design of the software.
The effectiveness of human/computer interface design may also be assessed on the grounds of the cognitive processes of the user. It has been suggested that well-designed systems promote in the user an effective mental model of how the system operates. The user possessing this model gains a high-level understanding of how things work and is able to perform tasks and solve problems faster and more effectively than without such a model.
Another cognitive factor has to do with the mental effort expended by the user in performing a task. The designer's goal should be to allow the user to concentrate on higher-level processing rather than on mundane, low-level tasks. For example, a word processor should reduce the mental effort required for scrolling, cutting and pasting, and other routine functions so that the writer may concentrate on composing the text.
A factor that is increasing in importance is user satisfaction. Satisfaction takes on a number of dimensions itself (Norman & Anderson, 1982). Users are able to assess their own performance and mastery of the system. Users' satisfaction with self is often independent of satisfaction with the system. Many users take pride in the fact that they have mastered a very poorly human-factored system and implemented work-arounds. Users may also assess satisfaction with system attributes that are independent of productivity. Users are often impressed with bells, whistles, and brand names that have nothing to do with usability and performance.
These first two factors aside, users' ratings of subjective satisfaction can be used to assess human/computer performance and productivity as a whole. Often such ratings are directly related to objective measures of performance. In addition, measures of satisfaction tap factors that cannot be objectively measured. The user may be viewed as a sensitive monitoring instrument of the system. Much care must be taken in assessing user satisfaction, and many problems surround such measurements. Nevertheless, user satisfaction is being seen as a key to unlocking the complex interactions at the human/computer interface.
1.1.2 Three Paradigms of Design. It has been pointed out by Anderson and Olson (1985), Sterling (1974), and Holt and Steveson (1977) that human factors considerations must be integrated into the design process from the beginning and as it progresses. Unfortunately, very few systems have been designed with human factors as a high priority item. The reason for this may be traced to the historical development of machines, and it will prove instructive to review this design process.
Historically, the design of man/machine systems proceeded first with the power unit, the drive mechanism, the mill, and finally the control, as shown in the top panel of Figure 1.2. The early automobile with its internal combustion engine had a very unfriendly requirement. The driver or an assistant had to get out of the car to crank the engine to get it going. Only later was the starter motor added to give the driver a simple control to start the thing. Consideration of the characteristics and the convenience of the operator came in only as an afterthought. The problem with this method is that fixes are often expensive or impossible. Consequently, the user must often contend with an intractable control interface. This same paradigm unfortunately continues in many areas of hardware and software development today.
A somewhat better paradigm is to establish ergonomic guidelines first, as shown in the middle panel of Figure 1.2. These become a set of standard specifications. Software development must then proceed from the guidelines inward to the machine through the control interface. This approach can also lead to problems. It is not truly possible to codify ergonomics in a dynamic environment, nor can software development afford to wait for the publication of such handbooks. Consequently, this approach generally leads to standardization on the obsolete and suboptimal.
The paradigm that is advocated here is to develop the ergonomics and the engineering together in a parallel-interacting process, as illustrated in the bottom panel of Figure 1.2. This paradigm is being employed by more and more of the industry. Teams of human factors specialists are being linked up with software development teams to provide initial analyses of tasks and user characteristics, to conduct research on prototypes, and to conduct user acceptance tests on the final product.
This approach is particularly important for systems that are heavily dependent on the flow of control at the human/computer interface. The work of human/computer interaction specialists in design rests on two major components: theoretical models of the man/machine interface and methods of applied experimental research. These are discussed in the next two sections.
1.2 A Model of the Human/Computer Interface
Models of the human/computer interface depend heavily on cognitive psychology. The psychological processes of attention, memory, information processing, decision making, and problem solving must be taken into account. One of the most important features in such models is the flow and feedback of information through the interface. The user needs information from the computer, and the computer cannot function without information from the user. A major component of this interaction is the flow of control information. The computer gives information to prompt the user for input, and the user supplies input that directs the subsequent operations. Smooth operation of the system requires a timely flow of information that is relatively free of error states in the machine and in the user. Error states in the machine can be well defined. Generally, machine errors at the interface occur when (a) input values fall outside an allowable range or disagree with the required type, (b) required resources are not available, or (c) a call is made to a non-existent function or location. Error states also occur in the user. In contrast to machine errors, user error states are not well defined, since they arise from subjective states in which, for some given computer output or prompt, the user does not know what to do. User error states are characterized by confusion, lack of understanding, and lack of knowledge of what to do next.
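To make the three well-defined machine error states concrete, the following minimal sketch in Python uses hypothetical function names, error names, and validity rules (none drawn from any particular system) to show how each category might be detected at the interface.

from enum import Enum, auto

class MachineError(Enum):
    INVALID_INPUT = auto()         # (a) value out of range or of the wrong type
    RESOURCE_UNAVAILABLE = auto()  # (b) a required resource is not available
    UNKNOWN_FUNCTION = auto()      # (c) call to a non-existent function

AVAILABLE_FUNCTIONS = {"find_by_name", "update_telephone"}

def check_request(function_name, argument, resource_ready):
    """Return a MachineError if the request would fail, or None if it is acceptable."""
    if function_name not in AVAILABLE_FUNCTIONS:
        return MachineError.UNKNOWN_FUNCTION
    if not resource_ready:
        return MachineError.RESOURCE_UNAVAILABLE
    if not isinstance(argument, str) or argument == "":  # illustrative type/range rule
        return MachineError.INVALID_INPUT
    return None

print(check_request("delete_record", "Smith", True))   # MachineError.UNKNOWN_FUNCTION
print(check_request("find_by_name", "", True))         # MachineError.INVALID_INPUT

The point of the sketch is only that machine error states can be enumerated and tested in advance, which is precisely what cannot be done for the subjective error states of the user.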
Figure 1.3 shows a schematic model of the flow of information and control at the human/computer interface that has been adapted from Norman, et al. (1981). The model first emphasizes that the system is embedded within a task situation. The user, for example, may be monitoring an industrial process, or he may be engaged in information retrieval. The task determines a number of overriding factors, such as the cost of errors, importance of speed, and considerations that define the successful completion of the task. Both the human (represented by a circle) and the machine (represented by a rectangle) reside in environments that provide information, constraints, and contexts. It must be remembered that the user interacts not only with the machine but more fundamentally with the environment. He or she attends to information coming in and generates information going out. A similar picture is shown for the machine which may monitor an environment and retrieve information as well as generate output to the environment.
The non-overlapped area within the circle represents cognitive processes involved in the task that are not directly related to the human/computer interface. The non-overlapped area within the rectangle similarly represents the computer procedures involved in the task that are not directly related to the interface. The overlapping area represents processes that pertain to the interface. These include the mapping of information through keying or other input devices to machine representation of data and the mapping of machine representations of data to information presented on the screen or other output device.
At each point, U-shaped arrows are used to indicate the feedback cycles and reverberating characteristics of information flow through interfaces. We may think of this as handshaking, error checking, and synchronization on the part of the computer or as eye-hand coordination, verification, and timing on the part of the human.
The most important area from the present perspective is the overlapping area of the human/computer interface. It is in this area that flow of control is passed back and forth between the user and the machine. Flow of control will be defined as a sequence of steps in a process that is determined by a set of rules. Control is passed from one step to another such that the next step in the sequence is determined by the outcome of the current step and the rules of control. Within the computer, the program residing in memory determines the flow of control in conjunction with the processor. Within the human, flow of control is thought of as a sequence of mental operations determined by cognitive processes. Although we are only beginning to understand the processes of mental operations, we can observe the products of thought and generate theories about the likelihood of particular responses on the part of the user. In human/computer interaction, flow of control is shared and at times passed back and forth between the user and the machine. In light of our limited knowledge about user thought processes and the added complexity incurred by shared control, it is no wonder that the design of the human/computer interface is no easy matter and that it remains a major area of concern in computer science, cognitive psychology, and ergonomics.
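As an illustration of this definition, the following sketch represents flow of control as a rule table that maps the current step and its outcome onto the next step; the step and outcome names are hypothetical and stand for no particular system.

RULES = {
    ("show_menu", "option_selected"): "execute_option",
    ("show_menu", "help_requested"):  "show_help",
    ("show_help", "done"):            "show_menu",
    ("execute_option", "success"):    "show_menu",
    ("execute_option", "error"):      "show_error",
    ("show_error", "acknowledged"):   "show_menu",
}

def next_step(current_step, outcome):
    # The next step is determined by the outcome of the current step and the rules.
    return RULES[(current_step, outcome)]

step = "show_menu"
for outcome in ["option_selected", "error", "acknowledged"]:
    step = next_step(step, outcome)
    print(step)   # execute_option, show_error, show_menu

The computer's side of such a table can be written down exhaustively; the user's side can only be approximated by cognitive theory, which is one reason shared control is difficult to design.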
Although the model in its present form is largely conceptual in nature, it helps to delineate the concerns that an ergonomics specialist should have when helping to design the interface. These include the four areas shown in Figure 1.3: (a) the characteristics of the task and the environment, (b) the characteristics of the human cognitive processing, (c) the specifications for the computer processing, and (d) the implementation of the human/computer interface.
1.2.2 Characteristics of Tasks and Environments. It is obvious that different tasks and different environments impose different needs and constraints on human/computer interaction. What is not so obvious is how to meet these requirements. An analysis of the task and environment is the first step. Take, for example, the task of balancing one's checkbook. The task requires data input to the computer, verification of records, and a series of computational steps. The environment includes information about the current balance, canceled checks, etc., as well as time and resource constraints that affect the motivation and/or frustration level of the user.
Tasks may be characterized along the following dimensions:
* Simple--Complex. Simple tasks involve few steps with little demand on the user or on the computer. Complex tasks involve many steps and impose high demands on the user and possibly the computer as well. Of course, the task as implemented may allocate simple or complex parts to the human or computer. Writing a novel on a word processor is a complex task for the human and a comparatively straightforward one for the computer. On the other hand, an information retrieval problem may be rather straightforward to the user but comparatively complex for the system.
* Structured--Unstructured. Structured tasks have a preplanned course, whereas unstructured tasks may involve creative planning and redirection. In structured tasks, such as checkbook balancing, the flow of control may be relegated to the computer. On the other hand, for unstructured tasks, such as writing a novel, the user maintains control over most aspects of the task except the mundane operations of computer housekeeping.
Tasks may be characterized by many other dimensions such as the degree to which input vs. output predominates or the extent to which the user is an active vs. passive participant. Chapter 3 will discuss task analysis in greater detail; however, at this point it is sufficient to be aware of the fact that task characteristics impose certain demands on the user and on the system.
Environments are often linked to tasks, but we may also define two characteristics of environments as follows:
* Time Critical--Resource Limited. In many situations the environment imposes a time constraint. For example, the information must be retrieved before the Senate Subcommittee hearing at 10:00 am; the decision to deploy the torpedo must be made in 30 seconds. Such environments have a psychological impact on the user and require thought as to how to implement the human/computer interface to achieve acceptable levels of performance.
* Controllable--Immutable. The user and the system may be able to alter the environment. This may be an inherent part of the task in industrial control. Control of resources adds an additional level of concern and needs to be considered in terms of the cognitive demands on the user. On the other hand, the environment may be immutable in the sense that neither the user nor the computer can have an effect on it.
Environments may also be characterized by a number of other dimensions such as whether they are information rich vs. information scarce, safe vs. hazardous, etc. These characteristics will prove to be important when it comes to the design and evaluation of the user interface particularly as it relates to flow of control.
1.2.2 Characteristics of the Human User. Two views of the user exist. The user is either an extension of the system or the system is an extension of the user. In the first case, the user may, unfortunately, be viewed merely as an input device not unlike an optical scanner or an analog-to-digital input channel. In the second case, the system is seen as providing enhanced memory, processing, and communication abilities to the user. The particular view adopted has strong implications concerning the flow of control between the user and the system and the particular mental operations involved in the cognitive processing of the user.
Eight components of cognitive processing are specified in the model and are indicated by the arrows into and within the circle shown in Figure 1.3. The first arrow at the top left indicates the user's attention to certain input from the task environment such as instructions, data, documentation, etc. The second arrow moving to the right represents problem solving which may involve planning and information processing that occurs before the user inputs information to the computer. The third arrow across the top represents the user's intention for input to the computer. It may involve the formulation of a command or a plan for menu search. The fourth arrow represents the actual response production that transfers information to the computer. The user may type a command, select an option, or point to a screen location.
Take a situation in which the user directs his or her attention to a memo requesting that John Smith's telephone number be changed to 454-6333. The user conceives of the solution to the problem as follows: Find John Smith's file, locate the telephone number in his file, and then change the number. The intended solution must then be implemented in terms of the human/computer interface. The user may find John Smith's file by selecting an option, "find by name" and then typing in the name. To accomplish this the user must generate the response productions to select this option and to type the name. Given that the file is located, the user may select the options "update" and "telephone number" and finally type the new number.
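The sequence just described can be sketched as follows; the record store, the old telephone number, and the helper functions are hypothetical and serve only to trace the path of selections named in the example.

records = {"John Smith": {"telephone number": "454-1111"}}   # hypothetical starting data

def find_by_name(name):
    # "find by name": locate the file for the given name, if any
    return records.get(name)

def update(record, field, new_value):
    # "update" followed by a field selection: change the chosen field
    record[field] = new_value

record = find_by_name("John Smith")   # user selects "find by name" and types the name
if record is not None:
    # user selects "update", then "telephone number", then types the new number
    update(record, "telephone number", "454-6333")

print(records["John Smith"]["telephone number"])   # 454-6333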
The bottom chain of arrows in Figure 1.3, starting from the rightmost arrow into the circle, indicates the reverse processing of information originating from the computer and ultimately altering the task environment. The first arrow at the right represents the display of information. Typically, this is information displayed on the screen, but it may also include displays in other modalities. The next arrow indicates the user's encoding and interpretation of that information. The next arrow moving left represents internal evaluation and cognitive processing of the computer output, and the last arrow shows the final result of the process: supplying an answer to the task environment.
At each stage of interaction a different display is generally shown. In the example above, the first display may list a set of functions. The user interprets this display by encoding the information as meaningful options or messages. The option "find by name" is encoded and then evaluated as the desired function. Feedback is also encoded and evaluated. For example, if the message is "John Smoth not found," the evaluation is that a typographic error was made. Finally, when the change of number is acknowledged, the user may produce an overt answer to the memo indicating that the task was completed.
User characteristics affect the processing at each of these points as they depend on perceptual skills, attention, memory, and motor skills. User characteristics may be grouped into three types:
* Knowledge Characteristics. Users vary in terms of their knowledge about the system. In general, we can no longer say that novice users have little or no knowledge of system operation and that expert users do. Amount of knowledge cannot be considered as a unidimensional attribute. Instead, we must consider more carefully what the user knows about (a) the task domain in terms of semantic and procedural knowledge, (b) the representation of the task domain on the computer, and (c) the computer in terms of semantic and syntactic knowledge.
* Cognitive Characteristics. Users vary in their ability to solve problems, make decisions, and perform mental tasks. The assessment of information-processing capacities and their relationship to performance has been the subject of much work in cognitive psychology (e.g., Hunt, 1978; Sternberg, 1977). An analysis of the particular cognitive components involved in a task should prove useful in system design.
* Skill Characteristics. Users vary in their ability to read and type text, draw graphic images, point at objects, and track moving targets. These skills may be of varying importance in using a system and in many cases require considerable training and practice. It should also be mentioned that new skills are developed by users through extensive practice.
1.2.3 Computer Processing. An analogous set of arrows is shown in the box in Figure 1.3 representing the computer. These pertain to input/output to the environment that is not necessarily a part of the human/computer interface and to internal processing. The arrows that point into and out of the intersection between the human and the computer will be discussed in the next section. In terms of the characteristics of the computer, designers must take into consideration its speed and memory and processing capacities. However, in the present context we are only interested in how these characteristics manifest themselves at the human/computer interface.
1.2.4 The Human/Computer Interface. The intersection of the circle denoting the human and the box denoting the computer in Figure 1.3 represents the human/computer interface. In the present conceptualization, the interface is an area. The reason for this is to capture the idea that the interface is not merely a surface through which information travels, but rather it is a shared area that includes the user's cognitive model of the system and the system's model of the user. The idea of a shared area is particularly important when it comes to modeling flow of control.
Interactive systems have been designed to provide users with different types of control over the operating system. We can characterize these types in terms of the amount and complexity of information transmitted by the computer in prompting the user for input and the amount and complexity of information transmitted to the computer by the user in directing the next action.
Menu selection provides a highly interactive style of control by listing available options. Menus can convey much information to the user and aid novice users as well as more experienced users. In general, little or no training may be required at the outset, depending on how self-explanatory the menus are. However, as the user works with the system, he gains knowledge about the alternatives, the structure, and the capabilities of the program. Menus require some type of selection process that may or may not involve the keyboard. Interaction is structured so that there are rarely points at which the user is functionally locked out of the system. Unfortunately, menus can appear, and in fact be, restrictive in the capabilities offered to the user.
Command languages have always been seen as offering the user the most powerful and flexible control over the system. They allow for complex input on the part of the user but provide little in the way of prompting the user. The two problems with command languages are that substantial training is required and that it is difficult to provide user aids. Attempts have been made to take into consideration the wealth of natural language knowledge of the user and provide natural language command and query systems. These tend to reduce but do not eliminate the need for extensive training.
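The contrast between the two styles of control can be sketched as follows; the option list, command verbs, and function names are hypothetical and only illustrate the difference in how much information each side must transmit.

OPTIONS = ["find by name", "update", "quit"]

def menu_select(choice_index):
    # Menu style: the computer transmits the full list; the user returns only an index.
    for i, option in enumerate(OPTIONS, start=1):
        print(f"{i}. {option}")
    if 1 <= choice_index <= len(OPTIONS):
        return OPTIONS[choice_index - 1]
    return "invalid selection"

def command_parse(command_line):
    # Command style: the computer transmits little; the user must recall the syntax unaided.
    verb, _, argument = command_line.partition(" ")
    if verb in ("find", "update", "quit"):
        return (verb, argument)
    return ("unknown command", command_line)

print(menu_select(1))               # find by name
print(command_parse("find Smith"))  # ('find', 'Smith')

In the menu style most of the burden of information transmission falls on the computer; in the command style it falls on the user, which is why training demands differ so sharply between the two.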
Software designers working with human/computer interface designers need to determine the optimal level of complexity and flow of control that will best serve the needs of the user. Furthermore, it is often the case that one needs to change the levels of complexity and the amount of information flow at different points in the interaction and with different levels of experience. A number of guidelines have been generated as to when one mode or another is appropriate and how to implement a particular mode. Some are based on research; others are speculative, grounded at times in extant theory in cognitive psychology and at other times only in the opinion of the writer. In this book, we emphasize empirical research as directed by cognitive theory.
1.3 Research Methods
No matter how reasonable and intuitive a guideline may be, if it is not supported by data, it is mere conjecture. The rational approach to design suffers from two fallacies. First, what's rational to the designer may be idiotic to the user. The designer views the system from a different perspective. A certain feature or attribute may make sense in light of the whole system, but to the user that feature (or lack thereof) makes no sense for the specific task at hand. The second fallacy is that designers are rarely objective about their designs. Having spent a number of man-years developing a system, they may have no end of ego-involvement in it. Empirical studies provide the objective proving ground for claims between competing systems and design features. Fortunately, empirical research does more than settle arguments; it also reveals the relative importance of design issues. Some aspects of design may have a critical impact on performance, while others may be irrelevant.
Three basic paradigms exist for research on ergonomic issues. These are discussed briefly in the next sections and in greater detail in a subsequent chapter.
1.3.1 Observational Studies. Observational studies are the easiest to conduct. Unfortunately, they are also the most unreliable and open to interpretation. In an observational study, one or several systems may be selected, and researchers observe users interacting with the system. Verbal protocols may be elicited in which the user explains what he is doing and why. Time, productivity, and error data may be collected. Analysis may be at a purely verbal, descriptive level or based on quantitative measures. Conclusions are tentative at best since researchers have little or no control over conditions. For example, one may conclude that users took longer to perform tasks on System A than on System B; but the tasks may not be comparable, system response time may be different, the groups of users may not be equated, etc. Investigators must thoroughly weigh and rule out alternative explanations.
The major strength of observational studies is their ability to generate hypotheses about design features that can be studied in more controlled environments not open to multiple interpretation.
1.3.2 Survey Studies. Questionnaires provide a structured approach in which the user assesses factors related to the human/computer interface. Users may record objective events or subjective evaluations. Objective events include the number of times they have used a particular system, the number of times a system crash occurred while using the system, and the number of tasks completed. In each case, since the events are objective, the researcher could also record these events and compare that record with the user assessments. Although user assessments bear a strong correlational relationship to the actual measures, they are not perfect. User assessments are introspective and, therefore, are subject to the properties of human memory and to biases in reporting. Because of this, one cannot assume that user assessments reflect the true values of the measures. On the other hand, it is quite possible to establish, through empirical verification, the reliability and validity of user responses. Furthermore, when comparisons are between different groups of subjects rather than with an absolute criterion, it may not matter that responses are biased as long as the sources of bias are constant among the groups.
Users may also be asked to make subjective evaluations of system attributes. For example, they may be asked to check statements that apply to the system use (e.g., "Documentation is adequate" or "I do not understand program operation.") or rate system attributes on a 10-point scale (e.g., ease of use, speed, tendency to make errors). When users make subjective evaluations, there is generally no way to compare their responses with objective measures of those attributes. However, evaluations such as overall satisfaction with the system may be statistically related to objective system attributes such as system response time or screen resolution.
The key to measuring subjective evaluations is to ensure reliability and internal consistency of the ratings. By reliability we mean that users display little error in making ratings and that if they rated the same attributes a second time, there would be a relatively strong relationship between the two sets of ratings. By internal consistency we mean that the ratings follow a logical relationship. For example, if subjects are asked to compare three systems, A, B, and C, and they rate A superior to B and B superior to C, then they must rate A superior to C in order to be internally consistent. Without internal consistency, the meaningfulness of the results is in question.
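The transitivity requirement can be checked mechanically, as in the following sketch; the preference judgments shown are hypothetical.

from itertools import permutations

def is_transitive(prefers):
    # prefers[(x, y)] is True if the rater judged system x superior to system y.
    systems = {s for pair in prefers for s in pair}
    for a, b, c in permutations(systems, 3):
        if prefers.get((a, b)) and prefers.get((b, c)) and not prefers.get((a, c)):
            return False
    return True

ratings = {("A", "B"): True, ("B", "C"): True, ("A", "C"): True}
print(is_transitive(ratings))   # True: A > B, B > C, and A > C

ratings[("A", "C")] = False     # an internally inconsistent set of judgments
print(is_transitive(ratings))   # False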
Survey data is useful in describing systems, for detecting strong and weak points, and for suggesting improvements. For example, in a study by Norman (1986) students rated the overall ease of use of hypothetical systems described by lists of positive and negative attributes. The impact of each attribute was scaled relative to all others. Figure 1.4 lists the descriptions of the attributes and graphs the impact of each attribute on the ratings. One of the most important positive attributes, at least for this group of students, was the ability of the system to adapt to different types of users. The most telling negative attribute was having a display described as confusing and difficult to read. Such ratings help to set design priorities.
Survey questionnaires should include demographic data on the user. Typically we want to know the age, sex, work experience, and training of the user. We may also want to include psychometric measures to assess intellectual skills, cognitive functions, and knowledge. Analysis of the data may look for interrelationships among the variables. For example, we may be interested in whether there is a relationship between memory ability and preference for a particular type of software, or between the rated number of errors and rated overall satisfaction with the system. Questionnaires may also be effectively used in conjunction with controlled experimental studies.
1.3.3 Experimental Studies. The major strength of the experimental study is its ability to localize unambiguously an effect in a particular design factor. Experimental design is used to control all variables except those that are being tested. The steps are as follows:
First, participants for the study are sampled from the population of interest. To the extent that users are a diverse group, it is important to assess the individual differences of the users. For example, one might need to know (a) the level of experience with computers, terminals, etc.; (b) familiarity with the generic task such as accounting, information retrieval, or programming; (c) demographic variables of age and sex; and (d) cognitive measures of analytical skills, verbal and visual memory, reaction time, etc.
Second, one or several design features are selected for study. These are systematically varied in such a way that their impact on performance can be unambiguously assessed. For example, we may be interested in comparing alphabetic vs. random ordering of options in a menu as well as the number of options (e.g., 4, 8, 16, 32). Software must be written or altered so that the features are implemented at each level.
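For example, the ordering and menu-size comparison just mentioned could be laid out as a 2 x 4 factorial design, as in the following sketch; the assignment scheme is hypothetical and ignores counterbalancing details.

from itertools import product
from random import shuffle

ORDERINGS = ["alphabetic", "random"]
MENU_SIZES = [4, 8, 16, 32]

conditions = list(product(ORDERINGS, MENU_SIZES))   # the 8 cells of the 2 x 4 design

def assign_participants(n_participants):
    # Cycle participants through a shuffled copy of the condition list.
    order = conditions[:]
    shuffle(order)
    return [order[i % len(order)] for i in range(n_participants)]

for participant, condition in enumerate(assign_participants(16), start=1):
    print(participant, condition)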
Finally, we must select one or several variables to measure. Table 1.1 lists some of the types of variables that may be of interest. The variables must be defined in such a way as to allow valid and reliable measurements to be taken. Typically additional software must be written to capture these measurements.
The experiment must be carefully monitored as it progresses in order to detect flaws in the design and methodology. Once it is completed, statistical analysis is used to sort out the results and give evidence as to the reliability of the findings.
The experimental approach is not without its drawbacks. Experiments are costly and often overrestrictive. The results may not generalize beyond the artificial conditions set in the lab. However, to the extent that the investigator establishes realistic conditions and assesses appropriate dependent measures, the results gain validity.
1.4 Summary
The architecture of the human/computer interface is of concern to designers of computer-based systems. Menu selection as a mode of communication and control is playing an increasing role as part of that architecture. The relationship of ergonomic research to the design process is fundamental in developing guidelines for the use of menu selection. The type of ergonomic research advocated here involves a parallel interaction of system designers and researchers as development progresses. A number of ergonomic issues were discussed relating to trade-offs between factors such as speed of performance and error rate, and between the amount of training necessary and asymptotic performance.
A conceptual model of the human/computer interface was advanced that helps to delineate the ergonomic factors into (a) the task environment, (b) the cognitive processing characteristics of the user, (c) the characteristics of the computer, and (d) the flow of information through the human/computer interface. Menu selection as a mode of control involves planning and decision processes on the part of the user. These processes may be facilitated or restricted depending on whether the implementation matches the needs and expectations of the user and the task environment.
In the next chapter we take an analytic look at the various types of menu selection systems. A number of features and design characteristics will be outlined. Many of these are factors that have been the subject of empirical research and are discussed in subsequent chapters.