[Image: Space Shuttle Endeavour flight deck]

Although not something the Skitka lab is actively exploring at the moment, another area that Dr. Skitka has studied (with her collaborator Kathy Mosier and others) is automation bias. Automation bias refers to a specific class of errors people tend to make in highly automated decision-making contexts, in which many decisions are handled by automated aids (e.g., computers) and the human actor is largely present to monitor ongoing tasks. Examples of such contexts include the monitoring of nuclear power reactors, automatic pilots, and even spell-checking programs. Omission errors occur when the human decision maker does not notice an automated decision aid failure (e.g., when a spell checker fails to alert the user to a misspelled word). Commission errors occur when the human decision maker acts on an erroneous directive from an automated decision aid (e.g., when a spell checker suggests an incorrectly spelled word to replace a correctly spelled one). Our research documented the existence and relative prevalence of automation bias in both student and professional samples, and examined various context effects and potential correctives. This research was supported by NASA-Ames grant NCC-2-237. Abstracts from some representative examples of this program of research are provided below.
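The omission/commission taxonomy can be made concrete with a small sketch. The following Python toy (purely illustrative and not part of the studies; every name and data point here is hypothetical) labels a monitoring trial as an omission error when a real event goes unflagged by the aid and unnoticed by the human, and as a commission error when the human follows an aid's false alarm:

    # Hypothetical illustration of the omission/commission error taxonomy.
    from dataclasses import dataclass

    @dataclass
    class Trial:
        event_occurred: bool      # did a real system irregularity happen?
        aid_flagged_event: bool   # did the automated aid signal an event?
        human_responded: bool     # did the human operator act?

    def classify_error(trial: Trial) -> str:
        """Label a trial using the omission/commission taxonomy above."""
        if trial.event_occurred and not trial.aid_flagged_event and not trial.human_responded:
            # The aid stayed silent about a real event; the human missed it too.
            return "omission error"
        if not trial.event_occurred and trial.aid_flagged_event and trial.human_responded:
            # The aid raised a false alarm and the human followed it anyway.
            return "commission error"
        return "no automation-bias error"

    # Example: one missed event, one followed false alarm, one correct response.
    trials = [
        Trial(event_occurred=True,  aid_flagged_event=False, human_responded=False),
        Trial(event_occurred=False, aid_flagged_event=True,  human_responded=True),
        Trial(event_occurred=True,  aid_flagged_event=True,  human_responded=True),
    ]
    for t in trials:
        print(classify_error(t))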

Mosier, K. L., Skitka, L. J., Dunbar, M., & McDonnell, L. (2001). Air crews and automation bias: The advantages of teamwork? International Journal of Aviation Psychology, 11, 1-14.

A series of recent studies on automation bias, the use of automation as a heuristic replacement for vigilant information seeking and processing, has investigated omission and commission errors in highly automated decision environments. Most of the research on this phenomenon has been conducted in a single-person performance configuration. This study was designed to follow up on that research to investigate whether the error rates found with single pilots and with teams of students would hold in the context of an aircraft cockpit, with a professional aircrew. In addition, this study also investigated the efficacy of possible interventions involving explicit automation bias training and display prompts to verify automated information. Results demonstrated the persistence of automation bias in crews compared with solo performers. No effects were found for either training or display prompts. Pilot performance during the experimental legs was most highly predicted by performance on the control leg and by event importance. The previously found phantom memory phenomenon associated with a false engine fire event persisted in crews.

Skitka, L. J., Mosier, K. L., & Burdick, M. (2000). Accountability and automation bias. International Journal of Human-Computer Studies, 52, 701-717.

Although generally introduced to guard against human error, automated devices can fundamentally change how people approach their work, which in turn can lead to new and different kinds of error. The present study explored the extent to which errors of omission (failures to respond to system irregularities or events because automated devices fail to detect or indicate them) and commission (when people follow an automated directive despite contradictory information from other, more reliable sources because they either fail to check or discount that information) can be reduced under conditions of social accountability. Results indicated that making participants accountable for either their overall performance or their decision accuracy led to lower rates of “automation bias”. Errors of omission proved to be the result of cognitive vigilance decrements, whereas errors of commission proved to be the result of a combination of a failure to take information into account and a belief in the superior judgement of automated aids.

Skitka, L. J., Mosier, K. L., Burdick, M., & Rosenblatt, B. (2000). Automation bias and errors: Are crews better than individuals? International Journal of Aviation Psychology, 10, 85-97.

The availability of automated decision aids can sometimes feed into the general human tendency to travel the road of least cognitive effort. Is this tendency toward “automation bias” (the use of automation as a heuristic replacement for vigilant information seeking and processing) ameliorated when more than one decision maker is monitoring system events? This study examined automation bias in two-person crews versus solo performers under varying instruction conditions. Training that focused on automation bias and associated errors successfully reduced commission, but not omission, errors. Teams and solo performers were equally likely to fail to respond to system irregularities or events when automated devices failed to indicate them, and to incorrectly follow automated directives when they contradicted other system information.

Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51, 991-1006.

Computerized system monitors and decision aids are increasingly common additions to critical decision-making contexts such as intensive care units, nuclear power plants, and aircraft cockpits. These aids are introduced with the ubiquitous goal of “reducing human error”. The present study compared error rates in a simulated flight task with and without a computer that monitored system states and made decision recommendations. Participants in non-automated settings out-performed their counterparts with a very, but not perfectly, reliable automated aid on a monitoring task. Participants with an aid made errors of omission (missed events when not explicitly prompted about them by the aid) and commission (did what an automated aid recommended, even when it contradicted their training and other 100% valid and available indicators). Possible causes and consequences of automation bias are discussed.
