Ask any experienced dev and they will tell you: in attempting to solve almost any problem with software, we are liable to create others.

To better understand this dynamic, I read (ok, skimmed) Robert Hoffman and Laura Militello’s lengthy Perspectives on Cognitive Task Analysis, an acronym-heavy technical survey peppered with vignettes of industrial snafus and overcluttered fighter pilot displays.

In it, I found an excellent summary of Lisanne Bainbridge’s often-cited Ironies of Automation paper from 1983.

And given that an analysis of nuclear plant procedure preceded this section, I decided to illustrate these ironies with GIFs of America’s favorite nuclear plant operator – Homer J. Simpson.

So without further ado, the ironies of automation.

Automation necessitates monitoring…

One consequence of automation is that the human is actually given a nearly impossible task. If a process can be fully specified, a computer can make the complex decisions more quickly and effectively than a human, and the operator is left with the task of determining whether that automation is working properly.

The monitor needs to know what the correct behavior of the process should be. … Such knowledge requires special training or special displays … [there may be] no way in which the human operator can check in real-time that the computer is following its rules correctly.

…encourages deskilling

A second consequence is that skills deteriorate when they are not used. As we know from studies of vigilance, “an operator will not monitor the automatics effectively if they have been operating acceptably for a long period.” The operator has less opportunity to explore and understand how the process is working.

An operator will only be able to generate successful new strategies for unusual situations if he has an adequate knowledge of the process … the operator has in his head not raw data about the process state but the results of making predictions and decisions about the process which will be useful in future situations.

Such knowledge develops only through practice with feedback, and the monitoring role often precludes that.

…but requires skilled intervention

A third consequence involves what happens when an automated process goes awry.

If the human is not involved in on-line control he will not have a detailed knowledge of the current state of the system. … When manual take-over is needed there is likely to be something wrong with the process so that unusual actions will be needed to control it, and one can argue that the operator needs to be more rather than less skilled. … By taking away the easy parts of his task, automation can make the more difficult parts of the human operator’s task more difficult.

…while enabling poor supervision

Monitoring is often predicated on the notion that the operator can call in specialized expertise in unusual situations. Here too is an irony: “The supervisor too will not be able to take over if he has not been reviewing his relevant knowledge or practicing a crucial skill.”

…and demoralizing an overly liable operator.

In sum, the monitor has a job that is at once very boring and very responsible, “yet there is no opportunity to acquire or maintain the qualities required to handle the responsibility … [when] the job is ‘de-skilled’ by being reduced to monitoring.” Bainbridge cited studies showing that job satisfaction is higher and stress lower when workers are actively engaged in the control of processes that are complex yet highly controllable.

Alarms only help so much…

Potential solutions to these conundrums have themselves resulted in ironies of automation. One approach is to create alarms and specialized displays for use in certain kinds of unusual situations. Catastrophic problems can be easy to detect.

However, the trends that show a path to failure are not always obvious; displays that are ideal for normal situations may disguise abnormal ones.

…because automation encourages routine.

Furthermore, automated systems work constantly to correct deviations from the norm. By the time an alarm sounds or a catastrophic break has occurred, the underlying trend may already be beyond the capacity of the human monitor to understand and act on quickly. This is compounded by the fact that the operator will be most practiced in using the displays built for routine operations and routine monitoring. And if the human operator does not believe or agree with the computer, he may be unable to trace back what it was that the computer did.
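To make that masking effect concrete, here is a toy sketch (mine, not Bainbridge’s or the book’s; every name and number is invented for illustration): a simple proportional controller compensates for a slowly worsening leak, so the level on the operator’s display stays close to the setpoint right up until the pump saturates.

```python
# Toy illustration, not from the source: a proportional controller masks a
# slowly developing fault until the actuator saturates, at which point the
# operator inherits a problem that has been building the whole time.

SETPOINT = 50.0   # the level the operator's display treats as "normal"
PUMP_MAX = 2.0    # actuator limit: the most the automation can compensate
GAIN = 1.0        # proportional gain of the controller

level = SETPOINT  # current tank level
leak = 0.0        # hidden disturbance that grows a little every step

for step in range(200):
    leak += 0.02                                               # fault develops gradually
    pump = min(max(GAIN * (SETPOINT - level), 0.0), PUMP_MAX)  # clamped controller output
    level = max(level + pump - leak, 0.0)                      # toy tank dynamics
    if step % 40 == 0 or step == 199:
        print(f"step={step:3d}  level={level:6.2f}  pump={pump:4.2f}  leak={leak:4.2f}")
```

For the first hundred or so steps the display reads within a couple of units of the setpoint; once the pump hits its limit, the collapse arrives all at once, which is exactly the situation the passage above describes.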

Another approach is to create displays that are designed to match the operator’s level of skill […] or proficiency level (e.g., trainee versus journeyman versus expert). In theory, the computer could detect the level of skill or strategic style of the operator and adjust the display accordingly. Bainbridge argued that such adaptive displays might confuse rather than help:

The change between knowledge-based thinking and “reflex” reaction is not solely a function of practice, but also depends on the uncertainty of the environment, so that the same task elements may be done using different types of skill at different times. … We do not know how confused operators would be by display changes which were not under their own control … although operators evidently do think at different levels of complexity and abstraction at different times, it is not clear that they would be able to use, or choose, many different displays under time stress.

…and encourage poor comprehension.

Another irony that stems from the creation of better interfaces links back to the issue of the degradation of knowledge:

The more processing for meaning that some data has received, the more effectively it is remembered. This makes one wonder how much the operator will learn about the structure of the process if information about it is presented so successfully that he does not have to think about it to take it in. It certainly would be ironic if we find that the most compatible display is not the best display to give the operator after all! A highly compatible display [that] supports rapid reactions [may not support] acquisition of the knowledge and thinking skills needed in abnormal conditions.

Repair can require improvisation…

There are also ironies that result from training.

It is inadequate to expect the operator to react to unfamiliar events solely by consulting operating procedures. These cannot cover all the possibilities, so the operator is expected to monitor them and fill in the gaps. However, it is ironic to train operators in following instructions and then put them in the system to provide intelligence.

…and simulation can only help so much.

High-fidelity simulations can help workers maintain skills and can provide opportunities to practice some nonroutine situations, but they cannot help in dealing with unknown faults and complex failures having multiple causes that cannot be anticipated (and hence cannot be simulated). A final irony is that “the most successful automated systems, with rare need for manual intervention … may need the greatest investment in human operator training.”

tldr: Automation adds complexity and that means more challenging decisions.

Thus, for a number of reasons, the human needs to be able to understand and follow the operations of the automation. It is, of course, ironic that an automated system created to help the human cope with the complexity of the process being controlled ends up forcing the human to understand the complexities of the automated system itself. This can make human performance worse in a number of ways. For instance, when the operator does not understand or trust the automation, he will attempt to make control decisions anyway, and the additional task of monitoring the automation adds to his workload, which is ironic, because automation is intended to reduce workload.

Bainbridge cited instances in which automation does help, such as aircraft autopilots that free the pilot from online control and thus allow the pilot to think about challenges or abnormalities. In such cases, the human knows which process the computer is thinking about and (to some degree) what the computer is trying to accomplish. Good interfaces are ones in which key types of information are presented in dedicated displays (e.g., one display of process plant layout and another of process functionality). “Operators should not have to page between displays to obtain information about abnormal states in parts of the process other than the one they are currently thinking about, nor between displays giving information needed within a single decision process.”

Does your automation encourage better decisions – or worse?

For a more patient read, consider Adrian Colyer’s take.
