The Human-Machine Interface in Anesthesiology: Corollaries and Lessons Learned From Aviation and Crewed Spaceflight
2020; Lippincott Williams & Wilkins; Volume 130, Issue 5. Language: English
DOI: 10.1213/ane.0000000000004628
ISSN: 1526-7598
Authors: Craig S. Jabaley, Grant C. Lynde, Mark Caridi-Scheible, Vikas N. O'Reilly-Shah
Abstract

The modern practice of anesthesiology was built on incremental technological advances resulting in substantial patient safety improvements. Seeking to continue this trend, our specialty has looked to the aerospace industry as an exemplar of best practices, including those related to technology and the human-machine interface. In comparison, anesthesiology is on a parallel but lagged technological journey. The Wright Brothers relied entirely on human senses during their inaugural flight in 1903, but high-quality, standardized flight instruments rapidly followed. The first autopilot system, introduced in 1912, used instrument output to manipulate aerodynamic control surfaces. Contemporaneous advances in anesthesiology were comparatively more rudimentary, such as the development of orotracheal intubation techniques between approximately 1895 and 1913. As early as the 1950s, perceptive and cognitive weaknesses inherent to the human condition were recognized as potential barriers to aerospace advancements. Future National Aeronautics and Space Administration administrator Richard Horner noted that human "reasoning, judgment, and flexibility of response" were the raison d'etre for having pilots at the helm, but he also lamented that the human link in the chain of aircraft safety is the element that "improves the least in successive generations."1,2 In celebration of the 50th anniversary of the successful Apollo 11 mission to the surface of the moon, the time is ripe to reflect on how the history of aviation and crewed spaceflight offers lessons and insight that can inform modern parallel efforts toward automation in anesthesiology.

OPEN- AND CLOSED-LOOP CONTROL SYSTEMS IN ANESTHESIOLOGY

Maintenance of physiological homeostasis and anesthetic depth is a core responsibility of the anesthesiologist.
Consider blood pressure as an illustrative example: as we observe deviation from baseline during a case, we mentally reconcile concurrent clinical circumstances with the direction, magnitude, and timing of the deviation. Myriad and potentially subtle factors then influence us to accept the current state, obtain new measurements or data, or intervene. In the language of control system theory, we serve as feedback controllers, comparing the value of a process variable against a desired set point and applying a control signal if needed. Intermittent vasopressor boluses would be akin to an "on-off," or hysteresis, controller (eg, a thermostat). As will be intuitively familiar to the practicing anesthesiologist, this type of on-off control signal, akin to a phenylephrine bolus, may lead to over- or undercorrection against the desired set point, or blood pressure in this example. Control systems theory terms this an oscillating error around the desired set point. That error is governed by system inertia (the lag between a control signal input and changes in the output) and by the hysteresis gap (the dead band separating the thresholds at which the control signal switches on and off). In contrast to closed-loop control, open-loop control systems rely on externally developed models, rather than measured feedback, to inform the control signal. Target-controlled infusion (TCI) is one example clinically familiar to many anesthesiologists outside the United States. TCI uses population-derived pharmacokinetic models for computer-controlled drug delivery to achieve an (unmeasured) target concentration of intravenous anesthetic and has seen broad worldwide adoption.3 In anesthesia, such systems remain critically dependent on a human-in-the-loop: clinical judgment is required to set the desired end point and monitor patient response. While TCI provides a degree of anesthetic stability, dynamic changes in situation and requirements demand clinician input. That said, we must consider whether there is a way forward with closed-loop control systems in anesthesiology.
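The oscillating error described above can be made concrete with a toy simulation of on-off control. This is a minimal sketch under invented assumptions: the linear "patient" model, drift, bolus magnitude, and decay rate are hypothetical numbers chosen for illustration and carry no pharmacological meaning.

```python
# Toy "on-off" (hysteresis) controller for mean arterial pressure (MAP),
# analogous to intermittent phenylephrine boluses. All numbers are
# invented for illustration and carry no clinical meaning.

def simulate_on_off(steps=120, target=65.0, dead_band=5.0):
    """Bolus only when MAP falls below (target - dead_band); each bolus
    raises MAP over several steps and then decays, so the controlled
    variable oscillates around the set point instead of settling on it."""
    map_value = 70.0        # starting MAP, mm Hg
    drift = -0.8            # untreated MAP drifts downward each step
    bolus_effect = 0.0      # active drug effect, mm Hg per step
    history = []
    for _ in range(steps):
        # Control signal: fire a bolus only below the lower hysteresis
        # threshold, and only once the prior bolus has worn off.
        if map_value < target - dead_band and bolus_effect <= 0.1:
            bolus_effect = 4.0
        map_value += drift + bolus_effect
        bolus_effect *= 0.6  # exponential decay models drug offset (lag)
        history.append(map_value)
    return history

trace = simulate_on_off()
# After a settling-in period, the trace cycles between roughly 59 and 65
# mm Hg: over- and undershoot around the set point, ie, oscillating error.
```

Tightening the dead band or shrinking the bolus in this sketch narrows, but does not eliminate, the oscillation, which is the intuition behind preferring continuous (eg, proportional) control when stability matters.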
We can look to the aerospace industry for successful examples of closed-loop control systems to understand design, testing, and implementation considerations.

LESSONS IN INDEPENDENT CONTROL SYSTEMS AND AUTHORITY FROM AVIATION

Early in the history of aircraft design, stability and agility were opposing goals for engineers. Features that caused an aircraft to return naturally to straight-and-level flight after light turbulence also reduced maneuverability. Later, the increased power afforded by jet engines enabled supersonic flight with attendant stability challenges. Pilots were found to overcompensate easily when correcting in-flight instabilities; within the industry, this was termed pilot-induced oscillation (PIO). Within anesthesiology, analogous provider-induced oscillation can be seen, for example, in bolus-based blood pressure management. PIO considerations came to a head during the development and operation of the X-15, a piloted hypersonic flight test platform and the first operational spaceplane. After launching from a B-52 bomber at altitude, it could exit the atmosphere under rocket power and would land unpowered. Piloting the X-15 proved challenging owing to limited visibility, encumbrance from the necessary pressure suit, and unusual flight characteristics. To control the plane both in the atmosphere and in the lower reaches of space, the X-15 required traditional aerodynamic control surfaces within the atmosphere and thrusters (ie, a reaction control system) beyond it. A sophisticated closed-loop electronic stability augmentation system (SAS) enabled predictable control responses during all phases of flight, served to reduce PIO, and maintained the precise attitude control required for atmospheric reentry.
While the X-15 program was considered a success, it is worth noting that maladaptive output from the SAS under unusual flight conditions led to the program's single pilot fatality in 1967, and the program was closed the following year. Looking ahead, these systems matured rapidly such that the Space Shuttle orbiter was taken through reentry and landing under partial manual control only once, during STS-2, by former X-15 pilot Joe Engle.2 Recent challenges with the Boeing 737 Max 8 offer contemporary evidence that lessons related to automation, authority, and the value of a "human-in-the-loop" remain to be fully appreciated. The Max 8 was designed to decrease fuel consumption, and its larger engine nacelles sat forward and high on the wings of the tried and tested 737 airframe to ensure adequate ground clearance. This placement introduced pitch instability, which worsened at high angles of attack. To preserve control predictability and familiarity from prior 737 iterations, a new system was introduced: the Maneuvering Characteristics Augmentation System (MCAS). Initially designed to command a single small nose-down deflection in response to 2 sensors, it was later granted larger and repeated deflection authority in response to a single sensor when flight testing revealed low-speed pitch instability. For myriad reasons, MCAS was neither well documented nor communicated to pilots during the rollout of the Max 8.4 This fateful chain of engineering and human-factors decisions contributed to 2 high-profile fatal accidents.

FUTURE DIRECTIONS FOR INDEPENDENT CONTROL SYSTEMS IN ANESTHESIOLOGY

Closed-loop control systems have been investigated clinically for the control of mechanical ventilation, hypnosis, analgesia, neuromuscular blockade, blood pressure, temperature, fluid resuscitation, and glucose.5 The extent to which automated systems have pervaded the perioperative environment is perhaps underappreciated.
For example, all newer anesthesia machines feature ventilation modes that grant substantial authority to the machine. In conventional pressure-controlled ventilation, we assume control of inspiratory pressures and monitor tidal volumes as respiratory system compliance and patient effort vary. Feedback (ie, alarms) occurs only beyond specified tidal volume thresholds (Table 1, level 1). Conversely, during more complex volume-targeted, pressure-controlled mechanical ventilation, the machine assumes control of inspiratory pressures, monitors tidal volumes with automated adjustment, and alerts when predefined inspiratory pressure boundaries are exceeded (Table 1, level 4).

Table 1. System for Pilot Authorization of Control of Tasks

- Level 5, Automatic: computer autonomy is full; pilot authority is limited to interrupt; the computer is monitored by the pilot; information on performance comprises system on/off status and failure warnings.
- Level 4, Direct support: the computer acts unless revoked; pilot authority consists of revoking action; the computer is backed up by the pilot; information comprises feedback on action, alerts, and warnings on failure of action.
- Level 3, In support: the computer offers advice and, if authorized, action; the pilot accepts advice and authorizes action; the pilot is backed up by the computer; information comprises feed-forward advice and feedback on action, with alerts and warnings on failure of authorized action.
- Level 2, Advisory: the computer offers advice; the pilot accepts or rejects that advice; the pilot is assisted by the computer; information comprises feed-forward advice.
- Level 1, At call: the computer advises only if requested; pilot authority is full; the pilot is assisted by the computer only when requested; feed-forward advice is provided only on request.
- Level 0, Under command: the computer has no autonomy; pilot authority is full; the pilot acts alone; performance is transparent, with no automated information.

Adapted from Bonner et al.6 Contains public sector information licensed under the Open Government Licence v3.0.

The primary takeaway from related aerospace experience is the paramount importance of high-quality input signals. Human physiology, however, presents additional challenges. Physiological systems commonly exhibit delayed responses and hysteresis.
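The breath-by-breath adjustment logic of volume-targeted, pressure-controlled ventilation can be sketched abstractly. This is a deliberately simplified illustration, not a ventilator manufacturer's algorithm: the linear lung model, gain, compliance values, and pressure limit are all assumptions chosen for demonstration.

```python
# Sketch of the adjustment loop in volume-targeted, pressure-controlled
# ventilation (Table 1, level 4 behavior): the machine owns inspiratory
# pressure, nudges it toward a tidal-volume target, and alerts when a
# pressure boundary would be exceeded. All values are hypothetical.

def next_pressure(p_insp, vt_measured, vt_target, compliance,
                  p_max=30.0, gain=0.5):
    """Return (new_pressure, alarm). The pressure step is the tidal-volume
    error converted to pressure units via compliance, scaled by a gain."""
    error_ml = vt_target - vt_measured
    p_new = p_insp + gain * (error_ml / compliance)   # cm H2O
    return min(p_new, p_max), p_new > p_max

# Toy lung model: delivered volume is simply compliance * pressure.
compliance = 40.0              # mL per cm H2O (assumed, compliant lung)
pressure, alarm = 10.0, False
for _ in range(20):            # 20 simulated breaths
    vt = compliance * pressure
    pressure, alarm = next_pressure(pressure, vt, 500.0, compliance)
# Pressure converges toward 500 / 40 = 12.5 cm H2O without alarming.

# With a stiff lung, the same volume target drives the controller into
# its pressure boundary, which is exactly when the machine alerts.
stiff = 12.0                   # mL per cm H2O (assumed, stiff lung)
pressure, alarm = 10.0, False
for _ in range(20):
    vt = stiff * pressure
    pressure, alarm = next_pressure(pressure, vt, 500.0, stiff)
# Here 500 / 12 ≈ 41.7 cm H2O exceeds p_max = 30, so the alarm fires.
```

The design choice worth noting is the division of authority: the controller adjusts silently within its envelope, and the clinician is recruited only when the envelope is breached, mirroring the level 4 relationship in Table 1.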
These limit the performance of automated systems and, coupled with design or programming flaws, can lead to oscillatory phenomena. Despite stringent safety controls and clinical vetting of these life-critical devices, clinical oversight cannot now, and may not ever, be completely eliminated.7 In a thoughtfully articulated overview of the factors contributing to the demise of the Sedasys propofol delivery system, Goudra and Singh8 identified significant limitations in its functionality and tensions in the regulatory framework. First, the devices were not truly closed-loop: they could only lighten the anesthetic, not deepen it, and those adjustments were made on the basis of clinically limited input signals. Second, there was tension between the regulatory framework under which the device was approved (the provision of moderate sedation) and increasing demand from patients and proceduralists for "in-flight" anesthetic stability during endoscopy, which requires deep sedation. Finally, the devices were marketed specifically to endoscopists in contravention of Food and Drug Administration (FDA) labeling requiring that propofol be administered only by providers skilled in resuscitation and airway management. For these reasons and more, anesthesiologists must remain engaged in systems development and should be involved in their management and monitoring when such systems are utilized. Related considerations have been well laid out by Parvinian et al7 in a report of the proceedings of an FDA workshop on this very topic. Well-designed and implemented control systems may confer patient safety benefits.
By leveraging electroencephalographic proxies for depth of sedation, such systems have been found to avoid excessive anesthetic dosing and have shown promising results in mitigating adverse neurocognitive effects.9 Similar investigations demonstrating benefit over usual care have been performed with respect to fluid administration.10 Automation can be purposely designed and implemented by clinicians to free scarce cognitive resources and attention for higher-order tasks, streamline workflow, and accommodate certain innate human weaknesses, as has been done in the aerospace domain.

PERSPECTIVES ON INTEGRATED CONTROL SYSTEMS FROM THE HISTORY OF DIGITAL AUTOPILOTS

Project Mercury, the first American crewed spaceflight program, began in 1958. Even this early, automation was sufficiently robust to raise questions about the role, if any, of manual control. Voas,11 an early "human factors" expert, advocated that human participation in spaceflight "provides added reliability and flexibility of flight." He delineated 8 important human tasks, to which we have added some equivalents in the anesthesia domain (Table 2).

Table 2. Project Mercury Astronaut Responsibilities and Anesthetic Analogs11

- Systems management. Astronaut duties: monitoring of systems, failure isolation, assumption of manual control. Anesthetic analogs: machine and equipment checks, hand ventilation in the event of critical machine failure.
- Programming or sequence monitoring. Astronaut duties: attention to critical events of launch and reentry with "rapid and accurate reactions to malfunction cues." Anesthetic analogs: adaptive responses to expected and unexpected events during induction and emergence.
- Control. Astronaut duties: manipulation of vehicle attitude (ie, pitch, roll, and yaw). Anesthetic analogs: maintenance phase interventions.
- Navigation. Astronaut duties: use of ground references and astronavigation to determine position. Anesthetic analogs: continuous monitoring.
- Communications. Astronaut duties: receipt of information from, and relay of information to, ground control. Anesthetic analogs: communication between trainees or advanced practice providers and the attending.
- Research observations. Astronaut duties: evaluation of data from a "unique position." Anesthetic analogs: physician-scientist bedside insights.
- Self-regulation. Astronaut duties: maintenance of sound mind and body under duress. Anesthetic analogs: wellness.
- Ground work. Astronaut duties: preparatory and recovery operations. Anesthetic analogs: precase preparations and postoperative management.

Project Gemini introduced rendezvous and docking, which were essential to later lunar missions. The complexities of orbital mechanics, such as applying retrograde thrust to drop altitude and "overtake" an object in a higher orbit (ie, slowing down to speed up), came into full view during Gemini IV's failed inaugural attempt at orbital rendezvous.2 As the reader might now expect, these counterintuitive considerations were overcome on later missions through a combination of simulation and real-time decision support to astronauts provided by a digital computer. Project Apollo was substantially more complex, as missions called for translunar navigation in addition to spacecraft state vector estimation and attitude control.
After some debate, a lunar orbit rendezvous mission profile, which spared launch mass but required docking in lunar orbit, led to the decision to unify computational guidance and control systems into a single computer-controlled digital autopilot fitted to both the command module and the lunar module (LM): the Apollo Guidance Computer (AGC).2 Although radical at the time, the AGC proved integral to the program's success and served to automate many spacecraft functions. The LM, designed to operate only in the vacuum of space, was complex and unwieldy. "Manual" control inputs were parsed through the digital autopilot, which used closed feedback loops to create predictable and familiar responses in this unusual craft despite fuel sloshing in its tanks, changes in its center of gravity and mass as fuel was expended, and orientation variations during different phases of descent to the lunar surface. Its attitude, horizontal velocity, vertical velocity, and estimated landing position could all be independently manipulated. This enabled the piloting astronaut to focus on visual assessment and selection of a suitable landing site late in the descent, a task felt to require human judgment. Although the LM was made capable of landing itself automatically, without manual control input, after Apollo 12, this feature was never used during a mission.12

FUTURE DIRECTIONS FOR INTEGRATED CONTROL AND ADVISORY SYSTEMS IN ANESTHESIOLOGY

Integrated control systems analogous to the AGC or modern autopilots will likely serve to advance the practice of anesthesiology and facilitate better care of more acute patients undergoing increasingly complex procedures. However, the complete integration of digital autopilot systems stands in stark contrast to the technological fragmentation of modern operating rooms. Even if these obstacles were overcome, a holistic and reliable model of human physiology remains elusive.
Taken together, these considerations suggest that an integrated, closed-loop, multivariable approach to the control of human physiological systems, akin to the AGC, remains a seemingly distant goal. Clinical guidance systems may represent an attractive intermediate waystation by helping to integrate multiple physiological signals into guidance that can be accepted or rejected with varying degrees of automated implementation (Table 1, levels 1–3). These systems could optimize performance in complex or otherwise counterintuitive clinical scenarios, akin to the prior example of slowing down to speed up while in orbit. For example, increased afterload may be required to offset ischemia in the setting of severe aortic stenosis. Clinical decision support in anesthesiology will continue to evolve as newer machine-learning techniques are applied to datasets of ever-increasing fidelity, temporal resolution, and number of channels.

SIMULATION IN ANESTHESIOLOGY

Simulation is increasingly common in medical education and can be divided into 3 common approaches: task trainers for the acquisition of specific skills, high-fidelity simulation replicating the patient and clinical environment, and team training with or without the use of concurrent simulation. Task trainers in anesthesiology, such as airway mannequins and ultrasound simulators, are associated with favorable learner perceptions and improved technical competence. Better adherence to bundled compliance measures, such as those for central line insertion, may also be seen.
On balance, educational outcomes are mixed, and evidence concerning patient-centered outcomes remains sparse.13

LESSONS LEARNED FROM THE STUDY AND REFINEMENT OF THE HUMAN-MACHINE INTERFACE IN AEROSPACE SIMULATION

Edwin Link is credited with developing the earliest high-fidelity flight simulation device in 1929, dubbed the Link Trainer, which marked a dramatic leap forward from antecedent rudimentary trainers (eg, the Antoinette barrel of approximately 1909). It coupled controls and instruments to a moving platform, which was manipulated via organ bellows driven by an electric pump. The trainers were adopted by the predecessor to the modern US Air Force in 1934 after a series of pilots died owing to insufficient skill with instrument flying, and these trainers were subsequently utilized worldwide. Simulators for Projects Mercury, Gemini, and Apollo and for the Space Shuttle program were all subcontracted to Link. These included computer-aided full-mission simulators, part-task simulators, and moving-base simulators (eg, the dizzying free-attitude trainer). Simulation also made it feasible to perform usability testing of human-machine interfaces after initial design and prototyping. In addition, astronauts were better acclimated to onboard systems and could rehearse routine tasks as well as approaches to recovery from failure. Simulation prompted mission control staff to review the AGC's program alarms and delineate steps to recover from different failure modes, which proved critical to the success of Apollo 11.14

FUTURE DIRECTIONS FOR SIMULATION IN ANESTHESIOLOGY

The impressive fidelity of aerospace simulation has not yet reached medicine, and while our specialty has recognized the importance of repetition in automatizing behaviors during critical events, we have yet to routinely operationalize that insight through simulation.
Simulation in anesthesiology training programs is increasing but variable, and constraints on time, finances, and personnel have been identified as potential barriers.15 As a result, simulation for anesthesiology trainees may be as infrequent as once or twice yearly, compared with simulation comprising up to half of overall training time for astronauts during the Apollo program. Recognizing these limitations, we must not overlook the role of simulation in studying human behavior and human-machine interactions in the perioperative environment. For example, it has been used to evaluate trainee performance during sleep deprivation and to study the impact of noisy operating room environments.16,17 Translating other human-factors lessons from aerospace in crisis management and situational awareness has been fruitful. Checklists, again adapted from aerospace experience, have been shown to improve compliance with perioperative process measures, but their impact on clinical outcomes is again less certain.18,19 The aerospace experience should motivate a recommitment to the development of robust and rigorous simulation to improve performance during rare adverse events.

CONCLUSIONS

Feedback loops and automation have the potential to augment our capabilities, mitigate weaknesses, and bring increased safety and efficiency for patients. Though these innovations have been gradual and variably adopted, they will likely reach a critical mass beyond which the delivery of a "routine" anesthetic may begin to feel unfamiliar, gradually changing the role of anesthesiologists. Challenges to professional identity frequently accompany technological innovation. Pilots are no more immune than anesthesiologists to fears that progressively less skill and art will be required for routine job performance. For example, drug-specific reversal agents have greatly simplified the routine management of neuromuscular blockade, which previously involved a substantial measure of clinical judgment.
We, like pilots, must recognize that while lower barriers to entry could erode professional trust and respect, the associated evolution of the specialty will bring entirely new opportunities and challenges. Automation should be designed to extend and bolster our clinical practice by helping prioritize finite attentiveness and other cognitive resources for important tasks while entrusting less acute tasks to the machine. In contrast to the aerospace industry, our study of both the cognitive strengths of the anesthesiologist and the means by which to compensate for shortcomings is in its relative infancy. However, we can again learn from decades of aerospace experience with human-centered automation to inform our future directions (Table 3). Engineers and systems designers will need to partner with anesthesiologists to better understand our typical workflows and high-priority needs, and anesthesiologists should expect to engage in these design processes to promote the development of meaningful advancements.

Table 3. Principles of Human-Centered Automation20

General guidelines:
- The human operator must be in command.
- To command effectively, the human operator must be involved.
- To remain involved, the human operator must be appropriately informed.
- The human operator must be informed about automated systems behavior.
- Automated systems must be predictable.
- Automated systems must also monitor the human operators.
- Each agent in an intelligent human-machine system must have knowledge of the intent of the other agents.
- Functions should be automated only if there is good reason for doing so.
- Automation should be designed to be simple to train, to learn, and to operate.

Specific requirements and guidelines:
- Automated systems must be comprehensible to pilots.
- Automation must ensure that the pilot is not removed from the command role.
- A primary objective of automation is to maintain and enhance situation awareness. All automation elements and displays must contribute to this objective.
- Management automation should make airplanes easier to manage.
- Designers must assume that human operators will rely on reliable automation, because they will.

With continued innovation, the practice of anesthesiology will evolve and the anesthesiologist along with it; however, the core human strengths of creativity, innovation, and adaptability will remain paramount. These same traits facilitated the safe return of Apollo 13 after catastrophic systems failure. As automation becomes more commonplace, our profession's core tenet of vigilance will remain critical to ensuring patient safety. On a case-level basis, clinical judgment and intuition must guide the extent to which authority is ceded to automated systems.

ACKNOWLEDGMENTS

The authors thank David A. Mindell, PhD, Professor of Aeronautics and Astronautics and Dibner Professor of the History of Engineering and Manufacturing, Massachusetts Institute of Technology, Cambridge, MA, for his study of the relationship between humans and machines, including Digital Apollo: Human and Machine in Spaceflight, which served to inform and inspire this work.

DISCLOSURES

Name: Craig S. Jabaley, MD. Contribution: This author helped to conceive the work, acquire and interpret primary source information, draft and revise the work, and approved the final version to be published.
Name: Grant C. Lynde, MD, MBA. Contribution: This author helped to acquire and interpret primary source information, draft and revise the work, and approved the final version to be published.
Name: Mark E. Caridi-Scheible, MD. Contribution: This author helped to conceive the work, revise the work critically for important intellectual content, and approved the final version to be published.
Name: Vikas N. O'Reilly-Shah, MD, PhD.
Contribution: This author helped to conceive the work, acquire and interpret primary source information, draft and revise the work, and approved the final version to be published. This manuscript was handled by: Maxime Cannesson, MD, PhD.