Patient Safety

David E. Eibling

“Primum non nocere.” Attributed to Hippocrates (1).

Since the days of Hippocrates, it has generally been assumed that attempts to heal are unpredictable and that those attempts occasionally result in unintentional harm. Injury, and even death, were inevitable for a few unfortunate patients. Health care providers, patients, and the public judged that the natural history of the disease process itself justified these occasional undesired outcomes. When facing imminent surgical or other interventions, patients could only hope that they were protected by their choice of the best doctors and hospitals. Moreover, the statistical odds of themselves or their family members becoming victims of an iatrogenic injury were small enough to offer some level of comfort. In essence, they relied on hope—hoping it was not “their turn” for something to “go wrong.” Physicians likewise were well aware that medical care was imperfect, that diagnoses were sometimes flawed, and that treatment attempts were often poorly conceived, planned, and executed. The occasional medical mistake was inevitable. Preservation of their own confidence and decision-making ability often relied on the assumption that mistakes were made by other, less able practitioners working in other hospitals and that they themselves were protected by their superior knowledge, skills, and institution. Many poor outcomes could be (and often were), with minimal effort, attributed to the patient or the disease process itself.

Of course, not all adverse events that impact patients are due to mistakes. Patients can be harmed by “Acts of God,” unforeseen circumstances such as weather or other anomalies that destroy hospitals and infrastructure (tornados, earthquakes, tsunamis, and hurricanes, to list just a few from recent years), killing or injuring inpatients along with others in the affected community. Many adverse events, such as infections or falls, long assumed to fall into this “Acts of God” category, are now being shown to be preventable. The growing consensus is that failing to prevent a preventable harmful occurrence is itself a failure and can be interpreted as a mistake. The list of these events continues to grow as investigators demonstrate that interventions can reduce the incidence of, or the harm caused by, these occurrences. In recent years, new payment policies in the United States that withhold reimbursement for the treatment of such events reinforce the view that errors of omission are also mistakes.

Mistakes judged as preventable were interpreted by both society and the medical establishment as indicative of the failings of individual providers, who by definition must have been flawed in some way. The assumption that “someone caused this and they have to pay” fed a burgeoning tort claim system whereby the injured patient could seek restitution. The inevitability of human error in the context of ever-increasing complexity and uncertainty led to the flourishing of the medicolegal industry, which grew dramatically in the last half of the 20th century. Accompanying this growth was a wholesale proliferation of armament for plaintiffs and their legal representation, driven by enormous increases in tort claim awards. Defensive measures were introduced, such as a malpractice insurance industry, often supported by legal mandate, and risk management strategies that sought to protect the health care industry, at the cost of considerable expenditures of resources. As of 2011, efforts to retard the growth of the medical malpractice industry by limiting the size of financial awards have been only minimally effective, suggesting that alternative strategies will be required. Shifting the current medical malpractice paradigm to one that improves care, like so many other needed cultural changes, has, by default, been relegated to future generations. The reader is referred to Chapter 199, Medical Legal Issue, of this book for a more in-depth discussion of medicolegal implications for otolaryngologic practice.

Contributing to this ongoing impasse is the culture of “name, shame, and blame,” an ingrained tradition in which most physicians practicing today were trained. The assumption that most mistakes are due to flaws in individual physician performance imposes multiple costs on the physician and his or her family and colleagues, as well as on the organization in which he or she practices. This focus on the individual mandates the identification of the person responsible for the outcome, formal evaluation of his or her performance, and then restriction of his or her practice or enrollment in an educational program so that the error does not recur. At times the criminal justice system has been activated, with the result that health care workers who make mistakes have been arrested, criminally charged, and even incarcerated. Unfortunately, not only does this approach usually fail to identify and rectify the underlying problem, it impairs the objective analysis of human error and assures that the failure will recur. Examples of misdirected peer review programs were still prevalent in 2011 and challenge forward-thinking leaders who are attempting to prevent or mitigate future mistakes. With the exception of a few leaders in patient safety, this paradigm has persisted through the first decade of the current century.

Although these sentiments remain prevalent throughout health care, other industries have long recognized that focusing on individual practitioners is a failed strategy. These industries recognized that human performance is imperfect and highly context dependent, a recognition that drove the adaptation of their corporate structures to this “new” reality. Industries that modified their corporate culture and production systems to adapt to the abilities of their employees, rather than focusing solely on the performance of those employees, excelled in the marketplace due to reduced production costs. Gurus such as Shewhart (2) and Deming (3), initially eschewed by US industries, were embraced by the postwar Japanese, with dramatic results. The effect of this paradigm shift in Japanese industry became evident over the past quarter century as automobiles from Japan made major inroads into the US automakers’ monopoly on the roads and highways of the United States. Medicine was late in accepting the premise that context mattered. The recognition that the environment and systems within which human practitioners provide care are critical factors in outcomes, and that it is possible to alter that context through informed system design, is only slowly dawning on the industry. As of 2011, medicine was still struggling with the concept that imperfections in complex systems lead to the majority of preventable adverse events, with error-prevention efforts still largely directed at individuals (4,5). Dekker pointed out that this paradigm makes sense to leaders, since there are institutional benefits to be had by focusing blame on individuals: doing so relieves the organization from embarking on often difficult and costly change. “The judgment that this was human error simply produced too many organizational benefits” (6). Unfortunately, this strategy simply leads to repeated errors, a lesson learned long ago by other industries.
Studying the aviation industry provides insight into the ongoing efforts of the patient safety movement as it attempts to change the entrenched culture of the health care industry (7).


Aviation is generally considered the first industry to position safety as its top priority, a logical choice since aircraft crashes dramatically impact productivity through loss not only of the aircraft but often of the pilot as well! Following the Second World War (WWII), the United States Air Force (USAF) discovered to its dismay that no amount of pilot training would suffice to prevent crashes if the aircraft could not be safely flown by a human. The advent of jet aviation was met with a rash of accidents due to the increasing mismatch between the capabilities of high-performance aircraft and those of even high-performing pilots. The USAF was instrumental in developing and maturing the science of human factors, a discipline that formally investigates the limits of human performance and uses that knowledge to inform aircraft design so that the airplane can be flown by a real human. If the pilot becomes unconscious during a high-G maneuver, it does not matter how well the plane is designed; it will crash and be destroyed along with the pilot. Initial human factors research was directed toward human physiologic capability, such as performance characteristics under high G-forces or alterations in pressurization and oxygen availability. It became apparent that cognitive resources were also challenged by high-performance flight, and that too little information, too much information, or information provided in a confusing manner would lead to accidents just as surely as if the pilot had blacked out. This recognition drove the development of new cockpit information systems, and aviation became the “poster child” for “human-centered design” with reliability and safety as primary goals.

Despite dramatic improvements in aircraft design in the years following WWII, a number of high-visibility aircraft crashes in the 1970s pointed out vividly that even the presence of a highly skilled pilot in the optimally designed cockpit of a technically advanced aircraft could not always avert disaster. The losses of United 173 and Eastern 401 and the improbable crash of two loaded 747s on a fog-shrouded runway in Tenerife in March 1977 (8) were a wake-up call for the industry. Aviation came to the understanding not only that theirs was an imperfect system but that the industry must shoulder the responsibility of improving that system. Inherent in this understanding was the recognition that the established paradigms were untenable and that change was necessary—and the optimism
that such change was possible. As a result, multiple innovations were introduced in the 1970s and 1980s that led to dramatic reductions in aviation accidents. These include no-fault near-miss reporting, standardized simulator rehearsal, crew resource management (CRM) training (9), and rapid and widespread dissemination of information regarding safety threats. The most critical change, however, was that the industry embraced a “culture of safety” in which all employees are empowered—and expected—to respond immediately to and report safety threats (7). Many of these cultural changes are being introduced into health care, but as of 2011, these are best described as “in process.” Despite widespread encouragement (10), and although many organizations actively promote the concept of a “safety culture” and survey their employees to assure they are achieving their goals, medicine is still far behind aviation in this regard.


Medicine was rudely awakened to the extent of the problem of medical error by the Institute of Medicine report “To Err Is Human,” published in 1999 (11). This report estimated that somewhere between 44,000 and 98,000 Americans died each year in hospitals as a direct result of medical error. Based on the Harvard Medical Practice Study (12), published nearly a decade previously, the figures in the report were initially strongly disputed. It seemed impossible that medical error could kill as many Americans as would die if a loaded jumbo jet crashed every day. It has since become apparent that even the 98,000 figure is a gross underestimate of the true number of deaths due to medical error. The numbers are staggering: the current best estimate is that 1 in 20 Americans will die due to a medical error. Medical mistakes constitute the eighth most common cause of death in the United States, accounting for more deaths than HIV and breast cancer combined! The economic costs are equally sobering, with a recent estimate that the total societal cost of health care-related adverse events in the United States now approaches a trillion dollars annually (13).

Our knowledge of the fundamental science behind medical error is based on the work of psychologists such as James Reason (14), who undertook to determine why humans make mistakes. The research in this area is immense and has only recently been recognized by the health care industry as pertinent to an understanding of medical error. Reason classically separates human error into two major types: a mistake, which is choosing the wrong plan to achieve a specific goal, and a slip, which is failing to execute the plan that one has chosen. Ordering an antibiotic to which the responsible organism is not susceptible might be a mistake; picking the wrong vial out of the medication drawer might be a slip. Human error is inevitable, as illustrated by the title of the 1999 IOM report “To Err Is Human.” Human error is a by-product of the human attributes that enable us to interact with a complex, perceptively rich world: it stems from the same cognitive functions that enable the filtering of sensory input, the focusing of attention on specific goals, pattern recognition, and the sequencing of events. Reason quotes Ernst Mach, who eloquently observed nearly a century earlier that “accomplishment and error stem from the same source, only the outcome differentiates the two.” Other disciplines, particularly those studying human factors, have adapted these principles through “cognitive engineering” to design strategies whereby the human-technology interface can be improved to reduce the propensity for human error (15).


Those who have assimilated the prior paragraphs will quickly recognize that the title of this section is inaccurate. Human error, by its nature, is not preventable. Helmreich (9) pointed out that the goal of error prevention is actually the prevention of injury. He proposed the term “error troika” as a fundamental concept useful in designing systems with the goal of injury (or crash) prevention. Errors will happen, but it is possible to design systems to (a) reduce the likelihood of error, (b) “trap” the error before it can progress, or (c) mitigate or prevent the effect of the error once it has occurred. Medication administration strategies often seek to employ all three components of the “troika,” with variable success. In a similar vein, the famous “Swiss Cheese” illustration of Reason (14) points out that well-designed systems have multiple layers of protection, or barriers, placed between the error and the “target.” These barriers vary in permeability, however, and typically have “holes” that reduce or negate their effectiveness. Whenever the “holes” line up, the effects of the error are experienced by the “target.” Keeping this illustration in mind as the reader peruses the examples of safety innovations in this chapter will assist in developing new strategies to protect patients from the unintended actions of well-meaning health care workers.


The seminal article by Lucian Leape published in 1994 (16) was the first to clearly identify medical error as a cause of adverse outcomes in health care. Dr. Leape made a number of pertinent observations, most critically that although human error was inevitable, it did not occur randomly. In his paper, he drew heavily on the work of Reason and others, and nearly two decades later, his observations remain critical to our understanding. The concept that error is not random was a particularly crucial observation, since it implies that the types and sites of error are predictable. Experience in other industries (notably aviation) had demonstrated that prevention of accidents or their sequelae was feasible by first investigating accidents that had occurred and using the knowledge gained to inform system redesign. At the time of Leape’s report, the Aviation Safety Reporting System (ASRS) had already been in place for 18 years and had already effected major changes in aviation. Leape pointed out that it was only by studying adverse events, in an attempt to identify patterns and discern causes, that future events could be prevented or mitigated. Furthermore, Leape emphasized the importance of reporting by frontline staff. Thirty years prior to the publication of “Error in Medicine,” Schimmel, chief resident at one of the Yale-New Haven hospitals, introduced the concept of systematically reporting all adverse events occurring on his inpatient service in an effort to identify and categorize the underlying causes, with the ultimate goal of prevention (17).

Anesthesia was the first medical specialty to seek to systematically study medical error. In the late 1970s, faced with mounting public awareness of the risks of anesthesia, the American Society of Anesthesiologists sought to study the problem. The Anesthesia Patient Safety Foundation (APSF) was founded in 1985 and chartered to study anesthesia mishaps and propose interventions. The APSF introduced what was at the time a revolutionary approach, the study of “closed claims,” cases in which injured patients or their families had been compensated for an anesthetic adverse event. The story of how a single malpractice insurance company was induced to release this proprietary information (which could have been damaging to its profitability) is one of the intriguing success stories of patient safety and illustrates vividly the difference one or two individuals can make!

The APSF closed claim study (18) revealed that fully one-third of the events that had resulted in a payment to the plaintiff (a closed claim) were due to a respiratory cause. This finding prompted the emphasis on the development of new technology (pulse oximetry and capnography) as well as difficult airway algorithms. Astonishingly, within less than 6 years following the report of the APSF and dissemination of its findings, a substantial reduction in the incidence of perioperative respiratory events was reported (19). The success story of the APSF should be viewed as justification for dedicating sufficient resources to identifying and studying adverse outcomes as the first step in reducing the risk of medical mistakes or the effects of those mistakes.

The National Patient Safety Foundation was established in the 1990s as a result of a major conference convened in 1996 to address the issues of patient safety. This conference brought together members of both the medical and human factors communities and was perhaps the first time that the mismatch between the complexity of health care and human capabilities was identified as the fundamental cause of medical error.

Although these early efforts were largely unheralded outside their specialties, it was not until the IOM report in 1999 that the previously unrecognized epidemic of death and injury occurring during medical care reached a level of awareness sufficient to promote action. The IOM report galvanized the nation and prompted a wide range of investigations and interventions. To quote Robert Wachter (5), “in short, everywhere one looked, one found evidence of major problems with patient safety.” The IOM followed with additional reports, new organizations and government agencies (such as the Agency for Healthcare Research and Quality [AHRQ]) were established, and hospitals established patient safety offices to address the issues at the facility level. The Veterans Health Administration (VHA) established the National Center for Patient Safety (NCPS) at Ann Arbor, appointing James Bagian, a former astronaut and physician, as its first director. The NCPS represented the first system-wide effort to capture and study adverse events and institutionalized patient safety efforts within the VHA nationwide. The state of Pennsylvania established a state-level safety agency in 2002 (20), and Congress passed the Patient Safety and Quality Improvement Act in 2005.

Others, such as Peter Pronovost from Johns Hopkins, were unwilling to accept “business as usual” and initiated interventions based on the “basic science” of studying adverse events and near misses. Within a decade, challenges that had previously been assumed to be inevitable and unmanageable, such as hospital-acquired methicillin-resistant Staphylococcus aureus infections (21) and central line infections (22), were successfully tackled by patient safety investigators. These innovators studied the problems, formulated hypotheses, instituted new practices, and, in these examples, demonstrated remarkable success. It is instructive to note that investigations in patient safety do not follow the classic randomized controlled trial (RCT) paradigm, since to do so may be unethical. Rather, most patient safety innovations follow the “Plan, Do, Study, Act” paradigm, through which innovations are “tested” in actual practice and then studied prior to widespread dissemination and implementation.

The Joint Commission (previously known as the Joint Commission on the Accreditation of Healthcare Organizations) adopted new patient safety goals (JCPSGs) that are updated on a yearly basis and serve as criteria for assessing hospital performance in patient safety. Essentially all hospitals in the United States utilize the JCPSGs as a roadmap for improving patient safety in their institutions. Reviewing these goals is an effective strategy for identifying the organizational targets in patient safety at any specific time.

Despite these and many other innovations, as of 2011, the epidemic of errors continues to exact an unacceptable toll, both human and financial. In a classic editorial in Health Affairs, “The IOM report: 10 years later,” Carolyn Clancy, Director of AHRQ, pointed out that painfully little had changed, primarily due to an inability to alter an intransigent culture (23). The rest of this chapter reviews a number of concepts and interventions to guide the otolaryngologist who is beginning his or her “safety journey.” The reader will hopefully identify some achievable targets that can be utilized by otolaryngologists, health care teams, and enterprise organizations to reduce the risk of harm to the patients entrusted to their care.


Confusion often exists over the boundary between quality and safety. In reality, the boundary is indistinct, although safety can be considered a prerequisite for quality (5). High-quality care is assumed to be effective care, typically defined by favorable clinical outcomes coupled with cost-effectiveness, and usually compliant with established guidelines when such guidelines exist. However, to be high quality, care must, by definition, first be safe! Conversely, poor-quality or ineffective care might appear safe, but if the care fails to halt disease progression, one can argue that it is by nature unsafe. A leader in patient safety is reported to have quipped that “50 years ago healthcare was safe, cheap, … and ineffective, whereas now it is effective, expensive, … and dangerous,” implying that the complexity required to increase the effectiveness of intervention paradoxically increases the risk of injury. Most would assert that the risk is worthwhile; for example, undertreatment of a malignancy to reduce the risk of complications to zero is not desirable, since the patient may lose years of valuable life. Regardless of the definition of quality, there are some inherent differences in the strategies employed: quality has the goal of effecting an optimal outcome for the patient, whereas safety seeks to prevent harm along the way.


Understanding the causes of medical error requires an in-depth examination of systems. Systems consist of people, technology, policies, coordination, and other strategies intended to produce a desired outcome. An aircraft flight occurs through a closely linked system comprising not only the aircraft and its crew but also the maintenance and ground personnel and equipment, scheduling, routing, weather, and a plethora of interrelated functions. Systems science has been highly developed in other industries but has only recently received attention in health care, which by its nature consists of multiple overlapping, highly complex systems. Roberson and coauthors emphasized the importance of critically examining whole systems when analyzing processes of care and how these processes affect patient outcomes (24). “Systems-based practice” represents one of the six core competencies defined by the Accreditation Council for Graduate Medical Education, and the concepts are familiar to all otolaryngologists who have trained in the past decade. Some of the basic principles of systems are well defined and include the principles enumerated in Table 198.1.


TABLE 198.1 PRINCIPLES OF SYSTEMS

Common goal for all system components

Goal-driven system design

Unpredictability expected

Expect and accommodate human error

Feedback loops with short cycle times

Standardization is baseline

Reliant on teams rather than single individuals

From Roberson DW, Kentala E, Healy GB. Quality and safety in a complex world: why systems science matters to otolaryngologists. Laryngoscope 2004;114:1810-1814.


The emphasis on system design as the underlying threat to patient safety shifts the attention from individual performance to system performance. Although this shift is more likely to lead to prevention of adverse events, it has been construed by some to mean that the individual bears less responsibility. This conflict has been addressed at a high level within the aviation industry through the adoption of a “just culture” (25). A just culture recognizes that expert practitioners (i.e., pilots or physicians) typically must accommodate multiple goals, such as keeping to a flight schedule while avoiding stormy weather. In order to perform at high levels, these practitioners must possess some degree of flexibility in order to satisfactorily resolve conflicts and achieve desired outcomes (26). However, within each profession there exist boundaries within which accomplished practitioners remain. These are often well recognized within the profession, even if not explicitly defined by policy or written procedures. Performing outside of these boundaries is not an inadvertent human error but an intentional act, and it is considered “reckless behavior.” In a seminal report published in 2001, David Marx emphasized that these boundaries must, by nature of their significance, be defined by the profession itself (27). In aviation, an example would be completion of the preflight checklist; examples in surgery include the universal preprocedure pause for “time out” and completion of a postoperative sponge count prior to wound closure. Just as pilots would never consider initiating a flight without completing the checklist, surgeons would never consider initiating a surgical procedure without first performing a “time out,” nor would they close the wound and leave the operating room without a reconciled sponge count. In both settings, individuals function within teams that help assure that “forgetting” to do so is extremely unlikely. Refusing to comply with the expectations of the profession after being reminded would be considered “outside the norm,” reckless behavior that places the practitioner at risk of censure.

Practitioners are also responsible for the systems within which they work. The level of accountability varies with the level of administrative responsibility, but even house staff bear some responsibility and must assume partial “ownership” of the systems in which they work (28). One way in which practitioners can participate in system improvement is by identifying threats to safety; reporting, collecting, collating, and assessing these threats; and then developing interventions to reduce them. The challenges are immense, beginning with the task of identification and reporting.


“Safety shows itself only by the events that do not happen”—Erik Hollnagel

The adage “if you can’t measure it, you can’t improve it” has been proven correct throughout science. Fundamental change in health care did not begin until pioneer surgeons began assessing their outcomes in an effort to determine whether they were, in fact, effecting positive change for their patients. Over the past several years, a plethora of quality metrics (although painfully few in otolaryngology) have been introduced, impacting essentially all of health care in the United States. However, few metrics exist to assess patient safety. Attempts to identify differences in outcomes are challenged by heterogeneity in patient selection, institution, treatment modality, data collection, and comorbidity. Nearly 20 years ago, the VHA began to measure surgical outcomes through the institution of a risk-adjusted morbidity and mortality (M&M) measure, now referred to as the Veterans Administration Surgical Quality Improvement Program (VASQIP) (29). An almost identical voluntary program, termed the National Surgical Quality Improvement Program (NSQIP) (a term that originally referred to the VA program well into the first decade of the 21st century), is now managed by the American College of Surgeons. The VASQIP and NSQIP measurement systems both involve chart review by trained personnel who attempt to quantify not only outcomes but also comorbidities in order to assess relative risk. Statistical adjustments are then made using the relative risk to report a “risk-adjusted” outcome, in which observed postoperative adverse events are compared with a risk-adjusted “expected” outcome. Although not sufficiently granular to facilitate identification of specific safety threats, use of the data has been demonstrated to drive improvements in quality and safety within the VA (29). Data entry relies on standardized training of reviewers, who nonetheless vary, as they are human.
As a result, even with standardized data parameters, variation is introduced into the measurement system. Newer electronic data mining may have the potential to reduce this variability; however, as Pronovost noted in a commentary in 2011 (30
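The risk-adjusted comparison described above reduces, at its core, to an observed-to-expected (O/E) ratio. A minimal sketch, using hypothetical predicted risks (the actual VASQIP/NSQIP models derive per-patient probabilities from regression over many comorbidity variables, which is not reproduced here), might look like:

```python
# Illustrative observed-to-expected (O/E) ratio for risk-adjusted outcome
# reporting. The per-patient predicted risks below are hypothetical; real
# programs estimate them from statistical models built on comorbidity data.

def oe_ratio(observed_events, predicted_risks):
    """Observed adverse events divided by the model-expected count."""
    expected = sum(predicted_risks)  # sum of per-patient predicted probabilities
    return observed_events / expected

# Five patients with hypothetical predicted complication probabilities;
# their expected event count is the sum of the probabilities (here, 1.0).
risks = [0.05, 0.10, 0.20, 0.15, 0.50]
ratio = oe_ratio(2, risks)  # two observed events against 1.0 expected
print(round(ratio, 2))
```

An O/E ratio near 1.0 indicates performance consistent with the risk model; values well above 1.0 flag a service whose observed complication rate exceeds what its case mix would predict.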
