MEDICATION ERRORS: CAUSES, PREVENTION AND REDUCTION
2002; Wiley; Volume 116, Issue 2. Language: English
DOI: 10.1046/j.1365-2141.2002.03272.x
ISSN: 1365-2141
Authors: Jonathan M. Allard, Jane Carthey, Judith U. Cope, Matthew Pitt, Suzette Woodward
There is a myth in health care that human error can be eliminated altogether, as evidenced by calls to aggressively seek a zero error rate (Anderson & Ellis, 1999). However, errors are an inevitable consequence of human performance (Reason, 1990, 1997) and are symptoms of broader systems problems, not causes in themselves. Only by viewing errors as sources of information about the 'safety health' of an organization can we learn appropriate lessons to improve patient safety. In health care, the system includes many practitioners, each with different roles, who create safety by battling against its intrinsic inefficiencies, who prevent things going wrong by adapting to and resolving issues, and who detect errors and anticipate hazards on a daily basis (Cook et al, 2000).

The 'systems approach' to error analysis has been widely adopted in the aviation and nuclear industries, in which a commonly used paradigm of organizational accidents is Reason's Swiss cheese model (Reason, 1990, 1997). This model distinguishes between active failures and latent conditions. Active failures are errors and violations committed by people at the sharp end of the system, i.e. pilots, control room operators and, in this domain, pharmacists, blood bank technicians, nurses, doctors, etc. Active failures have an immediate impact on safety. Latent conditions arise from fallible decisions made by the higher management in an organization, by regulators, governments, designers, manufacturers and policy makers. Latent conditions lead to weaknesses in the organization's defences, increasing the likelihood that, when active failures occur, they will combine with existing preconditions, breach the system's defences and result in an organizational accident. Latent conditions and active failures create windows of opportunity in the system's defences which, when aligned across several levels of a system, lead to an adverse event (i.e. death or a critical incident). This article reviews research on medication errors during drug delivery and transfusion medicine with a view to understanding the underlying latent conditions (i.e. systems problems) that contribute to them.

Adverse drug events (ADEs) are defined as '… injury resulting from a medical intervention relating to a drug' (Bates et al, 1995). Drug errors have been estimated to account for over a quarter of the causes of ADEs (Bates et al, 1995). Such errors are defined as any preventable event that may cause or lead to inappropriate medication use or patient harm while the drug is in the control of the health care professional, patient or consumer (US Pharmacopeia, 1995). All stages of the drug delivery process (i.e. prescribing, transcribing, dispensing and administration) are susceptible to error. Table I lists some examples of errors from the research literature and case reports. Research on drug errors has covered several themes, including the incidence of errors per medical speciality and per stage of the drug delivery process (Brennan et al, 1991; Bates et al, 1995, 1999; Bates, 1999, 2000), the types of drugs most commonly linked to errors (Lesar et al, 1997), time of day and shift work effects (Raju et al, 1989; Lesar et al, 1990), and the relationship between staff levels of experience (Lesar et al, 1990) or calculation skills and the incidence of errors (Calliari, 1995; Rowe et al, 1998). The latent conditions (i.e. systems problems) that lead to drug errors have also been identified (Leape et al, 1995; Cohen et al, 1996; Wilson et al, 1998).
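The Swiss cheese model's central claim, that harm reaches the patient only when holes line up across several defensive layers at once, can be made concrete in drug delivery terms with a short simulation. The sketch below is purely illustrative and is not from the original article: the layer names and failure probabilities are hypothetical, and latent conditions are modelled crudely as a factor that widens every hole at once.

```python
import random

# Hypothetical defensive layers in a drug delivery system, each with an
# illustrative probability that its check fails (a 'hole' in that slice).
LAYERS = {
    "prescribing check": 0.05,
    "pharmacy dispensing check": 0.03,
    "bedside administration check": 0.02,
}

def adverse_event(latent_weakening=1.0):
    """One trial: harm reaches the patient only if every layer's hole aligns.

    latent_weakening > 1 models latent conditions (understaffing, poor
    protocols, bad equipment design) that widen the holes in all layers.
    """
    return all(random.random() < p * latent_weakening for p in LAYERS.values())

def event_rate(trials=1_000_000, latent_weakening=1.0):
    return sum(adverse_event(latent_weakening) for _ in range(trials)) / trials

print(event_rate())                      # ~3 per 100 000: rare alignment
print(event_rate(latent_weakening=3.0))  # ~27x higher once defences erode
```

With independent layers the failure probabilities multiply (0·05 × 0·03 × 0·02 ≈ 3 in 100 000), so a latent condition that merely triples each hole raises the adverse event rate 27-fold. This is the sense in which latent conditions, rather than any single active failure, set the overall level of risk.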
The landmark Harvard Medical Practice Study reported that adverse events occurred in 3·7% of hospitalized patients (Brennan et al, 1991), with 19% of these events resulting from drug complications (Leape et al, 1991). Four years later, the Adverse Drug Event Prevention Study analysed patient records from 4031 adult admissions to two USA hospitals over a 6-month period. Results showed that 49% of errors occurred in the drug ordering stage (i.e. prescribing), 11% in transcription, 14% in dispensing and 26% in administration. The most frequent types of error in the drug prescribing stage were wrong dose, wrong frequency, wrong choice and known allergy (Bates et al, 1995; Leape et al, 1995). This study also analysed error recovery, i.e. the circumstances under which errors were detected and corrected. Errors were more likely to be recovered when they occurred in the early stages of the drug delivery process: whereas 42% of prescribing and 37% of dispensing errors were recovered, none of the administration errors were. Results showed that nurses were most likely to recover errors.

Sixteen 'systems failures' which led to these drug errors were also identified (Leape et al, 1995). Systems failures are factors pertaining to the organization and its processes which increase the likelihood that clinicians and nurses will make errors. These were: drug knowledge dissemination; dose and identity checking; patient information availability; transcribing prescriptions; allergy defence (i.e. disseminating information about known patient allergies through the institution); medication prescription tracking; interservice communication; device use; standardization of doses and frequencies; standardization of drug distribution in a unit; lack of uniform procedures; preparation of intravenous medications by nurses; transfer or transition problems; conflict resolution (i.e. resolving incompatible goals between departments and professional groups); staffing and work assignments; and feedback about ADEs. Many of these processes involve points of communication between health care professionals, departments or wards. Leape et al (1995) concluded that poor communication practices were the most common type of systems problem.

The incidence of drug errors has been shown to increase as more intensive levels of patient care are needed (Vincer et al, 1989; Wilson et al, 1998). An analysis of a paediatric cardiac ward and cardiac intensive care unit (ICU) showed that drug errors were seven times more likely to occur in the ICU setting than on the ward (Wilson et al, 1998). In a recent study involving a retrospective review of incident reports from a Scottish paediatric hospital, the highest medication error rates occurred in the Neonatal Intensive Care Unit (NICU) and Paediatric Intensive Care Unit (PICU) (Ross et al, 2000). Over a 5-year period, reported error rates varied from 0·98% in the NICU and 0·77% in the PICU (per number of ward admissions) to only 0·22% on medical wards and 0·04% on the surgical unit. It has also been shown that PICUs have higher prescribing error rates than NICUs (Folli et al, 1987). This finding was attributed to the greater heterogeneity of patient and procedural factors in PICUs, in which there was more variation in patient illnesses and patients were more likely to be treated with complex drug regimens than in NICUs.
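The per-admission rates above also show why raw error counts and rates can tell opposite stories. A minimal sketch, using hypothetical ward figures chosen only to echo the pattern in Ross et al (2000), illustrates the reversal discussed below:

```python
# Hypothetical figures in the spirit of Ross et al (2000): busy medical wards
# report the most errors in absolute terms, but intensive care settings have
# the highest rate once admissions are used as the denominator.
wards = {
    # ward: (reported errors, admissions) -- illustrative numbers only
    "NICU":          (20,   2_000),
    "PICU":          (15,   1_950),
    "medical wards": (115, 52_000),
    "surgical unit": (5,   12_500),
}

by_count = sorted(wards, key=lambda w: wards[w][0], reverse=True)
by_rate  = sorted(wards, key=lambda w: wards[w][0] / wards[w][1], reverse=True)

for ward, (errors, admissions) in wards.items():
    print(f"{ward:14s} {errors:4d} errors, "
          f"{100 * errors / admissions:.2f}% per admission")

print("ranked by raw count:", by_count)  # medical wards look worst
print("ranked by rate:     ", by_rate)   # NICU and PICU look worst
```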
A higher incidence of prescribing errors has also been reported for paediatrics and emergency medicine (5·93 and 5·5 per 1000 prescriptions respectively) than for other medical specialities (Lesar et al, 1997). Evidence suggests that a significant proportion of ADEs are caused by multiple errors occurring at different points of the drug delivery process; for example, 21% of ADEs in one study resulted from failures at multiple stages (Leape et al, 1995). Drug administration errors may originate in the prescribing, transcribing or dispensing stages and remain undetected by in-built system checks. Hence, drug administration may often be the final, but not the only, point at which errors occur.

Counting and categorizing errors is of limited value. More useful insights can be gained by tracing errors through the system and understanding the circumstances under which error recovery takes place. Haematology/oncology is often disadvantaged from the outset by error counting studies because many disorders, for example acute lymphoblastic and acute myeloid leukaemia, have complex drug protocols issued by the United Kingdom Children's Cancer Study Group. Health care systems which treat such patients invariably use more drugs, and more complex combinations of drug therapies, than other medical specialities. Such drug therapy regimens have become increasingly complex and intensive as supportive care (i.e. anti-emetics, colony-stimulating factors, bone marrow transplantation) has improved (Cohen et al, 1996). It is important that these factors are taken into account when making comparisons between haematology and other medical specialities.

Research without a denominator tells us very little about the relative incidence of errors across medical specialities, and results may be misleading. For example, in the recent study by Ross et al (2000), an error frequency count showed that the majority of errors occurred on medical wards (115/195 or 59%), with the fewest errors occurring on NICUs and PICUs; however, including the denominator 'number of admissions per ward' in the analysis reversed this trend (Ross et al, 2000). It is also important to choose an appropriate denominator. Studies that have used the number of admissions per ward (for example, Ross et al, 2000) may underestimate the magnitude of the denominator because patients may take numerous drugs. It is therefore best to use the number of drugs prescribed, dispensed and administered per 24 h or per week, or the number of occasions on which error was possible, if cross-speciality comparisons are to be made (for example, Bates et al, 1995; Ridge et al, 1995). Error counting studies have other important weaknesses. Firstly, definitions of error vary across studies and this invariably influences what is included in the analysis process. Secondly, studies based on the retrospective analysis of patient notes or incident reports are suspect because the data may be unrepresentative. The low incident reporting rate in the study by Ross et al (2000) may say more about the hospital's incident reporting culture than the incidence of errors across specialities.

Significant differences have also been found between the classes of drugs associated with prescribing errors. Beta-blockers, theophyllines and anticonvulsants had the highest error frequencies in one study of a multidisciplinary paediatric intensive care unit (Bordun & Butt, 1992).
Xanthines, cardiovascular agents, antimicrobials and narcotics were the drugs most frequently associated with errors in another study, based in a tertiary care teaching hospital (Lesar et al, 1997). There were 20·55 prescribing errors per 1000 prescriptions of xanthines and 13·55 prescribing errors per 1000 prescriptions of antimicrobials, compared with only 1·14 per 1000 prescriptions of anticoagulants. In a third study, based on an analysis of medical event reports in a USA national database, heparin, lidocaine, adrenaline and potassium chloride were identified as the drugs most commonly involved in critical incidents (Edgar et al, 1994). In a more recent study, 56% (109/195) of all reported errors involved intravenous drug administration, with antibiotics/antivirals, parenteral nutrition/intravenous fluids and anticancer drugs being the three categories of drug most frequently involved in intravenous medication errors (Ross et al, 2000). These findings have to be interpreted cautiously because some types of drugs are intrinsically more harmful than others. The likelihood of detecting an error is a function of its consequences for the patient. Hence, in the studies cited above, dangerous drugs such as xanthines, anticoagulants and opioids may be over-represented because their harmful effects are more evident than those of other classes of drug. In a haematology context, prescribing a 10× dose of vincristine is more dangerous, if left unrecovered and given to the patient, than giving one additional dose of paracetamol. Developing valid and reliable methods to assess error severity is therefore important, and there are several examples in the literature (Bradbury et al, 1993; Dean & Barber, 1999).

It is important to examine the context in which drug errors occur. Previous research has shown a correlation between increased prescribing error rates and the number of admissions (Lesar et al, 1990): the busier the ward, the higher the error frequency. In the same study, the highest error rate occurred between 12·00 and 15·59, and the lowest between 20·00 and 23·59. Lesar et al (1990) attributed this finding to the fact that more prescriptions were written during the afternoon shift. In terms of shift work effects, it has also been shown that drug error rates were higher on day shifts than on night and evening shifts in a PICU and NICU setting (Raju et al, 1989). This result was attributed to fewer prescriptions being written and dispensed on the evening and night shifts, compared with a higher frequency of prescriptions on the day shift. Taken together, these studies show that situational factors such as the number of patients and prescriptions may lead to errors. These findings correspond to those in other industries such as aviation and nuclear power, in which it has been shown that high pilot or operator workload and poorly designed shift patterns are linked to increased error rates (Smith & Folkard, 1993; Chou et al, 1996).

Research findings on the relationship between level of experience and error frequency are equivocal. Whereas some studies have found that inexperienced doctors and nurses make more drug errors (Lesar et al, 1990; Arndt, 1994; Wilson et al, 1998), other studies have not (Koren et al, 1983; Rowe et al, 1998). Lesar et al (1990) studied prescribing practices in a tertiary care teaching hospital and collected data on 905 errors over a 1-year study period (n = 289 411 prescriptions).
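For scale, 905 errors among 289 411 prescriptions corresponds to an overall rate of roughly 3·1 errors per 1000 prescriptions written.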
First-year post-graduate residents had the highest error rate (4·25 per 1000 prescriptions written); fourth-year or more senior residents had the lowest (0·81 per 1000 prescriptions written). Similarly, in a study set in a paediatric cardiac department, the number of prescribing errors was shown to double when new doctors joined the rotation (Wilson et al, 1998). In contrast, Koren et al (1983) tested 85 doctors and nurses on their ability to calculate volumes of drugs commonly administered to paediatric patients. In terms of experience, the proportion of nurses who made errors increased with the length of their professional experience, with 50% of nurses who had at least 11 years' experience making errors compared with only 25·8% of nurses with between 3 and 10 years of experience (Koren et al, 1983).

Doctors' calculation skills may also be an important determinant of error rates (Koren & Haslam, 1994; Rowe et al, 1998). Rowe et al (1998) carried out tests to determine drug calculation error rates among a group of paediatric residents. They found no correlation between the length of training and the likelihood of making a mistake, i.e. the most junior residents were no more likely to make drug calculation errors than more experienced residents. However, some of the residents tested did appear to be more 'error prone' than others: residents who committed 10-fold dosing errors made more drug errors overall than those who did not. Similar results have been found for the calculation skills of nurses, with studies showing that nurses who perform badly on calculation tests make more errors (Miller, 1992; Calliari, 1995). A limitation of calculation skill studies is that they are often carried out under test conditions, and the extent to which their results can be extrapolated to a busy ward environment, in which distractions and extraneous communications can disrupt the task, is open to question. Such studies are also limited by their focus on the individual doctor or nurse as the source of error. This approach does not take account of the 'latent conditions' which other research has shown to be important.

In addition to the problems identified by Leape et al (1995), other types of systems failure have been noted. These include distracters which interrupt prescribing, dispensing and administration tasks (Gladstone, 1995), and the absence of redundant checking processes to ensure that errors are quickly recovered (Cohen et al, 1998). Poor organizational policies are also important. For example, a complicated surgical antibiotic prophylaxis policy increased the number of wrong dose errors in one unit, and a departmental rule that prescription charts should be re-written every day led to an increase in the number of transcription errors on another ward in the same hospital (Wilson et al, 1998). Poorly designed equipment can also increase risk. A common problem is the absence of fail-safe mechanisms when re-setting the rate of infusion pumps, which has led to over-infusion of drugs (Brown et al, 1997; Lin et al, 1998; Wilson et al, 1998). Problems may also result from varying operational requirements between different infusion pumps. In one case, confusion between two Graseby syringe drivers, the MS26 and MS16A, led to a fatal over-infusion of morphine in a patient being treated for stomach cancer (Carlisle et al, 1996). The underlying design problem was that one pump is calibrated in mm/h, whereas the other is calibrated in mm/d.
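The consequences of that calibration difference are easy to quantify. The sketch below is an illustrative reconstruction, not the pumps' actual interface: it simply assumes, as stated above, that one driver interprets its setting as mm of syringe travel per hour and the other as mm per 24 h, with hypothetical numbers.

```python
# Illustrative reconstruction of the MS16A/MS26 transference error; numbers
# are hypothetical. One driver reads its setting as mm of syringe travel per
# hour, the other as mm per 24 hours (mm/d).
CALIBRATION_HOURS = {"MS16A": 1, "MS26": 24}

def travel_mm_per_hour(setting: float, pump: str) -> float:
    """Actual syringe travel per hour produced by a given dial setting."""
    return setting / CALIBRATION_HOURS[pump]

target = 2.0  # hypothetical intended travel, mm per hour

for assumed, actual in (("MS16A", "MS26"), ("MS26", "MS16A")):
    setting = target * CALIBRATION_HOURS[assumed]    # computed for one pump...
    delivered = travel_mm_per_hour(setting, actual)  # ...run on the other
    print(f"set by {assumed} rules, run on {actual}: "
          f"{delivered:.3f} mm/h ({delivered / target:g}x the intended rate)")
```

Either direction of transference shifts delivery by a factor of 24; whether the patient is over- or under-infused depends on which pump's convention the original rate calculation assumed.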
During a syringe changeover, the nurse applied the calibration principles for the MS16A to an MS26 pump. Such errors of transference (Reason, 1990), in which the principles for operating one type of device are incorrectly applied to another, have led institutions to use a single standard pump throughout the hospital. They have also led to calls for the safer design of infusion pumps (Cousins & Upton, 1995).

There are several case reports in the literature describing fatal overdoses of drugs commonly used during haematology/oncology treatment, including doxorubicin (Edgar et al, 1994; Back et al, 1995), cisplatin (Chu et al, 1993; Edgar et al, 1994), vinblastine (Conter et al, 1991) and vincristine (Kosmidis et al, 1991; Stones, 1998). Other case reports describe the toxicity-induced death of patients when vincristine has been administered intrathecally (Manelis et al, 1982; Jackson & Hassan, 1997; Michelagnoli et al, 1997; Fernandez et al, 1998) or when a vaguely written prescription has been misinterpreted by the person responsible for administering the drug. For example, Davis et al (1992) have described how a prescription for the bone resorption inhibitor Aredia (pamidronate) was misread as Adria, a nickname for doxorubicin. Valuable insights into active errors and latent conditions can be learnt from such reports. 'Confirmation bias' has been identified as a contributory factor in intrathecal methotrexate overdoses (Lee et al, 1997). Confirmation bias is the tendency to seek information that confirms one's beliefs and to ignore evidence to the contrary. This bias has also been shown to underpin name confusions between drugs, for example 'Revia' and 'Revex' (Cohen, 1995). Poorly written protocols, and dose escalation trials in cancer chemotherapy which obscure pre-existing knowledge about appropriate doses, can lead to prescribing errors (Hunt & Rapp, 1996). For dispensing errors, the dilutions required to prepare paediatric doses (particularly for neonates), inappropriate drug storage and drug checking policies may also increase risk (Cousins & Upton, 1996; Hunt & Rapp, 1996). In a survey of 160 USA oncology staff, stress, under-staffing, lack of experience and unclear prescriptions were judged to contribute to medication errors (Schulmeister, 1999). Chemotherapy incident reports have also identified other problems: admitting a patient to a ward in which there is no specialist haematology/oncology expertise [Department of Health (DoH), 2000], poor communication between shared-care hospitals about the treatment plan for a patient, transportation of drugs from pharmacy to the ward or operating theatre (DoH, 2000), lack of expertise of the person administering the drug (Schulmeister, 1997) and ineffective system checks for error recovery (Gorman, 1995). For example, in the Betsy Lehman case, the patient was due to receive a high dose of cyclophosphamide daily on each of 4 d. The doctor mistakenly wrote the prescription so that the combined 4-d dose was to be administered in 1 d. This error was not picked up by pharmacy staff or by the nurses at the patient's bedside (Gorman, 1995).
Various interventions to reduce drug errors have been put forward: prescribing errors may be reduced by the introduction of computerized prescribing systems (Hunt & Rapp, 1996; Bates, 2000), by pharmacist participation in drug rounds (Leape et al, 1999), and by ensuring that key reference material, such as the British National Formulary, is readily available, especially to junior staff (Ferner, 1995; Cohen et al, 1996). A set of error reduction recommendations specific to haematology and oncology has been put forward by Cohen et al (1996), as follows:

1. Educate health care providers to improve their levels of knowledge about chemotherapy drugs and potential drug errors.
2. Design the drug delivery system to have as many independent checks in it as possible, including independent calculation checks by the prescribing physician, pharmacist and nurse.
3. Establish maximum single and total course dosage limits at each institution. These should be communicated in educational programmes for staff.
4. Standardize the prescribing language by using the full names of drugs and routes (for example, INTRAVENOUS versus INTRATHECAL written out in full and in capital letters rather than abbreviated to IV and IT). All doses should be expressed in milligrams or units, prescriptions should be dated, and the prescriber should use a leading zero but not a trailing zero. The patient's current body surface area should also be written on the prescription.
5. Collaborate with drug manufacturers to eliminate ambiguous dosing information on package inserts and in textbooks.
6. Educate patients and their relatives about their drug regimen, as they are the last line of defence in the system.
7. Set up an interdisciplinary team that continuously reviews the drug delivery process and feeds back findings to hospital staff.

Goldspiel et al (2000) have shown how systems analysis, using many of the principles listed above, led to a 23% decrease in chemotherapy prescribing errors in one USA hospital. Specific recommendations aimed at reducing intrathecal chemotherapy errors have also been put forward (Fernandez et al, 1998; Woods, 2001). Firstly, intrathecal chemotherapy should be requested by an oncology physician and co-signed by a second physician or pharmacist. In terms of the design and labelling of equipment, the same authors state that intrathecal syringes should have slip tips (not luer locks) and should be clearly labelled INTRATHECAL. Secondly, they also note the importance of storing, transporting and administering these drugs separately from vinca alkaloids to ensure that wrong route errors are avoided. Syringes containing vinca alkaloids should be clearly labelled 'For intravenous use only. Fatal if administered intrathecally'. Thirdly, it is recommended that intrathecal chemotherapy should only be administered by a trained chemotherapy giver and that, prior to drug administration, the physician should carry out a verbal check, reading the drug labels out loud with a second checker.

Quite often, recommendations for reducing drug errors result from the investigation of one type of incident (for example, Fernandez et al, 1998; Woods, 2001). This can lead to local repairs being implemented which protect the organization against that specific incident but which do not take a holistic look at the system (Cook et al, 1998; Crane, 2000). When making recommendations to reduce error rates, one must consider the adverse effects of an intervention on other parts of the system.
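Recommendations 3 and 4 lend themselves to simple automated support within a computerized prescribing system. The sketch below is a hypothetical illustration rather than a published implementation: the drug limits, function names and dose-notation rule are invented to show how an institutional maximum dose and the leading-zero/trailing-zero convention might be enforced.

```python
import re

# Hypothetical single-dose and total-course ceilings (recommendation 3);
# real limits are set by each institution.
MAX_SINGLE_DOSE_MG = {"vincristine": 2.0}
MAX_COURSE_DOSE_MG = {"vincristine": 8.0}

# Accept '0.5' or '5'; reject '.5' (no leading zero) and '5.0' (trailing
# zero), the two error-prone notations recommendation 4 bans.
DOSE_TEXT = re.compile(r"(0|[1-9]\d*)(\.\d*[1-9])?")

def parse_dose(text: str) -> float:
    if not DOSE_TEXT.fullmatch(text):
        raise ValueError(f"ambiguous dose notation: {text!r}")
    return float(text)

def check_dose(drug: str, dose_mg: float, course_total_mg: float) -> None:
    """Block doses above either ceiling pending independent review."""
    if dose_mg > MAX_SINGLE_DOSE_MG.get(drug, float("inf")):
        raise ValueError(f"{drug}: {dose_mg} mg exceeds single-dose limit")
    if course_total_mg > MAX_COURSE_DOSE_MG.get(drug, float("inf")):
        raise ValueError(f"{drug}: course total {course_total_mg} mg too high")

for text in ("2", "20", "5.0"):
    try:
        dose = parse_dose(text)
        check_dose("vincristine", dose, course_total_mg=dose)
        print(f"{text} mg: accepted")
    except ValueError as err:
        print(f"{text} mg: blocked ({err})")
```

A check of this kind might have flagged the Betsy Lehman prescription, in which a 4-d course total was written as a single day's dose, without relying on any individual noticing the error, provided appropriate limits had been configured for the drug concerned.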
Considering such knock-on effects is especially important when technological solutions such as computerized prescribing are being considered (Bates et al, 1998, 1999). Potential adverse effects include incompatibility with other aspects of the users' (i.e. pharmacists', doctors' and nurses') tasks and increased non-compliance, whereby the user circumvents the system because it is unworkable in practice. Increased computerization may also increase the frequency of data inputting errors (Ferner, 1995) and make dose calculations more difficult to check by concealing the intermediate stages of the calculation, so that errors are harder to recover (Dillner, 1993). Studies which evaluate these types of potential adverse effect are a prerequisite for the introduction of computerized prescribing into clinical practice (Sheridan & Thompson, 1994; Bates, 2000; Nightingale et al, 2000).

Despite the recommendations of Fernandez et al (1998), vincristine incidents have continued to take place. Since 1985 there have been 14 such incidents in the UK, 10 of which were fatal (Laurance, 2001). The recent death of Wayne Jowett at Queen's Medical Centre, Nottingham, was the latest such tragedy and has prompted a Department of Health investigation (Woods, 2001). So why do similar scenarios of failure keep recurring? One of the authors of this article analysed the vincristine-related death of Richie William, a patient at Great Ormond Street Hospital. The results of this analysis were disseminated to risk managers and clinicians in other UK centres and have more recently been summarized in the report 'An Organization with a Memory' (DoH, 2000). At present, however, there is no national UK incident database to which every institution reports. Such a database exists in the United States (Food and Drug Administration, 2001) and it has been suggested that a similar one should be set up in the UK. A national incident reporting database would allow lessons to be learnt universally, not on an ad hoc basis. Issues pertinent to developing such a database are discussed in the following section.

A near miss is any situation in which an ongoing accident sequence was halted before it could produce clearly significant and potentially serious (safety-related) consequences (Van der Schaaf et al, 1991). Near miss reporting systems are widely used in the nuclear (Berman & Collier, 1996), aviation (NASA, 1986; Billings, 1998) and chemical industries (Van der Schaaf et al, 1991). Central to the concept of near misses is the notion that some form of recovery took place, i.e. an accident sequence was initiated and then, either by chance or by the actions of the individual, team or organization, it was recovered from before having negative consequences (Van der Schaaf et al, 1991; Barach & Small, 2000). The effectiveness of critical incident and near miss reporting systems depends on several factors, including the organizational culture, the reporting structure and the quality assurance measures in place to check the accuracy of the data. Under-reporting of incidents is a serious problem. Evidence suggests that the existence of a blame culture in medicine leads to under-reporting (Gladstone, 1995; Vincent et al, 1999; DoH, 2000). Another cause of under-reporting may be the design of the incident reporting system itself. It is important that reports are submitted to a neutral organization which anonymizes and then publishes them on an annual basis (Myhre & McRuer, 2000). Previous research has shown that the organizational culture in which an incident reporting system is developed is essential to its success (Reason, 1997).
High rates of reporting are found in organizations that have a 'just culture', in which the aims of the system are to learn lessons about human and organizational problems, and error recovery, rather than to apportion blame (Upton & Cousins, 1995; Reason, 1997). Cross-validation of incident information using multiple data collection methods, i.e. interviews, questionnaires and patient records, is essential to check the facts of the case and to identify all the latent conditions that were involved (Beckman et al, 1996). Ideally, a panel of medical experts from different specialities should be involved when categorizing the active and latent errors. Data should be collected on the number of times the expert panel disagrees on the classification of a root cause, as this is a good quality assurance measure.

As with drug delivery, transfusion medicine involves multiple communication interfaces between wards, the blood bank and the operating theatre, and the involvement of various health care professionals: nurses, consultants and specialist registrars in haematology/oncology, pharmacists, surgeons, etc. Errors in transfusion medicine can be divided into those that occur in the blood bank and those that originate in other parts of the hospital. Blood bank errors include testing the wrong sample, issuing an incorrect unit of blood, transcription errors in filling out labels, and attaching labels to the wrong unit of blood (Linden et al, 2000; Marconi & Sirchia, 2000). Errors in other hospital locations include sending an incorrect request to the blood bank, phlebotomy errors and failure to check that the appropriate blood is being given to the correct patient (Taswell et al, 1994).

Previous research has shown the relative frequency of errors at different stages of transfusion medicine. Myhre & McRuer (2000) have reviewed studies on the incidence of fatal errors in blood transfusion and summarized error rates per stage of the transfusion process. In this analysis, the transfusion process was broken down into drawing the specimen from the patient, errors in the laboratory/blood bank, administering the transfusion, and other causes. Percentage error rates in drawing the wrong specimen from the patient have ranged from 1% (1 out of 126 incident reports) (Camp & Monaghan, 1981) to 20% (23/111 reports) (McClelland & Phillips, 1994) across studies. Specimen exchange errors in the laboratory have ranged from 5% (6/111) (McClelland & Phillips, 1994) to 21% (9/44) of fatal incident reports (Honig & Bove, 1980). Errors transfusing blood to the wrong patient have varied from 20% (25/126) of fatal incident reports in one study (Camp & Monaghan, 1981) to 75% (82/111) in another (McClelland & Phillips, 1994).

These findings have led to calls to computerize transfusion information systems to reduce errors (Gael & Richards, 1997). However, trying to design human error out of the system by increasing computerization solves one set of problems but breeds a new generation of others (Sheridan & Thompson, 1994). This is evidenced in a study which compared information issued by the blood bank computer system with written information on transfusion reports returned to the blood bank from the wards (Zimmerman et al, 1999). Results showed that discrepant information on the recipient's identity and the blood component status occurred in 1·24% of transfusion data records reviewed, or 1 in every 81 transfusions (n = 49 224 transfusions).
For example, blood components were sometimes recorded as 'discarded' by the computer system, but a completed transfusion report stating that the component had been given was later returned to the blood bank.

Transfusion medicine, like anaesthetics, has been quick to realize the value of learning lessons from critical incident and near miss reporting systems. One example of a near miss in transfusion medicine is when the blood bank issues incorrect blood but the error is captured by the clinician and nurse at the patient's bedside (Linden et al, 1992; Linden, 1999; Linden & Schmidt, 1999). A second example is shown in Table II, which shows how incompatible surgical admission and blood transfusion policies, coupled with distractions during the cross-matching procedure, can erode system defences. In this case, in-built checking mechanisms led to successful error recovery.

In the UK, Williamson et al (1999) have reported the findings from the first 2 years of a confidential transfusion incident reporting system, the SHOT (Serious Hazards of Transfusion) initiative. Between October 1996 and September 1998, 366 events were reported by 276 hospitals (164 hospitals returned a 'nil to report' card, showing that they had no transfusion incidents to report). Of these events, 191 (52%) involved the wrong blood being given to a patient, 55 (15%) an acute transfusion reaction, 51 (14%) a delayed transfusion reaction, 22 (6%) post-transfusion purpura, 27 (8%) acute lung injury, 12 (3%) infections transmitted via transfusion and 8 (2%) graft-versus-host disease. Sixty-two of the 191 events involving blood being transfused to the wrong patient were ABO-incompatible transfusions. Analyses of these events identified problems in the request for blood or the sampling from the patient, collection of the wrong blood from the bank refrigerator, omission of checks comparing the blood bags with the patient's case records, and failures of two-person checking procedures at the patient's bedside to detect the discrepancy between the blood bag and the patient. This last error type occurred in 80 cases.

Linden et al (2000) reported the results of 10 years' experience of incident reporting in New York state. Results showed erroneous administration for 1 in every 19 000 red blood cell units administered. Errors originating in the blood bank were responsible for 29% of events and included testing of the wrong specimen and issuing an incorrect unit of blood. Fifty-one per cent of events occurred outside the blood transfusion unit; these errors included administering blood to the wrong patient and phlebotomy errors. It was also found that transfusion errors were less likely in the largest institutions, a finding attributed to their greater experience. However, such findings could also arise because these hospitals had better interfaces within the transfusion process, an area which needs investigation in future research.

Both the New York and the SHOT data show that adverse transfusion events can result from multiple errors at different points in the system. For example, in the SHOT analysis, 74 of 177 cases analysed resulted from between two and seven errors. Similarly, the New York data show that two or more errors occurred in 15% of reported events. The vast majority of these errors involved the blood bank issuing an incorrect component that could potentially have been picked up during a final check at the patient's bedside, but was not.
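In essence, that final bedside check is a comparison of independent identifiers on the blood unit and on the patient, performed by two people. A minimal sketch of the comparison, with hypothetical field names and identifiers, follows:

```python
# Minimal sketch of the final bedside check described above: the blood unit's
# compatibility label is compared against the patient's wristband before
# transfusion. Field names and identifiers are hypothetical.
REQUIRED_MATCHES = ("patient_id", "surname", "date_of_birth", "blood_group")

def bedside_check(wristband: dict, unit_label: dict) -> list:
    """Return the list of mismatched fields; empty means safe to proceed.

    In practice this comparison is performed independently by two people;
    automating it does not remove the need for that redundancy.
    """
    return [field for field in REQUIRED_MATCHES
            if wristband.get(field) != unit_label.get(field)]

wristband  = {"patient_id": "H123456", "surname": "SMITH",
              "date_of_birth": "1962-04-01", "blood_group": "O RhD+"}
unit_label = {"patient_id": "H123456", "surname": "SMITH",
              "date_of_birth": "1962-04-01", "blood_group": "A RhD+"}

mismatches = bedside_check(wristband, unit_label)
if mismatches:
    print("DO NOT TRANSFUSE; mismatched fields:", mismatches)  # ['blood_group']
else:
    print("identifiers match; proceed with second-person check")
```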
The SHOT data support this picture, also identifying failures of double-checking at the patient's bedside. These findings show the danger of relying on the last line of defence in the system to detect and recover errors: clinicians and nurses working in busy, stressful conditions may incorrectly assume that checks further upstream will have identified a cross-matching problem, and omit the final check.

Ibojie & Urbaniak (2000) carried out a retrospective analysis of 189 046 blood products screened for transfusion in a Scottish hospital over a 4-year period. They compared the incidence of actual mistransfusion events with the incidence of near misses. A near miss was defined as any error that, if left undetected, could result in a wrong determination of blood group or in the issue of an incorrect or inappropriate blood component, but which was recovered before the transfusion took place. Results showed a 3:1 ratio (21/7) of near misses to actual mistransfusion events, highlighting the potential value of information on near misses for learning lessons about system safety. Furthermore, non-compliance with guidelines occurred in 20 of the 21 near misses. The root cause of this non-compliance was that protocols were not readily accessible to staff when they needed to refer to them.

A key feature of the near miss reporting systems used in other industries is that they are based on error taxonomies that allow the incident investigator to trace the causal path back to identify systems problems. A similar approach was taken in the development of the Medical Event Reporting System for Transfusion Medicine (MERS-TM) (Battles et al, 1998; Kaplan et al, 1998). This is based on the Eindhoven Classification Model, an error taxonomy which was originally developed to investigate near misses in the chemical process industry and which, in turn, was based on psychological theories of error causation. This taxonomy classifies the root causes of an adverse event into four main categories: technical, organizational, human error and other factors (including patient- or donor-related characteristics). Technical problems include inadequate design of equipment and material defects, such as faulty weld seams on blood bags or poor adhesiveness of labels. Organizational root causes include the unavailability or poor quality of protocols from the blood bank or transfusion service, and management priorities which result in a conflict between production and safety. An analysis of events from two transfusion services in the USA showed that technical design problems, including the design of equipment, software and materials, were the primary root cause in a cohort of 423 transfusion events. The most frequently occurring human errors were checking failures, a finding consistent with other studies.

Medication errors can only be prevented and reduced by focusing on the system as a whole, not on the individual clinician or nurse. A national critical incident and near miss reporting database which ensures that the whole haematology community learns lessons about latent conditions and active errors is essential. This will only succeed in improving patient safety if the appropriate reporting culture and feedback mechanisms are in place.

Research at the Institute of Child Health and Great Ormond Street Hospital for Children NHS Trust benefits from Research and Development funding received from the NHS Executive.