From the Editors
2013; Wiley; Volume: 33; Issue: 7; Language: English
DOI: 10.1111/risa.12098
ISSN: 1539-6924
Risk Analysis provides an outlet for new research and applications that advance the state of the art of the field. The journal also serves as a forum for important discussions about how well the current practice of risk analysis is serving the needs of decision and policy makers, and how it can do so better. This issue begins with a discussion of how to use risk analysis most effectively in the European Union (EU) to further responsible, wise, and effective governance and regulation of risks. Ragnar Löfstedt offers a perspective on risk governance in the EU, viewed in the context of the European Parliament's Informal Working Group on Risk. He and commentators Girling, Nappert, and Rothstein raise such fundamental questions as how governments should address uncertain risks, especially when risks, control costs, and benefits of interventions are unevenly distributed within and among participating countries; how different countries and cultures understand “risk” differently as a potential basis for legitimate intervention; and how powerful interest groups can exploit risk analysis terms and processes to pursue other political or economic ends (e.g., creating barriers to competition, or using public fears to manipulate policies and polities) under the guise of risk management. An important theme is the potential tension between evidence-based risk management, which focuses on facts, data, and scientific evidence about risks, and concern-driven risk management, which responds more to local concerns (or “pet risks”) and may invoke the precautionary principle to urge politically popular actions that an evidence-based process would reject.

Occasionally, Risk Analysis publishes profiles of and interviews with thought leaders whose ideas have helped to define the field and practice of risk analysis.
This issue includes a fascinating profile of Sheila Jasanoff, written by Mike Greenberg and Karen Lowrie, discussing her education and career and her many contributions to understanding the interplay of science, technology, institutions, and political cultures in shaping risk regulations, laws, and policies. The discussions by Löfstedt and his commentators, as well as several other articles in this issue discussed later, well illustrate the vital and ongoing importance of these insights in risk management decisions, ranging from management of flood risks and evacuation planning in the Netherlands to adoption of seatbelt and motorcycle helmet laws in countries with different degrees of political freedom.

Prions, the misfolded protein particles that are causal agents of bovine spongiform encephalopathy (BSE, or “mad cow disease”) in cattle and of classical and atypical scrapie in sheep, may be found in waste water from livestock. In two companion articles, Adkin et al. develop quantitative estimates of the prion load and assess the resulting risks of new BSE and scrapie cases arising in Great Britain from contaminated waste water. Based on models in which an uncertain fraction of released prions survive and eventually contaminate feeding animals, creating new cases, the authors conclude that, under current conditions, waste water from livestock might cause one new BSE infection in cattle every 1,000 years; one additional case of classical scrapie about every 30 years; and one additional case of atypical scrapie about every 33 years. However, the uncertainties are very large: the estimated risks could be as high as one excess case on average every 769 years for BSE and every 16 years for each type of scrapie, or as low as one excess case every 555,556 years for BSE and one excess case every 2,500 or 3,300 years for classical and atypical scrapie, respectively. Bouwknegt et al.
apply quantitative microbiological risk assessment to estimate the human health risks of infection (possibly leading to Legionnaires’ disease) from inhaling the bacterium Legionella pneumophila arising from pool water in Dutch bathing establishments. Based on a model of exposure from bubbles that burst and generate inhalable droplets, together with a dose-response model for guinea pigs (used as a surrogate for humans), they recommend bacterial concentrations of less than 100 colony-forming units per liter of pool water. However, if repeated low-level exposures build immunity against infection, then human risks may be smaller than those estimated from the guinea pig dose-response data. The authors conclude that stricter control of bacterial concentrations in pool waters may be required to protect human health.

Two articles in this issue compare previous model-based predictions of chemical uptake and metabolism to observed values, based on in vivo human and animal data, and report some substantial discrepancies and opportunities for improvement. Bogen notes that uptake rates of dilute aqueous organic chemicals through living skin are roughly an order of magnitude greater than the rates projected from diffusion and physicochemical regression modeling. This discrepancy between in vivo measurements and model-based projections from in vitro data requires explanation and highlights the practical importance of using in vivo data to obtain realistic risk models. Knutsen et al. develop a new physiologically based pharmacokinetic (PBPK) model describing the time courses of uptake, metabolism, distribution, and elimination of benzene and its reactive metabolites in bone marrow and other physiological compartments, during and following inhalation exposure to benzene. Benzene at sufficiently high and prolonged exposures causes human bone marrow toxicity and increases the risk of acute myeloid leukemia.
The new model incorporates data on human liver CYP2E1-specific activity (rather than using rodent metabolism data as a surrogate) and is validated with human biomonitoring data from Chinese benzene worker populations and other human studies. Compared to earlier benzene PBPK models that relied more heavily on mouse data, the new model reveals significantly different metabolic rate parameters for humans, with much lower production rates for key oxidative metabolites (hydroquinone, catechol, phenylmercapturic acid, and muconic acid) that may predict the human risks of leukemia and of bone marrow and blood toxicity caused by inhalation exposures to benzene. Again, the value of using in vivo human data to develop more accurate models is clear. Bogen illustrates that previous model-based estimates probably underestimate real-world in vivo dermal uptake rates, whereas Knutsen et al. show that previous benzene PBPK models might have overestimated the human bone marrow toxicity of benzene exposures, depending on how remaining uncertainties are resolved.

Iqbal and Öberg compare fuzzy arithmetic and probability bounds (p-box) analyses for describing and propagating input uncertainties in multi-compartment models, with applications to characterizing the uncertain environmental fates of benzene, pyrene, and DDT based on uncertain partition coefficients and other uncertain inputs. The authors conclude that the fuzzy arithmetic and p-box approaches give similar results in the chosen application, with fuzzy arithmetic able to give either more conservative (wider interpercentile uncertainty intervals) or less conservative (narrower intervals) uncertainty estimates than probability bounds analysis, depending on whether input uncertainties are assumed to be independent.
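For readers unfamiliar with fuzzy uncertainty propagation, the style of calculation can be illustrated with a toy example. The sketch below uses hypothetical numbers and a trivial product model, not the authors' multi-compartment model: two uncertain partition coefficients are represented as triangular fuzzy numbers and propagated by alpha-cut interval arithmetic (at each membership level alpha, the fuzzy calculation reduces to ordinary interval arithmetic; a p-box analysis would instead bound cumulative distribution functions).

```python
# Toy sketch (hypothetical numbers): propagate two uncertain partition
# coefficients through a product model y = k1 * k2 using alpha-cut
# fuzzy arithmetic.

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
    low, mode, high = tri
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def interval_product(a, b):
    """Exact interval product: min/max over all endpoint combinations."""
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

# Hypothetical triangular fuzzy inputs (low, most likely, high).
k1 = (0.5, 1.0, 2.0)
k2 = (10.0, 20.0, 40.0)

# Wider intervals at low alpha (more conservative), collapsing to the
# most likely value at alpha = 1.
for alpha in (0.0, 0.5, 1.0):
    y = interval_product(alpha_cut(k1, alpha), alpha_cut(k2, alpha))
    print(f"alpha={alpha}: y in [{y[0]:.2f}, {y[1]:.2f}]")
```

The nesting of the resulting intervals (wide at alpha = 0, a point at alpha = 1) is what allows fuzzy results to be read as a family of uncertainty intervals of varying conservatism.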
Suppose that flood, fire, or hurricane danger leads local authorities to warn residents to evacuate; that the threat of an epidemic leads public health officials to urge vaccination; or that toxic smoke from a chemical fire prompts local authorities to warn people to stay inside, shut off their air conditioning and ventilation, and close their windows. What determines whether people will comply with such warnings and advice? Hearing and paying attention to the message, being afraid that not heeding it will lead to harm, and believing that one can follow the advice and thereby reduce the risk of harm are plausible mediators of citizen responses to government warnings. So are the comments of others on social network sites (SNS) such as Twitter. Verroen et al. use questionnaires about beliefs in the efficacy of self-protective behaviors, and about intentions to perform them in the case of a hypothetical chemical fire involving freight trains, to show that “high-efficacy” messages (leading respondents to believe that they can do what is recommended and that it will work in reducing risk) make SNS feedback irrelevant in affecting intentions to engage in self-protective behaviors. By contrast, low-efficacy messages leave citizens more likely to turn to SNS in deciding what to do. High-efficacy messages backed by supportive comments on SNS are most effective in stimulating intentions to self-protect. This research suggests the importance of social media in enhancing high-efficacy government messages and in substituting for low-efficacy ones.

Could going to work be a life-and-death decision in the event of extreme weather? Zahran et al. report that tornados that strike during weekdays, when people can shelter in business buildings, are significantly less lethal than tornados that strike on weekends.
Conversely, hurricanes that occur during the week, when more people may be tempted to go to work instead of sheltering or fleeing, are significantly more lethal than weekend hurricanes. If these suggested causal explanations turn out to hold at the individual level, then improving automated emergency broadcasts for tornados (perhaps via cell phones) may be especially valuable in saving lives during weekends and in the summer, when people are less likely to be at work. Conversely, incenting people to evacuate during weekday hurricanes may help to reduce the disproportionate lethality of these events.

In another article about evacuation, Kolen et al. discuss a probabilistic scenario-based method, populated with expert judgment inputs, for evaluating and comparing the life-saving benefits of different combinations of mass evacuation strategies in the event of a potential flood. Evacuation strategies considered range from preventive evacuation (from the area at risk to safe areas elsewhere), to more local evacuations (from homes to relatively safe areas and buildings within the threatened areas), to “vertical evacuation” (sheltering in the upper parts of buildings, without leaving). The proposed method, called EvacuAid, evaluates how the threat and lead time, responses by authorities and citizens, and physical infrastructure affect the numbers of lives expected to be saved by different evacuation strategies. A case study application to dike rings in the Netherlands identified how these factors affect the “Evacuation Dilemma Point,” defined as the cross-over point in lead time between early warning of a flood and onset of autonomous (rather than planned and coordinated) responses, such that preventive evacuation saves more lives for longer lead times and vertical evacuation saves more lives for shorter lead times.
This cross-over point is often on the order of a day to several days, highlighting both the importance of preparing for vertical evacuation in case of floods with less than several days of warning, and the importance of improving traffic flows and coordination for preventive mass evacuations.

The U.S. Border Patrol (USBP) faces a high volume of contraband smuggling into the United States across the border with Mexico, with hundreds of thousands of illegal aliens and millions of pounds of marijuana, as well as weapons, cash, and drugs, intercepted each year. Allocating limited resources to stem this flow can potentially benefit tremendously from relatively quick and simple risk analysis. Levine and Waters describe a four-month effort to apply risk analysis concepts to allocate funds to the Tucson Sector of the USBP (from the 2010 appropriation bill H.R. 6080, which allocated $600M to Southwest border operations). The analysis asked station chiefs to estimate, for each zone along the Tucson Sector of the border, the annual numbers (best guesses and subjective confidence intervals) of transits of illegal aliens from Mexico to the interior of the United States, tons of marijuana, and assaults on agents. The station chiefs were then asked to estimate how these numbers would change if deployed levels of various countermeasures were changed (e.g., patrol agents, types and amounts of fencing, surveillance equipment, forward operating bases, motorcycles, all-terrain vehicles, horse patrols, etc.). Interactions among countermeasures, and possible responses of smugglers to changes in defensive resource allocations, were not modeled. The results of the elicited judgments were displayed as heat maps and in other formats using GIS software, and were used to suggest where adding new resources, or reallocating existing resources among zones, might lead to the greatest reductions in the numbers of adverse events of each type per year.
The short time frame of this study precluded long-term evaluation and validation of the recommendations, but the study showed that judgments could be elicited and used to make allocation decisions in a matter of months. How effective the resulting decisions prove to be, and whether they can be greatly improved by using optimization and other formal analytic modeling techniques, might be worth investigating in subsequent studies.

How can risk analysis help to prevent future occurrences of undersea oil and gas accidents such as Deepwater Horizon? Cai et al. develop a modeling approach for mapping logical flow charts of undersea operations to Bayesian networks (BNs); classifying conditions and events that influence the risk of failures into major factor categories (e.g., hydraulic, mechanical, human, software, and hardware factors); and using the resulting BNs to quantify the probability of successful (accident-free) operation over any stated time interval as the reliabilities of these factors are varied. Conditional probability tables at each node (elicited from experts and/or derived from relevant historical data) represent component reliability information, and directed arcs between nodes represent interdependencies among events. Mutual information between events is used to quantify the importance of low-level events for predicting high-level ones, such as system failure. The authors apply the proposed BN methodology, implemented using the Netica BN modeling software, to a case study of the probability of failure on demand to successfully close a subsea ram blowout preventer, e.g., because of failures in all of the duplicate electronic or hydraulic control systems. The BN analysis identifies hydraulic and mechanical factors as far more important than the modeled human, software, and hardware factors in contributing to the risk of failures.
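For readers unfamiliar with mutual information as a BN importance measure, a minimal sketch follows. The two-factor structure and all probabilities are hypothetical, and the joint distribution is enumerated in plain Python rather than with the Netica software used by the authors; the point is simply that root events can be ranked by how much observing each one reduces uncertainty about system failure.

```python
# Hypothetical two-cause sketch: rank binary root events H (hydraulic
# fault) and E (human error) by their mutual information with failure F.
from itertools import product
from math import log2

p_hydraulic = 0.05   # hypothetical P(H = 1)
p_human = 0.02       # hypothetical P(E = 1)

# Hypothetical conditional probability table P(F = 1 | H, E).
p_fail = {(0, 0): 0.001, (1, 0): 0.60, (0, 1): 0.20, (1, 1): 0.90}

def joint():
    """Enumerate the joint distribution P(H, E, F) over all 8 states."""
    dist = {}
    for h, e, f in product((0, 1), repeat=3):
        ph = p_hydraulic if h else 1 - p_hydraulic
        pe = p_human if e else 1 - p_human
        pf = p_fail[(h, e)] if f else 1 - p_fail[(h, e)]
        dist[(h, e, f)] = ph * pe * pf
    return dist

def mutual_information(var_index):
    """I(X; F) in bits, where X is the root event at var_index (0=H, 1=E)."""
    dist = joint()
    mi = 0.0
    for x in (0, 1):
        for f in (0, 1):
            pxf = sum(p for k, p in dist.items() if k[var_index] == x and k[2] == f)
            px = sum(p for k, p in dist.items() if k[var_index] == x)
            pf = sum(p for k, p in dist.items() if k[2] == f)
            if pxf > 0:
                mi += pxf * log2(pxf / (px * pf))
    return mi

print("I(hydraulic; failure) =", round(mutual_information(0), 4))
print("I(human; failure)     =", round(mutual_information(1), 4))
```

With these illustrative numbers, the hydraulic factor carries substantially more information about failure than the human factor, mirroring the kind of ranking the authors report; real applications enumerate much larger networks (or use approximate inference) rather than eight joint states.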
Automobile seatbelts and motorcycle helmets reduce road fatality risks, yet not all individual car drivers, passengers, and motorcyclists would necessarily choose to use them voluntarily in the absence of laws requiring them. What factors drive the social decision to enact seatbelt and motorcycle helmet laws? Law, Noland, and Evans apply a panel data analysis, measuring the same variables in the same 31 countries year after year between 1963 and 2002. Over these four decades, enactment of seatbelt and motorcycle helmet laws was associated with measures indicating increased political freedom, democracy, and political stability in government; higher education levels; greater per-capita incomes; and more equitable distributions of income.

Which is preferable: a policy that reduces a health risk from 2 in a million to 1 in a million for everyone, or a policy that reduces it from 2 in a million to 1 in a million for almost everyone, but from 2 in a million to zero for a select few? Should the answer depend on how the few are selected, e.g., based on need, on wealth, or at random? Standard cost-benefit analysis identifies the latter change as Pareto-superior to, and hence socially preferable to, the former. Matthew Adler, in his most recent book, Well-Being and Fair Distribution: Beyond Cost-Benefit Analysis, suggests that such analysis is incomplete. As discussed in this issue in an insightful review by Arden Rowell, Adler argues that both evaluation of outcome distributions (e.g., giving more weight to changes that benefit the worst-off) and use of welfare changes, rather than only dollar impacts, can and should be used to improve upon current practices of risk-cost-benefit analysis. Rowell considers Adler's goal worthwhile but raises several cogent questions and concerns.
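The arithmetic behind the two hypothetical policies can be made concrete. The sketch below uses an illustrative population size and "select few" count of our own choosing (not figures from Adler or Rowell) to show why the second policy is Pareto-superior under standard cost-benefit accounting:

```python
# Illustrative arithmetic for the two hypothetical policies described above.
N = 1_000_000   # hypothetical population size
k = 100         # hypothetical "select few"

# Policy A: everyone's risk falls from 2 in a million to 1 in a million.
expected_cases_A = N * 1e-6

# Policy B: the same, except the select few face zero risk.
expected_cases_B = (N - k) * 1e-6 + k * 0.0

print(round(expected_cases_A, 6))  # 1.0 expected case
print(round(expected_cases_B, 6))  # 0.9999 expected cases

# No one is worse off under B than under A, and the select few are strictly
# better off, so standard cost-benefit analysis ranks B as Pareto-superior.
# Adler's argument is that a complete ranking may also depend on who the
# few are (need, wealth, or chance) and on welfare, not risk counts alone.
```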
An increasing number of submissions to Risk Analysis now deal with quantifying fairness-efficiency trade-offs, and this seems likely to become a more commonly discussed part of risk analyses in the years ahead. Adler's book helps to clarify the problem and to offer constructive proposals for how best to incorporate concerns for fairness into risk management policies.