Article · Open access · Peer-reviewed

Varieties of Risk

2019; Wiley; Volume: 101; Issue: 2; Language: English

DOI

10.1111/phpr.12598

ISSN

1933-1592

Authors

Philip A. Ebert, Martin Smith, Ian Durbach

Topic(s)

Psychology of Moral and Emotional Judgment

Abstract

The notion of risk plays a central role in economics, finance, health, psychology, law and elsewhere, and is prevalent in managing challenges and resources in day-to-day life. In recent work, Duncan Pritchard (2015, 2016) has argued against the orthodox probabilistic conception of risk, on which the risk of a hypothetical scenario is determined by how probable it is, and in favour of a modal conception, on which the risk of a hypothetical scenario is determined by how modally close it is. In this article, we use Pritchard's discussion as a springboard for a more wide-ranging discussion of the notion of risk. We introduce three different conceptions of risk: the standard probabilistic conception, Pritchard's modal conception, and a normalcy conception that is new (though it has some precursors in the psychological literature on risk perception). Ultimately, we argue that the modal conception is ill-suited to the roles that a notion of risk is required to play, and explore the prospects for a form of pluralism about risk, embracing both the probabilistic and the normalcy conceptions.

We take the view that a risk judgment always implicates a body of evidence, which we refer to as the background evidence. In cases where the background evidence is not made explicit, we take it to be supplied by the context of utterance and, in typical cases, to be the evidence possessed by the one making the judgment. That is, we are inclined towards a contextualist semantics for utterances such as 3 and 4, on which their truth conditions feature an evidence parameter, the value of which is fixed by the context. The semantics of such utterances is not, however, our primary concern here.

As well as making categorical risk judgments such as the above, we often make comparisons. While we may judge that the risk of a plane crash is very low, we may also judge that there is a higher risk of a car crash on the way to the airport. As well as judging that there's a high risk of food poisoning at a particular restaurant, we might also judge that there is a lower risk of food poisoning at the restaurant next door.

Moreover, while we often speak about the riskiness of feared events, such as plane crashes, food poisoning, etc., we can also assess the risk of states of affairs. For instance, before drilling into the wall of a 1970s West Australian house, one might assess the risk that the wall contains asbestos; jurors in a criminal trial, when contemplating a guilty verdict, might consider the risk that the defendant is innocent; and a mountaineer may ponder the risk that the snow conditions are unfavourable for a climb. Here, we treat propositions as the primary bearers of risk, with the riskiness of an event or state of affairs corresponding to the riskiness of the proposition that the event occurs, or that the state of affairs obtains. As well as making judgments about the risk of specific events and states of affairs, people also assess the risk of activities or decisions, saying things like 'Drilling into this wall is risky' or 'It would be risky to attempt a climb under these conditions'. These judgments are important to understanding the connections between risk and decision making, but we put them to one side here.

According to the probabilistic account of risk, the risk of a proposition P is determined by the probability of P–the higher the probability, the higher the risk. On this view, the risk of P is higher than the risk of Q just in case P is more probable than Q.
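Stated schematically (the notation here is ours, not the paper's), with E the background evidence and Pr the evidential probability:

```latex
% Probabilistic account of risk, schematic comparative form (our rendering):
\mathrm{Risk}_E(P) \;>\; \mathrm{Risk}_E(Q)
  \quad\Longleftrightarrow\quad
  \Pr(P \mid E) \;>\; \Pr(Q \mid E)
```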
The probability here should be interpreted as evidential probability–probability conditional upon the background evidence. We follow Pritchard in treating the probabilistic account as the orthodox account of risk (Pritchard, 2015, section 1). It is important to note, however, that this is somewhat different from the definition of risk that has become standard in professional risk management and in some economics textbooks. On this definition, risk is equated with expected disvalue: the probability of an outcome multiplied by a measure of how severe or detrimental it would be. There is some evidence to suggest that this is best regarded as a technical definition (and a relatively recent one) which doesn't directly connect with our ordinary risk judgments (Boholm et al. 2016).1 In any case, our focus here will be on risk comparisons made in cases where severity is held constant and the two accounts of risk generate the same predictions.
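For contrast, the expected-disvalue definition just mentioned can be put as follows (again our notation, not the paper's); once severity is held constant across the compared outcomes, it induces the same ordering as the probabilistic account:

```latex
% Expected-disvalue definition from risk management (our rendering):
\mathrm{Risk}(O) \;=\; \Pr(O) \times \mathrm{Disvalue}(O)
% With Disvalue held constant across compared outcomes,
% Risk(O_1) > Risk(O_2) iff Pr(O_1) > Pr(O_2),
% matching the probabilistic account's ordering.
```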
It is widely acknowledged that the intuitive risk judgments that people are inclined to make don't always align with what the probabilistic account would predict. In research dating back to at least the 1970s, psychologists have identified a range of ways in which people's intuitive judgments systematically deviate from what the probabilistic account would sanction. These results have been treated not as evidence against the probabilistic account but, rather, as revealing important heuristics and biases which guide our judgments about risk and probability. For example, Kahneman & Tversky propose a number of heuristics that underlie risk judgments, such as the availability and representativeness heuristics, which in many cases lead to systematic deviations from what the probabilistic account would predict to be correct (Kahneman & Tversky 1973, Tversky & Kahneman 1974; for an overview, see Kahneman 2011). As Loewenstein and colleagues put it:

"feelings about risk are largely insensitive to changes in probability, whereas cognitive evaluations do take probability into account. As a result, feelings about risk and cognitive risk perceptions often diverge, sometimes strikingly." (Loewenstein et al. 2001, p. 271)

Proponents of the 'risk as feelings' hypothesis adopt a dual system approach to risk judgments, whereby some risk judgments will be the output of a broadly cognitive system, and predominantly determined by features such as estimated probabilities, while other judgments will be generated by a more affective system, and influenced by the ease or vividness with which the outcomes can be imagined, and by personal experience with the relevant outcome (Loewenstein et al. 2001). That risk judgments can deviate from the probabilistic norm is thus to be expected on a dual system approach.

(Bomb 1) An evil scientist has rigged up a large bomb, which he has hidden in a populated area. If the bomb explodes, many people will die. There is no way of discovering the bomb before the time it is set to detonate. The bomb will only detonate, however, if a set of six specific numbers between 1 and 49 come up on the next national lottery draw. The odds of these numbers appearing are fourteen million to one. It is not possible to interfere with this lottery draw.2

(Bomb 2) Same as above; however, the bomb will only detonate if a series of three highly unlikely events obtains. First, the weakest horse in the field at the Grand National, Lucky Loser, must win the race by at least ten furlongs. Second, the worst team remaining in the FA Cup draw, Accrington Stanley, must beat the best team remaining, Manchester United, by at least ten goals. And third, the Queen of England must spontaneously choose to speak a complete sentence of Polish during her next public speech. The odds of this chain of events occurring are fourteen million to one.

According to Pritchard, (Bomb 1) is riskier than (Bomb 2) and, if forced to choose, we ought to prefer (Bomb 2) to (Bomb 1). We admit there may be some temptation to judge as Pritchard does, but would be cautious in drawing any immediate conclusions about the viability of the probabilistic account. Even if this judgment is widely shared (more on this below), it would seem to put Pritchard's example in the same category as other existing examples in which risk judgments have been shown to deviate from what the probabilistic account predicts. Nonetheless, Pritchard invests his thought experiment with an added significance: as putting pressure on the probabilistic account of risk, rather than exposing a heuristic or bias which can affect our judgments.

A number of heuristics and biases that psychologists have identified could potentially come into play in the kinds of cases Pritchard describes. One thing that we might observe is that the conditions for triggering a detonation in (Bomb 1) seem far easier to imagine than the conditions described in (Bomb 2). To imagine that the weakest horse in the Grand National wins by at least 10 furlongs, or that the worst team remaining in the FA Cup draw beats the best team remaining by at least 10 goals, or that the Queen spontaneously chooses to speak a complete sentence of Polish during her next public speech plausibly involves the construction of a rich accompanying narrative. In contrast, to imagine six particular numbers coming up in the next National Lottery draw requires no particular narrative. In a range of studies, psychologists have shown that there is a positive correlation between the ease with which an event can be imagined or recalled and how probable we estimate the event to be–a phenomenon variously labelled the availability heuristic (Kahneman & Tversky 1973, Tversky & Kahneman 1974) or the simulation heuristic (Kahneman & Tversky 1982).

Additionally, one might hypothesise that ease of imagining exerts an influence on intuitive risk judgments even when probabilities are made explicit. That is, the greater ease with which one can imagine the scenario described in (Bomb 1) might lead one to assign it a higher risk than (Bomb 2), even though the probabilities are stipulated to be equal.3

Further, according to the competence hypothesis (Heath & Tversky 1991), the preference between bets, when the relevant options are judged equally probable, can depend on how knowledgeable and competent people take themselves to be with respect to the bets in question. Heath & Tversky showed that sports fans prefer to take bets on sporting events rather than on chance events, even when they judge the probabilities to be equal, since success in the former would be attributable to one's knowledge and competence as opposed to mere luck. Given that (Bomb 1) is a scenario in which success or failure is simply due to luck, someone who takes himself to be somewhat knowledgeable about the conditions in (Bomb 2) may be expected to prefer that option. In the highly likely event that a subject thereby succeeds in saving lives, that subject may expect to deserve some credit.4
Another confounding factor is the plausible variation, across the two scenarios, in one's confidence about the probability assignments in question. The probability of a lottery outcome can be straightforwardly determined, given the properties of the lottery. However, it is much less clear how to determine the probabilities of the triggering conditions in (Bomb 2). Pritchard simply stipulates a value for the probability of an explosion in (Bomb 2), but it is natural to take this as much more speculative and less certain than the corresponding value in (Bomb 1).5 Gärdenfors & Sahlin (1982) and Goldsmith & Sahlin (1983) highlight a number of ways in which preferences in an equiprobable choice scenario are affected by differences in one's confidence about the probabilities. In a finding that is particularly significant for the present discussion, Goldsmith and Sahlin observe that, when asked to choose between equiprobable bets under lose-or-not-lose conditions (where only a neutral or adverse outcome is in prospect), some subjects prefer bets where the probabilities are judged to be less certain. Such a mechanism offers another possible explanation for a preference for (Bomb 2).6

Finally, some (including the authors) might well think that the stipulated probabilities in (Bomb 2) are unreasonably high. The probability that the weakest horse in the Grand National wins by at least 10 furlongs, the worst team remaining in the FA Cup draw beats the best team remaining by at least 10 goals, and the Queen spontaneously chooses to speak a complete sentence of Polish during her next public speech might reasonably be regarded as far lower than the stipulated value of 1 in 14 million.7 Given a more realistic estimate of this probability, the probabilistic account will straightforwardly predict that the risk in (Bomb 2) is lower than in (Bomb 1), offering another possible explanation for the intuition.
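A back-of-the-envelope check makes the point concrete (this is our own sketch; it assumes, purely for illustration, that the three triggering events are independent and equally probable, which the vignette does not say):

```python
# Sketch: what the stipulated 1-in-14-million joint probability in (Bomb 2)
# would imply for each triggering event, assuming (our assumption, not the
# vignette's) that the three events are independent and equally probable.
from math import comb

lottery_outcomes = comb(49, 6)    # six-number combinations from 49 balls
joint = 1 / lottery_outcomes      # the stipulated odds in both vignettes
per_event = joint ** (1 / 3)      # implied probability of each single event

print(lottery_outcomes)           # 13983816 (~ "fourteen million to one")
print(f"{per_event:.5f}")         # ~0.00415, i.e. roughly 1 in 240
```

On those assumptions, each event would need a probability of roughly 1 in 240 for the conjunction to reach 1 in 14 million; assign the spontaneous royal sentence of Polish anything like a realistic value and the joint probability falls far below the stipulated figure.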
These hypotheses are admittedly speculative, yet they offer potential ways of explaining Pritchard's judgment about the bomb scenario in a way that is compatible with the probabilistic account of risk. In so far as Pritchard intends to challenge the probabilistic account as a descriptively adequate theory of ordinary risk judgments, hypotheses of this kind need to be considered. Moreover, as mentioned above, it is also crucial, in this context, whether Pritchard's judgment is widely shared. To investigate this further, we put together a short survey using the original vignettes from Pritchard (2015). The details of our survey (recruitment, vignettes, results) can be found in the appendix, which includes a more thorough discussion of the results. To summarise the main findings: while the claim that people tend to prefer (Bomb 2) over (Bomb 1) receives support from our survey, the claim that people judge (Bomb 2) to be less risky is not supported, with the majority of subjects judging that the risk of the bomb detonating, in the two scenarios, is equal (see Figure 1). These results don't, on their own, show that Pritchard's judgment about the bomb cases is wrong, and don't necessarily undermine his case against the probabilistic account. After all, Pritchard could maintain that his premises don't need to be supported by experimental surveys. We won't engage here in a wider debate about philosophical methodology. Rather, more modestly, we think that what the survey results suggest is that, even in the kinds of cases to which Pritchard draws attention, the probabilistic account offers a reasonable, though far from perfect, prediction as to how people judge risk.8

Consider next the following piece of everyday reasoning:

You've left enough time to get to work, so there are really only two ways in which you could be late: if the car breaks down or you get stuck in a serious traffic jam. The risk of the car breaking down is low–the car is still relatively new and you've just had it serviced. And the risk of you getting stuck in a traffic jam is low too–it's usually quiet on the roads at this time, and there are no reports of traffic problems. So, you needn't worry: the risk of you being late to work is low.

Formally, checklist reasoning involves the following inference pattern:

The risk of P is low.
The risk of Q is low.
Therefore, the risk of P ∨ Q is low.

Not only is this an intuitive pattern, but in certain settings it may be that something approaching this reasoning is formally prescribed. Checklist reasoning is implicated, for instance, in the practice of de minimis risk management, on which risks that are deemed to be suitably low are ignored (see Comar, 1979, and Mumpower, 1986; for critical discussion, see Peterson, 2002). More precisely, de minimis risk management involves ranking adverse possibilities in terms of their risk and specifying a low, but non-zero, level of risk to serve as the de minimis threshold. Those possibilities which fall below the threshold are disregarded, while those which are above the threshold are subjected to a comprehensive risk analysis in order to determine whether measures should be taken to protect against them. This approach may have been first explicitly employed by the US Food and Drug Administration in the 1960s, and is still widely used in the regulation of health and environmental risks (see Peterson, 2002).

Suppose P and Q both fall below the de minimis risk threshold. If one disregards the possibility of P and takes no measures against it, and disregards the possibility of Q and takes no measures against it, then one has in effect disregarded the possibility of P ∨ Q and taken no measures against it. But, unless checklist reasoning is valid, there is no guarantee that the risk of P ∨ Q is below the de minimis level. Consider again the above example. Suppose the risk of a car breakdown and the risk of a traffic jam are both reckoned to be below the de minimis threshold. On the de minimis approach, both of these possibilities are disregarded and no measures are taken against them. Since these are the only ways in which you could be late for the meeting, no measures are taken against this possibility. And yet, if checklist reasoning fails, then the risk that you will be late for the meeting may well be above the de minimis level, in which case the de minimis approach recommends that we do take measures against it, or at least seriously consider the option of doing so. Hence, without the validity of checklist reasoning, it is doubtful that this approach to risk management is coherent.9
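A toy calculation (our illustration, with invented numbers and an independence assumption) shows how de minimis screening under the probabilistic account can end up disregarding a disjunction whose risk exceeds the threshold:

```python
# Toy de minimis screening on the commute example, with probabilistic risk.
# The threshold and the individual probabilities are invented for
# illustration; the two mishaps are assumed independent.
DE_MINIMIS = 0.05

risks = {"car breakdown": 0.04, "serious traffic jam": 0.04}

# Each individual risk falls below the threshold, so each is disregarded.
disregarded = {k: v for k, v in risks.items() if v < DE_MINIMIS}
assert disregarded == risks

# But the risk of being late (breakdown OR jam), assuming independence:
p_late = 1 - (1 - 0.04) * (1 - 0.04)
print(round(p_late, 4))   # 0.0784 -- above the de minimis threshold
```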
Another setting in which checklist reasoning seems to play a normative role is in the context of legal fact-finding. Many of the rules of criminal procedure are designed to minimise the risk of convicting an innocent person. The high standard of proof for criminal trials–beyond reasonable doubt–is intended to ensure that a defendant cannot be convicted unless the risk that he or she is innocent, given the presented evidence, is very low. And yet, most criminal charges will involve a number of essential elements, and the established legal practice is to apply the standard separately to each element (see, for example, Allen & Jehl 2004, Allen 2008, Spottswood 2016). For instance, to be guilty of theft, in many jurisdictions, a defendant must have:

(i) taken property from its rightful owner;
(ii) intended to permanently deprive the owner of that property; and
(iii) acted without the owner's permission.

A defendant will be innocent of theft if any one of these conditions is not met. Under prevailing legal practice, a defendant will be convicted if each of these conditions is proved beyond reasonable doubt. But this merely ensures that there is a low risk that (i) the defendant did not take property from the rightful owner, a low risk that (ii) the defendant did not intend to permanently deprive the owner of that property, and a low risk that (iii) the defendant had the owner's permission. To conclude from this that there is a low risk that the defendant is in fact innocent of the crime requires checklist reasoning.

On the probabilistic account of risk, however, checklist reasoning is an invalid inference pattern. The probability of P ∨ Q can be higher than both the probability of P and the probability of Q.10 If I roll a fair die, the probability that I will roll a 6 is ⅙ and the probability that I will roll a 5 is ⅙. The probability that I will roll either a 5 or a 6 is ⅓. According to the probabilistic account of risk, even if the risk of P is low and the risk of Q is low, it needn't follow that the risk of P ∨ Q is low.
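In the die case, the failure is just finite additivity for mutually exclusive propositions:

```latex
% Rolling a 5 and rolling a 6 are mutually exclusive, so:
\Pr(5 \lor 6) \;=\; \Pr(5) + \Pr(6)
            \;=\; \tfrac{1}{6} + \tfrac{1}{6} \;=\; \tfrac{1}{3}
% Two individually "low" risks can thus sum past any low threshold.
```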
Here, then, are two established practices which are arguably at odds with the probabilistic account of risk. Importantly, we don't mean to suggest that such practices are beyond criticism–indeed, we are open to the idea that they may need reform. It seems to us, however, that the existence of these practices provides good motivation to consider other notions of risk that may make different predictions about checklist reasoning. We will turn to such a notion in what follows.

As mentioned above, Pritchard uses the Bomb example not merely to put pressure on the probabilistic account of risk, but to motivate an alternative which he terms the modal account of risk. In spite of the odds against it, the triggering event in (Bomb 1) is, according to Pritchard, something that could easily happen–all that is needed is for a few coloured balls to fall in the right way at the right time. In contrast, the triggering conditions in (Bomb 2) all seem to be very far-fetched–it couldn't easily happen that the weakest horse in the Grand National wins by at least 10 furlongs, or that the worst team remaining in the FA Cup draw beats the best team remaining by at least 10 goals, or that the Queen spontaneously chooses to speak a complete sentence of Polish during her next public speech (or so we would tend to think).

One way to formalise the idea of easy possibility is by appealing to an ordering of possible worlds reflecting how similar they are to the actual world–an idea familiar to philosophers since Lewis (1973, 1979). The degree of similarity of another world is determined by how much would need to change about the actual world in order to bring it into conformity with the world in question (Pritchard, 2015, p. 443f). As Pritchard points out, very little would need to change in order for a set of six numbers to come up in the next national lottery draw–all that is required is that six coloured balls fall in a particular configuration. Thus, there is a very similar or close world in which this obtains. More generally, it seems that any lottery outcome could obtain as easily as any other. Amongst the most similar worlds in which the lottery is run will be worlds in which each outcome obtains, including those which feature the six specified numbers.

In contrast to this, a great deal would need to change (so we might suppose) in order for the weakest horse in the Grand National to win by at least 10 furlongs, or the worst team in the FA Cup draw to beat the best team by at least 10 goals, or the Queen to choose to speak a complete sentence of Polish during her next public speech. As such, the most similar worlds in which these events occur are very dissimilar or distant worlds.

According to the modal account of risk, the risk of a proposition P is determined by the similarity of the most similar worlds in which P is true: the more similar these worlds, the higher the risk. On this view, the risk of P is higher than the risk of Q just in case the most similar worlds in which P is true are more similar than the most similar worlds in which Q is true. Naturally, this world ordering should be restricted to worlds in which the background evidence holds, and should exclude worlds which are inconsistent with the background evidence. As a result, on the modal account of risk, (Bomb 1) and (Bomb 2) have very different associated risks. The most similar worlds in which the bomb detonates in (Bomb 1) are much closer to actuality than the most similar worlds in which the bomb detonates in (Bomb 2). Accordingly, the risk of the bomb detonating is higher in (Bomb 1) than in (Bomb 2), despite the probabilities being the same in both cases.

While Pritchard doesn't make this point, it is interesting to observe that the modal account does validate checklist reasoning. Any world in which P ∨ Q is true is either a world in which P is true or a world in which Q is true. As such, the most similar worlds in which P ∨ Q is true will either be the most similar worlds in which P is true, or the most similar worlds in which Q is true, or a mixture of the two.11 Suppose again that I throw a fair die. The most similar worlds in which I throw a 5 or a 6 cannot be more similar than both the most similar worlds in which I throw a 5 and the most similar worlds in which I throw a 6. After all, any world in which I throw either a 5 or a 6 must be either a world in which I throw a 5 or a world in which I throw a 6. On the modal account, therefore, the risk of P ∨ Q cannot be higher than both the risk of P and the risk of Q–it will, in fact, be equal to the higher of these risks. Hence if, on the modal account, the risk of P is low and the risk of Q is low, it follows that the risk of P ∨ Q is low–just as required by checklist reasoning.
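This max behaviour is easy to verify in a toy model (our sketch, not Pritchard's own formalism): represent each world by a distance from actuality, and let the risk of a proposition be fixed by the distance of its nearest world (nearer = riskier).

```python
# Toy model of the modal account (our sketch). Worlds get distances from
# the actual world (0.0 = actual); a proposition is the set of worlds at
# which it is true. Risk is fixed by the distance of the nearest world in
# the set: the smaller the distance, the higher the risk.

def nearest_distance(worlds: set[int], distance: dict[int, float]) -> float:
    return min(distance[w] for w in worlds)

distance = {0: 0.0, 1: 1.0, 2: 2.0, 3: 5.0}   # world 0 is the actual world
P = {2}            # nearest P-world at distance 2.0
Q = {3}            # nearest Q-world at distance 5.0
P_or_Q = P | Q     # worlds where the disjunction holds

# The nearest (P or Q)-world is just the nearer of the two, so the risk of
# the disjunction equals the higher of the two risks -- never exceeds both.
assert nearest_distance(P_or_Q, distance) == min(
    nearest_distance(P, distance), nearest_distance(Q, distance)
)

# And if P is true at the actual world, its nearest P-world is at distance
# 0.0, so its risk is maximal -- the difficulty discussed next.
P_true = {0, 2}
assert nearest_distance(P_true, distance) == 0.0
```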
Although it has a number of intriguing features, the modal account of risk faces a difficulty which raises questions as to whether it can play one of the fundamental roles that we require from a notion of risk. One thing we can immediately observe is that no world counts as more similar to the actual world than the actual world is to itself. It follows that, on the modal account, any event which actually happens must be at maximally high risk of happening, and any state of affairs which actually obtains must be at maximally high risk of obtaining. To put the point slightly differently, if P is true, it follows that P could easily be true–it is contradictory to say "P is true but P couldn't easily be true". If P is true, then there is a maximally similar world–the actual world–at which P is true, in which case, according to the modal account, there is a maximal risk of P.

This, we submit, can give rise to a potentially serious concern about the modal notion of risk. Suppose one is about to drill into a wall in a West Australian house built in the 1970s, and is wondering about the risk that the wall contains asbestos. On the modal account, if the wall really does contain asbestos, then the risk is maximally high: there is a maximally similar world–the actual world–in which the wall contains asbestos. If, on the other hand, the wall does not contain asbestos, then, according to the modal account, the risk will be lower–the closest worlds in which this is true will be somewhat distant from actuality, depending upon further facts of the case. In any event, on the modal account it seems that one cannot make a judgment about the risk that the wall contains asbestos without taking a view as to whether it does contain asbestos.

Similarly, consider again the example of jurors in a criminal trial who are contemplating a guilty verdict and reasoning about the risk that the defendant is innocent. Suppose the prosecution produced two independent eyewitnesses to the crime who were willing to identify the defendant as the culprit. What is the risk that the defendant is innocent–that he did not commit the crime–given the testimonial evidence? If the defendant did commit the crime, and the eyewitnesses are reliable and truthful, then the modal account predicts that the risk is low. In this case, the most similar worlds in which the defendant is innocent will be very dissimilar–not only will the facts of the case be different in these worlds, but the eyewitnesses will be lying or mistaken. What if, on the other hand, the defendant is in fact innocent, and the eyewitnesses are lying or mistaken? In this case, on the modal account, the risk that the defendant is innocent, given the testimony, is maximally high. After all, in this case, the most similar world in which the defendant is innocent is the actual world.12 Again, on the modal account, it seems that one cannot assess the risk that the defendant is innocent without already taking a stand on whether he is innocent or guilty: if he is guilty, the risk is low, and if he is innocent, the risk is maximally high. This seems to be of little help when it comes to actually making a decision (for related discussion see Smith, 2018, pp. 1204-1205).13

Importantly, this issue also affects Pritchard's reasoning about the Bomb scenarios above. His reasoning about these scenarios effectively takes it for granted that the triggering conditions in (Bomb 2) are not met–namely, that the weakest horse in the Grand National won't win the race by at least ten furlongs, the worst team in the FA Cup draw won't beat the best team by at least ten goals, and the Queen won't choose to speak a complete sentence of Polish during her next public speech. If we were in a world in which the triggering conditions in (Bomb 2) are met, the modal account would predict that the risk in (Bomb 2) is at least as high as the risk in (Bomb 1), or even higher.
A proponent of the modal account could point out that there are some cases in which one can judge a proposition P to be high risk without the need to take a view on whether P is true. In (Bomb 1), for instance, the modal account does allow us to judge that there is a high risk that, say, (14, 6, 32, 20, 12, 41) will be the winning numbers, while remaining neutral on whether they will be the winning numbers. Irrespective of whether these numbers come up in the actual world, we know that there is a very similar world in which they do. While this much should be granted, it remains the case that, if the modal account is correct, then one cannot judge that there is a low risk of P without thereby presupposing that P is false. Of course, even if P is actually true, one could still, on the modal account, be in an evidential position in which it is reasonable to judge that there is a low risk of P. That is, a defender of the modal account could insist that, even if the wall does contain asbestos or the defendant is innocent, one could still reasonably judge these propositions to be low risk. Importantly, however, on the modal account, these judgments could never be true–and this is in stark contrast to what the probabilistic account predicts. On the probabilistic account, the risk that the wall contains asbestos, or the risk that the defendant is innocent, is purely a function of the available evidence, and does not depend on whether the wall actually contains asbestos, or whether the defendant is actually innocent.

Now, while the jury is still out on whether the modal account can deal with this difficulty in a satisfying manner, we think there is an alternative notion of risk that can reproduce some of the benefits of the modal account without facing this particular problem. Given a possible proposition, one question we can consider is how probable it is that the proposition is true. Another question we might consider is how easy or difficult it would be for that proposition to be true. As the preceding discussion illustrates, our answer to the second question is not determined by our answer to the first. We might judge that the bomb could more easily explode in (Bomb 1) than in (Bomb 2) even though the probabilities of an explosion are equal. A third question that we might consider is how normal or abnormal it would be for a proposition to be true. It is somewhat natural to judge that it would be more abnormal for the bomb to detonate in (Bomb 2) than in (Bomb 1), even though the probabilities are equal.
