Article | Open Access | Peer-reviewed

Does the Noncombatant Immunity Norm Have Stopping Power? A Debate

International Security; 2020; The MIT Press; Volume: 45; Issue: 2; Language: English

DOI

10.1162/isec_a_00393

ISSN

1531-4804

Authors

Scott D. Sagan, Benjamin Valentino, Charli Carpenter, Alexander H. Montgomery

Topic(s)

Torture, Ethics, and Law

Abstract

Scott D. Sagan and Benjamin Valentino respond:

Our 2015 survey experiment—reported in the 2017 International Security article “Revisiting Hiroshima in Iran”—asked a representative sample of Americans to choose between continuing a ground invasion of Iran that would kill an estimated 20,000 U.S. soldiers or launching a nuclear attack on an Iranian city that would kill an estimated 100,000 civilians.1 Fifty-six percent of the respondents preferred the nuclear strike. When a different set of subjects instead read that the air strike would use conventional weapons, but still kill 100,000 Iranians, 67 percent preferred it over the ground invasion. These findings led us to conclude that “when provoked, and in conditions where saving U.S. soldiers is at stake, the majority of Americans do not consider the first use of nuclear weapons a taboo and their commitment to noncombatant immunity is shallow.”2

By 2015, we had been researching American public opinion on the use of nuclear weapons and the ethics of war for several years. Many of our previous findings about the U.S. public's hawkish attitudes had been unsettling. Nevertheless, the levels of public support we found in this study for a strike that so clearly violated ethical and legal principles on the use of force were deeply troubling.

We proposed, therefore, that future research on the nuclear taboo and the noncombatant immunity norm focus on interventions that might blunt these disturbing instincts of the American public. We are gratified that Charli Carpenter and Alexander Montgomery have taken up that challenge and are contributing to the emerging debate on this important subject.3 A number of the ideas they advance are important: scholars should study the sources and kinds of information and arguments that citizens would likely receive in real conflicts; the influence of historical analogies; and the differences and similarities among civilian elite attitudes, military views, and public opinion.4

Nevertheless, we find Carpenter and Montgomery's main critiques unconvincing. We remain deeply skeptical about how much stopping power legal and ethical norms are likely to exert on the U.S. public if it is ever faced with the kind of terrible dilemmas that can emerge in the crucible of war. We believe that the unsettling findings of our experiments make the effort to understand public opinion, and to discover how to influence it, particularly urgent. Our common goal is to create experiments that illuminate how the public would react in real-world crises, maximizing what is called the “external validity” of experiments. In this response, we propose some novel ways to realize that common objective.

We applaud Carpenter and Montgomery's efforts to replicate our findings and assess the degree to which legal and ethical norms affect public opinion. We wish we could replicate their findings in turn to examine related questions, confirm the accuracy of measures, and assess alternative interpretations. Unfortunately, we are unable to do so. For despite agreeing to this debate, and despite our sharing our data with them, Carpenter and Montgomery declined to share their replication data or even their online appendix with us before publication.

Nevertheless, a careful reading of their article reveals important reasons to be skeptical of their central conclusions.
Carpenter and Montgomery's main claim is that the scenarios we used were “psychologically stacked in favor of atrocity” because we chose “not to mention international law or norms.” They argue that omitting references to law or norms constitutes “priming by omission.”

To test this argument, they begin by replicating the conventional attack condition from our 2015 experiment. They report that 57 percent of subjects indicated that they preferred the air strike, 10 percentage points lower than we found. As they acknowledge in footnote 55, however, this difference is not statistically significant. Therefore, they clearly state that “we do not dispute Sagan and Valentino's overall finding.”

Carpenter and Montgomery then ran several experiments using an altered version of our original conventional weapons scenario that substituted the words “Iranian Civilians” for “Iranian City” in the headline (a change that might have led some subjects to believe that the United States would target all civilians in Iran).5 In some of these experiments, subjects were asked to consider international law and ethical norms before indicating their preference for the air strike or ground war in the Iran scenario, and in others they were asked after they indicated their preference. Surprisingly, Carpenter and Montgomery do not report the results of the two direct experiments comparing the pairs of conditions in which subjects were primed or not primed on legal knowledge and ethical sensitivity. They do not report the results from the two groups that received the law question before and after the air strike question at all. They do report a decline in preferences for the strike (from 54 percent to 46 percent) when subjects received the ethics prime before the Iran question, but this decline refers to a comparison between one group that received the ethics prime before the air strike question and subjects pooled from two different subgroups that did not, although the different treatments these pooled groups received are unclear (Carpenter and Montgomery never explicitly describe each of their nine different treatment groups). In footnote 63, nonetheless, Carpenter and Montgomery acknowledge that even this change is not statistically significant at the conventional p < .05 level.

One of the core tenets of experimental research is that researchers manipulate only one variable between any two comparison conditions. Carpenter and Montgomery repeatedly violate that rule in reporting their results. For the sake of transparency, in their reply, Carpenter and Montgomery should report the means and standard errors of their key experimental conditions (at least groups 1, 2, 6, and 7) separately and the results of the direct comparisons between their primed and unprimed conditions for law and ethics.

Even if some of Carpenter and Montgomery's results are statistically significant, the effect is substantively small. Unlike Carpenter and Montgomery, we do not find it reassuring that 46 percent of respondents preferred the strike even after being primed to consider the ethics of targeting civilians. Nor are we reassured to read that 39 percent of respondents who “strongly agreed” that killing civilians was wrong nevertheless preferred the strike that would kill 100,000 of them.
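Whether a drop of this size clears the conventional p < .05 bar depends on the cell sizes behind the percentages being compared. The following is a minimal sketch of a standard two-proportion z-test, using purely illustrative group sizes rather than either team's actual cell counts, to show how such comparisons are typically evaluated:

```python
# Minimal sketch: evaluating whether a difference between two survey proportions
# (such as 54 percent versus 46 percent preferring the strike) clears p < .05.
# The cell sizes below are illustrative assumptions, not either team's actual counts.
from statistics import NormalDist

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two independent proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)         # pooled proportion under H0
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5  # standard error under H0
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))               # two-tailed p-value
    return p_a - p_b, z, p_value

# Hypothetical cells of 250 respondents each: 54% versus 46% prefer the strike.
diff, z, p = two_proportion_z_test(successes_a=135, n_a=250,
                                   successes_b=115, n_b=250)
print(f"difference = {diff:+.2f}, z = {z:.2f}, p = {p:.3f}")
```

Under these assumed cell sizes, the 8-point gap falls just short of the .05 threshold (p of roughly .07); with larger cells the same gap would clear it, which is one reason reporting each condition's mean and standard error, rather than only a significance verdict, helps readers judge the evidence.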
We were even less comforted after reading in footnote 62 that an additional “80 percent of those who only somewhat agreed with the ethical norm supported the strike.” For the sake of transparency, in their reply, Carpenter and Montgomery should report the total percentages of respondents who supported the strike and clarify which groups received the questions about law and ethics, and in what order (before or after answering the question about Iran).

Carpenter and Montgomery do report that subjects who agree that it is never legally permissible to target civilians are less likely to prefer the air strike. They acknowledge, however, that 45 percent of subjects who agreed that targeting “the civilian population” violates international law nevertheless supported doing just that. Indeed, if we include the percentages of subjects who answered the law question incorrectly and preferred the strike, it appears that a majority or near majority of all subjects primed on international law actually preferred the strike. We do not understand how these results make Carpenter and Montgomery “far less pessimistic” about the public's apparent willingness to violate the noncombatant immunity principle.6

The relatively small effect of priming subjects on considerations of law is mirrored in studies of torture and drone strikes that Carpenter and Montgomery cite for support. Geoffrey Wallace, for example, finds that telling subjects that torture violates both U.S. and international law reduced support by 6 percent, a drop he describes as “a systematic but substantively modest effect.”7 Sarah Kreps and Wallace report that priming subjects that certain U.S. drone strikes were illegal decreased support by between 6 percent and 8 percent. They acknowledge, however, that “over 40% of the public approves of the strikes even when told they would violate international law, almost twice as many subjects as opposed the strikes,” and that legal priming “does not make the public more willing to put their own troops in harm's way.”8

According to Carpenter and Montgomery, our article understated the true force of ethics and law because a “framing effect was created through the either/or structure of the Iran scenario question.” They call this effect “the tyranny of closed-ended questions.” In one “Revisiting Hiroshima” experiment, however, we provided respondents with a third option—a diplomatic settlement in which Ayatollah Ruhollah Khamenei was permitted to remain as a spiritual leader under a democratic government. Forty-one percent of our subjects chose that option, but 40 percent still preferred to launch a nuclear strike.

The basic closed-ended design that we have used in many experiments is routinely employed in public opinion experiments, however, including many of those cited favorably by Carpenter and Montgomery. It also forms the foundation of the famous “trolley car” experiments, designed by moral philosophers to assess moral intuitions about killing.9 Closed-ended questions are particularly helpful for testing the strength of competing norms because they force respondents to confront difficult dilemmas. Carpenter and Montgomery, however, argue that this kind of question produced “moral confusion” and exaggerated “public antipathy” to noncombatant immunity.

We do not think that subjects who expressed a desire for a third option in their open responses are suffering from “moral confusion.” Instead, they are (understandably) seeking to avoid the moral dilemma that they confront.
Carpenter and Montgomery claim that “because norm conflicts can reduce support for prohibition norms in warfare, pitting the protection of Iranian civilians against the protection of U.S. troops could have biased Sagan and Valentino's experiment in favor of striking the city.” Yet, that is exactly what researchers testing for the stopping power of norms should do: “stress test” norms to determine how much they constrain behavior when other values are at stake.

Carpenter and Montgomery repeatedly claim that our Iran scenario is a “tough test” for the power of norms. Nevertheless, it is a realistic and relevant test, for it is exactly in such scenarios that the United States might be tempted to violate the principle of noncombatant immunity. As history has shown, and as our experiments have repeatedly found, it is easier for people to voice support for an abstract normative principle (such as whether they believe killing civilians is always wrong or illegal) than it is to uphold that principle when it conflicts with other core values. Violating such principles may produce distress and a sense of tragedy, but for many Americans, these emotions do not possess “stopping power” when U.S. soldiers' lives are perceived to be at risk.

We disagree with Carpenter and Montgomery that studies that do not prime subjects on ethics or law are guilty of “priming by omission,” or that this concept constitutes a useful critique of any survey experiment. Although we did not prime subjects to consider ethics or law, neither did we prime them to consider the potential environmental effects of a strike; provide graphic images of the Iranian victims; or discuss the potential that a nuclear or conventional attack would create a horrible precedent, increasing the likelihood of similar attacks against the United States. It seems plausible that these considerations would decrease support for violations of noncombatant immunity as much as or more than priming on law or ethics.

Carpenter and Montgomery claim that our scenario gives subjects “every possible interest-based reason to target civilians baked into the vignette.” This is simply not true. Our news story did not report that Iran had supported terrorists or threatened Israel. Subjects did not read that there were military targets in the city, or that the civilians there might be contributing to the war effort by helping to supply Iranian troops. These considerations would likely increase support for violations of noncombatant immunity. If so, are not Carpenter and Montgomery also guilty of “priming by omission”?

Our study, like all survey experiments, unavoidably omitted countless considerations that might have swayed public opinion one way or another. The only meaningful way that “priming by omission” could be said to bias our results would be if it could be shown that the balance of all omitted considerations that citizens would encounter in the real world favored opposition to the air strike. Carpenter and Montgomery have no way of knowing how these competing considerations would balance out and, therefore, no grounds to claim that our experiment was any less externally valid than their own.

We believe that Carpenter and Montgomery are correct to argue that legal and moral considerations would be involved in real-world public discussions in military crises. We also believe that such crises would produce competing claims about the legality and morality of different military options.
To examine this possibility, in August 2019, we conducted a survey experiment, administered by YouGov, to a representative sample of Americans.10

Four hundred fifty subjects were randomly assigned to one of three experimental conditions. The baseline condition replicated the main elements of the 2015 “Revisiting Hiroshima” nuclear scenario. In a second condition, the story was amended to say that the Joint Chiefs of Staff (JCS) had “concluded that the nuclear attack would violate international laws of armed conflict that the U.S. has signed. The Geneva Conventions, which have governed conduct during war since 1949, specifically outlaw deliberate attacks on civilians.” In the third condition, the story reported that the JCS disagreed about whether the attack “would violate international laws of armed conflict that the U.S. has signed. Some members of the JCS argued that The Geneva Conventions, which have governed conduct during war since 1949, specifically outlaw deliberate attacks on civilians. Other members of the JCS, however, claimed the strike would be legal since there is a medium sized Iranian military base within the city and the strike could be targeted against that base.” We believe that this latter argument is pretextual and misguided, and that such an attack would be illegal. Nevertheless, it reinforces external validity because this kind of tendentious argument has been made in the past to justify attacks on civilians, following the atomic bombings of Hiroshima and Nagasaki, and during the Korean and Vietnam Wars.11

The results are presented in figure 1. As in “Revisiting Hiroshima,” subjects were asked whether they preferred the nuclear air strike or continuation of the ground war. They were then asked whether they believed the strike “would violate the international laws of war.” Fifty-six percent of subjects thought the strike would be illegal, and 48 percent of subjects preferred the air strike in the baseline treatment. Eighty-two percent of subjects who read that the JCS said that the strike would be illegal agreed, but 40 percent of respondents who read that news story nevertheless preferred the strike. This 8-percentage-point drop in support was not statistically significant (p = .25). When subjects read that the JCS disagreed about whether the strike was legal, however, belief that the strike would be illegal fell back to 58 percent, and preferences for the attack increased to 50 percent, although the change in preferences also was not statistically significant (p = .11).

This experiment also demonstrated the importance of partisanship. Although neither our experiments nor Carpenter and Montgomery's mentioned President Barack Obama or President Donald Trump by name, the evidence suggests that the cause of the decline in support from 2015 to 2019 was a decreased willingness of Democrats in 2019 to support a strike they assumed was ordered by Trump. In our original 2015 survey, 51 percent of Democrats and 66 percent of Republicans said they preferred the strike (presumably ordered by Obama) on Iran. In 2019, however, only 32 percent of Democrats supported the strike, while 68 percent of Republicans still did. Republicans in the 2019 survey were 26 percent more likely to support the strike, even when controlling for other factors such as age and education.12

How would the many conflicting claims that the public would hear in the real world balance out?
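The partisanship estimate above (“26 percent more likely to support the strike, even when controlling for other factors such as age and education”) is the kind of figure that typically comes from a regression on individual responses, with the partisan gap reported as an average marginal effect. Below is a minimal sketch on simulated data; the variable names, coding, and model specification are illustrative assumptions, not the authors' actual 2019 YouGov data or analysis.

```python
# Minimal sketch of the kind of model behind a statement such as "Republicans were
# 26 percent more likely to support the strike, controlling for age and education."
# The data are simulated and the variable names are assumptions for illustration;
# this is not the authors' actual 2019 YouGov data or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=0)
n = 450  # overall sample size of the 2019 survey
df = pd.DataFrame({
    "republican": rng.integers(0, 2, n),   # 1 = Republican respondent (assumed coding)
    "age": rng.integers(18, 90, n),
    "college": rng.integers(0, 2, n),      # 1 = college degree (assumed coding)
})
# Simulate strike support with a large partisan gap plus covariate noise.
true_logit = -0.8 + 1.2 * df["republican"] + 0.005 * df["age"] - 0.1 * df["college"]
df["support_strike"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

model = smf.logit("support_strike ~ republican + age + college", data=df).fit(disp=False)

# Average marginal effect of partisanship: mean difference in predicted support
# with the Republican indicator switched on versus off for every respondent.
gap = (model.predict(df.assign(republican=1)) - model.predict(df.assign(republican=0))).mean()
print(model.params)
print(f"Estimated partisan gap in predicted support: {gap:.2f}")
```

Comparing each respondent's predicted probability with the partisanship indicator switched on versus off is one common way to translate a logit coefficient into a “percentage points more likely” statement.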
Carpenter and Montgomery are right that some respondents considered the analogy to World War II when weighing their preferences, but this only reinforces the external validity of our experiment, for such analogies would likely appear in real-world debates about military options. It is noteworthy that John Bolton, the lawyer who served as national security adviser in the Trump administration from 2018 to 2019, both advocated for an illegal preventive war against North Korea and argued that “Truman acted decisively and properly to end the war” by dropping the bomb.13 Moreover, the illegality of the bombings under prevailing law in 1945 is still debated, and some legal scholars argue that Hiroshima, even if a war crime, might still be justified as the “lesser evil” given the strategic situation.14 When confronted with conflicting claims about law and ethics, the U.S. public will not always side with experts who say an action is wrong—even if those experts are right.

Perhaps the clearest example of the regrettably weak stopping power of norms is public reaction to the revelations in 2004 that the United States had been “waterboarding” captured terrorist suspects, a technique widely recognized as torture. Shortly after the news broke, the Pew Research Center conducted a poll asking respondents, “Do you think the use of torture against suspected terrorists in order to gain important information can [often/sometimes/rarely/never] be justified?” Only 32 percent said that torture was never justified.15 In subsequent years, Americans were exposed to conflicting arguments, including that waterboarding was illegal and unethical, and that U.S. “enhanced interrogation” techniques did not amount to torture or were justified even if they did. As figure 2 shows, Pew's polling found that support for torture actually increased after 2004. Further evidence came in 2015 when Pew asked, “Following the September 11th, 2001 terrorist attacks, the US government used interrogation methods that many consider to be torture on people suspected of terrorism. Were these interrogation methods justified or not justified?” Fifty-eight percent said that they were justified; only 37 percent said that they were not.16

Although Congress finally banned waterboarding in 2015, the ban could still be overturned or subverted.17 During the 2016 presidential campaign, Trump repeatedly advocated torture, arguing that the United States “should go much stronger than waterboarding.”18 In a poll conducted in April 2016, we asked a sample of Americans whether they agreed or disagreed with the statement that “the United States should use interrogation techniques much stronger than waterboarding to obtain information from captured foreign terrorist suspects.” Fifty-nine percent agreed. Sadly, for most Americans, moral and legal arguments against waterboarding did not have “stopping power.”

Air strikes such as those described in our experiments should never be launched. They would be immoral and illegal, violating the principles of noncombatant immunity, proportionality, and precaution enshrined in the Additional Protocols to the Geneva Conventions and accepted by the United States as binding customary law.19 They probably would not work, and could even backfire, increasing resistance among adversaries and creating a dangerous precedent for future violations by others.
Our research shows, however, that it would be naïve to assume that the majority of Americans, or even all American leaders, agree with that view.

Carpenter and Montgomery claim that they find little evidence of the kind of retributive instincts we reported among some subjects in “Revisiting Hiroshima.” Because they have not shared their data, we cannot check on their interpretations of respondents' open-ended answers. We warn researchers, however, against using respondents' self-reporting as the measure of these all-too-human instincts. In our 2015 experiment and our 2019 extension, support for the death penalty for convicted murderers was among the strongest correlates of preferences for the strike against Iran across every condition.20 Peter Liberman demonstrates that retributiveness helps explain support for war and argues that individuals' explanations for why they support using force often mask this effect: “Out of wishful thinking or social desirability bias, it seems, people habitually exaggerate the instrumental purposes of punishments they actually favor for retributive reasons.”21

We need just war doctrine and the law of armed conflict not to reflect the public's ethical instincts, which are all too often not very ethical. We need just war doctrine and the law of armed conflict to constrain common human instincts. We need these rules to stay the hand of vengeance.

It is important for scholars to keep their eyes open to common human frailties and not to wear rose-colored glasses. Carpenter and Montgomery assert that “Americans care deeply about the civilian immunity norm and the Geneva Conventions.” Most Americans, however, do not have a clue what these laws mean. Janina Dill and Livia Schubiger find that 44.8 percent of Americans report that they “know these rules (the Geneva Conventions) exist, but not what they demand,” 26 percent “know a little about what they demand,” while only 7.2 percent “know a lot about what they demand,” and 22 percent “have never heard of such rules.”22

In a previous critique, Carpenter and Montgomery concluded that “if there's any key lesson from both the Stanford/Dartmouth study [“Revisiting Hiroshima”] and ours, it's that we [Americans] need more education on the Geneva Conventions.”23 We wholeheartedly agree with that sentiment. We just disagree about how steep this uphill battle will be.

Charli Carpenter and Alexander H. Montgomery reply:

We thank Scott Sagan and Benjamin Valentino for their thoughtful reply to our article.1 We appreciate their sharing their 2015 data with us after publication of their article, which we are now in a position to reciprocate upon publication of ours. We invite them and other scholars to download the online data appendix, and encourage researchers to contribute to this literature by conducting studies that include direct measurements of the power of norms.2

Indeed, we were thrilled to see that Sagan and Valentino do just that in their reply. Although they “remain deeply skeptical about how much stopping power legal and ethical norms are likely to exert on the U.S. public,” we found it heartening that the specific changes they introduced in their Iran bombing vignette caused a change in the direction of restraint, given that the vignette is still largely stacked against these effects—so much so that we might have expected the opposite result. Below we address wider points of debate between Sagan and Valentino and ourselves.
We then discuss their new findings and preview some of our additional research findings.

Overall, this debate is broadly about how scholars think about the study of norms in international security. We do not seek to contradict Sagan and Valentino's findings so much as to suggest avenues to broaden and enrich the important research agenda they have helped pioneer. We agree with Sagan and Valentino on the importance of conducting research in this area. We also agree that deliberate attacks on civilians are illegal, immoral, and ineffective and consequently should never be carried out. We disagree, however, in three key areas.

First, we differ on an assumption about the burden of proof in demonstrating that norms have constraining power. Sagan and Valentino's original findings demonstrating substantial support for air strikes against civilian targets led them to conclude that Americans' “commitment to noncombatant immunity is shallow.”3 However, we argue that “in the absence of robust ethical norms against nuclear use and civilian targeting, one would expect unanimous support for the strike.”4 Indeed, we find that only 54 percent of Americans would now even contemplate supporting such an act, and priming with a question regarding ethical norms reduces support to 46 percent, which is below a majority and a substantive political threshold. Sagan and Valentino's new results show that less than 50 percent of Americans would support such an act, with a similar reduction of about 8 percentage points when exposed to information regarding international law.

Second, we disagree on how and how much to rely on statistical measures to capture complex moral reasoning. For example, Sagan and Valentino note that the drop we record from 54 to 46 percent support for bombing is not statistically significant—if one sets the bar for significance at the much-debated threshold of p < 0.05.5 As we note in our article, we find a drop of 11 percentage points (with p < 0.01) for the combined effect of asking the sentiment question and changing the title from “city” to “civilians.” In short, it makes little sense to live and die at 0.05, rejecting results if they do not meet this arbitrary threshold and accepting them if they do. Moreover, we note that in our open-ended findings, only 34 percent of Americans wanted to bomb the city. The qualitative study of open-ended questions may be an improved method for survey experimentalists, not only because citizens and policymakers are rarely presented with only two choices, but also because respondents can explain their own complex causal and moral thinking.6 Alternatively, surveys of elites might better capture the likelihood that international norms would stay the hand of atrocity because it is not the public that ultimately decides whether the laws of war will be followed.

Third, we disagree with Sagan and Valentino on validity standards in survey vignettes and questions. We agree with them that our study is no more or less externally valid than theirs, given that both studies were conducted on a representative random sample—but this is a low bar.
Studies should also seek to be ecologically valid: experimental vignettes must be reasonably representative of real-world conditions under which a decision would be made.7 We think that Sagan and Valentino's new variation on the survey continues to illustrate the importance of these points in considering what can be inferred from survey experiments about the power of norms.

We were pleased that, after we presented our findings at the Center for International Security and Cooperation at Stanford in February 2019, Sagan and Valentino conducted their own augmented replication, adding vignettes in which the legality of deliberately attacking civilians is at least called into question by the Joint Chiefs of Staff (JCS). This modification addresses a major point in our article—the absence of any variation on international law information. Yet, we think that Sagan and Valentino's new finding also confirms and further illustrates our argument. Sagan and Valentino find an 8-percentage-point drop in support for the strike in the condition where the JCS acknowledge it would be illegal.8 That a single mention of international law resulted in any “stopping power” demonstrates that simply mentioning legal considerations can have an effect, even in scenarios where one might expect the opposite. Indeed, this remains a very hard case: the other forms of bias we identified remain the same in Sagan and Valentino's newer experiment, including the invocation of utilitarian thinking, the portrayals of relevant actors, and embedded contested causal assumptions.

First, even when the Joint Chiefs agree that the strike would be illegal, they are still proposing it as an option. This juxtaposition suggests to respondents that military leaders believe that international law may be ignored or disregarded when circumstances suit, inviting cost-benefit rather than categorical moral thinking. This reinforces a key bias of the original vignette, as citizens take cues from authorities. If the only authoritative voice mentioning the Geneva Conventions is the same actor placing a war crime on the policy agenda, this is less an invocation of the norm than of the necessity and legitimacy of overriding it. We would expect any drop in support to be small in such a case, if not to see opinion go in the other direction given the authority of the JCS. We were therefore heartened both that strike support dropped and that it dropped as much as 8 percentage points.

Second, the portrayal of the players in Sagan and Valentino's vignette remains misaligned with political reality, leading to questionable ecological validity. In a real-world scenario, we would expect actors such as the United Nations, civil liberties groups, human rights groups, and retired generals and policymakers to oppose the bombing, stating that it is illegal to intentionally target civilians under the Geneva Conventions. Indeed, the plan itself might originate much more plausibly in the White House rather than the JCS: as we have argued, it is unlikely that the JCS would make such an ineffective proposal, and even less likely that they would make an illegal one.9

Third, even without the imprimatur of the Joint Chiefs, and even with the insertion of language about moral opposition, Sagan and Valentino's vignette still contains an embedded contested causal assumption that bombing would end the war. As we describe in our article, the entire premise of this experiment is based on respondents accepting as “fact” the claim that aerial massacres of civilians can end wars.
It is especially problematic to suggest that the contemporary U.S. military believes this. It would still be problematic even if the bombing proposal came from another source, unless the vignette also included information that the causal logic of the plan was disputed by experts.10

Otherwise, the structure of Sagan and Valentino's experiment remains a very hard case for restraint given these framing effects and those that we discuss in our article, which all limit the type of inferences about real-world public opinion that could be made from this experiment.11 We would be curious to see a more ecologically valid version of the experiment where the plan did not come from the JCS at all, but rather from the White House or perhaps a shadowy consulting firm; where concern about legality and morality were expressed by at least some actors opposing rather than supporting the plan; and where it is made clear that the causal claims underlying the policy proposal are contested assumptions rather than “facts.”

Sagan and Valentino respond to the latter point by referencing the evolution of public opinion on torture, suggesting that public opinion has not always been swayed by moral arguments. We think the example they give actually proves our point about the implications of embedding contested causal assumptions and utilitarian primes in survey questions about adherence to categorical norms. They cite results from a Pew survey question that reads, “Do you think the use of torture against suspected terrorists in order to gain important information can [often/sometimes/rarely/never] be justified?” Like their survey, which we argue misleads respondents about the efficacy of terror bombing, the torture survey question they cite implies the causal claim that torturing terror suspects is reliably effective at gaining actionable intelligence.12 In each case, citizens are asked by a researcher or pollster to believe these claims as a basis for engaging in utilitarian thinking. It is therefore unsurprising that they would do so.

These forms of bias can have broader effects as well. Charli Carpenter and Alexandria Nylen hypothesize that embedding contested assumptions and inviting particular modes of thinking in polls affects respondents' understanding of norms; this effect, in turn, can be amplified if those polls are reported to wider audiences.13 A new article by Carpenter, Alexander Montgomery, and Nylen finds evidence for these hypotheses: asking respondents to contemplate war crimes leaves them more likely to believe that international law allows such acts in certain circumstances; moreover, those who hear reports of earlier studies showing support for war crimes are more likely to favor such war crimes themselves.14 Polls such as these are not only measures of public opinion; they are discursive sites where public understandings of international norms are constructed.15 Although further study is needed, we think the use of biased, consequentialist-logic-based polls, and the dissemination of their results in the mass media, are part of what explains the apparent chipping away of American public opposition to categorical norms such as the torture norm.

We are deeply concerned that the same could occur through repeated polling on civilian targeting, the nuclear taboo, or other laws-of-war questions, depending on how such polls are conducted.
Scholars such as Sagan and Valentino and ourselves—whose goal is to determine how human security norms can be bolstered rather than undermined—will want to think carefully about these findings. None of this is to say that experimental surveys are not one of many useful ways to explore important questions at the intersection of empirical science and moral philosophy. We hope this correspondence is only the beginning of a rich dialogue on how to investigate this intersection.

Scott Sagan and Benjamin Valentino thank Janina Dill, Katherine McKinney, and Allen Weiner for comments on drafts of their response.
