Morality is fundamentally an evolved solution to problems of social co‐operation
2020; Wiley; Volume: 26; Issue: 2; Language: English
DOI: 10.1111/1467-9655.13255
ISSN: 1467-9655
Authors: David N. Gellner, Oliver Scott Curry, J. W. Cook, Mark Alfano, Soumhya Venkatesan,
Topic(s): Evolutionary Game Theory and Cooperation
Abstract: This debate took place at the Association of Social Anthropologists (ASA) conference in Oxford on 21 September 2018, following the model of the Group Debates in Anthropological Theory at the University of Manchester (GDAT). It brought together, and into confrontation, two of anthropology's relatively new sub-fields (new at least in their current incarnations), namely evolutionary anthropology and the anthropology of morality and/or ethics. Although organized by a social anthropology professional body, the conference organizers – in line with the wishes of the ASA committee at the time of the call for conference proposals (in 2016) – sought to encourage participation from all forms of anthropology, including archaeology. It was therefore fitting that the debate should pose a question that is of interest across the broad spectrum of anthropology and well beyond, highlighting, we hoped, the venerable anthropological ambition to contribute to the resolution of long-standing and intractable philosophical questions.

The proposition, 'morality is fundamentally an evolved solution to problems of social co-operation', encapsulates a theory developed by Oliver Scott Curry, along with colleagues attached to the Institute of Cognitive and Evolutionary Anthropology (ICEA) and (since 2019) the Centre for the Study of Social Cohesion (CSSC) within the School of Anthropology and Museum Ethnography (SAME) in Oxford. This theory, known as 'morality as co-operation' or MAC, seeks to explain morality in a systematic cross-cultural manner by means of controlled and operationalized comparison (Curry 2016; Curry, Mullins & Whitehouse 2019). It seemed appropriate to ask Oliver to propose the motion and to select his own seconder. As prospective chair of the debate, I approached colleagues who might be interested in opposing the motion from the perspective of the new anthropology of morality, and the idea of the debate began to take shape. (The text follows very closely what was actually said on the day, but small adjustments have been made in response to suggestions from two anonymous reviewers.)

At the outset of the debate, and before any arguments had been heard, an indicative vote was held: thirteen people were in favour of the proposition, six were against, and there were six abstentions. At the end of the debate, another vote was held, which went entirely the other way: four people voted for the proposition, twenty-four against, and two abstained. It would be unwise to read too much into the votes, however. The debate stretched over two sessions with a coffee break in the middle. Many people who were present at the beginning were no longer in the room at the end; many people arrived, other parallel sessions having finished, who were not there at the beginning. (Owing to the sheer number of panels at the conference, it was not possible to clear two plenary sessions for the debate.)

As one might have expected, especially given the framing of Curry's theory as scientific, the debate set up an opposition between a reductionist evolutionary account of morality, on the one side, and a humanist and anti-reductionist stance, on the other (and, depending on your point of view, 'reductionist' should not necessarily be understood negatively; most of the time, reductionist explanation is just what science does, and indeed it could be argued to be the glory of science). Does the co-operation of bees or ants have anything to do with, or is it even remotely the same thing as, co-operation by humans?
Can the social behaviour of closely related species tell us anything about the social behaviour of humans? If we put aside the insect-human and primate-human comparisons or contrasts, is it possible to compare co-operation and morality across very different societies? Does it make sense to assume that there is a single virtue of generosity or bravery that can be meaningfully compared in very different contexts, or even between different generations? Or is comparison simply impossible? Can issues of scale be ignored for the sake of comparison? Do they fatally undermine any attempt to construct systematic comparable datasets, or can they comfortably be accounted for within a scientific theory? If morality is not about co-operation, then what is it about? Ultimately, whether you find any plausibility in attempts to generalize across time and space, with all the necessary simplifications that requires, may depend on whether you are a natural lumper or a natural splitter. Splitters will always prefer to focus on the cultural and historical differences – which undoubtedly are always there.

What is morality? Where does it come from, how does it work, what is it for? Are there any universal moral values, or does morality vary radically from place to place? Scholars have debated these questions for millennia; now, thanks to science, we have the answers. Converging lines of evidence – from game theory, ethology, psychology, and anthropology – suggest that morality is a collection of biological and cultural solutions to the problems of co-operation recurrent in human social life.

For 50 million years, humans and their ancestors have lived in social groups (Shultz, Opie & Atkinson 2011). During this time, they have faced a range of different problems of co-operation, and they have evolved and invented a range of different solutions to them. Natural selection favoured adaptations for realizing the tremendous opportunities for mutually beneficial non-zero-sum interaction that social life affords. More recently, humans built on these beneficent biological foundations with cultural innovations – norms, rules, laws – that further boost co-operation. Together, these biological and cultural mechanisms provide the motivation for social, co-operative, and altruistic behaviour; they provide the criteria by which we evaluate the behaviour of others. And, according to the theory of 'morality as co-operation' (MAC), it is precisely this collection of co-operative traits – these instincts, intuitions, and institutions – that constitute human morality (Curry 2016).

What's more, because there are many different types of co-operation (technically, many different stable strategies for achieving superior equilibria in non-zero-sum games), the theory leads us to expect, and can explain, many different types of morality. Kin selection explains why we feel a special duty of care for our families, and why we abhor incest. Mutualism explains why we form groups and coalitions (there is strength and safety in numbers), and hence why we value unity, solidarity, and loyalty. Social exchange explains why we trust others, reciprocate favours, feel guilt and gratitude, make amends, and forgive. Conflict resolution explains why we engage in costly displays of prowess such as bravery and generosity, why we defer to our superiors, why we divide disputed resources fairly, and why we recognize prior possession.
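The phrase 'stable strategies for achieving superior equilibria in non-zero-sum games' can be made concrete with a standard textbook illustration. The minimal sketch below (in Python, with purely hypothetical payoff numbers that are not drawn from Curry's work) enumerates the pure-strategy Nash equilibria of a two-player stag hunt, a non-zero-sum game in which both mutual co-operation and mutual defection are stable, and the co-operative outcome is the superior one.

```python
# Minimal sketch: pure-strategy Nash equilibria of a two-player stag hunt.
# Payoff numbers are illustrative only; they are not taken from Curry's work.

from itertools import product

ACTIONS = ["stag", "hare"]

# payoffs[(row_action, col_action)] = (row_player_payoff, col_player_payoff)
payoffs = {
    ("stag", "stag"): (4, 4),  # mutual co-operation: best joint outcome
    ("stag", "hare"): (0, 3),  # lone stag hunter gets nothing
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),  # mutual defection: safe but inferior
}

def is_nash(row, col):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    row_pay, col_pay = payoffs[(row, col)]
    best_row = all(payoffs[(r, col)][0] <= row_pay for r in ACTIONS)
    best_col = all(payoffs[(row, c)][1] <= col_pay for c in ACTIONS)
    return best_row and best_col

equilibria = [p for p in product(ACTIONS, repeat=2) if is_nash(*p)]
print(equilibria)  # [('stag', 'stag'), ('hare', 'hare')]
```

The point of the illustration is only that such games can have more than one stable outcome, with mutual co-operation strictly better for both players than mutual defection; on the MAC view, moral instincts and norms are among the devices that steer interacting parties towards the superior, co-operative equilibrium.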
As predicted by MAC, these seven moral rules – love your family, help your group, return favours, be brave, defer to your superiors, be fair, and respect others' property – appear to be universal across cultures. My colleagues and I analysed 600 ethnographic accounts of ethics from sixty societies, comprising over 600,000 words (Curry, Mullins & Whitehouse 2019). We found, first, that these seven co-operative behaviours were always considered morally good. Second, we found examples of most of these morals in most societies. Crucially, there were no counter-examples – no societies in which any of these behaviours were considered morally bad. And third, we observed these morals with equal frequency across continents; they were not the exclusive preserve of 'the West' or any other region.

For example, among the Amhara, 'flouting kinship obligation is regarded as a shameful deviation, indicating an evil character'. In Korea, there exists an 'egalitarian community ethic [of] mutual assistance and cooperation among neighbors [and] strong in-group solidarity'. 'Reciprocity is observed in every stage of Garo life [and] has a very high place in the Garo social structure of values'. Among the Maasai, 'Those who cling to warrior virtues are still highly respected', and 'the uncompromising ideal of supreme warriorhood [involves] ascetic commitment to self-sacrifice … in the heat of battle, as a supreme display of courageous loyalty'. The Bemba exhibit 'a deep sense of respect for elders' authority'. The Kapauku 'idea of justice' is called 'uta-uta, half-half … [the meaning of which] comes very close to what we call equity'. And among the Tarahumara, 'respect for the property of others is the keystone of all interpersonal relations' (all quoted in Curry, Mullins & Whitehouse 2019: 55).

These results suggest that there is a common core of universal moral principles. Morality is always and everywhere a co-operative phenomenon. Everyone everywhere agrees that co-operating, promoting the common good, is the right thing to do.

MAC does not predict that moral values will be identical across cultures. On the contrary, it predicts 'variations on a theme': moral values will reflect the value of different types of co-operation under different social and ecological conditions. Indeed, it was our impression that these societies did vary in how they prioritized or ranked the seven moral values. With further research, gathering new data on moral values in contemporary societies, we shall be able to explore the causes of this variation (Curry, Jones Chesters & van Lissa 2019). Further research will also be needed to investigate whether there are additional types of co-operation that can explain additional types of morality; and whether this co-operative account can be extended to incorporate as yet under-theorized aspects of morality, such as sexual and environmental ethics. In this way, through the steady application of scientific method, we will discover whether co-operation fulfils its promise of providing the elusive 'grand unified theory of morality' that at last explains both the commonalities and the varieties of ethical experience.

In The Hitchhiker's Guide to the Galaxy, Douglas Adams tells us that many millions of years ago a race of beings created a super-computer, called Deep Thought, to calculate 'The Answer to Life, the Universe and Everything'.
Deep Thought took seven and a half million years to run the program and on the Day of the Answer large crowds gathered to hear what the great computer had come up with. After warning them that they wouldn't like it, Deep Thought revealed that 'The Answer to Life, the Universe and Everything' is … 42. The problem, as Deep Thought pointed out to the unhappy beings, was that they had never actually known what the question was. '42' is a perfectly good answer; the problem is that it is not an answer to a question that anyone thought they had asked. Today's proposition, that morality is fundamentally an evolved solution to problems of social co-operation, presents us with a similar kind of answer. To propose that morality is an evolved solution for co-operation is to explain 'what morality is, where it comes from, how it works and what it is for' (Curry 2016: 44). It is the equivalent of asking under what selective pressure morality arose, through what mechanisms it works, and what function it performs in the perpetuation of human evolution. It is to imagine a set of problems, and a set of dispositions, values, or behaviours as the solution to those problems, each generally of the same category or type though varying in specific content, with each instance of morality being a variant of a basic sort of 'solution'. I find this representation of morality odd and unlikely. I have a series of objections, which I will seek to keep distinct in the course of what follows. My objections are empirical (I don't think that morality is 'like that'), theoretical (I don't find the argument convincing), and moral (I don't think that people should think of morality in this way). I will argue that the proposition is wrong and wrong-headed on three counts: first, it misunderstands the nature of explanation; second, it mischaracterizes co-operation; and, third, it mistakenly portrays morality. I will demonstrate that the consequences of these mistakes are irrelevance, overconfidence, and functionalist sophistry. It is a mistake to think that in explaining morality as an evolutionary function we have 'explained morality'. It is one kind of explanation, one that is interesting, but largely irrelevant to the study of morality. The question 'What is morality for?' and the question 'What is the good life?' are different kinds of questions, and we need not assume that they will have the same answers. The degree to which any explanation will be sufficient or persuasive depends in part on the question that motivates it and the audience for whom it is intended. Explanation is always motivated – it is always an explanation in answer to a certain kind of question – and so explanation can never be simply of a thing (a value, a mood, a process, an organization, etc.). To co-opt an example from Putnam (1978: 42-3; cited in Laidlaw 2007), Professor X is found naked in the girls' dormitory at midnight. Now, this can be explained correctly by saying that (a) he was naked in the girls' dormitory at midnight, so he could not have exited the dormitory before midnight without exceeding the speed of light, and that (b) nothing can travel faster than the speed of light (and certainly not naked professors). This is an explanation for Professor X's night-time location, but it is not an explanation that is relevant to most of the questions that most people would have about the circumstances of his nocturnal adventures. 
Neurobiology, cognitive psychology, and Darwinian evolutionary theory provide important insights into panhuman dispositions. Evolutionary developments in social co-operation in our hominin ancestors led to the domestication of fire, collective child-rearing, and co-operative hunting. These may have created strong psychological predispositions towards pro-social behaviour, but they reveal as much about morality as the Neolithic Revolution reveals about the fall of the Berlin Wall or the unifying magnetism of David Hasselhoff. The result of applying this method to the study of morality is a generality that is true, to the extent that it is true (and of course it can be argued that it is false: Haidt & Graham 2007; Haidt & Joseph 2011; see Wong 2006). However, it is irrelevant to any understanding of particularity, and fails to deal with any meaning that morality might have for anyone going about the business of living their lives. Explaining what morality is 'for' would be admirable if it was the answer to a question that anyone had asked, but it is not, and posed in this way the answer may as well be 42. In this case, the explanation of morality as a solution to a problem is incoherent both because it is an unsatisfactory answer to questions anyone might have about morality, and because it presents a unitary 'Grand Unified Theory of Morality' (Curry, Mullins & Whitehouse 2019), where no such theory could exist. My suggestion here isn't that one kind of explanation is somehow better or worse than another, but that there are different kinds of explanation, and they do the work of answering different kinds of questions. As such, a theory that explains morality as 'fundamentally' a solution to an evolutionary problem is both myopic and wrong-headed. In his proposition, our opponent writes: 'Scholars have debated these questions [about the nature of morality] for millennia; now, thanks to science, we have the answers' (see also Curry 2016: 27). But even if we were persuaded that this proposition provided one sufficient explanation of morality (which, I will argue, it doesn't), it must surely be hubris to assert that because something 'scientific' has been said on a subject, that is the final word and nothing else of value may be contributed. That this proposition would pass by excluding most of what we normally mean by morality is important for assessing its persuasiveness. Our opponent tells us that we need find morality 'baffling' (Curry 2016: 27) no longer, because 'now, thanks to science, we have the answers' (above). This works because it both implicitly and explicitly excludes humanistic methods from the study of morality. As Laidlaw (2007) points out, the irony of finding a Grand Unified Theory of Morality in game theory is that it pits cognitive anthropology in a zero-sum relationship with humanist disciplines. Putting critiques of the form of the proposition aside, does MAC make logical sense on its own terms? 'Co-operation' is an odd word because its meaning is almost entirely positive. It does not mean 'manipulative', 'orchestrating the will of others'; it does not mean 'lacking in autonomy'. It means 'working together towards the same ends', 'assistance and support', 'mutual benefit'. These are usually seen as positive traits or activities (as our opponents argue); 'co-operation' is understood to be a good thing. As such, 'co-operation' amongst humans is informed by a normative meaning, which is different to the meaning of 'co-operation' in evolutionary theory. 
Human co-operation is rarely a goal or value in and of itself, but rather a consequence of other goals and values as people seek to lead meaningful lives. My understanding of evolutionary theory is that units of selection may be understood to be in 'competitive interactions' that have 'a winner and a loser', or 'cooperative interactions' that result in 'win-win situations' (Curry, Mullins & Whitehouse 2019: 48). However, only humans may be understood to 'co-operate' or 'compete' in the full human sense of having appropriate motives. Most selective competition does not require competitive motives and, as Midgley says: 'Absolutely none of it below the human level can proceed from dynastic ambition' (1979: 446). All sorts of animals co-operate and respond to the co-operative behaviour of others. Yet bees, ants, and so on, are not thought of as moral in the way that humans are. Eighteenth-century naturalists may have projected their moral values on to the industry of the beehive (Daston 2004), but it is rare today to find anyone who thinks of bees as 'moral' because they co-operate. The 'technical' co-operation of nonhumans is a different order of co-operation to that of humans, informed by moral values, friendship, love, what have you. In this 'non-technical' sense, nonhumans cannot co-operate, or compete for that matter. To borrow from Midgley, bees and ants cannot co-operate 'any more than atoms can be jealous, elephants abstract or biscuits teleological' (1979: 439). It might be countered that the proposition rests on a higher-order theory. Our opponents might concur that bees or ants are not moral, whilst still claiming that they are co-operating, and that human co-operation is of the same order. They may claim that they mean 'co-operation' in the technical sense with reference to humans and that humans have evolved 'morality' in order to make us do the thing that this term describes (whereas bees haven't because they presumably don't need to). Once one concedes that human 'co-operation' is informed by complex motivation, then it cannot be used to refer to 'technical' co-operation equivalent to that of bees or what have you. Co-operation, when used in a 'non-technical' sense, is already morally loaded. To say that 'morality is fundamentally an evolved solution to problems of social co-operation' is either to shift the way in which one is using 'co-operation' to a sense which incorporates some understanding of complex motivation into the processes of natural selection, or it is to maintain a 'technical' use of the word, and therefore to necessarily discount motivation from the analysis. You can't have it both ways. One problem with today's proposition is that it strongly suggests that an elemental analysis provides an explanation of morality. For example, elsewhere, our opponent has extended today's proposition to develop 'a novel taxonomy of moral values – a "Periodic Table of Ethics"' (Curry 2016: 37). Morality can be divided into its 'elements', which we can study, combine, and experiment with. In the process, 'the study of morality has at last become a branch of science' (Curry 2016: 29). The problem is that morality is not the sort of 'thing' that can have distinct units. This is the same mistake, in a reverse direction, that Aristotelian physics made when it extended an explanation of purpose from humans to inanimate matter: stones do not have purposes, but neither does morality have elements (Midgley 2001). 
A moment's reflection reveals that moral concepts (friendship, bravery, humility, and so on) couldn't possibly be thought of as unchanging or remotely stable. On one level, the level of the proposition, I may identify friendship, commitment, fidelity, and justice in some sort of external way, such that friendship at 50 may be equivalent to friendship at 15, or courage in Papua New Guinea may be thought of as the same 'thing' as courage in the Bronx. For such comparisons to take place, the object of comparison must be some unit of behaviour, which is taken to be comparable. It is something that is external to the subject, visible in speech or action, and democratic in the sense that all people experience it in the same way and to the same degree. The other sense of knowing what a moral concept means is knowing its value in depth, for example knowing the value of friendship. And that cannot be as democratic as the unitary model would have it, because as soon as we introduce words like justice, fidelity, or friendship we necessarily introduce ideas of process and context. For us as historically and culturally located humans, the meaning of moral concepts 'deepens' as we learn. As a concept like friendship is known it is transformed, as are we through the knowing of it. The move is towards the personal, towards the ideal limit, not backwards towards a separable comparable public unit (Murdoch 2001 [1971]: 28).

My moral objection to the proposition is the lurking fatalism that informs it: the idea that moral deliberation, learning, and growth are illusory. Fatalism is seductive because it offers a simple explanation for why I am good, or why I fail to be so; why some people love their neighbours and others don't. The answer is to be found not in personal fallibility or the messy complexity of human life but in an evolved need for co-operation. This is not the calculating prudence of a Hobbesian social insurance policy; we do these things, it is thought, because they are the mechanisms by which the species is perpetuated. If one thing must, by definition, count on the terms with which people understand it, it is surely morality. Otherwise, to tell people that their more decent feelings are not for themselves, that they are the product of powers over which they have no influence, is not to take them at their word. It is to discount human freedom and will. The proposition doesn't explain morality; instead it claims that, on morality's own terms, it isn't really there at all.

I am persuaded that humans have some innate adaptive machinery, and that this informs who we are and what we do, but I do not think that morality exists because of this. I have demonstrated that, on its own terms, the proposition is absurd, since it links the positive moral value of co-operative behaviour to a subject for which it can make no sense at all: evolutionary co-operation. Morality is not a 'solution': it cannot be 'for' something in the way that a deep socket wrench is for a ratchet head. Nor can morality be sufficiently accounted for through an elemental analysis: no single explanation of morality can account for the historically specific forms that morality takes, or the work of self-reflection, will, value, judgement, and hope. I can see why a unitary theory might be appealing, or at least it might seem to be appealing until you get the answer: 'co-operation' is no more useful or intellectually or morally satisfying an answer than '42'.
Explaining morality as an evolved mechanism for co-operation means that my explanations of myself do not count for themselves, and as such it explains away all and any meaning that morality might have, individually or socially. I propose that my argument, that moral concepts and ways of thinking have meaning in relation to motivation and context in culturally and historically situated life, provides a better account of morality than the proposition. To be clear, I am not arguing for an alternative explanation of what morality is for. I hope to have demonstrated that the question is wrong-headed and could never have a satisfactory answer. However, my approach helps us to account for moral striving, process, and variety. Of course, it would be possible to make an argument like this for any kind of concept: that our concept of the economy, or debt, or exchange, for example, is transformed through our engagement with it (indeed many would say that is basic to anthropological thought). My argument is not that other things couldn't be framed in this way; it is that moral concepts must be.

Many thanks for including a philosopher in your debate. I hope that my more normative perspective on this fascinating proposition will be interesting and useful.

In philosophy, we distinguish two aspects of ethics: axiology and deontics. Axiology is the theory of the good: it's meant to describe and explain the values that contribute to a person's welfare, either instrumentally or intrinsically. Deontics is the theory of right action: it's meant to describe and explain what it is for an action, policy, or institution to be obligatory, permissible, or impermissible.

Two related sources of value are needs (Weil 2002 [1949]) and capabilities (Sen 1985). On the one hand, needs characterize minimal conditions for human lives to be worth living. Needs range from the most obvious biological constraints, such as air, water, food, clothing, shelter, and touch, to more sophisticated and enculturated necessities. I'm sure that most of us here would feel naked and alone without access to electricity and Wi-Fi. On the other hand, capabilities characterize the range of powers that transform a life of bare coping into one of flourishing. These include literacy, numeracy, emotional competence, practical reason, friendship, and a modicum of material and political control (Nussbaum 2000). Values answer to needs and capabilities. Something is valuable to the extent that it satisfies needs and supports capabilities, disvaluable to the extent that it frustrates needs and undermines capabilities. If this is right, then to the extent that there are species-universal needs and capabilities, there will be species-universal values. Owing to our embodiment, finitude, and interdependency, we humans do in fact share many needs and capabilities. For that reason, it should be unsurprising that we also share many of our values. By rooting values in needs and capabilities, we avoid committing the naturalistic fallacy.
We should admit to ourselves with all due severity exactly what will be necessary for a long time to come and what is provisionally correct, namely: collecting material, formulating concepts, and putting into order the tremendous realm of tender value feelings and value distinctions that live, grow, reproduce, and are destroyed, – and, perhaps, attempting to illustrate the recurring and more frequent shapes of this living crystallization, – all of which would be a preparation for a typology of morals (Nietzsche 2001 [1886]: 186; original emphases). In this passage, Nietzsche simultaneously recognizes the variance in moralities and posits patterns within the variance. Such patterns are liable to arise because we often face trade-offs among our values. These trade-offs can be individual or social, synchronic or diachronic. I can't satisfy all of my needs and cultivate all of my capabilities simultaneously. Even over the course of a lucky and well-lived life, no one manages to fulfil all of their sometimes-competing values. And of course, satisfying one person's needs can make it more difficult to satisfy another person's needs. If I eat all the food, you may go hungry. Game theory is a mathematical abstraction that enables us to model these sorts of trade-off. In particular, the study of non-zero-sum games, in which one person's benefit needn't always come at another person's cost, helps us to think about relationships and encounters in which it's possible for both or all parties to benefit simultaneously. These are the sorts of relationship and encounter in which co-operation is possible, and it is here that the hypothesis of morality-as-co-operation comes into play. In particular, the MAC hypothesis holds that the function of morality is to help people to find, and motivate them to enact, co-operative solutions whenever possible. What I take Curry, Mullins, and Whitehouse (2019) to have shown is closely connected to these claims. They argue that there are seven pillars of morality: family values, group loyalty, reciprocity, hawkish heroism, dovish deference, fairness, and property. Each of these moral elements helps us to negotiate trade-offs in values that are endemic to the human condition, which is why they are associated with well-studied non-zero-sum games. Such games sometimes have multiple equilibria, which helps to explain why different solutions are visible or prominent in different communities. In addition, the exact implementation of an abstractly characterized solution can differ from group to group and within groups over time. What all solutions must do, however, is help those who play the game to pursue, promote, and protect their values. In other words, these solutions are