Computational ethics
2022; Elsevier BV; Volume: 26; Issue: 5; Language: English
DOI: 10.1016/j.tics.2022.02.009
ISSN: 1879-307X
Authors: Edmond Awad, Sydney Levine, Michael Anderson, Susan Anderson, Vincent Conitzer, Molly J. Crockett, Jim A. C. Everett, Theodoros Evgeniou, Alison Gopnik, Julian Jamison, Tae Wan Kim, S. Matthew Liao, Michelle N. Meyer, John Mikhail, Kweku A. Opoku-Agyemang, Jana Schaich Borg, Juliana Schroeder, Walter Sinnott-Armstrong, Marija Slavkovik, Josh Tenenbaum
Topic(s): Ethics in Business and Education
Abstract

The past 15 years have seen an increased interest in developing ethical machines, manifested in various interdisciplinary research communities (under the umbrella term 'AI ethics'). Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework – computational ethics – that specifies how the ethical challenges of AI can be addressed better by incorporating the study of how humans make moral decisions. As the driver of this framework, we propose a computational version of reflective equilibrium. The goal of this framework is twofold: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals may prove to be beneficial in making progress on both fronts.

Technological advances are enabling roles for machines that present novel ethical challenges. The study of 'AI ethics' has emerged to confront these challenges, and connects perspectives from philosophy, computer science, law, and economics. Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework – computational ethics – that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making. The driver of this framework is a computational version of reflective equilibrium (RE), an approach that seeks coherence between considered judgments and governing principles. The framework has two goals: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals will create the opportunity to integrate diverse research questions, bring together multiple academic communities, uncover new interdisciplinary research topics, and shed light on centuries-old philosophical questions.

David Marr set out to describe vision in computational terms by integrating insights and methods from psychology, neuroscience, and engineering [1. Marr D. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W.H. Freeman, 1982]. His revolutionary approach to the study of vision offered a model for the field of cognitive science.
The key to Marr's innovation was his emphasis on explaining visual perception as an algorithmic (see Glossary) process – a process that transforms one type of information (an input) into another type of information (the output). The goal was to understand the input–output transformation with a sufficiently high degree of precision that it could be captured in mathematical terms. The result of this algorithm-focused pursuit was an account of visual perception that characterized the richness of the human mind in a way that could be programmed into a machine. This approach had two important consequences. The first was that it became increasingly possible to build machines with a human-like capacity for visual perception. For example, convolutional neural networks (CNNs), the engine underlying most of the recent progress in computer vision, learn internal multi-level representations analogous to the human visual hierarchy [2. Kriegeskorte N. Deep neural networks: a new framework for modeling biological vision and brain information processing. Annu. Rev. Vis. Sci. 2015; 1: 417-446]. Given these advances, we now have machines that can detect whether a skin cancer is malignant or benign [3. Esteva A. et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017; 542: 115-118], can detect street signs in naturalistic settings [4. Zhu Z. et al. Traffic-sign detection and classification in the wild. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016: 2110-2117], and can classify objects into a thousand categories at better than human performance levels [5. Krizhevsky A. et al. ImageNet classification with deep convolutional neural networks. Commun. ACM. 2017; 60: 84-90]. The second was that the mechanisms of human vision were studied and understood in more precise terms than ever before. For example, various aspects of visual perception and decoding (specifically, inference of selected objects) can be understood as Bayesian inference [6. Zhaoping L. Understanding Vision: Theory, Models, and Data. Oxford University Press, 2014; 7. Weiss Y. et al. Motion illusions as optimal percepts. Nat. Neurosci. 2002; 5: 598-604]. Moreover, there was a positive feedback loop between the machine-centric and human-centric research lines. The algorithms developed in studying the cognitive science of human vision were used to program machines that both matched and extended what the human mind is capable of. Conversely, the challenge of trying to program machines with the capacity for vision generated new hypotheses for how vision might work in the mind (and the brain). The key to this success was thinking about vision computationally – that is, in algorithmic terms. Inspired by Marr's success, we propose that a computationally grounded approach could be similarly valuable for the study of ethics [8. Mikhail J. Elements of Moral Cognition: Rawls' Linguistic Analogy and the Cognitive Science of Moral and Legal Judgment. Cambridge University Press, 2011]. By analogy to Marr's 'computational vision', we characterize this approach as 'computational ethics'.
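To make the 'vision as Bayesian inference' idea concrete, here is a minimal sketch in the spirit of the optimal-percept account of motion perception: a noisy speed measurement is combined with a prior favoring slow motion, and the percept is the posterior estimate. The Gaussian parameters and function names are illustrative assumptions, not values from the cited work.

```python
# A minimal sketch of perception as Bayesian inference (illustrative parameters).
# Prior: the world tends to move slowly -> Gaussian prior on speed centered at 0.
# Likelihood: the retinal measurement is the true speed corrupted by Gaussian noise.
# With a Gaussian prior and likelihood, the posterior is Gaussian, and its mean is a
# precision-weighted average of the prior mean and the measurement.

def posterior_speed(measurement, prior_mean=0.0, prior_var=4.0, noise_var=1.0):
    """Return the posterior mean and variance of the inferred speed."""
    prior_precision = 1.0 / prior_var
    likelihood_precision = 1.0 / noise_var
    post_var = 1.0 / (prior_precision + likelihood_precision)
    post_mean = post_var * (prior_precision * prior_mean +
                            likelihood_precision * measurement)
    return post_mean, post_var

# Noisier measurements (e.g., low-contrast stimuli) let the slow-motion prior
# dominate, so the inferred speed shrinks toward zero.
for noise in (0.5, 2.0, 8.0):
    mean, var = posterior_speed(measurement=5.0, noise_var=noise)
    print(f"noise_var={noise}: perceived speed ~ {mean:.2f} (posterior var {var:.2f})")
```

The qualitative effect mirrors the account in [7]: as measurement noise grows, the prior dominates and the percept is biased toward slower motion.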
As we conceive it, computational ethics includes scholarly work that aims to formalize descriptive ethics and normative ethics in algorithmic terms, as well as work that uses this formalization to help both (i) engineer ethical AI systems, and (ii) better understand human moral decisions and judgments (the relationship between our proposed framework and other interdisciplinary efforts that tackle the challenges of AI ethics is discussed in Box 1).

Box 1. Relationship to current interdisciplinary efforts
Fifteen years ago the fields of machine ethics (implementing ethical decision-making in machines) [56. Anderson M. Anderson S.L. Machine Ethics. Cambridge University Press, 2011; 132. Anderson M. Anderson S.L. Guest editors' introduction: machine ethics. IEEE Intell. Syst. 2006; 21: 10-11] and roboethics (how humans design, use, and treat robots) [133. Veruggio G. A proposal for a roboethics. In: 1st International Symposium on Roboethics: The Ethics, Social, Humanitarian, and Ecological Aspects of Robotics. Roboethics.org, 2004; 134. Tzafestas S.G. Roboethics: A Navigating Overview. Springer, 2015] emerged to bring the perspective of ethics to AI development. Since then 'AI ethics' has emerged as an umbrella term to describe work concerning both AI and ethics. New research directions for AI ethics include algorithmic accountability (the obligation to be able to explain and/or justify algorithmic decisions) [135. Wieringa M. What to account for when accounting for algorithms. In: Hildebrandt M. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, 2020: 1-18], algorithmic transparency (openness about the purpose, structure, and actions of algorithms) [136. Weller A. Transparency: motivations and challenges. In: Samek W. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer, 2019: 23-40], algorithmic fairness/bias (attempts to design algorithms that make fair/unbiased decisions) [137. Mehrabi N. et al. A survey on bias and fairness in machine learning. ACM Comput. Surv. 2019; 54: 1-35], and AI for (social) good (ensuring that AI algorithms have a positive impact) [138. Tomašev N. et al. AI for social good: unlocking the opportunity for positive impact. Nat. Commun. 2020; 11: 2468]. Similarly, new multidisciplinary fields of research have been initiated, including responsible AI (RAI; the development of guidelines, regulations, laws, and certifications regarding how AI should be researched, developed, and used) [139. Dignum V. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer, 2019], explainable AI (XAI; the development and study of automatically generated explanations for algorithmic decisions) [140. Wachter S. et al. Transparent, explainable, and accountable AI for robotics. Sci. Robot. 2017; 2: eaan6080; 141. Wachter S. et al. Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. J. Law Technol. 2017; 31: 841-887], and machine behavior (the study of machines as a new class of actors with their unique behavioral patterns and ecology) [31. Rahwan I. et al. Machine behaviour. Nature. 2019; 568: 477-486].
These fields and communities have already begun to communicate via academic conferences including AIES (Artificial Intelligence, Ethics, and Society), supported by the Association for the Advancement of AI (AAAI) and the Association for Computing Machinery (ACM), and FAT/ML (Fairness, Accountability, and Transparency in Machine Learning), as well as workshops such as FATES (Fairness, Accountability, Transparency, Ethics, and Society on the Web), FACTS-IR (Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval), and HWB (Handling Web Bias). There are also governmental and global initiatives such as the AI for Good Global Summit, the AI for Good initiative, and the Partnership on AI. Other organizations – such as the Organization for Economic Co-operation and Development (OECD) AI Policy Observatory, UNESCO, the World Economic Forum, and the Institute of Electrical and Electronics Engineers (IEEE) – have convened a wide range of stakeholders to lay out ethical principles for the development and implementation of AI.
Often missing from these pursuits is the perspective of cognitive science, which studies how humans (as individuals or as groups) think about, learn, and make moral decisions. The aim of the computational ethics framework is to complement and supplement the work being done in these communities by reviewing the ongoing research and providing a new structure that helps to focus work toward both building ethical machines and better understanding human ethics.

We first consider how formalizing our normative views and theories of moral cognition can enable progress in engineering ethical AI systems that behave in ways we find morally acceptable [9. Bonnefon J.-F. et al. The moral psychology of AI and the ethical opt-out problem. In: Liao S.M. Ethics of Artificial Intelligence. Oxford University Press, 2020: 109-126; 10. Russell S. Human Compatible: AI and the Problem of Control. Penguin UK, 2019]. Such considerations will yield valuable lessons for machine ethics (Box 2 discusses whether humans should delegate ethics to machines). The following example illustrates the process of developing machine ethics in kidney exchange.

Box 2. On delegating ethics to machines
Should humans delegate ethics to machines?
In opposition to this idea, van Wynsberghe and Robbins propose 'a moratorium on the commercialization of robots claiming to have ethical reasoning skills' [142. van Wynsberghe A. Robbins S. Critiquing the reasons for making artificial moral agents. Sci. Eng. Ethics. 2019; 25: 719-735]. In support of the idea, others have cited several reasons for deploying moral AI. The following considerations are relevant.
Inevitability. Admittedly, moral AI could have unwanted side effects, including abuses and misuses, and these dangers lead critics to oppose the development of moral AI. However, some argue that moral AI is inevitable. Nevertheless, the fact that something is inevitable – like death and taxes – does not make it good. The lesson instead is that people will inevitably develop solutions as needs arise. For example, the global shortage of caregivers in hospitals and nursing homes will lead to more robotic caregivers. Such robots will face moral tradeoffs. If they do less harm and better protect patient autonomy when they are able to reason morally, then there is a reason to try to design robotic caregivers to be moral reasoners [143. Poulsen A. et al. Responses to a critique of artificial moral agents. arXiv, published online March 17, 2019. https://arxiv.org/abs/1903.07021].
Trust. AI cannot do much good if the public does not use it, and use requires trust. When AI uses black-box methods to instill ethics, this itself undermines trust, which can lead to diminished use and benefits. However, responsible and transparent development can increase the public's trust in AI, especially if they know that it is sensitive to their rights and other moral concerns [144. Shin D. User perceptions of algorithmic decisions in the personalized AI system: perceptual evaluation of fairness, accountability, transparency, and explainability. J. Broadcast. Electron. Media. 2020; 64: 541-565].
Complexity. Moral judgment is complex. It is not simply about safety or harm minimization, and includes other factors – including fairness, honesty, autonomy, merit, and roles – that affect what is morally right or wrong. Humans often overlook relevant factors or become confused by complex interactions between conflicting factors. They are also sometimes overcome by emotions, such as dislike of particular groups or fear during military conflicts [145. Arkin R. Governing Lethal Behavior in Autonomous Robots. Chapman and Hall/CRC, 2009]. Some researchers hope that sophisticated machines can avoid these problems and thus make better moral judgments and decisions than humans. To achieve this goal, robots need to be equipped with broad moral competence for unpredictable problems, through proper and responsible design. However, a potential downside is that over-reliance on moral AI could make humans less likely to develop their own moral reasoning skills [118. Vallor S. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press, 2016].
Of course, all these issues deserve much more careful consideration [146. Vanderelst D. Winfield A. The dark side of ethical robots. In: Furman J. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, 2018: 317-322; 147. Cave S. et al. Motivations and risks of machine ethics. Proc. IEEE. 2019; 107: 562-574]. It is also crucial to discuss how to govern and regulate moral AI [148. Winfield A.F. et al. Machine ethics: the design and governance of ethical AI and autonomous systems. Proc. IEEE. 2019; 107: 509-517; 149. Falco G. et al. Governing AI safety through independent audits. Nat. Mach. Intell. 2021; 3: 566-571].

Example 1 [kidney exchange]. Thousands of patients are in need of kidney transplants, and thousands of individuals are willing to donate kidneys (sometimes on the condition that kidneys are allocated a certain way). However, kidneys can only be allocated to compatible patients, and there are always more people in need of kidneys than willing donors. How should kidneys be allocated? Can an algorithm help to solve this problem? If so, what is the optimal solution? An initial answer might be: maximize the number of recipients (i.e., matches). However, there are multiple solutions that achieve the maximum number of matches but result in different individuals receiving kidneys. How should we decide among these solutions [11. Roth A.E. et al. Kidney exchange. Q. J. Econ. 2004; 119: 457-488; 12. Bertsimas D. et al. Fairness, efficiency, and flexibility in organ allocation for kidney transplantation. Oper. Res. 2013; 61: 73-87; 13. Freedman R. et al. Adapting a kidney exchange algorithm to align with human values. Artif. Intell. 2020; 283: 103261]? There are many ways to determine what a fair or justified allocation is and to determine who deserves to get a kidney and who does not. One path forward is to interface with normative ethics and moral psychology and to take inspiration from the factors that have been used by ethicists and by ordinary people when making similar judgments (the question of when and how to integrate the input of these two groups is taken up in the section on normative–descriptive alignment). For the answers to be useful in designing the algorithm, they must be formalized in computational terms. Only following that formalization can algorithmic kidney exchanges be adapted to reflect insights from normative and descriptive human ethics (see the sketch below and Box 3 for further discussion).
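To give a concrete flavor of what such a formalization might look like, the sketch below enumerates donor-patient matchings, first maximizing the number of transplants (the utilitarian objective) and then breaking ties with priority weights standing in for descriptively elicited preferences, in the spirit of the 'balancing conflicting values' step described in Box 3. All names, compatibility data, and weights are hypothetical; real kidney exchange programs typically use integer programming over donation cycles and chains rather than brute-force enumeration.

```python
from itertools import permutations

# Hypothetical toy instance: which donor kidneys are medically compatible with
# which patients (names and data are made up for illustration).
compatibility = {
    "donor_1": {"patient_A", "patient_B"},
    "donor_2": {"patient_B"},
    "donor_3": {"patient_A", "patient_C"},
}

# Hypothetical tie-breaking weights standing in for descriptively elicited
# priorities (higher = prioritized when the number of matches is equal).
priority = {"patient_A": 0.9, "patient_B": 0.5, "patient_C": 0.7}

def all_matchings(donors, patients):
    """Yield every assignment of donors to distinct, compatible patients."""
    donors = list(donors)
    slots = list(patients) + [None] * len(donors)   # None = donor goes unused
    for perm in permutations(slots, len(donors)):
        yield [(d, p) for d, p in zip(donors, perm)
               if p is not None and p in compatibility[d]]

def best_matching():
    """Maximize the number of transplants; break ties by total priority weight."""
    best, best_key = [], (-1, -1.0)
    for pairs in all_matchings(compatibility, priority):
        key = (len(pairs), sum(priority[p] for _, p in pairs))
        if key > best_key:
            best, best_key = pairs, key
    return best

print(best_matching())   # three transplants; ties would favor higher-priority patients
```

The structure of the objective (a primary match count, with secondary weights encoding further moral considerations) is the point of the illustration; a production system would optimize the same kind of objective with far more scalable techniques.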
Box 3. Extended example – kidney exchange
This box explores how computational ethics can be used to make progress on a particular ethical challenge: matching kidney donors to compatible patients (Example 1). Work relevant to the 'formalize' phase could contribute in several ways. 'Formalizing normative ethics' focuses on representing (perhaps using first-order logic) a set of abstract principles that together form a sound (systematic and consistent) and complete (covering all cases) algorithm. 'Formalizing descriptive ethics' focuses on characterizing (perhaps with computational models) the case-based intuitions of laypersons about which features (e.g., age, critical condition) matter for people when considering different solutions. 'Balancing conflicting values' would formulate this problem as, for example, a combinatorial optimization problem (to maximize the number of recipients, following normative utilitarian views) while adapting weights to reflect the importance of different features (following descriptive preferences for tie-breaking), and then applying a computational technique (e.g., an integer program formulation) to solve it.
Suppose, as a result, that an algorithm is developed to prioritize patients based on their general health. Although this may seem reasonable at the outset, work relevant to the 'evaluate' phase will help to study the influence of these decisions at the statistical level, and uncover second-order effects on society. For example, the algorithm may end up discriminating against poorer participants, who are likely to have more comorbidities as a result of their economic disadvantage. Work in 'evaluating machine ethical behavior' could use data about patients to evaluate this possibility.
Suppose that, upon probing this algorithm with different inputs, we find that patients from poorer socioeconomic backgrounds are indeed being significantly disadvantaged by this algorithm. Moreover, work on 'evaluating human ethical behavior' may uncover how this disadvantage may spill over to human ethical decisions in other domains such as employment (e.g., by disadvantaging job candidates experiencing hindrances in their mental capacity as a consequence of kidney failure). Work under the 'formalize' phase (e.g., 'balancing conflicting values') may then develop technical adaptations to mitigate such bias.
Insights about comorbidities may help 'formal descriptive/normative ethics' to recognize considerations implicit in our considered judgments that have not been explicitly articulated. Newly formalized moral principles are then evaluated again, and so on, until the formalized moral principles are in coherence with our considered judgments.
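Because Box 3 ends with an iterate-until-coherence loop, it may help to see one possible, deliberately simplistic operationalization of a computational reflective equilibrium: principles are fitted to the current set of considered judgments, the judgment most strongly contradicted by the fitted principles is revised, and the process repeats until principles and judgments cohere. The features, cases, and update rules below are hypothetical illustrations, not the authors' proposal.

```python
# A toy reflective-equilibrium loop: adjust governing principles (here, linear
# feature weights) and considered judgments until they cohere.

# Each case: features (harm_avoided, consent_violated, lie_told) and a
# considered judgment (+1 = permissible, -1 = impermissible).
cases = [
    ((1.0, 0.0, 0.0), +1),
    ((1.0, 1.0, 0.0), -1),
    ((1.0, 1.0, 0.0), +1),   # conflicts with the case above: no principle fits both
    ((0.0, 0.0, 1.0), -1),
    ((2.0, 0.0, 1.0), +1),
]

def fit_principles(judged_cases, epochs=200, lr=0.1):
    """Fit linear 'principles' (weights and bias) to the current judgments."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in judged_cases:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:                       # misclassified: perceptron update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def reflective_equilibrium(judged_cases, max_rounds=10):
    judged_cases = list(judged_cases)
    w, b = fit_principles(judged_cases)
    for _ in range(max_rounds):
        scores = [sum(wi * xi for wi, xi in zip(w, x)) + b for x, _ in judged_cases]
        conflicts = [i for i, ((_, y), s) in enumerate(zip(judged_cases, scores))
                     if y * s <= 0]
        if not conflicts:                            # coherence reached
            return w, b, judged_cases
        # Revise the judgment that the fitted principles contradict most strongly.
        i = max(conflicts, key=lambda k: abs(scores[k]))
        x, y = judged_cases[i]
        judged_cases[i] = (x, -y)
        w, b = fit_principles(judged_cases)
    return w, b, judged_cases

weights, bias, revised = reflective_equilibrium(cases)
print("principles (weights, bias):", weights, bias)
print("judgments after revision:", revised)
```

In a fuller treatment, the revision step would of course be deliberative rather than automatic; the sketch only illustrates the back-and-forth structure of the equilibrium-seeking process.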