Peer-Reviewed Article

Not All Intelligence is Artificial: Data Science, Automation, and AI Meet HI

2019; Mary Ann Liebert, Inc.; Volume: 23; Issue: 2; Language: English

10.1089/omi.2019.0003

ISSN

1557-8100

Author

Vural Özdemir

Topic(s)

Big Data and Business Intelligence

Abstract

OMICS: A Journal of Integrative Biology, Vol. 23, No. 2. Commentary. Free Access.

Not All Intelligence is Artificial: Data Science, Automation, and AI Meet HI

Vural Özdemir
Senior Advisor, Writer and Researcher, Technology, Society and Democracy, Toronto, Ontario, Canada; Adjunct Professor, School of Biotechnology, Amrita Vishwa Vidyapeetham (Amrita University), Kerala, India.
Address correspondence to: Prof. Vural Özdemir, MD, PhD, DABCP. E-mail: vural.ozdemir@alumni.utoronto.ca

Published Online: 15 Feb 2019. https://doi.org/10.1089/omi.2019.0003

Not All Intelligence Is Artificial

For decades, automation was a key focus in manufacturing and industrial engineering. Life sciences and health care are now following suit. The advent of data science, embedded sensors, artificial intelligence (AI), wireless connectivity, and the Internet of Things throughout the planet has raised the prospects for automation, smart hospital design, and the home health care industry for an aging population. Extreme automation in laboratory sciences and services across institutions, time zones, and geographical borders is a hot topic for research and health care administration.

Diverse health technology applications are being affected by extreme automation and data science.
These include, for example, real-life Big Data collection for omics biomarker–phenotype association studies, digital drugs with embedded sensors to monitor patients' treatment adherence, and laboratory services to be delivered by AI-powered collaborative robots working together with health professionals (Gulland, 2017; Özdemir and Hekim, 2018). Automation is also at the epicenter of customized manufacturing of biomarker test panels for precision medicine in various common and rare diseases worldwide.

We are submerged in a sea of Big Data, AI, and extreme digital connectivity at an unprecedented planetary scale, creating what has recently been termed the "Quantified Planet" (Özdemir, 2018a). But in the current AI and automation gold rush, we might also be drifting away from human intelligence (HI). If we are to harness the new technologies for robust, reproducible, and responsible innovation in biology and medicine, we need data science, automation, and AI working in tandem with HI, so that they serve science and society rather than vice versa.

What Is HI for Data Science and AI?

First, in the context of data science and AI, HI can be defined as a collection of contextual tacit knowledges on human values, responsibility, empathy, intuition, or care for another living being that cannot be readily described or executed by algorithms. I recognize that some avid AI technology fans might contest this point, perhaps suggesting that human empathy, compassion, or all tacit human knowledge can be codified in some algorithmic form and format. Still, I wish to present an alternative view: that there is an upper limit to the extent to which tacit human qualities and contexts can be mimicked or engineered with AI.
We need HI and its tacit qualities, which in effect determine quality in science (Ravetz, 2016) and its applications in health care and society.

Second, HI in data science pertains to knowledge about the broader social and political context in which Big Data and its provenance and applications are situated. It is often falsely assumed that technologies bring about change in society. On the contrary, it is the value-loaded decisions made by funders, persons, and institutions that cause social change. It follows, therefore, that HI has much to do with the ability to excavate the tacit human values, politics, and power embedded in Big Data provenance and its AI-driven applications. To excavate the value-loaded backstage of science, technology, and innovation, however, we need new kinds of societal literacy in critical political science so as to read the power-laden subtext embedded in daily laboratory life (Latour, 1987).

Third, HI requires interdisciplinary and boundary-crossing soft skills, beyond operating laboratory equipment, to allow for systems thinking over the entire trajectory of Big Data provenance, and to bridge the "two cultures" divide between science, and the society and politics of innovation. For example, the following questions beg for answers concerning tacit HI knowledge on Big Data provenance and/or application context:

What and who are the sources of Big Data, who is generating them, and by what funding?

Who has access to Big Data, and who does not?

Which innovation governance models are chosen for AI applications, and to serve what ends?
Who is included, or excluded, in governance, and why?

Which epistemological frames are being deployed for emerging technology governance (Özdemir, 2019; Özdemir and Springer, 2018; Rip, 2016; Stilgoe et al., 2014; von Schomberg, 2013)?

To what extent are critical theory and critical governance taken into account to discern Big Data provenance and associated AI applications?

Why Is HI Important for AI and Data Science?

One of the first things I learned in graduate school was that "evidence" is always used in the singular, whereas "data" are always plural. That piece of knowledge served me well as editor, author, and reviewer over the years. But there is another underappreciated dimension to data that seems to be overlooked in the gold rush for Big Data, automation, and AI. Data have a provenance: the assortment of technical, social, and political forces that act on data in their trajectory from study design, funding, and choice of laboratory technology platforms to the transfer and distribution of data across laboratories, analysts, and user communities. Big Data are no exception, and have a sociotechnical provenance that tends to be overlooked in data science applications.

Put in other words, data are never "just" data; science is never "just" science; technology is never "just" technology, and they are never free from provenance, human values, power, and politics (Collingridge, 1980; Didier et al., 2015; Feyerabend, 2011; Fisher, 2017; Özdemir et al., 2017; Özdemir, 2019; Sarewitz, 2016). Decades of research on knowledge workers such as scientists and journalists have shown that the production and communication of knowledge are inherently political (value-loaded) acts and cannot be otherwise. It is impossible to separate the knowledge from the knower and her/his social and political context. Yet the dogma that knowledge is value-free continues to dominate.
Old habits die hard. The initial leaders of the Enlightenment project (e.g., Francis Bacon) and their disciples have unquestioningly championed the assumption of value-free, apolitical knowledge production for the last 400 years. Everyday life, for that matter, is political. Even a pleasant smile can be political if it is deployed as a commodity to garner social capital and influence (Özdemir, 2018b).

Without literacy on Big Data provenance and appreciation of the politics of innovation, quality in science can be compromised (Ravetz, 2016; Saltelli and Giampietro, 2017). By making the technical, social, and political dimensions of Big Data provenance transparent, HI makes AI accountable and responsible.

A recently recognized issue in online communities might further clarify the importance of HI and associated tacit knowledge and soft skills in discerning Big Data provenance, and by extension, in assuring quality in AI science. The veracity of Big Data generated by online communities rests on the assumption that much of the Internet traffic is human. But some analysts are posing the following question: How much of the Internet is fake? Although the answers vary (Read, 2018), bots camouflaged as people to collect user data from humans are becoming difficult to ignore. Even more worrisome is the recent recognition of Big Data generated by bot-to-bot traffic, in which bots follow other bots to collect online data (Read, 2018). Bot-to-bot web traffic pushes the scale of threats to Big Data veracity further. These unprecedented emergent phenomena tell us that unless Big Data provenance is routinely taken into account, downstream AI-driven applications built on Big Data may suffer from the "garbage in, garbage out" predicament. Needless to say, these phenomena are not simply technical but also social and political.
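The kind of routine provenance screening argued for here can be sketched in code. What follows is a minimal, illustrative sketch only: the record fields, the rate threshold, and the user-agent strings are hypothetical assumptions for the example, not from this article or any real bot-detection system.

```python
# Hypothetical sketch: screening web-derived records for suspected bot
# provenance before they feed downstream analysis. Field names and
# thresholds are illustrative assumptions, not a real detection rule set.

KNOWN_BOT_AGENTS = {"crawler/1.0", "scraper-bot"}  # assumed example values


def looks_automated(record: dict) -> bool:
    """Flag a record whose provenance suggests bot rather than human origin."""
    # Two toy heuristics: an implausibly high request rate, or a
    # user-agent string already associated with automated crawlers.
    return (
        record.get("requests_per_minute", 0) > 120
        or record.get("user_agent", "") in KNOWN_BOT_AGENTS
    )


def screen_provenance(records: list) -> tuple:
    """Split records into (retained, flagged) based on the provenance check."""
    retained = [r for r in records if not looks_automated(r)]
    flagged = [r for r in records if looks_automated(r)]
    return retained, flagged


if __name__ == "__main__":
    sample = [
        {"user_agent": "Mozilla/5.0", "requests_per_minute": 3},
        {"user_agent": "crawler/1.0", "requests_per_minute": 400},
    ]
    kept, dropped = screen_provenance(sample)
    print(f"retained {len(kept)}, flagged {len(dropped)} of {len(sample)} records")
```

The point of the sketch is not the particular heuristics, which would quickly be evaded, but that provenance checks of some kind sit between raw Big Data and any AI application built on it.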
The online monopolies that create and harness Big Data add to these concerns (Özdemir, 2018a).

HI informed by critical social science and humanities can usefully shape the real-life context in which automation and other AI applications are currently emerging. For example, AI algorithms offer new prospects for predicting disease and treatment outcomes, and by extension, for empowering precision medicine. But the same advanced tools used for pattern recognition and predictive forecasting can also be deployed, for example, as facial recognition technologies that threaten global democracy and civil rights. That is, extreme digital connectivity, Big Data, and AI-driven automation create fertile potential for new power structures for authoritarian governance and pansurveillance by one person in total control of knowledge networks in science and society, directly or through connected proxies (Larson, 2018; Özdemir, 2018b). HI knowledge, its tacit qualities, and soft skills are essential to "see through and beyond" new technologies and to identify the opaque human values driving technology and its regulation (Conley, 2011; Guston, 2014; Harris, 2019; Latour, 1987; Thoreau and Delvenne, 2012; White, 2018).

In sum, HI-informed Big Data provenance and mapping of AI application contexts are ultimately important to prevent type 3 (framing) errors in data science, that is, "finding the right answers for the wrong questions." Data science, automation, and AI would be well served, and would stand the test of time and context, if they were informed by the tacit qualities and knowledges of HI.

Outlook

Four Ways to Cultivate HI-Informed Innovation Ecosystems

If HI offers the broader sociotechnical context on Big Data provenance and AI applications, how can we cultivate HI-informed innovation ecosystems?

The first idea is to routinely deploy metadata to boost Big Data provenance.
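This first idea might be made concrete as a small provenance record that stores the social and political context alongside the technical details of data generation. The class and field names below are illustrative assumptions for the sketch, not a published metadata schema:

```python
# Hypothetical sketch of a metadata record pairing technical provenance
# with the social/political context that is equally part of "data about
# data". All field names here are assumptions, not an existing standard.
from dataclasses import asdict, dataclass, field


@dataclass
class ProvenanceMetadata:
    dataset_id: str
    platform: str                  # laboratory technology platform used
    funder: str                    # who paid for data generation
    access_policy: str             # who may (and may not) use the data
    contributors: list = field(default_factory=list)  # credit attribution
    governance_notes: str = ""     # governance decisions, inclusions/exclusions


record = ProvenanceMetadata(
    dataset_id="omics-cohort-001",
    platform="mass spectrometry",
    funder="public research council",
    access_policy="open to accredited researchers",
    contributors=["data scientist A", "curator B"],
    governance_notes="cohort consented for secondary research use",
)
print(asdict(record))  # serializable record that can travel with the data
```

A record like this, carried with the dataset, makes the funding, access, and governance questions raised earlier answerable rather than tacit.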
Metadata are defined as "data about data," and include the technical and social/political context in which Big Data are generated and valorized as products (Özdemir et al., 2014). In addition, metadata would support inclusive credit attribution to data scientists for their important contributions to data generation and provenance in an innovation ecosystem.

The second idea is to promote new forms of social literacy on the "two cultures" divide, so that engineers, physicians, and scientists are trained in critical theory and political science as part of their education. This would equip the next generation of innovators with skills to read the subtext, human values, power, and social context that shape the creation of scientific knowledge and of innovations that stand the test of time and application contexts.

The third idea is to clarify a centuries-old misconception in science, engineering, medicine, and technology communities. Political science scholarship makes the politics in science, that is, the creation and contestation of power, transparent and accountable, and thus contributes to responsible innovation and critical governance of emerging technologies. Political science expertise is an excellent antidote to opaque politics and unaccounted human power in science that threaten technology ethics, responsible innovation, and global democracy (Özdemir, 2018b).

The fourth, and perhaps most important, idea is that regimes of truth are inextricably linked to power (Foucault, 1980; Haraway, 1988). Many currently established fields, such as pharmacogenomics, the study of gene-by-drug interactions (Kalow et al., 1999; Ozdemir et al., 2000), initially started out as fringe, contested ideas outside the dominant scientific establishment in the mid-20th century (Ozdemir et al., 2009).
As scientists, we ought to recognize that consensus over new technologies is for textbooks (Sarewitz, 2011), whereas science, technology, and innovation advance by contestation, dissent, and disagreement. HI could welcome and appreciate dissent and opposing views on emerging technologies. By broadening our approach to technology governance beyond the practice of forced consensus, HI tacit knowledges build reflexivity and resilience against uncertainties, ignorance, and unknowns in science and society, and thus help cultivate responsible innovation across AI, automation, and data science communities.

Disclaimer

No funding was received in support of this article. Views expressed are the author's personal opinions only and do not necessarily reflect those of the affiliated institutions.

Author Disclosure Statement

The author declares that there are no competing financial interests.

References

Collingridge D. (1980). The Social Control of Technology. New York: St. Martin's Press.

Conley SN. (2011). Engagement agents in the making: On the front lines of socio-technical integration. Sci Eng Ethics 17, 715–721.

Didier C, Duan W, Dupuy JP, et al. (2015). Acknowledging AI's dark side. Science 349, 1064–1065.

Feyerabend PK. (2011). The Tyranny of Science. Cambridge, United Kingdom: Polity Press.

Fisher E. (2017). Responsible innovation in a post-truth moment. J Respons Innov 4, 1–4.

Foucault M. (1980). Power/Knowledge: Selected Interviews and Other Writings, 1972–1977. Gordon C, ed. New York: Pantheon Books.

Gulland A. (2017). Sixty seconds on digital drugs. BMJ 359, j5365.

Guston DH. (2014). Understanding "anticipatory governance." Soc Stud Sci 44, 218–242.

Haraway D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspectives.
Fem Stud 14, 575–599.

Harris J. (2019). Together we can thwart the big-tech data grab. Here's how. The Guardian, January 7.

Kalow W, Ozdemir V, Tang BK, Tothfalusi L, and Endrenyi L. (1999). The science of pharmacological variability: An essay. Clin Pharmacol Ther 66, 445–447.

Larson C. (2018). Who needs democracy when you have data? MIT Technology Review, August 20.

Latour B. (1987). Science in Action. Cambridge, MA: Harvard University Press.

Ozdemir V, Kalow W, Tang BK, et al. (2000). Evaluation of the genetic component of variability in CYP3A4 activity: A repeated drug administration method. Pharmacogenetics 10, 373–388.

Ozdemir V, Suarez-Kurtz G, Stenne R, et al. (2009). Risk assessment and communication tools for genotype associations with multifactorial phenotypes: The concept of "edge effect" and cultivating an ethical bridge between omics innovations and society. OMICS 13, 43–61.

Özdemir V, Kolker E, Hotez PJ, et al. (2014). Ready to put metadata on the post-2015 development agenda? Linking data publications to responsible innovation and science diplomacy. OMICS 18, 1–9.

Özdemir V, Dandara C, Hekim N, et al. (2017). Stop the spam! Conference ethics and decoding the subtext in post-truth science. What would Denis Diderot say? OMICS 21, 658–664.

Özdemir V. (2018a). The dark side of the moon: The Internet of Things, industry 4.0, and the quantified planet. OMICS 22, 637–641.

Özdemir V. (2018b). The Fly on the Wall… Agos Newspaper, December 24.

Özdemir V, and Hekim N. (2018). Birth of industry 5.0: Making sense of big data with artificial intelligence, "The Internet of Things" and next-generation technology policy. OMICS 22, 65–76.

Özdemir V, and Springer S. (2018). What does "Diversity" mean for public engagement in science?
A new metric for innovation ecosystem diversity. OMICS 22, 184–189.

Özdemir V. (2019). Towards an "ethics-of-ethics" for responsible innovation. In: Handbook of Responsible Innovation. A Global Resource. von Schomberg R, and Hankins J, eds. Cheltenham, UK: Edward Elgar Publishing, (in press).

Ravetz J. (2016). How should we treat science's growing pains? The Guardian, June 8.

Read M. (2018). How much of the Internet is fake? Turns out, a lot of it, actually. New York Magazine (Intelligencer), December 26.

Rip A. (2016). The clothes of the emperor. An essay on RRI in and around Brussels. J Respons Innov 3, 290–304.

Saltelli A, and Giampietro M. (2017). What is wrong with evidence based policy, and how can it be improved? Futures 91, 62–71.

Sarewitz D. (2011). The voice of science: Let's agree to disagree. Nature 478, 7.

Sarewitz D. (2016). Saving science. The New Atlantis 49 (Spring/Summer), 4–40.

Stilgoe J, Lock SJ, and Wilsdon J. (2014). Why should we promote public engagement with science? Public Underst Sci 23, 4–15.

Thoreau F, and Delvenne P. (2012). Have STS fallen into a political void? Depoliticisation and engagement in the case of nanotechnologies. Politica Sociedade 11, 205–226.

von Schomberg R. (2013). A vision of responsible research and innovation. In: Responsible Innovation. Owen R, Bessant J, and Heintz M, eds. USA: Wiley, 51–74.

White AJ. (2018). Google.gov. The New Atlantis 55, 3–34.
Abbreviations Used

AI = artificial intelligence
HI = human intelligence
Keywords: artificial intelligence; automation; data science; Internet of Things; technology policy

To cite this article: Vural Özdemir. Not All Intelligence is Artificial: Data Science, Automation, and AI Meet HI. OMICS: A Journal of Integrative Biology, Feb 2019, 23(2), 67–69. https://doi.org/10.1089/omi.2019.0003. Online ahead of print: February 1, 2019.
