AI in medicine: creating a safe and equitable future
The Lancet, 2023; Elsevier BV; Volume 402, Issue 10401. Language: English
DOI: 10.1016/s0140-6736(23)01668-9
ISSN: 1474-547X
Topic(s): Artificial Intelligence in Healthcare and Education
Abstract

The meteoric progress of generative artificial intelligence (AI)—such as OpenAI's ChatGPT, capable of holding realistic conversations, or other systems that create realistic images and video from simple prompts—has renewed interest in the transformative potential of AI, including for health. It has also sparked sobering warnings. Addressing the UN Security Council in July, Secretary-General António Guterres spoke of the “horrific levels of death and destruction” that malicious AI use could cause. How can the medical community navigate AI's substantial challenges to realise its health potential?

AI in medicine is nothing new. Non-generative machine learning can already perform impressively at discrete tasks, such as interpreting medical images. The Lancet Oncology recently published one of the first randomised controlled trials of AI-supported mammography, demonstrating a similar cancer detection rate and a nearly halved screen-reading workload compared with unassisted reading. AI has driven progress in infectious diseases and molecular medicine and has enhanced field-deployable diagnostic tools. But the medical applications of generative AI remain largely speculative. Automation of evidence synthesis and identification of de novo drug candidates could expedite clinical research. AI-enabled generation of medical notes could ease the administrative burden for health-care workers, freeing up time to see patients. Initiatives such as the Bill & Melinda Gates Foundation's Global Grand Challenges seek innovative uses of large language models in low-income and middle-income countries (LMICs).

These advances come with serious risks. AI performs best at well-defined tasks and when models can easily augment rather than replace human judgement. Applying generative AI to heterogeneous data is complicated. The black-box nature of many models makes it challenging to appraise their suitability and generalisability.
Large language models can make mistakes easily missed by humans or hallucinate non-existent sources. Transfer of personal data to technology firms without adequate regulation could compromise patient privacy.

Health equity is a particularly serious concern. Algorithms trained on health-care datasets that reflect bias in health-care spending, for example, worsened racial disparities in access to care in the USA. Most health data come from high-income countries, which could bias models, exacerbating historical injustice and discrimination when they are used elsewhere. These issues all risk eroding patient trust.

How, then, to ensure that AI is a force for good in medicine? The scientific community has a key role in the rigorous testing, validation, and monitoring of AI. The UN is assembling a high-level advisory body to build global capacity for trustworthy, safe, and sustainable AI; it is crucial that health and medicine are well represented. An equitable approach will require a diversity of local knowledge. WHO has partnered with the International Digital Health and AI Research Collaborative to boost participation from LMICs in the governance of safe and ethical AI in health through cross-border collaboration and common guidance. But without investment in local infrastructure and research, LMICs will remain reliant on AI developed in the USA and Europe, and costs could be prohibitive without open-access alternatives.

At present, the pace of technological progress far outstrips the guidance, and the power imbalance between the medical community and technology firms is growing. Allowing private entities undue influence is dangerous. The UN Secretary-General has urged the Security Council to help ensure transparency, accountability, and oversight of AI. Regulators must act to ensure safety, privacy, and ethical practice. The EU's AI Act, for example, will require high-risk AI systems to be assessed before approval and subjected to monitoring.
Regulation should be a key concern of the first major global summit on AI safety, to be held in the UK later this year. Although technology companies should be part of the regulatory conversation, there are already signs of resistance. Amazon, Google, and Epic have objected to proposed US rules to regulate AI in health technologies. The tension between commercial interests and transparency risks compromising patient wellbeing, and marginalised groups will suffer first.

There is still time for us to create the future we want. AI could continue to bring benefits if integrated cautiously. It could change practice for the better as an aid—not a replacement—for doctors. But doctors cannot ignore AI. Medical educators must prepare health-care workers for a digitally augmented future. Policy makers must work with technology firms, health experts, and governments to ensure that equity remains a priority. Above all, the medical community must amplify the urgent call for stringent regulation.

For more on AI in infectious diseases see Science 2023; 381: 164–70
For more on AI in molecular medicine see N Engl J Med 2023; 388: 2456–65
For more on the Global Grand Challenges see https://gcgh.grandchallenges.org/challenge/catalyzing-equitable-artificial-intelligence-ai-use
For more on the dangers of biased health data see Science 2019; 366: 447–53
For more on WHO's efforts to improve access to AI see https://www.who.int/news/item/06-07-2022-who-and-i-dair-to-partner-for-inclusive-impactful-and-responsible-international-research-in-artificial-intelligence-and-digital-health
For more on the UN Secretary-General's remarks see https://press.un.org/en/2023/sgsm21880.doc.htm
For more on the AI Act see https://artificialintelligenceact.eu
For more on the global summit on AI see https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence