Article · Open access · Peer-reviewed

Evaluating GPT-4-based ChatGPT's clinical potential on the NEJM quiz

2024; Volume: 2; Issue: 1; Language: English

DOI

10.1186/s44247-023-00058-5

ISSN

2731-684X

Authors

Daiju Ueda, Shannon L. Walston, Toshimasa Matsumoto, Ryo Deguchi, Hiroyuki Tatekawa, Yukio Miki

Topic(s)

COVID-19 diagnosis using AI

Abstract

Background
GPT-4-based ChatGPT has demonstrated significant potential across various industries; however, its potential clinical applications remain largely unexplored.

Methods
We used the New England Journal of Medicine (NEJM) “Image Challenge” quizzes from October 2021 to March 2023 to assess ChatGPT's clinical capabilities. The quiz, designed for healthcare professionals, tests the ability to analyze clinical scenarios and make appropriate decisions. After excluding quizzes that could not be answered without their images, we evaluated ChatGPT's accuracy by question type and medical specialty. For each quiz, ChatGPT was first asked to answer without the five multiple-choice options, and then again after being given the options.

Results
After excluding 16 image-dependent quizzes, ChatGPT achieved 87% (54/62) accuracy without the choices and 97% (60/62) accuracy with the choices. By question type, ChatGPT performed best in the Diagnosis category, reaching 89% (49/55) accuracy without choices and 98% (54/55) with choices. The other categories contained fewer cases, but ChatGPT's performance remained consistent across them. It also performed strongly across most medical specialties; Genetics had the lowest accuracy, at 67% (2/3).

Conclusion
ChatGPT shows promise for diagnostic applications, suggesting it could support healthcare professionals in making differential diagnoses and enhance AI-driven healthcare.
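The reported metrics reduce to a per-category accuracy tally over the included quizzes, computed separately for the two answering conditions (without and with the multiple-choice options). Below is a minimal sketch of that bookkeeping in Python; the record format, field names, and sample values are illustrative assumptions, not taken from the paper's materials.

```python
from collections import defaultdict

# Hypothetical quiz records: (category, correct_without_options, correct_with_options).
# One record per included quiz (62 in the study); values here are placeholders.
results = [
    ("Diagnosis", True, True),
    ("Diagnosis", False, True),
    ("Genetics", True, True),
    ("Genetics", False, False),
]

def accuracy_by_category(records):
    """Tally per-category accuracy for both answering conditions."""
    tally = defaultdict(lambda: [0, 0, 0])  # [n, correct_without, correct_with]
    for category, without_ok, with_ok in records:
        counts = tally[category]
        counts[0] += 1
        counts[1] += int(without_ok)
        counts[2] += int(with_ok)
    return {
        category: {
            "n": n,
            "acc_without_options": correct_wo / n,
            "acc_with_options": correct_w / n,
        }
        for category, (n, correct_wo, correct_w) in tally.items()
    }

for category, stats in accuracy_by_category(results).items():
    print(category, stats)
```

Applied to the study's full set of 62 included quizzes, this kind of tally yields the headline figures quoted above (e.g., 54/62 without options and 60/62 with options overall).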
