Article; Open access; Peer-reviewed

Artificial intelligence and geriatric medicine: New possibilities and consequences

2023; Wiley; Volume: 71; Issue: 6; Language: English

10.1111/jgs.18334

ISSN

1532-5415

Authors

Nihal Haque

Topic(s)

Anesthesia and Sedative Agents

Abstract

Artificial intelligence (AI) research attempts to build systems that approximate human intelligence through computer software.1 These systems can accept novel input and produce meaningful output. Applications of AI in medicine include simulation of molecular dynamics, drug discovery, and treatment selection for patients.1 Recently, a text chat program called ChatGPT was made available to the public.2 It has knowledge of the world and events up to 2021 and falls under the domain of generative AI: programs that can generate novel paragraphs and visuals in response to text prompts. Users can ask questions in a chat format and obtain answers to their queries. In this article, the use of ChatGPT in the field of Geriatric Medicine is explored.

The following text on antipsychotic use in delirium was generated by ChatGPT. Antipsychotic medications, such as haloperidol and olanzapine, are often used to manage symptoms of delirium, such as agitation or psychosis. However, the use of antipsychotics in delirium is not without risks and should be carefully considered. One review of the literature on the use of antipsychotics in delirium found that while these medications may be effective in reducing symptoms of delirium, they may also have potential adverse effects such as sedation, extrapyramidal symptoms (e.g., tremor, rigidity, and akathisia), and an increased risk of falls and fractures (Wilder-Smith et al., 2010). The authors of the review concluded that the use of antipsychotics in delirium should be limited to cases where the benefits outweigh the potential risks, and that nonpharmacological interventions should be used as the first line of treatment whenever possible. Another review of the literature on the use of antipsychotics in delirium found that haloperidol was more effective at reducing symptoms of delirium than placebo, but that there was limited evidence on the effectiveness of other antipsychotics (Fong et al., 2013). 
The authors of the review also noted that the use of antipsychotics was associated with an increased risk of adverse effects, and that the decision to use these medications should be made on a case-by-case basis, taking into account the patient's specific needs and circumstances (Inouye et al., 2014). It is important to note that the use of antipsychotics in delirium should be individualized and carefully monitored, and that other nonpharmacological interventions should be considered as the first line of treatment whenever possible.

References:
Fong TG, Tulebaev SR, Inouye SK. Delirium in older patients. N Engl J Med. 2013;369(9):840-851. doi:10.1056/NEJMra1205158
Inouye SK, Westendorp RG, Saczynski JS. Delirium in elderly people. Lancet. 2014;383(9920):911-922. doi:10.1016/S0140-6736(13)61570-6
Wilder-Smith CH, Wilder-Smith EP. Antipsychotic use in delirium: a systematic review. J Psychosom Res. 2010;69(2):123-129. doi:10.1016/j.jpsychores.2010.02.005

ChatGPT recommends nonpharmacologic interventions as first-line treatment before resorting to an antipsychotic medication, which is in line with current clinical practice. Even though the references seem genuine, further review shows them to be nonexistent. This is a major limitation of the AI software that must be addressed so that it does not deceive researchers. Although the references are fabricated, it is important to note that the subject matter is still accurate.

The Carvedilol Or Metoprolol European Trial (COMET) was a clinical trial designed to compare the effects of the medications carvedilol and metoprolol on mortality and morbidity in patients with chronic heart failure. The study was a multicentre, randomized, double-blind, parallel-group trial conducted in 15 European countries, involving 341 centres. Eligible patients were men or women with chronic heart failure and a left ventricular ejection fraction of 0.35 or lower, who were on stable treatment with ACE inhibitors and diuretics. 
The study excluded patients who had recently changed their treatment or had certain other medical conditions. The primary outcome of the study was the composite of all-cause mortality and all-cause hospitalization. The study found that carvedilol was superior to metoprolol in reducing the composite outcome and improving other measures of heart failure, such as left ventricular ejection fraction and quality of life. To critically appraise the study above, it is important to consider several aspects. Overall, the study is well-conducted and provides useful information on the relative effectiveness of carvedilol and metoprolol in the treatment of chronic heart failure. However, the lack of a placebo control group limits the ability to determine the specific effects of the interventions compared to a control group.

It is interesting that ChatGPT has learned aspects of critical appraisal and can independently analyze this study's strengths and weaknesses. It is also important to note that these paragraphs were independently generated by the software and not copied from an existing critical appraisal of this article. A search on an online search engine (Google) did not identify any webpages containing the critical appraisal text above.

It is important to note that the use of AI in the field of geriatric medicine has the potential to revolutionize research and clinical practice. AI systems like ChatGPT can provide quick and accurate information on a wide range of topics, and can be used to assist in decision-making and patient care. However, it is also important to recognize that AI systems are not a replacement for human expertise and judgment, and should be used in conjunction with clinical knowledge and experience. 
Additionally, the use of AI in geriatric medicine raises ethical considerations, such as the potential for AI to discriminate against certain patient groups or to be used to make decisions that may not align with a patient's values or preferences. It is important for practitioners and researchers to carefully consider these issues as they explore the use of AI in geriatric medicine.

The above paragraph was also generated by ChatGPT: the preceding section of this article was input into it, and a request was made for a summary. This brings into question the future of research in Geriatric Medicine. Does the use of ChatGPT to write part of a paper constitute plagiarism? If so, what changes need to be made to the publication process? Potential ideas include a signed statement that artificial intelligence software was not used in the writing of the article. An alternate path is to allow its use so that everyone has a level playing field. Software that can detect ChatGPT output is already in development, available at https://huggingface.co/openai-detector. Using this software, the above paragraph was rated as having a 74.68% chance of being AI generated, whereas this paragraph was rated as having a 99% probability of being written by a human.

The capability of ChatGPT to perform critical appraisal of scientific papers is especially interesting. Physicians could potentially do a quick review of the strengths and weaknesses of a given study without reading the whole paper. More research is needed on the accuracy of its critical appraisal skills. In conclusion, further research is needed to determine how best to leverage this new technology in clinical practice, research, and academia.

Dr. Nihal Haque was responsible for study concept and design, analysis and interpretation of data, and preparation of the manuscript. I affirm that I have listed everyone who has contributed significantly to the work. No funding sources. The author declares that there is no conflict of interest. 
None.

One of the most important aspects of the commentary by Nihal Haque on the use of artificial intelligence (AI) in geriatric medicine is that ChatGPT fabricated references when answering the author's queries. Should we take this to mean that ChatGPT is deceitful and deliberately misleading us in its seemingly appropriate responses? The answer is no. ChatGPT is a bullshitter. I mean no disrespect in using this term; rather, I'm suggesting that it perfectly meets Harry Frankfurt's definition of a bullshitter as noted in his New York Times bestseller "On Bullshit". Like any classic bullshitter, ChatGPT is not concerned about the difference between fact and fiction; therefore, by design, it cannot lie. Rather, ChatGPT's purpose is to predict the next word in a sentence in a way that sounds convincing to its human readers. Ultimately, as when interacting with any bullshitter, journal readers, peer reviewers, and editors should never feel secure that any particular detail of ChatGPT's output is factually correct, even though it has the veneer of legitimacy.

-Eric W. Widera, MD
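Widera's description of ChatGPT as a next-word predictor can be made concrete with a toy sketch. The bigram model below is a deliberate oversimplification (ChatGPT uses a large neural network over subword tokens, and the training corpus here is invented purely for illustration): it emits whichever word most often followed the current word in its training text, with no regard for whether the resulting statement is true.

```python
from collections import Counter, defaultdict

# Invented miniature training corpus, for illustration only.
corpus = (
    "delirium in older patients is common . "
    "antipsychotics in delirium carry risks . "
    "delirium in hospital settings is common ."
).split()

# Count, for each word, which words followed it and how often.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return nxt[word].most_common(1)[0][0]

print(predict_next("delirium"))  # prints 'in' (most frequent follower)
```

The model produces fluent-looking continuations because they are statistically plausible, not because they are verified against any source; fabricated-but-plausible references are a natural failure mode of exactly this kind of objective, scaled up.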
