Explainable Artificial Intelligence (XAI) with IoHT for Smart Healthcare: A Review
2023; Springer Science+Business Media; Language: English
10.1007/978-3-031-08637-3_1
ISSN: 2199-1081
Authors: Subrato Bharati, M. Rubaiyat Hossain Mondal, Prajoy Podder, Utku Köse
Topic(s): Explainable Artificial Intelligence (XAI)
Abstract: In discussions of artificial intelligence (AI) in healthcare, explainability is a highly contentious topic. AI-powered systems may be superior at certain analytical tasks, but their lack of explanation continues to breed distrust. Because the majority of existing AI systems are opaque and difficult to interpret, AI technologies are unlikely to be properly exploited and incorporated into standard clinical practice. We begin by discussing the present state of XAI development, with a focus on its applications in healthcare. Numerous IoHT-related connected health applications are examined through the lens of XAI to assess their privacy, security, and explainability. For clinical decision assistance systems (CDAS) based on artificial intelligence, our approach combines legal, technological, patient, and medical considerations. To gain a better grasp of the significance of explainability in clinical practice, each of these disciplines focuses on distinct fundamental concerns and values. Explainability must be appraised technically, in terms of how it can be attained and what it implies for future development. Important legal checkpoints for explainability include informed consent, certification, and licensing of medical equipment. Finally, the relationship between medical AI and people is examined from both the patient's and the doctor's points of view.