Review Article
Open Access Peer-reviewed

Explainable AI: A Systematic Literature Review Focusing on Healthcare

Nzenwata U. J., Ilori O. O., Tai-Ojuolape E. O., Aderogba T. A., Durodola O. F., Kesinro P. O., Omeneki E. N., Onah V. O., Adeboye I. V., Adesuyan M. A.
Journal of Computer Sciences and Applications. 2024, 12(1), 10-16. DOI: 10.12691/jcsa-12-1-2
Received July 13, 2024; Revised August 14, 2024; Accepted August 21, 2024

Abstract

The integration of Artificial Intelligence (AI) in healthcare holds immense promise for revolutionizing clinical practices and patient outcomes. However, the lack of transparency in AI decision-making processes poses significant challenges, hindering trust and understanding among healthcare professionals. Explainable Artificial Intelligence (XAI) has emerged as a promising solution to address these concerns by shedding light on AI model predictions and enhancing interpretability. This review explores the efficacy and applications of XAI within the healthcare domain, focusing on key research questions regarding challenges, effectiveness, and utilized algorithms. Through a comprehensive examination of 50 recent studies, we identify challenges related to the integration of XAI into clinical workflows, the necessity for validation and trust-building, and technical hurdles such as diverse explanation methods and data quality issues. Popular XAI algorithms such as SHAP, LIME, and GRAD-CAM demonstrate significant promise in clarifying model predictions and aiding in the interpretation of AI-driven healthcare systems. Overall, this review underscores the immense potential of XAI in revolutionizing healthcare delivery and decision-making processes, emphasizing the need for further research and development to address challenges and leverage its full potential in enhancing healthcare practices.

1. Introduction

Researchers in [1] are actively exploring innovative approaches to seamlessly incorporate Artificial Intelligence (AI) across diverse industries, with healthcare emerging as a sector that has witnessed notable strides in this integration. Nevertheless, because human lives are at stake, making decisions with AI models raises significant concerns, as the intricate workings of most of these models are not properly understood [2]. Explainable Artificial Intelligence (XAI) deals with understanding how machine learning systems make decisions so that we can trust the decisions they give us [1]. This lack of transparency, as noted in [3], can hinder health professionals' trust in AI systems, potentially impeding their integration into healthcare workflows. Moreover, the ethical implications of AI decisions necessitate that AI systems in healthcare are not only accurate but also interpretable, to ensure accountability and justify clinical decisions [2].

1.1. Potential of AI in Healthcare

As listed in [4], Artificial Intelligence exhibits diverse applications within the medical sector, encompassing but not limited to precision medicine, drug discovery, medical visualization, education, and intelligent health records. In the areas of diagnosis, treatment, and patient care, a spectrum of untapped potential remains for artificial intelligence as we explore this burgeoning technology. In precision medicine, traditional machine learning finds its primary application in predicting the success of treatment protocols for individual patients. According to [5, 6], this involves analyzing various patient attributes and contextual factors associated with the treatment to determine the most effective course of action. Also, [6] opines that AI has been used to recognize intricate patterns in medical images, such as chest radiographs, achieving comparable or superior performance to clinicians in certain instances.

The authors of [7] believe that machine learning algorithms can analyze genetic data and clinical information to predict high-risk patients, recommend personalized treatment plans, and prevent adverse events. AI-powered patient engagement tools such as chatbots, wearables, and mobile devices support self-care, education, decision-making, and chronic condition management. From [8, 9], it is understood that patients can access their health data, interact with healthcare providers online, and receive personalized recommendations. There is no doubt that AI assists clinicians in decision-making by processing narrative health data, providing critical summaries of patient information, and improving diagnostic processes. It enhances disease diagnosis, treatment selection, and clinical laboratory testing.

Recent advancements in Explainable Artificial Intelligence (XAI) have made significant progress in addressing a crucial gap: the disparity between AI predictions and the understanding of end-users. These advancements focus on developing techniques that clarify the reasoning behind AI predictions, making them more understandable for people [10]. This progress is especially important in areas like healthcare, where the ability to grasp the basis of AI-generated recommendations plays a vital role in influencing patient outcomes [11]. Understanding how AI arrives at its decisions is crucial for healthcare professionals and patients alike, as it contributes to informed and confident decision-making in clinical settings.

Additionally, the evolving landscape of regulations now places a growing emphasis on the explainability of these systems. This emphasis is particularly relevant in clinical settings, where ensuring the safe and ethical deployment of AI technologies is a top priority. Adhering to regulatory requirements, which increasingly stress the need for explainability, is essential to guarantee that AI models meet ethical standards and prioritize patient well-being [12]. The call for transparency and accountability in AI is not just a response to the growing complexity of these systems but also an acknowledgment of the ethical responsibility associated with their deployment in such critical areas.

AI applications in healthcare have advanced considerably; however, some challenges remain. By performing a systematic literature review of recent papers in the domain, we aim to answer the following major questions.

1. What are the current challenges and problems in XAI for healthcare?

2. How effective has explainable AI been in healthcare?

3. What are the explainable AI algorithms that have been used?

2. Methodology

In this research, our objective is to explore the efficacy of Explainable Artificial Intelligence (XAI) in the healthcare sector. We are conducting a systematic review employing the scoping study methodology. Our study involves an exhaustive examination of the existing literature within this research domain, focusing on identifying the metrics and algorithms employed.

2.1. Search Strategy

For the systematic literature review, we conducted a comprehensive search for articles related to Explainable AI (XAI) applications across various healthcare domains, spanning from diagnosis to treatment recommendation. Our search strategy involved querying two widely used academic databases: Google Scholar and PubMed. We utilized a combination of relevant keywords such as "explainable AI" and "healthcare", ensuring a broad scope of articles covering a diverse range of applications. We limited our search to articles published between 2020 and 2024; the search was conducted on March 5, 2024.

Overall, our search strategy aimed to identify a comprehensive selection of literature encompassing various healthcare domains, including but not limited to radiology (use of X-rays for diagnosis and treatment), pathology, cardiology, and oncology, to provide a thorough understanding of the current landscape of XAI applications in healthcare.

2.2. Screening and Eligibility

In this stage, we screened papers based on our specific inclusion and exclusion criteria. Initially, we eliminated survey papers, review papers, and preprints after assessing their abstracts and conclusions, as they did not align with the focus of our research, which centered on the utilization of Explainable Artificial Intelligence (XAI) within a particular domain of healthcare. We systematically identified, screened, and extracted relevant information from all retrieved studies, adhering to the guidelines outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [13].

2.3. Data Extraction and Analysis

Here, data from 50 papers were extracted and organized into a Google spreadsheet. Each paper was assigned a unique numeric identifier, and each of the 10 members of the research team was allocated 10 papers. Consequently, the assignments overlapped, with each paper being shared by two people. The extracted information underwent comparison, collation, and cross-checking by the lead researcher to ensure credibility.

Following data cleaning, which involved removing whitespace and ensuring consistency among values, the dataset was preprocessed and analyzed using the Pandas library. Plots were generated using Matplotlib and Seaborn. The table below provides descriptions of the columns. Notably, for publishers, journals such as Nature, Scientific Reports, Springer, and other related journals were grouped under 'Springer Nature.' Also, review [14], survey [15], and preprint papers [16] authored by the original researchers and posted on repositories such as arXiv [17, 18] and medRxiv [19] were excluded from the analysis.
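As an illustration of the cleaning and grouping steps described above, the following sketch uses hypothetical column names and values (not the authors' actual spreadsheet) to show how whitespace removal and the 'Springer Nature' grouping might look in Pandas:

```python
import pandas as pd

# Hypothetical extraction sheet: column names and values are illustrative
# stand-ins for the kinds of fields described in the text.
df = pd.DataFrame({
    "paper_id": [1, 2, 3, 4],
    "publisher": [" Nature ", "Scientific Reports", "Springer", "IEEE"],
    "xai_method": ["SHAP ", " LIME", "GRAD-CAM", "SHAP"],
})

# Remove stray whitespace so identical values collate together.
for col in ["publisher", "xai_method"]:
    df[col] = df[col].str.strip()

# Group Nature-family imprints under a single 'Springer Nature' label.
springer_family = {"Nature", "Scientific Reports", "Springer"}
df["publisher"] = df["publisher"].where(
    ~df["publisher"].isin(springer_family), "Springer Nature"
)

print(df["publisher"].value_counts().to_dict())
# {'Springer Nature': 3, 'IEEE': 1}
```

The same cleaned frame can then be passed directly to Matplotlib or Seaborn for the plots mentioned above.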

2.4. Limitations

Search string: The original query included only the words "XAI," "explainable," and "healthcare." This limited query may have caused us to miss some valuable articles related to our domain of study.

Selection of databases: We used only Google Scholar and PubMed, since our domain is medical and healthcare, and excluded other databases for the sake of credibility and quality.

Time frame: We considered only studies from 2020 onward. There may have been earlier studies that could have contributed meaningfully to our domain of study.

3. Results and Discussions

3.1. RQ 1 - What Are the Current Challenges and Problems in XAI for Healthcare?

The papers present a comprehensive overview of challenges in implementing explainable AI (XAI) within healthcare. Some of the challenges revolve around the integration of AI into clinical workflows and the need for validation and trust-building. This includes the absence of real-world performance data, limited involvement of medical experts in algorithm design, and the necessity for rigorous internal and external validation to increase user trust and confidence in AI-driven decisions [6, 11, 20, 21, 22, 23, 24, 25]. Moreover, technical hurdles such as the vast number of explanation methods and the need for tailored solutions for each application further complicate the implementation of XAI in healthcare [26, 27]. These challenges underscore the importance of addressing issues related to model interpretability, data quality, and trust-building mechanisms to facilitate the effective deployment of AI in clinical practice.

3.2. RQ 2 - How Effective Has Explainable AI Been in Healthcare?

Multiple studies underscore the significance of XAI in enhancing clinical decision-making processes by providing insights into AI model predictions and facilitating better understanding among medical professionals. For instance, several studies showcase the potential of XAI techniques such as SHAP and LIME in aiding clinicians to interpret machine learning models for diagnosing diseases like Alzheimer's and retinoblastoma, thereby improving trust and confidence in AI-driven healthcare systems [9, 11, 30, 31, 32, 33].

Furthermore, XAI methods have been instrumental in elucidating the decision-making process of complex deep-learning models, particularly in medical image analysis for diseases like pulmonary ailments and stroke detection [3, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45]. These techniques not only increase transparency but also help in identifying the crucial factors influencing model predictions, thereby facilitating more accurate diagnoses and personalized treatment plans [12, 15, 25, 46, 47, 48, 49].
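Many of the image-analysis explanations cited above are saliency maps. Grad-CAM, one of the algorithms this review finds most popular, weights each channel of a convolutional layer's activations by the spatial average of the class-score gradient and passes the weighted sum through a ReLU. A minimal NumPy sketch with synthetic activations and gradients (standing in for a real CNN's tensors, not any model from the reviewed studies) illustrates the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for a CNN's last convolutional layer: K feature-map
# channels of size HxW, and the gradient of the target class score with
# respect to those activations.
K, H, W = 4, 5, 5
activations = rng.random((K, H, W))
gradients = rng.standard_normal((K, H, W))

# Grad-CAM: each channel's weight is its spatially averaged gradient...
alphas = gradients.mean(axis=(1, 2))          # shape (K,)

# ...and the saliency map is the ReLU of the weighted channel sum.
cam = np.maximum(np.einsum("k,khw->hw", alphas, activations), 0.0)

# Normalize to [0, 1] so the map can be overlaid on the input image.
if cam.max() > 0:
    cam /= cam.max()

print(cam.shape)  # (5, 5)
```

In practice the map is upsampled to the input resolution and overlaid as a heatmap, which is the form clinicians see in the radiograph and CT studies cited above.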

The collective findings underscore the immense potential of XAI in revolutionizing healthcare by improving the interpretability of AI models, enhancing trust among healthcare practitioners, and ultimately facilitating better clinical decision-making processes [23, 50, 51]. As the field continues to evolve, further research and development in XAI are expected to drive innovations that will significantly impact healthcare delivery and patient outcomes.

3.3. RQ 3 - What Are the Explainable AI Algorithms That Have Been Used?

From the plot above, we can infer that the most popular XAI algorithms in use are SHAP, LIME, and GRAD-CAM. These may be used alone, as demonstrated in [3, 17, 27, 52, 53, 54, 55, 56, 57], or in combination, as shown in [9, 11, 58, 59, 60, 61, 62]. By combining methods, researchers can leverage the strengths of each approach to gain a more comprehensive understanding of model behavior. In [11], LIME produced segmentations of the images and highlighted the regions important for classification. SHAP, on the other hand, provided a more accurate explanation of the model's predictions by assigning feature importance scores to individual pixels in the image. We observed that SHAP was more effective in identifying important regions of the image, with pink areas highlighting the regions correctly identified as significant in retinoblastoma images and blue areas indicating the lack of significant features in normal images.
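The feature importance scores SHAP assigns are Shapley values. The following self-contained sketch computes them exactly for a toy three-feature model (purely illustrative; the model, feature names, and values are not from any reviewed study), showing how each feature's contribution is averaged over all subsets of the remaining features:

```python
from itertools import combinations
from math import factorial

# Toy model with one interaction term; feature names are hypothetical.
def model(x):
    return 2.0 * x["age"] + 1.0 * x["bmi"] + 0.5 * x["age"] * x["smoker"]

background = {"age": 0.0, "bmi": 0.0, "smoker": 0.0}   # baseline input
instance   = {"age": 1.0, "bmi": 2.0, "smoker": 1.0}   # input to explain

def value(subset):
    # Features in `subset` take the instance's value; the rest stay at baseline.
    x = {f: (instance[f] if f in subset else background[f]) for f in background}
    return model(x)

def shapley(feature):
    # Weighted average of the feature's marginal contribution over all
    # subsets of the other features (the exact Shapley formula).
    others = [f for f in background if f != feature]
    n = len(background)
    total = 0.0
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += weight * (value(set(subset) | {feature}) - value(set(subset)))
    return total

phi = {f: shapley(f) for f in background}
print(phi)  # {'age': 2.25, 'bmi': 2.0, 'smoker': 0.25}
```

Note the interaction term's 0.5 is split evenly between "age" and "smoker", and the attributions sum exactly to the model output minus the baseline output; the SHAP library approximates these same values efficiently for real models.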

4. Conclusion

The integration of Artificial Intelligence (AI) in healthcare has witnessed substantial advancements, offering a myriad of applications ranging from precision medicine to patient engagement tools. However, the opacity of AI decision-making processes poses challenges in fostering trust and understanding among healthcare professionals. Explainable Artificial Intelligence (XAI) emerges as a pivotal solution that aims to shed light on AI model predictions and enhance interpretability. This systematic literature review delves into the efficacy and applications of XAI within the healthcare domain, addressing key research questions concerning challenges, effectiveness, and utilized algorithms. Technical hurdles, such as the numerous XAI explanation methods and data quality issues, further underscore the complexity of implementing XAI in healthcare settings. XAI methods such as SHAP, LIME, and GRAD-CAM have demonstrated significant promise in clarifying model predictions, aiding in disease diagnosis, treatment planning, and medical image analysis. The collective findings underscore the immense potential of XAI in revolutionizing healthcare delivery and decision-making processes. As the field continues to evolve, further research and development in XAI are imperative to address challenges and leverage its full potential in enhancing healthcare practices.

References

[1] S. Sarp, M. Kuzlu, E. Wilson, U. Cali, and O. Guler, "A highly transparent and explainable artificial intelligence tool for chronic wound classification: XAI-CWC," Jan. 2021.
[2] M. Merry, P. Riddle, and J. Warren, "A mental models approach for defining explainable artificial intelligence," BMC Medical Informatics and Decision Making, vol. 21, no. 1, Dec. 2021.
[3] J. A. Yeung, Y. Y. Wang, Z. Kraljevic, and J. T. H. Teo, "Artificial intelligence (AI) for neurologists: do digital neurones dream of electric sheep?," Practical Neurology, vol. 23, no. 6, pp. 476–488, Dec. 2023.
[4] A. Bohr and K. Memarzadeh, "The rise of artificial intelligence in healthcare applications," Artificial Intelligence in Healthcare, vol. 1, no. 1, pp. 25–60, Jun. 2020.
[5] T. Davenport and R. Kalakota, "The potential for artificial intelligence in healthcare," Future Healthcare Journal, vol. 6, no. 2, pp. 94–98, Jun. 2019.
[6] K. B. Johnson et al., "Precision medicine, AI, and the future of personalized health care," Clinical and Translational Science, vol. 14, no. 1, Oct. 2020. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7877825/.
[7] C. J. Kelly, A. Karthikesalingam, M. Suleyman, G. Corrado, and D. King, "Key challenges for delivering clinical impact with artificial intelligence," BMC Medicine, vol. 17, no. 1, Oct. 2019.
[8] J. Bajwa, U. Munir, A. Nori, and B. Williams, "Artificial intelligence in healthcare: transforming the practice of medicine," Future Healthcare Journal, vol. 8, no. 2, pp. e188–e194, 2021.
[9] S. A. Alowais et al., "Revolutionizing healthcare: the role of artificial intelligence in clinical practice," BMC Medical Education, vol. 23, no. 1, Sep. 2023.
[10] H. Alami et al., "Artificial intelligence and health technology assessment: anticipating a new level of complexity," Journal of Medical Internet Research, vol. 22, no. 7, p. e17707, Jul. 2020.
[11] T. Raclin et al., "Combining machine learning, patient-reported outcomes, and value-based health care: protocol for scoping reviews," JMIR Research Protocols, vol. 11, no. 7, p. e36395, Jul. 2022.
[12] B. Vasey et al., "DECIDE-AI: new reporting guidelines to bridge the development-to-implementation gap in clinical artificial intelligence," Nature Medicine, vol. 27, no. 2, pp. 186–187, Feb. 2021.
[13] D. Moher, A. Liberati, J. Tetzlaff, and D. G. Altman, "Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement," Annals of Internal Medicine, vol. 151, pp. 264–269, 2009.
[14] K. S. Lee and E. S. Kim, "Explainable artificial intelligence in the early diagnosis of gastrointestinal disease," Diagnostics, vol. 12, no. 11, Nov. 2022.
[15] A.-D. Samaras et al., "Explainable classification of patients with primary hyperparathyroidism using highly imbalanced clinical data derived from imaging and biochemical procedures," Applied Sciences, vol. 14, no. 5, p. 2171, Mar. 2024.
[16] U. Pawar, S. Rea, R. O'Reilly, and D. O'Shea, "Incorporating explainable artificial intelligence (XAI) to aid the understanding of machine learning in the healthcare domain," 2020. Available: https://www.researchgate.net/publication/346717871.
[17] S. Wesołowski et al., "An explainable artificial intelligence approach for predicting cardiovascular outcomes using electronic health records," PLOS Digital Health, vol. 1, no. 1, p. e0000004, Jan. 2022.
[18] A. Das and P. Rad, "Opportunities and challenges in explainable artificial intelligence (XAI): a survey," Jun. 2020. Available: http://arxiv.org/abs/2006.11371.
[19] V. Sharma, S. Chhatwal, and B. Singh, "An explainable artificial intelligence based prospective framework for COVID-19 risk prediction."
[20] J. Jiménez-Luna, F. Grisoni, and G. Schneider, "Drug discovery with explainable artificial intelligence," Nature Machine Intelligence, vol. 2, no. 10, pp. 573–584, Oct. 2020.
[21] D. Dave, H. Naik, S. Singhal, and P. Patel, "Explainable AI meets healthcare: a study on heart disease dataset," Nov. 2020. Available: http://arxiv.org/abs/2011.03195.
[22] J. Hoffmann et al., "Prediction of clinical outcomes with explainable artificial intelligence in patients with chronic lymphocytic leukemia," Current Oncology, vol. 30, no. 2, pp. 1903–1915, Feb. 2023.
[23] K. Davagdorj, J. W. Bae, V. H. Pham, N. Theera-Umpon, and K. H. Ryu, "Explainable artificial intelligence based framework for non-communicable diseases prediction," IEEE Access, vol. 9, pp. 123672–123688, 2021.
[24] A. Anguita-Ruiz, A. Segura-Delgado, R. Alcalá, C. M. Aguilera, and J. Alcalá-Fdez, "eXplainable Artificial Intelligence (XAI) for the identification of biologically relevant gene expression patterns in longitudinal human studies, insights from obesity research," PLoS Computational Biology, vol. 16, no. 4, Apr. 2020.
[25] V. Roessner, J. Rothe, G. Kohls, G. Schomerus, S. Ehrlich, and C. Beste, "Taming the chaos?! Using eXplainable Artificial Intelligence (XAI) to tackle the complexity in mental health research," European Child and Adolescent Psychiatry, vol. 30, no. 8, pp. 1143–1146, Aug. 2021.
[26] S. Sarp, M. Kuzlu, E. Wilson, U. Cali, and O. Guler, "A highly transparent and explainable artificial intelligence tool for chronic wound classification: XAI-CWC," 2021.
[27] S. Muneer et al., "An IoMT enabled smart healthcare model to monitor elderly people using Explainable Artificial Intelligence (EAI)," Journal of NCBAE, vol. 1.
[28] S. El-Sappagh, J. M. Alonso, S. M. R. Islam, A. M. Sultan, and K. S. Kwak, "A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer's disease," Scientific Reports, vol. 11, no. 1, Dec. 2021.
[29] A. Raza, K. P. Tran, L. Koehl, and S. Li, "Designing ECG monitoring healthcare system with federated transfer learning and explainable AI," Knowledge-Based Systems, vol. 236, p. 107763, 2021. Available: https://api.semanticscholar.org/CorpusID:235195935.
[30] S. El-Sappagh, J. M. Alonso, S. M. R. Islam, A. M. Sultan, and K. S. Kwak, "A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer's disease," Scientific Reports, vol. 11, no. 1, p. 2660, Jan. 2021.
[31] J. Peng et al., "An explainable artificial intelligence framework for the deterioration risk prediction of hepatitis patients," Journal of Medical Systems, vol. 45, no. 5, May 2021.
[32] L. Lindsay, S. Coleman, D. Kerr, B. Taylor, and A. Moorhead, "Explainable artificial intelligence for falls prediction," in Communications in Computer and Information Science, Springer, 2020, pp. 76–84.
[33] Y. Jia, J. McDermid, T. Lawton, and I. Habli, "The role of explainability in assuring safety of machine learning in healthcare," IEEE Transactions on Emerging Topics in Computing, vol. 10, no. 4, pp. 1746–1760, Oct. 2022.
[34] F. Vaquerizo-Villar et al., "An explainable deep-learning model to stage sleep states in children and propose novel EEG-related patterns in sleep apnea," Computers in Biology and Medicine, vol. 165, Oct. 2023.
[35] F. Xu et al., "The clinical value of explainable deep learning for diagnosing fungal keratitis using in vivo confocal microscopy images," Frontiers in Medicine, vol. 8, Dec. 2021.
[36] Z. Naz, Muhammad, T. Saba, A. Rehman, H. Nobanee, and S. A. Bahaj, "An explainable AI-enabled framework for interpreting pulmonary diseases from chest radiographs," Cancers, vol. 15, no. 1, Jan. 2023.
[37] B. Alsinglawi et al., "An explainable machine learning framework for lung cancer hospital length of stay prediction," Scientific Reports, vol. 12, no. 1, Dec. 2022.
[38] E. Cerekci et al., "Quantitative evaluation of saliency-based explainable artificial intelligence (XAI) methods in deep learning-based mammogram analysis," European Journal of Radiology, vol. 173, Apr. 2024.
[39] M. S. Islam, I. Hussain, M. M. Rahman, S. J. Park, and M. A. Hossain, "Explainable artificial intelligence model for stroke prediction using EEG signal," Sensors, vol. 22, no. 24, Dec. 2022.
[40] Z. U. Ahmed, K. Sun, M. Shelly, and L. Mu, "Explainable artificial intelligence (XAI) for exploring spatial variability of lung and bronchus cancer (LBC) mortality rates in the contiguous USA," Scientific Reports, vol. 11, no. 1, Dec. 2021.
[41] F. Ullah, J. Moon, H. Naeem, and S. Jabbar, "Explainable artificial intelligence approach in combating real-time surveillance of COVID19 pandemic from CT scan and X-ray images using ensemble model," Journal of Supercomputing, vol. 78, no. 17, pp. 19246–19271, Nov. 2022.
[42] F. Ahmed, M. Asif, M. Saleem, U. F. Mushtaq, and M. Imran, "Identification and prediction of brain tumor using VGG-16 empowered with explainable artificial intelligence," International Journal of Computational and Innovative Sciences, vol. 2, no. 2, pp. 24–33, Jun. 2023. Available: https://ijcis.com/index.php/IJCIS/article/view/69.
[43] I. Hussain and R. Jany, "Interpreting stroke-impaired electromyography patterns through explainable artificial intelligence," Sensors, vol. 24, no. 5, Mar. 2024.
[44] A. M. Westerlund, J. S. Hawe, M. Heinig, and H. Schunkert, "Risk prediction of cardiovascular events by exploration of molecular data with explainable artificial intelligence," International Journal of Molecular Sciences, vol. 22, no. 19, Oct. 2021.
[45] S. I. Nafisah and G. Muhammad, "Tuberculosis detection in chest radiograph using convolutional neural network architecture and explainable artificial intelligence," Neural Computing and Applications, vol. 36, no. 1, pp. 111–131, Jan. 2024.
[46] K. Sanjana, V. Sowmya, E. A. Gopalakrishnan, and K. P. Soman, "Explainable artificial intelligence for heart rate variability in ECG signal," Healthcare Technology Letters, vol. 7, no. 6, pp. 146–154, Dec. 2020.
[47] L. Schweizer et al., "Analysing cerebrospinal fluid with explainable deep learning: from diagnostics to insights," Neuropathology and Applied Neurobiology, vol. 49, no. 1, Feb. 2023.
[48] M. Gimeno et al., "Explainable artificial intelligence for precision medicine in acute myeloid leukemia," Frontiers in Immunology, vol. 13, Sep. 2022.
[49] A. M. Hilal et al., "Modeling of explainable artificial intelligence for biomedical mental disorder diagnosis," Computers, Materials and Continua, vol. 71, no. 2, pp. 3853–3867, 2022.
[50] S. Knapič, A. Malhi, R. Saluja, and K. Främling, "Explainable artificial intelligence for human decision support system in the medical domain," Machine Learning and Knowledge Extraction, vol. 3, no. 3, pp. 740–770, Sep. 2021.
[51] Q. Hu et al., "Explainable artificial intelligence-based edge fuzzy images for COVID-19 detection and identification," Applied Soft Computing, vol. 123, Jul. 2022.
[52] B. Aldughayfiq, F. Ashfaq, N. Z. Jhanjhi, and M. Humayun, "Explainable AI for retinoblastoma diagnosis: interpreting deep learning models with LIME and SHAP," Diagnostics, vol. 13, no. 11, Jun. 2023.
[53] J. K. Kim, M. N. Bae, K. Lee, J. C. Kim, and S. G. Hong, "Explainable artificial intelligence and wearable sensor-based gait analysis to identify patients with osteopenia and sarcopenia in daily life," Biosensors, vol. 12, no. 3, Mar. 2022.
[54] T. Mahmud, K. Barua, S. U. Habiba, N. Sharmen, M. S. Hossain, and K. Andersson, "An explainable AI paradigm for Alzheimer's diagnosis using deep transfer learning," Diagnostics, vol. 14, no. 3, Feb. 2024.
[55] S. D. Mohanty, D. Lekan, T. P. McCoy, M. Jenkins, and P. Manda, "Machine learning for predicting readmission risk among the frail: explainable AI for healthcare," Patterns, vol. 3, no. 1, Jan. 2022.
[56] J. Ma et al., "Towards trustworthy AI in dentistry," Journal of Dental Research, vol. 101, no. 11, pp. 1263–1268, Oct. 2022.
[57] C. Duckworth et al., "Using explainable machine learning to characterize data drift and detect emergent health risks for emergency department admissions during COVID-19," Scientific Reports, vol. 11, no. 1, Dec. 2021.
[58] M. Merry, P. Riddle, and J. Warren, "A mental models approach for defining explainable artificial intelligence," BMC Medical Informatics and Decision Making, vol. 21, no. 1, Dec. 2021.
[59] N. Aslam, "Explainable artificial intelligence approach for the early prediction of ventilator support and mortality in COVID-19 patients," Computation, vol. 10, no. 3, Mar. 2022.
[60] L. M. Thimoteo, M. M. Vellasco, J. Amaral, K. Figueiredo, C. L. Yokoyama, and E. Marques, "Explainable artificial intelligence for COVID-19 diagnosis through blood test variables," Journal of Control, Automation and Electrical Systems, vol. 33, no. 2, pp. 625–644, Apr. 2022.
[61] P. A. Moreno-Sánchez, "Improvement of a prediction model for heart failure survival through explainable artificial intelligence," Frontiers in Cardiovascular Medicine, vol. 10, 2023.
[62] S. Sarp, M. Kuzlu, E. Wilson, U. Cali, and O. Guler, "The enlightening role of explainable artificial intelligence in chronic wound classification," Electronics (Switzerland), vol. 10, no. 12, Jun. 2021.
 

Published with license by Science and Education Publishing, Copyright © 2024 Nzenwata U. J., Ilori O. O., Tai-Ojuolape E. O., Aderogba T. A., Durodola O.F., Kesinro P. O., Omeneki E. N., Onah V.O., Adeboye I.V. and Adesuyan M.A.

Creative CommonsThis work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/

Figure 1. PRISMA Flow Diagram of the Literature Search and Selection Process, showing the number of studies identified, screened, extracted, and included in the review