EXPLAINABLE ARTIFICIAL INTELLIGENCE IN HEALTHCARE: FROM ALGORITHMIC TRANSPARENCY TO TRUST AND SOCIAL ACCEPTANCE IN CLINICAL PRACTICE
Abstract
Background and objective: The rapid expansion of artificial intelligence (AI) in healthcare has resulted in substantial advances in diagnostics, prognostics, clinical decision support systems, and patient monitoring. Despite promising performance, many AI-based systems remain insufficiently understood by clinicians and patients due to their “black-box” nature. This lack of transparency may undermine trust, hinder acceptance, and limit safe integration into routine clinical practice. Explainable artificial intelligence (XAI) has emerged as a response to these challenges by enabling human-interpretable explanations of algorithmic decisions. The objective of this narrative review is to synthesize current evidence on XAI in healthcare, with particular emphasis on its technical foundations, clinical applications, influence on trust and decision-making, and broader social and ethical implications.
Scope of review: This review synthesizes literature published between 2019 and 2025 addressing explainability in medical AI. The analysis includes methodological studies, clinical evaluations, human–computer interaction research, and social science investigations related to transparency, trust, accountability, and acceptance of AI systems. Relevant publications were identified through structured searches of PubMed, MEDLINE, Scopus, and Google Scholar using keywords related to explainable AI, interpretability, ethics, trust, and clinical decision support.
Findings: XAI methods—including feature attribution, model simplification, counterfactual explanations, and visualization techniques—demonstrate potential to enhance clinician understanding of AI outputs and increase confidence in algorithm-assisted decisions. Evidence suggests that explainability may support diagnostic accuracy, reduce automation bias, and facilitate error detection. However, explainability alone does not ensure trust. Clinical context, user expertise, organizational culture, and regulatory frameworks play critical roles in shaping the adoption and appropriate use of explainable systems. Empirical research addressing patient perspectives remains limited.
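To make the feature-attribution family mentioned above concrete, the sketch below computes a simple permutation-based attribution for a hypothetical toy risk scorer. Everything here (the `WEIGHTS` dictionary, the linear `predict` function, the feature names) is an illustrative assumption, not a method or model from the reviewed literature; real clinical XAI tools typically use more sophisticated attributions such as SHAP or LIME.

```python
import random

# Hypothetical toy "risk model": a fixed linear scorer standing in for a
# trained clinical model (illustration only, not a real diagnostic system).
WEIGHTS = {"age": 0.5, "blood_pressure": 0.3, "noise": 0.0}

def predict(patient):
    """Linear risk score over the toy features."""
    return sum(WEIGHTS[f] * patient[f] for f in WEIGHTS)

def permutation_importance(patients, feature):
    """Mean absolute change in model output when one feature is shuffled
    across patients, breaking its association with the prediction."""
    shuffled = [p[feature] for p in patients]
    random.shuffle(shuffled)
    deltas = []
    for p, v in zip(patients, shuffled):
        perturbed = dict(p, **{feature: v})  # copy with one feature replaced
        deltas.append(abs(predict(p) - predict(perturbed)))
    return sum(deltas) / len(deltas)

random.seed(0)
patients = [{"age": random.random(),
             "blood_pressure": random.random(),
             "noise": random.random()} for _ in range(200)]

scores = {f: permutation_importance(patients, f) for f in WEIGHTS}
# The irrelevant "noise" feature receives zero attribution; the feature
# with the larger weight receives the larger attribution.
```

The point of such an attribution, in the clinical setting discussed above, is that it surfaces which inputs actually drive a given output, so a clinician can check whether the model is leaning on medically plausible features.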
Conclusions: Explainable AI constitutes an important step toward the responsible and socially acceptable integration of intelligent systems in healthcare. While XAI can enhance transparency and trust, its effectiveness depends on thoughtful design, contextual adaptation to clinical workflows, and alignment with user needs. Further interdisciplinary research is required to standardize explainability approaches, evaluate their real-world impact on clinical outcomes, and address the ethical, legal, and societal challenges associated with medical AI.
Copyright (c) 2026 Erwin Grzegorzak, Rafał Pelczar, Maciej Zachara, Mateusz Bartoszek, Patryk Harnicki, Mikołaj Grodzki, Jakub Minas, Paulina Dybiak, Adrian Morawiec, Paweł Słoma, Oliwia Krawczyk

This work is licensed under a Creative Commons Attribution 4.0 International License.

