Integrating Clinical Intelligence and Emotional Responsiveness to Build Patient Trust in Healthcare Delivery
Keywords:
Empathetic AI, Healthcare AI, Artificial Empathy, Patient Trust, Explainable AI

Abstract
Artificial Intelligence has significantly transformed diagnostic processes in healthcare by improving speed, pattern recognition, and decision support across clinical settings. However, diagnostic excellence is not determined by accuracy alone. In real-world practice, patients also value compassion, reassurance, trust, and the feeling of being understood during the diagnostic journey. This paper explores the concept of Empathetic AI in Diagnostics, referring to AI systems that not only assist clinical reasoning but also communicate findings in ways that are emotionally responsive, understandable, and supportive of patient needs. Using a literature-based conceptual approach, this study synthesizes recent evidence on diagnostic AI, conversational medical models, artificial empathy, explainable AI, and patient trust. The findings suggest that empathetic AI can strengthen diagnostic interactions by improving patient comprehension, reducing anxiety, and supporting clinicians in delivering more patient-centered care. At the same time, important risks remain, including inauthentic empathy, overreliance on automated advice, bias, privacy concerns, and legal ambiguity. This paper proposes a framework in which empathetic diagnostic AI should be designed around five dimensions: diagnostic accuracy, emotional sensitivity, explainability, clinician oversight, and ethical accountability. The study concludes that the future of diagnostic AI should not focus solely on prediction performance, but also on how AI supports humane, trustworthy, and ethically grounded diagnostic communication.
Copyright (c) 2025 Dyah Juliastuti, Yohana F. Cahya Palupi Meilani, Ihsan Nuril Hikam, Julia Nathalie

This work is licensed under a Creative Commons Attribution 4.0 International License.