Embodied AI Ethics: A Phenomenological Critique of Current AI Explanation Systems

Authors

  • Wenrui Liang

DOI:

https://doi.org/10.54691/ae128c93

Keywords:

Embodied AI Ethics, Phenomenology, Explainable AI, Human-Computer Interaction, Moral Understanding.

Abstract

Current explainable artificial intelligence (XAI) systems face a fundamental challenge: the conflict between algorithmic explanations and embodied moral understanding. We examine two typical cases to reveal that human moral understanding possesses irreducible embodied characteristics that cannot be fully captured by existing AI explanation technologies. Based on Merleau-Ponty's embodied phenomenology[1], we propose "Embodied AI Ethics" as a new theoretical framework. This framework shifts the focus from making AI more moral to protecting and enhancing human moral capabilities. We demonstrate how current XAI technologies systematically exclude these embodied characteristics through processes of disembodiment, decontextualization, and desubjectification. Based on these findings, we propose four design principles for embodied AI ethics, which provide theoretical guidance for developing AI interfaces that respect and support human moral understanding.

References

[1] Merleau-Ponty, M. (2012). Phenomenology of perception (D. A. Landes, Trans.). Routledge. (Original work published 1945)

[2] Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671-732.

[3] Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87, 1085-1139.

[4] Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99.

[5] Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.

[6] S., Wang, B., Zhu, M., & Zhang, J. (2020). Effectiveness of explainable AI in medical diagnosis: A systematic review. Journal of Medical Internet Research, 22(8), e19340.

[7] Tonekaboni, S., Joshi, S., McCradden, M. D., & Goldenberg, A. (2019). What clinicians want: Contextualizing explainable machine learning for clinical end use. Machine Learning for Healthcare Conference, 359-380.

Published

2025-09-16

Issue

Section

Articles

How to Cite

Liang, Wenrui. 2025. “Embodied AI Ethics: A Phenomenological Critique of Current AI Explanation Systems”. Scientific Journal Of Humanities and Social Sciences 7 (10): 231-37. https://doi.org/10.54691/ae128c93.