Recalling the foundations of Computer Science, we show that AI systems, whether symbolic or connectionist, are fundamentally computational algorithmic systems.
Their data interpretations and decision-making lack semantics and are not grounded in reality; they result instead from human labelling and human design.
We argue that such systems cannot, in essence, experience emotions or show empathy. Neither can they create, nor be endowed with consciousness, in the sense that we give to these concepts when speaking about human beings. The increasing performance of statistical learning systems and their pervasiveness in industry, services and everyday life have somewhat blurred this reality. Technical robustness, transparent design and responsibility are key to developing and deploying trustworthy Artificial Intelligence systems that avoid misconceptions, respect human rights and preserve the environment.
Raja Chatila, IEEE Fellow, is Professor of Artificial Intelligence, Robotics and Ethics at Sorbonne University in Paris, France. He is director of the SMART Laboratory of Excellence on Human-Machine Interactions and former director of the Institute of Intelligent Systems and Robotics. Over his career he has contributed to several areas of Artificial Intelligence and of autonomous and interactive Robotics, and he has published about 160 papers. His research interests currently focus on human-robot interaction, machine learning and ethics.
He was President of the IEEE Robotics and Automation Society in 2014-2015. He chairs the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and is a member of the European Commission's High-Level Expert Group on AI and of the Commission on the Ethics of Research on Digital Science and Technology (CERNA) in France.