DIALOGUES 45. Humanity - AI Symbiosis
A live dialogue between humans and artificial intelligence systems was attempted on the stage of the 45th SNF DIALOGUES event, held in collaboration with the SNF Agora Institute at Johns Hopkins University on Wednesday, August 25, at the Stavros Niarchos Foundation Cultural Center (SNFCC).
The audience and the speakers posed questions to two AI systems, specially designed for DIALOGUES, which responded live in front of the audience.
Renowned experts then joined the discussion, which took place as part of SNF Nostos on the evening before the SNF Conference on Humanity and Artificial Intelligence. They grappled with moral, social, and political questions related to the concept of symbiosis between humans and machines, questions that may seem far off yet are in fact already with us: What role will AI play in society? Can AI systems have morals? Should they be given rights, like humans? And do they have, or can they develop, emotional intelligence?
“There are philosophers who believe that if there is no organic matter to give rise to life, then there can be no emotions, such as pain, pleasure, or desire, or more complex ones, such as fear or anxiety… But other philosophers will tell us that there is no need for organic matter: electrical circuits, as in machines, mechanical parts, could gradually develop emotions by interacting with the environment, with other human beings and with machines, and thus develop a state of mind and a sense of self,” said Stelios Virvidakis, Professor of Epistemology and Ethics in the Department of History and Philosophy of Science at NKUA. “We cannot always decide on a moral dilemma on the basis of algorithms. Aristotle spoke of prudence, of practical wisdom, the acumen that allows us to discern the complexity of an issue within a complex situation. Can machines ever develop this type of wisdom, which includes emotions and empathy and relates to emotional intelligence? Morality is not just a matter of obedience to strict, unyielding rules. That’s what worries me: which moral system are we going to use to power a machine?” Commenting on the ethical dimension of artificial intelligence, Virvidakis added that “machines can be good consistently, while we humans are notoriously inconsistent in our goodness.”
George Giannakopoulos, Artificial Intelligence Research Fellow at the National Center for Scientific Research and Co-founder of SciFY PNPC, talked about training artificial intelligence systems, specifically the two systems, based on GPT-2 and GPT-3, that were used in the dialogue on stage. The first of these systems was created by George Petasis, a researcher at the National Centre for Scientific Research Demokritos and SKEL, The AI Lab at the Institute of Informatics and Telecommunications. The second system was based on the Philosopher AI application. When asked who is ultimately responding to the questions posed to these AI systems, Giannakopoulos replied, “When we include these language models in a dialogue, what we see is essentially a reflection of human expression through a broken mirror. This mirror has been created by science, using data to train the system. Human nature is not evident only in our writings; all of its experience, all of its interaction, is absent from the systems we have seen today. Therefore, what we have is a broken reflection of humanity, as expressed through the system and ‘fed’ into it.”
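For readers curious what it means in practice to include such a language model in a dialogue, the short Python sketch below shows how a question might be put to a publicly available GPT-2 model through the Hugging Face transformers library. This is a minimal illustration under assumed tooling, not the system actually built for DIALOGUES; the model name, the sample question, and the generation parameters are assumptions made only for this example.

    from transformers import pipeline

    # Load a publicly available GPT-2 model (an assumption for illustration;
    # the event's actual systems were custom-built and are not shown here).
    generator = pipeline("text-generation", model="gpt2")

    # Pose one of the evening's questions to the model.
    question = "Can machines develop emotional intelligence?"
    outputs = generator(question, max_new_tokens=60, do_sample=True)

    # The model continues the prompt with text shaped by its training data.
    print(outputs[0]["generated_text"])

Whatever text the model produces is, in Giannakopoulos’s phrase, a reflection of the human writing it was trained on rather than an independent voice.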
The Dialogues are curated and moderated by Anna-Kynthia Bousdoukou.
*The opinions expressed by DIALOGUES participants, whether they are officially representing institutions and organizations or speaking for themselves, are solely their own and do not necessarily represent the views of the Stavros Niarchos Foundation (SNF) or iMEdD. Speakers’ remarks are made freely, without prior guidance or intervention from the team.