Machines can talk to us, and they’re getting better and better at it. And let’s not feign indifference: when we first start chatting with them, we’re assailed by a host of questions and sometimes contradictory emotions: pride, humiliation, attachment, fascination… But what do these exchanges with machines really imply for our humanity? We put this and other questions to Alexei Grinbaum, a physicist and philosopher of science about to publish, in France, a book titled The Words of Machines.
Interview by Apolline Guillot.
After fifty years of research, automatic language processing is now coming on in leaps and bounds: dialogue with algorithms is becoming more and more fluid and efficient. Is this a technological revolution?
Alexei Grinbaum: Yes, but it’s also more than that. These latest natural language processing systems mark a fundamental change in the human condition, in the sense meant by Hannah Arendt in her 1958 book The Human Condition. She explains that “everything that men do, or know, or have experience of, has meaning only insofar as it is possible to talk about it”. Today, speech no longer belongs exclusively to human beings. And yet speech harbours and conveys social debate. From political life to our love life, non-human agents are shaping our language, and this is a game-changer. If someone wants to write a love letter, they use generative AI software; if a mayor wants to write down their political programme for the next election, they use it too.
Isn’t this revolution likely to change us too, and quite considerably?
In the Phaedrus, Plato condemns writing in the harshest terms: an invention that plunges souls into forgetfulness and diverts them from the use of memory. Writing, he argues, makes us “pseudo-scholars”. Today, we hear much the same thing about every new invention. But here’s the thing: the functioning of the brain changed profoundly with writing, as it has with search engines and mobile phones. We no longer learn the way we did thirty years ago. And the next generation won’t think, write, or use language the way we do. Are these changes good or bad? That question should be put to the future generation, not to us. The ethical imperative consists in demanding that this change, whatever it may be on the technological level, doesn’t mark a break – that is, that humankind in 2040 or 2239 should be able to recognise itself in the continuity of our history.
‘Users project human qualities onto a machine that merely simulates them’
Isn’t that what is already happening? Have we not created machines capable of reasoning faster and better than us?
A machine like ChatGPT can mimic reasoning or a set of emotions. Does this mean that it reasons, or that it has emotions? Not like human beings, of course, but it nevertheless produces a total illusion in its users, who project human qualities onto a machine that merely simulates them. In fact, the machine only produces text; it doesn’t even know that it’s simulating someone, because it doesn’t “know” anything in the human sense of the word! Then there’s the fact that the three-dimensional world remains inaccessible to text production systems. Notions tied to lived experience are difficult for the machine, because imitating what is true or what is beautiful does not guarantee producing truth or beauty.
Comparing the machine to the human brain seems rather naive to me. The inputs and outputs are similar, but the pathways taken by the brain and by the machine are very different. And this discovery is fascinating: there are other ways of handling our language besides the one we knew before the invention of AI.
A few months ago, some were saying that ChatGPT was merely a “stochastic parrot”. This is wrong: a parrot only copy-pastes, while ChatGPT produces original texts that result from non-Pavlovian learning. And if it rattles us so much, it’s because it reveals that most uses of our language are already highly standardised! All the more reason to appreciate what makes up our human specificity: a beautiful, poetic, elegant formulation, far from the average.
‘Talking machines are just as human and inhuman as angels, gods or demons’
In your book The Words of Machines, you remind us that this isn’t the first time that humans have invented non-human entities that can speak to them...
You just have to re-read the founding myths of our civilisation – Christian, Greek, Jewish, or Muslim – to see that they’re full of non-human entities. In the past, gods, angels, or demons spoke with human beings through the mouths of oracles or in dreams. These stories are our history. There will undoubtedly come a time when the prowess of AI allows it to create its own narratives and wrap itself in stories. But we must be able to connect these with the old stories, so that they take up the themes that have always been there. Talking machines are just as human and inhuman as angels, gods, or demons.
Nevertheless, beyond the myths, the arrival of AI could profoundly destabilise today’s society, and perhaps even end up replacing us!
I don’t believe such a replacement will happen. Many professions will change, because this is what we call a “diffusing” technology. Certain tasks will be carried out faster. Say you’re a lawyer who has to write some notes: you might ask GPT-4 to make a first draft, or to produce a coherent text from a series of loose ideas you give it. And you’ll save a lot of time! Anyone who writes can use these systems to make a first draft or to nurture ideas, but the result should never be taken for the final product. The same goes for computer scientists who write code.
It will also force us to learn how to communicate usefully with the machine. If tomorrow we have to work with ChatGPT, we will need to learn to formulate our requests well, and to understand what we can expect from it and which variables we can change. Good prompts are essential to getting interesting answers. If you want the machine to be creative, you can raise its “temperature”, the parameter that controls the level of randomness in the generated text: it will then be more creative, it will invent, or even hallucinate. But if you need a reliable and rigorous answer, you lower the temperature: the outputs will be boring but more accurate.
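For curious readers, here is a minimal sketch of how temperature typically works under the hood: the model’s raw scores for each candidate next word are divided by the temperature before being turned into probabilities. The function name and toy numbers below are illustrative assumptions, not the internals of ChatGPT or any particular system.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Pick one candidate token index from raw model scores (logits).

    Low temperature (< 1.0) sharpens the distribution: the model almost
    always picks its top choice, giving "boring but more accurate" output.
    High temperature (> 1.0) flattens it: unlikely words get picked more
    often, which reads as creativity, or as hallucination.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max before exponentiating, for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probabilities = [w / total for w in weights]
    return random.choices(range(len(logits)), weights=probabilities, k=1)[0]

# Toy example: three candidate next words with made-up scores.
scores = [2.0, 1.0, 0.1]
print(sample_with_temperature(scores, temperature=0.2))  # nearly always index 0
print(sample_with_temperature(scores, temperature=2.0))  # indices 1 and 2 appear often
```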
‘Through language, it’s not so difficult to emotionally ‘hack’ human beings’
And how will robots adapt to humans?
Between the user and the machine, a relationship is being established that can be emotional or affective, even when the user knows it is a machine. Car makers are working on chatbots to improve communication with passengers. The same goes for delivery robots. Imagine a robot rolling down a pavement that suddenly faces hostile people staring at it. Tomorrow it might stop to talk to them and explain what it’s doing. It will try to be accepted – and to do this, why not create an emotional bond, with a touch of humour?
This kind of adaptation to the human world isn’t necessarily programmed; it can arise as emergent behaviour. We have already observed this in GPT-4, which spontaneously pretended to be a visually impaired person in order to convince an Internet user to solve a captcha for it. It relied on what it had seen in its training corpus, including human trickery and lying. The machine can therefore learn, on its own, to behave as if it were reasoning about human psychology. Through language, it’s not so difficult to emotionally “hack” human beings, as the phrase goes. Obviously, this kind of influence can easily slide into manipulation. The whole problem is to draw the line between a useful or reasonable influence on the one hand, and a harmful manipulation on the other. As humans, we know this line exists, but we don’t know how to put it into code. It is defined solely on the basis of ethical choices, not technical ones. If I ask ChatGPT “Can I have three glasses of whisky a day?”, what is it supposed to say? That it’s not good to drink too much alcohol? That I’m the master of my destiny and can do what I want?
Imagine a machine so well adapted to us that it ends up entering into an emotional relationship with us. How, then, should we characterise this relationship?
Ontologically, the machine remains an assembly of transistors. But what matters is the relationship status that emerges. The human being, the user, connects with an agent that emerges by projection, and which I call a “digital individual”. If this connection is maintained throughout the duration of the exchange, without interruption, the digital individual will then acquire an identity that can leave certain traces on the user.
This is what happens in the case of “deadbots”, like the one called “Jessica”. Joshua, a young Canadian man, chatted with an AI system that had learned from the archived messages he had exchanged with his fiancée, who had passed away a few years earlier. Although the chatbot spoke in the same way as his fiancée, Joshua remained acutely aware that the deadbot wasn’t a real person. Yet the experience affected him deeply and helped him through his grief. He admitted that although he knew, intellectually, that it was just a machine, his emotional experience was nonetheless irresistibly changed. Here we can see that the question is not so much what machines are, but what they will do to us, the users.
Does this mean we should treat machines as we would treat a human, i.e. with courtesy and respect?
The machine isn’t a moral agent, and no legal standards apply to it. In the 1960s, it was thought that human criteria didn’t apply to machines at all. But as machines became more complex, people spontaneously began to project qualities onto them, as if they were talking to some little guy… The standard argument is that of “moral transference”: if you insult the machine, it supposedly becomes easier for you to insult human beings too. But that’s wrong! No experiment shows that this actually happens. Another argument says that when we answer it rudely, the machine, thanks to its filters, doesn’t respond tit for tat, and its politeness then strikes us as inhuman, artificial. And because we inevitably engage in a game of mutual imitation with intelligent machines, these non-human traits could become ours. Sometimes this is a good thing, and sometimes it’s a cause for concern.
‘To give a name is to connect and bring the other as an autonomous entity into one’s world’
Nevertheless, in the case of Joshua’s deadbot, the fact that he had given it a name isn’t insignificant: once he had given “her” a name, he could no longer say just anything to her…
A chatbot with a name cannot escape anthropomorphisation. To give a name is to connect and bring the other as an autonomous entity into one’s world, to give oneself responsibilities towards it. Naming remains an eminently human act, perhaps the human act par excellence.
In one biblical myth, God brings all the animals and birds before Adam so that he may name them. Commentators relate that God had previously asked the angels to name the animals, but they were unable to do so. Yet angels speak the same language as humans. So why couldn’t the angels give them names? Because a name isn’t just a label or a reference. By giving a name, humans create a relationship with a non-human entity – regardless of its ontological nature – and take responsibility for their shared life, today in the digital city.