For a long time, technology was synonymous with progress and modernity – in short, with something desirable. But today, some see it as a source of concern, perhaps even a mad rush that could lead to our destruction. How did such a paradigm shift happen? In this essay, which serves as a preface to the recently published book The Meaning of Tech (in French), Anne-Sophie Moreau explores the underlying reasons for this technophobia.

Has our era become technophobic? For a long time, we were carried by the ideal of the Enlightenment: the miracles of modernity led humanity to believe that technological advances would necessarily go hand in hand with the advent of a better society. A contagious enthusiasm that lasted at least until the Belle Époque and its Universal Exhibitions, when inventors redoubled their creativity and optimism to sketch out a flamboyant tomorrow. But for some philosophers, this joy was gradually tainted with a certain scepticism. Already in the 18th century, starting with his Discourse on Inequality (1755), Jean-Jacques Rousseau wondered whether man’s extraordinary capacity for infinite progress implied an equal ability to use the fruits of his inventiveness wisely. Our species, he explained, is distinguished by its “perfectibility”: this allows us to master the arts and crafts and can make us better beings; but it can also make us cruel, and even cause us to sink beneath the animals, when we misuse our faculties. The philosopher’s message is clear: our morality doesn’t necessarily grow with our technological prowess.

Today, Rousseauist concerns seem to resonate in fearful discourses on the consequences of automation for human relationships, for example, or on the need to return to a form of sobriety in order to escape a life-destroying productivism. A new fear of the apocalypse has arisen, or at least a fear of having to mourn the world we once knew: must we really choose between eco-anxiety and solastalgia? These anxieties are as pervasive as our representation of the threat is vague: the risks linked to toxic substances or pollutants are distant, invisible. In The Risk Society, sociologist Ulrich Beck explains that “threats from civilization are bringing about a kind of new ‘shadow kingdom’, comparable to the realm of the gods and demons in antiquity.” This was written in 1986, the year of the Chernobyl disaster. Enlightened catastrophists warn us: our outsized intelligence could drive us to extinction.

As for those who don’t believe in the apocalypse, many of them still wonder about the “always more” mindset that has determined our approach to technology so far. It seems that man now feels tiny in the face of his own creations, and fears losing his grip on them. Theorists of the singularity claim that human intelligence risks being overtaken by AI – or simply dumbed down by our addiction to smartphones, we’re tempted to add. This can lead many of us to feel that “Promethean shame of being oneself” which Günther Anders spoke of in The Obsolescence of Man (1956): the all too human fear of not being able to match the machine. The recent arrival of ChatGPT in classrooms and workplaces is inflicting a terrible vexation on us: why bother to think for ourselves when an AI can deliver a clear, well-argued, and almost instantaneous answer to all our questions? Or worse: what will remain of our critical spirit and ability to judge in a world in which a superintelligence has imposed a way of thinking that is certainly elaborate, but hopelessly one-sided?


‘Technology is much more than a tool. We don’t use it like we do a hammer, to drive a nail’


The extraordinary power of technology

The problem is that technology is much more than a tool. We don’t use it like we do a hammer, to drive a nail. When we speak of “technology”, we no longer mean the old sense of the word, which once referred to a “science of techniques”; we imply that technology forms a system, that it imposes something like a logic – a logos, which in Greek means both “language” and “reason”. The tool is an extension of our arm; it doesn’t deprive us of the decision to carry out this or that action. Just because I own a hammer doesn’t mean I have to drive nails. Technology, on the other hand, does more than extend our capabilities. It may not have a will of its own, but it exercises a certain form of power insofar as it upsets and reorganises our ways of life in a structural way. Martin Heidegger insisted on the difference between an old wooden bridge connecting two banks – the result of a technical know-how that lets nature live – and a power plant that “challenges forth” the river to deliver its hydraulic pressure (The Question Concerning Technology, and Other Essays, 1977). The latter “dams up” nature. It sets up a device that transforms everything into resources and tends towards an end we no longer master. Today, the question arises both in the field of energy and in that of transportation: the car isn’t just a means of getting around; it involves an infrastructure that profoundly modifies our territorial organisation and imposes the consumption of certain resources, to the extent that we have ended up becoming dependent on them. When did we decide to establish a carbon-based civilisation with such harmful but foreseeable consequences? And how can we get rid of it without sinking into a generalised climate of techno-pessimism?

Faced with these dizzying questions, the temptation is strong to simply condemn technology. You can’t stop progress, the saying goes; of course you can, answer those in favour of degrowth. It might be delusional to want to solve the climate crisis with connected gadgets, but it would be a shame to condemn any innovation that could allow us to move forward – as long as we stop seeing technological advances as the engine of a blind economic growth that pollutes the planet and from which a large part of the population hardly gains anything tangible, and start seeing them as a tool for social, environmental, and cultural progress. For centuries, technology has improved people’s daily lives. Should we then, for purely ideological reasons, deprive ourselves of its advances? And moreover, refrain from developing those of the future in the name of a religion of nature that blindly condemns any form of artifice?


‘The sign of a technology that works is that we forget it. This is why we tend to neglect what it brings us on a daily basis’


The sign of a technology that works is that we forget it. The extraordinary power that the machine gives us only makes itself felt when it escapes us – like when the computer crashes and we get annoyed at the idea of having to do without it for a few minutes. This is why we tend to neglect what technology brings us on a daily basis: once it’s there, it blends into the background. Take remote communication tools: being able to exchange with colleagues by videoconference is now a given. These aren’t just tools, but technologies, in the sense that they have profoundly changed the world of work – including in terms of recruitment, since it’s now possible to hire people who would never have had access to certain positions before. They impose their device on us, Heidegger would rail, but isn’t that also in the interest of certain workers? For many professions, remote work is an achievement we simply can’t imagine going back on. Who would want to renounce the physical and mental liberation it has given us from such burdens as traffic or office noise? It’s not about claiming that these technologies have no perverse effects, but about recognising their advantages, and above all, remembering that it’s everyone’s responsibility to avoid their misuse. Making remote work a vector of emancipation isn’t a technological problem, but a matter of human decisions: managers don’t have to multiply online meetings or digitally monitor their employees...

And finally, let’s try to remember that we’re far from being able to imagine future innovations. The technological revolutions of tomorrow are not yet possible, as Henri Bergson would say. “If I knew what the great dramatic works of tomorrow will be, I would write them,” the philosopher replied when asked for his views on the future of literature. We mistakenly tend to think that the works to come are shut up in a “cabinet of possibilities” to which we need only obtain the key, he explains in The Possible and the Real (1930). The same goes for future discoveries: once they occur, the possibility of discovering them will seem obvious to us; but for now, we have no idea what they might look like. Who could have imagined, in the past, the arrival of a vaccine against cancer or even nuclear fusion? One thing is certain: it will be our responsibility to decide how we want to use future technologies, and what world we will then allow to take shape.


‘The way we demonise technology is connected to our mania for ascribing imaginary powers to it’


Taking responsibility

Basically, making technology a power beyond our control is an easy way to absolve ourselves of our responsibilities. The way we demonise it is connected to our mania for ascribing imaginary powers to it, as Gilbert Simondon has shown. The philosopher offers an insightful reflection on our ambiguous relationship with machines, especially robots, which we readily fantasise as autonomous, uncontrollable creatures, even endowed with hostile intentions towards us. These fears are now being reactivated by the prospect of seeing them replace waiters and caregivers and plunge us into a dehumanised world. But “the robot does not exist, […] it is not a machine, any more than a statue is a living being, but only a figment of the imagination,” Simondon argues (On the Mode of Existence of Technical Objects, 1958). We’re obsessed with this fictional figure because we wrongly believe that the more elaborate a machine is, the closer it is to the automaton; whereas in reality, the machine endowed with the highest technicality is, on the contrary, an “open machine”, which “contains a certain margin of indeterminacy.” Moreover, “the set of open machines presupposes man as a permanent organiser, as a living interpreter of the machines in relation to each other.” ChatGPT has no consciousness. It remains programmed by humans. AI is certainly biased, but that is because we feed it with our own biases. It’s up to us to orchestrate our use of these technologies, instead of fantasising about their omnipotence.

With great power comes great responsibility. No wonder technology and its development raise ethical and political questions. The tool extends the arm of the worker; technology, on the other hand, extends the capacities of all humanity. This “enlarged body awaits a supplement of soul,” Bergson writes in The Two Sources of Morality and Religion (1932): it’s up to us to direct technology towards the advent of a better society, instead of expecting society to adapt to technological innovations. In other words, our ethics must progress as fast as our science.


‘Our ethics must progress as fast as science’


In his essay Is Capitalism Moral? (in French, 2004), the philosopher André Comte-Sponville distinguishes three orders: “economic-technical-scientific”, “legal-political”, and “moral” or “ethical”. According to him, technology, like capitalism, belongs to the first order: it’s only concerned with the possible and the impossible (unlike politics, which decides what is legal or illegal, and morality, which judges what is good or bad), which means that it is basically “amoral”. Leaders therefore have the responsibility of questioning the ethical nature of their activity. Increasingly, decision-makers are being asked to question not only what they can do, but what they should do; and so the responsibility of distancing themselves from some of the possibilities offered by technology falls on their shoulders too – not just on those of philosophers.


Ideas in the book

Following a meeting with Olivier Girard, who heads Accenture in France and Benelux and supports companies on issues related to their technological transformation, the idea arose of bringing together leaders and philosophers to discuss the societal upheavals induced by technology. These exchanges culminated in The Meaning of Tech, a collection of interviews enriched by Philonomist’s philosophical perspective and Olivier Girard’s technological insights, and published in France in May 2023 by Philosophie Magazine Éditeur.

The purpose of this book is to create a dialogue between personalities from two worlds that might seem far removed from each other: that of theory and that of practice. To our delight, the thinkers and leaders we contacted agreed to play along and compare their views on some of the trickiest aspects of the societal upheavals caused by technology. In this book, they discuss the impact recent inventions may have had on fields as varied as work, mobility, sovereignty, and even the training of our brains.

  • In an introductory dialogue, the anthropologist Pascal Picq compares his views with those of Olivier Girard: what makes our era one of technological rupture? Does technological development necessarily lead to political progress or more democracy? How can we make sure it goes hand in hand with the development of a just society?
  • The philosopher Julia de Funès and Christophe Catoir, the head of Adecco, discuss the ways in which new technologies are disrupting the world of work: can we trust algorithms to facilitate recruitment? Does remote work mean more autonomy for employees? Are we really doomed to lose our jobs because of technology?
  • The geographer Michel Lussault wonders about the future of mobility with Catherine Guillouard, shortly before her departure from the RATP group, which she led for several years: how can we imagine low-carbon transportation that remains accessible to all? Is hydrogen a solution for the future? Who will fund the transition to a greener mobility?
  • The philosopher Catherine Malabou and the president of the École polytechnique Éric Labaye discuss the limits of our brain in the face of AI, whose genius can sometimes be the object of unwarranted fantasies: are we really destined to be overrun by AI? Can a machine be creative? How can we change our approach to training to meet these new challenges?
  • Patrice Caine, CEO of the Thales group, talks with computer scientist and philosopher Jean-Gabriel Ganascia about the question of our sovereignty in the digital age: what is left of states in a world globalised by the internet? Can we avoid the threat of cyberwar? And where should we draw the line between individual freedoms and the security provided by technologies such as digital surveillance?
  • Patricia Barbizet, who chairs the board of directors of the Paris Philharmonic, talks with corporate philosopher Luc de Brabandere about the complex relationship between creativity and technology: what is the difference between creativity and innovation? Do technological possibilities stifle creativity? How can we rediscover the meaning of the artistic experience in an era of dematerialisation?
  • And finally, Angeles Garcia-Poveda, chair of the board of directors of Legrand and headhunter for the recruitment firm Spencer Stuart, discusses the impact of technology on diversity with Laurence Devillers, professor of computer science and specialist in human-machine interactions: why are women so absent from tech companies? Is AI biased? How can we create inclusive technology?

Marked by their open-mindedness and curiosity towards the future, these discussions between leaders and thinkers open up new perspectives – without claiming to resolve all the questions posed by technology, of course. They sometimes reveal disagreements, but also surprising convergences on certain subjects. Notes of optimism mingle with areas of concern, while new questions emerge. All of which should help fuel a calm and resolutely thoughtful debate on the future of technologies, which won’t stop changing our lives anytime soon.


Picture © Alex Shuper / Unsplash
Translated by Jack Fereday
2023/05/23 (Updated on 2023/07/07)