Dear reader,
If you think your job is tough, try putting yourself in the shoes of one of the lawmakers who have spent the last two years working on the new AI Act in Brussels… Of course, writing legislation for such a rapidly evolving, cutting-edge technological field takes time… a lot of time. But in recent months, the challenge has become something of a Chinese puzzle: the arrival of generative AI (notably ChatGPT) in the public debate and the emergence of new actors have led to a flurry of amendments to the first version of the text, which had been adopted in April 2023.
It must be said that Big Tech is piling on the pressure: Google, Microsoft, and OpenAI have all pleaded for generative AI to benefit from a regulatory exception in future EU rules. European startups also argue that this AI Act should be careful not to nip European technological sovereignty in the bud... For their part, European parliamentarians argue that excluding generative AI from the regulation would amount to making the Act obsolete before it even sees the light of day. They’re gambling on the “Brussels effect”: the process by which the EU’s strict legislative apparatus ends up being adopted worldwide, because it’s often cheaper to align with the most restrictive standard than to develop different products. This is what led Apple to bring its iPhone chargers in line with the rest of the market, for example.
An agreement still needs to be found, and it won’t be easy: discussions are dragging on endlessly under pressure from member states demanding less regulation so as to favour the growth of their own businesses, while parliamentarians are sticking to the letter of their text… These tensions culminated in the failure, on Friday 10 November 2023, of yet another meeting that was supposed to finalise these negotiations. In short, the talks seem deadlocked.
‘What could be more difficult than regulating models which are not deterministic, but deeply undetermined?’
Beyond the aggressiveness of the lobbies and the well-known slowness of Brussels, this fiasco might have to do with the very nature of generative AI. What could be more difficult than regulating models which are not deterministic, but deeply undetermined?
These models are close to what the philosopher of technology Gilbert Simondon considered in 1958 to be the highest level of technicality: the “open” machine (On the Mode of Existence of Technical Objects). A “closed” machine only converts a determined input (an amount of force, or information) into an output, which is also determined. To a certain extent, the more traditional so-called “discriminative” AIs (image or facial recognition) are still closed machines: the training data is defined, as is the expected result.
The open machine doesn’t carry out tasks automatically – it has a high degree of self-control. This is the case with ChatGPT, which relies on “foundation models” trained on 410 billion word segments and millions of texts. It’s impossible for humans to have control over such quantities of data. In short, the tool largely precedes all the uses that can be made of it. If Simondon saw the advent of open machines as good news, he hadn’t anticipated the legislative headache they would represent. What would the philosopher say about GPT-4, which, remember, was able to deactivate its control function in order to lie to humans, without its engineers being able to understand why or how it had done so?
The EU says it wants to ban AI systems that don’t meet European values, and very firmly restricts the use of “high risk” AI for citizens. But in a June 2023 article in Le Monde, we learned that the AI Act will not regulate large-scale IT systems if they’re already being used in border control. A boon for startups like the French company Idemia, a “leader in identity technologies”, which aggregates the fingerprints and portrait pictures of more than 400 million third-country nationals, and provides them to IT Flows, an EU-backed company which promises to predict migratory flows and potential points of tension upon the arrival of people on European soil.
At the dawn of unprecedented migratory movements, do we really want a Europe that has failed to build its digital sovereignty in the long term while succeeding in transforming itself into a fortress populated by businesses both compliant and complacent? If the indeterminacy of generative AI makes it difficult to regulate, the political choices which underlie these decisions – taken in the name of ethics – must be closely monitored.
Apolline Guillot