The firing and swift rehiring of Sam Altman as head of OpenAI, the firm behind ChatGPT, has exposed the organisation’s internal instability. An oddity in the world of tech, OpenAI has two faces: on one side, it’s a non-profit working towards the safe development of AI; on the other, it’s a conventional company trying to make money. Sam Altman therefore found himself torn between techno-optimism and techno-pessimism, and the philosophical conflict almost cost him his job.

The temple of AI is burning. The fire started on Friday 17 November 2023 at the headquarters of OpenAI in San Francisco – the American firm everyone has known about since it made history in November 2022 with the launch of the generative AI program ChatGPT. In a short and cryptic press release, the board of directors ousted the company’s CEO and cofounder Sam Altman, only to rehire him five days later, after a few plot twists.


‘Almost all of OpenAI’s 770 employees signed a letter demanding that the board of directors resign’


The public execution of Californian tech’s golden boy met with a major backlash. Within the company, chairman Greg Brockman took up the CEO’s cause and stepped down with him after being outvoted by his own board. On the outside, the head of Microsoft – a key investor in OpenAI, on whose processing power the firm also relies – announced the creation of a new AI research team, which Altman would lead. Then came a revolt from below: almost all of OpenAI’s 770 employees signed a letter threatening to follow their former CEO out the door unless he returned and the board of directors resigned. After a short period of silence, the board appeared isolated and weakened in its own house… until a new plot twist on Tuesday evening: OpenAI announced via X that an agreement had been reached to reinstate Sam Altman, in exchange for the replacement of two members of the board.


Business and watchdog in one

To understand why, we need to do what might previously have seemed unnecessary: introduce OpenAI, because it’s not just another tech company. It was founded in 2015 as a non-profit dedicated to AI research and, specifically, to the creation of an artificial general intelligence (AGI): a developmental stage of AI defined in OpenAI’s charter as “highly autonomous systems that outperform humans at most economically valuable work.” This turning point has long been an object of both fascination and fear among researchers, since this increased autonomy – the scope of which is as yet unknown – could lead to an AI capable of setting its own objectives, objectives which might diverge from the interests of humanity.


‘OpenAI has therefore established itself as a hotbed of research and a safety watchdog’


OpenAI has therefore established itself as a hotbed of research and a safety watchdog. Its original mission was to move forward with caution, ensuring that the development of AGI remains grounded – or “aligned”, to use Silicon Valley’s own dialect – in solid moral principles. But then, in 2019, OpenAI created a capped-profit subsidiary with the aim of attracting more investment and speeding up research. It thereby became a hybrid, two-headed organisation: on the one hand, an all-powerful board of directors keen to safeguard the original mission; on the other, a for-profit business embodied by the ebullient Sam Altman, dedicated to commercialising its products under the perplexed eye of the guardians of AI safety.


A tale of two philosophies

OpenAI then became the arena of a power struggle between two antagonistic philosophies of AI. On one side, a techno-optimism which sees AI as the key to human progress and wants research unleashed, subject only to the laws of the market; on the other, an attitude of caution which sees AI as an existential threat to humanity and fears that its development, if left to the rules of competition alone, could turn into a nightmare.


‘Who would refuse to be the leader of a global market because of philosophical principles?’


We mustn’t frame this struggle in black and white terms. Like most leaders in the AI industry, Sam Altman has repeatedly stated his commitment to safety, as well as his awareness of the risks related to AI. But that doesn’t tell us much about his real beliefs: in Silicon Valley, voicing one’s concerns about these risks has become a customary and tactically savvy move. The real question, and the only one which really matters, is that of money. To what extent will these considerations lead to the creation of dedicated teams and the redirection of funds? Can a company really postpone the commercialisation of a product it has successfully pioneered? Who would refuse to be the leader of a global market because of philosophical principles? Who could sabotage their own company in its moment of glory, even while aware of the risks it might pose?

One of AI’s most stringent watchdogs is “effective altruism” (EA), a moral philosophy movement inspired by Australian philosopher Peter Singer which has won over a community of activists in the worlds of tech and finance. Their aim is to promote the greatest good in the most efficient and rational way possible; and it’s no coincidence that two vocal proponents of EA – Tasha McCauley and Helen Toner – were members of the board of directors which ousted Sam Altman, before having to step down as part of the deal which brought him back.


A struggle for authority

This helps explain the tension which must have followed Altman’s recent announcement, at the company’s first developer conference, of a GPT Store selling personalised GPTs adapted to a range of uses. Did the board believe that Altman’s new ambitions constituted a dangerous acceleration, one which endangered OpenAI’s founding mission and, by extension, humanity as a whole? Revelations in the New York Times suggest so. Accused of compromising OpenAI’s future, the effective altruist Helen Toner is said to have replied that if the company lost sight of its aim of developing an AI which “benefits all of humanity”, then its destruction would be consistent with the board’s mission.


‘The idea was simple: to internalise the best AI research in order to steer it in the right direction’


In saying this, Toner revealed the limits of OpenAI’s initial project in a sector as unregulated as tech. The biggest actors in AI research operate under the umbrella of private companies whose main goal is to get ahead in the innovation race. OpenAI’s board of directors was a statutory anomaly in this free market: it had given itself the mission of regulating the sector and protecting the interests of society, as a state would. The idea was simple: to internalise the best AI research in order to steer it in the right direction.

But underlying the OpenAI saga is a struggle for authority. In summarily dismissing Sam Altman, the board of directors was exercising a regulatory power it believed it had granted itself. In doing so, it discovered that this power had slipped from its grasp. The show of strength soon turned into an admission of weakness: subject to investor pressure, reliant on Microsoft’s technology, exposed to competitors who promptly offered to recruit its discontented employees, the board took the full measure of how a market works. You can’t regulate it from above, as a state would. Not everyone has that kind of power.


Picture © Jaap Arriens / NurPhoto
Translated by Jack Fereday
2023/11/22 (Updated on 2023/11/29)