Since its launch in March 2023, GPT-4 has stirred a great deal of debate. But what does the latest version of this chatbot really change? Apolline Guillot explores the question, drawing on the analysis of physicist and philosopher Alexei Grinbaum.

1. What can we expect from GPT-4?

GPT-4 is often described as an artificial intelligence. But behind this label, we forget that GPT-4 is first and foremost a machine for producing text: it has no reasoning or thinking of its own. It is what is known as a large language model, i.e. a statistical model trained on a vast quantity of data (books, press articles, web pages and social media) to learn to generate the words and sentences that follow a given sequence of words. The function of GPT-4 is therefore to produce relevant, sophisticated texts that best meet the need expressed by the prompt – the short sequence of words used to ask it to perform a task.
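To make this mechanism concrete, here is a minimal sketch of next-token prediction in Python. GPT-4’s weights are not public, so the sketch uses GPT-2 (an earlier open model from the same family, via the Hugging Face transformers library) as a stand-in; the prompt is purely illustrative, but the principle – a probability distribution over possible next words – is the one described above.

```python
# A minimal sketch of next-token prediction, the mechanism described above.
# GPT-4 itself is proprietary, so GPT-2 (an open model from the same
# family) stands in; the principle is the same.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The philosopher argued that"
inputs = tokenizer(prompt, return_tensors="pt")

# For the last position, the model outputs a score for every word in its
# vocabulary; softmax turns those scores into probabilities.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Print the five most probable continuations of the prompt.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")
```

Nothing in this loop reasons about the sentence; it only ranks words by how plausibly they continue the sequence.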

Contrary to what sceptics claim, GPT-4 isn’t a sophisticated “calculating machine”, or even an “approximate parrot”, to use the words of France’s Minister for Digital Transition, Jean-Noël Barrot. “The outputs are original, it’s not copy-paste or plagiarism,” says Alexei Grinbaum, physicist and philosopher of science. “The machine learns as we learn, by reading different books and listening to what is being said around us.” Yet that isn’t to say that GPT learns “like us”: “The input and output resemble what a human can understand and produce, but the path between the two has nothing to do with the reasoning in a brain, even if we arrive at the same type of results.”

 

‘If we judge GPT-4 to be ‘approximate’, it’s simply because we don’t know how to ask the right questions in the right way’

—Alexei Grinbaum

 

The comparison with a human being is therefore simply not relevant, whether to criticise the machine or to extol its performance. “GPT-4 isn’t a parrot, and if we judge it to be ‘approximate’, it’s simply because we don’t know how to ask the right questions in the right way,” Alexei Grinbaum explains. If we’re so often helpless in front of these new interfaces, it may be because we still treat them rather like talking encyclopaedias, or like search engines such as Google, which are designed to meet a very specific, targeted need – a definition, a gratin recipe, train tickets… The difference is that GPT-4 wasn’t designed to meet a specific, pre-existing need. On the contrary, it’s by learning to address it effectively and intelligently that we will really come to understand what we can do with it. Just as one must practise to learn how to use a jigsaw, one must have the curiosity to learn how to write requests to GPT-4 if one hopes to get anything out of it. Hence the proliferation of articles with titles that make you smile, such as “What should you ask ChatGPT?” or “How do you talk to ChatGPT?”.
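As a purely illustrative sketch of “asking in the right way”, the snippet below sends a vague request and a precise one to the same model through the OpenAI Python client (v1+). The prompts, the jigsaw example and the assumption of an OPENAI_API_KEY in the environment are ours, not the article’s.

```python
# An illustrative sketch of prompting: the same model, a vague request
# and a precise one. Assumes the openai package (v1+) and an
# OPENAI_API_KEY set in the environment; the prompts are invented here.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

vague = "Tell me about jigsaws."
precise = (
    "You are a carpentry instructor. In three numbered steps, explain how "
    "a beginner makes a first straight cut with a jigsaw, and name one "
    "common mistake to avoid."
)

for prompt in (vague, precise):
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
    print("---")
```

The model is identical in both calls; only the framing of the request changes, which is exactly the skill the article says we still have to acquire.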

 

‘The AI is so intent on obeying the human request that, searching too keenly for errors in its own output, it begins to ‘hallucinate’’

 

2. Why does GPT-4 sometimes invent crazy answers?

Sometimes, the AI is so intent on obeying the human request that, by searching too keenly for errors in its own output, it begins to “hallucinate”. If GPT-4 is asked several times in a row to cor…
