For the past half-century, Daniel Kahneman has been successfully debunking the idea that individuals and markets behave rationally – so successfully, in fact, that he was awarded the Nobel Prize in Economics.
The laureate of the 2002 Nobel Memorial Prize in Economic Sciences is probably one of the most unlikely winners in the award's history: by his own admission, Daniel Kahneman has never even taken an economics class. But as a psychologist working alongside his friend Amos Tversky, he is one of the main pioneers of "behavioral economics": a new field of knowledge which has influenced policy-makers and political leaders across the world, including the former US President Barack Obama. Much of Kahneman's success reflects the shortcomings of mainstream economists, who have traditionally lacked psychological insight. They prefer instead to promote the idea of a perfectly rational – and largely fictitious – Homo economicus, the cornerstone of their neo-liberal ideal of a free-for-all between competing individuals. In this sense, the 2008 economic crisis served as a dramatic reality check. By meticulously dismantling this image of man as a rational agent, using a combination of science and imagination, Daniel Kahneman has shown that we are actually quite the opposite – more often than not, we're busy making blunders... Why? Because our mind is a battlefield where two systems of thought are constantly competing: "System 1", which is fast, intuitive and over-confident, and "System 2", which is reasonable and conscientious, but easily distracted...
Let’s start at the beginning. At the age of 21, you were already developing psychological tests for the Israeli army…
Daniel Kahneman: Israel was still a young country, and experts were in short supply… Everything had yet to be invented. That test, which is still in use today, proved to be an enlightening experience. As a psychologist, I had to find a way of predicting a new recruit's behaviour in the field. The existing protocol had been to interview each recruit for about twenty minutes, discussing a variety of topics freely in order to get a general idea of his likely future performance. But the feedback we received a few months later almost always contradicted those first impressions. That's when I came across the work of the American psychologist Paul E. Meehl (1920-2003), who held that a simple statistical formula is more reliable than a person's intuitive judgment. So I came up with a very simple test: recruits were asked a set of predefined questions about unrelated attributes, such as "sociability", "responsibility" or "male pride", and scored on each from 1 to 5... It worked much better. The same goes for professional recruitment: it's more reliable to evaluate a set of specific skills than to follow a sudden inspiration and say things like "I looked him in the eye and liked what I saw."
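As a rough illustration of the structured scoring Kahneman describes, here is a minimal sketch in Python. The trait names come from the interview, but the equal-weight average and the 1-to-5 check are assumptions of mine; the interview does not say how the army actually combined the ratings.

```python
# A toy version of structured assessment: rate each attribute independently on a
# 1-5 scale, then combine the ratings mechanically, with no holistic adjustment.
TRAITS = ("sociability", "responsibility", "male pride")

def structured_score(ratings: dict) -> float:
    """Average of independent 1-5 ratings; assumes equal weights for illustration."""
    for trait in TRAITS:
        if not 1 <= ratings[trait] <= 5:
            raise ValueError(f"{trait} must be rated from 1 to 5")
    return sum(ratings[trait] for trait in TRAITS) / len(TRAITS)

recruit = {"sociability": 4, "responsibility": 2, "male pride": 5}
print(f"predicted suitability: {structured_score(recruit):.2f} / 5")
```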
Did this experience inspire you to work on the biases of intuitive thinking?
At the time, I was mainly struck by the irony of the situation: some of our strongest convictions can turn out to be utterly worthless. There was no logic to it, and I was the one who had to fathom it out – I was outraged! And the bad results we were getting did nothing to shake our confidence. At the time I called it an "illusion of validity". I only started to focus on intuitive biases much later.

How did that happen?
I was teaching statistics, which turned out to be a difficult subject to teach. The reason is that statistics are counter-intuitive: we never take them into account spontaneously. Take the phenomenon known as "regression to the mean": variables that are extreme on one measurement tend to be much closer to the average on the next... When I told an officer that rewarding progress was more effective than punishing mistakes, he disagreed. He said that each time he punished a pilot for a bad manoeuvre, the pilot did better the next time, and vice versa: whenever he rewarded a good performance, the next one was worse. But he was seeing a causal relation where there was none: he was simply observing random fluctuations and a predictable regression to the mean. After a mistake or a major feat alike, a pilot tends to return to his usual level. Throughout life, we're plagued by perverse information: statistically, we tend to be punished for our kindness and rewarded for our wickedness.
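The pilot example can be made concrete with a tiny simulation. The sketch below is not part of the interview; the skill model, thresholds and sample size are all assumptions. Each landing is a fixed skill plus random noise, with no reward or punishment ever applied, yet performance after an unusually bad landing still looks like an improvement and performance after an unusually good one looks like a decline.

```python
# Regression to the mean with no feedback at all: landings are fixed skill plus noise.
import random

random.seed(0)
skill = 0.0                                   # the pilot's true, unchanging ability
landings = [skill + random.gauss(0, 1) for _ in range(100_000)]

after_bad, after_good = [], []
for prev, nxt in zip(landings, landings[1:]):
    if prev < -1.5:                           # unusually bad landing ("punished")
        after_bad.append(nxt - prev)
    elif prev > 1.5:                          # unusually good landing ("praised")
        after_good.append(nxt - prev)

print(f"average change after a bad landing:  {sum(after_bad) / len(after_bad):+.2f}")
print(f"average change after a good landing: {sum(after_good) / len(after_good):+.2f}")
# Typically prints roughly +1.9 and -1.9: apparent "improvement" after a poor
# performance and "decline" after a good one, produced by chance alone.
```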
According to your main theory, our brain is managed by two conceptual characters, “System 1” and “System 2”. How would you describe them?
System 1 governs our intuition: it's automatic, makes associations, looks for causal relations and focuses on the particular. It doesn't get on well with statistics and large categories. It prefers stories and looks for coherence. But it doesn't draw this coherence from large quantities of knowledge and proof – it can draw strong conclusions from very little. That's what I call WYSIATI – "What You See Is All There Is" – which underlies most of our impressions. Take the "halo effect", for example: you see all kinds of qualities (intelligence, integrity, competence) in someone you just met at a party and happened to like... It should also be said that the ease with which System 1 finds its answers reinforces its confidence in its own judgment.
‘Supporting two contradictory scenarios requires a considerable amount of energy’
And opposite you find System 2, a rather lazy counterpart…
System 2 can resist System 1’s suggestions by reasoning, and using logical analysis to subject our illusion of validity to self-critique. But it only intervenes when it’s forced to. When System 2 enters the scene, our pupils become dilated, our heart beats faster, our brain receives a dose of glucose… Supporting two contradictory scenarios requires a considerable amount of energy. That’s why, most of the time, System 2 just validates the scenarios and explanations of System 1: it’s easier to slip towards certainty than to remain stuck in a state of doubt.
Hence the many false justifications due to a lack of facts?
Let's take a look at statistics... One study across 3,141 American counties revealed that the lowest rates of kidney cancer are found in rural, sparsely populated counties which traditionally vote Republican. If you think about it for a few seconds, you'll set aside the Republican factor, focus on the rural character of these counties, and conclude that yes, a healthier lifestyle, free of pollution and processed food, explains the lower rates of cancer. But the same study reveals that the highest rates of kidney cancer are also found in counties which are rural, sparsely populated and Republican-leaning! This time you'll answer that, of course, poverty and scarcer health centres easily explain the figure... In each case, you draw on your associative memory to find the most coherent explanation. But the truth is that small samples statistically produce extreme results, in one direction or the other – in other words, they're simply not representative.
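Kahneman's point about small samples is easy to reproduce. In the sketch below (the populations, the cancer rate and the random seed are invented for illustration, not taken from the study), every county is given exactly the same underlying risk, yet both the highest and the lowest observed rates turn up in the least populous counties.

```python
# Every county shares the same true rate; only the sample sizes differ.
import numpy as np

rng = np.random.default_rng(1)
TRUE_RATE = 1e-4                                         # identical underlying risk everywhere
populations = rng.choice([1_000, 10_000, 100_000, 1_000_000], size=3141)
cases = rng.binomial(populations, TRUE_RATE)             # observed cases per county
observed_rates = cases / populations

order = np.argsort(observed_rates)
print("populations behind the 10 lowest observed rates: ", populations[order[:10]])
print("populations behind the 10 highest observed rates:", populations[order[-10:]])
# Both lists are dominated by the smallest counties: the extreme rates are a
# sample-size artefact, not evidence about lifestyle or poverty.
```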
Your book is full of these unsettling examples, which expose our mistakes in real time…
That's the key to our success: all our articles began with an experiment readers could try for themselves, one that directly revealed the intuitive biases they fall for. I was quite impressed by German Gestalt psychology. The images it used allowed readers to experience illusions directly: in the same picture, depending on whether you focus on the two profiles facing each other or on the vase between them, you see two different images. That's what inspired our way of systematically subjecting readers to ironic experiments on themselves.

But although our intuition isn’t always correct, it can still be helpful: we undertake things with a hope that might be excessive, but without it we wouldn’t try anything…
Yes, that’s the “optimistic bias” which is the very engine of capitalism: we always exaggerate our chances of success. But thankfully this tendency is offset by another bias, “loss aversion”. There’s a balance between these two tendencies, which preserves us from excessive temerity or conservatism: we’re torn between brave predictions and timid decisions.
Your work on intuition led to a controversy which, for once, was rather productive…
Our research on intuition came under attack from the opposite school, led by Gary Klein, who holds that most of the time intuitive thought processes get it right. Rather than engage in an indirect exchange, I proposed that we work together in what I call "adversarial collaboration": the idea is to write an article with your opponent about your differences. He accepted, and an exchange followed that lasted seven or eight years and resulted in an article called "Conditions for Intuitive Expertise: A Failure to Disagree". It turned out that we weren't actually talking about the same thing. There's a difference between short-term anticipation and long-term prediction, between a fireman who senses that a house is going to collapse within a minute and a Middle East expert making predictions about how the region will evolve. Intuitive competence can only develop in an environment which is regular enough to be predictable, and only if you can learn those regularities over a long period of time – it takes about 10,000 hours to become an expert. That's the case for the firemen and nurses Gary Klein had been studying, but not for economists or political scientists. In the same way, because a therapist can guess how a patient is going to react to what they're about to say, they might also think they can predict the patient's situation in a year's time. Overconfidence leads us beyond our domain of expertise.
‘The idea of Homo economicus is profoundly ingrained in American individualism’
You've spoken about your first encounter with the concept of Homo economicus – a fictitious, rational agent imposed by the neoliberals of the Chicago school… Does that explain the current crisis of capitalism?
Probably, but the idea of Homo economicus is profoundly ingrained in American individualism: individuals are responsible for their choices and must live with the consequences. And if you say that "individuals are rational", there's no need to protect them, either from themselves or from the greed of companies. Yet our research shows that, on the contrary, most of the time our decisions – made under the influence of System 1 – are marred by a number of biases which lead us away from rationality.
Hence the concept of “libertarian paternalism”?
Yes, our work inspired the economist Richard Thaler and the legal scholar Cass Sunstein to develop this idea in their 2008 book Nudge, which became the bible of behavioral economics. The idea is to find ways of helping people make the right decisions without restricting their freedom. Hence a form of libertarian paternalism, which allows states and institutions to nudge people. For example: 96% of the Swedish population are organ donors, compared to only 4% in Denmark… Why? The former must tick a box on their driving licence to refuse to be an organ donor, whereas the latter must tick a box to accept.
These concepts were quite successful under Barack Obama...
Indeed, Cass Sunstein was an advisor to the Obama administration and managed to implement about thirty measures, including the "Save More Tomorrow" program, in which employees commit in advance to automatically transferring part of their future pay rises into a retirement scheme. But behavioral economics doesn't belong to any one party: in the United Kingdom, it was the Conservative Prime Minister David Cameron who asked Richard Thaler to help set up a "Behavioural Insights Team", otherwise known as the "Nudge Unit"...
‘Although I’m rather pessimistic as to the lucidity of individuals, I’m quite optimistic about the future of organisations’
You’ve been studying the crafty rationale of System 1 for decades, and still, you admit to falling victim to its tricks… Is there hope yet?
The automatic nature of System 1 makes it very difficult to be aware of your own mistakes. But on the other hand it's easy to see the flawed thinking patterns of other people. So although I'm rather pessimistic as to the lucidity of individuals, I'm quite optimistic about the future of organisations. We need to reinforce our everyday vocabulary with useful expressions such as "halo effect", "risk aversion", "illusion of validity", or the "framing effect", which describes how the way a problem is presented can influence our decisions... We can't do much about our own illusions, but with the right vocabulary we can point out each other's cognitive biases.
Daniel Kahneman’s systems and their influence
Thinking, Fast and Slow (Farrar, Straus & Giroux, 2011)
This bestseller has proved to be a fun read for those of us who boast about being intuitive. Using dozens of clever and often striking thought experiments, the author shows just how fallible our spontaneous judgments can be: from doctors who will readily prescribe a treatment when it is framed as having a 90% survival rate but not a 10% death rate (the "framing effect"), to students who see more truth in aphorisms when they rhyme ("cognitive ease"). Kahneman's examples leave a lasting impression, and show that it's high time we learned not to turn away from our intuition, but to overcome our feelings of omnipotence.