From self-driving cars to smart hospitals, artificial intelligence has the potential to make or break us... But can it be programmed to tell right from wrong? Five experts give us their take on the ethics of programming.
Even today, the number thirteen continues to spell doom – hence the missing 13th floor in many American elevators and hotels. That it's also the number of questions used in the “Moral Machine” may not bode well for the future of AI... Since it was put online by MIT scientists in 2016, the platform has received feedback from over 40 million people. It seeks to test our intuitions when faced with extreme moral dilemmas; but unlike the tricky thought experiments of 20th-century philosophers, this test has a modern twist: here the moral agent isn't a person but a self-driving car, just like the one Google has in store for us... So let's look at the options: the vehicle can either drive straight into a barrier, killing its user and her cat, or swerve to avoid it and run over three children on a pedestrian crossing. What should it be programmed to do? Now what if the children are crossing against a red light, and there's also a baby on board? This is no idle question: the imminent advent of self-driving cars calls for a re-examination of what analytical philosophers refer to as the “trolley problem”, initially thought up by Philippa Foot in 1967. Picture a runaway trolley racing down a track on which five people are tied up and unable to move. If you pull a lever, the trolley will change tracks and miss them, but it will then run into and kill one other person. Do you pull the lever, and take responsibility for that person's death? Most people would. The plot then thickens: you can stop the trolley altogether by pushing an obese man onto the track. Do you do it? Here many people recoil from directly sacrificing someone, even if the end result is the same; which goes to show that few people are 100% consequentialist in real life – there's more to moral decision-making than the mere consequences of our actions.
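To see why consequence-counting alone can't account for our intuitions, here is a deliberately crude sketch – ours, not MIT's, and purely illustrative – of what a strictly consequentialist scorer would say about the two trolley variants:

    # Purely illustrative: a strictly consequentialist scorer ranks options by
    # body count alone, so it treats the two trolley variants identically.

    lever_case      = {"do_nothing": 5, "pull_lever": 1}
    footbridge_case = {"do_nothing": 5, "push_the_man": 1}

    def consequentialist_choice(options):
        """Pick the option that kills the fewest people."""
        return min(options, key=options.get)

    print(consequentialist_choice(lever_case))       # pull_lever   (1 death instead of 5)
    print(consequentialist_choice(footbridge_case))  # push_the_man (1 death instead of 5)
    # Same arithmetic, same verdict in both cases -- yet most people pull the lever
    # and refuse to push the man, so something other than the consequences is at work.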
The driverless car problem
Jean-François Bonnefon is a professor of cognitive psychology at the Toulouse School of Economics, and one of the Moral Machine's main designers. I put it to him: if tomorrow Google starts selling cars that are programmed to crash into a wall under certain circumstances, I'll be more than reluctant to use one, and will certainly never put any of my children in it! Can Google really sell these cars unless they ensure the safety of their passengers first? Bonnefon is unfazed: “You seem to prefer driving alone,” he says. “But let's say the driverless car divides the risk of an accident by ten, or five, or even two... Then wouldn't it be irrational to forgo these benefits for fear of one extremely rare scenario? On the contrary, don't you have a moral obligation to save your children from the dangers inherent in your own driving?”
So am I alone in dreading the prospect of a car that would potentially kill me to save the lives of two strangers? I ask Nicholas Evans, philosophy professor at the University of Massachusetts Lowell. He and his fellow researchers received a half-million-dollar grant to develop an algorithm capable of solving the trolley problem. I point out that the main reason SUVs are so popular, especially among American dads, is that they can destroy more or less any obstacle they run into. It might not be an altruistic piece of machinery, but it's certainly a reassuring one. “That's an interesting argument!” Evans replies. “Suppose you're a utilitarian and a consequentialist. You want to maximise the well-being of the greatest number of people, right? You think that if all cars were autonomous that would be a good thing, since the number of accidents would drop by 90%. But you know that such a world will only exist if these vehicles can protect their passengers at all costs, because if not, no-one will use them. Here a coherent utilitarian would agree to follow a non-negotiable deontological rule – that cars must always protect their passengers – in order to maximise people's collective well-being. This is what we call 'self-effacing utilitarianism'. To attain strictly utilitarian results, it is sometimes necessary to follow a deontological rule.”
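Evans's argument is, at bottom, arithmetical, so it can be laid out in a few lines. The only figure taken from him is the 90% drop in accidents; the baseline deaths and adoption rates below are our own assumptions, invented purely for illustration:

    # Back-of-the-envelope model of the "self-effacing utilitarianism" argument.
    # All figures except the 90% risk reduction are invented for illustration.

    BASELINE_DEATHS = 100_000   # yearly road deaths with human drivers (assumed)
    RISK_REDUCTION = 0.90       # accidents drop by 90% in an autonomous car

    def expected_deaths(adoption_rate):
        """Total yearly deaths if a given fraction of journeys are autonomous."""
        autonomous_share = adoption_rate * BASELINE_DEATHS * (1 - RISK_REDUCTION)
        human_share = (1 - adoption_rate) * BASELINE_DEATHS
        return autonomous_share + human_share

    # Policy A: cars may sacrifice their passengers -> hardly anyone buys one.
    # Policy B: cars always protect their passengers -> mass adoption.
    print(expected_deaths(adoption_rate=0.05))  # 95,500 deaths
    print(expected_deaths(adoption_rate=0.90))  # 19,000 deaths

On these (made-up) numbers, the “deontological” rule wins on purely utilitarian grounds, because it is what makes the 90% risk reduction actually happen.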
Jean-François Bonnefon sees things differently: “Some moral decisions are difficult to make, so we avoid thinking about them. That's why we like to say that human life is unpredictable. With a touch of cowardice, we say: 'If an extreme situation occurs, we'll see what happens on the spur of the moment'... The problem we have with driverless cars – and which I see as an opportunity – is that they force us to lay out the ground rules. Now we have to agree on a set of principles, address the cases where they might not work, discuss it, and come to a decision. We're now forced to enter that grey area we've been gladly putting to one side. And keep in mind that our free will remains intact!” But the tricky part, I object, is that decisions are always made in a specific context. Imagine that my wife is terminally ill and that I'm the sole financial support for our children: don't I have a good reason not to sacrifice myself, even if that means running over three or four people? “You need to look at all the possible scenarios in advance, including this one, and decide. We need to move from a situation of hypocrisy, where we can make a posteriori arrangements with morality, towards a priori ethical decisions.”
‘Can you imagine a car following the fifth commandment: “Thou shalt not kill”?’
Nicholas Evans, philosopher
But is it even possible to shed light on these moral grey areas? Nicholas Evans takes a more nuanced approach: “The problem is that we don't know what a truly deontological car would look like. Can you imagine a car following the fifth commandment: 'Thou shalt not kill'? Of course not. The simple fact that we're programming a machine requires a consequentialist and utilitarian approach, even if we're philosophically averse to these options.”
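One way to make Evans's point concrete: a hard rule of the “kill no one” kind simply returns no answer in the very situations the Moral Machine asks about, because every option kills someone – and yet the machine must output something. The sketch below is hypothetical, not Evans's algorithm:

    # Hypothetical sketch: a hard deontological rule stays silent whenever
    # every available option kills someone, so the program needs a fallback
    # that weighs outcomes -- i.e. a consequentialist one.

    def deontological_choice(options):
        """Return an option that kills nobody, or None if no such option exists."""
        admissible = [action for action, deaths in options.items() if deaths == 0]
        return admissible[0] if admissible else None

    def consequentialist_fallback(options):
        """Weigh the outcomes: pick the option with the fewest deaths."""
        return min(options, key=options.get)

    dilemma = {"stay_on_course": 3, "swerve_into_barrier": 1}

    print(deontological_choice(dilemma))      # None -- the rule gives no verdict
    print(consequentialist_fallback(dilemma)) # swerve_into_barrier -- a verdict, like it or not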
This takes me to yet another paradox: with self-driving cars, bans on drink-driving will become redundant, since we will no longer be behind the wheel. “True,” Evans smiles, “but alcoholics rarely drink so much that they drop dead, which means the number of human lives saved by autonomous cars on the road will outweigh the risks of increased alcohol consumption.” How about organ donations? Crash victims in a state of brain death are the main source of organ transplants. How will we cope without them? “One answer is that we need to develop synthetic organs faster than we do autonomous cars.”
Let's consider another aspect of the trolley dilemma: should we save the lives of young people over those of the elderly – in other words, maximise not the number of individual lives we can save, but the number of years of life? Here we might have another problem on our hands. Let's suppose luxury makes like Porsche and Maserati continue to exist in the future: you generally can't afford these vehicles until you're in your fifties... So will they be aimed at youth, or will these companies continue to prioritise older owners? Nicholas Evans agrees: “Yes, high-end car manufacturers could end up favouring their core customer base, i.e. older people with a higher social status. I can only see one way of counteracting this trend: the risk of damaging their image. I can already imagine the headlines in the newspapers: 'X-brand limousines are baby-killers!'”
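The difference between the two metrics is easy to spell out. In the sketch below – our own illustration, with an assumed life expectancy of 80 and invented ages – the same dilemma is scored once by lives lost and once by life-years lost:

    # Our own illustration of the "years of life" metric: weight each potential
    # victim by expected remaining years rather than simply counting lives.
    # The life expectancy of 80 and the ages below are invented for the example.

    LIFE_EXPECTANCY = 80

    def life_years_lost(ages):
        return sum(max(LIFE_EXPECTANCY - age, 0) for age in ages)

    sacrifice_owner   = [55]         # the fifty-five-year-old buyer of the luxury car
    run_over_children = [8, 10, 12]  # three children on the crossing

    print(len(sacrifice_owner), "life vs", len(run_over_children), "lives")            # 1 vs 3
    print(life_years_lost(sacrifice_owner), "vs", life_years_lost(run_over_children))  # 25 vs 210

Both metrics point to sacrificing the older passenger, but counting life-years widens the gap from 1-versus-3 to 25-versus-210 – which is precisely why a manufacturer whose customers are mostly in their fifties might resist it.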
Another question: who will be responsible in case of an accident? No offence, but as the author of the algorithm, wouldn't that be you, Nicholas Evans? “I'm working on an algorithm for a simulation, not a real car! But driverless cars will change the way we think. The notion of 'driver' will disappear, so there'll be no need for insurance! Households will save quite a bit of money. Of course, there will still be the occasional accident, but it will be a much rarer occurrence. As things stand today, the responsibility for someone's death doesn't lie with Seat, Renault, or Volkswagen. Cars take lives, but manufacturers don't take any blame. In the future, they might not want to make autonomous cars for fear of being sued. It's therefore possible that once a car has been tested and meets the correct standards, the State could be the one to cover the risks.”
The Terminator conundrum
At the opposite end of the spectrum, we find machines specially designed to kill people: drones. In recent years, arms manufacturers have been offering to develop autonomous killer robots – drones which would no longer be controlled from a distance, like the Predator model in use today, but would instead follow their own mission objectives. According to the roboethicist Ronald C. Arkin, the main argument for using them is that they can be programmed to sacrifice themselves, whereas human troops on the ground will want to save their own lives at all costs, regardless of collateral damage. A drone will self-destruct rather than kill a civilian or a child. But Peter Asaro, another roboethicist and associate professor at the New School in New York City, is not convinced: “First of all, it's difficult to predict the behaviour of human beings on the battlefield, and we know that many soldiers have sacrificed their lives to save a comrade or to secure a position. On the other hand, I've never heard of military commanders sacrificing sophisticated military equipment... Has an army ever destroyed tanks, helicopters, or drones to save civilian lives?”
Asaro is a co-founder of the International Committee for Robot Arms Control (ICRAC), an NGO made up of researchers pushing for more legislation on the use of robot weapons. Its 2015 campaign to “stop killer robots” was endorsed by 1,500 renowned figures, including Elon Musk, Stephen Hawking, and the philosopher Daniel Dennett.
Proponents of autonomous weapons also argue that machines can be programmed to avoid all kinds of misconduct, such as opening fire on civilians, women, elderly people or children. Peter Asaro readily admits that “one of the first principles of the law of war is the distinction between civilians and soldiers, and if this only required visual recognition, machines might perform better in this area. But things aren't so simple,” he says, “because of that other category of 'civilians taking part in hostilities'. Here, the criterion isn't visual but behavioural. Is that civilian merely picking up a stone to throw at a tank in protest, or is he following a military command? Is he a threat? These behavioural aspects can only be deciphered with a sense of social interaction and psychology. Military law only allows the targeting of civilians directly taking part in combat, but machines can't identify them.” There's also a second principle in the law of war – proportionality – which only allows the use of force insofar as it is proportional to the military objective and the threat. In other words, a whole town can't be destroyed just to take out five enemy troops. Here too human judgement is required: “These dilemmas can be very tricky. Which buildings can be destroyed in order to complete the mission? Imagine a hospital and a school situated right next to an ammunition dump – do you take the risk? You can avoid killing schoolchildren by dropping bombs at night, but the hospital would still be full of people. There's no standard answer. That's why human military command must take legal responsibility for this kind of decision. If a machine decides and ends up killing 300 schoolchildren, who is responsible for the massacre? The machine? The programmer? We have no choice in the matter: we need to stick to the doctrine currently used by the US Army and 'keep humans in the loop'.”
‘War is only just when we wage it despite ourselves. With machines, you lose this tension’
Nolen Gertz, philosopher
Nolen Gertz, philosophy professor at the University of Twente in the Netherlands, makes a similar point: “The tradition of 'just war' theory, which dates back to Augustine, teaches us that it's morally reprehensible for combatants to take pleasure from killing. A war can only be just when its participants are neither nihilistic nor sadistic, when they fundamentally hate killing, and will only go against their moral principles in situations of self-defence or emergency. This is a subtle position: war is only just when we wage it despite ourselves. With machines, you lose this tension.”
But what if you turn the argument round? Imagine if robots could wage entire wars instead of humans, thereby allowing us to win without exposing the lives of soldiers or letting lethal decisions weigh on the conscience of drone pilots... Why wouldn't we spare our own population all this suffering? “I don't believe this argument is valid,” Gertz insists, “because technology is not neutral – it affects and shapes our own behaviour, as well as our ethical decisions. So let's imagine that one day we can declare war and that, as you say, we can decimate the enemy without any human or even psychological cost. Basically, robots will do all the dirty work for us. That would make the prospect of war considerably more acceptable to public opinion, political leaders, and military commanders... In other words, the act of killing would become too easy. Can you imagine a world where robots roam around with the right to kill? Would you want to live in such a world?”
AI progress in healthcare
‘A probe breaks during an operation, injuring the patient. Who is responsible?’
Kenneth W. Goodman, professor of medicine
Healthcare is another field where the ethical stakes are high and AI is coming on in leaps and bounds. Watson, IBM's Jeopardy-winning supercomputer, has already detected a rare form of leukaemia in just ten minutes by cross-referencing data from millions of oncology articles – a task that would have taken human scientists several weeks. But let's suppose one day a computer makes a serious diagnostic mistake: who will be responsible? “This is a beautiful problem,” says Kenneth W. Goodman, professor of medicine and director of the University of Miami Bioethics Program. “Take the case of a surgeon who's been successfully using the same probe for over ten years: one day it breaks, injuring the patient. Is the surgeon responsible? Yes, if he has the same status as the captain of a ship. But I think this is a case of shared responsibility. With the spread of Clinical Decision Support Systems (CDSS), we're entering an era with many more levels of responsibility. Who built the database that the AI draws from? Who wrote the program, keeping in mind that most of the code being used was written by several people? Some of it was taken from open-source software, then transformed by scientists or private companies for a specific purpose. We're going to need some international regulations in this no man's land...”
Peter Asaro has also been following the healthcare situation closely. He points out that hospitals already face ethical dilemmas surrounding the allocation of scarce resources, such as organs for transplants. “Which patients should benefit from them? There's always a risk that the decision will be influenced by the doctor's own bias, his or her sympathy towards the patient, or even how much money the patient can spend on this vital care... In precisely these scenarios, the recommendations of a computer might be more objective, more ethical, even if they then need to be validated by a human doctor.”
This seems to be one of the few cases where the moral judgement of humans might be less relevant than the conclusions of a computer. What about the rest? Kenneth Goodman refers us to an early experiment involving AI: “A robot has three objects in front of it: a sphere, a pyramid, and a wooden cube. An examiner asks it to put the pyramid on top of the cube. The robot does it successfully. Well done! The examiner then asks it to put the sphere on top of the pyramid. It tries over and over again, but the sphere keeps rolling off... The robot doesn't know that it's impossible. This is a useful parable for our time: our moral judgement always presupposes a background of implicit, unquantifiable knowledge, which can't always be programmed or even made explicit. We humans never just follow rules, because explicit rules only work in a very narrow sphere of action. The difference between AI and humans isn't just about the interpretation of context – it's also about the multitude of principles which humans take for granted and hold as so self-evident that they needn't be spelled out. That's why the ethical judgement of humans can't be replaced...” We're tempted to add: as things stand...