Could giving algorithms a sense of uncertainty make them more ethical?
If a self-driving car cannot stop itself from killing one of two pedestrians, how should its software choose who lives and who dies?
As artificial intelligence becomes an ever-greater part of our lives, the same nightmarish, often unsolvable ethical questions that humans have grappled with since the dawn of time will become part of how machines are designed.
And if we can’t solve these problems ourselves, how do we expect machines to? Artificial intelligence reporter Karen Hao, writing in the MIT Technology Review, explained:
“Algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal. When you start dealing with multiple, often competing objectives or try to account for intangibles like freedom and wellbeing, a satisfactory mathematical solution doesn’t always exist.”
In 2014, researchers at the MIT Media Lab designed an experiment called Moral Machine, asking people how they thought self-driving cars should prioritize lives. The idea was that the responses would provide insight into the ethical priorities of different cultures.
Millions of people in 233 countries and territories answered questions about whether self-driving cars should prioritize humans over pets, passengers over pedestrians, more lives over fewer, women over men, young over old, fit over sickly, higher social status over lower, and law-abiders over criminals.
Participants from more collectivist cultures like China and Japan were less likely to spare the young over the old – perhaps because of a greater emphasis on respecting the elderly. Participants from countries with high levels of economic inequality were less egalitarian in their treatment of rich people versus poor people.
Peter Eckersley, director of research at US organisation the Partnership on Artificial Intelligence, has published a paper looking at how artificial intelligence can incorporate ethics. He believes building uncertainty into algorithms could be important.
“We as humans want multiple incompatible things, and our behavior as moral beings is full of uncertainty,” he told the Review.
“But when we try to take that ethical behavior and apply it in artificial intelligence, it tends to get concretized and made more precise.
“There are many high-stakes situations where it’s actually inappropriate – perhaps dangerous – to program in a single objective function that tries to describe your ethics.”
One option, of course, is for the system to simply allow humans to make the decision. For example, an algorithm intended to help make medical decisions could present two options – one for maximizing the person’s lifespan, one for minimizing their suffering – and let doctors choose.
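To make that concrete, here is a minimal sketch of what "present the options rather than pick one" might look like in code. Everything here is invented for illustration: the treatment names, the scores, and the two objectives (expected lifespan versus expected suffering) are hypothetical, and a real clinical system would be vastly more involved. The idea is simply that instead of collapsing competing objectives into a single score, the algorithm filters out clearly worse choices and leaves the genuine trade-off to a human.

```python
# Hypothetical sketch: surface the trade-off between competing objectives
# instead of optimizing a single combined score. All names and numbers
# are invented for illustration.

def pareto_options(treatments):
    """Return the treatments that are not dominated, i.e. no other
    treatment is at least as good on both objectives
    (higher expected_lifespan AND lower expected_suffering)."""
    options = []
    for t in treatments:
        dominated = any(
            o is not t
            and o["expected_lifespan"] >= t["expected_lifespan"]
            and o["expected_suffering"] <= t["expected_suffering"]
            for o in treatments
        )
        if not dominated:
            options.append(t)
    return options

treatments = [
    {"name": "aggressive", "expected_lifespan": 8.0, "expected_suffering": 0.9},
    {"name": "palliative", "expected_lifespan": 3.0, "expected_suffering": 0.2},
    {"name": "inferior",   "expected_lifespan": 2.0, "expected_suffering": 0.8},
]

# The algorithm narrows the field; the doctor makes the final call.
for option in pareto_options(treatments):
    print(option["name"])  # "inferior" is filtered out: it is worse on both counts
```

Here the "inferior" option is dropped because "palliative" beats it on both objectives, but the software does not choose between "aggressive" and "palliative" — that genuinely ethical trade-off is handed back to the doctor.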
But Roman Yampolskiy, an associate professor of computer science at the University of Louisville, thinks we're too dumb. "No single person can understand the complexity of the whole stock market or military response systems," he told the Review. "So we'll have no choice but to give up some of our control to machines."
If your company is developing new technology, you could be eligible for government funding through the research and development tax credits scheme. At R&D Tax Solutions, we specialise in helping companies make successful R&D claims. Have a look at our R&D calculator and R&D tax credit examples to see how much you could be eligible for – and call us at our Manchester office on 0161 298 1010 to see how we can help.