The idea of self-driving cars is an alluring one. Being able to kick back and watch a film or read a book, rather than fight increasingly gridlocked traffic, is a dream for many commuters. But, as with most technology, no one worries when things work correctly; the problems arise when things go wrong. Who’s to blame when there’s an accident?
Whose fault?
The established pattern for responsibility tends to mean that the manufacturer takes the blame. After all, they are the ones who made the item work in that particular way. So if a car overheats and catches fire, it’s clear that the manufacturer is at fault. The challenge with AI is how much control the creator can really have over its product.
AI, at its core, is all about having a computer system that can learn for itself. The area of research is often called Machine Learning. The principle is that instead of writing hard and fast rules, the AI learns for itself from examples. The upside is that computers can learn to handle tasks where it’s too difficult to come up with a set of rules, which typically means tasks that people are good at and computers have traditionally been bad at.
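To make the distinction concrete, here is a minimal sketch, in Python, of a system that “learns” an action from labelled examples rather than from hand-written rules. The feature names, the data and the simple nearest-neighbour rule are all made up purely for illustration; real driving systems learn from vast amounts of sensor data, not four numbers.

```python
# A minimal sketch of "learning from examples" rather than hand-written rules.
# The features and data here are invented for illustration only.

from math import dist

# Each example: ((speed_mph, distance_to_object_m), correct_action)
training_examples = [
    ((30.0, 5.0), "brake"),
    ((30.0, 50.0), "continue"),
    ((10.0, 3.0), "brake"),
    ((60.0, 120.0), "continue"),
]

def predict(situation):
    """Pick the action from the most similar example seen so far
    (a 1-nearest-neighbour rule), instead of following an explicit rule."""
    nearest = min(training_examples, key=lambda ex: dist(ex[0], situation))
    return nearest[1]

print(predict((35.0, 8.0)))    # brake
print(predict((55.0, 100.0)))  # continue
```

No one wrote a rule saying when to brake; the behaviour falls out of the examples, which is precisely why the creator has less direct control over the result.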
Cars, however, represent a larger problem than most AI tasks. Learning how to beat a chess grandmaster is one thing, but being in control of a ton of moving automobile creates life-and-death scenarios. The problem might seem an easy one, “don’t crash or run over anyone”, but it is much more complex than that.
In fact, there is a whole world of ethical complexity here, captured by a set of thought experiments known as the Trolley Problem, which illustrate the difficulties. A simple example: the car has to choose between running over one person or running over six. It might seem obvious that the car should only run over the one person, thereby saving the greatest number.
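Written down as code, that “save the greater number” rule is almost trivially short, which is exactly what makes it suspect. The sketch below is a deliberately naive illustration, not anything a manufacturer would ship: it treats people as interchangeable counts and knows nothing about uncertainty, blame, or who is actually involved.

```python
# A deliberately naive "utilitarian" rule: pick the outcome that harms the
# fewest people. The option names and counts are hypothetical.

def choose_outcome(options):
    """options: mapping of action name -> number of people harmed.
    Returns the action that minimises the count."""
    return min(options, key=options.get)

print(choose_outcome({"swerve_left": 1, "stay_course": 6}))  # swerve_left
```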
Now take this a step further and imagine that you know the single individual, but the six are strangers. How would you want the car to act then? Would you be willing to drive a car that would run over your mother to save half a dozen unknown people? The concept is humorously and messily shown in episode 5 of season 2 of The Good Place (watch a clip here).
It might be tempting to think that you could solve the problem by adding more intelligence. The vehicle could ascertain who the people are, perhaps check them out on LinkedIn and find their profiles. Surely then it could make a more informed decision. But what happens when there are six people of one demographic and six of another, perhaps of differing race or social standing? Self-driving cars could end up making decisions that appear elitist or racist, and no manufacturer is going to want to be responsible for that.
Random is fair
Perhaps it’s worth taking a moment to consider making things random. This might sound crazy on the surface, but there are definite upsides to the idea. Technology has reached the point where we can generate genuinely random numbers using quantum physics.
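As a sketch of the idea, the snippet below breaks a tie between otherwise comparable outcomes using Python’s secrets module, which draws on the operating system’s cryptographically strong randomness. A genuinely quantum source would need dedicated hardware rather than this stand-in, but the principle is the same: nobody, including the manufacturer, can predict or bias the pick.

```python
# A sketch of a randomised tie-break between otherwise comparable outcomes.
# secrets.choice draws on the OS's cryptographic randomness; a truly quantum
# source would come from dedicated hardware, but the unpredictability is the point.

import secrets

def tie_break(options):
    """Given equally weighted options, pick one unpredictably."""
    return secrets.choice(options)

print(tie_break(["swerve_left", "stay_course"]))
```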
The car could make decisions that are as random as nature itself. Having evolved in nature, we are acclimatised to the idea that the world can be random, whereas we expect computers to be completely rational, to the point where we trust them over common sense (as witnessed in the classic “Computer says no” sketch in Little Britain).
Wouldn’t it be much easier, and indeed possibly even fairer on a cosmic scale, to say that the result of the accident was random or, depending on your faith, an act of God? To let the universe decide, instead of ploughing through tons of data trying to figure out how an extremely complex computer system came to the decision it did? Perhaps there will be a tendency to baulk at this, given the human race’s delusion of being in control of everything.
Or perhaps the unrealistic expectation that we have the power to be in control. Accidents will always happen, and AI systems will always run into circumstances they are unfamiliar with. Some good examples of confused machines can be found in Isaac Asimov’s short story collection I, Robot, which demonstrates that even a simple set of rules can cause problems. So whichever solution we go with, we are going to have to acknowledge that occasionally the computer will say “no”, just when we least want it to.
Jonathan has a varied history, having written for publications ranging from Asian Woman to technical magazines such as Networking+. He also has a background in IT, so he’s been instrumental in the technical side of getting Global Indian Stories launched. As co-founder, he also keeps writing, sub-editing and handling social media.