Artificial Intelligence is with us today. When you fire up your smartphone’s map application, it knows where you’re probably going based on the time of day and, without prompting, offers advice on how to get there and which traffic jams to avoid. That’s pretty cool. And from there we easily slide into an optimistic outlook for self-driving cars.

Let’s take a long, hard look at the smooth-talking claims behind autonomous cars, “Big AI”, the Singularity, and the presumption that within a few years we will see human-like awareness in computers. Let’s start with the limits of computation.

When I did first-year computer science in the 1970s, we were taught about the fundamental limits of algorithms. Logicians know with mathematical certainty that there are some things algorithms simply cannot do, but Silicon Valley has forgotten the lesson and is barrelling down the road towards autonomous vehicles.

Algorithmic failures are deeply unpredictable. Already there have been some horrible missteps in machine vision; recall the “racist” image classification algorithms. The problem goes beyond developers’ bias infecting their work; it’s about an optimism that has infected the whole AI project. If a computer can’t even solve the Halting Problem – deciding, for an arbitrary program, whether it will ever stop – then what chance is there really that an autonomous car can be delegated responsibility for life-and-death decisions?
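To see why, here is a minimal sketch of the classic diagonal argument, written in Python purely for illustration. The functions halts and trouble are hypothetical stand-ins, not anything that can actually be implemented in general:

```python
# A minimal sketch of Turing's diagonal argument, for illustration only.
# Suppose someone claimed to have written a perfect halting predictor:

def halts(program, input_data):
    """Hypothetical: returns True if program(input_data) eventually stops."""
    ...  # no general-purpose implementation of this can exist


# Given such a predictor, we could write a program that does the opposite
# of whatever halts() predicts about it:

def trouble(program):
    if halts(program, program):
        while True:   # predicted to stop, so loop forever
            pass
    else:
        return        # predicted to loop forever, so stop immediately


# Now ask: does trouble(trouble) halt? Whichever answer halts() gives,
# trouble does the opposite, so no universal halting predictor can exist.
```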

An AI engineer’s first encounter with ethics may be the Trolley Problem. But what they may fail to glean from Wikipedia is that the Trolley Problem has no resolution. The moral of the story is that our philosophical frame of reference shapes our response to life-and-death questions. So it cannot be coded. We won’t ever have a program driving a car that comes up with the right answers (or even socially acceptable ones) to real-life moral dilemmas every time.

Think about the good old courtroom drama. Why does this TV genre never grow old? It's because real-life problems of accountability and responsibility play out in myriad unpredictable ways. There's always some unforeseen twist – a precedent – that makes tough cases so absorbing. Even real-life lawyers can't predict the outcomes of legal cases (which is why we have real-life lawyers). There is no algorithm for these things, and before long, the AI industry is going to find that self-driving car crashes get messy.

Nevertheless, I've heard automobile executives speculate that customers might be given a configuration option when setting up their new self-driving cars: prioritise the life of the driver, or that of a pedestrian, in the event of a looming accident. If anyone thinks a computer can be reliably programmed to make that sort of call, then that mindset is itself unethical.

We urgently need a more sophisticated way of framing AI, around an understanding that there are some things that computers just can’t do.

Every algorithm will eventually reach its limit, where it either tips into unpredictable behaviour or simply grinds to a halt because it can't figure out what to do. One of the tricks of human intelligence is that we seem to know when to call for help. We can recognise our limitations, and judge when we need a second opinion, seek counsel from a trusted advisor, or take a poll. There can be no universal algorithm for detecting and responding to failure. Any algorithm for detecting failure will itself occasionally fail, and then what happens?
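To make that concrete, here is a hedged sketch of the same reasoning applied to failure detection. The will_fail function below is an assumption introduced purely for illustration; the point is that a perfect version of it would let us solve the Halting Problem, which we know cannot be done:

```python
# A sketch under one assumption: a hypothetical, always-correct failure
# detector, where "failure" includes hanging forever.

def will_fail(task):
    """Hypothetical universal failure detector -- assumed, not real."""
    ...


# If such a detector existed, we could decide the Halting Problem with it:
# wrap any program so that the only possible "failure" is never finishing,
# then ask the detector about the wrapper.

def halts(program, input_data):
    def wrapped():
        try:
            program(input_data)   # may hang forever; that is the only failure left
        except Exception:
            pass                  # swallow crashes so hanging is the sole failure mode
    return not will_fail(wrapped)


# The Halting Problem is undecidable, so will_fail cannot exist as described;
# any real failure detector must sometimes be wrong, or give up.
```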

I’m not saying that there’s something mystical going on in the human brain, but there are some deep cognitive problems we haven’t worked out yet. So I find it unethical for captains of industry to treat self-driving cars as an almost-solved problem. AI is much harder than it looks.