what computers ought to do

As many of you know, I’ve been an airplane addict since childhood, and I’m an avid pilot. I recently had the chance to meet the Chief R&D Officer of a major airline. Naturally, we started talking about airplanes and the role of computers in them. We asked ourselves whether there will ever be commercial self-flying airplanes. We agreed the answer is “almost certainly not”.

Experience has shown that a computer can significantly increase, or even guarantee, safety in familiar situations. The limiting factor is not computing power, which keeps improving, but the development of decision-making scripts: even the best programming teams cannot foresee every possibility. A computer simply lacks the ability to make unconventional decisions or to deal with anything unfamiliar or out of the ordinary. This fundamental problem has not changed in the last thirty years.

The role of the pilot will become even more demanding in the future: in a few exceptional and difficult-to-predict situations, an innovative solution must be found very quickly. An individual pilot might encounter such a situation only once in their career. Successful crash landings are a prime example.

In effect, the machine helps when help is not necessary and leaves us in the lurch when it is. Human beings, despite their faults, remain the decisive safety factor in the cockpit. As the saying goes: the most important switch on the autopilot is the off switch.

This is not only true for airplanes: many experts have concluded that there are narrow limits to so-called artificial intelligence. Even common sense can only be imitated by a computer to a very limited extent. A computer is not innovative; it can only measure and calculate. Artificial intuition, the digital development of creative and new ideas, is still a long way off.

But we’ve been hearing a different story for decades. AI is still the rising hero, beating the world’s top humans at chess, Jeopardy, and Go. And many fear that AI will take our jobs, or even take over humanity itself.

Instead of bashing and criticizing, I want to take a different approach. After chess grandmaster Garry Kasparov was defeated by IBM’s computer Deep Blue, he wondered whether a human could work together with a computer. He invented a new form of chess in which humans and computers cooperate instead of contending with each other. Kasparov named this form of chess “advanced chess”.

One interesting insight came out of an advanced chess tournament that invited all kinds of contestants, from amateurs to grandmasters, and from simple computers to the most powerful AIs, or combinations thereof. The match between a single human and a single computer had already been decided, so it was clear that a human paired with a computer would beat a single human as well.

Interestingly, a human grandmaster with an older laptop was able to beat a world-class supercomputer. Many were certain that grandmasters with powerful computers would prove superior, so it was a real surprise that an amateur playing with three weak computers won the tournament. The three computers ran three different chess programs, and whenever they disagreed on the next move, the amateur guided them to investigate those moves further.
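To make that division of labor concrete, here is a minimal sketch of the disagreement-driven loop in Python. Everything in it is a hypothetical stand-in invented for illustration: the engine callables are not real chess programs, and the `investigate` hook is a placeholder for the human’s deeper look at the candidate moves.

```python
from collections import Counter
from typing import Callable, List

# An "engine" here is any function mapping a position to a suggested move.
# Real chess engines would sit behind this interface; these are stand-ins.
Engine = Callable[[str], str]

def pick_move(position: str, engines: List[Engine],
              investigate: Callable[[str, List[str]], str]) -> str:
    """Ask every engine for a move. If they all agree, play it.
    If they disagree, hand the distinct candidates to a deeper
    investigation step (the role the amateur played)."""
    suggestions = [engine(position) for engine in engines]
    counts = Counter(suggestions)
    move, votes = counts.most_common(1)[0]
    if votes == len(engines):
        return move  # unanimous: cheap to act on automatically
    # Disagreement is the signal worth spending human attention on.
    return investigate(position, list(counts))

if __name__ == "__main__":
    # Toy engines: two favor one opening move, one favors another,
    # so they disagree and the human hook is consulted.
    e1 = lambda pos: "e2e4"
    e2 = lambda pos: "d2d4"
    e3 = lambda pos: "e2e4"
    # Placeholder for human judgment: here, just take the first candidate.
    human = lambda pos, candidates: candidates[0]
    print(pick_move("startpos", [e1, e2, e3], human))  # prints "e2e4"
```

The design point is simply that unanimity is cheap to act on automatically, while disagreement marks exactly the moments where human judgment earns its keep.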

It’s not enough to have a computer, or technology, or human culture alone. To make real progress, we need the co-evolution of technology (tools, inventions, and physical artifacts) and culture (our practices, skills, and methodologies). Real magic happens when technology and culture co-evolve and support each other.

The whole is greater than the sum of its parts. The amateur chess player succeeded because he combined the strengths of three computers with the skills he excelled at. In other words, he complemented, or augmented, his skillset.

Today’s computers can be used to make decisions in almost all aspects of life. They can flip coins in much more sophisticated ways than the most patient human being. They can steer our lives, make business decisions, or fly airplanes. In some cases, they may even arrive at “correct” decisions. None of this is in question.

But what objectives, goals, and purposes can be delegated to computers is the wrong question to ask. The relevant issues aren’t technological; they are ethical, and they cannot be settled by questions beginning with “can”. The real question is not whether it can be done, but whether it is appropriate to delegate important decisions to a machine.

The limits we apply to computers should be stated in terms of what they ought to achieve. It’s not a matter of what a computer can do, but of what a computer ought to do. If we have no way to make a computer intuitive or wise, we should not give it tasks that demand wisdom.

