Ethical limits of computing


There’s a video clip I keep coming back to. It’s part of a documentary, and it shows the reaction of legendary filmmaker and Studio Ghibli co-founder Hayao Miyazaki to a demonstration of AI-generated movement: a model of a human body that, lacking any sense of self-preservation, drags itself along using its head as a foot.

After seeing the brief demo of this grotesque figure, Miyazaki pauses, saying that it reminds him of a friend with a disability so severe that he can’t even give a high five: “Thinking of him, I can’t watch this stuff and find it interesting. Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted.”

He makes his verdict unmistakable, telling the creators, “I strongly feel that this is an insult to life itself.”

The stunning scene that follows is why I keep coming back to the clip. The technologists look crushed. And they try to explain their original goals: “This is just our experiment … We don’t mean to do anything by showing it to the world.”

But Miyazaki is crushed as well.

The ensuing tension shows the clash of two worlds and, in some ways, crystallizes the point. In the context of computer science, creating such a model is a complicated, challenging achievement. But the creators didn’t seem to have any clue that what they were doing could inflict pain on people outside their own context.

From Miyazaki’s point of view, the technical achievement of this demonstration was overshadowed by a lack of compassion, empathy, and humanity, making it an affront to human existence.

And we see these kinds of clashes in computing every day. There’s a notion that computers are objective and unbiased because they work with discrete numbers, formulas, and evaluations, distilling every question and answer down to a 0 or a 1. Some people believe that if we could just use computers properly, all the world’s problems would disappear. In other words, they think it’s just a problem of adopting the technology.

But this point of view completely ignores the question of whether a computer, or its actions, result in good or bad outcomes for humans and society. We wrongly believe that the computer just exists as an entirely neutral technology. There is a built-in assumption that whatever a computer can do, it should do: that it is not our place to ask the purpose or impose limitations. And so everyone uses — and is used by — computers, for purposes that seem to know no boundaries.

But things do happen when boundaries are crossed. We have seen harm caused by tech companies for years. We have seen proof that they choose not to enforce boundaries and not to reduce harm. And we see so many companies that know exactly what they’re doing, yet keep doing it anyway.

I think the view we have of computers is upside down. We start with the instrument, assume it must be good for something, and look for problems that seem well suited to it.

How did we get to this point? For too many years, the computer has been a solution looking for a problem.

I think a much better approach would be to start the other way around, with the question of what we want to achieve in the first place. Then we could identify priorities, asking ourselves what the most urgent problems are. And once we have identified those problems, we could decide whether the computer would be useful in solving them.

But this is my personal utopia: the world doesn’t work this way, and maybe it never will. Instead of thinking about how to advance humanity, too many people find the potential of making quick money, or becoming rich and famous, simply too alluring.

I don’t really think the urgent ethical questions in computing are about machines becoming self-aware or taking over the world. They are about why and how people exploit each other, or introduce harm, through computers and programs.

Maybe it’s time to create a guide to help us use computers in more appropriate ways — a code of ethics for computing. Many other professions have a code of ethics: doctors have their Hippocratic Oath, lawyers have their own professional oaths, and law enforcement officials swear to protect and serve the people.

A code of ethics would not make everything instantly better, but it would surely be better than “move fast and break things.”

As the computer embraces more and more of our world, we urgently need to have a shared culture of what is — and is not — an appropriate use of computers. Most importantly, we should be taught that power must not be divorced from accountability.

