summing up 83

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox or find previous editions here.

The Web Is a Customer Service Medium, by Paul Ford

The web seemed to fill all niches at once. It was surprisingly good at emulating a TV, a newspaper, a book, or a radio. Which meant that people expected it to answer the questions of each medium, and with the promise of advertising revenue as incentive, web developers set out to provide those answers. As a result, people in the newspaper industry saw the web as a newspaper. People in TV saw the web as TV, and people in book publishing saw it as a weird kind of potential book. But the web is not just some kind of magic all-absorbing meta-medium. It's its own thing. And like other media it has a question that it answers better than any other. That question is:

Why wasn't I consulted?

Humans have a fundamental need to be consulted, engaged, to exercise their knowledge (and thus power), and no other medium that came before has been able to tap into that as effectively.

every form of media has a question that it's fundamentally answering - something i've been alluding to over the past few editions. you might think you already understand the web and what users want, but the web is neither a publishing medium nor a magic all-absorbing meta-medium. it's its own thing.

Superintelligence: The Idea That Eats Smart People, by Maciej Cegłowski

AI risk is string theory for computer programmers. It's fun to think about, interesting, and completely inaccessible to experiment given our current technology. You can build crystal palaces of thought, working from first principles, then climb up inside them and pull the ladder up behind you.

People who can reach preposterous conclusions from a long chain of abstract reasoning, and feel confident in their truth, are the wrong people to be running a culture.

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

there is this idea that with the nascent ai technology, computers are going to become superintelligent and subsequently end all life on earth - or some variation of this theme. but the real threat is a different one. these seductive, apocalyptic beliefs keep people from actually working to make a difference and lead them to ignore the harm caused by current machine learning algorithms.

Epistemic learned helplessness, by Scott Alexander

When I was young I used to read pseudohistory books; Immanuel Velikovsky's Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn't believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn't so much the lucidity of the consensus view so much as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas.

I guess you could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don't even try.

the smarter someone is, the easier it is for them to rationalize and convince you of ideas that sound true even when they're not. epistemic learned helplessness is one of those concepts that's so useful you'll wonder how you ever did without it.