summing up 107

summing up is a recurring series of interesting articles, talks and insights on culture & technology that shape a large part of my thinking and work. Drop your email in the box below to get it – and much more – straight in your inbox.

The Best Way to Predict the Future is to Create It. But Is It Already Too Late? by Alan Kay

Albert Einstein's line "We cannot solve our problems with the same level of thinking that we used to create them" is one of my favorite quotes. I like this idea because Einstein is suggesting something qualitative: the solution is not doing more of what we're already doing. If the things we've done with technology have gotten us into a bit of a pickle, doing more things with technology at the same level of thinking is probably going to make things worse.

And there is a corollary to this: if your thinking abilities are below a certain threshold, you are going to be in real trouble, because you will not realize it until you have done yourself in.

Virtually everybody in computer science has almost no sense of human history, or of the context of where we are and where we are going. So I think of much of what has been done as inverse vandalism: making things just because you can.

One important thing here is not to get trapped by our bad brains, by our limited thinking abilities. We are limited in this regard, and only tools, methods, habits and understanding can help us learn to see and finally reach a higher level of thinking.

Don’t be seduced by the pornography of change, by Mark Ritson

Marketing is a fascinating discipline in that most people who practice it have no idea about its origins and foundations, little clue about how to do the job properly in the present, but unbounded enthusiasm to speculate about the future and what it will bring. If marketers became doctors, they would spend their time not telling patients what ailed them, but showing them an article about the future of robotic surgery in the year 2030. If they took over as accountants, they would advise clients to forget about their current tax returns because within 50 years income will become obsolete thanks to lasers and 3D printing.

There are probably two good reasons for this obsession with the future over the practical reality of the present. First, marketing has always managed to attract a significant proportion of people who are drawn to the shiny stuff. Second, ambitious and overstated projections about the future are fantastic at garnering headlines and hits, and have the handy advantage of being impossible to fact-check.

If your job is to talk about what speech recognition or artificial intelligence will mean for marketing, then you have an inherent desire to make it, and you, as important as possible. Marketers take their foot off the brake pedal of reality and put all their pressure on the accelerator of horseshit in order to get noticed, and future predictions provide the ideal place to drive as fast as possible.

And this does not only apply to marketing. Many fields today, computing & technology included, obsess over a revolutionary potential that has been vastly, vastly overhyped. The hype is sometimes so pervasive that expressing skepticism is read as a failure to recognize that the hype is deserved.

Design in the Era of the Algorithm, by Josh Clark

Let’s not codify the past. On the surface, you’d think that removing humans from a situation might eliminate racism or stereotypes or any very human bias. In fact, the very real risk is that we’ll seal our bias—our past history—into the very operating system itself.

Our data comes from the flawed world we live in. In the realms of hiring and promotion, the historical data hardly favors women or people of color. In the cases of predictive policing, or repeat-offender risk algorithms, the data is both unfair and unkind to black men. The data bias codifies the ugly aspects of our past.

Rooting out this kind of bias is hard and slippery work. Biases are deeply held—and often invisible to us. We have to work hard to be conscious of our unconscious—and doubly so when it creeps into data sets. This is a data-science problem, certainly, but it's also a design problem.

The problem with data is not only the inherited bias in the data set, but also algorithms that treat data as unbiased fact, and humans who believe in the objectivity of the results. Biases are only the extreme cases that make these problems visible; the deeper issue is that we prevent ourselves from interpreting and studying nature directly, and thereby let the data define the limits of our interpretation.


Want more ideas like this in your inbox?

My letters are about long-lasting, sustainable change that fundamentally amplifies our human capabilities and raises our collective intelligence across generations. I would love to have you on board.