summing up 112

summing up is a recurring series of interesting articles, talks and insights on culture & technology that compose a large part of my thinking and work. Drop your email in the box below to get it – and much more – straight in your inbox.

Preventing the Collapse of Civilization, by Jonathan Blow

My thesis is that software is actually in decline right now. I don't think most people would believe me if I say that, it sure seems like it's flourishing. So I have to convince you at least that this is a plausible perspective.

What I'll say about that is that collapses, like the Bronze Age collapse, were massive. All civilizations were destroyed, but it took a hundred years. So if you're at the beginning of that collapse, in the first 20 years you might think "Well things aren't as good as they were 20 years ago but it's fine".

Of course I expect the reply to what I'm saying to be "You're crazy! Software is doing great, look at all these internet companies that are making all this money and changing the way that we live!" I would say yes, that is all happening. But what is really happening is that software has been free riding on hardware for the past many decades. Software gets "better" because it has better hardware to run on.

Our technology is accelerating at a frightening rate, a rate beyond our understanding of its impact. We spend millions of dollars and countless hours researching newer, faster tools, but we haven’t bothered to research the most fundamental, strategic issues that will provide the highest payoffs for augmenting our abilities.

Andreessen's Corollary: Ethical Dilemmas in Software Engineering, by Bryan Cantrill

I think that the key with ethics is not answers. Don't seek answers. Seek to ask questions. Tough questions. Questions that may make people feel very uncomfortable. Questions that won't necessarily have nice, neat answers. These questions are going to be complicated, but it is the act of asking them that allows us to consider them. If we don't ask them, we're going to simply do the wrong thing.

And I think that if you've got an organization in which question asking is encouraged, I think you will find that you will increasingly do the right thing. That you are less likely, I think, to move adrift with respect to these principles.

Questions are more important than answers. Answers change over time and across different circumstances, even for the same person, while questions endure.

21st Century Design, by Don Norman

Most of our technical fields who study systems leave out people, except there's some object in there called people. And every so often there are people who are supposed to do something to make sure the system works. But there's never any analysis of what it takes to do that, never any analysis of whether this is really an activity that's well suited for people, and when people are put into it or they burn out or they make errors etc., we blame the people instead of the way the system was built. Which doesn't at all take into account people's abilities and what we're really good at – and also what we're bad at.

Collaboration shows us that the world often isn’t zero-sum. It doesn't have to be humans versus technology, technology versus humans or humans versus other humans. Collaboration shows us that the whole is greater than the sum of its parts. And that collaboration succeeds because of differences, not despite them.

summing up 111

summing up is a recurring series of interesting articles, talks and insights on culture & technology that compose a large part of my thinking and work. Drop your email in the box below to get it – and much more – straight in your inbox.

Socializing technology for the mobile human, by Bill Buxton

Everybody's into accelerators and incubators and wants to be a millionaire by the time they're 24 by doing the next big thing. So let me tell you what I think about the next big thing: there's no such thing as the next big thing! In fact chasing the next big thing is what is causing the problem.

The next big thing isn't a thing. The next big thing is a change in the relationship amongst the things that are already there. Societies don't transform by making new things but by having their internal relationships change and develop.

I'd argue that what we know about sociology and how we think about things like kinship, moral order, social conventions, all of those things that we know about and have a language for through social science apply equally to the technologies that we must start making. If we don't have that in our mindset, we're just gonna make a bunch of gadgets, a bunch of doodads, as opposed to building an ecosystem that's worthy of human aspirations – and of actual technological potential.

We’re living in the present and we’ve forgotten that true innovation is about system transformation, not just a linear forward progression. That distinction is key to understanding the problem.

Privacy Rights and Data Collection in a Digital Economy, by Maciej Cegłowski

The internet economy today resembles the earliest days of the nuclear industry. We have a technology of unprecedented potential, we have made glowing promises about how it will transform the daily lives of our fellow Americans, but we don’t know how to keep its dangerous byproducts safe.

There is no deep reason that weds the commercial internet to a business model of blanket surveillance. The spirit of innovation is not dead in Silicon Valley, and there are other ways we can grow our digital economy that will maintain our lead in information technology, while also safeguarding our liberty. Just like the creation of the internet itself, the effort to put it on a safer foundation will require a combination of research, entrepreneurial drive and timely, enlightened regulation. But we did it before, and there’s no reason to think we can’t do it again.

No technology is entirely positive or even neutral. Every technology is both a burden and a blessing. It is never a matter of either/or – it will always be both. And the question we must ask with urgency is whether we're gonna manage the machine or whether it will manage us.

Notes on AI Bias, by Benedict Evans

I often think that the term ‘artificial intelligence’ is deeply unhelpful in conversations like this. It creates the largely false impression that we have actually created, well, intelligence – that we are somehow on a path to HAL 9000 or Skynet – towards something that actually understands. We aren’t. These are just machines, and it’s much more useful to compare them to, say, a washing machine. A washing machine is much better than a human at washing clothes, but if you put dishes in a washing machine instead of clothes and press start, it will wash them. They’ll even get clean. But this won’t be the result you were looking for, and it won’t be because the system is biased against dishes. A washing machine doesn’t know what clothes or dishes are - it’s just a piece of automation, and it is not conceptually very different from any previous wave of automation.

That is, just as for cars, or aircraft, or databases, these systems can be both extremely powerful and extremely limited, and depend entirely on how they’re used by people, and on how well or badly intentioned and how educated or ignorant people are of how these systems work.

Is it really about making machines and tools smarter and more intelligent? Or about augmenting the individual to be smarter or to be more productive? Maybe we should aim for something different: contributing to raising our collective intelligence. Because that is the intelligence we're part of, that shapes us as we shape it, that defines our culture and ultimately the borders of our world.

summing up 110

summing up is a recurring series of interesting articles, talks and insights on culture & technology that compose a large part of my thinking and work. Drop your email in the box below to get it – and much more – straight in your inbox.

Defining the Dimensions of the “Space” of Computing, by Weiwei Hsu

Traditionally, we have thought of computing not in terms of a space of alternatives but in terms of improvements over time. Moore’s Law. Faster. Cheaper. More processors. More RAM. More mega-pixels. More resolution. More sensors. More bandwidth. More devices. More apps. More users. More data. More “engagement.” More everything.

While trending technologies dominate tech news and influence what we believe is possible and probable, we are free to choose. We don’t have to accept what monopolies offer. We can still inform and create the future on our own terms. We can return to the values that drove the personal computer revolution and inspired the first-generation Internet.

Glass rectangles and black cylinders are not the future. We can imagine other possible futures — paths not taken — by searching within a “space of alternative” computing systems. In this “space,” even though some dimensions are currently less recognizable than others, by investigating and hence illuminating the less-explored dimensions together, we can co-create alternative futures.

It's difficult to suspend our current view of how technology shapes our world, and to imagine something completely new or different. We never paused and asked whether there was a way to build from a better, different blueprint instead of building on top of the existing technology. One of the most thoughtful pieces I've read lately.

Hypertext and Our Collective Destiny, by Tim Berners-Lee

It is a good time to sit back and consider to what extent we have actually made life easier. We have access to information: but have we been solving problems? Well, there are many things that are much easier for individuals today than 5 years ago. Personally I don't feel that the web has made great strides in helping us work as a global team.

Perhaps I should explain where I'm coming from. I had (and still have) a dream that the web could be less of a television channel and more of an interactive sea of shared knowledge. I imagine it immersing us as a warm, friendly environment made of the things we and our friends have seen, heard, believe or have figured out. I would like it to bring our friends and colleagues closer, in that by working on this knowledge together we can come to better understandings. If misunderstandings are the cause of many of the world's woes, then can we not work them out in cyberspace. And, having worked them out, we leave for those who follow a trail of our reasoning and assumptions for them to adopt, or correct.

Technology does not and cannot solve humanity's problems. We can enable, augment, and improve with technology, but ultimately humans have to deal with human problems.

Rebuilding the Typographic Society, by Matthew Butterick

Now and then there’s a bigger event—let’s call it a Godzilla moment—that causes a lot of destruction. And what is the Godzilla? Usually the Godzilla is technology. Technology arrives, and it wants to displace us—take over something that we were doing. That’s okay when technology removes a burden or an annoyance.

But sometimes, when technology does that, it can constrict the space we have for expressing our humanity. Then, we have to look for new outlets for ourselves, or what happens? What happens is that this zone of humanity keeps getting smaller. Technology invites us to accept those smaller boundaries, because it’s convenient. It’s relaxing. But if we do that long enough, what’s going to happen is we’re going to stagnate. We’re going to forget what we’re capable of, because we’re just playing in this really tiny territory.

The good news is that when Godzilla burns down the city with his fiery breath, we have space to rebuild. There’s an opportunity for us. But we can’t be lazy about it.

Technology should not replace humans, but it should play out its real strength, which is amplifying human capabilities. And once we understand how technology works, we can begin to focus on improving its quality, creating tools that truly make things cheaper, faster, and better without destroying the very fabric of our humanity.

summing up 109

summing up is a recurring series of interesting articles, talks and insights on culture & technology that compose a large part of my thinking and work. Drop your email in the box below to get it – and much more – straight in your inbox.

What the Hell is Going On? by David Perell

Like fish in water, we’re blind to how the technological environment shapes our behavior. The invisible environment we inhabit falls beneath the threshold of perception. Everything we do and think is shaped by the technologies we build and implement. When we alter the flow of information through society, we should expect radical transformations in commerce, education, and politics.

By understanding information flows, we gain a measure of control over them. Understanding how shifts in information flow impact society is the first step towards building a better world, so we can make technology work for us, not against us.

We shape our technological environment and our technological environment shapes us. But if we're blind to that change, how can we counteract the negative effects and amplify the positive ones?

The Long Nose of Innovation, by Bill Buxton

What the Long Nose tells us is that any technology that is going to have significant impact in the next 10 years is already at least 10 years old. Any technology that is going to have significant impact in the next 5 years is already at least 15 years old, and likely still below the radar. Hence, beware of anyone arguing for some “new” idea that is “going to” take off in the next 5 years, unless they can trace its history back for 15. If they cannot do so, most likely they are either wrong, or have not done their homework.

The Long Nose redirects our focus from the “Edison Myth of original invention”, which is akin to an alchemist making gold. It helps us understand that the heart of the innovation process has far more to do with prospecting, mining, refining, goldsmithing, and of course, financing.

It's such an interesting notion that technology innovation is not the fast-moving thing it seems to be. Rather, technological change takes time, revolutionary change even more so. As the saying goes, it takes years to become famous overnight.

WTF and the importance of human/tool co-evolution, by Tim O'Reilly

I think one of the big shifts for the 21st century is to change our sense of what collective intelligence is. Because we think of it as somehow the individual being augmented to be smarter, to make better decisions, maybe to work with other people in a more productive way. But in fact many of the tools of collective intelligence we are contributing to and the intelligence is outside of us. We are part of it and we are feeding into it.

It changes who we are, how we think. Shapes us as we shape it. And the question is whether we're gonna manage the machine or whether it will manage us. Now we tell ourselves in Silicon Valley that we're in charge. But you know, we basically built this machine, we think we know what it's going to do and it suddenly turns out not quite the way we expected.

We have to think about that fundamental goal that we give these systems. Because, yes there is all this intelligence, this new co-evolution and combination of human and machine, but ultimately it's driven by what we tell it to optimize for.

So much in computing is optimized for the single user. The personal computer, the smartphone, but also apps and most of our infrastructure. These days almost every room is equipped with electricity, light and buttons to control it. But just imagine a world where everyone carried a flashlight in their pocket, seeing only one thing at a time, charging it up every night, buying a new version every two years, and always having one hand busy. How small and lonely that world would be.

summing up 108

summing up is a recurring series of interesting articles, talks and insights on culture & technology that compose a large part of my thinking and work. Drop your email in the box below to get it – and much more – straight in your inbox.

Big Idea Famine, by Nicholas Negroponte

I believe that 30 years from now people will look back at the beginning of our century and wonder what we were doing and thinking about big, hard, long-term problems, particularly those of basic research. They will read books and articles written by us in which we congratulate ourselves about being innovative. The self-portraits we paint today show a disruptive and creative society, characterized by entrepreneurship, start-ups and big company research advertised as moonshots. Our great-grandchildren are certain to read about our accomplishments, all the companies started, and all the money made. At the same time, they will experience the unfortunate knock-on effects of an historical (by then) famine of big thinking.

We live in a dog-eat-dog society that emphasizes short-term competition over long-term collaboration. We think in terms of winning, not in terms of what might be beneficial for society. Kids aspire to be Mark Zuckerberg, not Alan Turing.

Ask yourself: What ideas, spaces and lifestyles will you leave behind for your grandchildren?

Forget privacy: you're terrible at targeting anyway, by Avery Pennarun

The state of personalized recommendations is surprisingly terrible. At this point, the top recommendation is always a clickbait rage-creating article about movie stars or whatever Trump did or didn't do in the last 6 hours. That's not what I want to read or to watch, but I sometimes get sucked in anyway, and then it's recommendation apocalypse time, because the algorithm now thinks I like reading about Trump, and now everything is Trump. Never give positive feedback to an AI.

This is, by the way, the dirty secret of the machine learning movement: almost everything produced by ML could have been produced, more cheaply, using a very dumb heuristic you coded up by hand, because mostly the ML is trained by feeding it examples of what humans did while following a very dumb heuristic.

There's no magic here. If you use ML to teach a computer how to sort through resumes, it will recommend you interview people with male, white-sounding names, because it turns out that's what your HR department already does. If you ask it what video a person like you wants to see next, it will recommend some political propaganda crap, because 50% of the time 90% of the people do watch that next, because they can't help themselves, and that's a pretty good success rate.

There's lots of talk about advancements in artificial intelligence or machine learning, but very little about their shortcomings and effects on society. Surrounded by hysteria, mistaken extrapolations, limited imagination and many more mistakes, we're distracted from thinking productively about our future.

The Bomb in the Garden, by Matthew Butterick

Now, you may say “hey, but the web has gotten so much better looking over 20 years.” And that’s true. But on the other hand, I don’t really feel like that’s the right benchmark, unless you think that the highest role of design is to make things pretty. I don’t.

I think of design excellence as a principle. A principle that asks this: Are you maximizing the possibilities of the medium?

That’s what it should mean. Be­cause other­wise it’s too easy to congratulate ourselves for doing nothing. Because tools & technologies are always getting better. They expand the possibilities for us. So we have to ask ourselves: are we keeping up?

We somehow think technology becomes better because it gets faster. But that is confusing technology consumption for technology innovation. Consumption simply allows us to do more of the same, while innovation augments us to do things that were previously impossible.

summing up 107

summing up is a recurring series of interesting articles, talks and insights on culture & technology that compose a large part of my thinking and work. Drop your email in the box below to get it – and much more – straight in your inbox.

The Best Way to Predict the Future is to Create It. But Is It Already Too Late? by Alan Kay

Albert Einstein's quote "We cannot solve our problems with the same levels of thinking that we used to create them" is one of my favorite quotes. I like this idea because Einstein is suggesting something qualitative. That it is not doing more of what we're doing. It means if we've done things with technology that have gotten us in a bit of a pickle, doing more things with technology at the same level of thinking is probably gonna make things worse.

And there is a corollary with this: if your thinking abilities are below threshold you are going to be in real trouble because you will not realize until you have done yourself in.

Virtually everybody in computing has almost no sense of human history and the context of where we are and where we are going. So I think of much of the stuff that has been done as inverse vandalism. Inverse vandalism is making things just because you can.

One important thing here is to not get trapped by our bad brains, by our thinking abilities. We're limited in this regard, and only tools, methods, habits and understanding can help us to learn to see and finally get to a higher level of thinking.

Don’t be seduced by the pornography of change, by Mark Ritson

Marketing is a fascinating discipline in that most people who practice it have no idea about its origins and foundations, little clue about how to do the job properly in the present, but unbounded enthusiasm to speculate about the future and what it will bring. If marketers became doctors they would spend their time telling patients not what ailed them, but showing them an article about the future of robotic surgery in the year 2030. If they took over as accountants they would advise clients to forget about their current tax returns because within 50 years income will become obsolete thanks to lasers and 3D printing.

There are probably two good reasons for this obsession with the future over the practical reality of the present. First, marketing has always managed to attract a significant proportion of people who are attracted to the shiny stuff. Second, ambitious and overstated projections in the future are fantastic at garnering headlines and hits but have the handy advantage of being impossible to fact check.

If your job is to talk about what speech recognition or artificial intelligence will mean for marketing then you have an inherent desire to make it, and you, as important as possible. Marketers take their foot from the brake pedal of reality and put all their pressure on the accelerator of horseshit in order to get noticed, and future predictions provide the ideal place to drive as fast as possible.

And this does not only apply to marketing. Many fields today, computing & technology included, are currently obsessing about a revolutionary potential that has always been vastly, vastly overhyped. The hype surrounding these topics is sometimes so pervasive that raising skepticism can often be seen as one's failure to recognize that the hype is deserved.

Design in the Era of the Algorithm, by Josh Clark

Let’s not codify the past. On the surface, you’d think that removing humans from a situation might eliminate racism or stereotypes or any very human bias. In fact, the very real risk is that we’ll seal our bias—our past history—into the very operating system itself.

Our data comes from the flawed world we live in. In the realms of hiring and promotion, the historical data hardly favors women or people of color. In the cases of predictive policing, or repeat-offender risk algorithms, the data is both unfair and unkind to black men. The data bias codifies the ugly aspects of our past.

Rooting out this kind of bias is hard and slippery work. Biases are deeply held—and often invisible to us. We have to work hard to be conscious of our unconscious—and doubly so when it creeps into data sets. This is a data-science problem, certainly, but it’s also a design problem.

The problem with data is not only the inherited bias in the data set, but also algorithms that treat data as unbiased facts and humans who believe in the objectivity of the results. Biases are only the extreme cases that make these problems visible; the deeper issue is that we prevent ourselves from interpreting and studying nature and thereby define our limits of interpretation.

summing up 106

summing up is a recurring series of interesting articles, talks and insights on culture & technology that compose a large part of my thinking and work. Drop your email in the box below to get it – and much more – straight in your inbox.

The "Next Big Thing" is a Room, by Steve Krouse

Our computers have lured us into a cage of our own making. We’ve reduced ourselves to disembodied minds, strained eyes, and twitching, clicking, typing fingertips. Gone are our arms and legs, back, torsos, feet, toes, noses, mouths, palms, and ears. When we are doing our jobs, our vaunted knowledge work, we are a sliver of ourselves. The rest of us hangs on uselessly until we leave the office and go home.

Worse than pulling us away from our bodies, our devices have ripped us from each other. Where are our eyes when we speak with our friends, walk down the street, lay in bed, drive our cars? We know where they should be, and yet we also know where they end up much of the time. The tiny rectangles in our pockets have grabbed our attention almost completely.

These days almost every room is equipped with electricity, light and buttons to control it. It's so common that we hardly trouble ourselves thinking about it. It is not a device we carry around in our pockets, have to charge up every night and buy a new version every two years – like a flashlight. Imagine how small and lonely a world like this would be, where everyone carries his own, personal flashlight, seeing only one thing at a time and having one hand always busy.

Machine Teaching, Machine Learning, and the History of the Future of Public Education, by Audrey Watters

Teaching machines were going to change everything. Educational television was going to change everything. Virtual reality was going to change everything. The Internet was going to change everything. The Macintosh computer was going to change everything. The iPad was going to change everything. And on and on and on.

Needless to say, movies haven’t replaced textbooks. Computers and YouTube videos haven’t replaced teachers. The Internet has not dismantled the university or the school house. Not for lack of trying, no doubt. And it might be the trying that we should focus on as much as the technology.

The transformational, revolutionary potential of these technologies has always been vastly, vastly overhyped. And it isn’t simply that it’s because educators or parents are resistant to change. It’s surely in part because the claims that marketers make are often just simply untrue.

The hype surrounding our technologies is sometimes so pervasive that raising skepticism can often be seen as one's failure to recognize that the hype is deserved. This is the game we're playing. It's no longer about the real transformational power, about real change & potential, but mostly about a superficial pop culture.

What We Actually Know About Software Development, and Why We Believe It’s True, by Greg Wilson

Think about how much has changed. Some things have changed beyond recognition; sadly, the way we build software hasn't. There is nothing you do day by day that wouldn't have been familiar to me 25 years ago. Yes, you're using more powerful machines, yes you're using browsers and all this other stuff. But the way you work day to day has not improved.

Most of software development today is based on myth, superstition or arrogance. And this won't change until we're willing to be humble enough to admit when we're wrong. Only then we can find out how the world actually works and do things based on that knowledge.

summing up 105 - the mother of all demos

Today, exactly 50 years ago, a man invented the future.

If you've been following my writing, talks and ideas you've certainly heard his name: Doug Engelbart.

On 9th December 1968, he and his team demonstrated the prototype of his vision at the Fall Joint Computer Conference in San Francisco in front of about 1,000 computer professionals.

This demo introduced so many key concepts we still use today: the computer mouse, windows, graphics, video conferencing, word processing, copy & paste, hypertext, revision control, a collaborative real-time editor and much more. No wonder it's also known as the Mother of all Demos.

What's so striking about Engelbart's demo however isn't how much has changed since then, but how many things have stayed the same.

To celebrate this somewhat special day, I want to deviate a bit from my usual format and highlight some of his key ideas which impress me to this day.

The Mother of all Demos, which I alluded to earlier, is certainly one of the most important pieces of our computer history. If you can spare some time this holiday season, I can only recommend watching parts of this demo. It was a jaw-dropping experience for me. And a testament to what can happen when you get a bunch of intelligent people together and ask them to invent the future.

The ABCs of Organizational Improvement is a framework I rely heavily on when working with clients. It depicts three types of basic activities which should be ongoing in any healthy business:

(A) Business as usual: Processes you can find in every business, including the core activities such as developing a product, manufacturing, marketing, sales etc. It is all about execution and carrying out today's strategy.

(B) Improving how we do that: Thinking about how to improve the ability to perform A. This includes training, hiring, adopting new tools & processes, workflows or bringing in external consultants.

(C) Improving how we improve: How can we improve how we improve? How can we get better at inventing better processes in B? It's this part most businesses struggle with, but at the same time it brings the most value. This kind of meta-thinking is the shift from incremental to exponential improvement and ultimately the advancement of the business as a whole.

Augmenting Human Intellect: A Conceptual Framework lays down Engelbart's fundamental vision. In there you can find his famous example of taping a pencil to a brick and thereby significantly slowing down the ability to write. When you make it harder to do the lower parts of an activity, it becomes almost impossible to do the higher parts of an activity – like exploring ideas, structuring your thoughts & ideas or distilling something to its essence. Our tools influence the thoughts we can think, and bad tools interfere with thinking well.

Engelbart's vision went much further, as he intended to augment human intellect and enable people to think in powerful new ways, to collectively solve urgent global problems. To really understand what he means by that, you have to forget today. You have to forget everything you know about computers. You have to forget everything you think you know about computers. His vision is not about computers, it's about us and the future of mankind:

Technology should not aim to replace humans, rather amplify human capabilities.

Engelbart’s vision & philosophy continues to influence many technologists today, myself included. I hope I could explain why.

summing up 104

summing up is a recurring series of interesting articles, talks and insights on culture & technology that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

People don't change, by Peter Gasston

People – I think – don't change that much. What changes over time are cultural differences and values, but people have the same goals, the same desires and the same urges.

Technology matches our desires, it doesn't make them. People haven't become more vain because now we have cameras. Cameras have been invented and they became popular because we've always been a bit vain, we've always wanted to see ourselves. It's just the technology was never in place to enable that expression of our characters before.

The more I study history the more I understand that people from different cultures, people from different historical periods... we're not exceptional, there's nothing exceptional about us, there's nothing exceptional about them. The technology might be new, but the way we react to it, the way we use it, is the same it always has been.

Whatever we think about ourselves, we aren't more intelligent than our ancestors. Neither were they more intelligent than we are. But technology and knowledge play their role in augmenting us – and that is what makes us better.

Education That Takes Us To The 22nd Century, by Alan Kay

When we get fluent in powerful ideas, they are like adding new brain tissue that nature didn't give us. It's worthwhile thinking about what it means to get fluent in something like calculus and to realize that a normal person fluent in calculus can outthink Archimedes. If you're fluent at reading you can cover more ground than anybody in the antiquity could in an oral culture.

So a good question for people who are dealing with computing is what if what's important about computing is deeply hidden? I can tell you as far as this one, most of the computing that is done in most of industry completely misses most of what's interesting about computing. They are basically at a first level of exposure to it and they're trying to optimize that. Think about that because that was okay fifty years ago.

Probably the most important thing I can urge on you today is to try and understand that computing is not exactly what you think it is. You have to understand this. What happened when the internet got done and a few other things back in the 70s or so was a big paradigm shift in computing and it hasn't spilled out yet. But if you're looking ahead to the 22nd century this is what you have to understand otherwise you're always going to be steering by looking in the rearview mirror.

If someone today could outthink Archimedes and anyone who is literate can cover more ground than any oral culture... What can someone do with a computer today? The most interesting point is that it isn't as much as we think. We keep mouthing platitudes about innovation and pretend we're much more advanced than our ancestors. But the more you look at what computing can really be about, the more pathetic everything we're doing right now sounds.

Why History Matters, by Audrey Watters

“Technology is changing faster than ever” – this is a related, repeated claim. It’s a claim that seems to be based on history, one that suggests that, in the past, technological changes were slow; now, they’re happening so fast and we’re adopting new technologies so quickly – or so the story goes – that we can no longer make any sense of what is happening around us, and we’re just all being swept along in a wave of techno-inevitability.

Needless to say, I don’t think the claim is true – or at the very least, it is a highly debatable one. Some of this, I’d argue, is simply a matter of confusing technology consumption for technology innovation. Some of this is a matter of confusing upgrades for breakthroughs – Apple releasing a new iPhone every year might not be the best rationale for insisting we are experiencing rapid technological change. Moreover, much of the pace of change can be accounted for by the fact that many new technologies are built atop – quite literally – pre-existing systems: railroads followed the canals; telegraphs followed the railroads; telephones followed the telegraphs; cable television followed the phone lines...

So why then does the history of tech matter? It matters because it helps us think about beliefs and practices and systems and institutions and ideology. It helps make visible, I’d hope, some of the things that time and familiarity has made invisible. It helps us think about context. It helps us think about continuity as much as change. And I think it helps us be more attuned to the storytelling and the myth-making that happens so frequently in technology and reform circles.

We're confusing technology consumption for technology innovation. Innovation augments us to do things that were previously impossible; consumption just allows us to do more of the same. Maybe better, faster or whatever, but still the same.

summing up 103

summing up is a recurring series on digital strategy topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Rethinking CS Education, by Alan Kay

If you want to get something done, the way you do it is not so much trying to convince somebody but to create a tribe that is a conspiracy. Because anthropologically that is what we are more than any other thing. We are tribal beings and we tend to automatically oppose things outside of our tribe, even if they're good ideas, because that isn't the way we think – in fact we don't think, we're not primarily thinking animals.

The theatrical part of this is lots bigger than we think, the limitations are much smaller than we think and the relationship we have with our heritage is that we are much more different than we think we are. I hate to see computing and computer science watered down to some terrible kind of engineering that the Babylonians might have failed at.

That is pathetic! And I'm saying it in this strong way because you need to realize that we're in the middle of a complete form of bullshit that has grown up out of the pop culture.

We're stuck in conversations around hypes and trending technological topics. At the same time our world gets ever more complex and throws ever more complex problems at us. I really hope that we can grow up soon and use the power the computer grants us to actually augment ourselves.

Neither Paper Nor Digital Does Active Reading Well, by Baldur Bjarnason

A recurring theme in software development is the more you dig into the research the greater the distance is between what actual research seems to say versus what the industry practices.

Develop a familiarity with, for example, Alan Kay’s or Douglas Engelbart’s visions for the future of computing and you are guaranteed to become thoroughly dissatisfied with the limitations of every modern OS. Reading up on hypertext theory and research, especially on hypertext as a medium, is a recipe for becoming annoyed at The Web. Catching up on usability research throughout the years makes you want to smash your laptop against the wall in anger. And trying to fill out forms online makes you scream ‘it doesn’t have to be this way!’ at the top of your lungs.

That software development doesn’t deal with research or attempts to get at hard facts is endemic to the industry.

It seems crazy to me that most other subjects look at their history, while computing mostly ignores the past, thinking that new is always better. The problem these days isn't how to innovate, but how to get society to adopt the good ideas that already exist.

If Software Is Eating the World, What Will Come Out the Other End? by John Battelle

So far, it’s mostly shit. Most of our society simply isn’t benefiting from this trend of software eating the world. In fact, most of them live in the very world that software ate.

The world is not just software. The world is physics, it’s crying babies and shit on the sidewalk, it’s opioids and ecstasy, it’s car crashes and Senate hearings, lovers and philosophers, lost opportunities and spinning planets around untold stars. The world is still real.

Software – data, code, algorithms, processing – software has dressed the world in new infrastructure. But this is a conversation, not a process of digestion. It is a conversation between the physical and the digital, a synthesis we must master if we are to avoid terrible fates, and continue to embrace fantastic ones.

Only those who know nothing of the history of technology believe that a technology is entirely neutral. It always has implications, positive and negative. And all too often we seem to ignore the downsides of this in our physical world. The world we live in, and the technology as well.

summing up 102

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

The Cult of the Complex, by Jeffrey Zeldman

In an industry that extols innovation over customer satisfaction, and prefers algorithm to human judgement (forgetting that every algorithm has human bias in its DNA), perhaps it should not surprise us that toolchains have replaced know-how.

Likewise, in a field where young straight white dudes take an overwhelming majority of the jobs (including most of the management jobs) it’s perhaps to be expected that web making has lately become something of a dick measuring competition.

It was not always this way, and it needn’t stay this way. If we wish to get back to the business of quietly improving people’s lives, one thoughtful interaction at a time, we must rid ourselves of the cult of the complex. Admitting the problem is the first step in solving it.

Solutions to many problems seem most brilliant when they appear most obvious. Simple even. But in many cases we throw everything we have against the wall and see what sticks. It's on us to recognize when we forget that our job is to solve business, client and most importantly human problems.

The Relativity of Wrong, by Isaac Asimov

In every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong.

The basic trouble, you see, is that people think that "right" and "wrong" are absolute; that everything that isn't perfectly and completely right is totally and equally wrong. However, I don't think that's so. It seems to me that right and wrong are fuzzy concepts.

A very interesting thought which reminds me very much of a short poem by Piet Hein: The road to wisdom? — Well, it's plain and simple to express: Err and err and err again but less and less and less.

Known Unknowns, by James Bridle

Technology does not emerge from a vacuum; it is the reification of the beliefs and desires of its creators. It is assembled from ideas and fantasies developed through evolution and culture, pedagogy and debate, endlessly entangled and enfolded. The belief in an objective schism between technology and the world is nonsense, and one that has very real outcomes.

Cooperation between human and machine turns out to be a more potent strategy than trusting to the computer alone.

This strategy of cooperation, drawing on the respective skills of human and machine rather than pitting one against the other, may be our only hope for surviving life among machines whose thought processes are unknowable to us. Nonhuman intelligence is a reality—it is rapidly outstripping human performance in many disciplines, and the results stand to be catastrophically destructive to our working lives. These technologies are becoming ubiquitous in everyday devices, and we do not have the option of retreating from or renouncing them. We cannot opt out of contemporary technology any more than we can reject our neighbors in society; we are all entangled.

As we envision, plan and build technology, a human bias will always be part of it. We can't just pass our responsibility to technology and bury our heads in the sand. The question we should and have to pose ourselves is a different one: how can we use and leverage technology as a tool, as a way to augment ourselves to do things that were previously impossible? I think collaboration and cooperation might be an answer.

summing up 101

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. Drop your email in the box below to get it – and much more – straight in your inbox.

The "Space" of Computing, by Weiwei Hsu

Today, we have been building and investing so much of our time into the digital world and we have forgotten to take a step back and take a look at the larger picture. Not only do we waste other people's time by making them addicted to this device world, we have also created a lot of waste in the real world. At the same time we're drowning in piles and piles of information because we never took the time to architect a system that enables us to navigate through them. We're trapped in these rectangular screens and we have often forgotten how to interact with the real world, with real humans. We have been building and hustling - but hey, we can also slow down and rethink how we want to dwell in both the physical world and the digital world.

At some point in the future we will leave this world and what we'll leave behind are spaces and lifestyles that we've shaped for our grandchildren. So I would like to invite you to think about what we want to leave behind, as we continue to build both digitally and physically. Can we be more intentional so that we shape and leave behind a more humane environment?

What we use a computer for on a daily basis is only a small part of what a computer could offer us. Instead, most of our conversation revolves around hypes and trending technological topics. What we desperately need is to take a step back, and figure out ways of thinking to tackle complex problems in an ever more complex world.

Pace Layering: How Complex Systems Learn and Keep Learning, by Stewart Brand

Fast learns, slow remembers. Fast proposes, slow disposes. Fast is discontinuous, slow is continuous. Fast and small instructs slow and big by accrued innovation and by occasional revolution. Slow and big controls small and fast by constraint and constancy.  Fast gets all our attention, slow has all the power.

All durable dynamic systems have this sort of structure. It is what makes them adaptable and robust.

The total effect of the pace layers is that they provide a many-leveled corrective, stabilizing feedback throughout the system.  It is precisely in the apparent contradictions between the pace layers that civilization finds its surest health.

We're too often thinking about the superficial, the fast, the shallow. And that is not necessarily a bad thing - but it will easily become one if it's the only thing we do. This concept is one of those that, once your brain has been exposed to it, you start seeing everywhere.

The Garden and the Stream: A Technopastoral, by Mike Caulfield

I find it hard to communicate with a lot of technologists anymore. It’s like trying to explain literature to someone who has never read a book. You’re asked “So basically a book is just words someone said written down?” And you say no, it’s more than that. But how is it more than that?

I am going to make the argument that the predominant form of the social web — that amalgam of blogging, Twitter, Facebook, forums, Reddit, Instagram — is an impoverished model for learning and research and that our survival as a species depends on us getting past the sweet, salty fat of “the web as conversation” and on to something more timeless, integrative, iterative, something less personal and less self-assertive, something more solitary yet more connected. I don’t expect to convince many of you, but I’ll take what I can get.

We can imagine a world that is so much better than this one. And more importantly we can build it. But in order to do that we have to think bigger than the next hype, the next buzzword and the next press release. We have to seriously interrogate the assumptions that are hidden in plain sight.

summing up 100

Wow. After sharing and discussing close to a thousand (964 to be precise) articles, talks, essays, videos and links, my summing up column turns 100.

I originally started this series a little over five years ago to keep track of what I was reading. Little did I know then how much this effort would help me build up a large part of my expertise, methods, strategies and way of thinking. I'm also quite relieved that in all that time, nobody asked me about the Swedish conspiracy.

To celebrate this somewhat special occasion, I want to deviate a bit from the usual format and highlight some key figures and favourite articles which impress me to this day.

Doug Engelbart, one of the fathers of personal computing, is definitely one of my personal heroes. He dedicated his life to the pursuit of developing technology to augment human intellect. He didn't see this as a technological problem though, but as a human problem, with technology falling out as part of a solution. His methods and models are brilliant and I rely heavily on them when working with clients.

When thinking about the future, you can't do it better than Alan Kay. Perhaps he is one of the best-known computing visionaries still around today and his reasoning is spot on when it comes to invention, innovation and strategies for succeeding in a digital world.

Neil Postman is one of my favourite media critics and funnily enough was never categorically against technology. But he warned us vigorously to be suspicious of technology. His predictions, cautions and propositions on how we become used by technology rather than make use of technology have been spot on so far – unfortunately.

There's often a thin line between madness and genius and Ted Nelson walks that line confidently. The original inventor of hypertext, internet pioneer and visionary saw the need for interconnected documents decades before the World Wide Web was born. And even now his vision is far from being complete – luckily the size of his ambition hasn't changed.

Bret Victor is one of the thinkers I respect most in our industry. His talks and essays have been highly influential to me. In the spirit of Doug Engelbart, Bret thinks deeply about how to create a new dynamic medium that shapes computing for the 21st century and allows us to see, understand and solve complex problems.

It's rare that I don't fall in love with talks by Maciej Cegłowski, talking mostly on the excesses and impacts of technology on society. His style of storytelling along with ingenious insights is just amazing.

Audrey Watters is mostly known for her prolific work on education technology issues and tech in general. The witty way she interrogates the stories about technology we tell ourselves – or have been told to us – is full of deep insight.

Finally for those of you who can't get enough, I had a hard time leaving these tidbits out – you're welcome: When We Build by Wilson Miner, Stephen Fry's The future of humanity and technology, Memento Product Mori: Of ethics in digital product design by Sebastian Deterding, The Web's Grain by Frank Chimero and last but not least John Cleese on Creativity In Management.


Thanks a lot for your continued support and feedback over the last years, it is heavily appreciated. You're very welcome to subscribe to this series and get it directly in your inbox along with some cool stuff that you won't find anywhere else on the site.

Lastly, if you have any feedback, critique, tips, ideas, comments or free bags of money, I'd be very glad to hear from you. Thank you.

summing up 99

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Measuring Collective IQ, by Doug Engelbart

We have the opportunity to change our thinking and basic assumptions about the development of computing technologies. The emphasis on enhancing security and protecting turf often impedes our ability to solve problems collectively. If we can re-examine those assumptions and chart a different course, we can harness all the wonderful capability of the systems that we have today.

People often ask me how I would improve the current systems, but my response is that we first need to look at our underlying paradigms—because we need to co-evolve the new systems, and that requires new ways of thinking. It’s not just a matter of “doing things differently,” but thinking differently about how to approach the complexity of problem-solving today.

in a world where we've grown multiple orders of magnitude in our computing capacity, where we spend millions of dollars on newer, faster tools and technology, we put little emphasis on how we can augment human thinking and problem solving. and as doug says, it is not about thinking differently about these problems, it is thinking differently about our ability to solve these problems.

The Seven Deadly Sins of Predicting the Future of AI, by Rodney Brooks

Suppose a person tells us that a particular photo is of people playing Frisbee in the park, then we naturally assume that they can answer questions like “what is the shape of a Frisbee?”, “roughly how far can a person throw a Frisbee?”, “can a person eat a Frisbee?”, “roughly how many people play Frisbee at once?”, “can a 3 month old person play Frisbee?”, “is today’s weather suitable for playing Frisbee?”. Today’s image labelling systems that routinely give correct labels, like “people playing Frisbee in a park” to online photos, have no chance of answering those questions. Besides the fact that all they can do is label more images and can not answer questions at all, they have no idea what a person is, that parks are usually outside, that people have ages, that weather is anything more than how it makes a photo look, etc., etc.

Here is what goes wrong. People hear that some robot or some AI system has performed some task. They then take the generalization from that performance to a general competence that a person performing that same task could be expected to have. And they apply that generalization to the robot or AI system.

Today’s robots and AI systems are incredibly narrow in what they can do. Human style generalizations just do not apply. People who do make these generalizations get things very, very wrong.

we are surrounded by hysteria about artificial intelligence, mistaken extrapolations, limited imagination and many more mistakes that distract us from thinking productively about the future. whether or not ai succeeds in the long term, it will nevertheless be developed and used with uncompromising efforts – regardless of any consequences.

Using Artificial Intelligence to Augment Human Intelligence, by Shan Carter and Michael Nielsen

Unfortunately, many in the AI community greatly underestimate the depth of interface design, often regarding it as a simple problem, mostly about making things pretty or easy-to-use. In this view, interface design is a problem to be handed off to others, while the hard work is to train some machine learning system.

This view is incorrect. At its deepest, interface design means developing the fundamental primitives human beings think and create with. This is a problem whose intellectual genesis goes back to the inventors of the alphabet, of cartography, and of musical notation, as well as modern giants such as Descartes, Playfair, Feynman, Engelbart, and Kay. It is one of the hardest, most important and most fundamental problems humanity grapples with.

the speed, performance or productivity of computers are mostly red herrings. the main problem is how we can leverage the computer as a tool. in different words, how can we use the computer to augment ourselves to do things that were previously impossible?

summing up 98

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

How To Be a Systems Thinker, by Mary Catherine Bateson

There has been so much excitement and sense of discovery around the digital revolution that we’re at a moment where we overestimate what can be done with AI, certainly as it stands at the moment.

One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it's willing to make projections when it hasn’t been provided with everything that would be relevant to those projections.

after all, computers are still tools we should take advantage of, to augment ourselves to do things that were previously impossible, to help us make our lives better. but all too often it seems to me that everyone is used by computers, for purposes that seem to know no boundaries.

Fantasies of the Future: Design in a World Being Eaten by Software, by Paul Robert Lloyd

Drawing inspiration from architectural practice, its successes and failures, I question the role of design in a world being eaten by software. When the prevailing technocratic culture permits the creation of products that undermine and exploit users, who will protect citizens within the digital spaces they now inhabit?

We need to take it upon ourselves to be more critical and introspective. This shouldn’t be too hard. After all, design is all about questioning what already exists and asking how it could be improved for the better.

Perhaps we need a new set of motivational posters. Rather than move fast and break things, perhaps slow down and ask more questions.

we need a more thoughtful, questioning approach to digital. how does a single technology, a tool or a digital channel help us improve? the answer is out there somewhere, but we have to stop ourselves more often to ask "why?".

Storytime, by Ron Gilbert

The grand struggle of creativity can often be about making yourself stupid again. It's like turning yourself into a child who views the world with wonderment and excitement.

Creating something meaningful isn't easy, it's hard. But that's why we should do it. If you ever find yourself being comfortable with what you're making or creating, then you need to push yourself. Push yourself out of your comfort zone and push yourself to the point of failure and then beyond.

When I was a kid, we would go skiing a lot. At the end of the day all the skiers were coming to the lodge and I used to think it was the bad skiers that were covered in snow and it was the good skiers that were all clean, with no snow on them. But it turns out the exact opposite is true: it was the good skiers that were covered in snow from pushing themselves, pushing themselves beyond the limits and into their breaking points, getting better and then pushing themselves harder. Creativity is the same thing. It's like you push hard, you push until you're scared and afraid, you push until you break, you push until you fall and then you get up and you do it again. Creativity is really a journey. It's a wonderful journey in that you start out as one person and end as another.

it's always a lot harder to create something meaningful than to just create something. but that's exactly the reason why you should do it. a great talk by one of my favourite game designers.

summing up 97

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Everything Easy is Hard Again, by Frank Chimero

So much of how we build websites and software comes down to how we think. The churn of tools, methods, and abstractions also signifies the replacement of ideology. A person must usually think in a way similar to the people who created the tools to successfully use them. It’s not as simple as putting down a screwdriver and picking up a wrench. A person needs to revise their whole frame of thinking; they must change their mind.

The new methods were invented to manage a level of complexity that is completely foreign to me and my work. It was easy to back away from most of this new stuff when I realized I have alternate ways of managing complexity. Instead of changing my tools or workflow, I change my design. It’s like designing a house so it’s easy to build, instead of setting up cranes typically used for skyscrapers. Beyond that, fancy implementation has never moved the needle much for my clients.

So, I thought it would be useful to remind everyone that the easiest and cheapest strategy for dealing with complexity is not to invent something to manage it, but to avoid the complexity altogether with a more clever plan.

a fancy implementation has never moved the needle much for my clients either. what has moved it, though, is building relationships and letting technology support that process. we are an increasingly digital society, yes, but that doesn't mean we have to let technology take over.

How To Become A Centaur, by Nicky Case

Human nature, for better or worse, doesn’t change much from millennia to millennia. If you want to see the strengths that are unique and universal to all humans, don’t look at the world-famous award-winners — look at children. Children, even at a young age, are already proficient at: intuition, analogy, creativity, empathy, social skills. Some may scoff at these for being “soft skills”, but the fact that we can make an AI that plays chess but not hold a normal five-minute conversation, is proof that these skills only seem “soft” to us because evolution’s already put in the 3.5 billion years of hard work for us.

So, if there’s just one idea you take away from this entire essay, let it be Mother Nature’s most under-appreciated trick: symbiosis.

Symbiosis shows us you can have fruitful collaborations even if you have different skills, or different goals, or are even different species. Symbiosis shows us that the world often isn’t zero-sum — it doesn’t have to be humans versus AI, or humans versus centaurs, or humans versus other humans. Symbiosis is two individuals succeeding together not despite, but because of, their differences. Symbiosis is the “+”.

zero sum games most often win our attention, but the vast majority of our interactions are positive sum: when you share, when you buy, when you learn, when you talk. similarly with technology and computers: we can only improve if we use technology to augment ourselves in order to allow for new, previously-impossible ways of thinking, of living, of being.

How To Save Innovation From Itself, by Alf Rehn

At the same time as we so happily create everything from artificial intelligences to putting "smart" into absolutely bloody everything – at the same time, there are still so many actual, real problems unsolved. I do not need a single more problem solved, every one of my actual problems has been solved. There is not a single thing I could even dream of wanting that hasn't been already created. Yes, I can upgrade. I can buy a slightly cooler car. I can buy slightly better clothes. I can buy slightly faster phones. But frankly I am just consuming myself into the grave, because I have an empty life.

In all this innovation bullshit, what has happened is that rather than look at true, meaningful change, we have turned innovation into one more bullshit phrase, into one more management buzzword. Do we actually have discussions about whether we're doing meaningful work or just work that happens to be paid at the moment? Regardless of what company we work in, we need to look at the products we create, the things we create, and say "yes, this can matter". But it cannot just matter to me, it needs to matter to someone else as well.

We have blind spots, we all have them. We all have our biases. It is acceptable perchance to have a bias as an individual. But when the entire community or an entire nation has a bias, this says we have not gone far enough.

we seem to spend so much talent, research, time, energy and money to create things that nobody needs, just because we feel we have to innovate somehow. and the problem isn't how to innovate or the innovation per se, but how to get society to adopt the good ideas that already exist.

summing up 96

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Inadvertent Algorithmic Cruelty, by Eric Meyer

Algorithms are essentially thoughtless. They model certain decision flows, but once you run them, no more thought occurs. To call a person “thoughtless” is usually considered a slight, or an outright insult; and yet, we unleash so many literally thoughtless processes on our users, on our lives, on ourselves.

this hits home on so many levels. we throw technology at people, hoping something will stick. instead, we should use the computer and algorithms to augment ourselves to do things that were previously impossible, to help us make our lives better. that is the sweet spot of our technology.

10 Timeframes, by Paul Ford

The time you spend is not your own. You are, as a class of human beings, responsible for more pure raw time, broken into more units, than almost anyone else. You are about to spend whole decades, whole centuries, of cumulative moments, of other people’s time. People using your systems, playing with your toys, fiddling with your abstractions. And I want you to ask yourself when you make things, when you prototype interactions, am I thinking about my own clock, or the user’s? Am I going to help someone make order in his or her life?

If we are going to ask people, in the form of our products, in the form of the things we make, to spend their heartbeats—if we are going to ask them to spend their heartbeats on us, on our ideas, how can we be sure, far more sure than we are now, that they spend those heartbeats wisely?

our technological capability changes much faster than our culture. we first create our technologies and then they change our society and culture. therefore we have a huge responsibility to look at things from the other person's point of view – and to do what's best for them. in other words, be considerate.

Finding the Exhaust Ports, by Jon Gold

Lamenting about the tech industry’s ills when self-identifying as a technologist is a precarious construction. I care so deeply about using the personal computer for liberation & augmentation. I’m so, so burned out by 95% of the work happening in the tech industry. Silicon Valley mythologizes newness, without stopping to ask “why?”. I’m still in love with technology, but increasingly with nuance into that which is for us, and that which productizes us.

Perhaps when Bush prophesied lightning-quick knowledge retrieval, he didn’t intend for that knowledge to be footnoted with Outbrain adverts. Licklider’s man-computer symbiosis would have been frustrated had it been crop-dusted with notifications. Ted Nelson imagined many wonderfully weird futures for the personal computer, but I don’t think gamifying meditation apps was one of them.

every day we make a big fuss about a seemingly new hype (ai, blockchain, vr, iot, cloud, ... what next?). as neil postman and others have cautioned us, technologies tend to become mythic. that is, perceived as if they were god-given, part of the natural order of things, gifts of nature, and not as artifacts produced in a specific political and historical context. and by that we completely fail to recognize how we can use technology to augment ourselves to do things that were previously impossible, to help us make our lives better.

summing up 95

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Legends of the Ancient Web, by Maciej Cegłowski

Radio brought music into hospitals and nursing homes, it eased the profound isolation of rural life, it let people hear directly from their elected representatives. It brought laughter and entertainment into every parlor, saved lives at sea, gave people weather forecasts for the first time.

But radio waves are just oscillating electromagnetic fields. They really don't care how we use them. All they want is to go places at the speed of light. It is hard to accept that good people, working on technology that benefits so many, with nothing but good intentions, could end up building a powerful tool for the wicked. But we can't afford to re-learn this lesson every time.

Technology interacts with human nature in complicated ways, and part of human nature is to seek power over others, and manipulate them. Technology concentrates power. We have to assume the new technologies we invent will concentrate power, too. There is always a gap between mass adoption and the first skillful political use of a medium. With the Internet, we are crossing that gap right now.

only those who know nothing about technological history believe that technology is entirely neutral. it always has a bias towards being used in certain ways and not others. a great comparison to what we're facing now with the internet.

Silicon Valley Is Turning Into Its Own Worst Fear, by Ted Chiang

In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind.

I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”

the problem is this: if you're never exposed to new ideas and contexts, if you grow up only being shown one way of thinking about businesses & technology and being told that there are no other ways to think about this, you grow up thinking you know what we're doing.

The resource leak bug of our civilization, by Ville-Matias Heikkilä

When people try to explain the wastefulness of today's computing, they commonly offer something I call "tradeoff hypothesis". According to this hypothesis, the wastefulness of software would be compensated for by flexibility, reliability, maintainability, and perhaps most importantly, cheap programming work.

I used to believe in the tradeoff hypothesis as well. However, during recent years, I have become increasingly convinced that the portion of true tradeoff is quite marginal. An ever-increasing portion of the waste comes from abstraction clutter that serves no purpose in final runtime code. Most of this clutter could be eliminated with more thoughtful tools and methods without any sacrifices.

we too often seem to adjust to the limitations of technology, instead of creating solutions for a problem with the help of technology.

summing up 94

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Some excerpts from recent Alan Kay emails

Socrates didn't charge for "education" because when you are in business, the "customer starts to become right". Whereas in education, the customer is generally "not right". Marketeers are catering to what people want, educators are trying to deal with what they think people need (and this is often not at all what they want).

Another perspective is to note that one of the human genetic "built-ins" is "hunting and gathering" – this requires resources to "be around", and is essentially incremental in nature. It is not too much of an exaggeration to point out that most businesses are very like hunting-and-gathering processes, and think of their surrounds as resources put there by god or nature for them. Most don't think of the resources in our centuries as actually part of a human-made garden via inventions and cooperation, and that the garden has to be maintained and renewed.

these thoughts are a pure gold mine. a fundamental problem for most businesses is that one cannot innovate under business objectives and one cannot accomplish business objectives under innovation. ideally, you need both, but not at the same time.

How a handful of tech companies control billions of minds every day, by Tristan Harris

When we talk about technology, we tend to talk about it as this blue sky opportunity. It could go any direction. And I want to get serious for a moment and tell you why it's going in a very specific direction. Because it's not evolving randomly. There's a hidden goal driving the direction of all of the technology we make, and that goal is the race for our attention. Because every new site or app has to compete for one thing, which is our attention, and there's only so much of it. And the best way to get people's attention is to know how someone's mind works.

A simple example is YouTube. YouTube wants to maximize how much time you spend. And so what do they do? They autoplay the next video. And let's say that works really well. They're getting a little bit more of people's time. Well, if you're Netflix, you look at that and say, well, that's shrinking my market share, so I'm going to autoplay the next episode. But then if you're Facebook, you say, that's shrinking all of my market share, so now I have to autoplay all the videos in the newsfeed before waiting for you to click play. So the internet is not evolving at random. The reason it feels like it's sucking us in the way it is is because of this race for attention. We know where this is going. Technology is not neutral, and it becomes this race to the bottom of the brain stem of who can go lower to get it.

we seem to have the notion that technology is always good. but that is simply not the case. every technology is always both a burden and a blessing. not either or, but this and that.

The world is not a desktop, by Mark Weiser

The idea, as near as I can tell, is that the ideal computer should be like a human being, only more obedient. Anything so insidiously appealing should immediately give pause. Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstanding, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer I need to talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the center of attention.

in a world where computers increasingly become human, they inevitably will become the center of attention. the exact opposite of what they should be: invisible and helping to focus our attention to ourselves and the people we live with.

summing up 93

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

The future of humanity and technology, by Stephen Fry

Above all, be prepared for the bullshit, as AI is lazily and inaccurately claimed by every advertising agency and app developer. Companies will make nonsensical claims like "our unique and advanced proprietary AI system will monitor and enhance your sleep" or "let our unique AI engine maximize the value of your stock holdings". Yesterday they would have said "our unique and advanced proprietary algorithms" and the day before that they would have said "our unique and advanced proprietary code". But let's face it, they're almost always talking about the most basic software routines. The letters A and I will become degraded and devalued by overuse in every field in which humans work. Coffee machines, light switches, Christmas trees will be marketed as AI proficient, AI savvy or AI enabled. But despite this inevitable opportunistic nonsense, reality will bite.

If we thought the Pandora's jar that ruined the utopian dream of the internet contained nasty creatures, just wait till AI has been overrun by the malicious, the greedy, the stupid and the maniacal. We sleepwalked into the internet age and we're now going to sleepwalk into the age of machine intelligence and biological enhancement. How do we make sense of so much futurology screaming in our ears?

Perhaps the most urgent need might seem counterintuitive. While the specialist bodies and institutions I've mentioned are necessary we need surely to redouble our efforts to understand who we humans are before we can begin to grapple with the nature of what machines may or may not be. So the arts and humanities strike me as more important than ever. Because the more machines rise, the more time we will have to be human and fulfill and develop to their uttermost, our true natures.

an outstanding lecture exploring the impact of technology on humanity by looking back at human history in order to understand the present and the future.

We're building a dystopia just to make people click on ads, by Zeynep Tufekci

We use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it's not that the people who run Facebook or Google are maliciously and deliberately trying to make the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter, it's the structures and business models they're building. And that's the core of the problem.

So what can we do? We need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.

no new technology has only a one-sided effect. every technology is always both a burden and a blessing. not either or, but this and that. what bothers me is that we seem to ignore the negative impact of new technologies, justifying this attitude with their positive aspects.

the bullet hole misconception, by daniel g. siegel

If you're never exposed to new ideas and contexts, if you grow up only being shown one way of thinking about the computer and being told that there are no other ways to think about this, you grow up thinking you know what we're doing. We have already fleshed out all the details, improved and optimized everything a computer has to offer. We celebrate alleged innovation and then delegate picking up the broken pieces to society, because it's not our fault – we figured it out already.

We have to tell ourselves that we haven't the faintest idea of what we're doing. We, as a field, haven't the faintest idea of what we're doing. And we have to tell ourselves that everything around us was made up by people that were no smarter than us, so we can change, influence and build things that make a small dent in the universe.

And once we understand that, only then might we be able to do what the early fathers of computing dreamed about: To make humans better – with the help of computers.

the sequel to my previous talk, the lost medium, on bullet holes in world war 2 bombers, page numbering, rotating points of view and how we can escape the present to invent the future.

summing up 92

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Crapularity Hermeneutics, by Florian Cramer

The problem of computational analytics is not only in the semantic bias of the data set, but also in the design of the algorithm that treats the data as unbiased fact, and finally in the users of the computer program who believe in its scientific objectivity.

From capturing to reading data, interpretation and hermeneutics thus creep into all levels of analytics. Biases and discrimination are only the extreme cases that make this mechanism most clearly visible. Interpretation thus becomes a bug, a perceived system failure, rather than a feature or virtue. As such, it exposes the fragility and vulnerabilities of data analytics. 

The paradox of big data is that it both affirms and denies this “interpretative nature of knowledge”. Just like the Oracle of Delphi, it is dependent on interpretation. But unlike the oracle priests, its interpretative capability is limited by algorithmics – so that the limitations of the tool (and, ultimately, of using mathematics to process meaning) end up defining the limits of interpretation. 

we're talking a lot about the advancement of computational analytics and artificial intelligence, but little about their shortcomings and effects on society. one of those is that for our technology to work perfectly, society has to dumb itself down in order to level the playing field between humans and computers. a very long essay, but definitely one of the best i've read this year.

Resisting the Habits of the Algorithmic Mind, by Michael Sacasas

Machines have always done things for us, and they are increasingly doing things for us and without us. Increasingly, the human element is displaced in favor of faster, more efficient, more durable, cheaper technology. And, increasingly, the displaced human element is the thinking, willing, judging mind. Of course, the party of the concerned is most likely the minority party. Advocates and enthusiasts rejoice at the marginalization or eradication of human labor in its physical, mental, emotional, and moral manifestations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a golden age of leisure. Critics meanwhile, and I count myself among them, struggle to articulate a compelling and reasonable critique of this scramble to outsource various dimensions of the human experience.

our reliance on machines to make decisions for us leads us to displace the most important human elements in favor of cheaper and faster technology. in doing so, however, we outsource meaning-making, moral judgement and feeling – which is what a human being is – to machines.

Your Data is Being Manipulated, by Danah Boyd

The tech industry is no longer the passion play of a bunch of geeks trying to do cool shit in the world. It’s now the foundation of our democracy, economy, and information landscape.

We no longer have the luxury of only thinking about the world we want to build. We must also strategically think about how others want to manipulate our systems to do harm and cause chaos.

we're past the point where developing fancy new technologies is a fun project for college kids. our technologies have real implications on the world, on our culture and society. nevertheless we seem to lack a moral framework for how technology should be allowed to alter society.

summing up 91

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

The Best Way to Predict the Future is to Issue a Press Release, by Audrey Watters

Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues, to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

we are making computers in all forms available, but we're far away from generating new thoughts or breaking up thought patterns. instead of augmenting humans with the use of computers as imagined by the fathers of early personal computing, our computers have turned out to be mind-numbing consumption devices rather than the bicycle for the mind that steve jobs envisioned.

Eliminating the Human, by David Byrne

I have a theory that much recent tech development and innovation over the last decade or so has an unspoken overarching agenda. It has been about creating the possibility of a world with less human interaction. This tendency is, I suspect, not a bug—it’s a feature.

Human interaction is often perceived, from an engineer’s mind-set, as complicated, inefficient, noisy, and slow. Part of making something “frictionless” is getting the human part out of the way.

But our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. “We” do not exist as isolated individuals. We, as individuals, are inhabitants of networks; we are relationships. That is how we prosper and thrive.

the computer claims sovereignty over the whole range of human experience, and supports its claim by showing that it “thinks” better than we can. the fundamental metaphorical message of the computer is that we become machines. our nature, our biology, our emotions and our spirituality become subjects of second order. but in order for this to work perfectly, society has to dumb itself down in order to level the playing field between humans and computers. what is most significant about this line of thinking is the dangerous reductionism it represents.

User Interface: A Personal View, by Alan Kay

That the printing press was the dominant force that transformed the hermeneutic Middle Ages into our scientific society should not be taken too lightly–especially because the main point is that the press didn’t do it just by making books more available, it did it by changing the thought patterns of those who learned to read.

I had always thought of the computer as a tool, perhaps a vehicle–a much weaker conception. But if the personal computer is a truly new medium then the very use of it would actually change the thought patterns of an entire civilization. What kind of a thinker would you become if you grew up with an active simulator connected, not just to one point of view, but to all the points of view of the ages represented so they could be dynamically tried out and compared?

the tragic irony is that alan kay assumed people would be smart enough to try out and see different points of view. but in reality, people stick rigidly to the point of view they learned and consider all others to be only noise or worse.

summing up 90

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Memento Product Mori: Of ethics in digital product design, by Sebastian Deterding

Why, especially for us in the digital industry – although we are automating away more and more and more of our work and we're becoming wealthier and wealthier by every measure – do we feel like we're more and more short of time, overwhelmed and overworked? Or to put the question differently: Do you remember when email was fun?

The weird hard truth is: this is us. We, the digital industry, the people that are working in it are the ones who make everything, everything in our environment and work life ever more connected, fast, smooth, compelling, addicting even. The fundamental ethical contradiction for us is that we, the very people who suffer the most and organize the most against digital acceleration, are the very ones who further it.

a great talk challenging us to reflect on the moral dimensions of our work, especially in the digital product world.

Driverless Ed-Tech: The History of the Future of Automation in Education, by Audrey Watters

“Put me out of a job.” “Put you out of a job.” “Put us all out of work.” We hear that a lot, with varying levels of glee and callousness and concern. “Robots are coming for your job.”

We hear it all the time. To be fair, of course, we have heard it, with varying frequency and urgency, for about 100 years now. “Robots are coming for your job.” And this time – this time – it’s for real.

I want to suggest that this is not entirely a technological proclamation. Robots don’t do anything they’re not programmed to do. They don’t have autonomy or agency or aspirations. Robots don’t just roll into the human resources department on their own accord, ready to outperform others. Robots don’t apply for jobs. Robots don’t “come for jobs.” Rather, business owners opt to automate rather than employ people. In other words, this refrain that “robots are coming for your job” is not so much a reflection of some tremendous breakthrough (or potential breakthrough) in automation, let alone artificial intelligence. Rather, it’s a proclamation about profits and politics. It’s a proclamation about labor and capital.

a brilliant essay on automation, algorithms and robots, and why the ai revolution isn't coming. not because the machines have taken over, but because the people who built them have.

Personal Dynamic Media, by Alan Kay & Adele Goldberg

“Devices” which variously store, retrieve, or manipulate information in the form of messages embedded in a medium have been in existence for thousands of years. People use them to communicate ideas and feelings both to others and back to themselves. Although thinking goes on in one’s head, external media serve to materialize thoughts and, through feedback, to augment the actual paths the thinking follows. Methods discovered in one medium provide metaphors which contribute new ways to think about notions in other media. For most of recorded history, the interactions of humans with their media have been primarily nonconversational and passive in the sense that marks on paper, paint on walls, even “motion” pictures and television, do not change in response to the viewer’s wishes.

Every message is, in one sense or another, a simulation of some idea. It may be representational or abstract. The essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium itself, can be all other media if the embedding and viewing methods are sufficiently well provided. Moreover, this new “metamedium” is active—it can respond to queries and experiments—so that the messages may involve the learner in a two-way conversation. This property has never been available before except through the medium of an individual teacher. We think the implications are vast and compelling.

this great essay from 1977 reads so much like a description of what we do these days that it seems unexceptional – which makes it so exceptional. at the same time, however, it thinks so much further – which also makes it quite sad to read.

summing up 89

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox.

Information Underload, by Mike Caulfield

For many years, the underlying thesis of the tech world has been that there is too much information and therefore we need technology to surface the best information. In the mid 2000s, that technology was pitched as Web 2.0. Nowadays, the solution is supposedly AI.

I’m increasingly convinced, however, that our problem is not information overload but information underload. We suffer not because there is just too much good information out there to process, but because most information out there is low quality slapdash takes on low quality research, endlessly pinging around the spin-o-sphere.

we certainly have issues creating the right filters for valuable content, but it also seems to me that it was never easier to create valuable content – and never harder to find it. one reason i publish this ongoing series.

The Shock of Inclusion, by Clay Shirky

To the question "How is the Internet changing the way we think?", the right answer is "Too soon to tell." This isn't because we can't see some of the obvious effects already, but because the deep changes will be manifested only when new cultural norms shape what the technology makes possible.

The Internet's primary effect on how we think will only reveal itself when it affects the cultural milieu of thought, not just the behavior of individual users. We will not live to see what use humanity makes of a medium for sharing that is cheap, instant, and global. We are, however, the people who are setting the earliest patterns for this medium. Our fate won't matter much, but the norms we set will.

there is a vast difference between a tool and a medium. we make use of tools to improve a single capability, but a medium changes a whole culture. for example, a website as a tool might enable you to present your business on the web; to prepare a business for the next decade, however, a digital transformation is needed, which includes tools like a website, automation and digital communication channels.

Is there an Artificial God? by Douglas Adams

Imagine a puddle waking up one morning and thinking, 'This is an interesting world - an interesting hole I find myself in - fits me rather neatly, doesn't it? In fact it fits me staggeringly well, must have been made to have me in it!' This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be alright, because this world was meant to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.

There are some oddities in the perspective with which we see the world. The fact that we live at the bottom of a deep gravity well, on the surface of a gas covered planet going around a nuclear fireball 90 million miles away and think this to be normal is obviously some indication of how skewed our perspective tends to be, but we have done various things over intellectual history to slowly correct some of our misapprehensions.

So, my argument is that as we become more and more scientifically literate, it's worth remembering that the fictions with which we previously populated our world may have some function that it's worth trying to understand and preserve the essential components of, rather than throwing out the baby with the bath water; because even though we may not accept the reasons given for them being here in the first place, it may well be that there are good practical reasons for them, or something like them, to be there.

although this speech goes much further than the topics i discuss here, it's a very profound idea. unknowingly we often make up stories about why our products and websites work or fail. regardless of whether we accept these stories as true or not, we can always find some practical reasons in them we should adopt while looking for the truth.

summing up 88

summing up is a recurring series on how we can make sense of computers. drop your email in the box below to get it straight in your inbox or find previous editions here.

The Myth of a Superhuman AI, by Kevin Kelly

We don’t call Google a superhuman AI even though its memory is beyond us, because there are many things we can do better than it. These complexes of artificial intelligences will for sure be able to exceed us in many dimensions, but no one entity will do all we do better. It’s similar to the physical powers of humans. The industrial revolution is 200 years old, and while all machines as a class can beat the physical achievements of an individual human, there is no one machine that can beat an average human in everything he or she does.

I understand the beautiful attraction of a superhuman AI god. It’s like a new Superman. But like Superman, it is a mythical figure. However myths can be useful, and once invented they won’t go away. The idea of a Superman will never die. The idea of a superhuman AI Singularity, now that it has been birthed, will never go away either. But we should recognize that it is a religious idea at this moment and not a scientific one. If we inspect the evidence we have so far about intelligence, artificial and natural, we can only conclude that our speculations about a mythical superhuman AI god are just that: myths.

probably my most shared article this month and some very wise words indeed. what bugs me most however is that artificial intelligence (ai) seems to displace intelligence augmentation (ia). we try to make computers smarter, but we completely forget about making humans smarter – with the help of computers.

How to Invent the Future, by Alan Kay

As computing gets less and less interesting, its way of accepting and rejecting things gets more and more mundane. This is why you look at some of these early systems and think why aren't they doing it today? Well, because nobody even thinks about that that's important. Come on, this is bullshit, but nobody is protesting except old fogeys like me, because I know it can be better. You need to find out that it can be better. That is your job. Your job is not to agree with me. Your job is to wake up, find ways of criticizing the stuff that seems normal. That is the only way out of the soup.

it seems the more advanced our hardware and technology become, the less we innovate. i think one part of why we had so much innovation in the early days of computing was that there were people working on it who were musicians, poets, biologists, physicists or historians, trying to make sense of this new medium to solve their problems. an argument i proposed in my talk the lost medium last year.

The Pattern-Seeking Fallacy, by Jason Cohen

When an experiment produces a result that is highly unlikely to be due to chance alone, you conclude that something systematic is at work. But when you’re “seeking interesting results” instead of performing an experiment, highly unlikely events will necessarily happen, yet still you conclude something systematic is at work.

The fallacy is that you’re searching for a theory in a pile of data, rather than forming a theory and running an experiment to support or disprove it.

in the noise of randomness in our world we often find patterns. look at enough clouds, trees or rocks and you're bound to find a shape like a face, an animal or a familiar object. the problem is this: when we look at enough random data we'll find a pattern to our liking while at the same time discarding plenty of valid results that just don't fit this pattern.
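to make this concrete, here's a minimal python sketch of my own – an illustration under my own assumptions, not something from jason cohen's article. it mines a pile of purely random "metrics" for correlations with an equally random outcome; the 0.35 cutoff is only a rough stand-in for a p < 0.05 threshold at this sample size, and by chance alone a handful of metrics will always clear it.

```python
# A minimal sketch of the pattern-seeking fallacy: sift through enough
# random, meaningless "metrics" and some will look significantly
# correlated with an outcome purely by chance.
import random

random.seed(42)

N_SAMPLES = 30     # observations per metric
N_METRICS = 200    # number of unrelated metrics we "mine" for patterns

outcome = [random.gauss(0, 1) for _ in range(N_SAMPLES)]

def correlation(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# "Seek interesting results": keep every metric whose correlation with
# the outcome looks strong, even though every metric is pure noise.
hits = 0
for _ in range(N_METRICS):
    metric = [random.gauss(0, 1) for _ in range(N_SAMPLES)]
    if abs(correlation(metric, outcome)) > 0.35:  # roughly p < 0.05 for n = 30
        hits += 1

print(f"{hits} of {N_METRICS} random metrics look 'systematic'")
# Expect around 5% false discoveries: patterns found, nothing real behind them.
```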

summing up 87

summing up is a recurring series on topics & insights on how we can make sense of computers that compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox or find previous editions here.

How Technology Hijacks People’s Minds, by Tristan Harris

The ultimate freedom is a free mind, and we need technology to be on our team to help us live, feel, think and act freely.

We need our smartphones, notifications screens and web browsers to be exoskeletons for our minds and interpersonal relationships that put our values, not our impulses, first. People’s time is valuable. And we should protect it with the same rigor as privacy and other digital rights.

the way we use, create and foster technology today will be looked back on the same way we now look back at the use of asbestos in walls & floors or at naive cigarette smoking. creating useful technology is not about creating a need in the user, but about creating things that are good for the user.

Build a Better Monster: Morality, Machine Learning, and Mass Surveillance, by Maciej Cegłowski

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we're good people. We like freedom. How could we have built tools that subvert it?

We need a code of ethics for our industry, to guide our use of machine learning, and its acceptable use on human beings. Other professions all have a code of ethics. Librarians are taught to hold patron privacy, doctors pledge to “first, do no harm”. Lawyers, for all the bad jokes about them, are officers of the court and hold themselves to high ethical standards.

Meanwhile, the closest we’ve come to a code of ethics is “move fast and break things”. And look how well that worked.

the tools we shape, shape us and create a new world. but technology and ethics aren't easy to separate – that new world doesn't necessarily have to be a better world for all of us. maybe just for some.

Is it really "Complex"? Or did we just make it "Complicated"? by Alan Kay

Even a relatively small clipper ship had about a hundred crew, all superbly trained whether it was light or dark. And that whole idea of doing things has been carried forward for instance in the navy. If you take a look at a nuclear submarine or any other navy vessel, it's very similar: a highly trained crew, about the same size of a clipper. But do we really need about a hundred crew, is that really efficient?

The Airbus 380 and the biggest 747 can be flown by two people. How can that be? Well, the answer is you just can’t have a crew of about a hundred if you’re gonna be in the airplane business. But you can have a crew of about a hundred in the submarine business, whether it’s a good idea or not. So maybe these large programming crews that we have actually go back to the days of machine code, but might not have any place today.

Because today – let's face it – we should be just programming in terms of specifications or requirements. So how many people do you actually need? What we need is the number of people it takes to actually put together a picture of what the actual goals and requirements of this system are, from the vision that led to the desire to do that system in the first place.

much of our technology, our projects and our ideas comes down to focusing on everything but the actual requirements and the original problem. nevertheless it doesn't matter how exceptional a map you can draw if someone asks for directions to the wrong destination.

summing up 86

summing up is a recurring series on topics & insights on how we can make sense of computers that compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox or find previous editions here.

Twenty-Five Zeros, by Robert C. Martin

The interesting thing about where we are now, after 25 orders of magnitude in improvement in hardware, is that our software has improved by nothing like that. Maybe not even by one order of magnitude, possibly not even at all.

We go through lots of heat and lots of energy to invent new technologies that are not new technologies. They're just new reflections, new projections of old technologies. Our industry is in some sense caught in a maelstrom, in a whirlpool, from where it cannot escape. All the new stuff we do isn't new at all. It's just recycled, old stuff and we claim it's better because we've been riding a wave of 25 orders of magnitude. The real progress has not been in software, it has been in hardware. In fact there's been virtually no real, solid innovation in the fundamental technology of software. So as much as software technology changes in form, it changes very little in essence.

a very interesting talk built on the argument that hardware has advanced by extraordinary amounts, while software hasn't kept pace at all. programming, our technologies and architectures are basically still the same as in the early days of computing, only ever returning as recycled reflections, powered by improved hardware. my talk the lost medium last year followed a similar line of thought.

Step Off This Hurtling Machine, by Alex Feyerke

Today, we're similarly entwined with our networks and the web as we are with nature. Clearly, they're not as crucial as the plants that produce our oxygen, but the networks are becoming increasingly prevalent. They've become our nervous system, our externalised memory, and they will only ever grow denser, connecting more people and more things.

The network is the ultimate human tool and in time it will become utterly inseparable from us. We will take it with us when we eventually leave for other planets, and it will outlast many of the companies, countries, religions, and philosophies we know today. The network is never going away again.

I wish for a cultural artefact that will easily convey this notion today, that will capture the beauty and staggering opportunity of this human creation, that will make abundantly clear just how intertwined our fates are. To make clear that it is worth preserving, improving and cherishing. It's one of the few truly global, species-encompassing accomplishments that has the power to do so much for so many, even if they never have the power to contribute to it directly.

But to get there, we must not only build great tools, we must build a great culture. We will have achieved nothing if our tools are free, open, secure, private and decentralised if there is no culture to embrace and support these values.

the more technology gets entwined with humanity, the more important it is to not only see the technological benefits, but also the impacts it has on our society and culture. a sobering view on the developer community.

razzle dazzle websites, by yours truly

we're not really helping our users to find what they were looking for. in my opinion, all websites should have one and only one call to action and the whole website should support and build up to that.

your website is not about you, it is about how you can help your clients. optimize for that.

my take on what world war 1 camouflage has to do with ad-filled, chaotic websites and how we can improve.

summing up 85

summing up is a recurring series on topics & insights on how we can make sense of computers that compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox or find previous editions here.

Admiral Shovel and the Toilet Roll (transcript), by James Burke

In order to rectify the future I want to spend most of my time looking at the past because there’s nowhere else to look: (a) because the future hasn’t happened yet and never will, and (b) because almost all the time in any case the future is not really much more than the past with extra bits attached.

To predict you extrapolate on what’s there already. We predict the future from the past, working within the local context from within the well-known box, which may be why the future has so often in the past been a surprise. I mean, James Watt’s steam engine was just supposed to drain mines. The printing press was just supposed to print a couple of Bibles. The telephone was invented by Alexander Graham Bell just to teach deaf people to talk. The computer was made specifically to calculate artillery shell trajectories. Viagra was just supposed to be for angina. I mean, what else?

current technology is on a path to fundamentally change how our society operates. nevertheless we fail to predict the impact of technology on our society and culture. an excellent argument for the importance of an interdisciplinary approach to innovation in technology.

Thought as a Technology, by Michael Nielsen

It requires extraordinary imagination to conceive new forms of visual meaning. Many of our best-known artists and visual explorers are famous in part because they discovered such forms. When exposed to that work, other people can internalize those new cognitive technologies, and so expand the range of their own visual thinking.

Images such as these are not natural or obvious. No-one would ever have these visual thoughts without the cognitive technologies developed by Picasso, Edgerton, Beck, and many other pioneers. Of course, only a small fraction of people really internalize these ways of visual thinking. But in principle, once the technologies have been invented, most of us can learn to think in these new ways.

a marvellous article on how user interfaces enable new ways of thinking about the world. technological progress always happens in a fixed context and is almost always a form of optimization. a technological innovation, however, would have to happen outside of this given, fixed context and its existing rules.

The Long Web, by Jeremy Keith

Next time somebody says to you, “The internet never forgets”, just call bullshit on that. It’s absolute bollocks! Look at the data. The internet forgets all the time. The average lifespan of a web page is months, and yet people are like, “Oh, you’ve got to be careful what you put online, it’ll be there forever: Facebook never forgets, Google never forgets.” No, I would not entrust our collective culture, our society’s memory to some third party servers we don’t even know.

What we need is thinking about our culture, about our society, about our preserving what we’re putting online, and that’s kind of all I ask of you, is to think about The Long Web, to think about the long term consequences of what we’re doing because I don’t think we do it enough.

It isn’t just about what we’re doing today. We are building something greater than the Library of Alexandria could ever have been and that is an awesome—in the true sense of the word—responsibility.

with the web we're building something greater than the library of alexandria. to do this well we have to build our sites for the long haul. it’s something we don’t think about enough in the rush to create the next thing on the web.

summing up 84

summing up is a recurring series on topics & insights on user experience and how we can make sense of computers that compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox or find previous editions here.

As We May Link, by Jeremy Keith

The web is just twenty years old and I’m not sure that we have yet come to terms with the power that this new medium grants us. When we create websites, it’s all too easy for us to fall into old patterns of behaviour and treat our creations as independent self-contained islands lacking in outbound links. But that’s not the way the web works. The sites we build should not be cul-de-sacs for the inquisitive visitors who have found their way to our work by whatever unique trails they have followed. We should recognise that when we design and publish information on the humblest homepage or the grandest web app, we are creating connections within a much larger machine of knowledge, a potential Turing machine greater than any memex or calculus ratiocinator.

this is such a powerful idea i've been referring to a lot recently. the computer and the web are powerful tools which could fundamentally amplify our human capabilities. i am only afraid that we're not able to see and grasp the big picture yet.

Error Messages Are Evil, by Don Norman

Our technology is designed by technologists who know what is good for that technology, namely highly precise, accurate, detailed information. Well, that may be good for machines, but what about what is good for people? People are bad at precision and accuracy. At monitoring dull stuff for long periods. Force us to do those things, to act like machines, and of course we will fail. You call it human error: I call it machine error, or if you prefer, bad design.

too often we punish our users for not being able to predict the system's design, be it a website, an app or a program. but make no mistake, this is not about eliminating feedback from the system. when needed, feedback should become collaborative rather than confrontational – human-computer interaction, not confrontation.

Wobbly Tables and the Problem with Futurism, by Philip Dhingra

I’m amazed by all the great advances that have been made in the past 15 years, but I’m even more amazed by areas that haven’t changed. But perhaps the silver lining in the Banality of Futurism is that the room for growth won’t be in fixing life’s inconveniences, but rather in the human condition.

a very interesting thought on how acclimated we are to quirks and nuisances in our user interfaces. the future will probably be as awkward as the times we live in today. i've referred to a similar issue in a previous episode.

summing up 83

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox or find previous editions here.

The Web Is a Customer Service Medium, by Paul Ford

The web seemed to fill all niches at once. It was surprisingly good at emulating a TV, a newspaper, a book, or a radio. Which meant that people expected it to answer the questions of each medium, and with the promise of advertising revenue as incentive, web developers set out to provide those answers. As a result, people in the newspaper industry saw the web as a newspaper. People in TV saw the web as TV, and people in book publishing saw it as a weird kind of potential book. But the web is not just some kind of magic all-absorbing meta-medium. It's its own thing. And like other media it has a question that it answers better than any other. That question is:

Why wasn't I consulted?

Humans have a fundamental need to be consulted, engaged, to exercise their knowledge (and thus power), and no other medium that came before has been able to tap into that as effectively.

every form of media has a question that it is fundamentally answering, something i alluded to a few episodes ago. you might think you already understand the web and what users want, but in fact the web is neither a publishing medium nor a magic all-absorbing meta-medium. it's its own thing.

Superintelligence: The Idea That Eats Smart People, by Maciej Cegłowski

AI risk is string theory for computer programmers. It's fun to think about, interesting, and completely inaccessible to experiment given our current technology. You can build crystal palaces of thought, working from first principles, then climb up inside them and pull the ladder up behind you.

People who can reach preposterous conclusions from a long chain of abstract reasoning, and feel confident in their truth, are the wrong people to be running a culture.

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

there is this idea that with nascent ai technology, computers are going to become superintelligent and subsequently end all life on earth - or some variation of this theme. but the real threat is a different one. these seductive, apocalyptic beliefs keep people from doing work that actually makes a difference and let them ignore the harm caused by today's machine learning algorithms.

Epistemic learned helplessness, by Scott Alexander

When I was young I used to read pseudohistory books; Immanuel Velikovsky's Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn't believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn't so much the lucidity of the consensus view so much as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas.

I guess you could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don't even try.

the smarter someone is, the easier it is for them to rationalize and convince you of ideas that sound true even when they're not. epistemic learned helplessness is one of those concepts that's so useful you'll wonder how you did without it.

summing up 82

summing up is my recurring series on topics & insights that compose a large part of my thinking and work. please find previous editions here or subscribe below to get them straight in your inbox.

When We Invented the Personal Computer, by Steve Jobs

A few years ago I read a study – I believe it was in Scientific American – about the efficiency of locomotion for various species on the earth. The study determined which species was the most efficient, in terms of getting from point A to point B with the least amount of energy exerted. The condor won. Man made a rather unimpressive showing about 1/3 of the way down the list.

But someone there had the insight to test man riding a bicycle. Man was twice as efficient as the condor! This illustrated man's ability as a tool maker. When man created the bicycle, he created a tool that amplified an inherent ability. That's why I like to compare the personal computer to the bicycle. The personal computer is a 21st century bicycle if you will, because it's a tool that can amplify a certain part of our inherent intelligence.

i just love steve jobs’ idea of comparing the computer to a bicycle for the mind. so much, actually, that i used it in my talk the lost medium last year. we humans are tool builders and we can fundamentally amplify our capabilities with tools, tools that take us far beyond our inherent abilities. nevertheless, we're only at the early stages of this particular tool. we've already seen enormous changes around us, but i think they will be nothing compared to what's coming in the next hundred years.

Teaching Children Thinking, by Seymour Papert

The phrase “technology and education” usually means inventing new gadgets to teach the same old stuff in a thinly disguised version of the same old way. Moreover, if the gadgets are computers, the same old teaching becomes incredibly more expensive and biased towards its dullest parts, namely the kind of rote learning in which measurable results can be obtained by treating the children like pigeons in a Skinner box.

there is this notion that our problems can easily be solved with more technology. in doing so, we're throwing technology against a wall to see what sticks rather than asking what the technology could offer and whom it could help. papert is talking about education, and while that is a vital part of our society, his thinking applies to so much more.

The Computer for the 21st Century, by Mark Weiser

The idea of integrating computers seamlessly into the world at large runs counter to a number of present-day trends. "Ubiquitous computing" in this context does not just mean computers that can be carried to the beach, jungle or airport. Even the most powerful notebook computer, with access to a worldwide information network, still focuses attention on a single box. By analogy to writing, carrying a super-laptop is like owning just one very important book. Customizing this book, even writing millions of other books, does not begin to capture the real power of literacy.

Furthermore, although ubiquitous computers may employ sound and video in addition to text and graphics, that does not make them "multimedia computers." Today's multimedia machine makes the computer screen into a demanding focus of attention rather than allowing it to fade into the background.

computers should fit the human environment instead of forcing humans to enter theirs. mobile computing in particular is a major paradigm shift, but right now we're becoming slaves to our own devices. weiser lays out some very interesting ideas on how computers could integrate into our environment and enhance our abilities there.

summing up 81

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

Computers for Cynics, by Ted Nelson

The computer world deals with imaginary, arbitrary, made up stuff that was all made up by somebody. Everything you see was designed and put there by someone. But so often we have to deal with junk and not knowing whom to blame, we blame technology.

Everyone takes the structure of the computer world as god-given. In a field reputedly so innovative and new, the computer world is really a dumbed down imitation of the past, based on ancient traditions and modern oversimplification that people mistake for the computer itself.

it is quite easy to get the idea that the current state of the computer world is the climax of our great progress. and it's really not. ted nelson, one of the founding fathers of personal computing and the man who invented hypertext, presents his cynical, amusing and remarkably astute overview of the history of the personal computer - after all, he's been there since the beginning. it is especially interesting in contrast with our current view of computers, information and user experience.

Deep-Fried Data, by Maciej Cegłowski

A lot of the language around data is extractive. We talk about data processing, data mining, or crunching data. It’s kind of a rocky ore that we smash with heavy machinery to get the good stuff out.

In cultivating communities, I prefer gardening metaphors. You need the right conditions, a propitious climate, fertile soil, and a sprinkling of bullshit. But you also need patience, weeding, and tending. And while you're free to plant seeds, what you wind up with might not be what you expected.

This should make perfect sense. Human cultures are diverse. It's normal that there should be different kinds of food, music, dance, and we enjoy these differences. But online, our horizons narrow. We expect domain experts and programmers to be able to meet everyone's needs, sight unseen. We think it's normal to build a social network for seven billion people.

we hear a lot about artificial intelligence, big data and deep learning these days. they all refer to the same generic approach: train a computer on lots of data and let it learn to recognize structure. these techniques are effective, no doubt, but what we often overlook is that you only get out what you put in.

Programming and Scaling, by Alan Kay

Leonardo could not invent a single engine for any of his vehicles. Maybe the smartest person of his time, but he was born in the wrong time. His IQ could not transcend his time. Henry Ford was nowhere near Leonardo, but he happened to be born in the right century, a century in which people had already done a lot of work in making mechanical things.

Knowledge, in many many cases, trumps IQ. Why? This is because there are certain special people who invent new ways of looking at things. Henry Ford was powerful because Isaac Newton changed the way Europe thought about things. One of the wonderful things about the way knowledge works is if you can get a supreme genius to invent calculus, those of us with more normal IQs can learn it. So we're not shut out from what the genius does. We just can't invent calculus by ourselves, but once one of these guys turns things around, the knowledge of the era changes completely.

we often ignore the context we create a digital product in, yet that context defines the space of possible solutions. and not only that, it also defines the borders of our world. what is so interesting about this thought is that you don't need a massive brain; you need to be able to see and connect ideas in order to advance humanity.

summing up 80

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

the lost medium, by yours truly

Our history shows that if we free ourselves from the idea that the current state of the computer world is the climax of our great progress, we can almost magically give rise to new technologies, ideas and visions that not only amplify humans but also produce tremendous wealth for our society. We've ended up focusing too much on technology, on things, on devices and not enough on ideas, on the medium computer. The computer world is not yet finished and many ideas for the collective good are waiting to be discovered. We shape our tools and thereafter our tools shape us.

a talk on how we can use the medium computer to augment our human capabilities. it is the summing up (pun intended) of my research over the last few years and the foundation for things yet to come. i don't want to add much commentary here as i already said everything in the talk itself. feel free to share it, and if you have questions, feedback or critique, i'd love to hear from you!

The Anti-Mac User Interface, by Don Gentner & Jakob Nielsen

The GUIs of contemporary applications are generally well designed for ease of learning, but there often is a trade-off between ease of learning on one hand, and ease of use, power, and flexibility on the other hand.

Today's children will spend a large fraction of their lives communicating with computers. We should think about the trade-offs between ease of learning and power in computer-human interfaces. If there were a compensating return in increased power, it would not be unreasonable to expect a person to spend several years learning to communicate with computers, just as we now expect children to spend 20 years mastering their native language.

is it better to have a website, an app, a computer that is easy to use, or one that offers high performance but is difficult and time-consuming to learn? of course, simpler user interfaces may be easier to learn and use, but accomplishing difficult tasks with them is hard work. high-performance interfaces, on the other hand, call for considerably more skill, but the ratio of results to effort is dramatically higher. and this is a very interesting perspective: if you are going to use the computer as a tool for your whole lifetime, isn't it worth investing the time to become skillful and save time in the long run? i don't know the answer to this question, but i guess the truth, as always, is somewhere in the middle.

Bots won't replace apps. Better apps will replace apps, by Dan Grover

As a user, I want my apps to have some consistent concept of identity, payments, offline storage, and data sharing. I want to be able to quickly add someone in person or from their website to my contacts. The next time I do a startup, I want to spend my time specializing in solving a specific problem for my users, not getting them over the above general hurdles.

I don’t actually care how it happens. Maybe the OS makers will up their game. Or maybe it’ll be delivered in some magic, blockchain-distributed, GNU-licensed, neckbeard-encrusted solution that the masses, in a sudden epiphany, repent to.

But more than anything, rather than screwing around with bots, I want the tech industry to focus on solving these major annoyances and handling some of the common use cases I described that my phone ought to do better with by now.

this comes back to the basic notion that technology alone does not solve our problems; it's the things we do with it that do. i think the human side of our products and technologies is often ignored in favour of technical decisions which bring no direct value to people.

summing up 79

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

No to NoUI, by Timo Arnall

We must abandon invisibility as a goal for interfaces; it’s misleading, unhelpful and ultimately dishonest. It unleashes so much potential for unusable, harmful and frustrating interfaces, and systems that gradually erode users’ and designers’ agency. Invisibility might seem an attractive concept at first glance, but it ignores the real, thorny, difficult issues of designing and using complex interfaces and systems.

when was the last time you visited a website, used an app or a device and just couldn't find a way to do what you wanted? it seems to me that we're always optimizing for design, but seldom for the actual user. a user interface is about machines helping us, instead of us adapting to machines.

How to Use a MAGAZINE, by Khoi Vinh

a tongue-in-cheek parody of the instructional screens in apps. it's funny because we don't need instructions to use a magazine. but it also shows how complex magazines are if we see them as interfaces, as there are a lot of learned conventions at work here. everything is learned, however, and we constantly have to make assumptions about what our audience knows and has already learnt.

In Search of Tomorrow, by Chris Granger

The craziest realization for me has been that if we took a step back and stop thinking about programming for a moment, we managed to come up with a thing that doesn't look like programming anymore. It's just asking questions and formatting results. And that encompasses all of the things we wanna do. That is an amazing result to me.

I'm not saying we've done it and I have no idea what programming is gonna look like in 10 years. But my hope is that whatever programming does look like, that it looks nothing like the programming we have now. The last thing I want is you guys who are trying to cure cancer or trying to understand the cosmos or whatever you're doing have to worry about these ridiculous things that have nothing to do with the amazing stuff you're trying to do. I don't want to look like that at all.

Because at the end of the day, the real goal here is a thinking tool and that is what we have to get back to.

a talk on the progress of experiments to make programming easier and more accessible. after all, we don't need to program our computers, we need a way to solve our problems and augment our capabilities. programming is not a tool for building apps or websites, it is a tool to think with. and the more accessible it becomes, the better for humanity.

summing up 78

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

Why I love ugly, messy interfaces – and you probably do too, by Jonas Downey

If beautiful, fresh, clean, and simple are so important, why hasn’t someone upended all of these products with something nicer? It’s not for a lack of trying. There are countless simpler, better-looking Craigslist and Photoshop competitors, for example. The answer is that these products do an incredible job of solving their users’ problems, and their complex interfaces are a key reason for their success.

just recently, a client of mine showed me the new website for his consulting business. four months in the making, fully responsive, a position statement sharp as a razor and lightning fast. he asked for my opinion, smiling from ear to ear. i looked at it and asked how he thought he would connect with his prospects. where is the call to action? ehm.. he said. do you have any workflows, like email courses, a newsletter or contact options for your prospects? ahem.. what does the funnel from the landing page towards qualification and acquisition of a customer look like? uhh.. you see, design is utterly important, but functionality trumps design. a digital product which looks nice but does not solve a user's problem is not helpful. a digital product which solves a user's problem like a champ but looks shitty is pitiful, but it works. good design and functionality together is killer.

The Future Mundane, by Nick Foster

We often assume that the world of today would stun a visitor from fifty years ago. In truth, for every miraculous iPad there are countless partly broken realities: WiFi passwords, connectivity, battery life, privacy and compatibility amongst others. The real skill of creating a compelling and engaging view of the future lies not in designing the gloss, but in seeing beyond the gloss to the truths behind it.

when we create a new digital product, be it an app, a website or even a physical device, we almost always think about the best case in which it will be used: the perfect customers, the ideal environment, an internet connection that never drops, and so on. we neglect the failures, the frustrations, the feelings of our users and the things they can't do. thinking about where and how our products are actually used, where they fail and how they limit our users, however, would make our digital products more compelling and expand our notion of design for the future.

The future of software and the end of apps, by Paul Chiusano

It's hard to imagine organizing computing without some notion of applications. But why do people use computers? People use computers in order to do and express things, to communicate with each other, to create, and to experience and interact with what others have created. But what is important, what truly matters to people is simply being able to perform these actions. That each of these actions presently take place in the context of some 'application' is not in any way essential. In fact, how lovely it would be if the functionality of our current applications could be seamlessly accessed and combined with other functions in whatever ways we imagine. This sort of activity could be a part of the normal interaction that people have with computers, not something reserved only for 'programmers', and not something that requires navigating a tedious mess of ad hoc protocols, dealing with parsing and serialization, and all the other mumbo-jumbo that has nothing to do with the idea the user (programmer) is trying to express. The computing environment could be a programmable playground, a canvas in which to automate whatever tasks or activities the user wished.

have you ever noticed the sheer number of apps and applications you use every day, and how they nevertheless never quite let you do what you actually want? we artificially limit the potential of computers by confining functionality into containers. but what if you could compose that functionality into a fluent and powerful environment, an environment which augments your capabilities? this is a great round-up of how that could be possible.

summing up 77

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

Design machines, by Travis Gertz

There’s a lot of hubris hidden in the term “user experience design.” We can’t design experiences. Experiences are reactions to the things we design. Data lies to us. It makes us believe we know what a person is going through when they use our products. The truth is that it has no insight into physical or mental ability, emotional state, environmental conditions, socioeconomic status, or any other human factor outside of their ability to click on the right coloured box in the right order. Even if our machines can assume demographic traits, they will never be able to identify with each person’s unique combination of those traits. We can’t trust the data. And those who do will always be stuck chasing a robotic approach to human connection.

we often don't know our users. we don't know how they feel in that moment, what problems they have, what solutions they're looking for. this ignorance often leads us to design robotic experiences in a one-size-fits-all approach. and the truth is, we are designing boring, predictable and repetitive websites and digital products. an overabundance of data and patterns leads to mechanical and repetitive interactions with our users. we have to design better systems, we have to provoke and establish human connections in our technologies. or how else will we prove that we are better than a machine?

Intuitive Interfaces, by Jef Raskin

The term "intuitive" is associated with approval when applied to an interface, but this association raises the issue of the tension between improvement and familiarity. As an interface designer I am often asked to design a "better" interface to some product. Usually one can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation it is superior to competing or the client’s own products. Even where my proposals are seen as significant improvements, they are often rejected nonetheless on the grounds that they are not intuitive. It is a classic "catch 22." The client wants something that is significantly superior to the competition. But if superior, it cannot be the same, so it must be different. Therefore it cannot be intuitive, that is, familiar. What the client usually wants is an interface with at most marginal differences that, somehow, makes a major improvement. This can be achieved only on the rare occasions where the original interface has some major flaw that is remedied by a minor fix.

you've probably heard the often-cited quote/joke that the only intuitive interface is the nipple. it's been around the ux/hci community for quite some time. and it is funny, cute and completely wrong. no technology is intuitive; it is all just familiar or unfamiliar at first. what we want from technology, though, are interfaces and interactions that feel familiar, learnable and evident. an interface should teach us in ways that let us get better, allow us to have new ideas and solutions, and speak to us in ways we can understand. this doesn't mean that technology should never challenge or surprise us; it definitely should. but it should also grow together with the user.

on discoverability, by yours truly

if there is no way to discover what operations are possible just by looking at the screen and the interaction is numbed with no feedback by the devices, what's left? the interaction gets reduced to experience and familiarity where we only rely on readily transferred, existing skills.

it strikes me as quite interesting that we spend a huge part of our time in developing digital products on static visual traits, without thinking about how and where those products are used. but digital products are not isolated tools; they are always used within an environment. and that environment has to work collaboratively with our users. in order to achieve that, interaction elements have to be discoverable. too often, though, vital elements are concealed in the user interface.

summing up 76

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

Normal Considered Harmful, by Alan Kay

Normal is the greatest enemy with regard to creating the new. And the way of getting around this, is you have to understand normal, not as reality, but just a construct. And a way to do that, for example, is just travel to a lot of different countries – and you'll find a thousand different ways of thinking the world is real, all of which is just stories inside of people's heads. That's what we are too. Normal is just a construct – and to the extent that you can see normal as a construct inside yourself, you've freed yourself from the constraints of thinking this is the way the world is. Because it isn't. This is the way we are.

some very interesting points on the challenge of real innovation. this talk is probably best summarized by acknowledging that all understanding begins with not accepting the world as it appears. and this is very much true for the tools and products we use and create ourselves. alan kay's idea is quite big, almost too big an idea to actually see. part of the problem is that we have to make a distinction between the computer and computing as a technology, and the computer as a medium. only then can we come up with better ideas and solutions.

When U.S. air force discovered the flaw of averages, by Todd Rose

The consensus among fellow air force researchers was that the vast majority of pilots would be within the average range on most dimensions. After all, these pilots had already been pre-selected because they appeared to be average sized. The scientists also expected that a sizable number of pilots would be within the average range on all 10 dimensions. But they were stunned when they tabulated the actual number.

Zero. There was no such thing as an average pilot. If you’ve designed a cockpit to fit the average pilot, you’ve actually designed it to fit no one.

By discarding the average as their reference standard, the air force initiated a quantum leap in its design philosophy, centred on a new guiding principle: individual fit. Rather than fitting the individual to the system, the military began fitting the system to the individual. In short order, the air force demanded that all cockpits needed to fit pilots whose measurements fell within the 5-per-cent to 95-per-cent range on each dimension.

every time my clients talk about the user, or the average user, i get an uneasy feeling. you see, there is no average user. and every time you design your product for an average user, you have likewise designed it to fit no one. so how do we get out of this dilemma? one approach might be user research done right (jobs to be done is an interesting framework), another might be products which adapt to the user's current level of knowledge and experience. the idea here is quite simple: user experience is a moving target. as we use a product and improve our understanding of it, the user interface should adapt and improve as well.
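
the point lends itself to a quick back-of-the-envelope check. here is a minimal sketch in typescript (my own illustration, with made-up, independent measurements rather than the real air force data; real body dimensions are correlated, so this overstates how rare the "all-average" pilot is, but the actual study still found zero):

  // simulate measurements for 4,000 hypothetical pilots across 10 dimensions
  // and count how many fall in the "average" band (middle 30%) on all of them.

  // standard normal sample via the Box-Muller transform
  function gaussian(): number {
    const u = 1 - Math.random(); // avoid log(0)
    const v = Math.random();
    return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
  }

  const NUM_PILOTS = 4000;
  const NUM_DIMENSIONS = 10;

  // "average" here means roughly the middle 30% of a normal distribution,
  // i.e. a z-score between about -0.385 and +0.385
  const inAverageBand = (z: number): boolean => Math.abs(z) < 0.385;

  let averageOnAll = 0;
  for (let p = 0; p < NUM_PILOTS; p++) {
    const measurements = Array.from({ length: NUM_DIMENSIONS }, gaussian);
    if (measurements.every(inAverageBand)) {
      averageOnAll += 1;
    }
  }

  // with independent dimensions the expected share is 0.3 ** 10,
  // about six in a million: effectively zero in a sample of this size.
  console.log(`pilots average on every dimension: ${averageOnAll} of ${NUM_PILOTS}`);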

The Surrender of Culture to Technology, by Neil Postman

Every technology has an inherent bias, has both unique technical limitations and possibilities. That is to say every technology has embedded in its physical form a predisposition to it being used in certain ways and not others. Only those who know nothing of the history of technology believe that a technology is entirely neutral or adaptable. In other words each technology has an agenda of its own and so to speak gives us instructions on how to fulfil its own technical destiny. We have to understand that fact, and especially we must not underestimate it. Of course we need not be tyrannized by it; we do not always have to go in exactly the direction that a technology leads us toward going. We have obligations to ourselves that may supersede our obligations to any technology.

i see neil postman as one of the best media and technology critics of our time. his basic gist is quite simple: we have to become aware of the environments we live in and of how we and our understanding of the world adapt to them without our being aware of the process. in his talk he poses the following seven questions, arguing that questions are more important than answers, because answers change over time and across circumstances, even for the same person, while questions endure:

  • what is the problem to which a technology claims to be a solution?
  • whose problem is it?
  • what new problems will be created because of solving an old one?
  • which people and institutions will be most harmed?
  • what changes in language are being promoted?
  • what shifts in economic and political power are likely to result?
  • what alternative media might be made from a technology?

summing up 75

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

Pernicious Computer Traditions, by Ted Nelson

The computer world is not yet finished, but everyone is behaving as though everything was known. This is not true. In fact, the computer world as we know it is based upon one tradition that has been waddling along for the last fifty years, growing in size and ungainliness, and is essentially defining the way we do everything. My view is that today’s computer world is based on techie misunderstandings of human thought and human life. And the imposition of inappropriate structures through the computer and through the files and the applications is the imposition of inappropriate structures on the things we want to do in the human world.

ted nelson is one of the founding fathers of personal computing and the man who invented hypertext. recently i've been reading and watching a lot of his stuff, and his rebellious view on the current state of computing is particularly interesting. technology is shining back on us, and the abstractions we created hurt and limit us. this view is actually quite similar to marshall mcluhan's basic premise "we shape our tools, and our tools shape us".

The Physical Web, by Scott Jenson

You can see this pattern over and over again, we kind of have the old, we slowly work our way into the future, revolve it and then something comes along and scares us and pulls us back to the beginning. So there are two critical psychological points to this shape of innovation, two lessons I think we have to learn. The one is the fact that we have this familiarity, we will always borrow from the past and we have to somehow transcend it. And we just need to appreciate that and talk about that a little bit more to see what we're borrowing. But the other one, I think is also important, is this idea of maturity, because it forms a form of intellectual gravity well. It's like we worked so damn hard to get here, we're not leaving. It kinda forms this local maximum and people just don't want to give it up. We feel like we had somehow gotten to this magical point and it was done. It was like here forever and we can kind of rest. And you can never rest in this business. I think it's important for us to realize both of these two extremes and how we want to break out of this loop.

the two lessons here, that we'll always borrow from the past and that maturity is an intellectual gravity well that is hard to escape from, are very important to grasp and understand. they also go very well with ted nelson's view above: we get comfortable with what we have and won't give it up lightly. but we have to reconsider our mature designs in order to be able to innovate.

The Web's Grain, by Frank Chimero

We often think making things for the web is a process of simplifying–the hub, the dashboard, the control panel are all dreams of technology that coalesces–but things have a tendency to diverge into a multiplicity of options. We pile on more tools and technology, each one increasingly nuanced and minor in its critical differences. Clearly, convergence and simplicity make for poor goals. Instead, we must aim for clarity. You can’t contain or reduce the torrent of technology, but you can channel it in a positive direction through proper framing and clear articulation.

this inspirational reflection takes us back to reinvestigate how we see and use the web, and what our role in creating innovative experiences across the web should be. what would happen if we stopped treating the web like a blank canvas to paint on and started treating it like a material to build with?

summing up 74

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

Douglas Engelbart Interviewed by John Markoff

Q: Let's talk about the future and what got lost. There were some ideas that got taken away and turned into commercial products, whole industries. But I think I've come to understand that your feeling is that some things didn't get taken away. Tell me about what still needs to be done.

I think the focus needs to be on capabilities that you can drive. Take the business of cost of learning and set it aside until you assess the values of the higher capabilities you can go after. It seemed like there was some level that got set in the early seventies around "it has to be easy to learn". And with all due respect to all the human computer interface guys, that's just, to me, as if we'd all still be riding tricycles.

A big part of that is the paradigms we use. One example is the "book paradigm" that's just built into everybody's sort of pragmatical outlook. That's the way you read and study. And you say well no wait, that is just a way an artifact that they call printing and such produced things that would help you do that. We got brand new sets of artifacts now, so let's change our paradigms, let's see what we can do. And that is what I started doing in the sixties.

today's ubiquitous graphical user interface has its roots in doug engelbart's groundbreaking research in the sixties. many of the concepts he invented were further developed at xerox parc and successfully commercialized in the apple macintosh, whereupon they essentially froze. twenty years later, despite thousand-fold improvements along every technological dimension, the concepts behind today's interfaces are almost identical to those in the initial mac. this is a very interesting interview with one of the fathers of personal computing which touches on many points of this development.

Moving from Critical Review to Critique, by Jared Spool

I ask teams whether they do critiques. “Oh, yes. All the time,” they tell me. However, when I ask them what it is they do, it’s basically a meeting where someone’s work is criticized for what it’s missing. It’s a meeting where people who haven’t given the design problem or solution much thought, until that moment, rip apart the work of someone who has. These critical design reviews are miserable experiences. Everyone completely dreads them. The experience makes them feel like crap. And then it’s time to schedule another one.

What makes a critique different from a critical design review is we are not there to find flaws. We’re there to learn from the design and to explore where it works well and where it could be improved. In a well-run critique, we explicitly separate out the discussion of “What are we trying to do with this design?” from the discussion of “Does this rendition accomplish it?”

this article has made a tremendous impact on my understanding of what makes a critique worthwhile, particularly in engagements with my clients. between many teams, be it design, product or development, there often seems to be a mismatch in understanding, and a lot of headaches come out of it. critique, however, is an important part of any design process, and the feedback you get through a well-run critique is enormously helpful for creating a better product, making better decisions and working together more efficiently.

The Internet of NO Things, by Roope Mokka

As technology keeps developing faster and faster, all the technologies that are now in a smartphone will become the size of a piece of paper and be available for the price of a piece of paper as well. What we have to understand is that when technology gets developed enough it disappears, it ceases to be understood as technology; it becomes part of the general man-made ambience of our life. Look around you, there are amazing technologies already around us that have vanished. This house is a very typical example of disruptive technology, not to mention this collection of houses and streets and other infrastructure, known as the city, invented some thousands of years ago. Houses and cities are technologies. Our clothing is a technology, the food on the tables is the end product of masses of technologies. These are all technologies that have in practice disappeared: they are in the background and nobody (outside of dedicated professionals) thinks of them as technologies.

with all this buzz about the internet of things, i find it refreshing to talk about what comes after the internet of things. arthur c. clarke once famously remarked that any sufficiently advanced technology is indistinguishable from magic. i really like this idea of how technology disappears after it has been established.

summing up 73

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

How the internet flips elections and alters our thoughts, by Robert Epstein

We have also learned something very disturbing – that search engines are influencing far more than what people buy and whom they vote for. We now have evidence suggesting that on virtually all issues where people are initially undecided, search rankings are impacting almost every decision that people make. They are having an impact on the opinions, beliefs, attitudes and behaviours of internet users worldwide – entirely without people’s knowledge that this is occurring. This is happening with or without deliberate intervention by company officials; even so-called ‘organic’ search processes regularly generate search results that favour one point of view, and that in turn has the potential to tip the opinions of millions of people who are undecided on an issue.

we all run interesting experiments in and with our products in order to make them more usable, more efficient and generally better. this is a chilling essay, based on solid research and featuring some really interesting experiments and examples, in which big players such as google and facebook exert massive influence over people's behaviours and opinions – without any of us really being able to detect that it's happening. the issue of invisible algorithms is a very important one, both for knowing how we ourselves are being affected and for using these strategies responsibly in our own products & services.

Responsive Web Design: Relying Too Much on Screen Size, by Luke Wroblewski

Don’t assume screen adaptation is a complete answer for multi-device Web design. Responsive Web design has given us a powerful toolset for managing a critical part of the multi-device world. But assuming too much based on screen size can ultimately paint you into a corner.

It’s not that adapting to screen size doesn’t matter, as I pointed out numerous times, it really does. But if you put too much stock in screen size or don’t consider other factors, you may end up with incomplete or frankly inappropriate solutions. How people interact with the Web across screens continues to evolve rapidly and our multi-device design methods need to be robust enough to evolve alongside.

this is what i've been preaching to my clients for years. everybody always seems to be hyping this new technology, that new feature or the next paradigm. but in the end this only goes so far. what is important is that your product or service runs well in the ecosystem it will actually be used in, not just the one it was designed for, be it a website, an application or a mobile app.
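
to make that a bit more concrete, here is a small, hypothetical sketch in typescript (my own illustration, not from the article; it assumes a browser environment) that reads a few of those other signals (input type, hover capability, user preferences) from standard media features instead of relying on viewport width alone:

  interface DeviceSignals {
    coarsePointer: boolean;        // touch-style input such as a finger
    canHover: boolean;             // hover affordances are actually usable
    prefersReducedMotion: boolean; // the user asked for less animation
    prefersDarkScheme: boolean;    // the user prefers a dark colour scheme
  }

  // window.matchMedia evaluates a css media query from script
  function readDeviceSignals(): DeviceSignals {
    const matches = (query: string): boolean => window.matchMedia(query).matches;
    return {
      coarsePointer: matches("(pointer: coarse)"),
      canHover: matches("(hover: hover)"),
      prefersReducedMotion: matches("(prefers-reduced-motion: reduce)"),
      prefersDarkScheme: matches("(prefers-color-scheme: dark)"),
    };
  }

  // usage: tune interaction details from capabilities rather than width alone,
  // e.g. enlarge hit targets for coarse pointers or drop hover-only menus.
  const signals = readDeviceSignals();
  console.log(signals);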

The Web of Alexandria, by Bret Victor

Whenever the ephemerality of the web is mentioned, two opposing responses tend to surface. Some people see the web as a conversational medium, and consider ephemerality to be a virtue. And some people see the web as a publication medium, and want to build a "permanent web" where nothing can ever disappear. Neither position is mine. If anything, I see the web as a bad medium, at least partly because it invites exactly that conflict, with disastrous effects on both sides. For people who have grown up with HTTP and URLs, it can be hard to see anything wrong with them. The tendency is to blame individual behavior – "You should have mirrored that data!" "You shouldn't have put those photos online!" But the technical properties of a medium shape social practice, and if the resulting social practice is harmful, it's the medium that is at fault.

how can you call the web a publishing medium when your bookshelf can just vanish? on the other hand, how can it be that deleted content still emerges from the deep sea of the web? the web is a single, increasingly complex infrastructure which has been adopted for mutually incompatible purposes. more importantly, however, bret victor made me realize that there are so many ways we limit ourselves with technology.

summing up 72

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

When We Build, by Wilson Miner

When we design this new generation of digital tools for this ecosystem's screen, we have a longer horizon ahead of us than just the next software platform or the next version of a technology. We have a longer lineage behind us than just the web or software design or even computers. Steve Jobs said the thing that separates us from the high primates is that we're tool builders. We make things, we make things that change our lives and we make things that changed the world. This is a long and long lasting tradition. We shape our tools, and our tools shape us. We're a product of our world and our world is made of things. Things we use, things we love, things we carry with us and the things we make. We're the product of our world but we're also its designer. Design is the choices we make about the world we want to live in.

this is an exceptional talk on media and human perception. as more of the tools we live with every day become digital instead of physical, our opportunity – and responsibility – as designers is increasing. we are currently in a unique position to shape the tools we will use in the next century, and to define how those tools will shape us and dictate our behavior. the gist is extremely relevant for us right now, as our way of thinking, our opportunities and our technology are highly dependent on the quality and potential of our tools. highly recommended

Why I can’t convince executives to invest in UX (and neither can you), by Jared Spool

I’ve been pitching our services for 23 years and I’ve never once successfully convinced an executive of anything.

Our success has always come from projects where the client team, including the senior management, already understood the value of great user experiences. I haven’t convinced them because they didn’t need convincing.

You can’t convince a smoker to quit smoking. They need to just decide they’ll do it. On their own. When they are ready. It’s the same with executives. Neither I, you, nor anybody else can convince an executive to invest in user experience.

this article hits very close to home for me, as i've often had similar experiences. in successful pitches i rarely pushed for ux; i almost always took a step back and found out what executives were already convinced of. by and large they have something they want to improve, be it related to revenue, reducing costs, increasing the number of customers, increasing sales or making their team more effective. good user experience can help with each of those things, but there is rarely a one-size-fits-all solution for any of them. once you start talking about what executives are already convinced of, however, things get much easier. how can you argue with a customer who is struggling to check out a product on your website, or one who can't use the app you provide? once you are no longer trying to change their focus, you're playing directly into their main field of attention.

on ditching css frameworks and preprocessors, by yours truly

What if I were to tell you...CSS is already a framework for styling HTML, and that by actually taking the time to learn it, one can make significantly less shitty websites that are actually responsive, don’t require a quad-core with 8GB of ram just to render, and that another front-end-developer who isn’t hip on whatever flavor-of-the-month bullshit framework can actually be able to maintain it?

in this short post i try to make the case that ditching existing css frameworks and preprocessors is the first step towards a modern, bloat-free web. in my opinion you will save time and money in the long run by reducing abstraction, keeping your styles easy to update and avoiding extra cruft, and the newer css specification modules will make those frameworks and preprocessors superfluous in the medium term anyway. i'd be very interested to hear your comments and experiences on this topic!

summing up 71

not too long ago i announced the last edition of summing up. but i also announced that this series would live on, as there was a lot of positive feedback over the years, encouraging me to continue and to look out for different formats. today, after lots of experimentation, i'll continue summing up. it will be a bit shorter, a bit more irregular and will feature fewer unicorns (sorry!). thanks a lot for your support and your feedback, it was greatly appreciated. you're very welcome to subscribe to this little series and get it directly in your inbox along with some cool stuff that you won't find anywhere else on the site. now, without further ado...

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

The Website Obesity Crisis, by Maciej Cegłowski

These comically huge homepages for projects designed to make the web faster are the equivalent of watching a fitness video where the presenter is just standing there, eating pizza and cookies.

The world's greatest tech companies can't even make these tiny text sites, describing their flagship projects to reduce page bloat, lightweight and fast on mobile.

I can't think of a more complete admission of defeat.

amen. maciej cegłowski is one of my favourite speakers; his talks are always insightful, charming and funny. but most importantly he hits the nail on the head, every single time. this talk is about why the modern web is so bloated and slow, and why it matters. i've found this to be true with my own clients: many of them come to me with ridiculously large websites and few results to show for it all. i've also found that, along with relevant content, the speed of a website is one of the most important factors of success. once, a client told me, after we'd finished the project, that her website was the only one that still came up when she entered the metro. i liked that.

5 Steps To Re-create Xerox PARC's Design Magic, by Alan Kay & John Pavlus

We live in a world full of hype. When I look at most of the Silicon Valley companies claiming to do invention research, they're really selling pop culture. Pop culture is very incremental and is about creating things other than high impact. Being able to do things that change what business means is going to have a huge impact - more than something that changes what social interaction means in pop culture.

to me, xerox parc is still one of the greatest legends and success stories in computing. so many concepts, like the graphical user interface, the laser printer, object-oriented programming and ethernet, were invented and incubated there, and ideas like doug engelbart's mouse were refined into practical form. great minds like alan kay had their heyday there. on the other hand, there are so many companies out there trying to do "innovation", be it through r&d labs, startup incubators or the like. this article is a great summary of how you can implement the main points in your own organization. after all, if your business does not evolve, it'll die.

1,000 True Fans, by Kevin Kelly

Young artists starting out in this digitally mediated world have another path other than stardom, a path made possible by the very technology that creates the long tail. Instead of trying to reach the narrow and unlikely peaks of platinum hits, bestseller blockbusters, and celebrity status, they can aim for direct connection with 1,000 True Fans. It's a much saner destination to hope for. You make a living instead of a fortune. You are surrounded not by fad and fashionable infatuation, but by True Fans. And you are much more likely to actually arrive there.

this article has been around for quite some time and it is certainly famous in certain circles, with good reason. if you're an artist, an entrepreneur or thinking about launching your own product, it is a must-read. but it also applies to small and medium businesses. the kernel is this: to make a living you don't have to be hugely famous. kelly's arithmetic is simple: a thousand true fans who each spend, say, a hundred dollars a year already add up to a living. it is in fact much easier to be important to a select group of people.

summing up is dead, long live summing up

it's been almost two years since i started with the first edition of this series. 70 editions, 877 links, 68 videos, 35 papers, 0 unicorns. that's a lot. unfortunately it's also a lot for the reader. i repeatedly got the feedback that each edition was too long, too comprehensive, had too many interesting links, no unicorns (well, actually there were a few) and left no time to read or watch them all.

and i wanna take this criticism to heart. that is why edition 70 will be the last one in this series. but please don't despair: i've already got plans for how to continue in a different format. thanks for your time, for your suggestions, the nice emails and the great advice. hope to see you soon.

summing up 70

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • a whole new world, shipping often is a great way to achieve short-term business gains and a great way to discover what your customer wants. but when you're a programmer, and you are the customer, and you're writing the system, and you've been using these tools for 20 years, you don't need customer discovery. you need to go off and sit in a hammock for a couple of years and think hard. the thing which i find surprising is that we have not only all this legacy, but we have a paralysis around it. we can't even imagine replacing it. highly recommended
  • the real computer revolution hasn't happened yet, by alan kay. so we had a sense that the personal computer's ability to imitate other media would both help it to become established in society, and that this would also make it very difficult for most people to understand what it actually was. our thought was: but if we can get the children to learn the real thing then in a few generations the big change will happen. 32 years later the technologies that our research community invented are in general use by more than a billion people, and we have gradually learned how to teach children the real thing. but it looks as though the actual revolution will take longer than our optimism suggested, largely because the commercial and educational interests in the old media and modes of thought have frozen personal computing pretty much at the "imitation of paper, recordings, film and tv" level. recommended
  • datacide, we were told to surf the web, but in the end, the web serf'd us. the right way is to turn off, buy in and cash out. reinforce the grand narrative and talk about how social is going to bring people together, not just online, but in the real world. how it will augment our interactions and make us more open. how in five years you'll be able to meet your true love through an algorithm that correlates your itunes activity to your medical history and how that algorithm will be worth a billion fucking dollars. and it's through that magical cloud of squandered human potential that skinner emerges once again and starts poking his finger into your brain. we are not going to escape this crisis by putting ourselves in a cage. there is no opt-out anymore. you can draw the blinds, deadlock your door, smash your smartphone, and only carry cash, but you'll still get caught up in their all-seeing algorithmic gaze. they've datafied your car, your city and even your snail mail. this is not a conspiracy, it's the status quo, and we've been too busy displacing our anxiety into their tidy little containers to realize what's going on. recommended
  • the computer revolution hasn't happened yet (transcript), by alan kay. i'm going to use a metaphor for this talk which is drawn from a wonderful book called the act of creation by arthur koestler. one of the great books he wrote was about what might creativity be. he realized that learning, of course, is an act of creation itself, because something happens in you that wasn't there before. he used a metaphor of thoughts as ants crawling on a plane. in this case it's a pink plane, and there's a lot of things you can do on a pink plane. you can have goals. you can choose directions. you can move along. but you're basically in the pink context. it means that progress, in a fixed context, is almost always a form of optimization, because if you're actually coming up with something new, it wouldn't have been part of the rules or the context for what the pink plane is all about. creative acts, generally, are ones that don't stay in the same context that they're in. he says, every once in a while, even though you have been taught carefully by parents and by school for many years, you have a blue idea. maybe when you're taking a shower. maybe when you're out jogging. maybe when you're resting in an unguarded moment, suddenly, that thing that you were puzzling about, wondering about, looking at, appears to you in a completely different light, as though it were something else. he also pointed out that you have to have something blue to have blue thoughts with. i think this is generally missed in people who specialize to the exclusion of anything else. when you specialize, you are basically putting yourself into a mental state where optimization is pretty much all you can do. you have to learn lots of different kinds of things in order to have the start of these other contexts

summing up 69

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • coding is not the new literacy, coding requires us to break our systems down into actions that the computer understands, which represents a fundamental disconnect in intent. most programs are not trying to specify how things are distributed across cores or how objects should be laid out in memory. we are not trying to model how a computer does something. instead, we are modeling human interaction, the weather, or spacecraft. we are employing a set of tools designed to model how computers work, but we're representing systems that are nothing like them. we have to create a new generation of tools that allow us to express our models without switching professions and a new generation of modelers who wield them. to put it simply, the next great advance in human ability comes from being able to externalize the mental models we spend our entire lives creating. that is the new literacy. and it's the revolution we've all been waiting for. highly recommended
  • shonda rhimes '91, commencement address, when people give these kinds of speeches, they usually tell you all kinds of wise and heartfelt things. they have wisdom to impart. they have lessons to share. they tell you: follow your dreams. listen to your spirit. change the world. make your mark. find your inner voice and make it sing. embrace failure. dream. dream and dream big. as a matter of fact, dream and don't stop dreaming until all of your dreams come true. i think that's crap. i think a lot of people dream. and while they are busy dreaming, the really happy people, the really successful people, the really interesting, engaged, powerful people, are busy doing
  • keep your crises small, initially, the films our teams put together, they're a mess. it's like everything else in life - the first time you do it, it's a mess. sometimes it's labeled "first time, it's a failure", but that's not even the right word to use. it's just like, you get the first one out, you learn from it, and the only failure is if you don't learn from it, if you don't progress

summing up 68

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • start-ups and emotional debt, i realize that many people who do successful start-ups say it was the best thing that ever happened to them. but they've also become different people, and they are not the same people they would have been if they had decided to pursue another course. they have different sets of relationships, different skills, different attitudes, and different desires. they really have no idea what kind of person they otherwise would have become. recommended
  • waffling, i've always been very dubious about the idea of learning from people who have been successful. there's this whole cult of worshipping rich people, reading interviews with them, getting their opinions on things, trying to learn what made them successful. i think it's mostly nonsense. the thing is, if you just look at who the biggest earners are, it's almost entirely luck. the point is if you just look at successful business people, they will probably be confident, decisive, risk takers, aggressive at seizing opportunities, aggressive about growing the business quickly, etc. that doesn't mean that those are the right things to do. it just means that those are variance-increasing traits that give them a chance to be a big success (a toy simulation below this list sketches that selection effect)
  • why don't software development methodologies work?, my own experience, validated by cockburn's thesis and frederick brooks in no silver bullet, is that software development projects succeed when the key people on the team share a common vision, what brooks calls "conceptual integrity." this doesn't arise from any particular methodology, and can happen in the absence of anything resembling a process. i know the feeling working on a team where everyone clicks and things just get done
  • 7 principles of rich web applications, the web remains one of the most versatile mediums for the transmission of information. as we continue to add more dynamism to our pages, we must ensure that we retain some of its great historical benefits while we incorporate new ones
  • "the road to wisdom? - well, it's plain and simple to express: err and err and err again but less and less and less", piet hein

summing up 67

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • the center of "why?", by alan kay. living organisms are shaped by evolution to survive, not necessarily to get a clear picture of the universe. for example, frogs' brains are set up to recognize food as moving objects that are oblong in shape. so if we take a frog's normal food - flies - paralyze them with a little chloroform and put them in front of the frog, it will not notice them or try to eat them. it will starve in front of its food! but if we throw little rectangular pieces of cardboard at the frog it will eat them until it is stuffed! the frog only sees a little of the world we see, but it still thinks it perceives the whole world. now, of course, we are not like frogs! or are we? highly recommended (pdf)
  • "interface matters to me more than anything else, and it always has. i just never realized that. i've spent a lot of time over the years desperately trying to think of a "thing" to change the world. i now know why the search was fruitless - things don't change the world. people change the world by using things. the focus must be on the "using", not the "thing". now that i'm looking through the right end of the binoculars, i can see a lot more clearly, and there are projects and possibilities that genuinely interest me deeply", bret victor
  • ivan sutherland's sketchpad, a man-machine graphical communication system, the decision actually to implement a drawing system reflected our feeling that knowledge of the facilities which would prove useful could only be obtained by actually trying them. it was implicit in the research nature of the work that simple new facilities should be discovered which, when implemented, should be useful in a wide range of applications, preferably including some unforeseen ones. it has turned out that the properties of a computer drawing are entirely different from a paper drawing not only because of the accuracy, ease of drawing, and speed of erasing provided by the computer, but also primarily because of the ability to move drawing parts around on a computer drawing without the need to erase them. had a working system not been developed, our thinking would have been too strongly influenced by a lifetime of drawing on paper to discover many of the useful services that the computer can provide (pdf)
  • "i got this wild dream in my head about what would help mankind the most, to go off and do something dramatic, and i just happened to get a picture of how, if people started to learn to interact with computers, in collective ways of collaborating together, and this was way back in the early 50s, so it was a little bit premature. so anyways, i had some gi bill money left still so i could just go after that, and up and down quite a bit through the years, and i finally sort of gave up", douglas engelbart

summing up 66

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • federated education: new directions in digital collaboration, as advocates we're so often put in a situation where we have to defend the very idea that social media is an information sharing solution that we don't often get to think about what a better solution for collaboration would look like. because there are problems with the way social media works now. minority voices are squelched, flame wars abound. we spend hours at a time as rats hitting the skinner-esque levers of twitter and tumblr, hoping for new treats - and this might be ok if we actually then built off these things, but we don't. we're stuck in an attention economy feedback loop where we react to the reactions of reactions (while fearing further reactions), and then we wonder why we're stuck with groupthink and ideological gridlock. we're bigger than this and we can envision new systems that acknowledge that bigness. we can build systems that return to the vision of the forefathers of the web. the augmentation of human intellect. the facilitation of collaboration. the intertwingling of all things. this is one such proposal. maybe you have others. highly recommended
  • your app is good and you should feel good, there's no disincentive to honking at people for the slightest provocation. there's little recourse for abuse. it's such an asymmetrical, aggressive technology, so lacking in subtlety. it kind of turns everyone into a crying baby - you can let the people around you know that you're very upset, but not why. i think the internet is like this sometimes, too. the internet is like a car horn that you can honk at the entire world. recommended
  • the best investment advice you'll never get, don't try to beat the market and don't believe anyone who tells you they can - not a stock broker, a friend with a hot stock tip, or a financial magazine article touting the latest mutual fund. seasoned investment professionals have been hearing this anti-industry advice, and the praises of indexing, for years. but while wall street has considerable soul-searching to do, full blame for the gouging of naive investors does not lie with the investment management industry alone. there is an innate cultural imperative in this country to beat the odds, to do better than the joneses. it's simply difficult for most of us to accept average returns on our money, or on anything for that matter. recommended
  • forget shorter showers - why personal change does not equal political change, i think we're in a double bind. a double bind is where you're given multiple options, but no matter what option you choose, you lose, and withdrawal is not an option. at this point, it should be pretty easy to recognize that every action involving the industrial economy is destructive. so if we choose option one - if we avidly participate in the industrial economy - we may in the short term think we win because we may accumulate wealth, the marker of "success" in this culture. but we lose, because in doing so we give up our empathy, our animal humanity. and we really lose because industrial civilization is killing the planet, which means everyone loses. if we choose the "alternative" option of living more simply, thus causing less harm, but still not stopping the industrial economy from killing the planet, we may in the short term think we win because we get to feel pure, and we didn't even have to give up all of our empathy, but once again we really lose because industrial civilization is still killing the planet, which means everyone still loses. the third option, acting decisively to stop the industrial economy, is very scary for a number of reasons, including but not restricted to the fact that we'd lose some of the luxuries to which we've grown accustomed, and the fact that those in power might try to kill us if we seriously impede their ability to exploit the world - none of which alters the fact that it's a better option than a dead planet. any option is a better option than a dead planet

summing up 65

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • rage against the machines, in many ways, we are our greatest technological constraint. the slow and steady march of human evolution has fallen out of step with technological progress: evolution occurs on millennial time scales, whereas processing power doubles roughly every other year. our ancestors who lived in caves would have found it advantageous to have very strong, perhaps almost hyperactive pattern-recognition skills - to be able to identify in a split-second whether that rustling in the leaves over yonder was caused by the wind or by an encroaching grizzly bear. nowadays, in a fast-paced world awash in numbers and statistics, those same tendencies can get us into trouble: when presented with a series of random numbers, we see patterns where there aren't any. we have to view technology as what it always has been - a tool for the betterment of the human condition. we should neither worship at the altar of technology nor be frightened by it. nobody has yet designed, and perhaps no one ever will, a computer that thinks like a human being. but computers are themselves a reflection of human progress and human ingenuity: it is not really "artificial" intelligence if a human designed the artifice. highly recommended (a toy coin-flip sketch after this list shows how easily "streaks" appear in pure noise)
  • the sixth stage of grief is retro-computing, technology is what we share. i don't mean "we share the experience of technology." i mean: by my lights, people very often share technologies with each other when they talk. strategies. ideas for living our lives. we do it all the time. parenting email lists share strategies about breastfeeding and bedtime. quotes from the dalai lama. we talk neckties, etiquette, and minecraft, and tell stories that give us guidance as to how to live. a tremendous part of daily life regards the exchange of technologies. we are good at it. it's so simple as to be invisible. can i borrow your scissors? do you want tickets? i know guacamole is extra. the world of technology isn't separate from regular life. it's made to seem that way because of, well... capitalism. tribal dynamics. territoriality. because there is a need to sell technology, to package it, to recoup the terrible investment. so it becomes this thing that is separate from culture. a product. highly recommended
  • worse is better is worse, the real quarrel with the paper i have is about what it teaches people. it warps the minds of youth. it is never a good idea to intentionally aim for anything less than the best, though one might have to compromise in order to succeed. maybe richard means one should aim high but make sure you shoot - sadly he didn't say that. he said "worse is better," and though it might be an attractive, mind-grabbing headline seducing people into reading his paper, it teaches the wrong lesson - a lesson he may not intend, or a lesson poorly stated. i know he can say the right thing, and i wish he had (pdf)
  • airport codes: a history and explanation of airport abcs
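
the pattern-recognition point from rage against the machines is easy to demonstrate: a fair coin produces streaks that feel meaningful. the sketch below is python with arbitrary numbers - 100 flips per trial, 10,000 trials - and simply reports how long the longest run of heads typically gets.

```python
# toy illustration: how long a "streak" does a fair coin produce?
# 100 flips per trial, 10,000 trials; all numbers are arbitrary.
import random

random.seed(7)


def longest_run(flips):
    """length of the longest run of heads (True) in a list of flips."""
    best = current = 0
    for heads in flips:
        current = current + 1 if heads else 0
        best = max(best, current)
    return best


trials = 10_000
runs = [longest_run([random.random() < 0.5 for _ in range(100)])
        for _ in range(trials)]

print(f"average longest heads-streak in 100 fair flips: {sum(runs) / trials:.1f}")
print(f"trials with a streak of 7 or more: {sum(r >= 7 for r in runs) / trials:.0%}")
```

if your intuition says four or five in a row, the output will likely surprise you - which is roughly how noise gets mistaken for signal.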

summing up 64

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • the collapse of complex business models, when ecosystems change and inflexible institutions collapse, their members disperse, abandoning old beliefs, trying new things, making their living in different ways than they used to. it's easy to see the ways in which collapse to simplicity wrecks the glories of old. but there is one compensating advantage for the people who escape the old system: when the ecosystem stops rewarding complexity, it is the people who figure out how to work simply in the present, rather than the people who mastered the complexities of the past, who get to say what happens in the future. recommended
  • notes on present status and future prospects, every new conceptual idea (unlike a mathematical one) must go through a phase of facing opposition from two sides - the entrenched establishment who thinks that its toes are being stepped on, and a lunatic fringe that springs up, seemingly by spontaneous generation, out of the idea itself. those whose fame and fortune are based on their very real accomplishments using previous methods have a strong vested interest in them and will raise strenuous opposition to any attempt to replace them. this phenomenon has been very well documented in many cases. in contrast to the establishment which is protecting something that has some demonstrated value, the lunatic fringe has no vested interest in anything because it is composed of those who have never made any useful contribution to any field. instead, they are parasites feeding on the new idea; while giving the appearance of opposing it, in fact they are deriving their sole sustenance from it, since they have no other agenda. the establishment and the lunatic fringe have the common feature that they do not understand the new idea, and attack it on philosophical grounds without making any attempt to learn its technical features so they might try it and see for themselves how it works. many will not even deign to examine the results which others have found using it; they know that it is wrong, whatever results it gives. there is no really effective way to deal with this kind of opposition; one can only continue quietly accumulating the evidence of new useful results, and eventually the truth will be recognized (pdf)
  • sequelitis - mega man classic vs. mega man x, games are supposed to be fun, and reading manuals isn't fun; it's pretty much the opposite of fun. but it is also true for software in general. manuals are pointless when we can learn about the game in the best and most natural way imaginable: by playing the actual game. you learn by doing, provided you have a well designed sandbox that lets you safely experiment as you're starting out in the game
  • trivial, the word "trivial" offers many opportunities to inappropriately reduce an item to its most basic components; allowing us to ignore the beauty that lies in the process. the value of a network is greater than the sum of its parts, but a simple misstep in vocabulary undermines it all
  • the fastest man on earth - the story of john paul stapp, stapp was promoted to the rank of major, reminded of the 18 g limit of human survivability, and told to discontinue tests above that level

summing up 63

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • is there love in the telematic embrace? by roy ascott. it is the computer that is at the heart of this circulation system, and, like the heart, it works best when least noticed - that is to say, when it becomes invisible. at present, the computer as a physical, material presence is too much with us; it dominates our inventory of tools, instruments, appliances, and apparatus as the ultimate machine. in our artistic and educational environments it is all too solidly there, a computational block to poetry and imagination. it is not transparent, nor is it yet fully understood as pure system, a universal transformative matrix. the computer is not primarily a thing, an object, but a set of behaviors, a system, actually a system of systems. data constitute its lingua franca. it is the agent of the datafield, the constructor of dataspace. where it is seen simply as a screen presenting the pages of an illuminated book, or as an internally lit painting, it is of no artistic value. where its considerable speed of processing is used simply to simulate filmic or photographic representations, it becomes the agent of passive voyeurism. where access to its transformative power is constrained by a typewriter keyboard, the user is forced into the posture of a clerk. the electronic palette, the light pen, and even the mouse bind us to past practices. the power of the computer's presence, particularly the power of the interface to shape language and thought, cannot be overestimated. it may not be an exaggeration to say that the "content" of a telematic art will depend in large measure on the nature of the interface; that is, the kind of configurations and assemblies of image, sound, and text, the kind of restructuring and articulation of environment that telematic interactivity might yield, will be determined by the freedoms and fluidity available at the interface. highly recommended (pdf)
  • on the reliability of programs, by e.w. dijkstra. automatic computers are with us for twenty years and in that period of time they have proved to be extremely flexible and powerful tools, the usage of which seems to be changing the face of the earth (and the moon, for that matter!). in spite of their tremendous influence on nearly every activity whenever they are called to assist, it is my considered opinion that we underestimate the computer's significance for our culture as long as we only view them in their capacity of tools that can be used. they have taught us much more: they have taught us that programming any non-trivial performance is really very difficult and i expect a much more profound influence from the advent of the automatic computer in its capacity of a formidable intellectual challenge which is unequalled in the history of mankind. this opinion is meant as a very practical remark, for it means that unless the scope of this challenge is realized, unless we admit that the tasks ahead are so difficult that even the best of tools and methods will be hardly sufficient, the software failure will remain with us. we may continue to think that programming is not essentially difficult, that it can be done by accurate morons, provided you have enough of them, but then we continue to fool ourselves and no one can do so for a long time unpunished
  • "institutions will try to preserve the problem to which they are the solution", the shirky principle
  • if no one reads the manual, that's okay, if you think about it, the technical writer is in an unusual role. users hate the presence of manuals as much as they hate missing manuals. they despise lack of detail yet curse length. if no one reads the help, your position lacks value. if everyone reads the help, you're on a sinking ship. ideally, you want the user interface to be simple enough not to need help. but the more you contribute to this user interface simplicity, the less you're needed
  • 44 engineering management lessons
  • the grain of the material in design, because of the particular characteristics of a specific piece's grain, a design can't simply be imposed on the material. you can "go with the grain" or "go against the grain," but either way you have to understand the grain of the material to successfully design and produce a work. design for technology shouldn't be done separately from the material - it must be done as an intimate and tactile collaboration with the material of technology.

summing up 62

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • capturing the upside, by clayton christensen. the notion here is that if you create a new business that tries to position itself at the point in a value chain where really attractive money is being made, by the time you get there it probably will have gone, and you can tell where it's gone in a very predictable way, and that's what i want to try to get at here. over on this side of the world, the money tends to be made by the company that designs the architecture, the system, that solves what is not good enough. because it's functionality and reliability that's not good enough, the company that makes the system that is proprietary and optimized tends to be at the place where most of the profit in the industry is made. because the performance of that kind of a product isn't dictated by the individual components, of which it is comprised; this is determined at the level of the architecture of the system, and that is where the money is made. but on this side, when it becomes more than good enough and the architecture becomes modular, where the money is made flips to the inside of the product. how are you going to do this? anything you can do, the competitors can just copy instantly because in a nonintegrated world, you're outsourcing from a common supplier base, and when the architecture of the system is modular, and it fits together according to industry standards, the better products are not created through clever architectural design; the performance of the product is driven by what's inside. highly recommended
  • seth godin's startup school (transcript), in the summer of 2012 i had an amazing opportunity to spend three days with a group of extremely motivated entrepreneurs - people right at the beginning of building their project, launching their organization. during those three days i took them on a guided tour of some of the questions they were going to have to wrestle with, some of the difficult places they were going through to stand up and say, "this is me. this is what i'm making". highly recommended
  • snowden and the future, by eben moglen. when there is no collective voice for those who are within structures that deceive and oppress, then somebody has to act courageously on his own. someone has to face all the risk for a tiny share of the total benefit that will be reaped by all. such a man may be walking the pathway from slavery to freedom. but any such man worthy of the effort will know that he may also be digging his own grave. when there is no union, we require heroism. or we perish for want of what we should have known, that there was neither collective will nor individual courage to bring us. it takes a union to end slavery because a man who decides that the will of the righteous commands us to free slaves will be called a traitor, and they will hang him-more than once. highly recommended
  • frank sinatra's 1963 playboy interview, i think i can sum up my religious feelings in a couple of paragraphs. first: i believe in you and me. i'm like albert schweitzer and bertrand russell and albert einstein in that i have a respect for life - in any form. i believe in nature, in the birds, the sea, the sky, in everything i can see or that there is real evidence for. if these things are what you mean by god, then i believe in god. but i don't believe in a personal god to whom i look for comfort or for a natural on the next roll of the dice. i'm not unmindful of man's seeming need for faith; i'm for anything that gets you through the night, be it prayer, tranquilizers or a bottle of jack daniel's. but to me religion is a deeply personal thing in which man and god go it alone together, without the witch doctor in the middle. the witch doctor tries to convince us that we have to ask god for help, to spell out to him what we need, even to bribe him with prayer or cash on the line. well, i believe that god knows what each of us wants and needs. it's not necessary for us to make it to church on sunday to reach him. you can find him anyplace. and if that sounds heretical, my source is pretty good: matthew, five to seven, the sermon on the mount

summing up 61

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • magic ink: information software and the graphical interface, by bret victor. today's ubiquitous gui has its roots in doug engelbart's groundshattering research in the mid-'60s. the concepts he invented were further developed at xerox parc in the '70s, and successfully commercialized in the apple macintosh in the early '80s, whereupon they essentially froze. twenty years later, despite thousand-fold improvements along every technological dimension, the concepts behind today's interfaces are almost identical to those in the initial mac. similar stories abound. for example, a telephone that could be "dialed" with a string of digits was the hot new thing ninety years ago. today, the "phone number" is ubiquitous and entrenched, despite countless revolutions in underlying technology. culture changes much more slowly than technological capability. the lesson is that, even today, we are designing for tomorrow's technology. cultural inertia will carry today's design choices to whatever technology comes next. in a world where science can outpace science fiction, predicting future technology can be a nostradamean challenge, but the responsible designer has no choice. a successful design will outlive the world it was designed for. highly recommended
  • visualisation and cognition: drawing things together, by bruno latour. it is not perception which is at stake in this problem of visualization and cognition. new inscriptions, and new ways of perceiving them, are the results of something deeper. if you wish to go out of your way and come back heavily equipped so as to force others to go out of their ways, the main problem to solve is that of mobilization. you have to go and to come back with the "things" if your moves are not to be wasted. but the "things" you gathered and displaced have to be presentable all at once to those you want to convince and who did not go there. in sum, you have to invent objects which have the properties of being mobile but also immutable, presentable, readable and combinable with one another. highly recommended (pdf)
  • cargo cult software engineering, the issue that has fallen by the wayside while we've been debating process vs. commitment is so blatant that it may simply have been so obvious that we have overlooked it. we should not be debating process vs. commitment; we should be debating competence vs. incompetence. the real difference is not which style is chosen, but what education, training, and understanding is brought to bear on the project. rather than debating process vs. commitment, we should be looking for ways to raise the average level of developer and manager competence. that will improve our chances of success regardless of which development style we choose
  • nude portraits, photography project by trevor christensen

summing up 60

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • the internet and hieronymus bosch: fear, protection, and liberty in cyberspace, the internet was conceived in the defense world, designed by academic and industrial research engineers, and transported into the commercial world only after its core design had been widely deployed. engineers are naturally libertarian. whether they are designing cars or computer networks, their ideals are utility, speed, and flexibility. software engineers are spared the ethical issues that confront the designers of munitions; engineering in the world of zeroes and ones inherits the nonnormative, amoral quality of mathematics. even cost, safety, and secrecy are imposed by market forces and governments, which had little force in the internet's gestation period. so there was a garden of eden quality to the internet in its preconsumer days: a combination of fecundity and innocence. the network fostered experimentation and innovation, and little worry about the potential for mischief or evil. highly recommended (pdf)
  • the internet's original sin, i have come to believe that advertising is the original sin of the web. the fallen state of our internet is a direct, if unintentional, consequence of choosing advertising as the default model to support online content and services. through successive rounds of innovation and investor storytime, we've trained internet users to expect that everything they say and do online will be aggregated into profiles (which they cannot review, challenge, or change) that shape both what ads and what content they see. outrage over experimental manipulation of these profiles by social networks and dating companies has led to heated debates amongst the technologically savvy, but hasn't shrunk the user bases of these services, as users now accept that this sort of manipulation is an integral part of the online experience. users have been so well trained to expect surveillance that even when widespread, clandestine government surveillance was revealed by a whistleblower, there has been little organized, public demand for reform and change. only half of americans believe that snowden's leaks served the public interest and the majority of americans favor criminal prosecution for the whistleblower. it's unlikely that our willingness to accept online surveillance reflects our trust in the american government, which is at historic lows. more likely, we've been taught that this is simply how the internet works: if we open ourselves to ever-increasing surveillance-whether from corporations or governments-the tools and content we want will remain free of cost
  • technical debt 101, the main problem is that software just doesn't suddenly fall. there is no gravity in software. the grains of sand, though not properly glued to each other to create solid material, keep floating in the air. and this is a problem, because this makes the consequences of bad practices less visible. you can always add some more sand to your software building. some sand will fall, some will move to unexpected places and knowing where to put the new sand will become really hard, but still, the building stands
  • you are not late, in terms of the internet, nothing has happened yet. the internet is still at the beginning of its beginning. if we could climb into a time machine and journey 30 years into the future, and from that vantage look back to today, we'd realize that most of the greatest products running the lives of citizens in 2044 were not invented until after 2014

summing up 59

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • everyone i know is brokenhearted, i don't believe anymore that the answer lies in more or better tech, or even awareness. i think the only thing that can save us is us. and i do think rage is a component that's necessary here: a final fundamental fed-up-ness with the bullshit and an unwillingness to give any more ground to the things that are doing us in. to stop being reasonable. to stop being well-behaved. not to hate those who are hurting us with their greed and psychopathic self-interest, but to simply stop letting them do it. the best way to defeat an enemy is not to destroy them, but to make them irrelevant. recommended
  • how the other half works: an adventure in the low status of software engineers, there was a time, perhaps 20 years gone by now, when the valley was different. engineers ran the show. technologists helped each other. programmers worked in r&d environments with high levels of autonomy and encouragement. to paraphrase from one r&d shop's internal slogan, bad ideas were good and good ideas were great. silicon valley was an underdog, a sideshow, an ellis island for misfits and led by "sheepdogs" intent on keeping mainstream mba culture (which would destroy the creative capacity of that industry, for good) away. that period ended. san francisco joined the "paper belt" cities of boston, new york, washington and los angeles. venture capital became hollywood for ugly people. the valley became a victim of its own success. bay area landlords made it big. fail-outs from mba-culture strongholds like mckinsey and goldman sachs found a less competitive arena in which they could boss nerds around with impunity; if you weren't good enough to make md at the bank, you went west to become a vc-funded founder. the one group of people that didn't win out in this new valley order were software engineers. housing costs went up far faster than their salaries, and they were gradually moved from being partners in innovation to being implementors of well-connected mba-culture fail-outs' shitty ideas. that's where we are now
  • the problem with founders, the secret of silicon valley is that the benefits of working at a startup accrue almost entirely to the founders, and that's why people repeat the advice to just go start a business. there is a reason it is hard to hire in silicon valley today, and it isn't just that there are a lot of startups. it's because engineers and other creators are realizing that the cards are stacked against them unless they are the ones in charge
  • the pitchforks are coming… for us plutocrats, we rich people have been falsely persuaded by our schooling and the affirmation of society, and have convinced ourselves, that we are the main job creators. it's simply not true. there can never be enough super-rich americans to power a great economy. i earn about 1,000 times the median american annually, but i don't buy thousands of times more stuff. my family purchased three cars over the past few years, not 3,000. i buy a few pairs of pants and a few shirts a year, just like most american men. i bought two pairs of the fancy wool pants i am wearing as i write, what my partner mike calls my "manager pants." i guess i could have bought 1,000 pairs. but why would i? instead, i sock my extra money away in savings, where it doesn't do the country much good. so forget all that rhetoric about how america is great because of people like you and me and steve jobs. you know the truth even if you won't admit it: if any of us had been born in somalia or the congo, all we'd be is some guy standing barefoot next to a dirt road selling fruit. it's not that somalia and congo don't have good entrepreneurs. it's just that the best ones are selling their wares off crates by the side of the road because that's all their customers can afford

summing up 58

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • visualizing algorithms, so, why visualize algorithms? why visualize anything? to leverage the human visual system to improve understanding. or more simply, to use vision to think. highly recommended
  • what scientific concept would improve everybody's cognitive toolkit? the mediocrity principle, the reason this is so essential to science is that it's the beginning of understanding how we came to be here and how everything works. we look for general principles that apply to the universe as a whole first, and those explain much of the story; and then we look for the quirks and exceptions that led to the details. it's a strategy that succeeds and is useful in gaining a deeper knowledge. starting with a presumption that a subject of interest represents a violation of the properties of the universe, that it was poofed uniquely into existence with a specific purpose, and that the conditions of its existence can no longer apply, means that you have leapt to an unfounded and unusual explanation with no legitimate reason. what the mediocrity principle tells us is that our state is not the product of intent, that the universe lacks both malice and benevolence, but that everything does follow rules - and that grasping those rules should be the goal of science
  • intuitive equals familiar, as an interface designer i am often asked to design a "better" interface to some product. usually one can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation it is superior to competing or the client's own products. even where my proposals are seen as significant improvements, they are often rejected nonetheless on the grounds that they are not intuitive. it is a classic "catch 22." the client wants something that is significantly superior to the competition. but if superior, it cannot be the same, so it must be different (typically the greater the improvement, the greater the difference). therefore it cannot be intuitive, that is, familiar
  • "i think one of the things that really separates us from the high primates is that we're tool-builders. i read a study that measured the efficiency of locomotion for various species on the planet. the condor used the least energy to move a kilometer. and humans came in with a rather unimpressive showing about a third of the way down the list; it was not too proud of a showing for the crown of creation. so, that didn't look so good. but then somebody at scientific american had the insight to test the efficiency of locomotion for a man on a bicycle. and a man on a bicycle-or a human on a bicycle-blew the condor away, completely off the top of the charts. and that's what a computer is to me. what a computer is to me is, it's the most remarkable tool that we've ever come up with. and it's the equivalent of a bicycle for our minds", steve jobs
  • what's on your mind? short film by shaun higton

summing up 57

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • a critique of technocentrism in thinking about the school of the future, by seymour papert. so we are entering this computer future; but what will it be like? what sort of a world will it be? there is no shortage of experts, futurists, and prophets who are ready to tell us, but they don't agree. the utopians promise us a new millennium, a wonderful world in which the computer will solve all our problems. the computer critics warn us of the dehumanizing effect of too much exposure to machinery, and of disruption of employment in the workplace and the economy. who is right? well, both are wrong - because they are asking the wrong question. the question is not "what will the computer do to us?" the question is "what will we make of the computer?" the point is not to predict the computer future. the point is to make it. highly recommended
  • inside the mirrortocracy, if spam filters sorted messages the way silicon valley sorts people, you'd only get email from your college roommate. and you'd never suspect you were missing a thing. lest you get the wrong idea, i'm not making a moral case but a fairly amoral one. it's hard to argue against the fact that the valley is unfairly exclusionary. this implies that there is a large untapped talent pool to be developed. since the tech war boils down to a talent war, the company that figures out how to get over itself and tap that pool wins. highly recommended
  • programming languages, operating systems, despair and anger, it's pretty damn sad that something as limited and now ancient as bash represents some kind of optimum of productivity for many real-world "everyday programming" tasks - and yet fails so miserably for so many other everyday programming tasks due to lack of data abstraction and richness. 90% of the shit that gets written doesn't even involve anything as complicated as finding set partitions. really
  • we lost the war. welcome to the world of tomorrow, we need to assume that it will take a couple of decades before the pendulum will swing back into the freedom direction, barring a total breakdown of civilization as we know it. only when the oppression becomes too burdensome and open might there be a chance to get back to the overall progress of mankind earlier. if the powers that be are able to manage the system smoothly and skillfully, we cannot make any prediction as to when the new dark ages will be over
  • the gloaming, short film by niko nobrain

summing up 56

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • what the theory of "disruptive innovation" gets wrong, the logic of disruptive innovation is the logic of the startup: establish a team of innovators, set a whiteboard under a blue sky, and never ask them to make a profit, because there needs to be a wall of separation between the people whose job is to come up with the best, smartest, and most creative and important ideas and the people whose job is to make money by selling stuff. disruptive innovation is a theory about why businesses fail. it's not more than that. it doesn't explain change. it's not a law of nature. it's an artifact of history, an idea, forged in time; it's the manufacture of a moment of upsetting and edgy uncertainty. transfixed by change, it's blind to continuity. it makes a very poor prophet. highly recommended
  • five things we need to know about technological change, by neil postman. in the past, we experienced technological change in the manner of sleep-walkers. our unspoken slogan has been "technology über alles," and we have been willing to shape our lives to fit the requirements of technology, not the requirements of culture. this is a form of stupidity, especially in an age of vast technological change. we need to proceed with our eyes wide open so that we many use technology rather than be used by it. highly recommended (pdf)
  • how children what? and so in the twenty-three years since the creation of the world wide web, "a bicycle for the mind" became "a treadmill for the brain". one helps you get where you want under your own power. another's used to simulate the natural world and is typically about self-discipline, self-regulation, and self-improvement. one is empowering; one is slimming. one you use with friends because it's fun; the other you use with friends because it isn't. our tools and services increasingly do things to us, not for us. and they certainly aren't about helping us to do things with them
  • "all our inventions are but improved means to an unimproved end", henry david thoreau
  • a developer's responsibility, even though developers sometimes love to put on their headphones and crank out some piece of software wizardry, it's important to occasionally step out of the office and engage with your customers. regularly seeing the daily work-life of your users first-hand helps establish that sense of responsibility to the end-user, and it makes the software better for it
  • how the rainbow color map misleads, despite its importance for perception and visualization, color continues to be a surprisingly little understood topic. people often seem to be content with default colors, or with an arbitrary selection that just happens to look good. but without great care when picking colors, you can do a lot of damage to your visualization

summing up 55

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • ease at work, i visualize my sense of unease or dis-ease as a pendulum. and the pendulum swings from over here, we have the master of the universe, we have the programmer who's better than anybody else. then, the other side of the pendulum. and usually this happens after i made a particularly egregious mistake. on this side of the pendulum i am a waste of carbon, i am the worst programmer ever born. now are those stories true? no, but i am addicted to telling myself both of those stories. i am wasting time, blowing off opportunities, throwing away the energy and gifts i've been given. so, what am i looking for? i am looking for a space in the middle, i'm still gonna swing. i'm still gonna think i'm better than i really am, i'm still gonna think i'm worse than i really am. but i would like to reduce the amplitude a bunch. if i can do that, i might not be at ease, but i'll be a lot closer than i am today. recommended
  • teaching creative computer science, we've ended up focusing too much on technology, on things, on devices, on those seductive boxes and not enough on ideas. i want our children not only to consume technology but to be imaginative creators of technological artefacts. i want them to be creative writers as well as appreciative readers. i want them to understand how the stuff they're using works, not just how to use it. arthur c. clarke once famously remarked that any sufficiently advanced technology is indistinguishable from magic. i think it's very damaging if our children come to believe that the computer systems they use are essentially magic. that is: not under their control
  • seeing spaces, i think people need to work in a space that moves them away from the kinds of non-scientific thinking that you do when you can't see what you're doing - moves them away from blindly following recipes, from superstitions and rules of thumb - and moves them towards deeply understanding what they're doing, inventing new things, discovering new things, contributing back to the global pool of human knowledge
  • the pivot, in an environment in which start-up resources are not limited, and no one can predict the next winner, and it is easy to measure customer behavior in great detail, the internet is no longer a technology. the internet is a psychology experiment. in this environment, quality is less important than speed. so the most prized technical people are the ones who can work quickly and produce one buggy prototype after another. and that brings me to the next observation. psychology has evolved to be a function of speed plus measurement. we're nearing the point at which the best psychologist in the world is any computer with access to big data, and any start-up that is rapidly testing one idea after another. that's a system that makes sense to me. in a complicated environment, systems work better than goals
  • interview with david graeber, in socialist regimes you couldn't really get fired from your job. as a result you didn't really have to work very hard. so on paper they had eight- or nine-hour days but really everyone was working maybe four or five. you get up. you buy the paper. you go to work. you read the paper. then maybe a little work, and a long lunch, including a visit to the public bath… if you think about it in that light, it makes the achievements of the socialist bloc seem pretty impressive: a country like russia managed to go from a backwater to a major world power with everyone working maybe on average four or five hours a day. but the problem is they couldn't take credit for it. they had to pretend it was a problem, "the problem of absenteeism," or whatever, because of course work was considered the ultimate moral virtue. they couldn't take credit for the great social benefit they actually provided. which is, incidentally, the reason that workers in socialist countries had no idea what they were getting into when they accepted the idea of introducing capitalist-style work discipline. "what, we have to ask permission to go to the bathroom?" it seemed just as totalitarian to them as accepting a soviet-style police state would have been to us. that ambivalence in the heart of the worker's movement remains. on the one hand, there's this ideological imperative to validate work as virtue in itself. which is constantly being reinforced by the larger society. on the other hand, there's the reality that most work is obviously stupid, degrading, unnecessary, and the feeling that it is best avoided whenever possible

summing up 54

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • patterns of software, a collection of essays on patterns, software, writing, business, and my life story. the two essays habitability and piecemeal growth and the bead game, rugs and beauty alone make the book worth reading. highly recommended (pdf)
  • the internet with a human face, one of the worst aspects of surveillance is how it limits our ability to be creative with technology. it's like a tax we all have to pay on innovation. we can't have cool things, because they're too potentially invasive. imagine if we didn't have to worry about privacy, if we had strong guarantees that our inventions wouldn't immediately be used against us. highly recommended
  • finding the right job for your product, by clayton christensen. it appears that the precipitating event that allows the winning strategy of an emerging company to coalesce is the clarification of a job that customers need to get done for which its product is being hired. it is only when the job is well-understood that the business model and the products and services required to do it perfectly become clear. then, and only then, can the company "take off". recommended (pdf)
  • a conversation with werner vogels, giving developers operational responsibilities has greatly enhanced the quality of the services, both from a customer and a technology point of view. the traditional model is that you take your software to the wall that separates development and operations, and throw it over and then forget about it. not here. you build it, you run it
  • "there is always a well-known solution to every human problem - neat, plausible, and wrong", h. l. mencken
  • paths of hate, short film by damian nenow

summing up 53

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • failure, by adam savage. every parent will tell you that you make a rule for your kid and they'll break it, you put a wall up and they'll push against it. there's a prevailing theory that this is how a child learns the shape of the world. and if you don't give them any boundaries they start freaking out. we all know children who don't get any boundaries and they start freaking out because the world feels unsafe to them. they need someone to tell them what the limit is and i think that failure in my life has worked in the exact same way. it doesn't teach me the limit of the world but it teaches me the shape of my intuition. there's one thing i've learned which is that a craftsman isn't somebody who doesn't make mistakes. it's not about the cessation of failure, it's about recognizing that it's occurring, recognizing that it's going to be an inherent part of the process and recognizing that you gotta dance with that. sometimes it's gonna catch up with you, sometimes you're gonna screw things up so badly and it's gonna be fine in the end. i don't trust people that haven't failed. highly recommended
  • a personal computer for children of all ages, by alan kay. with dewey, piaget and papert, we believe that children "learn by doing" and that much of the alienation in modern education comes from the great philosophical distance between the kinds of things children can "do" and much of 20th-century adult behavior. unlike the african child whose play with bow and arrow involves him in future adult activity, the american child can either indulge in irrelevant imitation like the child in a nurse's uniform taking care of a doll or is forced to participate in activities which will not bear fruit for many years and will leave him alienated. if we want children to learn any particular area, then it is clearly up to us to provide them with something real and enjoyable to "do" on their way to perfection of both the art and the skill. highly recommended
  • everything is broken, it's hard to explain to regular people how much technology barely works, how much the infrastructure of our lives is held together by the it equivalent of baling wire. but computers fail to serve the needs of both privacy and coordination, and not because it's somehow mathematically impossible. there are plenty of schemes that could federate or safely encrypt our data, plenty of ways we could regain privacy and make our computers work better by default. it isn't happening now because we haven't demanded that it should, not because no one is clever enough to make that happen
  • how did software get so reliable without proof? by tony hoare. this review of programming methodology reveals how much the best of current practice owes to the ideas and understanding gained by research which was completed more than twenty years ago. the existence of such a large gap between theory and practice is deplored by many, but i think quite wrongly. the gap is actually an extremely good sign of the maturity and good health of our discipline, and the only deplorable results are those that arise from failure to recognise it (pdf)
  • "the fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt", bertrand russell

summing up 52

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • urls are already dead, urls are just as important to the web as phone numbers are to the worldwide telephone system. what they're trying to do is to relegate urls to the same level as protocols, phone numbers, and email addresses, which is encoded routing information that most people will rarely use in its raw form
  • designer duds: losing our seat at the table, design must deliver results. designers must also accept that if they don't, they're not actually designing well; in technology, at least, the subjective artistry of design is mirrored by the objective finality of use data. a "great" design which produces bad outcomes - low engagement, little utility, few downloads, indifference on the part of the target market - should be regarded as a failure
  • portrait of a n00b, perl, python and ruby fail to attract many java and c++ programmers because, well, they force you to get stuff done. it's not very easy to drag your heels and dicker with class modeling in dynamic languages, although i suppose some people still manage. and haskell, ocaml and their ilk are part of a 45-year-old static-typing movement within academia to try to force people to model everything. programmers hate that. these languages will never, ever enjoy any substantial commercial success, for the exact same reason the semantic web is a failure. you can't force people to provide metadata for everything they do. they'll hate you.
  • "a man who has the knowledge but lacks the power to express it clearly is no better off than if he never had any ideas at all", pericles' last speech
  • the street performer protocol and digital copyrights, an electronic-commerce mechanism to facilitate the private financing of public works. using this protocol, people would place donations in escrow, to be released to an author in the event that the promised work is put in the public domain. this protocol has the potential to fund alternative or "marginal" works - a toy sketch of the escrow logic follows this list
  • spurious correlations, insane things that correlate with each other
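
a toy sketch of the street performer protocol's escrow logic mentioned above, in python (my reading of the idea, not code from the paper): pledges are held until the author releases the promised work into the public domain; if that happens and the asking amount is covered, the escrow pays the author, otherwise donors are refunded.

    from dataclasses import dataclass, field

    @dataclass
    class Escrow:
        threshold: float                       # amount the author asked for
        pledges: dict = field(default_factory=dict)
        released: bool = False                 # work placed in the public domain?

        def pledge(self, donor: str, amount: float) -> None:
            self.pledges[donor] = self.pledges.get(donor, 0.0) + amount

        def mark_released(self) -> None:
            self.released = True

        def settle(self) -> dict:
            total = sum(self.pledges.values())
            if self.released and total >= self.threshold:
                return {"author": total}       # pay out the escrowed pledges
            return dict(self.pledges)          # otherwise refund every donor

    escrow = Escrow(threshold=1000.0)
    escrow.pledge("alice", 600.0)
    escrow.pledge("bob", 500.0)
    escrow.mark_released()
    print(escrow.settle())                     # {'author': 1100.0}

as i understand the paper, the full protocol also involves a publisher holding the escrow and a deadline for delivery; the sketch only keeps the threshold-and-release core.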

summing up 51

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • programming sucks, the truth is everything is breaking all the time, everywhere, for everyone. right now someone who works for facebook is getting tens of thousands of error messages and frantically trying to find the problem before the whole charade collapses. there's a team at a google office that hasn't slept in three days. somewhere there's a database programmer surrounded by empty mountain dew bottles whose husband thinks she's dead. and if these people stop, the world burns. highly recommended
  • the hundred-year language, inefficient software isn't gross. what's gross is a language that makes programmers do needless work. wasting programmer time is the true inefficiency, not wasting machine time. this will become ever more clear as computers get faster
  • how to be a great software developer, it is all too easy for smart lazy people to flash spikes of brilliance and wow their contemporaries but companies are not built on those people and product does not sit well on spikes. companies are built on people and teams who day in, day out, commit good code that enables others to do the same. great product is built by work horses, not dressage horses
  • tdd is dead. long live testing, maybe it was necessary to use test-first as the counterintuitive ram for breaking down the industry's sorry lack of automated, regression testing. maybe it was a parable that just wasn't intended to be a literal description of the day-to-day workings of software writing. but whatever it started out as, it was soon since corrupted. used as a hammer to beat down the nonbelievers, declare them unprofessional and unfit for writing software. a litmus test
  • "while learning something new, many students will think, 'damn, this is hard for me. i wonder if i am stupid.' because stupidity is such an unthinkably terrible thing in our culture, the students will then spend hours constructing arguments that explain why they are intelligent yet are having difficulties. the moment you start down this path, you have lost your focus. i used to have a boss named rock. rock had earned a degree in astrophysics from cal tech and had never had a job in which he used his knowledge of the heavens. once i asked him whether he regretted getting the degree. 'actually, my degree in astrophysics has proved to be very valuable,' he said. 'some things in this world are just hard. when i am struggling with something, i sometimes think: damn, this is hard for me. i wonder if i am stupid. and then i remember that i have a degree in astrophysics from cal tech; i must not be stupid.'", aaron hillegass
  • emerging adults need time to grow up, it's in the interest of all of us to help young people make a successful transition to adulthood, because when they do, everybody benefits. emerging adults want to contribute to their societies, not be passive dependents. nearly all of them are striving hard to make their way in the world, and they aspire to find a form of work that does some good in the world. but their societies are not doing a very good job in reforming educational and employment systems for the modern world, in order to make it possible for young people to make the most of their talents, abilities and energies. the lack of access to high-quality educational opportunities is a scandal in a country such as the united states, which is the wealthiest the world has ever seen. it represents a colossal waste of human potential
  • rate-of-learning: the most valuable startup compensation, a high rate-of-learning is the most bankable asset you can have in the startup world because it's the vehicle by which long-term value is created, both within yourself and for your startup
  • the power of the marginal, i disagree here with yoda, who said there is no try. there is try. it implies there's no punishment if you fail. you're driven by curiosity instead of duty. that means the wind of procrastination will be in your favor: instead of avoiding this work, this will be what you do as a way of avoiding other work. and when you do it, you'll be in a better mood. the more the work depends on imagination, the more that matters, because most people have more ideas when they're happy

summing up 50

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • designed as designer, we've all been fooled the same way. many believe ronald reagan single-handedly defeated communism, tim berners-lee single-handedly invented (everything about) the world wide web, michael jordan single-handedly won 6 nba championships, gillette invented the safety razor.... the list of people (and companies) given more credit than is due could go on, perhaps as long as you like. there's something about our culture that seems to love heroes, that looks for the genius who's solved it all, that seems to need to believe the first to market - the best inventor - reaps justly deserved rewards. two factors combine to manufacture this love of heroes: a failure to perceive the effects of randomness on real life and a need for stories. a name and story are less abstract than an intertwined trail of ideas and designs that leads to a monument. luck and randomness go against the grain of human cognition; it's hard to see the role of the thing designed in its own design; and thus typically only talent is given credit for great achievement. highly recommended (pdf)
  • free is a lie, we can't labour under this theory of trickle-down technology, which, akin to trickle-down economics, says that if we have enthusiasts who are working to create tools for other enthusiasts, somehow these will also trickle down into being usable products for consumers. but it doesn't work. this is why we've given people personal computers for thirty years when apparently all they wanted were iphones. even worse than that, we told them that they were too dumb to use them, when we were too dumb to create simple enough solutions. it's about user experience. who gets user experience today? apple does, google does, and what's common between these two companies is control. control over all aspects of what they're doing. because it is the combination of components that is the experience. although we as the makers may be able to split them up, dissect them and examine each individually, people do not when they're using it. they either have a great experience or they don't. they either love it or they don't. the age of features is dead, we're living in the age of experiences. recommended
  • work hard you'll get there eventually (hint: no you won't), outdated business rules will not only accomplish little in this modern world, they will interfere with your career progress. if you take this impressing your boss and over-delivery advice to heart you'll very likely be too busy (because you are working so hard) or too complacent (because your boss is telling you how great you are) to see an opportunity when it presents itself. you'll be blinded by the feeling that you are already doing the right thing when, in fact, you should be consciously seeking out the actual right thing
  • irresponsible journalism, any headline which ends in a question mark can be answered by the word "no". the reason why journalists use that style of headline is that they know the story is probably bullshit, and don't actually have the sources and facts to back it up, but still want to run it
  • passwords are obsolete, it turns out that passwords are obsolete, and they have been for a long time. like the occasional pay phone you find in the back of a run-down restaurant, passwords have been unnecessary for years. the difference is that everyone laughs and reminisces when they see a pay phone, but nobody does that when they see a password field. but they should

summing up 49

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • the future doesn't have to be incremental, by alan kay. suppose you had twice the iq of leonardo, but you were born in 10000 bc; how far you gonna get? zero before they burn you at the stake. henry ford was nowhere near leonardo, but he was able to do something leonardo couldn't do. leonardo never was able to invent a single engine for any of his vehicles. but henry ford was born into the right century. he had knowledge and he did not have to invent the gasoline engine, it already had been invented. and so he would be what was called an innovator today, he did not invent anything. but he put things together, and for most things knowledge dominates iq. highly recommended
  • creation myth: xerox parc, apple, and the truth about innovation, when you have a bunch of smart people with a broad enough charter, you will always get something good out of it. it's one of the best investments you could possibly make - but only if you choose to value it in terms of successes. if you choose to evaluate it in terms of how many times you failed, or times you could have succeeded and didn't, then you are bound to be unhappy. innovation is an unruly thing. there will be some ideas that don't get caught in your cup. but that's not what the game is about. the game is what you catch, not what you spill. recommended
  • improving our ability to improve: a call for investment in a new future, by doug engelbart. we need to become better at being humans. learning to use symbols and knowledge in new ways, across groups, across cultures, is a powerful, valuable, and very human goal. and it is also one that is obtainable, if we only begin to open our minds to full, complete use of computers to augment our most human of capabilities (pdf)
  • research, huh! what is it good for? what look like novel ideas from a distance in general turn out, upon closer inspection, to have emerged from a general cloud of research ideas that were knocking around at the time. it's terribly hard to know where ideas came from, once you have them. and that makes it terribly hard to guess well what ideas are going to grow out of whatever's going on now. so perhaps there isn't a better way than to generate lots of solutions, throw them around the place and see what few of them stick to a problem
  • programming languages are the least usable, but most powerful human-computer interfaces ever invented, if there's any truth to the title of this post, it's the implied idea that programming languages are just another type of human-computer interface in the rich and varied design space of user interface paradigms. this has some fun implications. for example, programmers are users too, and they deserve all of the same careful consideration that we give non-programmers using non-programming interfaces. this also means that programming languages researchers are really studying user interface design, like hci researchers do. there aren't two fields we might find more dissimilar in method or culture, but their questions and the phenomena they concern are actually remarkably aligned
  • visual programming languages, a place on the net where one can easily see what all the different visual programming languages (graphical programming languages) look like
  • being useful - a short introduction to proactive experiences, we often forget about the first commandment of user experience: usefulness. being usable or beautiful is easy, being useful is hard work
  • the future of ui and the dream of the ‘90s, the future of interface design isn't a dream from the 90s. the future of interface design is about emotional awareness; connecting us with products the way we connect with each other

summing up 48

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • our comrade the electron, if you look at the history of the kgb or stasi, they consumed enormous resources just maintaining and cross-referencing their mountains of paperwork. imagine what stalin could have done with a decent mysql server. we haven't seen yet what a truly bad government is capable of doing with modern information technology. what the good ones get up to is terrifying enough. highly recommended
  • no one knows what the f*** they're doing or the 3 types of knowledge, the real reason you feel like a fraud is because you have been successful in taking a lot of information out of the "shit you don't know you don't know" category and put it into the "shit you know you don't know" category; you know of a lot of stuff you don't know. the good news is that this makes you very not dangerous. the bad news is that it also makes you feel dumb and helpless a lot of the time. recommended
  • programming as theory building, it is concluded that the proper, primary aim of programming is, not to produce programs, but to have the programmers build theories of the manner in which the problems at hand are solved by program execution
  • toward a better programming, if you look at much of the advances that have made it to the mainstream over the past 50 years, it turns out they largely increased our efficiency without really changing the act of programming
  • regulation ratchets, in any area where we let humans do things, every once in a while there will be a big screwup; that is the sort of creatures humans are. and if you won't decrease regulation without a screwup but will increase it with a screwup, then you have a regulation ratchet: it only moves one way. so if you don't think a long period without a big disaster calls for weaker regulations, but you do think a particular big disaster calls for stronger regulation, well then you might as well just strengthen regulations lots more right now, even without a disaster. because that is where your regulation ratchet is heading
  • the eight-hour burn, we treat scarce resources as being more valuable, and we make more efficient use of them. when you have too much time to work, your work time reduces significantly in perceived value
  • the expert, short film by lauris beinerts

summing up 47

a more or less weekly digest of juicy stuff. please find previous editions here.

  • salary negotiation, negotiating never makes (worthwhile) offers worse. this means you need what political scientists call a commitment strategy: you always, as a matter of policy, negotiate all offers. this also means you do not start negotiating until you already have a yes-if (yes-if we agree on terms). do not start negotiating from no-but (no-but we might hire you anyway if you're really, really effing cheap). recommended
  • lessons from a silicon valley job search, it's a great market to be an engineer, but finding the right job still requires a lot of time and effort. spray applications anywhere and everywhere you like the look of, and see what sticks. a little organisation will go a long way, but a little over-thinking will quickly make you go insane. recommended
  • engineer's guide to us visas
  • don't scar on the first cut, policies are codified overreactions to unlikely-to-happen-again situations. a collective punishment for the wrong-doings of a one-off. and unless you want to treat the people in your environment as five year-olds, "because the policy said so" is not a valid answer
  • a rant about women, it looks to me like women in general, and the women whose educations i am responsible for in particular, are often lousy at those kinds of behaviors, even when the situation calls for it. they aren't just bad at behaving like arrogant self-aggrandizing jerks. they are bad at behaving like self-promoting narcissists, anti-social obsessives, or pompous blowhards, even a little bit, even temporarily, even when it would be in their best interests to do so. whatever bad things you can say about those behaviors, you can't say they are underrepresented among people who have changed the world
  • cargo cult agile, it's unfortunate, though - and a little ironic - that a set of methods created to reduce meetings and waste is being abused to increase them. stand-up meetings are a neat tool, but they're hardly the core of agile development. beware cargo cult agile. don't use stand-up meetings to avoid real communication and collaboration

summing up 46

a more or less weekly digest of juicy stuff. please find previous editions here.

  • why software sucks, good programmers, designers, architects or creators of any kind are simply thoughtful. they are so passionate about making good things, that they will study any discipline, read any book, listen to any person and learn any skill that might improve their abilities to make things worthy of the world. they tear down boundaries of discipline, domain or job title, clawing at any idea, regardless of its origins, that might help them make a better thing. recommended
  • why software sucks, virtually all of the cost of software development is, directly and indirectly, the cost of design. if a student architect could design a skyscraper, push a button, and have some futuristic genesis device instantly construct the building at virtually no cost - and at no danger to anyone - and with perfect components throughout, would he not do so? further, imagine that with a push of another button, the entire building could be reduced back to its constituent atoms
  • how to stop worrying and learn to love the internet, i suppose earlier generations had to sit through all this huffing and puffing with the invention of television, the phone, cinema, radio, the car, the bicycle, printing, the wheel and so on, but you would think we would learn the way these things work, which is this: 1) everything that's already in the world when you're born is just normal 2) anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it 3) anything that gets invented after you're thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it's been around for about ten years when it gradually turns out to be alright really
  • "technology is stuff that doesn't work yet", bran ferren
  • i'm not the product, but i play one on the internet, we should all stop saying, "if you're not paying for the product, you are the product," because it doesn't really mean anything, it excuses the behavior of bad companies, and it makes you sound kind of like a stoner looking at their hand for the first time
  • why agile has failed, because creating good software is so much about technical decisions and so little about management process, i believe that there is very little place for non-technical managers in any software development organisation. if your role is simply asking for estimates and enforcing the agile rituals: stand-ups, fortnightly sprints, retrospectives; then you are an impediment rather than an asset to delivery
  • on the effectiveness of lectures, listening to lectures is the least effective means of delivering learning, closely followed by reading textbooks. this is not to say that there are not great lecturers and great textbooks - but statistically the overall amount of learning per hour spent in lecture is the lowest of a wide number of possible delivery methods

summing up 45

a more or less weekly digest of juicy stuff. please find previous editions here.

  • money, guilt and the machine, by alan watts. the difference between having a job and having a vocation is that a job is some unpleasant work you do in order to make money, with the sole purpose of making money. but if you do a job with the sole purpose of making money, you are absurd. because if money becomes the goal, and it does when you work that way, you begin increasingly to confuse it with happiness - or with pleasure. yes, one can take a whole handful of crisp dollar bills and practically water your mouth over them. but this is a kind of person who is confused, like a pavlov dog, who salivates on the wrong bell. it goes back to the ancient guilt that if you don't work you have no right to eat; that if there are others in the world who don't have enough to eat, you shouldn't enjoy your dinner even though you have no possible means of conveying the food to them. and it is true that we are all one human family and that every individual involves every other individual, and that therefore we should do something about changing the situation. highly recommended
  • a theory on economic growth, clayton christensen on disruptive innovation. recommended
  • a few words on doug engelbart, our computers are fundamentally designed with a single-user assumption through-and-through, and simply mirroring a display remotely doesn't magically transform them into collaborative environments. if you attempt to make sense of engelbart's design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody engelbart's intent. engelbart hated our present-day systems
  • why atom can't replace vim, a new, shiny, modern editor could one-up vim by fixing some (or hopefully all) of these issues. but before an editor can replace vim, it needs to learn everything that 1976 has to teach - not just the lesson of emacs, but also the lesson of vi
  • the labor illusion: how operational transparency increases perceived value, we demonstrate that because of what we term the labor illusion, when websites engage in operational transparency by signaling that they are exerting effort, people can actually prefer websites with longer waits to those that return instantaneous results - even when those results are identical (pdf)
  • innovative, pragmatic, affordable, accessible, ip-protected, innovation is a state of mind, it's not a job title and it's not an industry. it has to be in every single one of you. it's about you. it's about people and it's about minds. and it's about how open you are to innovate and how open you are to innovation
  • failure is an option, capitalism and entrepreneurial innovation require risk, as it is a fundamental component of business evolution. when companies are allowed to fail, their resources get reallocated in the market, just like a fire that converts sparse undergrowth into fertilizer for the next generation of trees
  • flying the world's fastest plane, behind the stick of the sr-71 blackbird

summing up 44

a more or less weekly digest of juicy stuff. please find previous editions here.

  • a mathematician's lament, sadly, our present system of mathematics education is precisely this kind of nightmare. in fact, if i had to design a mechanism for the express purpose of destroying a child's natural curiosity and love of pattern-making, i couldn't possibly do as good a job as is currently being done - i simply wouldn't have the imagination to come up with the kind of senseless, soul crushing ideas that constitute contemporary mathematics education. the only people who understand what is going on are the ones most often blamed and least often heard: the students. they say, "math class is stupid and boring," and they are right. highly recommended (pdf)
  • seeing the secret state, talk by trevor paglen
  • an open letter to recruiters
  • how to survive in design (and in a zombie apocalypse), the tools that kept you safe thus far, that you've mastered well enough to use in your sleep - those tools will not always be sufficient. even if you're not working on mobile now, there's a good chance you will. soon
  • what makes an experience seem innovative? since customers think standing and waiting is a necessary evil without alternatives, they may not complain about it. organizations that focus on the specific activities to resolve their perceived customer objective, may overlook the deep frustration from tool time that's happening in the gaps between those activities. teams that study the entire experience look into those gaps to see from where the deep frustration emerges. addressing that frustration, when no other product or service has done so, will look innovative to the customer
  • rootless root, the unix koans of master foo
  • umfrage zum integrationstest, by tedros teclebrhan (german)
  • porcelain unicorn, short movie by keegan wilcox

summing up 43

a more or less weekly digest of juicy stuff. please find previous editions here.

  • kill math, it's the responsibility of our tools to adapt inaccessible things to our human limitations, to translate into forms we can feel. microscopes adapt tiny things so they can be seen with our plain old eyes. tweezers adapt tiny things so they can be manipulated with our plain old fingers. calculators adapt huge numbers so they can be manipulated with our plain old brain. and i'm imagining a tool that adapts complex situations so they can be seen, experienced, and reasoned about with our plain old brain
  • so it's different symbols, but it doesn't mean anything, computers aren't a short-cut to something useless: they are a way of making your single-functioning brain more efficient. they are, in fact, an extension of your brain. you can figure out how to do something, explain it to the computer, and then forget how to do it because the computer now does it for you. which is how the world now operates
  • sweep the sleaze, if readers are too lazy to copy and paste the url, and write a few words about your content, then it is not because you lack these magical buttons. if you provide excellent content, social media users will take the time to read and talk about it in their networks. that's what you really want. you don't want a cheap thumbs up, you want your readers to talk about your content with their own voice
  • programming and depression, so at the end of several days worth of programming, and problem-solving, and forward-thinking, all a programmer might get is a "thanks, now here's the next thing i need you to do"
  • be nice to programmers, programming builds an acutely negative mindset over time. i'm always asking the question "what's wrong with this?" positive people are always focusing on "what's good about this?"
  • the joy of programming with bob ross
  • it architecture, the usual suspects, hilarious
  • your 60-hour work week is not a badge of honour, we need to stop being proud of overworking ourselves. it's unhealthy, it stunts the growth of the business, and it's unsustainable. instead, we should be proud of creating or working in an environment that is efficient, organized, and diligent enough to allow people to work regular hours on meaningful work
  • both sides of, photography project by alex john beck

summing up 42

a more or less weekly digest of juicy stuff. please find previous editions here.

  • reinventing explanation, my own personal conviction is that we are still in the early days of exploring the potential that modern media - especially digital media - have for the explanation of science. our current attempts at digital explanation seem to me to be like the efforts of the early silent film-makers, or of painters prior to the florentine renaissance. we haven't yet found our michelangelo and leonardo, we don't yet know what is possible. in fact, we don't yet have even the basic vocabulary of digital explanation. my instinct is that such a vocabulary will be developed in the decades to come. but that is far too big a goal to attack directly. instead, we can make progress by constructing prototypes, and learning from what they have to tell us. highly recommended
  • startup advice and the clarity of experience, being there - in the arena - gives you the clarity of experience; it's a sixth sense that is the ability to know which pieces of advice are important. unfortunately, the most important lessons you can learn from people with experience tend to be things you don't think are important until you have experience
  • 1,000 true fans, young artists starting out in this digitally mediated world have another path other than stardom, a path made possible by the very technology that creates the long tail. instead of trying to reach the narrow and unlikely peaks of platinum hits, bestseller blockbusters, and celebrity status, they can aim for direct connection with 1,000 true fans. it's a much saner destination to hope for. you make a living instead of a fortune. you are surrounded not by fad and fashionable infatuation, but by true fans. and you are much more likely to actually arrive there
  • curse of the gifted, we've seen the curse of the gifted before. some of us were those kids in college. we learned the hard way that the bill always comes due - the scale of the problems always increases to a point where your native talent alone doesn't cut it any more. the smarter you are, the longer it takes to hit that crunch point - and the harder the adjustment when you finally do
  • don't end the week with nothing, i realized something which is fundamentally true of a lot of day jobs. nothing i did at the job mattered, in the long run. don't end the week with nothing. prefer to work on things you can show. prefer to work where people can see you. prefer to work on things you can own
  • why javascript is doomed, yet, at the same time so many developers get up in the morning and write javascript code. why do they do it, unless they think javascript really is awesome? it's because all the stuff around javascript and built with javascript and on top of javascript is actually pretty awesome
  • how religion destroys programmers, i have an amazing gift to always make the very best technology choice. when i look back at my development career, it seems to me that every programming language i was using at any given time was clearly the best one. the problem with this self-imposed religion is that our technological religion blinds us from the truth
  • make gifts for people, by john green
  • reason for payment, meaningless reasons for payments and money transfers

summing up 41

a more or less weekly digest of juicy stuff. please find previous editions here.

  • the internet regression, when communicating on the internet, we set up a relationship with other people in which the people get less human and the machine gets more human. that is how the three signs of the internet regression come into play: flaming, flirting, and giving. our feelings toward the computer as computer become our feelings toward the people to whom we send e-mail or post messages. we flame to the person as though he or she were an insensitive thing, a machine that can't be hurt. we flirt with the machine as though it were a person and could interact with us, compliantly offering sex. we feel open and giving toward the computer because the computer is open and giving to us. highly recommended
  • we're not even trying, computers are not just a tool for writing code, they are a tool for thinking. we have to stop thinking of code as text and start thinking of code as data
  • in defense of not-invented-here syndrome, if it's a core business function - do it yourself, no matter what
  • whether to delegate, managing energy is more important than managing time. energy is what gets things done, and time is only a crude surrogate for energy. instead of only looking at what you could earn per hour versus what you could hire someone else for per hour, consider the energy it would take you to do something versus the energy it would free to delegate it
  • reasons to be creative, design is about behavior. it's not about how your product looks, it's about how your product behaves. this means that if it behaves like an asshole, people will start to see it like an asshole. and that's a big problem
  • "a complex system that works is invariably found to have evolved from a simple system that worked. a complex system designed from scratch never works and cannot be patched up to make it work. you have to start over with a working simple system", gall's law
  • imitate. we are imperfect mirrors, like a funhouse mirror that distorts what it reflects, even if you try to imitate something, it will turn out much different than the original. maybe better
  • my golden rule for pitching your startup or product, there's a simple rule i use when pitching a product or even a company to someone. i call it "no ands." the rule is simple: you have to be able to describe your idea in a single sentence without using the word "and"
  • life is a game. this is your strategy guide, you might not realise, but real life is a game of strategy. there are some fun mini-games - like dancing, driving, running, and sex - but the key to winning is simply managing your resources
  • imagine finding me, photography project by chino otsuka

summing up 40

a more or less weekly digest of juicy stuff

  • you and your research, by richard hamming. i claim that some of the reasons why so many people who have greatness within their grasp don't succeed are: they don't work on important problems, they don't become emotionally involved, they don't try and change what is difficult to some other situation which is easily done but is still important, and they keep giving themselves alibis why they don't. highly recommended
  • it's not information overload. it's filter failure, we have had information overload in some form or another since the 1500's. what is changing now is that the filters we have used for most of that period are breaking, and designing new filters doesn't mean simply updating the old filters. they have broken for structural reasons, not for service reasons. highly recommended
  • it's time to engineer some filter failure, filters do a great job of hiding things that are dissimilar and surprising. but that's the very definition of information! formally it's the one thing that's not like the others, the one that surprises you. i want my filters to fail, and i want dials that control the degrees and kinds of failures
  • asking questions beats giving advice, i rarely advise people on what they should do. i don't know what they should do. instead i try to ask good questions in the hope of creating new ideas for them to consider, and new places for answers to fit
  • why i like java, java is neither a good nor a bad language. it is a mediocre language, and there is no struggle. in haskell or even in perl you are always worrying about whether you are doing something in the cleanest and the best way. in java, you can forget about doing it in the cleanest or the best way, because that is impossible. whatever you do, however hard you try, the code will come out mediocre, verbose, redundant, and bloated, and the only thing you can do is relax and keep turning the crank until the necessary amount of code has come out of the spout
  • we are not normal people, when it comes to building products, the biggest problem technical (and creative) people have is this: increasing the technical challenge while creating a product does not increase the chance for more sales
  • healthy open source projects need people, software is like fruit. it tastes great when it's fresh, but goes bad very quickly. in fact, it is ridiculous how quickly software rots
  • chernobyl's hot mess, "the elephant's foot" is still lethal
  • validation, short film by kurt kuenne

summing up 39

a more or less weekly digest of juicy stuff

  • a tale of two bridges, highly recommended
  • you should be amazed, if you're able to read this, you live in the most amazing time imaginable. and the funniest thing is, most of us never even notice
  • "pyramiding": efficient identification of rare subjects, "pyramiding" is a search process based upon the view that people with a strong interest in a topic or field tend to know people more expert than themselves. we find that pyramiding in each case identifies the best solution within the search space using an average of only 30% of the effort required by mass screening (pdf)
  • on being comfortable not knowing, i won't be able to answer all your questions. rather, i can show you how to be lost productively, and how to become comfortable not knowing things and teaching yourself
  • really achieving your childhood dreams, by randy pausch
  • value is created by doing, doing things is really hard - it's why, for example, you can generally tell people what you're working on without ndas, and most patents never matter. the value, and the difficulty, comes from execution
  • why designers leave, every person who works in a creative field has an aspiration for her work, a yearning for that ideal plane which is the culmination of her taste. when an environment fails, over and over and over again, to provide her with a means to follow her internal compass, then she will leave
  • code is not literature and we are not readers. rather, interesting pieces of code are specimens and we are naturalists. so instead of trying to pick out a piece of code and reading it and then discussing it like a bunch of comp lit. grad students, i think a better model is for one of us to play the role of a 19th century naturalist returning from a trip to some exotic island to present to the local scientific society a discussion of the crazy beetles they found
  • the unsuccessful self-treatment of a case of "writer's block" (pdf)
  • i love all caps and i am never going to stop using them
  • a handyman's toolbox, i'm not against learning new technologies. but i only introduce them when i reach an impasse the simplest tool in my toolbox can't fix
  • a conference call in real life, hilarious
  • the smile man, short film by anton lanshakov
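
a toy sketch of the pyramiding process mentioned above, in python (mine, not the paper's; the names, referrals and expertise scores are made-up illustration data): start with any respondent and keep asking "who knows more about this than you?" until nobody can point further up.

    # follow referrals upward instead of screening every candidate
    expertise = {"ann": 2, "ben": 4, "cara": 7, "dev": 9}          # hypothetical scores
    refers_to = {"ann": "ben", "ben": "cara", "cara": "dev", "dev": None}

    def pyramid(start: str) -> tuple[str, int]:
        current, interviews = start, 1
        while refers_to[current] is not None:
            current = refers_to[current]       # follow the referral to someone more expert
            interviews += 1
        return current, interviews             # best person found, and how many were asked

    best, cost = pyramid("ann")
    print(f"most expert found: {best} (score {expertise[best]}) after {cost} interviews")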

summing up 38

a more or less weekly digest of juicy stuff

  • navigating stuckness, in life, you will become known for doing what you do. that sounds obvious, but it's profound. if you want to be known as someone who does a particular thing, then you must start doing that thing immediately. don't wait. there is no other way. it probably won't make you money at first, but do it anyway. work nights. work weekends. sleep less. whatever you have to do. if you do it well, and for long enough, the world will find ways to repay you. highly recommended
  • the last re-org you'll ever do, here's the problem. the design of an organization is an insurmountable intellectual puzzle. whatever your mental model of the organization might be, it's too simplistic. no human, and no current machine, can handle the complexity. it's literally impossible. highly recommended
  • unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments, those with limited knowledge in a domain suffer a dual burden: not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it (pdf)
  • how dogecoin changed my perspective on cryptocurrency, there's no inherent reason that a piece of paper has a certain amount of value. there is a subjective reason, though: namely, that a large enough number of people agree that usd is a worthwhile commodity. they believe that for a number of reasons, including that our army will go off and fucking kill anybody who threatens to break the spell that makes everyone agree that usd are worth your time to accept
  • debian bug #727708: on diversity, open source does not turn the developers who (often in their spare time) work on the software into slaves of their users. the exact opposite is true, and the developers who do the work have the freedom to force whatever they want on the users of their software. among the freedoms open source gives to all users the relevant one is actually the right to fork: if you don't like a policy decision of an open source project, you can always create a fork that works exactly the way you envision it.
  • vertical align anything with just 3 lines of css
  • how to increase your luck surface area, when you pour energy into a passion, you develop an expertise and an expertise of any kind is valuable. but quite often that value can actually be magnified by the number of people who are made aware of it. the reason is that when people become aware of your expertise, some percentage of them will take action to capture that value, but quite often it will be in a way you would never have predicted
  • speaking.io, thoughts on public speaking
  • the 7 habits of highly overrated people, if you're thinking of doing these things, don't. if you're currently doing them, stop
  • how t-shirts are designed in 2011, hilarious
  • orson welles on cold reading
  • camera, a post-punk art-rock from chicago

summing up 37

a more or less weekly digest of juicy stuff

summing up 36

a more or less weekly digest of juicy stuff

summing up 35

a more or less weekly digest of juicy stuff

summing up 34

a more or less weekly digest of juicy stuff

  • the future of programming by bret victor, highly recommended
  • how software companies die, the environment that nurtures creative programmers kills management and marketing types - and vice versa. highly recommended
  • citation needed, whatever programmers think about themselves and these towering logic-engines we've erected, we're a lot more superstitious than we realize. we tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there's no other word for that but "mythology". we just keep mouthing platitudes and pretending the way things are is nobody's fault, and the more history you learn and the more you look at the sad state of modern computing the more pathetic and irresponsible that sounds
  • the tyranny of choice, although some choice is undoubtedly better than none, more is not always better than less (pdf)
  • circle of competence, if you want to improve your odds of success in life and business then define the perimeter of your circle of competence, and operate inside
  • "we must be very careful when we give advice to younger people; sometimes they follow it", edsger w. dijkstra
  • an engineer's guide to stock options
  • the flow fallacy, flow leads to crappy products. because the goal of commercial software development isn't to create code you love - it's to create products your customers will love
  • the myth of multitasking, there is time enough for everything in the course of the day, if you do but one thing at once, but there is not time enough in the year, if you will do two things at a time
  • words, at its heart, web design should be about words. words don't come after the design is done. words are the beginning, the core, the focus. start with words
  • are you with the right partner? the key to succeeding in a relationship is not finding the right person; it's learning to love the person you found
  • why do we value gold?
  • primer on film and digital capture, a technical talk about quality & resolution of film vs. digital capture
  • s'lebn is a freid! the mother of all corporate image films (german)
  • liquid laughter lounge quartet, ultra lounge, country doom, afterhour slowrock, dark and heavy. great sound & band

summing up 33

a more or less weekly digest of juicy stuff

summing up 32

a more or less weekly digest of juicy stuff

summing up 31

a more or less weekly digest of juicy stuff

  • the mature optimization handbook (pdf), highly recommended
  • the craziest date ever, jeff and i traveled to eight countries in 21 days without changing clothes. it sure beat meeting for coffee. great read
  • you might be asking the wrong questions, going around asking for "feedback" won't get you anything useful. here's how to dig deeper and find real answers
  • "ultimately it comes down to taste. it comes down to trying to expose yourself to the best things that humans have done and then try to bring those things in to what you're doing. i mean picasso had a saying, he said good artists copy great artists steal. and we have always been shameless about stealing great ideas and i think part of what made the macintosh great was that the people working on it were musicians and poets and artists and zoologists and historians who also happened to be the best computer scientists in the world", steve jobs
  • m.v.p., not m.v.p.o.s., in theory it's a great idea - inputting the minimum amount of time necessary to validate a product/feature - but like many great ideas it seems to be frequently bastardized in practice and reinterpreted as what i like to call m.v.p.o.s., or minimum viable piece of shit
  • over my dead body, high quality is not something you ask your boss for; you just do it, and you don't even get into the discussion of whether it pays off or not. either the company is committed to high-quality code that steadily improves, or it is doomed. crappy code will fail your customers, your business, your shareholders, your investors and it will fail you
  • a conversation with alan kay, a lot of the problems that computing has had in the last 25 years come from systems where the designers were trying to fix some short-term thing and didn't think about whether the idea would scale if it were adopted. there should be a half-life on software so that old software just melts away over 10 or 15 years
  • never use a warning when you mean undo
  • google doesn't understand people, they have some of the best tech people in the world, and they are wizards with data and the infrastructure it requires. but when it comes to humans they are amateurs. and their products prove it
  • bootstrap bankruptcy, when it comes to custom interfaces, bootstrap isn't a box of bricks, panels, walls and doors with which you can build anything you want. bootstrap is more like a mortgage
  • rfc 3514, firewalls, packet filters, intrusion detection systems, and the like often have difficulty distinguishing between packets that have malicious intent and those that are merely unusual. the problem is that making such determinations is hard. to solve this problem, we define a security flag, known as the "evil" bit, in the ipv4 header. benign packets have this bit set to 0; those that are used for an attack will have the bit set to 1 (a sketch of what honouring this might look like follows this list)
  • how i learned to work a room, and you can too, if you see a pair of people, the chances are that they arrived together and know they should be mingling. or else they've just met and are, in the back of their minds, worried that they're going to end up talking to this one person all night. either way, they're relieved to see you
  • stop freelancing, remember, you're not a no-strings-attached temporary employee, you're an expert in your field whom clients come to because they want the best product possible
  • and i will show you everything, beautiful comic strip
  • numbers stations
  • terminal cornucopia, can common items sold in airports after the security screening be used to build lethal weapons? yep
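A small aside on the rfc 3514 item above: since the "evil" bit is, in the april-fools RFC's telling, the otherwise unused high-order bit of the IPv4 flags/fragment-offset word, a firewall that took it seriously would only need a one-line check. The sketch below is my own illustration, not code from the RFC; the helper names and the 0x8000 mask interpretation are assumptions.

```python
# A minimal sketch of honouring RFC 3514 (the april-fools "evil bit"), assuming
# the bit is the reserved high-order bit (mask 0x8000) of the 16-bit
# flags/fragment-offset word at bytes 6-7 of the IPv4 header. Helper names are
# illustrative, not taken from the RFC.
import struct

EVIL_BIT = 0x8000  # high-order bit of the flags/fragment-offset word

def set_evil_bit(ipv4_header: bytes, evil: bool = True) -> bytes:
    """Return a copy of a raw IPv4 header with the evil bit set or cleared."""
    (word,) = struct.unpack_from("!H", ipv4_header, 6)
    word = word | EVIL_BIT if evil else word & ~EVIL_BIT
    return ipv4_header[:6] + struct.pack("!H", word) + ipv4_header[8:]

def is_evil(ipv4_header: bytes) -> bool:
    """Benign packets carry a 0 here; attack packets are required to set it to 1."""
    (word,) = struct.unpack_from("!H", ipv4_header, 6)
    return bool(word & EVIL_BIT)
```

A firewall in on the joke could then drop any packet for which is_evil() returns true, which is exactly the point of the satire: telling malicious traffic from merely unusual traffic is hard, and no single flag will do it for you.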

summing up 30

a more or less weekly digest of juicy stuff

summing up 29

a more or less weekly digest of juicy stuff

summing up 28

a more or less weekly digest of juicy stuff

summing up 27

a more or less weekly digest of juicy stuff

summing up 26

a more or less weekly digest of juicy stuff

summing up 25

a more or less weekly digest of juicy stuff

summing up 24

a more or less weekly digest of juicy stuff

summing up 23

a more or less weekly digest of juicy stuff

summing up 22

a more or less weekly digest of juicy stuff

summing up 21

a more or less weekly digest of juicy stuff

summing up 20

a more or less weekly digest of juicy stuff

summing up 19

a more or less weekly digest of juicy stuff

summing up 18

a more or less weekly digest of juicy stuff

summing up 17

a more or less weekly digest of juicy stuff

summing up 16

a more or less weekly digest of juicy stuff

summing up 15

a more or less weekly digest of juicy stuff

summing up 14

a more or less weekly digest of juicy stuff

summing up 13

a more or less weekly digest of juicy stuff

summing up 12

a more or less weekly digest of juicy stuff

summing up 11

a more or less weekly digest of juicy stuff

summing up 10

a more or less weekly digest of juicy stuff

summing up 9

a more or less weekly digest of juicy stuff

summing up 8

a more or less weekly digest of juicy stuff

summing up 7

a more or less weekly digest of juicy stuff

summing up 6

a more or less weekly digest of juicy stuff

summing up 5

a more or less weekly digest of juicy stuff

summing up 4

a more or less weekly digest of juicy stuff

summing up 3

a more or less weekly digest of juicy stuff

summing up 2

a more or less weekly digest of juicy stuff

summing up 1

a more or less weekly digest of juicy stuff


Want more ideas like this in your inbox?

My letters are about long-lasting, sustainable change that fundamentally amplifies our human capabilities and raises our collective intelligence across generations. Would love to have you on board.