Today, we have been building and investing so much of our time into the digital
world that we have forgotten to take a step back and look at the larger
picture. Not only do we waste other people's time by making them addicted to
this device world, we have also created a lot of waste in the real world. At
the same time we're drowning in piles and piles of information because we never
took the time to architect a system that enables us to navigate through them.
We're trapped in these rectangular screens and we have often forgotten how to
interact with the real world, with real humans. We have been building and
hustling - but hey, we can also slow down and rethink how we want to dwell in
both the physical world and the digital world.
At some point in the future we will leave this world and what we'll leave
behind are spaces and lifestyles that we've shaped for our grandchildren. So I
would like to invite you to think about what we want to leave behind, as we
continue to build both digitally and physically. Can we be more intentional so
that we shape and leave behind a more humane environment?
What we use a computer for on a daily basis is only a small part of what a
computer could offer us. Instead, most of our conversation revolves around hypes
and trending technological topics. What we desperately need is to take a step
back and figure out ways of thinking to tackle complex problems in an ever more
complex world.
Fast learns, slow remembers. Fast proposes, slow disposes. Fast is
discontinuous, slow is continuous. Fast and small instructs slow and big by
accrued innovation and by occasional revolution. Slow and big controls small
and fast by constraint and constancy. Fast gets all our attention, slow has
all the power.
All durable dynamic systems have this sort of structure. It is what makes them
adaptable and robust.
The total effect of the pace layers is that they provide a many-leveled
corrective, stabilizing feedback throughout the system. It is precisely in the
apparent contradictions between the pace layers that civilization finds its
surest health.
We're too often thinking about the superficial, the fast, the shallow. And that
is not necessarily a bad thing - but it easily becomes one if it's the only
thing we do. This concept is one of those that, once your brain has been
exposed to it, you start seeing it everywhere.
I find it hard to communicate with a lot of technologists anymore. It’s like
trying to explain literature to someone who has never read a book. You’re asked
“So basically a book is just words someone said written down?” And you say no,
it’s more than that. But how is it more than that?
I am going to make the argument that the predominant form of the social web —
that amalgam of blogging, Twitter, Facebook, forums, Reddit, Instagram — is an
impoverished model for learning and research and that our survival as a species
depends on us getting past the sweet, salty fat of “the web as conversation”
and on to something more timeless, integrative, iterative, something less
personal and less self-assertive, something more solitary yet more connected. I
don’t expect to convince many of you, but I’ll take what I can get.
We can imagine a world that is so much better than this one. And more
importantly we can build it. But in order to do that we have to think bigger
than the next hype, the next buzzword and the next press release. We have to
seriously interrogate the assumptions that are hidden in plain sight.
Wow. After sharing and discussing close to a thousand (964 to be precise)
articles, talks, essays, videos and links, my summing up column has reached a
milestone. I originally started this series a little over five years ago to
keep track of
what I was reading. Little did I know then how much this effort helped me build
up a large part of my expertise, methods, strategies and way of thinking. I'm
also quite relieved that in all that time, nobody asked me about the
To celebrate this somewhat special occasion, I want to deviate a bit from the
usual format and highlight some key figures and favourite articles which
impress me to this day.
Doug Engelbart, one of the fathers of personal computing, is definitely one
of my personal heroes. He dedicated his life to the pursuit of developing
technology to augment human intellect. He didn't see this as a technological
problem, though, but as a human problem, with technology falling out as part of
a solution. His methods and strategies are brilliant and I rely heavily on them
when working with clients.
When thinking about the future, you can't do it better than Alan Kay. Perhaps
he is one of the best known computing visionaries still around today and his
reasoning is spot on when it comes to invention, innovation and strategies for
succeeding in a digital world.
Neil Postman is one of my favourite media critics and funnily enough was
never categorically against technology. But he warned us vigorously to be
suspicious of it. His predictions, cautions and propositions on how we
become used by technology rather than make use of technology have been
spot on so far – unfortunately.
There's often a thin line between madness and genius and Ted Nelson walks
that line confidently. The original inventor of hypertext, internet pioneer and
visionary saw the need for interconnected documents decades before the World
Wide Web was born. And even now his vision is far from being complete – luckily
the size of his ambition hasn't changed.
Bret Victor is one of the thinkers I respect most in our industry. His talks
and essays have been highly influential to me. In the spirit of Doug Engelbart,
Bret thinks deeply about how to create a new dynamic medium that shapes
computing for the 21st century and allows us to see, understand and solve
complex problems.
It's rare that I don't fall in love with a talk by Maciej Cegłowski, who talks
mostly about the excesses and impacts of technology on society. His style of
storytelling, along with his ingenious insights, is just amazing.
Audrey Watters is mostly known for her prolific work on education
technology issues and tech in general. The witty way she interrogates the
stories about technology we tell ourselves – or have been told to us – is
full of deep insight.
Thanks a lot for your continued support and feedback over the last years, it is
heavily appreciated. You're very welcome to subscribe to this series and
get it directly in your inbox along with some cool stuff that you won't find
anywhere else on the site.
Lastly, if you have any feedback, critique, tips, ideas, comments or free bags
of money, I'd be very glad to hear from you. Thank you.
We have the opportunity to change our thinking and basic assumptions about the
development of computing technologies. The emphasis on enhancing security and
protecting turf often impedes our ability to solve problems collectively. If we
can re-examine those assumptions and chart a different course, we can harness
all the wonderful capability of the systems that we have today.
People often ask me how I would improve the current systems, but my response is
that we first need to look at our underlying paradigms—because we need to
co-evolve the new systems, and that requires new ways of thinking. It’s not
just a matter of “doing things differently,” but thinking differently about how
to approach the complexity of problem-solving today.
in a world where we've grown multiple orders of magnitude in our computing
capacity, where we spend millions of dollars on newer, faster tools and
technology, we put little emphasis on how we can augment human thinking and
problem solving. and as doug says, it is not about thinking differently about
these problems, it is thinking differently about our ability to solve these
problems.
Suppose a person tells us that a particular photo is of people playing
Frisbee in the park. We naturally assume that they can answer questions
like “what is the shape of a Frisbee?”, “roughly how far can a person throw a
Frisbee?”, “can a person eat a Frisbee?”, “roughly how many people play
Frisbee at once?”, “can a 3 month old person play Frisbee?”, “is today’s
weather suitable for playing Frisbee?”. Today’s image labelling systems that
routinely give correct labels, like “people playing Frisbee in a park” to
online photos, have no chance of answering those questions. Besides the fact
that all they can do is label more images and cannot answer questions at
all, they have no idea what a person is, that parks are usually outside, that
people have ages, that weather is anything more than how it makes a photo
look, etc., etc.
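The gap described here can be made concrete with a toy sketch. The names below are entirely hypothetical and stand in for no real system; the point is only that a labelling model is, functionally, a mapping from an image to a caption string, with nothing behind it to consult when a question arrives.

```python
# Toy illustration (hypothetical, not any real library or model):
# an image "labeller" is just a function from pixels to a caption string.
# It carries none of the background knowledge a person who produced
# that caption would have.

def label_image(image_pixels):
    # Stand-in for a trained classifier: it returns a label, nothing else.
    return "people playing Frisbee in a park"

def answer_question(image_pixels, question):
    # There is no model of people, parks, ages, or weather to consult,
    # so every follow-up question is simply unanswerable.
    raise NotImplementedError("a label-only model cannot answer questions")

caption = label_image(None)
print(caption)  # the system's entire "understanding" of the scene
```

Any of the Frisbee questions above would hit the `NotImplementedError`: the label is the end of the system's competence, not the surface of a deeper one.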
Here is what goes wrong. People hear that some robot or some AI system has
performed some task. They then take the generalization from that performance to
a general competence that a person performing that same task could be expected
to have. And they apply that generalization to the robot or AI system.
Today’s robots and AI systems are incredibly narrow in what they can do. Human
style generalizations just do not apply. People who do make these
generalizations get things very, very wrong.
we are surrounded by hysteria about artificial intelligence, mistaken
extrapolations, limited imagination and many more mistakes that distract us
from thinking productively about the future. whether or not ai succeeds in the
long term, it will nevertheless be developed and used with uncompromising
efforts – regardless of any consequences.
Unfortunately, many in the AI community greatly underestimate the depth of
interface design, often regarding it as a simple problem, mostly about making
things pretty or easy-to-use. In this view, interface design is a problem to be
handed off to others, while the hard work is to train some machine learning
model.
This view is incorrect. At its deepest, interface design means developing the
fundamental primitives human beings think and create with. This is a problem
whose intellectual genesis goes back to the inventors of the alphabet, of
cartography, and of musical notation, as well as modern giants such as
Descartes, Playfair, Feynman, Engelbart, and Kay. It is one of the hardest,
most important and most fundamental problems humanity grapples with.
the speed, performance or productivity of computers are mostly red herrings.
the main problem is how we can leverage the computer as a tool. in other
words, how can we use the computer to augment ourselves to do things that were
previously impossible?
There has been so much excitement and sense of discovery around the digital
revolution that we’re at a moment where we overestimate what can be done with
AI, certainly as it stands at the moment.
One of the most essential elements of human wisdom at its best is humility,
knowing that you don’t know everything. There’s a sense in which we haven’t
learned how to build humility into our interactions with our devices. The
computer doesn’t know what it doesn’t know, and it's willing to make
projections when it hasn’t been provided with everything that would be relevant
to those projections.
after all, computers are still tools we should take advantage of, to augment
ourselves to do things that were previously impossible, to help us make our
lives better. but all too often it seems to me that everyone is used by
computers, for purposes that seem to know no boundaries.
Drawing inspiration from architectural practice, its successes and failures, I
question the role of design in a world being eaten by software. When the
prevailing technocratic culture permits the creation of products that undermine
and exploit users, who will protect citizens within the digital spaces they now
inhabit?
We need to take it upon ourselves to be more critical and introspective. This
shouldn’t be too hard. After all, design is all about questioning what already
exists and asking how it could be improved for the better.
Perhaps we need a new set of motivational posters. Rather than move fast and
break things, perhaps slow down and ask more questions.
we need a more thoughtful, questioning approach to digital. how does a single
technology, a tool or a digital channel help us improve? the answer is out
there somewhere, but we have to stop ourselves more often to ask "why?".
The grand struggle of creativity can often be about making yourself stupid
again. It's like turning yourself into a child who views the world with
wonderment and excitement.
Creating something meaningful isn't easy, it's hard. But that's why we should
do it. If you ever find yourself being comfortable with what you're making or
creating, then you need to push yourself. Push yourself out of your comfort
zone and push yourself to the point of failure and then beyond.
When I was a kid, we would go skiing a lot. At the end of the day all the
skiers were coming to the lodge and I used to think it was the bad skiers
that were covered in snow and the good skiers that were all clean,
with no snow on them. But it turns out the exact opposite is true: it was
the good skiers that were covered in snow from pushing themselves, pushing
themselves beyond the limits and into their breaking points, getting better
and then pushing themselves harder. Creativity is the same thing. It's like
you push hard, you push until you're scared and afraid, you push until you
break, you push until you fall and then you get up and you do it again.
Creativity is really a journey. It's a wonderful journey in that you
start out as one person and end as another.
it's always a lot harder to create something meaningful than to just create
something. but that's exactly the reason why you should do it. a great talk by
one of my favourite game designers.
So much of how we build websites and software comes down to how we think. The
churn of tools, methods, and abstractions also signifies the replacement of
ideology. A person must usually think in a way similar to the people who
created the tools to successfully use them. It’s not as simple as putting down
a screwdriver and picking up a wrench. A person needs to revise their whole
frame of thinking; they must change their mind.
The new methods were invented to manage a level of complexity that is
completely foreign to me and my work. It was easy to back away from most of
this new stuff when I realized I have alternate ways of managing complexity.
Instead of changing my tools or workflow, I change my design. It’s like
designing a house so it’s easy to build, instead of setting up cranes typically
used for skyscrapers. Beyond that, fancy implementation has never moved the
needle much for my clients.
So, I thought it would be useful to remind everyone that the easiest and cheapest
strategy for dealing with complexity is not to invent something to manage it,
but to avoid the complexity altogether with a more clever plan.
a fancy implementation has never moved the needle much for my clients either.
what has though is to build relationships and let technology support this
process. we are an increasingly digital society, yes, but that doesn't mean we
have to let technology take over.
Human nature, for better or worse, doesn’t change much from millennia to
millennia. If you want to see the strengths that are unique and universal to
all humans, don’t look at the world-famous award-winners — look at children.
Children, even at a young age, are already proficient at: intuition, analogy,
creativity, empathy, social skills. Some may scoff at these for being “soft
skills”, but the fact that we can make an AI that plays chess but cannot hold
a normal five-minute conversation is proof that these skills only seem “soft”
to us because evolution’s already put in the 3.5 billion years of hard work.
So, if there’s just one idea you take away from this entire essay, let it be
Mother Nature’s most under-appreciated trick: symbiosis.
Symbiosis shows us you can have fruitful collaborations even if you have
different skills, or different goals, or are even different species.
Symbiosis shows us that the world often isn’t zero-sum — it doesn’t have to
be humans versus AI, or humans versus centaurs, or humans versus other
humans. Symbiosis is two individuals succeeding together not despite, but
because of, their differences. Symbiosis is the “+”.
zero sum games most often win our attention, but the vast majority of our
interactions are positive sum: when you share, when you buy, when you learn,
when you talk. similarly with technology and computers: we can only improve if
we use technology to augment ourselves in order to allow for new,
previously-impossible ways of thinking, of living, of being.
At the same time as we so happily create everything from artificial
intelligences to putting "smart" into absolutely bloody everything, there are
still so many actual, real problems unsolved. I do not need a single more
problem solved; every one of my actual problems has been solved. There is not
a single thing I could even dream of wanting that hasn't already been created.
Yes, I can upgrade. I can buy a slightly cooler car. I
can buy slightly better clothes. I can buy slightly faster phones. But frankly
I am just consuming myself into the grave, because I have an empty life.
In all this innovation bullshit, what has happened, is that rather than look at
true, meaningful change, we have turned innovation into one more bullshit
phrase, into one more management buzzword.
Do we actually have discussions about whether we're doing meaningful work or
just work that happens to be paid at the moment? Regardless of what company we
work in, we need to look at the products we create, the things we create, and
say "yes, this can matter". But it cannot just matter to me; it needs to
matter to someone else as well.
We have blind spots, we all have them. We all have our biases. It is perhaps
acceptable to have a bias as an individual. But when an entire community or an
entire nation has a bias, that shows we have not gone far enough.
we seem to spend so much talent, research, time, energy and money to create
things that nobody needs, just because we feel we have to innovate somehow. and
the problem isn't how to innovate or the innovation per se, but how to get
society to adopt the good ideas that already exist.
Algorithms are essentially thoughtless. They model certain decision flows, but
once you run them, no more thought occurs. To call a person “thoughtless” is
usually considered a slight, or an outright insult; and yet, we unleash so many
literally thoughtless processes on our users, on our lives, on ourselves.
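A minimal sketch of such a thoughtless decision flow (a contrived example of mine, not from the quoted text): a feed-ranking rule encodes one decision at design time and then applies it forever, with no further thought occurring.

```python
# Contrived sketch of a "thoughtless" process: the ranking rule was
# decided once, at design time, and now runs unchanged on every item.

def rank_posts(posts):
    # Sort purely by click count; the rule never asks whether clicks
    # are the right thing to optimize for.
    return sorted(posts, key=lambda p: p["clicks"], reverse=True)

feed = rank_posts([
    {"title": "thoughtful essay", "clicks": 12},
    {"title": "outrage bait", "clicks": 9000},
])
print(feed[0]["title"])  # -> outrage bait
```

The code will keep making that choice, identically, a billion times a day; whatever thought went into it happened once and never again.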
this hits home on so many levels. we throw technology at people, hoping
something will stick. instead, we should use the computer and algorithms to
augment ourselves to do things that were previously impossible, to help us make
our lives better. that is the sweet spot of our technology.
The time you spend is not your own. You are, as a class of human beings,
responsible for more pure raw time, broken into more units, than almost anyone
else. You are about to spend whole decades, whole centuries, of cumulative
moments, of other people’s time. People using your systems, playing with your
toys, fiddling with your abstractions. And I want you to ask yourself when you
make things, when you prototype interactions, am I thinking about my own clock,
or the user’s? Am I going to help someone make order in his or her life?
If we are going to ask people, in the form of our products, in the form of the
things we make, to spend their heartbeats—if we are going to ask them to spend
their heartbeats on us, on our ideas, how can we be sure, far more sure than we
are now, that they spend those heartbeats wisely?
our technological capability changes much faster than our culture. we first
create our technologies and then they change our society and culture. therefore
we have a huge responsibility to look at things from the other person's point
of view – and to do what's best for them. in other words, be considerate.
Lamenting about the tech industry’s ills when self-identifying as a
technologist is a precarious construction. I care so deeply about using the
personal computer for liberation & augmentation. I’m so, so burned out by 95%
of the work happening in the tech industry. Silicon Valley mythologizes
newness, without stopping to ask “why?”. I’m still in love with technology, but
increasingly with nuance into that which is for us, and that which productizes
us.
Perhaps when Bush prophesied lightning-quick knowledge retrieval, he didn’t
intend for that knowledge to be footnoted with Outbrain adverts. Licklider’s
man-computer symbiosis would have been frustrated had it been crop-dusted
with notifications. Ted Nelson imagined many wonderfully weird futures for
the personal computer, but I don’t think gamifying meditation apps was one of
them.
every day we make a big fuss about a seemingly new hype (ai, blockchain, vr,
iot, cloud, ... what next?). as neil postman and others have cautioned us,
technologies tend to become mythic. that is, perceived as if they were
god-given, part of the natural order of things, gifts of nature, and not as
artifacts produced in a specific political and historical context. and by that
we completely fail to recognize how we can use technology to augment ourselves
to do things that were previously impossible, to help us make our lives better.
Radio brought music into hospitals and nursing homes, it eased the profound
isolation of rural life, it let people hear directly from their elected
representatives. It brought laughter and entertainment into every parlor, saved
lives at sea, gave people weather forecasts for the first time.
But radio waves are just oscillating electromagnetic fields. They really don't
care how we use them. All they want is to go places at the speed of light.
It is hard to accept that good people, working on technology that benefits so
many, with nothing but good intentions, could end up building a powerful tool
for the wicked.
But we can't afford to re-learn this lesson every time.
Technology interacts with human nature in complicated ways, and part of human
nature is to seek power over others, and manipulate them. Technology
concentrates power.
We have to assume the new technologies we invent will concentrate power, too.
There is always a gap between mass adoption and the first skillful political
use of a medium. With the Internet, we are crossing that gap right now.
only those who know nothing about technological history believe that technology
is entirely neutral. it always has a bias towards being used in certain ways
and not others. a great comparison to what we're facing now with the internet.
In psychology, the term “insight” is used to describe a recognition of one’s
own condition, such as when a person with mental illness is aware of their
illness. More broadly, it describes the ability to recognize patterns in one’s
own behavior. It’s an example of metacognition, or thinking about one’s own
thinking, and it’s something most humans are capable of but animals are not.
And I believe the best test of whether an AI is really engaging in human-level
cognition would be for it to demonstrate insight of this kind.
I used to find it odd that these hypothetical AIs were supposed to be smart
enough to solve problems that no human could, yet they were incapable of doing
something most every adult has done: taking a step back and asking whether
their current course of action is really a good idea. Then I realized that we
are already surrounded by machines that demonstrate a complete lack of insight,
we just call them corporations. Corporations don’t operate autonomously, of
course, and the humans in charge of them are presumably capable of insight, but
capitalism doesn’t reward them for using it. On the contrary, capitalism
actively erodes this capacity in people by demanding that they replace their
own judgment of what “good” means with “whatever the market decides.”
the problem is this: if you're never exposed to new ideas and contexts, if you
grow up only being shown one way of thinking about businesses & technology and
being told that there are no other ways to think about this, you grow up
thinking you know what you're doing.
When people try to explain the wastefulness of today's computing, they commonly
offer something I call the "tradeoff hypothesis". According to this hypothesis,
the wastefulness of software would be compensated by flexibility, reliability,
maintainability, and perhaps most importantly, cheap programming work.
I used to believe in the tradeoff hypothesis as well. However, during recent
years, I have become increasingly convinced that the portion of true tradeoff
is quite marginal. An ever-increasing portion of the waste comes from
abstraction clutter that serves no purpose in final runtime code. Most of this
clutter could be eliminated with more thoughtful tools and methods without any
tradeoff.
we too often seem to adjust to the limitations of technology, instead of
creating solutions for a problem with the help of technology.
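One way to picture abstraction clutter (a deliberately contrived sketch of mine, not taken from the article): the same computation wrapped in layers of indirection that serve no purpose at runtime, next to the direct form.

```python
# Contrived example of abstraction clutter: three layers of indirection
# that add nothing to the final runtime behaviour...

class AdderFactory:
    def create(self):
        return Adder()

class Adder:
    def add(self, a, b):
        return Operation(a, b).execute()

class Operation:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def execute(self):
        return self.a + self.b

# ...versus the direct form, which is all the runtime ever needed:
def add(a, b):
    return a + b

assert AdderFactory().create().add(2, 3) == add(2, 3) == 5
```

Both compute the same sum; the clutter costs allocation, dispatch, and reader attention while buying no flexibility the direct form lacks.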