summing up 99

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Measuring Collective IQ, by Doug Engelbart

We have the opportunity to change our thinking and basic assumptions about the development of computing technologies. The emphasis on enhancing security and protecting turf often impedes our ability to solve problems collectively. If we can re-examine those assumptions and chart a different course, we can harness all the wonderful capability of the systems that we have today.

People often ask me how I would improve the current systems, but my response is that we first need to look at our underlying paradigms—because we need to co-evolve the new systems, and that requires new ways of thinking. It’s not just a matter of “doing things differently,” but thinking differently about how to approach the complexity of problem-solving today.

in a world where we've grown multiple orders of magnitude in our computing capacity, where we spend millions of dollars on newer, faster tools and technology, we put little emphasis on how we can augment human thinking and problem solving. and as doug says, it is not about thinking differently about these problems, it is about thinking differently about our ability to solve these problems.

The Seven Deadly Sins of Predicting the Future of AI, by Rodney Brooks

Suppose a person tells us that a particular photo is of people playing Frisbee in the park, then we naturally assume that they can answer questions like “what is the shape of a Frisbee?”, “roughly how far can a person throw a Frisbee?”, “can a person eat a Frisbee?”, “roughly how many people play Frisbee at once?”, “can a 3 month old person play Frisbee?”, “is today’s weather suitable for playing Frisbee?”. Today’s image labelling systems that routinely give correct labels, like “people playing Frisbee in a park” to online photos, have no chance of answering those questions. Besides the fact that all they can do is label more images and can not answer questions at all, they have no idea what a person is, that parks are usually outside, that people have ages, that weather is anything more than how it makes a photo look, etc., etc.

Here is what goes wrong. People hear that some robot or some AI system has performed some task. They then take the generalization from that performance to a general competence that a person performing that same task could be expected to have. And they apply that generalization to the robot or AI system.

Today’s robots and AI systems are incredibly narrow in what they can do. Human style generalizations just do not apply. People who do make these generalizations get things very, very wrong.

we are surrounded by hysteria about artificial intelligence, mistaken extrapolations, limited imagination and many more mistakes that distract us from thinking productively about the future. whether or not ai succeeds in the long term, it will nevertheless be developed and used with uncompromising effort – regardless of any consequences.

Using Artificial Intelligence to Augment Human Intelligence, by Shan Carter and Michael Nielsen

Unfortunately, many in the AI community greatly underestimate the depth of interface design, often regarding it as a simple problem, mostly about making things pretty or easy-to-use. In this view, interface design is a problem to be handed off to others, while the hard work is to train some machine learning system.

This view is incorrect. At its deepest, interface design means developing the fundamental primitives human beings think and create with. This is a problem whose intellectual genesis goes back to the inventors of the alphabet, of cartography, and of musical notation, as well as modern giants such as Descartes, Playfair, Feynman, Engelbart, and Kay. It is one of the hardest, most important and most fundamental problems humanity grapples with.

the speed, performance or productivity of computers is mostly a red herring. the main problem is how we can leverage the computer as a tool. in other words, how can we use the computer to augment ourselves to do things that were previously impossible?

summing up 98


How To Be a Systems Thinker, by Mary Catherine Bateson

There has been so much excitement and sense of discovery around the digital revolution that we’re at a moment where we overestimate what can be done with AI, certainly as it stands at the moment.

One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it's willing to make projections when it hasn’t been provided with everything that would be relevant to those projections.

after all, computers are still tools we should take advantage of, to augment ourselves to do things that were previously impossible, to help us make our lives better. but all too often it seems to me that it is the computers using us, for purposes that seem to know no boundaries.

Fantasies of the Future: Design in a World Being Eaten by Software, by Paul Robert Lloyd

Drawing inspiration from architectural practice, its successes and failures, I question the role of design in a world being eaten by software. When the prevailing technocratic culture permits the creation of products that undermine and exploit users, who will protect citizens within the digital spaces they now inhabit?

We need to take it upon ourselves to be more critical and introspective. This shouldn’t be too hard. After all, design is all about questioning what already exists and asking how it could be improved for the better.

Perhaps we need a new set of motivational posters. Rather than move fast and break things, perhaps slow down and ask more questions.

we need a more thoughtful, questioning approach to digital. how does a single technology, tool or digital channel help us improve? the answer is out there somewhere, but we have to stop ourselves more often to ask "why?".

Storytime, by Ron Gilbert

The grand struggle of creativity can often be about making yourself stupid again. It's like turning yourself into a child who views the world with wonderment and excitement.

Creating something meaningful isn't easy, it's hard. But that's why we should do it. If you ever find yourself being comfortable with what you're making or creating, then you need to push yourself. Push yourself out of your comfort zone and push yourself to the point of failure and then beyond.

When I was a kid, we would go skiing a lot. At the end of the day all the skiers were coming to the lodge and I used to think it was the bad skiers that were covered in snow and it was the good skiers that were all clean, with no snow on them. But it turns out the exact opposite is true: it was the good skiers that were covered in snow from pushing themselves, pushing themselves beyond their limits and into their breaking points, getting better and then pushing themselves harder. Creativity is the same thing. It's like you push hard, you push until you're scared and afraid, you push until you break, you push until you fall and then you get up and you do it again. Creativity is really a journey. It's a wonderful journey where you start out as one person and end as another.

it's always a lot harder to create something meaningful than to just create something. but that's exactly the reason why you should do it. a great talk by one of my favourite game designers.

summing up 97


Everything Easy is Hard Again, by Frank Chimero

So much of how we build websites and software comes down to how we think. The churn of tools, methods, and abstractions also signify the replacement of ideology. A person must usually think in a way similar to the people who created the tools to successfully use them. It’s not as simple as putting down a screwdriver and picking up a wrench. A person needs to revise their whole frame of thinking; they must change their mind.

The new methods were invented to manage a level of complexity that is completely foreign to me and my work. It was easy to back away from most of this new stuff when I realized I have alternate ways of managing complexity. Instead of changing my tools or workflow, I change my design. It’s like designing a house so it’s easy to build, instead of setting up cranes typically used for skyscrapers. Beyond that, fancy implementation has never moved the needle much for my clients.

So, I thought it would be useful to remind everyone that the easiest and cheapest strategy for dealing with complexity is not to invent something to manage it, but to avoid the complexity altogether with a more clever plan.

a fancy implementation has never moved the needle much for my clients either. what has, though, is building relationships and letting technology support that process. we are an increasingly digital society, yes, but that doesn't mean we have to let technology take over.

How To Become A Centaur, by Nicky Case

Human nature, for better or worse, doesn’t change much from millennia to millennia. If you want to see the strengths that are unique and universal to all humans, don’t look at the world-famous award-winners — look at children. Children, even at a young age, are already proficient at: intuition, analogy, creativity, empathy, social skills. Some may scoff at these for being “soft skills”, but the fact that we can make an AI that plays chess but not hold a normal five-minute conversation, is proof that these skills only seem “soft” to us because evolution’s already put in the 3.5 billion years of hard work for us.

So, if there’s just one idea you take away from this entire essay, let it be Mother Nature’s most under-appreciated trick: symbiosis.

Symbiosis shows us you can have fruitful collaborations even if you have different skills, or different goals, or are even different species. Symbiosis shows us that the world often isn’t zero-sum — it doesn’t have to be humans versus AI, or humans versus centaurs, or humans versus other humans. Symbiosis is two individuals succeeding together not despite, but because of, their differences. Symbiosis is the “+”.

zero sum games most often win our attention, but the vast majority of our interactions are positive sum: when you share, when you buy, when you learn, when you talk. similarly with technology and computers: we can only improve if we use technology to augment ourselves in order to allow for new, previously-impossible ways of thinking, of living, of being.

How To Save Innovation From Itself, by Alf Rehn

At the same time as we so happily create everything from artificial intelligences to putting "smart" into absolutely bloody everything, there are still so many actual, real problems unsolved. I do not need a single more problem solved, every one of my actual problems has been solved. There is not a single thing I could even dream of wanting that hasn't been already created. Yes, I can upgrade. I can buy a slightly cooler car. I can buy slightly better clothes. I can buy slightly faster phones. But frankly I am just consuming myself into the grave, because I have an empty life.

In all this innovation bullshit, what has happened is that rather than look at true, meaningful change, we have turned innovation into one more bullshit phrase, into one more management buzzword. Do we actually have discussions about whether we're doing meaningful work or just work that happens to be paid at the moment? Regardless of what company we work in, we need to look at the products we create, the things we create, and say "yes, this can matter". But it can not just matter to me, it needs to matter to someone else as well.

We have blind spots, we all have them. We all have our biases. It is acceptable perchance to have a bias as an individual. But when the entire community or an entire nation has a bias, this says we have not gone far enough.

we seem to spend so much talent, research, time, energy and money to create things that nobody needs, just because we feel we have to innovate somehow. and the problem isn't how to innovate or the innovation per se, but how to get society to adopt the good ideas that already exist.

summing up 96


Inadvertent Algorithmic Cruelty, by Eric Meyer

Algorithms are essentially thoughtless. They model certain decision flows, but once you run them, no more thought occurs. To call a person “thoughtless” is usually considered a slight, or an outright insult; and yet, we unleash so many literally thoughtless processes on our users, on our lives, on ourselves.

this hits home on so many levels. we throw technology at people, hoping something will stick. instead, we should use the computer and algorithms to augment ourselves to do things that were previously impossible, to help us make our lives better. that is the sweet spot of our technology.

10 Timeframes, by Paul Ford

The time you spend is not your own. You are, as a class of human beings, responsible for more pure raw time, broken into more units, than almost anyone else. You are about to spend whole decades, whole centuries, of cumulative moments, of other people’s time. People using your systems, playing with your toys, fiddling with your abstractions. And I want you to ask yourself when you make things, when you prototype interactions, am I thinking about my own clock, or the user’s? Am I going to help someone make order in his or her life?

If we are going to ask people, in the form of our products, in the form of the things we make, to spend their heartbeats—if we are going to ask them to spend their heartbeats on us, on our ideas, how can we be sure, far more sure than we are now, that they spend those heartbeats wisely?

our technological capability changes much faster than our culture. we first create our technologies and then they change our society and culture. therefore we have a huge responsibility to look at things from the other person's point of view – and to do what's best for them. in other words, be considerate.

Finding the Exhaust Ports, by Jon Gold

Lamenting about the tech industry’s ills when self-identifying as a technologist is a precarious construction. I care so deeply about using the personal computer for liberation & augmentation. I’m so, so burned out by 95% of the work happening in the tech industry. Silicon Valley mythologizes newness, without stopping to ask “why?”. I’m still in love with technology, but increasingly with nuance into that which is for us, and that which productizes us.

Perhaps when Bush prophesied lightning-quick knowledge retrieval, he didn’t intend for that knowledge to be footnoted with Outbrain adverts. Licklider’s man-computer symbiosis would have been frustrated had it been crop-dusted with notifications. Ted Nelson imagined many wonderfully weird futures for the personal computer, but I don’t think gamifying meditation apps was one of them.

every day we make a big fuss about a seemingly new hype (ai, blockchain, vr, iot, cloud, ... what next?). as neil postman and others have cautioned us, technologies tend to become mythic. that is, perceived as if they were god-given, part of the natural order of things, gifts of nature, and not as artifacts produced in a specific political and historical context. and by that we completely fail to recognize how we can use technology to augment ourselves to do things that were previously impossible, to help us make our lives better.

summing up 95


Legends of the Ancient Web, by Maciej Cegłowski

Radio brought music into hospitals and nursing homes, it eased the profound isolation of rural life, it let people hear directly from their elected representatives. It brought laughter and entertainment into every parlor, saved lives at sea, gave people weather forecasts for the first time.

But radio waves are just oscillating electromagnetic fields. They really don't care how we use them. All they want is to go places at the speed of light. It is hard to accept that good people, working on technology that benefits so many, with nothing but good intentions, could end up building a powerful tool for the wicked. But we can't afford to re-learn this lesson every time.

Technology interacts with human nature in complicated ways, and part of human nature is to seek power over others, and manipulate them. Technology concentrates power. We have to assume the new technologies we invent will concentrate power, too. There is always a gap between mass adoption and the first skillful political use of a medium. With the Internet, we are crossing that gap right now.

only those who know nothing about technological history believe that technology is entirely neutral. it always has a bias towards being used in certain ways and not others. a great comparison to what we're facing now with the internet.

Silicon Valley Is Turning Into Its Own Worst Fear, by Ted Chiang

In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind.

I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”

the problem is this: if you're never exposed to new ideas and contexts, if you grow up only being shown one way of thinking about businesses & technology and being told that there are no other ways to think about this, you grow up thinking you know what we're doing.

The resource leak bug of our civilization, by Ville-Matias Heikkilä

When people try to explain the wastefulness of today's computing, they commonly offer something I call the "tradeoff hypothesis". According to this hypothesis, the wastefulness of software would be compensated by flexibility, reliability, maintainability, and perhaps most importantly, cheap programming work.

I used to believe in the tradeoff hypothesis as well. However, during recent years, I have become increasingly convinced that the portion of true tradeoff is quite marginal. An ever-increasing portion of the waste comes from abstraction clutter that serves no purpose in final runtime code. Most of this clutter could be eliminated with more thoughtful tools and methods without any sacrifices.

we too often seem to adjust to the limitations of technology, instead of creating solutions for a problem with the help of technology.

summing up 94


Some excerpts from recent Alan Kay emails

Socrates didn't charge for "education" because when you are in business, the "customer starts to become right". Whereas in education, the customer is generally "not right". Marketeers are catering to what people want, educators are trying to deal with what they think people need (and this is often not at all what they want).

Another perspective is to note that one of the human genetic "built-ins" is "hunting and gathering" – this requires resources to "be around", and is essentially incremental in nature. It is not too much of an exaggeration to point out that most businesses are very like hunting-and-gathering processes, and think of their surrounds as resources put there by god or nature for them. Most don't think of the resources in our centuries as actually part of a human-made garden via inventions and cooperation, and that the garden has to be maintained and renewed.

these thoughts are a pure gold mine. a fundamental problem for most businesses is that one cannot innovate under business objectives and one cannot accomplish business objectives under innovation. ideally, you need both, but not at the same time.

How a handful of tech companies control billions of minds every day, by Tristan Harris

When we talk about technology, we tend to talk about it as this blue sky opportunity. It could go any direction. And I want to get serious for a moment and tell you why it's going in a very specific direction. Because it's not evolving randomly. There's a hidden goal driving the direction of all of the technology we make, and that goal is the race for our attention. Because every new site or app has to compete for one thing, which is our attention, and there's only so much of it. And the best way to get people's attention is to know how someone's mind works.

A simple example is YouTube. YouTube wants to maximize how much time you spend. And so what do they do? They autoplay the next video. And let's say that works really well. They're getting a little bit more of people's time. Well, if you're Netflix, you look at that and say, well, that's shrinking my market share, so I'm going to autoplay the next episode. But then if you're Facebook, you say, that's shrinking all of my market share, so now I have to autoplay all the videos in the newsfeed before waiting for you to click play. So the internet is not evolving at random. The reason it feels like it's sucking us in the way it is is because of this race for attention. We know where this is going. Technology is not neutral, and it becomes this race to the bottom of the brain stem of who can go lower to get it.

we seem to have the notion that technology is always good. but that is simply not the case. every technology is always both a burden and a blessing. not either or, but this and that.

The world is not a desktop, by Mark Weiser

The idea, as near as I can tell, is that the ideal computer should be like a human being, only more obedient. Anything so insidiously appealing should immediately give pause. Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstanding, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer I need to talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the center of attention.

in a world where computers increasingly become human-like, they inevitably become the center of attention. the exact opposite of what they should be: invisible, helping us focus our attention on ourselves and the people we live with.

summing up 93


The future of humanity and technology, by Stephen Fry

Above all, be prepared for the bullshit, as AI is lazily and inaccurately claimed by every advertising agency and app developer. Companies will make nonsensical claims like "our unique and advanced proprietary AI system will monitor and enhance your sleep" or "let our unique AI engine maximize the value of your stock holdings". Yesterday they would have said "our unique and advanced proprietary algorithms" and the day before that they would have said "our unique and advanced proprietary code". But let's face it, they're almost always talking about the most basic software routines. The letters A and I will become degraded and devalued by overuse in every field in which humans work. Coffee machines, light switches, christmas trees will be marketed as AI proficient, AI savvy or AI enabled. But despite this inevitable opportunistic nonsense, reality will bite.

If we thought the Pandora's jar that ruined the utopian dream of the internet contained nasty creatures, just wait till AI has been overrun by the malicious, the greedy, the stupid and the maniacal. We sleepwalked into the internet age and we're now going to sleepwalk into the age of machine intelligence and biological enhancement. How do we make sense of so much futurology screaming in our ears?

Perhaps the most urgent need might seem counterintuitive. While the specialist bodies and institutions I've mentioned are necessary we need surely to redouble our efforts to understand who we humans are before we can begin to grapple with the nature of what machines may or may not be. So the arts and humanities strike me as more important than ever. Because the more machines rise, the more time we will have to be human and fulfill and develop to their uttermost, our true natures.

an outstanding lecture exploring the impact of technology on humanity by looking back at human history in order to understand the present and the future.

We're building a dystopia just to make people click on ads, by Zeynep Tufekci

We use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it's not that the people who run Facebook or Google are maliciously and deliberately trying to make the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter, it's the structures and business models they're building. And that's the core of the problem.

So what can we do? We need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.

no new technology has only a one-sided effect. every technology is always both a burden and a blessing. not either or, but this and that. what bothers me is that we seem to ignore the negative impact of new technologies, justifying this attitude with their positive aspects.

the bullet hole misconception, by daniel g. siegel

If you're never exposed to new ideas and contexts, if you grow up only being shown one way of thinking about the computer and being told that there are no other ways to think about this, you grow up thinking you know what we're doing. We have already fleshed out all the details, improved and optimized everything a computer has to offer. We celebrate alleged innovation and then delegate picking up the broken pieces to society, because it's not our fault – we figured it out already.

We have to tell ourselves that we haven't the faintest idea of what we're doing. We, as a field, haven't the faintest idea of what we're doing. And we have to tell ourselves that everything around us was made up by people that were no smarter than us, so we can change, influence and build things that make a small dent in the universe.

And once we understand that, only then might we be able to do what the early fathers of computing dreamed about: To make humans better – with the help of computers.

the sequel to my previous talk, the lost medium, on bullet holes in world war 2 bombers, page numbering, rotating points of view and how we can escape the present to invent the future.
