I am trying to build a jigsaw puzzle that has no lid and is missing half of the pieces. I am unable to show you what it will be, but I can show you some of the pieces and why they matter to me. If you are building a different puzzle, it is possible that these pieces won’t mean much to you; maybe they won’t fit, or they won’t fit yet. Then again, these might just be the pieces you’re looking for.

Alan Kay: MIT-EECS 1998 Fall Semester Colloquium Series (VPRI 834)

Applications are an old old bad idea. If you have an object-oriented system what you basically want to do is to gather all the resources you need and make what you want.

Craig Mod: Future reading

To return to a book is to return not just to the text but also to a past self. We are embedded in our libraries. To reread is to remember who we once were, which can be equal parts scary and intoxicating.

James Burke: The Knowledge Web, p. 22

The problem of past information overload has generally been of concern only to a small number of literate administrators and their semiliterate masters. In contrast, twenty-first-century petabyte laptops and virtually free access to the Internet may bring destabilizing effects of information overload that will operate on a scale and at a rate well beyond anything that has happened before. In the next few decades hundreds of millions of new users will have no experience in searching the immense amount of available data and very little training in what to do with it. Information abundance will stress society in ways for which it has not been prepared and damage centralized social systems designed to function in a nineteenth-century world.

Samuel Johnson: The History of Rasselas, Prince of Abissinia, ch. 6

Nothing will ever be attempted, if all possible objections must be first overcome.

Weiwei Xu: Expressive tools

Whenever I feel like I’ve gotten it or I understand what’s going on, life hits me in a hard way — it’s like “nope, you don’t have it, you don’t know what you’re doing”. And then you’re going into this cycle of learning more about yourself, and understanding that there’s so much more possibilities, so much more unknowns.

Marshall McLuhan: Understanding Media, p. 248

It was the tandem alignment of wheels that created the velocipede and then the bicycle, for with the acceleration of wheel by linkage to the visual principle of mobile lineality, the wheel acquired a new degree of intensity. The bicycle lifted the wheel onto the plane of aerodynamic balance, and not too indirectly created the airplane. It was no accident that the Wright brothers were bicycle mechanics, or that early airplanes seemed in some ways like bicycles. The transformations of technology have the character of organic evolution because all technologies are extensions of our physical being.

Marshall McLuhan: Understanding Media, p. 68

To behold, use or perceive any extension of ourselves in technological form is necessarily to embrace it. To listen to radio or to read the printed page is to accept these extensions of ourselves into our personal system and to undergo the “closure” or displacement of perception that follows automatically. It is this continuous embrace of our own technology in daily use that puts us in the Narcissus role of subliminal awareness and numbness in relation to these images of ourselves. By continuously embracing technologies, we relate ourselves to them as servomechanisms. That is why we must, to use them at all, serve these objects, these extensions of ourselves, as gods or minor religions. An Indian is the servo-mechanism of his canoe, as the cowboy of his horse or the executive of his clock.

Tom Fishburne: Navigating AI Hype

The biggest risk is letting any new technology distract from the fundamentals. How we adopt a new technology to fulfill our business strategy matters far more than the hype of the technology itself.

Zachary Loeb: “Computers enable fantasies” – On the continued relevance of Weizenbaum’s warnings

Frankly, the “ought” versus the “can” has always been the vital question underneath our adventure with computer technology—and technology more broadly. Though it is a question that tends to be overlooked in favor of an attitude that focuses almost entirely on the “can” and imagines that if something “can” be done that it therefore should or must be done. But as Weizenbaum reminds us, technology isn’t driving these things, people are, people who are responsible for the choices they are making, and people who are so caught up in whether or not their new gadget or program “can” do something that they rarely stop to think whether it “ought” to do so.

Bret Devereaux: On ChatGPT

I am asking our tech pioneers to please be more alive to the consequences of the machines you create. Just because something can be done doesn’t mean it should be done. We could decide to empirically test if 2,000 nuclear detonations will actually produce a nuclear winter, but we shouldn’t. Some inventions – say, sarin gas – shouldn’t be used. Discovering what we can do is always laudable; doing it is not always so. And yet again and again these new machines are created and deployed with vanishingly little concern about what their impacts might be. Will ChatGPT improve society, or just clutter the internet with more junk that will take real humans more time to sort through? Is this a tool for learning or just a tool to disrupt the market in cheating?

Too often the response to these questions is, “well if it can be done, someone will do it, so I might as well do it first (and become famous or rich),” which is not only an immorally self-serving justification but also a suicidal rule of conduct to adopt for a species which has the capacity to fatally irradiate its only biosphere. The amount of power our species has to create and destroy long ago exceeded the point where we could survive on that basis.

And that problem – that we need to think hard about the ethics of our inventions before we let them escape our labs – that is a thinking problem and thus one in which ChatGPT is entirely powerless to help us.

Ted Chiang: Why Computers Won’t Make Themselves Smarter

The rate of innovation is increasing and will continue to do so even without any machine able to design its successor. Some might call this phenomenon an intelligence explosion, but I think it’s more accurate to call it a technological explosion that includes cognitive technologies along with physical ones. Computer hardware and software are the latest cognitive technologies, and they are powerful aids to innovation, but they can’t generate a technological explosion by themselves. You need people to do that, and the more the better. Giving better hardware and software to one smart individual is helpful, but the real benefits come when everyone has them. Our current technological explosion is a result of billions of people using those cognitive tools.

Ted Chiang: Why Computers Won’t Make Themselves Smarter

There’s no reason to believe that humans born ten thousand years ago were any less intelligent than humans born today; they had exactly the same ability to learn as we do. But, nowadays, we have ten thousand years of technological advances at our disposal, and those technologies aren’t just physical—they’re also cognitive.

Let’s consider Arabic numerals as compared with Roman numerals. With a positional notation system, such as the one created by Arabic numerals, it’s easier to perform multiplication and division; if you’re competing in a multiplication contest, Arabic numerals provide you with an advantage. But I wouldn’t say that someone using Arabic numerals is smarter than someone using Roman numerals. By analogy, if you’re trying to tighten a bolt and use a wrench, you’ll do better than someone who has a pair of pliers, but it wouldn’t be fair to say you’re stronger. You have a tool that offers you greater mechanical advantage; it’s only when we give your competitor the same tool that we can fairly judge who is stronger. Cognitive tools such as Arabic numerals offer a similar advantage; if we want to compare individuals’ intelligence, they have to be equipped with the same tools.

Simple tools make it possible to create complex ones; this is just as true for cognitive tools as it is for physical ones. Humanity has developed thousands of such tools throughout history, ranging from double-entry bookkeeping to the Cartesian coördinate system. So, even though we aren’t more intelligent than we used to be, we have at our disposal a wider range of cognitive tools, which, in turn, enable us to invent even more powerful tools.

This is how recursive self-improvement takes place—not at the level of individuals but at the level of human civilization as a whole.

Ted Chiang: The Truth of Fact, the Truth of Feeling

We don’t normally think of it as such, but writing is a technology, which means that a literate person is someone whose thought processes are technologically mediated. We became cognitive cyborgs as soon as we became fluent readers, and the consequences of that were profound.

Before a culture adopts the use of writing, when its knowledge is transmitted exclusively through oral means, it can very easily revise its history. It’s not intentional, but it is inevitable; throughout the world, bards and griots have adapted their material to their audiences and thus gradually adjusted the past to suit the needs of the present. The idea that accounts of the past shouldn’t change is a product of literate cultures’ reverence for the written word. Anthropologists will tell you that oral cultures understand the past differently; for them, their histories don’t need to be accurate so much as they need to validate the community’s understanding of itself. So it wouldn’t be correct to say that their histories are unreliable; their histories do what they need to do.

Right now each of us is a private oral culture. We rewrite our pasts to suit our needs and support the story we tell about ourselves. With our memories we are all guilty of a Whig interpretation of our personal histories, seeing our former selves as steps toward our glorious present selves.

Douglas Engelbart: Bootstrapping Our Collective Intelligence

It takes those two things, tool system and human, to be integrated for humans to be taught, conditioned, trained, etc. in order to be effective. That’s all built on top of the genetic thing. Motivation and all that.

Someone says “I’m going to improve their effectiveness. I’m going to invent some gadgets here and plug them in”. There has never been an effective gadget that was plugged in over there that really made a difference until the human system adapted to it. So big changes there cause subsequent very large evolutionary changes in the human system. You’ve got a radical, explosive change like the digital technology and he says “Oh, I’m going to use it to automate things over here, huh?” So this was the prevailing thing in the ’70s. But that just can’t be.

What happens is that the left-hand side, the human side, has got to co-evolve lots of new ways in which it does things, in which it harnesses basic human capabilities. So, we’re talking about revolution like we’ve never seen before and it isn’t going to happen like you just sit there and have the equivalent of a document on your screen and you scroll it. That’s not using anything of the computer capabilities or what your mind and your sensory and perceptual machinery can work with. There’s a revolution in the making but it’s not going to happen unless we find ways to co-evolve those two things.

Howard Rheingold: Tools For Thought, p. 319

It is up to us to decide whether or not computers will be our masters, our servants, or our partners.

It is up to us to decide what human means, and exactly how it is different from machine, and what tasks ought and ought not to be trusted to either species of symbol-processing system. But some decisions must be made soon, while the technology is still young. And the deciding must be shared by as many citizens as possible, not just the experts. In that sense, the most important factor in whether we will all see the dawn of a humane, sustainable world in the twenty-first century will be how we deal with these machines a few of us thought up and a lot of us will be using.

Howard Rheingold: Tools For Thought, p. 266

when it comes to computer software, the human habit of looking at artifacts as tools can get in the way. Good tools ought to disappear from one’s consciousness. You don’t try to persuade a hammer to pound a nail—you pound the nail, with the help of a hammer. But computer software, as presently constituted, forces us to learn arcane languages so we can talk to our tools instead of getting on with the task.

Howard Rheingold: Tools For Thought, p. 231

It would be a sad irony if we were to end up creating a world too complicated for us to manage alone, and fail to recognize that some of our own inventions could help us deal with our own complexity.

Howard Rheingold: Tools For Thought, p. 183

The biggest difference between the citizen of preliterate culture and the industrial-world dweller who can perform long division or dial a telephone is not in the brain’s “hardware”—the nervous system of the highlander or the urbanite—but in the thinking tools given by the culture. Reading, writing, surviving in a jungle or a city, are examples of culturally transmitted human software.

Douglas Engelbart: Tools For Thought, p. 177

If you can improve our capability to deal with complicated problems, you’ve made a significant impact on helping humankind. That was the kind of payoff I wanted, so that’s what I set out to do.

Howard Rheingold: Tools For Thought, p. 15

As we shall see, the further limits of this technology are not in the hardware, but in our minds. The digital computer is based on a theoretical discovery known as “the universal machine,” which is not actually a tangible device but a mathematical description of a machine capable of simulating the actions of any other machine. Once you have created a general-purpose machine that can imitate any other machine, the future development of the tool depends only on what tasks you can think to do with it. For the immediate future, the issue of whether machines can become intelligent is less important than learning to deal with a device that can become whatever we clearly imagine it to be.

Audrey Watters: Hope for the Future

I know that my work makes people feel uncomfortable, particularly when so much of the tech and ed-tech industry mantra rests on narratives promising a happy, shiny, better future. No one appreciates it when someone comes along and says “actually, if you wheel this giant horse — impressive as it looks — inside the gates, it’ll destroy everything you love, everyone you care about. Don’t do it.”

And like Cassandra, it’s exhausting to keep repeating “don’t do it,” and to have folks go right on ahead and do it anyway. I’ve been writing about ed-tech for over a decade now, cautioning people about the repercussions of handing over data, infrastructure, ideology, investment to Silicon Valley types. And for what?

Audrey Watters: What Happens When Ed-Tech Forgets? Some Thoughts on Rehabilitating Reputations

It doesn’t help, of course, that there is, in general, a repudiation of history within Silicon Valley itself. Silicon Valley’s historical amnesia — the inability to learn about, to recognize, to remember what has come before — is deeply intertwined with the idea of “disruption” and its firm belief that new technologies are necessarily innovative and are always “progress.”

Bret Devereaux: Why Don’t We Use Chemical Weapons Anymore?

As someone who thinks humans must learn to peace as well as we have learned to war if we wish to survive in the long run, it is a humbling and concerning thought that we have not come so far as we might like.

James Burke: Connections, p. 294

We are increasingly aware of the need to assess our use of technology and its impact on us, and indeed it is technology which has given us the tools with which to make such an assessment. But the average person is also aware of being inadequately prepared to make that assessment. […]

In the last twenty years television has brought a wide spectrum of affairs into our living-rooms. Our emotional reaction to many of them — such as the problem of where to site atomic power stations, or the dilemma of genetic engineering, or the question of abortion — reveals the paradoxical situation in which we find ourselves. The very tools which might be used to foster understanding and reason, as opposed to emotional reflex, are themselves forced to operate at a level which only enhances the paradox. The high rate of change to which we have become accustomed affects the manner in which information is presented: when the viewer is deemed to be bored after only a few minutes of air time, or the reader after a few paragraphs, content is sacrificed for stimulus, and the problem is reinforced. The fundamental task of technology is to find a means to end this vicious circle, and to bring us all to a fuller comprehension of the technological system which governs and supports our lives. It is a difficult task, because it will involve surmounting barriers that have taken centuries to construct. During that time we have carried with us, and cherished, beliefs that are pre-technological in nature. These faiths place art and philosophy at the center of man’s existence, and science and technology on the periphery. According to this view, the former lead and the latter follow.

Yet, as this book has shown, the reverse is true.

James Burke: Connections, p. 288

Approaches to the study of history tend to leave the layman with a linear view of the way change occurs, and this in turn affects the way he sees the future. Most people, if asked how the telephone is likely to develop during their lifetime, will consider merely the ways in which the instrument itself may change. If such changes include a reduction in size and cost and an increase in operating capability, it is easy to assume that the user will be encouraged to communicate more frequently than he does at present. But the major influence of the telephone on his life might come from an interaction between communications technology and other factors which have nothing to do with technology.

James Burke: Connections, p. 286

In the heroic treatment, historical change is shown to have been generated by the genius of individuals, conveniently labelled “inventors”. In such a treatment, Edison invented the electric light, Bell the telephone, Gutenberg the printing press, Watt the steam engine, and so on. But no individual is responsible for producing an invention ex nihilo. The elevation of the single inventor to the position of sole creator at best exaggerates his influence over events, and at worst denies the involvement of those humbler members of society without whose work his task might have been impossible.

James Burke: Connections, p. 286

We have seen that each one of the modern man-made objects that alters our world, and in so doing changes our lives, molds our behaviour, conditions our thoughts, has come to us at the end of a long and often haphazard series of events. The present is a legacy of the past, but it is a legacy that was bequeathed without full knowledge of what the gift would mean. At no time in the history of the development of the millions of artefacts with which we live did any of the people involved in that development understand what effect their work would have. Is today, then, merely the end-product of a vast and complicated series of accidental connections?

James Burke: Connections, p. 140

The process by which fundamental change comes about at times has nothing to do with diligence, or careful observation, or economic stimulus, or genius, but happens entirely by accident.

James Burke: Connections, p. xi

There’s no grand design to the way history goes. The process does not fall neatly into categories such as those we are taught in school. For example, most of the elements contributing to the historical development of transportation had nothing to do with vehicles. So there are no rules for how to become an influential participant on the web of change. There is no right way. Equally, there is no way to guarantee that your great project meant to alter the course of history will ever succeed.

Things almost never turn out as expected. When the telephone was invented, people thought it would only be used for broadcasting. Radio was intended for use exclusively onboard ships. A few decades ago, the head of IBM said America would never need more than four or five computers.

Change almost always comes as a surprise because things don’t happen in straight lines. Connections are made by accident. Second-guessing the result of an occurrence is difficult, because when people or things or ideas come together in new ways, the rules of arithmetic are changed so that one plus one suddenly makes three. This is the fundamental mechanism of innovation, and when it happens the result is always more than the sum of the parts.

James Burke: Connections, p. viii

Traditional forms of education, with their emphasis on specialization and silo thinking, have done little to prepare us for the onset of abundance. The frontiers of knowledge are being pushed back by specialists, each isolated from (and in many cases unaware of) the work of others like them, but with whom they share the common mission: “Learn more and more about less and less.” When the products of such specialist research go out into the real world they often cause changes unforeseen by their creators (still heads down at the work bench). Some obvious examples: asbestos, Freon, DDT, CO2, non-degradable plastic, GM foods (perhaps).

However, we are beginning to be aware of the need for a broader understanding of the ways innovation changes life faster than the old social institutions can handle. In a few cases we are already enacting legislation to moderate potentially harmful ripple effects in areas such as the environment, health and safety, food and drugs. But given the ever more interconnected nature of the global community and the accelerating rate of change, it would also seem worth considering an approach that might be described as “social ecology” and that would take a wider, more contextual view of innovation and its effects, with a view to encouraging, moderating or discouraging particular projects.

Don Norman: The Design of Everyday Things, p. 286

Reliance on technology is a benefit to humanity. With technology, the brain gets neither better nor worse. Instead, it is the task that changes. Human plus machine is more powerful than either human or machine alone.

Don Norman: The Design of Everyday Things, p. 167

We can’t fix problems unless people admit they exist. When we blame people, it is then difficult to convince organizations to restructure the design to eliminate these problems. After all, if a person is at fault, replace the person. But seldom is this the case: usually the system, the procedures, and social pressures have led to the problems, and the problems won’t be fixed without addressing all of these factors.

Don Norman: The Design of Everyday Things, p. 67

True collaboration requires each party to make some effort to accommodate and understand the other. When we collaborate with machines, it is people who must do all the accommodation. Why shouldn’t the machine be more friendly? The machine should accept normal human behavior, but just as people often subconsciously assess the accuracy of things being said, machines should judge the quality of information given it, in this case to help its operators avoid grievous errors because of simple slips. Today, we insist that people perform abnormally, to adapt themselves to the peculiar demands of machines, which includes always giving precise, accurate information. Humans are particularly bad at this, yet when they fail to meet the arbitrary, inhuman requirements of machines, we call it human error. No, it is design error.

Don Norman: The Design of Everyday Things, p. 5

Machines, after all, are conceived, designed, and constructed by people. By human standards, machines are pretty limited. They do not maintain the same kind of rich history of experiences that people have in common with one another, experiences that enable us to interact with others because of this shared understanding. Instead, machines usually follow rather simple, rigid rules of behavior. If we get the rules wrong even slightly, the machine does what it is told, no matter how insensible and illogical. People are imaginative and creative, filled with common sense; that is, a lot of valuable knowledge built up over years of experience. But instead of capitalizing on these strengths, machines require us to be precise and accurate, things we are not very good at. Machines have no leeway or common sense.

Don Norman: The Design of Everyday Things, p. xvii

With the passage of time, the psychology of people stays the same, but the tools and objects in the world change. Cultures change. Technologies change. The principles of design still hold, but the way they get applied needs to be modified to account for new activities, new technologies, new methods of communication and interaction.

Neil Postman: Amusing Ourselves to Death, p. 16

We do not measure a culture by its output of undisguised trivialities but by what it claims as significant.

Alf Rehn: Innovation for the Fatigued, p. 34

Data-driven improvement, whilst not unimportant, is not the same as innovation. Yes, data is important. Yes, machine learning will bring us great things. But these are things that will supercharge business as usual, hone the processes we already know and improve the things we already know. Real innovation, the kind that looks beyond what we do and know now, will for the foreseeable future — and quite possibly far beyond that — be founded on the quirky capacity of human beings to think in entirely novel ways. No algorithm would have been able to consider starting a hotel business without owning any hotels. No form of machine learning would have been able to make the leap from a service-based business such as airlines to a no-service business such as low-cost airlines. Such leaps, which often seem irrational and illogical, are what humans are great at — and as we do not even know exactly how humans manage to make such leaps, we cannot easily expect that we can teach it to machines. Particularly ones that are defined by logic and reason.

Alf Rehn: Innovation for the Fatigued, p. 13

Today, in both our companies and our society, much of the innovation we do and talk about is shallow innovation. We focus on the easy things, the lookalike things, the things legitimated by the media. But there is another form of innovation, one far more important, that is pushed to the back by the glitz and glamour of our current innovation hype. Simply put, in a society in love with shallow innovation talk, deep innovation has become marginalized.

Ted Chiang: Ezra Klein Interviews Ted Chiang

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

Marc Eckō: Unlabel, p. 25

My “vision” didn’t start with billions, my vision started with a can of spray paint and what I could do with it in the next thirty minutes. Entrepreneurs lose sight of that. When Steve Jobs and Steve Wozniak built their first motherboard, they didn’t envision the iPhone. Visions can start small. Visions should start small. They’re incremental, like building Legos:

Snap one block to the next.
Snap another block.

Having an overly majestic “vision” can cripple you with pressure. When I started with graffiti, I thought about my next eighteen hours, not my next eighteen years. Free yourself to do the same.

Doug Engelbart: Here’s an Idea!

These days, the problem isn’t how to innovate; it’s how to get society to adopt the good ideas that already exist. Sure, innovation is critical, but it doesn’t amount to anything unless the rest of the world does something with it. I’ve spent years spinning my wheels, trying to get the world to act on the potential that’s out there. I invented the computer mouse in 1963, and the world didn’t adopt it until nearly 20 years later. When the mouse first came out, people thought that it was too hard to use! So Apple’s mouse has just one button on it.

Why? Because everything has to be easy to use! That mentality has retarded our growth. Everything must be easy to use — but from whose perspective? From the perspective of a brand-new user? What about the user who has a little more experience?

The business of knowledge work can and should move ahead more quickly. Computers can help us augment the human intellect — if we let them. I’m trying to get the world to wake up and to start taking advantage of this potential.

Neil Postman: Technopoly: The Surrender of Culture to Technology, p. 4

It is a mistake to suppose that any technological innovation has a one-sided effect. Every technology is both a burden and a blessing; not either-or, but this-and-that.

Jonathan Blow: Interview with Jonathan Blow

Computers were designed by people. Somebody, somewhere, knows how all these things work. But the proportion of people who understand any given piece is declining over time. And that’s dangerous, actually.

This was made by us and we can make it better. People have all sorts of ways of justifying the status quo. I think there’s sort of a team aspect to it. I find increasingly this attitude that these things were made, they’re like the pyramids of Egypt, where like, they were made by the forefathers and who are you to say that they should be changed? And you should just live in these structures that were made by previous generations because they were smarter than you or something. I hear that phrase sometimes like, “The web browser was made by people who were smarter than me,” is something somebody said to me in an online argument.

That’s scary. Like once programming becomes, we live in these structures created by [previous] generations that we are no longer able to build, that’s a civilization in decline. That’s it. Or at least it’s a civilization that has passed its glory days and can no longer do those things.

Zachary Loeb: Theses on Techno-Optimism

The history of technology certainly demonstrates that there have been moments throughout history when technological shifts have made large significant changes. Though careful historians have worked diligently to emphasize that, contrary to popular narratives, those shifts were rarely immediate and usually interwoven with a host of social/political/economic changes. Nevertheless, techno-optimism keeps people waiting for that next big technological leap forward. The hopeful confidence in that big technological jump, which is surely just around the corner, keeps us sitting patiently as things remain largely the same (or steadily get worse). Faced with serious challenges that our politics seem incapable of addressing, and which technological change has so far been unable to miraculously solve, techno-optimism keeps the focus centered on the idea of an eventual technological solution. And most importantly this is a change that will mean that we do not need to do much, we do not need to act, we do not need to be willing to change, we just need to wait and eventually the technology will come along that will do it all for us.

And so we wait. And so we keep waiting, for technology to come along and save us from ourselves.

Alan Kay: What was the last breakthrough in computer programming?

As for programming itself, the rallying cry I’ve tried to put forth is: “It’s not BIG DATA, but BIG MEANING”. In other words, the next significant threshold that programming must achieve is for programs and programming systems to have a much deeper understanding of both what they are trying to do, and what they are actually doing. That this hasn’t happened in the last 35 years is a really unfortunate commentary on the lack of a maturation process for computing.

Joseph Weizenbaum: Weizenbaum examines computers and society

[Your question] in a sense, it is upside-down. You start with the instrument; the question makes the assumption that of course the computer is good for something in education, that it is the solution to some educational problem. Specifically, [your] question is, what is it good for?

But where does the underlying assumption come from? Why are we talking about computers? I understand [you asked because] I’m a computer scientist, not a bicycle mechanic. But there is something about the computer — the computer has almost since its beginning been basically a solution looking for a problem.

The questioning should start the other way — it should perhaps start with the question of what education is supposed to accomplish in the first place. Then perhaps [one should] state some priorities — it should accomplish this, it should do that, it should do the other thing. Then one might ask, in terms of what it’s supposed to do, what are the priorities? What are the most urgent problems? And once one has identified the urgent problems, then one can perhaps say, “Here is a problem for which the computer seems to be well-suited.” I think that’s the way it has to begin.

Alan Kay: The Father Of Mobile Computing Is Not Impressed

If people could understand what computing was about, the iPhone would not be a bad thing. But because people don’t understand what computing is about, they think they have it in the iPhone, and that illusion is as bad as the illusion that Guitar Hero is the same as a real guitar. That’s the simple long and the short of it.

Aldous Huxley: Science, Liberty and Peace, p. 28

Confronted by the data of experience, men of science begin by leaving out of account all those aspects of the facts which do not lend themselves to measurement and to explanation in terms of antecedent causes rather than of purpose, intention and values. Pragmatically they are justified in acting in this odd and extremely arbitrary way; for by concentrating exclusively on the measurable aspects of such elements of experience as can be explained in terms of a causal system they have been able to achieve a great and ever increasing control over the energies of nature. But power is not the same thing as insight and, as a representation of reality, the scientific picture of the world is inadequate, for the simple reason that science does not even profess to deal with experience as a whole, but only with certain aspects of it in certain contexts.

All this is quite clearly understood by the more philosophically minded men of science. But unfortunately some scientists, many technicians and most consumers of gadgets have lacked the time and the inclination to examine the philosophical foundations and background of the sciences. Consequently they tend to accept the world picture implicit in the theories of science as a complete and exhaustive account of reality; they tend to regard those aspects of experience which scientists leave out of account, because they are incompetent to deal with them, as being somehow less real than the aspects which science has arbitrarily chosen to abstract from out of the infinitely rich totality of given facts. Because of the prestige of science as a source of power, and because of the general neglect of philosophy, the popular Weltanschauung of our times contains a large element of what may be called ‘nothing-but’ thinking. Human beings, it is more or less tacitly assumed, are nothing but bodies, animals, even machines; the only really real elements of reality are matter and energy in their measurable aspects; values are nothing but illusions that have somehow got themselves mixed up with our experience of the world; mental happenings are nothing but epiphenomena, produced by and entirely dependent upon physiology; spirituality is nothing but wish fulfilment and misdirected sex; and so on.

David C. Baker: The Business of Expertise, p. xxvi

The primary beneficiary of every book is the author because—for me, anyway—clarity comes in the articulation and not after it. If I didn’t write, I’d never know what I actually believe, and I hope reading this will inspire you to write for the same reason.

Alan Kay: Alan Kay interviewed by Dave Marvit

If you look at the daily use of people with computers of whatever kind, it’s hard to find any actual new technology in there that is newer than thirty years old, as far as the foundational ideas. The whole idea of graphical user interfaces came from the ’60s and the one we happen to use today was invented at Xerox PARC, but it wasn’t invented in isolation of previous ones. Both the mouse and the tablet were invented in 1964, most people don’t realize that. The ARPANET started working in 1969 and it just morphed into the Internet. The Ethernet was invented at Xerox PARC around 1973, object-oriented programming…

If you take a look at what people are doing on the web, what I see is mainly better-looking graphics than we could do 50 years ago. But what I don’t see is Engelbart’s ideas. Engelbart routinely shared screens in real time for the entire work and it was an integral part of the system. It wasn’t an app that you use. It was actually part of the thing. […] This is all in the ’60s and nothing about it is hidden. What I see is something much more like a pop culture today where people are completely indifferent to the past. And it’s not that everything in the past was good but it’s a real shame to lose the stuff that was better.

Carl Tashian: At Dynamicland, The Building Is The Computer

Every medium has limits. What is at first liberating eventually becomes a prison that constrains our expression and our range of thought. Because every medium has limits, there’s always an opportunity to invent new media that will expand our ability to understand the world around us, to communicate, and to address major problems of civilization.

A lot of what we use today is extended from our analog past: email, digital books, and digital photos are more or less direct carryovers from physical letters, books, and photos. And this tendency has bled into hardware products: We’ve extended $0.05 pencils and $0.005 paper by creating $200 digital pencils and $1,000 digital tablets. By carrying forward some of the elegance of pencil and paper into the digital realm, we cheat ourselves out of discovering entirely new approaches.

Kate Raworth: A healthy economy should be designed to thrive, not grow

I think it’s time to choose a higher ambition, a far bigger one, because humanity’s 21st century challenge is clear: to meet the needs of all people within the means of this extraordinary, unique, living planet so that we and the rest of nature can thrive.

Nafeez Ahmed: Coronavirus, Synchronous Failure and the Global Phase-Shift

The coronavirus crisis shows us how self-defeating it really is to adopt a raw, ‘fend for yourself’ approach. It simply cannot work. Such an approach would lead to widespread panic, disorder and a rapid dissolution of established governance and distribution systems. Narrow survivalists who are offering this sort of ‘solution’ to people in response to the coronavirus, climate change or other crises, are part of the problem, in fact, part of the old self-centred, materialist-me paradigm from which this entire industrial system has been constructed. I have a message for these folks: If all you have to offer people is to be frightened, to run and hoard as many supplies as they can, and bunker down to protect themselves, you’re part of the problem. You’re part of the very system that created the dynamic you’re caught up in. You cannot see the bigger picture. And at a time when the imperative is to build people’s capacities for sense-making, for collective intelligence, for wisdom, for love and compassion, for building and designing and engaging in the emergence of new ecological systems within a new life cycle, your advice is utterly useless.

The real way forward is obvious to anyone who pauses for a moment to reflect on what this present moment really means, in its full context, but that requires stepping beyond the immediate reactionary fears and desires of your psyche and allowing yourself to think, see and presence as a person who is an integral node in the web of life.

That is as follows: for communities across multiple sectors to take the initiative in working together, building new cooperative processes, sharing resources, looking out for our vulnerable neighbours and friends, and ultimately providing each other support in developing public interest strategies informed by collective intelligence.

John F. Kennedy: We choose to go to the Moon

The greater our knowledge increases, the greater our ignorance unfolds.

Andy Matuschak and Michael Nielsen: How can we develop transformative tools for thought?

Really difficult problems – problems like inventing Hindu-Arabic numerals – aren’t solved by good intentions and interest alone. A major thing missing is foundational ideas powerful enough to make progress. In the earliest days of a discipline – the proto-disciplinary stage – a few extraordinary people – people like Ivan Sutherland, Doug Engelbart, Alan Kay, and Bret Victor – may be able to make progress. But it’s a very bespoke, individual progress, difficult to help others become proficient in, or to scale out to a community. It’s not yet really a discipline. What’s needed is the development of a powerful praxis, a set of core ideas which are explicit and powerful enough that new people can rapidly assimilate them, and begin to develop their own practice. We’re not yet at that stage with tools for thought. But we believe that we’re not so far away either.

Andy Matuschak and Michael Nielsen: How can we develop transformative tools for thought?

Conventional tech industry product practice will not produce deep enough subject matter insights to create transformative tools for thought. Indeed, that’s part of the reason there’s been so little progress from the tech industry on tools for thought. This sounds like a knock on conventional product practice, but it’s not. That practice has been astoundingly successful at its purpose: creating great businesses. But it’s also what Alan Kay has dubbed a pop culture, not a research culture. To build transformative tools for thought we need to go beyond that pop culture.

Andy Matuschak and Michael Nielsen: How can we develop transformative tools for thought?

Part of the origin myth of modern computing is the story of a golden age in the 1960s and 1970s. In this story, visionary pioneers pursued a dream in which computers enabled powerful tools for thought, that is, tools to augment human intelligence. One of those pioneers, Alan Kay, summed up the optimism of this dream when he wrote of the potential of the personal computer: “the very use of it would actually change the thought patterns of an entire civilization”.

It’s an inspiring dream, which helped lead to modern interactive graphics, windowing interfaces, word processors, and much else. But retrospectively it’s difficult not to be disappointed, to feel that computers have not yet been nearly as transformative as far older tools for thought, such as language and writing. Today, it’s common in technology circles to pay lip service to the pioneering dreams of the past. But nostalgia aside there is little determined effort to pursue the vision of transformative new tools for thought.

Ivan Sutherland: Closing remarks at Hyperkult XX

It’s been 50 years of a continuous development of usability of machines for all kinds of purposes. And of course the machines are thousands of times more powerful now than they were when we started. The important question for the future is where do we go from here?

Jeff Conklin: Through a Doug Lens

I’m sure you’ve heard the old joke about the drunk guy who’s staggering around under a streetlight. After a little while another guy comes along and asks him what he’s looking for. “My keys!” says the drunk. The other guy looks down and starts looking around. “Is this where you lost them?” he asks casually. “No, I lost them over there,” he says, pointing off into the darkness, “but the light’s better over here.”

So I used to think this was a funny joke. I now think it’s possibly one of the most important parables in modern civilization. Because the keys represent something that is potentially of immense value to humanity and to the planet. And the drunk guy represents the computer industry, looking mainly under the streetlight of profit and funding, creating technologies that are conceptually an easy reach within the current state of the practice. The point is we mostly build tools that do what computers can do easily, not necessarily what humans need.

Leonard Kleinrock: The Augmentation of Douglas Engelbart

I worked in downtown New York and at lunch time we’d go out and have lunch, see all the activity, maybe people selling things. And there’d be a guy selling a glass cutter. And he’d make these wonderful cuts on the glass. He’d say, buy this glass cutter, you can do it too.

Of course what’s hidden there is his talent. You don’t buy that, you buy this tool! I say that’s like selling a pencil and saying this pencil can write Chinese. I think the analogy is correct here. Give somebody a computer — it’s the capability of the user to find ways to use that tool to generate wonders, creativity, put things together. The machine won’t do it by itself.

Bret Victor: Tweet

To be clear, this has nothing to do with Apple being “bad” or “good”. It’s a structural problem. There are vast incentives for logging the forest, planting seeds has become an unfundable nightmare, and many people don’t realize that trees grow from seeds in the first place.

Sam Hahn: Scaling Human Capabilities for Solving Problems that Threaten Our Survival

Unfortunately, often people fail to increase their own capacity. We fall into the “ease of use” trap and don’t choose to evolve our behaviors and practices.

Engelbart illustrates this concept with a simple question, “Would you rather go across town on a tricycle or a bicycle?” A tricycle is obviously easier to learn to ride, but clearly nowhere near as efficient as a bicycle. There’s a learning curve from tricycle to bicycle. There’s a learning curve moving away from tried and true traditional methods, to new practices and ways of thinking that will enable us to become more highly functional beings and teams capable of collaboration.

Darla Hewitt: Applying the A,B,C Principles

In order to improve, human processes and tools need to evolve to meet the goals of the organization. If either the tools or the processes are rigid, the collective cannot improve. If you identify the constraint and it cannot be fixed, then the collective is locked into the current level of capability.

Doug Engelbart: The Engelbart Hypothesis, p. 35

These problems are due to structural factors. We have the opportunity to change our thinking and basic assumptions about the development of computing technologies. The emphasis on enhancing security and protecting turf often impedes our ability to solve problems collectively. If we can re-examine those assumptions and chart a different course, we can harness all the wonderful capability of the systems that we have today. People often ask me how I would improve the current systems, but my response is that we first need to look at our underlying paradigms—because we need to co-evolve the new systems, and that requires new ways of thinking. It’s not just a matter of “doing things differently,” but thinking differently about how to approach the complexity of problem-solving today.

Douglas Engelbart: The Engelbart Hypothesis, p. 32

It is, in fact, the infrastructure that defines what that society is capable of. Each of us is born with a unique set of perceptual, motor, and mental abilities (vision, hearing, verbal, smell, touch, taste, motion, sensing). We build upon those through learning new skills and knowledge. We become socialized through culture: language, methods of doing things, customs, organizational behavior, and belief systems. In addition, we learn to use tools and have developed communication systems. The capability infrastructure is the way all of those innate abilities, acquired skills, cultural assumptions, and tools work together.

Change in one tool or custom can have unintended consequences in other tools or customs or, indeed, affect the entire structure. In order to create powerful tools to augment human thinking, we have to change many aspects of the infrastructure, and examine how the tools will be used. The potential for change with the introduction of augmentation technology can create fundamental shifts in the world. While we continue to spend millions of dollars researching newer, faster tools, little research is being done on the most strategic investments that will provide the highest payoffs for augmenting human thinking.

Douglas Engelbart: The Engelbart Hypothesis, p. 20

Through the generations, humans have invented all kinds of tools and methods to support intellectual work. We have entire augmentation systems already. Improving the systems we have for supporting intellectual work really deserves explicit cultivation. I tried to outline the ways the new computer system could help us augment our natural abilities. Imagine how important it would be. I see it as analogous to the way a shovel augments small digging projects, while the bulldozer really augments our ability for big projects.

Charlie Stross: Artificial Intelligence: Threat or Menace?

I am not a believer in the AI singularity — the rapture of the nerds — that is, in the possibility of building a brain-in-a-box that will self-improve its own capabilities until it outstrips our ability to keep up. What CS professor and fellow SF author Vernor Vinge described as “the last invention humans will ever need to make”. But I do think we’re going to keep building more and more complicated systems that are opaque rather than transparent, and that launder our unspoken prejudices and encode them in our social environment. As our widely-deployed neural processors get more powerful, the decisions they take will become harder and harder to question or oppose. And that’s the real threat of AI — not killer robots, but “computer says no” without recourse to appeal.

I’m running on fumes at this point, but if I have any message to leave you with, it’s this: AI and neurocomputing isn’t magical and it’s not the solution to all our problems, but it is dangerously non-transparent.

Douglas Engelbart: Large-Scale Collective IQ: Facilitating its Evolution

Big problems have to be dealt with collectively and mankind’s not getting collectively smarter at anything like the rate at which complexity is accelerating. The issues and challenges and problems are increasing steadily, exponentially. And our ability to get a collective understanding of them is not obviously increasing at that rate.

I get this image of mankind in some great big clumsy vehicle that is moving through an environment which is getting rougher and rougher and it’s being impelled faster and faster. Then you look for how it’s being steered and oh, it’s a very subtle way in which lots of things are affecting the way it’s being steered. Oh well, that was fine when we were chugging along at a slow rate. But the acceleration means we’ve just got better headlights and better visibility ahead. Now we sure as hell better find some better way to steer that mechanism or it’s just obviously going to crash.

I believe more than ever that if we don’t get collectively smarter we’re just likely to get demolished or really set back to primitive times.

Meredith Broussard: Artificial Unintelligence: How Computers Misunderstand the World, p. 194

We’ve managed to use digital technology to increase economic inequality in the United States, facilitate illegal drug abuse, undermine the economic sustainability of the free press, cause a “fake news” crisis, roll back voting rights and fair labor protection, surveil citizens, spread junk science, harass and stalk people online, make flying robots that at best annoy people and at worst drop bombs, increase identity theft, enable hacks that result in millions of credit card numbers being stolen for fraudulent purposes, sell vast amounts of personal data, and elect Donald Trump to the presidency. This is not the better world that the early tech evangelists promised. It’s the same world with the same types of human problems that have always existed. The problems are hidden inside code and data, which makes them harder to see and easier to ignore.

We clearly need to change our approach. We need to stop fetishizing tech. We need to audit algorithms, watch out for inequality, and reduce bias in computational systems, as well as in the tech industry.

Meredith Broussard: Artificial Unintelligence: How Computers Misunderstand the World, p. 12

I think we can do better. Once we understand how computers work, we can begin to demand better quality in technology. We can demand systems that truly make things cheaper, faster, and better instead of putting up with systems that promise improvement but in fact make things unnecessarily complicated. We can learn to make better decisions about the downstream effects of technology so that we don’t cause unintentional harm inside complex social systems. And we can feel empowered to say “no” to technology when it’s not necessary so that we can live better, more connected lives and enjoy the many ways tech can and does enhance our world.

Meredith Broussard: Artificial Unintelligence: How Computers Misunderstand the World, p. 7

Technochauvinism is the belief that tech is always the solution. Although digital technology has been an ordinary part of scientific and bureaucratic life since the 1950s, and everyday life since the 1980s, sophisticated marketing campaigns still have most people convinced that tech is something new and potentially revolutionary.

Technochauvinism is often accompanied by fellow-traveler beliefs such as Ayn Randian meritocracy; technolibertarian political values; celebrating free speech to the extent of denying that online harassment is a problem; the notion that computers are more “objective” or “unbiased” because they distill questions and answers down to mathematical evaluation; and an unwavering faith that if the world just used more computers, and used them properly, social problems would disappear and we’d create a digitally enabled utopia. It’s not true. There has never been, nor will there ever be, a technological innovation that moves us away from the essential problems of human nature.

Meredith Broussard: Artificial Unintelligence: How Computers Misunderstand the World, p. 6

Our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed technology. That badly designed technology is getting in the way of everyday life rather than making life easier. Simple things like finding a new friend’s phone number or up-to-date email address have become time-consuming. The problem here, as in so many cases, is too much technology and not enough people. We turned over record-keeping to computational systems but fired all the humans who kept the information up-to-date. Now, since nobody goes through and makes sure all the contact information is accurate in every institutional directory, it is more difficult than ever to get in touch with people.

Douglas Engelbart: Augmenting Human Intellect: A Conceptual Framework, p. 131

By our view, we do not have to wait until we learn how the human mental processes work, we do not have to wait until we learn how to make computers more intelligent or bigger or faster, we can begin developing powerful and economically feasible augmentation systems on the basis of what we now know and have. Pursuit of further basic knowledge and improved machines will continue into the unlimited future, and will want to be integrated into the “art” and its improved augmentation systems—but getting started now will provide not only orientation and stimulation for these pursuits, but will give us improved problem-solving effectiveness with which to carry out the pursuits.

Douglas Engelbart: Augmenting Human Intellect: A Conceptual Framework, p. 9

Pervading all of the augmentation means is a particular structure or organization. While an untrained aborigine cannot drive a car through traffic, because he cannot leap the gap between his cultural background and the kind of world that contains cars and traffic, it is possible to move step by step through an organized training program that will enable him to drive effectively and safely. In other words, the human mind neither learns nor acts by large leaps, but by steps organized or structured so that each one depends upon previous steps.

Marshall McLuhan: The Medium Is the Massage, p. 26

All media are extensions of some human faculty — mental or physical.

The wheel is an extension of the foot. The book is an extension of the eye. Clothing is an extension of the skin. Electric circuitry is an extension of the central nervous system.

The extension of any one sense displaces the other senses and alters the way we think — the way we see the world and ourselves. When these changes are made, men change.

Marshall McLuhan: The Medium Is the Massage, p. 93

Professionalism is environmental. Amateurism is anti-environmental. Professionalism merges the individual into patterns of total environment. Amateurism seeks the development of the total awareness of the individual and the critical awareness of the groundrules of society. The amateur can afford to lose. The professional tends to classify and to specialize, to accept uncritically the groundrules of the environment. The groundrules provided by the mass response of his colleagues serve as a pervasive environment of which he is contentedly unaware. The “expert” is the man who stays put.

Steve Jobs: Secrets of Life

When you grow up you tend to get told the world is the way it is and your life is just to live your life inside the world. Try not to bash into the walls too much. Try to have a nice family life, have fun, save a little money.

That’s a very limited life. Life can be much broader once you discover one simple fact: Everything around you that you call life, was made up by people that were no smarter than you. And you can change it, you can influence it, you can build your own things that other people can use.

The most important thing is to shake off this erroneous notion that life is there and you’re just gonna live in it, versus embrace it, change it, improve it, make your mark upon it. And however you learn that, once you learn it, you’ll want to change life and make it better, cause it’s kind of messed up, in a lot of ways. Once you learn that, you’ll never be the same again.

Joseph Weizenbaum: Computer Power and Human Reason, p. 20

Many machines are functional additions to the human body, virtually prostheses. Some, like the lever and the steam shovel, extend the raw muscular power of their individual operators; some, like the microscope, the telescope, and various measuring instruments, are extensions of man’s sensory apparatus. Others extend the physical reach of man. The spear and the radio, for example, permit man to cast his influence over a range exceeding that of his arms and voice, respectively. Man’s vehicles make it possible for him to travel faster and farther than his legs alone would carry him, and they allow him to transport great loads over vast distances. It is easy to see how and why such prosthetic machines directly enhance man’s sense of power over the material world. And they have an important psychological effect as well: they tell man that he can remake himself. Indeed, they are part of the set of symbols man uses to recreate his past, i.e., to construct his history, and to create his future. They signify that man, the engineer, can transcend limitations imposed on him by the puniness of his body and of his senses. Once man could kill another animal only by crushing or tearing it with his hands; then he acquired the axe, the spear, the arrow, the ball fired from a gun, the explosive shell. Now charges mounted on missiles can destroy mankind itself. That is one measure of how far man has extended and remade himself since he began to make tools.

Joseph Weizenbaum: Computer Power and Human Reason, p. 32

The arrival of the Computer Revolution and the founding of the Computer Age have been announced many times. But if the triumph of a revolution is to be measured in terms of the profundity of the social revisions it entrained, then there has been no computer revolution.

Joseph Weizenbaum: Computer Power and Human Reason, p. 38

There is a myth that computers are today making important decisions of the kind that were earlier made by people. Perhaps there are isolated examples of that here and there in our society. But the widely believed picture of managers typing questions of the form “What shall we do now?” into their computers and then waiting for their computers to “decide” is largely wrong. What is happening instead is that people have turned the processing of information on which decisions must be based over to enormously complex computer systems. They have, with few exceptions, reserved for themselves the right to make decisions based on the outcome of such computing processes. People are thus able to maintain the illusion, and it is often just that, that they are after all the decisionmakers. But, as we shall argue, a computing system that permits the asking of only certain kinds of questions, that accepts only certain kinds of “data,” and that cannot even in principle be understood by those who rely on it, such a computing system has effectively closed many doors that were open before it was installed.

Joseph Weizenbaum: Computer Power and Human Reason, p. 130

Science and technology are sustained by their translations into power and control. To the extent that computers and computation may be counted as part of science and technology, they feed at the same table. The extreme phenomenon of the compulsive programmer teaches us that computers have the power to sustain megalomaniac fantasies. But that power of the computer is merely an extreme version of a power that is inherent in all self-validating systems of thought. Perhaps we are beginning to understand that the abstract systems—the games computer people can generate in their infinite freedom from the constraints that delimit the dreams of workers in the real world—may fail catastrophically when their rules are applied in earnest. We must also learn that the same danger is inherent in other magical systems that are equally detached from authentic human experience, and particularly in those sciences that insist they can capture the whole man in their abstract skeletal frameworks.

Joseph Weizenbaum: Computer Power and Human Reason, p. 207

The question I am trying to pursue here is, “What human objectives and purposes may not be appropriately delegated to computers?” We can design an automatic pilot, and delegate to it the task of keeping an airplane flying on a predetermined course. That seems an appropriate thing for machines to do. It is also technically feasible to build a computer system that will interview patients applying for help at a psychiatric out-patient clinic and produce their psychiatric profiles complete with charts, graphs, and natural-language commentary. The question is not whether such a thing can be done, but whether it is appropriate to delegate this hitherto human function to a machine.

Joseph Weizenbaum: Computer Power and Human Reason, p. 227

Computers can make judicial decisions, computers can make psychiatric judgments. They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not be given such tasks. They may even be able to arrive at “correct” decisions in some cases—but always and necessarily on bases no human being should be willing to accept.

There have been many debates on “Computers and Mind.” What I conclude here is that the relevant issues are neither technological nor even mathematical; they are ethical. They cannot be settled by asking questions beginning with “can.” The limits of the applicability of computers are ultimately statable only in terms of oughts. What emerges as the most elementary insight is that, since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom.

Joseph Weizenbaum: Computer Power and Human Reason, p. 255

Instrumental reason has made out of words a fetish surrounded by black magic. And only the magicians have the rights of the initiated. Only they can say what words mean. And they play with words and they deceive us. […]

The technologist argues again and again that views such as those expressed here are anti-technological, anti-scientific, and finally anti-intellectual. He will try to construe all arguments against his megalomanic visions as being arguments for the abandonment of reason, rationality, science, and technology, and in favor of pure intuition, feeling, drug-induced mindlessness, and so on. In fact, I am arguing for rationality. But I argue that rationality may not be separated from intuition and feeling. I argue for the rational use of science and technology, not for its mystification, let alone its abandonment. I urge the introduction of ethical thought into science planning. I combat the imperialism of instrumental reason, not reason.

Joseph Weizenbaum: Computer Power and Human Reason, p. 259

Technological inevitability can thus be seen to be a mere element of a much larger syndrome. Science promised man power. But, as so often happens when people are seduced by promises of power, the price exacted in advance and all along the path, and the price actually paid, is servitude and impotence. Power is nothing if it is not the power to choose. Instrumental reason can make decisions, but there is all the difference between deciding and choosing.

People make decisions all day long, every day. But they appear not to make choices. They are, as they themselves testify, like Winograd’s robot. One asks it “Why did you do that?” and it answers “Because this or that decision branch in my program happened to come out that way.” And one asks “Why did you get to that branch?” and it again answers in the same way. But its final answer is “Because you told me to.” Perhaps every human act involves a chain of calculations at what a systems engineer would call decision nodes. But the difference between a mechanical act and an authentically human one is that the latter terminates at a node whose decisive parameter is not “Because you told me to,” but “Because I chose to.”

Joseph Weizenbaum: Computer Power and Human Reason, p. 277

The computer is a powerful new metaphor for helping us to understand many aspects of the world, but it enslaves the mind that has no other metaphors and few other resources to call on. The world is many things, and no single framework is large enough to contain them all, neither that of man’s science nor that of his poetry, neither that of calculating reason nor that of pure intuition. And just as a love of music does not suffice to enable one to play the violin—one must also master the craft of the instrument and of music itself—so is it not enough to love humanity in order to help it survive. The teacher’s calling to teach his craft is therefore an honorable one. But he must do more than that: he must teach more than one metaphor, and he must teach more by the example of his conduct than by what he writes on the blackboard. He must teach the limitations of his tools as well as their power.

Seymour Papert: What’s the big idea? Toward a pedagogy of idea power

One can take two approaches to renovating School - or indeed anything else. The problem-solving approach identifies the many problems that afflict individual schools and tries to solve them. A systemic approach requires one to step back from the immediate problems and develop an understanding of how the whole thing works. Educators faced with day-to-day operation of schools are forced by circumstances to rely on problem solving for local fixes. They do not have time for “big ideas”.

Tim Berners-Lee: Vannevar Bush Symposium

So, I still have a dream that the web could be less of a television channel, and more of a sea of interactive shared knowledge.

Alan Kay: Vannevar Bush Symposium

Knowing more than your own field is really helpful in thinking creatively. I’ve always thought that one of the reasons the 1960s was so interesting is that nobody was a computer scientist back then. Everybody who came into it came into it with lots of other knowledge and interests. Then they tried to figure out what computers were, and the only place they could use for analogies were other areas. So we got some extremely interesting ideas from that.

And of course, the reason being educated is important is simply because you don’t have any orthogonal contexts if you don’t have any other kinds of knowledge to think with. Engineering is one of the hardest fields to be creative in, just because it’s all about optimizing, and you don’t optimize without being very firmly anchored to the context you’re in. What we’re talking about here is something that is not about optimization, but actually about rotating the point of view.

Neil Postman: Informing Ourselves to Death

There is no escaping from ourselves. The human dilemma is as it has always been, and we solve nothing fundamental by cloaking ourselves in technological glory.

Neil Postman: Informing Ourselves to Death

The average person today is about as naive as was the average person in the Middle Ages. In the Middle Ages people believed in the authority of their religion, no matter what. Today, we believe in the authority of our science, no matter what.

Jonathan Blow: Preventing the Collapse of Civilization

My thesis is that software is actually in decline right now. […] I don’t think most people would believe me if I say that, it sure seems like it’s flourishing. So I have to convince you at least that this is a plausible perspective.

What I’ll say about that is that collapses, like the Bronze Age collapse, were massive. All civilizations were destroyed, but it took a hundred years. So if you’re at the beginning of that collapse, in the first 20 years you might think “Well things aren’t as good as they were 20 years ago but it’s fine”.

Of course I expect the reply to what I’m saying to be “You’re crazy! Software is doing great, look at all these internet companies that are making all this money and changing the way that we live!” I would say yes, that is all happening. But what is really happening is that software has been free riding on hardware for the past many decades. Software gets “better” because it has better hardware to run on.

Don Norman: 21st Century Design

Most of our technical fields that study systems leave out people, except there’s some object in there called people. And every so often there are people who are supposed to do something to make sure the system works. But there’s never any analysis of what it takes to do that, never any analysis of whether this is really an activity that’s well suited for people, and when people burn out or make errors etc., we blame the people instead of the way the system was built. Which doesn’t at all take into account people’s abilities and what we’re really good at – and also what we’re bad at.

Bill Buxton: Socializing technology for the mobile human

Everybody’s into accelerators and incubators and wants to be a millionaire by the time they’re 24 by doing the next big thing. So let me tell you what I think about the next big thing: there’s no such thing as the next big thing! In fact chasing the next big thing is what is causing the problem.

The next big thing isn’t a thing. The next big thing is a change in the relationship amongst the things that are already there. Societies don’t transform by making new things but by having their internal relationships change and develop.

I’d argue that what we know about sociology, and how we think about things like kinship, moral order, social conventions (all of those things that we know about and have a language for through social science), apply equally to the technologies that we must start making. If we don’t have that in our mindset, we’re just gonna make a bunch of gadgets, a bunch of doodads, as opposed to building an ecosystem that’s worthy of human aspirations, and of actual technological potential.

Benedict Evans: Notes on AI Bias

I often think that the term ‘artificial intelligence’ is deeply unhelpful in conversations like this. It creates the largely false impression that we have actually created, well, intelligence - that we are somehow on a path to HAL 9000 or Skynet - towards something that actually understands. We aren’t. These are just machines, and it’s much more useful to compare them to, say, a washing machine. A washing machine is much better than a human at washing clothes, but if you put dishes in a washing machine instead of clothes and press start, it will wash them. They’ll even get clean. But this won’t be the result you were looking for, and it won’t be because the system is biased against dishes. A washing machine doesn’t know what clothes or dishes are - it’s just a piece of automation, and it is not conceptually very different from any previous wave of automation.

That is, just as for cars, or aircraft, or databases, these systems can be both extremely powerful and extremely limited, and depend entirely on how they’re used by people, and on how well or badly intentioned and how educated or ignorant people are of how these systems work.

Weiwei Hsu: Defining the Dimensions of the “Space” of Computing

While trending technologies dominate tech news and influence what we believe is possible and probable, we are free to choose. We don’t have to accept what monopolies offer. We can still inform and create the future on our own terms. We can return to the values that drove the personal computer revolution and inspired the first-generation Internet.

Glass rectangles and black cylinders are not the future. We can imagine other possible futures — paths not taken — by searching within a “space of alternative” computing systems, as Simon has suggested. In this “space,” even though some dimensions are currently less recognizable than others, by investigating and hence illuminating the less-explored dimensions together, we can co-create alternative futures.

Weiwei Hsu: Defining the Dimensions of the “Space” of Computing

Traditionally, we have thought of computing not in terms of a space of alternatives but in terms of improvements over time. Moore’s Law. Faster. Cheaper. More processors. More RAM. More mega-pixels. More resolution. More sensors. More bandwidth. More devices. More apps. More users. More data. More “engagement.” More everything.

What’s more and more has also become less and less — over time. The first computing machines were so large they filled entire rooms. Over the last fifty years, computers shrank so much that one would fit on a desktop, then in a shirt pocket. Today, computers-on-a-chip are built into “smart” devices all around us. And now they are starting to merge into our environments, becoming invisible and ubiquitous.

Clearly, what we think of as “computing” has changed — and will continue to change. No wonder, then, that most of our models of computing are progressions: timelines.

Tim Berners-Lee: Hypertext and Our Collective Destiny

It is, then, a good time 50 years on to sit back and consider to what extent we have actually made life easier. We have access to information: but have we been solving problems? Well, there are many things that are much easier for individuals today than 5 years ago. Personally I don’t feel that the web has made great strides in helping us work as a global team.

Perhaps I should explain where I’m coming from. I had (and still have) a dream that the web could be less of a television channel and more of an interactive sea of shared knowledge. I imagine it immersing us as a warm, friendly environment made of the things we and our friends have seen, heard, believe or have figured out. I would like it to bring our friends and colleagues closer, in that by working on this knowledge together we can come to better understandings. If misunderstandings are the cause of many of the world’s woes, then can we not work them out in cyberspace? And, having worked them out, we leave for those who follow a trail of our reasoning and assumptions for them to adopt, or correct.

Matthew Butterick: Rebuilding the Typographic Society

Now and then there’s a bigger event—let’s call it a Godzilla moment—that causes a lot of destruction. And what is the Godzilla? Usually the Godzilla is technology. Technology arrives, and it wants to displace us—take over something that we were doing. That’s okay when technology removes a burden or an annoyance.

But sometimes, when technology does that, it can constrict the space we have for expressing our humanity. Then, we have to look for new outlets for ourselves, or what happens? What happens is that this zone of humanity keeps getting smaller. Technology invites us to accept those smaller boundaries, because it’s convenient. It’s relaxing. But if we do that long enough, what’s going to happen is we’re going to stagnate. We’re going to forget what we’re capable of, because we’re just playing in this really tiny territory.

The good news is that when Godzilla burns down the city with his fiery breath, we have space to rebuild. There’s an opportunity for us. But we can’t be lazy about it.

David Perell: What the Hell is Going On?

Like fish in water, we’re blind to how the technological environment shapes our behavior. The invisible environment we inhabit falls beneath the threshold of perception. Everything we do and think is shaped by the technologies we build and implement. When we alter the flow of information through society, we should expect radical transformations in commerce, education, and politics. Right now, these transformations are contributing to anger and anxiety, especially in politics.

By understanding information flows, we gain a measure of control over them. Understanding how shifts in information flow impact society is the first step towards building a better world, so we can make technology work for us, not against us.

Tim O’Reilly: WTF and the importance of human/tool co-evolution

I think one of the big shifts for the 21st century is to change our sense of what collective intelligence is. Because we think of it as somehow the individual being augmented to be smarter, to make better decisions, maybe to work with other people in a more productive way. But in fact, with many of the tools of collective intelligence, we are contributing to an intelligence that is outside of us. We are part of it and we are feeding into it.

It changes who we are, how we think. Shapes us as we shape it. And the question is whether we’re gonna manage the machine or whether it will manage us. Now we tell ourselves in Silicon Valley that we’re in charge. But you know, we basically built this machine, we think we know what it’s going to do and it suddenly turns out not quite the way we expected.

We have to think about that fundamental goal that we give these systems. Because, yes there is all this intelligence, this new co-evolution and combination of human and machine, but ultimately it’s driven by what we tell it to optimize for.

Nicholas Negroponte: Big Idea Famine

I believe that 30 years from now people will look back at the beginning of our century and wonder what we were doing and thinking about big, hard, long-term problems, particularly those of basic research. They will read books and articles written by us in which we congratulate ourselves about being innovative. The self-portraits we paint today show a disruptive and creative society, characterized by entrepreneurship, start-ups and big company research advertised as moonshots. Our great-grandchildren are certain to read about our accomplishments, all the companies started, and all the money made. At the same time, they will experience the unfortunate knock-on effects of an historical (by then) famine of big thinking.

We live in a dog-eat-dog society that emphasizes short-term competition over long-term collaboration. We think in terms of winning, not in terms of what might be beneficial for society. Kids aspire to be Mark Zuckerberg, not Alan Turing.

Matthew Butterick: The Bomb in the Garden

Now, you may say “hey, but the web has gotten so much better looking over 20 years.” And that’s true. But on the other hand, I don’t really feel like that’s the right benchmark, unless you think that the highest role of design is to make things pretty. I don’t.

I think of design excellence as a principle. A principle that asks this: Are you maximizing the possibilities of the medium?

That’s what it should mean. Because otherwise it’s too easy to congratulate ourselves for doing nothing. Because tools & technologies are always getting better. They expand the possibilities for us. So we have to ask ourselves: are we keeping up?

Alan Kay: The Best Way to Predict the Future is to Create It. But Is It Already Too Late?

Albert Einstein’s quote “We cannot solve our problems with the same levels of thinking that we used to create them” is one of my favorite quotes. I like this idea because Einstein is suggesting something qualitative. That it is not doing more of what we’re doing. It means if we’ve done things with technology that have gotten us in a bit of a pickle, doing more things with technology at the same level of thinking is probably gonna make things worse.

And there is a corollary with this: if your thinking abilities are below threshold you are going to be in real trouble, because you will not realize it until you have done yourself in.

Virtually everybody in computing has almost no sense of human history and context of where we are and where we are going. So I think of much of the stuff that has been done as inverse vandalism. Inverse vandalism is making things just because you can.

Mark Ritson: Don’t be seduced by the pornography of change

Marketing is a fascinating discipline in that most people who practice it have no idea about its origins and foundations, little clue about how to do the job properly in the present, but unbounded enthusiasm to speculate about the future and what it will bring. If marketers became doctors they would spend their time telling patients not what ailed them, but showing them an article about the future of robotic surgery in the year 2030. If they took over as accountants they would advise clients to forget about their current tax returns because within 50 years income will become obsolete thanks to lasers and 3D printing.

There are probably two good reasons for this obsession with the future over the practical reality of the present. First, marketing has always managed to attract a significant proportion of people who are attracted to the shiny stuff. It should be populated by people who get turned on by customer data and brand strategy, but both groups are eclipsed by an enormous superficial army of glitter seekers who end up in marketing.

Second, ambitious and overstated projections into the future are fantastic at garnering headlines and hits but have the handy advantage of being impossible to fact check. If I wrote a column saying the best way to buy media was with a paper bag on your head, with strings of Christmas lights wrapped around it, you could point to the fact that nobody is doing this as the basis for rejecting my point of view. But if I make that prediction for the year ahead I have the safety of time to protect my idiocy.

If your job is to talk about what speech recognition or artificial intelligence will mean for marketing then you have an inherent desire to make it, and you, as important as possible. Marketers take their foot from the brake pedal of reality and put all their pressure on the accelerator of horseshit in order to get noticed, and future predictions provide the ideal place to drive as fast as possible.

Josh Clark: Design in the Era of the Algorithm

Let’s not codify the past. On the surface, you’d think that removing humans from a situation might eliminate racism or stereotypes or any very human bias. In fact, the very real risk is that we’ll seal our bias—our past history—into the very operating system itself.

Our data comes from the flawed world we live in. In the realms of hiring and promotion, the historical data hardly favors women or people of color. In the cases of predictive policing, or repeat-offender risk algorithms, the data is both unfair and unkind to black men. The data bias codifies the ugly aspects of our past.

Rooting out this kind of bias is hard and slippery work. Biases are deeply held—and often invisible to us. We have to work hard to be conscious of our unconscious—and doubly so when it creeps into data sets. This is a data-science problem, certainly, but it’s also a design problem.

Steve Krouse: The “Next Big Thing” is a Room

Our computers have lured us into a cage of our own making. We’ve reduced ourselves to disembodied minds, strained eyes, and twitching, clicking, typing fingertips. Gone are our arms and legs, back, torsos, feet, toes, noses, mouths, palms, and ears. When we are doing our jobs, our vaunted knowledge work, we are a sliver of ourselves. The rest of us hangs on uselessly until we leave the office and go home.

Worse than pulling us away from our bodies, our devices have ripped us from each other. Where are our eyes when we speak with our friends, walk down the street, lay in bed, drive our cars? We know where they should be, and yet we also know where they end up much of the time. The tiny rectangles in our pockets have grabbed our attention almost completely.

Audrey Watters: Machine Teaching, Machine Learning, and the History of the Future of Public Education

Educational films were going to change everything. Teaching machines were going to change everything. Educational television was going to change everything. Virtual reality was going to change everything. The Internet was going to change everything. The Macintosh computer was going to change everything. The iPad was going to change everything. Khan Academy was going to change everything. MOOCs were going to change everything. And on and on and on.

Needless to say, movies haven’t replaced textbooks. Computers and YouTube videos haven’t replaced teachers. The Internet has not dismantled the university or the school house.

Not for lack of trying, no doubt. And it might be the trying that we should focus on as much as the technology.

The transformational, revolutionary potential of these technologies has always been vastly, vastly overhyped. And it isn’t simply, as some education reformers like to tell it, that it’s because educators or parents are resistant to change. It’s surely in part because the claims that marketers make are often just simply untrue.

Peter Gasston: People don’t change

Technology matches our desires, it doesn’t make them. People haven’t become more vain because now we have cameras. Cameras have been invented and they became popular because we’ve always been a bit vain, we’ve always wanted to see ourselves. It’s just the technology was never in place to enable that expression of our characters before.

Alan Kay: Education That Takes Us To The 22nd Century

Probably the most important thing I can urge on you today is to try and understand that computing is not exactly what you think it is. […] You have to understand this. What happened when the internet got done and a few other things back in the 70s or so was a big paradigm shift in computing and it hasn’t spilled out yet. But if you’re looking ahead to the 22nd century this is what you have to understand otherwise you’re always going to be steering by looking in the rearview mirror.

Alan Kay: Education That Takes Us To The 22nd Century

So a good question for people who are dealing with computing is: what if what’s important about computing is deeply hidden? I can tell you, as far as this one goes, most of the computing that is done in most of industry completely misses most of what’s interesting about computing. They are basically at a first level of exposure to it and they’re trying to optimize that. Think about that, because that was okay fifty years ago.

Alan Kay: Education That Takes Us To The 22nd Century

When we get fluent in powerful ideas, they are like adding new brain tissue that nature didn’t give us. It’s worthwhile thinking about what it means to get fluent in something like calculus and to realize that a normal person fluent in calculus can outthink Archimedes. If you’re fluent at reading you can cover more ground than anybody in the antiquity could in an oral culture.

Audrey Watters: Why History Matters

“Technology is changing faster than ever” – this is a related, repeated claim. It’s a claim that seems to be based on history, one that suggests that, in the past, technological changes were slow; now, they’re happening so fast and we’re adopting new technologies so quickly – or so the story goes – that we can no longer make any sense of what is happening around us, and we’re just all being swept along in a wave of techno-inevitability.

Needless to say, I don’t think the claim is true – or at the very least, it is a highly debatable one. It depends on how you count and what you count as technological change and how you measure the pace of change. Some of this, I’d argue, is simply a matter of confusing technology consumption for technology innovation. Some of this is a matter of confusing upgrades for breakthroughs – Google updating Docs more regularly than Microsoft updates Office or Apple releasing a new iPhone every year might not be the best rationale for insisting we are experiencing rapid technological change. Moreover, much of the pace of change can be accounted for by the fact that many new technologies are built atop – quite literally – pre-existing systems: railroads followed the canals; telegraphs followed the railroads; telephones followed the telegraphs; cable television followed the phone lines; most of us (in the US) probably use an Internet provider today that began as either a phone company or a cable company. If, indeed, Internet adoption has moved rapidly, it’s because it’s utilized existing infrastructure as much as because new technologies are somehow inherently zippier.

“Technology is changing faster than ever.” It makes for a nice sound bite, to be sure. It might feel true. (It’s probably felt true at every moment in time.) And surely it’s a good rhetorical hook to hang other directives upon: “you simply must buy this new shiny thing today because if you don’t then the world is going to pass you by.”

Alan Kay: Rethinking CS Education

I hate to see computing and computer science watered down to some terrible kind of engineering that the Babylonians might have failed at.

That is pathetic! And I’m saying it in this strong way because you need to realize that we’re in the middle of a complete form of bullshit that has grown up out of the pop culture.

Baldur Bjarnason: Neither Paper Nor Digital Does Active Reading Well

A recurring theme in software development is that the more you dig into the research, the greater the distance between what the research actually seems to say and what the industry practices.

Develop a familiarity with, for example, Alan Kay’s or Douglas Engelbart’s visions for the future of computing and you are guaranteed to become thoroughly dissatisfied with the limitations of every modern OS.

Reading up on hypertext theory and research, especially on hypertext as a medium, is a recipe for becoming annoyed at The Web.

Catching up on usability research throughout the years makes you want to smash your laptop against the wall in anger. And trying to fill out forms online makes you scream ‘it doesn’t have to be this way!’ at the top of your lungs.

That software development doesn’t deal with research or attempts to get at hard facts is endemic to the industry.

John Battelle: If Software Is Eating the World, What Will Come Out the Other End?

But the world is not just software. The world is physics, it’s crying babies and shit on the sidewalk, it’s opioids and ecstasy, it’s car crashes and Senate hearings, lovers and philosophers, lost opportunities and spinning planets around untold stars. The world is still real. Software hasn’t eaten it as much as bound it in a spell, temporarily I hope, while we figure out what comes next.

Software – data, code, algorithms, processing – software has dressed the world in new infrastructure. But this is a conversation, not a process of digestion. It is a conversation between the physical and the digital, a synthesis we must master if we are to avoid terrible fates, and continue to embrace fantastic ones.

Isaac Asimov: The Relativity of Wrong

In every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern “knowledge” is that it is wrong.

When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

The basic trouble, you see, is that people think that “right” and “wrong” are absolute; that everything that isn’t perfectly and completely right is totally and equally wrong.

James Bridle: Known Unknowns

Cooperation between human and machine turns out to be a more potent strategy than trusting to the computer alone.

This strategy of cooperation, drawing on the respective skills of human and machine rather than pitting one against the other, may be our only hope for surviving life among machines whose thought processes are unknowable to us. Nonhuman intelligence is a reality—it is rapidly outstripping human performance in many disciplines, and the results stand to be catastrophically destructive to our working lives. These technologies are becoming ubiquitous in everyday devices, and we do not have the option of retreating from or renouncing them. We cannot opt out of contemporary technology any more than we can reject our neighbors in society; we are all entangled.

Weiwei Hsu: The “Space” of Computing

Today, we have been building and investing so much of our time into the digital world, and we have forgotten to take a step back and look at the larger picture. Not only do we waste other people’s time by making them addicted to this device world, we have also created a lot of waste in the real world. At the same time we’re drowning in piles and piles of information because we never took the time to architect a system that enables us to navigate through them. We’re trapped in these rectangular screens and we have often forgotten how to interact with the real world, with real humans. We have been building and hustling - but hey, we can also slow down and rethink how we want to dwell in both the physical world and the digital world.


At some point in the future we will leave this world, and what we’ll leave behind are spaces and lifestyles that we’ve shaped for our grandchildren. So I would like to invite you to think about what we want to leave behind as we continue to build both digitally and physically. Can we be more intentional, so that we shape and leave behind a more humane environment?

Stewart Brand: Pace Layering: How Complex Systems Learn and Keep Learning

Fast learns, slow remembers. Fast proposes, slow disposes. Fast is discontinuous, slow is continuous. Fast and small instructs slow and big by accrued innovation and by occasional revolution. Slow and big controls small and fast by constraint and constancy. Fast gets all our attention, slow has all the power.

All durable dynamic systems have this sort of structure. It is what makes them adaptable and robust.

Mike Caulfield: The Garden and the Stream: A Technopastoral

I find it hard to communicate with a lot of technologists anymore. It’s like trying to explain literature to someone who has never read a book. You’re asked “So basically a book is just words someone said written down?” And you say no, it’s more than that. But how is it more than that?

This is my attempt to abstract from this experience something more general about the way in which we collaborate on the web, and the way in which it is currently very badly out of balance.

I am going to make the argument that the predominant form of the social web — that amalgam of blogging, Twitter, Facebook, forums, Reddit, Instagram — is an impoverished model for learning and research and that our survival as a species depends on us getting past the sweet, salty fat of “the web as conversation” and on to something more timeless, integrative, iterative, something less personal and less self-assertive, something more solitary yet more connected.

I don’t expect to convince many of you, but I’ll take what I can get.

Shan Carter and Michael Nielsen: Using Artificial Intelligence to Augment Human Intelligence

Unfortunately, many in the AI community greatly underestimate the depth of interface design, often regarding it as a simple problem, mostly about making things pretty or easy-to-use. In this view, interface design is a problem to be handed off to others, while the hard work is to train some machine learning system.

This view is incorrect. At its deepest, interface design means developing the fundamental primitives human beings think and create with. This is a problem whose intellectual genesis goes back to the inventors of the alphabet, of cartography, and of musical notation, as well as modern giants such as Descartes, Playfair, Feynman, Engelbart, and Kay. It is one of the hardest, most important and most fundamental problems humanity grapples with.

Mary Catherine Bateson: How To Be a Systems Thinker

There has been so much excitement and sense of discovery around the digital revolution that we’re at a moment where we overestimate what can be done with AI, certainly as it stands at the moment.

One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it’s willing to make projections when it hasn’t been provided with everything that would be relevant to those projections.

Paul Robert Lloyd: Fantasies of the Future: Design in a World Being Eaten by Software

We need to take it upon ourselves to be more critical and introspective. This shouldn’t be too hard. After all, design is all about questioning what already exists and asking how it could be improved for the better.

What processes can we develop to give us time to properly judge a feature; to consider who it is serving, and if it can be misused? To question those default settings?

Perhaps we need a new set of motivational posters. Rather than “move fast and break things”, perhaps “slow down and ask more questions”.

Ron Gilbert: Storytime

The grand struggle of creativity can often be about making yourself stupid again. It’s like turning yourself into a child who views the world with wonderment and excitement.

Creating something meaningful isn’t easy, it’s hard. But that’s why we should do it. If you ever find yourself being comfortable with what you’re making or creating, then you need to push yourself. Push yourself out of your comfort zone and push yourself to the point of failure and then beyond.

When I was a kid, we would go skiing a lot. At the end of the day all the skiers would come into the lodge, and I used to think it was the bad skiers who were covered in snow and the good skiers who were all clean, with no snow on them. But it turns out the exact opposite is true: it was the good skiers who were covered in snow, from pushing themselves beyond their limits and into their breaking points, getting better and then pushing themselves harder. Creativity is the same thing. You push hard, you push until you’re scared and afraid, you push until you break, you push until you fall, and then you get up and you do it again. Creativity is really a journey. It’s a wonderful journey where you start out as one person and end as another.

Nicky Case: How To Become A Centaur

That’s why Doug [Engelbart] tied that brick to a pencil — to prove a point. Of all the tools we’ve created to augment our intelligence, writing may be the most important. But when he “de-augmented” the pencil, by tying a brick to it, it became much, much harder to even write a single word. And when you make it hard to do the low-level parts of writing, it becomes near impossible to do the higher-level parts of writing: organizing your thoughts, exploring new ideas and expressions, cutting it all down to what’s essential. That was Doug’s message: a tool doesn’t “just” make something easier — it allows for new, previously-impossible ways of thinking, of living, of being.

Nicky Case: How To Become A Centaur

Human nature, for better or worse, doesn’t change much from millennia to millennia. If you want to see the strengths that are unique and universal to all humans, don’t look at the world-famous award-winners — look at children. Children, even at a young age, are already proficient at: intuition, analogy, creativity, empathy, social skills. Some may scoff at these for being “soft skills”, but the fact that we can make an AI that plays chess, yet not one that can hold a normal five-minute conversation, is proof that these skills only seem “soft” to us because evolution’s already put in the 3.5 billion years of hard work for us.

Nicky Case: How To Become A Centaur

If there’s just one idea you take away from this entire essay, let it be Mother Nature’s most under-appreciated trick: symbiosis.

It’s an Ancient Greek word that means: “living together.” Symbiosis is when flowers feed the bees, and in return, bees pollinate the flowers. It’s when you eat healthy food to nourish the trillions of microbes in your gut, and in return, those microbes break down your food for you. It’s when, 1.5 billion years ago, a cell swallowed a bacterium without digesting it, the bacterium decided it was really into that kind of thing, and in return, the bacterium — which we now call “mitochondria” — produces energy for its host.

Symbiosis shows us you can have fruitful collaborations even if you have different skills, or different goals, or are even different species. Symbiosis shows us that the world often isn’t zero-sum — it doesn’t have to be humans versus AI, or humans versus centaurs, or humans versus other humans. Symbiosis is two individuals succeeding together not despite, but because of, their differences. Symbiosis is the “+”.

Eric Meyer: Inadvertent Algorithmic Cruelty

Algorithms are essentially thoughtless. They model certain decision flows, but once you run them, no more thought occurs. To call a person “thoughtless” is usually considered a slight, or an outright insult; and yet, we unleash so many literally thoughtless processes on our users, on our lives, on ourselves.

Jon Gold: Finding the Exhaust Ports

Lamenting about the tech industry’s ills when self-identifying as a technologist is a precarious construction. I care so deeply about using the personal computer for liberation & augmentation. I’m so, so burned out by 95% of the work happening in the tech industry. Silicon Valley mythologizes newness, without stopping to ask “why?”. I’m still in love with technology, but increasingly with nuance into that which is for us, and that which productizes us.

Perhaps when Bush prophesied lightning-quick knowledge retrieval, he didn’t intend for that knowledge to be footnoted with Outbrain adverts. Licklider’s man-computer symbiosis would have been frustrated had it been crop-dusted with notifications. Ted Nelson imagined many wonderfully weird futures for the personal computer, but I don’t think gamifying meditation apps was one of them.

Maciej Cegłowski: Legends of the Ancient Web

Radio brought music into hospitals and nursing homes, it eased the profound isolation of rural life, it let people hear directly from their elected representatives. It brought laughter and entertainment into every parlor, saved lives at sea, gave people weather forecasts for the first time.

But radio waves are just oscillating electromagnetic fields. They really don’t care how we use them. All they want is to go places at the speed of light.

It is hard to accept that good people, working on technology that benefits so many, with nothing but good intentions, could end up building a powerful tool for the wicked.

But we can’t afford to re-learn this lesson every time.

During the Rwandan genocide in 1994, one radio station, Radio Télévision Libre des Mille Collines, is estimated to have instigated the deaths of 50,000 people. They radicalized and goaded on the killers with a mix of jokes, dance music, and hatred.

It’s no accident that whenever there’s a coup, the first thing the rebels seize is the radio station.

Technology interacts with human nature in complicated ways, and part of human nature is to seek power over others, and manipulate them. Technology concentrates power.

Alan Kay: Some excerpts from recent Alan Kay emails

Socrates didn’t charge for “education” because when you are in business, the “customer starts to become right”. Whereas in education, the customer is generally “not right”. Marketeers are catering to what people *want*, educators are trying to deal with what they think people *need* (and this is often not at all what they *want*). […]

Yet another perspective is to note that one of the human genetic “built-ins” is “hunting and gathering” – this requires resources to “be around”, and is essentially incremental in nature. It is not too much of an exaggeration to point out that most businesses are very like hunting-and-gathering processes, and think of their surrounds as resources put there by god or nature for them. Most don’t think of the resources in our centuries as actually part of a human-made garden via inventions and cooperation, and that the garden has to be maintained and renewed.

Mark Weiser: The World is not a Desktop

The idea, as near as I can tell, is that the ideal computer should be like a human being, only more obedient. Anything so insidiously appealing should immediately give pause. Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstanding, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer I need to talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the center of attention.

Stephen Fry: The future of humanity and technology

Above all, be prepared for the bullshit, as AI is lazily and inaccurately claimed by every advertising agency and app developer. Companies will make nonsensical claims like “our unique and advanced proprietary AI system will monitor and enhance your sleep” or “let our unique AI engine maximize the value of your stock holdings”. Yesterday they would have said “our unique and advanced proprietary algorithms” and the day before that they would have said “our unique and advanced proprietary code”. But let’s face it, they’re almost always talking about the most basic software routines. The letters A and I will become degraded and devalued by overuse in every field in which humans work. Coffee machines, light switches, Christmas trees will be marketed as AI proficient, AI savvy or AI enabled. But despite this inevitable opportunistic nonsense, reality will bite.


If we thought the Pandora’s jar that ruined the utopian dream of the internet contained nasty creatures, just wait till AI has been overrun by the malicious, the greedy, the stupid and the maniacal. […] We sleepwalked into the internet age and we’re now going to sleepwalk into the age of machine intelligence and biological enhancement. How do we make sense of so much futurology screaming in our ears?


Perhaps the most urgent need might seem counterintuitive. While the specialist bodies and institutions I’ve mentioned are necessary, we surely need to redouble our efforts to understand who we humans are before we can begin to grapple with the nature of what machines may or may not be. So the arts and humanities strike me as more important than ever. Because the more machines rise, the more time we will have to be human, and to fulfill and develop our true natures to their uttermost.

Zeynep Tufekci: We’re building a dystopia just to make people click on ads

We use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I’ve written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it’s not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it’s not the intent or the statements people in technology make that matter, it’s the structures and business models they’re building. And that’s the core of the problem. […]

So what can we do? This needs to change. Now, I can’t offer a simple recipe, because we need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to face and try to deal with the lack of transparency created by the proprietary algorithms, the structural challenge of machine learning’s opacity, all this indiscriminate data that’s being collected about us. We have a big task in front of us. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won’t be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don’t see how we can postpone this conversation anymore. These structures are organizing how we function and they’re controlling what we can and we cannot do. And many of these ad-financed platforms, they boast that they’re free. In this context, it means that we are the product that’s being sold. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.

Florian Cramer: Crapularity Hermeneutics

From capturing to reading data, interpretation and hermeneutics thus creep into all levels of analytics. Biases and discrimination are only the extreme cases that make this mechanism most clearly visible. Interpretation thus becomes a bug, a perceived system failure, rather than a feature or virtue. As such, it exposes the fragility and vulnerabilities of data analytics.


The paradox of big data is that it both affirms and denies this “interpretative nature of knowledge”. Just like the Oracle of Delphi, it is dependent on interpretation. But unlike the oracle priests, its interpretative capability is limited by algorithmics – so that the limitations of the tool (and, ultimately, of using mathematics to process meaning) end up defining the limits of interpretation.

L.M. Sacasas: Resisting the Habits of the Algorithmic Mind

Machines have always done things for us, and they are increasingly doing things for us and without us. Increasingly, the human element is displaced in favor of faster, more efficient, more durable, cheaper technology. And, increasingly, the displaced human element is the thinking, willing, judging mind. Of course, the party of the concerned is most likely the minority party. Advocates and enthusiasts rejoice at the marginalization or eradication of human labor in its physical, mental, emotional, and moral manifestations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a golden age of leisure. Critics, meanwhile, and I count myself among them, struggle to articulate a compelling and reasonable critique of this scramble to outsource various dimensions of the human experience.

Danah Boyd: Your Data is Being Manipulated

The tech industry is no longer the passion play of a bunch of geeks trying to do cool shit in the world. It’s now the foundation of our democracy, economy, and information landscape.

We no longer have the luxury of only thinking about the world we want to build. We must also strategically think about how others want to manipulate our systems to do harm and cause chaos.

Audrey Watters: The Best Way to Predict the Future is to Issue a Press Release

Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues, to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

David Byrne: Eliminating the Human

I have a theory that much recent tech development and innovation over the last decade or so has had an unspoken overarching agenda—it has been about facilitating the need for LESS human interaction. It’s not a bug—it’s a feature. We might think Amazon was about selling us books we couldn’t find locally—and it was and what a brilliant idea—but maybe it was also just as much about eliminating human interaction. I see a pattern emerging in the innovative technology that has gotten the most attention, gets the bucks and often, no surprise, ends up getting developed and implemented. What much of this technology seems to have in common is that it removes the need to deal with humans directly. The tech doesn’t claim or acknowledge this as its primary goal, but it seems to often be the consequence. I’m sort of thinking maybe it is the primary goal. There are so many ways imagination can be manifested in the technical sphere. Many are wonderful and seem like social goods, but allow me a little conspiracy mongering here—an awful lot of them have the consequence of lessening human interaction.

Alan Kay: User Interface: A Personal View

McLuhan’s claim that the printing press was the dominant force that transformed the hermeneutic Middle Ages into our scientific society should not be taken too lightly — especially because the main point is that the press didn’t do it just by making books more available, it did it by changing the thought patterns of those who learned to read.

Though much of what McLuhan wrote was obscure and arguable, the sum total to me was a shock that reverberates even now. The computer is a medium! I had always thought of it as a tool, perhaps a vehicle—a much weaker conception. What McLuhan was saying is that if the personal computer is a truly new medium then the very use of it would actually change the thought patterns of an entire civilization. He had certainly been right about the effects of the electronic stained-glass window that was television—a remedievalizing tribal influence at best. The intensely interactive and involving nature of the personal computer seemed an antiparticle that could annihilate the passive boredom invoked by television. But it also promised to surpass the book to bring about a new kind of renaissance by going beyond static representations to dynamic simulation. What kind of a thinker would you become if you grew up with an active simulator connected, not just to one point of view, but to all the points of view of the ages represented so they could be dynamically tried out and compared?

Sebastian Deterding: Memento Product Mori: Of ethics in digital product design

Why, especially for us in the digital industry – although we are automating away more and more and more of our work and we’re becoming wealthier and wealthier by every measure – do we feel like we’re more and more short of time, overwhelmed and overworked? Or to put the question differently: Do you remember when email was fun?


The weird hard truth is: this is us. We, the digital industry, the people that are working in it are the ones who make everything, everything in our environment and work life ever more connected, fast, smooth, compelling, addicting even. The fundamental ethical contradiction for us is that we, the very people who suffer the most and organize the most against digital acceleration, are the very ones who further it.

Audrey Watters: Driverless Ed-Tech: The History of the Future of Automation in Education

“Put me out of a job.” “Put you out of a job.” “Put us all out of work.” We hear that a lot, with varying levels of glee and callousness and concern. “Robots are coming for your job.”

We hear it all the time. To be fair, of course, we have heard it, with varying frequency and urgency, for about 100 years now. “Robots are coming for your job.” And this time – this time – it’s for real.

I want to suggest that this is not entirely a technological proclamation. Robots don’t do anything they’re not programmed to do. They don’t have autonomy or agency or aspirations. Robots don’t just roll into the human resources department on their own accord, ready to outperform others. Robots don’t apply for jobs. Robots don’t “come for jobs.” Rather, business owners opt to automate rather than employ people. In other words, this refrain that “robots are coming for your job” is not so much a reflection of some tremendous breakthrough (or potential breakthrough) in automation, let alone artificial intelligence. Rather, it’s a proclamation about profits and politics. It’s a proclamation about labor and capital.

Alan Kay and Adele Goldberg: Personal Dynamic Media

“Devices” which variously store, retrieve, or manipulate information in the form of messages embedded in a medium have been in existence for thousands of years. People use them to communicate ideas and feelings both to others and back to themselves. Although thinking goes on in one’s head, external media serve to materialize thoughts and, through feedback, to augment the actual paths the thinking follows. Methods discovered in one medium provide metaphors which contribute new ways to think about notions in other media.

For most of recorded history, the interactions of humans with their media have been primarily nonconversational and passive in the sense that marks on paper, paint on walls, even “motion” pictures and television, do not change in response to the viewer’s wishes.


Every message is, in one sense or another, a simulation of some idea. It may be representational or abstract. The essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium itself, can be all other media if the embedding and viewing methods are sufficiently well provided. Moreover, this new “metamedium” is active—it can respond to queries and experiments—so that the messages may involve the learner in a two-way conversation. This property has never been available before except through the medium of an individual teacher. We think the implications are vast and compelling.

Mike Caulfield: Information Underload

For many years, the underlying thesis of the tech world has been that there is too much information and therefore we need technology to surface the best information. In the mid 2000s, that technology was pitched as Web 2.0. Nowadays, the solution is supposedly AI.

I’m increasingly convinced, however, that our problem is not information overload but information underload. We suffer not because there is just too much good information out there to process, but because most information out there is low quality slapdash takes on low quality research, endlessly pinging around the spin-o-sphere.

Douglas Adams: Is there an Artificial God?

Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in - an interesting hole I find myself in - fits me rather neatly, doesn’t it? In fact it fits me staggeringly well, must have been made to have me in it!” This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it’s still frantically hanging on to the notion that everything’s going to be alright, because this world was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for. We all know that at some point in the future the Universe will come to an end and at some other point, considerably in advance of that but still not immediately pressing, the sun will explode. We feel there’s plenty of time to worry about that, but on the other hand that’s a very dangerous thing to say. Look at what’s supposed to be going to happen on the 1st of January 2000 - let’s not pretend that we didn’t have a warning that the century was going to end! I think that we need to take a larger perspective on who we are and what we are doing here if we are going to survive in the long term.

There are some oddities in the perspective with which we see the world. The fact that we live at the bottom of a deep gravity well, on the surface of a gas covered planet going around a nuclear fireball 90 million miles away and think this to be normal is obviously some indication of how skewed our perspective tends to be, but we have done various things over intellectual history to slowly correct some of our misapprehensions.

Alan Kay: How to Invent the Future

As computing gets less and less interesting, its way of accepting and rejecting things gets more and more mundane. This is why you look at some of these early systems and think: why aren’t they doing it today? Well, because nobody even thinks that that’s important.


Come on, this is bullshit. But nobody is protesting except old fogeys like me, because I know it can be better. You need to find out that it can be better. That is your job. Your job is not to agree with me. Your job is to wake up, find ways of criticizing the stuff that seems normal. That is the only way out of the soup.

Maciej Cegłowski: Build a Better Monster

We need a code of ethics for our industry, to guide our use of machine learning, and its acceptable use on human beings. Other professions all have a code of ethics. Librarians are taught to hold patron privacy, doctors pledge to “first, do no harm”. Lawyers, for all the bad jokes about them, are officers of the court and hold themselves to high ethical standards.

Meanwhile, the closest we’ve come to a code of ethics is “move fast and break things”. And look how well that worked.

Young people coming into our industry should have a shared culture of what is and is not an acceptable use of computational tools. In particular, they should be taught that power can’t be divorced from accountability.

Alan Kay: Is it really “Complex”? Or did we just make it “Complicated”?

Even a relatively small clipper ship had about a hundred crew, all superbly trained whether it was light or dark. And that whole idea of doing things has been carried forward, for instance, in the navy. If you take a look at a nuclear submarine or any other navy vessel, it’s very similar: a highly trained crew, about the same size as a clipper’s. But do we really need about a hundred crew? Is that really efficient?

The Airbus A380 and the biggest 747 can be flown by two people. How can that be? Well, the answer is you just can’t have a crew of about a hundred if you’re gonna be in the airplane business. But you can have a crew of about a hundred in the submarine business, whether it’s a good idea or not. So maybe these large programming crews that we have actually go back to the days of machine code, but might not have any place today.

Because today — let’s face it — we should be just programming in terms of specifications or requirements. So how many people do you actually need? What we need is the number of people it takes to actually put together a picture of what the actual goals and requirements of the system are, from the vision that led to the desire to build that system in the first place.

Robert C. Martin: Twenty-Five Zeros

The interesting thing about where we are now, after 25 orders of magnitude in improvement in hardware, is that our software has improved by nothing like that. Maybe not even by one order of magnitude, possibly not even at all.

We go through lots of heat and lots of energy to invent new technologies that are not new technologies. They’re just new reflections, new projections of old technologies. Our industry is in some sense caught in a maelstrom, in a whirlpool, from where it cannot escape. All the new stuff we do isn’t new at all. It’s just recycled, old stuff and we claim it’s better because we’ve been riding a wave of 25 orders of magnitude. The real progress has not been in software, it has been in hardware. In fact there’s been virtually no real, solid innovation in the fundamental technology of software. So as much as software technology changes in form, it changes very little in essence.

Alex Feyerke: Step Off This Hurtling Machine

Today, we’re entwined with our networks and the web much as we are with nature. Clearly, they’re not as crucial as the plants that produce our oxygen, but the networks are becoming increasingly prevalent. […] They’ve become our nervous system, our externalised memory, and they will only ever grow denser, connecting more people and more things.

The network is the ultimate human tool and in time it will become utterly inseparable from us. We will take it with us when we eventually leave for other planets, and it will outlast many of the companies, countries, religions, and philosophies we know today. The network is never going away again.


I wish for a cultural artefact that will easily convey this notion today, that will capture the beauty and staggering opportunity of this human creation, that will make abundantly clear just how intertwined our fates are. To make clear that it is worth preserving, improving and cherishing. It’s one of the few truly global, species-encompassing accomplishments that has the power to do so much for so many, even if they never have the power to contribute to it directly.

But to get there, we must not only build great tools, we must build a great culture. We will have achieved nothing if our tools are free, open, secure, private and decentralised if there is no culture to embrace and support these values.

James Burke: Admiral Shovel and the Toilet Roll

In order to rectify the future I want to spend most of my time looking at the past because there’s nowhere else to look: (a) because the future hasn’t happened yet and never will, and (b) because almost all the time in any case the future, as you’ve heard more than once today, is not really much more than the past with extra bits attached, so to predict, I’m saying we look back.


To predict you extrapolate on what’s there already. We predict the future from the past, working within the local context from within the well-known box, which may be why the future has so often in the past been a surprise. I mean, James Watt’s steam engine was just supposed to drain mines. The printing press was just supposed to print a couple of Bibles. The telephone was invented by Alexander Graham Bell just to teach deaf people to talk. The computer was made specifically to calculate artillery shell trajectories. Viagra was just supposed to be for angina. I mean, what else?

Michael Nielsen: Thought as a Technology

It requires extraordinary imagination to conceive new forms of visual meaning – i.e., new visual cognitive technologies. Many of our best-known artists and visual explorers are famous in part because they discovered such forms. When exposed to that work, other people can internalize those new cognitive technologies, and so expand the range of their own visual thinking.


Images such as these are not natural or obvious. No-one would ever have these visual thoughts without the cognitive technologies developed by Picasso, Edgerton, Beck, and many other pioneers. Of course, only a small fraction of people really internalize these ways of visual thinking. But in principle, once the technologies have been invented, most of us can learn to think in these new ways.

Maciej Cegłowski: Superintelligence: The Idea That Eats Smart People

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.

So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.

Steve Jobs: When We Invented the Personal Computer

A few years ago I read a study — I believe it was in Scientific American — about the efficiency of locomotion for various species on the earth. The study determined which species was the most efficient, in terms of getting from point A to point B with the least amount of energy exerted. The condor won. Man made a rather unimpressive showing about 1/3 of the way down the list.

But someone there had the insight to test man riding a bicycle. Man was twice as efficient as the condor! This illustrated man’s ability as a tool maker. When man created the bicycle, he created a tool that amplified an inherent ability. That’s why I like to compare the personal computer to the bicycle. The personal computer is a 21st century bicycle if you will, because it’s a tool that can amplify a certain part of our inherent intelligence.

Seymour Papert: Teaching Children Thinking

The phrase “technology and education” usually means inventing new gadgets to teach the same old stuff in a thinly disguised version of the same old way. Moreover, if the gadgets are computers, the same old teaching becomes incredibly more expensive and biased towards its dullest parts, namely the kind of rote learning in which measurable results can be obtained by treating the children like pigeons in a Skinner box.

Mark Weiser: The Computer for the 21st Century

The idea of integrating computers seamlessly into the world at large runs counter to a number of present-day trends. “Ubiquitous computing” in this context does not just mean computers that can be carried to the beach, jungle or airport. Even the most powerful notebook computer, with access to a worldwide information network, still focuses attention on a single box. By analogy to writing, carrying a super-laptop is like owning just one very important book. Customizing this book, even writing millions of other books, does not begin to capture the real power of literacy.

Furthermore, although ubiquitous computers may employ sound and video in addition to text and graphics, that does not make them “multimedia computers.” Today’s multimedia machine makes the computer screen into a demanding focus of attention rather than allowing it to fade into the background.

Ted Nelson: Computers for Cynics

The computer world deals with imaginary, arbitrary, made up stuff that was all made up by somebody. Everything you see was designed and put there by someone. But so often we have to deal with junk and, not knowing whom to blame, we blame technology.

Everyone takes the structure of the computer world as god-given. In a field reputedly so innovative and new, the computer world is really a dumbed down imitation of the past, based on ancient traditions and modern oversimplification that people mistake for the computer itself.

Maciej Cegłowski: Deep-Fried Data

A lot of the language around data is extractive. We talk about data processing, data mining, or crunching data. It’s kind of a rocky ore that we smash with heavy machinery to get the good stuff out.

In cultivating communities, I prefer gardening metaphors. You need the right conditions, a propitious climate, fertile soil, and a sprinkling of bullshit. But you also need patience, weeding, and tending. And while you’re free to plant seeds, what you wind up with might not be what you expected.

If we take seriously the idea that digitizing collections makes them far more accessible, then we have to accept that the kinds of people and activities those collections will attract may seem odd to us. We have to give up some control.

This should make perfect sense. Human cultures are diverse. It’s normal that there should be different kinds of food, music, dance, and we enjoy these differences. Unless you’re a Silicon Valley mathlete, you delight in the fact that there are hundreds of different kinds of cuisine, rather than a single beige beverage that gives you all your nutrition.

But online, our horizons narrow. We expect domain experts and programmers to be able to meet everyone’s needs, sight unseen. We think it’s normal to build a social network for seven billion people.

Alan Kay: Programming and Scaling

Leonardo could not invent a single engine for any of his vehicles. Maybe the smartest person of his time, but he was born in the wrong time. His IQ could not transcend his time. Henry Ford was nowhere near Leonardo, but he happened to be born in the right century, a century in which people had already done a lot of work in making mechanical things.

Knowledge, in many many cases, trumps IQ. Why? This is because there are certain special people who invent new ways of looking at things. Henry Ford was powerful because Isaac Newton changed the way Europe thought about things. One of the wonderful things about the way knowledge works is if you can get a supreme genius to invent calculus, those of us with more normal IQs can learn it. So we’re not shut out from what the genius does. We just can’t invent calculus by ourselves, but once one of these guys turns things around, the knowledge of the era changes completely.

Jakob Nielsen: The Anti-Mac Interface

The GUIs of contemporary applications are generally well designed for ease of learning, but there often is a trade-off between ease of learning on one hand, and ease of use, power, and flexibility on the other hand. Although you could imagine a society where language was easy to learn because people communicated by pointing to words and icons on large menus they carried about, humans have instead chosen to invest many years in mastering a rich and complex language.

Today’s children will spend a large fraction of their lives communicating with computers. We should think about the trade-offs between ease of learning and power in computer-human interfaces. If there were a compensating return in increased power, it would not be unreasonable to expect a person to spend several years learning to communicate with computers, just as we now expect children to spend 20 years mastering their native language.

Chris Granger: In Search of Tomorrow

The craziest realization for me has been that when we took a step back and stopped thinking about programming for a moment, we managed to come up with a thing that doesn’t look like programming anymore. It’s just asking questions and formatting results. And that encompasses all of the things we wanna do. That is an amazing result to me.

I’m not saying we’ve done it and I have no idea what programming is gonna look like in 10 years. But my hope is that whatever programming does look like, that it looks nothing like the programming we have now. The last thing I want is you guys who are trying to cure cancer or trying to understand the cosmos or whatever you’re doing have to worry about these ridiculous things that have nothing to do with the amazing stuff you’re trying to do. I don’t want it to look like that at all.


Because at the end of the day, the real goal here is a thinking tool and that is what we have to get back to.

Nick Foster: The Future Mundane

We often assume that the world of today would stun a visitor from fifty years ago. In truth, for every miraculous iPad there are countless partly broken realities: WiFi passwords, connectivity, battery life, privacy and compatibility amongst others. The real skill of creating a compelling and engaging view of the future lies not in designing the gloss, but in seeing beyond the gloss to the truths behind it. As Frederik Pohl famously said, “a good science fiction story should be able to predict not the automobile but the traffic jam.”

Paul Chiusano: The future of software, the end of apps, and why UX designers should care about type theory

The ‘software as machine’ view is so ingrained in people’s thinking that it’s hard to imagine organizing computing without some notion of applications. But let’s return to first principles. Why do people use computers? People use computers in order to do and express things, to communicate with each other, to create, and to experience and interact with what others have created. People write essays, create illustrations, organize and edit photographs, send messages to friends, play card games, watch movies, comment on news articles, and they do serious work too–analyze portfolios, create budgets and track expenses, find plane flights and hotels, automate tasks, and so on. But what is important, what truly matters to people is simply being able to perform these actions. That each of these actions presently takes place in the context of some ‘application’ is not in any way essential. In fact, I hope you can start to see how unnatural it is that such stark boundaries exist between applications, and how lovely it would be if the functionality of our current applications could be seamlessly accessed and combined with other functions in whatever ways we imagine. This sort of activity could be a part of the normal interaction that people have with computers, not something reserved only for ‘programmers’, and not something that requires navigating a tedious mess of ad hoc protocols, dealing with parsing and serialization, and all the other mumbo-jumbo that has nothing to do with the idea the user (programmer) is trying to express. The computing environment could be a programmable playground, a canvas in which to automate whatever tasks or activities the user wished.

Travis Gertz: Design machines

There’s a lot of hubris hidden in the term “user experience design.” We can’t design experiences. Experiences are reactions to the things we design. Data lies to us. It makes us believe we know what a person is going through when they use our products. The truth is that it has no insight into physical or mental ability, emotional state, environmental conditions, socioeconomic status, or any other human factor outside of their ability to click on the right coloured box in the right order. Even if our machines can assume demographic traits, they will never be able to identify with each person’s unique combination of those traits. We can’t trust the data. And those who do will always be stuck chasing a robotic approach to human connection.

This is where our power lies. When we stop chasing our tails and acknowledge that we’ll never truly understand what’s going on in everyone else’s minds, we can begin to look at the web and human connection through a fresh lens. Instead of fruitlessly trying to engineer happy experiences for the everyman, we can fold ourselves into our content and listen to its heartbeat. We can let the content give design her voice.

Bret Victor: A Brief Rant on the Future of Interaction Design

I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.

Is that so bad, to dump the tactile for the visual? Try this: close your eyes and tie your shoelaces. No problem at all, right? Now, how well do you think you could tie your shoes if your arm was asleep? Or even if your fingers were numb? When working with our hands, touch does the driving, and vision helps out from the back seat.

Pictures Under Glass is an interaction paradigm of permanent numbness. It’s a Novocaine drip to the wrist. It denies our hands what they do best. And yet, it’s the star player in every Vision Of The Future.

To me, claiming that Pictures Under Glass is the future of interaction is like claiming that black-and-white is the future of photography. It’s obviously a transitional technology. And the sooner we transition, the better.

Alan Kay: Normal Considered Harmful

Normal is the greatest enemy with regard to creating the new. And the way of getting around this, is you have to understand normal, not as reality, but just a construct. And a way to do that, for example, is just travel to a lot of different countries — and you’ll find a thousand different ways of thinking the world is real, all of which are just stories inside of people’s heads. That’s what we are too.

Normal is just a construct — and to the extent that you can see normal as a construct inside yourself, you’ve freed yourself from the constraints of thinking this is the way the world is. Because it isn’t. This is the way we are.

Todd Rose: When U.S. air force discovered the flaw of averages

The consensus among fellow air force researchers was that the vast majority of pilots would be within the average range on most dimensions. After all, these pilots had already been pre-selected because they appeared to be average sized. (If you were, say, six foot seven, you would never have been recruited in the first place.) The scientists also expected that a sizable number of pilots would be within the average range on all 10 dimensions. But they were stunned when they tabulated the actual number.



There was no such thing as an average pilot. If you’ve designed a cockpit to fit the average pilot, you’ve actually designed it to fit no one.
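The arithmetic behind that surprise is easy to sketch. The quote gives only the 10 dimensions; the rest of the numbers below are illustrative assumptions (Gilbert Daniels’s study reportedly measured roughly 4,000 pilots and called the middle 30% of each dimension “average”). A toy simulation, assuming the dimensions are independent:

```python
import random

random.seed(42)

DIMENSIONS = 10     # e.g. height, chest, arm length, ... (from the quote)
PILOTS = 4000       # assumed sample size, roughly Daniels's 1950 survey
# Assumed "average" band: the middle 30% of each dimension,
# i.e. percentile ranks between 0.35 and 0.65.
LO, HI = 0.35, 0.65

def is_average(percentile: float) -> bool:
    return LO <= percentile <= HI

all_average = 0
for _ in range(PILOTS):
    # Each pilot gets an independent random percentile rank per dimension.
    # (Real body dimensions correlate, which raises the count somewhat,
    # but not enough to rescue the "average pilot".)
    if all(is_average(random.random()) for _ in range(DIMENSIONS)):
        all_average += 1

print(f"pilots average on every dimension: {all_average} / {PILOTS}")
# With p = 0.3 per dimension, p**10 is about 6 in a million, so the
# expected count in a 4,000-pilot sample is essentially zero.
```

The point is that “average on each dimension” compounds multiplicatively: even a generous 30% band per dimension leaves almost no one average on all ten at once.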

Neil Postman: The Surrender of Culture to Technology

The questions are more important. Answers change over time and in different circumstances even for the same person. The questions endure. Which is why I think of them as a kind of permanent armament with which citizens can protect themselves against being overwhelmed by technology:

  • What is the problem to which a technology claims to be a solution?
  • Whose problem is it?
  • What new problems will be created because of solving an old one?
  • Which people and institutions will be most harmed?
  • What changes in language are being promoted?
  • What shifts in economic and political power are likely to result?
  • What alternative media might be made from a technology?

Neil Postman: The Surrender of Culture to Technology

Every technology has an inherent bias, has both unique technical limitations and possibilities. That is to say every technology has embedded in its physical form a predisposition to it being used in certain ways and not others. Only those who know nothing of the history of technology believe that a technology is entirely neutral or adaptable.


In other words each technology has an agenda of its own and so to speak gives us instructions on how to fulfil its own technical destiny. We have to understand that fact, and we must not underestimate it. Of course we need not be tyrannized by it, we do not always have to go in exactly the direction that a technology leads us toward going. We have obligations to ourselves that may supersede our obligations to any technology.

Ted Nelson: Pernicious Computer Traditions

The computer world is not yet finished, but everyone is behaving as though everything was known. This is not true. In fact, the computer world as we know it is based upon one tradition that has been waddling along for the last fifty years, growing in size and ungainliness, and is essentially defining the way we do everything.

My view is that today’s computer world is based on techie misunderstandings of human thought and human life. And the imposition of inappropriate structures through the computer and through the files and the applications is the imposition of inappropriate structures on the things we want to do in the human world.

Scott Jenson: The Physical Web

You can see this pattern over and over again: we kind of have the old, we slowly work our way into the future, evolve it, and then something comes along and scares us and pulls us back to the beginning.

So there are two critical psychological points to this shape of innovation, two lessons I think we have to learn. One is the fact that we have this familiarity, we will always borrow from the past and we have to somehow transcend it. And we just need to appreciate that and talk about that a little bit more to see what we’re borrowing. But the other one, I think is also important, is this idea of maturity, because it forms an intellectual gravity well. It’s like we worked so damn hard to get here, we’re not leaving. It kinda forms this local maximum and people just don’t want to give it up. We feel like we had somehow gotten to this magical point and it was done. It was like here forever and we can kind of rest. And you can never rest in this business. I think it’s important for us to realize both of these two extremes and how we want to break out of this loop.

Frank Chimero: The Web’s Grain

We often think making things for the web is a process of simplifying—the hub, the dashboard, the control panel are all dreams of technology that coalesces, but things have a tendency to diverge into a multiplicity of options. We pile on more tools and technology, each one increasingly nuanced and minor in its critical differences. Clearly, convergence and simplicity make for poor goals. Instead, we must aim for clarity. You can’t contain or reduce the torrent of technology, but you can channel it in a positive direction through proper framing and clear articulation.

Technology only adds more—it is never this or that; it is always this and that.

Douglas Engelbart: Douglas Engelbart Interviewed by John Markoff

Q: Let’s talk about the future and what got lost. There were some ideas that got taken away and turned into commercial products, whole industries. But I think I’ve come to understand that your feeling is that some things didn’t get taken away. Tell me about what still needs to be done.

A: I think the focus needs to be on capabilities that you can drive. Take the business of cost of learning and set it aside until you assess the values of the higher capabilities you can go after. It seemed like there was some level that got set in the early seventies around “it has to be easy to learn”. And with all due respect to all the human computer interface guys, that’s just, to me, as if we’d all still be riding tricycles.

A big part of that is the paradigms we use. One example is the “book paradigm” that’s just built into everybody’s sort of pragmatical outlook. That’s the way you read and study. And you say, well no, wait: that is just the way an artifact they call printing and such produced things that would help you do that. We got brand new sets of artifacts now, so let’s change our paradigms, let’s see what we can do. And that is what I started doing in the sixties.

Roope Mokka: The Internet of NO Things

As technology keeps developing faster and faster, all the technologies that are now in a smartphone will become the size of a piece of paper and be available for the price of a piece of paper as well.

What we have to understand is that when technology gets developed enough it disappears, it ceases to be understood as technology; it becomes part of the general man-made ambience of our life. Look around you, there are amazing technologies already around us that have vanished. This house is a very typical example of disruptive technology, not to mention this collection of houses and streets and other infrastructure, known as the city, invented some thousands of years ago around where today’s Iran is, and scaled from there globally. Houses and cities are technologies. Our clothing is a technology, the food on the tables is the end product of masses of technologies, from fire to other means of cooking. These are all technologies that have in practice disappeared: they are in the background and nobody (outside of dedicated professionals) thinks of them as technologies.

Robert Epstein: How the internet flips elections and alters our thoughts

We have also learned something very disturbing — that search engines are influencing far more than what people buy and whom they vote for. We now have evidence suggesting that on virtually all issues where people are initially undecided, search rankings are impacting almost every decision that people make. They are having an impact on the opinions, beliefs, attitudes and behaviours of internet users worldwide — entirely without people’s knowledge that this is occurring. This is happening with or without deliberate intervention by company officials; even so-called ‘organic’ search processes regularly generate search results that favour one point of view, and that in turn has the potential to tip the opinions of millions of people who are undecided on an issue.

Bret Victor: The Web of Alexandria (follow-up)

Whenever the ephemerality of the web is mentioned, two opposing responses tend to surface. Some people see the web as a conversational medium, and consider ephemerality to be a virtue. And some people see the web as a publication medium, and want to build a “permanent web” where nothing can ever disappear.

Neither position is mine. If anything, I see the web as a bad medium, at least partly because it invites exactly that conflict, with disastrous effects on both sides.

Wilson Miner: When We Build

When we design this new generation of digital tools for this ecosystem of screens, we have a longer horizon ahead of us than just the next software platform or the next version of a technology. We have a longer lineage behind us than just the web or software design or even computers.

Steve Jobs said the thing that separates us from the high primates is that we’re tool builders. We make things, we make things that change our lives and we make things that change the world. This is a long and long-lasting tradition.

We shape our tools, and our tools shape us. We’re a product of our world and our world is made of things. Things we use, things we love, things we carry with us and the things we make. We’re the product of our world but we’re also its designer. Design is the choices we make about the world we want to live in. We choose where to live. What to surround ourselves with. What to spend our time and energy on.

We make our world what it is and we become the kind of people who live in it.

Alan Kay: 5 Steps To Re-create Xerox PARC’s Design Magic

We live in a world full of hype. When I look at most of the Silicon Valley companies (claiming to do invention research), they’re really selling pop culture. Pop culture is very incremental and is about creating things other than high impact. Being able to do things that change what business means is going to have a huge impact–more than something that changes what social interaction means in pop culture.

Gary Bernhardt: A Whole New World

We have this shipping culture that is poisonous to infrastructure replacement. You have people like Seth Godin saying things like, “Ship often. Ship lousy stuff, but ship. Ship constantly.” –which I think is a great way to achieve short-term business gains, and a great way to discover what your customer wants.

But when you’re a programmer and you are the customer, and you’re writing the system, and you’ve been using these tools for 20 years, you don’t need customer discovery. You need to go off and sit in a hammock for a couple of years and think hard.


We have not only the legacy, but we have a paralysis around it. We can’t even imagine replacing it.

Alan Kay: The Real Computer Revolution Hasn’t Happened Yet

So we had a sense that the personal computer’s ability to imitate other media would both help it to become established in society, and that this would also make it very difficult for most people to understand what it actually was. Our thought was: but if we can get the children to learn the real thing then in a few generations the big change will happen. 32 years later the technologies that our research community invented are in general use by more than a billion people, and we have gradually learned how to teach children the real thing. But it looks as though the actual revolution will take longer than our optimism suggested, largely because the commercial and educational interests in the old media and modes of thought have frozen personal computing pretty much at the “imitation of paper, recordings, film and TV” level.

Alan Kay: The Computer Revolution Hasn’t Happened Yet

I’m going to use a metaphor for this talk which is drawn from a wonderful book called The Act of Creation by Arthur Koestler. Koestler was a novelist who became a cognitive scientist in his later years. One of the great books he wrote was about what might creativity be. He realized that learning, of course, is an act of creation itself, because something happens in you that wasn’t there before. He used a metaphor of thoughts as ants crawling on a plane. In this case it’s a pink plane, and there’s a lot of things you can do on a pink plane. You can have goals. You can choose directions. You can move along. But you’re basically always in the pink context. It means that progress, in a fixed context, is almost always a form of optimization, because if you’re actually coming up with something new, it wouldn’t have been part of the rules or the context for what the pink plane is all about. Creative acts, generally, are ones that don’t stay in the same context that they’re in. He says, every once in a while, even though you have been taught carefully by parents and by school for many years, you have a blue idea. Maybe when you’re taking a shower. Maybe when you’re out jogging. Maybe when you’re resting in an unguarded moment, suddenly, that thing that you were puzzling about, wondering about, looking at, appears to you in a completely different light, as though it were something else.


Art is there to remind us—Great art is there to remind us that whatever context we think we’re in, there are other contexts. Art is always there to take us out of the context that we are in and make us aware of other contexts. This is a very simple—you can even call it a simple-minded metaphor—but it will certainly serve for this talk today. He also pointed out that you have to have something blue to have blue thoughts with. I think this is generally missed in people who specialize to the exclusion of everything else. When you specialize, you are basically putting yourself into a mental state where optimization is pretty much all you can do. You have to learn lots of different kinds of things in order to have the start of these other contexts.

Chris Granger: Coding is not the new literacy

To realize the potential of computers, we have to focus on the fundamental skills that allow us to harness external computation. We have to create a new generation of tools that allow us to express our models without switching professions and a new generation of modelers who wield them.

To put it simply, the next great advance in human ability comes from being able to externalize the mental models we spend our entire lives creating.

That is the new literacy. And it’s the revolution we’ve all been waiting for.

Piet Hein: The Road to Wisdom?

The road to wisdom? — Well, it’s plain
and simple to express:
err
and err
and err again
but less
and less
and less.

Alan Kay: The Center of “Why?”

Living organisms are shaped by evolution to survive, not necessarily to get a clear picture of the universe. For example, frogs’ brains are set up to recognize food as moving objects that are oblong in shape. So if we take a frog’s normal food — flies — paralyze them with a little chloroform and put them in front of the frog, it will not notice them or try to eat them.

It will starve in front of its food! But if we throw little rectangular pieces of cardboard at the frog it will eat them until it is stuffed! The frog only sees a little of the world we see, but it still thinks it perceives the whole world.

Now, of course, we are not like frogs! Or are we?

Bret Victor: email (9/3/04)

Interface matters to me more than anything else, and it always has. I just never realized that. I’ve spent a lot of time over the years desperately trying to think of a “thing” to change the world. I now know why the search was fruitless – things don’t change the world. People change the world by using things. The focus must be on the “using”, not the “thing”. Now that I’m looking through the right end of the binoculars, I can see a lot more clearly, and there are projects and possibilities that genuinely interest me deeply.

Ivan Sutherland: Sketchpad: A man-machine graphical communication system

The decision actually to implement a drawing system reflected our feeling that knowledge of the facilities which would prove useful could only be obtained by actually trying them. The decision actually to implement a drawing system did not mean, however, that brute force techniques were to be used to computerize ordinary drafting tools; it was implicit in the research nature of the work that simple new facilities should be discovered which, when implemented, should be useful in a wide range of applications, preferably including some unforeseen ones. It has turned out that the properties of a computer drawing are entirely different from a paper drawing not only because of the accuracy, ease of drawing, and speed of erasing provided by the computer, but also primarily because of the ability to move drawing parts around on a computer drawing without the need to erase them. Had a working system not been developed, our thinking would have been too strongly influenced by a lifetime of drawing on paper to discover many of the useful services that the computer can provide.

Douglas Engelbart: Interview

I got this wild dream in my head about what would help mankind the most, to go off and do something dramatic, and I just happened to get a picture of how, if people started to learn to interact with computers, in collective ways of collaborating together, and this was way back in the early 50s, so it was a little bit premature. So anyways, I had some GI bill money left still so I could just go after that, and up and down quite a bit through the years, and I finally sort of gave up.

Mike Caulfield: Federated Education: New Directions in Digital Collaboration

As advocates we’re so often put in a situation where we have to defend the very idea that social media is an information sharing solution that we don’t often get to think about what a better solution for collaboration would look like. Because there are problems with the way social media works now.

Minority voices are squelched, flame wars abound. We spend hours at a time as rats hitting the Skinner-esque levers of Twitter and Tumblr, hoping for new treats — and this might be OK if we actually then built off these things, but we don’t.

We’re stuck in an attention economy feedback loop that doesn’t allow us silent spaces to reflect on issues without news pegs, and in which many of our areas of collaboration have become toxic, or worse, a toxic bureaucracy.

We’re stuck in an attention economy feedback loop where we react to the reactions of reactions (while fearing further reactions), and then we wonder why we’re stuck with groupthink and ideological gridlock.

We’re bigger than this and we can envision new systems that acknowledge that bigness.

We can build systems that return to the vision of the forefathers of the web. The augmentation of human intellect. The facilitation of collaboration. The intertwingling of all things.

William Van Hecke: Your App Is Good And You Should Feel Good

There’s no disincentive to honking at people for the slightest provocation. There’s little recourse for abuse. It’s such an asymmetrical, aggressive technology, so lacking in subtlety. It kind of turns everyone into a crying baby — you can let the people around you know that you’re very upset, but not why.

I think the Internet is like this sometimes, too. The internet is like a car horn that you can honk at the entire world.

Nate Silver: Rage Against The Machines

We have to view technology as what it always has been—a tool for the betterment of the human condition. We should neither worship at the altar of technology nor be frightened by it. Nobody has yet designed, and perhaps no one ever will, a computer that thinks like a human being. But computers are themselves a reflection of human progress and human ingenuity: it is not really “artificial” intelligence if a human designed the artifice.

Paul Ford: The Sixth Stage of Grief Is Retro-computing

Technology is what we share. I don’t mean “we share the experience of technology.” I mean: By my lights, people very often share technologies with each other when they talk. Strategies. Ideas for living our lives. We do it all the time. Parenting email lists share strategies about breastfeeding and bedtime. Quotes from the Dalai Lama. We talk neckties, etiquette, and Minecraft, and tell stories that give us guidance as to how to live. A tremendous part of daily life regards the exchange of technologies. We are good at it. It’s so simple as to be invisible. Can I borrow your scissors? Do you want tickets? I know guacamole is extra. The world of technology isn’t separate from regular life. It’s made to seem that way because of, well…capitalism. Tribal dynamics. Territoriality. Because there is a need to sell technology, to package it, to recoup the terrible investment. So it becomes this thing that is separate from culture. A product.

Roy Ascott: Is There Love in the Telematic Embrace?

It is the computer that is at the heart of this circulation system, and, like the heart, it works best when least noticed—that is to say, when it becomes invisible. At present, the computer as a physical, material presence is too much with us; it dominates our inventory of tools, instruments, appliances, and apparatus as the ultimate machine. In our artistic and educational environments it is all too solidly there, a computational block to poetry and imagination. It is not transparent, nor is it yet fully understood as pure system, a universal transformative matrix. The computer is not primarily a thing, an object, but a set of behaviors, a system, actually a system of systems. Data constitute its lingua franca. It is the agent of the datafield, the constructor of dataspace. Where it is seen simply as a screen presenting the pages of an illuminated book, or as an internally lit painting, it is of no artistic value. Where its considerable speed of processing is used simply to simulate filmic or photographic representations, it becomes the agent of passive voyeurism. Where access to its transformative power is constrained by a typewriter keyboard, the user is forced into the posture of a clerk. The electronic palette, the light pen, and even the mouse bind us to past practices. The power of the computer’s presence, particularly the power of the interface to shape language and thought, cannot be overestimated. It may not be an exaggeration to say that the “content” of a telematic art will depend in large measure on the nature of the interface; that is, the kind of configurations and assemblies of image, sound, and text, the kind of restructuring and articulation of environment that telematic interactivity might yield, will be determined by the freedoms and fluidity available at the interface.

Edsger W. Dijkstra: On the reliability of programs

Automatic computers are with us for twenty years and in that period of time they have proved to be extremely flexible and powerful tools, the usage of which seems to be changing the face of the earth (and the moon, for that matter!) In spite of their tremendous influence on nearly every activity whenever they are called to assist, it is my considered opinion that we underestimate the computer’s significance for our culture as long as we only view them in their capacity of tools that can be used. In the long run that may turn out to be a mere ripple on the surface of our culture. They have taught us much more: they have taught us that programming any non-trivial performance is really very difficult and I expect a much more profound influence from the advent of the automatic computer in its capacity of a formidable intellectual challenge which is unequalled in the history of mankind. This opinion is meant as a very practical remark, for it means that unless the scope of this challenge is realized, unless we admit that the tasks ahead are so difficult that even the best of tools and methods will be hardly sufficient, the software failure will remain with us. We may continue to think that programming is not essentially difficult, that it can be done by accurate morons, provided you have enough of them, but then we continue to fool ourselves and no one can do so for a long time unpunished.

Bret Victor: Magic Ink

Today’s ubiquitous GUI has its roots in Doug Engelbart’s groundshattering research in the mid-’60s. The concepts he invented were further developed at Xerox PARC in the ’70s, and successfully commercialized in the Apple Macintosh in the early ’80s, whereupon they essentially froze. Twenty years later, despite thousand-fold improvements along every technological dimension, the concepts behind today’s interfaces are almost identical to those in the initial Mac. Similar stories abound. For example, a telephone that could be “dialed” with a string of digits was the hot new thing ninety years ago. Today, the “phone number” is ubiquitous and entrenched, despite countless revolutions in underlying technology. Culture changes much more slowly than technological capability.

The lesson is that, even today, we are designing for tomorrow’s technology. Cultural inertia will carry today’s design choices to whatever technology comes next. In a world where science can outpace science fiction, predicting future technology can be a Nostradamean challenge, but the responsible designer has no choice. A successful design will outlive the world it was designed for.

Mike Bostock: Visualizing Algorithms

So, why visualize algorithms? Why visualize anything? To leverage the human visual system to improve understanding. Or more simply, to use vision to think.

Jef Raskin: Intuitive equals Familiar

The term “intuitive” is associated with approval when applied to an interface, but this association and the magazines’ rating systems raise the issue of the tension between improvement and familiarity. As an interface designer I am often asked to design a “better” interface to some product. Usually one can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation it is superior to competing or the client’s own products. Even where my proposals are seen as significant improvements, they are often rejected nonetheless on the grounds that they are not intuitive. It is a classic “catch 22.” The client wants something that is significantly superior to the competition. But if superior, it cannot be the same, so it must be different (typically the greater the improvement, the greater the difference). Therefore it cannot be intuitive, that is, familiar. What the client usually wants is an interface with at most marginal differences that, somehow, makes a major improvement. This can be achieved only on the rare occasions where the original interface has some major flaw that is remedied by a minor fix.

Steve Jobs: Interview in Memory & Imagination

I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.

And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.

Seymour Papert: A Critique of Technocentrism in Thinking About the School of the Future

So we are entering this computer future; but what will it be like? What sort of a world will it be? There is no shortage of experts, futurists, and prophets who are ready to tell us, but they don’t agree. The Utopians promise us a new millennium, a wonderful world in which the computer will solve all our problems. The computer critics warn us of the dehumanizing effect of too much exposure to machinery, and of disruption of employment in the workplace and the economy.

Who is right? Well, both are wrong – because they are asking the wrong question. The question is not “What will the computer do to us?” The question is “What will we make of the computer?” The point is not to predict the computer future. The point is to make it.

Neil Postman: Five Things We Need to Know About Technological Change

In the past, we experienced technological change in the manner of sleep-walkers. Our unspoken slogan has been “technology über alles,” and we have been willing to shape our lives to fit the requirements of technology, not the requirements of culture. This is a form of stupidity, especially in an age of vast technological change. We need to proceed with our eyes wide open so that we may use technology rather than be used by it.

Alec Resnick: How Children What?

And so in the twenty-three years since the creation of the World Wide Web, “a bicycle for the mind” became “a treadmill for the brain.”

One helps you get where you want under your own power. The other’s used to simulate the natural world and is typically about self-discipline, self-regulation, and self-improvement. One is empowering; one is slimming. One you use with friends because it’s fun; the other you use with friends because it isn’t. One does things to you; one does things for you.

Henry David Thoreau: Walden

Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end, an end which it was already but too easy to arrive at.

Simon Peyton Jones: Teaching creative computer science

We’ve ended up focusing too much on technology, on things, on devices, on those seductive boxes and not enough on ideas. I want our children not only to consume technology but to be imaginative creators of technological artefacts. I want them to be creative writers as well as appreciative readers. I want them to understand what they’re doing and how the stuff that they’re using works, as well as just using it.

Arthur C. Clarke once famously remarked that any sufficiently advanced technology is indistinguishable from magic. And I think it’s very damaging if our children come to believe that the computer systems they use are essentially magic. That is: not under their control.

Alan Kay: A Personal Computer for Children of All Ages

This new medium will not “save the world” from disaster. Just as with the book, it brings a new set of horizons and a new set of problems. The book did, however, allow centuries of human knowledge to be encapsulated and transmitted to everybody; perhaps an active medium can also convey some of the excitement of thought and creation!

Alan Kay: A Personal Computer for Children of All Ages

With Dewey, Piaget and Papert, we believe that children “learn by doing” and that much of the alienation in modern education comes from the great philosophical distance between the kinds of things children can “do” and much of 20th-century adult behavior. Unlike the African child whose play with bow and arrow INVOLVES him in future adult activity, the American child can either indulge in irrelevant imitation (the child in a nurse’s uniform taking care of a doll) or is forced to participate in activities which will not bear fruit for many years and will leave him alienated (mathematics: “multiplication is GOOD for you - see, you can solve problems in books;” music: “practice your violin, in three years we might tell you about music;” etc.).

If we want children to learn any particular area, then it is clearly up to us to provide them with something real and enjoyable to “do” on their way to perfection of both the art and the skill.

Richard P. Gabriel: Designed as Designer

We’ve all been this same way. Many believe Ronald Reagan single-handedly defeated communism, Tim Berners-Lee single-handedly invented (everything about) the World Wide Web, Louis V. Gerstner, Jr. single-handedly rescued IBM in the early 1990s, Michael Jordan single-handedly won 6 NBA championships, Gillette invented the safety razor… The list of people (and companies) given more credit than is due could go on, perhaps as long as you like.

There’s something about our culture that seems to love heroes, that looks for the genius who’s solved it all, that seems to need to believe the first to market—the best inventor—reaps justly deserved rewards.

Two factors combine to manufacture this love of heroes: a failure to perceive the effects of randomness on real life and a need for stories. A name and story are less abstract than an intertwined trail of ideas and designs that leads to a monument.

Alan Kay: The Future Doesn’t Have to Be Incremental

Suppose you had twice the IQ of Leonardo, but you were born in 10,000 BC? How far are you going to get? Zero before they burn you at the stake.

Henry Ford was nowhere near Leonardo, but Henry Ford was able to do something Leonardo couldn’t do. Leonardo never was able to invent a single engine for any of his vehicles. But Henry Ford was born into the right century. He had knowledge, and he did not have to invent the gasoline engine. It had already been invented. And so he would be what was called an innovator today. He did not invent anything, but he put things together and packaged them and organized them and got them out into the public. And for most things, knowledge dominates IQ.

Douglas Engelbart: Improving Our Ability to Improve: A Call for Investment in a New Future

I need to quickly sketch out what I see as the goal – the way to get the significant payoff from using computers to augment what people can do. This vision of success has not changed much for me over fifty years – it has gotten more precise and detailed – but it is pointed at the same potential that I saw in the early 1950s. It is based on a very simple idea, which is that when problems are really difficult and complex – problems like addressing hunger, containing terrorism, or helping an economy grow more quickly – the solutions come from the insights and capabilities of people working together. So, it is not the computer, working alone, that produces a solution. But it is the combination of people, augmented by computers.

Douglas Engelbart: Improving Our Ability to Improve: A Call for Investment in a New Future

We need to become better at being humans. Learning to use symbols and knowledge in new ways, across groups, across cultures, is a powerful, valuable, and very human goal. And it is also one that is obtainable, if we only begin to open our minds to full, complete use of computers to augment our most human of capabilities.

Maciej Cegłowski: Our Comrade The Electron

If you look at the history of the KGB or Stasi, they consumed enormous resources just maintaining and cross-referencing their mountains of paperwork. There’s a throwaway line in Huxley’s Brave New World where he mentions “800 cubic meters of card catalogs” in the eugenic baby factory. Imagine what Stalin could have done with a decent MySQL server.

We haven’t seen yet what a truly bad government is capable of doing with modern information technology. What the good ones get up to is terrifying enough.

Chris Granger: Toward a better programming

If you look at much of the advances that have made it to the mainstream over the past 50 years, it turns out they largely increased our efficiency without really changing the act of programming. I think the reason why is something I hinted at in the very beginning of this post: it’s all been reactionary and as a result we tend to only apply tactical fixes.

Douglas Adams: How to Stop Worrying and Learn to Love the Internet

I suppose earlier generations had to sit through all this huffing and puffing with the invention of television, the phone, cinema, radio, the car, the bicycle, printing, the wheel and so on, but you would think we would learn the way these things work, which is this:

1) everything that’s already in the world when you’re born is just normal;

2) anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it;

3) anything that gets invented after you’re thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it’s been around for about ten years when it gradually turns out to be alright really.

Alan Watts: Money, Guilt and The Machine

The difference between having a job and having a vocation is that a job is some unpleasant work you do in order to make money, with the sole purpose of making money. But if you do a job with the sole purpose of making money, you are absurd. Because if money becomes the goal, and it does when you work that way, you begin increasingly to confuse it with happiness — or with pleasure.

Yes, one can take a whole handful of crisp dollar bills and practically water your mouth over them. But this is a kind of person who is confused, like a Pavlov dog, who salivates on the wrong bell. It goes back to the ancient guilt that if you don’t work you have no right to eat; that if there are others in the world who don’t have enough to eat, you shouldn’t enjoy your dinner even though you have no possible means of conveying the food to them. And while it is true that we are all one human family and that every individual involves every other individual… while it is true therefore we should do something about changing the situation.

Bret Victor: A few words on Doug Engelbart

Our computers are fundamentally designed with a single-user assumption through-and-through, and simply mirroring a display remotely doesn’t magically transform them into collaborative environments.

If you attempt to make sense of Engelbart’s design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody Engelbart’s intent. Engelbart hated our present-day systems.

Paul Lockhart: A Mathematician’s Lament

Sadly, our present system of mathematics education is precisely this kind of nightmare. In fact, if I had to design a mechanism for the express purpose of destroying a child’s natural curiosity and love of pattern-making, I couldn’t possibly do as good a job as is currently being done — I simply wouldn’t have the imagination to come up with the kind of senseless, soul-crushing ideas that constitute contemporary mathematics education.

Everyone knows that something is wrong. The politicians say, “we need higher standards.” The schools say, “we need more money and equipment.” Educators say one thing, and teachers say another. They are all wrong. The only people who understand what is going on are the ones most often blamed and least often heard: the students. They say, “math class is stupid and boring,” and they are right.

Michael Nielsen: Reinventing explanation

My own personal conviction is that we are still in the early days of exploring the potential that modern media – especially digital media – have for the explanation of science. Our current attempts at digital explanation seem to me to be like the efforts of the early silent film-makers, or of painters prior to the Florentine Renaissance. We haven’t yet found our Michelangelo and Leonardo, we don’t yet know what is possible. In fact, we don’t yet have even the basic vocabulary of digital explanation. My instinct is that such a vocabulary will be developed in the decades to come. But that is far too big a goal to attack directly. Instead, we can make progress by constructing prototypes, and learning from what they have to tell us.

Norman N. Holland: The Internet Regression

When communicating on the Internet, we set up a relationship with other people in which the people get less human and the machine gets more human. That is how the three signs of the Internet regression come into play: flaming, flirting, and giving. Our feelings toward the computer as computer become our feelings toward the people to whom we send e-mail or post messages. We flame to the person as though he or she were an insensitive thing, a machine that can’t be hurt. We flirt with the machine as though it were a person and could interact with us, compliantly offering sex. We feel open and giving toward the computer because the computer is open and giving to us.

Clay Shirky: It’s Not Information Overload. It’s Filter Failure

We have had information overload in some form or another since the 1500’s. What is changing now is that the filters we used for most of that period are breaking, and designing new filters doesn’t mean simply updating the old filters. They have broken for structural reasons, not for service reasons.

Jonathan Harris: Navigating Stuckness

In life, you will become known for doing what you do. That sounds obvious, but it’s profound. If you want to be known as someone who does a particular thing, then you must start doing that thing immediately. Don’t wait. There is no other way. It probably won’t make you money at first, but do it anyway. Work nights. Work weekends. Sleep less. Whatever you have to do. If you’re lucky enough to know what brings you bliss, then do that thing at once. If you do it well, and for long enough, the world will find ways to repay you.

Bret Victor: Media for Thinking the Unthinkable

All of the examples I’ve shown here are hints. They are nibbling at the corners of a big problem — what is this new medium for understanding systems?

We must get away from pencil-and-paper thinking. Even when working on the computer, we still think in representations that were invented for the medium of paper. Especially programming. Programming languages are written languages — they were designed for writing.

We have an opportunity to reinvent how we think about systems, to create a new medium. I don’t know what that medium is, but if you’d like to help find it, let me know.

Bret Victor: The Future of Programming

Here’s what I think the worst case scenario would be: if the next generation of programmers grows up never being exposed to these ideas, only being shown one way of thinking about programming. So they work on that way of programming—they flesh out all the details, they solve that particular model of programming. They’ve figured it all out. And then they teach that to the next generation. So then the second generation grows up thinking: “Oh, it’s all been figured out. We know what programming is. We know what we’re doing.” They grow up with dogma. And once you grow up with dogma, it’s really hard to break out of it.

Do you know the reason why all these ideas and so many other good ideas came about in this particular time period—in the 60s, early 70s? Why did it all happen then? It’s because it was late enough that technology had got to the point where you could actually do things with the computers, but it was still early enough that nobody knew what programming was. Nobody knew what programming was supposed to be. And they knew they didn’t know, so they just tried everything and anything they could think of.

The most dangerous thought that you can have as a creative person is to think that you know what you’re doing. Because once you think you know what you’re doing, you stop looking around for other ways of doing things. And you stop being able to see other ways of doing things. You become blind.

If you want to be open or receptive to new ways of thinking, to invent new ways of thinking, I think the first step is you have to say to yourself, “I don’t know what I’m doing. We as a field don’t know what we’re doing.” I think you have to say, “We don’t know what programming is. We don’t know what computing is. We don’t even know what a computer is.” And once you truly understand that—and once you truly believe that—then you’re free. And you can think anything.

Mike Hoye: Citation Needed

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing the more pathetic and irresponsible that sounds.

Steve Jobs: Triumph of the Nerds

Ultimately it comes down to taste. It comes down to trying to expose yourself to the best things that humans have done and then try to bring those things into what you’re doing. I mean, Picasso had a saying: he said, “good artists copy, great artists steal.” And we have always been shameless about stealing great ideas, and I think part of what made the Macintosh great was that the people working on it were musicians and poets and artists and zoologists and historians who also happened to be the best computer scientists in the world.

Alan Kay: A Conversation with Alan Kay

Like I said, it’s a pop culture. Basically, a lot of the problems that computing has had in the last 25 years comes from systems where the designers were trying to fix some short-term thing and didn’t think about whether the idea would scale if it were adopted. There should be a half-life on software so old software just melts away over 10 or 15 years.

Zach Weinersmith: Thought

I often imagine meeting a great person from centuries past and explaining modern science. It would be such a thrill to see the face of Galileo or Newton or Faraday or Mendeleev or Curie or Darwin or Archimedes as they learned what we know now.

But then I remember that no one has yet lived long past a century. All of those people are dead. Unrecoverable. Beyond the reach of technology, no matter its sophistication.

But you, my little one. You are no different from the first human. You aren’t 200 or 2,000 years old. You are 200,000 years old. And I will show you everything.

Jürgen Geuter: No Future Generation

Where the 50s had a vision of the future that was based on technological progress, flying cars and robots cleaning our houses, where the 70s had a cuddly view of the future where everybody loved each other and mother earth would bring us all together (I simplify), we have… nothing. Well, not nothing. We do look at the future but not through the glasses of a positive vision or some goal. Right now our culture has really just one way to look at the future: The dystopia.

Oliver Reichenstein: Learning to See

It is not the hand that makes the designer, it’s the eye. Learning to design is learning to see. Naturally, what designers learn to see as they improve their skills is usually related to design. Doctors don’t see web sites in the same way as web designers, just as web designers don’t see radiographs as doctors do. Our experience sharpens our eyes to certain perceptions and shapes what we expect to see, just as what we expect to see shapes our experience. Our reality is perspectival.

Robert Bringhurst: The Elements of Typographic Style

By all means break the rules, and break them beautifully, deliberately and well.

Bret Victor: A Brief Rant on the Future of Interaction Design

So then. What is the Future Of Interaction?

The most important thing to realize about the future is that it’s a choice. People choose which visions to pursue, people choose which research gets funded, people choose how they will spend their careers.

Despite how it appears to the culture at large, technology doesn’t just happen. It doesn’t emerge spontaneously, pulling us helplessly toward some inevitable destiny. Revolutionary technology comes out of long research, and research is performed and funded by inspired people.

And this is my plea — be inspired by the untapped potential of human capabilities. Don’t just extrapolate yesterday’s technology and then cram people into it.

Bill Watterson: Some Thoughts on the Real World By One Who Glimpsed it and Fled

Creating a life that reflects your values and satisfies your soul is a rare achievement. In a culture that relentlessly promotes avarice and excess as the good life, a person happy doing his own work is usually considered an eccentric, if not a subversive. Ambition is only understood if it’s to rise to the top of some imaginary ladder of success. Someone who takes an undemanding job because it affords him the time to pursue other interests and activities is considered a flake. A person who abandons a career in order to stay home and raise children is considered not to be living up to his potential, as if a job title and salary are the sole measure of human worth.

You’ll be told in a hundred ways, some subtle and some not, to keep climbing, and never be satisfied with where you are, who you are, and what you’re doing. There are a million ways to sell yourself out, and I guarantee you’ll hear about them.

To invent your own life’s meaning is not easy, but it’s still allowed, and I think you’ll be happier for the trouble.

Ray Bradbury: Fahrenheit 451

Stuff your eyes with wonder. Live as if you’d drop dead in ten seconds. See the world. It’s more fantastic than any dream made or paid for in factories. Ask no guarantees, ask for no security, there never was such an animal.