vdb17

with the modest success of last year's talk the lost medium, i was reinvited by the kind folks of voxxed days belgrade to delve into this topic a bit further. vdb17 was – again – an amazing experience, being one of the biggest and most inspiring technology conferences in eastern europe, with excellent speakers from all over the world and more than 800 attendees.

my previous talk focused a lot on the early days of personal computing, the ingenious ideas we lost over time and the notion that we're not really thinking about how we can use the computer as a medium to augment our human capabilities.

after delivering this talk, however, i had the feeling that i had left out an important question: what now? how can we improve?

this was the basis for my new talk, the bullet hole misconception, in which i explore how we can escape the present to invent the future and what questions we must ask if we are to amplify our human capabilities with computers.

feel free to share it, and if you have questions, feedback or critique, i'd love to hear from you!

summing up 92

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Crapularity Hermeneutics, by Florian Cramer

The problem of computational analytics is not only in the semantic bias of the data set, but also in the design of the algorithm that treats the data as unbiased fact, and finally in the users of the computer program who believe in its scientific objectivity.

From capturing to reading data, interpretation and hermeneutics thus creep into all levels of analytics. Biases and discrimination are only the extreme cases that make this mechanism most clearly visible. Interpretation thus becomes a bug, a perceived system failure, rather than a feature or virtue. As such, it exposes the fragility and vulnerabilities of data analytics. 

The paradox of big data is that it both affirms and denies this “interpretative nature of knowledge”. Just like the Oracle of Delphi, it is dependent on interpretation. But unlike the oracle priests, its interpretative capability is limited by algorithmics – so that the limitations of the tool (and, ultimately, of using mathematics to process meaning) end up defining the limits of interpretation. 

we're talking a lot about the advancement of computational analytics and artificial intelligence, but little about their shortcomings and effects on society. one of those effects is that for our technology to work perfectly, society has to dumb itself down to level the playing field between humans and computers. a very long essay, but definitely one of the best i read this year.

Resisting the Habits of the Algorithmic Mind, by Michael Sacasas

Machines have always done things for us, and they are increasingly doing things for us and without us. Increasingly, the human element is displaced in favor of faster, more efficient, more durable, cheaper technology. And, increasingly, the displaced human element is the thinking, willing, judging mind. Of course, the party of the concerned is most likely the minority party. Advocates and enthusiasts rejoice at the marginalization or eradication of human labor in its physical, mental, emotional, and moral manifestations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a golden age of leisure. Critics meanwhile, and I count myself among them, struggle to articulate a compelling and reasonable critique of this scramble to outsource various dimensions of the human experience.

our reliance on machines to make decisions for us leads us to displace the most important human elements in favor of cheaper and faster technology. in doing so, however, we outsource meaning-making, moral judgement and feeling – which is what a human being is – to machines.

Your Data is Being Manipulated, by Danah Boyd

The tech industry is no longer the passion play of a bunch of geeks trying to do cool shit in the world. It’s now the foundation of our democracy, economy, and information landscape.

We no longer have the luxury of only thinking about the world we want to build. We must also strategically think about how others want to manipulate our systems to do harm and cause chaos.

we're past the point where developing fancy new technologies is a fun project for college kids. our technologies have real implications for the world, for our culture and society. nevertheless we seem to lack a moral framework for how technology is allowed to alter society.

summing up 91

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

The Best Way to Predict the Future is to Issue a Press Release, by Audrey Watters

Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues, to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

we are making computers available in all forms, but we're far from generating new thoughts or breaking up old thought patterns. instead of augmenting humans with computers, as the fathers of early personal computing imagined, our computers have turned out to be mind-numbing consumption devices rather than the bicycle for the mind that steve jobs envisioned.

Eliminating the Human, by David Byrne

I have a theory that much recent tech development and innovation over the last decade or so has an unspoken overarching agenda. It has been about creating the possibility of a world with less human interaction. This tendency is, I suspect, not a bug—it’s a feature.

Human interaction is often perceived, from an engineer’s mind-set, as complicated, inefficient, noisy, and slow. Part of making something “frictionless” is getting the human part out of the way.

But our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. “We” do not exist as isolated individuals. We, as individuals, are inhabitants of networks; we are relationships. That is how we prosper and thrive.

the computer claims sovereignty over the whole range of human experience, and supports its claim by showing that it “thinks” better than we can. the fundamental metaphorical message of the computer is that we become machines. our nature, our biology, our emotions and our spirituality become second-order concerns. but for this to work perfectly, society has to dumb itself down to level the playing field between humans and computers. what is most significant about this line of thinking is the dangerous reductionism it represents.

User Interface: A Personal View, by Alan Kay

That the printing press was the dominant force that transformed the hermeneutic Middle Ages into our scientific society should not be taken too lightly–especially because the main point is that the press didn’t do it just by making books more available, it did it by changing the thought patterns of those who learned to read.

I had always thought of the computer as a tool, perhaps a vehicle–a much weaker conception. But if the personal computer is a truly new medium then the very use of it would actually change the thought patterns of an entire civilization. What kind of a thinker would you become if you grew up with an active simulator connected, not just to one point of view, but to all the points of view of the ages represented so they could be dynamically tried out and compared?

the tragic notion is that alan kay assumed people would be smart enough to try out and compare different points of view. but in reality, people stick rigidly to the point of view they learned and consider all others to be only noise or worse.

summing up 90

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Memento Product Mori: Of ethics in digital product design, by Sebastian Deterding

Why, especially for us in the digital industry – although we are automating away more and more and more of our work and we’re becoming wealthier and wealthier by every measure – do we feel like we’re more and more short of time, overwhelmed and overworked? Or to put the question differently: Do you remember when email was fun?

The weird hard truth is: this is us. We, the digital industry, the people that are working in it are the ones who make everything, everything in our environment and work life ever more connected, fast, smooth, compelling, addicting even. The fundamental ethical contradiction for us is that we, the very people who suffer the most and organize the most against digital acceleration, are the very ones who further it.

a great talk challenging us to reflect on the moral dimensions of our work, especially in the digital product world.

Driverless Ed-Tech: The History of the Future of Automation in Education, by Audrey Watters

“Put me out of a job.” “Put you out of a job.” “Put us all out of work.” We hear that a lot, with varying levels of glee and callousness and concern. “Robots are coming for your job.”

We hear it all the time. To be fair, of course, we have heard it, with varying frequency and urgency, for about 100 years now. “Robots are coming for your job.” And this time – this time – it’s for real.

I want to suggest that this is not entirely a technological proclamation. Robots don’t do anything they’re not programmed to do. They don’t have autonomy or agency or aspirations. Robots don’t just roll into the human resources department on their own accord, ready to outperform others. Robots don’t apply for jobs. Robots don’t “come for jobs.” Rather, business owners opt to automate rather than employ people. In other words, this refrain that “robots are coming for your job” is not so much a reflection of some tremendous breakthrough (or potential breakthrough) in automation, let alone artificial intelligence. Rather, it’s a proclamation about profits and politics. It’s a proclamation about labor and capital.

a brilliant essay on automation, algorithms and robots, and why the ai revolution isn't coming. not because the machines have taken over, but because the people who built them have.

Personal Dynamic Media, by Alan Kay & Adele Goldberg

“Devices” which variously store, retrieve, or manipulate information in the form of messages embedded in a medium have been in existence for thousands of years. People use them to communicate ideas and feelings both to others and back to themselves. Although thinking goes on in one’s head, external media serve to materialize thoughts and, through feedback, to augment the actual paths the thinking follows. Methods discovered in one medium provide metaphors which contribute new ways to think about notions in other media. For most of recorded history, the interactions of humans with their media have been primarily nonconversational and passive in the sense that marks on paper, paint on walls, even “motion” pictures and television, do not change in response to the viewer’s wishes.

Every message is, in one sense or another, a simulation of some idea. It may be representational or abstract. The essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium itself, can be all other media if the embedding and viewing methods are sufficiently well provided. Moreover, this new “metamedium” is active—it can respond to queries and experiments—so that the messages may involve the learner in a two-way conversation. This property has never been available before except through the medium of an individual teacher. We think the implications are vast and compelling.

this great essay from 1977 reads so much like a description of what we do these days that it seems unexceptional – which is exactly what makes it so exceptional. at the same time, however, it thinks so much further ahead – which also makes it quite sad to read.

summing up 89

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox.

Information Underload, by Mike Caulfield

For many years, the underlying thesis of the tech world has been that there is too much information and therefore we need technology to surface the best information. In the mid 2000s, that technology was pitched as Web 2.0. Nowadays, the solution is supposedly AI.

I’m increasingly convinced, however, that our problem is not information overload but information underload. We suffer not because there is just too much good information out there to process, but because most information out there is low quality slapdash takes on low quality research, endlessly pinging around the spin-o-sphere.

we certainly have trouble creating the right filters for valuable content, but it also seems to me that it has never been easier to create valuable content – and never harder to find it. which is one reason i publish this ongoing series.

The Shock of Inclusion, by Clay Shirky

To the question "How is the Internet changing the way we think?", the right answer is "Too soon to tell." This isn't because we can't see some of the obvious effects already, but because the deep changes will be manifested only when new cultural norms shape what the technology makes possible.

The Internet's primary effect on how we think will only reveal itself when it affects the cultural milieu of thought, not just the behavior of individual users. We will not live to see what use humanity makes of a medium for sharing that is cheap, instant, and global. We are, however, the people who are setting the earliest patterns for this medium. Our fate won't matter much, but the norms we set will.

there is a vast difference between a tool and a medium. we use tools to improve a single capability, but a medium changes a whole culture. a website as a tool, for example, might enable you to present your business on the web; preparing a business for the next decade, however, requires a digital transformation that includes tools like a website, automation and digital communication channels.

Is there an Artificial God? by Douglas Adams

Imagine a puddle waking up one morning and thinking, 'This is an interesting world - an interesting hole I find myself in - fits me rather neatly, doesn't it? In fact it fits me staggeringly well, must have been made to have me in it!' This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be alright, because this world was meant to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.

There are some oddities in the perspective with which we see the world. The fact that we live at the bottom of a deep gravity well, on the surface of a gas covered planet going around a nuclear fireball 90 million miles away and think this to be normal is obviously some indication of how skewed our perspective tends to be, but we have done various things over intellectual history to slowly correct some of our misapprehensions.

So, my argument is that as we become more and more scientifically literate, it's worth remembering that the fictions with which we previously populated our world may have some function that it's worth trying to understand and preserve the essential components of, rather than throwing out the baby with the bath water; because even though we may not accept the reasons given for them being here in the first place, it may well be that there are good practical reasons for them, or something like them, to be there.

although this speech goes much further than the topics i discuss here, it carries a very profound idea. unknowingly we often make up stories about why our products and websites work or fail. regardless of whether we accept these stories as true, we can usually find practical reasons in them worth preserving while looking for the truth.

summing up 88

summing up is a recurring series on how we can make sense of computers. drop your email in the box below to get it straight in your inbox or find previous editions here.

The Myth of a Superhuman AI, by Kevin Kelly

We don’t call Google a superhuman AI even though its memory is beyond us, because there are many things we can do better than it. These complexes of artificial intelligences will for sure be able to exceed us in many dimensions, but no one entity will do all we do better. It’s similar to the physical powers of humans. The industrial revolution is 200 years old, and while all machines as a class can beat the physical achievements of an individual human, there is no one machine that can beat an average human in everything he or she does.

I understand the beautiful attraction of a superhuman AI god. It’s like a new Superman. But like Superman, it is a mythical figure. However myths can be useful, and once invented they won’t go away. The idea of a Superman will never die. The idea of a superhuman AI Singularity, now that it has been birthed, will never go away either. But we should recognize that it is a religious idea at this moment and not a scientific one. If we inspect the evidence we have so far about intelligence, artificial and natural, we can only conclude that our speculations about a mythical superhuman AI god are just that: myths.

probably my most shared article this month, and some very wise words indeed. what bugs me most, however, is that artificial intelligence (ai) seems to be displacing intelligence augmentation (ia). we try to make computers smarter, but we completely forget about making humans smarter – with the help of computers.

How to Invent the Future, by Alan Kay

As computing gets less and less interesting, its way of accepting and rejecting things gets more and more mundane. This is why you look at some of these early systems and think why aren't they doing it today? Well, because nobody even thinks about that that's important. Come on, this is bullshit, but nobody is protesting except old fogeys like me, because I know it can be better. You need to find out that it can be better. That is your job. Your job is not to agree with me. Your job is to wake up, find ways of criticizing the stuff that seems normal. That is the only way out of the soup.

the more advanced our hardware and technology become, the less we seem to innovate. i think one reason we had so much innovation in the early days of computing is that the people working on it included musicians, poets, biologists, physicists and historians who were trying to make sense of this new medium to solve their problems. an argument i made in my talk the lost medium last year.

The Pattern-Seeking Fallacy, by Jason Cohen

When an experiment produces a result that is highly unlikely to be due to chance alone, you conclude that something systematic is at work. But when you’re “seeking interesting results” instead of performing an experiment, highly unlikely events will necessarily happen, yet still you conclude something systematic is at work.

The fallacy is that you’re searching for a theory in a pile of data, rather than forming a theory and running an experiment to support or disprove it.

in the noise of randomness in our world we often find patterns. look at enough clouds, trees or rocks and you're bound to find a shape like a face, an animal or a familiar object. the problem is this: when we look at enough random data we'll find a pattern to our liking while discarding plenty of valid results that just don't fit this pattern.
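to make that concrete, here is a minimal sketch of my own (not from the essay, all numbers and names purely illustrative): generate completely random "metrics", then go looking for "interesting" correlations among them. a handful will pass a significance test by chance alone, which is exactly the trap of searching for a theory in a pile of data instead of testing one.

```python
# sketch of the pattern-seeking fallacy: scan enough random, unrelated
# "metrics" and some pairs will look significantly correlated by chance.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_metrics, n_samples = 50, 30
data = rng.normal(size=(n_metrics, n_samples))  # pure noise, no real structure

spurious = []
for i in range(n_metrics):
    for j in range(i + 1, n_metrics):
        r, p = pearsonr(data[i], data[j])
        if p < 0.05:                      # an "interesting result" found by searching
            spurious.append((i, j, r))

# with ~1225 pairs tested, roughly 5% will pass p < 0.05 by chance alone
print(f"{len(spurious)} 'significant' correlations found in random noise")
```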

summing up 87

summing up is a recurring series on how we can make sense of computers, with topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox or find previous editions here.

How Technology Hijacks People’s Minds, by Tristan Harris

The ultimate freedom is a free mind, and we need technology to be on our team to help us live, feel, think and act freely.

We need our smartphones, notifications screens and web browsers to be exoskeletons for our minds and interpersonal relationships that put our values, not our impulses, first. People’s time is valuable. And we should protect it with the same rigor as privacy and other digital rights.

the way we use, create and foster technology today will one day be looked back on the same way we now look back at asbestos in walls & floors or naive cigarette smoking. creating useful technology is not about creating a need in the user, but about creating things that are good for the user.

Build a Better Monster: Morality, Machine Learning, and Mass Surveillance, by Maciej Cegłowski

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we're good people. We like freedom. How could we have built tools that subvert it?

We need a code of ethics for our industry, to guide our use of machine learning, and its acceptable use on human beings. Other professions all have a code of ethics. Librarians are taught to hold patron privacy, doctors pledge to “first, do no harm”. Lawyers, for all the bad jokes about them, are officers of the court and hold themselves to high ethical standards.

Meanwhile, the closest we’ve come to a code of ethics is “move fast and break things”. And look how well that worked.

the tools we shape, shape us and create a new world. but technology and ethics aren't easy to separate – that new world doesn't necessarily have to be a better world for all of us. maybe just for some.

Is it really "Complex"? Or did we just make it "Complicated"? by Alan Kay

Even a relatively small clipper ship had about a hundred crew, all superbly trained whether it was light or dark. And that whole idea of doing things has been carried forward for instance in the navy. If you take a look at a nuclear submarine or any other navy vessel, it's very similar: a highly trained crew, about the same size of a clipper. But do we really need about a hundred crew, is that really efficient?

The Airbus 380 and the biggest 747 can be flown by two people. How can that be? Well, the answer is you just can’t have a crew of about a hundred if you’re gonna be in the airplane business. But you can have a crew of about a hundred in the submarine business, whether it’s a good idea or not. So maybe these large programming crews that we have actually go back to the days of machine code, but might not have any place today.

Because today – let's face it – we should be just programming in terms of specifications or requirements. So how many people do you actually need? What we need is the number of people that takes to actually put together a picture of what the actual goals and requirements of this system are, from the vision that lead to the desire to do that system in the first place.

much of our technology, our projects and our ideas come down to focusing on everything but the actual requirements and the original problem. but it doesn't matter how exceptional a map you can draw if someone asks for directions to the wrong destination.

