© 2022 WNIJ and WNIU
Northern Public Radio
801 N 1st St.
DeKalb, IL 60115

A Conversation with Cory Doctorow


Science-fiction author, technology activist and blogger Cory Doctorow visited the NIU campus recently. Doctorow has long been involved in issues of privacy and freedom related to modern technology. His bestseller “Little Brother” focused on just those issues as a group of teens use the internet to battle government attempts to strip them of their civil liberties. WNIJ’s Guy Stephens had a chance to sit down with Doctorow for a conversation about the dilemma society faces in the internet age.


Q. What do you see as the central question facing people in dealing with the internet?

A. I think there are two. The first is that the internet makes it easier for people to do things together. That doesn’t sound like a dilemma, except that it’s a profoundly disruptive phenomenon.

Ronald Coase, who won the Nobel Prize in Economics, argued in his 1937 paper “The Nature of the Firm” that every one of our social institutions – whether a church or a crime syndicate or a corporation – exists only to coordinate people. The reason we have them is that for two or more people to do something together requires some kind of glue so that we can coordinate our activities.

There are a whole bunch of institutions that don’t necessarily need to be institutions any more – at least not really big ones. So we can make things that are on the order of complexity of a skyscraper today with the kind of organization that you might put into a potluck – like Wikipedia. An encyclopedia is something so complex that it used to require staffs of hundreds working with experts from all over the world in a very tightly coordinated fashion. Now it’s a dynamic, ongoing process, not even a product any more, done in a way that requires a very small foundation with a very small number of employees coordinating a much larger pool of volunteers. So this is profoundly disruptive.

The other thing that’s profoundly disruptive is that we have a single technology that is both our communications technology and our distribution technology. That’s the internet and the computer. And that technology is resolutely general purpose. We’re really used to thinking of complicated things as being special purpose. You know, if you’ve got a car, you can take the speakerphone out and it’s still a car. We think of computers as being special purpose in the same way. We say, “Well, can’t you just make us a computer and an internet that don’t allow file sharing, or that don’t allow terrorism, or that don’t allow the wrong kind of hate speech?” The thing is that, because it’s both the distribution system and the communications system and because it’s resolutely general purpose, the closest we can come to accomplishing that goal is just spying on everything you do with it, which is what we’ve been doing to date when we try to control these things. And when you spy on everything you do with a computer, you spy on everything you do, because everything you do now involves a computer.

So I think that those are the two really profoundly disruptive things about technology as we experience it today.

Q. You do a number of different activities. Was there a reason to write a story like that as opposed to simply putting out another blog or writing an article or putting out a release that would essentially cover the same topics?

A. I’m a science fiction writer first; that’s what I’ve always done. And I think that science fiction is an important part of how we hold policy discussions about technology, because technology discussions – discussions about the consequences of technology – tend to be pretty bloodless, right? Imagine that this is 1947 and I say to you, I’ve got a bunch of technologies I can use to keep an eye on what people are doing, and eventually we’ll expand that web of surveillance to the point where we can see all the bad things as they’re happening, send the police to just the right spot, and shut them down. You know, our eye will be on the sparrow as it falls from the branch and we will be as gods. It’s a really hard argument to argue with, ‘cause on its face it sounds very rational. But in 1948, George Orwell wrote “1984.” And “1984” puts a lot of blood and sinew into the argument by taking the reader through the emotional impact of what it means to be ubiquitously surveilled. What “1984” does is take us from a position of saying, “I think it would be just kind of icky if you could see everything that I did, and I can’t express it any better than that,” and replace that very thin argument with a really meaty one. We can now use this very useful term: “It would be Orwellian if you were to spy on everything I did.”

Science fiction isn’t a very good predictive literature. Its track record of predicting the future is terrible. But its track record for inspiring futures and preventing them is actually pretty good. And what I hoped to do with “Little Brother” was write a novel that young people could read that might inspire them to think of technology as a tool that could both oppress and liberate them and to make choices that would cause technology to liberate them.

Q. Given this dilemma, given these problems, these questions, what are solutions?

A. I guess it depends on the problem that you’re talking about. But, starting with, say, the war on terror, I would say that if you look at the evidence that came out after the Sept. 11 attacks, in particular the recommendations of the 9/11 Commission, what you find is that the commission concluded that law enforcement knew everything it needed to know to predict the 9/11 attacks. But they knew so many other irrelevant things that they had a hard time joining up the dots until after the fact. And they had some recommendations about using better technology to keep track of what’s going on. Most of those recommendations seem to have been set to one side, and instead we seem to have concluded that, if we have some needles we need to find in a haystack, the thing we need to do before anything else is to make the haystack as big as possible. We have started to indiscriminately gather data on people who’ve done nothing that generates particularized suspicion – for example, the NSA’s wiretapping of all the traffic crossing AT&T’s network – and there’s no theoretical basis to assert that this is going to help us find these rare occurrences, these black swans, that we already had a hard time locating in the much smaller amount of data we were gathering before.

I think there’s nothing wrong with developing modern policing techniques, but those techniques can’t be founded on the idea that, “Something must be done. There, I’ve done something. Something has been done.” I think they have to be founded on a coherent theory that’s better than “It’s better than nothing,” or, “At least it’s not nothing.” We should start by saying, “Let’s gather information in a way that’s discriminating,” that takes account of particularized suspicion, instead of gathering information on everyone to see whether somebody might be doing something statistically anomalous.

Q. Is there something else that you felt like you just really needed to shout out?

A. Sure. Right now we’re at a cusp in the design of devices and of networks, where we’re moving from a norm in which devices are designed to take orders from their owners to one in which devices are increasingly designed to take orders from remote parties. And that’s under a lot of different rubrics, right? We say, well, we don’t want you changing your phone because we want you locked to your carrier, because your carrier subsidized your phone. Or we don’t want you changing your Xbox because we want to make sure you’re not running pirated software on your Xbox. Or we don’t want you changing your hearing aid because the embedded software in your cochlear implant is the stuff that the manufacturer thinks is best. And the problem is that, when we start to design devices so that their owners can’t know what’s going on in them and can’t stop them from doing things that don’t represent their best interests, we are on a path to a world in which our devices stop serving us and start controlling us. Any measure or technology put in place to control the owner of a device and disguise the device’s operations from the owner can be hijacked and used against the owner’s interests. I think it’s really important that we take stock of this and not just march blindly into it.