Iron Laws of Cyberspace: The Need for Multidisciplinary Perspectives

August 31, 2004

I am a social scientist – not a legal scholar. So allow me to focus on what social scientists might call laws – generalisations that are sufficiently descriptive over time and place to be viewed as law-like.

Of course, there are few laws in the social sciences, but one frequently hears law-like generalisations about IT and society, many of which involve the law – what might be called 'iron laws of cyberspace'.

Looking at some of these iron laws could help illustrate intersections between information, technology and the law – and therefore the value of multidisciplinary perspectives.

I direct the Oxford Internet Institute, which is dedicated to multidisciplinary research on the social implications of the Internet and related information and communication technologies. I believe there is a case for legal scholars valuing a multidisciplinary perspective as well.

But let me add that, being a social scientist focused on the study of the Internet, I have gained tremendous respect for legal scholars. Over the last few years, in fact, legal scholars have invigorated debate about information and communication technologies like the Internet in major ways. For example, Professor Lawrence Lessig's work on the Internet and related ICTs has been incredibly influential. Larry Lessig has done more than any other recent author to inform and enrich debate about the social and economic implications of the Internet, such as through his book Code and Other Laws of Cyberspace (1999), which has certainly influenced my title today.

I will briefly discuss just six questionable maxims or ‘laws of cyberspace’, develop my own alternative framework for thinking about the societal implications of ICTs, and then try to draw some general conclusions for discussion. You may be able to provide me with many more questionable laws of cyberspace, but let’s begin with six of my favourites:

Information, Technology, Legal Aphorisms

1. Code is law.

2. Law lags technology.

3. Information yearns to be free.[1]

4. Limited only by your imagination.

5. Information is power.

6. The more virtual the more real.

Code is Law

Professor Lessig reached thousands of students, academics and practitioners in coining the idea that 'code is law'. This phrase conveys both a very general and a specific argument. He argues that commercial interests, such as the music industry, are adopting technical approaches – through computer code – to block the unauthorised copying or distribution of content. Laws are also being written to stop people from circumventing these codes. As Lessig (1999: x) puts it: "Control will be coded, by commerce, with the backing of the government."

This is but one example, he argues, of a general trend towards the use of law to protect software code that regulates behaviour in favour of some particular commercial interests over others, or in favour of commercial activity versus non-commercial, open public access. Code is being used to control.

On the one hand, this notion follows an important theme in social research on technology – the idea that technologies do matter. I find Professor Lessig's work most helpful for this very reason. In the 1980s, Langdon Winner made this point when he argued that "artefacts have politics". Technologies are never neutral. Instead, any technology biases some values or interests over others. Many have made this point in other ways, saying "networks have politics" or that technology is like policy, but 'code is law' has captured more imaginations. On the other hand, it can be read as an overly deterministic view, one that closes off social research on the actual implications of code.

Ever since computers were invented, many expectations about the social implications of computing have been built on deterministic forecasts. Specific features of technology are used to rationally extrapolate the likely social consequences. This may seem obvious to many people, but many others, including social and computer scientists, have often moved from specific features of new technologies to extrapolate long-term consequences for society as a whole. We’ve all heard predictions of the paperless office, the growing leisure class, or that geography will no longer matter, and wonder what went wrong. Our predictions go astray because we are looking at the implications of technology from the wrong perspective.

Take another relatively deterministic perspective, the very influential views of Ithiel de Sola Pool, whose book, Technologies of Freedom (1983), captured the prevailing wisdom of the early 1980s, in the midst of excitement over – not the Internet – but videotext. Two decades ago, Pool argued that the new electronic media formed from the convergence of print, film, broadcasting, and common carrier communication should not be taken as analogous to these existing media and regulated as the press, broadcasting or telephone systems. Instead, he argued, the new media were inherently technologies of freedom, where no issues of scarcity of spectrum or pervasiveness of impact justified their regulation by government. Government, Pool argued, should refrain from regulation of a medium that, in the US context, he felt should be protected by the first amendment.

The hardware and software of the new media – videotext to Pool – were inherently democratic and should be protected by law. Code is law, and law should protect code. Pool himself admitted to being a 'soft determinist'.[2] To Pool, the new media were technologies of freedom; to Lessig, they are technologies of control.

Moving back further, in the 1970s, in writing about the social impacts of the telephone, Pool (1977) recognised the inherently ‘dual effects’ of ICTs. It is easy to see that the impacts of ICTs are often countervailing – ICTs are not neutral but are double-edged swords. For example, the Internet can be used to protect or invade privacy, to isolate individuals or to build new virtual communities, to free or control the distribution of information. Technical features of the computer and related ICTs do not predetermine societal outcomes.

Code is certainly being used to block the copying of music, but it is also being used by many to circumvent those controls. I remember years ago when I told my class in the US about the news that Michael Jackson's new CD was designed to prevent copying. The next day students distributed instructions among the class on how to copy the CD – and this was a course in communication, not computer science. How many reading this have downloaded music or other files that might have violated copyright protections? So much for 'code is law' – though perhaps it will hold in the future.

My own research over decades has most often found ICTs used to reinforce existing structures of power and influence (Danziger et al 1982). They are not inherently supportive of more centralised, or more decentralised, control structures.

There are other disadvantages to the notion that code is law. It focuses attention on computer code to the exclusion of many other issues. There is a focus on 'open source', which is a fascinating development, but one that is leading to a movement for law to force technology – to force open source development of software. Forcing technology is almost always a mistake. It assumes that certain code is a technology of freedom and uses law to enforce that code.

And there is a focus on the law as the key constraint or factor shaping the development and impact of software, to the near exclusion of factors such as the conceptions and practices of users. Code is meaningless if rejected by users.

Think of new location-based technologies, such as GPS systems tied to mobile phones. The promise is that if everyone knows where everyone is, all sorts of good things will happen, such as feeling more secure because you know where your family is at all times. Now think of the potentially unintended social consequences: putting your family at more risk because you feel more secure about their whereabouts, feeling more anxiety rather than less now that you are tracking everyone in real time, and being further overwhelmed with more information than you need or want.

So I worry that there are some ways in which the idea that code is law can undermine more systematic empirical analysis of the social shaping and implications of the Internet and related ICTs.

Law Lags Technology

Another maxim that gives me even more difficulty is that the law lags behind technology. We all understand what is meant by this phrase – the rapid pace of technological innovation outstrips the ability of the legal system to respond. To quote from one business journal: “technology has outpaced the ability of law to keep up with it.”[3]

Current discussions of global Internet governance and regulation are excellent examples of this tendency. Similarly, the OII is currently undertaking a policy-oriented study of third-generation mobile phones and their use by children – what we call our Next Generations Project. In these and other ways, technological advances often seem to leap ahead of law and regulation. Or do they?

Time and again, I am struck in my research by the force of the law in shaping technology. More often than this maxim suggests, law precedes and shapes code and new ICTs, rather than following them. That is, most software and hardware initiatives are designed in light of existing and anticipated legal frameworks. For example, I studied the design of an electronic voter guide in the US. Its developers were constantly debating the legal status of every design decision they made: how did it conform to first amendment concerns, campaign finance regulations, liability concerns and more? Or take the design of the v-chip or violence-chip, a technology designed to allow parents in the US to control the viewing of TV by their children. The v-chip's design, and its endorsement by the government, were shaped by existing legal precedents on censorship and the first amendment. Law and technological innovation combined to spawn the v-chip.

Copyright certainly preceded peer-to-peer sharing of music, and players with deep pockets are taking existing law and regulation into account when designing technologies and related business models. Napster originally ignored legal precedents, but was penalised for doing so – the law was there to respond.

I prefer the notion of technological or digital loopholes – cases where the law is written so specifically that digital technologies, for example, are not covered. This means that law and regulation often need to be adapted to new technologies, but the best developers are constantly anticipating how existing law and regulation will apply to new technologies. In this respect, it is somewhat misleading to believe that one is behind the other – they are co-evolving.

I find the maxim misleading because all institutions can constrain technological change. When we say that the law lags technology, we often mean that the legal and political processes of decision-making have been slow. This is often the case when there is a lack of consensus on appropriate responses. If we were to say that a consensus on legal remedies to technological loopholes often takes time, then I agree.

Information Yearns to be Free

John Perry Barlow is fond of the phrase “information yearns to be free”. His point is not too far off that of Ithiel de Sola Pool – that the new media are inherently democratic – not susceptible to control. Try as you might, information will break free.

Therefore, commercial and governmental efforts to control information, such as through copyright, are destined to fail, and should fail. They run counter to nature.

Barlow and others attribute this phrase to Stewart Brand. If we go back to the source, Brand, in 1984, said:

"On the one hand information wants to be expensive, because it's so valuable. … On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all of the time. So you have these two fighting against each other."

Notice how Barlow and others have simplified the argument, dropping the countervailing forces. For Brand, information could be expensive or free; in the simplified version, it is inherently free. Again, this is overly deterministic.

I also have many problems with a focus on information. Information is often misleading – what Ackoff called 'misinformation' – unwanted, not influential, or so broadly defined as to be meaningless. The Internet and Web are used to get and provide information, but they are also used to communicate with people, to access services, and to gain access to other technologies – a computer on your desk can enable you to gain access to millions of computers around the world.

So information is too narrow a term, if defined in meaningful ways, since ICTs shape access not only to information but also to people, services and technologies themselves. More importantly to me, it is a single-issue, single-value argument. The idea that information should be accessible is appealing to all liberally minded individuals. It is the essence of America's first amendment, protecting freedom of speech and the press, and in line with the sentiments of most liberal democracies of the West. However, there are other values that sometimes compete with information wanting to be free. One is privacy.

Privacy of course has many meanings in different legal systems, but it often refers both to a freedom to be left alone – in this respect, unwanted spam or indecent e-mail can be seen as an invasion of privacy – and to the unauthorised disclosure of personal information, such as using personal information for purposes other than those for which it was collected. However, the Internet and related ICTs need to be understood within a broad ecology of choices, rather than through a one-dimensional view of the world. There is a need to balance conflicting values.

Digital choices are easy when considering one value in one game or arena of action: Do I want information to be available to the public? Yes. Or do I want to invade someone's privacy or respect their right to privacy? Of course, we should respect their privacy. But life usually forces us to weigh multiple conflicting values in separate but interrelated games.

In Britain, the Hutton Inquiry focused on the pursuit of the truth in the death of a weapons inspector, David Kelly. Few would question the virtue of that pursuit. But is it right to violate the privacy of electronic mail among members of the government in order to pursue that truth? However you answer that question, you must weigh the relative importance of competing values in relation to objectives that go beyond or outside the Hutton Inquiry per se.

Privacy-Trust Tension or Privacy-Surveillance Spiral

Many recognise a tension between identity and privacy. We need systems to identify people on the net, it is argued, in order to ensure the trust required for important transactions, from e-commerce to e-voting. However, as we put more personal identification data on the line to enhance trust, we can undermine privacy to the point that it diminishes trust. There is a need to balance privacy and trust concerns in finding the right approach to identification.

That is a healthy tension, which could yield better digital choices if well recognised. However, my fear is that we could move into a privacy-surveillance spiral, where identification systems, including electronic surveillance of all kinds, yield clear successes, leading to even more inroads into privacy.

I think the Hutton Inquiry could be an example. Bringing what was believed to be private intra-office e-mail into the inquiry helped it get to the bottom of who said what to whom, but its very success will encourage similar incursions. Famous success stories with video surveillance have the same effect of fuelling more video surveillance. In Britain, the case of a cyclist caught on surveillance cameras slashing the tyres of automobiles is one illustration. People felt justice was done in catching the tyre slasher, but the success will fuel ever more surveillance.

Public opinion will – it is rather certain – support such a spiral. The public is willing to surrender privacy for its health, safety, even convenience and consumer credit. So here, as in many other instances, judges, politicians, lawyers, civil servants and leaders from all arenas need to defend and protect all basic digital rights and ensure that competing values are responsibly balanced.

We are involved in multiple games – protecting ourselves and our families, working for a living, supporting various national and public goods. This means that we often face choices that must balance competing values. Slogans like ‘information wants to be free’ tend to wash over these value conflicts and complexities, however appealing they might be on their face.

Limited Only by Your Imagination

Information technology is often viewed as inherently flexible – far more malleable than physical artefacts like roads and buildings. The saying goes that we are only limited by our imaginations. Look at a case in point: www.omnistep.com/.

However, as Paul Quintas (1996: 85-9) pointed out years ago, most large-scale information system developments evolve over years, even decades, of design and programming change, involving teams of developers working on different parts of the system. These legacy systems can be viewed, as Quintas suggested, as ‘electronic concrete’. You might view computer games as an exception – as truly limited only by the imagination of their developers. Maybe, but the latest estimate I’ve heard for the time required to develop a major commercially successful computer game is 150 person-years.

We are not only limited by our software, but also by our imaginations. My colleague Ted Nelson invented the term 'hypertext', yet he is dissatisfied with how the concept has been represented in software and, for example, in the design of the Web. Nelson envisioned an information system divorced from the paradigm of paper and hierarchy, but found system developers and users constrained by those very paradigms.

So programmers cannot always reflect our imaginations in software, and our imaginations are in turn limited by existing software and hardware.

Information is Power

Another even more common refrain of discussion about the new information society is the view that ‘information is power’. However accepted this notion has become, it can be misleading for a number of reasons.

First, Francis Bacon's maxim was 'Knowledge is power' [Nam et ipsa scientia potestas est], not 'information is power'. Second, a focus on information can obscure the degree to which information can be what Russell Ackoff called 'misinformation'. Information can be misleading, unwanted, and overwhelming. People often want information they don't need and won't use, and use information that they shouldn't. Information can also be weak in its effects, as information about the health risks of smoking demonstrates.

Information is not new to the electronic age; it has always been significant, as critical to agricultural societies as to post-industrial ones. Even Daniel Bell, the seminal thinker on the post-industrial society, recognised that what is new is not the value of information, but its codification.

Computers have been changing the way we capture, archive, retrieve and otherwise process information and, therefore, how we gain access to information – that is the innovation. Every medium reconfigures how we get access to information. This was McLuhan's point about the medium being the message.

But ICTs reconfigure access to more than information. ICTs also reconfigure access to people, services and technologies. They change not only the way we get access to these resources, but also the outcomes. Choices that you and others make about the Internet will change who you know, what you know, what you consume – as well as where and when you consume it – and what know-how and equipment you require.

The critical issue is the degree to which the Internet and related ICTs can reconfigure access to information, people, services and technologies in ways that can erode or enhance the communicative power of different actors.

The More Virtual the More Real

A colleague of mine, Steve Woolgar (2002: 17), has posited a number of rules that emerged from his synthesis of research on information technology and society. One is: 'the more virtual the more real'. This is an attractive proposition, as it counters a common view that electronic communication tends to substitute for face-to-face or 'real' communication. Arguments that the Internet will isolate users at their computers rest on this assumption of substitution. The counter-argument, that people use the Internet and other ICTs to complement and reinforce real communication, has a great deal of support.

Research within the World Internet Project supports this view of complementarity: Internet users are actually more sociable than non-users. However, the debate over complementing versus substituting for real communication misses a more general argument – that ICTs can be used strategically to reconfigure access to people. The Internet has no deterministic effect in either substituting or isolating, but could do either or both. In such ways, it can be used to reconfigure who you know and who you communicate with, not just how much you communicate. This is suggested by the fact that Internet users meet in person people whom they first encountered online, and also have online friends whom they never meet in person.

Conclusion: Reconfiguring Access

There are many other maxims we could discuss, such as the notion that you cannot regulate the Internet, but there is insufficient space to deal with these and I don’t want you to think I reject all maxims. For example, I truly believe the maxim that: Youth is wasted on the young.

I don't wish to be entirely negative, merely criticising others' ideas.

Let me try to quickly develop an alternative maxim, which guides my own work, and which focuses on how ICTs reconfigure access.

I have argued that the major role of the Internet and related ICTs is to reconfigure access to information, people, services and technologies. This is not neutral at all, as it changes the relative communicative power of you and others. Technologies matter. But they are not deterministic either. ICTs like the Internet reconfigure access in a variety of ways. They bypass gatekeepers, shift the power of senders and receivers, change the geography of access, and more.

Moving With and Beyond Aphorisms

Scepticism

I believe that social scientists and legal scholars can and should bring data, observation, and a healthy scepticism to debates about the Internet, ICTs and the law. It is important that we question claims about technology and the law, and laws about technology, whether they are positive (technologies of freedom) or negative (technologies of control).

Technological Determinism

It is important to avoid technologically deterministic perspectives, as social outcomes do not simply follow from technical change. Technologies matter, but they are designed, adopted, rejected, implemented, managed and used by people in often unpredictable ways.

Complex Ecology of Policy Issues

Social and legal aphorisms usually blind us to the paradoxes and larger ecologies of choices that surround real legal and policy issues. It is important that policy choices be understood within a broad ecology of choices, creating a need to balance conflicting values.

Democratic Elitism – Legal Leadership

In a context of growing public use and concern over the Internet, from spam to indecency to surveillance, it is even more essential that opinion leaders defend liberal democratic traditions. They need to carry out this balancing act in a way that brings the public with them, and steers away from such trajectories as a privacy-surveillance spiral.

Conclusion

I hope I have not been too critical of one of your favourite maxims. I am used to people taking me to task for my claims. So let me end by saying that we are actors in a complex ecology of games in which we can and must use computers and networks in more strategic ways to reconfigure access – opening in some cases and closing in others – to shape the communicative power of the public.

Let me apply to all social scientists a definition of an economist that I recently heard. To paraphrase a colleague: "A social scientist is someone who sees something that works in practice and then wonders if it works in theory."

William Dutton is Director of the Oxford Internet Institute. www.oii.ox.ac.uk

References

Barlow, J. P. (1994), ‘The Economy of Ideas’, Wired, 2.03, March, see: http://www.wired.com/wired/archive/2.03/economy.ideas_pr.html

Clarke, R. (2001), 'Information Wants to be Free', version of 24 February 2000, URL amended 28 August 2001. See: http://www.anu.edu.au/people/Roger.Clarke/II/IWtbF.html

de Sola Pool, I. (1977) (ed.), The Social Impact of the Telephone (Cambridge, Mass.: MIT Press).

de Sola Pool, I. (1983), Technologies of Freedom (Cambridge, Mass.: Harvard University Press, Belknap Press).

Dries, M. (1998), ‘Computer Law Lags as Technology Evolves’, Puget Sound Business Journal, 14 September 1998.

Dutton, W. H. (1999), Society on the Line (Oxford: Oxford University Press).

Lessig, L. (1999), Code and Other Laws of Cyberspace (New York: Basic Books).

Mansell, R., and Silverstone, R. (1996) (eds), Communication by Design (Oxford: Oxford University Press).

Michels, R. (1959; 1915 orig.), Political Parties (New York: Dover Publications).

Quintas, P. (1996), ‘Software by Design’ in Mansell and Silverstone (1996).

Woolgar, S. (2002) (ed.), Virtual Society? Technology, Cyberbole, Reality (Oxford: Oxford University Press).

Notes


[1] Roger Clarke (2001) credits Stewart Brand with originating this aphorism.

[2] Pool described himself as a ‘soft technological determinist’ (Pool 1983).

[3] Dries, 1998.