Regulating Technologies

March 30, 2009

Professor Brownsword’s theme was how well law performs in controlling the risks presented by rapidly developing technologies, such as ICTs and biotechnologies, and how lawyers might better contribute to debates about shaping the right regulatory environment. He considered this through three questions, exploring in turn the nature of regulatory environments, the ways in which we assess their adequacy, and why these areas should be of interest to lawyers.

Professor Brownsword’s subtext throughout the lecture was that a narrow legalistic approach is not sufficient to provide the most useful analysis. He conceptualised regulatory environments as environments which produce signals as to behavioural outcomes. These signals might be normative, indicating the right thing for the actor to do; prudential, indicating what ought to be done in the actor’s self-interest; or practical, indicating what the actor can do in terms of what is or is not possible in defined circumstances. The inputs for considering these approaches are to be found in legal theory, ethics, and design and technology. He illustrated this by reference to a number of regulatory theorists, including Julia Black, Larry Lessig and Mireille Hildebrandt, and by simple examples of how everyday regulated activities, such as wearing seat belts, can be governed by social pressure, by law in the form of punishment for non-compliant behaviour, or by coding or designing the vehicle so that it is immobilised in the event of non-compliance.

In assessing the adequacy of regulatory environments, Professor Brownsword raised a series of further questions: whether the objectives of regulation are legitimate or justifiable; whether the mechanisms for pursuing those objectives are fit for purpose and effective; whether they are properly connected to the technology they seek to regulate; and whether they align appropriately with cosmopolitan ideals and values.

In assessing the legitimacy of regulatory objectives, he compared and contrasted the different approaches brought to the debate by ‘goals people’, whose starting point is an assumed duty to innovate and enhance for the greater utility of society; ‘rights people’, whose perspective is shaped by concerns about human rights and questions of informed consent; and ‘dignitarians’, whose perspective is coloured by concepts of individual duty and the avoidance of enhancing innate human characteristics. In the biotechnology area at least, whether the regulatory environment is attempting to achieve the right objectives will depend on where the observer stands in relation to these three approaches. In the IT space, these views translate loosely, for example in the context of data protection and privacy, into ‘goals people’ favouring action unless very real risks of a serious kind are identified, while ‘rights people’ accept action provided that appropriate consents are obtained from the relevant populations. ‘Dignitarians’, it was argued, have not been such a prominent voice, except in narrow areas such as data collection and monitoring in the context of implanted devices.

The process of assessing whether a regulatory environment is seeking to realise its objectives in the right way is not straightforward, given the challenges of achieving transparency and proper public engagement. New technology is increasingly difficult for the layperson to understand, which can easily inhibit proper engagement by all but a narrow segment of appropriately educated technology specialists and thereby encourage elitism of an unhealthy kind.

Turning to the effectiveness of such regimes, regulators themselves may be corrupt, inadequately financed, or hampered by a lack of clarity of purpose. Regulatees, in their turn, might have their own views as to the appropriateness of the objectives, even when those objectives are known and clearly stated: resistance to efforts to control peer-to-peer file-sharing is one example. Even where both regulators and regulatees are functioning effectively, externalities might arise to disrupt the achievement of agreed objectives: spam originating from outside the EU is a highly potent disruptive externality which potentially undermines otherwise widely accepted regulatory efforts within the EU.

Society is increasingly turning to emerging technologies to regulate the environment we inhabit, which makes these issues ever more relevant in the context of social engineering. Professor Brownsword cited DNA profiling, CCTV and biometrics as examples of a whole new apparatus that increasingly controls how people behave, and that demonstrates how technological design is deployed in these arenas in ways whose implications and possibilities are not easily comprehended by non-experts.

Turning to the implications for lawyers, Professor Brownsword believes that we live in a world where technology is ever advancing and pervasive and itself requires regulating, and where technologies, in particular emerging technologies, are increasingly deployed to achieve regulatory fixes. He argues that in such a world the law must be seen as only one conceptual strand in a multi-disciplinary regulatory environment, where other strands are increasingly encountered and potentially potent. Lawyers need to understand this, engage properly with it and respond.

In a surveillance society of the kind first envisaged in Jeremy Bentham’s Panopticon, the state can, if it chooses, impose any number of violations of traditional human rights or freedoms. A contrary view might be that, provided the technological instruments of regulation align with our moral values, why worry? And yet, if the moral dimension and free will are increasingly squeezed out of, or to the edge of, individual decision-making, motivation becomes increasingly prudential rather than moral. No longer are the right things done for the right reason, and we risk demeaning ourselves.

In short, lawyers should be concerned to contribute to debates about getting the regulatory environment right for emerging technologies; but they should also be concerned about the implications of technology and design replacing law as a channelling mechanism.

This lecture provided an excellent conceptual analysis of the interface between regulation and technology, and an intellectually invigorating start to this new lecture series. One area that Professor Brownsword did not explore, but which merits consideration, is the extent to which the education of lawyers, at school, university or afterwards, equips them to engage with the technological issues that, in the speaker’s view, are now part of any proper consideration of our regulatory regimes.

Bill Jones, of Counsel, is a member of the Outsourcing, Technology and Trade Group at Wragge & Co LLP. He is also Chair and a Trustee of SCL.

The Internet, Society and Law Seminar Series

This lecture was part of a series organised to provide a platform for leading international scholars to address emerging legal issues concerning the Internet: its use, governance and regulation.

The Internet is raising new questions about legal principles, their implementation and enforcement in cyberspace. Does an understanding of law and the Internet simply require a more technologically sophisticated analysis of traditional legal principles, or is the Internet creating a need for new perspectives on law and regulation?

Each seminar will focus on a different emerging legal issue concerning the Internet. Responses to the main lecture will be invited from a lawyer and a social scientist in order to offer a broad range of perspectives on the issue at hand. A moderated audience discussion will follow.

The next lecture in the series, by Dr Elizabeth Judge, takes place on 6 May.