SCL Policy Forum Report: Computer and Communications Law Reform

October 17, 2012

Broadly structured around the theme of computer and communications law reform, this year’s Forum lived up to its reputation for lively debate at the intersection of academic, regulatory and practical perspectives on telecommunications law. This article briefly comments on each of the panel discussions.

Cloud and mobile computing

Chris Marsden (Essex) opened the forum with a session setting out his vision for a coherent transnational system of cloud governance rules. He argued that it is unsafe to leave regulation of cloud services to existing self-regulatory systems, which suffer from two major defects. First, they tend to be dominated by large service providers and therefore lack legitimacy in the eyes of their users. Second, the patchwork of rules is uncertain, excessively detailed and contradictory. Jonathan Cave (Warwick) continued the discussion by highlighting several dangers of self-regulation. Cloud systems, he observed, are complex and self-organising, but their failure could have a ‘profound impact’ on financial markets, privacy and security. Tim Cowen (Sidley Austin) responded, observing that the private cloud is not a new phenomenon; it has come into public focus largely as a result of recent consumer-facing cloud services. He identified a shift towards a more constructive regulatory approach which emphasises compliance rather than ex post enforcement, and advocated a framework within which both competition and consumer outcomes can be considered. Finally, Nico van Eijk (Amsterdam) advocated a close examination of the real problems involved in cloud computing to determine how communications can best be protected.

Privacy and data retention

Mike Bond (ICC) opened an energetic discussion panel by speaking about the ‘cookie wars’. He identified various difficulties faced by UK companies seeking to comply with the Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011, which require a user’s informed consent before information is stored on, or accessed from, their terminal equipment. Mr Bond identified substantial confusion about the timing and degree of consent required, stemming partly from the flexibility inherent in the UK transposition of the Directive. In determining whether consent is adequate, context is everything; he singled out the BBC and BT websites as examples of best practice. Dave Evans (ICO) explained the ICO’s enforcement strategy, which is focused on fulfilling its statutory duty to communicate details of the new law and to assist businesses along the path towards full compliance. The ICO is most concerned about non-compliance in areas of concern to consumers (as evidenced by complaints) or in ways that affect sensitive data; monetary penalties are ‘not a first step’ for the regulator. A more under-reported change in the law is security breach notification, which aims to permit self-protection by consumers and organisations so that appropriate action can be taken following an intrusion. The ICO is taking a ‘cautious approach’ to implementing the law, aware that it could be inundated with notifications if every data controller were to report each breach. A quantitative threshold may be advisable.

Mark Turner (Herbert Smith) spoke about data retention and the Communications Data Bill. The law of data retention, he said, is in a ‘state of suspended disarray’. The Data Retention Directive has been widely criticised as ineffective, inconsistently transposed and ‘the most privacy-invasive instrument ever adopted by the EU’ (quoting the European Data Protection Supervisor). A proposal for a revised directive will be published next year. The basic challenge remains how to balance users’ rights to privacy of communications and data protection against the need to detect and prosecute serious crime. Mr Turner then focused his remarks on the bill, which aims to modernise the UK retention regime to cover IP telephony data and certain social network information. Response to the draft bill has been overwhelmingly negative. One of its few positive aspects is the inclusion of obligations to keep retained data secure and to prevent unauthorised use. Compliance costs are likely to be high. Function creep is a serious concern, since retained or created data may be put to many uses, including by private litigants in Norwich Pharmacal proceedings or by those correlating independently retained datasets, a prospect which is no longer merely ‘scary science fiction’.

After lunch, Judith Rauhofer (Edinburgh) argued that the right to be forgotten is a ‘sideshow’ which distracts from more important issues in data protection regulation. The so-called right is excessively complicated, onerous, impossible to implement where third parties’ data are commingled, unnecessary in light of alternative control mechanisms, and ultimately less important than education, which can prevent unwanted disclosures in the first place. Martijn ten Bloemendal (Google) spoke about privacy by design, data protection impact assessments, and their impact on businesses. He explained how privacy design documents are created by Google employees for all engineering projects and then reviewed by a dedicated privacy working group. A good example of privacy by design is the Google Plus ‘circles’ feature, which permits highly granular control over who can access users’ data. Andrea Matwyshyn (Pennsylvania) gave an update on recent American policy developments, from which she identified an emerging policy of treating large entities more aggressively than small businesses and, more generally, a shift towards stronger privacy regulation by the Federal Trade Commission and Federal Communications Commission. Perhaps, she suggested in closing, we should aim to remember rather than to forget.

Mobile payments

This session examined an increasingly active sector in which numerous standards, applications and protocols are vying for adoption. Robert Caplehorn (PayPal) identified the consumer as the clear winner: anyone can now use a smartphone to scan barcodes and compare prices online; this has increased price-based competition and led retailers to fear becoming free showrooms for their online competitors. Mr Caplehorn identified several issues facing online payment intermediaries which provide ‘wallet in the cloud’ services. These firms face a complex regulatory landscape and considerable uncertainty surrounding the operation of existing offline consumer-protection legislation, such as the Consumer Credit Act. Payment intermediaries are under increasing pressure to regulate transactions (and, indirectly, illegal activity), which increases transaction costs and heightens concern about liability for making false representations. Broader safe harbours may be desirable.

Julia Hörnle (Queen Mary) spoke about the trade-off between merchant and consumer protection: a balance must inevitably be struck between their respective rights. Her talk focused largely on consumer protection laws and the recent reviews of the E-Money Directive and Payment Services Directive, which aim to deliver more information to consumers through mandatory disclosure and transparency requirements. Dr Hörnle also identified the growing role of payment service providers as gatekeepers, and the important and controversial roles they play in enforcing laws and dispute resolution policies and in blocking transactions considered contrary to public policy. To close the panel, James Le Brocq (O2 Money) identified the convergence of commerce and payment technologies. In his view, secure mobile purchasing platforms are about to become very important. These platforms are regulated in two ways, by OFCOM and by the Financial Services Authority under ‘e-money’ licences, and many are submitting to greater regulation in order to gain consumer trust.

Keynote: Leveson inquiry

Professor Ian Walden (Queen Mary) delivered a fascinating keynote speech about his experiences as a commissioner of the Press Complaints Commission (‘PCC’) and the need for the Leveson inquiry to propose a system of press regulation which commands the confidence of the media and the public alike. Professor Walden spoke about three issues: first, the regulatory proposals being debated at the Leveson inquiry; second, what is not being debated but should be; and third, the wider context of telecommunications reform, including the draft Communications Data Bill 2012.

Assessment of regulatory proposals

To some extent, Professor Walden argued, public inquiries such as Leveson can be understood as ‘cathartic processes’ which give victims an opportunity to be heard after suffering serious harm. It is unclear to what extent those voices will inform the inquiry’s conclusions. The reason the PCC has been in focus during the inquiry is that the media lacks a single industry voice; it is not a homogeneous entity, but consists in reality of numerous publishers, newspapers and journalists. Some commentators expect the PCC to act as a voice of ‘the press’, which was not its original role.

He rejected calls for a licensing model, under which a regulator would license the media profession (somewhat like the Solicitors’ Regulation Authority) and deprive those guilty of malpractice of the right to practise journalism. This is a bad idea largely because the industry is so fragmented and comprises such a wide range of journalists. By contrast, it may make sense to regulate specific activities within the supply chain of journalism, such as news gathering, editorial functions, publication and distribution. This was the law’s traditional approach; the PCC, for example, governed aspects of the editorial function.

Professor Walden offered a strong defence of the PCC, which he pointed out was never intended to be a broad regulator of the press, but only a ‘more limited’ dispute resolution body designed to reduce the cost of access to justice. Its objective was to provide a ‘fast, free and fair’ process for resolving complaints. Nevertheless, recent events have made clear that self-regulation is dead. Co-regulation is one alternative: anyone may be ‘entitled’ to write, edit and publish, but this could be made subject to a statutory framework of backstop powers which aim to ‘give the industry another chance’ under the ultimate threat of legislation. The basic tension in any regulatory model is how to secure adequate independence from the industry while importing sufficient experience from industry participants and government; the regulator needs this experience (and funding) in order to function effectively. However, as a public authority, the regulator must itself be accountable through mechanisms such as judicial review and freedom of information.

Other areas for debate

Professor Walden pointed out that the inquiry has been little concerned with the nature of the press itself, or with extra-legal standards of journalism. However, he identified a range of ethical and professional standards which are imposed on the press; these he termed ‘law-plus’. The Editors’ Code of the PCC adds obligations above and beyond those imposed by law; the National Union of Journalists has a code of conduct; and there is now also a Bloggers’ Code of Conduct. Part 2 of the inquiry will focus on press compliance with the law but, as has been foreshadowed, will probably not proceed. Ultimately, the press is already subject to the criminal and civil law; there is, in Professor Walden’s view, ‘no shortage of law’. Apart from some exceptions for data protection and non-disclosure of sources, the law applies to the press just as it applies to anyone else. Civil litigation is ongoing under existing law, with nearly 300 civil claims for misuse of private information. The real problem is a failure to investigate wrongdoing and enforce the existing rules. Voicemail hacking, for example, was probably a failure of the police rather than of regulators; the PCC lacked statutory power to gather evidence or investigate wrongdoing of the kind alleged. Meanwhile, police investigations are costly and remain unfinished.

Professor Walden identified the well-known tension between facilitating freedom of expression and protecting privacy. One policy fulcrum that is often used to balance these interests is the concept of the ‘public interest’. He suggested that a statutory definition of public interest would be a ‘fruitless task’, since it is always a matter of fact and circumstance.

Telecommunications reform

A further area that needs discussion is the disruptive influence of the Internet. As in the film and music industries, traditional models of content creation and distribution have been disrupted, and newspapers do not yet know how to make money online. He put this point bluntly: ‘The Guardian is a charity losing money hand over fist’. The Internet has also brought disintermediation: the media is no longer the sole conduit through which information is generated and conveyed. That development, Professor Walden said, should be preserved. However, the boundaries between broadcasting, television and newspapers are blurring.

Professor Walden criticised the recent Prince Harry episode as breaching an ‘obvious’ reasonable expectation of privacy. However, once the photographs had been published online, it was almost impossible to prevent their republication in newspapers. As the Spycatcher case showed, once information is in the public domain no regulator can act as King Canute and hold back the tide. This should remain a question of journalistic ethics. However, the consequences of making a wrong decision may be serious, because the Internet is like an elephant: it does not forget. Many complaints received by the PCC relate to old Internet material which, although truthful, is still accessible and still causing damage. Jurors can find out everything about a criminal defendant. The solutions to these problems are unclear.

Professor Walden concluded that Leveson ‘will lead to some regulatory outcome’. However, that outcome needs to be carefully considered, not a knee-jerk reaction, as the establishment of the inquiry itself was. We should aim for a convergent, integrated regulatory environment, but it must be nuanced and must safeguard the marketplace of ideas. Protecting speech in a rapidly evolving commercial environment is the major challenge facing the government.

Communications and spectrum reform

To open the second day, Amanda Hale (Herbert Smith) described the shift from a ‘command and control’ model of spectrum allocation to a collective use model. The former was characterised by fixed payments for time-limited exclusive rights; the latter by parallel use of the same spectrum range by multiple users. A third, market-based approach is a hybrid of the two models: spectrum is auctioned, but licences do not prescribe particular uses; spectrum is tradeable and of indefinite duration, while licence fees are set high enough that licensees will not keep spectrum they cannot use efficiently. To make this approach work more effectively, greater frequency harmonisation is needed at the international level. OFCOM now uses a mix of these three models. Threats by telecommunications service providers have caused OFCOM to revise its 4G auction proposals and have delayed the auctions. Ms Hale concluded that the government now appears to be exercising more control over OFCOM spectrum policy; however, since the 4G auction controversy, it is increasingly unclear where the real power resides. Any return to a ‘command and control’ model is a troubling prospect.

Jean-Jacques Sahel (Skype) reviewed developments in network communications policy since the late 1980s. He identified tensions arising from changes in service delivery and consumption patterns: in particular, convergence between network operation and content supply, and the attendant competition concerns where operators violate network neutrality to promote their own voice services. These ‘reactionary instincts’ from incumbent industries are a threat to sensible regulation. Citing the example of the horse and carriage industry, Mr Sahel expressed concern about incumbents using their market power to block innovation or protect the status quo: faced with the automobile, carriage operators successfully lobbied for laws requiring red flags to be carried in front of moving cars; this, he argued, delayed uptake of the automobile by at least 20 years. Regulation should instead be focused on users, growth and innovation, and these should be expressly recognised as overriding values in statute.

Graham Smith (Bird & Bird) spoke about what is not in the Communications Review but should be. First, he criticised ‘scorched earth policies’, under which any price is worth paying to wage war on unlawful Internet material. Whether such a policy amounts to an ‘attack on Internet freedom’ depends ultimately on the proportionality of the measures adopted; for example, whether they are targeted or instead impose costs on innocent parties. In light of the increasing use of blocking injunctions and disclosure orders, the Digital Economy Act is starting to appear ‘increasingly expensive and irrelevant’. Second, Mr Smith analysed Max Mosley’s submissions to the Leveson inquiry on the problems presented by the ubiquitous availability of Internet material. Mosley’s proposal for an Internet content tribunal with powers to disconnect individuals from the Internet suffers from numerous difficulties, including the assumption that individual speech can legitimately be subjected to prior restraint. Mr Smith argued forcefully against measures which make speech ‘contingently illegal’ while relying on the good sense of prosecutors not to intervene. To conclude, Mr Smith recommended the repeal of s 127 of the Communications Act 2003 as part of a general effort to align Internet content laws with their offline counterparts, the abolition of ATVOD, the rejection of Mosley’s proposal, and the departure of regulators such as OFCOM from the business of enforcing copyright.

Jeremy Olivier (OFCOM) responded with a general discussion of the Digital Economy Act, the E-Commerce Directive and the problems of Internet regulation. He noted the difficulty of achieving ‘directed policy objectives’ in Internet contexts. He agreed with Mr Sahel that a ‘light-touch regime’ is desirable, and with Mr Smith that the Internet, although different in some ways from traditional media, does not necessarily require different legislation. In the context of the DEA, Mr Olivier observed that considerable time had been spent debating the issues, such as the definition of an ISP (which can be ‘quite ambiguous’). He noted the ongoing European Commission consultation on the E-Commerce Directive, and in particular its application to non-traditional hosts such as marketplaces and search engines. Drawing lines remains an ‘iterative and continual’ exercise.

Tim Cowen (Sidley Austin) described the European Commission’s approach to a ‘modern industrial policy’ on interoperability. He explained that the Commission’s current objective is to achieve competition in a social market economy. This contrasts with earlier regulatory approaches, which focused on a shorter-term model of competition prioritising consumer welfare. Undoubtedly, the information society agenda will influence competition policy: in particular, the need to promote innovation, neutrality and the single digital market. Mr Cowen concluded that the new integrated policy framework is partly a response to convergence in these areas.

Kevin Coates (European Commission) extended this analysis by reference to recent European competition case law. He suggested that the core principles of competition analysis remain usable but need to adapt to changing market conditions in digital services. Designing a competition remedy is, he said, like fitting a shoe to a galloping horse, except that the horse sometimes does not gallop all that quickly. As can be seen from the Google/DoubleClick and Microsoft/Commission decisions, privacy is increasingly relevant in EU competition law, alongside other consumer welfare objectives. He argued that this approach to protecting privacy ‘by the back door’ is illegitimate: competition rules should not be distorted in this manner, since if such rights are worth protecting, they are worth protecting from interference by all market participants, not just those engaging in relevant merger activity or those with market power. Finally, Professor van Eijk set out four Dutch cases (Diginotar, KPN/Vodafone, KPN/Cable, ETNO) which highlight the complexity of competition regulation in information markets and the difficulty of regulating quality of service. In his view, regulatory intervention is inevitable because market-based competition will not supply all the answers. But making service providers responsible for all the consequences may be counter-productive: firms may not invest in a market if they are not guaranteed a return, whether due to too much competition or too much regulation, while small operators may simply be unable to provide meaningful remedies where the scale of the damage caused is large.

The future

In a now well-established tradition, the Policy Forum closed with a session of predictions. Lilian Edwards (Strathclyde) offered the Delphic observation that data protection reforms ‘may … or may not be enacted’. In her opinion, the Communications Data Bill probably will not proceed before the next election. We may also see the first case of 3D printing piracy, and some form of driverless car litigation is also likely. Mark Turner agreed that 3D printing piracy was ‘inevitable’, while new liability rules may be needed to deal with the issues posed by driverless cars. Chris Reed (Queen Mary) noted the growth of unmanned aerial vehicles and the risks posed by imperfect technology in increasingly crowded skies. Two other focus points next year will be governance issues (of the press, cloud and intermediaries) and content liability and control (in particular, the ‘clean Internet’ agenda and its spill-over uses of filtering). He called upon scholars to examine critically the trade-off between speech and content standards. Payment intermediaries will increasingly be targeted as a means of shutting down unpopular websites (as occurred in the case of WikiLeaks). Finally, Chris Marsden (Essex) expressed pessimism about Internet innovation and network design. He observed that the ‘fundamental plumbing of the Internet is in trouble’, as it becomes ever more difficult for firms to coordinate the implementation of new protocols and services to improve network and client functionality. He suggested that software defaults can make a big difference, as in the case of browser-level tracking opt-outs. In his view, co-regulation will become more common, but not necessarily of the best kind. He offered the view that the Communications Data Bill will pass, or will return in reincarnated form.

As this summary makes clear, Forum attendees enjoyed lively and insightful discussion of emerging issues in computing and communications policy. Podcasts and presentation materials are now available online at www.scl.org/  

Jaani Riordan is a pupil barrister at 8 New Square and a final-year DPhil candidate in law at Magdalen College, Oxford, where his research considers the liability of Internet intermediaries.