Net Neutrality Debate

August 4, 2006

The ‘net neutrality’ debate is about whether network operators should be able to use network infrastructure to discriminate between data packets that travel across their networks for commercial or policy reasons, as opposed to network performance reasons. Technology now allows operators to favour packets of data that originate from a preferred source and to deprioritise or even block packets from non-preferred sources, a process of selection often dubbed ‘access tiering’. The ability to handle data on different network tiers has ignited a high-profile debate in the United States about whether operators should be allowed to discriminate between data packets and, therefore, whether regulatory intervention is needed to constrain how operators run their networks – to force them to be ‘neutral’. US proponents of net neutrality have recently failed in a bid to have net neutrality principles enshrined in law. The debate is now gaining traction in Europe.

On one side are the operators, who argue that the increasing demands placed on the modern Internet require a level of investment that can and will only occur if the Internet is efficiently commercialised. They say that this commercialisation must involve the ability to implement a ‘user pays’ model for the use of their networks and, hence, the Internet. Those who make heavy use of, and profit from, the Internet should, the operators say, pay for that use. The other side of the debate is more complex and is characterised by an eclectic coalition of content and service providers who argue that access tiering threatens the core values and social utility of the Internet and that governments must intervene to prevent it.

The Technological Background

In order to understand net neutrality, one must first understand the various ‘layers’ of Internet topology and how each of these layers is susceptible to different regulatory pressures. Broadly speaking, the Internet comprises three layers: the physical layer, which includes the tangible objects – computers, wireless devices, wires, routers and so on – that connect individuals to the Internet and to one another; the logical layer, which describes the series of algorithms and standards (including TCP/IP, HTTP and HTML) that allow content layer materials to be understood and transmitted in machine-readable form; and the content layer, which is made up of the content, information and other meaningful statements that individuals using the Internet perceive, act on and share. Net neutrality is a debate that takes place at the interface of the logical and physical layers.

The Internet’s central function is to pass packets of data, via ‘pipes’, along a chain of ‘nodes’ until they reach their destination. The nodes do not ask questions about a packet’s sender, recipient or content; they simply receive it, analyse the address information and pass it on to the next node. This ‘dumb’ network treats all packets equally – a principle referred to as ‘bit parity’ and often encapsulated in the phrase ‘end-to-end’ design. In a dumb network, intelligence is incorporated in the applications that sit at its edges, or ‘ends’. These applications may themselves perform ‘intelligent’ functions (like blocking junk e-mails), but the core of the Internet’s infrastructure is not interested in what they do.
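
A minimal sketch may help to make this concrete. The following fragment (in Python, with invented names and addresses; an illustration only, not a description of any real router) captures the ‘dumb’ forwarding logic described above: the node consults a routing table keyed only on the destination address and ignores everything else about the packet.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        source: str        # who sent it: never examined by the node
        destination: str   # the only field the node acts on
        payload: bytes     # the content: never examined by the node

    class DumbNode:
        """A node in an end-to-end network: every packet is handled identically."""

        def __init__(self, routing_table: dict):
            # routing_table maps a destination address to the next node (the next 'hop')
            self.routing_table = routing_table

        def forward(self, packet: Packet) -> str:
            # Bit parity: the forwarding decision depends only on the destination,
            # never on the sender, the application or the payload.
            return self.routing_table[packet.destination]

    # Example: any packet addressed to 203.0.113.7 is forwarded the same way,
    # whoever sent it and whatever it contains.
    node = DumbNode({'203.0.113.7': 'next-hop-A'})
    print(node.forward(Packet('198.51.100.2', '203.0.113.7', b'hello')))  # next-hop-A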

The net neutrality debate is really about whether the Internet should retain its end-to-end design or whether the operators, who own and control various aspects of the physical layer, should be permitted to ‘discriminate’ amongst the data that passes across their networks by implementing access tiering.

Whilst the Internet is largely a dumb network at the moment, it is important to appreciate that the notion that we currently have a ‘neutral’ Internet is simply false. For example, an IP transit arrangement between a service provider and an operator will typically include service level guarantees, which give the operator a commercial interest in ensuring that those service levels are met. Similarly, a service provider may pay an operator directly to host content, generally guaranteeing a higher quality of service and greater reliability. Other strategies include using intermediary service providers to host content in local caches at various locations around the globe, so that data requested by customers never has too far to travel. All these existing strategies, however, lie towards the ‘edge’ of the Internet: they flow from what users of the Internet are prepared to invest in infrastructure and services.
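
Purely by way of illustration (the class below and its behaviour are assumptions made for this article, not a description of any provider’s systems), the caching strategy mentioned above amounts to little more than answering repeat requests from a nearby copy:

    class EdgeCache:
        """Illustrative local cache: serve a nearby copy where one exists,
        otherwise fetch the content from the distant origin server and keep a copy."""

        def __init__(self, fetch_from_origin):
            self.store = {}                             # url -> locally cached content
            self.fetch_from_origin = fetch_from_origin  # the long round trip to the origin

        def get(self, url: str):
            if url in self.store:
                return self.store[url]                  # served locally: a short round trip
            content = self.fetch_from_origin(url)
            self.store[url] = content
            return content

    # Example with a stand-in origin fetch: the second request never leaves the cache.
    cache = EdgeCache(lambda url: 'content of ' + url)
    cache.get('http://example.com/video')
    cache.get('http://example.com/video')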

In contrast, the access tiering models now advocated by some operators are a more sweeping attempt to adjust the Internet’s default settings, placing access controls in operators’ hands and setting a price on access. Access tiering departs from the ‘best efforts’ rule – the existing default – under which all data packets are treated the same. Two forms of access tiering are possible (a simple sketch of both follows the list):

·  ‘Needs-based discrimination’, which treats packets according to the best efforts rule until such time as there is network congestion. At this point, certain packets – latency-sensitive ones, for example – are prioritised and move to the front of the queue.

·  ‘Active discrimination’, which occurs when operators inspect all packets and prioritise them in accordance with pre-defined rules, irrespective of whether their network is congested.
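
The difference between the two can be sketched as follows (an illustrative outline only; the function names and rules are invented for this article): a needs-based scheme reorders the queue only when the link is congested, whereas an active scheme applies its priority rules to every packet regardless of load.

    def needs_based_order(packets, congested, is_latency_sensitive):
        """Needs-based discrimination: best efforts (arrival order) until the link
        is congested; only then do latency-sensitive packets move up the queue."""
        if not congested:
            return list(packets)
        # sorted() is stable, so arrival order is preserved within each class of packet
        return sorted(packets, key=lambda p: 0 if is_latency_sensitive(p) else 1)

    def active_order(packets, priority_of):
        """Active discrimination: every packet is inspected and ranked against
        pre-defined rules, whether or not the network is congested."""
        return sorted(packets, key=priority_of)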

Needs-based and active discrimination overturn the best efforts rule and bit parity. A couple of hypothetical examples illustrate the point. 

·  Service provider discrimination: An operator, such as BT, could enter into an agreement with Search Engine A under which A’s content is favoured over the content of its rivals, including Search Engine B. At times, this preference may be noticeable to the end user, possibly to the extent that users who become frustrated with Search Engine B migrate to Search Engine A because of its better performance.

·  Application discrimination: Alternatively, an operator could distinguish between applications, rather than providers. For example, the operator may decide that latency-sensitive applications such as VoIP or video streaming should be prioritised over less time-sensitive packets, such as those that make up e-mails or downloads.

Application and service provider discrimination can operate in tandem. So, for example, an operator that runs a telephony network, such as BT, could decide that VoIP competes with its core business of voice calls over circuit-switched networks. It may decide that, given this competitive threat, it will deprioritise all VoIP services. VoIP may nevertheless continue to cannibalise the operator’s existing revenue streams, in which case the operator may decide to enter the VoIP market itself and prioritise traffic originating from its own VoIP service above all other network traffic, thereby securing a competitive advantage for itself in two markets.
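
Expressed in the terms of the earlier sketch, and again purely hypothetically (the host name and rule are invented), such a combined strategy is simply a priority rule that looks at both the application and the provider:

    OWN_VOIP_HOSTS = {'voip.operator.example'}   # hypothetical: the operator's own VoIP service

    def tandem_priority(application: str, source: str) -> int:
        """Hypothetical combined rule (lower value = higher priority): the operator's
        own VoIP traffic first, ordinary traffic next, rival VoIP services last."""
        if application == 'voip':
            return 0 if source in OWN_VOIP_HOSTS else 2
        return 1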

For and Against Net Neutrality

Investment

The primary argument in favour of allowing access tiering is that the pressures placed on the Internet, both in the number of users and in the types of use, are expanding rapidly. New applications – such as streaming video and voice telephony – are emerging. Furthermore, the Internet is increasingly being used in critical applications such as health monitoring and home security. These applications either sap bandwidth or demand high levels of service quality, both of which place extra burdens on the infrastructure built and maintained by the operators. The pipes that facilitate this global data exchange are beginning to buckle under the weight of their own success, and investment is needed in the infrastructure to ensure that the Internet can keep up with the demands placed on it.

The investment needed is large. The next generation of networks is currently being rolled out at enormous cost. In 2006, for example, AT&T expects to spend approximately $20 billion on its new Project Lightspeed. In the UK, BT is spending £10bn over the next few years on its 21st Century Network. Telecom Italia estimates that, over the next three to four years, European operators will invest around $97 billion in next-generation networks.

There is also increasing demand for security measures – designed to weed out spam e-mail and malicious viruses – implemented at the ‘core’ of the Internet, rather than at the user-interface.  In short, operators want to give the dumb network an expensive education, and in doing so operators are hoping that they will be able to capture a greater share of the ‘value’ generated by Internet users.  They justify this by arguing that they should be allowed to exploit their property interests fully by charging certain content or service providers for enhancing the end-user’s experience.  At a micro level, they argue, this will meet the needs of consumers and defray costs that would otherwise be passed on to them, whilst at a macro level it will hasten the deployment of the next generation of networks.  Any problems can be addressed by anti-trust or competition law regulations and, absent a clear demonstration of market failure, market freedom should be endorsed.

The counter-argument runs along two lines. First, end-users, content providers and service providers have for years been paying for network enhancements through subscription and bandwidth charges, and they will continue to do so. Moreover, even if there is no objection to the likes of Google paying for the bandwidth it uses, it should not be forced to pay additional rates determined by a third party based solely on the type of application it wishes to channel through the network.

The second counter-argument asserts that property rights rhetoric should not disguise the fact that operators are seeking to claim a share of the value created by others. Analogies abound: road builders demanding commissions from travelling salesmen, Microsoft claiming royalties on the sale of works created using Word, or the booksellers’ revenue-share arguments in the ongoing dispute over Google Book Search. So long as operators have sufficient economic incentives to build the next generation of network architecture, the argument goes, that is enough.

Innovation and the Public Interest

Net neutrality enthusiasts maintain that access tiering will jeopardise the future of innovation online. They suggest that the end-to-end principle catalysed the intense levels of innovation that the Internet has spawned: the WWW, P2P software, VoIP, blogging tools and similar innovations may not have emerged from a network infrastructure capable of discriminating between data packets. Simply put, this argument maintains that one of the central virtues of the Internet is its ability to level the playing field for application and content development and to support an environment in which the start-ups and small providers with the most promising innovations can compete effectively against the monoliths of the day.

Those in favour of network neutrality also suggest that those who support access tiering wish to recreate online the offline broadcast model of content distribution – pre-packaged content fed to passive consumers. This argument posits that the Internet has spawned a vast array of Web sites, applications and basic resources that tap into the production capacity of individuals in the digital networked space. These sites, applications and resources place end-users at the heart of the ‘informational universe’. Their existence supports the notion that the Internet is a fundamentally different resource from those with which we are familiar offline: a frighteningly efficient communications mechanism and global marketplace for sure, but also an information repository of incredible breadth and diversity and a huge talent pool capable of being harnessed and managed across a range of creative projects. The richer multimedia environment we are constructing means that some existing resources – online video repositories such as ‘YouTube.com’ or ‘YouAre.tv’, for instance – and many future initiatives may be stifled or never see the light of day if access tiering is allowed. The proponents of net neutrality regulation who make this argument are asking for limitations to be placed on operators’ private property interests for the sake of the public interest in the maintenance and development of the ‘public space’ provided by the Internet.

The problem, of course, is that damage to these public interests is difficult to measure or even predict. Whilst the desire of operators to follow the broadcast model is understandable in the context of companies versed in the industrial-age economics of mass media and facing large network build costs, it is the Internet’s divergence from that model towards a user-centric ‘read/write’ model that leads many to argue that preventing access tiering is essential to preserving and nurturing the public interest in the Internet.

Ofcom’s Position

In the UK, Ofcom’s principal duties in carrying out its regulatory functions under the Communications Act 2003 are twofold: it must ‘further the interests of citizens in relation to communications matters’ and ‘further the interests of consumers in relevant markets’ by promoting competition. In particular, Ofcom is tasked with ensuring ‘the availability throughout the United Kingdom of a wide range of electronic communications services’. It is also expected to have regard to, amongst other things, ‘the desirability of encouraging investment and innovation’ and ‘the desirability of encouraging the availability and use of high speed data transfer services’. The various dichotomies encapsulated in this role – consumers/citizens, innovation/investment – perhaps explain why Ofcom has, to date, adopted a ‘wait and see’ attitude towards net neutrality regulation.

In many ways, Ofcom’s position is perfectly understandable. One of the key drivers of the debate in the US is the perceived lack of competition in the US broadband market, which cable and DSL services dominate. With limited options for consumers to switch, proponents of network neutrality maintain that there are no countervailing market forces in the US to curb discriminatory behaviour. In contrast, the broadband market in the UK, at least at the retail level, does not suffer from a comparable lack of competition. Indeed, most players in the market would describe it as cut-throat, and many are struggling to generate any profit from broadband services at all. As a result, the operators’ investment arguments described above are perhaps harder to refute in the UK than in the US.

Whilst Ofcom has yet to issue any definitive statement on the issue, there are suggestions that it regards countervailing market forces and customers’ ability to switch as adequate protection, obviating the need for affirmative net neutrality regulation. Hence we may see regulation ‘around the edges’ with a basic consumer protection flavour. Such measures might include requirements to provide information about services and applications that will or may be degraded or unavailable under certain conditions. Likewise, measures that allow customers to switch quickly and inexpensively – for example through regulated contractual termination clauses – may be considered. Regulators may also take a more direct role in the market by promoting (or reducing barriers to entry for) other forms of access, such as broadband over power lines or publicly funded wireless broadband in areas of high population density.

Conclusion

The net neutrality debate throws up fundamental questions about the structure and form of our existing and emerging communications and content production environment.  In many respects net neutrality is really a question of network engineering: operators tell us that a tiered Internet will improve network efficiency and incentivise investment; others aren’t so sure, claiming that introducing intelligence necessarily introduces complexity that can actually impede network performance.  Yet in other respects, the net neutrality debate addresses more fundamental issues concerning free speech, democratic participation and individual autonomy.  It is the answers to these public interest concerns that should inform and structure any regulatory response (or lack thereof).

In the end, some form of concession to the need for access tiering is probably inevitable. The costs faced by network operators are very large. Consumer-led, rather than operator-determined, access tiering, matched with meaningful disclosure requirements and appropriate contractual protections, could provide the best balance between the need to create an incentive to invest in Internet infrastructure and the public interest in a ‘non-discriminatory’ Internet. Net neutrality requirements in their strongest incarnation, whilst laudable in many respects, are neither practical nor desirable. Giving consumers control of the shift from a ‘neutral’ Internet to a tiered space is perhaps the best way of minimising the negative effects that this migration will inevitably produce.

Ben Allgrove and Paul Ganley are Associates in the IT/Com Department of Baker & McKenzie, London. They would like to thank Francesca Towers for her assistance in preparing the article.