Delfi v Estonia: Curtailing Online Freedom of Expression?

June 21, 2015

When can freedom of expression online be curtailed? The recent judgment of the Grand Chamber of the European Court of Human Rights in Delfi v Estonia has addressed this issue, in the particular context of comments made on a news article. The ruling raises interesting questions of both human rights law and EU law, which I will examine in turn.

The Facts

Delfi is one of the largest news portals in Estonia. Readers may comment on news stories, although Delfi has a policy to limit unlawful content and operates a filter as well as a notice-and-take-down system. Delfi ran a story concerning ice bridges which was accepted as being well balanced; it generated an above-average number of comments. Some of these contained offensive material, including threats directed against an individual known as L. Some weeks later L requested that some 20 comments be deleted and that damages be paid. Delfi removed the offending comments the same day but refused to pay damages. The matter then went to court and eventually L was awarded damages, though of a substantially smaller amount than originally claimed. Delfi’s claim to be a neutral intermediary, and therefore immune from liability under the EU’s eCommerce Directive regime, was rejected. The news organisation brought the matter to the European Court of Human Rights and lost in a unanimous chamber decision. It then brought the matter before the Grand Chamber.

The Grand Chamber Decision

The Grand Chamber, in essence, affirmed the outcome and the reasoning of the chamber judgment in the same case, albeit not unanimously. It commenced by recapping the principles of Article 10 of the ECHR from its previous case law. These are familiar statements of law, but it seems that from the beginning of its reasoning the Grand Chamber had concerns about the nature of content available on the Internet. It commented (at [110]):

‘while the Court acknowledges that important benefits can be derived from the Internet in the exercise of freedom of expression, it is also mindful that liability for defamatory or other types of unlawful speech must, in principle, be retained and constitute an effective remedy for violations of personality rights.’ 

The Grand Chamber then referred to certain Council of Europe Recommendations, suggesting (at [113]):

‘a “differentiated and graduated approach [that] requires that each actor whose services are identified as media or as an intermediary or auxiliary activity benefit from both the appropriate form (differentiated) and the appropriate level (graduated) of protection and that responsibility also be delimited in conformity with Article 10 of the European Convention on Human Rights and other relevant standards developed by the Council of Europe” (see § 7 of the Appendix to Recommendation CM/Rec …). Therefore, the Court considers that because of the particular nature of the Internet, the “duties and responsibilities” that are to be conferred on an Internet news portal for the purposes of Article 10 may differ to some degree from those of a traditional publisher, as regards third-party content.’ 

The Grand Chamber applied the principles of freedom of expression to the facts using the familiar framework. First, there must be an interference with the right under Article 10(1) of the Convention; any restriction is then assessed for acceptability according to a three-stage test, which requires that the restriction be lawful, pursue a legitimate aim and be necessary in a democratic society. The existence of a restriction on freedom of expression was not disputed, nor was it disputed that the Estonian rules pursued a legitimate aim. Two areas of dispute arose: lawfulness and necessity in a democratic society.

Lawfulness

Lawfulness means that the rule is accessible to the person concerned and foreseeable as to its effects. Delfi argued that it could not have anticipated that the Estonian Law of Obligations could apply to it, as it had assumed that it would benefit from the intermediary liability exemptions derived from the eCommerce Directive. The national authorities had not accepted this classification, so essentially Delfi argued that this was a misapplication of national law. The Grand Chamber reiterated (as had the chamber) that it is not its task to take the place of the domestic courts but instead to assess whether the methods adopted and the effects they entail are in conformity with the Convention. On the facts, and although some other signatory states took a more ‘differentiated and graduated approach’ as suggested by the Council of Europe recommendation, the Grand Chamber was satisfied that it was foreseeable that the normal rules for publishers would apply. Significantly, and in an approach similar to that of the First Chamber, the Grand Chamber commented (at [129]) that:

‘as a professional publisher, the applicant company should have been familiar with the legislation and case-law, and could also have sought legal advice.’ 

Necessary in a Democratic Society

The Grand Chamber started its analysis by reiterating established jurisprudence to the effect that, given the importance of freedom of expression in society, necessity must be well proven through the existence of a ‘pressing social need’. It must determine whether the action was ‘proportionate to the legitimate aim pursued’ and whether the reasons adduced by the national authorities to justify it are ‘relevant and sufficient’. The Grand Chamber emphasised the role of the media, but also recognised that different standards may be applied to different media. Again it reiterated its view that the Internet could be harmful as well as beneficial (at [133]). The Grand Chamber then travelled familiar terrain, stating the need to balance Articles 8 and 10 and approving the factors that the First Chamber took into account: the context of the comments, the measures applied by the applicant company in order to prevent or remove defamatory comments, the liability of the actual authors of the comments as an alternative to the applicant company’s liability, and the consequences of the domestic proceedings for the applicant company (at [142]-[143]).

Here, the Grand Chamber emphasised the content of the comments: that they could be seen as hate speech and were on their face unlawful (at [153]), and that, given the range of opportunities available to anyone to speak on the Internet, obliging a large news portal to take effective measures to limit the dissemination of hate speech and speech inciting violence was not ‘private censorship’ (at [157]). The idea that a news portal is under an obligation to be aware of its content is a key element in the assessment of proportionality. Against this background (rather than one which accepts a notice-and-take-down regime as enough), Delfi’s response had not been prompt. Further, ‘the ability of a potential victim of hate speech to continuously monitor the Internet is more limited than the ability of a large commercial Internet news portal to prevent or rapidly remove such comments’ (at [158]). In the end, the sum that Delfi was ordered to pay was not large, and the consequence of the action against the news portal was not that Delfi had to change its business model. In sum, the interference could be justified.

There were two concurring judgments, and one dissent. Worryingly, one of the concurring judges (Zupančič), having criticised the possibility of allowing anonymous comments, argued:

‘To enable technically the publication of extremely aggressive forms of defamation, all this due to crass commercial interest, and then to shrug one’s shoulders, maintaining that an Internet provider is not responsible for these attacks on the personality rights of others, is totally unacceptable.

According to the old tradition of the protection of personality rights, …, the amount of approximately EUR 300 awarded in compensation in the present case is clearly inadequate as far as damages for the injury to the aggrieved persons are concerned.’ 

Human Rights Issues: Initial Reaction

This is a long judgment which will no doubt provoke much analysis. Immediate concerns relate to the Court’s evident anxiety about the Internet as a vehicle for dangerous and defamatory material, which seems to colour its approach to the Article 10(2) analysis and, specifically, to the balancing of Articles 10 and 8. Although it recognised that the various forms of media operate in different contexts and with different impact, the Grand Chamber did not acknowledge the importance of the role of intermediaries of all types (and not just technical intermediaries) in providing a platform for, and curating, information. While accepting that the Internet may give rise to different ‘duties and responsibilities’, it seems that the standard of care required is high.

Indeed, the view of the portal as having control over user-generated content seems to overlook the difficulties of information management. The concurring opinions go to great lengths to say that a view which requires the portal only to take down manifestly illegal content on its own initiative is different from a system that requires pre-publication review of user-generated content. This may be so, but both effectively require monitoring (or an uncanny ability to predict when hate speech will be posted). Indeed, the dissenting judges say that there is little difference here between this requirement and blanket prior restraint (para 35). Both approaches implicitly reject notice-and-take-down systems, which are used – possibly as a result of the eCommerce Directive framework – by many sites in Europe. This focus on the content has led to reasoning which almost reverses the approach to freedom of expression: speech must be justified to evade liability. In this the Court seems to give little regard either to its own case law on political speech or to its repeated emphasis on the importance of the media in society.

EU Law Elements: Consistency with the eCommerce Directive?

The Delfi judgment raises some practical questions for news sites hosting third-party content, especially reader comments. An underlying concern is how this judgment fits with the EU policy approach towards the Internet and intermediaries in particular. The eCommerce Directive provides, inter alia, for the limitation of liability for intermediaries, in Articles 12-15. These provisions were considered important, not just for the free flow of services through the EU but as support for the development of the Internet and the services offered on it. The eCommerce Directive envisages three categories of intermediary: those which are mere conduits, those which offer caching and those which host content. The essential quality of these intermediaries is that they are facilitators providing technical services rather than contributors to the provision of specific content. It is the scope of this last category that is uncertain, especially given the development of a range of services which challenge the understanding of the Internet as it stood at the time of the enactment of the Directive. Following the First Chamber decision, there was some concern that the judgment did not respect the underlying policy choice about intermediaries, nor reflect the significance of the role of intermediaries for the functioning of the Internet, especially from the perspective of end-users. The question is how far out of line, if at all, the judgment is with the Directive.

The first thing to note, before we look at the substance, is that the Strasbourg Court was not deciding whether Delfi was a neutral or passive intermediary. The Court was rather reviewing the impact of the Estonian court’s reasoning. In sum, it is far from clear that the Court was unreasonable in accepting the Estonian court’s end conclusion (even if we might be critical of some points of its reasoning).

The intermediary liability provisions provide a graduated scale of protection, with the greatest protection going to services that are the most technical. For hosting services, protection is dependent on lack of knowledge of the offending content. There have been questions about the interpretation of some of the phrases in Article 14(1) of the Directive, such as ‘awareness’, ‘actual knowledge’ and the obligation to act ‘expeditiously’. The Directive envisages notice-and-take-down regimes as a way to deal with offending content. Articles 14 and 15 do not affect Member States’ freedom to require hosting service providers to apply those duties of care that can reasonably be expected from them, and which are specified by national law, in order to detect and prevent certain types of illegal activities (recital 48). Article 15 prevents Member States from imposing on internet intermediaries, with respect to activities covered by Articles 12 to 14, a general obligation to monitor the information they transmit or store, or a general obligation actively to seek out facts and circumstances indicating illegal activities. Article 15 does not prevent public authorities in the Member States from imposing a monitoring obligation in a specific, clearly defined, individual case (recital 47). It is implicit in the foregoing that Article 15 applies only to intermediaries which can claim the benefit of one of Articles 12, 13 or 14.

A number of cases have been brought before the CJEU to understand better the scope of Article 14 and the extent of the protection in Article 15. For example, SABAM v Netlog (Case C-360/10) concerned a social networking site which received a request from SABAM, the Belgian copyright society, to implement a general filtering system to prevent the unlawful use of musical and audio-visual works by the users of its site. In addition to confirming the prohibition in Article 15 on general monitoring, the CJEU noted that a filter might not be able to distinguish between lawful and illegal content, thus affecting users’ freedom of expression (access to information). In this the CJEU seems to reflect the position the ECtHR took in Yildirim regarding ‘collateral censorship’. There is a limitation on carrying the ideas in Netlog across to Delfi, in that the rules in Article 15 apply to neutral intermediaries, and it is unclear whether the CJEU would find a news site to be neutral in this sense, whether because of the agenda-setting function which ‘invites’ particular responses, or because of the adoption of filtering and moderation systems.

In the Google Adwords case (Joined Cases C-236/08, C-237/08 and C-238/08, judgment of 23 March 2010), the CJEU held that the test for whether a service provider could benefit from Article 14 of the eCommerce Directive was whether it was ‘neutral, in the sense that its conduct is merely technical, automatic and passive, pointing to a lack of knowledge or control of the data which it stores’ (at [114]). One could argue that, insofar as a site invites comment on a particular topic, it is not neutral, though one might question how overt that invitation must be. In L’Oreal (Case C-324/09, judgment of 12 July 2011), the Court held that the Article 14 exemption should not apply where the host plays an ‘active role’ in the presentation and promotion of offers for sale posted by its users so as to give it knowledge of, or control over, the related data. Further, if a host has knowledge of facts that would alert a ‘diligent economic operator’ to illegal activity, it must remove the offending data to benefit from the Article 14 exemption. We might question what role moderation and filters play in this context, specifically in terms of giving an intermediary control over content. As regards the Delfi case itself, there are arguably parallels between the CJEU and ECtHR approaches in that both courts seem to think that those acting in the course of their business are better placed to assess where and when problems might arise. A point of difference relates to the treatment of commercial activities. The CJEU stated in Google Adwords (at [116]):

‘It must be pointed out that the mere facts that the referencing service is subject to payment, that Google sets the payment terms or that it provides general information to its clients cannot have the effect of depriving Google of the exemptions from liability provided for in Directive 2000/31.’

The reference to ‘general information’ also suggests that contributors’ policies would not be determinative either.

In Papasavvas v O Fileleftheros (Case C-291/13), a case concerning online defamation in relation to a news story posted by a newspaper on its site, the CJEU applied the tests found in L’Oreal v eBay and Google Adwords and ruled (at [45]):

‘Consequently, since a newspaper publishing company which posts an online version of a newspaper on its website has, in principle, knowledge about the information which it posts and exercises control over that information, it cannot be considered to be an “intermediary service provider” within the meaning of Articles 12 to 14 of Directive 2000/31, whether or not access to that website is free of charge.’

There are some similarities to the Strasbourg court’s reasoning, in that both courts point to the idea of control over information. There are differences, however, in that the control over the defamatory material in Papasavvas was much more direct than in Delfi, and the predictive abilities of newspapers as regards their audience’s response to stories were not in issue. Nonetheless, it is far from clear that the CJEU would reject the agenda-setting argument the Strasbourg court used, especially given its reasoning in L’Oreal regarding the ‘promotion’ of particular content and the requirements of a diligent economic operator.

The Strasbourg court’s reasoning put Delfi in the position of effectively having to monitor user content. Had Delfi been found to be an intermediary in the sense of Articles 12-14, this would have been contrary to Article 15 of the eCommerce Directive, as implemented in domestic law. Given that Delfi was found not to be such an intermediary, Article 15 does not come into play, and a similar finding seems plausible under EU law. There is therefore no automatic conflict between this ruling and the position under EU law. Whether this outcome is desirable from an Internet policy perspective is another matter. This case and its consequences may then feed into the review of intermediaries that the European Commission is planning as part of its Digital Single Market strategy.

Lorna Woods is Professor of Internet Law at the University of Essex.

This article is an edited version of her blog post on the EU Law Analysis blog: http://eulawanalysis.blogspot.co.uk/. Part of this post was previously published on the LSE Media Policy Project blog.