Fake News: Striking a Balance between Regulation and Responsibility

July 17, 2017

At the risk of wading into a saturated subject late in the day, this article sets out a few thoughts as to why regulating platforms is not the answer to the problem of ‘fake news’.

By fake news I don’t of course mean what Donald Trump deems to be fake news, or genuine but sloppy journalism. I mean the publication of deliberately fake stories designed either to achieve a particular commercial or political purpose, such as influencing an election, or to entice internet users to click on a story to generate advertising revenue.

There are four main reasons why I consider that regulating
internet intermediaries such as Google and Facebook is not the answer:

  1. it will be extremely difficult to define fake news and any threshold
    for liability, leading to confusion and litigation; 
  2. plenty of laws and regulations already exist to deal with
    fake news and the Terms of Service of most internet platforms prohibit it; 
  3. internet platforms are already commercially incentivised to
    address the problem and are taking effective steps to do so; 
  4. the crux of the problem is technology and technology is more
    likely to find effective solutions than legislation, which will quickly become
    out of date – regulation of internet platforms in relation to fake news (and
    other forms of illegal content) will disincentivise platforms from taking
    proactive measures and using the technology they are developing.

These four points are developed below. Ultimately, it comes down to the difference between responsibility and liability. Attempting to make service providers liable for failing to find fake news or remove it quickly enough will create a wave of opposition – and not just from the service providers. Encouraging platform providers to accept some social responsibility to deal with the problem is likely to push at an open door and to be consistent with their commercial objectives.

The current platform liability regime

Before explaining why regulation of platforms is a bad idea,
it is worth summarising the current intermediary liability regime, which is set
out in Articles 12-15 of the E-Commerce Directive 2000. These provisions are sometimes misleadingly referred to as ‘safe harbour’ provisions, a label which is not just confusing
for those familiar with data protection law relating to data transfers but is
also inaccurate.  The provisions do not
provide a safe harbour; they provide a framework that seeks to limit the
liability of ‘information society service providers’ for activity and content
on their platforms.

The definition of ‘information society service provider’ is
a complex one but applies broadly to internet platform providers, whether they
host content, provide search engines or enable internet access.  The common thread between these service
providers is that they assume a position of passiveness or neutrality in
relation to internet activity or content, as opposed to traditional publishers
who take responsibility for the content they publish.  It is the activity in question that is key
(for example providing a discussion forum) rather than the overall character of
the service provider (for example, a broadband provider or a cloud platform).  So, for example, a newspaper which provides a
discussion forum on its website may be able to claim that it is an information
society service provider in relation to the publication of content on the
discussion forum, but not in relation to edited news content.

For the purposes of this article I refer to information
society service providers as ‘platforms’.

Article 15

Despite its numerical positioning, the starting point for the
platform liability provisions in the E-Commerce Directive is Article 15, which
provides that Member States cannot impose on platforms ‘general obligations to
monitor’ for illegal content.  In essence
that means that Member States cannot legislate, and courts cannot order
measures, against platforms that would require them to search the internet or
their own platforms for illegal content.

Platforms can therefore be required to carry out specific
monitoring, although there is plenty of scope for debate as to when a specific
monitoring obligation may become a general one if its scope is not sufficiently
narrowed.

Article 15 is important and allows platform providers the
space within which to innovate and improve their services without having to
devote substantial resources to speculative searches of the millions of pieces
of content on their platforms for fear of liability. Without this protection, the floodgates would open to a wave of injunctions and search requests.

Articles 12 to 14

The licence to operate provided by Article 15 is balanced
out by the defences provided by Articles 12 to 14.  Article 12 provides virtual immunity for ‘mere
conduits’, for example internet access providers who simply provide the
infrastructure to allow website operators to publish content.  Article 13 provides some protection for
temporarily stored (‘cached’) content and is therefore of use to search
engines.  Article 14 provides the much
talked about ‘hosting’ defence for service providers who host but don’t control
the content they publish.

Of these provisions, Article 14 is perhaps the most applicable to platform providers, particularly social media and blogging platforms. The requirement for knowledge (actual knowledge of illegal activity, and the lower threshold of constructive knowledge as regards claims for damages) means that platforms tend not
to go looking for illegal content for fear of putting themselves ‘on notice’
for the purposes of Article 14 or becoming so ‘active’ in the publication
process that they cease to have the necessary neutrality to be classed as an
information society service provider at all. 
And so the regime that has developed is one of ‘notice and take-down’.

Injunctions

Importantly, Articles 12 to 15 do not prohibit injunctions
provided such injunctions do not have the effect of imposing general monitoring
obligations that would be contrary to Article 15.  A body of case law has developed, mainly in
the copyright and trade mark world, to ensure that injunctions can be granted against platforms provided that they are ‘necessary, proportionate and dissuasive’.

Problems with the notice and take-down regime

There are a few difficulties with the above notice and take-down regime. First, in relation to the most offensive illegal content, the damage, for example in harassment and child abuse cases, is often done at the time of publication or shortly afterwards, before the content has been removed (even where it is removed very quickly). Second, in the difficult cases where it is not clear to the platform from the notice of complaint whether the content is illegal or not (of which there are many), the platform is placed in the role of decision-maker, notwithstanding that it is not usually well placed to perform that role. Third, the focus on whether the platform should be liable for failing to remove illegal content quickly enough has led to uncertainty and litigation across Europe, when the focus should instead be on what can be done to prevent further damage or to take action against the person responsible for it.

Notwithstanding the above problems, many thousands of
complaints are resolved every day on the large platforms and the most seriously
illegal content tends to be removed quickly.

Defining fake news

Against that background, what would regulation of platforms
look like in relation to fake news? 
Given the protection of Article 15, Member States will not be able to
impose general obligations on platforms to find and block fake news.  Any regulation is therefore likely to sit
behind the notice and take-down regime described above.  In other words, as is being suggested in
Germany, platform providers may be fined for failure to remove fake news within
a given time period.

If such a regime were to be implemented, the first problem
would be defining fake news and deciding whether the content in question falls
within that definition.  Many items of
fake news will not make such obviously outlandish statements as ‘Yoko Ono: I
had an Affair with Hillary Clinton in the 70s’ and/or be traceable to a group of
teenagers in Macedonia.  What about the
many news articles that contain part truth/part fiction, satire, genuine
errors, and exaggeration? It doesn’t take very long to come up with examples
that would have Google’s moderators, regulators and the courts scratching their
heads.  And of course it’s highly
unlikely that an accepted definition of fake news will emerge across borders,
which would lead to inconsistency across Europe and beyond.

Even if fake news can be defined, when is it sufficiently damaging to justify imposing a legal obligation to remove it? On the basis that it cannot be right to impose liability for false content no matter how meaningless and trivial it is, defining a threshold for liability will perhaps be even more difficult than defining fake news itself, given the near impossibility of proving its causative effects. One
only needs to see how much litigation there has been over the meaning of ‘serious
harm’ in the libel courts since the Defamation Act 2013 came into force.

Existing laws and provisions

It is certainly true that there will be some ‘fake news’
articles that are not unlawful in any way. 
Absent some kind of damaging causative effect, it is generally not
unlawful to post false material on the internet.  However, content that causes serious offence
or damage is often actionable through one law or another, whether it be libel,
data protection, privacy, copyright, harassment or even obscure election laws.

But in cases where no law exists to form the basis of a
removal request, fake content will often be in breach of the terms of use of
the platform, because platform operators generally don’t like their services being used
to spread false content – users don’t find it very useful and advertisers don’t
like it either.  Whilst such terms of use
are not enforceable against other users of the platform, they can at least form
the basis of a robust complaint to the platform and most platforms tend to try
to apply their terms of use.

And if all else fails and it is clearly right that the fake news be removed in order to protect the rights of individuals or otherwise to achieve justice, courts have increasingly wide powers to grant injunctions, which, as explained above, fall outside the protections in Articles 12 to 14 of the E-Commerce Directive. For example, s 37 of the Senior Courts Act 1981 gives the English courts very wide jurisdiction to grant injunctions and other orders. If it were discovered that Jeremy Corbyn had secretly paid hundreds of students to post on Twitter that they were crippled with debt, with the clear intention of manipulating the British public into voting Labour (in breach of election laws), would the court have the power to grant an injunction against Corbyn and Twitter to remove the content? The answer is almost certainly yes (even if an injunction might not achieve the desired effect in this extreme example).

Service providers are already dealing with the problem

Regulation tends to be slow. 
In the time it has taken the House of Commons in the UK to convene a select committee and receive submissions from interested parties, the
platforms have got on with sorting out the problem of fake news, no doubt with
the help of some very large brains within their ranks.  The result, in a relatively short space of
time since the issue of fake news blew up during the Brexit referendum and the
US elections, has been the announcement of numerous measures by Google,
Facebook, Twitter and others to, amongst other things, educate their users
about fake news, make it easier to report fake news, respond to removal
requests, appoint third-party fact-checkers, ‘kite mark’ fact-checked content,
and make algorithmic changes to prefer genuine content.

Why are they taking these actions? Cynics may say it’s
because they fear regulation and are under media pressure.  But that ignores the fact that the large platforms
are commercially motivated to ensure that the problem of fake news is
addressed.  Without happy users and
advertisers, the platforms will quickly see their market share fall. Users and advertisers will migrate from one platform to another in response to faltering service delivery or user experience far more quickly than any new regulation could take effect.

Who is better placed to deal with the problem of fake news than the platform providers themselves, who are in possession of the data and can devote resources to understanding the problem and to devising and implementing speedy solutions? These solutions may not yet be perfect, but surely they are more
effective than regulation that would be out of date as soon as it comes into
force.

A good example of how self-regulation can lead to positive results is the agreement by Google, Microsoft, Facebook and Twitter to do more to combat hate speech.
In June 2016 the European Commission announced that these companies had
signed a Code of Conduct.  In June 2017
it released statistics which indicate a very considerable increase in the speed
of removal of hate speech.

A tech solution to a tech problem

Finally, and perhaps most importantly, fake news is primarily
a technology issue in today’s mobile environment.  There is nothing new about fake content and
propaganda.  What is relatively new is
the accuracy of electronic search, targeted mobile advertising, and the concept
of the ‘echo chamber’ of our social media accounts – the idea that the content
we receive is heavily influenced by who we choose to ‘follow’, have as our
friends, and connect with.

Surely the people who devised the algorithms that generate accurate search, targeted advertising, and bespoke newsfeeds are better placed than politicians to find solutions that ensure our newsfeeds consist of the content we want to receive and are relatively free from illegal content?

However, in order for the platform providers to be free to develop
new applications for image recognition software, search technology, and
artificial intelligence, we must create a legal and regulatory environment that
encourages rather than disincentivises the use of proactive technological
measures.  If platforms fear that by
using these new technologies to find and block illegal content they risk losing
their legal protections, something is clearly wrong.

Maybe it’s time that European regulators examined the legal
and regulatory procedures through which individuals harmed by illegal content
published online can obtain an authoritative and independent adjudication
rather than focussing on who should take the blame for what takes place after
the content has been published.  For
example, the Information Commissioner’s Office resolves many ‘right to be
forgotten’ disputes without either the complainant or the search engines
incurring considerable cost.

Conclusion

There seems to be a fairly good chance that the noise around
fake news will die down eventually as the minority that are influenced by it become
more aware of it and platforms deal with the problem themselves.  But even if the noise does die down, there
will no doubt be another internet content problem waiting to take the place of
fake news.  Whatever that problem is, the
issues raised are likely to be very similar to the issues around hate speech
and fake news, and the same intermediary liability principles will apply.

Knee-jerk regulation in the internet space is rarely
effective. The cumbersome and barely used Defamation (Operators of Websites) Regulations 2013 are a case in point. The environment simply moves too fast for regulators and the courts to keep up. But there are other ways of dealing with these problems, the most important being to encourage the development of the same technology that brings them to our attention in the first place. The focus should be on responsibility, not
regulation.

Ashley Hurst is a Partner at Osborne Clarke LLP, with a particular specialism in media and internet disputes.