A Call to Regulate Frankenstein

February 28, 2017

When a parliamentary report cites Mary Shelley’s
Frankenstein in its recitals and proposes new regulation for robots with
artificial intelligence (AI), one cannot be sure whether the 19th or the
21st century has inspired the legislator. On 16 February, the European
Parliament took a step towards the introduction of new regulation of robots in
Europe. Declaring that the EU needs to ‘take the lead’ in this area, the
Parliament endorsed a Report that
asks the European Commission to propose rules on robotics and artificial
intelligence, in order to fully exploit their economic potential and to
guarantee a standard level of safety and security. The Report addresses
various kinds of robots: autonomous vehicles, care robots, medical robots and
drones amongst others.

This follows a report published in May 2016 on Civil Law Rules on Robotics, which included proposals on the regulation of
the robotics industry at an EU level. Such regulation was argued to be necessary
in order to ensure that the EU and its Member States maintained control over
the regulatory standards at which the industry operated in the EU, as well as
to ensure certainty for enterprises planning to develop their businesses
therein. The Parliamentary Committee on Legal Affairs has released a
preliminary study on the impact of robotics on civil law.

Liability rules

The issue with the question of liability for robots is that
the more autonomous robots are, the less they can be considered simple tools in
the hands of their manufacturer, programmer, owner or user. This makes the
general rules on civil liability potentially insufficient and, in the European
Parliament’s view, calls for new rules which focus on who should be held – partly
or entirely – responsible for the acts or omissions of a robot.

But how to determine who should be responsible for damage
caused by such a machine? The Parliament suggests that an obligatory insurance
scheme, similar to that for cars, could address the complexity of allocating responsibility
for damage caused by increasingly autonomous robots. This could allow the
manufacturer, the programmer, the owner or the user of a robot to benefit from
limited liability insofar as the use of smart autonomous robots would be covered
by a supplementary fund to ensure that any victims do not remain uncompensated.
Moreover, the Parliament urged the Commission to ensure that a specific legal
status is established for robots in the long-run so as to establish liability
in the event they cause damage.

From this, one can already appreciate the legal as well as
philosophical complexities that arise from regulating robotics, as we find
ourselves on the tipping-point between covering such liability under
established civil rules, and the need to develop new rules to cover acts for
which the direct involvement of a human actor will be to a great extent
eliminated, eg where a robot takes autonomous decisions through machine-learning
beyond its original programming.

These are not new issues being raised by the Parliament.
Indeed, the European Commission has already launched a consultation into the
adequacy of the Directive on Liability for Defective Products, specifically citing
questions as to whether the existing rules are fit for purpose in relation to ‘technological
developments such as the Internet of Things and autonomous systems’. Questions
of the adequacy of existing rules of liability for new technologies have also
been considered in the work of Working Group 4 within the Alliance for
Internet of Things Innovation (AIOTI), and dealt with in a recently published AIOTI report.

Whilst questions of liability for damage are important, and
potentially challenging when it comes to smart robotic machines, the more
fundamental question in the European space is how to regulate the safety and
performance of such machines as a precondition to them entering the market in
the first place. It is at that level that questions of who is responsible, and
for what, will bite the hardest in the short term, and which will potentially
represent the greatest threat to innovation if not handled correctly.

Safety rules

All products – including robots – generally have to
be safe, ie they have to comply with reasonable safety expectations by meeting
state-of-the-art requirements. However, in an area where development moves at
a rapid pace, and also considering that robots’ and AIs’ abilities will
probably be expected to surpass human abilities, it will be increasingly
difficult to define safety standards. Hardware developers and software programmers
are increasingly seeking legal guidance to determine the requirements they are
working against.

Social and ethical considerations – and a new agency

The European Parliament’s rapporteur in her draft report had
suggested taking account of possible negative consequences on the job market
that the use of robots might result in. Interestingly enough, Bill Gates
recently argued that robots that take over jobs should in fact pay a form of income tax.

However, the Parliament’s majority did not agree to include
the rapporteur’s concerns regarding the impact on the workforce in the final
text approved. Instead, the Parliament potentially created a new workforce by
asking the Commission to consider setting up a European agency for robotics and
artificial intelligence, to supply public authorities with technical, ethical
and regulatory expertise.

In the field of robotics and AI, ethical considerations have
always been a driving factor. As an example, as early as 1942, science fiction
author Isaac Asimov defined ‘the three laws of robotics’, eg that a robot may
not injure a human being. Against this background, it does not come as a surprise that the
Parliament appreciated that the growing use of robotics raises ethical issues,
for example, to do with privacy and safety, and proposed that a voluntary
ethical code of conduct on robotics for researchers and designers be created,
to ensure that they operate in accordance with legal and ethical standards and
that robot design and use respect human dignity. The proposed charter could
have varying binding effects as it would constitute an umbrella framework, a
soft law instrument, covering practices such as the robotics engineers’ ethical
code of conduct, the conduct for research ethics boards, and the designer
licence and user licence.


With many countries, such as the US, Japan, China and South
Korea, already developing their own rules on robotics and artificial
intelligence, the EU is now aiming to take pre-emptive steps to ensure that
any large-scale roll-out of robotics in the EU does not fall below its
established high standards.

Such rules will fuel the necessary discussion of what
happens next, once such robots become part of our daily lives. Developers,
manufacturers and users of robots should closely follow this important debate
which is likely to require a steep learning curve for the legislators and the
enforcing authorities. It is an area that calls for a flexible and forward-looking
approach to law-making and regulation, to avoid a legal environment which
becomes characterised by inefficiencies, stifled innovation, wasted
opportunities, and the need for constant amendments as these technologies
present new challenges that previously seemed to feature only in sci-fi.

Dr Falk Schoening is a Partner at the
Brussels office of Hogan Lovells International LLP. His special expertise is in
international antitrust cases which require coordination between different
legal systems or representation vis-à-vis several regulators.

Rod Freeman is an International
Product Lawyer and a Partner at the London office of Hogan Lovells
International LLP.

Dr Sebastian Polly is a Partner at the
Munich office of Hogan Lovells International LLP. His focus is on product
liability, product safety and product compliance law.

The authors wish to acknowledge the assistance of Paschalis
Lois, a trainee in Hogan Lovells’ Brussels office, who contributed to this article.

This article is an edited version of a blog post which
appeared on Hogan Lovells’ Global Media
and Communications Watch blog.