Carry on Automat(r)on: Legal and Ethical Issues relating to Healthcare Robots

May 27, 2013

Popular culture has provided us with numerous examples of a future in which human needs are met by an assemblage of Twikis, Marvins and Mickeys. Technology has now developed to a point at which robots provide a feasible solution to certain aspects of healthcare provision. This move has been fuelled both by ageing populations and by a global drive to cut the cost of service delivery. The US 'Roadmap for US Robotics',[1] while strongly prioritising the need for innovation, highlights the potentially significant economic benefits of implementing robotics in the healthcare field. The use of medical robotics to perform surgical procedures such as prostate operations is also growing, with over a hundred US hospitals advertising robotic surgery on their websites, often focusing on its benefits and glossing over potential risks.[2] This article gives an overview of some of the legal and ethical issues raised by the use of robotics in on-going healthcare, rather than in one-off medical and surgical procedures.

How are robots used in healthcare? 

Robots are employed across a variety of rehabilitative and socially assistive healthcare regimens. Robotic mobility aids, such as wheelchair robots and manipulator arms, are used in physical therapy. Emotional and psychological support can be provided by systems such as the PETS robot, designed to aid paediatric rehabilitation through interaction and storytelling.[3] These socially assistive functions can be combined in systems such as BIRON, a personal assistant able to find, remember and remind while also interacting with humans and responding to their cues.[4] One of the most popular therapeutic robots, PARO, which takes the form of a baby harp seal, has been found to lower stress levels and improve communication in dementia patients.[5] In February 2013 researchers at the University of Salford unveiled P37 S65, a so-called 'Carebot' which can carry meals, crack jokes and remind patients to take medication.[6] Robotic systems have thus developed to a point at which they provide a feasible alternative to care delivered solely by human practitioners. This has created a need for legal and ethical principles that both protect patients and support the dissemination of the benefits of this evolving technology.

Liability 

The introduction of a robot into a healthcare setting complicates the assignment of liability if something goes wrong. There is the potential for damage to be caused to a patient, medical practitioner or other equipment if a robotic system malfunctions. Ethical issues arise in relation to agency and responsibility, with a need to establish who is in control and at what point a duty arises. Layers of liability come into play, with the potential to apportion blame to designers, programmers, medical staff and even the patient as the end-user. Asaro[7] believes that these issues need to be addressed within existing legal regimes and identifies product liability as a helpful framework. Creators and manufacturers could be held negligent both for a failure to take proper care and for a failure to warn. In this way, healthcare robots could be treated like any other manufactured product, with liability apportioned according to the principles of negligence and the level of negligence determined against accepted industry standards. Difficulties arise, however, in relation to sophisticated systems such as those created by the EU-funded ALIZ-E project,[8] which can evolve in a dynamic manner, building upon interactions with humans and the wider world. This potential unpredictability creates the need for a safety framework which both ensures that the experience of the end-user is paramount in the design process and provides a strong focus on the ethical duties of the creator.

Safety standards 

As is evident from the discussion of liability, the development of safety standards is crucial to the wider uptake and acceptance of healthcare robots. Again, the evolutionary capabilities of some robotic systems cause difficulties in developing guidelines and principles that cover potentially unforeseeable consequences of the design process. Such standards need to provide an effective safety framework while remaining flexible enough to respond to the rapid pace of technological development. Harper[9] highlights the importance of mission-worthiness in standards creation: the dependability of the robotic technology in its specific environment.

In 2005 the International Organization for Standardization (ISO) established an Advisory Group (AG) on standards for mobile service robots. A set of standards relating to non-medical personal care robots is currently under development.[10] At a practical level these cover specific tasks, potential environmental conditions and hazards, while creating validation tests by which to measure safety compliance. Arguably, a physical malfunction such as a power surge or collision can be addressed relatively easily by focusing upon the nature of the robotic system; what is more difficult to predict is the behaviour of the humans with whom the robot interacts. Salter[11] identifies a 'wildness' in child-robot interactions which introduces numerous variables into the standards-setting process. In work towards the development of so-called 'roboethical' guidelines, Enz argues that no standard is valid without assessment and consideration of human expectations and fears. While there are difficulties inherent in building potential human behaviour into any guidelines, it cannot be overlooked if reliable, robust standards are to be developed.

Trust 

Robots in the healthcare environment will not be successful if they are not fully trusted by both patients and practitioners.  While safety standards can address potential physical or emotional damage, healthcare robotics will not be effective if users are uneasy about their interaction with the technology.  This goes further than merely feeling safe; patients need to feel comfortable with the technology and able to rely upon it even when in a vulnerable state.[12]  Japan, a country in which over 23% of the population is aged 65 and above and in which there is a severe shortage of domestic labour, has been at the forefront of robotic development.  Even in this apparently tech-friendly nation, however, responses to robot aides have not been entirely positive, with certain systems removed from hospitals due to patients’ lack of trust and their desire for human interaction.[13]   

Research[14] into human-robot interaction has identified a need for an emotion-based architecture built around regulative, expressive and motivational functions. Hancock's work identifies and analyses the link between human trust and a robotic system's level of automation, behaviour, dependability, reliability and predictability. Responses to robotic systems have also been found to differ significantly according to socio-demographic factors such as age, gender, education and even religious and cultural background.[15] These differences need to be considered in relation to equality of service provision. If healthcare robots are to take on rehabilitative and therapeutic tasks previously carried out by humans, they need to elicit a level of positive emotional response from users or they are likely to be rejected outright.

Robot rights? 

While human suspicion of robotic systems must be addressed if this technology is to be accepted, ethical issues can also arise when patients develop strong emotional attachments to a robotic healthcare provider. On-going interaction with sophisticated dynamic systems can lead to a user projecting human qualities onto the technology and perceiving a psychological bond.[16] If such a robotic system were to be violently destroyed, damaged or even reprogrammed, this could have a detrimental effect on the mental well-being of its user. Torrance[17] imagines a world in which rights could be extended to robots, with humans under a responsibility to treat them ethically. Darling[18] draws an analogy to second-order rights such as animal rights, which are assigned not only to protect animals from pain but also to protect societal values. Humanity, she argues, could be damaged if an entity onto which human characteristics have been projected is seen to be harmed. While this approach raises key ethical questions relating to robotic autonomy, quasi-personhood and, ultimately, consciousness, there is a limit to the extension of rights and corresponding duties to robots. The human tendency to anthropomorphise does not apply only to robotic technology; similar attachments could be formed with, for example, computers, televisions and kitchen implements. It can be argued that robotic technology is not yet sophisticated enough to warrant its own framework of freedoms and responsibilities. At a more basic level, healthcare practitioners working with robotic systems need to be aware of the potential for emotional links to develop and should be provided with guidance on how best to support patients.

Conclusion 

The issues of liability, safety standards and trust are intrinsically linked to the acceptance and ultimate success of healthcare robotics. There is, however, a constant need to return to the rationale behind the implementation of this technology and the potential benefits it can bring. Questions need to be raised about the ethics of implementing these systems: are they being developed for human good and the enhancement of service provision, or do they merely represent a misguided attempt to cut costs and, in turn, cut corners?

Dr Catherine Easton is a Lecturer in Law at Lancaster University: c.easton@lancaster.ac.uk; Twitter: @EastonCatherine

[1] Computing Community Consortium (2009) A Roadmap for US Robotics: From Internet to Robotics. Computing Research Association. http://www.us-robotics.us/reports/CCC%20Report.pdf [Accessed 02/05/13]

[2] Jin, L., Ibrahim, A., Naeem, A., Newman, D., Makarov, P. and Pronovost, M. (2011) Robotic Surgery Claims on United States Hospital Websites. Journal for Healthcare Quality, 33(6), November/December

[3] Plaisant, C., Druin, A., Lathan, C., Dakhane, K., Edwards, J. and Montemayor, J. (2000) A storytelling robot for pediatric rehabilitation. In: Proceedings of the Fourth International ACM Conference on Assistive Technologies

[4] Haasch, A., Hohenner, S., Hüwel, S. et al (2004) BIRON: The Bielefeld Robot Companion. In: Proceedings of the International Workshop on Advances in Service Robots, Stuttgart, Germany

[5] Wada, K., Shibata, T., Musha, T. and Kimura, S. (2008) Robot therapy for elders affected by dementia. IEEE Engineering in Medicine and Biology, pp 53–60

[6] BBC News (2013) Robot to care for elderly made at University of Salford. 26 February 2013. http://www.bbc.co.uk/news/uk-england-manchester-21590182 [Accessed 03/05/13]

[7] Asaro, P. (2007) Robots and Responsibility from a Legal Perspective. In: Proceedings of the IEEE 2007 International Conference on Robotics and Automation

[8] ALIZ-E (2012) The ALIZ-E Project: Adaptive Strategies for Sustainable Long-Term Social Interaction. http://www.dfki.de/KI2012/PosterDemoTrack/ki2012pd09.pdf [Accessed 09/05/13]

[9] Harper (2010) Towards the Development of International Safety Standards for Human Robot Interaction. Int J Soc Robot, 2, pp 229–234

[10] ISO/DIS 13482 Robots and robotic devices — Safety requirements for non-industrial robots — Non-medical personal care robot http://www.iso.org/iso/home/standards_development/list_of_iso_technical_committees/iso_technical_committee.htm?commid=54138  [Accessed 04/05/13]

[11] Salter, T. et al (2010) How wild is wild? A taxonomy to characterize the 'wildness' of child-robot interaction. Int J Soc Robot, 2, pp 405–415

[12] Yagoda, R. et al (2012) You Want Me to Trust a ROBOT? The Development of a Human–Robot Interaction Trust Scale. Int J Soc Robot, 4, pp 235–248

[13] Fitzpatrick, M. (2011) No, robot: Japan's elderly fail to welcome their robot overlords. BBC News, 3 February 2011. http://www.bbc.co.uk/news/business-12347219 [Accessed 03/05/13]

[14] Hirth, J. and Berns, K. (2011) Emotion-based architecture for social interactive robots. Int J Soc Robot, 3, pp 273–290

[15] Flandorfer, P. (2012) Population Ageing and Socially Assistive Robots for Elderly Persons: The Importance of Sociodemographic Factors for User Acceptance. International Journal of Population Research. http://www.hindawi.com/journals/ijpr/2012/829835/ref/ [Accessed 08/05/13]

[16] Wada, K., Shibata, T., Musha, T. and Kimura, S. (2008) Robot therapy for elders affected by dementia. IEEE Engineering in Medicine and Biology, pp 53–60

[17] Torrance, S. (2008) Ethics and consciousness in artificial agents. AI & Society, 22(4)

[18] Darling, K. (2012) Extending Legal Rights to Social Robots. We Robot Conference, University of Miami, April 2012