Industry 4.0: Legal Systems, the Social Contract and Health Care

May 11, 2016

I.                 Introduction

Using the keyword 'industry 4.0' for a development in IT makes one important feature clear: the developments described by this term are presently narrowed down to changes in the industrial, i.e. the professional, environment. Thus, many questions that might arise if new and ubiquitous IT were used in private environments by end-users are put aside when analyzing opportunities and risks in a professional environment. Nevertheless, industry 4.0 technology affects our existing legal system and society to an even greater degree than most presently anticipate, including in the non-professional environment.

This article identifies a central problem, offers some answers on regulation from a broader legal perspective, and develops some cornerstones for flexible solutions. Examples concentrate on public health care systems. Necessarily, the article can give only a rough sketch.

II.               Robotics in Health Care

Automated IT has long since entered the medical side of the health care system: robots have been shown to operate more precisely than surgeons; the future may be the fully digitalized, fully robot-staffed operating room. Intelligent IT makes it possible to integrate prosthetic limbs with far more success and usability; stimulation of brain areas by permanently implanted IT treats Parkinson's disease, epilepsy and depression; the insulin pump already ensures a continuous insulin supply for diabetes patients.

But robotics has also entered the administrative and the care side of our health system. IT administers facilities, personnel and resources in an increasingly expensive system. Robots take care of the transport of patients and goods, assess the urgency of emergency calls and determine the amount of care in smart environments. The patient-doctor relationship mediated by electronic devices, including fitness apps and tele-medicine combined with big data, will lead to new forms and results of diagnosis and therapy. The impact of industry 4.0 in the health system will become even greater in embedded systems and networks, when self-learning robots become part of larger systems. In the long run, the work and treatment environment in the health system will change dramatically: outpatient services will increase; robots will accompany patients and assist them – not only in therapy but also in their everyday routines of rehabilitation, medication and prevention; doctors' decisions will be challenged by robots.

III.             Legal and Societal Difficulties

Although the promise of industry 4.0 is great, there are some difficulties and problems attached to it.

1.      Data and IT security issues

Widely neglected is the field of data/IT security. The more data is stored and used – a necessary condition of industry 4.0 – the more protection is needed, both of the data itself (content, authority, quality, usability) and of the underlying IT systems of transfer and storage. A robot that is stopped in the middle of an operation by ransomware? Impossible. An automated medication system that is manipulated? No way. A hacked logistics system that continuously delivers dangerous goods through Europe? Beware.

2.      Data protection

Closely connected are data protection issues. Even though a new European General Data Protection Regulation has just been enacted, the specific dangers to data in embedded systems, ubiquitous computing, big data and profiling have not been addressed. Thus, the protection of personal and business data remains unclear. Data in the health care sector is especially sensitive and especially valuable. Data protection has to ensure that this data is protected while still allowing further individualization and specification of connected, individualized services. This leads to core problems such as the usability of data and restrictions on profiling, while anonymity and technical solutions increasingly fail as protective measures. State intervention causes even more problems, as it creates sets of data and tools of access to systems and data.

3.      Choice via algorithms

Self-learning robots function on the basis of algorithms, and these algorithms will have to encode value choices. In the case of a collision: do we want an algorithm to choose to kill five social welfare recipients rather than the CEO of the leading IT company? Which moral principles do we want robots never to forget? And which are the standards where we will allow learning and change? How do we set them? Do we need humans to overrule machine decision-making, or do we need robots to overrule human action – and how do we accomplish either in self-learning systems?

4.      Cost versus dignity, solidarity and self-determination

If robots are integrated in care systems – e.g. feeding a patient confined to bed and monitoring her physical functions, or following a person with dementia in order to lead her back later – many of these tasks are less costly than care personnel. We can control the daily routines of persons in the health care and work environment. How much individual 'unhealthy behavior' can be tolerated by the health system, the employer or social security without solidarity being questioned? How do we set standards for that? Who makes individualizing choices for a person suffering from dementia? How much human interaction needs to be assured in the work and health environment for both patients and employees? These questions become all the more urgent because the choice for or against the use of a robot cannot be assigned to the individual and his or her preferences. Rather, a publicly financed health care system decides within its own systemic logic, and an employer decides according to cost-benefit analysis. Large-scale effects lead to industry and health care system standards that are not necessarily driven by the best technology, the balancing of interests or overall societal values.

5.      Autonomy, person and liability

In a self-learning system, it becomes impossible to assign the system's actions to an individual's (or an entity's) responsibility, control and decisions. Who decides in a self-learning robot system, and who controls the results? Classical concepts, such as constructing liability according to use, fail due to loss of control. Often, the individual cannot even decide whether to enter into such a system.

Also, the concept of a person becomes critical. When using a carrying aid, one might still distinguish between the (assisting) robot and the (assisted) person. If, however, technological implants are permanently combined with the human and influence his or her emotions, preferences and behaviour, the borderline between the human and the technological person becomes fluid.

6.      The slippery slope of power in the industry 4.0 environment

These – and far more – questions are waiting for answers that cannot be given by technology alone. So the question of the final decision-maker becomes pressing: who has the power to decide?

In health care systems, one might assign the decisive power to the health insurers. However, are they the proper decision-makers? Their democratic legitimacy might be construed, with some effort, in a public health care system; in a private system, individual interests prevail. In any case, insurance companies are of underestimated importance as an intermediary and a decisive factor in creating industry 4.0. It might be a single entrepreneur's decision how to structure the workplace and working conditions; liability insurers, however, are often decisive by administering certain standards or even factually forbidding certain activities and interactions by denying coverage.

Looking at self-driving car systems, the question of the decision-maker becomes even more complicated: the interaction between the driver or driven person, (private) insurance regulations and standards, the regulator acting in the interest of third parties and against market failure, and the manufacturer is complicated further by additional parties involved, such as salespersons, repair shops, information services and, not least, the providers of network technology. As in all networks, the tendency towards natural monopolies might also make it difficult to determine the values under which a decision is taken.

IV.            Potential Approaches to Solutions

As diverse as these problems are, the potential solutions are just as diverse. The perfect instrument does not exist.

1.      De-centralization

Some of the problems described above are rooted in the extent and number of networks involved and thus in the data available to numerous actors. Development tends towards centralization. However, every network and every system is only as secure as its weakest part; IT security and data protection can be secured much more easily in de-centralized systems. Rather than following a connect-all approach, a more restricted, self-limiting strategy might solve at least some problems, especially in IT security, data security and data protection. The cost, however, will be less user-friendly solutions and less ready-to-use technology.

2.      Minimum standards and democratic process

Presently, software engineers are left with the task of deciding most of the above questions. They are not trained to do so. Therefore, they need minimum standards and clear rules under which to develop. These have to be found in a democratic process, as this is the only way to legitimize such decisions. Thus, regulatory instances need to accompany the process actively. Governments need to understand, value and discuss the impact of digitalization and industry 4.0 developments with high priority – industry 4.0 also influences their modes of operation and the settings of democracy.

3.      Wide and European regulatory impact

Democratic decision-making calls for parliaments' regulatory impact and guidance. However, any nationally restricted regulatory effort will fail, as industry 4.0 does not stop at borders. Automated car systems will need a minimum of interaction with international cars; data storage and data transfer typically cross borders. International regulation, however, is time-consuming and costly, its outcome unclear and potentially threatening to established values in rule-of-law states. Consensus is thus not the primary goal. Rather, the European Union, as regulator of one of the largest and most dynamic markets, can become a leading regulator through quick action, making use of its established fora for dispute and consensus and a somewhat common value framework.

Any regulatory impact has to integrate the effects of digitalization, which are often not yet understood. Zoning and building law need to adjust to different mobility; tax law needs to understand information markets and transferability of digital business; the concept of a legal person needs to be enhanced; liability rules have to be adjusted; minority and underprivileged groups have to be actively protected, to name just a few.

4.      Strict enforcement

As in any new field of technology, a level playing field for all involved has not only to be established but also enforced. A regulatory impact within the European Union that does not bind all players active in Europe to the same rules will not only lead to the inferiority of European services in the digital world; it would also threaten the backbone of Europe as such: stability, equality, legal certainty, predictability. Unlike data protection, regulation of industry 4.0 needs to be strictly enforced from the start, and powerful institutions have to be established.

5.      Differentiation

The variety of industry 4.0 developments calls for sector-specific regulation. In a Herculean task, regulators have to create a system of specific laws, in close interaction with each other and with general principles, in order to create and govern a fair digitalization as such. Some potential categories could be: private versus public environment; professional versus private environment; autonomous versus integrated or reduced-capability decision-makers. Rules can then concentrate on typical problems arising for third parties, for consumer protection or in market-failure situations. A health care robot can be programmed by professionals but will have to adapt to private needs in a private environment; its level of control and continuous outside intervention has to differ from that of a fully automated logistics center. At the same time, data protection issues typically become problematic only in the first example, not in the second, where protection of business information is called for instead.

V.              Conclusion

Free development of digital automated systems, industry 4.0 and robots, left to the impulses of innovation and market standards, can neither address nor solve the arising problems. Too often, the effects are side-effects on third parties not involved and not directly affected. Thus, regulatory impact on a European level is needed, allowing for continuous monitoring of the technology. Moreover, a discussion on risks and challenges and on overall benefits and effects has to be initiated within society to create new standards and informed acceptance. Assistance in creating a balanced system can come from the well-established instruments of technology governance.

Prof. Dr. Indra Spiecker genannt Doehmann, LL.M. (Georgetown Univ.), Chair of Administrative Law, Information Law, Environmental Law and Legal Theory, Director Research Institute on Data Protection, Director Ineges, Institute of European Health Policy and Social Law, University of Frankfurt/Main, Germany.