Keeping Humans at the Heart of FinTech

June 5, 2016

While advances in computer technology are often billed as a battle between humans and machines, technological advances are essentially a human struggle between different groups of people with competing interests and motives. In the financial context, the struggle is largely over the asymmetry of information between supplier and consumer and – increasingly – the outright cost of services, as the financial crisis has demonstrated that our banking business models and infrastructure are badly broken.

The advances of the past 20 years have also coincided with – if not influenced – a decline in faith in society’s institutions and increasing consumer dissatisfaction and activism. In the public sector, we are seeing increasing conflict between notions of ‘national security’ and human rights and freedoms. In the private sector, a new breed of lean ‘facilitators’ has enabled us to use the digital ‘architecture of participation’ to wrest control of our own personal affairs from the one-size-fits-all consumer experience, often free of apparent cost. Yet institutions and businesses have ridden the same technological wave: they have developed Big Data techniques to create ever more detailed consumer profiles, and taken advantage of the architecture of participation as a vehicle for the behavioural targeting of advertising and specific services, relying on ‘hidden’ advertising revenue and commissions, as well as ‘freemium’ sales, for their revenue.

We also tend to get hung up on globalisation and the need for harmonious rules across regions, yet much of the benefit of the internet, for example, has actually occurred at local level, and most of us use our phones and email to stay in touch with local people.

All these trends are busy feeding into the next wave of tech innovation, most notably the ‘Internet of Things’ – a battle being fought in the streets, the car, the home and even within the human body, as we agree how sensors, software and machines in each context should interoperate. Peer-to-peer marketplaces; distributed ledger technology; the use of drones, both civil and military; and driverless vehicles are the other main battlegrounds.

As we move further into the era of machine-to-machine communication, concerns about artificial intelligence are inevitably increasing.

As we discussed at the SCL Tech Law Futures Conference 2015, computers are already more intelligent than humans at some tasks – ‘artificial narrow intelligence’ – but the dawn of artificial general intelligence, where computers can do anything a human can, always seems to be estimated at 20 to 40 years away. Machine super-intelligence is more an article of faith (especially as we may be unable to recognise it), but the possibility cannot be dismissed and should be addressed in parallel with artificial general intelligence, since the evolution from one to the other might happen very quickly.

At any rate, technology remains a very human issue, and we are clearly comfortable with the idea that computers can enhance human experience and understanding, rather than replace them. Some of us collaborate online and participate in digital marketplaces for everything from second-hand goods, to lending and borrowing, to outsourcing household tasks and renting out spare rooms. Some of this activity works around technology-based constraints in various industries, such as music or the financial system, but we should recognise and address the fact that there is human decision-making behind those constraints. It is human fallibility, not that of machines, which lies behind everything from fraud to malware to the mismanagement of the institutions we rage against.

Yet tech evolution is becoming very complex for humans to administer. Layer upon layer of computer code, rules, and terms and conditions must be kept consistent to ensure effective governance of devices, platforms, networks and services. There is an argument for more lawyers and more legal thinking, not less, to ensure the machines act in accordance with humans’ wishes; but lawyers cannot act in a vacuum. Operations, risk and finance staff, as well as developers, business analysts, technology analysts, compliance staff, auditors and regulators, will also need to collaborate to ensure the machines act consistently with appropriate, human-friendly requirements.

Co-regulation: ‘new sheriffs’ and potential choke points

No discussion of FinTech can ignore the role of regulation, and financial regulation is just as vulnerable to new forms of technology as regulation in other industries. Retail peer-to-peer marketplaces call for a significant shift in regulatory approach, owing to the role of the platform operator and the need to balance the interests of market participants on each side. Distributed ledger technology presents a different problem: how to ensure effective governance of the ledger itself.

At an SCL lecture in May 2009, Professor Jonathan Zittrain spoke about the future of the internet being dictated and enforced by the operators of internet platforms acting as private ‘sheriffs’, in the way that Wyatt Earp might have ruled a frontier town in the American West. At the SCL Tech Law Futures Conference 2015, Professor Andrew Murray examined the role of mobile app stores in this context and the extent to which they may operate as choke points to throttle competition.

Zittrain suggested we analyse rule-making in these scenarios through the four quadrants of a coordinate system. The vertical axis ranges from ‘top-down’ decisions by a dictator or small group of individuals at the top, to rules that evolve bottom-up amongst all interested participants at the bottom. On the horizontal axis, rules may be generated via a single hierarchy, represented on the left, or a polyarchy of different people or agencies, on the right. North Korea would be located in the upper left quadrant, for example, while Wikipedia would be in the lower right. Ironically, when we feel powerless in ‘lower right’ scenarios we tend to appeal to top-down or hierarchical bodies for help, while we rebel from the grassroots against totalitarian regimes. That suggests the more trustworthy rule-making structures are in the lower left quadrant.
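
To make the model concrete, here is a minimal sketch in Python. Only the two axes and the two example placements come from the discussion above; the numeric scores and the Regime class are my own, purely illustrative, assumptions.

    # A toy rendering of Zittrain's four-quadrant view of rule-making.
    # Vertical axis: top-down (1.0) vs bottom-up (0.0) rule creation.
    # Horizontal axis: single hierarchy (1.0, 'left') vs polyarchy (0.0, 'right').
    from dataclasses import dataclass

    @dataclass
    class Regime:
        name: str
        top_down: float   # 1.0 = dictated from the top; 0.0 = evolved bottom-up
        hierarchy: float  # 1.0 = single hierarchy; 0.0 = polyarchy of agencies

        def quadrant(self) -> str:
            vertical = "upper" if self.top_down >= 0.5 else "lower"
            horizontal = "left" if self.hierarchy >= 0.5 else "right"
            return f"{vertical} {horizontal}"

    for regime in (Regime("North Korea", 0.95, 0.95),
                   Regime("Wikipedia", 0.10, 0.10)):
        print(f"{regime.name}: {regime.quadrant()} quadrant")
    # North Korea: upper left quadrant
    # Wikipedia: lower right quadrant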

Analysing tech innovation in this way reveals how perception shifts over time. Social network service providers may test their new features (coded rules) on some customers, but end up imposing the results on the rest. App stores have perhaps evolved bottom-up through the participation of a community of developers and their customers, but operators may reserve to themselves control over certain core features of the operating system or handset that may inhibit innovation and competition away from the platform (at least until a rival is created). 

Such rule-making is critical in peer-to-peer networks and those based on ‘smart’ energy meters and the ‘Internet of Things’. Even cars and other vehicles have become computer networks on wheels with their own governance regime – as the emissions scandal has revealed. Rules will also be embedded in distributed ledger protocols, and therefore potentially hardwired unless change can be achieved by a majority of participating computers.
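
That ‘majority of participating computers’ point can be expressed as a toy rule. This is not any real ledger protocol; the function and node names are invented for illustration.

    # A hypothetical sketch: a rule embedded in a distributed ledger protocol
    # only changes if a strict majority of participating computers adopts it.
    def rule_change_adopted(votes: dict) -> bool:
        in_favour = sum(votes.values())
        return in_favour > len(votes) / 2

    votes = {"node-a": True, "node-b": True, "node-c": False}
    print(rule_change_adopted(votes))  # True: 2 of 3 nodes accept the change

    votes["node-b"] = False
    print(rule_change_adopted(votes))  # False: the old rule stays hardwired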

Competition authorities and other regulators need to be especially vigilant as technological innovation gathers pace – including through their own use of diagnostic machines – since illicit conduct will be happening in computer code that may be hard for humans to detect.

So when does a trusted service provider cease to be trusted to make and enforce its own rules?

This seems to depend on whether the service provider is perceived to be acting in its own interests or those of its users – or when it loses its ‘human effect’, as Zittrain put it, his example being the preoccupation amongst the Wikipedia editorial community with what Wikipedia is and what it means to be a Wikipedian. I also like to think of it in terms of whether a service provider is acting as a ‘facilitator’, which exists to solve its customers’ problems, or as an ‘institution’, which exists to solve its own problems at its customers’ expense. While still quite subjective, this at least may be judged against more demonstrable characteristics, such as alignment with customers’ interests, openness, adaptability, transparency and a commitment to our long-term economic and environmental sustainability.

In this context, it is interesting to consider the impact on rule-making of the major financial scandals of the past decade or so, such as the financial crisis or payment protection insurance. Assumptions about how investment banking groups would act to protect themselves and their stakeholders – the so-called ‘efficient market hypothesis’ – were blown apart in the period before the crisis was declared in 2008. Perceived gaps in regulation that allowed such a radical betrayal of trust led to appeals for much more regulation. Many new laws, regulations and rules have been imposed top-down, but most are yet to take effect, and market sentiment reveals little trust in the efficacy of these ‘solutions’. Little has really changed since 2007. If anything, the financial services industry is more concentrated than ever; there is more public subsidy in the form of ‘quantitative easing’ at national and regional level; and the fundamentals are worse than before the financial crisis began. As a result, the scale of fines and compensation has increased and profitability has plummeted amidst low interest rates. This heralds less investment than ever in maintaining the creaking silos of legacy technology, in resolving mismatches in assessments of banks’ financial positions, and in patching the vast multitude of ‘fixes’ and ‘workarounds’ in users’ spreadsheets that constitute banking ‘infrastructure’ – let alone genuine innovation that connects customers with their actual accounts in core systems.

Meanwhile, there has been a growth in bottom-up alternative financial services, such as peer-to-peer lending and crowdfunding. These were self-regulated by platform operators and their trade bodies until the industry itself called for proportionate regulation (that appeal to top-down, hierarchical bodies again), in order to level the playing field, given the ‘kite mark’ and related customer tax incentives associated with regulated financial services. As a result, peer-to-peer lending, for example, was granted regulated status in April 2014 and now has its own ‘Innovative Finance ISA’ in competition with the ISAs for cash deposits, stocks and shares that have attracted over £400bn from UK taxpayers to date. This emphasises the fact that regulation and rules are not a panacea: incentives and disincentives arguably play a more important role in determining whether to innovate and what we programme the machines to do.

The European Commission has adopted a wait-and-see approach to peer-to-peer lending and crowdfunding, as those models are still nascent in many Member States. But the Commission has continued a trend, dating from 1999, of supporting alternative payment services, by widening the scope of payments regulation to cover new ‘payment initiation’ and ‘account information’ services at national level from January 2018.

In addition, the Financial Conduct Authority has been tasked with supporting innovation and competition, and has created an Innovation Hub and a Regulatory Sandbox with a view to smoothing the way for new entrants and products. But progress remains slow. The sandbox only goes live in beta in May 2016; and of the 413 requests for support received by the FCA’s Innovation Hub as at February 2016, about 215 firms (52%) were given support, yet only 39 of those firms (18%) have either been authorised (18) or are going through the approval process (21). And in a recent statement defending its record on processing applications for authorisation by P2P lending platforms, the FCA said that it had processed only 8 of the 94 applications received in the previous six months (about 9%), noting that it has up to 12 months to consider each one.
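
To be clear about the bases of those percentages (the 18% is of the firms given support, not of all requests), the arithmetic works out as follows:

    # FCA Innovation Hub figures as quoted above.
    requests = 413      # requests for support as at February 2016
    supported = 215     # firms given support
    authorised = 18     # firms authorised
    in_process = 21     # firms going through the approval process
    print(round(100 * supported / requests))                   # 52 (% of requests)
    print(round(100 * (authorised + in_process) / supported))  # 18 (% of supported firms)
    print(round(100 * 8 / 94))                                 # 9 (% of P2P applications processed)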

Hardly an adrenaline shot for a financial system in crisis.

Evolution from P2P to fully distributed marketplaces

While peer-to-peer marketplaces continue to evolve, so does distributed ledger technology, which has its roots in the Bitcoin protocol launched in 2009.

A distributed ledger is basically a set of digital records accessible to all computers running the same cryptographic protocol. All participants’ computers can view the whole ledger, which provides a complete history that is encrypted, immutable, verifiable and auditable, although only those with the correct ‘key’ can access the details associated with a specific record. The difference between this approach and the peer-to-peer model is that the ledger is ‘distributed’, or decentralised, in order to eliminate the need for a central authority or intermediary platform to process, validate or authenticate transactions amongst participants. The result should be a more open, democratic, transparent and cost-effective system than even a peer-to-peer model can deliver.
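
By way of a hedged illustration, here is a minimal Python sketch of the core idea: an append-only ledger whose records are chained by cryptographic hashes, so that any participant can verify the whole history. It deliberately omits consensus, digital signatures and the encryption of record details.

    # Each record commits to the hash of the previous record, so any tampering
    # with an earlier record breaks every later hash link.
    import hashlib
    import json

    def record_hash(prev: str, payload: str) -> str:
        data = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
        return hashlib.sha256(data.encode()).hexdigest()

    def append(ledger: list, payload: str) -> None:
        prev = ledger[-1]["hash"] if ledger else "0" * 64
        ledger.append({"prev": prev, "payload": payload,
                       "hash": record_hash(prev, payload)})

    def verify(ledger: list) -> bool:
        prev = "0" * 64
        for rec in ledger:
            if rec["prev"] != prev or rec["hash"] != record_hash(prev, rec["payload"]):
                return False
            prev = rec["hash"]
        return True

    ledger = []
    append(ledger, "Alice pays Bob 10")
    append(ledger, "Bob pays Carol 4")
    print(verify(ledger))   # True
    ledger[0]["payload"] = "Alice pays Bob 1000"
    print(verify(ledger))   # False: the history is tamper-evident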

While some ledgers are fully public (eg the Bitcoin blockchain), it should be noted that ‘permissioned’ ledgers are being developed to undertake certain financial market activities more efficiently, for example. The main benefit there is cost reduction for the participants concerned, so openness, transparency and democracy don’t necessarily feature!
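
The contrast can be sketched in a few lines (the class and member names are invented for illustration): in a permissioned ledger, only vetted participants may write, trading openness for efficiency.

    # A hypothetical permissioned ledger: only approved members may append
    # records; everyone else is refused.
    class PermissionedLedger:
        def __init__(self, members: set):
            self.members = members
            self.records = []

        def append(self, participant: str, payload: str) -> None:
            if participant not in self.members:
                raise PermissionError(f"{participant} is not a permitted participant")
            self.records.append(payload)

    ledger = PermissionedLedger({"bank-a", "bank-b"})
    ledger.append("bank-a", "settle trade 123")   # allowed
    # ledger.append("outsider", "...")            # would raise PermissionError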

It appears that distributed ledger technology could have a horizontal impact similar to that of the internet or mobile telephony. But the technology seems likely to be most useful in scenarios where items change their state or status frequently and there are many broadly dispersed participants using many different systems. These scenarios are challenging legacy database and trading technology and intermediary business models, so cheaper, faster alternatives are definitely required.

As a result, Accenture estimates that investment in distributed ledger technology for the financial markets will reach $400 million by 2019. Groups of financial institutions, such as the Post Trade Distributed Ledger Working Group, R3 and various stock exchanges, are leading the way; and several investment banks have filed for ledger-related patents (and potential choke points).

Distributed ledgers could also be used to record and track intellectual property rights and other assets, and to collect, settle and disburse related payments, either completely or through integration with existing registers and payment systems. So-called ‘smart contracts’ might be written in computer code so that they execute automatically, interoperating with existing mobile and internet applications and/or the ‘Internet of Things’.
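
As a sketch of what such a ‘smart contract’ term might look like as ordinary code (the parties and revenue shares are invented; a real smart contract would execute on the ledger itself):

    # A toy 'smart contract': a licence term written as code that splits an
    # incoming payment between rights holders automatically.
    def royalty_contract(payment: float, shares: dict) -> dict:
        assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must total 100%"
        return {party: round(payment * share, 2) for party, share in shares.items()}

    # eg a track licence splitting each streaming payment 50/30/20
    print(royalty_contract(10.00, {"songwriter": 0.5, "performer": 0.3, "label": 0.2}))
    # {'songwriter': 5.0, 'performer': 3.0, 'label': 2.0}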

Distributed ledgers raise familiar questions for policy makers, legislators, regulators and legal practitioners at both national and international levels. Most authorities have adopted a ‘wait and see’ approach until they better understand the technology and its uses, but significant work is underway at the International Monetary Fund, the Bank of England, the US Securities and Exchange Commission, the European Commission and Parliament, and the Financial Conduct Authority.

It remains to be seen whether distributed ledgers could resolve competition issues through the creation of completely open markets, or whether they simply amount to a new domain in which the same problems of monopolistic control manifest themselves. After all, couldn’t the ledger protocol itself operate as a choke point? And would the immutable nature of the ledger mean that an anti-competitive feature could not be resolved at all?

It is too early to expect the many technical challenges identified with distributed ledgers to be resolved, such as scalability, latency and standardisation on various fronts. But it is clear that any project to implement distributed ledger technology is similar to an outsourcing project in many ways. Understanding what the technology can and cannot achieve is key, as is understanding the business process it is supposed to replace or replicate (and knowing whether that process is stable in the first place). The very nature of a distributed ledger may also mean dedicating significantly more resources to collaboration with suppliers, customers and industry participants than ever before, including the negotiation of smart contracts, rules, standards, liability and regulatory accountability. Accordingly, while the ledger itself might be cheap to run, the process of adopting the technology may not be.

Simon Deane-Johns is a consultant solicitor with Keystone Law and Chair of the SCL Media Board.