What’s the Big Deal? Big Data in the Financial Services Sector

July 14, 2015

The premise of this article is that disruptive digital technologies have changed the way customers want to interact with businesses in the financial services sector. I ask:

·  What is Big Data and how is it deployed in the financial services (FS) sector?

·  What are the legal, commercial and regulatory issues to consider?

I do not cover the data protection or IPR issues associated with this technology (although IPR is considered below in the context of software procurement and licensing).

Customers, myself included, are used to the customer-centric technologies found in other ‘digitised’ sectors like publishing and music, which had to adapt to the digital revolution early. Customers now want that experience replicated in the FS sector. Banks, for example, are beginning to take notice, investing significant amounts of money in IT upgrades: Commonwealth Bank of Australia invested over AUD 1.1 billion in an end-to-end IT transformation project to replace its ageing core banking system, and Barclays has been promoting customer-centric technologies like the mobile payments app Pingit for many years.

Clearly, to meet the challenge of such disruptive technologies and maintain (and improve) the share of the customer’s digital wallet, the established FS businesses must continue to improve their customer engagement.  This is particularly important given the rise of challenger banks, like Atom and Metro Bank, and digital start-ups, like Zopa in the peer-to-peer lending space or comparethemarket.com in the insurance space. These newcomers are vying for other parts of the financial services value chain and they are leveraging technology as a key differentiator to cosy up to the customer.

Personalisation of data

One way to improve customer loyalty and rise to the challenge of the digital revolution in the FS sector is to develop personalised products for customers based on analysis of their previous behaviours (see ‘Big Data in the financial services sector’ below). This requires the collection and analysis of the large volumes of existing or historic customer data held by FS companies to predict future customer likes and dislikes.

This data collection and analysis is often referred to by the shorthand ‘Big Data’. Along with mobile, the ‘Internet of Things’ and cloud computing, Big Data is the fourth pillar driving change in customer engagement. In many ways it is the product of the other three: the growth of mobile apps and connected devices has increased the amount of information available to companies on consumer behaviour, whilst cloud computing has made it cheaper to use powerful Big Data analytics tools to collect and analyse customer data and predict trends. Big Data has big potential: estimates suggest that, via efficiencies, innovation and business creation, it was worth £25 billion to UK businesses in 2011 and may reach an annual value of £41 billion by 2017![1]

What is Big Data? 

Part of the difficulty in understanding Big Data is that there is no single accepted definition of it. However, it is often described as the collection and analysis of large volumes of structured and unstructured data, often of unknown reliability, (potentially) in real time, to create value for companies.

Let’s break that description down into what have been coined the ‘five Vs’ of Big Data:

1.  Volume: in today’s connected world, huge amounts of data are being created every second, from tweets and video clips to photos and emails. The resulting data sets are too large to be reviewed by conventional software tools within acceptable time frames.

2.  Variety: the data can range from structured (eg numeric data in fixed fields such as spreadsheets) to unstructured (eg rich-media information in videos or images that cannot be neatly placed into spreadsheets). Big Data technology allows users to analyse not only the structured data we find in the FS sector (eg financial data), but also the increasingly prevalent unstructured data, in order to reach new conclusions and findings.

3.  Veracity: the reliability of the data may not always be known, especially if it is obtained from third-party, publicly available sources (eg ‘open data’).

4.  Velocity: the data is frequently updated and can be analysed as soon as it is collected or generated, without first being loaded into databases, ie analysed in real time (a minimal sketch of this follows the list).

5.  Value: by predicting new trends from the analysis of that data, banks and insurers can create value for customers by offering them new services (see ‘Big Data in the financial services sector’ below).
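To make ‘velocity’ concrete, here is a minimal Python sketch of that kind of real-time analysis: each event is folded into a running per-customer profile the moment it arrives, with no database round-trip in between. The Event fields and the running-mean ‘profile’ are illustrative assumptions of mine, not a feature of any particular analytics product.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One incoming data point, eg a single card transaction (hypothetical)."""
    customer_id: str
    amount: float

class RunningSpendProfile:
    """Keeps a per-customer running average of spend, updated as each
    event arrives, rather than batch-loaded into a database first."""

    def __init__(self):
        self._count = {}
        self._mean = {}

    def update(self, event):
        """Fold one event into the profile and return the new running mean."""
        n = self._count.get(event.customer_id, 0) + 1
        mean = self._mean.get(event.customer_id, 0.0)
        mean += (event.amount - mean) / n  # incremental mean: O(1) per event
        self._count[event.customer_id] = n
        self._mean[event.customer_id] = mean
        return mean

profile = RunningSpendProfile()
for e in (Event("c1", 20.0), Event("c1", 40.0), Event("c2", 5.0)):
    print(e.customer_id, profile.update(e))  # c1: 20.0 then 30.0; c2: 5.0
```

The point is the shape of the computation: constant work per event, so the analysis keeps pace with the stream however quickly it flows.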

Legal and commercial issues

All this fast-paced development does bring with it certain legal and commercial challenges.

To make the most of the Big Data revolution, FS companies will need to partner with Big Data software specialists whose business intelligence analysis (‘BIA’) software can deliver the level of analysis required to maintain competitive advantage. In the long term, they will also need to upgrade their legacy IT infrastructure to handle the new high-volume, high-velocity, high-variety data becoming available, and to integrate it with pre-existing company and customer data to create value.

Procuring new BIA software tools and/or undertaking large-scale technology refresh projects raises a host of issues, which I consider below.

Software procurement

Many of the issues to consider are similar to the contract issues associated with licensing software in other sectors.

·  Scope of licence: be clear on who will be granted the licence to use the software to collect and analyse the data.  For example, will it just be the customer contracting entity or will any group company want to take the benefit of the services?  If it is the latter, the licence terms will need to be drafted accordingly.  In addition, the licensee may wish outsourced service providers or contractors to be able to use the software to deliver services to it, in which case the scope of the licence will need to cover these third parties.

·  Purpose of licence: make sure the licence aligns with the business purpose! It is a common oversight for software licensees to be unclear on what they intend to use the software for. If the software will be used for internal analysis, a limited licence may be sufficient. However, if the licensee intends to use the software to analyse data that will then form the basis of personalised products sold to its customers, this should be clearly articulated in the licence terms.

·  Software implementation timetable: software implementations are prone to delays. It is important that the contract sets out the timetable for implementation of the software, including precise milestones with clear acceptance criteria so the parties can identify whether each milestone has been met. Payment terms should be linked to the sign-off of milestones, and liquidated damages should be considered in the event of delays to the timetable, save where the delays were directly caused by the customer’s acts or omissions.

·  Acceptance testing: the parties should be clear on what the process is for acceptance testing of the software to make sure both parties can clearly identify whether the software performs in accordance with the agreed specification and is free from defects.

Data licensing

Given the sheer volume and variety of data being created, it is not unusual for FS companies to want to combine data owned by third parties with their own data.  Again, the issues to consider should not be unfamiliar to many FS companies as they come across similar concerns in the context of licensing market data.

·  Purpose of licence: as with software procurement, the scope of the data licence needs to be aligned to the purposes for which the business wants to use the data. For example, if the data licensee wants to combine the licensed data with in-house data to create new data sets, it needs to be granted the relevant rights. In addition, if it wishes the data to be used not just internally but also by a third-party software licensor (eg the BIA software licensor), the data licence must grant the necessary rights to such third parties.

·  Derived data: the contract should also be very clear on who owns the IPR in any new data created from the data analysis (for example, when data owned by third parties is combined with customer-owned data to create something new and original which is not a copy, extract or modified version of the underlying data sets, but a new work based on analysis of them). If the licensee does not own the IPR in the derived data, careful attention should be paid to the scope of the licence granted: for example, does the contract permit the data licensee to use derived data contained in reports post-termination?

·  Warranties: the data licensee should be clear on the warranties in the contract around the accuracy and reliability of any third-party data it relies on to craft personalised customer products and services. Often, publicly available data (‘open data’) comes with limited warranties as to accuracy. In such circumstances, it will be up to the data licensee to undertake its own due diligence to satisfy itself as to the data’s accuracy before using it.

Sector specific issues

Upgrading legacy IT infrastructure to meet the Big Data challenge is not a simple task and may often involve the outsourcing of activities to third-party specialists. But this is an issue the industry appears to be aware of: in a self-assessment survey conducted by Opinium of 300 senior executives in the financial services sector, almost half revealed that their IT infrastructure could not move fast enough to enable the business to make better use of data.[2]

When undertaking IT outsourcing projects, FS companies will need to be aware of the regulatory issues. The rules relating to outsourcing are dotted throughout the FCA and PRA Handbooks. For example, regulated firms must notify their regulators prior to outsourcing any regulated activity and ensure the relevant service provider has obtained Part 4A permission.

Regulated firms looking to undertake critical outsourcing projects need to take particular note of the SYSC rules in the FCA Handbook. A critical outsource is the outsourcing of an activity so important to the regulated firm that the service provider’s failure to perform it would materially impair the firm’s ability to comply with its obligations under the FCA Handbook. The rules apply as guidance to regulated firms but as mandatory requirements to ‘common platform firms’ such as banks, building societies and investment firms. SYSC 8.1.8 is a useful starting point for analysing the types of requirements that all outsourcing contracts should include. (I recommend using it as a checklist of terms when considering an IT outsource, whether or not you are a common platform firm.) It covers principles such as audit, service levels and termination rights, and is anchored in the principle that a regulated firm must have mechanisms in place to ensure continuity of service and no loss of operational control when outsourcing its activities.[3]

Conclusion

Big Data will become more and more prevalent as the amount of data generated in the FS sector increases. It is said that we create 2.5 quintillion bytes of data every day, and that 90% of the world’s data was created in the last two years alone (IBM 2015). Social media applications alone are said to account for approximately 27% of the Big Data used by banking and financial markets firms.[4] In addition, following the financial crisis of 2007-08, there has been an exponential increase in the amount and granularity of data that banks are required to report and disclose to central banks and regulators. This greater record-keeping requirement is part of a global move by regulators to increase the transparency of financial markets and prevent a recurrence of the crisis, when complex over-the-counter derivatives were poorly understood and tracked.

FS companies need to leverage the potential benefits of the vast volume of data they collect to provide better services to their customers in an increasingly competitive digital world.  Failure to do so will lead to loss of market share as customers move to more digitally accessible newcomers.  Just watch out for the legal and commercial potholes along the Big Data highway! 

Big Data in the financial services sector

Big Data analysis is not new for banks. After all, a quicker trading platform, lower-latency transactions or better financial analysis means a sharper competitive edge. Banks have long leveraged technological developments to cut the time it takes to make a trade, notably through high-frequency trading. That real-time analysis of large volumes of market data to make trading decisions is closely aligned with Big Data.
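For flavour, here is a deliberately toy Python sketch of what real-time analysis of market data can look like: a short rolling average computed as each price tick arrives, driving a trivial trading signal. The window size and the ‘buy on a dip below the average’ rule are invented assumptions for illustration and bear no resemblance to any bank’s actual strategy.

```python
from collections import deque

def rolling_signal(ticks, window=3):
    """Yield (price, rolling_average, signal) for each tick as it arrives.
    A toy stand-in for real-time market-data analysis; the trading rule
    here is purely illustrative."""
    buf = deque(maxlen=window)  # only the last `window` prices are kept
    for price in ticks:
        buf.append(price)
        avg = sum(buf) / len(buf)
        yield price, round(avg, 2), "buy" if price < avg else "hold"

for price, avg, signal in rolling_signal([101.0, 100.5, 100.8, 99.9, 100.2]):
    print(price, avg, signal)
```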

However, the trend towards personalisation of data to improve customer engagement has only just begun. Across a variety of FS sectors, incumbent FS companies and start-ups alike are starting to leverage the BIA software tools referred to above to get closer to the customer by offering valuable, personalised services.

·  Personal finance: Big Data can be used to help customers save money. The start-up Lenddo reviews customers’ bank account information to analyse spending and saving habits and recommends how much users should save each month. That amount is then automatically transferred into a non-interest-bearing Lenddo savings account.

·  Usage-based car insurance: Big Data can be used to offer better insurance premiums to safer drivers. A fitted on-board telematics system monitors how drivers handle the car to determine how safely they are driving, and relays that information to insurance carriers, who can then adjust premiums accordingly (a minimal sketch of such a pricing rule follows this list).

·  Health and life insurance: Big Data can be used to offer better insurance premiums to healthy customers. Vitality, a joint venture between PruHealth UK and Discovery, offers health and life insurance products. Customers sign up to the ‘Vitality’ health scheme and are encouraged to participate in it to improve their ‘score’ and so reduce their health and life insurance premiums. For example, the more often they visit affiliated gyms or buy healthy food from partner supermarkets, the more points they accumulate, which can then be used to reduce premiums.
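As trailed in the usage-based car insurance example above, here is a minimal Python sketch of how a pricing rule might turn telematics observations into a premium adjustment. The two features (harsh-braking rate and night-driving share), the caps and the weights are all hypothetical, chosen for illustration rather than taken from any insurer’s model.

```python
def premium_multiplier(harsh_brakes, night_miles, total_miles):
    """Map telematics observations to a premium adjustment between
    0.8x (safest) and 1.2x (riskiest). Features and weights are
    invented for illustration only."""
    if total_miles <= 0:
        return 1.0  # no driving data yet: charge the baseline premium
    brake_rate = harsh_brakes / total_miles   # harsh-braking events per mile
    night_share = night_miles / total_miles   # proportion of night driving
    # Blend the two signals into a 0..1 risk score, capping each component.
    risk = 0.6 * min(brake_rate * 100, 1.0) + 0.4 * min(night_share * 2, 1.0)
    return 0.8 + 0.4 * risk

# A careful driver earns a discount; a riskier one pays a loading.
print(premium_multiplier(harsh_brakes=1, night_miles=50, total_miles=1000))    # 0.84
print(premium_multiplier(harsh_brakes=30, night_miles=400, total_miles=1000))  # ~1.17
```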

 

Jonny Emmanuel is a Senior Associate at Bird & Bird LLP with a particular interest in the FinTech sector: Jonathan.Emmanuel@twobirds.com


[1] Centre for Economics and Business Research for SAS, ‘Data Equity’ (2012), cited in Parliamentary Office of Science & Technology, POSTnote 469, ‘Big Data in Business’ (July 2014): http://researchbriefings.files.parliament.uk/documents/POST-PN-469/POST-PN-469.pdf

[2] http://www.finextra.com/blogs/fullblog.aspx?blogid=10100

 

[3] There are separate rules that insurance companies need to comply with that are beyond the scope of this article.

[4] ‘Big Data @ Work’ survey, conducted by the IBM Institute for Business Value and the Saïd Business School (IBM, 2012).