Driving Performance through Service Levels

February 9, 2007

Service levels are one of the key contractual protections for purchasers of outsourced services, whether those services are information technology services or business process or “back office” services.  The commercial terms governing how the purchaser will be compensated by its supplier in the event of poor performance are commonly among the major issues left on the table towards the end of the contract negotiation.  Yet despite the importance of the service levels concept in outsourcing arrangements, the protection this mechanism can afford is often weakened because insufficient attention is devoted to the content of the service levels themselves.

Purchasers of services (in this article, “users”) should spend time conducting diligence to satisfy themselves that their proposed supplier will be able to provide the required level of service.  While this is important, it is only the first step.  Surveys have found that many users consider that their suppliers are meeting their service level commitments, yet are unhappy with the overall performance of their contract.  Why is this?  It may be because some purchasers rely on service levels to address problems that they are not designed to resolve.  It is also likely that the service levels were not set at the right level in the first place.

Users often depend on their in-house service provision function to help set the service levels, but this approach can be frustrated if the in-house function does not record performance data, or if the data it does record is not reliable or comprehensive.  The temptation for the user – and the danger – is to agree an approach which requires less time and attention at the negotiation stage.  Some users include a broad contractual statement that service levels will not fall below those previously achieved by the in-house function.  Some attach service levels to aspects of the service that have been measured and for which data is available (rather than those which should be measured, risking a position where the user is left with service levels on unimportant aspects of the service).  Some users will agree to negotiate the service levels with their supplier post-contract.  Each of these approaches presents significant risks for the user.  Some service level regimes also fail through a lack of flexibility: over the course of most outsourcing contracts the user’s needs, and therefore the services it requires, will change, and it is crucial that the service level regime is able to keep pace with that change.

This article explores these challenges and suggests some approaches that users might consider adopting to tackle them.  To put this in context, it also explores the key features of a typical service level regime and suggests how such regimes may be set up to respond to service failure in a way which incentivises good performance without falling foul of the legal restrictions that apply in this area.  The article is aimed at users of outsourced services, who will invariably encounter service levels and the issues they present, but most of the principles it covers apply equally to all types of contractual service levels.


Key features of a typical service level regime


Contractual service level regimes are common in all types of sourcing contracts, and particularly so in outsourcing contracts.  But why is that?  There are two main reasons.  First, if properly drafted, they provide the parties with a relatively straightforward mechanism for addressing the supplier’s failure to perform to the contractually agreed level.  They can be coupled with pre-determined financial remedies to give the user a quick and inexpensive way of being compensated for the losses caused by the supplier’s breach of contract.  Enforcing contractual rights through the courts is time-consuming and expensive, and it may be impractical for a user to sue a supplier from whom it is still receiving services.  By contrast, good service level regimes are neither time-consuming nor expensive to police and enforce.  Second, and linked to the first reason, service level regimes act as an incentive for the supplier to perform in accordance with the contract.


To take a typical service level from an information technology outsourcing (ITO), the user may wish to measure the period for which the supplier keeps the e-mail application available.  This would be the “service measure”.  In an outsourcing of a business process or “back office” function (BPO), such as payroll processing, the service measure may be the percentage of occasions on which employee salary payments are successfully made by the first of the month.  Each service measure is linked to a level of performance that the supplier must achieve in order to avoid being in breach.  This is the “service level”.  To use the ITO example above, the service level may be that the supplier achieves e-mail application “uptime” of, say, 98% of the period for which the user would like it to be available.
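

To make the arithmetic behind such a target concrete, the short sketch below (written in Python purely for illustration) shows how a 98% availability level converts into permitted downtime over an assumed 24x7 service window and 30-day month; the figures and names are invented for the example and are not drawn from any particular contract.

    # Illustrative only: how an availability service level translates into permitted
    # downtime.  The 98% target, the 24x7 service window and the 30-day month are
    # assumptions made for this example, not terms from any real contract.

    AGREED_SERVICE_HOURS = 24 * 30   # a 24x7 service measured over a 30-day month
    SERVICE_LEVEL = 0.98             # e-mail application availability target

    allowed_downtime_hours = AGREED_SERVICE_HOURS * (1 - SERVICE_LEVEL)

    def service_level_met(measured_uptime_hours: float) -> bool:
        """True if measured availability meets or exceeds the agreed service level."""
        return measured_uptime_hours / AGREED_SERVICE_HOURS >= SERVICE_LEVEL

    print(f"{SERVICE_LEVEL:.0%} availability allows about {allowed_downtime_hours:.1f} "
          f"hours of downtime in the month")             # roughly 14.4 hours
    print(service_level_met(measured_uptime_hours=710))  # 710/720 is about 98.6% -> True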


But what happens if the service levels are not met?  If they are to be meaningful, there must be an adverse consequence of failing to achieve them.  While termination of the contract may be necessary where performance is appalling, this is a nuclear remedy and is usually not appropriate for the less significant dips in performance which are almost inevitable in a long-term relationship.  The important thing for the user is that their supplier has an incentive to remedy performance issues when they arise.  Claims for breach of contract and damages will usually involve detailed argument between the parties, which is time-consuming and expensive – and, in the meantime, the supplier may not be incentivised to achieve the agreed level of performance.


The typical approach in this situation is for the supplier to agree to pay the user pre-determined sums, often calculated by reference to a percentage of the monthly or annual service charge, as compensation for the user’s losses arising from the supplier’s breach of contract.  These payments are often termed “service credits”.  Users should beware of compensation structures which operate by adjusting the charges (eg the supplier is not paid a proportion of the full monthly charge unless the service levels are achieved), since there is a risk that these may be construed as a variable price for variable performance, under which the supplier is not in breach of contract for failing to achieve the service levels – it is simply paid less.  The common approach, and for the user the better approach, is to agree a proportion of the monthly or annual service charges that will be “at risk” of being paid to the user as service credits if the service levels are not achieved.

The size of this at-risk proportion is usually subject to fierce negotiation between the parties.  If it is too small, the credits are unlikely to cover the user’s losses arising from performance falling short of the service levels.  Additionally, if the service levels and service credits are set too leniently, the supplier may still be able to make a profit on the contract while delivering a service which fails to meet the service levels, meaning it is not properly incentivised to achieve them.  If the proportion is too large, the supplier risks losing money on the contract if it is unable to meet the service levels.  That might seem appropriate where there has been a breach of contract, but it is usually a question of degree.  For particularly poor performance, it may well be reasonable for the supplier to lose money through service credits and perhaps also be exposed to other potential remedies, such as termination or a general damages claim.  However, for performance which falls only just below the service level, this degree of exposure may be unreasonable.  For these reasons, and unless one party has an unusually strong bargaining position, the at-risk amount often ends up being broadly equivalent to the supplier’s margin on the contract.
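

By way of illustration only, the sketch below shows one common shape for this calculation: an at-risk proportion of the monthly charge is divided between the service levels according to agreed weightings, and credits accrue for the service levels missed in the month.  The monthly charge, the 10% at-risk figure, the weightings and the service measure names are all assumptions made for the example rather than recommended terms.

    # Illustrative only: one possible way of expressing an "at risk" service credit
    # calculation.  The charge, the at-risk percentage, the weightings and the
    # service measure names are assumptions made for this example.

    MONTHLY_CHARGE = 100_000     # monthly charge, in the contract currency
    AT_RISK_PERCENTAGE = 0.10    # proportion of the monthly charge at risk as credits

    # Each service level takes an agreed share of the at-risk pool; shares sum to 100%.
    CREDIT_WEIGHTINGS = {
        "email_availability": 0.50,
        "helpdesk_first_call_resolution": 0.30,
        "payroll_run_accuracy": 0.20,
    }

    def service_credits(failed_service_levels: list[str]) -> float:
        """Credits payable for the month, given the service levels that were missed."""
        at_risk_pool = MONTHLY_CHARGE * AT_RISK_PERCENTAGE
        return sum(at_risk_pool * CREDIT_WEIGHTINGS[sl] for sl in failed_service_levels)

    # If the e-mail availability service level is missed this month:
    print(service_credits(["email_availability"]))  # 100,000 x 10% x 50% = 5,000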


One legal consideration to keep in mind is that service credits are a form of liquidated damages.  Liquidated damages are pre-agreed damages for a breach of contract and, provided they represent a genuine pre-estimate of the non-breaching party’s loss, are enforceable in the English courts.  However, if the stipulated sum is not a genuine pre-estimate of loss it may be a penalty, and the English courts will not enforce a penalty as a matter of public policy (see Dunlop Pneumatic Tyre Co Ltd v New Garage and Motor Co Ltd [1915] AC 79).  While the use of expressions such as “liquidated damages” and “penalty” is not conclusive (the courts will look behind such terms to determine the true nature of the mechanism), it is prudent not to describe service credits as “penalties”, and many users also include an express statement in the contract that they have sought genuinely to pre-estimate their loss.  It may aid this argument if, prior to entering into the contract, the user has considered and documented its likely losses from poor service performance, so that this analysis can be produced if the service credits are ever challenged in court as penalties.


Suggested approaches to common pitfalls


The time available for negotiating an outsourcing agreement is invariably limited, with commercial pressure on both parties to sign the contract quickly.  The parties tend to concentrate on issues such as cost savings, the transition of the services at the start of the contract, and the complex exit provisions governing the parties’ responsibilities when the contract comes to an end.  While these are clearly crucial issues, the effect of this approach is that the service levels which drive the quality of the “steady-state” services are often given less attention during the negotiations.  This section considers the common challenges faced by users when trying to negotiate a first-class service level regime, and presents some suggested approaches to addressing them:


• Prioritise service levels in the negotiation process.  Users should ensure that sufficient resources are devoted to defining the service levels, and that this work is prioritised and commenced early in the negotiation process.  Defining the service levels will usually require input from the parts of the business that will be receiving the services, so personnel from outside the main negotiation team are often involved, typically juggling this important task with their day job.  Perhaps because of this, the development of the service levels sometimes loses pace after the initial workshops between the parties, and this is dangerous for the user.  If the deal is nearly done but the service levels are behind schedule, there is often significant pressure on the user to agree an approach which offers it less protection – such as agreeing the service levels post-signature.  By this stage, the user’s bargaining power is significantly eroded, and this usually translates into less favourable service level commitments from the supplier.  It is therefore crucial to start the work to define the service levels early in the negotiation process, and then to ensure it maintains momentum.  It can be a good idea to appoint a service levels “champion” with this brief.  The “champion” should be someone who understands the services that are to be delivered and has an interest in ensuring they will be of high quality.  It should not be someone who will be transferring to the supplier, given the risk that they may be tempted to set a less demanding service level regime to endear themselves to their new employer.  Ideally the champion will not be part of the user’s main negotiation team, allowing them to continue developing the service levels in parallel with the main contract negotiations.


• Collect data early.  One of the major challenges for users that are outsourcing a function currently performed in-house is that they often do not know what level of service they currently receive.  The problem is a lack of data: in-house provided services are often unmeasured, or at least are not measured extensively or consistently.  Users who are considering outsourcing services for the first time should try to collect as much data as possible before heading to the negotiation table.  Ideally a user will have at least six months of data for each important aspect of the service it wishes to outsource.  If no performance data has been collected, and the supplier is simply taking over the current staff and assets, the supplier will (understandably) say that it cannot commit to a service level that the user cannot prove is already being achieved.  If the user does not start collecting data until the parties start to negotiate the service levels, there may only be one or two months’ data on which to base the negotiation, and the supplier will again object that such a short measurement period may not fairly reflect the usual performance of the function.  All this leads to a significant risk for the user that the service levels agreed upon by the parties will not drive the supplier to deliver the level of performance the user is expecting, even if they are met. 


• Concentrate on what’s important.  A common mistake amongst users is to assume that service levels can be used to ensure the quality of more or less anything in an outsourcing arrangement.  This often manifests itself in a long list of service levels, each carrying a very small service credit.  Having too many service levels is unwieldy for both parties, but especially for the user.  It also rarely provides real comfort that the service levels will be met, because the service credit budget will have been spread so thinly that the financial consequence of an individual failure is disproportionately small.  Long lists of service levels are often the result of hurriedly compiled “wish lists”, and could be reduced significantly with some thought.  For example, in an ITO where the supplier is responsible for e-mail, the user will rarely need service levels for both the availability of its e-mail application and the server on which it is hosted – this is duplication.  Users should concentrate service levels and service credits on the aspects of the service that are most critical to them.  A shorter list of service levels means a bigger share of the service credit allocation per service level.  If set up in this way, service level failures will be more expensive for the supplier, and therefore more likely to drive good performance.


• Look beyond service levels where appropriate.  Another common user error is attempting to use service levels to regulate aspects of the service for which they are not suited.  Service levels operate by imposing objective standards and presuppose that performance against these standards can be measured.  They are therefore not suited to aspects of the service that are subjective or incapable of reliable measurement.  Some users try to apply service levels to intangible matters such as the supplier’s relationships with third-party suppliers.  It is important to remember that service levels are a fairly inflexible tool.  This weakness can be especially problematic in BPO arrangements, where the nature of the services is often less automated and less amenable to reliable measurement than typical ITO services.  Where this is the case, it is important to look beyond service levels and consider the use of other contractual mechanisms that are able to take account of more nebulous subject matter, such as end-user satisfaction surveys and bonuses linked to senior management’s perception of whether the contract is being performed to a high standard. 


• Check you can measure performance.   As discussed above, the use of service levels presupposes that it is possible to measure performance against them easily and reliably.  However, the parties often agree a service level without having decided, or even considered, how it will be measured.  This is a point of detail that can seem a little tedious, but it is crucial.  Is the service measure even capable of measurement?  If not, service levels are not the right tool and simply won’t work.  If the service measure can be measured, will it be measured manually, or – preferably for the user – using a reliable automated tool (a purely illustrative sketch of such a tool appears after this list)?  If no tool is to be used, is the user satisfied that the manual process is reliable?  If a tool is to be used, it should ideally be written into the contract, since a different measuring tool may produce different results (remember to account for any variation if the supplier wants to use a different tool from the one the user used to generate its historical performance data).


• Incentivise the right behaviour.   Shortly after the 118xxx directory enquiries services were introduced, Ofcom’s predecessor Oftel conducted a survey to investigate the levels of quality being achieved.  The survey found that up to 40% of callers were being given the wrong telephone numbers.  One of the contributing factors appeared to be that the 118xxx service providers were subject to stringent service levels on speed of response, but not on accuracy.  It therefore made more commercial sense to give out the wrong number than to risk incurring financial penalties by spending a little more time investigating what number the caller actually required.  This is an extreme example, but it highlights the need for users to take a step back and consider whether the service levels are balanced and will encourage the right behaviour.  To use an ITO example again, is it better to focus the helpdesk’s attention on answering all calls within 30 seconds, or on ensuring that a high proportion of issues are successfully resolved during the first call?


• Build in flexibility.  The services delivered under an outsourcing contract will change over time, and it is important that the service levels are able to change with them.  Even where the services have not changed, users should be able to adjust the distribution of service credits across the service levels in order to react to changed priorities, or to increase the supplier’s focus on certain services.  It is therefore crucial that the contract sets out a clear mechanism for setting new service levels, or adjusting the service credit allocations, in the event that the parties are unable to agree them through negotiation.  Suppliers will often try to limit the user’s freedom to adjust the service levels and service credits, and as ever there is a balance to be struck.  It will usually not be reasonable for a user to be able to adjust all the service levels whenever it wishes, or to load all the service credit allocation onto a single service measure, but at the same time the service levels and credits are often the primary service assurance mechanism for the user, and it is entirely legitimate that this protection be geared to the user’s changing requirements over time.


• Always remember that service levels are linked to price.  When defining the service levels they expect to receive from their supplier, users often forget that quality of service is inextricably linked to price.  The difference between a service level of 99% and 100% is small in percentage terms, but may be significant in practical and financial terms if it means that the supplier has literally no scope for error (the short sketch after this list puts some rough numbers on this).  Users commonly make the mistake of asking for a better quality of service than they actually need – the business may not require a service level of 99% if the current in-house service level is 90%.  The key, once again, is gathering data.  If the user knows what service level it is achieving in-house, and how much resource it is investing in order to achieve that level, it is much better equipped to investigate within its organisation whether an improved level of service is required in certain areas, and therefore whether it is prepared to spend more with the supplier in order to achieve it.
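

To put some rough numbers on the point about price and quality made in the final bullet above, the short sketch below assumes a 24x7 service measured over a 30-day month (figures chosen purely for illustration) and shows how quickly the permitted downtime shrinks as the availability target rises.

    # Illustrative only: the practical difference between availability targets,
    # assuming a 24x7 service measured over a 30-day month.

    HOURS_IN_MONTH = 24 * 30  # 720 hours

    for target in (0.90, 0.98, 0.99, 0.999, 1.00):
        allowed_downtime = HOURS_IN_MONTH * (1 - target)
        print(f"{target:.1%} availability allows {allowed_downtime:5.1f} hours of downtime per month")

    # 90.0%  -> 72.0 hours
    # 99.0%  ->  7.2 hours
    # 99.9%  ->  0.7 hours (about 43 minutes)
    # 100.0% ->  0.0 hours, i.e. no scope for error at all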
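

As to what a “reliable automated tool” might look like in practice (see the “Check you can measure performance” bullet above), the following is a purely hypothetical sketch of a simple availability probe; it is not any particular vendor’s product, and the host name, port and polling interval are invented for illustration.  In a real contract the parties would name the actual agreed tool and measurement methodology.

    # Illustrative only: a hypothetical automated availability probe of the kind a
    # measurement tool might run.  "mail.example.com", the port and the polling
    # interval are invented for the example.
    import socket
    import time

    def probe(host: str, port: int, timeout: float = 5.0) -> bool:
        """Record the service as 'up' if a TCP connection can be opened."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def measure_availability(host: str, port: int, samples: int, interval_seconds: int) -> float:
        """Poll the service at fixed intervals and return the proportion of successful checks."""
        successes = 0
        for _ in range(samples):
            if probe(host, port):
                successes += 1
            time.sleep(interval_seconds)
        return successes / samples

    # eg measure_availability("mail.example.com", 443, samples=60, interval_seconds=60)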


Final thoughts


Getting the right service levels in an outsourcing contract involves a lot of work, and the key message of this article is that the work should start as early as possible.  Leaving it until late in the negotiation process will almost invariably leave the user with a less rigorous service level regime than it could otherwise have achieved.  It is also crucial to approach the issue of service levels with an understanding of which aspects of the service are most important to the business.  Only with this information can the user begin to construct a service level regime that targets good performance where it is needed most.  Good data leads to good service levels: knowing what level of service has been achieved in-house over an extended period is invaluable when it comes to agreeing the starting service levels with the supplier.  Bear in mind, too, that business requirements will change over the term of the contract.  Inflexible service level arrangements will quickly become out of step with the user’s requirements and will cause tension in the relationship with the supplier.  It is therefore important to build a service level mechanism which assumes and caters for change.  Finally, it is worth reiterating that the first step – conducting proper diligence on the supplier – is essential and should not be overlooked.  There is little point in developing a first-class service level regime if the supplier simply does not have the quality or resources to achieve it.


Sam Parr is an Associate in the IT/Com Group at Baker & McKenzie LLP: Sam.Parr@BAKERNET.com