The Reality of Service Credits

May 23, 2011

The inclusion of prescribed Service Levels and the payment of Service Credits, triggered by a failure by the Supplier to meet such Service Levels, have been an accepted part of outsourcing contracts for many years now.  Suppliers consent to the principle as a matter of course.  However, whilst the underlying principle of this mechanism requires little discussion, the construction and content of such provisions are often cause for debate.

This article endeavours to challenge some of the fundamental concepts relating to Service Level and Service Credit mechanisms in IT outsourcing contracts.  It also tackles some of the issues which consistently trigger debate between Suppliers and Customers and gives an in-house lawyer’s view on why certain ‘positions’ are taken by Suppliers when negotiating such provisions.

What is the purpose of Service Levels?

When outsourcing a set of IT services, a Customer is concerned that a Supplier can deliver services to the levels required by the Customer and its business units.  Service Levels are an accepted mechanism to describe the levels of service performance the Customer expects the Supplier to deliver.  They cover everything from the uptime of services to the time taken to answer the telephone, depending on the requirements of the Customer’s business units.

Types and classifications

There are often two main types of Service Levels, Critical (or Primary) Service Levels and Key Performance Indicators (or Key Measurements).  The key difference, from a Supplier’s perspective at least, is the consequences of failure to meet them.  Failure to meet a Critical (or Primary) Service Level will often result in the payment by the Supplier of Service Credits, whereas failure to meet a Key Performance Indicator often will not.  The idea is that a Supplier’s focus will be brought to bear primarily on the Critical (or Primary) Service Levels and the Service Credits then ‘incentivise’ or ‘penalise’ the Supplier into complying with its contractual obligations.

There is an alternative view, namely that the imposition of Service Credits is simply a price adjustment to reflect the reduced level of service.  However, Service Credits are rarely calculated by reference to the cost or value of the reduced scope and/or the market price for the reduced level of service (which would require an extensive benchmarking exercise).  As such, this view is often put forward purely to avoid the legal argument that the Service Credit is, in fact, simply a contractual penalty, which could be challenged if it does not amount to a genuine pre-estimate of the loss suffered.

Where Critical Service Levels are missed, the imposition of Service Credits sends a clear message to the Supplier that the problem must be resolved and, further, that such resolution should take precedence over elements of the services that are not subject to the Service Credit mechanism.  To this end, a Customer needs to ensure that its stakeholders have a thorough understanding of, and have accepted, the Service Levels, and that these Service Levels reflect the actual business need and not simply the mechanism used by their advisers in a previous deal.  All Suppliers have 'war stories' where they were meeting or exceeding all the Service Levels and yet customer satisfaction with the services was still very low.

Service Levels can also be classified according to what it is they are measuring, namely the availability of an element of the service and the response and/or rectification of an incident.

Availability Service Levels relate primarily to an IT system being available for a percentage of the total ‘uptime’ required in the contract.  Such Service Levels are reliant on the initial design and implementation of the relevant hardware.  Therefore, it is important that such Service Level requirements are known (or assumed) by the Supplier prior to the creation of its technical solution. 

With respect to incident-related Service Levels and those relating to service elements that involve personnel interaction (eg answering phones), it is these Service Levels that drive the number of resources required to deliver the service.

The availability of critical systems is usually the Service Level that attracts the highest amount of Service Credits.  Therefore, it can be argued that such Service Levels are solely to ‘incentivise’ the Supplier to ensure the initial design and hardware meets the contractual requirements.  As such, this form of Service Level should be pitched at a level that achieves this goal without penalising the Supplier unduly for a Service Level failure, which, it could be argued, has not caused the Customer any direct loss. 

Cost of risk / failure

When a Supplier refers to costing a particular Service Credit mechanism, they are not costing failure but are costing risk!  In almost all calculations of this cost the Supplier’s view of the risk of failure remains constant, unless the solution materially changes.  Therefore the only variable is the associated potential liability in the event of the failure occurring.  The larger the ‘stick’ the Customer requires, the higher the price demanded by the Supplier.  In such circumstances a Customer could pay more for the right to the credit than it would receive from the Supplier in the event of poor performance and, as such, it does not make economic sense to demand excessive Service Credits.

For example, in a number of bid scenarios the Supplier does not receive the proposed Service Credit mechanism until after it has made its initial ‘pitch’.  Whilst a responsible Supplier will assume a certain level of Service Credit exposure in its initial pricing, the IT solution is the primary driver for the price proposed to the potential Customer.  Subsequently, if the Customer were to suggest a very onerous Service Credit mechanism, and if this mechanism were accepted by the Supplier, one may expect to see a variance in the IT solution (to take account of the increased risk), including to the service continuity and/or disaster recovery solution.  However, this is rarely the case and the increased risk is generally either passed back to the Customer via an increase in price or absorbed by the Supplier via a reduction in the expected deal margin.  The mechanism has not, however, reduced the risk of the relevant Service Levels being breached, unless the IT solution itself has been changed. 

The Supplier will have costed into its solution a finite resource to deliver the service.  Clearly, if it is incurring Service Credits, it may seek to redeploy resources to remedy the affected Service Level.  This could affect another aspect of the service, or create a financial loss for the Supplier in recruiting additional resource.  Neither is a long term solution because, at some point, the Supplier's business will be required to make a 'return' on its investment.  Therefore, a Service Credit mechanism, particularly a draconian one, will simply result in a Supplier 'costing' it in as a risk premium!

In addition, if the Supplier is paying a large Service Credit it may be less able to provide additional resources to address the problems.  For example, if a Supplier is paying out a 10% Service Credit, that means that it must find 10% extra cost (less any contingency sums included for such failure) simply to maintain the status quo, when, in fact, what a Customer would hope for/expect is that additional sums would be invested in additional resources and/or technology to rectify the issues associated with Service Level performance. 

Whilst it is accepted that there should be a Service Credit, the value should be set at a level that penalises the Supplier without creating such a financial burden that investment is restricted or prevented.  It should therefore focus on the expected profit relating to the failing element of the service but should not be at such a level that the Customer expects the Supplier to make a loss; otherwise a minor service issue may quickly deteriorate into a major service failure. 

Rather than seeking a large financial credit for a failure to meet a Service Level, it is more prudent to look for rectification plans and problem reports addressing the underlying causes of the failure.  Otherwise, a Customer could be left simply receiving monies from the risk premium they paid at the outset of the deal and a declining level of service!

Earnback

Some Service Credit mechanisms also include a Service Credit recovery or earnback process whereby the Supplier can recover Service Credits for excellent performance in the future.  Such earnback provisions work more as a ‘carrot’ to recover occasional poor performance rather than the standard ‘stick’ of Service Credits.  They tend to look at performance over an entire contract year or look to the next few months after a Service Level failure to encourage a longer term view of service performance rather than excessively penalising the Supplier for occasional ‘blips’.  In addition, the most sophisticated mechanisms have a cut-off point whereby performance in a measurement period is so poor that the associated Service Credit cannot be earned back. 

Whilst a Supplier will always welcome an earnback provision (and such provisions do incentivise the Supplier to recover poor performance) the right to earnback will rarely have a material impact on the Supplier’s perception of the risk profile associated with the Service Credit mechanism.  A draconian Service Credit mechanism cannot be made more palatable to a Supplier simply by including a Service Credit recovery process. 

Sole and Exclusive Remedy

It is an expectation of most Suppliers that the payment of Service Credits is the sole and exclusive financial remedy, in its entirety or up to a defined threshold (i.e. until a right of termination with respect to the Service Credits is exercised by the Customer), for the failure to meet the contracted Service Levels.  The primary reason for this is to 'bound' the potential liability associated with the Service Credit mechanism so that it can be competitively priced.  Without such a cap, a Supplier will find it very difficult to understand (and therefore cost) the potential liability exposure arising as a result of it providing 99.9% service availability when it contracted to deliver 99.95%.

The arbitrary nature of a potential loss arising 'above' or 'below' the Service Level line also makes it very problematic for a Supplier to aggressively cost and manage this liability risk.  So, for example, the 99.95% availability Service Level referred to above clearly indicates that the Customer has accepted that perfection (100%) is not expected.  If Service Credits were not the sole remedy, and if the only breach of the contract were the associated Service Level, then any loss incurred by the Customer in the downtime between 100% and 99.95% would be unrecoverable, but any loss incurred between 99.95% and 99.9% would be recoverable.  This makes for a very complicated process and many arguments during contract negotiations as to how the downtime should be measured in the month when the Service Level failure is triggered.  Does one look at the aggregate loss across the month?  Or is it determined purely on a time basis, so that, once 21½ minutes (for a 24×7 availability Service Level in a 30 day measurement period) of downtime has arisen, the Customer can recover for each extra minute thereafter?  How does one then apply the principle of relief events to such a process?
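The downtime figures above follow directly from the availability percentage and the length of the measurement period.  A minimal sketch of the arithmetic (the function name is illustrative only, not a term from any contract):

```python
# Illustrative arithmetic only: the downtime a Customer has implicitly
# accepted by agreeing an availability Service Level below 100%, for a
# 24x7 service measured over a 30 day period.

MINUTES_IN_PERIOD = 30 * 24 * 60  # 43,200 minutes in a 30 day month

def permitted_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime permitted before the Service Level is breached."""
    return MINUTES_IN_PERIOD * (1 - availability_pct / 100)

print(permitted_downtime_minutes(99.95))  # ~21.6 minutes -- the '21 1/2 minutes' in the text
print(permitted_downtime_minutes(99.9))   # ~43.2 minutes
```

This also illustrates the negotiation point in the text: the gap between the contracted 99.95% and an actual 99.9% is itself only around 21½ further minutes per month, which is the window within which any uncapped loss claim would arise.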

If such liability cannot be 'bound', the risk associated with the mechanism is very difficult to quantify and, as such, will be subject to a significantly increased risk premium; alternatively, the Supplier may simply be unable to cost it appropriately.  This does not preclude some Suppliers from entering into such contracts, but it does mean that such Suppliers are moving from the 'costing of risk' process into the area of gambling on unknown outcomes.  This does not augur well for successful delivery of the services.

The Service Credit mechanism generally exists to deal with the 'low level noise' associated with minor failures (in time and number, rather than potential impact) to meet Service Levels, and as such it seems appropriate that the payment of the Service Credit be the sole and exclusive remedy, up to a defined threshold, in such circumstances.

Where a more significant breach arises which goes to the usability of the service, for example, the other provisions of the Agreement should provide for an appropriate process and remedy.

Customer Satisfaction as a Service Level

Customer satisfaction surveys are very important and most respectable suppliers will carry out customer satisfaction surveys at the executive and user level independently of what the contract says.  In such circumstances they are used to improve the services and/or change the contract itself in order to better meet the Customer’s requirements. But many Suppliers are rightly reluctant to agree to a situation where a process for measuring customer satisfaction becomes a Service Level.  This is because customer satisfaction is subjective. 

Service Levels tend to be objective measures of performance and thus it is very easy to prove whether or not the Supplier is delivering the contracted-for service.  However, customer satisfaction is by its very nature a subjective measure of how someone 'feels' about the service.  This satisfaction is not bound by the requirements of the contract and an end-user may be dissatisfied, for example, that it takes 24 hours for them to receive a new laptop even though that is the contracted-for service.

In addition, it is sometimes difficult to determine whether the opinion relates to the services themselves or the individual's overarching opinion of IT delivery within the customer organisation.  For example, if my laptop breaks and I ring up the service desk to complain, my perception of the service desk service is determined by whether or not they fix my laptop, irrespective of whether it was the service desk's responsibility or whether they were capable of fixing the laptop.

One is also more likely to respond to user surveys when one is dissatisfied with the service.  Results are generally skewed towards the negative rather than the positive and thus paint an unfair picture of the entire user base's perception of the service.  Therefore, a minimum number of respondents is required for the survey results to be perceived as a valid representation of the user population's views.  But to get the required level of results, one needs to force the user population to respond to the surveys, which in turn could lead to a more negative response being received.

Where a Customer requires the customer satisfaction survey process to form part of a binary Service Credit mechanism, the capacity of the survey process to provide a true insight into the user population's (or executives') view of the services is lost.  A Supplier will often place a number of restrictions on the process in order to ensure it is a reflection of the contracted service and not what the user would like to have.  In such circumstances, the survey produces a very narrow view of the services and can, in many circumstances, end up solely detailing whether or not the users and/or executives believe that the Supplier is delivering the services in accordance with its contractual obligations, something that the other Service Levels are already measuring.

Therefore, whilst the performance of customer satisfaction surveys is a legitimate requirement of many Customers, its incorporation into a Service Credit mechanism means that some of the benefit (namely an overview of the requirements of the user base and an overarching view of the services themselves) is lost under the mistaken understanding that the Supplier will only address a problem if it is financially penalised for not doing so.  The reality is that a Supplier wants a happy Customer, because a happy Customer with a strong relationship with its Supplier will spend more of its IT budget with that Supplier.  No Service Credit mechanism can provide a better incentive than this.

De Minimis

This is a simple mathematical issue.

If an incident based Service Level is set at 95%, it requires a minimum of 20 Incidents in order to be mathematically valid.  Otherwise, any ‘failure’ will give rise to a breach of the Service Level.  For example, if the service only suffered 15 Incidents, and the Supplier failed only 1, the calculation results in a performance percentage of 93.3%, thereby failing the 95% Service Level.  This, in turn, means that a Service Level, without the de minimis principle, is, de facto, a 100% Service Level.  Not only does this give rise to an issue for the Supplier, it is also not what the Customer asked for. 

Therefore, a minimum of one failure should be permissible where a 100% Service Level is not the requirement.  Sometimes this ‘one failure’ is unpalatable for a Customer where incident volumes are low.  In such circumstances, where the number of Incidents consistently falls below the required number of Incidents required to make the calculation mathematically valid, it may be prudent to monitor and report on performance over a longer period (eg three months).  This will correct the mathematical anomaly without diluting the principle.
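The de minimis arithmetic described above can be sketched as follows.  This is a rough illustration only (the function names are mine, not terms from any contract): the minimum monthly incident volume at which one failure is permissible follows from requiring (n − 1)/n to remain at or above the Service Level.

```python
import math

def achieved_pct(total_incidents: int, failures: int) -> float:
    """Performance percentage achieved in a measurement period."""
    return 100 * (total_incidents - failures) / total_incidents

def min_incidents_for_one_failure(service_level_pct: float) -> int:
    """Smallest incident volume at which a single failure does not
    automatically breach the Service Level: need (n - 1)/n >= SL,
    i.e. n >= 1 / (1 - SL)."""
    return math.ceil(1 / (1 - service_level_pct / 100))

print(min_incidents_for_one_failure(95))  # 20, as in the text
print(achieved_pct(15, 1))                # ~93.3% -> breaches a 95% Service Level
print(achieved_pct(45, 1))                # ~97.8% -> the same failure, measured
                                          # over three months of 15 Incidents, passes
```

The last line shows why measuring over a longer period (eg three months) corrects the anomaly without diluting the principle: the same single failure sits within a mathematically valid volume of Incidents.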

In addition, some Service Level mechanisms include both Target Service Levels, which trigger the payment of a Service Credit (either in a month or where the Target Service Level is missed a number of times in a 12 month period), and Minimum Service Levels, which trigger payment of the maximum Service Credit for that Service Level.  In such mechanisms the logic follows that, where the Target Service Level is less than 100%, at least one 'failure' should be 'de minimis', and the Minimum Service Level should permit, as a minimum, two 'failures', notwithstanding what the Service Level actually states.

Therefore, the de minimis principle, simply put, is that where a customer has not requested 100% service performance, the Supplier should be permitted at least one ‘failure’ and still be deemed to have met the affected Service Level. 

Flexibility and variation

More and more Customers are requiring that the Service Credit mechanism allow a degree of variation throughout the term of the Agreement to allow the Service Levels and/or Service Credits to be varied to reflect the changing needs of the Customer’s business. 

Most Suppliers will consent to a certain degree of flexibility but this is solely with respect to the commercial elements of the Service Credit mechanism (and not the underlying Service Levels themselves), and then only within tight constraints so as to ensure that the flexibility does not have a material impact on the risk profile of the mechanism itself (thereby increasing the associated cost of the risk).

The ability to change a Service Level from a Critical Service Level to a Key Measure and vice versa is something many Suppliers will consent to, provided sufficient notice is given.  Additionally, the variation in the Service Credit applicable to a Service Level may also be acceptable to a Supplier provided that limits are placed on the maximum Service Credit applicable to any Service Level such that the primary goal of the Service Credit mechanism remains to incentivise performance rather than as a revenue stream. However, it is very difficult for a Supplier to accept that the Customer can amend performance percentages attributable to a Service Level, even by reference to past performance – which in IT, as in many other industries, is not always an adequate indication of future performance.  Therefore, such changes will need to be ‘impacted’ via the change control procedure to ensure that such a change does not require an amendment to the IT solution itself in order to ensure that the Service Credit mechanism remains within an acceptable risk profile for the Supplier.

Service Levels must demonstrate continuous improvement over the term

Often seen as the Holy Grail of any outsourcing contract, continuous improvement is very difficult to contract for, as to do so requires a crystal ball and a great deal of creativity.  This has not precluded a number of contracts from proposing an 'objective' calculation whereby the previous 12 months' performance is used as the basis for calculating an increase in the Service Levels for the subsequent year.  Such a calculation is predicated on the principle that IT is, year on year, becoming faster and more efficient.  This view fails to recognise that the Service Levels often do not directly benefit from this increased IT capability.

If the Service Level relates primarily to a human intervention, ie response and resolution times, it depends on the number of resources a Supplier deploys on delivery of a service, not improvements in IT capability. If the Service Level relates to availability, it often relies on a fixed infrastructure (particularly networks) which cannot become faster or more efficient without investment in new faster and more efficient equipment. 

Clearly the benefits of the ever-increasing capability and capacity of IT should be shared by a Supplier with a Customer but both parties should look carefully at what, realistically, could be achieved and the requirements of the Customer’s business. 

For example, if the business is content with a Supplier's service desk answering calls within 30 seconds 95% of the time in January 2010, why is it, per se, not satisfied with the same Service Level in January 2012?  In a number of Service Level agreements the Supplier is expected to 'continuously improve' all its Service Levels, which, in this example, would mean answering the telephone quicker.  That either requires more resource (and therefore more cost) or requires the Supplier to spend less time on each call (which may result in users feeling rushed or the underlying problem not being satisfactorily identified).  It also penalises consistent high performance by raising the Service Levels, which seems counter-intuitive to what the Customer is endeavouring to achieve.

However, if a Customer’s work force is still having to report the same number of Incidents each month (even though they are answered and logged within the same timescales) to a service desk in, for example, January 2012 as it did at the commencement of an arrangement, then this may indicate a degree of stagnation in the solution and/or more fundamental issues with problem and knowledge management, which a standard continuous improvement mechanism would not address.  A call reduction goal of this kind tends not to fit within a standard continuous improvement model, yet it actually demonstrates positive change to the overall services.

Both parties should be incentivised to work together to understand where improvements to the base service can be achieved.  However, an automatic uplift in the 'required' Service Level performance does not achieve this and focuses both parties' attention on the minutiae rather than the primary goal of the outsource, whether that be cost savings, technical expertise or both.

Therefore, any continuous improvement requirement should pick out only the key Service Levels that 'go to the heart of the deal' and look for an equitable mechanism to seek improvement of these, remembering that the Supplier will often have cost savings and expected efficiencies already built into the deal.

For IT developments which are not known at the time of contract, an equitable gain share mechanism should be deployed to incentivise both parties to look at new technologies and work together to implement a business case that meets the Customer’s needs.  A Supplier who is appropriately incentivised to look for solutions to the Customer’s problems is more likely to expend resource to find them than a Supplier who receives no direct benefit for the resolution.

Conclusion

Whilst this article raises a number of concerns with 'standard' Service Credit mechanisms, it does not mean that such mechanisms are without value.  A Service Credit mechanism is a sensible and appropriate 'tool' to deal with low impact issues arising from the Supplier's non-compliance with required levels of performance, and such mechanisms do focus the Supplier's mind on the expectations of the Customer and assist in the design and delivery of the solution.

However, if a Customer simply proposes a Service Level and Service Credit mechanism 'out of the box', it fails to use the mechanism to its full potential and to highlight its key concerns regarding levels of performance and the impact of the Supplier's performance on its business.  The mechanism must be developed with the Customer's business needs in mind.

In order to maximise their potential benefit to the Customer, levels of Service Credit should be set at appropriate commercial levels to avoid an excessive risk premium being included in the cost.  Excessive Service Credits are likely to be counter-productive and increase the potential of a ‘blame culture’ developing between the Supplier and the Customer thereby limiting the flexibility and ‘work together’ ethos necessary for an outsourcing contract to be a success.

Michael Harvey is Senior Legal Adviser, Fujitsu Services Limited

(c) Michael Harvey 2011